Need some help with a hard question January 22nd 2007, 04:18 PM #1 Dec 2006 Need some help with a hard question

Ok, the question is: Coroners estimate the time of death from body temperature using the simple rule that a body cools about 1 C in the first hour after death and about 1/2 C for each additional hour. Assuming an air temperature of 20 C and a living body temperature of 37 C, the temperature, T(t), in degrees Celsius is given by T(t) = 20 + 17e^-kt, where t = 0 is the instant that death occurred.
a) For what value of k will the body cool by 1 C in the first hour?
b) Using the value of k found in part a), after how many hours will the temperature of the body be decreasing at a rate of 1/2 C per hour?
c) Using the value of k found in part a), show that 24 h after death, the coroner's rule gives approximately the same temperature as the formula.
I know I need an equation to do this but I'm not sure what it is. I understood my other log study questions but this one stumps me. Any help would be appreciated greatly.

a) You have: $T(t) = 20 + 17e^{-kt}$. Now if we measure time in hours, and the body cools by 1 degree C in the first hour, we have
$36 = 20 + 17 e^{-k}$
$e^{-k} = 16/17$
$k = -\ln(16/17) \approx 0.060625$
b) The rate of fall of temperature is:
$\frac{dT}{dt} = -k \times 17 e^{-kt}$
If the temperature is falling at a rate of 1/2 a degree per hour we have:
$-k \times 17 e^{-kt} = -1/2$
$e^{-kt} = 0.5/(k \times 17)$
$-kt = \ln(0.5/(k \times 17))$
$t = -(1/k)\ln(0.5/(k \times 17)) \approx 11.93 \mbox{ hours}$
Last edited by CaptainBlack; January 23rd 2007 at 05:16 AM. January 23rd 2007, 05:05 AM #2 Grand Panjandrum Nov 2005
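c) A quick sketch for part c), using the value of $k$ from part a): after 24 hours the coroner's rule gives $37 - 1 - 23 \times \tfrac{1}{2} = 24.5$ degrees C, while the formula gives $T(24) = 20 + 17e^{-24k} \approx 20 + 17 \times 0.233 \approx 24.0$ degrees C, so the rule and the formula agree to within about half a degree.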
{"url":"http://mathhelpforum.com/calculus/10457-need-some-help-hard-question.html","timestamp":"2014-04-17T05:09:28Z","content_type":null,"content_length":"36706","record_id":"<urn:uuid:455fa4f6-2210-4929-8f65-757d92627344>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Selecting the number of classes Daniel posted on Tuesday, August 24, 2004 - 12:08 pm I read your recent chapter in the Kaplan text on latent variable modeling, and have a question about a paper I'm revising. I have a continuous outcome that is not normally distributed. I modeled it with freely estimated class variances, and found three trajectories. However, the variables are not normally distributed. I gather that I cannot use the LMR LRT for testing the number of trajectories. The BIC supports four classes, but the change in BIC from three to four classes is minimal, and the fourth class is very superficial. Can I use the BIC criteria, since I cannot use the LMR LRT to select 3 classes? bmuthen posted on Tuesday, August 24, 2004 - 12:23 pm Note that the assumption of within-class normality does not imply that the mixture is normal but can be very non-normal. Your observed variables correspond to the mixture. So these mixture models handle non-normal observed variables. Therefore, you can use the LMR. The performance of the LMR - and the BIC - is, however, not sufficiently well-known in most cases. I would not decide on number of classes based only on statistical measures but also interpretability. Given what you say about the superficial 4th class, it sounds like the way to go is 3 classes. Anonymous posted on Wednesday, August 17, 2005 - 2:51 pm I am currently working on a model of behavior change about which there is some controversy in the literature. Specifically, there is some debate regarding the nature of the latent "readiness to change" construct. Some of the debate about the measurement model for the readiness to change construct essentially centers on whether individuals occupy discrete/discontinuous states of readiness or whether the construct is better modeled as continuous. My question is whether evaluation using mixture modeling might be able to shed some light on this, on statistical grounds. That is would a finding that one class best explained the data argue against the notion of different discrete classes and suggest a continuous latent construct? Or is the distinction between continuous and categorical latent variables largely a matter of heuristic concern? Vermunt, for instance, notes that the distinction between latent classes and latent traits is largely a matter of the number of points across which one intergrates. So I suppose the question is does mixture modeling provide statistical evidence regarding the categorical/continuous nature of a latent construct? bmuthen posted on Thursday, August 18, 2005 - 9:47 am That is a good question that we still know too little about. I was less hopeful about this earlier. But I am looking at this currently for categorical items, contrasting latent class modeling with factor analysis modeling and with hybrids of the two, and I am gradually getting the impression that in several cases these models are distinguishable in terms of statistical fit. A one-class model with a continuous factor might fit considerable better or worse than a 2-class model without continuous factors. And often, hybrids fit considerably better than either. True, the classes can be seen as discrete points on a continuum - taking a non-parametric view of the factor distribution (which I think your Vermunt reference is about) - and this matter can only be resolved by relating the classes to other variables - antecedents and consequences - to see if classes are significantly different on those variables. 
Given that we can now do these analyses conveniently in a general modeling framework, it would be interesting to see more investigations of this type. Chuck Green posted on Friday, August 19, 2005 - 8:26 am Yes, that is the reference to Vermunt to which I was referring. When you discuss hybrids are you referring to latent categorical variables derived from a comnbination continuous latent factors and observed variables? bmuthen posted on Friday, August 19, 2005 - 9:23 am By hybrid I mean a model that has both continuous and categorical latent variables (the outcomes can be of any kind). For example, a latent class model that has factor variation within classes making the items correlate within class. Patrick Malone posted on Thursday, July 20, 2006 - 3:33 pm This is a really simple question, the answer to which has the potential to make me feel a complete idiot. The general reference to using BIC, ABIC, etc. to select the number of classes is to choose the "lowest" BIC -- in fact, that is the terminology used in the Nylund et al. draft on the website. But something Bengt said in passing in either Maryland or San Antonio in May stuck with me -- I *think* he said, "the BIC closest to zero." So do we want lowest value of BIC or lowest absolute value of BIC? I couldn't find guidance in the manual, the technical appendices, or the discussions. Bengt O. Muthen posted on Thursday, July 20, 2006 - 3:50 pm I should have said lowest BIC. Perhaps I was talking about the possibility of a negative BIC which could happen when logL is positive so that the first BIC term is negative - if the sample size and/ or #par.'s is small, the negative first tems will dominate the positive second term and BIC will be negative. But even so, we want smallest BIC (not in an absolute sense). Patrick Malone posted on Thursday, July 20, 2006 - 7:14 pm Thank you -- you just saved a paper! Justin Jager posted on Tuesday, September 26, 2006 - 9:00 am I'm using the tech14 option in conjunction with Type=mixture to request a PBL ratio test. I am having a difficult time manipulating which class is the first class identified (and therefore is the class deleted to obtain the c-1 comparison model). I ran a 3 class solution and a 4 class solution. Comparing the mean estimates across the 3-class solution and the 4-class solution, it is clear which class out of the 4 class solution is the "new" class. Intuitively, it seems to me that I want the "new" class to be the first class identified, so that when comparing the c model to c-1 model the deleted class will be the "new" class. In order to accomplish this, I re-ran the 4-class solution, but this time used the tech14 option, and listed the start values for the "new" class first, and then listed the start values for the three remaining classes (listing the start values for the largest class last). However, the first class identified is not the "new" class whose start values are listed first. Given the above I have two questions: (1) Am I being to stringent in my use of the ratio test by identifying the "new" class and manipulating the start values so that it is the first class identifed? (2) If the answer to #1 is No, then do you have any suggestions for manipulating the model so that the "new" class is the first class identified? Bengt O. Muthen posted on Sunday, October 01, 2006 - 12:14 pm (1) You don't have to do this. As long as the best loglikelihood values are obtained for a given number of classes, Tech14 is fine. Christian M. 
Connell posted on Friday, October 26, 2007 - 10:05 am Two questions re: use of Tech14 with LCA: 1) How do you identify the optimal optseed # in cases were several appear to have been replicated (sample output below)? Loglikelihood values at local maxima, seeds, and initial stage start numbers: -2571.979 181293 212 -2571.979 688839 273 -2571.979 830570 369 -2571.979 unperturbed 0 -2694.316 350608 334 -2694.316 478421 311 -2696.832 967237 48 -2696.832 247224 94 -2696.832 471398 74 2) in using tech14 with the following values: lrtstarts = 40 10 2500 20; I get a warning indicating that 4 of 5 bootstraps failed to replicate. The p-value for the BLRT is significant, but I assume the lack of replication is a significant problem. How should the lrtstarts be increased, and at what point would you determine that the solution can not replicate? Linda K. Muthen posted on Friday, October 26, 2007 - 4:05 pm You can use any one of the optseeds that result in the best loglikelihood. Version 5 will have improvements for TECH14. I would wait until it is out and use TECH11 in the meantime. Sylvana Robbers posted on Friday, March 28, 2008 - 5:59 am Dear dr. Muthen, I would like to react on one of the messages above (July 20, 2006) about the interpretation of the BIC. Dr. Nagin describes in his book (Group-based modeling of development, 2005) that the lowest absolute BIC-value is preferred, so not the closest to zero. So, what should be the right approach? I look forward to your reaction. Sylvana Robbers Linda K. Muthen posted on Friday, March 28, 2008 - 6:46 am I think the post after that clarified the statement and said the lowest BIC. Sylvana Robbers posted on Friday, March 28, 2008 - 7:00 am Maybe my question was not clear. I mean that dr. Nagin states that the BIC closest to zero is the best, and you propose the lowest BIC, which contradicts each other. So, is it the lowest BIC or the closest to zero? Sylvana Robbers posted on Friday, March 28, 2008 - 7:08 am By the way, the sentence in my first message should be: Dr. Nagin describes in his book (Group-based modeling of development, 2005) that the lowest absolute BIC-value is preferred, so the closest to zero. (without 'not'). Thanks in advance for your time. Bengt O. Muthen posted on Friday, March 28, 2008 - 11:35 am Let me see if I get this right. The Mplus BIC is BIC(M) = -2logL + r log n, where L is the likelihood, r is the number of parameters and n is the sample size. In Mplus we want the smallest BIC(M). Nagin in (4.1) of his book uses the alternative BIC(N) = logL - 0.5 r log n, so that BIC(N) = -2 BIC(M). Nagin wants the largest BIC(N). With BIC(M) the second term is always positive (or non-negative). The first term is typically positive as well because logL is typically negative. The term decreases as the likelihood increases (gets better). So here we want small positive BIC(M) values. In the rare cases when logL is positive (L > 1) the first term is negative and gets bigger negative as the likelihood increases. So here too do we want smaller BIC(M) where say -10 is smaller than -5 (-10 is further to the left on the real line). I think this in line with my earlier post. Sylvana Robbers posted on Monday, March 31, 2008 - 1:55 am Finally I understand it :-) Thank you very much for your clear answer! Alexandre Morin posted on Friday, April 25, 2008 - 1:01 pm Could you have made a mistake ? I've been playing with BICs formulas and something does not add right... -2 * Bic(M)= -2 (-2LogL + r log n) = 4 logL - 2 r log n, which is not Bic(N). 
I believe it is the reverse: -2 BIC(N) = BIC(M), since -2 (logL - 0.5 r log n) = -2 logL + r log n. This does not change the conclusion for the positive versus negative BIC issues however. Christian M. Connell posted on Friday, July 11, 2008 - 12:50 pm Is there a way to test the difference between two LCAs with the same number of classes, but in which one model includes a forced zero-behavior class (i.e., a forced no-substance use class)? Specifically, I've determined a 4-class solution fits the data best relative to 1, 2, 3, and 5-class models. However, the fit indices for a restricted and unrestricted 4-class model are nearly identical: Adj BIC: 8138.76 (unrest), 8140.35 (rest); LMR-LRT p-value: .03 (unrest), <.001 (rest); Entropy: .83 (unrest), .83 (rest). My sense is that the BIC discrepancy is negligible (?) and that the restricted model should be selected based upon parsimony (i.e., fewer parameters estimated). The actual make-up of the 4 classes is comparable (basically, one of the unrestricted model classes has a high number of non-use individuals), but prevalence of the classes is different across the models, and predictors vary slightly. Any guidance -- or a specific test that I might run to determine the "best" model? Bengt O. Muthen posted on Sunday, July 13, 2008 - 5:13 pm I think I would go with the unrestricted model if it has the lower BIC, unless you have a specific theory for the existence of a zero class. I am not much for "model trimming", but just reporting the results even if some parameters may not be needed. But if you want to test for one of the classes being at zero probability for all items, perhaps you can use the Wald test of Model Test. For instance, you can define item j's probability as Model Constraint: pj = 1/(1+exp(tj)); where tj is the label for the threshold of item j. Then you use Model test: You do this for all items at the same time. I haven't tried it, but I think it should work. Christian M. Connell posted on Monday, July 14, 2008 - 4:01 am Thank you for the response. A couple of follow-ups: Is there any indication as to the "sensitivity" of the Adj. BIC -- given these differences are less than 2 pts? Would these two 4-class models be considered nested, since the same predictors and number of classes are specified -- only one model includes constraints that force one class to include youth with no use? If so, is it appropriate to conduct a difference in chi-square (or equivalent) test to see whether the constraints significantly worsen fit? Also, any utility in examining the quality of classification table -- even though entropy is identical, there is some variation in both the diagonal and off-diagonal elements. Bengt O. Muthen posted on Monday, July 14, 2008 - 8:16 am There is a literature on how differences between BIC values should be viewed. See for instance Kass, R. E. and Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association 90, 773-795. The models are nested, but the assumptions of the likelihood-ratio chi-square test are not fulfilled because the zero-class model specifies parameters that are on the border of their admissible space, namely zero item probabilities conditional on class. Come to think of it, the Wald test would be negatively affected by this too. The classification table might tell you about differences across models in being able to tell certain classes apart. Bruce A.
Cooper posted on Monday, July 14, 2008 - 3:23 pm I'm evaluating GMM solutions to identify the "best" number of classes for an outcome (total sleep time) measured over 14 occasions. There is clearly one large class, and some number of smaller classes, probably 2 or 3. I take it from Bengt's response in this thread on October 01, 2006 - 12:14 pm that specifying starting values is not necessary to get a valid BLRT test of the K vs K-1 solutions with TECH14. I assume that the way the classes differ would therefore be based on the profile means for the two solutions? So, how to identify the K and K-1 mean profiles that were identified in the BLRT? (I've found that the "K" BLRT solution is not identical to the model solution it follows.") Bruce A. Cooper posted on Monday, July 14, 2008 - 3:53 pm Next question on selecting the "best" number of classes. I'm puzzled about the interpretation of the VLMR and BLRT tests for the K vs K-1 solutions. I've obtained identical H0 LL values (and -2LL diff values) from TECH11 and TECH14 outputs in the same run, and VLMR will have a p-value WAY larger than .05, while BLRT will have a p-value below .0000. Further, after obtaining BLRT for a sequence of solutions with increasing classes (even specifying LRTBOOTSTRAP=100), I have found that it is always significant at .0000 even when the #classes is getting silly. I've also found that the BIC and SABIC also get smaller with additional classes, even for a large number of classes (5 or more, with most being very small), so not much help there. Tofighi & Enders (2008) recommended the BLRT and sample-size adjusted BIC as the most useful indices for GMM solutions, and Nylund et al. (2006 revised draft) also like the BLRT for GMM. Do you have further suggestions on the use of these indices vis a vis the # of classes and substantive interpretations in trying to find the "real" solution? Bruce A. Cooper posted on Monday, July 14, 2008 - 5:31 pm Still another question re the # of classes (my ignorance seems bottomless!) - How does one decide what values to specify for the LRTSTARTS command? I've read a couple of posts here about it, and the V5 manual (pp. 500-501). The defaults are 0 0 20 5, and you suggest perhaps 2 1 50 15 as an example of a different specification. Why so few for the K-1 class solution? Why that number for the K-class solution? (The question arises partly because of the problematic data set you've helped me with, for which I needed to specify 1500 random starts to get a replicated maximum before going to the BLRT. So, I can use the OPTSEED option from one or more of the replicated solutions when I run the TECH14 analysis, but what issues dictate how to choose the number of draws for the bootstrapped K-1 and K solution analysis?) Any references would be great, so I don't have to keep bothering you! Bruce A. Cooper posted on Tuesday, July 15, 2008 - 7:38 am Belay that second message re the use of the BLRT resulting in p-values < .0000. That happened with specified starting values and LRTBOOTSTRAP >= 100, but after running more models last night, and another TECH14 BLRT this morning using the default for holding random effects constant, and allowing the TECH14 to run on its own (no specs for LRTBOOTSTRAP or LRTSTARTS), I got a BLRT that made sense (in this case, not significant)! This was after specifying OPTSEED from an analysis I ran overnight, where I used 4000 random starts and got two (2!?!) replicated maximums for the LL. 
I also checked the K-1 LL in the TECH14 output, and it was the same as the maximum from the 3-class solution I had gotten previously with the same ANALYSIS settings. So there's some consistency! and a way to know what the K-1 solution would look like from the TECH14 K-1 model. I'm still troubled though, that out of 4000 random starts, I would get only two replications of the largest LL (-3880.753), then just one LL that I had gotten multiple replications of in prior runs of this 4-class analysis (-3882.103), then 25 identical LL (-3883.747), with the difference between the largest and smallest LL from these three solutions being only 2.994. Having only two out of 4000 replicated LL still seems pretty chancy, making me wonder about a local maximum that I just happened to hit by chance, twice. Bengt O. Muthen posted on Tuesday, July 15, 2008 - 10:28 am Your 4 last posts touch on topics we teach at our 2-day Mplus Short Courses. One just took place as the psychometric meeting and another one is coming up in November at Ann Arbor. This is in the area of Topic 5 and 6 (see our web site for topics and handouts). It is too large a topic to teach on Mplus Discussion - so I will just give some brief comments. Topic 5 has on slide 197: "More On LCA Testing Of K – 1 Versus K Classes Bootstrap Likelihood Ratio Test (LRT): TECH14 • LRT = 2*[logL(model 1) – logL(model2)], where model 2 is nested within model 1 • When testing a k-1-class model against a k-class model, the LRT does not have a chi-square distribution due to boundary conditions, but its distribution can be determined empirically by Bootstrap steps: 1. In the k-class run, estimate both the k-class and the k-1-class model to get the LRT value for the data 2. Generate (at most) 100 samples using the parameter estimates from the k-1-class model and for each generated sample get the log likelihood value for both the k-1 and the k-class model to compute the LRT values for all generated samples 3. Get the p value for the data LRT by comparing its value to the distribution in 2." Because step 2 generates data according to the k-1-class model, the k-1-class model is easier to fit than the k-class model and therefore requires fewer starts. Having only 2 replicated best LLs out of 4000 is a sign of a problem - it typically indicates that the model tries to read too much out of the data. This happens when using too many parameters such as too many classes, particularly when the sample size is not large and the data signal is not strong. Bruce A. Cooper posted on Tuesday, July 15, 2008 - 11:26 am Thanks very much for the information, Bengt, and for confirming that getting only 2 maximum LL out of 4000 does indicate a problem with this model! I *have* been worried that I was trying to squeeze water from this stone! I saw an earlier announcement about the November short course and I have been planning to attend. Meanwhile, I see that your handouts are available to view online and will look through the ones for Topics 5 & 6. You used to sell the short course handouts through the Mplus website, but I couldn't find the page for ordering them. It's very generous of you to offer them for download now! I like to read as much as I can to find answers to my questions before bothering you folks, so I really appreciate the references you provide and the short-course handouts. Thank you. Bruce A. Cooper posted on Tuesday, July 15, 2008 - 5:04 pm I've run a 3-class GMM to test a 3 vs 2-class model with BLRT. 
I first ensured that I had a maximum LL that had many replications, then selected two seeds to check the solution. OPTSEED with both seeds produced identical solutions. I ran the model with one of the seeds using OPTSEED, first with no specifications for the TECH14 BLRT. I got a p = 0.3333 with 9 successful BS draws. Then, I ran the model with the same OPTSEED, but this time specifying K-1STARTS = 100 20 ; LRTBOOTSTRAP = 50 ; LRTSTARTS = 5 2 20 10; to increase the reliability of the BLRT and the K-1 solution. I get identical LL for the K model (same seed), and identical LL for the K-1 model (and it is identical to the LL from the previous 2-class model I ran without BLRT). However, the p-value for the BLRT is now 0.0000 with 50 successful BS draws. This is the same type of result I referred to earlier, where the BLRT produces p = .0000 no matter how many classes are in the model, when specifying LRTBOOTSTRAP at some number, usually 50, 100, or 150 for the models I've run. Could you direct me to a reference that will help me understand this inconsistency in BLRT p-values for the same LL and -2LL diff? Linda K. Muthen posted on Wednesday, July 16, 2008 - 3:14 pm We don't have any further references. Please send your input, data, output, and license number to support@statmodel.com. Bruce A. Cooper posted on Monday, July 28, 2008 - 11:12 am Erratum: My posting on July 14, 2008 - 3:53 pm was wrong about the recommendation of Tofighi & Enders (2008) regarding the indices they recommended for choosing the number of classes in a GMM. I have found my own error in a Google(TM) search using their names, and hope this correction will also show up in future Google searches so folks won't be misled by my error. I wrote that Tofighi & Enders "recommended the BLRT and sample-size adjusted BIC as the most useful indices for GMM solutions" but my memory failed me, and I should have looked at the paper again before citing them. In fact, Tofighi & Enders (2006; 2008) did not evaluate the BLRT at all in their Monte Carlo study of GMM indices for choosing the number of classes, because it had just been added to Mplus when they did their study. Instead, their recommendation was that the sample-size adjusted BIC was best overall index in its performance across a number of conditions, and the Lo-Mendell-Rubin LRT was next best in several situations. In fact, Nylund, Asparouhov, & Muthen (2006 revised draft; Structural Equation Modeling, 2007) recommended the BLRT for choosing the number of classes in GMM under some circumstances, but their Monte Carlo study did not evaulate very many conditions for GMM. Sorry for the error! Bruce A. Cooper posted on Monday, July 28, 2008 - 11:38 am Linda - Thanks for your note on July 16th and the follow-up tech spt via email. My question is "where-to-go-from-here-for-now" in using the BLRT to help choose the number of classes for GMM. I understand that it is best to set the largest class as the last class when using the BLRT, but (from another posting) that the order of the other classes is not essential. However, I have been unsuccessful in making the largest class the last class, whether I use class intercepts from the solution being tested as starting values, or whether I use the categorical latent variable means. With either method, the program still re-orders the classes so that the largest class is not last in the TECH14 run, defeating the purpose of the BLRT to some extent, and taking a lot of time to run repeated bootstrap tests trying to make the last class the largest. 
Using the categorical latent variable means, for example, I got this from the prior solution for the 4-class model being tested, in which the last class WAS largest: Categorical Latent Variables C#1 -1.146 C#2 -2.576 C#3 -2.860 So, I specified in the Model %OVERALL% statement for the TECH14 run: [ c#1*-1.146 c#2*-2.576 c#3*-2.860 ]; But no matter how I ordered those three values, the subsequent runs put the largest class as number 2 or 3. Any thoughts about what I'm doing wrong? Linda K. Muthen posted on Monday, July 28, 2008 - 11:41 am Please send your files and license number to support@statmodel.com. Mogens Fenger posted on Sunday, August 17, 2008 - 9:28 am Dear Linda and Bengt, I’m running a lot of LCA and SEM analysis with and without factors and covariates. A few questions (hope they are not too "simple"): 1) A suppose that the dot in scale correction factor is a decimal identifier. 2) The simplified Satora-Bentler use of scale correction factors in the Difference testing I suppose that the number of parameters are the number of free parameters. 3) When I incorporate co-variates in an analysis the number of subjects may differ considerably because of exclusion of missing values for the co-variates. Concomitantly, the LL and the statistics (AIC etc) change considerably, say with 2100 subjects the BICadjusted may be 31,000 and when incorporating a co-variate only 1700 subjects are included with a BICadjusted being 25,000. The correction for sample size may not seem appropriate. Is it possible to compare the two models by calculating F in chi-square (n-1)*F? Or alternatively correcting e.g. BIC by correcting for sample size (e.g. using the above numbers BICcorr = (25,000/1700)*2100 = 30,882)? An alternative way to do comparisons is by using the USEOBSERVATION option to only include a full data set. Its easy to do, but becomes tedious as more covariates are included. 4) How do you calculate DF in a mixture model? 5) Is there any rules for using the entropy meassure in decision of best fit? Linda K. Muthen posted on Monday, August 18, 2008 - 8:33 am 1. Yes. 2. Yes. 3. Only models with the same set of observed variables and the same set of observations can be compared. 4. Degrees of freedom are relevant only for models where means, variances, and covariances are sufficient statistics for model estimation. In other cases, the number of free parameters is used. 5. Entropy is not a fit statistic. Mogens Fenger posted on Tuesday, August 19, 2008 - 1:50 am Thanks Linda, This cleared up a few things for me. A few follow up questions: If we in a model with the same set of obersvations replace one covariate with another so the number of paramters and subjects are the same, shoudln't it be possible to compare the two models? If the first model gives a BIC say 31000 and the second 30000, wouldn't you conclude that the second model is a better model and should be preferred? Entropi: although entropi is not a fit statisitcs, is there any formal way to conclude that an entropy of 0,7 is worse than 0.8 (e.g. in the example above), and would you be able to include such a result in your decission of which model to choose? Linda K. Muthen posted on Tuesday, August 19, 2008 - 7:25 am You cannot change the set of observed variables if you want to compare models. Entropy ranges from zero to one with the higher value being the better value as far as classification is concerned. Michael Spaeth posted on Wednesday, October 29, 2008 - 2:57 am I have a question concerning tech14. 
My LRT-value in the real data is dramatically different as compared to the LRT-value in the simulated data sets, which I monitored in the tech8 window (e. g. 124 vs. 66). Should I alter the settings of lrtstarts, or is this not really a problem? In addition the p-values of VMLR and tech14 dramatically differ too (p = .95 vs. p < .001). The real data and generated data H0-LL's (also H1-LLs) in tech14 are the same, as described in the manual. So I think it's a problem with the generated data sets and their H1- and H0-LLs (lrtstarts)!? Linda K. Muthen posted on Wednesday, October 29, 2008 - 8:02 am I would need to see your outputs to answer this question. Please send them and your license number to support@statmodel.com. Michael Spaeth posted on Wednesday, October 29, 2008 - 9:12 am Ok, it takes some days to finally compute the tech14s then I would send it. But I guess it's rather hard to find a hint on only the outputs because BLRT LRT-values of generated data sets are available only on the tech8 window (disappearing after computation). The following sentence in my last post was nonsense: 'The real data and generated data H0-LL's (also H1-LLs) in tech14 are the same, as described in the manual.'--> I only wanted to say, that I reproduced the H0 and H1 LLs of 'real data k-1 run' in my tech 14 run, probably pointing to the fact that something is wrong with bootrapping of the generated data set. But this is a guess based on the phenomena I described regarding the LLs in the tech8 thank you and so long, michael Mogens Fenger posted on Sunday, November 16, 2008 - 1:32 am Dear Linda and Bengt, In a SEM mixture model can you compare (e.g. using BIC) two models in which one model treats an indicator as continuous and the second model treats the indicator as ordinal? Bengt O. Muthen posted on Sunday, November 16, 2008 - 10:29 am No, that gives different likelihood scales. Alexandre Morin posted on Friday, January 23, 2009 - 12:25 pm In mixture models (especially with large samples) it sometimes happens that the examined fit indices (CAIC, BIC, aBIC, etc) keep on decreasing while additional classes are added, potentially because of their sensitivity to sample size. In most cases when this happens, the additional classes don't necessarily make sense (susbstantively or statistically: very small classes, classes that only represent a meaningless division of preceding classes, etc.). In those cases, to choose the number of classes one is left with theory and subjectivity. It seems to me that in such cases the fit indices (CAIC, BIC, aBIC, etc) associated with varying number of classes might be depicted graphically and interpreted as an EFA scree test to help in the determination of the correct number of classes. 1) Do you have any misgiving about this method ? 3) Do you know of any references of a paper either suggesting the use of this method (scree test) or using this method to choose the correct number of classes)? Thank you very much. Bengt O. Muthen posted on Friday, January 23, 2009 - 5:53 pm 1) No 2) No However, you need to always be open to the possibility that you are fishing in the wrong pond - the model family you are in may not be the best for the data and if you switch model family you might find a minimum BIC. For example switching from LCGA to GMM. Matthew Cole posted on Wednesday, June 10, 2009 - 4:56 pm Hi Linda and Bengt, Can you tell me what the difference is in Tech11 between the VLMR and the adjusted LMR? Also, you recommend that the last class is the largest class. 
However, you also recommended that model identifying restrictions not be included in the first class. Would you consider starting values for the first class to be model identifying restrictions? I am currently using starting values for the first class to make it the smallest class, thereby making sure that the first class for Tech11 is not the largest class. Thanks, Matt Linda K. Muthen posted on Thursday, June 11, 2009 - 11:15 am The authors provided a post-hoc adjustment. You can see the original article for the details. We do not use the adjusted LMR. Starting values are not model identifying restrictions. These would be some restrictions on model parameters. I would use starting values to make the last class the largest class not the first class the smallest class. Keng-Han Lin posted on Thursday, February 18, 2010 - 1:28 pm Hi Linda and Bengt, I'm using LCA on a complex survey data with 48 indicators (13 of them are continuous). The BIC suggests 6-class model, 46213.84(2-class model) 45634.95(3) 45419.72(4) 45322.00(5) 45195.95(6) 45290.84 (7), but we are wondering if there's other rule we could follow to better determine the number of class. BLRT(tech 14) doesn't support for complex data. The results (p-value )of LMR test are as following, 0.047(2-class model) 0.373(3) 0.582(4) 0.591(5) 1.000(6) 1.000(7). In 3-class model which has LMR p-value of 0.373, does it suggest 2-class(H0) is good enough in our case? If so, which statistics should I depend on? Or other criteria I should take into account? Thank you so much for your help. Linda K. Muthen posted on Thursday, February 18, 2010 - 2:03 pm Perhaps the following paper which is available in the website can help: Nylund, K.L., Asparouhov, T., & Muthen, B. (2007). Deciding on the number of classes in latent class analysis and growth mixture modeling. A Monte Carlo simulation study. Structural Equation Modeling, 14, 535-569. You should also consider whether the classes have any substantive or theoretical basis. Ingrid Holsen posted on Tuesday, December 07, 2010 - 5:14 am I am investigating trajectories of body image over time, 13 to 30 years. The sample in GMM is 1082. I have problems deciding number of classes (we are several here who are puzzled). 3 classes; BIC the lowest, but LMR-LRT 0.079, with 4 classes it is 0.038). Entropy for 3 classes is 0.727 and for 4 classes 0.734 (not very high!). However, the 4 class solution has one class which is only 1.5%, that's 15 persons, don't make much sense. I have learned to trust the LMR-LRT, but find it hard to proceed with 4 classes due to the sample size in each class. What is your opinion, also regarding the relatively low entrophy? Best regards, Ingrid Ingrid Holsen posted on Tuesday, December 07, 2010 - 6:32 am Adding to my questions above... The Bootstrap test is significant for both a 3 and 4 class solution. The LMR-LRT value above is of course the p-value. A collegue suggested to add q@0 to my modelcommand, then the 3 class solution performed somewhat better. The LMR-LRT (p) 0.024 ( 4 class (p) 0.21 ). However the Entropy is around 0.70 (a bit lower for both a 3 and 4 class solution). Thanks for your help. Linda K. Muthen posted on Tuesday, December 07, 2010 - 10:34 am The significance of LMR-LRT should be interpreted only the first time it is greater than .05 not after that. Your first post suggests two or three classes. The meaningfulness of the classes should determine your choice. 
tomas dvorak posted on Thursday, January 03, 2013 - 6:54 am a have a question about selecting the number of classes in LCA. For any reasonable number of classes Tech11 and Tech 14 show zero p-values. Also BIC decreases (for 3 or more classes) rather slowly. Does this mean LCA does not fit the data and should not be used? On the other hand, 4 and 5 class solutions have high entropy (around 0.9) and(given my research goals) seem to make sense. Is it ok to use LCA and choose a solution that makes sense with respect to my research goals? Thanks for your help, Linda K. Muthen posted on Thursday, January 03, 2013 - 11:36 am This may point to the need for a different type of model, for example, a factor mixture model. If your indicators are categorical, TECH10 might help you see what the problem is. Alysia Blandon posted on Tuesday, June 11, 2013 - 9:31 am I am running a Latent Profile Analysis with 5 continuous variables. My sample size is 430. I have been able to replicate the loglikelihood values for the 1 and 2 class solutions. I have also been using the steps outlined by Asparouhov & Muthén webnote 14 to test the number of latent classes using the BLRT from TECH 14 using the OPTSEED option. Based on those recommendations I get the warning THE BEST LOGLIKELIHOOD WAS NOT REPLICATED. I continue to get the message even after increasing LRTSTARTS to 50 20 100 20 and using LRTBOOTSTRAP from 100 through 500. In all of these, the loglikelihood for the k-1 class is the correct one from step 1. Bengt O. Muthen posted on Tuesday, June 11, 2013 - 10:31 am Please send input, output, data and license number to support@statmodel.com. Send the outputs from all the steps recommended in Web Note 14. Miriam Forbes posted on Monday, March 03, 2014 - 7:16 pm Hi Linda and Bengt, In my LPA and FMA analyses, the BLRT remains significant (at .0000) for every analysis. My samples are n = 533 and n = 181, so I'm not sure it's a result of too much power. Shaunna Clark thought she had read cases of this, and suggested relying on ICs and substantive interpretation of the models (and the other LRTs, which do reach non-significance). I was wondering whether you know of any citations to justify this kind of decision? Thanks and all the best, Bengt O. Muthen posted on Tuesday, March 04, 2014 - 6:50 pm I would use BIC and substantive interpretation. The Nylund et al paper also shows that BIC is one of the top indices (second best?). Notwithstanding the Nylund et al article, my experience is that BLRT is less dependable in practice for some reason. Miriam Forbes posted on Tuesday, March 04, 2014 - 9:43 pm Thanks, Bengt! I'll emphasise the Nyuland et al. paper. All the best, Back to top
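A small numeric illustration of the two BIC conventions discussed earlier in this thread (the log-likelihoods, parameter counts, and sample size below are made-up values, not output from any of the models discussed):

import math

def bic_mplus(logL, r, n):
    # BIC as printed by Mplus: -2*logL + r*log(n); the smallest value is best.
    return -2.0 * logL + r * math.log(n)

def bic_nagin(logL, r, n):
    # BIC as defined in Nagin (2005): logL - 0.5*r*log(n); the largest value is best.
    return logL - 0.5 * r * math.log(n)

n = 500
for label, logL, r in [("2 classes", -2650.0, 13), ("3 classes", -2600.0, 20)]:
    print(label, round(bic_mplus(logL, r, n), 2), round(bic_nagin(logL, r, n), 2))

# Since bic_mplus = -2 * bic_nagin, the model with the smallest Mplus BIC is
# always the model with the largest Nagin BIC; the two conventions pick the same model.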
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=13&page=461","timestamp":"2014-04-16T14:14:58Z","content_type":null,"content_length":"116095","record_id":"<urn:uuid:011b21b7-4016-4dc6-8bf8-3a8ea47baed1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Practical Introduction to Digital Filtering

Compensating for Delay Introduced by Filtering

Digital filters introduce delay in your signal. Depending on the filter characteristics, the delay can be constant over all frequencies, or it can vary with frequency. The type of delay determines the actions you have to take to compensate for it. The grpdelay function allows you to look at the filter delay as a function of frequency. Looking at the output of this function allows you to identify if the delay of the filter is constant or if it varies with frequency (i.e. if it is frequency-dependent). Filter delay that is constant over all frequencies can be easily compensated for by shifting the signal in time. FIR filters usually have constant delay. On the other hand, delay that varies with frequency causes phase distortion and can alter a signal waveform significantly. Compensating for frequency-dependent delay is not as trivial as for the constant delay case. IIR filters introduce frequency-dependent delay.

Compensating for Constant Filter Delay

As mentioned before, you can measure the group delay of the filter to verify that it is a constant function of frequency. You can use the grpdelay function to measure the filter delay, D, and compensate for this delay by appending D zeros to the input signal and shifting the output signal in time by D samples. Consider a noisy electrocardiogram signal that you want to filter to remove high frequency noise above 75 Hz. You want to apply an FIR lowpass filter and compensate for the filter delay so that the noisy and filtered signals are aligned correctly and can be plotted on top of each other for comparison.

Fs = 500;                    % sample rate in Hz
N = 500;                     % number of signal samples
rng default;
x = ecg(N)'+0.25*randn(N,1); % noisy waveform
t = (0:N-1)/Fs;              % time vector

% Design a 70th order lowpass FIR filter with cutoff frequency of 75 Hz.
Fnorm = 75/(Fs/2);           % Normalized frequency
df = designfilt('lowpassfir','FilterOrder',70,'CutoffFrequency',Fnorm);

Plot the group delay of the filter to verify that it is constant across all frequencies indicating that the filter is linear phase. Use the group delay to measure the delay of the filter.

grpdelay(df,2048,Fs)         % plot group delay
D = mean(grpdelay(df))       % filter delay in samples

D = 35

Before filtering, append D zeros at the end of the input data vector, x. This ensures that all the useful samples are flushed out of the filter, and that the input signal and the delay-compensated output signal have the same length. Filter the data and compensate for the delay by shifting the output signal by D samples. This last step effectively removes the filter transient.

y = filter(df,[x; zeros(D,1)]); % Append D zeros to the input data
y = y(D+1:end);                 % Shift data to compensate for delay
title('Filtered Waveforms');
xlabel('Time (s)')
legend('Original Noisy Signal','Filtered Signal');
grid on
axis tight

Compensating for Frequency-Dependent Delay

Frequency-dependent delay causes phase distortion in the signal. Compensating for this type of delay is not as trivial as for the constant delay case. If your application allows off-line processing, you can remove the frequency-dependent delay by implementing zero-phase filtering using the filtfilt function. filtfilt performs zero-phase filtering by processing the input data in both the forward and reverse directions. The main effect is that you obtain zero-phase distortion, i.e., you filter data with an equivalent filter that has a constant delay of 0 samples.
Other effects are that you get a filter transfer function which equals the squared magnitude of the original filter transfer function, and a filter order that is double the order of the original filter. Consider the ECG signal defined in the previous section. Filter this signal with and without delay compensation.

% Design a 7th order lowpass IIR elliptic filter with cutoff frequency
% of 75 Hz.
Fnorm = 75/(Fs/2); % Normalized frequency
df = designfilt('lowpassiir',...

Plot the group delay of the filter and notice that it varies with frequency indicating that the filter delay is frequency-dependent. Filter the data and look at the effects of each filter implementation on the time signal.

y1 = filter(df,x);   % non-linear phase filter - no delay compensation
y2 = filtfilt(df,x); % zero-phase implementation - delay compensation
hold on
title('Filtered Waveforms');
xlabel('Time (s)')
legend('Original Signal','Non-linear phase IIR output',...
    'Zero-phase IIR output');
ax = axis;
axis([0.25 0.55 ax(3:4)])
grid on

Notice how zero-phase filtering effectively removes the filter delay. Zero-phase filtering is a great tool if your application allows for the non-causal forward/backward filtering operations, and for the change of the filter response to the square of the original response. Filters that introduce constant delay are linear phase filters. Filters that introduce frequency-dependent delay are non-linear phase filters.

Removing Unwanted Spectral Content from a Signal

Filters are commonly used to remove unwanted spectral content from a signal. You can choose from a variety of filters to do this. You choose a lowpass filter when you want to remove high frequency content, or a highpass filter when you want to remove low frequency content. You can also choose a bandpass filter to remove low and high frequency content while leaving an intermediate band of frequencies intact. You choose a bandstop filter when you want to remove frequencies over a given band. Consider an audio signal that has a power-line hum and white noise. The power-line hum is caused by a 60 Hz tone. White noise is a signal that exists across all the audio bandwidth. Load the audio signal.

Fs = 44100; % Sample rate
y = audioread('noisymusic.wav');

Plot the power spectrum of the signal. The red triangular marker shows the strong 60 Hz tone interfering with the audio signal.

[P,F] = pwelch(y,ones(8192,1),8192/2,8192,Fs,'power');
helperFilterIntroductionPlot1(F,P,[60 60],[-9.365 -9.365],...
    {'Original signal power spectrum', '60 Hz Tone'})

You can first remove as much white noise spectral content as possible using a lowpass filter. The passband of the filter should be set to a value that offers a good trade-off between noise reduction and audio degradation due to loss of high frequency content. Applying the lowpass filter before removing the 60 Hz hum is very convenient since you will be able to downsample the band-limited signal. The lower rate signal will allow you to design a sharper and narrower 60 Hz bandstop filter with a smaller filter order. Design a lowpass filter with passband frequency of 1 kHz, and stopband frequency of 1.4 kHz. Choose a minimum order design.

Fp = 1e3;    % Passband frequency in Hz
Fst = 1.4e3; % Stopband frequency in Hz
Ap = 1;      % Passband ripple in dB
Ast = 95;    % Stopband attenuation in dB

% Design the filter
df = designfilt('lowpassfir','PassbandFrequency',Fp,...

% Analyze the filter response
hfvt = fvtool(df,'Fs',Fs,'FrequencyScale','log',...
    'FrequencyRange','Specify freq. vector','FrequencyVector',F);

% Filter the data and compensate for delay
D = mean(grpdelay(df)); % filter delay
ylp = filter(df,[y; zeros(D,1)]);
ylp = ylp(D+1:end);

Look at the spectrum of the lowpass filtered signal. Note how the frequency content above 1400 Hz has been removed.

[Plp,Flp] = pwelch(ylp,ones(8192,1),8192/2,8192,Fs,'power');
    {'Original signal','Lowpass filtered signal'})

From the power spectrum plot above, you can see that the maximum non-negligible frequency content of the lowpass filtered signal is at 1400 Hz. By the sampling theorem, a sample frequency of 2*1400 = 2800 Hz would suffice to represent the signal correctly; you, however, are using a sample rate of 44100 Hz, which is a waste since you will need to process more samples than necessary. You can downsample the signal to reduce the sample rate and reduce the computational load by reducing the number of samples that you need to process. A lower sample rate will also allow you to design a sharper and narrower bandstop filter, needed to remove the 60 Hz noise, with a smaller filter order. Downsample the lowpass filtered signal by a factor of 10 to obtain a sample rate of Fs/10 = 4.41 kHz. Plot the spectrum of the signal before and after downsampling.

Fs = Fs/10;
yds = downsample(ylp,10);
[Pds,Fds] = pwelch(yds,ones(8192,1),8192/2,8192,Fs,'power');
    {'Signal sampled at 44100 Hz', 'Downsampled signal, Fs = 4410 Hz'})

Now remove the 60 Hz tone using an IIR bandstop filter. Let the stopband have a width of 4 Hz centered at 60 Hz. We choose an IIR filter to achieve a sharp frequency notch, small passband ripple, and a relatively low order. Process the data using filtfilt to avoid phase distortion.

% Design the filter
df = designfilt('bandstopiir','PassbandFrequency1',55,...

% Analyze the magnitude response
hfvt = fvtool(df,'Fs',Fs,'FrequencyScale','log',...
    'FrequencyRange','Specify freq. vector','FrequencyVector',Fds(Fds>F(2)));

Perform zero-phase filtering to avoid distortion.

ybs = filtfilt(df,yds);

Finally, upsample the signal to bring it back to the original audio sample rate of 44.1 kHz which is compatible with audio soundcards.

yf = interp(ybs,10);
Fs = Fs*10;

Take a final look at the spectrum of the original and processed signals. Notice how the high frequency noise floor and the 60 Hz tone have been attenuated by the filters.

[Pfinal,Ffinal] = pwelch(yf,ones(8192,1),8192/2,8192,Fs,'power');
    {'Original signal','Final filtered signal'})

Listen to the signal before and after processing. As mentioned above, the end result is that you have effectively attenuated the 60 Hz hum and the high frequency noise on the audio file.

% Play the original signal
hplayer = audioplayer(y, Fs);

% Play the noise-reduced signal
hplayer = audioplayer(yf, Fs);

The MATLAB diff function differentiates a signal with the drawback that you can potentially increase the noise levels at the output. A better option is to use a differentiator filter that acts as a differentiator in the band of interest, and as an attenuator at all other frequencies, effectively removing high frequency noise. As an example, analyze the speed of displacement of a building floor during an earthquake. Displacement or drift measurements were recorded on the first floor of a three story test structure under earthquake conditions and saved in the quakedrift.mat file. The length of the data vector is 10e3, the sample rate is 1 kHz, and the units of the measurements are cm.
Differentiate the displacement data to obtain estimates of the speed and acceleration of the building floor during the earthquake. Compare the results using diff and an FIR differentiator filter.

load quakedrift.mat

Fs = 1000;                  % sample rate
dt = 1/Fs;                  % time differential
t = (0:length(drift)-1)*dt; % time vector

Design a 50th order differentiator filter with a passband frequency of 100 Hz which is the bandwidth over which most of the signal energy is found. Set the stopband frequency of the filter to 120 Hz.

df = designfilt('differentiatorfir','FilterOrder',50,...

The diff function can be seen as a first order FIR filter with response H(z) = 1 - z^-1. Use FVTool to compare the magnitude response of the 50th order differentiator FIR filter and the response of the diff function. Clearly, both responses are equivalent in the passband region (from 0 to 100 Hz). However, in the stopband region, the 50th order filter attenuates components while the diff response amplifies components. This effectively increases the levels of high frequency noise.

hfvt = fvtool(df,[1 -1],1,'magnitudedisplay','zero-phase','Fs',Fs);
legend(hfvt,'50th order FIR differentiator','Response of diff function');

Differentiate using the diff function. Add zeros to compensate for the missing samples due to the diff operation.

v1 = diff(drift)/dt;
a1 = diff(v1)/dt;
v1 = [0; v1];
a1 = [0; 0; a1];

Differentiate using the 50th order FIR filter and compensate for delay.

D = mean(grpdelay(df)); % filter delay
v2 = filter(df,[drift; zeros(D,1)]);
v2 = v2(D+1:end);
a2 = filter(df,[v2; zeros(D,1)]);
a2 = a2(D+1:end);
v2 = v2/dt;
a2 = a2/dt^2;

Plot a few data points of the floor displacement. Plot also a few data points of the speed and acceleration as computed with diff and with the 50th order FIR filter. Notice how the noise has been slightly amplified in the speed estimates and largely amplified in the acceleration estimates obtained with diff.

A leaky integrator filter is an all-pole filter with transfer function H(z) = 1/(1 - lambda*z^-1), where lambda is a constant that must be smaller than 1 to ensure stability of the filter. It is no surprise that as lambda approaches one, the leaky integrator approaches the inverse of the diff transfer function. Apply the leaky integrator to the acceleration and speed estimates obtained in the previous section to get back the speed and the drift respectively. Use the estimates obtained with the diff function since they are noisier. Use a leaky integrator with lambda = 0.999. Plot the magnitude response of the leaky integrator filter. Notice that the filter acts as a lowpass filter effectively eliminating high frequency noise.

fvtool(1,[1 -.999],'Fs',Fs)

Filter the velocity and acceleration with the leaky integrator.

v_original = v1;
a_original = a1;
d_leakyint = filter(1,[1 -0.999],v_original);
v_leakyint = filter(1,[1 -0.999],a_original);

% Multiply by time differential
d_leakyint = d_leakyint * dt;
v_leakyint = v_leakyint * dt;

Plot the displacement and speed estimates and compare to the original signals v1 and a1. You can also integrate a signal using the cumsum and cumtrapz functions. Results will be similar to those obtained with the leaky integrator. In this example you learned about linear and nonlinear phase filters and you learned how to compensate for the phase delay introduced by each filter type. You also learned how to apply filters to remove unwanted frequency components from a signal, and how to downsample a signal after limiting its bandwidth with a lowpass filter.
Finally, you learned how to differentiate and integrate a signal using digital filter designs. Throughout the example you also learned how to use analysis tools to look at the response and group delay of your filters. For more information on filter applications see the Signal Processing Toolbox. For more information on how to design digital filters see the "Practical Introduction to Digital Filter Design" example. References: J.G. Proakis and D. G. Manolakis, "Digital Signal Processing. Principles, Algorithms, and Applications", Prentice-Hall, 1996. S. J. Orfanidis, "Introduction To Signal Processing", Prentice-Hall, 1996. The following helper functions are used in this example.
{"url":"http://www.mathworks.se/help/signal/examples/practical-introduction-to-digital-filtering.html?prodcode=SG&nocookie=true","timestamp":"2014-04-24T01:39:13Z","content_type":null,"content_length":"49689","record_id":"<urn:uuid:81581fdb-0335-4cd8-a861-a764f7e08295>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: How can i calculate my gpa? • one year ago
Wolfram Demonstrations Project
Heat Transfer along a Rod
This Demonstration shows the solution to the heat equation for a one-dimensional rod. The rod is initially submerged in a bath at 100 degrees and is perfectly insulated except at the ends, which are then held at 0 degrees. This is a Sturm–Liouville boundary value problem for the one-dimensional heat equation with boundary conditions u(0,t) = 0, u(L,t) = 0, and u(x,0) = 100, where t is time, x is distance along the rod, L is the length of the rod, and u(x,t) is the temperature. The solution is of the form u(x,t) = Σ_{n=1}^{N} b_n sin(nπx/L) e^{-k(nπ/L)² t}, where k is the conductivity parameter (a product of the density, thermal conductivity, and specific heat of the rod) and the coefficients are b_n = (2/L)∫₀^L 100 sin(nπx/L) dx = 400/(nπ) for odd n (and 0 for even n). If you increase the number of terms N, the solution improves as long as the time t is small. As t → ∞ (the final state), the entire rod approaches a temperature of 0 degrees. You can see the effect of the thermal properties by varying the conductivity parameter k.
[1] R. Haberman, Applied Partial Differential Equations with Fourier Series and Boundary Value Problems, 4th ed., Saddle River, NJ: Prentice Hall, 2003.
[2] J. R. Brannan and W. E. Boyce, Differential Equations with Boundary Value Problems: An Introduction to Modern Methods and Applications, New York: John Wiley and Sons, 2010.
(Department of Mathematical Sciences at the United States Military Academy, West Point, NY)
{"url":"http://demonstrations.wolfram.com/HeatTransferAlongARod/","timestamp":"2014-04-16T21:55:58Z","content_type":null,"content_length":"44552","record_id":"<urn:uuid:401fdcc5-b926-4888-b2e9-f96418e67778>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
A003433 - OEIS A003433 Hadamard maximal determinant problem: largest determinant of (+1,-1)-matrix of order n. 10 (Formerly M1291) 1, 2, 4, 16, 48, 160, 576, 4096, 14336, 73728, 327680, 2985984, 14929920, 77635584, 418037760, 4294967296, 21474836480, 146028888064, 894426939392, 10240000000000, 59392000000000 (list; graph; refs; listen; history; text; internal format) OFFSET 1,2 REFERENCES N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence). See A003432 for further references, links and formulae. LINKS Table of n, a(n) for n=1..21. Richard P. Brent and Judy-anne H. Osborn, On minors of maximal determinant matrices, Arxiv preprint arXiv:1208.3819, 2012. W. P. Orrick and B. Solomon, Large-determinant sign matrices of order 4k+1, Discr. Math. 307 (2007), 226-236. Eric Weisstein's World of Mathematics, -11-Matrix Index entries for sequences related to binary matrices Index entries for sequences related to Hadamard matrices Index entries for sequences related to maximal determinants FORMULA a(n) = 2^(n-1)*A003432(n-1). E.g., a(6) = 32*A003432(5) = 32*5 = 160. a(n) <= n^(n/2). CROSSREFS A003432 is the main entry for this sequence. Cf. A051753. Cf. A188895 (number of distinct matrices having this maximal determinant). Sequence in context: A119000 A034917 A215724 * A153951 A165905 A104354 Adjacent sequences: A003430 A003431 A003432 * A003434 A003435 A003436 KEYWORD nonn,hard,nice AUTHOR N. J. A. Sloane EXTENSIONS Added a(19)-a(21). Edited by William P. Orrick, Dec 20 2011 STATUS approved
{"url":"http://oeis.org/A003433","timestamp":"2014-04-19T21:40:48Z","content_type":null,"content_length":"17550","record_id":"<urn:uuid:ac2bd795-5c99-4203-a33a-d89bf61d187d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Compound Interest
November 25th 2012, 08:25 PM #1 Nov 2012
I am having some trouble working out this problem if someone could please explain the steps to solve it.
A dealer advertises that a computer is sold at $450 cash down followed by two yearly installments of $680 and $590 at the end of the first and second year respectively. If the interest charged is 18% per annum compounded annually, find the cash price of the computer.
Re: Compound Interest
Let x be the cash price of the computer. "Cash Down" means you pay off some of the price before compounded interest. Hence the amount of money that is required to pay off the computer becomes x-450.
In the first year, if you don't make any more payments at the end of year 0, the debt of (x-450) is charged interest at 18%. In other words, the debt is increased by 18%. Hence the debt becomes 1.18(x-450).
At the end of the first year, we make a payment of $680. Hence our debt is now what we owe minus what we have paid off: 1.18(x-450) - 680 [This is the debt after year 1]
In year 2, the interest is charged again at 18%. Hence our debt now becomes 1.18 [1.18(x-450) - 680]. With a payment of $590 our debt decreases to 1.18 [1.18(x-450) - 680] - 590. If we have fully paid it off, then our debt is equal to 0. Hence 1.18 [1.18(x-450)-680]-590 = 0. Solve for x.
Re: Compound Interest
Thanks for the explanation
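For completeness, carrying the last equation through:
$1.18\,[1.18(x-450)-680] - 590 = 0 \;\Longrightarrow\; 1.18(x-450) - 680 = \frac{590}{1.18} = 500 \;\Longrightarrow\; 1.18(x-450) = 1180 \;\Longrightarrow\; x - 450 = 1000 \;\Longrightarrow\; x = 1450$
so the cash price of the computer is $1450.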
{"url":"http://mathhelpforum.com/math-topics/208426-compound-interest.html","timestamp":"2014-04-19T02:58:28Z","content_type":null,"content_length":"34200","record_id":"<urn:uuid:6f5f164e-b1ec-4bc4-8ac1-15cc9ea928e8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Clifford Algebra -- from Wolfram Library Archive Clifford Algebra Organization: University of Windsor An experimental version of a Clifford Algebra package. This package uses an experimental version of the tensor calculus package Tensorial 3.0 that is included with the Clifford package. TCliffordAlgebra is an add-on application for Tensorial by Renan Cabrera that implements Clifford algebra operations. This is the first version of the package and it will probably be extended in the future. It is closely based on the William E. Baylis Electrodynamics: A Modern Geometric Approach book. There are Help files for the commands and a number of examples. It installs in the same manner as Tensorial except that it creates the folder TCliffordAlgebra1. It requires the 28 Jan 2004 version, or later, of Tensorial. Tensorial is a general purpose tensor calculus package for Mathematica 4.1 or better. Some of its features are: complete freedom in choosing tensor labels and indices; base indices may be any set of integers or symbols; tensor shortcuts for easy entry of tensors; flavored (colored or annotated) indices for different coordinate systems; CircleTimes notation available; easy methods for storing and substituting tensor values; routines for partial, covariant, total, absolute (Intrinsic) and Lie derivatives; There is extensive documentation, with a Help page and numerous examples for each command. In addition there are a number of tutorial and sample application notebooks. You may wish to check the site occasionally for updates. A section in the Help Introduction now gives a history of the major additions and changes in usage. The sites involved are: http://www.websamba.com/cabrer7 Applied Mathematics Mathematics > Algebra Mathematics > Calculus and Analysis > Differential Geometry Mathematics > Geometry Science > Physics Science > Physics > Relativity Theory Clifford Algebra, Geometry TCliffordAlgebra1.zip (132.1 KB) - ZIP archive TContinuumMechanics1.zip (405 KB) - ZIP archive TMecanica1.zip (64.3 KB) - ZIP archive TensorCalculus3.zip (326.4 KB) - ZIP archive
{"url":"http://library.wolfram.com/infocenter/MathSource/5101/","timestamp":"2014-04-21T07:07:56Z","content_type":null,"content_length":"39243","record_id":"<urn:uuid:599edb5c-d98b-484a-8c8f-808038b4c32b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
Pigeonhole principle January 31st 2009, 10:12 AM #1 Oct 2008 Pigeonhole principle Hi, I need help to understand how to solve problems based on the pigeonhole principle. I need hints on solving them. Could you help me please? A computer network consists of 6 computers. Every computer is connected to 0 or more computers. Prove that there are 2 computers, that are connected to the same number of computers. Suppose that we have cables to connect computers to printers. Find the smallest number of cables that guarantee that if we connect 8 computers to 4 printers, there are at least 4 computers that are connected to 4 different printers. Calculate the smallest number of cables for 100 computers and 20 printers, such as 20 computers are connected to 20 different printers. The population of Greece is 11.000.000. Prove that there exists a day in the year that at least 50 people have birthday and that all of them have the same initial letters (name and surname). *alphabet consists of 24 letters. This problem can be generalized. Prove that in a network of $n$ computers there are two which are connected to the same number of computers. The proof goes by induction. Of course it is true for $n=2$ so let us assume it is true for $k\geq 2$ network of computers. Say we have $k+1$ network. There are two cases: every computer is connected; there exists a computer not connected to the rest of the other computers. In the first case if every computer is connected then it means the # of connected computers for each computer is between 1 and k. But there are k+1 computers, so there exist two computers that have the same number of connections. In the second case isolate the computer not connected to anything else. Then we are left with a k-computer network. But then it follows by induction that there exists two computers in this network that are connected to the same number of computers. A general solution for 1) to show a way to use the principle. We give to every computer a number, which is its number of connections. If they are $2$ computers, either they're connected or not, but in the two cases they have the same number ( $1$ or $0$). Assume that a $n$-computers network has always at least two computers with the same number. Consider a network with $n+1$ computers. What about the computers whose number is $0$? If they're more than $2$, it's over. If there is one, then the $n$ other computers form a $n$-computers network, so there are at least $2$ computers that have the same number (our hypothesis). Finally, if there is no computer with number $0$, then the $n+1$ computers have numbers which belong to $\{1,...,n\}$. Therefore if you want to sort computers by numbers, the pigeonhole principle states that at least two computers will have the same number. Therefore for all integer $n$ greater or equal than $2$, a $n$-computers netword has $2$ computers which have the same number of connections. EDIT: ThePerfectHacker writes faster Here is strong hint on #3. Using the floor function: $\left\lfloor {\frac{{\frac{{11\left( {10^6 } \right)}}{{24^2 }}}}{{365}}} \right\rfloor = 52$ BTW: If your discrete mathematics course includes a section on graph theory then #1 above is an important theorem. In any simple graph of order two or more there are at least two vertices with the same degree. Last edited by Plato; January 31st 2009 at 03:01 PM. Here is strong hint on #3. 
Using the floor function: $\left\lfloor {\frac{{\frac{{11\left( {10^6 } \right)}}{{24^2 }}}}{{365}}} \right\rfloor = 52$
BTW: If your discrete mathematics course includes a section on graph theory then #1 above is an important theorem. In any simple graph of order two or more there are at least two vertices with the same degree.
OK, about this I understand that the function gives the correct answer, but I wonder how you can get there. This is how I think of the problem: There are 11.000.000 people and 365 days of possible birthdays. 11.000.000/365 = 30136 people have birthday at 365 different days but none of them share the same birthday. If I want 2 people who have the same birthday then I should choose 30137 people among 11.000.000. But I want 50 people who share the same birthday, so I should choose 30137 x 25 = 753425 people. In addition to that, there are 24x24 = 576 possible initial letters. Among 11.000.000 people there are 11.000.000/576 = 19097 people who have unique initials. If I want 2 who have the same initials, I need 19098 people. But I want 50, so I need 19098x25 = 477450 people from 11.000.000. But I cannot connect the birthdays to the initial letters.
About graphs, there is a chapter in my notes and I think that next week we'll be doing that. About the 1st problem, I need some time to think about that. Thanks for your answers!
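The two counts can be connected by treating each (birthday, initials) combination as a single pigeonhole — which is exactly what the hint's fraction does:
$365 \times 24^2 = 210\,240 \text{ pigeonholes}, \qquad \left\lceil \frac{11\,000\,000}{210\,240} \right\rceil = 53 \ge 50,$
so by the pigeonhole principle at least 53 people must share both the same birthday and the same initial letters.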
{"url":"http://mathhelpforum.com/discrete-math/70963-pigeonhole-principle.html","timestamp":"2014-04-21T08:53:25Z","content_type":null,"content_length":"50028","record_id":"<urn:uuid:a1966335-f5e9-4d09-8424-c6d5c2ff1eb6>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: The MYTH of UNCOMPUTABLE FUNCTIONS Replies: 3 Last Post: Jan 18, 2013 11:17 PM Messages: [ Previous | Next ] Re: The MYTH of UNCOMPUTABLE FUNCTIONS Posted: Jan 18, 2013 11:17 PM On Jan 19, 1:13 pm, George Greene <gree...@email.unc.edu> wrote: > On Jan 18, 7:13 pm, Graham Cooper <grahamcoop...@gmail.com> wrote: > > Assume a process exists that runs any other process and ADDS 1. > No. > Gee, THAT was easy! So is falling off the Special Bus > > Run 2 of these processes and cross the inputs. > We DON'T DO *processes* around here! WE do PROGRAMS! > Every TM *has a PROGRAM*. We are basically identifying these TMs > with THE PROGRAM, NOT the machine! Same thing but you have 2 identical programs so people with brains refer to the uniquely identified processes. > > Each process has it's one required argument. > No, it doesn't. Trivially so, just not trivial to write down in Text Post format. > > P_1(P_2) > The INNER P_2 in that DOES NOT HAVE an argument. This is Process notation, not functions. P1 --> P2 P2 --> P1 Same Proof as Turing's. Just because YOU'RE too stupid to know anything about computers. Halt is not a pure function, Turing proved the 1st Process Deadlock But nobody in SCI.MATH or SCI.LOGIC with their MATHS DEGREES even knows what a DEADLOCK IS! this is WAY ABOVE GEORGE'S HEAD... I think a HALTING PROGRAM will work with PROGRAM TRANSITIONS. You Start with: 10 PRINT "FINISH" and you CONSTRUCT ANY OTHER HALTING PROGRAM with ALLOWABLE TRANSITIONS. But like I said... WAAAAY Above George's head and all the *Maths Grads* who studied LOGIC LITERATURE because ART HISTORY DEGREE was Date Subject Author 1/18/13 Re: The MYTH of UNCOMPUTABLE FUNCTIONS Graham Cooper 1/18/13 Re: The MYTH of UNCOMPUTABLE FUNCTIONS Graham Cooper
{"url":"http://mathforum.org/kb/message.jspa?messageID=8109327","timestamp":"2014-04-20T07:03:00Z","content_type":null,"content_length":"19194","record_id":"<urn:uuid:9aa1c2a5-7ca5-4101-99aa-9059daabe691>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem Solving School Challenge help i fail maths if not if veri soon
May 31st 2007, 02:38 AM #1
Tassie decides to run a competition at her chocolate factory. The prize is a year's supply of her best sellers: Rockies, Delights, Nutbix and Creams. She has 4 jars, one containing Rockies and Delights, one containing Delights and Nutbix, one containing Nutbix and Creams and one containing Creams only.
:The chocolates are identical in size and are wrapped in identical foil.
:The jars are labelled Rockies and Delights, Delights and Nutbix, Nutbix and Creams and Creams, but no label is on the correct jar.
:The aim of the competition is to identify the jars correctly after selecting and unwrapping as few chocolates as possible.
:No contestant sees what another contestant unwraps.
A) Emily unwraps a chocolate from the jar labelled Rockies and Delights and it is a Nutbix. From the jar labelled Delights and Nutbix she selects a Cream, from the jar labelled Nutbix and Creams she selects a Nutbix and from the jar labelled Creams she draws a Rockie. Show how Emily can correctly relabel the jars.
B) Matt unwraps one chocolate from one jar and is able to relabel that jar correctly. Make a list of nine ways he could have done this. Explain why your list is complete.
C) Juliet's selection of chocolate was lucky and she won the competition. She only had to select 2 chocolates before she was able to relabel all the jars correctly. Explain how she could have achieved this.
Hello, Tomm!
I think I've got it . . .
Tassie decides to run a competition at her chocolate factory. The prize is a year's supply of her best sellers: Rockies, Delights, Nutbix and Creams. She has 4 jars, one with R & D, one with D & N, one with N & C and one with C only.
~ The chocolates are identical in size and are wrapped in identical foil.
~ The jars are labelled: R & D, D & N, N & C, and C, but no label is on the correct jar.
~ The aim of the competition is to identify the jars correctly after selecting . . and unwrapping as few chocolates as possible.
~ No contestant sees what another contestant unwraps.
A) Emily unwraps a chocolate from the jar labelled R & D, and it is a N. From the jar labelled D & N she gets a C, from the jar labelled N & C, she gets a N, and from the jar labelled C she draws a R. Show how Emily can correctly relabel the jars.
The jars look like this:
. . . $\#1\qquad\#2\qquad\#3\qquad\#4$
. . $\boxed{\begin{array}{c}R\\D\end{array}}\quad\boxed{\begin{array}{c}D\\N\end{array}}\quad\boxed{\begin{array}{c}N\\C\end{array}}\quad\boxed{\begin{array}{c}C\\ .\end{array}}$
She drew R from #4. Since the only jar with an R is R&D, #4 is $R\&D$
She drew N from #3. The only other jar with an N is D&N; #3 is $D\&N$
She drew N from #1. The only other jar with an N is N&C; #1 is $N\&C$
This leaves $C$ for jar #2.
B) Matt unwraps one chocolate from one jar and is able to relabel it correctly. Make a list of nine ways he could have done this.
(1) He draws D from #1. The only other jar with D is D&N; jar #1 is $D\&N$
(2) He draws R from #2. The only jar with an R is R&D; jar #2 is $R\&D$
(3) He draws N from #2. The only other jar with an N is N&C; jar #2 is $N\&C$
(4) He draws D from #2. The only other jar with a D is R&D; jar #2 is $R\&D$
(5) He draws R from #3. The only jar with an R is R&D; jar #3 is $R\&D$
(6) He draws N from #3. The only other jar with an N is D&N; jar #3 is $D\&N$
(7) He draws C from #3. The only other jar with a C is C; jar #3 is $C$
(8) He draws R from #4.
The only jar with an R is R&D; jar #4 is $R\&D$ (9) He draws C from #4. The only other jar with a C is N&C; jar #4 is $N\&C$ C) Juliet's selection of chocolate was lucky and she won the competition. She had to select only 2 chocolates before she was able to relabel all the jars correctly. Explain how she could have achieved this. She drew R from jar #4. The only jar with an R is R&D; jar #4 is $R\&D$ She drew D from jar #1. The only other jar with an D is D&N; jar #1 is $D\&N$ Since jar #3 is not N&C, jar #2 is $N\&C$ Finally, jar #3 is $C$ May 31st 2007, 09:22 AM #2 Super Member May 2006 Lexington, MA (USA)
{"url":"http://mathhelpforum.com/math-topics/15506-problem-solving-school-challenge-help-i-fail-maths-if-not-if-veri-soon.html","timestamp":"2014-04-19T21:38:28Z","content_type":null,"content_length":"39235","record_id":"<urn:uuid:1c66e6c9-c099-4951-ad3f-631e8de3d431>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics - MATH MATH F103X Concepts and Contemporary Applications of Mathematics (m) 3 Credits Applications of mathematics in modern society. Topics include voting systems, probability and statistics and applications of graph theory in management science; uses of probability and statistics in industry, government and science; and applications of geometry to engineering and astronomy. Problem solving emphasized. Also available via eLearning and Distance Education. Prerequisites: DEVM F105 or DEVM F106 or placement; or high school geometry and algebra II. (3+0) MATH F107X Functions for Calculus (m) 4 Credits A study of algebraic, logarithmic and exponential functions; sequences and series; conic sections; and as time allows, systems of equations, matrices and counting methods. A brief review of basic algebra in the first week prepares students for the rigor expected. The primary purpose of this course, in conjunction with MATH F108, is to prepare students for calculus. Note: Credit may be earned for taking MATH F107X or MATH F161X, but not for both. Also available via eLearning and Distance Education. Prerequisites: DEVM F105 or DEVM F106 with a grade of B (3.0) or higher; or two years of high school algebra and MATH F107X placement or higher. (4.5+0) MATH F108 Trigonometry (m) 2-3 Credits A study of the trigonometric functions. Also available via eLearning and Distance Education. Prerequisites: MATH F107X or placement or concurrent enrollment in MATH F107X. (2-3+0) MATH F161X Algebra for Business and Economics (m) 3 Credits Functions of one and several variables with attention to linear, polynomial, rational, logarithmic and exponential relationships. Geometric progressions as applied to compound interest and present value. Linear systems of equations and inequalities. Note: Credit may be earned for taking MATH F107X or MATH F161X, but not for both. Also available via eLearning and Distance Education. Prerequisites: DEVM F105 or DEVM F106 or higher or two years of high school algebra and MATH F161X placement or higher. (3+0) MATH F200X Calculus I (m) 4 Credits Limits, including those with indeterminate form, continuity, tangents, derivatives of polynomial, exponential, logarithmic and trigonometric functions, including product, quotient and chain rules, and the mean value theorem. Applications of derivatives including graphing functions and rates of change. Antiderivatives, Newton's method, definite and indefinite integrals, methods for substitution in integrals and the fundamental theorem of calculus. Applications of integrals include areas, distances, and volumes. Note: No credit may be earned for more than one of MATH F200X, MATH F262X or MATH F272X. Also available via eLearning and Distance Education. Prerequisites: MATH F107X and MATH F108 or placement in MATH F200X. (4+1) MATH F201X Calculus II (m) 4 Credits Techniques and applications of integration. Integration of trigonometric functions, volumes including those using slicing, arc-length, integration by parts, trigonometric substitutions, partial fractions, hyperbolic functions, and improper integrals. Numeric integration including Simpson's rule, first order differential equations with applications to population dynamics and rates of decay, sequences, series, tests for convergence including comparison and alternating series tests, conditional convergence, power series, Taylor series, polar coordinates including tangent lines and areas, and conic sections. Notes: Also available via eLearning and Distance Education. 
Prerequisites: MATH F200X or placement in MATH F201X. (4+0) MATH F202X Calculus III (m) 4 Credits Partial derivatives and multiple integrals (double and triple). Vectors, parametric curves, motion in three dimensions, limits, continuity, chain rule, tangent planes, directional derivatives, optimization, Lagrange multipliers, integrals in polar coordinates, parametric surfaces, Jacobians, line integrals, Green's Theorem, surface integrals and Stokes' Theorem. Also available via eLearning and Distance Education. Prerequisites: MATH F201X. (4+0) MATH F205 Mathematics for Elementary School Teachers I (m) 3 Credits Offered Fall Elementary set theory, numeration systems, and algorithms of arithmetic, divisors, multiples, integers and introduction to rational numbers. Emphasis on classroom methods. Also available via eLearning and Distance Education. Prerequisites: MATH F107X, MATH F161X or placement. Restricted to BAS. and BA Elementary Education degree students; others by permission of instructor. (3+1) MATH F206 Mathematics for Elementary School Teachers II (m) 3 Credits Offered Spring A continuation of MATH F205. Real number systems and subsystems, logic, informal geometry, metric system, probability and statistics. Emphasis on classroom methods. Also available via eLearning and Distance Education. Prerequisites: MATH F205. (3+1) MATH F215 Introduction to Mathematical Proofs (m) 3 Credits Offered Spring Emphasis on proof techniques with topics including logic, sets, cardinality, relations, functions, equivalence, induction, number theory, congruence classes and elementary counting. In addition, a rigorous treatment of topics from calculus or a selection of additional topics from discrete mathematics may be included. Prerequisites: MATH F200X, MATH F201X or concurrent with MATH F201X or permission of instructor. (3+0) MATH F262X Calculus for Business and Economics (m) 4 Credits Ordinary and partial derivatives. Maxima and minima problems, including the use of Lagrange multipliers. Introduction to the integral of a function of one variable. Applications include marginal cost, productivity, revenue, point elasticity of demand, competitive/complementary products, consumer's surplus, etc. Note: No credit may be earned for more than one of MATH F200X, MATH F262X or MATH F272X. Also available via eLearning and Distance Education. Prerequisites: MATH F161X or placement. (4+0) MATH F272X Calculus for Life Sciences (m) 3 Credits Offered Fall Differentiation and integration with applications to the life sciences. Note: No credit may be earned for more than one of MATH F200X, MATH F262X or MATH F272X. Prerequisites: MATH F107X and MATH F108 or placement. (3+0) MATH F301 Topics in Mathematics 3 Credits Offered Spring An elective course in mathematics for majors. Topics will vary from year to year and may be drawn from mathematical biology, numerical linear algebra, graph theory, Gelois theory, logic or other areas of mathematics. May be repeated with permission of instructor for a total of nine credits. Prerequisites: MATH F215 or permission of instructor. (0+0) MATH F302 Differential Equations 3 Credits Nature and origin of differential equations, first order equations and solutions, linear differential equations with constant coefficients, systems of equations, power series solutions, operational methods, and applications. Prerequisites: MATH F202X. 
(3+0) MATH F305 Geometry 3 Credits Offered Spring Even-numbered Years Topics selected from such fields as Euclidean and non-Euclidean plane geometry, affine geometry, projective geometry, and topology. Prerequisites: MATH F202X and MATH F215 or permission of instructor. (3+0) MATH F306 Introduction to the History and Philosophy of Mathematics 3 Credits Offered Spring Odd-numbered Years Important periods of history as exemplified by such thinkers as Plato, B. Russell, D. Hilbert, L.E.J. Brouwer and K. Godel. For students of mathematics, science, history and philosophy. Prerequisites: MATH F202X or permission of instructor. (3+0) MATH F307 Discrete Mathematics 3 Credits Logic, counting, sets and functions, recurrence relations, graphs and trees. Additional topics chosen from probability theory. Prerequisites: MATH F201X or permission of instructor. Cross-listed with CS F307. (3+0) MATH F310 Numerical Analysis 3 Credits Offered Fall Direct and iterative solutions of systems of equations, interpolation, numerical differentiation and integration, numerical solutions of ordinary differential equations, and error analysis. Prerequisites: MATH F302 or MATH F314 or permission of instructor. Recommended: Knowledge of programming. (3+0) MATH F314 Linear Algebra 3 Credits Linear equations, finite dimensional vector spaces, matrices, determinants, linear transformations and characteristic values. Inner product spaces. Prerequisites: MATH F201X. (3+0) MATH F320 Topics in Combinatorics 3 Credits Offered Fall Odd-numbered Years Introduction to some fundamental ideas of combinatorics. Topics selected from such fields as enumerative combinatorics, generating functions, set systems, recurrence relations, directed graphs, matchings, Hamiltonian and Eulerian graphs, trees and graph colorings. Prerequisites: MATH F215 or permission of instructor. (3+0) MATH F321 Number Theory 3 Credits Offered Fall Even-numbered Years The theory of numbers is concerned with the properties of the integers, one of the most basic of mathematical sets. Seemingly naive questions of number theory stimulated much of the development of modern mathematics and still provide rich opportunities for investigation. Topics studied include classical ones such as primality, congruences, quadratic reciprocity and Diophantine equations, as well as more recent applications to cryptography. Additional topics such as continued fractions, elliptical curves or an introduction to analytic methods may be included. Prerequisites: MATH F215 or permission of instructor. (3+0) MATH F371 Probability 3 Credits Offered Fall Odd-numbered Years Probability spaces, conditional probability, random variables, continuous and discrete distributions, expectation, moments, moment generating functions, and characteristic functions. Prerequisites: MATH F202X. (3+0) MATH F401 W Introduction to Real Analysis 3 Credits Offered Fall Completeness of the real numbers and its consequences convergence of sequences and series, limits and continuity, differentiation, the Riemann integral. Prerequisites: ENGL F111X; ENGL F211X or ENGL F213X or permission of instructor; MATH F202X; MATH F215. (3+0) MATH F404 Topology 3 Credits Offered Fall Even-numbered Years Introduction to topology, set theory, open sets, compactness, connectedness, product spaces, metric spaces and continua. Prerequisites: MATH F202X; MATH F215. Recommended: MATH F314 and/or MATH F405. MATH F405 W Abstract Algebra 3 Credits Offered Spring Theory of groups, rings and fields. 
Prerequisites: ENGL F111X; ENGL F211X or ENGL F213X; MATH F215; or permission of instructor. Recommended: MATH F307 and/or MATH F314. (3+0) MATH F408 Mathematical Statistics 3 Credits Offered Spring Even-numbered Years Distribution of random variables and functions of random variables, interval estimation, point estimation, sufficient statistics, order statistics, and test of hypotheses including various criteria for tests. Prerequisites: MATH F371; STAT F200X. (3+0) MATH F412 Differential Geometry 3 Credits Offered Spring Odd-numbered Years Introduction to the differential geometry of curves, surfaces, and Riemannian manifolds. Basic concepts covered include the Frenet-Serret apparatus, surfaces, first and second fundamental forms, geodesics, Gauss curvature and the Gauss-Bonnet Theorem. Time permitting, topics such as minimal surfaces, theory of hypersurfaces and/or tensor analysis may be included. Prerequisites: MATH F314 and MATH F401; or permission of instructor. (3+0) MATH F421 Applied Analysis 4 Credits Offered Fall Vector calculus, including gradient, divergence, and curl in orthogonal curvilinear coordinates, ordinary and partial differential equations and boundary value problems, and Fourier series and integrals. Prerequisites: MATH F302. (4+0) MATH F422 Introduction to Complex Analysis 3 Credits Offered Spring Complex functions including series, integrals, residues, conformal mapping and applications. May be taken independently of MATH F421. Prerequisites: MATH F302. (3+0) MATH F430 Topics in Mathematics 3 Credits Offered Spring An elective course in mathematics for majors. Topics will vary from year to year and may be drawn from mathematical biology, numerical linear algebra, graph theory, logic, or other areas of mathematics. May be repeated with permission of instructor for a total of nine credits. Prerequisites: MATH F215 or permission of instructor. (3+0) MATH F460 Mathematical Modeling 3 Credits Offered Fall Odd-numbered Years Introduction to mathematical modeling using differential or difference equations. Emphasis is on formulating models and interpreting qualitative behavior such models predict. Examples will be taken from a variety of fields, depending on the interest of the instructor. Students develop a modeling project. Prerequisites: COMM F131X or COMM F141X; ENGL F111X; ENGL F211X or ENGL F213X; MATH F201X; or permission of instructor. Recommended: One or more of MATH F302; MATH F310; MATH F314; MATH F401; STAT F300; some programming experience. (3+0) MATH F490 O Senior Seminar 2 Credits Offered Spring Advanced topics selected from areas outside the usual undergraduate offerings. A substantial level of mathematical maturity is assumed. Prerequisites: COMM F131X or COMM F141X, at least one of MATH F401 or MATH F405, senior standing. (2+0) MATH F600 Teaching Seminar 1 Credits Fundamentals of teaching mathematics in a university setting. Topics may include any aspect of teaching: university regulations, class and lecture organization, testing, book selection, teaching evaluations, etc. Specific topics will vary on the basis of student and instructor interest. Individual classroom visits will also be used for class discussion. May be repeated for credit. Prerequisites: Graduate standing. (1+0) MATH F611 Mathematical Physics 3 Credits Offered Fall Mathematical tools and theory for classical and modern physics. Core topics: Linear algebra including eigenvalues, eigenvectors and inner products in finite dimensional spaces. Infinite series. 
Hilbert spaces and generalized functions. Complex analysis, including Laurent series and contour methods. Applications to problems arising in physics. Selected additional topics, which may include operator and spectral theory, groups, tensor fields, hypercomplex numbers. Prerequisites: MATH F302; MATH F314; MATH F421; MATH F422; or permission of instructor. Cross-listed with PHYS F611. (3+0) MATH F612 Mathematical Physics 3 Credits Offered Spring Continuation of Mathematical Physics I; mathematical tools and theory for classical and modern physics. Core topics: classical solutions to the principal linear partial differential equations of electromagnetism, classical and quantum mechanics. Boundary value problems and Sturm-Liouville theory. Green's functions and eigenfunction expansions. Integral transforms. Orthogonal polynomials and special functions. Applications to problems arising in physics. Selected additional topics, which may include integral equations and Hilbert-Schmidt theory, perturbation methods, probability theory. Prerequisites: PHYS/MATH F611 or equivalent; or permission of instructor. Cross-listed with PHYS F612. (3+0) MATH F615 Numerical Analysis of Differential Equations 3 Credits Offered Alternate Spring Review of numerical differentiation and integration, and the numerical solution of ordinary differential equations. Main topics to include the numerical solution of partial differential equations, curve fitting, splines, and the approximation of functions. Supplementary topics such as the numerical method of lines, the fast Fourier transform, and finite elements may be included as time permits and interest warrants. Prerequisites: CS F201, MATH F310, MATH F314, MATH F421, MATH F422 or permission of instructor. (3+0) MATH F617 Functional Analysis 3 Credits Offered Spring Even-numbered Years Study of Banach and Hilbert spaces, and continuous linear maps between them. Linear functionals and the Hahn-Banach theorem. Applications of the Baire Category theorem. Compact operators, self adjoint operators, and their spectral properties. Weak topology and its applications. Prerequisites: MATH F314; MATH F401 or equivalent. Recommended: MATH F422; MATH F641 or equivalent. (3+0) MATH F631 Algebra I 4 Credits Offered Fall Even-numbered Years Rigorous development of groups, rings and fields. Prerequisites: MATH F405 or permission of instructor. (4+0) MATH F632 Algebra II 3 Credits Offered Fall Odd-numbered Years Advanced topics taken from group theory, category theory, ring theory, homological algebra and field theory. Prerequisites: MATH F631 or instructor permission. (3+0) MATH F641 Real Analysis 4 Credits General theory of Lebesgue measure and Lebesgue integration on the real line. Convergence properties of the integral. Introduction to the general theory of measures and integration. Differentiation, the product measures and an introduction to LP spaces. Prerequisites: MATH F401-F402 or permission of instructor. (4+0) MATH F645 Complex Analysis 4 Credits Offered Spring Even-numbered Years Analytic functions, power series, Cauchy integral theory, residue theorem. Basic topology of the complex plane and the structure theory of analytic functions. The Riemann mapping theorem. Infinite products. Prerequisites: Math F641 or permission of instructor. (4+0) MATH F651 Topology 4 Credits Offered Spring Odd-numbered Years Treatment of the fundamental topics of point-set topology. 
Separation axioms, product and quotient spaces, convergence via nets and filters, compactness and compactifications, paracompactness, metrization theorems, countability properties, and connectedness. Set theory as needed for examples and proof techniques. Prerequisites: MATH F401-F402 or MATH F404 or permission of instructor. (4+0) MATH F660 Advanced Mathematical Modeling 3 Credits Offered Spring Even-numbered Years The mathematical formulation and analysis of problems arising in the physical, biological, or social sciences. The focus area of the course may vary, but emphasis will be given to modeling assumptions, derivation of model equations, methods of analysis, and interpretation of results for the particular applications. Examples include heat conduction problems, random walk processes, molecular evolution, perturbation theory. Students will develop a modeling project as part of the course requirements. Prerequisites: Permission of instructor. (3+0) MATH F661 Optimization 3 Credits Offered Fall Even-numbered Years Linear and nonlinear programming, simplex method, duality and dual simplex method, post-optimal analysis, constrained and unconstrained nonlinear programming, Kuhn-Tucker conditions. Applications to management, physical and life sciences. Computational work with the computer. Prerequisites: Knowledge of calculus, linear algebra, and computer programming. Cross-listed with CS F661. (3+0) MATH F663 Applied Combinatorics and Graph Theory 3 Credits Offered Spring Even-numbered Years A study of combinatorial and graphical techniques for complexity analysis including generating functions, recurrence relations, theory of counting, planar directed and undirected graphs, and applications to NP complete problems. Prerequisites: MATH F307 and MATH F314 or instructor permission. (3+0) MATH F665 Topics in Graduate Mathematics 3 Credits Offered As Demand Warrants Elective courses in graduate mathematics offered by faculty on a rotating basis. Topics may include, but are not limited to, graph theory, glaciology modeling, general relativity, mathematical biology, Galois theory and numerical linear algebra. May be repeated for credit with permission of instructor. (3+0)
{"url":"http://www.uaf.edu/courses/courses-detail/index.xml?name=Mathematics%20-%20MATH&abrev=MATH","timestamp":"2014-04-18T21:01:29Z","content_type":null,"content_length":"37308","record_id":"<urn:uuid:4bbd8d00-a4f7-45b0-a413-fd8b2d20365f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Breaking RSA may not be equivalent to factoring Breaking RSA may not be equivalent to factoring Authors: D. Boneh and R. Venkatesan We provide evidence that breaking low-exponent RSA cannot be equivalent to factoring integers. We show that an algebraic reduction from factoring to breaking low-exponent RSA can be converted into an efficient factoring algorithm. Thus, in effect an oracle for breaking RSA does not help in factoring integers. Our result suggests an explanation for the lack of progress in proving that breaking RSA is equivalent to factoring. We emphasize that our results do not expose any weakness in the RSA system. In Proceedings Eurocrypt '98, Lecture Notes in Computer Science, Vol. 1233, Springer-Verlag, pp. 59--71, 1998 Full paper: gzipped-PostScript, PDF [first posted 3/1998 ]
{"url":"http://crypto.stanford.edu/~dabo/pubs/abstracts/no_rsa_red.html","timestamp":"2014-04-19T09:25:26Z","content_type":null,"content_length":"3016","record_id":"<urn:uuid:ab31c59d-737f-47bd-9fee-85be0069609e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
A Tiny Expression Evaluator @TinyExe stands for "a Tiny Expression Evaluator". It is a small commandline utility that allows you to enter simple and more complex mathematical formulas which will be evaluated and calculated on the spot. Even though there are already a number of expression evaluators around on CodeProject and beyond, this particular project is meant mainly to demonstrate the possibilities of @TinyPG. @TinyPG is a parser generator used to create various types of languages and is described in another article here on CodeProject. This expression evaluator therefore is based on a typical parser/lexer based compiler theory. The implementation of the sementics is done in pure C# codebehind. Since @TinyPG also generates pure and clearly readable C#, consequently this expression evaluator is a set of fully contained C# source code, without requiring any external dependencies. It can therefore be easily used within your own projects. This project also contains the grammar file used to generate the scanner and the parser, so feel free to modify the grammar for your own needs. In this article, I will expain mainly: • Some of the features currently supported by this expression evaluator • How to use this evaluator engine within your own projects • How to extend the functionality of this evaluator as to adapt it to your own purposes Due to a lack of good example grammars and demos on how to use @TinyPG, I decided to build a demonstration project which shows how @TinyPG can be used for more advanced grammars such as expression evaluators. So why create an Expression Evaluator for the purpose of a demo? Because well, runtime Expression Evaluators are cool! Take Excel for example, it's the most widely used runtime Expression Evaluator used today. Wouldn't it be awesome to unleash some of that power of Excel inside your own applications? So, since a runtime expression evaluator may just come in handy, I thought this would make a nice demo project. Note that this is not just a demo though. This expression evaluator is fully functional and ready to execute! Even though @TinyPG comes with a small tutorial on how to write a simple expression evaluator, I decided to show that @TinyPG can be used to produce powerful LL(1)-grammars. This project also nicely demonstrates how the grammar and syntax can be cleanly separated from the semantics. The calculation rules are implemented separately from the parser and scanner. Using the Tool The functionality of the tool is based on the implementation as used in Excel. Currently the expression evaluator supports the following features: • It can parse mathematical expressions, including support for the most commonly used functions,e.g.: □ 4*(24/2-5)+14 □ cos(Pi/4)*sin(Pi/6)^2 □ 1-1/E^(0.5^2) □ min(5;2;9;10;42;35) • The following functions are supported: □ About Abs Acos And Asin Atan Atan2 Avg Ceiling Clear Cos Cosh Exp Fact Floor Format Help Hex If Floor Left Len Ln Log Lower Max Min Mid Min Not Or Pow Right Round Sign Sin Sinh Sqr Sqrt StDev Trunc Upper Val Var • Basic string functions: □ "Hello " & "world" □ "Pi = " & Pi □ Len("hello world") • Boolean operators: □ true != false □ 5 > 6 ? "hello" : "world" □ If(5 > 6;"hello";"world") • Function and variable declaration □ x := 42 □ f(x) := x^2 □ f(x) := sin(x) / cos(x) // declare new dynamic functions using built-in functions □ Pi □ E • Recursion and scope □ fac(n) := (n = 0) ? 
1 : fac(n-1)*n // fac calls itself with different parameters
□ f(x) = x*Y // x is in function scope, Y is global scope
• Helper functions
□ Help() - lists all built-in functions
□ About() - displays information about the utility
□ Clear() - clears the display
Basically when starting the tool, simply type the expression you want to calculate directly on the commandline. Use up and down buttons for autocompletion of previously entered expressions and formulas. Isn't this just so much easier than using the Windows calculator? Anyway, currently only 5 datatypes are supported: double, hexadecimal, int, string and boolean. Note that integers (and hexadecimals also) are always converted to doubles when used in a calculation by default. Use the int() function to convert to integer explicitly. The tool uses the following precedence rules for its operators:
1. ( ), f(x) Grouping, functions
2. ! ~ - + (most) unary operations
3. ^ Power to (Excel rule: that is a^b^c -> (a^b)^c)
3. * / % Multiplication, division, modulo
4. + - Addition and subtraction
4. & concatenation of strings
5. < <= > >= Comparisons: less-than, ...
6. = != <> Comparisons: equal and not equal
7. && Logical AND
8. || Logical OR
9. ?: Conditional expression
10 := Assignment
Embedding the Evaluator Engine
If you would like to embed this Tiny Expression Evaluator inside your own projects, there are only a few simple steps involved.
1. Copy the Evaluator folder including all classes inside it into your own C# project. In short, we have the following classes:
1. Context - The context holds all available declared functions and variables and the scope stack.
2. Expression - Wrapper class that holds and evaluates the expression.
3. Function - Defines the prototype for a function. A function must have a name, a pointer (delegate) to an actual implementation of a function and it must have the minimum and maximum allowed number of parameters set.
4. Functions - This class defines the list of default available functions. Feel free to add your own to the list.
5. Parser - The parser for the expression. This code is generated by TinyPG.
6. ParseTree - the resulting parse tree after parsing the expression. This code is generated by TinyPG.
7. ParseTreeEvaluator - This is a subclass of ParseTree and implements the core semantics of the operators. The code should be pretty easy to understand, since the methods of the class correspond directly with the defined grammar (see TinyExe.tpg).
8. Scanner - This is the scanner used by the parser to match against terminals inside the expression. This class is also generated by TinyPG.
9. Variables - This is currently implemented as a (case-sensitive) dictionary. A variable is simply a <name, value> pair.
2. Add the namespace (in this case TinyExe, but feel free to change it) to your classes.
3. Then, insert the following code to execute an evaluation:
string expr = "1*3/4"; // define the expression as a string

// create the Expression object providing the string
Expression exp = new Expression(expr);

// check for parse errors
if (exp.Errors.Count > 0)
{
    // report the parse errors here, then bail out
    return;
}

// No parse error, go ahead and evaluate the expression
// Note that Eval() always returns an object that can be of various types
// you will need to check for the type yourself
object val = exp.Eval();

// check for interpretation errors (e.g. is the function defined?)
if (exp.Errors.Count > 0)
{
    // report the interpretation errors here, then bail out
    return;
}

// nothing returned, nothing to evaluate or some kind of error occurred?
if (val == null)
    return;

// print the result to output as a string
Console.WriteLine(string.Format(CultureInfo.InvariantCulture, "{0}", val));

The code above handles any expression gracefully. But just to be absolutely sure, you might want to trap any exception in a try...catch statement.
Extending the Evaluator Engine
Basically, there are 2 kinds of extensions you can make:
1. Add your own built-in functions within the allowed syntax of the evaluator
2. Enhance or change the syntax, therefore changing the grammar
Adding a Static Function
The easiest way to add a new function is to open up the Functions class, and add your implementation in the InitDefaults() method. If you prefer to externalize your function, then you should add your function to the Context.Default.Functions. For example:
Context.Default.Functions.Add("myfunc", new StaticFunction("MyFunc", MyFunc, 2, 3));
where the MyFunc function is declared as:
private static object MyFunc(object[] parameters) { ... }
Parameters are passed as a list of objects. The number of objects will always be the same as specified in the declaration, in this case a minimum of 2 parameters and a maximum of 3. The function will need to check the number of parameters and check for the correct type being passed.
In a more advanced setting, e.g., if you need access to the Context object, or to other classes in your project, you can implement your own version of the Function class. You will need to create a subclass derived from the Function class and implement the Eval() method. Also, you will need to take care of the initialization of arguments, Parametersettings and handle the scope. As an example, take a look at the ClearFunction class.
Changing the Syntax
In order to change the syntax and add new features to the expression language, e.g. add support for extra datatypes (i.e., Date/Time, Money) or allow custom datatypes (i.e., structs), or maybe even more exotic: allow evaluation of JavaScript, you will need to have a fundamental understanding of parsers and compiler theory. Please have a look at the @TinyPG parser generator article, it explains the basics on how to create a parser for your language.
Just changing the syntax will not be sufficient. It's quite easy to change the grammar used to parse the input, but the semantics (code behind) will also need to be updated accordingly. Luckily the ParseTree that is generated is quite straightforward in use. Suppose for example that we would like to support a new rule, e.g. an IF-THEN-ELSE statement. We could add a new statement in the grammar file (see the included TinyExe.tpg):
IfThenElseStatement -> IF RelationalExpression THEN Expression (ELSE Expression)?;
When generating the code with @TinyPG for the Scanner, Parser and ParseTree, typically the ParseTree will now contain an additional method called:
protected virtual object EvalIfThenElseStatement(ParseTree tree, params object[] paramlist)
As you can see, the method is declared as virtual, meaning you can override this method in a subclass. This is exactly what I did in TinyExe. The ParseTreeEvaluator is a subclass of ParseTree and contains all necessary overrides. The main reason for putting this in a subclass, is that I can now change the grammar of the parser over and over again, and generate a new ParseTree, without the subclass being overwritten.
So what you need to do is override the function in the ParseTreeEvaluator class. You need to understand that this method is called just-in-time, while evaluating the parsetree.
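Putting the pieces together, an override along these lines would do the job. Note that this is only a sketch: the TokenType member names, the meaning of GetValue()'s last argument (taken here to select the n-th child of that type) and the Nodes.Count check are assumptions based on the grammar rule above rather than verbatim project code. The node layout and the GetValue() helper it relies on are explained in detail right below.

// Sketch of the override discussed here; see the notes above about which
// names are assumptions rather than generated code.
protected override object EvalIfThenElseStatement(ParseTree tree, params object[] paramlist)
{
    // Nodes[0] = IF, Nodes[1] = RelationalExpression, Nodes[2] = THEN,
    // Nodes[3] = Expression, and optionally Nodes[4] = ELSE, Nodes[5] = Expression
    object condition = this.GetValue(tree, TokenType.RelationalExpression, 0);

    if (!(condition is bool))
    {
        // raise an error here, in whatever way the rest of the evaluator reports type errors
        return null;
    }

    if ((bool)condition)
    {
        // condition holds: evaluate and return the THEN branch
        return this.GetValue(tree, TokenType.Expression, 0);
    }

    if (this.Nodes.Count > 5)
    {
        // an ELSE branch is present: evaluate and return it
        return this.GetValue(tree, TokenType.Expression, 1);
    }

    // no ELSE branch and the condition was false
    return null;
}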
At some point during parsing the input, the Parser created a new ParseNode of type IfThenElseStatement. During evaluation of this node, the corresponding EvalIfThenElseStatement (your overridden method!) is called. At the point of entry in this method, you need to understand that the current ParseNode (of type IfThenElseStatement) is actually this. Because the statement contains 6 parts (of which the last 2, the ELSE part, are optional), this will contain 4 or 6 Nodes:
1. this.Nodes[0] corresponds to the ParseNode of type IF
2. this.Nodes[1] corresponds to the RelationalExpression
3. this.Nodes[2] corresponds to the ParseNode of type THEN
4. this.Nodes[3] corresponds to the Expression Node
5. If this.Nodes[4] exists, it will correspond to the ELSE node
6. If this.Nodes[5] exists, it will correspond to the Expression Node.
So again, I hope this makes clear that the structure of the ParseTree is straightforward and can be quickly resolved back to the original grammar. Now, the nodes that are of real interest are Nodes[1], Nodes[3] and Nodes[5] of course. So first, we evaluate Nodes[1]. Because Nodes[1] is a non-terminal, it can contain a complete subtree. This subtree needs to be evaluated. To make this easier, you can make use of the helper function this.GetValue():
object result = this.GetValue(tree, TokenType.RelationalExpression, 0);
Note that we expect the result of the evaluation to be a boolean value (true or false), however we cannot be certain. So make sure to first check the type of the return value. If this turns out not to be boolean, raise an error. If result is true, then we can repeat the procedure and evaluate Nodes[3] and return this value. Otherwise we evaluate Nodes[5] (if it exists) and return that.
These are, in a nutshell, the 2 ways in which extensions are supported. If you have additional questions, just drop me a line.
Points of Interest
Apart from writing a fully functional-handy-comprehensive-easy-to-use-tiny-formula-calculation-utility that by far outperforms your default Windows calculator, I also hope that this project will serve as a good demonstration of how @TinyPG can be used in a real-world scenario. Of course, there are always new features that could be added, however for now I think this demonstration shows nicely how you can create a quite powerful language with some basic knowledge of grammars, parsers and of course a bit of C#. So that's it. If you have any ideas for new features, comments or remarks, please drop a note!
@TinyExe v1.0
Tiny Expression Evaluator Version 1.0 was released on the 16th of August 2011. This version includes the following features:
• Evaluation of mathematical functions and expressions
• Default built-in functions
• Runtime function and variable declarations
• Function-scoped and global variables
• Recursive function calls
• Multiple datatype support (double, int, hex, bool and string)
• Predefined constants Pi and E
• Boolean operators and assignments
{"url":"http://www.codeproject.com/Articles/241830/a-Tiny-Expression-Evaluator?msg=4002161","timestamp":"2014-04-18T09:41:28Z","content_type":null,"content_length":"142048","record_id":"<urn:uuid:d5033a9d-4ba6-497b-9297-5ff848af4e59>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
The descriptions below will briefly summarize what we did up to now and will give the plan for the upcoming lecture. The link for each lecture will also give more detailed (.ps or .pdf) lecture notes for the corresponding lecture. These lecture notes are heavily based on the Therefore, if you want to see what we will cover next, I recommend you consult the above notes. Sometimes, at the end of the summary I might suggest additional (optional) reading for each lecture. Make sure you also check the Handouts section for other handouts. • Lecture 1 (9/2/2008) (.pdf). Problem of secret communication. One-time pad and Shannon impossibility result. Modern Cryptography: computationally bounded adversaries. Private-Key vs. Public-Key Cryptography. In search of public-key solution: motivation for one-way functions and trapdoor permutations. Read: [Katz-Lindell, chap. 1,2], [Boneh-Shoup, sec 2.1,2.2] • Lecture 2 (9/9/2008) (.pdf). One-way functions and permutations. Collections of OWFs/OWPs. (Conjectured) examples of one-way functions: Integer Multiplication, Modular Exponentiation. Applications: UNIX password authentication, S/Key one-time password system. Cryptographic assumptions and proof by reduction. Formal security proof for S/Key. Read: [Katz-Lindell, sec 3.1, 6.1.1-6.1.2, 7.4.1] [Boneh-Shoup, 10.1-10.4]. • Lecture 3 (9/16/2008) (.pdf). Brish-up on number theory. Primes vs. composites, easy and hard problems. RSA, discrete log, factoring, square root extraction. Chinese remainder theorem. Examples. Read: see number-theoretic handouts in the Handouts section, [Katz-Lindell, sec. 7]. • Lecture 4 (9/23/2008) (.pdf). Trapdoor permutations. Toy PKE. Main problems and criticism of one-way functions: reveal partial information, not ``pseudorandom''. Motivation to hardcore bits. Examples: MSB for discrete log, LSB for squaring. Definition. General construction (Goldreich-Levin). [maybe: Relation to list-decodable codes.] Getting more bits out: construction based on hardcore bits of one-way permutations. Informal Applications to public- and secret-ket encryption. Definitions of pseudorandom generators. Definition of next-bit test. Read: [Katz-Lindell, 6.1.3, 6.3, 3.3, 3.4.1], [Boneh-Shoup, 21.2]. • Lecture 5 (9/30/2008) (.pdf). Proving the general construction satisfies the next-bit test. Showing that next-bit test implies all statistical tests. Computational Indistinguishability and its properties, hybrid argument and its importance. PRG Examples: Blum-Micali, Blum-Blum-Shub. Properties of PRG's (e.g., closure under composition). Equivalence to OWF's. Forward-Secure PRG's: generic construction is forward-secure, builing forward-secure PRG from any PRG, application to secret-key encryption. Read: [Katz-Lindell, 3.3, 3.4.1, 6.4], [Boneh-Shoup, 3.1, 3.4-3.5] • Lecture 6 (10/7/2008) .pdf). Public-Key encryption. Problems with TDP approach and deterministic encryption in general. Encrypting single bits, definition of indistinguishability. Scheme based on TDP's. Extending to many bits: PK-only definition. Blum-Goldwasser scheme and formal proof of security (using PRG's). Key Encapsulation mechanisms (KEMs). General one-bit => many bits construction. General indistinguishability under CPA attack. Read: [Katz-Lindell, 10.1, 10.2, 10.4, 10.7], [Boneh-Shoup, 10.1-10.4, 11.1-11.3] • Lecture 7 (10/14/2008) (.pdf). Semantic security and its equivalence to CPA indistinguishability. General GL transformation from OW to CPA secuirty. Diffie-Hellman Key Exchange. ElGamal Key Encapsulation and Encryption Scheme. 
CDH and DDH assumptions. Security of ElGamal under CHD and DDH. Other PK encryption schemes. Read: [Katz-Lindell, 3.2.2, 10.5, 10.7, 9.3, 9.4], [Boneh-Shoup, 10.5-10.6, 11.4-11.5] • Lecture 8 (10/21/2008) (.pdf). Symmetric-Key Encryption. One-time definition and scheme. CPA definition and closure under composition. Stateful schemes (stream ciphers) based on forward-secure PRGs. Towards stateless schemes: pseudorandom functions (PRFs). Definition, construction using PRGs, Naor-Reingold construction using DDH. Read: [Katz-Lindell, 3.2-3.4, 6.5], [Boneh-Shoup, 3.1-3.3, 4.4., 4.6, 5.1-5.3] • Lecture 9 (10/28/2008) (.pdf). Applications of PRFs: friend-and-foe, secret-key encryption. CTR and XOR schemes, their comparison. Pseudorandom permutations (PRPs). PRPs vs. PRFs. Luby-Rackoff construction using the Feistel network. Strong PRPs. Block ciphers and their modes of operation: ECB, CTR, CFB, OFB, XOR, CBC. Read: [Katz-Lindell, 3.5-3.6, 6.6], [Boneh-Shoup, 4.1, 4.3, 4.5, 5.2-5.4] • Lecture 10 (11/04/2008) (.pdf). CPA-security of CFB,OFB,CBC modes. Exact security and its importance. Practical ciphers: DES, AES. Integration of symmetric and asymmetric encryption schemes. The problem of authentication. Message authentication codes (MACs). Definition of security: existential unforgeability against chosen message attack. Construction using PRFs. Unpredictable functions, their relation to MACs and PRFs. Reducing MAC length: using e-universal hash functions (e-UHFs). Read: [Katz-Lindell, 3.6, 4.1-4.4], [Boneh-Shoup, 5.4, 6.1-6.2, 7.1, 7.3] • Lecture 11 (11/11/2008) (.pdf). Examples of UHFs: information-theoretic examples, XOR-MAC, CBC-MAC, HMAC. Another XOR-MAC and e-xor-universal hash functions. Variable-length messages. CCA security for symmetric encryption. Example PRP(m|r). Authenticated encryption. AE implies CCA security. Encrypt-then-MAC method. More advanced methods. Read: [Katz-Lindell, 4.5, 4.7-4.9, 3.7], [Boneh-Shoup, 7.2-7.5, 6.3-6.6, 6.10, 8.7, 9.1-9.3] • Lecture 12 (11/18/2008) (.pdf). Collision-Resistant Hash Functions (CRHFs). Merkle-Damgard domain extender. Merkle trees and applications. Davied-Meyer's construction in the Ideal-Cipher Model. Claw-Free Permutation (CFP) Pairs. Examples from RSA, discrete log and squaring. CRHFs from CFPs. CRHFs imply OWFs but not conversely. Optimizing the construction using Discrete Log (homework for RSA). Digital signatures. Definition: attack (CMA), goal (existential unforgeability). Hash-then-sign paradigm. Read: [Katz-Lindell, 4.6, 12.1-12.4], [Boneh-Shoup, 8.1-8.5, 10.8, 13.1-13.2] • Lecture 13 (11/25/2008) (.pdf). Moving to public-key: digital signatures. Definitions, RSA and Rabin's signatures. Trapdoor approach and its deficiency. Signature paradox and its resolution. Towards better signatures: one-time signatures. Lamport Scheme. Merkle signatures. Naor-Yung construction. Random oracle model and practical hash-then-sign signatures: full domain hash. Practical signature without random oracles: Cramer-Shoup scheme (mention). Read: [Katz-Lindell, 12.5-12.6, 12.8, 13.1, 13.3], [Boneh-Shoup, 13.3-13.4, 14.1-14.5] • Lecture 14 (12/2/2008) (.pdf). Commitment Schemes. Definition and properties. Increasing input size: bit-by-bit composition, hash-then-commit technique using CRHF's. Constructions: from (1) OWF's, (2) OWP's, (3) CRHF's and (4) Pedersen commitment (based on DL). Relaxed commitments and composition using UOWHF's. Applications: bidding, coin-flipping, parallel authenticated encryption, password authentication, zero-knowledge. 
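The syllabus above only names the constructions it covers. As one concrete, deliberately toy-sized illustration — the Diffie-Hellman key exchange from Lecture 7 — here is a sketch in Python. The parameters p = 23, g = 5 are the usual textbook example and are far too small to be secure; real deployments use large standardized groups.

```python
import secrets

# Toy public parameters (textbook example; NOT secure).
p = 23   # small prime modulus
g = 5    # generator

def keypair():
    x = secrets.randbelow(p - 2) + 1      # private exponent in [1, p-2]
    return x, pow(g, x, p)                # (private, public = g^x mod p)

a_priv, a_pub = keypair()                 # Alice
b_priv, b_pub = keypair()                 # Bob

# Each party combines its own private key with the other's public key.
shared_a = pow(b_pub, a_priv, p)          # (g^b)^a mod p
shared_b = pow(a_pub, b_priv, p)          # (g^a)^b mod p
assert shared_a == shared_b               # both sides now hold the same secret
print(shared_a)
```

The security discussion in Lecture 7 (CDH/DDH) is exactly about why an eavesdropper who sees only g^a and g^b cannot feasibly compute the shared value when the group is large.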
Last modified: January 25, 2012
{"url":"http://cs.nyu.edu/courses/fall08/G22.3210-001/syllabus.html","timestamp":"2014-04-21T02:40:14Z","content_type":null,"content_length":"12494","record_id":"<urn:uuid:f6f6bbce-efd6-4dba-80b9-8b613462f7d0>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
3 x 3 matrix eigenvectors
[-za, -2za - bz, z] = z*[-a, -2a - b, 1]. Remember eigenvectors are only defined up to a multiplicative constant.
OK, I see that now, but for s = +/-sqrt(2) I'm finding it even harder to do it in this way. It seems very lengthy, and I'm wondering if there is an easier way to do it for this case, or even if the result I am getting is right.
When I use the eigenvalue of sqrt(2) I again try to write it all in terms of z, because I thought this looked easiest as one of them is already written in terms of z. So from substitution I get
x(1 - sqrt(2)) + y + az = 0 (1)
x + y(1 - sqrt(2)) + bz = 0 (2)
z(1 - sqrt(2)) = 0 (3)
Originally I did think that this meant z = 0, but then I substituted that into (1) and (2) and ended up getting two different equations for x and y, e.g. x = -y(1-sqrt(2)) and x = -y/(sqrt(2)). So instead I got x from (1), substituted it back into (1), then rearranged to get y in terms of z. Using y I got x in terms of z from (1):
x = bz((-1 - sqrt(2))/2)
y = -bz/2
Substituting all into (1), (2) and (3) in terms of z I get:
az = 0
0 = 0
z(1 - sqrt(2)) = 0
so I get an eigenvector of z*(a, 0, 1 - sqrt(2)). Is there a way I can check this is right? Many thanks
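The original matrix isn't reproduced in this excerpt, so the quickest sanity check is numerical: pick concrete values for a and b, build the matrix, and test whether M·v really equals λ·v for the candidate vector. A generic helper is sketched below; the example matrix and the values of a and b are placeholders, not taken from the thread.

```python
import numpy as np

def is_eigenvector(M, v, lam, tol=1e-9):
    """True if v is nonzero and M @ v equals lam * v (numerically)."""
    v = np.asarray(v, dtype=float)
    if np.allclose(v, 0):
        return False                      # the zero vector never counts
    return np.allclose(M @ v, lam * v, atol=tol)

# Placeholder values -- substitute the actual matrix and a, b from the problem.
a, b = 1.0, 2.0
M = np.array([[1.0, 1.0, a],
              [1.0, 1.0, b],
              [0.0, 0.0, 1.0]])           # hypothetical matrix for illustration only

lam = np.sqrt(2)
v = np.array([a, 0.0, 1.0 - np.sqrt(2)])  # the candidate eigenvector from the post
print(is_eigenvector(M, v, lam))
print(np.linalg.eig(M))                   # all eigenvalues/eigenvectors, for comparison
```

np.linalg.eig returns every eigenvalue together with unit-length eigenvectors, so any hand computation can be compared against it directly (remembering that eigenvectors are only determined up to scale).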
{"url":"http://www.physicsforums.com/showthread.php?t=195772","timestamp":"2014-04-20T05:49:01Z","content_type":null,"content_length":"55773","record_id":"<urn:uuid:6ef84769-6dd6-4a24-8bcf-1ae05d7c337f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
John von Neumann Summary This section contains 670 words (approx. 3 pages at 300 words per page) World of Invention on John von Neumann John von Neumann, who was born in Budapest, Hungary, in 1903, was primarily a mathematician, and wrote numerous papers on both pure and applied math. He also made important contributions to a number of other fields of inquiry, including quantum physics, economics and computer science. Von Neumann studied mathematics, physics and chemistry at German and Swiss universities for several years, finally receiving a Ph.D. in mathematics from the University of Budapest in 1926. He taught at Berlin and Hamburg from 1927 to 1930 and then emigrated to the United States to join the faculty at Princeton University. Three years later he took a position at the Institute of Advanced Studies at Princeton. Until the outbreak of World War II, Von Neumann mostly did work in pure math, making important contributions to the fields of mathematical logic, set theory and operator theory. However, his work in operator theory had powerful applications... This section contains 670 words (approx. 3 pages at 300 words per page)
{"url":"http://www.bookrags.com/biography/john-von-neumann-woi/","timestamp":"2014-04-17T09:42:33Z","content_type":null,"content_length":"32227","record_id":"<urn:uuid:22bdfca7-8671-4ffd-80ed-f7466c1e34ac>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
Detrended correspondence analysis: An improved ordination technique

Detrended correspondence analysis (DCA) is an improvement upon the reciprocal averaging (RA) ordination technique. RA has two main faults: the second axis is often an ‘arch’ or ‘horseshoe’ distortion of the first axis, and distances in the ordination space do not have a consistent meaning in terms of compositional change (in particular, distances at the ends of the first RA axis are compressed relative to the middle). DCA corrects these two faults. Tests with simulated and field data show DCA superior to RA and to nonmetric multidimensional scaling in giving clear, interpretable results. DCA has several advantages. (a) Its performance is the best of the ordination techniques tested, and both species and sample ordinations are produced simultaneously. (b) The axes are scaled in standard deviation units with a definite meaning. (c) As implemented in a FORTRAN program called DECORANA, computing time rises only linearly with the amount of data analyzed, and only positive entries in the data matrix are stored in memory, so very large data sets present no difficulty. However, DCA has limitations, making it best to remove extreme outliers and discontinuities prior to analysis. DCA consistently gives the most interpretable ordination results, but as always the interpretation of results remains a matter of ecological insight and is improved by field experience and by integration of supplementary environmental data for the vegetation sample sites.

This research was supported by the Institute of Terrestrial Ecology, Bangor, Wales, and by a grant from the National Science Foundation to R.H. Whittaker. We thank R.H. Whittaker for encouragement and comments, S.B. Singer for assistance with the Cornell computer, and H.J.B. Birks, S.R. Sabo, T.C.E. Wells, and R.H. Whittaker for data sets used for ordination tests.
Keywords: Correspondence analysis · Multivariate technique · Nonmetric multidimensional scaling · Ordination · Reciprocal averaging
Author affiliations: 1. Section of Ecology and Systematics, Cornell University, Ithaca, New York 14850, USA
Publisher: Kluwer Academic Publishers
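The abstract above contrasts DCA with its predecessor, reciprocal averaging (RA). Purely as an illustration of what the underlying RA step does — this is not DECORANA, and DCA's detrending-by-segments and rescaling are not implemented here — below is a rough sketch of the two-way weighted-averaging iteration on a small made-up abundance matrix:

```python
import numpy as np

def reciprocal_averaging(Y, n_iter=200, seed=1):
    """First RA/CA axis by iterated two-way weighted averaging.
    Y: abundance matrix, rows = samples, columns = species."""
    Y = np.asarray(Y, dtype=float)
    row_tot, col_tot = Y.sum(axis=1), Y.sum(axis=0)
    x = np.random.default_rng(seed).normal(size=Y.shape[0])   # arbitrary starting sample scores
    for _ in range(n_iter):
        u = (Y.T @ x) / col_tot                   # species scores: weighted averages of sample scores
        x = (Y @ u) / row_tot                     # sample scores: weighted averages of species scores
        x = x - np.average(x, weights=row_tot)    # remove the trivial constant solution
        x = x / np.sqrt(np.average(x**2, weights=row_tot))    # re-standardize each pass
    return x, u

# Tiny made-up abundance table, just to exercise the function.
Y = np.array([[5, 3, 0, 0],
              [2, 4, 1, 0],
              [0, 3, 4, 1],
              [0, 0, 2, 6]])
sample_scores, species_scores = reciprocal_averaging(Y)
print(sample_scores)
print(species_scores)
```

Running the same iteration for a second axis (orthogonal to the first) on data from a single long gradient is an easy way to see the ‘arch’ distortion that the paper's detrending is designed to remove.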
{"url":"http://link.springer.com/article/10.1007%2FBF00048870","timestamp":"2014-04-16T17:30:12Z","content_type":null,"content_length":"49056","record_id":"<urn:uuid:458ed99d-3d69-40d8-b7d7-084c9cf8bd63>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help
Posted by Phillip on Sunday, April 24, 2011 at 5:50pm.

A 32.5 g cube of aluminum initially at 45.8 °C is submerged into 105.3 g of water at 15.4 °C. What is the final temperature of both substances at thermal equilibrium? (Assume the Al and the H2O are thermally isolated from everything else.)
Specific heat capacity of Al = 0.903
Specific heat capacity of H2O = 4.18
I understand this problem, but in my math I get to:
(ΔT) Al = -14.998 × (ΔT) H2O
and I'm lost. Anyone able to explain the completion of the problem? Any help is greatly appreciated.

• Chemistry - DrBob222, Sunday, April 24, 2011 at 6:15pm
You need to use Tfinal - Tinitial for ΔT.
heat lost by Al + heat gained by water = 0
[mass Al x specific heat Al x (Tfinal - Tinitial)] + [mass H2O x specific heat H2O x (Tfinal - Tinitial)] = 0
Substitute and solve for Tf.

• Chemistry - Phillip, Sunday, April 24, 2011 at 6:23pm
I'm having trouble once I get to T(final) = -14.998 × T(final) + 14.998 × T(initial H2O) + T(initial Al). Any suggestions?

• Chemistry - DrBob222, Sunday, April 24, 2011 at 6:45pm
Yes. I think it is tough to try to manipulate the algebra. It is much easier to substitute the numbers first and manipulate them.
[32.5 x 0.903 x (Tf - 45.8)] + [105.3 x 4.18 x (Tf - 15.4)] = 0
29.35Tf - 1344.1 + 440.15Tf - 6778.4 = 0
Check those numbers to make sure I didn't make an error on my calculator, then solve for Tf.

• Chemistry - Phillip, Sunday, April 24, 2011 at 7:05pm
Thank you so much DrBob! I ended up with
469.5Tf = 8122.54
Tf = 17.3
Thanks again,
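For anyone who wants to double-check the arithmetic in the thread, the heat-balance equation can be solved for Tf directly (values taken from the problem statement above):

```python
# Heat lost by Al + heat gained by water = 0, solved for the final temperature Tf.
m_al, c_al, T_al = 32.5, 0.903, 45.8    # g, J/(g·°C), °C
m_w,  c_w,  T_w  = 105.3, 4.18, 15.4    # g, J/(g·°C), °C

Tf = (m_al * c_al * T_al + m_w * c_w * T_w) / (m_al * c_al + m_w * c_w)
print(round(Tf, 2))   # ≈ 17.3 °C, matching the answer reached in the thread
```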
{"url":"http://www.jiskha.com/display.cgi?id=1303681856","timestamp":"2014-04-16T14:04:13Z","content_type":null,"content_length":"9894","record_id":"<urn:uuid:3e7c971f-658e-40d9-afdc-3224bfd8fcd4>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Peter Rohde Full text here. Boson-sampling is a highly simplified, but non-universal, approach to implementing optical quantum computation. It was shown by Aaronson & Arkhipov that this protocol cannot be efficiently classically simulated unless the polynomial hierarchy collapses, which would be a shocking result in computational complexity theory. Based on this, numerous authors have made the claim that experimental boson-sampling would provide evidence against, or disprove, the Extended Church-Turing thesis — that any physically realisable system can be efficiently simulated on a Turing machine. We argue against this claim on the basis that, under a general, physically realistic independent error model, boson-sampling does not implement a provably hard computational problem in the asymptotic limit of large systems. Related posts: 1. New paper: Sampling generalized cat states with linear optics is probably hard Full paper here. Boson-sampling has been presented as a simplified... 2. New paper: Boson sampling with photon-added coherent states Full paper here. Boson sampling is a simple and experimentally... 3. New paper: Scalable boson-sampling with time-bin encoding using a loop-based architecture Full text here. We present an architecture for arbitrarily scalable... Related posts brought to you by Yet Another Related Posts Plugin.
{"url":"http://www.peterrohde.org/feed/atom/","timestamp":"2014-04-19T01:50:43Z","content_type":null,"content_length":"47649","record_id":"<urn:uuid:25ed428e-34ea-430d-b232-6c7d753a1ee2>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Playing with Infinity Most mathematicians, if they have heard of Rózsa Péter at all, know her as the author of Playing with Infinity, an acknowledged classic of mathematical popularization. However, her many other mathematical contributions deserve to be better known. During the 1930s she helped establish the field of recursion/computability theory, publishing even before Turing and Kleene. (For a detailed account, see the recent book by Rod Adams.) Her work quickly achieved worldwide recognition, and in 1937 she joined the editorial board of The Journal of Symbolic Logic. Her colleagues on the board included such luminaries as Kleene, Church, Tarski, Bernays, Quine, and Mac Lane. The year 1951 marked the appearance of Péter’s Recursive Functions, the first monograph in that area. Many of her later publications investigated applications of recursion theory, particularly to linguistics and to computer programming languages. Péter’s activity in research was matched by that in mathematics education. She taught for several years at the Budapest Teachers Training College and was most disappointed when it shut down and she had to relocate to Loránd Eötvös University, a move that many other mathematicians would have considered a step up. Indeed, for many years Péter taught in middle and secondary schools. With Tibor Gallai she wrote a couple of secondary school textbooks. Péter’s teaching exerted a huge influence on Peter Lax, whom she tutored for a few years until his family emigrated to the United States, when he was fifteen. She also devoted much effort to bringing women into mathematics. (Erdős once described her as “an immoderate feminist”!) Besides her mathematical accomplishments, Péter made an impact in the realm of Hungarian literature as well. She wrote and translated poetry, and her translations of Rilke, in particular, were highly acclaimed. Her Nachlass contains a Hungarian version of Brecht’s lyrics for the Barbara-Song from The Threepenny Opera, rendered so seamlessly as to make one forget that the Magyar language does not even belong to the Indo-European family. She also moonlighted as a film critic. All of these facets of Rózsa Péter contributed to Playing with Infinity. The book arose because of Marcell Benedek, a friend of Péter’s from Budapest literary circles. He regretted his lack of background in mathematics, so Péter wrote him a series of letters trying to convey the essence of some mathematical ideas. Benedek then suggested that the letters could form the basis for a book. As Péter stated in the Preface, “This book is written for intellectually minded people who are not mathematicians. … I have received a great deal from the arts and I would now like in my turn to present mathematics and let everyone see that mathematics and the arts are not so different from each other. I love mathematics not only for its technical applications, but principally because it is beautiful.” The aesthetic side of mathematics was a recurring theme for Péter, as was its unity. Often an idea or technique that Playing with Infinity introduces in one context unexpectedly (at least, to the lay reader) reappears later in a different setting, conveying effectively the cohesive whole that mathematics forms. By such devices, by the images and examples she used to put across the concepts, and explicitly by her recounting of classroom incidents, Péter the teacher is well in evidence throughout the book. But one can also discern Péter the cutting-edge logician. 
The book concludes with what must have been one of the first (and, for my money, is still one of the best) presentations of Gödel incompleteness for the general public. Playing with Infinity treats many of the same topics as another classic popularization, Kasner and Newman’s Mathematics and the Imagination, both books dating from around seventy years ago. Modern readers might find Péter’s book a bit old-fashioned. It certainly predates fractals, public-key cryptography, and internet search engines, to name a few staples (clichés?) of much current exposition. The fifty-year-old English translation, not totally idiomatic and including references to shillings and half-crown pieces, adds a further touch of quaintness. But Péter’s love for mathematics and desire to share its beauty still shine through timelessly. Leon Harkleroad admits to some bias here, having done historical research on and translated many works of Rózsa Péter.
{"url":"http://www.maa.org/publications/maa-reviews/playing-with-infinity","timestamp":"2014-04-18T19:17:25Z","content_type":null,"content_length":"102552","record_id":"<urn:uuid:5f8c9dfe-de8d-4791-a37e-232903f3a876>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Permutations & Combination Problems - Need Help
January 27th 2009, 02:57 PM #1
Sep 2008
Permutations & Combination Problems - Need Help
Jason bought a combination lock. There are 25 numbers on the lock. Jason must use four numbers to create the combination. How many different ways could the numbers be arranged to create the four-number combination? (Frustrated, please help.)
Assuming that the order in which the numbers are placed is important (after all, this is a combination lock) and that he can repeat the numbers, then there are 25^4 possibilities (25 choices for each of the four positions).
January 27th 2009, 05:21 PM #2
Junior Member
Nov 2008
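To make the counting concrete — and to show how the answer changes under the stricter reading where a number may not be repeated, which the reply above does not cover — here is a quick check:

```python
from math import perm

with_repeats = 25 ** 4        # order matters, numbers may repeat
no_repeats   = perm(25, 4)    # order matters, all four numbers different: 25*24*23*22

print(with_repeats)   # 390625
print(no_repeats)     # 303600
```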
{"url":"http://mathhelpforum.com/statistics/70224-permutations-combination-problems-need-help.html","timestamp":"2014-04-17T21:38:07Z","content_type":null,"content_length":"31315","record_id":"<urn:uuid:f1679772-f821-43ab-8dff-2c5a155e7bd6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Rumford, RI Algebra Tutor Find a Rumford, RI Algebra Tutor I am a rising senior at Brown University, majoring in Cell and Molecular Biology. I have taken numerous STEM courses, and am very capable with science, and math. In addition to helping with general subjects/AP courses, I also can help high school students prepare for the ACT, SAT, and SAT Subject tests. 39 Subjects: including algebra 1, algebra 2, English, reading ...Along with my certifications, I am trained as a Montessori teacher, and use this as my primary teaching method. Manipulative materials and games make learning challenging and fun. My students do not feel that it's just another day in the classroom. 16 Subjects: including algebra 1, reading, ESL/ESOL, GED ...My references will gladly provide details about their own experiences. I have a master's degree in computer engineering and run my own data analysis company. Before starting that company, I developed software for large and small companies and was most recently the IT director at a large accounting firm. 11 Subjects: including algebra 1, algebra 2, geometry, precalculus Coming from a household where both my parents are teachers, I know and understand the importance of a good education and the benefits of hard work. From my hard work in high school, I received the Dean's Scholarship from the University of New Hampshire from which I recently graduated from in 3.5 ye... 18 Subjects: including algebra 2, vocabulary, European history, English ...I would like to tutor English and physiology. I enjoy science and want to help you to enjoy it as I do. I want to help students develop a good foundation in Science and enjoy learning science, so I became a tutor. 33 Subjects: including algebra 1, algebra 2, writing, reading Related Rumford, RI Tutors Rumford, RI Accounting Tutors Rumford, RI ACT Tutors Rumford, RI Algebra Tutors Rumford, RI Algebra 2 Tutors Rumford, RI Calculus Tutors Rumford, RI Geometry Tutors Rumford, RI Math Tutors Rumford, RI Prealgebra Tutors Rumford, RI Precalculus Tutors Rumford, RI SAT Tutors Rumford, RI SAT Math Tutors Rumford, RI Science Tutors Rumford, RI Statistics Tutors Rumford, RI Trigonometry Tutors
{"url":"http://www.purplemath.com/rumford_ri_algebra_tutors.php","timestamp":"2014-04-20T11:08:12Z","content_type":null,"content_length":"23846","record_id":"<urn:uuid:350dbb0f-e621-46ed-9011-f70bcb29d149>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Topics: Back to homepage, wavelet transform, discrete wavelet transform, continuous wavelet transform, wavelet denoising, wavelet compression

Wavelet denoising

The discrete wavelet transform can be used for easy and fast denoising of a noisy signal. If we keep only a limited number of the highest coefficients of the discrete wavelet transform spectrum and perform an inverse transform (with the same wavelet basis), we obtain a more or less denoised signal. There are several ways to choose the coefficients that will be kept. Here, only the two simplest methods were tried - hard and soft thresholding. In Fig. 1 one can see a simple signal (a sine with linearly increasing frequency) that was used for most of the demonstrations. In the next examples, noise was added to this signal and the denoising procedure was used. For this denoising, approximately 4 % of the wavelet coefficients were used for the reconstruction of the original signal.

Original signal

Example 1
Uniform noise was added to the 1024-point signal.
Signal with uniform noise (noise range (-0.5; 0.5))
Signal denoised by means of DWT

Example 2
Uniform noise was added to the 65536-point signal.
Signal with uniform noise (noise range (-0.5; 0.5))
Signal denoised by means of DWT

Example 3
Gaussian noise with sigma=0.2 was added to the 1024-point signal.
Signal with Gaussian noise (sigma=0.2)
Signal denoised by means of DWT

Example 4
Gaussian noise with sigma=0.5 was added to the 1024-point signal.
Signal with Gaussian noise (sigma=0.5)
Signal denoised by means of DWT

Example 5
Gaussian noise with sigma=1 was added to the 1024-point signal.
Signal with Gaussian noise (sigma=1)
Signal denoised by means of DWT

Wavelet type influence

As is known from the theory of the discrete wavelet transform, the choice of a proper wavelet scaling function is always the most important thing. Generally, for denoising the wavelet scaling function should have properties similar to the original signal (continuity, continuity in derivatives, etc.). Here, the effects of using two different wavelets were compared: the Daubechies 8 and the Daubechies 20 wavelet. Note that for both wavelets the denoising procedure gave quite satisfying results. If we used, for example, the Haar wavelet (box scaling function) for our signal constructed from sine functions, the result would be very poor. As a signal we used a sine function which was, in some parts of the signal, multiplied by a constant.

Original signal

We then added Gaussian noise to this function in a similar way as in the previous examples.

Example 6
Here the denoised signals are plotted for the two basis wavelets. The original signal is plotted as well (red line).
Denoising using the Daubechies 8 scaling function
Denoising using the Daubechies 20 scaling function

We can see that in both cases the signal was reconstructed in a satisfactory way. A comparison of the differences between these signals and the original one is given below.
Square deviation of the noisy and denoised signals with respect to the original one: the yellow lines give the noisy result, i.e. the noise. The other two lines are the denoised signals. Only part of the whole signal is plotted so that the differences can be seen better. The large suppression of the noise in the denoised data is apparent. The Daubechies 20 wavelet scaling function gives better results here.
Also, the standard deviation of the denoised signal with respect to the original one is, for this (Daubechies 20) wavelet, approximately 30 % lower than for the Daubechies 8 wavelet.

Real data examples

Created by Petr Klapetek, February 2002
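The page does not include its code, but the idea it describes — decompose, keep only a small fraction of the largest coefficients, reconstruct — fits in a few lines using the PyWavelets package (assuming it is installed; the signal, noise level and 4 % fraction below are illustrative stand-ins, not the exact settings behind the figures above):

```python
import numpy as np
import pywt  # PyWavelets

# A test signal similar to the one described above: a sine with increasing frequency.
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * (5 + 20 * t) * t)
noisy = clean + np.random.default_rng(0).normal(scale=0.3, size=t.size)

# Full decomposition with a Daubechies wavelet ('db20' would be the other choice discussed).
coeffs = pywt.wavedec(noisy, 'db8')

# Hard thresholding by magnitude: keep roughly the largest 4 % of all coefficients.
flat = np.concatenate([np.abs(c) for c in coeffs])
cutoff = np.quantile(flat, 0.96)
kept = [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]

denoised = pywt.waverec(kept, 'db8')[:t.size]
print("rms error, noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("rms error, denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```

Swapping 'db8' for 'db20' (or 'haar') makes it easy to reproduce the wavelet-type comparison discussed in the text.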
{"url":"http://klapetek.cz/wdenoise.html","timestamp":"2014-04-19T17:01:32Z","content_type":null,"content_length":"6450","record_id":"<urn:uuid:a32152fe-7bc9-44f5-8f1b-7a7c72ca7378>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Haverford ACT Tutor Find a Haverford ACT Tutor ...My first career was in business as a Vice President, consultant, and trainer. During this time, I taught Business Management at the University of Wisconsin and Chestnut Hill College. I hold BS, MS, and MBA degrees. 35 Subjects: including ACT Math, chemistry, English, physics ...I am also familiar with MAPS tests and ASK (soon to be PARCC) testing. Depending on the level of math and the personality of each student, I teach using many different teaching styles and math programs such as Everyday Math, Connected Math, PMI and traditional text books. As a mother of three, I know what it is like to place your trust in someone to care for your child. 12 Subjects: including ACT Math, geometry, algebra 1, algebra 2 ...Here are some testimonials from some of my students and their parents: "Jonathan was able to work with my son and decode what he needed to know to put him on par with the other students in his class" R.B (Mother of a 5th grader) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~... 22 Subjects: including ACT Math, calculus, writing, algebra 2 ...I have a degree in mathematics and a masters in education, so I have the technical and instructional skills to help any student. I have been teaching math at a top rated high school for the last 10 years and my students are always among the top performers in the school. My goal is to provide students with the skills, organization, and confidence to become independent mathematics 15 Subjects: including ACT Math, calculus, geometry, algebra 1 ...I have a college degree in mathematics. I have successfully passed the GRE's (to get into graduate school) as well as the Praxis II content knowledge test for mathematics. Therefore, I am qualified to tutor students in SAT Math. 16 Subjects: including ACT Math, English, calculus, physics
{"url":"http://www.purplemath.com/haverford_act_tutors.php","timestamp":"2014-04-18T03:52:41Z","content_type":null,"content_length":"23714","record_id":"<urn:uuid:82efb81f-b6c2-4d1d-9931-6ae43441d6c9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: \[\LARGE \color{green}{\star}\text{Happy 13th birthday Parth} \color{red}{\star}\] We hope you have a great day and celebrate well!! • one year ago • one year ago Best Response You've already chosen the best response. That didn't go as planned... It was supposed to be pretty Latex :D xD Apoorvk sends his biggest wishes and hopes you have a really, really fun time! Welcome to the Teenage years! :D Best Response You've already chosen the best response. \[\ \text \ happy \ 13th \ birthday\ Parth! \] Best Response You've already chosen the best response. \[\LARGE \color{green}{\star}\text{Happy 13th birthday Parth} \color{red}{\star}\] Best Response You've already chosen the best response. Show off ^ Best Response You've already chosen the best response. Best Response You've already chosen the best response. Happy 556th PARTH! :D Best Response You've already chosen the best response. Happy birthday Parth from all the admins at OpenStudy :) Best Response You've already chosen the best response. Who changed meh post? Best Response You've already chosen the best response. Best Response You've already chosen the best response. want it back the old way? Best Response You've already chosen the best response. Guilty party haha is there a way to make it such that Zepp can have credit for creating it then? He made that. lol Best Response You've already chosen the best response. Who copied my \(\LaTeX\)? :P Best Response You've already chosen the best response. Turing did it XD Best Response You've already chosen the best response. Happy Bday! Best Response You've already chosen the best response. Thanks :) All credit for the birthday sign go to @Zepp :) lol zepp do you wanna delete our unnecessary posts? Best Response You've already chosen the best response. happy birthday ,you're a man now Best Response You've already chosen the best response. Best Response You've already chosen the best response. \[\Large \mathcal {HaPpY~BiRtHdAy~PaRtH } =)\] Best Response You've already chosen the best response. hah finally you're not a 12 year old "genius" >:)))) lol just kidding. happy birthday from all the lgbaists Best Response You've already chosen the best response. HAAAAPPPPPYYYYY BBBIIIIRRRTTHHDDDAAAYYYYYYY!!!!! Sorry i dont know how to do the latex:D Best Response You've already chosen the best response. Woah! The best birthday ever? Fo' sho'! Best Response You've already chosen the best response. BECAUSE OF ME! :D Best Response You've already chosen the best response. And I don't care as far as the LaTeX is concerned. Best Response You've already chosen the best response. Yeah, @Pac1f1cIslander :) Best Response You've already chosen the best response. :D All grown up haha :D And seriously don't forget Apoorvk. He really wanted it to be special for you and sent his best wishes and hopes you have an incredible day. And yes, I apologize about my lack of latex abilities. Best Response You've already chosen the best response. I knew it! xD Best Response You've already chosen the best response. Yeah.. I talked to apoorv last night. :) Best Response You've already chosen the best response. Yes. Apoorvk had sent mee all his regards for u! So.... HAAPPPYYY BBIIRRTTHHDDAYY!! -Apoorvk Best Response You've already chosen the best response. Woh toh sahi hai, lekin KC aapko school nahi jaana? 
Best Response You've already chosen the best response. \[\textbf{marvel at the } \mathbf{\LaTeX}\]@rebeccaskell94 Best Response You've already chosen the best response. Parth meri abhi chootiya chal rahi hai! Best Response You've already chosen the best response. I always do LG xD Best Response You've already chosen the best response. \( \Huge \color{Maroon}{\mathtt{\text{I can live without} \LaTeX}} \) Best Response You've already chosen the best response. lol, BTW. I am wonderin when apoorvk will be back.:( miss him already Best Response You've already chosen the best response. Of course :/ Best Response You've already chosen the best response. He have an email he ever reply to? Best Response You've already chosen the best response. \[\Huge \color{midnightblue}{\ddot \smile}\] Best Response You've already chosen the best response. I have an idea!! Brb Best Response You've already chosen the best response. I have an imposter in chat? Best Response You've already chosen the best response. You still won't be able to implement it in the next 9 hours, because I am going to school in a few. Best Response You've already chosen the best response. \[\Huge \stackrel{\LaTeX}{\stackrel{\LaTeX}{\stackrel{\LaTeX}{\LaTeX}}}\] Best Response You've already chosen the best response. psh show off Best Response You've already chosen the best response. LaTeX LaTeX LaTeX LaTeX Why LaTeXify things that you can type out right on the keyboard? Best Response You've already chosen the best response. what code? Best Response You've already chosen the best response. \[\Huge \stackrel{\text{mada}}{\stackrel{\mathsf{mada}}{\mathbb{dane}}}\] Best Response You've already chosen the best response. \[\huge \begin{array}{cccc} \mathsf{you} \\ \mathcal{don't} \\ \mathbf{think} \\ \mathbb{fourth-dimensionally} \end{array}\] Best Response You've already chosen the best response. Darn. I was trying to find an old Latex Apoorvk did for someone's birthday and then I was going to put it over here so it would be lovely. But I couldn't find it. Sorry, Parth. Happy Birthday, either way. :) Hope you have fun joining the rest of us in teenage-hood :D Best Response You've already chosen the best response. \[\begin{array}{} \mathtt{I} \\ \mathbf{Can} \\ \mathbf{Do} \\ That \\ Too! \end{array} \] Best Response You've already chosen the best response. \[\huge \begin{array}{cccc} \color{blue}{\mathbf{don't}} \\ \color{red}{\mathtt{do}} \\\color{gold}{\mathbb{this}} \\ \color{green}{\mathsf{Kohli}} \end {array}\] Best Response You've already chosen the best response. dont challenge me...it'll only end badly Best Response You've already chosen the best response. \\[\text this\ is\ all \ I \ can\ do \] Best Response You've already chosen the best response. \[ \begin{array}{} \text{Too busy to start a }\LaTeX \textbf{ fight}. \end{array} \] Best Response You've already chosen the best response. \[\LARGE \mathtt{\color{green}{g} \color{red}{o} \color{darkblue}{o} \color{orange}{d}}\] Best Response You've already chosen the best response. sheesh..im out of touch with my artistic side...these color combos suck Best Response You've already chosen the best response. Best Response You've already chosen the best response. Happy birthday party:D Best Response You've already chosen the best response. \[\huge≤\color{red}{\textbf{Happy}} \space \color{blue}{Birthday} \space \color{green}{\text{Parth!}}≥\]Sorry, I don't know as much \(\LaTeX\) as some people XD Best Response You've already chosen the best response. @ParthKohli @karatechopper aap desi hai? 
Best Response You've already chosen the best response. \[\Huge\color{gold} \bigstar\color{Green} { \mathbb {Happy Birthday Parth :)}}\] Best Response You've already chosen the best response. ≤Wow, Parth is just 13 and has done so much, happy birthday bro!≥ Best Response You've already chosen the best response. Now How Do I make that in latex Best Response You've already chosen the best response. @sarah43 YES! Best Response You've already chosen the best response. Happy birthday parth So many wishes !!! may god bless u my friend !! Best Response You've already chosen the best response. It may not be your bday there, but its still your bday in america, so HAPPY BDAYYY!!!!! hahaha. Hope all your wishes come true! Best Response You've already chosen the best response. Best Response You've already chosen the best response. \[\huge\mathfrak {\color{#DD721F}{Happy}} \text{ } \mathfrak{\color{#1FA1DD}{birthday!!!}} \text{ } \mathsf{\color{green}{@ParthKohli}}\] \[\large \text{**}celebrating\text{ } 13 \text{ } happy \ text{ } years \text{ }of \text{ }awesomeness \text{ }\text{**}\] Best Response You've already chosen the best response. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50118d42e4b08ffc0ef3c0f0","timestamp":"2014-04-16T10:21:52Z","content_type":null,"content_length":"189340","record_id":"<urn:uuid:36d9587b-87bc-49ce-94f2-ba591a96bf1b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Provided by: expr - evaluate expressions expr EXPRESSION expr OPTION --help display this help and exit output version information and exit Print the value of EXPRESSION to standard output. A blank line below separates increasing precedence groups. EXPRESSION may be: ARG1 | ARG2 ARG1 if it is neither null nor 0, otherwise ARG2 ARG1 & ARG2 ARG1 if neither argument is null or 0, otherwise 0 ARG1 < ARG2 ARG1 is less than ARG2 ARG1 <= ARG2 ARG1 is less than or equal to ARG2 ARG1 = ARG2 ARG1 is equal to ARG2 ARG1 != ARG2 ARG1 is unequal to ARG2 ARG1 >= ARG2 ARG1 is greater than or equal to ARG2 ARG1 > ARG2 ARG1 is greater than ARG2 ARG1 + ARG2 arithmetic sum of ARG1 and ARG2 ARG1 - ARG2 arithmetic difference of ARG1 and ARG2 ARG1 * ARG2 arithmetic product of ARG1 and ARG2 ARG1 / ARG2 arithmetic quotient of ARG1 divided by ARG2 ARG1 % ARG2 arithmetic remainder of ARG1 divided by ARG2 STRING : REGEXP anchored pattern match of REGEXP in STRING match STRING REGEXP same as STRING : REGEXP substr STRING POS LENGTH substring of STRING, POS counted from 1 index STRING CHARS index in STRING where any CHARS is found, or 0 length STRING length of STRING + TOKEN interpret TOKEN as a string, even if it is a keyword like `match' or an operator like `/' ( EXPRESSION ) value of EXPRESSION Beware that many operators need to be escaped or quoted for shells. Comparisons are arithmetic if both ARGs are numbers, else lexicographical. Pattern matches return the string matched between \( and \) or null; if \( and \) are not used, they return the number of characters matched or 0. Exit status is 0 if EXPRESSION is neither null nor 0, 1 if EXPRESSION is null or 0, 2 if EXPRESSION is syntactically invalid, and 3 if an error occurred. Written by Mike Parker, James Youngman, and Paul Eggert. Report expr bugs to bug-coreutils@gnu.org GNU coreutils home page: <http://www.gnu.org/software/coreutils/> General help using GNU software: <http://www.gnu.org/gethelp/> Report expr translation bugs to <http://translationproject.org/team/> Copyright (C) 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. The full documentation for expr is maintained as a Texinfo manual. If the info and expr programs are properly installed at your site, the info coreutils 'expr invocation' should give you access to the complete manual.
{"url":"http://manpages.ubuntu.com/manpages/precise/en/man1/expr.1.html","timestamp":"2014-04-16T07:16:40Z","content_type":null,"content_length":"7321","record_id":"<urn:uuid:7bc0bc52-401d-4d8b-8db7-0d66c88a3202>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Advanced Analysis, Notes 8: Banach spaces (application: weak solutions to PDEs) by Orr Shalit Today I will show you an application of the Hahn-Banach Theorem to partial differential equations (PDEs). I learned this application in a seminar in functional analysis, run by Haim Brezis, that I was fortunate to attend in the spring of 2008 at the Technion. As often happens with serious applications of functional analysis, there is some preparatory material to go over, namely, weak solutions to PDEs. 1. Weak solutions to PDEs In our university, Ben-Gurion University of the Negev, Pure Math majors can finish their studies without taking a single course in physics. Therefore I will say the obvious: partial differential equations are one of the most important and useful branches of mathematics. It is a huge subject. When working in PDEs one requires an arsenal of different tools, and functional analysis is just one of the many tools that PDE specialists use. Since my only goal here is to give an example, and so as to be very very concrete, I will discuss only one PDE, the PDE (*) $div(u) = F .$ Here the function $u = (u_1, u_2)$ is a vector valued function on the plane $u: \mathbb{R}^2 \rightarrow \mathbb{R}^2$, $F$ is a scalar valued function on the plane $F: \mathbb{R}^2 \rightarrow \ mathbb{R}$, and $div$ is the divergence operator $div(u) = \frac{\partial u_1}{\partial x} + \frac{\partial u_2}{\partial y} .$ In its simplest form, the problem is: given a specified function $F$, does there exist a solution $u$ that satisfies $div(u) = F$? Classically, a solution to equation (*) means a differentiable function $u$ (meaning that both $u_1$ and $u_2$ are differentiable functions) such that $div(u) (x,y)= \frac{\partial u_1}{\partial x}(x,y) + \frac{\partial u_2}{\partial y}(x,y) = F(x,y)$ holds for every $(x,y) \in \mathbb{R}^2$. The question whether a classical solution exists or not is a respectable mathematical question, but as I noted above, PDEs arise in applications and exist for applications, and it is sometimes not reasonable to expect that the solution will be differentiable or even continuous. So one is led to consider weak solutions, that is, functions $u$ which are not differentiable, but which solve the PDE (*) in some sense. (There is another reason to consider weak solutions besides the need the arises in applications: sometimes the existence of a classical solutions is shown in two steps. First step: a weak solution is shown to exist. Second step: the weak solution is shown to enjoy some regularity properties and is shown to be a solution in the classical case). In what sense? Assume that $F \in C = C(\mathbb{R}^2)$ and that $u \in C^1= C^1(\mathbb{R}^2)$ is a solution to (*). It then follows that for every smooth function $w \in C_c^\infty(\mathbb{R}^2)$ (i.e., $w$ is an infinitely differentiable compactly supported function; sometimes such functions $w$ are called test functions) the following holds: (**)$\int (\frac{\partial u_1}{\partial x}(x,y) + \frac{\partial u_2}{\partial y}(x,y)) w(x,y) dx dy = \int F(x,y) w(x,y) dx dy .$ In fact, if $F \in C$, then $u \in C^1$ is a classical solution to (*) if and only if the above equality of integrals holds for every $w \in C_c^\infty$. This follows from the following exercise. Exercise A: A function $f \in C(\mathbb{R}^n)$ is everywhere zero if and only if for all $w \in C_c^\infty(\mathbb{R}^n)$, $\int f w = 0$. 
If we integrate (**) by parts, we find that $u$ is a classical solution to (*) if and only if
$-\int (u_1 w_x + u_2 w_y) = \int F w$, that is,
(*') $\int u \cdot grad(w) = -\int F w ,$
for all $w \in C_c^\infty$. So (*) is equivalent to (*') for $u \in C^1$ and $F \in C$ (here $grad(w) = (w_x, w_y)$ is the gradient of $w$). But (*') makes sense also if $u$ and $F$ are merely locally integrable. Thus for a locally integrable $F$, we say that a locally integrable function $u$ is a weak solution to (*) if it satisfies (*'). Experience has shown that this is a reasonable notion of solution to the original PDE. Now we are free to study (*') where $F$ belongs to a certain class of functions, and ask whether a solution $u$ in a given class of functions exists. We will now show that for every $F \in L^2$ there exists a $u \in L^\infty \oplus L^\infty$ which is a weak solution to $div(u) = F$. There are other notions of generalized solutions, see also Terry Tao's PCM article or the Wikipedia article.

2. The existence of $L^\infty$ solutions to $div(u) = F$

Let us fix some notation. For simplicity, let all our functions be real valued. We let $L^1 \oplus L^1$ denote the space of all pairs $(f,g)$, where $f,g \in L^1(\mathbb{R}^2)$. We equip this space with the norm $\|(f,g)\| = \|f\|_1 + \|g\|_1.$ Likewise, $L^\infty \oplus L^\infty$ is the space of pairs of functions with the norm $\|(f,g)\| = \max\{\|f\|_\infty, \|g\|_\infty \}.$

Exercise B: $L^1 \oplus L^1$ is a Banach space, and $(L^1 \oplus L^1)^* = L^\infty \oplus L^\infty$.

Theorem 1: For every $F \in L^2$, there exists a $u = (u_1, u_2) \in L^\infty \oplus L^\infty$ that solves (in the weak sense) the PDE $div(u) = F$.

Proof: Let $M \subset L^1 \oplus L^1$ be the space
$M =\{(f,g) \in L^1 \oplus L^1 : \exists w \in C^\infty_c . (f,g) = grad(w) \}.$
Since $M$ is the range of a linear map, it is a linear subspace of $L^1 \oplus L^1$. The following exercise is not difficult.

Lemma 2: If $(f,g) \in M$, then there is a unique $w \in C_c^\infty$ for which $(f,g) = grad(w)$. The map $(f,g) \mapsto w$ is linear and bounded as a map from $M \subset L^1 \oplus L^1$ into $L^2$.

Assume the lemma for now, and let us proceed with the proof of the theorem. On $M$ we define the linear functional $\phi : M \rightarrow \mathbb{R}$,
$\phi(f,g) = - \int F w$,
where $w \in C^\infty_c$ is such that $(f,g) = grad(w)$. Now since $F \in L^2$ by assumption, the map $w \mapsto - \int Fw$ is a bounded functional on $L^2$. Using this fact together with Lemma 2 we conclude that $\phi$ (which is nothing but the composition of the map $M \ni grad(w) \mapsto w \in L^2$ with the map $L^2 \ni w \mapsto - \int Fw$) is a well defined, linear and bounded functional on $M \subset L^1 \oplus L^1$. By the Hahn-Banach extension theorem (Theorem 12 in Notes 6), $\phi$ extends to a bounded functional $\Phi$ on $L^1 \oplus L^1$. By Exercise B, there exists a $u \in L^\infty \oplus L^\infty$ such that
$\Phi(f,g) = \int (u_1 f + u_2 g)$
for all $(f,g) \in L^1 \oplus L^1$. Restricting only to elements of the form $(f,g) = grad(w) \in M$, we find that
$\int u \cdot grad(w) = \phi(grad(w)) = -\int F w$
for all $w \in C_c^\infty$. In other words, $u \in L^\infty \oplus L^\infty$ is a weak solution to the equation $div(u) = F$.

This may seem a little magical, but don't forget that we still haven't proved Lemma 2. Lemma 2 is a typical example of an estimate that one has to prove in order to apply functional analysis to PDEs, and falls under the wide umbrella of the Sobolev–Nirenberg inequalities.
Proof of Lemma 2: Since the gradient operator $grad : C^\infty \rightarrow C^\infty \oplus C^\infty$ annihilates only constant functions, its restriction to $C_c^\infty$ has no kernel. Therefore, the linear transformation $grad : C_c^\infty \rightarrow M$ has a linear inverse $grad^{-1}: M \rightarrow C^\infty_c$ which sends every $(f,g) \in M$ to the unique $w \in C_c^\infty$ such that $grad(w) = (f,g)$. The only nontrivial issue is boundedness with respect to the appropriate norms. The operator $grad^{-1}$ actually has a nice formula $grad^{-1}(f,g)(x,y) = \int_{-\infty}^x f(t,y) dt .$ Thus, if $w \in C^\infty_c$, we have $w(x,y) = \int_{-\infty}^x \frac{d w}{d x}(t,y) dt .$ We obtain the estimate $|w(x,y)| \leq \int_{-\infty}^\infty |\frac{d w}{d x}(t,y)| dt$. Similarly, $|w(x,y)| \leq \int_{-\infty}^\infty |\frac{d w}{d y}(x,s)| ds$. Multiplying the two estimates that we obtained we have $|w(x,y)|^2 \leq \int_{-\infty}^\infty |\frac{d w}{d x}(t,y)| dt \times \int_{-\infty}^\infty |\frac{d w}{d y}(x,s)| ds .$ Integrating with respect to $x$ and $y$, we obtain $\|w\|_2^2 \leq \|\frac{dw}{dx}\|_1 \|\frac{dw}{dy}\|_1 \leq \frac{1}{2}\|grad(w) \|^2_{L^1 \oplus L^1}$, as required.
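As a purely illustrative numerical sanity check of the inequality $\|w\|_2^2 \leq \|w_x\|_1 \|w_y\|_1$ just proved (not part of the original post; the grid and the test bump are arbitrary choices, and finite differences introduce some discretization error), one can discretize a smooth compactly supported function and compare both sides:

```python
import numpy as np

# Grid on [-1, 1]^2 and a smooth bump supported in the unit disk.
n = 400
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing='ij')
R2 = X**2 + Y**2
w = np.where(R2 < 1.0, np.exp(-1.0 / (1.0 - np.minimum(R2, 0.999999))), 0.0)

wx, wy = np.gradient(w, h, h)              # finite-difference partial derivatives

lhs = np.sum(w**2) * h**2                                        # ||w||_2^2
rhs = (np.sum(np.abs(wx)) * h**2) * (np.sum(np.abs(wy)) * h**2)  # ||w_x||_1 * ||w_y||_1
print(lhs, rhs, lhs <= rhs)
```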
{"url":"http://noncommutativeanalysis.wordpress.com/2012/11/07/advanced-analysis-notes-8-banach-spaces-application-weak-solutions-to-pdes/","timestamp":"2014-04-19T09:34:35Z","content_type":null,"content_length":"93879","record_id":"<urn:uuid:d1e477e5-01a8-4d8a-8060-a24f285fa382>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof involving functions
December 7th 2012, 11:37 AM #1
Sep 2012
Proof involving functions
f(x) = (ax+b)/(cx+d), x is real, x =/= -d/c, b =/= 0, c =/= 0.
Prove that if a + d =/= 0 and (a-d)^2 + 4bc = 0, then y = f(x) and y = f^-1(x) intersect at exactly one point.
So far I've tried setting f(x) = f^-1(x) and writing it as a quadratic in x. I then tried to find the discriminant and show that it was (a-d)^2 + 4bc, but I just got into a big mess. Is this method correct or am I missing something? Is there another approach?
Re: Proof involving functions
Have you found f^(-1)(x)? What is it? (I hope you understand that f^{-1} is NOT the reciprocal "1/f(x)" but the inverse function.)
Re: Proof involving functions
f^-1(x) = (b-dx)/(cx-a)
Re: Proof involving functions
Re: Proof involving functions
(ax+b)/(cx+d) = (b-dx)/(cx-a)
(ax+b)(cx-a) = (b-dx)(cx+d)
then all I can think to do is write it as a polynomial equal to zero, and find the discriminant. This didn't get me (a-d)^2 + 4bc. I don't know
December 7th 2012, 12:09 PM #2 MHF Contributor Apr 2005
December 7th 2012, 12:36 PM #3 Sep 2012
December 7th 2012, 04:10 PM #4
December 10th 2012, 07:48 AM #5 Sep 2012
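The thread stops before the algebra is finished, so here is one way to complete the step started in the last post (a suggested completion, not part of the original thread). Expanding both sides of $(ax+b)(cx-a) = (b-dx)(cx+d)$ gives

$acx^2 + (bc - a^2)x - ab = -dcx^2 + (bc - d^2)x + bd$

Collecting everything on one side:

$c(a+d)x^2 - (a^2 - d^2)x - b(a+d) = 0$

Since $a + d \neq 0$ we may divide through by it, leaving

$cx^2 - (a-d)x - b = 0$

whose discriminant is exactly $(a-d)^2 + 4bc$. By hypothesis this is $0$, so the quadratic has exactly one real root, $x = (a-d)/(2c)$, and hence the graphs of $y = f(x)$ and $y = f^{-1}(x)$ meet at exactly one point. (This root cannot be one of the excluded values $-d/c$ or $a/c$, since either equality would force $a + d = 0$.)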
{"url":"http://mathhelpforum.com/algebra/209292-proof-involving-functions.html","timestamp":"2014-04-18T19:13:07Z","content_type":null,"content_length":"42908","record_id":"<urn:uuid:1b2f6cc3-a11f-4596-9acf-b56b96714237>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Bell Curve in Lotto
A bell curve is the normal distribution, also called the Gaussian distribution. The bell curve is an extremely important probability distribution in many fields. It is often called the bell curve because the graph of its probability density resembles a bell.
The bell curve can be seen in lotto when looking at the most probable range of sums.
The above photo shows an example of a lotto game's sum frequency graph. You can use Advantage Plus to create this chart for your game. You will see that every lotto game follows a similar bell curve.
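The bell shape is just what you expect when several numbers are added together: the sum of the drawn balls behaves approximately normally. A short simulation makes the point; the 6-numbers-from-49 format below is only an example and may not be the game shown in the chart:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Simulate many 6-number draws from 1..49 (without replacement) and record each sum.
sums = [int(rng.choice(np.arange(1, 50), size=6, replace=False).sum())
        for _ in range(100_000)]

counts = Counter(sums)
# Crude text histogram around the most probable range (the mean sum is 150 for this game).
for s in range(90, 211, 10):
    print(f"{s:4d} {'#' * (counts[s] // 40)}")
```

Sums near 150 occur far more often than sums near the extremes (21 or 279), which is exactly the bell-shaped sum frequency chart described above.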
{"url":"http://www.smartluck.com/lotteryterms/bell-curve.htm","timestamp":"2014-04-19T22:05:56Z","content_type":null,"content_length":"4823","record_id":"<urn:uuid:6bee2b4e-e351-46ef-91aa-b5cb3242271b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
- PHYS 152 Essentials of Physics- PHYS 152 Lecture 5 and 6-- Standing Waves, Overtone Series, and Synthesis of Waves
Reading: Chapter 4
A Moog synthesizer, circa 1968

In-class assignment, problem 2
Let's spend a little time working out problem 2 of the in-class assignment from Lecture 4.

Mersenne's Laws
During our slinky lab we noticed that stretching the slinky to twice its original length of 1 meter resulted in a doubling of wave speed while frequency stayed the same. What's that about? Marin Mersenne (1588-1648) determined three laws that help us answer this question. They focus primarily on frequency, but we already know the relationship between frequency, wavelength and wave speed. First we might ask ourselves which variables (or fundamental descriptors) are important. Think of a violin. The strings have different thicknesses, and it is tuned by changing the tension on the strings.
• What effect does slinky mass have on this? This might be a good time to use linear density (or mass per length) as a fundamental descriptor.
• What about tension? Does it generally increase or decrease wave speed?
• Also, what effect does string length have on frequency, all other things being equal? What does doubling length do to frequency?
Table 3-4 of your textbook gives the three laws attributed to Mersenne. They say that:
• Doubling the string length (L) halves the frequency (f)
• Doubling the TENSION (F) results in an increase in frequency by the square root of two
• Doubling the mass per length (W) results in reducing frequency by the square root of two
All of these laws can be summarized as follows:
f = (1/(2L)) · sqrt(F/W)
This explains why the strings of guitars, violas, etc., are of different thicknesses and kept at different tensions. In terms of Mersenne's equations, then, try to answer the following questions:
• Why are strings for lower open notes (think low E on a guitar) thicker than those for higher notes? What is 'wire-wound' about?
• Which string on a violin is under the highest tension and why?
• Why are the strings of a cello longer than those of a viola?
• How would doubling the mass per length of a string, without doing anything else, change the frequency? (Be quantitative.)
Some of you were wondering about how one gets different frequency standing waves (fundamental, 2nd harmonic, 3rd harmonic, etc.) on a string all at the same time. Here is another applet (courtesy of falstad.com) that lets you pluck a string (try it in different places) and see how this affects the mixture of standing waves (or overtone content) of the string. Be sure to turn on sound so you can hear what plucking at different locations sounds like. Also, if you select "Display Modes" you can see and hear how the different overtones appear and decay over time, giving a guitar its unique sound. You guitar players may want to check out Ian Billington's web page on The Physics of the Acoustic Guitar.

Longitudinal Standing Waves (pipes)
Many musical instruments use methods other than the excitation of stretched strings to create musical sounds. In some instruments, sound waves within air are directly generated by exciting changes in the motion of air molecules within and near the ends of the pipe. These types of standing waves are longitudinal (in the direction of wave travel) as contrasted with transverse waves of a stretched string. One can 'monitor' the air inside the pipe in two ways--
1. pressure (average spacing between air molecules, versus unaffected atmospheric pressure)
2.
air velocity (net transport of air molecules, versus unaffected air velocity) The relationship between these two is really not so apparent (at least to me!). Let's look at Figure 3-13 of your book and think about what it means: At an air velocity antinode (marked "A" on the figures), air molecule 'layers' are rushing to the left or to the right with maximal variations in velocity over time. However, the 'layers' are about the same separation at those points (look under an "A" on each of the four (successive time) plots). So the air pressure there is constant. It turns out that a pressure node (a place where air pressure stays the same, like string displacement on the bridge of a guitar) is an air velocity antinode, and vice versa. In other words, an air pressure antinode-- a place where air pressure changes most-- is a place where air velocity doesn't change at all over time. A few important TIPS about pipes So, whenever you see a tube with sine waves drawn in it (like above), your first question should be: "What does that graph represent, air velocity or pressure?" The next step is to remember that: "Air velocity nodes (N's) are equivalent to pressure antinodes (A's)-- and vice versa" (The figures in your textbook typically show air velocity in plots of standing waves in pipes.) Ends, nodes, antinodes and flips Now it is worthwhile to think about (boundary) conditions at the closed or open ends of a pipe. This will tell us where we expect to find nodes and antinodes. It will also help us understand if and where sound waves "flip" (reverse phase) when they reach an open or a closed end.
□ What will the air velocity do at the closed end of the pipe? Will that be a node or antinode for air velocity?
□ What about the open end? Will that be an air velocity node or antinode?
□ What about pressure, where will the nodes and antinodes be?
The air velocity at the closed end must be zero. Air layers can't move to the left, into the closed end. So there is an air velocity node (N) at the closed end. Remember that this corresponds to an air pressure antinode. At the other end, the air pressure must be the same as that outside the pipe, so there is an air pressure node at the open end. This corresponds to an air velocity antinode (A). What about wave pulses reflecting off of an open or closed end? What happens there? It's scientific process time! Sound (pressure) pulses flip upon reflecting off of an open end. And they don't flip upon encountering a closed end. I think about the open end of a pipe as a pressure node--where pressure is 'clamped' to outside (atmospheric) pressure. We noted in our slinky lab that (transverse) slinky pulses flipped upon reflecting off of 'clamped' ends. Ditto for sound waves encountering ('pressure clamped') open pipe ends. Now that we know what to expect for open and closed pipe ends, we can work out the overtone series (set of standing waves that fit in...) for pipes. Let's consider, first, a pipe with both ends open. We start by drawing a graph of air velocity (following the book's lead) for the simplest standing wave that 'fits' in the pipe. So we remember that the open ends of pipes are air velocity antinodes (A's). And the simplest standing wave that fits is one with one air velocity node (N) in the middle. To determine the wavelength (l) of this standing wave, compared to the length (L) of the pipe, we recognize that we must 'complete' the sine wave by adding to both ends. This gives us that: l = 2 L for the fundamental (or 1st harmonic) standing wave for an open-ended pipe.
And we remember that: f = v / l. And with that method, we can develop both a pictorial representation of the overtone series (Figure 3-20 of your textbook), and a table showing these and giving corresponding wavelengths and frequencies.
Table of nodes, etc., wavelengths and frequencies for open-ended pipe standing waves
┃ Harmonic number (N) │ # nodes │ # antinodes │ wavelength (l) │ frequency (f=v/l) │ AKA ┃
┃ 1 │ 1 │ 2 │ 2L │ v/2L (call this f1) │ ┃
┃ 2 │ 2 │ 3 │ L (2/2 L) │ v/L (= 2 f1) │ ┃
┃ 3 │ 3 │ 4 │ 2/3 L │ v/(2/3 L) (= 3 f1) │ ┃
┃ 4 │ 4 │ 5 │ 1/2 L (2/4 L) │ v/(1/2 L) (= 4 f1) │ ┃
┃ 5 │ 5 │ 6 │ 2/5 L │ v/(2/5 L) (= 5 f1) │ ┃
┃ n │ n │ n+1 │ 2/n L │ v / (2/n L) (= n f1) │ (n is the number of nodes in this case) ┃
┃ (somewhat superfluous comments │ not Modes │ I personally have nothing against │ L is the length of the │ a wee high │ to stand or not to stand.. that is the ┃
┃ follow) │ │ nodes │ pipe │ frequency │ question ┃
We leave it to you to develop on your own graphs of (pictorial representations of) air velocity in a pipe with one end closed and the other open. This situation is representative of a clarinet, for example. You can see Figure 3-20 of your book and Table 3-7 to see if you understand how this works. Synthesis of Complex Waves To understand how one can synthesize the sounds of different musical instruments, it is useful to understand, in a conceptual way, something called Fourier synthesis. Fourier synthesis, named after French mathematician Jean Baptiste Joseph Fourier, is a high-faluting way of saying "let's add sine waves together and see what we get." So let's just do that. We can use the beautiful Fourier Synthesis Applet, courtesy of Falstad.com, for this: Using this applet we note that we can compose somewhat complex waveforms (triangular, square, sawtooth, etc.) using appropriate combinations (where we choose frequency, amplitude and phase... so many choices!) of pure sine waves. What about going the other way around? Can we take a signal (say a "sawtooth" wave) and somehow determine which sine waves comprise it? If so, can we determine what relationships exist between the successive frequencies, their amplitudes and the phases of the Fourier components of that signal? Time to break out the demo room's trusty microphone, LabPro and LoggerPro software to determine the Fourier components of a sawtooth wave. This process of analyzing seemingly complex signals in terms of sums of sine and cosine waves is, amazingly, called Fourier Analysis. This is not to be confused with Freudian Analysis. Let's analyze various musical tones with Raven Software and see if some of this makes sense in that setting. Let's start with a single note played on a clarinet. Our goal is to synthesize the sounds of musical instruments. So it is useful to combine some Fourier analysis with a little physics of sound and music to guide how we do that.
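To make the idea of Fourier synthesis concrete outside of the applet, here is a minimal sketch (not part of the course materials) that builds an approximate sawtooth wave by summing harmonics of a single fundamental. The 440 Hz fundamental, the sample rate, and the number of harmonics are arbitrary choices; the 1/n amplitudes and alternating signs are the textbook Fourier series for a sawtooth.

```python
import numpy as np

sample_rate = 44100                          # samples per second (assumed)
t = np.arange(0, 0.01, 1.0 / sample_rate)    # 10 ms of signal
f1 = 440.0                                   # fundamental frequency in Hz (assumed)

signal = np.zeros_like(t)
for n in range(1, 21):                       # sum the first 20 harmonics
    signal += ((-1) ** (n + 1)) * np.sin(2 * np.pi * n * f1 * t) / n
signal *= 2 / np.pi                          # conventional normalization

# 'signal' now approximates a sawtooth ramp; including more harmonics sharpens it,
# which is exactly the behavior the Falstad applet demonstrates interactively.
```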
{"url":"http://hendrix2.uoregon.edu/~dlivelyb/phys152/L5.html","timestamp":"2014-04-16T15:59:45Z","content_type":null,"content_length":"29727","record_id":"<urn:uuid:cd646ea1-daa0-4737-9a78-499b552b496c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
This is a standard Gambler's ruin problem first expressed by Christiaan Huygens. There are 2 possibilities. Maria can go broke or Maria can keep winning indefinitely and not go broke. Obviously Sam can never lose because he can not go broke. The relevant formula for Maria's probability of ruin is:
P(ruin) = ((q/p)^i - (q/p)^n) / (1 - (q/p)^n), where q = 1 - p,
and where i is the amount Maria starts with, i = 10 bets. n = ∞, which means Maria never goes broke even in an infinite number of plays. p = 2 / 3. Plugging in (q/p = 1/2, and letting n → ∞) we get:
P(ruin) = (1/2)^10 = 1/1024 ≈ 0.0009765625
So Maria has a 99.90234375% chance of winning indefinitely and a .09765625% chance of going broke. It has been known since the time of Huygens that if Maria and Sam both have the same percentage of winning then she is sure to go broke. But that is not the case here because Maria has a better chance of winning each bet.
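For anyone who wants to sanity-check those figures, here is a small Monte Carlo sketch (not part of the original post). The "safe" cutoff is an added assumption: once Maria's bankroll is far above her starting point, further ruin is exponentially unlikely, so the trial is stopped early.

```python
import random

p, start, safe, trials = 2 / 3, 10, 60, 100_000
ruined = 0
for _ in range(trials):
    bank = start
    while 0 < bank < safe:
        bank += 1 if random.random() < p else -1   # win or lose one unit per bet
    if bank == 0:
        ruined += 1

print(ruined / trials)   # should hover near (1/2)**10 = 1/1024 ≈ 0.00098
```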
{"url":"http://www.mathisfunforum.com/post.php?tid=20021&qid=284761","timestamp":"2014-04-20T18:39:23Z","content_type":null,"content_length":"20555","record_id":"<urn:uuid:f7b6912c-44db-4d7d-b815-320170368642>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Basic hypergeometric series and applications. With a foreword by George E. Andrews. (English) Zbl 0647.05004 Mathematical Surveys and Monographs, 27. Providence, RI: American Mathematical Society (AMS). xiii, 124 p. (1988). One of the problems of working with basic hypergeometric series is the scarcity of good reference books. A measure of this scarcity is that although it has taken roughly thirty years for this book to see print, it still meets a very real need for a good introduction to this subject. The book moves in a natural progression from techniques of deriving identities for basic hypergeometric series, through applications to partition theory and especially the Ramanujan partition congruences, into mock theta functions, and ultimately to results in modular equations. The book is a treasure house of new and little known identities and relationships. Chapter notes by George Andrews tie Fine’s identities to the current literature. 33D15 Basic hypergeometric functions of one variable, ${}_{r}{\varphi }_{s}$ 33-02 Research monographs (special functions) 05A15 Exact enumeration problems, generating functions 11P82 Analytic theory of partitions 11F37 Forms of half-integer weight, etc.
{"url":"http://zbmath.org/?q=an:0647.05004","timestamp":"2014-04-20T16:24:33Z","content_type":null,"content_length":"21659","record_id":"<urn:uuid:b0458baa-8053-4357-9dd7-9254a5d0494e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Brookline, NH Math Tutor Find a Brookline, NH Math Tutor ...If you are looking for a tutor with exceptional instructional knowledge and skill, then I could be a good match for you. This is particularly true if the student needing help has struggled academically for reasons related to dyslexia, ADHD, ASD, or another condition that affects his/her learning... 26 Subjects: including algebra 2, reading, precalculus, prealgebra ...I have an M.S. in Secondary Education, C. W. Post, from Long Island University and a B.S. in MIS and Finance, from Rensselaer Polytechnic Institute. 6 Subjects: including algebra 2, algebra 1, precalculus, geometry ...She was tracheostomy dependent, gastrostomy fed, with a litany of diagnoses. To name a few: congenital encephalopathy, seizure disorder, developmental delay, neurogenic bladder and COPD. Further, I worked as social worker in a pediatric nursing care facility. 45 Subjects: including algebra 2, precalculus, discrete math, SAT math ...I am a skilled math tutor who is strong in many areas of math, including Pre-Algebra, Algebra, Trigonometry. I feel it's important to be comfortable in these areas because you will need to use it in your everyday life. When it comes to my teaching style, I have a sense of humor when it comes to... 10 Subjects: including algebra 1, algebra 2, calculus, geometry ...I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these subjects, for the last several years, I have been successfully tutoring for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams. 36 Subjects: including logic, ACT Math, reading, probability
{"url":"http://www.purplemath.com/Brookline_NH_Math_tutors.php","timestamp":"2014-04-21T13:06:37Z","content_type":null,"content_length":"23811","record_id":"<urn:uuid:de9bde93-8070-4106-8353-6e7d615cf94a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Kärkkäinen J: Sparse suffix trees - in Proceedings of the 32nd Annual ACM Symposium on the Theory of Computing , 2000 "... Abstract. The proliferation of online text, such as found on the World Wide Web and in online databases, motivates the need for space-efficient text indexing methods that support fast string searching. We model this scenario as follows: Consider a text T consisting of n symbols drawn from a fixed al ..." Cited by 189 (17 self) Add to MetaCart Abstract. The proliferation of online text, such as found on the World Wide Web and in online databases, motivates the need for space-efficient text indexing methods that support fast string searching. We model this scenario as follows: Consider a text T consisting of n symbols drawn from a fixed alphabet Σ. The text T can be represented in n lg |Σ | bits by encoding each symbol with lg |Σ | bits. The goal is to support fast online queries for searching any string pattern P of m symbols, with T being fully scanned only once, namely, when the index is created at preprocessing time. The text indexing schemes published in the literature are greedy in terms of space usage: they require Ω(n lg n) additional bits of space in the worst case. For example, in the standard unit cost RAM, suffix trees and suffix arrays need Ω(n) memory words, each of Ω(lg n) bits. These indexes are larger than the text itself by a multiplicative factor of Ω(lg |Σ | n), which is significant when Σ is of constant size, such as in ascii or unicode. On the other hand, these indexes support fast searching, either in O(m lg |Σ|) timeorinO(m +lgn) time, plus an output-sensitive cost O(occ) for listing the occ pattern occurrences. We present a new text index that is based upon compressed representations of suffix arrays and suffix trees. It achieves a fast O(m / lg |Σ | n +lgɛ |Σ | n) search time in the worst case, for any constant - Proc. 3rd South American Workshop on String Processing (WSP'96 , 1996 "... String matching over a long text can be significantly speeded up with an index structure formed by preprocessing the text. For very long texts, the size of such an index can be a problem. This paper presents the first sublinear-size index structure. The new structure is based on Lempel-Ziv parsing ..." Cited by 48 (1 self) Add to MetaCart String matching over a long text can be significantly speeded up with an index structure formed by preprocessing the text. For very long texts, the size of such an index can be a problem. This paper presents the first sublinear-size index structure. The new structure is based on Lempel-Ziv parsing of the text and has size linear in N, the size of the Lempel-Ziv parse. For a text of length n, N = O(n = log n) and can be still smaller if the text is compressible. With the new index structure, all occurrences of a pattern string of length m can be found in time O(m 2 - In ICDE , 2000 "... We propose an indexing technique for fast retrieval of similar subsequences using time warping distances. A time warping distance is a more suitable similarity measure than the Euclidean distance in many applications, where sequences may be of different lengths or different sampling rates. Our index ..." Cited by 39 (4 self) Add to MetaCart We propose an indexing technique for fast retrieval of similar subsequences using time warping distances. A time warping distance is a more suitable similarity measure than the Euclidean distance in many applications, where sequences may be of different lengths or different sampling rates. 
Our indexing technique uses a disk-based suffix tree as an index structure and employs' lower-bound distance functions to filter out dissimilar subsequences without false dismissals. To make the index structure compact and thus accelerate the query processing, we convert sequences of continuous values to sequences of discrete values via a categorization method and store only a subset of suffixes whose first values are different from their preceding values. The experimental results' reveal that our proposed technique can be a few orders' of magnitude faster than sequential scanning. - In Proc. 7th International Conference on Discovery Science (DS’04 , 2004 "... We consider the problem of discovering the optimal pair of substring patterns with bounded distance #, from a given set S of strings. ..." Cited by 5 (4 self) Add to MetaCart We consider the problem of discovering the optimal pair of substring patterns with bounded distance #, from a given set S of strings. - in Proc. 10th International Symp. on String Processing and Information Retrieval (SPIRE’03 , 2003 "... Abstract. Given a text, grammar-based compression is to construct a grammar that generates the text. There are many kinds of text compression techniques of this type. Each compression scheme is categorized as being either off-line or on-line, according to how a text is processed. One representative ..." Cited by 3 (3 self) Add to MetaCart Abstract. Given a text, grammar-based compression is to construct a grammar that generates the text. There are many kinds of text compression techniques of this type. Each compression scheme is categorized as being either off-line or on-line, according to how a text is processed. One representative tactics for off-line compression is to substitute the longest repeated factors of a text with a production rule. In this paper, we present an algorithm that compresses a text basing on this longestfirst principle, in linear time. The algorithm employs a suitable index structure for a text, and involves technically efficient operations on the structure. 1 - in Proc. 13th International Symp. on String Processing and Information Retrieval (SPIRE’06), Lecture Notes in Computer Science , 2006 "... Abstract. The suffix tree of string w is a text indexing structure that represents all suffixes of w. A sparse suffix tree of w represents only a subset of suffixes of w. An application to sparse suffix trees is composite pattern discovery from biological sequences. In this paper, we introduce a new ..." Cited by 3 (1 self) Add to MetaCart Abstract. The suffix tree of string w is a text indexing structure that represents all suffixes of w. A sparse suffix tree of w represents only a subset of suffixes of w. An application to sparse suffix trees is composite pattern discovery from biological sequences. In this paper, we introduce a new data structure named sparse directed acyclic word graphs (SDAWGs), which are a sparse text indexing version of directed acyclic word graphs (DAWGs) of Blumer et al. We show that the size of SDAWGs is linear in the length of w, and present an on-line linear-time construction algorithm for SDAWGs. 1 "... integration of multi-scale data for the host-pathogen studies ..."
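The abstracts above take suffix-based text indexes for granted. As a toy illustration only — unrelated to the specific data structures in these papers — a naive suffix array can be built and queried in a few lines:

```python
import bisect

def build_suffix_array(text):
    # Naive O(n^2 log n) construction: sort suffix start positions by the suffix text.
    # Fine for small strings; real indexes use linear-time constructions.
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurrences(text, sa, pattern):
    # All suffixes that start with `pattern` form one contiguous block in sorted order.
    suffixes = [text[i:] for i in sa]
    lo = bisect.bisect_left(suffixes, pattern)
    hi = bisect.bisect_right(suffixes, pattern + chr(0x10FFFF))
    return sorted(sa[lo:hi])

text = "abracadabra"
sa = build_suffix_array(text)
print(occurrences(text, sa, "abra"))   # [0, 7]
```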
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=50678","timestamp":"2014-04-18T12:09:49Z","content_type":null,"content_length":"29385","record_id":"<urn:uuid:aa0a2747-11b5-4ef7-9055-97982d8abe97>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Thomas Industrial Library 9.2 RESONANCE The natural frequency (and its overtones) are of great interest to the designer as they define the frequencies at which the system will resonate. The single-DOF lumped parameter systems shown in Figures 9-1 and 9-2 (pp. 214-215) are the simplest possible to describe a dynamic system, yet they contain all the basic dynamic elements. Masses and springs are energy storage elements. A mass stores kinetic energy, and a spring stores potential energy. The damper is a dissipative element. It uses energy and converts it to heat. Thus all the losses in the model of Figure 9-1 occur through the damper. These are “pure” idealized elements which possess only their own special characteristics. That is, the spring has no damping and the damper no springiness, etc. Any system that contains more than one energy storage device, such as a mass and a spring, will possess at least one natural frequency. If we excite the system at its natural frequency, we will set up the condition called resonance in which the energy stored in the system’s elements will oscillate from one element to the other at that frequency. The result can be violent oscillations in the displacements of the movable elements in the system as the energy moves from potential to kinetic form and vice versa. Figure 9-6a shows a plot of the amplitude and phase angle of the displacement response Y of the system to a sinusoidal input forcing function at various frequencies f. In our case, the forcing frequency is the angular velocity at which the cam is rotating. The plot normalizes the forcing frequency as the frequency ratio f / ωn. The forced response amplitude Y is normalized by dividing the dynamic deflection y by the static deflection F0 / k that the same force amplitude would create on the system. Thus at a frequency of zero, the output is one, equal to the static deflection of the spring at the amplitude of the input force. As the forcing frequency increases toward the natural frequency ωn, the amplitude of the output motion, for zero damping, increases rapidly and becomes theoretically infinite when f = ωn. Beyond this point the amplitude decreases rapidly and asymptotically toward zero at high frequency ratios. Figure 9-6c shows that the phase angle between input and output of a forced system switches abruptly at resonance. Figure 9-6b shows the amplitude response of a “self-excited” system for which there is no externally applied force. An example might be a shaft coupling a motor and a generator. The loading is theoretically pure torsion. However, if there is any unbalance in the rotors on the shaft, the centrifugal force will provide a forcing function proportional to angular velocity. Thus, when stopped there is no dynamic deflection, so the amplitude is zero. As the system passes through resonance, the same large response as the forced case is seen. At large frequency ratios, well above critical, the deflection becomes static at an amplitude ratio of 1. Cam-follower systems are subject to both of these types of vibratory behavior. An unbalanced camshaft will self-excite, and the follower force creates a forced response.
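The behavior described for Figure 9-6a can be reproduced with the standard textbook magnification-factor formula for a damped single-DOF oscillator — stated here only for illustration, not quoted from the handbook: amplitude ratio = 1 / sqrt((1 − r²)² + (2ζr)²), where r = f/ωn and ζ is the damping ratio. A short sketch:

```python
import numpy as np

def amplitude_ratio(r, zeta):
    # Normalized forced-response amplitude y / (F0/k) of a damped single-DOF system.
    return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)

r = np.linspace(0.0, 3.0, 301)                  # frequency ratio f / wn
for zeta in (0.05, 0.10, 0.50):                 # damping ratios chosen arbitrarily
    y = amplitude_ratio(r, zeta)
    print(f"zeta = {zeta}: ratio is {y[0]:.2f} at r = 0, "
          f"peaks near {y.max():.1f} around r = {r[y.argmax()]:.2f}")
```

As ζ → 0 the peak grows without bound, which is the "theoretically infinite" response at f = ωn mentioned in the text, and at large r the ratio falls toward zero, matching the high-frequency behavior described above.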
{"url":"http://www.thomasglobal.com/library/ui/Cams/Cam%20Design%20and%20Manufacturing%20Handbook/resonance/1/default.aspx","timestamp":"2014-04-19T04:19:25Z","content_type":null,"content_length":"114210","record_id":"<urn:uuid:a964781a-303b-48a5-be78-e1fdcfe5a8d5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Egyptian and Greek square root
Date: Dec 1, 2012 9:17 AM
Author: Milo Gardner
Subject: Re: Egyptian and Greek square root
The proposed scaled aspect of the square root of 6, and most numbers less than 1,000, need not have used 6/6 at any time. Updates to the Planetmath writeup will continue until the project is complete. Planetmath will also be updated to show that Egyptian scribes commonly mentioned inverse relationships in shorthand discussions of square roots of perfect squares 4, 25, 36, 49, 64, 81, 100 and higher; an important historical point will be reviewed on another level. What was focused upon were remainders (R). Remainders were often estimated in n/5 parts such that n/5 was scaled by 24/24 = 24n/120, as discussed by:
1/5(24/24) = 24/120 = (20 + 3 + 1)/120
2/5(24/24) = 48/120 = (40 + 5 + 3)/120
3/5(24/24) = 72/120 = (60 + 8 + 3 + 1)/120
4/5(24/24) = 96/120 = (80 + 8 + 5 + 3)/120
and other partitions. Initial estimated square root of N statements, with quotient (Q), used the form (Q + n/5)^2. The raw square root data was processed in shorthand formats that considered a rational error E1 that scribes reduced to a lower acceptable error E2. The complete historical method (found by Occam's Razor, and historical data) will be posted to a Planetmath page in the next few days, and maybe weeks. The final method will likely describe selections of (irrational) errors E2 associated with adding or subtracting from the last term (1/40, most often, and as I suspect, at other times 1/120). Thanks to Peter D. for the prodding. This unresolved topic had been running around in my head for far too many years, over 20 to be sure.
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7930724","timestamp":"2014-04-16T08:06:14Z","content_type":null,"content_length":"2587","record_id":"<urn:uuid:df035d1b-e750-4e41-a27b-fe95eb26bfa6>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Gate Previous Question Papers Solutions Cse Gate Previous Question Papers Solutions Cse PDF Sponsored High Speed Downloads www.OneStopGATE.com – GATE Study Material, Forum, downloads, discussions & more! GATE CS – 2000 SECTION - A 1. ... question (2.1 — 2.26), four possible alternatives (A, B, C and D) are given, out of which ONLY ONE is correct. Previous GATE question Papers | GATE online tests | GATE Question Bank | GATE textbooks | Forum | GATE Coaching | Study tips and ... does this system of equations have infinitely many solutions? (A) 0 (B) 1 (C) 2 (D) infinitely many . 42. A ... ... two distinct solutions (c) unique (d) none of the above . 72. The ... Previous GATE question Papers | GATE online tests | GATE Question Bank | GATE textbooks | Forum | GATE Coaching | Study tips Previous GATE papers (Question papers only) GK ... ... Gate 2012 Pdf Cse Full Gate 2012 Physics Paper ... Gate 2012 electrical paper gate cse material pdf free ... with solutions pdf gk publications gate ... Gate 2012 electrical engineering books gate 2013 ... ... SAIL MT written examination previous years solved question papers,SAIL free solved sample placement papers with solutions, SAIL ... CSE,EEE,ECE Mechanical engineering cse and all branch wise solved question papers,All IT GATE Papers"..... ... (Answer of the previous question is not required to solve the next ... ideal and non-ideal solutions; phase transformation: phase rule and phase diagrams – one, two, and three To assist students preparing for GATE exam, a set of previous question papers is also made available to the students in the department library. ... 25 EEE Problems And Solutions A K Jairath 19659 ... 33 CSE MICROPROCESSORS M RAFIQUZZAMAN 19699 THEORY AND example where our simple existential condition may work deeper than the main question highlighted in ... and nally the output sum gate. The gates have unbounded fan-in from the previous layer (only), GATE – 2014 MOCK Tests ECE, EIE, EE, CSE, ME, CE, CH, FT, ... (Previous papers) 2. Year wise (Previous papers) 3. Practice Tests (Topic wise) 4. ... * Ensures ready availability of tagged question banks with detailed solutions ... deeper than the main question highlighted in [9] ... the previous layer (only), ... b in the variables XnB, for each b2B, such that Ais the set of solutions of fx b = l b: b2Bg. This representation is not unique. The set Bis called a base of A. It has been accepted for inclusion in CSE Conference and Workshop Papers by an authorized administrator of DigitalCommons@University of Nebraska ... with each node representing a gate or a fanout point. With each gate, we associate a logic ... generate 2SAT solutions to the Boolean gate three principled methods for robustly estimating mixed ... UIUC CSE Fellowships. different moving objects, ... tive to noise than the previous two approaches [27,31,34]. Unfortunately, though the previous methods can provide
{"url":"http://ebookily.org/pdf/gate-previous-question-papers-solutions-cse","timestamp":"2014-04-16T14:07:33Z","content_type":null,"content_length":"22324","record_id":"<urn:uuid:5086a52a-cd56-479b-adf2-5d5bc5a2ae32>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Rebalancing an Investment Portfolio in the Presence of Convex Transaction Costs
Download the paper, in pdf.
John E. Mitchell
Department of Mathematical Sciences
Rensselaer Polytechnic Institute
Troy, NY 12180 USA
Stephen Braun
Warren & Selbert, Inc.
Santa Barbara, CA 93101
December 17, 2004.
The inclusion of transaction costs is an essential element of any realistic portfolio optimization. In this paper, we consider an extension of the standard portfolio problem in which convex transaction costs are incurred to rebalance an investment portfolio. In particular, we consider linear, piecewise linear, and quadratic transaction costs. The Markowitz framework of mean-variance efficiency is used. If there is no risk-free security, it may be possible to reduce the measure of risk by discarding assets, which is not an attractive practical strategy. In order to properly represent the variance of the resulting portfolio, we suggest rescaling by the funds available after paying the transaction costs. This results in a fractional programming problem, which can be reformulated as an equivalent convex program of size comparable to the model without transaction costs. An optimal solution to the convex program can always be found that does not discard assets. The results of the paper extend the classical Markowitz model to the case of convex transaction costs in a natural manner with limited computational cost. Computational results for two empirical datasets are discussed.
Download the paper, in pdf.
Return to my list of papers.
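The abstract is terse, so here is a heavily simplified numerical sketch of the general idea it describes — rescaling portfolio risk by the funds left after paying convex (here quadratic) transaction costs. Everything below (asset data, cost coefficient, risk-aversion weight, and the exact objective) is an illustrative assumption, not the paper's actual formulation, data, or algorithm.

```python
import numpy as np
from scipy.optimize import minimize

mu    = np.array([0.06, 0.08, 0.10])                    # assumed expected returns
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.090, 0.010],
                  [0.004, 0.010, 0.160]])               # assumed return covariance
w0    = np.array([0.50, 0.30, 0.20])                    # current holdings (fractions of wealth)
c, lam = 0.5, 5.0                                       # cost coefficient and risk aversion (assumed)

def objective(w):
    cost  = c * np.sum((w - w0) ** 2)                   # convex (quadratic) transaction cost
    funds = 1.0 - cost                                  # wealth remaining after rebalancing
    x = w / funds                                       # weights of the post-cost portfolio
    return lam * (x @ Sigma @ x) - (x @ mu)             # rescaled risk minus expected return

# Budget: new holdings plus the cost paid must equal the initial wealth of 1.
cons = [{"type": "eq", "fun": lambda w: np.sum(w) + c * np.sum((w - w0) ** 2) - 1.0}]
res = minimize(objective, w0, bounds=[(0.0, 1.0)] * 3, constraints=cons, method="SLSQP")
print(res.x, res.fun)
```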
{"url":"http://homepages.rpi.edu/~mitchj/papers/transcostsconvex.html","timestamp":"2014-04-16T13:06:19Z","content_type":null,"content_length":"2715","record_id":"<urn:uuid:f5482c0a-1b30-4bb1-b0a5-97e61a5b92c0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Football and Error
Over at Uncertain Principles, Chad is talking football. There's this pesky problem of spotting the ball at the end of the play. In a game where fractions of an inch can make or break the end result, too often the issue is determined by a more or less random guess by the referee of where the ball stopped. Instant replay has helped the issue, but not come anywhere close to fixing it. It's too imprecise and often made less useful because there's enormous football players diving for the ball and thus obscuring it from the cameras. There's good suggestions. DGPS, radar, optical tracking, and lots of other things have been proposed. I threw in dead reckoning by accelerometer as a left field suggestion, though it's probably among the least practical. The idea is that you put an accelerometer in the ball and integrate the acceleration twice with respect to time to determine the final position. Submarines (and I think maybe some aircraft) do this, though uncertainty builds up and the position becomes more and more uncertain with time. How uncertain? Well, let's take a look at the equation which gives you distance traveled if you're accelerating with some uniform acceleration a for a time t. (In football the acceleration is far from constant, but you can treat it as the limiting case of many very small accelerations over the course of the play.) The equation is:
d = (1/2) a t^2
There's two sources of error. The first is error involving uncertainty in the measured acceleration. If your measured value is a little lower than the real value, your calculated distance will be shorter than the real distance. Same thing with time. If you get the acceleration perfectly right but incorrectly measure the time the same kind of error will result. We can put approximate boundaries on this error by calculating how fast the distance function varies with slightly different accelerations and times, and multiplying by the error in those measurements. We do this with a little calculus. Calling the distance error delta d, we get:
delta d = sqrt[ ((1/2) t^2 · delta a)^2 + (a t · delta t)^2 ]
That's more formal than we need. In our football application it will be very easy to measure the time of the motion very precisely. The accelerations involved are large, but not so large that a modern clock circuit can't easily measure intervals small enough so that the acceleration is constant for all practical purposes. That means the second term under the square root is zero. And that means the square and square root undo each other, leading to:
delta d = (1/2) t^2 · delta a
We would like delta d to be small, maybe something close to 1cm. The accelerometer need only function over very short time intervals, because so long as the ball is not covered by the bodies of the players it can be tracked by optical means or something similar. Only during the final hit that stops the play is the position really at issue. The time t should be in the single second range or so. I propose that adequate accuracy might be reachable even with an accelerometer that's only good to within an error of 0.01 m/s^2 or so. Maybe even looser tolerances would do. That's still pretty precise, especially when the hits are so hard and produce such large accelerations. And you don't want the sensor to distort the handling characteristics of the ball in any way. As such my impractical suggestion probably remains impractical. But it might at least be an interesting thing for a project for a clever undergraduate engineer looking for a project for class!
1.
#1 Max Fagin January 6, 2009 “But it might at least be an interesting thing for a project for a clever undergraduate engineer looking for a project for class!” Figures. You propose this just AFTER I complete an engineering class where the whole point was to find and solve a simple engineering problem. 2. #2 Matthias January 6, 2009 A while ago there were somewhat similar considerations over here in Germany considering a tracking system for Soccer. There the main Problem was: Has or has not the ball crossed the line (see this about the infamous Wembley goal of ’66). IIRC they didn’t consider accelerometers, but putting something inside the ball was actually considered, so it should be feasible without distorting the characteristics of the ball. 3. #3 yud January 6, 2009 For the curious, here’s a product brochure for a commercially available inertial navigation system: MK 39 MOD 3A Ring Laser. The military loves inertial nav systems, because you can’t rely on GPS on the battlefield. 4. #4 andyb January 6, 2009 The fact that the ball is likely to be spinning (faster than an aircraft or submarine) and that the mass of the device will have to be small will make the challenge intractable, I think. 5. #5 Matt Springer January 6, 2009 Multiple accelerometers would probably do the trick, allowing you to subtract out spin. But fitting all that into a small pigskin without going over the 15 oz limit would be very tricky. 6. #6 TBRP January 6, 2009 I think this has been attempted before, but I don’t think they used any external tracking to synch up the IMU to reduce accumulated error. 7. #7 Tercel January 6, 2009 Just pointing out that the sort of laser accelerometer that they use for submarine navigation is staggeringly sensitive, and would reduce accumulated error problems to insignificance in football. Of course, they are also so fragile, big, etc. that you couldn’t actually put one in a football. The small accelerometers that you’d use for football are not quite so accurate, but they are still pretty good. Those laser accelerometers are really neat pieces of optical technology, actually. If you’re an optics dork like me. 8. #8 Comrade PhysioProf January 6, 2009 As I pointed out at Chad’s place, this is silly. The amount of variability introduced into the system by the inescapable judgment call as to *when* the ball is dead–either by the player being downed, going out of bounds, or having forward progress stopped–vastly outweighs the errors introduced by the ball-spotting and chain-measurement system. 9. #9 Matt Springer January 6, 2009 Don’t be too sure, a good real-time tracking system can in and of itself fix the forward progress and out of bounds calls. Both of those are purely positional. Just note where the ball leaves the field of play. It wouldn’t help for deciding when a player is downed, but instant replay could be adapted to work with a tracking system to at least fix things in the vast majority of cases. 10. #10 Comrade PhysioProf January 6, 2009 Do you guys even watch any fucking football? The ball is dead the instant when (1) the player’s foot touches the out of bounds line, (2) the player is downed, and (3) forward progress is stopped. Most plays end with the player being downed. This is a very difficult thing to call and cannot be automated. And your not gonna have an instant replay review of every fucking play that ends on a player being downed, or every game will last six fucking hours. 
There is absolutely no need for a more accurate measuring system for spotting the ball or measuring the first down! I can’t believe we’re even having this discussion! If you damn physicists wanna pop woodies about all kinds of laser doohickeys and satellites and nanoseconds and shit, how about the fucking baseball strike zone? Now THAT you fuckers should go to fucking TOWN on!! 11. #11 Donalbain January 7, 2009 OK.. here it is! Magnets! REALLY strong electromagnets. With switches. Magnets in the ball AND in the ground When the ball hits the ground, the magnets somehow get switched on. Then the ball can’t move. And they can set up that line up thing they do before turning off the magnets. Please note that I have not thought about this at all. 12. #12 Oldfart January 8, 2009 Comrade PhysioProf: You need to invest in an upgrade of your adjective bank. You seem to have run out and are using the same adjective over and over again. It’s pretty boring even if your comments are otherwise germane. 13. #13 Bob Sykes January 8, 2009 In every sport, there are three teams on the field, one of them being the officials. Each team makes errors of all kinds, which is one of the attractions of sports. The idea that one team must be perfect and the others may make errors is absurd. The only thing the players can reasonably expect is that the officials be consistent. It is up to the players to learn how the officials are calling the game, what fouls are being ignored, the size and location of the strike zone, etc. It follows that I am a fierce opponent of officials using instant replay. Instant replay destroys the flow of the game. Suck it up and play!. For a long time, baseball was perfect because there was no instant replay. Instant replay is like aluminum bats and worse than the designated hitter. I do like instant replay. But it should restricted to the amusement and incitement of the fans. I would allow it to be used to judge the performance of the officials in their end of year reviews. 14. #14 IBY January 8, 2009 How does delta d/delta a goes to 1/2a^2?
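For what it's worth, the estimate in the post is easy to check numerically. The sketch below is not from the original post; it uses the post's own ballpark numbers — a 1 s play and a 0.01 m/s² accelerometer error — and treats that error as a constant bias, which is the worst case the delta d = (1/2) t² · delta a formula describes. The 5 m/s² acceleration profile and the 1 kHz sample rate are arbitrary assumptions.

```python
import numpy as np

t_total, delta_a = 1.0, 0.01                 # play length (s) and accelerometer error (m/s^2)
print("formula estimate:", 0.5 * delta_a * t_total**2, "m")   # 0.005 m, i.e. half a centimeter

rate = 1000                                  # assumed samples per second
n = int(t_total * rate)
dt = 1.0 / rate
a_true = np.full(n, 5.0)                     # assumed constant 5 m/s^2 during the play
a_meas = a_true + delta_a                    # worst case: the sensor error is a constant bias

def final_position(a):
    v = np.cumsum(a) * dt                    # integrate acceleration -> velocity
    return np.sum(v) * dt                    # integrate velocity -> displacement

print("dead-reckoning error:", final_position(a_meas) - final_position(a_true), "m")
```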
{"url":"http://scienceblogs.com/builtonfacts/2009/01/06/football-and-error/","timestamp":"2014-04-19T22:22:51Z","content_type":null,"content_length":"61902","record_id":"<urn:uuid:deeb9f6f-5228-462e-a2ee-7f69b9ea4a3c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple program that makes it faster and easier to solve simultaneous equations using the solve command. This customised function solves one type of non-linear system of 3 simultaneous equations in 3 unknowns: k, L and r0 in 3 equations with 4 knowns: zeta, eeta, d and angle. function matrixresults =... Fully documented script file but this essentially solves the simultaneous equations Yvonne presented in a previous thread and does so for two 1 x 4 vectors. I then built four graphs and displayed them all on one plot. A function to solve simultaneous equations in two variables. >>> solve('3*x + 5*y = 29; 12*x - 3*y = 24') (3.0, 4.0) Two applications to set up Analog Computer using operational amplifiers have been shown here using Simscape 2.1 version. The applications solve i) Simultaneous Equations X+Y=2; 2X+Y=2.5 and ii) Differential Equation... a must be a n x n matrix of coefficients and b must be a n x 1 matrix of constants Example usage to solve this system of 2 simultaneous linear equations in 2 unknowns: 2x + 4x = 3 and 3x + 5x =... solvesimul.m can be easily adapted to solve more equations but as it stands it will solve 2 unknowns in 2 equations (1 non-linear and one linear). To solve equations of this form... a*x + b*y = A (1) a1*x^2 + b1*y^2 =... symsolvesimulv2.m can be easily adapted to solve more equations but as it stands it will solve 2 unknowns in 2 non-linear equations (can be linear too or a mixture). Big change: It now has data input integrity checking. Solves simultaneous linear equations of any order using Crammer's rule. Required input is two lists..one for coefficients, and other for constants eg. 2x+3y=8 x+4y=6 will be written as Lagrange is a function that calculate equations of motion (Lagrange's equations) d/dt(dL/d(dq))- dL/dq=0. It Uses the Lagrangian that is a function that summarizes the dynamics of the system. Symbolic Math Toolbox is required. Graphical user interface (GUI) is used to solve up to two ordinary differential equations (ODEs). Results can be plotted easily. Choose between MATLAB's ode45 (non-stiff solver) or ode15s (stiffer This is primarily a teaching... this is a simple GUI program to plot a beautiful graphs from mathematical equations Durand-Kerner method for solving polynomial equations. Halley's method for solving equations of type f(x)=0. Just a little bit of hack: a linear equations solver using eval and built-in complex numbers: >>> solve("x - 2*x + 5*x - 46*(235-24) = x + 2") It Solves linear homogeneous and non homogeneous differential equations with constant coefficients. The inputs and outputs are in symbolic format. You enter the symbolic differential equation and you get the answer in symbolic format. For an input of n equations, it converges to the solution. For an input of >n equations, there is no exact solution. In this case, the function minimizes sum( (individual equation errors).^2) It estimates the Newton Raphson optimization procedure for (m) unknowns of (n) non-linear equations. In case no Jacobian vector is presented, then the initial Jacobian vector is estimated by Broyden Method (multivariate secant approach) and it is... It is application of of Simulink Block to Cardiac PDE VI1 system of 2 non linear coupled equations of PDE and try to design solution with dicretized space and time and issue in 1D.I associate a Doc file to introduction theory.Just see this page... This function solves the linear fractional-order differential equations (FODE) with constant coefficients. 
The short memory principle has not been used here, so the length of the input signal is limited to a few hundred samples. The parameters of the...
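None of the packages listed above are reproduced here; as a minimal, self-contained point of comparison, solving a small linear system in plain Python looks like the sketch below (the 2×2 system is made up for the example).

```python
import numpy as np

# 2x + 4y = 3
# 3x + 5y = 1
A = np.array([[2.0, 4.0],
              [3.0, 5.0]])
b = np.array([3.0, 1.0])
print(np.linalg.solve(A, b))      # unique solution since det(A) = -2 != 0
```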
{"url":"http://www.sourcecodeonline.com/list?q=simultaneous_equations_aolution","timestamp":"2014-04-20T18:39:38Z","content_type":null,"content_length":"49688","record_id":"<urn:uuid:4ffc7c56-6202-43ce-8ec2-de71c898ef27>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
6moons industryfeatures: Thales Walking the paths less trodden, US importer Daniel R. Barnum of Half Note Audio just announced the addition of the Thales tangential pivotal tonearm from Switzerland to his lineup. The unusual construction and excellent photographs submitted suggested a somewhat more sizable notice than our News Room allows for so our Industry Features section it was instead. Relays Daniel: "The Thales arms are one of a kind custom-built tonearms built specifically for each turntable. Micha Huber, the designer of the Thales arm was inspired by the mathematical genius of Thales of Miletus, an ancient Greek mathematician who was part of a group known as the Seven Sages. Thales discovered that the circumferential angle subtended by a triangle in a semicircle is always a right angle. As a result, the half circle above the hypotenuse of a right-angled triangle is called the Thales Circle. "Mr. Huber used the Thales Circle to create the Thales Tonearm. The arm combines the advantages of pivoted tonearms with absolute tangential tracking. The patented construction reduces the perfectly tangential tracking to pivot points, while the cartridge is aligned on the Thales Circle. "The Thales Tonearm advantages are: • No tracking error and consequential resulting distortions • Minimal friction due to pivot bearings, no linear bearings, no active tracking • Short tonearm with little resonance • Symmetric inertia at the tracking point in all axis • Damping and compensation of the skating force through weights." Michael Fremer wrote in feeling that the animated graphic above was confusing because it "shows the stylus tracing an arc across the record, which is precisely what the arm is designed not to do". Explains Mr. Barnum that the arm is indeed designed to do exactly what the graphic shows. It's tangential but not linear. The stylus is guided on an arc across the record. This arc has "M" as center. But the stylus is aligned not to this center but at "B". So the tangential tracking occurs not by a linear movement but by guiding and aligning to different points. The following graphic illustrates this from the top to settle the confusion.
{"url":"http://www.6moons.com/industryfeatures/thales/thales.html","timestamp":"2014-04-16T10:10:15Z","content_type":null,"content_length":"9618","record_id":"<urn:uuid:f8f4657d-42f4-4b28-ac77-d3391e11ba42>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Time-Current Curves Time-current curves: similar to the one shown on the following page, are used to show how fast a breaker will trip at any magnitude of current. The following illustration shows how to read a time-current curve. The figures along the bottom (horizontal axis) represent multiples of the continuous current rating (In) for the breaker. The figures along the left side (vertical axis) represent time in seconds. Time in Seconds Multiple of In To determine how long a breaker will take to trip at a given multiple of In, find the multiple on the bottom of the graph and draw a vertical line to the point where it intersects the curve. Then draw a horizontal line to the left side of the graph and find the time to trip. For example, in this illustration a circuit breaker will trip when current remains at six times In for .6 seconds. Note that the higher the current, the shorter the time the circuit breaker will remain closed. Time-current curves are usually drawn on log-log paper. Many time-current curves also show the bandwidth, tolerance limits, of the curve. From the information box in the upper right hand corner, note that the time-current curve illustrated on the next page defines the operation of a Siemens MG frame circuit breaker. For this example, operation with an 800 ampere trip unit is shown, but, depending upon the specific breaker chosen, this circuit breaker may be purchased with a 600, 700, or 800 amp continuous current rating. Overload Protection: The top part of the time-current curve shows the continuous Current performance of the circuit breaker. The black line shows the nominal performance of the circuit breaker and the gray band represents possible variation from this nominal performance that can occur even under specified conditions. Using the example of an MG breaker with an 800 amp continuous current rating (In), note that the circuit breaker can be operated at 800 amps (1.0 times In) indefinitely without tripping. However, the top of the trip curve shows that an overload trip will occur in 10,000 seconds at 1000 amps (1.25 times In). Additionally, the gray area on either side of the trip curve shows the range of possible variation from this response. Keep in mind that this trip curve was developed based upon predefined specifications, such as operation at a 40°C ambient temperature. Variations in actual operating conditions will result in variations in circuit breaker performance. Instantaneous Trip: The middle and bottom parts of this time-current curve show Instantaneous trip (short circuit) performance of the circuit breaker. Note that the maximum clearing time (time it takes for the breaker to completely open) decreases as current increases. This is because of high-speed contact designs which utilize the magnetic field built up around the contacts. As current increases, the magnetic field strength increases, which speeds the opening of the This circuit breaker has an adjustable instantaneous trip point from 3250 to 6500 amps, which is approximately four to eight times the 800 amp continuous current unit rating. This adjustment affects the middle portion of the trip curve, but not the top and bottom parts of the curve. The breaker shown in this example has a thermal-magnetic trip unit. Circuit breakers with solid-state trip units typically have additional adjustments True RMS Sensing Some solid state circuit breakers sense the peak values of the current sine wave. 
This method accurately measures the heating effect of the current when the current sine waves are perfectly sinusoidal. Frequently, however, the sine waves are harmonically distorted by non-linear loads. When this happens, peak current measurement does not adequately evaluate the true heating effect of the Siemens solid state trip unit circuit breakers incorporate true root-mean-square (RMS) sensing to accurately sense the effective value of circuit current. True RMS sensing is accomplished by taking multiple, instantaneous “samples” of the actual current wave shape for a more accurate picture of its true heating value. The microcomputer in Siemens solid state trip unit breakers samples the AC current waveform many times a second, converting each value into a digital representation. The microcomputer then uses the samples to calculate the true RMS value of the load current. This capability allows these circuit breakers to perform faster, more efficiently and with repeatable accuracy. Being able to monitor true RMS current precisely is becoming more important in today’s electrical distribution systems because of the increasing number of power electronic devices being used that can distort the current waveform. Adjustable Trip Curves One of the key features of solid state trip unit circuit breakers is the ability to make selective adjustments to the circuit breaker’s time-current curve. The time-current curve shown here is for a circuit breaker in the SJD6-SLD6 family. Solid State Circuit Breaker Adjustments The type of trip unit included in a circuit breaker determines the specific time-current curve adjustments available. . The following illustration and associated table describes the adjustments available. 1 comment: 1. Thanx MP
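The sampling idea behind true RMS sensing is easy to illustrate. The sketch below is not vendor code and the distorted waveform is an arbitrary example; it compares a true-RMS estimate computed from samples with a peak-based estimate that implicitly assumes a pure sine wave.

```python
import numpy as np

t = np.linspace(0.0, 1.0 / 60.0, 2048, endpoint=False)        # one 60 Hz cycle
i = 100.0 * np.sin(2 * np.pi * 60 * t) + 30.0 * np.sin(2 * np.pi * 180 * t)  # fundamental + 3rd harmonic

true_rms = np.sqrt(np.mean(i ** 2))           # sample the wave and average i^2, then take the root
peak_rms = np.max(np.abs(i)) / np.sqrt(2)     # "peak divided by sqrt(2)" is only valid for a pure sine
print(true_rms, peak_rms)                     # the two estimates disagree for a distorted wave
```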
{"url":"http://aircircuitbreakers.blogspot.com/2009/01/time-current-curves.html","timestamp":"2014-04-19T06:53:56Z","content_type":null,"content_length":"51181","record_id":"<urn:uuid:a6439ee8-043a-4167-84de-873c10d22fcc>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Think of something observable – countable – that you care about with only one outcome or another. It could be the votes cast in a two-way election in your town, or the free throw shots the center on your favorite... precise pangolin (Ubuntu 12.04) Following the crash of my hard drive right before leaving Kyoto, I bought a cheap Compaq Presario CQ57 to reinstall Ubuntu 12.04 over the weekend (and have a laptop available before leaving for Australia…) It took about one hour to install from the DVD and everything seems to be working out of the box. The My first competition at Kaggle For me Kaggle becomes a social network for data scientist, as stackoverflow.com or github.com for programmers. If you are data scientist, machine learner or statistician you better off to have a profile there, otherwise you do not exist. Nevertheless, I won’t bet on rosy future for data scientist as journalists suggest (sexy job for next impacTwit : How big is your work on twitter? There’s a great Tom Waits song from the album “Mule Variations” called “Big in Japan”. The beauty of saying you’re big in Japan is that no one can ever really verify the statement (or at least that was more true in 1999). You might assert “my work is big on twitter”, and hey, how would I know?... Solving Big Problems with Oracle R Enterprise, Part II Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this Factor Attribution 2 I want to continue with Factor Attribution theme that I presented in the Factor Attribution post. I have re-organized the code logic into the following 4 functions: factor.rolling.regression – Factor Attribution over given rolling window factor.rolling.regression.detail.plot – detail time-series plot and histogram for each factor factor.rolling.regression.style.plot – historical style plot for selected 2 factors factor.rolling.regression.bt.plot Bayesian Nonparametrics in R On July 25th, I’ll be presenting at the Seattle R Meetup about implementing Bayesian nonparametrics in R. If you’re not sure what Bayesian nonparametric methods are, they’re a family of methods that allow you to fit traditional statistical models, such as mixture models or latent factor models, without having to fully specify the number of Hodgkin-Huxley model in R One of the great research papers of the 20th century celebrates its 60th anniversary in a few weeks time: A quantitative description of membrane current and its application to conduction and excitation in nerve by Alan Hodgkin and Andrew Huxley. Only a... Actuarial models with R, Meielisalp I will be giving a short course in Switzerland next week, at the 6th R/Rmetrics Meielisalp Workshop & Summer School on Computational Finance and Financial Engineering organized by ETH Zürich, https:/ /www.rmetrics.org/. The long... The R-Podcast Screencast 2: Visualization with ggplot2 Here is the second screencast episode of the R-Podcast to accompany episode 8 of the R-Podcast: Visualization with ggplot2. In this screencast I demonstrate a real-time session of using ggplot2 to create boxplots for a visualization of hockey attendance in the NHL. The R code created in this screencast is available in our GitHub repository,
{"url":"http://www.r-bloggers.com/search/time%20series/page/106/","timestamp":"2014-04-17T21:38:30Z","content_type":null,"content_length":"38889","record_id":"<urn:uuid:bd2378d8-a5c9-41bf-a10e-e9541ef76074>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
The halting problem revisited On 27.03.2011 23:18, Patricia Shanahan wrote: > On 3/27/2011 2:03 PM, Screamin Lord Byron wrote: >> On 27.03.2011 22:30, Patricia Shanahan wrote: >>> On 3/27/2011 1:19 PM, Screamin Lord Byron wrote: >>>> On 27.03.2011 21:18, Patricia Shanahan wrote: >>>>> On 3/27/2011 9:10 AM, Screamin Lord Byron wrote: >>>>>> On 26.03.2011 21:56, Dirk Bruere at NeoPax wrote: >>>>>>> The Human brain is also subject to the Turing >>>>>>> limitation (as far as is known) >>>>>> If what you're saying is true, then how can the human brain prove >>>>>> things >>>>>> like, say, Fermat's last theorem? >>>>> How would you deduce inability to prove Fermat's last theorem from the >>>>> Turing limitation? >>>> There can not be an algorithmic proof of Fermat's last theorem because >>>> of the halting problem.... >>> Why does the halting problem imply anything at all about the existence >>> or otherwise of a proof for Fermat's last theorem? >>> Can you give a proof, or point to one I can read? >> Yes. Sir Roger Penrose's book "Emperor's New Mind". I have a copy which >> is translated to my language, so I can't give you exact page numbers >> where he talks about that specifically (should be within first 100 >> pages), but the book is absolutely worth a read in its entirety. > I don't have a copy, so that is not currently one I can read. Perhaps > you can restate the proof in your own words, or give an on-line reference? In fact I have. I didn't have much hope to find it online, but I got lucky I guess. Long link: <http://books.google.hr/books?id=oI0grArWHUMC&pg=PA75&lpg=PA75&dq=emperor' s+new+mind+hilbert+problem&source=bl&ots=04Ljj-YNVy&sig=GNJwD-YkfZ1R5u2oFHD2vjrEqns&hl=hr&ei=T6yPTYnyHIrysgbi7Y2 NCg&sa=X&oi =book_result&ct=result&resnum=1&ved=0CB gQ6AEwAA#v=onepage&q&f=false>
{"url":"http://www.velocityreviews.com/forums/t745747-p2-the-halting-problem-revisited.html","timestamp":"2014-04-19T05:03:59Z","content_type":null,"content_length":"69887","record_id":"<urn:uuid:2e79fa8b-b796-4538-8752-853e658f1a85>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
May 6th 2008, 09:45 AM
I'm a newbie so didn't really know where to post this, but I hope someone here can help me! I'm just beginning to study sequences, and I'm getting nowhere with it. Can anyone please help me with these two problems:
1.) Show that the sequence (1/n^k), n ∈ N, is convergent if and only if k>=0, and that the limit is 0 for all k>0.
2.) Determine the least value of N such that n/(n^2+1)<0.0001 for all n>=N
May 6th 2008, 01:57 PM
I'm a newbie so didn't really know where to post this, but I hope someone here can help me! I'm just beginning to study sequences, and I'm getting nowhere with it. Can anyone please help me with these two problems:
1.) Show that the sequence (1/n^k), n ∈ N, is convergent if and only if k>=0, and that the limit is 0 for all k>0.
2.) Determine the least value of N such that n/(n^2+1)<0.0001 for all n>=N
$p \implies q$ Since $a_n$ is convergent it must be bounded (why?) Now consider the three cases k < 0; k=0; k > 0. If k < 0 then k = -m for some m > 0. But then $\frac{1}{n^k}=\frac{1}{n^{-m}}=n^m$. But this cannot happen because the above is not bounded. If k=0, $a_n=1$ for all n. If k > 0 the sequence is monotonically decreasing, $a_{n+1}< a_{n}$ for all n. This is bounded below by zero and must converge to zero. $q \implies p$ is fairly straightforward. Use the definition. Hint: let $\epsilon > 0$ Choose $N=\left( \frac{1}{\epsilon} \right)^{1/k}$ For #2 Clear the fraction to get $n=0.0001n^2+0.0001$ Solve the quadratic for n. Good luck.
May 10th 2008, 05:08 AM
On part 2 though, why is it $n=0.0001n^2+0.0001$? and not $n<0.0001n^2+0.0001$?
May 10th 2008, 01:47 PM
It is. It is just easier to solve the equality to get n=9999.9999, but since n needs to be a natural number we choose 10,000. P.S. we don't use the other root. (why?) I hope this helps.
May 11th 2008, 06:23 AM
It does help, thank you so much! :) I do have to ask though, why isn't the other root used?? & in the first question, ok, I get the first part, and taking the different values for k, and I understand in my head because it is bounded below by 0, it must converge to 0, but I don't get the proof? Like, what do I do with Hint: let $\epsilon > 0$ Choose $N=\left( \frac{1}{\epsilon} \right)^{1/k}$? Sorry about this, but could you spell it out for me please? I've never hit such an absolute immovable wall before over a maths topic, but I am not getting sequences at all (Sadsmile) and I have this assignment due in soon (Worried) Thank you so much for your time and help! (Handshake)
May 11th 2008, 08:48 AM
It does help, thank you so much! :) I do have to ask though, why isn't the other root used?? & in the first question, ok, I get the first part, and taking the different values for k, and I understand in my head because it is bounded below by 0, it must converge to 0, but I don't get the proof? Like, what do I do with Hint: let $\epsilon > 0$ Choose $N=\left( \frac{1}{\epsilon} \right)^{1/k}$? Sorry about this, but could you spell it out for me please? I've never hit such an absolute immovable wall before over a maths topic, but I am not getting sequences at all (Sadsmile) and I have this assignment due in soon (Worried) Thank you so much for your time and help! (Handshake)
We wish to show that $|a_n-L| < \epsilon$. Let $\epsilon > 0$ be given. Let $N=\left( \frac{1}{\epsilon} \right)^{1/k}$. Then for all $n> N$ we get
$\left( \frac{1}{\epsilon}\right)^{1/k}<n$
Moving some factors around we get
$\frac{1}{n} < \epsilon^{1/k}$
Note: the above manipulation is okay because n and epsilon are both positive.
Raising both sides to the kth power we get
$\frac{1}{n^k}< \epsilon$
We will use this to prove what we want. Now for all $n>N$ we get
$|a_n-L|=|\frac{1}{n^k}-0|=|\frac{1}{n^k}|<\epsilon$

May 11th 2008, 12:45 PM

Thank you so much! I get all of the first one now! The thing I still don't get about the 2nd one (& I'm sorry to keep bothering you on this!) is that normally you have n > [some equation with epsilon], but in this case you are just given the value of epsilon, so you don't have to sub a value for epsilon into an equation to get the value of N... so where do you get the value of N? How does having n = 10,000 help? And a follow up question: you know the way it should be |a_n - L| < epsilon whenever n > N, well what if the sign is reversed? And it's |a_n - L| >= epsilon for all n > N? It's in a similar question to the above one, i.e.: Determine the least value of N such that n^2 + 2n >= 9999 for all n > N. Thank you so much for all your help! I'm stumped!

May 11th 2008, 01:10 PM

Thank you so much! I get all of the first one now! The thing I still don't get about the 2nd one (& I'm sorry to keep bothering you on this!) is that normally you have n > [some equation with epsilon], but in this case you are just given the value of epsilon, so you don't have to sub a value for epsilon into an equation to get the value of N... so where do you get the value of N? How does having n = 10,000 help? And a follow up question: you know the way it should be |a_n - L| < epsilon whenever n > N, well what if the sign is reversed? And it's |a_n - L| >= epsilon for all n > N? It's in a similar question to the above one, i.e.: Determine the least value of N such that n^2 + 2n >= 9999 for all n > N. Thank you so much for all your help! I'm stumped!

We are trying to find the smallest natural number N such that the inequality is true. For example, with the equation $n^2+2n \ge 9999$: if we solve this for equality (and get a fraction) we can then choose the next natural number to make the inequality hold. So we want to solve
$n^2+2n=9999 \iff n^2+2n-9999=0 \iff (n+101)(n-99)=0$
so we get 2 solutions, n = 99 or n = -101. We get rid of -101 because it is not a natural number, i.e. a positive integer. The above inequality will hold for all $n \ge 99$. We can check this in the original inequality: $99^2+2(99)=9801+198=9999 \ge 9999$. If you check any number greater than 99 it will work, but any number less than 99 will not work. I hope this clears it up.

May 11th 2008, 01:14 PM

It certainly does TheEmptySet!! Thank you so so much for your help!! You rock!!!
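A quick numerical check of the N = 10,000 answer from earlier in the thread (my own verification, not part of the original posts): for n = 10,000,
$\frac{10000}{10000^2+1} \approx 9.9999999 \times 10^{-5} < 0.0001,$
while for n = 9999,
$\frac{9999}{9999^2+1} = \frac{9999}{99980002} \approx 1.0001 \times 10^{-4} > 0.0001,$
so 10,000 really is the least value of N for which n/(n^2+1) < 0.0001 holds for all n >= N.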
{"url":"http://mathhelpforum.com/advanced-math-topics/37400-sequences-print.html","timestamp":"2014-04-20T18:54:09Z","content_type":null,"content_length":"19889","record_id":"<urn:uuid:42fda7b5-9a80-44dc-8c75-70c151e6310b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: (no subject)
Paul LEVY levy at pps.jussieu.fr
Mon Feb 18 16:11:42 EST 2002

Joe Shipman wrote: "After all, the initial year of someone is called FIRST year."

Indeed it is, but that's a bad convention. Here is the terminology I use, that I offer to anyone who might find it useful. There are two conventions for ordinals:
"obaz" - ordinals begin at zero
"obao" - ordinals begin at one
My suggestion is to explicitly state which convention you are using.
The obaz first US president from the Bush family is George W. Bush.
The obao second US president from the Bush family is George W. Bush.
The obaz zeroth year of the obaz twentieth century was obaz 2000.
The obao first year of the obao twenty-first century was obao 2001.
The answer is the contents of cell obaz i in the array.
The answer is the contents of cell obao i+1 in the array.
The convention in English is obao, with the exception of floors (in British English) and ages. But of course it is obaz that is mathematically more natural. Stating "obaz" explicitly allows you to use it without clashing with the conventions of English.
A related useful notation, where alpha is an ordinal, is to write $alpha for the set of ordinals < alpha, the canonical well-ordered set of order-type alpha. Thus an array of size n is obaz-indexed by $n. Of course in the usual ZF implementation, alpha and $alpha are the same, but that's just an implementation.
The Pope is mathematically correct calling the 2000th year year number 2000 anno domini. I think that the Pope would not regard the year 1 BCE as an annus domini. So he would consider the year you're referring to to be the obaz 1999th year, or year number obaz 1999 anno domini. Since I consider obaz to be mathematically more natural, I don't agree with your statement.
More information about the FOM mailing list
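To make the array remark above concrete, here is a tiny C illustration of the two conventions (my own addition, not from the post; the array contents are arbitrary sample values):

#include <stdio.h>

int main(void)
{
    int cells[4] = {10, 20, 30, 40};   /* arbitrary sample data */
    int i = 2;

    /* obaz: cell obaz i is cells[i], since C arrays begin at zero */
    printf("cell obaz %d = %d\n", i, cells[i]);       /* prints 30 */

    /* obao: the same physical cell is called cell obao i+1 */
    printf("cell obao %d = %d\n", i + 1, cells[i]);   /* prints 30 */

    return 0;
}

The point is only that the same cell carries two names, obaz i and obao i+1, which is exactly the bookkeeping the post is trying to make explicit.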
{"url":"http://www.cs.nyu.edu/pipermail/fom/2002-February/005297.html","timestamp":"2014-04-17T22:02:25Z","content_type":null,"content_length":"4114","record_id":"<urn:uuid:80bc3651-e251-4dcc-98b4-f81685edfa11>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Tutorials - Power of Two Challenge Solution

Solution to Power of Two Challenge

A power of two will look like this in memory: a string of zeros, with a lone one. Now, if you subtract 1 from a power of two, you'll get, with all numbers in binary:

01000000 - 00000001 = 00111111

a string of ones! If you take the bitwise AND of the two values, you get 0. In binary:

01000000 & 00111111 = 00000000

On the other hand, if you don't have a power of two, you'll have at least one additional 1:

01000001

When you subtract 1, you'll still have at least one "on" bit (with a value of 1) in the same position as before, so taking the bitwise AND of the two numbers will not result in a string of 0s:

01000001 - 00000001 = 01000000
01000000 & 01000001 = 01000000

So to tell if an integer is a power of two:

int is_power(int x)
{
    return !((x - 1) & x);
}

Note that we have to use the logical NOT, !, instead of the bitwise complement since the bitwise complement will not negate non-zero values; it just flips bits.
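A minimal test harness for the idea above (my own addition, not part of the original tutorial). Note that the two-line version reports 0, and on two's-complement machines also INT_MIN, as powers of two, so this sketch adds an explicit x > 0 guard:

#include <stdio.h>

/* Same bit trick as above, with a guard so that zero and negative
   inputs are not reported as powers of two. */
static int is_power_checked(int x)
{
    return x > 0 && !((x - 1) & x);
}

int main(void)
{
    int tests[] = {0, 1, 2, 3, 4, 7, 8, 64, 65, 1024};
    int n = (int)(sizeof tests / sizeof tests[0]);

    for (int i = 0; i < n; i++)
        printf("%5d -> %s\n", tests[i],
               is_power_checked(tests[i]) ? "power of two" : "not a power of two");
    return 0;
}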
{"url":"http://www.cprogramming.com/tutorial/powtwosol.html","timestamp":"2014-04-19T22:12:59Z","content_type":null,"content_length":"21276","record_id":"<urn:uuid:6dcc41e0-ebf6-4b65-9930-53bf2e2fbb67>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Why would one prefer ZFC to ZC?
Andrej Bauer andrej.bauer at andrej.com
Sat Jan 30 15:08:09 EST 2010

On Sat, Jan 30, 2010 at 8:24 AM, <T.Forster at dpmms.cam.ac.uk> wrote:
> You can take the axiom of infinity in at least three forms
> (i) The Von Neumann omega exists
> (ii) There is a Dedekind-infinite set
> (iii) V_\omega exists
> In ZF they are all equivalent. Without replacement they are all
> inequivalent. See e.g. Adrian Mathias' *slim models* paper in the JSL.

If I understand the discussion correctly, this is supposed to be a reason why replacement is desirable, i.e., we want different notions of infinity to be equivalent. But why do we want such a thing?

For example, in computable mathematics it is beneficial to distinguish between different notions of finiteness because they naturally arise and are not computably equivalent. For a specific example, consider the set of complex zeroes of a general polynomial of degree at most n in two cases:
(a) when the polynomial has integer coefficients
(b) when the polynomial has (computable) real coefficients
In case (a) the set of zeroes can be computably listed without repetitions, because algebraic numbers form a decidable field. In case (b) we cannot list them computably, even with repetitions, but we can compute for every epsilon > 0 a list of at most n approximate zeroes such that the Hausdorff distance from the actual zeroes is less than epsilon.

The way I see it, the desire for just one notion of infinity is just a reflection of the deeper desire for just one eternal and absolute universe of sets in which we can all live happily ever after. But that's a form of mysticism. It is entirely possible to adopt a standpoint in which one appreciates a wealth of different notions, even when they cannot be all put together in a single consistent picture. This way we get to study a gallery of pictures rather than just a single one.

With kind regards,

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2010-January/014356.html","timestamp":"2014-04-18T20:47:50Z","content_type":null,"content_length":"4445","record_id":"<urn:uuid:71d4bafb-2704-4829-bce3-caefecd6a95d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
homework question Functions It asks to find the domain and range f(x) = 3x + 5 Domain and range are both real numbers (or could be complex) giving $f:\mathbb{R}\to\mathbb{R}$ You should recognise this as the equation of a straight line. so.... the domain is {x|x∈R} and the range is {y|y ≤ 3, y∈R} ? 'cause I'm also having trouble with this.
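A quick way to see that the range really is all of $\mathbb{R}$, not something like y ≤ 3 (a worked check added here, not part of the original thread): pick any target value y and solve for x,
$y = 3x + 5 \;\Rightarrow\; x = \frac{y-5}{3},$
so every real number y is produced by some real input x; for instance y = 3 is attained at x = -2/3, and y = 100 at x = 95/3. Hence the domain is {x | x ∈ R} and the range is {y | y ∈ R}.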
{"url":"http://mathhelpforum.com/pre-calculus/52408-homework-question-functions.html","timestamp":"2014-04-17T05:35:12Z","content_type":null,"content_length":"33407","record_id":"<urn:uuid:36172529-252b-45fd-8808-4c6f42aff412>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
HPL_pnum Rank determination. #include "hpl.h" int HPL_pnum( const HPL_T_grid * GRID, const int MYROW, const int MYCOL ); HPL_pnum determines the rank of a process as a function of its coordinates in the grid. GRID (local input) const HPL_T_grid * On entry, GRID points to the data structure containing the process grid information. MYROW (local input) const int On entry, MYROW specifies the row coordinate of the process whose rank is to be determined. MYROW must be greater than or equal to zero and less than NPROW. MYCOL (local input) const int On entry, MYCOL specifies the column coordinate of the process whose rank is to be determined. MYCOL must be greater than or equal to zero and less than NPCOL. See Also HPL_grid_init, HPL_grid_info, HPL_grid_exit.
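For intuition only: in a process grid whose ranks are laid out row by row, the rank of the process at coordinates (MYROW, MYCOL) is MYROW * NPCOL + MYCOL. The sketch below is a stand-alone illustration of that row-major layout; it is not the library routine itself, whose actual mapping depends on the ordering recorded in the GRID structure at HPL_grid_init time.

#include <stdio.h>

/* Illustrative only: rank of process (myrow, mycol) in an nprow x npcol
   grid, assuming the processes are numbered in row-major order. */
static int pnum_row_major(int nprow, int npcol, int myrow, int mycol)
{
    (void)nprow;                    /* kept only to mirror the grid shape */
    return myrow * npcol + mycol;   /* 0 <= myrow < nprow, 0 <= mycol < npcol */
}

int main(void)
{
    /* A 2 x 3 grid laid out row-major:
         0 1 2
         3 4 5                       */
    printf("rank of (1,2) in a 2x3 grid: %d\n", pnum_row_major(2, 3, 1, 2)); /* 5 */
    return 0;
}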
{"url":"http://www.netlib.org/benchmark/hpl/HPL_pnum.html","timestamp":"2014-04-17T15:30:20Z","content_type":null,"content_length":"1725","record_id":"<urn:uuid:7310212a-3cf2-4e34-ab65-eb6635eb386c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Functional/variational derivative and the Leibniz rule

I am currently trying to understand the BV-formalism, which makes heavy use of the functional derivative. Let us consider the functional derivative, as defined in for example its Wikipedia article. Let $F$ be a functional, i.e. a map from, say, $C^\infty(\mathbb{R})$ to $\mathbb{R}$, and suppose it may be written as $F[\phi] = \int f\big(x,\phi(x),\phi'(x),\dots,\phi^{(n)}(x)\big)\,dx$ for some function $f$ which depends on the derivatives of $\phi$ up to order $n$. Then the functional derivative of $F$ is $\displaystyle \frac{\delta F}{\delta \phi} = \sum_{i=1}^n(-1)^i\frac{d^i}{dx^i}\frac{\partial f}{\partial \phi^{(i)}}$. Now, my background is that of differential equations and differential geometry, i.e. jet spaces and variational calculus and the like. In that area, the latter operator, $\sum_{i}(-1)^i\frac{d^i}{dx^i}\frac{\partial}{\partial \phi^{(i)}}$, is well known; it is called the variational derivative. Summarizing, then, we seem to have that the functional derivative of a functional is the variational derivative of (one of its) densities. Since the variational derivative involves lots of derivatives, it certainly does not satisfy the Leibniz rule, i.e. it is not a derivation. In various places, however, I've come across the statement that the functional derivative does satisfy the Leibniz rule. (That already seems unexpected to me: how can an operator which is so intimately connected to a decidedly non-derivation be a derivation?) There are various ways to prove it, but I would like to understand this fact in terms of the variational derivative, if possible. So: how is the Leibniz rule of the functional derivative related to the variational derivative; can the former be expressed somehow in terms of the latter?

1) Maybe you should provide a precise statement of the Leibniz rule you're referring to. 2) I don't see any difference between what you call the "functional derivative" and what is often called the "variational derivative". Either way, it tells you what the directional derivative $ \left. F[\phi + t\dot\phi]\right|_{t=0} $ is, where you've integrated by parts to shift all the derivatives off $\dot\phi$. – Deane Yang Oct 15 '12 at 13:57
The difference is that the variational derivative (as I understand it anyway) acts on ordinary functions, such as $f$, by the operator described above; while the functional derivative acts on function*als*, such as $F$. – Sietse Ringers Oct 15 '12 at 14:05
And what is the variational derivative (acting on an ordinary function) used for? – Deane Yang Oct 15 '12 at 14:15
The Leibniz rule probably holds on the level of the functionals, which leads to the question: is the product $F[\varphi]G[\varphi]$ of two such functionals still a functional which can be written as the integral over a function $F[\varphi]G[\varphi]=\int h(\ldots)$? – Michael Bächtold Oct 15 '12 at 14:45
Deane: the variational derivative is defined on the space of functions on jet space more or less as a jet space-analog of the functional derivative, but without the actual functional aspect (as described above and below). In short, it is important in for example the geometry of jet spaces (in particular in the horizontal cohomology, where it comes from the de Rham differential along the fibers); and in mathematical physics, since one can express Euler-Lagrange equations in terms of it.
– Sietse Ringers Oct 16 '12 at 7:44

1 Answer (accepted)

Connection of functional derivative with variational derivative: $\frac{\delta}{\delta\phi(x)} F[\phi] = \frac{\delta F[\phi]}{\delta\phi}(x)$. Note that the variational derivative carries an extra coordinate variable dependence. It helps to make it explicit when there is similar confusion.

Functional derivative Leibniz rule: $\frac{\delta}{\delta\phi(x)} F[\phi] G[\phi] = \frac{\delta F[\phi]}{\delta\phi}(x) G[\phi] + F[\phi] \frac{\delta G[\phi]}{\delta\phi}(x)$.

Special case: $F_x[\phi] = \phi(x)$, $G_{i,y}[\phi] = (\partial_i\phi)(y)$, and
$$\frac{\delta}{\delta\phi(z)} F_x[\phi] G_{i,y}[\phi] = \delta(x-z)\, (\partial_i\phi)(y) - \phi(x)\, \frac{d}{dz^i}\delta(y-z).$$

Notice the distributional coefficients in the derivatives. There is no way to get away from them if you wish to consider $\phi(x)$ and such as functionals in their own right.

If you are interested in the BV formalism in the physics literature, where the distinction between the functional and variational derivatives is barely remarked, I recommend the reviews by Henneaux and by Gomis, París and Samuel: doi:10.1016/0920-5632(90)90647-D, doi:10.1016/0370-1573(94)00112-G.

If you are interested in the BV formalism purely from the point of view of jets, without bringing functionals into the picture, other than peripherally, I recommend the early paper of McCloud and this sequence of papers by Barnich, Brandt and Henneaux: arXiv:hep-th/9307022, arXiv:hep-th/9405109, arXiv:hep-th/9405194, arXiv:hep-th/0002245.

If you are more interested in the BV formalism from the functional point of view, with the appropriate level of functional analysis included, and with jets appearing only peripherally, I recommend the papers by Fredenhagen and Rejzner, as well as Rejzner's thesis: arXiv:1101.5112, arXiv:1110.5232, arXiv:1111.5130.

I should have asked for an explanation for what "BV formalism" is, so thank you for answering this anyway. – Deane Yang Oct 15 '12 at 15:36
Thank you, Igor, for this answer. It was not precisely what I was looking for, but that's to be expected because I don't think I managed to write down exactly what I was looking for. – Sietse Ringers Oct 16 '12 at 7:36
I would welcome your favorite expository reference for one of the many various interpretations of `BV formalism'. – Jim Stasheff Oct 14 '13 at 14:43
@JimStasheff, I would say that my personal favorite expository reference is the review article by Henneaux, which is already cited above. It conveys a lot of intuition and contains pretty much all the heuristics you need to make the constructions formal. – Igor Khavkine Oct 15 '13 at 10:19
{"url":"http://mathoverflow.net/questions/109717/functional-variational-derivative-and-the-leibniz-rule?sort=oldest","timestamp":"2014-04-19T12:35:09Z","content_type":null,"content_length":"66028","record_id":"<urn:uuid:52295425-0f62-4435-8e14-72afa74d335b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Matheology § 163    Replies: 1    Last Post: Nov 27, 2012 2:49 AM

Matheology § 163
Posted: Nov 27, 2012 1:53 AM

Matheology § 163
First hidden necessary condition of Cantor's proof. - In the middle of the XX c., meta-mathematics announced Cantor's set theory "naive" and soon the very mention of the term "actual infinity" was banished from all meta-mathematical and set theoretical tractates. The ancient logical, philosophical, and mathematical problem, which during millenniums troubled outstanding minds of humankind, was "solved" according to the principle: "there is no term - there is no problem". So, today we have a situation when Cantor's theorem and its famous diagonal proof are described in every manual of axiomatic set theory, but with no word as to the "actual infinity". However, it is obvious that if the infinite sequence (1) of Cantor's proof is potential then no diagonal method will allow to construct an individual mathematical object, i.e., to complete the infinite binary sequence y*. Thus, just the actuality of the infinite sequence (1) is a necessary condition (a Trojan Horse) of Cantor's proof, and therefore the traditional, set-theoretical formulation of Cantor's theorem (above) is, from the standpoint of classical mathematics, simply wrong and must be re-written as follows without any contradiction with any logic.
[A.A. Zenkin: "Scientific Intuition of Genii Against Mytho-'Logic' of Cantor's Transfinite Paradise", Procs. of the International Symposium on "Philosophical Insights into Logic and Mathematics," Nancy, France, 2002, p. 2]
Regards, WM
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2417066","timestamp":"2014-04-16T19:57:07Z","content_type":null,"content_length":"18611","record_id":"<urn:uuid:04e71c84-3822-44a0-93ea-bdae0f36ec4c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
absolute values are SCArY!!!

August 26th 2009, 05:46 PM
absolute values are SCArY!!!
i understand that an absolute value is the distance on a number line from zero, and that abs(y)=8 would be 8, -8, but when it gets to like abs(c/4)=9, i know you have to make two equations, c/4=9 and c/4=-9, because my sheet says so, but why? i don't understand why the 9 is affected when it is not in the absolute value bars.......

August 26th 2009, 05:59 PM
By definition $|a|\leq b \Rightarrow -b \leq a \leq b$
Look at the function $y = |x|$

August 26th 2009, 06:12 PM
Quote (pickslides): By definition $|a|\leq b \Rightarrow -b \leq a \leq b$
i was never taught absolute value by that equation before... so if abs(w-8)=7, then would i set up the equation to be w-8=16 and w-8=-16 and then solve?

August 26th 2009, 06:26 PM
So what are you asking? I am confused how 7 can be equal to 16? I'm guessing you are saying $|w-8|\leq 7 \Rightarrow -7 \leq w-8 \leq 7$
Therefore solve both
$-7 \leq w-8$
$1 \leq w$
$w \geq 1$
$w-8 \leq 7$
$w \leq 15$
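Tying this back to the abs(c/4) = 9 example from the opening post (worked out here for illustration):
$\left|\tfrac{c}{4}\right| = 9 \;\Rightarrow\; \tfrac{c}{4} = 9 \text{ or } \tfrac{c}{4} = -9 \;\Rightarrow\; c = 36 \text{ or } c = -36.$
The 9 itself never goes inside the bars; the two equations simply record that the quantity inside the bars can sit 9 units to the right or 9 units to the left of zero.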
{"url":"http://mathhelpforum.com/algebra/99341-absolute-values-scary-print.html","timestamp":"2014-04-18T22:34:31Z","content_type":null,"content_length":"6907","record_id":"<urn:uuid:5d5a3bc0-560e-4094-a3b1-b8d19943c0fd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Multiplying a Double-Digit Number by a Single-Digit Number
Here is a useful way to think of multiplying a double-digit number by a single-digit number. First, think of the two-digit number as two separate numbers: the first digit represents the number of blocks of tens and the second digit represents the number of blocks of ones. Then you individually multiply the blocks of tens and the blocks of ones by the single-digit number. Once you add up the two products you get the answer you would get from multiplying the two starting numbers.
Suggested by: Eileen Lichtblau
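The Demonstration's own worked numbers are not shown here, so the following is an illustrative example of the same idea: to find $47 \times 6$, split 47 into 4 tens and 7 ones, multiply each part, and add:
$47 \times 6 = (40 \times 6) + (7 \times 6) = 240 + 42 = 282.$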
{"url":"http://demonstrations.wolfram.com/MultiplyingADoubleDigitNumberByASingleDigitNumber/","timestamp":"2014-04-16T19:46:34Z","content_type":null,"content_length":"42767","record_id":"<urn:uuid:cc73b124-01b0-4666-9807-2c018dea8657>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: UNIVERSAL KUMMER CONGRUENCES MOD PRIME POWERS
Abstract. We have previously proved Kummer congruences mod primes p such that $p-1 \nmid n$ for the universal divided Bernoulli numbers $\hat{B}_n/n$. In this paper we strengthen these congruences to hold mod powers of p.
1. Introduction
The strongest form of the classical Kummer congruences says that if p is prime, $p-1 \nmid m$, and $b_m = (1-p^{m-1})B_m/m$, where $B_m$ is a Bernoulli number, then if $\varphi$ is the Euler $\varphi$-function and $n \equiv m \pmod{\varphi(p^{N+1})}$, then
(1.1) $b_n \equiv b_m \pmod{p^{N+1}}$.
This periodic behavior of the divided Bernoulli numbers $B_m/m$ is closely related to the existence of a p-adic zeta function (cf. [17]). The factor $1-p^{m-1}$ is called an Euler factor. The congruence is now usually proved by means of p-adic measures and p-adic integration (cf. [22]). As a corollary, we have a congruence without Euler factors, namely if $p-1 \nmid m$, $n \equiv m \pmod{\varphi(p^{N+1})}$, and $n, m \geq N+2$, then
(1.2) $B_n/n \equiv B_m/m \pmod{p^{N+1}}$.
This congruence is the one that we generalize in this paper to the divided universal Bernoulli numbers $\hat{B}_n/n$. As our concluding examples in Section 4 show, the
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/530/3758315.html","timestamp":"2014-04-21T05:46:48Z","content_type":null,"content_length":"8249","record_id":"<urn:uuid:976eb624-600f-4a21-bb22-40bd50cdc903>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Challenging Question January 9th 2013, 07:23 PM Challenging Question READ THIS FIRST OK. I was doing my algebra I homework and I came across this problem during one of my questions I gave myself to practice. This will be a challenge question for all of you Mathematicians and those of utmost intelligence on the net. I have searched the internet for this question and yet no one has actually answered this. Logically it is straight forward but graphically on the x axis it is not possible. This question is a pretty good one indeed, yet there is no answer on the net. Not even one discussion topic for this on the net. No one has actually put this on the internet and I find this confusing. If there is a controversial issue here why does it not have such a discussion? Of all the questions in the world this one is not answered. Look these up on Google and look for your results: what is bigger the square root of -7 or the square root of 7? what is bigger the square root of negative 7 or the square root of negative 7? what is bigger the square root of negative seven or the square root of negative seven? You will find results on how to square, how to square root, but not the question stated above. Logically when I look at this if you squared √ -7 and √ 7 you would get -7 and 7, hence 7 is larger than -7. But if you try to plot this on a graph as many engineers like to look at it on the x axis it is impossible. i√7 (or √-7) does not touch the x axis, it is undefined, while √7 does touch the x axis. √-7 is imaginary therefore is not even allowed to be a number yet the logical reasoning makes it possible. So tell me and please reply what do you think, is √ -7 smaller than √ 7, or is it impossible because √-7 is imaginary? January 10th 2013, 12:29 AM Re: Challenging Question Comparison operations are not defined for complex numbers.
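A short justification of that reply (added here; it is the standard argument, not part of the original thread): the complex numbers cannot be given an ordering that is compatible with their arithmetic. In any ordered field the square of a nonzero element is positive, so an ordering of $\mathbb{C}$ would force both $1 = 1^2 > 0$ and $-1 = i^2 > 0$, and adding these gives $0 > 0$, a contradiction. So the question "is $\sqrt{-7}$ smaller than $\sqrt{7}$?" has no answer as posed; what one can compare are the absolute values, and $|\sqrt{-7}| = |i\sqrt{7}| = \sqrt{7} = |\sqrt{7}|$.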
{"url":"http://mathhelpforum.com/algebra/211091-challenging-question-print.html","timestamp":"2014-04-19T05:59:39Z","content_type":null,"content_length":"5478","record_id":"<urn:uuid:89f4e502-e78d-4997-861b-e558057f2973>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Georgia Perimeter College - ENGR 1011 Common Course Outline Revised: November 1998 COURSE ABBREVIATION ENGR 1011 CREDIT HOURS 4 semester hours COURSE TITLE Introduction to AutoCAD PREREQUISITE Prior instruction or practice in engineering This is the first of a two part course sequence which offers hands on instruction in the use of drawing, editing and utility commands of AutoCAD for Windows to produce two dimensional drawings. Prior knowledge by the registrant to read and produce orthographic, isometric and other forms of pictorial representations using traditional tools is assumed. As a result of completing this course, the student will be able to do the following: 1. Explain the advantages of computer-aided design/drafting; 2. Use the menu structure and data input conventions to create, view, edit, and plot drawings; 3. Use the basic two-dimensional entity draw commands 4. Use the basic two-dimensional edit and inquiry commands 5. Use the basic display controls needed for viewing two-dimensional drawings 6. Use layers and other supplied drawing aids 7. Use the basic dimensioning commands. I. This course addresses the general education outcome relating to communications as follows: A. Students enhance reading skills by reading topics from assigned text book/reference to learn various kinds of commands and how to apply them in the production of simple two dimensional drawings. B. Students develop writing skills when they need to provide short answers to test questions. C. Students improve their listening skills by actively participating in class discussion/lecture or demonstration to learn basic drawing/editing commands of the software to produce simple engineering drawings. II. This course addresses the general education outcome relating to problem-solving and critical thinking skills by making these skills an important part of their course work. Students learn to apply technical problem-solving and use critical thinking techniques to plan preliminary steps needed to start a drawing and to develop most efficient way to complete it. The class/home assignments attempt to enhance their ability to learn and practice major commands on wide range of drawing exercises. III. This course addresses the general education outcome relating to mathematical concept usage and scientific inquiry as follows: Use of appropriate scales and units to produce drawings from many engineering Assignments require calculations based on geometry to determine sizes of planar figures and use of coordinates for construction multi-views. IV. Students organize and analyze the information required to complete assignments by Computer-Assisted Design and Drafting software package. 1. Menu structure and input conventions (7%) 2. Drawing entities (20%) 3. Editing (20%) 4. Display controls (10%) 5. Layers and drawing aids (20%) 6. Dimensioning (20%) 7. Inquiry commands (3%) Upon entering this course the student should be able to do the following: 1. Exited developmental studies and meet course objectives of Applied Technology course ATEC 1201 or be able to use computer to copy files from hard drive to floppy disks, to load a file into any word processing or other applications software. 2. Be able to apply polar and Cartesian coordinates to produce outlines of simple geometric figures like lines, squares, rectangles etc. I. COURSE GRADE The course grade is to be determined by the individual instructor by variety of evaluation techniques consistent with the overall college policy including the class attendance. 
The procedure should include at least two tests (30% to 35%), a comprehensive final examination (25% to 30%), a final drawing project (20%), and class/home work (25%). The student, with approval from the instructor, should select the final drawing project. The instructor must ensure that the approved project employs at least 80-90% of all commands of the software covered in the course.
II. DEPARTMENTAL ASSESSMENT
Assessment of the expected educational results of this course must be conducted every five years. The assessment instrument will be a drawing project and selected questions from the final examination that cover the majority of the topics in the course content section.
III. USE OF ASSESSMENT FINDINGS
The Engineering committee will evaluate the findings, determine the level of success in expected educational results, and consider recommending to the Discipline Academic Group executive committee any changes in the curriculum after careful review of curricula of transfer institutions.
EFFECTIVE DATE: August, 1998
APPROVED DATE: May, 1998
{"url":"http://depts.gpc.edu/~mcse/CourseDocs/engr/1011_cco_1998.htm","timestamp":"2014-04-18T08:03:53Z","content_type":null,"content_length":"5403","record_id":"<urn:uuid:b0b46988-cadb-41d3-bfb0-18d8152dfd8e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
building a non inductive load ? - diyAudio
Re: Re: Re: building a non inductive load ?
Originally posted by AndrewT: does this mean that the effective impedance of the zero inductance pair is R/2? At what frequency does the impedance come down to R/2? become significantly less than R?
With a correctly sized capacitor, the resultant impedance becomes a pure resistance and is not frequency dependent. The resultant resistance remains as R. I won't detail the mathematics here (email me if you want the details) but it can be shown that if you have a resistor R in series with an inductor L, and the pair are in parallel with another resistor R in series with a capacitor C, the resulting network impedance Z is L/RC (L in henrys, C in farads). If we say that Z equals R (equals 8 ohm) then the equation becomes R = L/RC. If L is known, C can be deduced from C = L/R^2. For example, if the two resistors are 8 ohm and L is 1 mH, C needs to be 15.6 uF to give an overall network impedance of 8 ohm (resistive).
However, this analysis is too simple for your application as the resistor in series with the capacitor will also have an inductance (as will the capacitor, along with some resistance of its own). How far one goes with the complexity of the calculation depends upon how close to a pure resistance you wish to get. Again, in view of the maths involved this is probably better discussed by email. Perhaps a non-inductive power resistor would be an easier option.
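A small numeric check of that sizing rule (my own illustration; the variable names are arbitrary): with two 8 ohm resistors and L = 1 mH, C = L/R^2 comes out near 15.6 uF, and the resulting L/(R*C) is back to 8 ohm.

#include <stdio.h>

int main(void)
{
    double R = 8.0;       /* ohms, each of the two resistors          */
    double L = 1.0e-3;    /* henrys, the 1 mH series inductance       */

    double C = L / (R * R);   /* farads: capacitor that balances L    */
    double Z = L / (R * C);   /* resulting (purely resistive) network */

    printf("C = %.1f uF\n", C * 1e6);  /* about 15.6 uF */
    printf("Z = %.1f ohm\n", Z);       /* 8.0 ohm       */
    return 0;
}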
{"url":"http://www.diyaudio.com/forums/solid-state/95528-building-non-inductive-load.html","timestamp":"2014-04-18T16:49:56Z","content_type":null,"content_length":"83934","record_id":"<urn:uuid:9ff7359b-a733-47fb-911f-6a3b7419b483>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Haverhill, MA Algebra 2 Tutor Find a Haverhill, MA Algebra 2 Tutor Hello! My name is Robert. I am a mathematics major at the University of Massachusetts Lowell. 13 Subjects: including algebra 2, chemistry, calculus, biology ...I therefore recommend that my SAT tutoring clients purchase an SAT prep book with practice tests. Students who devote significant time to completing practice tests tend to see the greatest improvement in their scores. Successful PSAT tutoring requires the student to complete practice on his/her... 28 Subjects: including algebra 2, English, writing, calculus ...As testament, I have completed three written theses and have published two peer-reviewed scientific papers. My teaching knowledge/experience: I am a certified teacher and have a maters degree in science teaching. Through my masters program, I took teaching methods courses focused on biology, p... 22 Subjects: including algebra 2, reading, writing, geometry As a 2011 Graduate of Merrimack College with an English degree, a member of the school's English club and someone who got a 750 in the writing portion of the SATs, I have extensive experience in writing and literature-related subjects. I also have worked at Artworks art studio in Medford, MA for th... 27 Subjects: including algebra 2, English, reading, writing ...I am an experienced high school math and computer science teacher for grades 9-12. I am experienced with Common Core Standards and MCAS preparation. My course teaching experience includes Algebra 1, Algebra 2, PreCalculus, Computer Programming and Robotics. 22 Subjects: including algebra 2, algebra 1, trigonometry, SAT math Related Haverhill, MA Tutors Haverhill, MA Accounting Tutors Haverhill, MA ACT Tutors Haverhill, MA Algebra Tutors Haverhill, MA Algebra 2 Tutors Haverhill, MA Calculus Tutors Haverhill, MA Geometry Tutors Haverhill, MA Math Tutors Haverhill, MA Prealgebra Tutors Haverhill, MA Precalculus Tutors Haverhill, MA SAT Tutors Haverhill, MA SAT Math Tutors Haverhill, MA Science Tutors Haverhill, MA Statistics Tutors Haverhill, MA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Andover, MA algebra 2 Tutors Atkinson, NH algebra 2 Tutors Bradford, MA algebra 2 Tutors Georgetown, MA algebra 2 Tutors Groveland, MA algebra 2 Tutors Lawrence, MA algebra 2 Tutors Lowell, MA algebra 2 Tutors Merrimac, MA algebra 2 Tutors Methuen algebra 2 Tutors Nashua, NH algebra 2 Tutors North Andover algebra 2 Tutors Plaistow algebra 2 Tutors Riverside, MA algebra 2 Tutors Salem, NH algebra 2 Tutors West Newbury, MA algebra 2 Tutors
{"url":"http://www.purplemath.com/Haverhill_MA_Algebra_2_tutors.php","timestamp":"2014-04-16T07:54:46Z","content_type":null,"content_length":"23936","record_id":"<urn:uuid:6fc675ef-6ee3-4e17-8a03-cc1261ba0eeb>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Database Design
UMBC CMSC 461 Spring '99
Lecture 14
Functional Dependencies

Functional dependencies (FD) are a type of constraint that is based on keys. A superkey is defined in the relational schema R as follows: a subset K of R is a superkey of R if, in any legal relation r(R), for all pairs of tuples t1 and t2 in r such that t1 is not equal to t2, t1[K] is not equal to t2[K]. Or, no two rows (tuples) have the same value in the attribute(s) K, which is the key.

Now, if there are two attributes (or sets of attributes) A and B that are legal in the relation schema R, we can have a functional dependency A implies B, which holds when, for all pairs of tuples t1 and t2, if t1[A] is equal to t2[A] then t1[B] is equal to t2[B]. This allows us to state that K is a superkey of R if K implies R.

For example, in a relation that has names and social security numbers, whenever your Social Security number is the student ID, the name in that tuple can only contain your name. That is because your name is not unique, but your Social Security number is. If I go to the Social Security Administration and search their database for the name "Gary Burt", the result is a large number of people. If I search for the social security number "123-45-6789", the result is one and only one person.

Another example is in the loan information that we looked at before:
Loan-info-schema = (branch-name, loan-number, customer-name, amount)
It can be shown that the loan-number implies both the amount and the branch-name. It does not imply the customer-name because there may be more than one person listed on the loan, such as a husband and wife, or parent and child (when the parent co-signs the loan).

Functional dependencies:
• specify a set of constraints on a legal relation.
• test relations to see if they are legal.

Some functional dependencies are said to be trivial because they are satisfied by all relations, for example:
• A implies A
• AB implies A

Closure of a Set of Functional Dependencies

It is not enough to look at a single FD. All FDs must be considered in a relation! Given the schema R = (A, B, C, G, H, I) and the FDs:
A implies B
A implies C
CG implies H
CG implies I
B implies H
we can show that A implies H because A implies B, which implies H.

The notation for a set of FDs is F. The notation F^+ denotes the set of all FDs logically implied by F. There is a set of rules, called Armstrong's axioms, that we can use to compute closure.
• Reflexivity rule: If A is a set of attributes, and B is a set of attributes that are completely contained in A, then A implies B.
• Augmentation rule: If A implies B, and C is a set of attributes, then AC implies BC.
• Transitivity rule: If A implies B and B implies C, then A implies C.
These can be simplified if we also use:
• Union rule: If A implies B and A implies C, then A implies BC.
• Decomposition rule: If A implies BC, then A implies B and A implies C.
• Pseudotransitivity rule: If A implies B and CB implies D, then AC implies D.
Using mathematical principles, we can now test a set of attributes to see if they are a legal superkey.

Pitfalls in Relational-Database Design

Obviously, we can have good and bad designs.
Among the undesirable design items are: • Repetition of information • Inability to represent certain information The relation lending with the schema is an example of a bad design: Lending-Schema=(branch-name, branch-city, assets, cutomer-name, loan-number, amount) │branch-name│branch-city│assets │customer-name │loan-number│amount│ │Downtown │Brooklyn │9000000│Jones │L-17 │ 1000│ │Redwood │Palo Alto │2100000│Smith │L-23 │ 2000│ │Perryridge │Horseneck │1700000│Hayes │L-15 │ 1500│ │Downtown │Brooklyn │9000000│Jackson │L-14 │ 1500│ │Mianus │Horseneck │ 400000│Jones │L-93 │ 500│ │Round Hill │Horseneck │8000000│Turner │L-11 │ 900│ │Pownal │Bennington │ 300000│Williams │L-29 │ 1200│ │North Town │Rye │3700000│Hayes │L-16 │ 1300│ │Downtown │Brooklyn │9000000│Johnson │L-23 │ 2000│ │Perryridge │Horseneck │1700000│Glenn │L-25 │ 2500│ │Brighton │Brooklyn │7100000│Brooks │L-10 │ 2200│ Looking at the Downtown and Perryridge, when a new loan is added, the branch-city and assets must be repeated. That makes updating the table more difficult, because the update must guarantee that all tuples are updated. Additional problems come from having two people take out one loan (L-23). More complexity is involved when Jones took out a loan at a second branch (maybe one near home and the other near work.) Notice that there is no way to represent information on a branch unless there is a loan. The obvious solution is that we should decompose this relation. As an alternative design, we can use the Decomposition rule: If A implies BC then A implies B and A implies C. This gives us the schemas: • branch-customer-schema = (branch-name, branch-city, assets, customer-name) • customer-loan-schema = (customer-name, loan-number, amount) │branch-name│branch-city│assets │customer-name │ │Downtown │Brooklyn │9000000│Jones │ │Redwood │Palo Alto │2100000│Smith │ │Perryridge │Horseneck │1700000│Hayes │ │Downtown │Brooklyn │9000000│Jackson │ │Mianus │Horseneck │ 400000│Jones │ │Round Hill │Horseneck │8000000│Turner │ │Pownal │Bennington │ 300000│Williams │ │North Town │Rye │3700000│Hayes │ │Downtown │Brooklyn │9000000│Johnson │ │Perryridge │Horseneck │1700000│Glenn │ │Brighton │Brooklyn │7100000│Brooks │ │customer-name │loan-number │amount│ │Jones │L-17 │ 1000│ │Smith │L-23 │ 2000│ │Hayes │L-15 │ 1500│ │Jackson │L-14 │ 1500│ │Jones │L-93 │ 500│ │Turner │L-11 │ 900│ │Williams │L-29 │ 1200│ │Hayes │L-16 │ 1300│ │Johnson │L-23 │ 2000│ │Glenn │L-25 │ 2500│ │Brooklyn │L-10 │ 2200│ Then when we need to get back to the original table, we can do a natural join on the two relations branch-customer and customer-loan. Evaluating this design, how does it compare to the first version? • Looking at the Downtown and Perryridge, when a new loan is added, the branch-city and assets must be repeated. Problem still exists. • Problems come from having two people take out one loan (L-23). Problem still exists. • More complexity is involved when Jones took out a loan at a second branch. Problem still exists. • Notice that there is no way to represent information on a branch unless there is a loan. Problem still exists. Worse, there is a new problem! 
When we do the natural join, we get back four additional tuples that did not exist in the original table:
• (Downtown, Brooklyn, 9000000, Jones, L-93, 500)
• (Perryridge, Horseneck, 1700000, Hayes, L-16, 1300)
• (Mianus, Horseneck, 400000, Jones, L-17, 1000)
• (North Town, Rye, 3700000, Hayes, L-15, 1500)
We are no longer able to represent in the database information about which customers are borrowers from which branch. This is called a lossy decomposition or lossy-join decomposition. A decomposition that is not a lossy decomposition is a lossless-join decomposition. Lossless joins are a requirement for good design and this causes constraints on the set of possible relations. We say that a relation is legal if it satisfies all rules, or constraints, imposed.
The proper way to decompose this example so that we can have a lossless join is to use three relations:
• branch-schema = (branch-name, assets, branch-city)
• loan-schema = (branch-name, loan-number, amount)
• borrower-schema = (customer-name, loan-number)

Normalization Using Functional Dependencies
Using FDs, it is possible to define several normal forms to help develop good database designs. The two that we will examine are Boyce-Codd Normal Form (BCNF) and Third Normal Form (3NF). The requirements for good decomposition are
• Lossless-Join Decomposition
• Dependency Preservation
• Lack of Repetition of Information
We've discussed the lossless decomposition. Dependency preservation specifies that the design ensures that when an update is made to the database, it does not create an illegal relation. In regard to the repetition of information, it is necessary to include the key of another table, so that the joins can be properly formed. That is the only information that should be in both tables!

Boyce-Codd Normal Form
A relation schema R is said to be in BCNF with respect to a set F of FDs if, for all FDs in F^+ of the form A implies B, where A is a subset of R and B is a subset of R, at least one of the following rules is true:
• A implies B is a trivial FD (B is a subset of A)
• A is a superkey for schema R
Without doing the mathematical proofs, it can be shown that the BCNF results in dependency preservation.

Third Normal Form
A relation schema R is in 3NF with respect to a set F of FDs if, for all FDs in F^+ of the form A implies B, where A is a subset of R and B is a subset of R, at least one of the following rules is true:
• A implies B is a trivial FD
• A is a superkey for schema R
• Each attribute in the result of the expression B-A is contained in a candidate key for R
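As a concrete companion to the closure discussion above, here is a small sketch (my own addition, not from the lecture notes) of the standard attribute-closure algorithm, which is how one checks in practice whether a set of attributes is a superkey. Attributes are encoded as bits, and the FDs are the ones from the R = (A, B, C, G, H, I) example:

#include <stdio.h>

/* Attributes A..I encoded as individual bits. */
enum { A = 1, B = 2, C = 4, G = 8, H = 16, I = 32 };

struct fd { unsigned lhs, rhs; };   /* the dependency lhs -> rhs */

/* Closure of the attribute set 'start' under the given FDs:
   keep adding rhs whenever lhs is already contained, until nothing changes. */
static unsigned closure(unsigned start, const struct fd *fds, int n)
{
    unsigned result = start;
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int k = 0; k < n; k++)
            if ((result & fds[k].lhs) == fds[k].lhs &&
                (result | fds[k].rhs) != result) {
                result |= fds[k].rhs;
                changed = 1;
            }
    }
    return result;
}

int main(void)
{
    /* A->B, A->C, CG->H, CG->I, B->H from the closure example above */
    struct fd fds[] = { {A, B}, {A, C}, {C | G, H}, {C | G, I}, {B, H} };
    unsigned all = A | B | C | G | H | I;

    printf("AG %s a superkey\n",
           closure(A | G, fds, 5) == all ? "is" : "is not");   /* is     */
    printf("A  %s a superkey\n",
           closure(A, fds, 5) == all ? "is" : "is not");       /* is not */
    return 0;
}

A set K is a superkey exactly when its closure contains every attribute of R, which is the bit-set comparison made in main.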
{"url":"http://www.csee.umbc.edu/courses/461/current/burt/lectures/lec14/","timestamp":"2014-04-19T01:48:24Z","content_type":null,"content_length":"16121","record_id":"<urn:uuid:71602df0-19dd-4378-a054-4ee6659f0f78>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
South Miami, FL Algebra 2 Tutor Find a South Miami, FL Algebra 2 Tutor ...In the past I have tutored students ranging from elementary school to college in a variety of topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping others and always do my best to make sure the information is enjoyable and being presented effectively... 30 Subjects: including algebra 2, reading, biology, algebra 1 I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and Programming. After college I moved to Spain where I gave private test prep lessons to high school students ... 11 Subjects: including algebra 2, calculus, physics, geometry ...My name is Morgan, and I'm currently enrolled in my last undergraduate semester at the University of Miami. I've been a professional tutor for over three years now, and I have significant experience in providing academic aid to my students in a variety of subjects. I began tutoring because I no... 42 Subjects: including algebra 2, English, reading, Spanish ...Presently I devote 2 hours a day to tutors students in the foster care and refugee programs and while I travel to their homes I feel that my skills will be more utilized with additional students and especially the online feature will make scheduling more adaptable. My experience so far has been ... 7 Subjects: including algebra 2, reading, English, vocabulary ...I also worked in various banks as an IT Manager and Vice President. As manager, I also mentored the staff on various aspects of systems and program development. I also taught computer programming in one of the computer schools in the Philippines. 4 Subjects: including algebra 2, geometry, algebra 1, trigonometry Related South Miami, FL Tutors South Miami, FL Accounting Tutors South Miami, FL ACT Tutors South Miami, FL Algebra Tutors South Miami, FL Algebra 2 Tutors South Miami, FL Calculus Tutors South Miami, FL Geometry Tutors South Miami, FL Math Tutors South Miami, FL Prealgebra Tutors South Miami, FL Precalculus Tutors South Miami, FL SAT Tutors South Miami, FL SAT Math Tutors South Miami, FL Science Tutors South Miami, FL Statistics Tutors South Miami, FL Trigonometry Tutors Nearby Cities With algebra 2 Tutor Coconut Grove, FL algebra 2 Tutors Coral Gables, FL algebra 2 Tutors Cutler Bay, FL algebra 2 Tutors Doral, FL algebra 2 Tutors Hialeah algebra 2 Tutors Hialeah Lakes, FL algebra 2 Tutors Maimi, OK algebra 2 Tutors Miami algebra 2 Tutors Miami Beach algebra 2 Tutors Miami Shores, FL algebra 2 Tutors North Miami, FL algebra 2 Tutors Palmetto Bay, FL algebra 2 Tutors Pinecrest, FL algebra 2 Tutors Sweetwater, FL algebra 2 Tutors West Miami, FL algebra 2 Tutors
{"url":"http://www.purplemath.com/South_Miami_FL_Algebra_2_tutors.php","timestamp":"2014-04-16T10:34:09Z","content_type":null,"content_length":"24226","record_id":"<urn:uuid:a42366b8-6ccd-44eb-a4fb-9ab88bd28a03>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Towing Icebergs, Falling Dominoes, and Other Adventures in Applied Mathematics Robert B. Banks (1922-2002) has written two marvelous books illustrating what applied mathematics really is about. The present one was the first to appear in 1998 and his Slicing Pizzas, Racing Turtles, and Further Adventures in Applied Mathematics was a sequel that was published in 1999. This version is the first edition in paperback. In 24 chapters the reader is bombarded by a firework of models and solutions for serious and amusing problems. The opening paragraph is typical giving all the data about the meteor that hit the earth some 50,000 years ago near Flagstaff (AZ). It induces a chapter on different units, which is useful for the rest of the book. Although not in a particular order, one might recognize some recurrent themes in the different applications: things (large and small) falling from the sky (meteor, parachute, raindrops, etc.) but later also trajectories of basketballs, baseballs, water jets, and ski jumpers. Other applications are related to growth models (population, epidemic spread, national deficit, length of people, and world records running, etc). Some chapters deal with wave phenomena (traffic, water waves, and falling dominos), and others with statistics (monte carlo simulation) or curves (in architecture, jumping ropes and Darrieus wind turbines). But this enumeration is far from complete. There are two chapters completely working out the economic project of towing icebergs from the Antarctic to North and South America, Africa, and Australia. This includes the computation of the energy needed, the optimal route to be followed, the thickness of the cables needed, the melting process, etc. And there are many other models for phenomena, I have not mentioned. The models are sometimes derived, but in many occasions, they are mostly just given in the form of a differential equation (but also delay differential equations and integro-differential equations appear). It is indicated how to obtain solutions (often analytic, sometimes numerical), but intermediate steps are left to the reader to check. At several places also suggestions for assignments or extra problems to work out are included. Historical comments ad suggestions for further reading are often summarized. Hence teachers may find here inspiration for (if not ready-made examples of) exercises to give to their students. The book stands out because the examples are all treated as real-life examples with real data, and taking into account all the complications that are usually left out in academic examples: the earth is not a perfect sphere, a baseball is rough because of its stitches, it is thrown with spin, there is resistance of the air, and the resistence differs with the height, etc. Even though, there is a lot of formulas and numbers, the reading is pleasant and smooth. It may be much harder if one wants to work out the details and/or the exercises for oneself. The chapters can be used independently, although there are some forward or backward references, but these are not essential. One does however need some knowledge of differential equations (usually linear and first order but sometimes going beyond these), integrals are clearly needed (even elliptic integrals are used). The edition is still the same as the original one. That means that references are still the older ones that have not been updated. Robert B. Banks has passed away some 10 years ago. 
Had that not been the case, given the enthusiasm displayed in this book, I would have expected an update about the models for economic evolution, taking into account the banking problems in 2008 and the aftermath of the economic crisis that we are still living in, or perhaps also data about the tsunami that hit Japan in 2011 with the nuclear disaster of Fukushima as a consequence, or the impact and fall-out of the eruption of the Eyjafjallajökull volcano in 2010. Perhaps someday, someone will add a third volume to these wonderful collections of applied problems.
{"url":"http://euro-math-soc.eu/node/4012","timestamp":"2014-04-20T13:58:06Z","content_type":null,"content_length":"20154","record_id":"<urn:uuid:d36ffdc7-198a-4d5e-b4a5-68e36c10d668>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Post-Newtonian equations of motion and radiation 1.3 Post-Newtonian equations of motion and radiation By equations of motion we mean the explicit expression of the accelerations of the bodies in terms of the positions and velocities. In Newtonian gravity, writing the equations of motion for a system of [156]. Subsequently, Einstein, Infeld and Hoffmann [106 ] obtained the 1PN corrections by means of their famous “surface-integral” method, in which the equations of motion are deduced from the vacuum field equations, and which are therefore applicable to any compact objects (be they neutron stars, black holes, or, perhaps, naked singularities). The 1PN-accurate equations were also obtained, for the motion of the centers of mass of extended bodies, by Petrova [179] and Fock [112] (see also Ref. [169]). The 2PN approximation was tackled by Ohta et al. [165 , 167 , 166 ], who considered the post-Newtonian iteration of the Hamiltonian of [86 , 85 , 104 , 80 , 81 ], building on a non-linear iteration of the metric of two particles initiated in Ref. [11]. The corresponding result for the ADM-Hamiltonian of two particles at the 2PN order was given in Ref. [98 ] (see also Refs. [195, 196]). Kopeikin [149 ] derived the 2.5PN equations of motion for two extended compact objects. The 2.5PN-accurate harmonic-coordinate equations as well as the complete gravitational field (namely the metric [42 ], following a method based on previous work on wave generation [15 ]. Up to the 2PN level the equations of motion are conservative. Only at the 2.5PN order appears the first non-conservative effect, associated with the gravitational radiation reaction. The (harmonic-coordinate) equations of motion up to that level, as derived by Damour and Deruelle [86, 85 , 104, 80, 81 ], have been used for the study of the radiation damping of the binary pulsar - its orbital [81 , 82, 102]. It is important to realize that the 2.5PN equations of motion have been proved to hold in the case of binary systems of strongly self-gravitating bodies [81 ]. This is via an “effacing” principle (in the terminology of Damour [81 ]) for the internal structure of the bodies. As a result, the equations depend only on the “Schwarzschild” masses, 7), do not enter the equations of motion, as has been explicitly verified up to the 2.5PN order by Kopeikin et al. [149 , 127 ], who made a “physical” computation, à la Fock, taking into account the internal structure of two self-gravitating extended bodies. The 2.5PN equations of motion have also been established by Itoh, Futamase and Asada [134 , 135 ], who use a variant of the surface-integral approach of Einstein, Infeld and Hoffmann [106 ], that is valid for compact bodies, independently of the strength of the internal gravity. The present state of the art is the 3PN approximation. To this order the equations have been worked out independently by two groups, by means of different methods, and with equivalent results. On the one hand, Jaranowski and Schäfer [139 , 140 , 141 ], and Damour, Jaranowski, and Schäfer [95 , 97 , 96 ], following the line of research of Refs. [165, 167, 166, 98 ], employ the ADM-Hamiltonian formalism of general relativity; on the other hand, Blanchet and Faye [37 , 38 , 36 , 39 ], and de Andrade, Blanchet, and Faye [103 ], founding their approach on the post-Newtonian iteration initiated in Ref. [42 ], compute directly the equations of motion (instead of a Hamiltonian) in harmonic coordinates. 
The end results have been shown [97 , 103 ] to be physically equivalent in the sense that there exists a unique “contact” transformation of the dynamical variables that changes the harmonic-coordinates Lagrangian obtained in Ref. [103 ] into a new Lagrangian, whose Legendre transform coincides exactly with the Hamiltonian given in Ref. [95 ]. The 3PN equations of motion, however, depend on one unspecified numerical coefficient, dimensional regularization, both within the ADM-Hamiltonian formalism [96 ], and the harmonic-coordinates equations of motion [30 ]. The works [96 , 30 ] have demonstrated the power of dimensional regularization and its perfect adequateness for the problem of the interaction between point masses in general relativity. Furthermore, an important work by Itoh and Futamase [133 , 132 ] (using the same surface-integral method as in Refs. [134 , 135 ]) succeeded in obtaining the complete 3PN equations of motion in harmonic coordinates directly, i.e. without ambiguity and containing the correct value for the parameter So far the status of the post-Newtonian equations of motion is quite satisfying. There is mutual agreement between all the results obtained by means of different approaches and techniques, whenever it is possible to compare them: point particles described by Dirac delta-functions, extended post-Newtonian fluids, surface-integrals methods, mixed post-Minkowskian and post-Newtonian expansions, direct post-Newtonian iteration and matching, harmonic coordinates versus ADM-type coordinates, and different processes or variants of the regularization of the self field of point particles. In Part B of this article, we shall present the complete results for the 3PN equations of motion, and for the associated Lagrangian and Hamiltonian formulations (from which we deduce the center-of-mass The second sub-problem, that of the computation of the energy flux [217 , 49 ], at a time when the post-Newtonian corrections in [33 , 122 ], and, independently, by Will and Wiseman [220 ], using their own formalism (see Refs. [35 , 46 ] for joint reports of these calculations). The preceding approximation, 1.5PN, which represents in fact the dominant contribution of tails in the wave zone, had been obtained in Refs. [221, 50 ] by application of the formula for tail integrals given in Ref. [29 ]. Higher-order tail effects at the 2.5PN and 3.5PN orders, as well as a crucial contribution of tails generated by the tails themselves (the so-called “tails of tails”) at the 3PN order, were obtained by Blanchet [16 , 19 ]. However, unlike the 1.5PN, 2.5PN, and 3.5PN orders that are entirely composed of tail terms, the 3PN approximation also involves, besides the tails of tails, many non-tail contributions coming from the relativistic corrections in the (source) multipole moments of the binary. These have been “almost” completed in Refs. [45 , 40 , 44 ], in the sense that the result still involves one unknown numerical coefficient, due to the use of the Hadamard regularization, which is a combination of the parameter [31 , 32 ]. In Part B of this article, we shall present the most up-to-date results for the 3.5PN energy flux and orbital phase, deduced from the energy balance equation (5), supposed to be valid at this order. The post-Newtonian flux [182] at the 1.5PN order (following the pioneering work of Galt’sov et al. [116]), and by Tagoshi and Nakamura [203 ], using a numerical code, up to the 4PN order. 
This technique culminated in the beautiful analytical methods of Sasaki, Tagoshi and Tanaka [194, 205, 206] (see also Ref. [160]), who solved the problem up to the extremely high 5.5PN order.
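The orbital phase evolution mentioned above follows from a simple energy balance argument: the binding energy E of the orbit decreases at the rate at which energy is carried off by the waves. Schematically (standard leading-order results written here for illustration; they are not meant to reproduce the article's own Equation (5)),

\frac{dE}{dt} = -\mathcal{F}, \qquad \mathcal{F} = \frac{32\,c^{5}}{5\,G}\,\nu^{2} x^{5}\left[1 + \mathcal{O}(x)\right], \qquad x \equiv \left(\frac{G\,m\,\omega}{c^{3}}\right)^{2/3},

with m = m_1 + m_2, \nu = m_1 m_2/m^{2} the symmetric mass ratio and \omega the orbital frequency; the 3.5PN-accurate flux referred to above supplies the bracketed corrections through relative order x^{7/2}.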
{"url":"http://www.maths.soton.ac.uk/EMIS/journals/LRG/Articles/lrr-2006-4/articlesu3.html","timestamp":"2014-04-18T18:11:02Z","content_type":null,"content_length":"53489","record_id":"<urn:uuid:91a8e7b1-072e-4cad-8c3b-94b1551a4962>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Superhero Transformations -- Hands-On STEM Project

The first-ever 21st Century Math Project Blog poll winner is now an official 21st Century Math Project. Turn your Geometry, Algebra 2 or Pre-Calculus classroom into Superhero City while teaching the useful skills of transformations. Specifically focusing on translations and reflections, heroes, each paired with a specific mathematics function family, have to zap enemies all over the coordinate grid into submission to save their city. Students will use cutouts of functions to turn this into a hands-on math project that will serve many different types of learners and play into their childhood superhero infatuations!

Name: Superhero Transformations
Suggested Grade Level: 7-12 (Geometry & Pre-Calculus skills)
Math Concepts: Transformations, Translation, Reflection, Families of Functions
Interdisciplinary Connections: Comics, Art
Teaching Duration: 3-5 Days (can be modified)
Cost: $6 for a 25 Page PDF (4 assignments and answer key)
The Product: Students complete a series of tasks that culminate in taking down the mysterious Big Boss Villain. Can be expanded to creating a Pixton Comic (not provided)!

Hey Kid, put down that printer. BOOM! POW! WHAMO! If those words remind you of your 7th Period Class, I... think you should find another blog regarding classroom management. If those words remind you of the hours you'd spend trying to hunt down that elusive Magneto action figure, this might be perfect for you. If your kids dig comics and comic-book movies, this might be perfect for them! I've noticed a comic book renaissance of sorts in the school hallway, especially with the awesomeness of the Avengers.

One of the things that I do that many would consider unconventional is that before I teach how to manipulate any non-linear functions (square roots, exponentials, etc.), I teach families of functions and transformations. I think this breaks down students' intimidation of long equations with these different functions in them; it makes the functions more accessible and helps students understand that, in many respects, they work the same.

Does this mean I won't feel like I want to disappear from class for a few days? Are we actually going to do something challenging? In this project, I have assembled a dynamic mathematical superhero team where each hero has a different power that behaves like a different function. Heroes with Linear, Quadratic, Exponential, Cubic, Square Root and Absolute Value functions are stars of the show. There are a couple of special guest appearances from the villains. No hero with a Wolfhead? I'm slightly put off by the omission.

In creating this project, it was critical to me that it's not just glitz and glamor, but that there truly is a bunch of hardcore mathematics at its core. I feel I have created something that authentically teaches translations and reflections and will serve both ends of the classroom. By creating functions that can be cut out, the tasks become hands-on and accessible for all learners. By creating wicked challenging scenarios, the most advanced students will be enriched with the puzzle that the later problems create. By creating colorful, amusing heroes and villains, the most difficult-to-engage students will be grabbed.

EXTENSION: Perhaps there can be a comic book assignment that emerges that uses solid math jokes. Don't use class time for this. I'm begging you. Maybe a weekend extra credit assignment. :-)

So here it is, the 21st Century Math Project for the peeps. Hopefully you dig.
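If you want to see the underlying idea in code rather than on cut-outs, here is a tiny sketch of the two moves the project drills, translation and reflection of a parent function (this is just an illustration, it is not part of the project files):

def translate(f, h=0.0, k=0.0):
    """Shift the graph of f right by h and up by k:  y = f(x - h) + k."""
    return lambda x: f(x - h) + k

def reflect_x(f):
    """Reflect the graph of f across the x-axis:  y = -f(x)."""
    return lambda x: -f(x)

# Parent function: the absolute-value "hero"  y = |x|
parent = abs

# Translate the vertex onto a target sitting at (3, 2) ...
hero_moved = translate(parent, h=3, k=2)
print(hero_moved(3))    # 2  (the vertex now sits on the target)

# ... or reflect the hero so the graph opens downward.
hero_flipped = reflect_x(parent)
print(hero_flipped(2))  # -2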
I'll be setting up a little challenge for the blog followers tomorrow to give away a free copy! Keep your eyes open.
{"url":"http://www.21stcenturymathprojects.com/2012/11/superhero-transformations-hands-on-stem.html","timestamp":"2014-04-17T21:45:16Z","content_type":null,"content_length":"87806","record_id":"<urn:uuid:6ac5ff24-0b3d-43db-95d7-da35602a4597>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: To meet a challenge    Replies: 18    Last Post: Oct 6, 2012 10:30 AM

Re: To meet a challenge
Posted: Oct 5, 2012 12:53 AM

We teach teachers to teach the adding and subtracting of fractions (and this includes rational functions in algebra) using the least common denominator with a method so clunky, long, and complicated that it cannot be written as a single equation, the left side written as the sum/difference of the fractions, and the right side being a single fraction with no fraction in the numerator or denominator. (We do not do this with respect to the teaching of the multiplying and dividing of fractions - they are taught with methods that are elegant and concise enough to be written as a single equation. Imagine how much worse the outcome would be compared to the present situation if we taught students the multiplying or dividing of fractions with methods that are so clunky, long, and complicated that they cannot be written elegantly and concisely as a single equation.)

To address this problem: Here is an algorithm for fraction addition/subtraction that is written as a single equation, the left side being the sum/difference of the fractions, and the right side being a single fraction with no fraction in the numerator or denominator. (Where I use the "/" symbol surrounded by a space to be the main dividing line, and m is the least common denominator, that is, the least common multiple of the denominators):

a/b +- c/d = (m/b)a +- (m/d)c / m.

(Each of (m/b) and (m/d) is an integer when each of the variables is an integer - when we are in the rationals. And so the right side is a single fraction with no fraction in the numerator or denominator.)

This generalizes to any number of fractions elegantly and concisely, letting n be any number of fractions:

a_1/b_1 +- ... +- a_n/b_n = (m/b_1)a_1 +- ... +- (m/b_n)a_n / m.

How to teach this equality as an algorithm to students? I like the (easy to memorize) verbalization "m over the bottom times the top" for each addend. And since this method generalizes so elegantly and concisely to any number of fractions, try introducing the method on three fractions to show how easy it is - if they see that it's easy on three or more fractions, it of course is easy on just two. Here's a favorite set of three fractions I like to use, one with an easy LCD:

3/4 + 5/6 - 7/8 = 18 + 20 - 21 / 24.

The rest is just any needed simplification of the single non-complex fraction on the right side, which is what we have to consider anyway with the usual way of fraction addition/subtraction and with fraction multiplication and fraction division when we get to a single noncomplex fraction.

If we wanted to write out the pattern that they would use in algebra in the adding/subtracting of rational functions, we would do everything exactly the same, keeping the same pattern, but doing only "m over the bottom" in our heads, resulting in

3/4 + 5/6 - 7/8 = (6)(3) + (4)(5) - (3)(7) / 24.

Note that where we replace each of these integers with polynomials, this last pattern on the right side is exactly what we typically have with the usual method on rational fractions, but with this equality-based method arrived at faster, in one written step.

A major reason we teach fraction addition/subtraction with the LCD is because it is an anticipation of rational function addition/subtraction.
Once we have an LCD, we can in one written step go straight to a single noncomplex fraction in fraction addition/subtraction, just as we can with fraction multiplication/division. In algebra, being able to handle expressions of which fractional forms are a part is an important skill that many have a hard time with. So my ultimate motivation here is about making this aspect of algebra easier. Message was edited by: Paul A. Tanner III
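A direct translation of this one-step rule into code (a quick sketch in Python, using only the standard library) looks like this:

from math import gcd
from functools import reduce

def add_fractions(terms):
    """terms is a list of (numerator, denominator) pairs; signed numerators
    handle subtraction.  Returns (numerator, denominator) of the single
    unsimplified fraction  (sum_i (m/b_i)*a_i) / m,  where m is the LCD."""
    denominators = [b for _, b in terms]
    m = reduce(lambda x, y: x * y // gcd(x, y), denominators)  # least common denominator
    numerator = sum((m // b) * a for a, b in terms)            # "m over the bottom times the top"
    return numerator, m

# The example above: 3/4 + 5/6 - 7/8
print(add_fractions([(3, 4), (5, 6), (-7, 8)]))   # (17, 24)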
{"url":"http://mathforum.org/kb/message.jspa?messageID=7901059","timestamp":"2014-04-20T04:26:22Z","content_type":null,"content_length":"40584","record_id":"<urn:uuid:2947f6ce-6259-4902-8c51-789b97d6eefe>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: BANACH SPACES WITH THE 2-SUMMING PROPERTY
A. Arias, T. Figiel, W. B. Johnson and G. Schechtman

Abstract. A Banach space X has the 2-summing property if the norm of every linear operator from X to a Hilbert space is equal to the 2-summing norm of the operator. Up to a point, the theory of spaces which have this property is independent of the scalar field: the property is self-dual and any space with the property is a finite dimensional space of maximal distance to the Hilbert space of the same dimension. In the case of real scalars only the real line and real ℓ₁² have the 2-summing property. In the complex case there are more examples, e.g., all subspaces of complex ℓ₁³ and their duals.

0. Introduction: Some important classical Banach spaces, in particular C(K) spaces, L1 spaces, the disk algebra, as well as some other spaces (such as quotients of L1 spaces by reflexive subspaces [K], [Pi]), have the property that every (bounded, linear) operator from the space into a Hilbert space is 2-summing. (Later we review equivalent formulations of the definition of 2-summing operator. Here we mention only that an operator T : X → ℓ₂ is 2-summing provided that for all operators u : ℓ₂ → X the composition Tu is a Hilbert-Schmidt operator; moreover, the 2-summing norm
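For reference, the definition behind these statements, in common textbook notation (not quoted from the paper itself): a bounded operator T : X → Y is 2-summing, with 2-summing norm π₂(T), if there is a constant C such that for every finite sequence x₁, ..., xₙ in X

( Σᵢ ‖T xᵢ‖² )^{1/2} ≤ C · sup_{‖x*‖ ≤ 1} ( Σᵢ |x*(xᵢ)|² )^{1/2},

and π₂(T) is the smallest such C. The 2-summing property of X then says that π₂(T) = ‖T‖ for every operator T from X into a Hilbert space.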
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/201/1604432.html","timestamp":"2014-04-17T09:48:42Z","content_type":null,"content_length":"8356","record_id":"<urn:uuid:a3407dcb-fc8d-48a6-9143-b67f3deabdc7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 2001 [00263]

Re: Sparse Matrix, Memory Allocation

• To: mathgroup at smc.vnet.net
• Subject: [mg31654] Re: [mg31608] Sparse Matrix, Memory Allocation
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Fri, 23 Nov 2001 05:46:25 -0500 (EST)
• References: <200111161138.GAA07073@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

Inkyu Rhee wrote:
> I have a 70000 by 70000 (or more) banded sparse matrix,
> and this matrix will be updated by a specific law for 50 loops.
> In each loop, I need the linear solution of this.
> My problems are:
> I could not specify this matrix:
> K=Table[0,{70000},{70000}]; (* Initialize the matrix K *)
> When I try this on my machines ((1) sun: ram 750M, swap 2665M,
> (2) Windows: A800mhz, 128Mb),
> the machine gives me 'Out of Memory, Exiting'.
> How do you specify this matrix efficiently?
> If this works well, I will update these components of the matrix
> using a certain law. Then I need to solve the equations.
> Developer`SparseLinearSolve[K,x]
> I tried this part using 10000 by 10000 instead of 70000.
> It also gives me 'Out of Memory ...'.
> Thanks for any help,
> I. Rhee

In[3]:= ?Developer`SparseLinearSolve
SparseLinearSolve[smat, vec] solves a sparse linear system; the matrix smat is represented in the form {{i1, j1}->a1, {i2, j2}->a2, ... }, so that the element at position ik, jk has value ak and all unspecified elements are taken to be zero.

So we need to get the matrix appropriately formatted. The code below will generate a tridiagonal matrix, first as three vectors and then put into the form of a sparse matrix representation.

(* super-, main- and sub-diagonal of a random tridiagonal matrix *)
tridiagonal[n_] := {Table[Random[],{n-1}], Table[Random[],{n}], Table[Random[],{n-1}]}

(* convert the three diagonals to the {i,j}->value sparse format *)
toSparseMat[td_] := With[{n=Length[td[[2]]]},
  Join[Table[{i,i+1}->td[[1,i]], {i,n-1}],
       Table[{i,i}->td[[2,i]], {i,n}],
       Table[{i+1,i}->td[[3,i]], {i,n-1}]]]

Now we generate a particular example of dimension 100000.

n = 100000;
td = tridiagonal[n];
smat = toSparseMat[td];
rhs = Table[Random[], {n}];

In[28]:= Timing[soln = Developer`SparseLinearSolve[smat, rhs];]
Out[28]= {78.42 Second, Null}

A bit slow perhaps but it seems to work.

Daniel Lichtblau
Wolfram Research
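For comparison, the same kind of banded system can be set up and solved with sparse data structures in Python/SciPy. This is a sketch, not part of the original 2001 thread, and it assumes NumPy and SciPy are installed:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 100_000
rng = np.random.default_rng(0)

# Tridiagonal matrix stored sparsely: sub-, main- and super-diagonals only.
A = diags(
    [rng.random(n - 1), rng.random(n), rng.random(n - 1)],
    offsets=[-1, 0, 1],
    format="csc",
)
b = rng.random(n)

x = spsolve(A, b)             # direct sparse factorization and solve
print(np.allclose(A @ x, b))  # basic sanity check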
{"url":"http://forums.wolfram.com/mathgroup/archive/2001/Nov/msg00263.html","timestamp":"2014-04-19T19:46:42Z","content_type":null,"content_length":"36439","record_id":"<urn:uuid:0cc9c00c-5d2b-48a3-877b-ed2728e3bcee>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Brazilian Journal of Chemical Engineering
Print version ISSN 0104-6632
Braz. J. Chem. Eng. v.22 n.4 São Paulo Oct./Dec. 2005

Application of the wavelet image analysis technique to monitor cell concentration in bioprocesses

G. J. R. Garófano; C. G. Venancio; C. A. T. Suazo; P. I. F. Almeida*
Federal University of São Carlos, Department of Chemical Engineering, Phone: +(55) (16) 3351-8264, Fax: (55) (16) 3351-8266, Via Washington Luís, Km 235, CEP: 13565-905, São Carlos - SP, Brazil.
E-mail: claudio@power.ufscar.br, E-mail: gerson.garofano@chemtech.com.br, E-mail: pauloalmeida@power.ufscar.br

ABSTRACT

The growth of cells of great practical interest, such as the filamentous cells of the bacterium Streptomyces clavuligerus, the yeast Saccharomyces cerevisiae and the insect Spodoptera frugiperda (Sf9) cell, cultivated in shaking flasks with complex media at appropriate temperatures and pHs, was quantified by the new wavelet transform technique. This image analysis tool was implemented using Matlab 5.2 software to process digital images acquired of samples taken of these three types of cells throughout their cultivation. The values of the average wavelet coefficients (AWCs) of simplified images were compared with experimental measurements of cell concentration and with computer-based densitometric measurements. AWCs were shown to be directly proportional to measurements of cell concentration and to densitometric measurements, making evident the great potential of the wavelet transform technique to quantitatively estimate the growth of several types of cells.

Keywords: Image analysis; Wavelet transform; Streptomyces clavuligerus; Saccharomyces cerevisiae; Spodoptera frugiperda; Cell growth.

INTRODUCTION

Even though for several decades efforts by biochemical engineers to monitor and to control bioprocesses by monitoring indirect variables such as pH, temperature and dissolved oxygen have been successful, the quantitative monitoring of cell growth has still not been fully achieved. Over the past decade, several modern resources for monitoring biomass have been made available, including the outstanding sensor of Bitter et al. (1998), which allows microscopic observation of microorganisms in situ in a fermentor; the biomass estimator based on capacitance measurements that makes monitoring of adhered and suspended cell concentration possible (Coremans et al., 1996); and the several types of optical sensors evaluated by Konstantinov et al. (1994). However, these resources involve measurement methodologies that can only be used in very specific applications and that are usually very expensive.

Paradoxically, even though cell concentration is the most important variable in a bioprocess, in practice it is the least monitored, as a consequence of the experimental difficulties encountered. It normally needs to be quantified in an aseptic, noninvasive, discriminative manner (viable and nonviable), in real time and, above all, it must be reliable. Due to these difficulties, little progress has been made in quantification of cell concentration in bioprocesses, a situation that is hindering the use of techniques "based on knowledge" (Shioya et al., 1999) to control and optimize bioprocesses on a large scale.

The image analysis technique has been under development since the 90s, and is attracting a lot of attention as a resource for use in monitoring biomass and cell morphology in a fast, robust and economical way (Thomas & Paul, 1996; Pons et al., 1998).
According to several researchers' predictions (Konstantinov et al. ,1994; Shioya et al., 1999), it will be a powerful tool for implementing modern strategies of control and bioprocess optimization, such as the ones that are based on so-called expert systems. The significant recent progress in relation to the cost and performance of new computers make it possible to implement complex numerical procedures in a very short period of time, so the analysis of images is becoming a very useful tool in bioprocess monitoring and control. Densitometric techniques (grey-scale analysis of a given image) were used successfully by Treskatis et al. (1997) for biomass quantification of a submerged culture of the filamentous bacterium Streptomyces tendae, in an attempt to accompany the fermentative process on line. On the other hand, the new mathematical wavelet transform technique in its computer-based version is attracting the attention of specialists in the field of image processing for its capacity to represent attributes of images in a much simpler, efficient and compact way than the traditional Fourier transform (Misiti et al. ,1997), characteristics which have been increasing interest in applications of pattern recognition in a variety areas of human activity (the finance market, identification of human voice and of fingerprints, etc.). However, although it has commercial algorithms such as the MATLAB 5.2 toolbox that facilitate its use, this tool still had not been tested in computer-based image processing to quantify and characterize cell growth. Hence, the objective of the present work is to evaluate the potential of the wavelet transform technique by comparing it with traditional measurements (dried biomass and cell counting) and with densitometry to estimate cell concentration when cultivating cells of great practical interest, such as those of the bacterium Streptomyces clavuligerus, the yeast Saccharomyces cerevisiae and the insect Spodoptera frugiperda (Sf9) cell. Signal Analysis Using the Wavelet Transform Technique Traditionally, Fourier transform has been used to process stationary signals acquired by computers. In this way, the representative spectrum of frequencies is obtained from the time series produced during acquisition of the signal by the computer. For nonstationary signals, typical of biological processes, the existing methodologies have not been fully developed. Windowed Fourier transform, also called short-time Fourier transform, was first applied by Gabor (1946), using a Gaussian-type window. For a given signal f(t), a conveniently defined signal g(t-t[0]) is applied to a window of time that moves along with the original signal, forming a new family of functions: Functions formed in this way are centered on t[0] and have a duration defined by the characteristic time window of the function g(t). Windowed Fourier transform is thus defined as This transform is calculated for all t[0] values and it gives a representation of the signal f(t) in the time-frequency domain. If a space function f(x) instead of a time signal, is considered, a representation is given in the space-frequency domain. However, as a windowed Fourier transform represents a signal by the sum of its sine and cosine functions, it restricts the flexibility of the function g(t-t[0]) or g(x-x[0]), making a characterization of a signal and simultaneous location of its high-frequency and low-frequency components difficult in the time-frequency domain or the space-frequency domain. 
Wavelet transform was developed to overcome this deficiency of windowed Fourier transform in representing non-stationary signals. Wavelet transform is obtained from a signal by dilation-contraction and by the translation of a special wavelet within the time or space domain. The expansion of this signal into wavelets thus permits the signal's local transient behavior to be captured, while the sine and cosines can only capture the overall behavior of the signal as they always oscillate indefinitely. In the Fourier analysis, every periodic function having a period of 2p and an integrable square is generated by an overlay of exponential complexes, W[n](x)=e^inx, n=0, +1, +2...., obtained by dilations of the function W (x) = e^ix:W[n](x) = W(nx). Extending the idea to space for Y integrable square functions, the following is defined: The function y is called a mother wavelet, where a is the scale factor and b is the translation parameter. The family of simpler wavelets, which will be adopted in the present work, is that of the Haar wavelet: When constructing images on the computer, the discreet image composition points (pixels) and the gray-scale tones of which each pixel is composed, are usually multiples of two. It is thus convenient to redefine the wavelets in an orthogonal binary base. For the unidimensional nonstationary functions f(x) that decrease to zero when x®¥, the following assumption is normally adopted : The scale factor of 2^-jk is called the localization or dyadic translation and k is the translation index associated with the localization, where j and k Î Z. Meyer (1985) proved that wavelets thus defined are orthogonal, i.e., equal to the scalar product and d refer to the delta function of Dirac. Thus, the function f(x) can be rewritten as follows: The values of the constants c[j,k] are obtained by wavelet transform in its discrete form. Then f(x) is expanded into a series of wavelets with their coefficients obtained from The wavelet transform can also be calculated using special filters called Quadrature Mirror filters, as proposed by Mallat (1989). They are defined as a low-pass filter, associated with the coarser scale, and a high-pass filter to characterize the details of the signal. The signal f(x) then is described as: In the expansion of f(x) by equation (8.a), the first term represents the approximation of the signal and the second the signal details, filtered by the approximation. The function f[jo,k] is denominated a scale function or father wavelet, and it is responsible for obtaining the approximation of the signal, while the mother wavelets, y[j,k], are responsible for the generation of the details filtered by the approximation. For the family of Haar wavelets, the scale function is The mother wavelets, responsible for the details in the Haar family, are expressed as Image Analysis Utilizing the Wavelet Transform Technique The "filming" of a process of cell growth can be understood as the generation of a sequence of pictures or images, captured over a period of time. An image of that sequence will then be the space registration in two dimensions of a phase of the process, characterizing a specific instant in cell growth. 
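In common textbook notation (which may differ slightly in normalization from the conventions adopted in this work), the objects referred to above take the forms

ψ_{a,b}(x) = |a|^{-1/2} ψ((x - b)/a)        (continuous wavelet family),

ψ(x) = 1 for 0 ≤ x < 1/2,  ψ(x) = -1 for 1/2 ≤ x < 1,  ψ(x) = 0 otherwise        (Haar mother wavelet),

ψ_{j,k}(x) = 2^{j/2} ψ(2^{j} x - k),  j, k ∈ Z        (dyadic wavelets),

and a square-integrable signal is expanded as f(x) = Σ_{j,k} c_{j,k} ψ_{j,k}(x), with coefficients c_{j,k} = ∫ f(x) ψ_{j,k}(x) dx computed by the wavelet transform in its discrete form.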
To represent that image using wavelets, a possibility would be to build a base of orthogonal functions derived from the tensorial product of two unidimensional bases having different scales: This base may be represented by or, in other words, we have a father wavelet and three distinct mother wavelets which sweep the image in horizontal, vertical and diagonal directions: The father wavelet originating in the scale product represents the coarse part of the image and three distinct mother wavelets originating in the crossed products represent the horizontal (h), vertical (v) and diagonal (d) detais. An integrable square function can then be written in the following form: with the wavelet coefficients given by with k (k[1] ,k[2]) as translation parameters in the directions of x and y. The Matlab 5.2 software used adopts a "pyramidal" algorithm (Misiti et al., 1997) to calculate the discreet wavelet transform, as mentioned here. The algorithm uses low-pass [e[k]] and high-pass [h [k]] filters to determine the coefficients cj[0,k] and d^µ [j, k] for each scale desired. For each value of the ordered pair (k[1], k[2] ) corresponding to a certain pixel (x,y) of the image, a value of cj [0,k] and three values of d^µ [j,k] are found, representing the approximation coefficient and the detail coefficients, respectively. From the coarser scale j[o], an approximate figure with half the resolution of the original image is obtained and another three figures with details in the horizontal, vertical and diagonal directions are obtained. These three images have the same resolutions as the approximate figure. The approximate figure will constitute the starting point for calculation of quantity of cells in an image, since when the Haar wavelet, whose values alternate between 0 and 1, is used the image is just represented by the sum of the coefficients, i. e., equation 14 is reduced to f(x,y) =S cj[0,k]. In reality, Matlab 5.2 provides the average value of the coefficients as the final processing result. Varying the level j>j[0], a reduction in scale is produced and a new quartet of figures is generated, an approximate one and three of detail ones, starting the previous image with the resolution again reduced by half. The original image can be reconstructed following the procedure in reverse order. Morettin (1999) presents this and other procedures to determine the wavelet transform of two-dimensional images. Cell Species, Culture Media and Cultivation Conditions Three species of cells were cultivated in a shaker (rotating incubator, New Brunswick Scientific G-25KC), establishing conditions favorable to good cell growth in each case. At predefined time intervals, samples were taken for quantification of cell concentration, either by specific experimental methods for each cell type or by image analysis techniques, as described below. The bacterium Streptomyces clavuligerus NRRL l3585 was aerobically cultivated in medium distributed betwen several 250 mL Erlenmeyer flasks with cotton plugs. Each flask contained 50mL of medium with 15g/L of glycerol, 0.8 g/L of K[2]HPO[4] and 32 g/L of peptone with the pH adjusted to 6.5± 0.1. The Erlenmeyer flasks with the inoculated medium were incubated in the shaker at 28 ºC and 250 rpm for 72 hours. The baker yeast, Saccharomyces cerevisiae (Fleischmann), was anaerobically cultivated in a 500 mL Erlenmeyer flask with a rubber stopcock. 
The flask contained 250mL of culture medium of the following composition: 2g of yeast extract, 1.0g of K[2]HPO[4], 1.3g of NH[4]Cl, 0.82g of MgSO[4].7H[2]O, 1.1g of sodium citrate, 1.5g of citric acid, 1.0 g of CaCl[2]H[2]O and 100g of glucose with the pH of the medium adjusted to 4.2±0.1. The inoculum in the Erlenmeyer flask was incubated in the shaker at 30º C and 200 rpm for 9 hours. To cultivate cells of the insect Spodoptera frugiperda, they were first activated after being stored in liquid nitrogen at -196 °C to recover their normal activity. The same cells were then used to inoculate ~5 mL of Sf-900II medium from Gibco in T flasks with a 50mL capacity. Then the contents of the T flasks were used as inoculum for 12 mL of the same medium as that contained in a 100 mL Schott flask. The flask containing the inoculated medium was put in a shaker for incubation at 28 °C and 100 rpm for 11 days. Oxygenation of the medium was achieved by opening the flask daily in a safety cabinet for air renewal. All the components used in the preparation of media for the cultivation of the three type of cells were of analytical grade. Quantification of Cell Concentration Streptomyces clavuligerus: the biomass was quantified using the dry mass method, filtering through a Millipore membrane with a 0.45mm pore diameter and drying at 105±2ºC for 4 hours. To analyze the images, 1mL of fermentation broth was diluted ten times with 5 mL of methylene-blue solution with the composition used by Tucker et al. (1992) and 4 mL of distilled water and then slides with cover slips having samples of colored cells were prepared for analysis under an optical microscope. Saccharomyces cerevisiae: the biomass was quantified by the dry mass method, filtering through a Schleicher & Schüll membrane with a 0.45 µm pore diameter and drying at 105±2 ºC for 4 hours. To analyze the images, 2mL of fermented medium containing Sac. cerevisiae yeast was mixed with 2 mL of methylene blue solution (0.3 g in 130 mL of 22% v/v ethyl alcohol solution). A sample of 5 µL of the yeast suspension was loaded in a Neubauer chamber and covered with a cover slip. Spodoptera frugiperda: To determine cell concentration, samples of several dilutions were taken, along with another 10 µL of coloring, and the remainder was diluated with SPB (saline phosphate buffer), for a total volume of 100µL, and then cells were counted using an Olympus CK3 inverted-field optical microscope. To analyze the images, a 5µL sample specimen was put on a microscope slide and covered with a cover slip. At preset times, in conjunction with counting analysis, samples of the Sf9 insect cell were removed from 90µL of suspension with a micropipette and were placed in 10µL of a 0.2% Trypan-blue dye ethyl solution. Then a 10µL specimen of this sample was put in a Neubauer chamber and covered with a cover slip. Preparation of the Samples for Image Analysis For all three cases of cells being studied, the samples prepared in the Neubauer chamber and on the slide were examined using an optical microscope (Olympus BX50) setup with a video camera (Sony CCD-Iris DXC-107A) connected to a microcomputer (Pentium 100 MHz, 65Mb RAM) to acquire and process 20 different images chosen randomly by sample, with a magnification of 200x for Sac. cerevisiae, 100x for Str. clavuligerus and 200x for Sf9. Application of Image Analysis Techniques Before applying the wavelet transform and densitometric techniques, the images were pretreated to filter out undesirable particles (solids, debris, etc.) 
as well as to maintain a neutral and uniform background. The process of image treatment is illustrated in Figures 1 and 3 for the cells having a more complex morphology, which were used in this study. Besides permitting a quite compact image to be captured and using little computer memory, the pretreatment helps to reveal the main attributes of the cells analyzed with good definition to the naked eye. When processing an image using the Haar-1-family MATLAB 5.2 wavelet technique, a wavelet transform is generated in two dimensions composed of four images: an approximate image and three other images called "details". The approximate image is characterized by high-scale and low-frequency components, while the other three images have small-scale and high-frequency components, resulting from a filtering process with vertical, horizontal and diagonal sweeps (Misiti et al., 1997). The process decomposition diagram of an indexed image using the wavelet transform analysis toolbox can be seen in Figure 2 and the four images generated are shown in Figure 4. In this work, special attention was paid to the average wavelet coefficient (AWC) as a measure of the quantity of cells in accordance with equation 14, as postulated above. The Matlab 5.2 image processing toolbox and the wavelet software were used to analyze pretreated images served as a tool to calculate the AWCs, a process wich took approximately 20 minutes. It was necessary to carry out a densitometric analysis of the images obtained for the purpose of comparing the results of the wavelet technique results with another computer-based method. In this work, the densitometric technique used was that proposed by Treskatis et al. (1997), which is based on measurement of the area occupied by the cells and the maximum and average cell grey-scale tones. The gray-scale tone quotient logarithmic values represent a measurement of biomass thickness. By following the principle of Lambert and Beer, the biomass was estimated from Equation 17: where is the cell concentration by densitometry, k´is the proportionality factor, A is the projected area of cells, G[max] is the maximum grey-scale tone and G[mean] is the average grey-scale tone. In the three experiments with different cells, cell growth was analyzed by monitoring from the lag phase to the decline phase. The objective of these experiments was to verify the validity of the computed measurements in all of the growth phases. Figures 5a and 5d represent the image acquired and the one obtained after pretreatment, as described in "Materials and Methods" for the cells of Sac. cerevisiae. Figures 5b and 5e are for the Str. clavuligerus cells and figures 5c and 5f are for the Sf9 cells. In the three screens at the top of Figure 6, typical results for an intermediate position of the exponential phase of the cell growth obtained by wavelet analysis with the MATLAB 5.2 software for cells of Sac. cerevisiae, Str. clavuligerus and Sf9 are presented. Worthy of notice in the three results is the strong influence of the tone associated with the image background. The most significant case is that of the Sf9 cell, where the frequency of tones associated with the background is predominant (the AWC value is the smallest obtained for the three cells shown). Although Sac. cerevisiae has a morphology similar to that of Sf9, it appears in larger numbers in the acquired image and, as predicted by equation 14, the AWC is larger. On the other hand, the AWC obtained for the Str. 
clavuligerus cell, which has a more complex structure and a smaller background interference, is of the same order of magnitude as that obtained from the image of Sac. cerevisiae. In the three screens at the bottom of Figure 6, a typical result on the growth of the Str. clavuligerus cell is presented in pellet form. An increase in AWC is observed, indicating that it corresponds to the cell growth observed in the image. The result can also be visualized using histograms, which show the predominance of larger wavelet coefficients for the images with a higher content of biomass or number of cells. The results from the three cells studied are shown in Figure 7. It can be seen that the behavior of the AWC analysis curve and the densitometric analysis curve are very similar to curves obtained from experimental cell growth. For the bacterium Str. clavuligerus, the curve estimated by the AWC technique also has a maximum biomass growth at around 60 hours of fermentation followed by a decline. In the experiments with the yeast Sac. cerevisiae, the AWC analysis followed the biomass growth well, showing smaller deviations than the one given by densitometric measurement. In the case of the insect S. frugiperda cell, a fall off of cell growth at around the third to fourth day is shown. The equivalence of computer-based AWC and densitometric measurements and experimental estimates of cell concentration was also analysed by plotting both measurements in an adimensional manner, as shown in Figure 7. A linear relationship between the measurements can be observed in this figure with a very satisfactory correlation coefficient for the three cases of cells analyzed. The results of the statistical analysis can be seen in Table 1. The larger AWC deviations at the beginning of the experiment with Str. clavuligerus can be attributed to the fact that the samples were diluted by a factor of ten. In a way similar to what happens under a microscope with the manual counting method, the small number of cells in the images analyzed generated relatively large errors in the computer-based image processing. This problem was corrected in the AWC measurements of Sac. Cerevisiae and Sf9, generating results which were more in line with the respective experimental measurements (see Figures 7e). Use of software to pretreat the images and the Matlab 5.2 image processing toolbox and the wavelet software to calculate the AWCs took approximately 40 minutes per sample with a 100 MHz personal computer. The speed of the wavelet analysis can be increased by developing a specific program for serial image analysis. Matlab did not allow this flexibility, so the image processing time could not be reduced. Detailed image analysis was not adressed in this text because it would not greatly benefit the specific objective of the present work. However, it is worth remembering that, when doing wavelet tranform analysis, the detail images can produce useful recognition and quantification results on morphological patterns of different types of cells. This additional use could also be of great value in the area of monitoring and bioprocess control. The following conclusions were reached from the results obtained: 1. The average wavelet coefficients (AWCs) obtained from the images of the different types of cells in this study can be used as a quantifying measurement of cell concentrations during their growth. 2. 
The computer-based measurements obtained from the average wavelet coefficients (AWCs) and from the densitometric measurements showed a good linear correlation with experimental measurements of dry biomass and counting of cells in a Neubauer chamber. 3. Wavelet transform analysis was shown to be quite promising as a fast, simple and compact tool for measuring cell concentration and, therefore, has great potential for use in modern bioprocess monitoring techniques. We wish to thank Fapesp (Proc.00/08741-0) for the Scientific Initiation scholarship and we also wish to thank Capes for the Master's scholarship. a scale factor, t b translation parameter, t c approximation wavelet coefficient (-) d detailed wavelet coefficcient (-) f mathematical function (-) g mathematical function k[1] translation parameter in x coordinate (-) k[2] translation parameter in y coordinate (-) k´ proportionality factor, ML^-5 i imaginary number "-1 (-) t time (-) x independent variable (-) y independent variable (-) A projected area of cells, L^2 B base of orthogonal functions (-) F[g] windowed Fourier transform (-) G grey-scale tone in an image (-) W mathematical function (-) Greek Letters d delta function of Dirac (-) ? translation parameter in wavelet analysis (-) µ sweep directions (vertical, horizontal or diagonal) ? cell concentration by densitometry, ML^3 f father wavelet function or scale function (-) y mother wavelet function (-) Bitter, C., Wehnert, G. and Scheper, T. In situ Microscopy for On-line Determination of Biomass, Biotechnology Bioengineering, 60(1), pp. 24-35 (1998). [ Links ] Coremans, J.M., Joly, V., Dehottay P.H. and Gosselé, F., The Use of Capacitance and Conductance Measurements to Monitor Growth and Physiological States of Streptomyces virginiae in Industrial Fermentations, 6th Netherlands Biotechnology Congress, 12 March, Amsterdam (1996). [ Links ] Gabor, D., Theory of Communications, Jounal of the Institute of Electrical Engineering, London, III, 93, pp. 429-457 (1946). [ Links ] Konstantinov, K., Chuppa, S., Sajan, E., Tsai, Y., Yoon, S. and Golini, F., Real Time Biomass Concentration Monitoring in Animal Cell Cultures, TIBTECH, 12, pp. 324-333 (1994). [ Links ] Mallat, S., A Theory for Multiresolution Signal Decomposition: the Wavelet Representation, IEEE Pattern Analysis and Machine Intelligence, 11(7), pp. 674-693 (1989). [ Links ] Meyer, Y., Principe Díncertitude, Bases Itilbertienna et Algebres D'Operateurs, Seminare Bourbaki, 1985-1986, 662 (1985). [ Links ] Misiti, M., Misiti Y., Oppenheim G. and Poggi J. M., Wavelet Toolbox User's Guide, The Mathworks, New York (1997). [ Links ] Morettin, P.A., Ondas e Ondaletas Da Análise de Fourier à Análise de Ondaletas, Ed. Edusp, São Paulo, (1999), in Portuguese. [ Links ] Polikar, P., The Engineer's Ultimate Guide to Wavelet Analysis The Wavelet Tutorial, http:/engineering.rowan.edu/~polikar, 1999. [ Links ] Pons, M.N., Drowin J.F., Louvel L., Vanhoutte B., Vivier H. and Germain P., Physiological Investigations by Image Analysis, Journal of Biotechnology, 65, pp. 3-14 (1998). [ Links ] Shioya, S., Shimizu, K. and Yoshida, T., knowledge-based Design and Operation of Bioprocess Systems, Journal Bioscience Bioengineering, 87(3), pp. 261-266 (1999). [ Links ] Thomas, C.R. and Paul, G.C., Applications of Image Analysis in Cell Biology, Current Opinion Biotechnology, 7, pp. 35-45 (1996). [ Links ] Treskatis, S. K., Orgeldinger, V. and Gilles, E. 
D., Morphological Characterization of Filamentous Microorganisms in Submerged Cultures by On-line Digital Image Analysis and Pattern Recognition, Biotechnology Bioengineering, 53, pp. 191-201 (1997). [ Links ] Tucker, K. G., Kelly T., Delgrazia P. and Thomas C.R. , Fully Automatic Measurement of Mycelial Morphology by Image Analysis, Biotechnology Progress, 8, pp. 353-359 (1992). [ Links ] Received: October 20, 2005 Accepted: July 7, 2005 * To whom correspondence should be addressed
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-66322005000400010&lng=pt&nrm=iso&tlng=en","timestamp":"2014-04-20T04:47:39Z","content_type":null,"content_length":"70053","record_id":"<urn:uuid:68d86073-d26d-4222-b263-7558574318b1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
A184970 - OEIS

A184970  Irregular triangle C(n,g) counting the connected 7-regular simple graphs on 2n vertices with girth exactly g.

1, 5, 1547, 21609300, 1, 733351105933, 1

OFFSET
4,2

COMMENTS
The first column is for girth exactly 3. The row length sequence starts: 1, 1, 1, 2, 2, 2, 2, 2. The row length is incremented to g-2 when 2n reaches A054760(7,g).

LINKS
Table of n, a(n) for n=4..10.
Jason Kimberley, Index of sequences counting connected k-regular simple graphs with girth exactly g
Jason Kimberley, Incomplete table of i, n, g, C(n,g)=a(i) for row n = 4..11

EXAMPLE
1;
21609300, 1;
733351105933, 1;
?, 8;
?, 741;
?, 2887493;

CROSSREFS
Connected 7-regular simple graphs with girth at least g: A184971 (triangle); chosen g: A014377 (g=3), A181153 (g=4).
Connected 7-regular simple graphs with girth exactly g: this sequence (triangle); chosen g: A184973 (g=3), A184974 (g=4).
Triangular arrays C(n,g) counting connected simple k-regular graphs on n vertices with girth exactly g: A198303 (k=3), A184940 (k=4), A184950 (k=5), A184960 (k=6), this sequence (k=7), A184980 (k=8).

Sequence in context: A169620 A181992 A145694 * A184973 A184971 A014377
Adjacent sequences: A184967 A184968 A184969 * A184971 A184972 A184973

KEYWORD
nonn,hard,more,tabf

AUTHOR
Jason Kimberley, Feb 25 2011

STATUS
approved
{"url":"http://oeis.org/A184970","timestamp":"2014-04-16T22:27:12Z","content_type":null,"content_length":"17387","record_id":"<urn:uuid:f7c0b792-7396-437d-9c69-a82de79cbca0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
calculation of focal length TS wrote: > I am trying to calculate the focal length of the lens in my digital > camera. > I understand that a smaller number such as 18mm is wide angle and a > larger number such as 200mm is a tele-foto lens. > If I calculate the angle of view of the camera, can I calculate the > focal length? > If so, how - and do I use the horizontal angle or the vertical plane > (assuming the camera is used in landscape). I cannot determine from your post exactly what you want to do, but I am going to assume you want to determine what focal length you are using (or need to use) in a given situation. To do this, you will need three measurements, which are as follows: 1. The horizontal (or vertical) size of an object you're going to take a picture of. 2. The distance from that object to your camera when said object completely fills the image horizontally (or vertically if you used vertical in #1 above). 3. The horizontal (or vertical, if you used vertical in #1 above) size of the sensor in your camera. Items #1 and #2 must be measured using the same units (i.e. both of them are measured in feet, or both of them are measured in meters, or furlongs, or whatever you want, as long as the units are the same). Item #3 should be measured in millimeters. Most manuals for cameras list the sensor dimension in the specifications section. Take the distance from the object to your camera, divide it by the object's size (#2 / #1). Multiply the result by the size of the sensor, and you have a pretty good approximation of your current focal length. For wide angle lenses, the calculation is less accurate, but it should be good enough for most purposes. For example, suppose I want to find the focal length that lets me just fit a one foot ruler in the frame while standing ten feet away. #1 = 1 foot #2 = 10 feet The camera I'm using is a Canon 300D. Its sensor measures about 22.7 millimeters horizontally. #3 = 22.7 #2 / #1 = 10 feet / 1 foot = 10 That result multiplied by #3 gives us 10 * 22.7 = 227 millimeters. At ten feet, I would need a 227 millimeter focal length to fill the frame of my 300D with a ruler measuring one foot across. I would probably use a 200mm lens. > Is this calculation an absolute or does it differ for 35mm cameras > and digital cameras? As long as you completely fill the frame with the object you've measured, the calculation above should be independent of the type of film or size of sensor. Crop factor is irrelevant for the calculation
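For convenience, here is the same rule in a few lines of Python (my own sketch of the calculation described above, not code from the thread):

def focal_length_mm(object_size, distance, sensor_size_mm):
    """Approximate focal length needed so an object of `object_size`
    just fills the frame at `distance` (object_size and distance in the
    same units), on a sensor measuring `sensor_size_mm` millimeters in
    that direction."""
    return (distance / object_size) * sensor_size_mm

# The example from the post: a 1-foot ruler at 10 feet on a Canon 300D
# (sensor about 22.7 mm wide).
print(focal_length_mm(object_size=1, distance=10, sensor_size_mm=22.7))  # 227.0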
{"url":"http://www.velocityreviews.com/forums/t292518-calculation-of-focal-length.html","timestamp":"2014-04-20T19:24:53Z","content_type":null,"content_length":"65713","record_id":"<urn:uuid:0ed005aa-02bf-4038-88ed-e85e611d2521>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Zome Model of Gosset's Figure Gosset's Figure in 8 Dimensions, A Zome Model This is a Zome model of Gosset's figure 4[21] in 8 dimensions. The vertices of 4[21] (from here on called "Gosset's figure") coincide with the 240 shortest non-zero elements of the E(8) lattice, also known as the E(8) root system. This page has brief descriptions on how to build the model and the mathematics behind the model. For more details on the mathematics, one should consult the references given below. The Physical Model A requisite first step to understanding this model is to study the geometry and Zome model of the 600-cell. As it turns out, this model of Gosset's figure is the union of two concentric models of the 600-cell. It is a remarkable fact that there are only two "natural" models of the 600-cell which can be built using standard Zome pieces, and that the union of these two objects turns out to represent the Gosset figure. Before attempting to build the Zome model of Gosset's figure, one should build two different sizes of the Zome model of the 600-cell, where the ratio of the two radii, naturally, are in golden proportion. One may find instructions on how to build the Zome model of the 600-cell here. After doing this, one can move on to Gosset's figure. First build the smaller of the two Zome models of the 600-cell. Having done this, one may build the larger of the two where the smaller serves as a core. Upon building the larger of the two around the smaller, one will notice that many of the blue edges meet in false intersections. This model is the union of two concenteric models of the 600-cell, one large and one small. Exactly 30 of the blue edges on the outer shell of the small model intersect 30 of the blue edges of the core icosahedron of the large model. Instead of resorting to half-blue struts, one can build the model so that 30 of the long blue struts of the large 600-cell are bent around 30 of the medium blue struts of the small 600-cell. One should note that this is a "vertex model" of Gosset's figure. That is, although all of the vertices are represented by balls, many of the projected edges are not present in the model. The difficulty arises because there are simply too many edges. While it is true that all of the projected edges lie in the usual red, blue, or yellow zones of Zome, many of them are too short to be This model illustrates a phenomenon of "density variation". Gosset's figure in eight dimensions can be regarded to "reside" on the surface of an 8-dimensional ball. In fact, all of the vertices of Gosset's figure are equidistant from the origin in 8-dimensional space, and all of the edges lie in one orbit of the group acting on it. Thus, in some sense, Gosset's figure is "uniform" in the way it is distributed about the 7-dimensional sphere. The model, however, represents the image of an orthogonal projection from 8-dimensional space to 3-dimensional space. The measure on the 3-dimensional ball induced by this projection, assuming the measure on the 7-sphere to be invariant, has a greater density towards the center of the ball. One can clearly see this phenomenon in this model because it has more "stuff" near the center. Cross-Eyed and Parallel Free-View Stereographs A Few Mathematical Details Here is a brief outline of some of the mathematics of Gosset's figure and why Zome works so well to model it. If the reader needs more details, the references given below, especially the article by Moody and Patera, are certainly more complete and probably more clear. 
Denote by a and b the real roots of the quadratic x^2-x-1 such that a < b. Thus, a = (1-sqrt(5))/2, and b = (1+sqrt(5))/2. In other words, suppose b is the Golden Ratio, and a is its field-theoretic conjugate. Regard the quaternions H as the real span of {1,i,j,k}, where i, j, and k obey Hamilton's identity, i^2 = j^2 = k^2 = ijk = -1. Consider the set of four quaternions, S={1, (1/2)(-1-i-j-k), i, (1/2)(-bi-aj+k)}. This set S serves as a system of simple roots for the exceptional Coxeter group H(4). (Under quaternion multiplication, S generates a group isomorphic to the binary icosahedral group, the double cover in Spin(3) of the group of rotations of the regular icosahedron.) Moreover, these 120 quaternions constitute the vertices of the 600-cell. The Zome model represents the image under the w+ix+jy+kz --> ix+jy+kz from R^4 to R^3, where the balls occupy the locations of the projected vertices. Next, let T=aS, the set of quaternions in S shortened by the factor a = (1-sqrt(5))/2. Then, under quaternion addition, the union of S and T generates a rank-8 subgroup L of H. Let K denote the field extension of the rational numbers Q by adjoining the golden ratio b. There are two norms on K^4 which are important for this discussion. Choose a vector v in K^4. Regarding v as a four-dimensional vector, one may identify it with a quaternion and define the "quaternion" norm Q(v) = |v|^2 = x + y sqrt(5), where x and y are rational numbers and |.| denotes the usual quaternion length equivalent to the usual length function on R^4. Regarding v as an 8-dimensional vector over Q, one also has the "icosian" norm I(v) = x + y, where x and y are the rational numbers appearing in the quaternion norm. (Conway and Sloane describe this as the "Euclidean" norm.) Under the usual quaternion norm, L is a dense subset of the quaternions H. One may regard this subset as the source of many quasicrystalline phenomena. However, under the icosian norm, L is isomorphic to the root lattice L(8) for the exceptional Lie algebra E(8). There are 240 vectors of minimal norm in this lattice, and these are none other than than the vertices of Gosset's figure. The Zome model works because the small 600-cell is scaled down from the large 600-cell in the same way the set T is scaled down from the set S, namely, by dividing by the Golden Ratio. Cross-section of the brain of a Zomer, chronic stage. Veit Elser and N. J. A. Sloane. A highly symmetric four-dimensional quasicrystal. Phys. A: Math. Gen. 20 (1987), 6161-6168. J. H. Conway and N. J. A. Sloane. Sphere Packings, Lattices and Groups. 2nd ed. Springer-Verlag, New York, 1992. Thorold Gosset. On the regular and semi-regular figures in space of n dimensions. Messenger of Math. 29 (1900), 43-48. R. V. Moody and J. Patera. Quasicrystals and icosians. Phys. A: Math. Gen. 26 (1993), 2829-2853.
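As a quick consistency check of these definitions, note that a + b = 1 and ab = -1, so a² + b² = (a+b)² - 2ab = 3. The fourth element of S therefore satisfies

|(1/2)(-b i - a j + k)|² = (b² + a² + 1)/4 = 1,

so every element of S is a unit quaternion, as membership in the binary icosahedral group requires. For the two norms, Q(1) = 1 = 1 + 0·√5 gives I(1) = 1, while Q(a·1) = a² = 3/2 - (1/2)√5 gives I(a·1) = 3/2 - 1/2 = 1 as well: two icosians of different quaternionic lengths (the radii of the large and small 600-cells in the model) have the same icosian norm. This is why all 240 shortest nonzero vectors of the E(8) lattice sit at a single norm even though the Zome model displays them on two concentric spheres.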
{"url":"http://homepages.wmich.edu/~drichter/gossetzome.htm","timestamp":"2014-04-20T20:55:22Z","content_type":null,"content_length":"8309","record_id":"<urn:uuid:27e812c6-ef46-4aa2-b146-2c1c9d41c14b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Design and Evaluate Research in Education | A-B-A-B design Same as an A-B-A design, except that a second treatment is added. A-B-A design Same as the A-B design except a second baseline is added. A-B-C-B design Same as A-B-A-B design except that the second baseline phase is replaced by a modified treatment phase. A-B design A single-subject experimental design in which measurements are repeatedly made until stability is presumably established (baseline), after which treatment is introduced and an appropriate number of measurements are made. Abstract A summary of a study that describes its most important aspects, including major results and conclusions. Accessible population The population from which the researcher can realistically select subjects for a sample, and to which the researcher is entitled to generalize findings. Achievement test An instrument used to measure the proficiency level of individuals in given areas of knowledge or skill. Action plan A plan to implement change as a result of an action research study. Action research A type of research focused on a specific local problem and resulting in an action plan to address the problem. Age-equivalent score A score that indicates the age level for which a particular performance (score) is typical. Alpha coefficient see Cronbach alpha. Alternating-treatment A single-subject design for studying two or more treatments. Analysis of covariance A statistical technique for equating groups on one or more variables when testing for statistical significance; it adjust scores on a dependent variable for initial (ANCOVA) differences on other variables, such as pretest performance or IQ. Analysis of variance A statistical technique for determining the statistical significance of differences among means; it can be used with two or more groups. Anecdotal records Records of observed behaviors written down in the form of anecdotes. The best anecdotes tell exactly what the participant did or said without making evaluative statements in the process of reporting this information. Aptitude test An instrument used to predict performance in a future situation. Associational research A general type of research in which a researcher looks for relationships having predictive and/or explanatory power. Both correctional and causal-comparative studies are Assumption Any important assertion presumed to be true but not actually verified; major assumptions should be described in one of the first sections of a research proposal or report. Attitude scale A set of statements to which the participant responds. Average A number representing the typical score attained by a group of subjects. See measures of central tendency. B-A-B design The same as an A-B-A-B design, except that the initial baseline phase is omitted. Background question Question asked by an interviewer or on a questionnaire to obtain information about a respondent's background (age, occupation, etc.). Bar graph A graphic way of illustrating differences among groups. Baseline The graphic record of measurements taken prior to introduction of an intervention in a time-series design. Behavior questions See experience questions. Behavior rating scale Bias See researcher bias Bibliography A list of references that pertain to a topic. Biography/biographical A form of qualitative research in which the researcher works with the individual to clarify important life experiences Case study A form of qualitative research in which a single individual or example is studies through extensive data collection. 
Categorical data/variables Data (variables) that differ only in kind, not in amount or degree. Causal-comparative research Research to determine the cause for, or consequences of, existing differences in groups of individuals; also referred to as ex post facto research. Census An attempt to acquire data from each and every member of a population. Chaos theory A theory and methodology of science that emphasizes the rarity of general laws, the need for very large data bases, and the importance of studying exceptions to overall Chi-square test A non parametric test of statistical significance appropriate when the data are in the form of frequency counts; it compares frequencies actually observed in a study with expected frequencies to see if they are significantly different. Closed-ended question A question and a list of alternative responses form which the respondent selects; also referred to as a closed-form item. Cluster sampling/cluster The selection of groups of individuals, called clusters, rather than single individuals. All individuals in a cluster are included in the sample; the clusters are random sampling preferably selected randomly from the larger population of clusters. Coding The specification of categories in content analysis research. May be done ahead of time or emerge from familiarity with the raw data. Coefficient of The square of the correlation coefficient. It indicates the degree of relationship between two variables. determination (r2) Coefficient of multiple An index of the strength of the relationship among a combination of predictor variables and the criterion variable. Like the usual correlation coefficient, a coefficient correlation of zero would indicate that the variables are not related. On the other extreme, a coefficient of one would indicate that scores on the criterion variable can be perfectly predicted from the set of predictor variables. Cohort study A design (in survey research) in which a particular population is studied over time by taking different random samples at various points in time. The population remains conceptually the same, but individuals change (for example, graduates of San Francisco State University surveyed 10, 20 and 30 years after graduation). Collective case study One that studies multiple cases at the same time. Comparison group The group in a research study that receives a different treatment from that of the experimental group. Computer search of the A method whereby key terms are used to locate research literature about a topic. Concurrent validity The degree to which the scores on an instrument are related to the scores on another instrument administered at the same time, or to some other criterion available at the (evidence of) same time. Confidence interval An interval used to estimate a parameter that is constructed in such a way that the interval has a predetermined probability of including the parameter. Confirming sample In qualitative research; a sample selected to validate or extend previous findings. Constant A characteristic that has the same value for all individuals. Constitutive definition The explanation of the meaning of a term by using other words to describe what is meant. Construct-related validity The degree to which an instrument measures an intended hypothetical psychological construct, or nonobservable trait. (evidence of) Content analysis A method of studying human behavior indirectly by analyzing communications, usually through a process of categorization. 
Content-related validity The degree to which an instrument logically appears to measure an intended variable; it is determined to expert judgment. (evidence of) Contextualization Placing information/data into a larger perspective, especially in ethnography. Contingency coefficient An index of relationship derived from a crossbreak table. Contingency question A question whose answer depends on the answer to a prior question. Contingency table See crossbreak table. Control Efforts on the part of the researcher to remove the effects of any variable other than the independent variable that might affect performance on a dependent variable. Control group The group in a research study that is treated "as usual." Convenience sample A sample that is easily accessible. Correlational research Research that involves collecting data in order to determine the degree to which a relationship exists between two or more variables. Correlation coefficient (r) A decimal number between .00 and +1.00 and –1.00 that indicates the degree to which two quantitative variables are related. Counterbalanced design A design in which all groups receive all treatments. Each group receives the treatments in a different order, and all groups are posttested after each treatment. Criterion-referenced An instrument that specifies a particular goal, or criterion, for students to achieve. Criterion-related evidence The degree to which performance on an instrument is related to performance on other instruments intended to measure the same variable, or to other variables logically of validity (evidence of) related to the variable being measured. Criterion variable The variable that is predicted in a prediction study; also any variables used to assess the criterion-related validity of an instrument. Critical researchers Researchers who raise philosophical and ethical questions about the way educational research is conducted. Critical sample In qualitative research; a sample considered to be enlightening because it is unusual. Cronback alpha (_) An internal consistency or reliability coefficient for an instrument requiring only one test administration. Crossbreak table A table that shows all combinations of two or more categorical variables and portrays the relationship (if any) between the variables. Cross-sectional survey A survey in which data are collected at one point in time from a predetermined population or populations. Cross-validation Validation of a prediction equation with at least one group other than the group on which it was based. Crystallization Occasions, especially in ethnography, when different kinds of data 'fall in place' to make a coherent picture. Culture The sum of a social group's observable patterns of behavior and/or their customs, beliefs and knowledge. Curvilinear relationship A relationship shown in a scatterplot in which the line that best fits the points is not straight. Data Any information obtained about a sample or a population. Data analysis The process of simplifying data in order to make it comprehensible. Degrees of freedom A number indicating how many instances out of a given number of instance are "free to vary" _ that is, not predetermined. Demographics Characteristics of a sample or population (e.g., age, ethnicity, education). Dependent variable A variable affected or expected to be affected by the independent variable; also called "criterion" or "outcome variable." Derived scores A score obtained from a raw score in order to aid in interpretation. 
Derived scores provide a quantitative measure of each student's performance relative to a comparison Descriptive field notes Notes that describe what the researcher has observed. Descriptive studies Research to describe existing conditions without analyzing relationships among variables. Descriptors Terms used to locate sources during a computer search of the literature. Directional hypothesis A relational hypothesis stated in such a manner that a direction, often indicated by "greater than" or "less than," is hypothesized for the results. Discriminant function A statistical procedure for predicting group membership (a categorical variable) from two or more quantitative variables. Ecological generalizibility The degree to which results can be generalized to environments and conditions outside the research setting. Educational Resources Information Center (ERIC) Effect size (ES) An index used to indicate the magnitude of an obtained result or relationship. Emic perspective The view of reality of a cultural 'insider'; especially in ethnography. Empirical Based on observable evidence. Equivalent forms Two tests identical in every way except for the actual items included. Equivalent-forms method A method to obtain to reliability coefficient; a way of checking consistency by correlating scores on equivalent forms of an instrument. It is also referred to as alternate-forms reliability. Errors of measurement Inconsistency of individual scores on the same instrument. Eta (_) An index that indicates the degree of a curvilinear relationship. Ethnography/ethnographic The collection of data on many variables over an extended period of time in a naturalistic setting, usually using observation and interviews. Etic perspective The 'outsider' or 'objective' view of a culture's reality, especially in ethnography. Expectancy table A table used to analyze data obtained from a categorical variable and a criterion that is categorical. Experience questions Questions a researcher asks to find out what sorts of things an individual is doing or has done. Experiment A research study in which one or more independent variables is systematically varied by the researcher to determine the effects of this variation. Experimental group The group in a research study that receives the treatment (or method) of special interest in the study. Experimental research Research in which at least one independent variable is manipulated, other relevant variables are controlled, and the effect on one of more dependent variables is observed. Experimental variable The variable that is manipulated (systematically altered) in an intervention study by the researcher. Explanatory mixed method A study in which quantitative data are collected first and findings tested with subsequent quantitative data. Exploratory mixed method A study in which qualitative data are collected first and further clarified with qualitative data. External audit An individual outside the study is asked to review the methods and interpretations of a qualitative study. External criticism Evaluation of the genuineness of a document in historical research. External validity The degree to which results are generalizable, or applicable, to groups and environments outside the research setting. External validity of single-subject studies Extraneous event(s) See history threat. Extraneous variable A variable that makes possible an alternative explanation of results; an uncontrolled variable. 
Extraneous variable A variable that makes possible an alternative explanation of results; an uncontrolled variable. Factor analysis A statistical method for reducing a set of variables to a smaller number of factors. Factorial design An experimental design that involves two or more independent variables (at least one of which is manipulated) in order to study the effects of the variables individually, and in interaction with each other, upon a dependent variable. Feelings questions Questions researchers ask to find out how people feel about things. Field diary A personal statement of a researcher's opinions about people and events he or she comes in contact with during research. Field jottings Quick notes taken by an ethnographer. Field log A running account of how an ethnographer plans to, and actually does, spend his or her time in the field. Field notes The notes researchers take about what they observe and think about in the field. Findings see results (of a study). Five-number summary Consists of the lowest score, the first quartile, the median, the third quartile, and the highest score. This summary provides a quick overview about the central tendency, variability, and shape of the distribution with just five numbers. Flowchart Types of tally sheets used to indicate the frequency and direction of a participant's remarks. Focus group interview An interview conducted with a group in which respondents hear the views of each other. Follow-up study A study conducted to determine the characteristics of a group after some period of time. Foreshadowed problems The problem or topic that serves, in a general way, as the focus for a qualitative inquiry. Frequency distribution A tabular method of showing all the scores obtained by a group of individuals. Frequency polygon A graphic method of showing all of the scores obtained by a group of individuals. Friedman two-way analysis A nonparametric inferential statistic used to compare two or more groups that are not independent. of variance Gain score The difference between the pretest and posttest scores of a measure. Generalizing See ecological generalizibility; population generalizability. General references Sources that researchers use to identify more specific references (e.g., indexes, abstracts). Grade-equivalent score A score that indicates the grade level for which a particular performance (score) is typical. Grounded theory A form of qualitative research which derives interpretations inductively from raw data with continual interplay between data and emerging interpretations. Hawthorne effect A positive effect of an intervention resulting from the subjects' knowledge that they are involved in a study or their feeling that they are in some way receiving "special" attention. Histogram A graphic representation, consisting of rectangles, of the scores in a distribution; the height of each rectangle indicates the frequency of each score, or group of Historical research The systematic collection and objective evaluation of data related to past occurrences to determine causes, effects, or trends of those events that may help explain present events and anticipate future events. History threat The possibility that results are due to an event that is not part of an intervention, but which may affect performance on the dependent variable, thereby affecting internal validity. Holistic perspective The attempt to incorporate all aspects of a culture into an ethnographic interpretation. 
Homogeneous sample In qualitative research, a sample selected in which all members are similar with respect to one or more characteristics. Hypothesis A tentative, testable assertion regarding the occurrence of certain behaviors, phenomena, or events; a prediction of study outcomes. Implementation threat The possibility that results are due to variations in the implementation of the treatment in an intervention study, thereby affecting internal validity. Independent variable A variable that affects (or is presumed to affect) the dependent variable under study and is included in the research design so that its effect can be determined; sometimes called the "experimental" or "treatment" variable. Inferential statistics Data analysis techniques for determining how likely it is that results based on a sample or samples are similar to results that would have been obtained for the entire Informal interviews Less-structured form of interview, usually conducted by qualitative researchers. They do not involve any specific type or sequence of questioning, but resemble more the give and take of a casual conversation. Instrument Any device for systematically collecting data, such as a test, a questionnaire, or an interview schedule. Instrumental case study One that focuses on a particular individual or situation with little effort to generalize. Instrumentation Instruments and procedures used in collecting data in a study. Instrumentation threat The possibility that results are due to variations in the way data are collected, thereby affecting internal validity. Instrument decay Changes in instrumentation over time that may affect the internal validity of a study. Interaction An effect created by unique combinations of two or more independent variables; systematically evaluated in a factorial design. Interjudge reliability The consistency of two (or more) independent scorers, raters, or observers. Internal-consistency Procedures for estimating reliability of scores using only one administration of the instrument. Internal criticism Determining if the contents of a document are accurate. Internal validity The degree to which observed differences on the dependent variable are directly related to the independent variable, not to some other (uncontrollable) variable. Interval scale A measurement scale that, in addition to ordering scores from high to low, also establishes a uniform unit in the scale so that equal distance between two scores is of equal magnitude. Intervention A specified treatment or method that is intended to modify one or more dependent variables. Intervention study/research A general type of research in which variables are manipulated in order to study the effect on one of more dependent variables. Interview A form of data collection in which individuals or groups are questioned orally. Intrinsic case study One that attempts to generalize beyond the particular case. Item validity The degree to which each of the items in an instrument measures the intended variable. Justification (of a study) A rationale statement in which a researcher indicates why the study is important to conduct; includes implications for theory and/or practice. Key actors see key informants Key informants Individuals identified as expert sources of information, especially in qualitative research. Knowledge questions Questions interviewers ask to find out what factual information a respondent possesses about a particular topic. 
Kruskal-Wallis one-way A nonparametric inferential statistic used to compare two or more independent groups for statistical significance of differences. analysis of variance Kuder-Richardson approaches Procedures for determining an estimate of the internal consistency reliability of a test or other instrument from a single administration of the test without splitting the test into halves. Latent content The underlying meaning of a communication. Level of confidence The probability associated with a confidence interval; the probability that the interval will contain the corresponding parameter. Commonly used confidence levels in educational research are the 95 and 99 percent confidence levels. Level of significance The probability that a discrepancy between a sample statistic and a specified population parameter is due to sampling error, or chance. Commonly used significance levels in educational research are .05 and .01. Likert scale A self-reporting instrument in which an individual responds to a series of statements by indicating the extent of agreement. Each choice is given a numerical value, and the total score is presumed to indicate the attitude or belief in question. Limitation An aspect of a study that the researcher knows may influence the results or generalizability of the results, but over which he or she has no control. Linear relationship A relationship in which an increase (or decrease) in one variable is associated with a corresponding increase (or decrease) in another variable. Literature review The systematic identification, location, and analysis of documents containing information related to a research problem. Location threat The possibility that results are due to characteristics of the setting or location in which a study is conducted, thereby producing a threat to internal validity. Logic Using knowledge to create new knowledge. Longitudinal survey A study in which information is collected at different points in time in order to study changes over time (usually of considerable length, such a several months or years). Manifest content The obvious meaning of a communication. Manipulated variable See experimental variable. Mann-Whitney U test A nonparametric inferential statistic used to determine whether two uncorrelated groups differ significantly. Matching Consists of two groups of items listed in columns. Respondents are required to match the item in the left column that corresponds most closely with an item in the right Matching design A technique for equating groups on one or more variables, resulting in each member of one group having a direct counterpart in another group. Maturation threat The possibility that results are due to changes that occur in subjects as a direct result of the passage of time and that may affect their performance on the dependent variable, thereby affecting internal validity. Maximal variation sample In qualitative research, a sample selected in order to represent diversity in one or more characteristics. Mean/arithmetic mean The sum of the scores in a distribution divided by the number of scores in the distribution; the most commonly used measure of central tendency. Measures of central Indices representing the average or typical score attained by a group of subjects; the most commonly used in educational research are the mean and the median. Measures of variability Indices indicating how spread out the scores are in a distribution. Those most commonly used in educational research are the range, standard deviation, and variance. 
Mechanical matching A process of pairing two persons whose scores on a particular variable are similar. Median That point in a distribution having 50 percent of the scores above it and 50 percent of the scores below it. Member checking Participants in a qualitative study are asked to check the accuracy of the research report. Meta-analysis A statistical procedure for combining the results of several studies on the same topic. Mixed-method design A study combining quantitative and qualitative methods. Mode The score that occurs most frequently in a distribution of scores. Moderator variable A variable that may or may not be controlled but has an effect on the research situation. Mortality threat The possibility that results are due to the fact that subjects who are for whatever reason "lost" to a study may differ from those who remain so that their absence has an important effect on the results of the study. Multiple analysis of An extension of analysis of covariance that incorporates two or more dependent variables in the same analysis. covariance (MANCOVA) Multiple-baseline design A single-subject experimental design in which baseline data are collected on several behaviors for one subject, after which the treatment is applied sequentially over a period of time to each behavior one at a time until all behaviors are under treatment. Also used to collect data on different subjects with regard to a single behavior, or to assess a subject's behavior in different settings. Multiple correlation (R) A numerical index describing the relationship between predicted and actual scores using multiple regression. The correlation between a criterion and the "best combination" of predictors. Multiple perspectives The recognition and acceptance of multiple views of reality, especially in ethnography. Multiple regression A technique using a prediction equation with two or more variables in combination to predict a criterion (y = a + b1X1 + b2X2 + b3X3…). Multiple-treatment The carryover or delayed effects of prior experimental treatments when individuals receive two or more experimental treatments in succession. Naturalistic observation Observation in which the observer controls or manipulates nothing, and tries not to affect the observed situation in any way. Natural setting A specific place in which events and interactions among individuals typically occur. Negatively skewed A distribution in which there are more extreme scores at the lower end than at the upper, or higher, end. Nominal scale A measurement scale that classifies elements into two or more categories, the numbers indicating that the elements are different, but not according to order or magnitude. Nondirectional hypothesis A prediction that a relationship exists without specifying its exact nature. Nonequivalent control group An experimental design involving at least two groups, both of which may be pretested; one group receives the experimental treatment, and both groups are posttested. design Individuals are not randomly assigned to treatments. Nonparametric technique A test of statistical significance appropriate when the data represent an ordinal or nominal scale, or when assumptions required for parametric tests cannot be met. Nonparticipant observation Observation in which the observer is not directly involved in the situation to be observed Nonrandom sample/sampling The selection of a sample in which every member of the population does not have an equal chance of being selected. 
Normal distribution A theoretical "bell-shaped" distribution having a wide application to both descriptive and inferential statistics. It is known or thought to portray many human characteristics in "typical" populations. Norm group The sample group used to develop norms for an instrument. Norm-referenced instrument An instrument that permits comparison of an individual score to the scores of a group of individuals on the same instrument. Norms Descriptive statistics that summarize the test performance of a reference group of individuals and permit meaningful comparison of individuals to the group. Null hypothesis A statement that any difference between obtained sample statistics and specified population parameters is due to sampling error, or "chance." Objectivity A lack of bias or prejudice. Observational data Data obtained through direct observation. Observer bias The possibility that an observer does not observe objectively and accurately, thus producing invalid observations and a threat to the internal validity of a study. Observer effect The impact of an observer's presence on the behavior observed. Observer expectations The effect that an observer's prior information can have on observational data. One-group pretest-posttest A weak experimental design involving one group that is pretested, exposed to a treatment, and posttested. One-shot case study design A weak experimental design involving one group that is exposed to a treatment and then posttested. One-tailed test The use of only one tail of the sampling distribution of statistic – used when a directional hypothesis is stated. Open-ended question A question giving the responder complete freedom of response. Operational definition Defining a term by stating the actions, processes, or operations used to measure or identify examples of it. Opinion questions Questions a researcher asks to find out what people think about a topic. Opportunistic sample In qualitative research, a sample chosen to take advantage of conditions that arise during a study. Oral statements Some form of oral expression. Ordinal scale A measurement scale that ranks individuals in terms of the degree to which they possess a characteristic of interest. Outcome variable See dependent variable. Outlier Scores or other observations that deviate or fall considerably outside most of the other scores or observation in a distribution or pattern. Panel study A longitudinal design (in survey research) in which the same random sample is measured at different points in time. Parameter A numerical index describing a characteristic of a population. Parametric technique A test of significance appropriate when the data represent an interval or ratio scale of measurement and other specific assumptions have been met. Partial correlation A method of controlling the subject characteristics threat in correlational research by statistically holding one or more variables constant. Participant observation Observation in which the observer actually becomes a participant in the situation to be observed. Participants Individuals whose involvement in a study can range from providing data to initiating and designing the study. Participatory action Action research intended not only to address a local problem but also to empower individuals and to bring about social change. Path analysis A type of sophisticated analysis investigating causal connections among correlated variables. 
Pearson Product-moment correlation coefficient Pearson r An index of correlation appropriate when the data represent either internal or ratio scales; it takes into account each and every pair of scores and produces a coefficient between .00 and either + or – 1.00. Percentile rank An index of relative position indicating the percentage of scores that fall at or below a given score. Performance checklist Used to keep track of behaviors that occur. Performance test Measures an individual's performance on a particular task. Phenomenology/ A form of qualitative research in which the researcher attempts to identify commonalilties in the perceptions of several individuals regarding a particular phenomenon. phenomenological research Pie chart A graphic method of displaying the breakdown of data into categories. Pilot study A small-scale study administered before conducting an actual study _ its purpose is to reveal defects in the research plan. Population The group to which the researcher would like the results of a study to be generalizable; it includes all individuals with certain specified characteristics. Population generalizability The extent to which the results obtained from a sample are generalizable to a larger group. Portraiture A form of qualitative research in which the researcher and the individual being portrayed work together to define meaning. Positively skewed A distribution in which there are more extreme scores at the upper, or higher, end than at the lower end. Positivism A philosophic viewpoint emphasizing an 'objective' reality which includes universal laws governing all things including human behavior. Posttest-only control group An experimental design involving at least two randomly formed groups; one group receives a treatment, and both groups are posttested. Power of a statistical test The probability that the null hypothesis will be rejected when there is a difference in the populations; the ability of a test to avoid a Type II error. Practical action research Action research intended to address a specific local problem. Practical significance A difference large enough to have some practical effect. Contrast with statistical significance, which may be so small as to have no practical consequences. Predicted score The score a researcher predicts that someone will obtain when measured on one variable after it is known what score the person obtained when measured on another variable. Prediction The estimation of scores on one variable from information about one or more other variables. Prediction equation A mathematical equation used in a prediction study. Prediction equation A mathematical equation used in a prediction study. Prediction studies Attempts to determine variables that are related to a criterion variable. Prediction study An attempt to determine variables that are related to a criterion variable. Predictive validity The degree to which scores on an instrument predict characteristics of individuals in a future situation. (evidence of) Predictor variable The variable from which projections are made in a prediction study. Predictor variable(s) The variable(s) from which projections are made in a prediction study. Pretest-posttest control An experimental design that involves at least two groups; both groups are pretested, one group receives a treatment, and both groups are posttested. For effective control group design of extraneous variables, the groups should be randomly formed. 
Pretest-treatment The possibility that subjects may respond or react differently to a treatment because they have been pretested, thereby creating a threat to internal validity. Primary source Firsthand information such as the testimony of an eyewitness, an original document, a relic, or a description of a study written by the person who conducted it. Primary source Firsthand information such as the testimony of an eyewitness, an original document, a relic, or a description of a study written by the person who conducted it. Probability The relative frequency with which a particular event occurs among all events of interest. Problem statement A statement that indicates the specific purpose of the research, the variables of interest to the researcher, and any specific relationship between those variables that is to be, or was, investigated; includes description of background and rationale (justification) for the study. Procedures A detailed description by the researcher of what was (or will be) done in carrying out a study. Purpose (of a study) A specific statement by a researcher of what he or she intends to accomplish. Purposive sample A nonrandom sample selected because prior knowledge suggests it is representative, or because those selected have the needed information. Qualitative data Data that are not numerical. Qualitative research/study Research in which the investigator attempts to study naturally occurring phenomena in all their complexity. Qualitative variable A variable that is conceptualized and analyzed as distinct categories, with no continuum implied. Quantitative data Data that differ in amount or degree, along a continuum from less to more. Quantitative research Research in which the investigator attempts to clarify phenomena through carefully designed and controlled data collection and analysis. Quantitative variable A variable that is conceptualized and analyzed as distinct categories, with no continuum implied. Quasi-experimental designs A type of experimental design in which the researcher does not use random assignment of subjects to groups. Questionnaire A list of questions that the participant answers in writing or by marking answers on an answer sheet. Random assignment The process of assigning individuals or groups randomly to different treatment conditions. Random numbers, table of A table of numbers that provides one of the best means of random selection or random assignment. Random sample A sample selected in such a way that every member of the population has an equal chance of being selected. Random sampling Methods designed to select a representative sample by using chance selection so that biases will not systematically alter the sample. Random selection sampling The process of selecting a random sample. Range The difference between the highest and lowest scores in a distribution; measure of variability. Rating scale The rating scale is an instrument on which a researcher or participant or observer can record a rating of a behavior, a product, or a performance. Ratio scale A measurement scale that, in addition to being an interval scale, also has an absolute zero in the scale. Raw score The score attained by an individual on the items on a test or other instrument. Reflective field notes A record of the observer's thoughts and reflections during and after observation. Regressed gain score A score indicating amount of change that is determined by the correlation between scores on a posttest and a pretest (and/or other scores). 
It provides more stable information than a simple posttest-pretest difference. Regression line The line of best fit for a set of scores plotted on coordinate axes (on a scatterplot). Regression threat The possibility that results are due to a tendency for groups, selected on the basis of extreme scores, to regress toward a more average score on subsequent measurements, regardless of the experimental treatment. Relationship A connection between two qualities or characteristics (e.g., motivation and learning). Relationship study A study investigating relationships among two or more variables, one of which may be a treatment (method) variable. Reliability The degree to which scores obtained with an instrument are consistent measures of whatever the instrument measures. Reliability coefficient An index of the consistency of scores on the same instrument. There are several methods of computing a reliability coefficient, depending on the type of consistency and characteristics of the instrument. Relics Any object that can provide some information about the past. Replication Refers to conducting a study again; the second study may be a repetition of the original study, using different subjects, or may change specified aspects of the study. Representativeness The extent to which a sample is identical (in all characteristics) to the intended population. Representative sample A sample that is like the population in terms of relevant characteristics. Research The formal, systematic application of scholarship, disciplined inquiry, and most often the scientific method to the study of problems. Research bias see threat to internal validity. Research design The overall plan for collecting data in order to answer the research question. Also the specific data analysis techniques or methods that the researcher intends to use. Researcher bias A situation in which the researcher's hopes or expectations concerning the outcomes of the study actually contribute to producing various outcomes, thereby creating a threat to internal validity. Research hypothesis A prediction of study outcomes. Often a statement of the expected relationship between two or more variables. Research problem A problem that someone would like to research; it is the focus of a research investigation. Research proposal A detailed description of a proposed study designed to investigate a given problem. Research question A question that we can answer by collecting and analyzing data. Research report A description of how a study was conducted, including results and conclusions. Results (of a study) A statement that explains what is shown by analysis of the data collected; includes tables and graphs when appropriate. Retrospective interview A form of interview in which the researcher tries to get a respondent to reconstruct past experiences. Sample The group on which information is obtained. Sampling The process of selecting a number of individuals (a sample) from a population, preferably in such a way that the individuals are representative of the larger group from which they were selected. Sampling distribution The theoretical distribution of all possible values of a statistic from all possible samples of a given size selected from a population. Sampling error Expected, chance variation in sample statistics that occurs when successive samples are selected for the sample in systematic sampling. Sampling interval The distance in a list between individuals chosen when sampling systematically. 
Scatterplot The plot of points determined by the cross-tabulation of scores on coordinate axes; used to represent and illustrate the relationship between two quantitative variables. Scientific method A way of knowing that it is characterized by the public nature of its procedures and conclusions and by rigorous testing of conclusions. Search terms see descriptors. Secondary source Secondhand information, such as a description of historical events by someone not present when the event occurred. Self-checklist A list of characteristics or activities that the participants in a study reads and then checks to identify those characteristics that they possess or the activities that they have engaged in. Semistructured interview A structured interview, combined with open-ended questions. Sensory questions Questions asked by a researcher to find out what a person has seen, heard, or experienced through his or her senses. Short-answer items A type of supply item in which the respondent is required to supply a word, phrase, number, or symbol that is necessary to complete a statement or answer the question. Sign test A nonparametric inferential statistic used to compare two groups that are not independent. Simple random sample see random sample. Simulation Research in which an "artificial" situation is created and participants are told what activities they are to engage in. Single-subject designs Designs applied when the sample size is one; used to study the behavior change that an individual exhibits as a result of some intervention or treatment. Single-subject research Research that focuses on individual study participants, rather than groups. Skewed distribution A nonsymmetrical distribution in which there are more extreme scores at one end of the distribution than the other. Snowball sample In qualitative research, a sample selected as the need arises during a study. Split-half procedure A method of estimating the internal-consistency reliability of an instrument; it is obtained by giving an instrument once but scoring it twice – for each of two equivalent "half tests." These scores are then correlated. Stability The extent to which scores are reliable (consistent) over time. Standard deviation (SD) The most stable measure of variability; it takes into account each and every score in a distribution Standard error of a The standard deviation of the sampling distribution of a statistic. Standard error of estimate An estimate of the size of the error to be expected in predicting a criterion score. Standard error of An estimate of the size of the error that one can expect in an individual's score. measurement (SEMeas) Standard error of the The most stable measure of variability; it takes into account each and every score in a distribution.. difference (SED) Standard error of the difference between means Standard error of the mean The standard deviation of sample means that indicates by how much the sample means can be expected to differ if other samples from the same population are used. Standard score A derived score that expresses how far a given raw score is from the mean, in terms of standard deviation units. Static-group comparison A weak experimental design that involves at least two nonequivalent groups; one receives a treatment and both are posttested. Static-group The same as the static-group comparison design, except that both groups are pretested. pretest-posttest design Statistic A numerical index describing a characteristic of a sample. Statistical equating see statistical matching. 
Statistically significant The conclusion that results are unlikely to have occurred due to sampling error or "chance;" an observed correlation or difference probably exists in the population. Statistical matching A means of equating groups using statistical prediction. Statistical regression see regression threat. Statistics A numerical index describing a characteristic of a sample. Stratified random sampling The process of selecting a sample in such a way that identified subgroups in the population are represented in the sample in the same proportion as they exist in the Structured interview A formal type of interview, in which the researcher asks, in order, a set of predetermined questions. Subject characteristics The possibility that characteristics of the subjects in a study may account for observed relationships, thereby producing a threat to internal validity. Subjects Individuals whose participation in a study is limited to providing information. Survey A method of collecting information by asking a sample of participants questions in order to find out information about a population. Survey study/research An attempt to obtain data from members of a population (or a sample) to determine the current status of that population with respect to one or more variables. Systematic sampling A selection procedure in which all sample elements are determined after the selection of the first element, since each element on a selected list is separated from the first element by a multiple of the selection interval. Table of random numbers A table of numbers that provides one of the best means of random selection or random assignment. Tally sheet A device used by researchers to report the frequency of student behaviors, activities, or remarks. Target population The population to which the researcher, ideally, would like to generalize results. Testing threat A threat to internal validity that refers to improved scores on a posttest that are a result of subjects having taken a pretest. Test of significance A statistical test used to determine whether or not the obtained results for a sample are likely to represent the population. Test-retest method A procedure for determining the extent to which scores from an instrument are reliable over time by correlating the scores from two administrations of the same instrument to the same individuals. Theme A means of organizing and interpreting data in a content analysis by grouping codes as the interpretation progresses. Theoretical sample In qualitative research, a sample that helps the researcher understand or formulate a concept or interpretation. Thick description In ethnography, the provision of great detail on the basic data/information. Thick description In ethnography, the provision of great detail on the basic data/information. Time-and-motion logs Reporting of what is observed and the time it is observed. Time-series design An experimental design involving one group that is repeatedly pretested, exposed to an experimental treatment, and repeatedly posttested. Treatment variable see experimental variable. Trend study A longitudinal design (in survey research) in which the same population (conceptually but not literally) is studied over time by taking different random samples. Triangulation Cross-checking of data using multiple data sources or multiple data-collection procedures. Triangulation mixed method A study in which quantitative and qualitative data are collected simultaneously and used to validate and clarify findings. 
True-false item A statement that is either true or false and the respondent must indicate which it is. T score A standard score derived from a z score by multiplying the z score by 10 and adding 50. t-test for correlated means A parametric test of statistical significance used to determine whether there is a statistically significant difference between the means of two matched, or nonindependent, samples. It is also used for pre-post comparisons. t-test for correlated A parametric test of statistical significance used to determine whether there is a statistically significant difference between two proportions based on the same sample or proportions otherwise non-independent groups. t-test for independent A parametric test of significance used to determine whether there is a statistically significant difference between the means of two independent samples. t-test for independent A parametric test of statistical significance used to determine whether there is a statistically significant difference between two independent proportions. t-test for means A parametric technique for comparing two means. t-test for r A parametric technique for determining if there is a non-zero correlation among two variables in the population. Two-stage random sampling A process in which clusters are first randomly selected and then individuals are selected from each cluster. Two-tailed test Use of both tails of the sampling distribution of a statistic – when a nondirectional hypothesis is stated. Type I error The rejection by the researcher of a null hypothesis that is actually true. Also called alpha error. Type II error The failure of a researcher to reject a null hypothesis that is really false. Also called beta error. Typical sample In qualitative research, a sample judged to be representative of the population of interest. Unit of analysis The unit that is used in data analysis (individuals, objects, groups, classrooms, etc.). Unobtrusive measures Measures obtained without subjects being aware that they are being observed or measured, or by examining inanimate objects (such as school suspension lists) that can be used in order to obtain desired information. Validity The degree to which correct inferences can be made based on results from an instrument; depends not only on the instrument itself, but also on the instrumentation process and the characteristics of the group studied. Validity coefficient An index of the validity of scores; a special application of the correlation coefficient. Values questions see opinion questions. Variability The extent to which scores differ from one another. Variable A characteristic that can assume any one of several values, for example, cognitive ability, height, aptitude, teaching method. Variance (SD2) The square of the standard deviation; a measure of variability. Wilk's lambda The numerical index calculated when carrying out MANOVA or MANCOVA. z-score The most basic standard score that expresses how far a score is from a mean in terms of standard deviation units
{"url":"http://highered.mcgraw-hill.com/sites/0072532491/student_view0/glossary.html","timestamp":"2014-04-19T02:08:02Z","content_type":null,"content_length":"214714","record_id":"<urn:uuid:a90cecee-9691-4bda-bb29-5997c09dc7ff>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
To Identify the Important Soil Properties Affecting Dinoseb Adsorption with Statistical Analysis

The Scientific World Journal, Volume 2013 (2013), Article ID 362854, 7 pages
Research Article

^1State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Hohai University, Nanjing 210098, China
^2Institute of Meteorology and Climate Research, Karlsruhe Institute of Technology (KIT), 82467 Garmisch-Partenkirchen, Germany
^3National Center for Computational Hydroscience and Engineering, University of Mississippi, Oxford, MS 38655, USA
^4Nanjing Hydraulic Research Institute, Nanjing 210029, China

Received 8 March 2013; Accepted 3 April 2013
Academic Editors: H. Filik and A. Hursthouse
Copyright © 2013 Yiqing Guan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Investigating the influences of soil characteristic factors on the dinoseb adsorption parameter with different statistical methods is valuable for explicitly quantifying the extent of these influences. The correlation coefficients and the direct and indirect effects of soil characteristic factors on the dinoseb adsorption parameter were analyzed through bivariate correlation analysis and path analysis. With stepwise regression analysis, the factors that had little influence on the adsorption parameter were excluded. Results indicate that pH and CEC had only a moderate relationship with, and a low direct effect on, the dinoseb adsorption parameter because of their multicollinearity with other soil factors, whereas organic carbon and clay contents were found to be the most significant soil factors affecting the dinoseb adsorption process. A regression was thereby set up to explore the relationship between the dinoseb adsorption parameter and these two soil factors, the soil organic carbon and clay contents. About 92% of the variation of the dinoseb sorption coefficient could be attributed to the variation of the soil organic carbon and clay contents.

1. Introduction

Dinoseb (2-sec-butyl-4,6-dinitrophenol) is a member of the dinitrophenol family of pesticides, commonly used for controlling the growth of annual grassy and broadleaf weeds. It is highly persistent, which leads to accumulation in soil, and it has been found in many areas of the world [1–3]. Many countries have prohibited the use of dinoseb; in the USA, the EPA banned dinoseb usage in 1986. Much research focuses on dinoseb's toxic effects on human beings, animals, and microorganisms [4–6], and its measurement techniques have also been well studied [7–10]. After being applied to soil, the transport and fate of herbicides are controlled by many complicated mechanisms, including sorption to soil, uptake by plants, transport via runoff and leaching, biodegradation, photodegradation, volatilization, and chemical degradation [11, 12]. Sorption is one of the most important mechanisms that influence the presence of herbicides in soil [12]. To evaluate the sorption properties of herbicides, the popular methods include the batch equilibrium technique, column experiments, and field experiments. Compared with the other two techniques, batch experiments are easy and fast to perform, and the cost is low [13].
The results of batch sorption equilibrium experiments are usually fitted with a linear sorption model or the Freundlich sorption model to derive the sorption parameters. Since the sorption of herbicides is to a large extent determined by soil properties, such as organic carbon (OC) content [14, 15], clay content [16, 17], and pH value [18], multiple regression is usually applied to explore the relationships between the sorption parameters and soil environmental factors. However, in multilinear regression, if the predictor variables are not independent, multicollinearity is a common statistical phenomenon. Multicollinearity may not affect the goodness of the multiple regression prediction, but it affects the determination of the importance of each environmental factor. Stepwise regression can be used to correct for multicollinearity. It has been frequently applied in educational and psychological research, both to select useful subsets of variables and to evaluate the order of importance of variables [19]. Path analysis was developed around 1918 by Sewall Wright and is usually used to decompose correlations into different pieces for the interpretation of effects. The two methods have been applied in many fields, including biology, sociology, and econometrics [20, 21].

The objectives of this study, therefore, are (1) to use bivariate correlation analysis and path analysis to investigate the extent of these influences and to explicitly explain the direct and indirect effects of soil characteristic factors on dinoseb adsorption, (2) to use stepwise regression analysis to exclude the factors that have little influence on the adsorption parameter, and (3), based on these results, to set up a regression equation relating the adsorption parameter to the most important soil factors.

2. Materials and Methods

2.1. Soil Sample Collection

The soil samples were collected in the upper Rhone river valley in Southwest Switzerland. Dinoseb had been found in the groundwater of the Rhone plain. This alluvial plain is well cultivated and of great economic and ecological importance, but its groundwater is alleged to be highly vulnerable to contamination [22]. Along a transect, pits A, B, C, and D were excavated; the distances between the pits were 6.5 m, 8.1 m, and 6.5 m, respectively. At these four sites, altogether 55 small disturbed soil samples were collected at 15, 30, 55, 70, and 85 cm depths.

2.2. Characteristics of Dinoseb

The purity of the dinoseb product used in the study was 93% (Dr. Ehrenstorfer, Germany). The properties of dinoseb are summarized in Table 1.

2.3. Experimental Design

First of all, basic soil properties such as bulk density, porosity, particle size distribution, pH, cation exchange capacity (CEC), and organic carbon content were determined. Soil samples collected from the field were air-dried at room temperature and sieved at 2 mm. Then 3 g of dry soil was mixed with 6 mL of dinoseb solution at different concentrations in a 9 mL polypropylene centrifuge tube. The concentrations were 0, 1.5, 4.5, 9, and 15 mg/L, respectively. The tubes were shaken for 24 hrs on a rotary tumbler at 20°C. This duration was sufficient to achieve sorption equilibrium, but not long enough for chemical or biological transformations to significantly affect the results, as attested by sorption kinetics tests. The aqueous phase was separated from the solid phase by centrifugation for 15 min at 7000 rpm. The supernatant was filtered through a disposable 0.45 μm cellulose filter. The filtrates were analyzed by injection into a high performance liquid chromatograph with diode array detection (Hewlett Packard series 1050) using a C18 column of 25 cm length (VYDAC). The light absorption wavelength used for the detection of dinoseb was 265 nm. The flow was set at 1 mL/min; the solvents used for HPLC were distilled water, acetonitrile (purity ≥ 99.8%), and 0.05% trifluoroacetic acid. The initial composition of the flow was 40% acetonitrile, 40% water, and 20% trifluoroacetic acid. During each run, trifluoroacetic acid was kept constant at 20%; acetonitrile increased from 40% to 65% over the first 15 min, then increased to 80% over the next 5 min; acetonitrile then decreased back to 40% over 5 min and was held at this level until the end of the run, 30 min after the measurement started. The adsorbed dinoseb mass was calculated from the difference between the initial dinoseb concentration and that measured in the supernatant. All batch sorption experiments were conducted in
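The sorbed amount per gram of soil follows from the simple mass balance just described: the initial minus the equilibrium concentration, multiplied by the solution volume and divided by the soil mass. The sketch below is a minimal illustration of that calculation for the 3 g soil / 6 mL solution batch design; the equilibrium concentrations are made-up placeholder values, not the measured data.

```python
import numpy as np

# Batch design described above: 3 g of dry soil shaken with 6 mL of solution.
soil_mass_g = 3.0
solution_volume_l = 6.0 / 1000.0   # 6 mL expressed in litres

# Initial dinoseb concentrations (mg/L) and hypothetical equilibrium
# concentrations (mg/L) measured in the supernatant after 24 h of shaking.
c_initial = np.array([1.5, 4.5, 9.0, 15.0])
c_equilibrium = np.array([0.6, 2.0, 4.5, 8.5])   # placeholder values, not measured data

# Mass balance: sorbed amount per gram of soil, S (mg/g) = (C0 - Ce) * V / m.
sorbed_mg_per_g = (c_initial - c_equilibrium) * solution_volume_l / soil_mass_g

for c0, ce, s in zip(c_initial, c_equilibrium, sorbed_mg_per_g):
    print(f"C0 = {c0:4.1f} mg/L  Ce = {ce:4.1f} mg/L  S = {s * 1000:.2f} ug/g")
```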
2.4. Statistical Analysis

Correlation analysis and path analysis were used in this study to demonstrate the degree to which the variables interact with or interfere with each other and to identify the variable exerting the strongest influence. Stepwise multiple-linear regression analysis was used to identify the linear relationship between the dinoseb adsorption coefficients and the soil properties. Significance of differences was tested either with a parametric t-test or with F-statistics in ANOVA (analysis of variance).

Stepwise multiple-linear regression [24] is a multiple linear regression method used to analyze the linear relationship between a single dependent variable and several independent variables. It was selected for this research because (1) multiple-linear regression makes use of most of the directly observed experimental information that is available [25]; (2) the number of controlled variables (OC, CEC, pH, Clay) is fairly small, so the analysis can easily be performed with all of them included; (3) the bivariate correlations between the soil properties and the dinoseb adsorption values are not explicitly fixed, especially under the influence of multicollinearity; and (4) the problem of overfitting can be avoided by adding or deleting variables according to a specific criterion. Therefore, backward elimination [26] is applied to build up the final regression equation describing a predicted variable as a function of several independent variables. It follows these steps: first, all the independent variables are entered into the regression; second, the significance of the partial coefficient of each independent variable is analyzed, and the variable with the lowest significant contribution to the regression equation, judged against the removal criterion (alpha-to-remove value), is deleted; finally, the regression modeling and testing are repeated with the remaining variables, removing one at a time, until all remaining variables contribute significantly to the regression equation (a minimal sketch of this procedure is given after this section). Some issues with stepwise regression still exist, however, such as that it cannot explicitly account for the multicollinearity between controlled variables [27]. Because of the problem of multicollinearity in regression [28, 29], bivariate correlation analysis and path analysis [30], based on the assumed causal relationships, were adopted before setting up the stepwise multilinear regression to make explicit the rationale of the conventional regression calculations.

Path analysis is especially useful for decomposing the soil property effects on dinoseb adsorption into direct and indirect effects and for quantifying the collinearity in the regression model. Note that the direct and indirect effects depend importantly on how the model is built [31]. In this study, only the regression model that includes all variables was applied to path analysis, in order to capture the overall direct and indirect effects of the four soil properties on the dinoseb adsorption parameter.
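The following is a minimal sketch of the backward-elimination procedure described above, assuming ordinary least squares from the statsmodels library and using each coefficient's p-value as the removal criterion (for a single variable this is equivalent to the F-to-remove test). The column names and numbers in the data frame are placeholders with the same structure as the study's variables, not the measured data, and the code is not the software actually used in the study.

```python
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(df, response, predictors, alpha_to_remove=0.05):
    """Drop, one at a time, the least significant predictor until all p-values <= alpha."""
    remaining = list(predictors)
    while remaining:
        X = sm.add_constant(df[remaining])
        model = sm.OLS(df[response], X).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()                     # least significant remaining variable
        if pvals[worst] > alpha_to_remove:
            remaining.remove(worst)                # fails the alpha-to-remove criterion
        else:
            return model, remaining                # every remaining predictor is significant
    return None, remaining

# Placeholder data illustrating the structure of the data set (not the study's values).
df = pd.DataFrame({
    "OC":   [0.4, 1.2, 2.5, 0.8, 1.9, 3.1, 0.6, 2.2],
    "Clay": [5.0, 12.0, 20.0, 8.0, 15.0, 25.0, 6.0, 18.0],
    "CEC":  [4.0, 9.0, 15.0, 6.0, 12.0, 18.0, 5.0, 14.0],
    "pH":   [8.1, 7.8, 7.4, 8.0, 7.6, 7.2, 8.2, 7.5],
    "Kf":   [1.0, 3.5, 7.8, 2.1, 5.6, 9.9, 1.4, 6.9],
})

final_model, kept = backward_eliminate(df, "Kf", ["OC", "Clay", "CEC", "pH"])
print("Retained predictors:", kept)
```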
3. Results and Discussion

3.1. Sorption Isotherms

The physical and chemical properties of the soil samples collected at the four sites over 5 depths are summarized in Table 2. Fifty-five sorption isotherms of dinoseb were determined. The isotherms were fitted with the Freundlich model:

S = K_f C^n,  (1)

where S is the adsorbed chemical concentration (g/g), K_f is the Freundlich partition coefficient (cm^3/g), n is an empirical coefficient, and C is the equilibrium concentration (mg/L). Thirty out of the 55 dinoseb sorption isotherm fittings have an R^2 greater than 0.95, the R^2 values of 19 fittings are between 0.90 and 0.95, and the remaining 6 fittings have R^2 values from 0.87 to 0.90. Therefore, the model describes most of the dinoseb sorption well at concentrations below 15 mg/L. The derived Freundlich distribution coefficients K_f are listed in Table 3.
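To illustrate how K_f and n in (1) can be obtained from batch data, the sketch below fits the Freundlich model by nonlinear least squares with SciPy and reports the goodness of fit. The concentration-sorption pairs are invented for illustration only and are not the measured isotherms, and the fitting routine is an assumed choice, not necessarily the one used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):
    """Freundlich isotherm (1): S = Kf * C**n."""
    return kf * c**n

# Hypothetical equilibrium concentrations (mg/L) and sorbed amounts (illustrative values).
c_eq = np.array([0.5, 1.2, 2.8, 5.5, 9.0])
s_obs = np.array([2.1, 4.0, 7.6, 12.5, 17.8])

params, _ = curve_fit(freundlich, c_eq, s_obs, p0=(1.0, 1.0))
kf, n = params

# Coefficient of determination (R^2) of the fit.
s_pred = freundlich(c_eq, kf, n)
ss_res = np.sum((s_obs - s_pred) ** 2)
ss_tot = np.sum((s_obs - s_obs.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"Kf = {kf:.2f}, n = {n:.2f}, R^2 = {r_squared:.3f}")
```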
Both zero-order correlation and path analysis show that OC content has a significant positive effect on K_f, and its direct effect on K_f is much higher than that of the other three factors (path coefficient 1.056). In the zero-order correlation matrix, pH is significantly correlated with K_f (correlation coefficient −0.659). The path analysis shows that this correlation is mainly due to the correlation of pH with OC (path coefficient −0.662). The direct effect of pH on K_f is low (path coefficient −0.066). For CEC, with almost zero direct effect on K_f, it can be considered that the moderate correlation with K_f (correlation coefficient 0.436) is mainly due to the contribution of the collinearity between OC content and CEC. Clay content has a negative direct effect on K_f (path coefficient −0.216), although the indirect effect due to its correlation with OC is more pronounced (path coefficient 0.746). In contrast, the correlation coefficient shows that Clay has a positive relationship with K_f. Dinoseb is a weak acid with a pKa of 4.4–4.62 [20] and is mainly in anionic form at the pH of the studied soils [34]. Therefore, it is more reasonable that its affinity to soil is negatively correlated with the content of the negatively charged clays.
3.4. Stepwise Multiple-Linear Regression Results
Based on the correlation matrix in Table 4 and the path analysis coefficients in Table 5, it is obvious that the soil properties are not independent of one another, and that makes the interpretation of multiple linear regression equations between the dinoseb K_f values and soil properties unreliable. The problem of multicollinearity among soil properties in linear models has been generally recognized in many studies [35, 36]. In order to overcome multicollinearity, stepwise regression, one of several standard procedures [27] for variable selection, was applied for multiple linear regression in this study. Because of the small number of correlated variables (OC, pH, CEC, Clay), backward elimination was performed, starting with all four soil properties as controlled variables and successively eliminating one at a time. The removal criterion, based on F-statistics, is to remove the variable with the lowest F-to-remove statistic whose significance level is greater than 0.05. The regression coefficients and the statistics summary of each prediction model of the dinoseb K_f values depending on soil properties, as developed using stepwise multiple linear regression, are presented in Tables 6 and 7. In Table 6, the standardized coefficients (beta values) indicate the strength of the effect of the respective soil properties on the dinoseb K_f values; that is, the larger the absolute value, the stronger the effect. The zero-order correlations have been discussed under correlation analysis. Partial correlations reveal the relationship between the residualized dinoseb K_f values and the residualized soil properties, and part correlations express the correlations between the unaltered dinoseb K_f values and the residualized soil properties. Model 1, containing all four soil properties, explains 96.1% of the variation in the dinoseb K_f values. However, the significance levels of CEC, pH, and Clay content indicate that some of the soil properties can be removed from the model (significance levels are 0.999, 0.497, and 0.344, resp.).
According to the removal principle, the soil property with the highest significance level, which is CEC, should be removed, and Model 2 is then built with the remaining soil properties; in the same way, the sequential stepwise regression eliminated pH from Model 2, since pH shows the highest significance level and it is bigger than 0.05 (0.000 and 0.028, resp.). In Model 3, both of the remaining variables show a significance level of less than 0.05; thus the elimination stops. The statistics summary of each regression model is given in Table 7. In addition to the three models in Table 6, Model 4, which uses only OC as a predictor, is analysed. In all four models, the multiple correlations between the dinoseb K_f values and the predictors are strong (R^2 varies from 0.961 to 0.945) and decrease slightly as each soil property is removed from the previous model. The changes from Model 1 to Model 2 and from Model 2 to Model 3 are not significant (0.481 for the latter). This means that removing CEC and pH consecutively has only a minor effect on the goodness of the regression, whereas removing Clay content from Model 3 results in a significant change in the goodness of fit. This also implies that the clay factor is important for dinoseb sorption in soil.
3.5. Model Development
Combining the results from the correlation analysis, path analysis, and stepwise regression, we can conclude that the soil OC and clay contents are the most important factors affecting dinoseb sorption in soil. Therefore, these two factors are selected as the predictors of K_f in the regression equation, in which OC is the soil organic carbon content and Clay is the clay content. The R square is 0.92; that is, the variation of OC and Clay in soil accounts for 92% of the K_f variation, of which 89% can be explained directly by the OC variation and the other 3% by the clay content variation. The F-statistic of the regression is 98.09, and the regression is found to be significant.
4. Conclusions
A good multilinear regression was obtained using all possible factors, including OC, Clay, CEC, and pH, as the explanatory variables of the K_f values. The sequence of correlation with K_f was found to be OC > pH > Clay > CEC. However, the explanatory variables were not independent of each other, so multicollinearity may make that conclusion suspect. With bivariate correlation analysis and path analysis, it was found that the direct effects on K_f follow the order OC > Clay > pH > CEC. Clay, pH, and CEC are mainly correlated with K_f through their correlations with OC. The direct effects of pH and CEC on K_f are very low. The zero-order correlation matrix shows that Clay is positively correlated with K_f, but the path analysis shows that its direct effect is negative. The latter is more reasonable given the chemical properties of dinoseb. The backward stepwise regression showed that pH and CEC can be removed from the prediction model. Based on these results, a more efficient regression using OC and Clay as predictors was built.
The authors gratefully acknowledge the financial support from the Special Fund of State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering (2009585312), and the MOE Programme of Introducing Talents of Discipline to Universities ("111" Project B08048).
1. M. Soutter and Y. Pannatier, "Groundwater vulnerability to pesticide contamination on a regional scale," Journal of Environmental Quality, vol. 25, no. 3, pp. 439–444, 1996.
2. H. J. O'Neill, T. L. Pollock, H. S. Bailey, P. Milburn, C. Gartley, and J. E. Richards, "Dinoseb presence in agricultural subsurface drainage from potato fields in Northwestern New Brunswick, Canada," Bulletin of Environmental Contamination and Toxicology, vol. 43, no. 6, pp. 935–940, 1989.
3. R. H. Kaake, D. J. Roberts, T. O. Stevens, R. L. Crawford, and D. L. Crawford, "Bioremediation of soils contaminated with the herbicide 2-sec-butyl-4,6-dinitrophenol (dinoseb)," Applied and Environmental Microbiology, vol. 58, no. 5, pp. 1683–1689, 1992.
4. M. Matsumoto, T. Furuhashi, C. Poncipe, and M. Ema, "Combined repeated dose and reproductive/developmental toxicity screening test of the nitrophenolic herbicide dinoseb, 2-sec-butyl-4,6-dinitrophenol, in rats," Environmental Toxicology, vol. 23, no. 2, pp. 169–183, 2008.
5. N. Chèvre, A. R. Brazzale, K. Becker-van Slooten, R. Behra, J. Tarradellas, and H. Guettinger, "Modeling the concentration-response function of the herbicide dinoseb on Daphnia magna (survival time, reproduction) and Pseudokirchneriella subcapitata (growth rate)," Ecotoxicology and Environmental Safety, vol. 62, no. 1, pp. 17–25, 2005.
6. K. L. Takahashi, H. Hojo, H. Aoyama, and S. Teramoto, "Comparative studies on the spermatotoxic effects of dinoseb and its structurally related chemicals," Reproductive Toxicology, vol. 18, no. 4, pp. 581–588, 2004.
7. M. Sreedhar, T. M. Reddy, K. R. Sirisha, and S. R. J. Reddy, "Differential pulse adsorptive stripping voltammetric determination of dinoseb and dinoterb at a modified electrode," Analytical Sciences, vol. 19, no. 4, pp. 511–516, 2003.
8. M. Pedrero, F. J. M. de Villena, J. M. Pingarrón, and L. M. Polo, "Determination of dinoseb by adsorptive stripping voltammetry," Electroanalysis, vol. 3, pp. 419–422, 1991.
9. M. R. Viant, C. A. Pincetich, D. E. Hinton, and R. S. Tjeerdema, "Toxic actions of dinoseb in medaka (Oryzias latipes) embryos as determined by in vivo ^31P NMR, HPLC-UV and ^1H NMR metabolomics," Aquatic Toxicology, vol. 76, no. 3-4, pp. 329–342, 2006.
10. J. A. Arancibia, G. M. Delfa, C. E. Boschetti, G. M. Escandar, and A. C. Olivieri, "Application of partial least-squares spectrophotometric-multivariate calibration to the determination of 2-sec-butyl-4,6-dinitrophenol (dinoseb) and 2,6-dinitro-p-cresol in industrial and water samples containing hydrocarbons," Analytica Chimica Acta, vol. 553, no. 1-2, pp. 141–147, 2005.
11. A. W. Warrick and D. R. Nielsen, "Spatial variability of soil physical properties in the field," in Application of Soil Physics, pp. 319–344, Academic Press, New York, NY, USA, 1980.
12. L. Guo, W. A. Jury, R. J. Wagenet, and M. Flury, "Dependence of pesticide degradation on sorption: nonequilibrium model and application to soil reactors," Journal of Contaminant Hydrology, vol. 43, no. 1, pp. 45–62, 2000.
13. M. A. Maraqa, X. Zhao, R. B. Wallace, and T. C. Voice, "Retardation coefficients of nonionic organic compounds determined by batch and column techniques," Soil Science Society of America Journal, vol. 62, no. 1, pp. 142–152, 1998.
14. B. Thompson, "Stepwise regression and stepwise discriminant analysis need not apply here: a guidelines editorial," Education and Psychological Measurement, vol. 55, no. 4, pp. 525–534, 1995.
15. Y. Dodge, The Oxford Dictionary of Statistical Terms, 2003.
16. M. Montanaro Gauci, T. F. Kruger, K. Coetzee, K. Smith, J. P. Van Der Merwe, and C. J. Lombard, "Stepwise regression analysis to study male and female factors impacting on pregnancy rate in an intrauterine insemination programme," Andrologia, vol. 33, no. 3, pp. 135–141, 2001.
17. M. Soutter and Y. Pannatier, "Groundwater vulnerability to pesticide contamination on a regional scale," Journal of Environmental Quality, vol. 25, no. 3, pp. 439–444, 1996.
18. A. G. Hornsby, R. Don Wauchope, and A. E. Herner, Pesticide Properties in the Environment, Springer, New York, NY, USA, 1996.
19. I. M. M. Ghani and S. Ahmad, "Stepwise multiple regression method to forecast fish landing," Procedia Social and Behavioral Sciences, vol. 8, pp. 549–554, 2010.
20. K. L. Findell and E. A. B. Eltahir, "An analysis of the soil moisture-rainfall feedback, based on direct observations from Illinois," Water Resources Research, vol. 33, no. 4, pp. 725–735, 1997.
21. D. C. Montgomery, E. A. Peck, and G. G. Vining, Introduction to Linear Regression Analysis, John Wiley and Sons, Hoboken, NJ, USA, 4th edition, 2006.
22. I. G. Chong and C. H. Jun, "Performance of some variable selection methods when multicollinearity is present," Chemometrics and Intelligent Laboratory Systems, vol. 78, no. 1, pp. 103–112, 2005.
23. M. S. Lachniet and W. P. Patterson, "Use of correlation and stepwise regression to evaluate physical controls on the stable isotope values of Panamanian rain and surface waters," Journal of Hydrology, vol. 324, no. 1–4, pp. 115–140, 2006.
24. P. S. Petraitis, A. E. Dunham, and P. H. Niewiarowski, "Inferring multiple causality: the limitations of path analysis," Functional Ecology, vol. 10, no. 4, pp. 421–431, 1996.
25. D. J. de Rodríguez, J. L. Angulo-Sánchez, and R. Rodríguez-García, "Correlation and path coefficient analyses of the agronomic trait of a native population of guayule plants," Industrial Crops and Products, vol. 14, pp. 93–103, 2001.
26. A. G. G. Vasconcelos, R. M. V. Almeida, and F. F. Nobre, "The path analysis approach for the multivariate analysis of infant mortality data," Annals of Epidemiology, vol. 8, no. 4, pp. 262–271, 1998.
27. S. B. Haderlein and R. P. Schwarzenbach, "Adsorption of substituted nitrobenzenes and nitrophenols to mineral surfaces," Environmental Science and Technology, vol. 27, no. 2, pp. 316–326, 1993.
28. H. C. Cho and S. Abe, "Is two-tailed testing for directional research hypotheses tests legitimate?" Journal of Business Research, 2012.
29. L. Cox, M. C. Hermosin, and J. Cornejo, "Distribution coefficient of methomyl in soils from different depths," Fresenius Environmental Bulletin, vol. 1, pp. 445–449, 1992.
30. D. T. Chiou, "Theoretical considerations of partition uptake of non-ionic organic compounds by soil organic matter. In Reactions and movement of organic chemicals in soils," SSSA Special Publication, no. 22, pp. 1–29, 1989.
31. M. C. Hermosin, J. Cornejo, and J. L. Perez-Rodriguez, "Adsorption and desorption of maleic hydrazide as a function of soil properties," Soil Science, vol. 144, pp. 250–256, 1987.
32. M. C. Hermosin and J. Cornejo, "Assessing factors related to pesticide adsorption by soils," Environmental Toxicology and Chemistry, vol. 25, pp. 45–55, 1989.
33. W. Lertpaitoonpan, S. K. Ong, and T. B. Moorman, "Effect of organic carbon and pH on soil sorption of sulfamethazine," Chemosphere, vol. 76, no. 4, pp. 558–564, 2009.
34. A. Mermoud, J. M. F. Martins, D. Zhang, and A. C. Favre, "Small-scale spatial variability of atrazine and dinoseb adsorption parameters in an alluvial soil," Journal of Environmental Quality, vol. 37, no. 5, pp. 1929–1936, 2008.
35. S. Jagadamma, R. Lal, R. G. Hoeft, E. D. Nafziger, and E. A. Adee, "Nitrogen fertilization and cropping system impacts on soil properties and their relationship to crop yield in the central Corn Belt, USA," Soil and Tillage Research, vol. 98, no. 2, pp. 120–129, 2008.
36. B. L. Bowerman and R. T. O'Connell, Linear Statistical Models: An Applied Approach, Duxbury Press, Belmont, CA, USA, 2nd edition, 1990.
Data mining cluster analysis
Methods of cluster analysis lie between statistics and informatics. They play an important role in the area of data mining. Cluster analysis divides data into meaningful or useful groups (clusters). If meaningful clusters are the goal, then the resulting clusters should capture the "natural" structure of the data. However, in other cases, cluster analysis is only a useful starting point for other purposes, e.g., data compression or efficiently finding the nearest neighbors of points [4]. The aim of this paper is to present some approaches to clustering in categorical data. Whereas methods for cluster analysis of quantitative data are currently implemented in all software packages for statistical analysis and data mining, and the differences among them in this area are small, the differences in implementation of methods for clustering in qualitative data are substantial [3]. There are a number of clustering types, such as hierarchical, partitional, exclusive, non-exclusive, fuzzy, partial, complete, object clustering, and attribute clustering. Other types of cluster are also used in cluster analysis [1]. The scope of this paper is modest: to provide an introduction to cluster analysis in the field of data mining, where we define data mining to be the discovery of useful, but non-obvious, information or patterns in large collections of data. Much of this paper is necessarily consumed with providing a general background for cluster analysis, but we also discuss a number of clustering techniques that have recently been developed specifically for data mining. While the paper strives to be self-contained from a conceptual point of view, many details have been omitted. Consequently, many references to relevant books and papers are provided. [4]
What is Cluster Analysis?
Clustering is considered an unsupervised classification process [5]. The clustering problem is to partition a dataset into groups (clusters) so that the data elements within a cluster are more similar to each other than data elements in different clusters, by given criteria. Cluster analysis groups data objects based on information found in the data that describes the objects and their relationships. The goal is that the objects in a group be similar to one another and different from the objects in other groups. Cluster analysis divides data into meaningful or useful groups (clusters) [3]. Cluster analysis is an exploratory discovery process. It can be used to discover structures in data without providing an explanation/interpretation. If meaningful clusters are the goal, then the resulting clusters should capture the 'natural' structure of the data. For example, cluster analysis has been used to group related documents for browsing, to find genes and proteins that have similar functionality, and to provide a grouping of spatial locations prone to earthquakes [1]. However, in other cases, cluster analysis is only a useful starting point for other purposes, e.g., data compression or efficiently finding the nearest neighbors of points [3]. Whether for understanding or utility, cluster analysis has long been used in a wide variety of fields: psychology and other social sciences, biology, statistics, pattern recognition, information retrieval, machine learning, and data mining. In many applications, what constitutes a cluster is not well defined, and clusters are often not well separated from one another [5].
Nonetheless, most cluster analysis seeks, as a result, a partition of the data into non-overlapping groups. To better understand the difficulty of deciding what constitutes a cluster, consider figures (a) through (d) below, which show twenty points and three different ways that they can be divided into clusters. If we allow clusters to be nested, then the most reasonable interpretation of the structure of these points is that there are two clusters, each of which has three subclusters. However, the apparent division of the two larger clusters into three subclusters may simply be an artifact of the human visual system. Finally, it may not be unreasonable to say that the points form four clusters. Thus, we stress once again that the definition of what constitutes a cluster is imprecise, and the best definition depends on the type of data and the desired results. [4]
[Figure: (a) Original points. (b) Two clusters. (c) Four clusters. (d) Six clusters.]
What is not Cluster Analysis?
We illustrate the difference between cluster analysis and other techniques used to divide data objects into groups. Cluster analysis can be regarded as a form of classification in that it creates a labeling of objects with class (cluster) labels. It derives these labels only from the data. As such, clustering does not use previously assigned class labels, except perhaps for verification of how well the clustering worked. Thus, cluster analysis is distinct from pattern recognition or the areas of statistics known as discriminant analysis and decision analysis, which seek to find rules for classifying objects given a set of pre-classified objects. While cluster analysis can be useful in the previously mentioned areas, either directly or as a preliminary means of finding classes, there is much more to these areas than cluster analysis. For example, the decision of what features to use when representing objects is a key activity of fields such as pattern recognition [3]. Cluster analysis typically takes the features as given and proceeds from there. Thus, cluster analysis, while a useful tool in many areas, is normally only part of a solution to a larger problem which typically involves other steps and techniques. [4]
Types of Clustering
An entire collection of clusters is commonly referred to as a clustering, and in this section, we describe various types of clustering: hierarchical versus partitional, exclusive versus non-exclusive, fuzzy versus non-fuzzy, partial versus complete, and object clustering versus attribute clustering.
Hierarchical versus Partitional: The most commonly discussed difference between collections of clusters is whether the clusters are nested or unnested or, in more traditional terminology, whether a set of clusters is hierarchical or partitional. A partitional clustering is simply a division of the set of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset [4]. A hierarchical clustering is a set of nested clusters organized as a hierarchical tree, where the leaves of the tree are singleton clusters of individual data objects, and where the cluster associated with each interior node of the tree is the union of the clusters associated with its child nodes [1]. The tree that represents a hierarchical clustering is called a dendrogram, a term that comes from biological taxonomy.
[Figure 1: (a) Nested clusters. (b) Dendrogram.]
The distinction between a hierarchical and partitional clustering is not as great as it might seem.
Specifically, by looking at the clusters on a particular level of a hierarchical tree, a partitional clustering can be obtained. For example, in Figure 1, the partitional clusterings, from bottom to top, are {{1}, {2}, {3}, {4}}, {{1}, {2, 3}, {4}}, {{1}, {2, 3, 4}}, and {{1, 2, 3, 4}}. Thus, a hierarchical clustering can be viewed as a sequence of partitional clusterings, and a partitional clustering can be viewed as a particular 'slice' of a hierarchical clustering. In the hierarchical clustering of Figure 1, the set of clusters at a given level and the set of clusters of the level immediately preceding it are the same except that one of the clusters in the given level is the union of two of the clusters from the immediately preceding level [5]. While this approach is traditional and most common, it is not essential, and a hierarchical clustering can merge more than two clusters from one level to the next higher one [1].
Exclusive versus Non-Exclusive: In a non-exclusive clustering, a point can belong to more than one cluster. In the most general sense, a non-exclusive clustering is used to reflect the fact that an object may belong to more than one group (class) at a time, e.g., a person at a university may be both an enrolled student and an employee of the university. Note that the exclusive versus non-exclusive distinction is independent of the partitional versus hierarchical distinction [4]. In a more limited sense, a non-exclusive clustering is sometimes used when an object could reasonably be placed in any of several clusters. Rather than make a somewhat arbitrary choice and place the object in a single cluster, such objects are placed in all of the "equally good" clusters. This type of non-exclusive clustering does not attempt to deal with multi-class situations, but is more similar to the fuzzy clustering approach that we describe next.
[Figure: (a) Non-traditional nested clusters. (b) Non-traditional dendrogram.]
Fuzzy versus Non-Fuzzy: In a fuzzy clustering, every point belongs to every cluster, but with a weight that is between 0 and 1. In other words, clusters are treated as fuzzy sets. Mathematically, a fuzzy set is one where an object belongs to any set with a weight that is between 0 and 1. For any object, the sum of the weights must equal 1. In a very similar way, some probabilistic clustering techniques assign each point a probability of belonging to each cluster, and these probabilities must also sum to one. Since the cluster membership weights for any object sum to 1, a fuzzy or probabilistic clustering does not address true multi-class situations where an object belongs to multiple classes.
Partial versus Complete: A complete clustering assigns every object to a cluster, whereas a partial clustering does not [1]. The motivation for a partial clustering is that not all objects in a data set may belong to well-defined groups. Indeed, many objects in the data set may represent noise, outliers, or "uninteresting background."
Object Clustering versus Attribute Clustering: While most clustering is object clustering, i.e., the clusters are groups of objects, attribute clustering, i.e., the clusters are groups of attributes, can also be useful. For instance, given a set of documents, we may wish to cluster the words (terms) of the documents, as well as or instead of the documents themselves. Attribute clustering is less common than object clustering, as are clustering techniques that attempt to cluster both objects and attributes simultaneously.
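Cutting a dendrogram at a given level yields a partitional clustering, as described above. The brief sketch below uses SciPy's agglomerative clustering routines to illustrate this; the four two-dimensional points are made up for the example and do not come from the text:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Four made-up 2-D data objects, loosely matching the {1}, {2}, {3}, {4} example.
points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [1.0, 1.0]])

Z = linkage(points, method="single")     # agglomerative (bottom-up) merging
# Cutting the tree at different heights gives the nested partitional clusterings:
for k in (4, 3, 2, 1):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, "clusters:", labels)
# scipy.cluster.hierarchy.dendrogram(Z) would draw the tree itself (needs matplotlib).
```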
Types of Clusters Several working definitions of a cluster are commonly used and are described in this section. In these descriptions, we will use two dimensional points as our data objects to illustrate the differences between the different types of clusters, but the cluster types described are applicable to a wide variety of data sets [4]. Well-Separated: A cluster is a set of points such that any point in a cluster is closer to every other point in the cluster than to any point not in the cluster. Sometimes a threshold is used to specify that all the points in a cluster must be sufficiently close to one another. Center-based Cluster: A cluster is a set of objects such that an object in a cluster is closer to the 'center' of a cluster, than to the center of any other cluster. The center of a cluster is often a centroid, i.e., the average of all the points in the cluster, or a medoid, the most 'representative' point of a cluster. Contiguous Cluster: A cluster is a set of points such that a point in a cluster is closer to one or more other points in the cluster than to any point not in the cluster. Density-based: A cluster is a dense region of points, which is separated by low-density regions, from other regions of high density. This definition is often used when the clusters are irregular or intertwined, and when noise and outliers are present. Note that the contiguous definition would find only one cluster. Also note that the three curves don't form clusters since they fade into the noise, as does the bridge between the two small circular clusters. Shared Property: More generally, we can define a cluster as a set of points that share some property. This definition encompasses all the previous definitions of a cluster, e.g., points in a center based cluster share the property that they are all closest to the same centroid. However, the shared property approach also includes new types of clusters. In both cases, a clustering algorithm would need to have a very specific 'concept' of a cluster in mind to successfully find these clusters. Finding such clusters is sometimes called 'conceptual clustering.' However, too sophisticated a notion of a cluster would bring us into the area of pattern recognition, and thus, we will only consider the simpler types of clusters in this chapter[1]. Clusters Defined Via Objective Functions Another general approach to defining a set of clusters is by using an objective function. Specifically, we define an objective function that attempts to capture the 'goodness' of a set of clusters, and then define our clustering as the set of clusters that minimizes the objective function. To illustrate, for two dimensional points, a common objective function is squared error, which is computed by calculating the sum of the squared distance of each point to the center of its cluster. For a specified number of clusters, K we can then define our clusters to be the partitioning of points into K groups that minimizes this objective [2]. The K-means algorithm is based on this objective function. Cluster analysis methods will always produce a grouping. The groupings produced by cluster analysis may or may not prove useful for classifying objects. If the groupings discriminate between variables not used to do the grouping and those discriminations are useful, then cluster analysis is useful[4]. Cluster analysis may be used in conjunction with discriminate function analysis. After multivariate data are collected, observations are grouped using cluster analysis. 
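The squared-error objective described under "Clusters Defined Via Objective Functions" above can be written out directly. The sketch below is only an illustration of that objective (the sample points and the two candidate centroids are made up); it computes the total squared distance of each point to its nearest centroid, the quantity that K-means seeks to minimize:

```python
import numpy as np

def sse(points, centroids):
    """Sum of squared distances from each point to its nearest centroid."""
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

# Made-up data: two loose groups of 2-D points and K = 2 candidate centers.
points = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
centroids = np.array([[0.33, 0.33], [5.33, 5.33]])
print(sse(points, centroids))   # K-means seeks the K centroids minimizing this value
```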
Discriminant function analysis is then used on the resulting groups to discover the linear structure of either the measures used in the cluster analysis and/or different measures. Cluster analysis methods are not clearly established. We have clustering types such as hierarchical, partitional, exclusive, non-exclusive, fuzzy, partial, complete, object clustering, and attribute clustering [1]. Other types of cluster are also used. There are many options one may select when doing a cluster analysis using a statistical package. Cluster analysis is thus open to the criticism that a statistician may mine the data, trying different methods of computing the proximities matrix and linking groups, until he or she "discovers" the structure that he or she originally believed was contained in the data. One wonders why anyone would bother to do a cluster analysis for such a purpose [5].
[1] "Cluster Analysis: Basic Concepts and Algorithms."
[2] "Cluster Analysis and Categorical Data," Hana Rezanková, Vysoká škola ekonomická v Praze, Praha.
[3] "On Continuous Optimization Methods in Data Mining — Cluster Analysis, Classification and Regression — Provided for Decision Support and Other Applications," Tatiana Tchemisova, Başak Akteke-Öztürk, Gerhard Wilhelm Weber.
[4] "An Introduction to Cluster Analysis for Data Mining," 10/02/2000.
[5] "Visual Cluster Analysis in Data Mining," a thesis submitted in fulfilment of the requirements for the Doctoral Degree of Philosophy by Ke-Bing Zhang, October 2007.
[6] "Microarray Gene Expression Data Mining: Clustering Analysis Review," Erfaneh Naghieh and Yonghong Peng, Department of Computing, University of Bradford.
[gameprogrammer] Re: mapping numbers to colors
• From: Jake The Snake Briggs <jacob_briggs@xxxxxxxxxxxxxxx>
• To: gameprogrammer@xxxxxxxxxxxxx
• Date: Fri, 03 Dec 2004 11:06:17 +1300
Binary partition of the colour space. Assign your colors a number, say 0 through 100. 1 group would yield the colour at position 0. 3 groups would be 25, 50 and 75. 4 would be 0, 25, 50, 100; 5 would be 0, 25, 50, 75, 100. Or you could do it with percentages. 1 group would be 1/1, which would be the colour referenced by 100. 3 groups would be 1/3, the colours would be 33, 66 and 100. 5 groups would be 1/5: 20, 40, 60, 80, 100. I am sure there is a better way, maybe working directly with the r,g,b values, but i am sure you get the idea of what i am talking about. If each r,g,b value can be between 0 and 255, you could just partition each of those spaces, like 3 groups would yield the colours (85,85,85), (127,127,127) and (212,212,212). If this makes no sense, just ignore my brain dump....
Alan Wolfe wrote:
>Hey guys,
>wierd question, possibly straight out of the old school days...
>I'm making some line graphs and i have X number of groups of data.
>X could be 1 or it could be 20, it varies.
>What im trying to do is come up with a good way to automaticly set colors to
>these groups of data.
>ie if there were 2 groups, 1 line could be red and the 2nd line could be blue.
>if there were 4 groups, i could use red, blue, yellow, green for the lines etc.
>Does anyone know a way to algorithmicly come up with X number of colors, that
>are as far apart visualy as possible? Cause if theres 5 groups of data and
>they are all shades of blue, thats not very good.
>Thanks a bunch for any help :P
>To unsubscribe go to http://gameprogrammer.com/mailinglist.html
To unsubscribe go to http://gameprogrammer.com/mailinglist.html
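One common way to implement the "partition the colour space" idea from the reply above is to space hues evenly around the HSV colour wheel instead of partitioning R, G and B separately; that tends to keep lines visually distinct. A rough sketch (not from the original thread; the HSV approach is a variation on the suggestion above):

```python
import colorsys

def distinct_colors(n):
    """Return n RGB tuples (0-255) with hues spread evenly around the colour wheel."""
    colors = []
    for i in range(n):
        hue = i / n                                      # evenly spaced hues in [0, 1)
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)     # full saturation and value
        colors.append((int(r * 255), int(g * 255), int(b * 255)))
    return colors

print(distinct_colors(5))   # e.g. colours for 5 groups of data in a line graph
```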
Department of Mathematics and Statistics Information for Current and Prospective Graduate Students PhD, MA and PBC students in mathematics and statistics. PhD in Computational Mathematics Brief Overview The PhD degree requires a minimum of 60 semester hours including 18-21 hours of dissertation and 39-42 hours of coursework, with at most 10 hours at the 500-level. By the end of their second year all PhD students are expected to have passed three qualifying exams. After passing these exams, the student should form a dissertation committee and select a topic. The dissertation research proposal consists of a presentation and oral examination by the dissertation committee. Part of the research proposal is the completion of a programming or computational project that is related to the proposal. All dissertations are required to have a significant computational component. Upon completion of the dissertation, it is defended in an oral exam by the dissertation committee. Failure to continue to make progress towards the degree can result in dismissal from the program. Useful Links MA in Mathematics Brief Overview The MA degree is offered in two concentrations: Mathematics and Applied Statistics. The Mathematics concentration requires 30 hours (for the thesis option) or 33 hours (for comprehensive exam option). The Applied Statistics concentration requires 33 hours. Useful Links Post-Baccalaureate Certificate in Statistics Brief Overview The Post-Baccalaureate Certificate in Statistics is a 12-hour program that provides statistical training for persons interested in enhancing their knowledge of statistics but who do not wish to pursue a formal degree and for professionals whose interests require a knowledge of Statistics beyond the undergraduate level. STA 661 and STA 662 are required. Students must also complete two additional three-hour STA courses at or above the 500 level, excluding STA 571/571L, STA 572/572L, and STA 580. The Graduate Bulletin information regarding Post Baccalaureate Certificates can be found
Hillside, IL Math Tutor Find a Hillside, IL Math Tutor ...During my two and a half years of teaching high school, I have taught various levels of Algebra 1 and Algebra 2. I have a teaching certificate in high school mathematics issued by the South Carolina State Department of Education. During my two and half years of teaching high school mathematics, I have had the opportunity to teach various levels of Prealgebra, Algebra 1 and Algebra 12 Subjects: including algebra 1, algebra 2, calculus, geometry ...I still tutor there and lead workshops for the school's placement and admission tests. With my BA and MA in English, I've had a lot of writing experience. I evaluate students' grammar as I grade their essays, but I prefer to teach the grammar concept and let students identify and correct their own errors. 17 Subjects: including algebra 1, algebra 2, grammar, geometry ...The college application process is the most exciting time for a high school student. When a student are guided and exposed to various options, wonderful opportunities present themselves. The college counseling process has many important steps, all of which are crucial to student success. 28 Subjects: including SAT math, algebra 2, ACT Math, linear algebra Hi!! My name is Harry O. I have been tutoring high school and college students for the past six years. Previously I taught at Georgia Institute of Technology from which I received a Bachelor's in Electrical Engineering and a Master's in Applied Mathematics. 18 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel ...I completed an IB diploma in high school so it's fair to say I'm an expert at taking exams. I love to work with creative individuals who genuinely want to make themselves better people and I'm a great role model and mentor for young people. I work well and have a lot of experience with college and high school students. 36 Subjects: including algebra 1, algebra 2, biology, chemistry Related Hillside, IL Tutors Hillside, IL Accounting Tutors Hillside, IL ACT Tutors Hillside, IL Algebra Tutors Hillside, IL Algebra 2 Tutors Hillside, IL Calculus Tutors Hillside, IL Geometry Tutors Hillside, IL Math Tutors Hillside, IL Prealgebra Tutors Hillside, IL Precalculus Tutors Hillside, IL SAT Tutors Hillside, IL SAT Math Tutors Hillside, IL Science Tutors Hillside, IL Statistics Tutors Hillside, IL Trigonometry Tutors
Right Prism
November 20th 2008, 06:26 PM #1 MHF Contributor Jul 2008
A right prism has bases that are regular hexagons. The measure of each of the six sides of the hexagon is represented by a and the height of the solid by 2a. Express the dimensions and the surface area of each face in terms of a.
The sides of the prism will be 6 rectangles of dimension a by 2a. So the area of each lateral face would be $2a^2$, and the total lateral area would be $6(2a^2)=12a^2$. Do we care about the two bases?
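For completeness, the two bases can be included as well. Each base is a regular hexagon with side $a$, so by the standard hexagon area formula:

$A_{\text{base}} = \frac{3\sqrt{3}}{2}a^2 \approx 2.6a^2$

and the total surface area, including both bases, is

$S = 12a^2 + 2\cdot\frac{3\sqrt{3}}{2}a^2 = \left(12 + 3\sqrt{3}\right)a^2 \approx 17.2a^2$.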
Methods of Dynamic and Nonsmooth Optimization
Results 1 - 10 of 16

, 1993
Cited by 89 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a freestanding exposition of basic nonsmooth analysis as motivated by and applied to this subject.

Cited by 22 (6 self)
In this work, we show how to obtain for non-compact manifolds the results that have already been done for Monge Transport Problem for costs coming from Tonelli Lagrangians on compact manifolds. In particular, the already known results for a cost of the type d^r, r > 1, where d is the Riemannian distance of a complete Riemannian manifold, hold without any curvature restriction.

- SIAM J. Optim., 1999
Cited by 16 (11 self)
Optimization problems with complementarity constraints are closely related to optimization problems with variational inequality constraints and bilevel programming problems. In this paper, under mild constraint qualifications, we derive some necessary and sufficient optimality conditions involving the proximal coderivatives. As an illustration of applications, the result is applied to the bilevel programming problems where the lower level is a parametric linear quadratic problem. Key words: optimization problems, complementarity constraints, optimality conditions, bilevel programming problems, proximal normal cones. AMS subject classifications: 49K99, 90C, 90D65. PII: S1052623497321882. 1. Introduction. The main purpose of this paper is to derive necessary and sufficient optimality conditions for the optimization problem with complementarity constraints (OPCC) defined as follows: (OPCC) min f(x, y, u) s.t. ⟨u, Φ(x, y, u)⟩ = 0, u ≥ 0, Φ(x, y, u) ≤ 0, (1.1) L(x, y, u) = 0, g(x, y, u) ...

- In the journal Mathematics of Control, Signals, and Systems (MCSS), 2002
Cited by 8 (5 self)
We consider the Lagrange problem of optimal control with unrestricted controls and address the question: under what conditions we can assure optimal controls are bounded? This question is related to the one of Lipschitzian regularity of optimal trajectories, and the answer to it is crucial for closing the gap between the conditions arising in the existence theory and necessary optimality conditions. Rewriting the Lagrange problem in a parametric form, we obtain a relation between the applicability conditions of the Pontryagin maximum principle to the latter problem and the Lipschitzian regularity conditions for the original problem. Under the standard hypotheses of coercivity of the existence theory, the conditions imply that the optimal controls are essentially bounded, assuring the applicability of the classical necessary optimality conditions like the Pontryagin maximum principle. The result extends previous Lipschitzian regularity results to cover

, 2009
Cited by 7 (3 self)
We characterize the optimal incentive scheme for a manager who faces costly effort decisions and whose ability to generate profits for the firm varies stochastically over time. The optimal contract is obtained as the solution to a dynamic mechanism design problem with hidden actions and persistent shocks to the agent's productivity. When the agent is risk-neutral, the optimal contract can often be implemented with a simple pay package that is linear in the firm's profits. Furthermore, the power of the incentive scheme typically increases over time, thus providing a possible justification for the frequent practice of putting more stocks and options in the package of managers with a longer tenure in the firm. In contrast to other explanations proposed in the literature (e.g., declining disutility of effort or career concerns), the optimality of seniority-based reward schemes is not driven by variations in the agent's preferences or in his outside option. It results from an optimal allocation of the manager's informational rents over time. Building on the insights from the risk-neutral case, we then explore the properties of optimal incentive schemes for risk-averse managers. We find that, other things equal, risk-aversion reduces the benefit of inducing higher effort over time. Whether (risk-averse) managers with a longer tenure receive more or less high-powered incentives than younger ones then depends on the interaction between the degree of risk aversion and the dynamics of the impulse responses for the shocks to the manager's type. JEL classification: D82

, 1992
Cited by 6 (4 self)
When an optimization problem is represented by its essential objective function, which incorporates constraints through infinite penalties, first- and second-order conditions for optimality can be stated in terms of the first- and second-order epi-derivatives of that function. Such derivatives also are the key to the formulation of subproblems determining the response of a problem's solution when the data values on which the problem depends are perturbed. It is vital for such reasons to have available a calculus of epi-derivatives. This paper builds on a central case already understood, where the essential objective function is the composite of a convex function and a smooth mapping with certain qualifications, in order to develop differentiation rules covering operations such as addition of functions and a more general form of composition. Classes of "amenable" functions are introduced to mark out territory in which this sharper form of nonsmooth analysis can be carried out.

, 1995
"... this paper we establish some useful calculus rules in the general Banach space setting. ..."

- SIAM J. Control Optim., 2000
Cited by 1 (0 self)
In general, the value function associated with an exit time problem is a discontinuous function. We prove that the lower (upper) semicontinuous envelope of the value function is a supersolution (subsolution) of the Hamilton-Jacobi equation involving the proximal subdifferentials (superdifferentials) with subdifferential type (superdifferential type) mixed boundary condition. We also show that if the value function is upper semicontinuous then it is the maximum subsolution of the Hamilton-Jacobi equation involving the proximal superdifferentials with the natural boundary condition, and if the value function is lower semicontinuous then it is the minimum solution of the Hamilton-Jacobi equation involving the proximal subdifferentials with a natural boundary condition. Furthermore, if a compatibility condition is satisfied, then the value function is the unique lower semicontinuous solution of the Hamilton-Jacobi equation with a natural boundary condition and a subdifferential type bound...

The primary goal of this paper is to study relationships between certain basic principles of variational analysis and its applications to nonsmooth calculus and optimization. Considering a broad class of Banach spaces admitting smooth renorms with respect to some bornology, we establish an equivalence between useful versions of a smooth variational principle for lower semicontinuous functions, an extremal principle for nonconvex sets, and an enhanced fuzzy sum rule formulated in terms of viscosity normals and subgradients with controlled ranks. Further refinements of the equivalence result are obtained in the case of a Fréchet differentiable renorm. Based on the new enhanced sum rule, we provide a simplified proof for the refined sequential description of approximate normals and subgradients in smooth spaces. 1991 Mathematical Subject Classification: Primary 49J52; Secondary 58C20, 46B20. Key words and phrases: Nonsmooth analysis, smooth Banach spaces, variational and extremal pri...

We introduce the notion of "subgradient" for arbitrary functions from R^n to R. This "subgradient" generalizes the classical gradient concept. We exhibit a close relationship with Clarke's generalized gradient for arbitrary functions from R^n to R as defined in [1]. We apply this "subgradient" to the problem of optimizing an arbitrary function f : R^n → R. Next, we show how the "subgradient" leads to information about "descent directions". Finally, we describe a role that this "subgradient" may play in a particular class of optimal control problems. 1 Introduction. Many problems in pure and applied mathematics deal with non differentiable data. For instance, non differentiable "objective functions" arise naturally and frequently in optimization problems. See [1, Section 1.1] for some examples. When theory and techniques are to be developed to optimize (e.g. minimize) such functions, a good generalization of the classical gradient concept seems indispensable. Since the early 19...
root (mathematics)
root, in mathematics, a solution to an equation, usually expressed as a number or an algebraic formula. In the 9th century, Arab writers usually called one of the equal factors of a number jadhr ("root"), and their medieval European translators used the Latin word radix (from which derives the adjective radical). If a is a positive real number and n a positive integer, there exists a unique positive real number x such that x^n = a. This number—the (principal) nth root of a—is written ^n√a or a^(1/n). The integer n is called the index of the root. For n = 2, the root is called the square root and is written √a. The root ^3√a is called the cube root of a. If a is negative and n is odd, the unique negative nth root of a is termed principal. For example, the principal cube root of –27 is –3. If a whole number (positive integer) has a rational nth root—i.e., one that can be written as a common fraction—then this root must be an integer. Thus, 5 has no rational square root because 2^2 is less than 5 and 3^2 is greater than 5. Exactly n complex numbers satisfy the equation x^n = 1, and they are called the complex nth roots of unity. If a regular polygon of n sides is inscribed in a unit circle centred at the origin so that one vertex lies on the positive half of the x-axis, the radii to the vertices are the vectors representing the n complex nth roots of unity. If the root whose vector makes the smallest positive angle with the positive direction of the x-axis is denoted by the Greek letter omega, ω, then ω, ω^2, ω^3, …, ω^n = 1 constitute all the nth roots of unity. For example, ω = −1/2 + √(−3)/2, ω^2 = −1/2 − √(−3)/2, and ω^3 = 1 are all the cube roots of unity. Any root, symbolized by the Greek letter epsilon, ε, that has the property that ε, ε^2, …, ε^n = 1 give all the nth roots of unity is called primitive. Evidently the problem of finding the nth roots of unity is equivalent to the problem of inscribing a regular polygon of n sides in a circle. For every integer n, the nth roots of unity can be determined in terms of the rational numbers by means of rational operations and radicals; but they can be constructed by ruler and compasses (i.e., determined in terms of the ordinary operations of arithmetic and square roots) only if n is a product of distinct prime numbers of the form 2^h + 1, or 2^k times such a product, or is of the form 2^k. If a is a complex number not 0, the equation x^n = a has exactly n roots, and all the nth roots of a are the products of any one of these roots by the nth roots of unity. The term root has been carried over from the equation x^n = a to all polynomial equations. Thus, a solution of the equation f(x) = a_0x^n + a_1x^(n−1) + … + a_(n−1)x + a_n = 0, with a_0 ≠ 0, is called a root of the equation. If the coefficients lie in the complex field, an equation of the nth degree has exactly n (not necessarily distinct) complex roots. If the coefficients are real and n is odd, there is a real root. But an equation does not always have a root in its coefficient field. Thus, x^2 − 5 = 0 has no rational root, although its coefficients (1 and –5) are rational numbers. More generally, the term root may be applied to any number that satisfies any given equation, whether a polynomial equation or not. Thus π is a root of the equation x sin (x) = 0.
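A small numerical sketch of the nth roots of unity discussed above: the roots can be generated as e^(2πik/n), and each one raised to the nth power returns 1.

```python
import cmath

def roots_of_unity(n):
    """Return the n complex nth roots of unity: omega, omega^2, ..., omega^n = 1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(1, n + 1)]

for w in roots_of_unity(3):     # the three cube roots of unity
    print(w, "->", w**3)        # each cubed is (numerically) 1
```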
river's problem..Help!

October 20th 2008, 12:09 PM

It takes 5 hours to travel downstream on a river from port A to port B and 7 hours to make the same trip upstream from B to A. How long would it take for a raft, which is propelled only by the current of the river, to get from A to B?

October 20th 2008, 12:32 PM

Let speed of boat = x km/h and speed of river current = y km/h, (x > y).

Speed of boat downstream = (x + y) km/h
Speed of boat upstream = (x - y) km/h

Distance covered downstream from A to B = speed × time = 5(x + y) km
Distance covered upstream from B to A = speed × time = 7(x - y) km

Since the distance is the same,

5(x + y) = 7(x - y)

12y = 2x

$\frac{x}{y} = 6$

The distance covered by the raft from A to B is also the same.

Distance covered by raft = 5(x + y)

Speed of raft = speed of current = y

Time taken by raft $= \frac{Distance}{speed}$ $= \frac{5(x+y)}{y}= 5\left(\frac{x}{y}+1\right)$ $= 5\left(6+1\right) = 35$ hours.

It will take 35 hours by raft.
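A quick numerical check of the result (a minimal sketch; the distance value is arbitrary, since it cancels out of the answer):

import math  # not strictly needed, kept for clarity

# Verify the raft-time answer for an arbitrary A-to-B distance.
distance = 70.0                     # km, arbitrary choice
downstream_speed = distance / 5     # boat speed + current (km/h)
upstream_speed = distance / 7       # boat speed - current (km/h)

current = (downstream_speed - upstream_speed) / 2   # river current (km/h)
raft_time = distance / current                      # raft moves at current speed

print(raft_time)   # -> 35.0 hours, matching the algebraic answer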
{"url":"http://mathhelpforum.com/algebra/54759-rivers-problem-help-print.html","timestamp":"2014-04-16T07:32:57Z","content_type":null,"content_length":"5071","record_id":"<urn:uuid:7018535f-64b1-478f-be7e-0dcd7f546d67>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Seasonal Adjustment Methodology at BLS

from: BLS Handbook of Methods, Appendix A.

An economic time series may be affected by regular intrayearly (seasonal) movements which result from climatic conditions, model changeovers, vacation practices, and similar factors. Often such effects are large enough to mask the short-term, underlying movements of the series. If the effect of such intrayearly repetitive movements can be isolated and removed, the evaluation of a series may be made more perceptive.

Seasonal movements are found in almost all economic time series. They may be regular, yet they do show variation from year to year and are subject to changes in pattern over time. These changes are most commonly thought to evolve primarily in a stochastic rather than a deterministic way. Seasonal adjustment practitioners have long recognized, however, that some of the year-to-year variation in seasonal movements can be associated with calendar-related factors such as the number of business or "trading" days in a month (for series whose monthly estimates are accumulations across the days of a month) or, of greater concern for some BLS series, the timing of moving holidays. Recently, variations in the length of intervals between monthly survey reference periods have also been found to significantly affect seasonal patterns in some BLS series.

Because the intrayearly seasonal patterns are combined with the underlying growth or decline and cyclical movements of the series (trend-cycle) and also random irregularities, it is difficult to estimate the pattern with exactness. The earliest known attempts to isolate seasonal factors from time series occurred in the first half of the 20th century. Some of the early methods depended upon smoothing curves by using personal judgment. Other formal approaches were periodogram analysis, regression analysis, and correlation analysis. Because these methods involved a large amount of work, relatively little application of seasonal factor adjustment procedures was carried out.

In the mid-1950's, new electronic equipment made more elaborate approaches feasible in seasonal factor methods as well as in other areas. The Bureau of the Census developed computer-based seasonal factors based on a ratio-to-moving-average approach. This was a major step forward, as it made possible the uniform application of a method to a large number of series at a relatively low cost. ^1 Subsequent improvements in methods and in computer technology have led to more refined procedures which are both faster and cheaper than the original techniques.

The Bureau of Labor Statistics began work on seasonal factor methods in 1959. Prior to that time, when additional data became available and seasonal factors were generated from the lengthened series, the new factors sometimes differed markedly from the corresponding figures based on the shorter series. This difference could affect any portion of the series. It was difficult to accept a process by which the addition of recent information could affect significantly the seasonal factors for periods as much as 15 years earlier, especially since this meant that factors could never become final. The first BLS method, introduced in 1960, had two goals: first, to stabilize the seasonal factors for the earlier part of the series; second, to minimize the revisions in the factors for the recent period.
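The ratio-to-moving-average idea underlying these methods can be illustrated with a short sketch on hypothetical monthly data (this shows only the classical multiplicative-decomposition core, not the full X-11 procedure):

import numpy as np
import pandas as pd

# Hypothetical monthly series: a trend times a repeating seasonal pattern plus noise.
rng = np.random.default_rng(0)
months = pd.period_range("2000-01", periods=72, freq="M")
trend = np.linspace(100, 130, 72)
seasonal = np.tile([0.95, 0.97, 1.00, 1.02, 1.05, 1.07,
                    1.06, 1.03, 1.00, 0.98, 0.95, 0.92], 6)
series = pd.Series(trend * seasonal * rng.normal(1, 0.01, 72), index=months)

# 1. A 2x12 centered moving average estimates the trend-cycle.
ma12 = series.rolling(12).mean()
trend_cycle = ma12.rolling(2).mean().shift(-6)

# 2. Ratios of the series to the trend-cycle isolate the seasonal-irregular part.
ratios = series / trend_cycle

# 3. Averaging the ratios month by month gives crude seasonal factors.
factors = ratios.groupby(ratios.index.month).mean()
factors = factors / factors.mean()          # normalize so the factors average to 1

# 4. Dividing the original series by its factors yields the seasonally adjusted series.
adjusted = series / factors.loc[series.index.month].to_numpy()
print(factors.round(3))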
Since 1960, the Bureau has made numerous changes and improvements in its technique and in methods of applying them. Thus far, all the changes relating to the seasonal adjustment of monthly series have been made within the scope of ratio-to-moving-average types of approaches or difference-from-moving-averages types of approaches. The BLS 1960 method, entitled "The BLS Seasonal Factor Method", was further refined, with the final version being introduced in 1966. It was in continuous use for many Bureau series (especially employment series based on the establishment data) until 1980.

In 1967, the Bureau of the Census introduced "The X-11 Variant of the Census Method II Seasonal Adjustment Program", better known as simply X-11. The X-11 provided some useful analytical measures along with many more options than the BLS method. Taking advantage of the X-11's additional flexibility, BLS began making increasing use of the X-11 method in the early 1970s, especially for seasonal adjustment of the labor force data based on the household survey.

Later in the 1970s, Statistics Canada, the Canadian national statistical agency, developed an extension of the X-11 called "The X-11 ARIMA Seasonal Adjustment Method". The X-11 ARIMA (Auto-Regressive Integrated Moving Average) provided the option of using modeling and forecasting techniques to extrapolate some extra data at the end of a time series to be seasonally adjusted. The extrapolated data help to alleviate the effects of the inherent limitations of the moving average techniques at the ends of series. After extensive testing and research showed that use of X-11 ARIMA would help to further minimize revisions in factors for recent periods, BLS began using the X-11 ARIMA procedure in 1980 for most of its official seasonal adjustment.

None of the aforementioned procedures had any built-in capabilities to handle the kind of moving-holiday effects found in BLS series, or to estimate other special effects such as level shifts or survey-interval effects. In 1989, BLS developed an extension of X-11 ARIMA to allow it to adjust more adequately for the effects of the presence or absence of religious holidays in the April survey reference period and of Labor Day in the September reference period. This extension has been applied since 1989 to a few persons-at-work series, and from 1990 to 1996, was also used for the adjustment of many of the establishment-survey series on average weekly hours and manufacturing overtime.

In 1989, BLS also introduced intervention analysis seasonal adjustment (IASA) for selected price index series. Nonseasonal economic phenomena such as level shifts, seasonal shifts and outliers can have undesirable effects on the computation of seasonal factors, and IASA is a technique which allows such phenomena to be estimated and factored out of series before seasonal factors are computed. The IASA procedures were also used to compute prior adjustment factors for the seasonal adjustment of the labor force series beginning in 1994, to control for level shifts associated with the revision introduced in the Current Population Survey in 1994.

In the meantime, over the several years preceding 1996, the Bureau of the Census had been working on a significant new extension of X-11. The new procedure, called X-12 ARIMA, integrates ARIMA forecasting with X-11 seasonal adjustment very much like X-11 ARIMA did, but it also provides a lot of additional tools including some that enable the estimation and diagnosis of a wide range of special effects.
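The forecast-extension idea behind X-11 ARIMA (and carried over into X-12 ARIMA) can be sketched as follows, assuming the hypothetical monthly series from the previous sketch and using the statsmodels library; the model order chosen here is the common "airline" specification and is purely illustrative:

import warnings
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    # Fit a seasonal ARIMA model to the observed series.
    model = ARIMA(series.to_timestamp(), order=(0, 1, 1),
                  seasonal_order=(0, 1, 1, 12))
    fitted = model.fit()

# Extend the series with one extra year of forecasts so the moving averages
# at the end of the sample have "future" data to work with.
forecast = fitted.forecast(steps=12)
extended = pd.concat([series.to_timestamp(), forecast])

# The ratio-to-moving-average factors would then be computed on `extended`,
# and only the factors for the original time span would be retained.
print(extended.tail(15))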
BLS began using X-12 ARIMA for the seasonal adjustment of the establishment-survey series effective with the release of the 1995 benchmark revisions in June 1996, primarily because of the capabilities it offered for controlling for survey-interval effects as well as moving holidays. The standard practice at BLS for current seasonal adjustment of data, as it is initially released, is to use projected seasonal factors which are published ahead of time. The time series are generally run through the seasonal adjustment program once a year to provide the projected factors for the ensuing months and the revised seasonally adjusted data for the recent history of the series, usually the last 5 years. It has generally been unnecessary to revise any further back in time because the programs which have been used have all accomplished the objective of stabilizing the factors for the earlier part of the series, and any further revisions would produce only trivial changes. For the projected factors, the factors for the last complete year of actual data were selected when the X-11 or BLS method programs were used. With the X-11 ARIMA and X-12 ARIMA procedures, the projected year-ahead factors produced by the program are normally used for labor force and employment series while the factors for the last complete year are still used for the price series. For the labor force data since 1980, only the factors for the January-June period are projected from the annual run—a special midyear run of the program is done, with up-to-date data included, to project the factors for the July- December period. Since 1989, projected factors are also calculated twice a year for use in seasonally adjusted establishment-based employment, hours, and earnings data. Factors are projected for the May through October period and introduced concurrent with the annual benchmark adjustments, and again for the November-April period. As of the 1996 benchmark adjustments, factors for the 2 months preceding these respective 6-month periods began to be revised so that they would be on the same basis as the 6 months of projected factors. An alternative to the use of projected factors is concurrent adjustment, where all data are run through the seasonal adjustment program each month, and the current observation participates in the calculation of the current factor. Research has shown potentially significant technical advantages in the area of minimization of factor revisions that are possible with concurrent adjustment. Of course, the concurrent approach precludes the prior publication of factors and requires the expenditure of substantially more staff and computer time to run, monitor and evaluate the seasonal adjustment process. If future findings suggest the desirability of a change to a concurrent procedure or to some other type of methodology, such a change will be seriously considered in consultation with the Government's working group on statistics. In applying any of the above mentioned methods of seasonal adjustment, the user should be aware that the result of combining series which have been adjusted separately will usually be a little different from the direct adjustment of the combined series. For example, the quotient of seasonally adjusted unemployment divided by seasonally adjusted labor force will not be quite the same as when the unemployment rate is adjusted directly. Similarly, the sum of seasonally adjusted unemployment and seasonally adjusted employment will not quite match the directly adjusted labor force. 
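The non-additivity point in the last paragraph can be seen with a small numerical sketch (all numbers and seasonal factors below are hypothetical and chosen only for illustration):

# Seasonally adjusting components separately vs. adjusting their ratio directly.
unemployment_raw = 7.40      # millions, not seasonally adjusted (hypothetical)
labor_force_raw = 146.0      # millions, not seasonally adjusted (hypothetical)

factor_unemp = 1.06          # seasonal factor estimated for unemployment
factor_lf = 1.004            # seasonal factor estimated for the labor force
factor_rate = 1.052          # factor estimated directly for the unemployment rate

# Ratio of separately adjusted series:
rate_from_components = (unemployment_raw / factor_unemp) / (labor_force_raw / factor_lf)

# Directly adjusted rate:
rate_direct = (unemployment_raw / labor_force_raw) / factor_rate

print(round(100 * rate_from_components, 3))  # -> 4.801 percent
print(round(100 * rate_direct, 3))           # -> 4.818 percent, close but not identical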
Separate adjustment of components and summing of them to the total usually provides series that are easier to analyze; it is also generally preferable in cases where the relative weights among components with greatly different seasonal factors may shift radically. For other series, however, it may be better to adjust the total directly if high irregularity among the components makes a good adjustment of all components difficult.

Finally, it is worth noting that the availability of a fast, efficient procedure for making seasonal adjustment computations can easily lead to the processing of large numbers of series without allotting enough time to review the results. No standard procedure can take the place of careful review and evaluation by skilled analysts. A review of all results is strongly recommended. And it should also be remembered that, whenever one applies seasonal factors and analyzes seasonally adjusted data, seasonal adjustment is a process which estimates a set of not directly observable components (seasonal, trend-cycle, irregular) from the observed series and is, therefore, subject to error. Because of the complex nature of methods such as X-11 ARIMA, the precise statistical properties of these errors are not known.

^1 Shiskin, Julius. Electronic Computers and Business Indicators, Occasional Paper No. 57, New York, National Bureau of Economic Research, 1957.

Technical References:

Department of Commerce, Bureau of the Census. Seasonal Analysis of Economic Time Series, Economic Research Report, ER-1, issued December 1978. Proceedings of a 1976 conference jointly sponsored by the National Bureau of Economic Research and the Bureau of the Census.

Department of Commerce, Bureau of the Census. The X-11 Variant of the Census Method II Seasonal Adjustment Program. Technical Paper No. 15 (1967 revision).

Department of Commerce, Bureau of the Census. X-12-ARIMA Reference Manual, Beta Version 1.1, June 24, 1996.

Department of Labor, Bureau of Labor Statistics. Employment and Earnings, March and June 1996.

Department of Labor, Bureau of Labor Statistics. The BLS Seasonal Factor Method, 1966.

Organization for Economic Cooperation and Development. Seasonal Adjustment on Electronic Computers, Paris, 1961. The report and proceedings of an international conference held in November 1960. Describes experience in the United States, Canada, and several European countries. Includes theoretical sections relating to calendar (trading day) variation and general properties of moving averages.

Barton, H.C., Jr., "Adjustment for Seasonal Variation", Federal Reserve Bulletin, June 1941. The classic account of the FRB ratio-to-moving-average method, in which the analyst uses skilled judgment to draw freehand curves at key stages of the procedure.

Buszuwski, James A., and Scott, Stewart, "On the Use of Intervention Analysis in Seasonal Adjustment," Proceedings of the Business and Economic Statistics Section, American Statistical Association.

Dagum, Estela Bee. The X-11 ARIMA Seasonal Adjustment Method. Ottawa, Statistics Canada, January 1983 (Statistics Canada Catalogue No. 12-564E).

Macaulay, Frederick R. The Smoothing of Time Series, NBER No. 19. New York, National Bureau of Economic Research, 1931. An early discussion of moving averages and of the criteria for choosing one average rather than another.
McIntire, Robert J., "A Procedure to Control for Moving-Holiday Effects in Seasonally Adjusting Employment and Hours Series", Proceedings of the Business and Economic Statistics Section, American Statistical Association, 1990.

Shiskin, Julius. Electronic Computers and Business Indicators, Occasional Paper No. 57, New York, National Bureau of Economic Research, 1957. Also published in Journal of Business, Vol. 30, October 1957. Describes applications of the first widely used computer program for making seasonal adjustments.
{"url":"http://data.bls.gov/cgi-bin/print.pl/cpi/cpisahoma.htm","timestamp":"2014-04-19T07:22:43Z","content_type":null,"content_length":"20989","record_id":"<urn:uuid:550c9f40-eb2d-4399-b142-4d23740b245c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: July 1998

Re: Re: coordinate transformation

• To: mathgroup at smc.vnet.net
• Subject: [mg13252] Re: [mg13169] Re: [mg13117] coordinate transformation
• From: David Withoff <withoff>
• Date: Fri, 17 Jul 1998 03:17:47 -0400
• Sender: owner-wri-mathgroup at wolfram.com

> S.-Svante Wellershoff wrote:
> >
> > Can someone explain me, how to use mathematica for transformations
> > between different coordinate systems?
> >
> > Like (x,y,z) in carthesian equals (r sin t cos p, r sin t sin p, r cos t)
> > in spherical system.
> >
> > Thanks, Svante
> >
> > p.s.: hope its no faq!
> >
> > ---------------------------------------------------------------------
> > S.-Svante Wellershoff svante.wellershoff at physik.fu-berlin.de
> > http://www.physik.fu-berlin.de/~welle/
> > ---------------------------------------------------------------------
> > Institut fuer Experimentalphysik
> > Freie Universitaet Berlin phone +49-(0)30-838-6234 (-6057)
> > Arnimallee 14 fax +49-(0)30-838-6059
> > 14195 Berlin - Germany
> > ---------------------------------------------------------------------
>
> There is an add-on package which does this called
> Calculus`VectorAnalysis`. Please be careful. Your definition above
> and the ones built in to mathematica and the ones quoted in nearly
> every physics book I have ever seen are valid only for vectors of
> infinitesimal spatial extent(field vectors, not displacement vectors)
> centered at the origin. You must also transform the unit vectors along
> with the components. If you do any subsequent vector differential
> operations, you must also take into account the coordinate metric which
> depends on the unit system and the location of the vectors.

Yes, there are functions in Calculus`VectorAnalysis` to change between common coordinate systems. For example:

In[1]:= << Calculus`VectorAnalysis`

In[2]:= ?CoordinatesToCartesian

CoordinatesToCartesian[pt] gives the Cartesian coordinates of the point given in the default coordinate system. CoordinatesToCartesian[pt, coordsys] gives the Cartesian coordinates of the point given in the coordinate system coordsys.

In[3]:= CoordinatesToCartesian[{r, t, p}, Spherical]

Out[3]= {r Cos[p] Sin[t], r Sin[p] Sin[t], r Cos[t]}

The documentation for this and other functions in this package can be found in the Standard Add-On Packages guide, which is included in the on-line documentation. I expect that that is what you want.

The remark about transformations that are "valid only for vectors of infinitesimal spatial extent centered at the origin" is confusing, but may refer to the fact that, when working with a vector field, the field vectors are typically described using a locally cartesian coordinate system derived from the coordinate system that is used for the host space. You can get into all sorts of trouble if you get the host space (or the coordinate system that is used to describe it) mixed up with the spaces (or their coordinate systems) used for the field vectors. If you aren't working with vector fields, then none of that is relevant, of course, and it isn't relevant to the Calculus`VectorAnalysis` package in any case, since the functions in that package only deal with one space at a time, and are not intended for transformations between, say, one locally cartesian space and another.

Dave Withoff
Wolfram Research
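For readers outside Mathematica, the same point transformation can be sketched in a few lines of Python (a hypothetical helper, using the same convention as the post: t is the polar angle, p the azimuth):

import math

def spherical_to_cartesian(r, t, p):
    """Convert a point (r, t, p), with polar angle t and azimuth p, to (x, y, z)."""
    x = r * math.sin(t) * math.cos(p)
    y = r * math.sin(t) * math.sin(p)
    z = r * math.cos(t)
    return x, y, z

# Example: the point (r=1, t=pi/2, p=0) lies on the positive x-axis.
print(spherical_to_cartesian(1.0, math.pi / 2, 0.0))   # ~ (1.0, 0.0, 0.0)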
{"url":"http://forums.wolfram.com/mathgroup/archive/1998/Jul/msg00171.html","timestamp":"2014-04-17T21:29:56Z","content_type":null,"content_length":"37557","record_id":"<urn:uuid:e64a93cd-fb91-4fba-be71-ded8bdb630f5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Cup products in simplicial cohomology

This post is a walkthrough of a computation I just did – and one of the main reasons I post it is for you to find and tell me what I've done wrong. I have a nagging feeling that the cup product just plain doesn't work the way I tried to make it work, and since I'm trying to understand cup products, I'd appreciate any help anyone has. I've picked out the examples I have in order to have two spaces with the same Betti numbers, but with different cohomological ring structure.

Sphere with two handles

I choose a triangulation of the sphere with two handles given by the boundary of a tetrahedron spanned by the nodes a,b,c,d and the edges be, ef, bf and cg, ch, gh spanning two triangles. We get a cochain complex of the form with the codifferential given as

Computing nullspaces and images, we get a one-dimensional

Now, the Encyclopedia of Topology, volume II has a paper by Viro and Fuchs on homology and cohomology. They state a direct construction of the cup product as defined by the following:

So, any product of something in

Now, by the definition of the coclasses, we need something that is an equivalence class of linear duals of 2-cells to split into things that only operate on the two handles. However, these are geometrically disjoint – so any cell we could feed into such a product would vanish on the components. For instance,

So all the higher degree cohomology products have to vanish.

We pick a triangulation of the torus with 9 vertices, 27 one-cells and 18 2-cells, given as the identification space of a square, in the usual manner. It's going to be obnoxious to write down boundary maps, and list all cells, so I'll just refer you to the following picture instead:

Now, setting up the same computations to get the cohomology classes, we arrive at a one-dimensional class in degree zero, represented by the sum of all duals of all vertices, and two classes in degree one, with representatives given, for instance, by

Now is where I find things get tricky. Again, we get the 0-degree class acting as, essentially, an identity. And the only products that could possibly be nontrivial now would be products of two classes of degree 1. So let's call the classes

Using the Viro-Fuchs construction of the cup product, I should be able to say that if we consider

This, somehow, feels fishy. Could anyone check my reasoning for me, please?

Edited to add: indeed this was fishy. The kind of blatant reordering I did to compute with dgi isn't permissible. Also, we know that the square of a degree 1 coclass has to vanish, since the cohomology ring is graded commutative. However, we're not far from the truth: If we want to compute

corresponding to the resulting potential 2-coclasses
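For reference, the standard direct simplicial cup product (the Alexander–Whitney front-face/back-face formula, which is the kind of construction alluded to above, stated here from general knowledge rather than quoted from Viro–Fuchs) evaluates a p-cochain $\alpha$ and a q-cochain $\beta$ on an ordered (p+q)-simplex by

$(\alpha \smile \beta)\bigl([v_0, \dots, v_{p+q}]\bigr) = \alpha\bigl([v_0, \dots, v_p]\bigr) \cdot \beta\bigl([v_p, \dots, v_{p+q}]\bigr),$

so for two 1-cochains on the torus the product is evaluated on each ordered 2-simplex $[v_0, v_1, v_2]$ as $\alpha([v_0, v_1]) \cdot \beta([v_1, v_2])$, which is the computation the post attempts; the ordering of the vertices matters, which is consistent with the "Edited to add" remark that reordering is not permissible.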
{"url":"http://blog.mikael.johanssons.org/archive/2008/09/cup-products-in-simplicial-cohomology/","timestamp":"2014-04-17T21:26:26Z","content_type":null,"content_length":"45015","record_id":"<urn:uuid:5e9fa166-4ebc-474c-95a9-c653c67e97e5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHOD AND APPARATUS FOR OPTIMIZING CONSTRAINT SOLVING THROUGH CONSTRAINT REWRITING AND DECISION REORDERING

Methods and apparatuses are described for assigning random values to a set of random variables so that the assigned random values satisfy a set of constraints. A constraint solver can receive a set of constraints that is expected to cause performance problems when the system assigns random values to the set of random variables in a manner that satisfies the set of constraints. For example, modulo constraints and bit-slice constraints can cause the system to perform excessive backtracking when the system attempts to assign random values to the set of random variables in a manner that satisfies the set of constraints. The system can rewrite the set of constraints to obtain a new set of constraints that is expected to reduce and/or avoid the performance problems. The system can then assign random values to the set of random variables based on the new set of constraints.

1. A method for assigning random values to a set of random variables, the method comprising: receiving a set of constraints, wherein each constraint is defined over one or more random variables from the set of random variables, wherein the set of constraints includes one or more modulo constraints that use a modulo operator; rewriting the set of constraints to obtain a new set of constraints, wherein said rewriting includes replacing the one or more modulo constraints with one or more non-modulo constraints that use only non-modulo operators; and assigning random values to the set of random variables based on the new set of constraints.

2. The method of claim 1, wherein replacing the one or more modulo constraints with the one or more non-modulo constraints that use only non-modulo operators includes replacing modulo expression "expr1% expr2" by expression "expr1 & (|expr2|-1)" if "expr2" is equal to a power of two, wherein "expr1" and "expr2" are expressions.

3. The method of claim 1, wherein replacing the one or more modulo constraints with the one or more non-modulo constraints that use only non-modulo operators includes replacing modulo constraint "expr1% expr2==expr3" by constraints "(expr1==q*expr2+r) && (|r|<|expr2|)" and "r==expr3," wherein "expr1," "expr2," and "expr3" are expressions, and "q" and "r" are random variables.

4. The method of claim 1, wherein rewriting the set of constraints includes: determining whether multiple modulo constraints include a modulo expression; and reusing a set of random variables to rewrite the modulo expression in the multiple modulo constraints.

5. The method of claim 1, wherein rewriting the set of constraints includes not rewriting modulo constraint "expr3→(expr1% expr2)" if expr3 is expected to evaluate to "FALSE."
6. A non-transitory computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for assigning random values to a set of random variables, the method comprising: receiving a set of constraints, wherein each constraint is defined over one or more random variables from the set of random variables, wherein the set of constraints includes one or more modulo constraints that use a modulo operator; rewriting the set of constraints to obtain a new set of constraints, wherein said rewriting includes replacing the one or more modulo constraints with one or more non-modulo constraints that use only non-modulo operators; and assigning random values to the set of random variables based on the new set of constraints.

7. The non-transitory computer-readable storage medium of claim 6, wherein replacing the one or more modulo constraints with the one or more non-modulo constraints that use only non-modulo operators includes replacing modulo expression "expr1% expr2" by expression "expr1 & (|expr2|-1)" if "expr2" is equal to a power of two, wherein "expr1" and "expr2" are expressions.

8. The non-transitory computer-readable storage medium of claim 6, wherein replacing the one or more modulo constraints with the one or more non-modulo constraints that use only non-modulo operators includes replacing modulo constraint "expr1% expr2==expr3" by constraints "(expr1==q*expr2+r) && (|r|<|expr2|)" and "r==expr3," wherein "expr1," "expr2," and "expr3" are expressions, and "q" and "r" are random variables.

9. The non-transitory computer-readable storage medium of claim 6, wherein rewriting the set of constraints includes: determining whether multiple modulo constraints include a modulo expression; and reusing a set of random variables to rewrite the modulo expression in the multiple modulo constraints.

10. The non-transitory computer-readable storage medium of claim 6, wherein rewriting the set of constraints includes not rewriting modulo constraint "expr3→(expr1% expr2)" if expr3 is expected to evaluate to "FALSE."

11. A computer system, comprising: a processor; and a non-transitory computer-readable storage medium storing instructions that when executed by the processor cause the computer system to perform a method for assigning random values to a set of random variables, the instructions comprising: instructions for receiving a set of constraints, wherein each constraint is defined over one or more random variables from the set of random variables, wherein the set of constraints includes one or more modulo constraints that use a modulo operator; instructions for rewriting the set of constraints to obtain a new set of constraints, wherein said instructions for rewriting the set of constraints include instructions for replacing the one or more modulo constraints with one or more non-modulo constraints that use only non-modulo operators; and instructions for assigning random values to the set of random variables based on the new set of constraints.

12. The computer system of claim 11, wherein the instructions for replacing the one or more modulo constraints with the one or more non-modulo constraints that use only non-modulo operators include instructions for replacing modulo expression "expr1% expr2" by expression "expr1 & (|expr2|-1)" if "expr2" is equal to a power of two, wherein "expr1" and "expr2" are expressions.
13. The computer system of claim 11, wherein the instructions for replacing the one or more modulo constraints with the one or more non-modulo constraints that use only non-modulo operators include instructions for replacing modulo constraint "expr1% expr2==expr3" by constraints "(expr1==q*expr2+r) && (|r|<|expr2|)" and "r==expr3," wherein "expr1," "expr2," and "expr3" are expressions, and "q" and "r" are random variables.

14. The computer system of claim 11, wherein the instructions for rewriting the set of constraints include: instructions for determining whether multiple modulo constraints include a modulo expression; and instructions for reusing a set of random variables to rewrite the modulo expression in the multiple modulo constraints.

15. The computer system of claim 11, wherein the instructions for rewriting the set of constraints include instructions for not rewriting modulo constraint "expr3→(expr1% expr2)" if expr3 is expected to evaluate to "FALSE."

16. A method for assigning random values to a set of random variables, the method comprising: receiving a set of constraints, wherein each constraint is defined over one or more random variables from the set of random variables, wherein the set of constraints includes one or more bit-slice constraints, wherein each bit-slice constraint includes one or more bit-slices of one or more random variables; rewriting the set of constraints to obtain a new set of constraints, wherein said rewriting includes: defining one or more auxiliary random variables, wherein each auxiliary random variable represents a non-overlapping bit-slice of a random variable, and rewriting the one or more bit-slice constraints using the one or more auxiliary random variables; and assigning random values to the set of random variables based on the new set of constraints.

17. The method of claim 16, wherein assigning random values to the set of random variables based on the new set of constraints includes assigning random values to the one or more auxiliary random variables.

18. The method of claim 16, wherein rewriting the set of constraints to obtain a new set of constraints further includes: identifying a high-frequency bit-slice of a random variable that occurs more than a pre-defined number of times in the set of constraints; and defining a high-frequency auxiliary variable that represents the high-frequency bit-slice of the random variable.

19. A method for assigning random values to a set of random variables, the method comprising: receiving a set of constraints, wherein each constraint is defined over one or more random variables from the set of random variables; analyzing the set of constraints to identify a set of outlier random variables in the set of random variables, wherein at least one cost component value of each outlier random variable falls outside a pre-defined range of values; identifying a subset of constraints, wherein each constraint in the subset of constraints includes at least one outlier random variable; rewriting the subset of constraints to obtain a new set of constraints; and assigning random values to the set of random variables based on the new set of constraints.

20. The method of claim 19, wherein assigning random values to the set of random variables includes: arranging the set of random variables in a sequence based on the set of outlier random variables; and assigning random values to the set of random variables based on the sequence.
61/417,754, Attorney Docket Number SNPS-1418US01, entitled "Method and Apparatus for Rewriting Constraints," by inventors Ngai William Hung, Qiang Qiang, Guillermo Maturana, Jasvinder Singh, and Dhiraj Goswami, filed 29 Nov. 2010, the contents of which are incorporated herein by reference. BACKGROUND [0002] 1. Technical Field This disclosure generally relates to constraint solvers. More specifically, this disclosure relates to methods and apparatuses for rewriting constraints. 2. Related Art The importance of circuit verification cannot be over-emphasized. Indeed, without circuit verification it would have been impossible to design complicated integrated circuits which are commonly found in today's computing devices. Constrained random simulation methodologies have become increasingly popular for functional verification of complex designs, as an alternative to directed-test based simulation. In a constrained random simulation methodology, random vectors are generated to satisfy certain operating constraints of the design. These constraints are usually specified as part of a test-bench program. A test-bench automation tool (TBA) uses the test-bench program to generate random solutions for a set of random variables, such that a set of constraints over the set of random variables is satisfied. These random solutions can then be used to generate valid random stimulus for the Design Under Verification (DUV). This stimulus is simulated using simulation tools, and the results of the simulation are typically examined within the test-bench program to monitor functional coverage, thereby providing a measure of confidence on the verification quality and completeness. Constraint solvers are typically used to generate random vectors that satisfy the set of constraints. The basic functionality of a constraint solver is to solve the following constraint satisfaction problem: given a set of random variables and a set of constraints, assign a set of random values to the set of random variables that satisfy the set of constraints. For better software maintenance and quality, the solutions generated by the constraint solver need to be reproducible and deterministic. Further, since users typically require good coverage for the random simulation, the constraint solutions also need to satisfy a user provided distribution. Unfortunately, the constraint satisfaction with desired solution distribution problem is NP-hard. Logic simulation, on the other hand, usually scales linearly with the size of the design. As a result, the speed of stimulus generation usually lags far behind the speed at which the stimulus is used in the simulation. Hence, it is desirable to improve performance of a constraint solver because it can significantly improve the overall performance of constrained random simulation tools. SUMMARY [0009] Embodiments described in this disclosure provide methods and apparatuses for assigning random values to a set of random variables so that the assigned random values satisfy a set of constraints. Each constraint can be defined over one or more random variables from the set of random variables. Specifically, a system (e.g., a constraint solver) can receive a set of constraints that is expected to cause performance problems when the system attempts to assign random values to the set of random variables in a manner that satisfies the set of constraints. 
For example, the set of constraints may be expected to cause the system to perform excessive backtracking when the system attempts to assign random values to the set of random variables in a manner that satisfies the set of constraints. As another example, the set of constraints may be expected to cause the system to use an impracticably large amount of memory and/or processing resources when the system attempts to assign random values to the set of variables in a manner that satisfies the set of constraints. In some embodiments, the system can rewrite the set of constraints to obtain a new set of constraints which is equivalent to the original set of constraints, but which is expected to reduce and/or avoid the performance problems. The system can then assign random values to the set of random variables based on the new set of constraints. In this disclosure, unless otherwise stated, the term "based on" means "based solely or partly on." In some embodiments, the system creates a representation of the set of constraints (e.g., a word-level circuit or a binary decision diagram), and uses the representation to assign random values to the random variables. In some embodiments, the system can rewrite the set of constraints to obtain a new set of constraints, and then create a representation based on the new set of constraints. In other embodiments, the system can create a representation based on the original set of constraints, and modify the representation so that the modified representation represents the new set of In some embodiments, the system can receive a set of constraints which includes one or more modulo constraints that use a modulo operator. A modulo constraint can cause the system to perform excessive backtracking because the system may not be able to perform complete implication for modulo operators. To reduce the amount of backtracking, some embodiments can rewrite one or more modulo constraints using non-modulo operators. The new set of constraints can then be used to assign random values to the set of random variables. In some embodiments, the system can determine whether multiple modulo constraints include the same modulo expression. If so, the system can reuse the same set of random variables (instead of using different sets of random variables) to rewrite the modulo expression in the multiple modulo constraints. In some embodiments, the system can receive a set of constraints which includes one or more bit-slice constraints. Each bit-slice constraint can include one or more bit-slices of one or more random variables. The system can then rewrite the set of constraints to obtain a new set of constraints by: (1) defining one or more auxiliary random variables, wherein each auxiliary random variable represents a non-overlapping bit-slice of a random variable; and (2) rewriting one or more bit-slice constraints using the one or more auxiliary random variables. The system can then assign random values to the set of random variables based on the new set of constraints. Specifically, the system can assign random values to the one or more auxiliary random variables, and determine values for the original random variables by concatenating one or more auxiliary random variables. The system can use a cost function that has multiple cost components to identify important random variables and/or constraints. In some embodiments, the system can analyze the set of constraints to identify a set of outlier random variables in the set of random variables. 
According to one definition, an outlier random variable is a random variable such that at least one cost component value for the random variable falls outside a pre-defined range of values. For example, the system may determine an average cost component value based on the cost component values for all random variables. Next, the system can define a range of cost component values around the average cost component value that is considered to be normal. Any random variable whose cost component value falls outside this range can be identified as an outlier random variable. Once the outlier random variables have been identified, they can be used for a variety of purposes. In some embodiments, the outlier random variables can be used to identify constraints that are candidates for being rewritten. In some embodiments, the outlier random variables can be used for assigning random values to the set of random variables. For example, the system may assign random values to the outlier random variables before assigning values to other random variables. BRIEF DESCRIPTION OF THE FIGURES [0017] FIG. 1 illustrates various steps in the design and fabrication of an integrated circuit. FIG. 2 illustrates a word-level circuit that an ATPG-based constraint solver may create for a constraint problem in accordance with some embodiments described in this disclosure. FIG. 3A illustrates a constraint in accordance with some embodiments described in this disclosure. FIG. 3B illustrates a BDD in accordance with some embodiments described in this disclosure. FIG. 4A illustrates an example of forward implication in accordance with some embodiments described in this disclosure. FIG. 4B illustrates an example of backward implication in accordance with some embodiments described in this disclosure. FIG. 5 presents a flowchart that illustrates a process for rewriting modulo constraints in accordance with some embodiments described in this disclosure. FIG. 6 presents a flowchart that illustrates a process for rewriting bit-slice constraints in accordance with some embodiments described in this disclosure. FIG. 7 presents a flowchart that illustrates a process for identifying and using outliers in accordance with some embodiments described in this disclosure. FIG. 8 illustrates a process for assigning higher priority to outlier random variables in accordance with some embodiments described in this disclosure. FIG. 9 illustrates a computer system in accordance with some embodiments described in this disclosure. DETAILED DESCRIPTION [0028] The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. FIG. 1 illustrates various steps in the design and fabrication of an integrated circuit. The process starts with a product idea 100, which is realized using Electronic Design Automation (EDA) software 110. Chips 170 can be produced from the finalized design by performing fabrication 150 and packaging and assembly 160 steps. 
A design flow that uses EDA software 110 is described below. Note that the design flow description is for illustration purposes only, and is not intended to limit the present invention. For example, an actual integrated circuit design may require a designer to perform the design flow steps in a different sequence than the sequence described below. In the system design step 112, the designers can describe the functionality to implement. They can also perform what-if planning to refine the functionality and to check costs. Further, hardware-software architecture partitioning can occur at this step. In the logic design and functional verification step 114, a Hardware Description Language (HDL) design can be created and checked for functional accuracy. In the synthesis and design step 116, the HDL code can be translated to a netlist, which can be optimized for the target technology. Further, tests can be designed and implemented to check the finished chips. In the netlist verification step 118, the netlist can be checked for compliance with timing constraints and for correspondence with the HDL code. In the design planning step 120, an overall floor plan for the chip can be constructed and analyzed for timing and top-level routing. Next, in the physical implementation step 122, placement and routing can be performed. In the analysis and extraction step 124, the circuit functionality can be verified at a transistor level. In the physical verification step 126, the design can be checked to correct any functional, manufacturing, electrical, or lithographic issues. In the resolution enhancement step 128, geometric manipulations can be performed on the layout to improve manufacturability of the design. Finally, in the mask data preparation step 130, the design can be taped-out 140 for production of masks to produce finished chips. Constraint Solvers [0036] Some embodiments described in this disclosure use constraint solvers that are based on ATPG (Automatic Test Pattern Generation) or BDDs (Binary Decision Diagrams). Unlike BDD-based constraint solvers, ATPG-based constraint solvers do not guarantee a uniform distribution across the solution space. However, the worst-case runtime of a BDD-based constraint solver can be exponential in the size of the input problem. ATPG-based constraint solvers typically require less memory than BDD-based solvers. Although embodiments have been described in the context of these two constraint solvers, the general principles defined herein may be applied to other constraint solvers without departing from the spirit and scope of the present disclosure. Some embodiments described in this disclosure use an ATPG-based constraint solver that creates a word-level circuit model to represent the constraints and uses a word-level value system to represent possible values for all nodes in this model. This word-level value system may use intervals and ranges to represent multiple values in a compact form. For example, let "a" be a 4 bit unsigned variable. Independent of the constraints on "a," we can say that the possible values that "a" can have are {0:15}, i.e., from 0 to 15. Note that this compactly represents multiple values, without explicitly enumerating all the values. This representation can be referred to as an interval. Suppose the constraint "a !=7" is imposed on variable "a." This constraint restricts the values "a" can have to {0:6}, {8:15}. Such a "list of intervals" can be referred to as a range. 
If another constraint, "a>2," is added, the range value that variable "a" can take is further restricted to {3:6}, {8:15}. A constraint problem that is based on these constraints can be stated as follows: determine random values for variable "a," such that all the constraints on variable "a" are satisfied. The above-described constraint problem can be represented by the following lines of code: -US-00001 rand bit[3:0] a; constraint cons1 { a != 7; a > 2; } FIG. 2 illustrates a word-level circuit that an ATPG-based constraint solver may create for a constraint problem in accordance with some embodiments described in this disclosure. The ATPG-based constraint solver used by some embodiments described in this disclosure may create a word-level circuit that includes nodes 202, 204, and 206. Node 202 can correspond to conjunction of the set of constraints, node 204 can correspond to constraint "a !=7," and node 206 can correspond to constraint "a>2." The "&&" operator shown inside node 202 indicates that both constraints need to be satisfied. After creating the word-level circuit model, the ATPG-based constraint solver can perform static implications to refine the range values on each node in the circuit. The result of performing static implications is also shown in FIG. 2. That is, the range value shown next to each node in the circuit is the result of performing static implications. After performing static implications, the ATPG-based constraint solver can perform static learning to further refine the range values. Then the ATPG-based constraint solver can perform random ATPG to pick and try values for the unassigned random variables. For example, in the word-level circuit illustrated in FIG. 2, suppose the ATPG-based constraint solver picks a random value "5" for variable "a." Once this value is picked, the constraint solver can invoke its implication engine to incrementally update the range values in the circuit and to determine if this value assignment to variable "a" results in any conflicts. For the value "5," the implication engine will evaluate the "!=" comparator node and the ">" comparator node to determine that there is no conflict. Therefore, the assignment "a=5" is determined as a legal random assignment that satisfies all the constraints. Note that a constraint solver can remember the evaluation result generated by a node, and reuse the evaluation result if the input values to the node have not changed. If a conflict is encountered during the implication process, the constraint solver can backtrack on the last assigned variable and try other value assignment until the constraint solver determines an assignment that does not result in any conflicts (if such a solution exists). Due to backtracking, the constraint solver may need to invoke its implication engine repeatedly, which can end up consuming a majority of the constraint solver's computation time. Further details of an ATPG-based constraint solver can be found in U.S. Pat. No. 7,243,087, entitled "Method and Apparatus for Solving Bit-Slice Operators," by inventor Mahesh A. Iyer, issued on Jul. 10, 2007, the contents of which are herein incorporated by reference. Some embodiments described in this disclosure are directed to improve the performance of ATPG-based constraint solvers. In some embodiments described in this disclosure, a constraint solver represents the set of constraints in a canonical representation (e.g., a BDD), and then uses the canonical representation to determine solutions to the set of constraints. 
For example, if the set of constraints is represented by a BDD, each path in the BDD from the root node to the terminal node that corresponds to the value "1" can be associated with a value assignment that satisfies the set of constraints. FIG. 3A illustrates a constraint in accordance with some embodiments described in this disclosure. The constraint illustrated in FIG. 3A is a Boolean function over three variables: "a," "b," and "c." The variables may model signals in the DUV, e.g., the variable "a" may be a random variable that represents the logic state of an input signal in the DUV. Variable "b" may be a state variable that represents the output of a logic gate in the DUV. FIG. 3B illustrates a BDD in accordance with some embodiments described in this disclosure. BDD 300 can represent the constraint shown in FIG. 3A. BDD 300 includes nodes 302, 304, 306, 308, 310, 312, 314, and 316. Node 302 can be the root node which can be used to represent the entire constraint. Node 304 can be associated with variable "a," nodes 306 and 308 can be associated with variable "b," and nodes 310 and 312 can be associated with variable "c." Node 314 can represent the Boolean value "TRUE" or "1" for the Boolean function. In other words, node 314 can represent a situation in which the constraint has been satisfied. In contrast, node 316 can represent the Boolean value "FALSE" or "0." In other words, node 316 can represent a situation in which the constraint has not been satisfied. The directed edges in BDD 300 can represent a value assignment to a variable. For example, the directed edge between nodes 304 and 306 can be associated with assigning value "0" to the random variable "a." Similarly, the directed edge between nodes 308 and 312 can be associated with assigning value "1" to the state variable "b." A directed path in a BDD from the root node, e.g., node 302, to the terminal node for the Boolean value "TRUE," e.g., node 314, can correspond to a value assignment to the variables that satisfies the set of constraints which is being represented by the BDD, e.g., the constraint shown in FIG. 3A. For example, path 318 begins at node 302 and terminates at node 314. The value assignments associated with path 318 are: a=0, b=1, and c=1. It will be evident that this value assignment causes the Boolean function shown in FIG. 3A to evaluate to "TRUE." Once the system builds the BDD, it can generate constrained random stimulus by determining all distinct paths from the root node to the terminal node associated with the "TRUE" value, and by randomly selecting a path from the set of all distinct paths. Note that BDDs are only one of the many different types of canonical representations that can be used to generate the random stimulus. A canonical representation of a logical function (e.g., a constraint) can generally be any representation which satisfies the following property: if two logical functions are equivalent, their canonical representations will be the same as long as the same variable ordering (or an equivalent characteristic) is used while constructing the canonical representation. Examples of canonical representations include, but are not limited to, binary decision diagrams, binary moment diagrams, zero-suppressed binary decision diagrams, multi-valued decision diagrams, multiple-terminal binary decision diagrams, and algebraic decision diagrams. 
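To make the two solving styles concrete, here is a toy sketch covering both: a random pick-and-check loop in the spirit of the ATPG flow around FIG. 2 (for the 4-bit "a != 7 && a > 2" example), and brute-force enumeration of satisfying assignments, which is what the 1-paths of a BDD encode (the three-variable Boolean function below is hypothetical, since the constraint of FIG. 3A is not reproduced here):

import itertools
import random

# --- ATPG-style pick-and-check for the 4-bit example: a != 7 and a > 2 ---
def satisfies(a):
    return a != 7 and a > 2

legal = [a for a in range(16) if satisfies(a)]
print(legal)                      # the range {3:6}, {8:15} -> [3, 4, 5, 6, 8, ..., 15]

random.seed(1)
while True:
    a = random.randrange(16)      # decision: pick a candidate value
    if satisfies(a):              # implication / conflict check
        break                     # a conflict would mean "backtrack" and retry
print("ATPG-style pick:", a)

# --- BDD-style: enumerate satisfying assignments and sample uniformly ---
def constraint(x, y, z):
    """Hypothetical Boolean constraint over three variables (stand-in for FIG. 3A)."""
    return (not x and y and z) or (x and not y)

solutions = [bits for bits in itertools.product([0, 1], repeat=3) if constraint(*bits)]
print("satisfying assignments:", solutions)   # includes (0, 1, 1), like path 318
print("uniform sample:", random.choice(solutions))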
Some embodiments described in this disclosure provide methods and apparatuses for improving the performance of constraint solvers (e.g., BDD-based constraint solvers) that represent the set of constraints using a canonical representation. Modulo Constraints [0053] One approach for improving the performance of constraint solvers is to reduce the amount of backtracking. Specifically, if the system can increase the effectiveness of the implication engine, it can further restrict the value ranges for the variables, which can reduce the occurrence of conflicts (and therefore backtracking) when a random value is chosen for a variable. FIG. 4A illustrates an example of forward implication in accordance with some embodiments described in this disclosure. As shown in FIG. 4A, if either variable "a" or variable "b" is known to be zero, then forward implication can be used to conclude that the logical AND operation of variables "a" and "b" (shown as "a && b" in FIG. 4A) will also be equal to zero. FIG. 4B illustrates an example of backward implication in accordance with some embodiments described in this disclosure. As shown in FIG. 4B, if the logical AND operation of variables "a" and "b" is known to be non-zero, then backward implication can be used to conclude that the values of "a" and "b" cannot be zero. As shown in FIGS. 4A and 4B, implication can be performed for the logical AND operator. Similarly, implication can also be performed for other standard logical operators (e.g., logical OR, logical EXOR, etc.) and standard arithmetic operations (e.g., addition, subtraction, multiplication, etc.). The modulo operator (which is denoted by the percent symbol "%" in this disclosure) is often used in constraints. However, conventional ATPG-based constraint solvers do not perform implication for modulo operators. As a result, a set of constraints that include a modulo operator can cause a large amount of backtracking in conventional ATPG-based constraint solvers. Some embodiments of the present invention provide methods and apparatuses to facilitate performing implication for modulo constraints. Specifically, some embodiments of the present invention rewrite a set of constraints that use the modulo operator so that the rewritten set of constraints use standard logical and arithmetic operators. Implication can be performed more effectively on the set of rewritten constraints because they only use logical and arithmetic operators on which implication can be performed effectively. Some embodiments use different approaches for rewriting the modulo constraint "expr1% expr2==expr3" depending on the value of "expr2." If the value of expression "expr2" is a power of two, then some embodiments rewrite the modulo constraint using a bitwise AND operator. On the other hand, if the value of expression "expr2" is not a power of two, then some embodiments rewrite the modulo constraint using standard logical and arithmetic operators. When the expression "expr2" is a power of two, the modulo operation is equivalent to a bitwise AND operation. Specifically, the expression "expr1% expr2" can be rewritten as "expr1 & bitmask," where "bitmask" is equal to "expr2-1" if expression "expr2" is greater than zero, and "-expr2-1" if the expression "expr2" is less than zero. In other words, "bitmask" is equal to "|expr2|-1," if expression "expr2" is not equal to zero. If expression "expr2" is equal to zero, the modulo operator returns an error. 
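A quick sanity check of the power-of-two rewrite (a sketch only; it checks non-negative values, where Python's % agrees with the hardware-style modulo, while the signed and negative-divisor cases are exactly what the |expr2|-1 bitmask discussion above addresses):

# For non-negative x and a power-of-two modulus m, x % m == x & (m - 1).
for m in (2, 4, 8, 16):
    bitmask = m - 1
    for x in range(256):
        assert x % m == x & bitmask

# The AND form is what lets the solver's implication engine reason about the
# constraint; e.g. "x & 3 == 0" immediately forces the two low bits of x to 0.
print("power-of-two rewrite verified for non-negative values")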
Replacing a modulo operator with a bitwise AND operator can substantially speedup the constraint solver because the implication engine is capable of performing implications effectively on the bitwise logical AND operator. This can substantially reduce number of incorrect decisions in the ATPG decision procedure. Further, in a BDD-based constraint solver, the bitwise AND operator can prevent the BDD from becoming unmanageably large when the BDD-based constraint solver builds the BDD for the set of constraints (if the modulo constraint had not been rewritten, it could have caused the BDD to become unmanageably large). When "expr2" is not necessarily a power of two, the modulo constraint can be rewritten as a set of constraints that uses standard logical and arithmetic operators, but do not use the modulo operator. Specifically, the constraint "expr1% expr2==expr3" can be replaced by the constraints "(expr1==q*expr2+r) && (|r|<|expr2|" and "r==expr3," where "q" and "r" are new random variables. The following piece of code illustrates a modulo expression in which "expr2" is a power of two. rand int a; a % 4==0; This piece of code can be rewritten using a bitwise AND operation as follows. rand int a; a & 3==0; The following piece of code illustrates a modulo expression in which "expr2" is not necessarily a power of two. rand int a, b; a % b==0; This piece of code can be rewritten using standard logical and arithmetic operators, but without using the modulo operator, as follows. rand int a, b; rand int q, r; a==(b q+r) && (|r|<|b|); If the set of constraints includes multiple modulo constraints that have the same dividend and divisor (i.e., the same "expr1" and "expr2"), then the same random variables can be used for rewriting these modulo constraints. For example, the system can keep track of the dividends and divisors for each modulo constraint (e.g., by using a hash table). If a modulo constraint with the same arguments is encountered again, the system can reuse the random variables that were used for rewriting the previous instance of the modulo constraint. Reusing the same random variables can enable a common sub-expression identification module to remove redundant or duplicate constraints. The data structure (e.g., hash table) that is used for keeping track of the dividends and divisors can be deleted after the constraints have been rewritten to reclaim the memory that was used by the data structure. Modulo constraints with constraint guards can also be rewritten in a form that only uses non-modulo operators. If the modulo expression appears on the left hand side of the implication, the modulo expression can be rewritten as explained above. For example, the modulo expression "expr1% expr2" in the constraint "(expr1% expr2)→expr3" can be rewritten as explained above. However, if the modulo expression appears on the right hand side of the implication, the system may use a heuristic to determine whether or not to rewrite the constraint. For example, in the modulo constraint "expr3→(expr1% expr2)," if the guard expression "expr3" is known to be "FALSE," the modulo expression does not need to be evaluated, and therefore, does not need to be rewritten. In some embodiments, the system creates an internal node to handle cases in which multiple modulo constraints share the same modulo expression. For example, the following piece of code illustrates multiple modulo constraints that include a constraint guard and that share a modulo expression. 
rand int c1, c2, a, b; c1→a % b==5; c2→a % b==17; This piece of code can be rewritten using non-modulo operators as shown below. Note that the rewritten constraints instruct an ATPG-based constraint solver to create virtual internal node "tmp" so that the two modulo constraints can share a single evaluation of the expression "(a==(b*q+r) && (|r|<|b|))." rand int c1, c2, a, b; rand int q, r; virtual internal node tmp; tmp=(a==(b*q+r) && (|r|<|b|)); The system may determine whether to rewrite guarded modulo constraints based on one or more heuristics. Specifically, if the number of additional random variables and/or constraints that are required in the rewrite is greater than a given threshold, the system may decide not to rewrite the guarded modulo constraints. Further, if the constraint guard condition is expected to be "FALSE" most of the time, the modulo constraint on the right hand side of the implication will be ignored, and so the rewrite is not needed. After the rewrite, common sub-expression extraction may not be able to identify duplicate constraints because of the uncertainty of the guard condition. Therefore, rewriting each and every modulo constraint that has a constraint guard can unnecessarily increase the complexity of the constraint problem. In view of this, some embodiments described in this disclosure monitor the frequency with which a particular modulo expression (e.g., "a % b") appears in the set of constraints. If the modulo expression appears more than a certain amount, the system may decide to rewrite the modulo constraints that contain that particular modulo expression. Further, some embodiments may monitor the size of the constraint problem (e.g., number of variables and/or constraints). If the constraint problem is relatively small, the system may decide to skip rewriting the modulo constraints. FIG. 5 presents a flowchart that illustrates a process for rewriting modulo constraints in accordance with some embodiments described in this disclosure. The process can begin by receiving a set of constraints, wherein each constraint is defined over one or more random variables from a set of random variables, wherein the set of constraints includes one or more modulo constraints that use a modulo operator (block 502). Next, the system can rewrite the set of constraints to obtain a new set of constraints, wherein said rewriting includes replacing the one or more modulo constraints with one or more non-modulo constraints that use only non-modulo operators (block 504). In some embodiments, the system can determine whether multiple modulo constraints include the same modulo expression. If so, the system can use the same set of random variables to rewrite the modulo expression in the multiple modulo constraints. In some embodiments, the system can identify modulo constraints of the type "expr3→(expr1% expr2)." Next, the system can determine whether the expression "expr3" is expected to evaluate to "TRUE" or "FALSE." If the expression "expr3" is expected to evaluate to "FALSE," the system may skip rewriting the modulo constraint "expr3→(expr1% Once the set of constraints has been rewritten, the system can then assign random values to the set of random variables based on the new set of constraints (block 506). In some embodiments the system can create a word-level circuit based on the new set of constraints. 
In other embodiments, the system can create a word-level circuit based on the original set of constraints, and modify the word-level circuit so that the modified word-level circuit represents the new set of constraints. -Slice Constraints The random variables that are used in the set of constraints are usually defined as integers, which are essentially bit vectors. For example, each of the following random variables can be treated as bit vectors. rand int a; //32-bit signed bit vector rand bit [79:0] b; //80-bit signed bit vector rand byte c; //8-bit unsigned bit vector If a constraint uses only some of the bits (but not all the bits) of a variable, the subset of bits is called a "bit-slice" and the variable is called a "base node". For example, the following lines of code illustrate a bit-slice constraint. -US-00002 class TestBenchClass; rand bit [31:0] x; constraint cc { x[31:12] == x[19:0]; x >= 128; } endclass In the above example, x[31:12] and x[19:0] are bit-slices of base node variable x. The notation v[n:m] represents a bit-slice that includes bits m through n in variable v. Note that bit-slices x [31:12] and x[19:0] have a common bit-slice x[19:12]. ATPG-based constraint solvers usually perform a large amount of backtracking when solving a set of constraints that use bit-slices, especially if some of the bit-slices overlap. This is primarily due to two reasons. First, the implications for constraint expressions that use bit-slices are incomplete because bit-level values are propagated on a word-level model. Forward implication from the base node to a bit-slice node and the backward implication from a bit-slice node to the base node both return an approximate value range. In most cases, the propagated value is the most conservative value that the node can have. As a result, static implication and static learning is not very effective in narrowing down the search space for the bit-slice nodes. Second, some ATPG-based constraint solvers may use overlapping bit-slices as decision variables instead of using the base node as the decision variable. For example, in the piece of code shown above, an ATPG-based constraint solver may treat bit-slices x[31:12] and x[19:0] as decision variables and independently and randomly assign values to them. However, making independent random value assignments to these two bit-slices is likely to cause a conflict because these two bit-slices overlap (the common portion is bit-slice x[19:12]), which, in turn, causes the ATPG-based constraint solver to backtrack. Bit-slice constraints often appear in processor designs. Furthermore, any packed data structure is essentially a bit vector, and members of the packed data structure are essentially bit-slices. Therefore, improving constraint solver performance for bit-slice constraints is important. Some embodiments described in this disclosure provide methods and apparatuses for improving constraint solver performance for bit-slice constraints. Specifically, some embodiments rewrite bit-slice constraints so that the amount of backtracking required to generate random variable assignments based on the set of rewritten constraints is substantially less than the amount of backtracking required to generate random variable assignments based on the original set of constraints. Some embodiments introduce auxiliary decision variables and rewrite bit-slice constraints in terms of the auxiliary decision variables. 
Specifically, some embodiments identify non-overlapping partitions in the base node variable based on the bit-slices that appear in the constraints, and define an auxiliary decision variable for each non-overlapping partition. The original bit-slices can then be replaced by the auxiliary decision variables as decision nodes in the word-level circuit. The ATPG-based constraint solver can make decisions on these auxiliary decision variables before deciding the value of the base node variable by concatenating the auxiliary decision variables. For example, the bit-slice constraint shown above can be rewritten as follows. -US-00003 line 01: class TestBenchClass; line 02: rand bit [31:0] x; line 03: rand bit [11:0] aux31_20; line 04: rand bit [7:0] aux19_12; line 05: rand bit [11:0] aux11_0; line 06: constraint cc { line 07: aux31_20 == x[31:20]; line 08: aux19_12 == x[19:12]; line 09: aux11_0 == x[11:0]; line 10: x[31:12] == {aux31_20, aux19_12}; line 11: x[19:0] == {aux19_12, aux11_0}; line 12: x == {aux31_20, aux19_12, aux11_0}; line 13: x[31:12] == x[19:0]; line 14: x >= 128; line 15: } line 16: endclass As shown above, three auxiliary random variables "aux31 20," "aux19 12," and "aux11 0" have been added. These auxiliary random variables are marked as decision variables in the word-level circuit that the ATPG-based constraint solver uses to generate random variable assignments. The auxiliary variables are created based on non-overlapping partitions of the random variable "x." Specifically, variable "x" (which corresponds to the base node) is partitioned into non-overlapping bit-slices based on the boundaries of the bit-slices that are used in the constraints. In the above example, the bit-slices that are used in the constraints are x[31:12] and x[19:0]. The boundaries of these bit-slices are located at bits 31, 19, 12, and 0. Therefore, in some embodiments, the system creates three auxiliary decision variables "aux31 20," "aux19 12," and "aux11 0" that correspond to non-overlapping bit-slices x[31:20], x[19:12], and x[11:0], respectively. The relationship between the auxiliary variables and the corresponding bit-slices is captured by three new equality constraints, as shown in lines 07-09 of the rewritten bit-slice constraint. The original bit-slices x[31:12] and x[19:0] that were used in the original set of constraints are represented as a concatenation of auxiliary variables, as shown in lines 10-11 of the rewritten bit-slice constraint. Random variable x can be represented as a concatenation of auxiliary variables, as shown in line 12 of the rewritten bit-slice constraint. In general, concatenation performs incomplete implications unless one side of the concatenation operation is fully assigned. Since the auxiliary variables are always assigned before the original slices and the base node variable, complete implications can be performed on the concatenation operations that are introduced in the rewritten set of bit-slice constraints. In this manner, some embodiments enable complete implications for the bit-slice operator and reduce the amount of backtracking required. In some embodiments, the rewrite can be performed after the ATPG-based constraint solver creates a word-level circuit model and before the ATPG-based constraint solver performs implication. All the input variables that are associated with bit-slice nodes in the word-level circuit model can be statically analyzed and the bit-slice constraints can then be rewritten. 
For example, the system can collect bit-slice information by analyzing the set of constraints and create the appropriate bit-slice partitions. Next, the system can generate a set of auxiliary variables from these partitioned slices. The system can then create additional circuitry for the auxiliary variables in the word-level circuit. The additional circuitry can essentially capture the relationship between the auxiliary variables and bit-slices in the original set of constraints. Next, the system can collect information to facilitate random value assignments on the decision variables. Specifically, the system can determine the following information: (1) the number of possible paths between an auxiliary variable and the associated original bit-slices, (2) bit position relationships between an auxiliary variable and the associated base node variable, and (3) initial value ranges for the auxiliary variables. Based on the collected information, the system can determine whether and how to rewrite the bit-slice constraints. Adding auxiliary variables and constraints can increase the amount of processing performed by the ATPG-based constraint solver. Therefore, some embodiments try to optimize the number and type of auxiliary variables and constraints that are added in certain situations. Some examples of such optimizations are described below. In some cases, a variable is partitioned into a large number of bit-slices that are only a few bits wide. Such cases are referred to as bit-blasting cases. An example of a bit-blasting case is shown -US-00004 rand bit[7:0] x; constraint c1 { x[7] == (x[0]{circumflex over ( )}x[1]{circumflex over ( )}x[2]{circumflex over ( )}x[3]{circumflex over ( )}x[4]{circumflex over ( )}x[5]{circumflex over ( )}x[6]); } In the above example, the base variable "x" is eight bits long and the constraint has eight bit-slices, each corresponding to a single bit. Without optimization, some embodiments may create eight auxiliary variables (e.g., aux0, aux1, . . . , aux7), eight equality constraints (e.g., x[0]==aux0, x[1]==aux1, . . . , x[7]==aux7), and a seven-level concatenation constraint (e.g., { . . . {{aux7, aux6}, aux5} . . . , aux0}==x). The circuitry that is required to represent the rewritten set of constraints can be substantially larger than the circuitry that is required to represent the original set of constraints. As a result, rewriting the bit-slice constraints can degrade the performance of the ATPG-based constraint solver instead of improving its performance. For bit-blasting cases, such as the one shown above, some embodiments rewrite the constraints by defining an auxiliary variable that represents multiple bit partitions. For example, the constraint shown above can be rewritten as follows. -US-00005 rand bit[7:0] x; rand bit[0:0] y; rand bit[6:0] z; constraint c1 { y[0] == (z[6]{circumflex over ( )}z[5]{circumflex over ( )} ... {circumflex over ( )}z[0]); x == {y, z}; solve y, z before x; solve z before y; } As shown above, two auxiliary variables "y" and "z" are created instead of creating eight auxiliary variables. Circuitry for only one additional constraint is required (note that the "solve before" directive does not require additional circuitry to be built). In the rewritten set of constraints, instead of making decisions on eight bit-slices x[0]-x[7], the system only needs to make decisions on three decision variables: x, y, and z. 
Due to the "solve before" directives, the variable z is assigned a random value before assigning random values to variables x or y. Because all the bit-slices of z appear in the same constraint and on the same side of the constraint, the values assigned to the bit-slices do not depend on any other variable. In other words, no matter what value is assigned to each bit-slice of z, it would not cause any backtracking Once the value of z is assigned, the values of y and x can be assigned. Note that the value of x is known because of implication on the concatenation equality once the values of z and y are known. Therefore, no backtracking is performed in the rewritten set of constraints, and only a small amount of additional circuitry is required to represent the rewritten set of constraints. In some cases, a particular bit-slice may occur much more frequently than other bit-slices in the set of constraints. In such situations, the system can create auxiliary variables that correspond to the bit-slices that appear frequently in the set of constraints. For example, suppose the following bit-slices of a 64-bit random variable "x" appear in the set of constraints: x[1:0], x[63:32], x[31:24], x[23:19], x[18:16], x[15:12], x[11:2], x[1:1], x[0:0], x [2:0], x[3:0], x[5:2], and x[31:2]. Further, suppose that bit-slices x[31:2] and x[1:0] appear more than ten times in the set of constraints, while the other bit-slices only appear once. The partitions of variable "x" for which auxiliary variables are created are: x[63:32], x[31:24], x[23:19], x[18:16], x[15:12], x[11:6], x[5:4], x[3:3], x[2:2], x[1:1], and x[0:0]. However, if the system does not create special auxiliary variables (as explained below) for bit-slices x[31:2] and x[1:0], the system may require a large amount of backtracking because the value assignments to the partitions may not satisfy the constraints that are defined on the frequently occurring bit-slices. Therefore, some embodiments identify bit-slices that occur frequently in the set of constraints, and create additional auxiliary variables for these frequently occurring bit-slices. If the frequently occurring bit-slices overlap with one another, then, in some embodiments, the system may create an auxiliary variable for only the bit-slice that appears with the highest frequency. For example, in the above example, the system may define variables aux31 2 and aux1 0 for the two bit-slices x[31:2] and x[1:0] that appear substantially more frequently than the other bit-slices in the constraints. Specifically, the base node that corresponds to the variable "x" can be partitioned using boundaries of these two frequently occurring bit-slices to obtain bit-slices x[63:32], x[31:2], and x[1:0]. The system can then add the following lines of code based on the additional auxiliary variables that correspond to the frequently occurring bit-slices in the constraints. 24, aux23 19} . . . aux2}; 0=={aux1, aux0}; 32, aux31 2, aux1 Note that the equality constraint that equates variable "x" with a concatenation of bit-slices only needs to concatenate three bit-slices instead of concatenating a much larger number of bit-slices had the auxiliary variables for the frequently occurring bit-slices not been defined. The auxiliary variables that correspond to the frequently occurring bit-slices can be represented using only a small amount of additional circuitry. 
Further, the ATPG-based constraint solver can be directed to solve for the frequently occurring bit-slices first, which can substantially reduce the amount of In some cases, only the bit-slices of a variable are used, and the variable itself is never used in any constraint. The following example illustrates this case. rand bit[7:0] a, b; In the above example, the variables "a" and "b" are not used in the constraints; only their bit-slices are used. In the word-level circuit, the base node that represents this type of variable (e.g., variables like "a" and "b") only fans out to bit-slice nodes, because the base node is not connected to any constraint. Some embodiments identify such variables in the set of constraints, and do not introduce auxiliary variables that correspond to each bit partition. Instead, the system solves the individual bit-slices first, and then performs a concatenation operation on the bit-slices to obtain the value for the variable. This approach can avoid performing the implication operations through the auxiliary circuitry and reduce the number of decision variables. For example, in the above example, the system can assign values to bit-slices a[4:0] and a[5:3]. Next, the system can determine the value of bit-slice b[6:2] based on the constraint "b[6:2]+1==a [4:0]." Finally, the system can use the values of bit-slices a[4:0], a[5:3], and b[6:2] to determine values for variables "a" and "b." FIG. 6 presents a flowchart that illustrates a process for rewriting bit-slice constraints in accordance with some embodiments described in this disclosure. The process can begin by receiving a set of constraints, wherein each constraint is defined over one or more random variables from a set of random variables, wherein the set of constraints includes one or more bit-slice constraints, and wherein each bit-slice constraint includes one or more bit-slices of one or more random variables (block 602). The system can then rewrite the set of constraints to obtain a new set of constraints by defining one or more auxiliary random variables, and rewriting the one or more bit-slice constraints using the one or more auxiliary random variables (block 604). Each auxiliary random variable can represent a non-overlapping bit-slice of a random variable. In some embodiments, the system can identify a high-frequency bit-slice of a random variable that occurs more than a pre-defined number of times in the set of constraints. The system can then define a high-frequency auxiliary variable that corresponds to the high-frequency bit-slice of the random variable. Next, the system can assign random values to the set of random variables based on the new set of constraints (block 606). In some embodiments, the system can begin by assigning random values to the auxiliary variables. Specifically, the system may first assign random values to the auxiliary variables that correspond to the high-frequency bit-slices, and then assign random values to the other auxiliary variables. Next, the system can determine values of the base node variables by concatenating the auxiliary variable values. Outliers [0139] An ATPG-based constraint solver decides values for random variables (which are represented as nodes in the word-level circuit) and backtracks when a conflict is detected. An important part in this process is to pick an appropriate decision variable. In some ways, this is similar to the decisions that need to be taken during a depth-first search. 
During a depth-first search, it is advantageous to select variables that can quickly lead to a satisfiable solution. Further, it is desirable to detect conflicts near the root of the search stack. This is because the search space grows exponentially with the depth of the search stack, and if conflicts are not detected near the root, a large amount of processing may be wasted. For these reasons, it is important to select decision nodes judiciously. Some ATPG-based constraint solvers determine the variable ordering (i.e., the order in which decision nodes are picked) based on a cost function. For example, the higher the cost function value, the earlier the decision node may be picked. In some embodiments, the system determines a cost function based on one or more of the following factors: (1) the type of constraint expressions that are present in the cone of influence of the decision node, (2) the number of constraint expressions that the decision node is involved in, (3) the number of times the decision node was involved in backtracking, and (4) the current ranges of possible values for the decision node. In some embodiments, the value of the cost function increases when (1) the decision node is involved in a greater number of modulo or bi-slice constraints, (2) the decision node is involved in a larger number of constraints, (3) the decision node is involved in a larger number of backtrackings, and/or (4) the current ranges of possible values for the decision node decreases. In some embodiments, the ATPG-based constraint solver can use a cost function that has a number of cost components. Each cost component can represent a factor that increases or decreases the importance of picking a decision node early during the depth-first search. For example, a cost component can be related to the number of constraints that a decision variable is involved in. The cost function can be a weighted sum of the cost components. Once the cost function values for different decision variables have been determined, the system can pick the decision variables based on their cost function values, e.g., by sorting the decision variables in decreasing order based their cost function values. Since the cost function is a weighted sum of the cost components, the cost function value of two variables can be substantially equal to one another even when their cost components have very different values. For example, suppose the cost function C is equal to 0.2*C1 +0.8*C2, where C1 and C2 are cost components. Further, suppose that for random variable "a," the values for C1 and C2 are 10 and 20, respectively, and for random variable "b," the values for C1 and C2 are 86 and 1. The cost function C for both random variables "a" and "b" is equal to 18. In some embodiments, the system may select "a" or "b" in a random order because their cost function values are the same. However, note that the cost component values for random variable "b" are more extreme than those for random variable "a." In some embodiments, the system identifies random variables that have an outlier cost component value, i.e., the value of the cost component is substantially greater or less than the cost component values for other variables. For example, cost component values that are within two standard deviations from the average cost component value across all variables can be deemed as being within a normal range of values; anything outside this range can be considered to be either substantially greater or less than the average cost component values. 
The system can then pick these identified random variables earlier or later in the decision process than when they would have been picked if they were picked based only on their cost function values. This approach of identifying outliers and guiding the constraint solver based on the identified outliers can be applied in general to any constraint solver. For example, a BDD-based constraint solver may apply a sequence of basic BDD logic operations to generate intermediate BDDs, as stepping stones to arrive at the final BDD. Although the final BDD that is computed is the same, different intermediate BDDs may be generated when the individual BDDs are combined in different orders. In some sequences, an intermediate BDD may become impracticably large, i.e., some sequences may cause the intermediate BDD to blow up. In some embodiments, a cost function can be computed for the individual BDDs to determine a sequence for combining the BDDs so as to avoid intermediate BDD blowup. In these embodiments, the components of the cost function can be used to identify outliers in the individual BDDs. Next, the system can adjust the position of these identified BDDs in the sequence to prevent BDD blowup. Further details of how an intermediate BDD can be computed can be found in U.S. Patent Publication No. 20100191679, entitled "Method and Apparatus for Constructing a Canonical Representation," by inventors Ngai Ngai William Hung, Dhiraj Goswami, and Jasvinder Singh, the contents of which are incorporated by reference herein. In some embodiments, a BDD-based constraint solver can compute BDDs corresponding to multiple constraints using a single BDD manager, or divide the constraints into two or more groups, each managed by a separate BDD manager. A benefit of managing multiple BDDs using a single BDD manager is that it allows the multiple BDDs to share common sub-trees. However, a drawback is that the multiple BDDs need to have the same variable ordering, which may cause an intermediate BDD to blowup. In some embodiments, a BDD-based constraint solver can identify outlier BDDs, and ensure that no two outlier BDDs share the same BDD manager. Further details of how BDDs can share a BDD manager can be found in U.S. Patent Publication No. 20100275169, entitled "Adaptive State-to-Symbolic Transformation in a Canonical Representation," by inventors Dhiraj Goswami, Ngai Ngai William Hung, Jasvinder Singh, and Qiang Qiang, the contents of which are incorporated by reference herein. In some embodiments, outliers can be used to determine which constraints to rewrite. In an ATPG-based constraint solver, rewriting a set of constraints can reduce the amount of backtracking, but increase the size of the word-level circuit that is used for assigning random values to the set of variables. Similarly, in a BDD-based constraint solver, rewriting a set of constraints can prevent an intermediate BDD from blowing up, but it may increase the size of the final BDD. In some embodiments, the system can rewrite constraints based on whether the constraints use an outlier random variable. For example, once a constraint solver has determined a set of outlier random variables based on a set of cost function components, the system can identify constraints that use one or more of these outlier random variables. Next, the system can rewrite these constraints in addition to any other constraints that were identified by the constraint solver based on the cost function. FIG. 
7 presents a flowchart that illustrates a process for identifying and using outliers in accordance with some embodiments described in this disclosure. The process can begin by receiving a set of constraints, wherein each constraint is defined over one or more random variables from a set of random variables (block 702). The system can then analyze the set of constraints to identify a set of outlier random variables in the set of random variables, wherein at least one cost component value of each outlier random variable falls outside a pre-defined range of values (block 704). Specifically, the system can determine cost component values for each variable. Next, the system can determine a range of cost component values that are deemed normal. For example, cost component values that are within two standard deviations of the average cost component value across all variables can be considered to be within a normal range of values. Next, the system can identify random variables that have a cost component value that falls outside the normal range of values. Next, the system can identify a subset of constraints, wherein each constraint in the subset of constraints includes at least one outlier random variable (block 706). The system can then rewrite the subset of constraints to obtain a new set of constraints (block 708). Next, the system can assign random values to the set of random variables based on the new set of constraints (block 710). In some embodiments, the outlier random variables can be given higher priority when an ATPG-based constraint solver assigns random values to the random variables. FIG. 8 illustrates a process for assigning higher priority to outlier random variables in accordance with some embodiments described in this disclosure. The process can begin by arranging the set of random variables in a sequence based on the set of outlier random variables (block 802). For example, an ATPG-based constraint solver can arrange the set of random variables in a sequence according to their cost function values, and then adjust the position of the outlier random variables so that the outlier random variables are moved up in the priority order. Next, the system can assign random values to the set of random variables based on the sequence (block 804). FIG. 9 illustrates a computer system in accordance with some embodiments described in this disclosure. Computer system 902 can include processor 904, memory 906, and storage device 908. Computer system 902 can be coupled to display device 914, keyboard 910, and pointing device 912. Storage device 908 can store operating system 916, application 918, and data 920. Data 920 can include Computer system 902 may automatically perform any method that is implicitly or explicitly described in this disclosure. Specifically, during operation, computer system 902 can load application 918 into memory 906. Application 918 can then automatically rewrite a set of constraints to obtain a new set of constraints, and then use the new set of constraints to assign random values to a set of random variables. CONCLUSION [0158] The above description is presented to enable any person skilled in the art to make and use the embodiments. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein are applicable to other embodiments and applications without departing from the spirit and scope of the present disclosure. 
Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. The data structures and code described in this disclosure can be partially or fully stored on a computer-readable storage medium and/or a hardware module and/or hardware apparatus. A computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media, now known or later developed, that are capable of storing code and/or data. Hardware modules or apparatuses described in this disclosure include, but are not limited to, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), dedicated or shared processors, and/or other hardware modules or apparatuses now known or later developed. The methods and processes described in this disclosure can be partially or fully embodied as code and/or data stored in a computer-readable storage medium or device, so that when a computer system reads and executes the code and/or data, the computer system performs the associated methods and processes. The methods and processes can also be partially or fully embodied in hardware modules or apparatuses, so that when the hardware modules or apparatuses are activated, they perform the associated methods and processes. Note that the methods and processes can be embodied using a combination of code, data, and hardware modules or apparatuses. The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims. Patent applications by Dhiraj Goswami, Wilsonville, OR US Patent applications by Guillermo R. Maturana, Berkeley, CA US Patent applications by Jasvinder Singh, San Jose, CA US Patent applications by Ngai Ngai William Hung, San Jose, CA US Patent applications by Qiang Qiang, Santa Clara, CA US Patent applications by SYNOPSYS, INC. Patent applications in class MODELING BY MATHEMATICAL EXPRESSION Patent applications in all subclasses MODELING BY MATHEMATICAL EXPRESSION User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20120136635","timestamp":"2014-04-17T06:13:28Z","content_type":null,"content_length":"108324","record_id":"<urn:uuid:1872f0dc-a2ab-4b41-b195-12ab7117e6d1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of arithmetic logic unit In computing, an arithmetic logic unit (ALU) is a digital circuit that performs arithmetic and logical operations. The ALU is a fundamental building block of the central processing unit (CPU) of a computer, and even the simplest microprocessors contain one for purposes such as maintaining timers. The processors found inside modern CPUs and graphics processing units (GPUs) have inside them very powerful and very complex ALUs; a single component may contain a number of ALUs. Mathematician John von Neumann proposed the ALU concept in 1945, when he wrote a report on the foundations for a new computer called the EDVAC. Early development In 1946, von Neumann worked with his colleagues in designing a computer for the Princeton Institute of Advanced Studies (IAS). The IAS computer became the prototype for many later computers. In the proposal, von Neumann outlined what he believed would be needed in his machine, including an ALU. Von Neumann stated that an ALU is a necessity for a computer because it is guaranteed that a computer will have to compute basic mathematical operations, including addition, subtraction, multiplication, and division. He therefore believed it was "reasonable that [the computer] should contain specialized organs for these operations. Numerical systems An ALU must process numbers using the same format as the rest of the digital circuit. For modern processors, that almost always is the two's complement binary number representation. Early computers used a wide variety of number systems, including one's complement format, and even true decimal systems, with ten tubes per digit. ALUs for each one of these numeric systems had different designs, and that influenced the current preference for two's complement, as this is the representation that makes it easier for the ALUs to calculate additions and subtractions. Practical overview Most of a processor's operations are performed by one or more ALUs. An ALU loads data from input registers, executes, and stores the result into an output register. A Control Unit tells the ALU what operation to perform on the data. Other mechanisms move data between these registers and memory. Simple operations Most ALUs can perform the following operations: • Integer arithmetic operations (addition, subtraction, and sometimes multiplication and division, though this is more expensive) • Bitwise logic operations (AND, NOT, OR, XOR) • Bit-shifting operations (shifting or rotating a word by a specified number of bits to the left or right, with or without sign extension). Shifts can be interpreted as multiplications by 2 and divisions by 2. Complex operations An engineer can design an ALU to calculate any operation, however complicated it is; the problem is that the more complex the operation, the more expensive the ALU is, the more space it uses in the processor, and the more power it dissipates, etc. Therefore, engineers always calculate a compromise, to provide for the processor (or other circuits) an ALU powerful enough to make the processor fast, but yet not so complex as to become prohibitive. Imagine that you need to calculate the square root of a number; the digital engineer will examine the following options to implement this operation: 1. Design an extraordinarily complex ALU that calculates the square root of any number in a single step. This is called calculation in a single clock. 2. Design a very complex ALU that calculates the square root of any number in several steps. 
But--and here's the trick--the intermediate results go through a series of circuits that are arranged in a line, like a factory production line. That makes the ALU capable of accepting new numbers to calculate even before finished calculating the previous ones. That makes the ALU able to produce numbers as fast as a single-clock ALU, although the results start to flow out of the ALU only after an initial delay. This is called calculation pipeline. 3. Design a complex ALU that calculates the square root through several steps. This is called interactive calculation, and usually relies on control from a complex control unit with built-in 4. Design a simple ALU in the processor, and sell a separate specialized and costly processor that the customer can install just beside this one, and implements one of the options above. This is called the co-processor. 5. Tell the programmers that there is no co-processor and there is no emulation, so they will have to write their own algorithms to calculate square roots by software. This is performed by software 6. Emulate the existence of the co-processor, that is, whenever a program attempts to perform the square root calculation, make the processor check if there is a co-processor present and use it if there is one; if there isn't one, interrupt the processing of the program and invoke the operating system to perform the square root calculation through some software algorithm. This is called software emulation. The options above go from the fastest and most expensive one to the slowest and least expensive one. Therefore, while even the simplest computer can calculate the most complicated formula, the simplest computers will usually take a long time doing that because of the several steps for calculating the formula. Powerful processors like the Intel Core and AMD64 implement option #1 for several simple operations, #2 for the most common complex operations and #3 for the extremely complex operations. That is possible by the ability of building very complex ALUs in these processors. Inputs and outputs The inputs to the ALU are the data to be operated on (called ) and a code from the control unit indicating which operation to perform. Its output is the result of the computation. In many designs the ALU also takes or generates as inputs or outputs a set of condition codes from or to a status register. These codes are used to indicate cases such as carry-in or carry-out, overflow, divide-by-zero, etc. ALUs vs. FPUs Floating Point Unit also performs arithmetic operations between two values, but they do so for numbers in floating point representation, which is much more complicated than the two's complement representation used in a typical ALU. In order to do these calculations, a has several complex circuits built-in, including some internal ALUs. Usually engineers call an ALU the circuit that performs arithmetic operations in integer formats (like two's complement and BCD), while the circuits that calculate on more complex formats like floating point, complex numbers, etc. usually receive a more illustrious name. See also External links
{"url":"http://www.reference.com/browse/arithmetic+logic+unit","timestamp":"2014-04-18T11:18:47Z","content_type":null,"content_length":"91537","record_id":"<urn:uuid:8eb7dc90-d599-44cf-858b-349f0c45a1c0>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Concept of Numbers Unformatted Document Excerpt Boise State MATH 124 Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support. Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support. concept 1 The of numbers. In this chapter we will explore the early approaches to counting, arithmetic and the understanding of numbers. This study will lead us from the concrete to the abstract almost from the very beginning. We will also see how simple problems about numbers bring us very rapidly to analyzing really big numbers. In section 7 we will look at a modern application of large numbers to cryptography (public key codes). In this chapter we will only be dealing with whole numbers and fractions. In the next chapter we will study geometry and this will lead us to a search for more general types of numbers. 1.1 Representing numbers and basic arithmetic. Primitive methods of counting involve using a symbol such as | and counting by hooking together as many copies of the symbol as there are objects to be counted. Thus two objects would correspond to ||, three to |||, four to ||||, etc. In prehistory, this was achieved by scratches on a bone (a wolf bone approximately 30,000 years old with 55 deep scratches was excavated in Czechoslovakia in 1937) or possibly piles of stones. Thus if we wish to record how many dogs we have we would, say, mark a bone with lines, one for each dog. That is 5 dogs would correspond to |||||. Notice, that we are counting by assigning to each dog an abstract symbol for one dog. Obviously, the same method could have been used for cats or cows, etc. Thus the mark | has no unit attached. One can say |||||| dogs (dogs being the unit). Notice that you need exactly the same number of symbols as there are objects that you are counting. Although this system seems very simple, it contains the abstraction of unitless symbols for concrete objects. It uses the basic method of set theory to tell if two sets have the same number of elements. That is, if A and B are sets (collections of objects called elements) then we say that they have the same number of elements (or the same cardinality) if there is a way of assigning to each element of the set A a unique element of the set B and every element of the set B is covered by this assignment. Primitive counting is done by using sets whose elements are copies of | to be numbers. Although each of the symbols | is indistinguishable from any other they must be considered dierent. This primitive method of counting and attaching symbols to numbers basically involves identifying sets with the same cardinality with one special set with that cardinality. In modern mathematics, one adds one level of abstraction and says that the set of all sets with the same cardinality constitutes one cardinal number. There is no limit to the size of a set in this formalism. We will come back to this point later. Early methods of representing numbers more concisely than what we have called the primitive system are similar to Roman numerals which are still used today for decorative purposes. In this system, one, two, three are represented by I, II, III. 
For ve there is a new symbol V (no doubt representing one hand) and four is IV (to be considered one before V and probably representing a hand with the thumb covering the palm). Six, seven and eight are given as VI, VII, 1 Figure 1: VIII. Then there is a separate symbol for ten, X (two hands) and nine is IX. This pattern continues, so XII is twelve, XV is f ifteen, XIV is fourteen, XIX is nineteen. Twenty and thirty are XX, XXX. Fifty is L. Forty is XL. One hundred is C, f ive hundred is D and a thousand is M. Thus 1998 is MCMXCVIII. This system is adequate for counting (although cumbersome). It is, however, terrible for arithmetic. Here we note that one has a dramatic improvement in the number of symbols necessary to describe the number of elements in a set. Thus one symbol M corresponds to the cardinal with 1000 of the symbols | in it in the most primitive system. The ancient Egyptians (beginning about 3500 BC) used a similar system except that they had no intermediate symbols for f ive, f ifty or f ive hundred. But they had symbols for large numbers such as ten thousand, one hundred thousand, one million and ten million. The below is taken from the Rhind Papyrus (about 1600 BC). Our number system derives from the Arabic positional system which had its precursor in the Babylonian system (beginning about 3000 BC). Before we describe the Babylonian system it is useful to recall our method of writing numbers. We use symbols 1,2,3,4,5,6,7,8,9 for one element, two elements,...,nine elements. we then write 10 for ten elements, 11 for eleven, ..., 19 for nineteen. This means that we count by ones, then by tens, then by hundreds, then by thousands, etc. This way we can write an arbitrarily large number using ten symbols (we also need 0 which will be discussed later). Our system has base ten. That, is we count to nine then the next is ten which is one ten, 10, then we count by ones from 11 to 19 and the next number is two tens, 20. When we get to 9 tens and 9 ones (99) the next number is 10 tens which we write as 100 (hundred). 10 hundreds is then 1000 etc. Thus by hooking together 10 symbols 2 we can describe all numbers. One could do the same thing using a base of any positive integer. For example, if we worked with base 2 then we would count 1, then 10 for two, then 11 for three (one two and one one), then 100 (2 twos), 101, 110, 111, 1000 (two (two twos)). Thus we would only need 2 symbols in juxtaposition to describe all numbers. For example, 1024 would need 1024 of the units, | ,in the most primitive system, it is 4 symbols long in ours, and base 2 it is 10000000000. Still a savings of 1013 symbols. The Roman method would be MXXIV so in this case slightly worse than ours. However, if we try 3333 in Roman notation we have MMMCCCXXXIII. How long is the expression for 3333 in base 2? The Babylonians used base 60 which is called sexagesimal. We should note that for some measurements we still use this system: 60 seconds is a minute, 60 minutes is an hour. Their system is preserved in clay tablets in various excavations. Their method of writing (cuneiform) involved making indentations in soft clay tablets by a wedge shaped stylus. They used two basic symbols, one equivalent with | for one. and one for 10 which we will represent as . Thus six is ||||||. Normally written in the form: ||| ||| and thirty seven is ||| ||| . | But 61 is | |. 3661 is | | |. Thus, except that they used only symbols for 1 and 10 and had to juxtapose them to get to 59, they used a system very similar to ours. 
They did not have a symbol for 0. We will see that this is a concept that would have to wait more than 3000 years. So when they saw |, they would have to deduce from the context whether it represented 1, 60, 3600, etc. For example if I said that a car cost ||| then you would be pretty sure (in 2003) that I meant 10,800, not 180 or 3. They later (200 BC) had a symbol that they could use for a place marker in all but the last digit (but still no 0). // Thus they could write | // | and mean 3601. There is still an ambiguity in the symbol | which can still mean 1, 61, 3601, etc. 3 Exercises. 1. Write out the number 1335 in Egyptian notation, binary, sexagesimal and in Roman numerals. 2. For computers one kilobit (1K) is actually 1024. Why is that? 3. The early computer programmers used base 16 they therefore needed 16 symbols which they wrote as 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F. For example,AF = 10 16 + 15 = 175. What number is F F F F ? Write it in binary. Why was it important to 16 bit computers? F F F F F + 1 is called a megabit. Why is that? 4. In writing numbers in the Egyptian system what is the maximum repetition necessary for each symbol? 1.2 1.2.1 Arithmetic. Addition. We return to the most primitive method of counting. If you have ||| sheep and you have purchased |||| sheep, then you have ||||||| sheep. That is, to add ||| and |||| we need only hook |||| onto |||. For cardinal numbers we have thus described a method of addition: If A corresponds (i.e. is an element of) to the cardinal a and if B corresponds to the cardinal b, and if no element of A is an element of B then a + b is the cardinal number that contains A B (with A B the set that consists of the elements of A combined with those of B).This can be made rigorous (independent of the choice of A and B) we will look into this point later in the book. Thus the abstraction of primitive addition is set theoretic union of disjoint (no element in common) sets. In the Roman system there is one more degree of abstraction since for example |||| is represented as IV and ||||| is represented as V so IV + V = ||||||||| = IX. Obviously, one must remember much more if one uses the more abstract method of the Romans than the direct primitive method. In our system for the same addition we are looking at 4 + 5 and we must remember that this is 9. Thus the situation is analogous to that of the Romans. However, if we wish to add XXXV to XVI, then in Roman numerals we have LI. In our system we have 35 + 16. We add 5 + 6 and get 11 (memorization). We now know that the number has a 1 in the ones position we carry the other 1 to see that for the tens position. We have 1 + 3 + 1 = 5. The sum is therefore 51. Thus we need only remember how to add pairs of numbers up to 9 in our system and all other additions are done following a prescribed method. The Roman system clearly involves much more memorization. We next look at the Babylonian system. For this we will use a method of expressing numbers to base 60 that is due to O. Neugebauer(a leader in the history of mathematics). We write 23,14,20 for 20 plus 14 sixties plus 23 3600. Thus in the Babylonian base 60 system we must memorize all additions of numbers up to 59. If we wish to add 21,13 and 39,48 then we add 48 + 13 and get 1,1 (this is memorized or in an addition table) 21+39 and get 1 (remembering the context). Thus the full sum is 1,1,1. Here we must remember a very large addition table. 
However, we have grown up thinking in terms of base 10 and we 4 do the additions of pairs of numbers below 59 in our method and then transcribe them to our version of the Babylonian notation. Exercises. 1. Do the addition 1, 2 + 32, 21, 3 + 43, 38, 1 in Neugebauers notation. 2. How do you think that an Egyptian would add together 3076 and 9854? 1.2.2 Multiplication. Multiplication is a more sophisticated operation than addition. There isnt any way to know when and how the notion arose. However, the Egyptians and the Babylonians knew how to multiply (however as we shall see the Egyptian method is not exactly what one would guess). We understand multiplication as repeated addition. That is, if we wish to multiply a times b, a b, then we add b to itself a times. That is 3 5 is 5 + 5 + 5 = 15. If we attempt to multiply a times b in the primitive system we must actually go through the full juxtaposition of b with itself a times (or vice-versa). In a system such as the Roman system we must memorize a great deal. For example XVLI = DCCLXV. For us the multiplication is done using a system: 51 15 255 . 510 765 We usually leave out the 0 in the 510 and just shift 51 into the position it would have if there were a 0. We see that we must memorize multiplication of pairs of numbers up to 9. The Babylonian system is essentially the same. However, one must memorize multiplication of pairs of numbers up to 59. This is clearly a great deal to remember and there are tablets that have been excavated giving this multiplication table. The Egyptian system is different. They used the method of duplication. For example if we wish to multiply 51 by 15 then one would proceed as follows: 51 51 + 51 = 102 102 + 102 = 204 204 + 204 = 408 1 2 4 8 Now 1 + 2 + 4 + 8 = 15 so the product is 51 + 102 + 204 + 408 = 765. Notice that they are actually expanding 15 in base 2 as 1111. If the problem had been multiply 51 by 11 then the answer would be 51 + 102 + 408 = 561 (in base 2, 11 is 1011). So their multiplication system is a combination of doubling and addition. We note that this method is used in most computers. Since, in base 2, multiplication by 2 is just putting a 0 at the end of the number. In base 2, 5 51 = 110011. Thus the same operations are 110011 1100110 11001100 110011000 1 10 100 1000 The basic dierence is that we must remember many carries in binary. Thus it is better to proceed as follows. 110011 1100110 10011001 11001100 110011000 1001100100 10011001 1001100100 1011111101 Addition is actually an operation that involves adding pairs of numbers. In our system we rarely have to carry numbers to more than one column to the left (that is when a column adds up to more than 99). In binary it is easy if we add 4 numbers with 1 in the same digit we will have a double carry. Exercise. 1. Multiply 235 by 45 using the Egyptian method. Also do it in binary. 1.2.3 Subtraction. If we wish to subtract ||| from ||||| then the obvious thing to do is to remove the lines one by one from each of these primitive numbers. ||| ||||| || |||| | ||| ||. With this notion we have ||||| ||| = ||. If we do this procedure to subtract a from b and run out of tokens in b then we will say that the subtraction is not possible. This is because we have no notion of negative numbers. We will see that the concept of 0 and negative numbers came relatively late in the history of mathematics. In any event, we will write a < b if subtraction of a from b is possible. If a < b then we say that a is strictly less than b. 
In our notation subtraction is an inverse process to addition. This is because our number notation has a higher degree of abstraction than the primitive one. Thus we memorize such subtractions as 3 − 2 = 1. If we are calculating 23 − 12 then we subtract 2 from 3 and 1 from 2 to get 11. For 23 − 15 we do an initial borrow and calculate 13 − 5 and 1 − 1. So the answer is 8. Obviously, we can only subtract a smaller number from a larger one if we expect to get a number in the sense we have been studying. Both the Egyptians and the Babylonians used a similar system. For the Egyptians it would be somewhat more complicated, since every new power of 10 entailed a new symbol.

1.2.4 Division and fractions.
For us division is the inverse operation to multiplication in much the same way as subtraction is the inverse operation to addition. Thus a/b is the number such that if we multiply it by b we have a. Notice that if b is 0 this is meaningless, and that even if a and b are positive integers then a/b is not always an integer. Integer division can be implemented as repeated subtraction; thus in the primitive notation ||||||||| − ||| = ||||||, |||||| − ||| = |||, thus |||||||||/||| = |||. However, the Egyptians and Babylonians understood how to handle divisions that do not yield integers.

The ancient Egyptians created symbols for the fractions 1/n (i.e. reciprocals). They also had a symbol for 2/3. However, if they wished to express, say, 7/5 then they would write it as a sum of reciprocals, say 1 + 1/3 + 1/15. Also they limited their expressions to distinct reciprocals (or 2/3). Thus 1 + 1/5 + 1/5 was not a valid expression. Note that such an expression is not unique. For example, 7/5 = 1 + 1/4 + 1/10 + 1/20. Notice that one allows any number of reciprocals in the expression. With a method such as this for handling fractions, there was a necessity for tables of fractions. One also had to be quite ingenious to handle fractions. An ancient Egyptian problem asks: If we have seven loaves of bread to distribute among 10 soldiers, how would we do it? We would instantly say that each soldier should get 7/10 of a loaf. However, this makes no sense to the ancient Egyptians. Their answer was (answers were supplied with the problems) 2/3 + 1/30.

The mathematician Leonardo of Pisa (Fibonacci, 1175-1250 A.D.) devised an ingenious method of expressing fractions in the Egyptian form. In order to see that the method works in general, several basic properties of numbers will be used here; they will be considered in more detail later in the book. He starts by observing that we need only consider a/b with 1 < a < b. We first observe that b/a > 1, so there exists a positive whole number n such that n − 1 < b/a < n; that is, 1/n < a/b < 1/(n−1). Thus a/b = 1/n + (an − b)/(nb). We observe that an − b > 0 and a − (an − b) = b − a(n − 1) = a(b/a − (n − 1)) > 0. Thus a > an − b > 0. Set a1 = an − b, b1 = bn. Then 0 < a1 < a and 1 ≤ a1 < b1. If a1 = 1 then we are done. Otherwise, we repeat the process with a1/b1; call the n just used n1. If a1 = 1 then we see that b1 = n1·b > n1, so a/b = 1/n1 + 1/(n1·b) is a desired expression. Assume that a1 > 1. Do the same for a1/b1. This is Fibonacci's method. A full proof that this always works didn't get published until the nineteenth century and is attributed to J. J. Sylvester. What has not been shown is that if 1/n < a1/b1 < 1/(n−1) then n > n1. We do this by showing that a·n1 − b < a < b, so that a1/b1 < 1/n1. To see this we observe that a1/b1 = (a·n1 − b)/(n1·b) < a/(n1·b) < b/(n1·b) = 1/n1. Since 1/n < a1/b1 < 1/n1, we conclude that n > n1, so the reciprocals produced are distinct and have strictly increasing denominators.
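The greedy step of Fibonacci's method is easy to experiment with. The following is a minimal sketch (Python is my assumption, not something the text uses; it relies on Python's fractions module, and the function name is hypothetical):

    from fractions import Fraction
    from math import ceil

    def egyptian_fraction(a, b):
        """Greedy (Fibonacci/Sylvester) expansion of a/b with 0 < a < b as a sum of
        distinct unit fractions; returns the list of denominators n1 < n2 < ..."""
        x = Fraction(a, b)
        denominators = []
        while x > 0:
            n = ceil(1 / x)            # smallest n with 1/n <= x
            denominators.append(n)
            x -= Fraction(1, n)        # the new numerator is strictly smaller
        return denominators

    print(egyptian_fraction(7, 10))    # [2, 5], i.e. 7/10 = 1/2 + 1/5
    print(egyptian_fraction(4, 17))    # the expansion asked for in Exercise 2 of 1.2.5

The loop must terminate because, as argued above, the numerator of the remainder strictly decreases at every step.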
In this argument we used an assertion about not necessarily whole numbers: any such number lies between two consecutive integers. Consider, for example, 7/10. Then n1 = 2 and 7/10 − 1/2 = 1/5. Thus we get as an answer to the Egyptian problem 1/2 + 1/5, which seems preferable to the answer given in the original papyrus. Fibonacci was one of the leading European mathematicians of the Middle Ages. He was instrumental in introducing the Arabic number system (the one we use) to the West. However, he preferred the Egyptian method of fractions to our decimal notation (below). Clearly one must be much cleverer to deal with Egyptian fractions than with decimals. Also, as we will see, strange and impractical problems have propelled mathematics to major new theories (some of which are even practical).

The ancient Babylonians used a method that was analogous to our decimal notation. In our decimal method we would express a fraction such as 1/8 as follows. We first try to divide 8 into 1; this fails, so we multiply by 10. We can divide 8 into 10 once with remainder 2. We multiply by 10 and divide 8 into 20, getting 2 with remainder 4. We now multiply 4 by 10 and get 40; divide by 8 and get 5. The numbers for the three divisions are 1, 2, 5. We write 1/8 = .125.

We can express their method as follows, using Neugebauer's notation. The fraction 7/5 = 1 + 24/60. Our version of their notation would be 1;24. In our decimal notation this is 1.4. We could use exactly the same process (though it is harder for us to do the intermediate steps in our heads). We must divide 5 into 2, so we multiply by 60 and do the division; that is, divide 120 by 5. We get 24 and no remainder. If we were to write 1/8 we could work as follows: 1 · 60 divided by 8 is 7 with remainder 4; 4 · 60 = 240, which divided by 8 is 30 with no remainder. Thus we have ;7,30.

There were also bad fractions. In our decimal notation 1/3 = 0.33333...; that is, we must write the symbol 3 forever. In the Babylonian form 1/7 is the first bad fraction and it is given by ;8,34,17,8,34,17,..., repeating 8,34,17 forever. Suddenly the Egyptian way doesn't seem to be so silly!

We also think of fractions as expressions p/q with p and q positive integers. p/q = r/s means ps = rq. Addition is given by p/q + r/s = (ps + qr)/(qs). Multiplication is given by (p/q)(r/s) = (pr)/(qs). We note that 1/2 = 2/4 = 3/6 = ...; that is, we identify all of the symbols n/2n with 1/2. This is similar to our definition of cardinal number. Usually, to rid ourselves of this ambiguity, we insist that p and q are in lowest terms; that is, they have no common factor other than 1.

1.2.5 Exercises.
1. Why do you think that the Egyptians preferred 2/3 + 1/30 to 1/2 + 1/5 for 7/10?
2. Use the Fibonacci method to write 4/17 as an Egyptian fraction.
3. Make a table in Egyptian fractions of n/10 for n equal to 1, 2, 3, 4, 5, 6, 7, 8, 9.
4. Among {1/2, 1/3, 1/4, ..., 1/19} which are the bad fractions in base 10? What do they have in common? Can you guess a property of n that guarantees that 1/n is a good fraction to base 10? How about base 60?
5. The modern fame of Fibonacci is the outgrowth of a problem that he proposed: Suppose that it takes a rabbit exactly one month from birth before it is sexually mature and that a sexually mature pair (male and female) of rabbits will give birth to two rabbits (male and a female) every month.
If we start with newly born male and female rabbits, how many rabbits will there be at the end of one year? What is the answer to his question?
6. If b is a positive integer then we can represent any integer to base b in the form a0 + a1·b + a2·b^2 + ... + ak·b^k with 0 ≤ ai < b. Thus if b = 10 then 231 means 1 + 3·10 + 2·10^2. If b = 60 then 231 means 1 + 3·60 + 2·60^2. Show that if n < b then the square of 111...1 (n ones) is 123...n...321, that is, the digits increase to n then decrease to 1. What happens if n > b?

Three examples of early Algebra.
At this point we have looked at counting methods and developed the basic operations of arithmetic. We have studied one simple Egyptian exercise in arithmetic and given a method of Fibonacci to express a fraction as an Egyptian fraction. The first starts with a practical problem, that is, applied mathematics. Whereas, by the time Fibonacci devised his method, there was no reason to use Egyptian fractions: it is what we now call pure mathematics. The method is clever and has an underlying simplicity that is much more pleasant than using trial and error. Obviously, ancient peoples had many uses for their arithmetic involving counting, commerce, taxation, measurement, construction, etc. But even in the early cultures there were mathematical puzzles and techniques developed that seem to have no practical use.

An Egyptian style problem: A quantity added to two thirds of it is 10. What is the quantity? We would say: set the quantity equal to x (we will see that this small but critical step would not be discovered for thousands of years). Then we have x + (2/3)x = 10. Hence (5/3)x = 10. So x = 6. Since the Egyptians had no notion of how to deal with unknown quantities, they would do something like this: if the quantity were 3 then the sum of the quantity plus two thirds of the quantity is 5. Since the sum we desire is 10, the answer is 2 times 3, or 6. In other words, they would use a convenient value for the quantity, see what the rule gave for that value, and then re-scale to get the answer.

We will now discuss a Babylonian style problem (this involves basic geometry which we will assume now and discuss in context later). Before we write it out we should point out that multiplication as repeated addition was probably not an important motivation for doing multiplication. More likely they multiplied two numbers because the outcome is the area of the rectangle whose sides were the indicated number of whatever units they were using.

I add the area of a square to two thirds of its side and I have ;35. What is the side of the square? Solution: Take 1, multiply by 2/3, take half of this and we have ;20. You multiply this by ;20 and the result is ;6,40. You add to this ;35 to have ;41,40. This is the area of the square of side ;50. You subtract ;20 from ;50 and you have ;30, the side of the square.

In modern notation, if we set the side equal to x then we are solving
x^2 + (2/3)x − 7/12 = 0.
In our notation what they have done is the following: take 2/3; divide by 2 to get 1/3; the square of 1/3 is 1/9; add 35/60 = 7/12 to get 25/36; the square root of this is 5/6; subtract 1/3 and we have 1/2. Thus if a = 2/3 and b = 7/12 then the answer is
−a/2 + √((a/2)^2 + b).
The quadratic formula tells us that if we are solving x^2 + ax − b = 0 then
x = (−a ± √(a^2 + 4b))/2.
If a > 0 and b > 0 then the positive solution is exactly the Babylonian answer. Their method of solving such problems put a premium on the ability to calculate expressions of the form √(a^2 + b). They had an approximate method of doing such calculations which corresponds to what we will see is the second iteration of Newton's method applied to this simple case. They use the approximation √(a^2 + b) ≈ a + b/(2a). Notice that if b/(2a) is small then this is a good approximation. Thus the Babylonians were aware of general methods to solve quadratic equations. They, however, could only express their method in words. What they wrote out is, except for the order (and the absorption of the 1/2), exactly what we would write.

It is hard to imagine how either of these methods could be used in practical applications. However, one of the most interesting exercises in pure mathematics can be found on a tablet in the Plimpton collection at Columbia (Plimpton 322). This tablet is a tabulation of 15 triples of numbers a, b, c with the property that a^2 + b^2 = c^2. The simplest example that we know of this is 3^2 + 4^2 = 5^2. This triple appears on the tablet as number 11 and in the form 60^2 + 45^2 = 75^2. The tablet is thus using some strange rule for generating these numbers (usually called Pythagorean triples). We will discuss the Pythagorean theorem later. Here we will study the tablet as a collection of relationships between numbers.

The table is arranged as follows: there are 15 extant rows and 4 readable columns. The first column contains a fraction and the fractions are decreasing. The second and third contain integers and the last is just the numbers 1,...,15 in order. If we label an element of the second column a and the element of the third column in the same row c, then c^2 − a^2 = b^2 with b a positive integer, and the element of the first column in the same row is c^2/b^2. Also the first column contains only regular sexagesimal rational numbers. It seems clear that the Babylonians were aware of a method of generating Pythagorean triples.

[Figure: the hyperbola y^2 − x^2 = 1 in the (x, y)-plane.]

In our modern notation we know how to generate all Pythagorean triples a, b, c (a^2 + b^2 = c^2) with a, b, c having no common factor. Indeed, consider y = c/b, x = a/b; then y^2 − x^2 = 1. We are thus looking for rational points on a hyperbola (see the figure above). Notice that 1 + x^2 = y^2 gives an element of the first column of the table. Thus they seem to have picked rational points on the hyperbola in increasing order. How do you locate such a point? We note that y^2 − x^2 = (y − x)(y + x) (we will discuss what this might have meant to the Babylonians soon). We write y + x = m/n and y − x = n/m; then y = (1/2)(m/n + n/m) and x = (1/2)(m/n − n/m). Thus y = (m^2 + n^2)/(2mn) and x = (m^2 − n^2)/(2mn). This suggests that we take a = m^2 − n^2, b = 2mn and c = m^2 + n^2. If m and n are positive integers then you can check easily that this assignment generates a Pythagorean triple. There is an algebraic proof of Fibonacci that this method generates all Pythagorean triples that have no common factor. André Weil (1906-1998) has pointed out that there is a geometric argument in Euclid Book X, Lemmas 1, 2 in preparation for Proposition 29 that proves that this method gives all such triples that are relatively prime (in fact a bit more than this). We will come back to this later.

Consider the Pythagorean triple 3, 4, 5. We will find numbers m, n as above. The method above says take y = 5/4 and x = 3/4. Then y + x = 2 and y − x = 1/2. This suggests taking m = 2 and n = 1. We can check that this works: m^2 − n^2 = 3, 2mn = 4 and m^2 + n^2 = 5.
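The parametrization just described is easy to put to work. A minimal sketch (Python assumed; the function name is mine, not anything from the text):

    def pythagorean_triple(m, n):
        """The (m, n) parametrization described above: for integers m > n > 0
        this always yields a triple with a^2 + b^2 = c^2."""
        a = m * m - n * n
        b = 2 * m * n
        c = m * m + n * n
        return a, b, c

    for m in range(2, 6):
        for n in range(1, m):
            a, b, c = pythagorean_triple(m, n)
            assert a * a + b * b == c * c
            print((m, n), (a, b, c))
    # (2, 1) gives (3, 4, 5), the triple checked in the text.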
We will now discuss a probable meaning for the formula y^2 − x^2 = (y − x)(y + x). The formula y^2 − x^2 is geometrically the area of the figure gotten by removing a square of side x from one of side y. If you take the smaller square out of the lower right corner, then in the lower left corner one has a rectangle of side x and base y − x. If we cut this rectangle off and rotate it 90 degrees then we can attach it to what is left of the big square and get a rectangle of side y − x and base y + x.

The two problems above are similar to the word problems of high school algebra and were probably used in the same way as we use them now, that is, to hone the skills of a student learning basic algebra. Plimpton 322 is another matter. It contains number theoretic relationships at a sophisticated level. Imagine a line of reasoning similar to the one in the previous paragraph without any algebraic notation and without even the notion of a fraction.

1.2.6 Exercises.
1. Problem 26 on the Rhind papyrus is: A quantity whose fourth part is added to it becomes 15. Use the Egyptian method to solve the problem.
2. Use the Babylonian approximation to calculate √2. (Suggestion: Start with a = 4/3 so that b = 2/9. Can you improve on this?)
3. A problem on a Babylonian tablet says: I have added 7 times the side of my square to 11 times the area and have 6;15. Find the side. Use the Babylonian method to solve this problem.
4. Find m, n so that a = m^2 − n^2, b = 2mn and c = m^2 + n^2 for the Pythagorean triples 119, 120, 169 and 5, 12, 13.

1.3 Some number theory taken from Euclid.
We now jump about 1500 years to about 300 BC and the time of the school of Euclid in Alexandria. We will examine parts of Books VII, VIII, IX of his Elements that deal with numbers. We will have more to say about the other books at appropriate places in this work. We will use the translation of Sir Thomas Heath for our discussion.

1.3.1 Definitions
Euclid begins Book VII with 22 definitions that set up basic rules for what we have been calling the primitive number system. We will see in the next chapter that Euclid did not think of numbers in this sense. He rather thought of numbers as intervals. If we have two intervals AB and CD, and if we lay out AB a certain number of times and this covers CD exactly, then AB is said to measure CD.

1. An unit is that by virtue of which each of the things that exist is called one.
This doesn't make too much sense, but it is basically establishing that there is a unit for measurement. We have been denoting this by |.
2. A number is a multitude composed of units.
Thus ||| is a number as before. However, Euclid thinks of it as an interval that is exactly covered by three unit intervals. Be warned that the unit is not considered to be a number.
3. A number is a part of a number, the less of the greater, when it measures the greater;
Thus the greater, ||||||, is measured by the less |||.
4. but parts when it does not measure it.
||||| is not measured by |||.
5. The greater number is a multiple of the less when it is measured by the less.
Notice that the definitions are beginning to be more accessible. Here we measure |||||| by two of the |||; thus |||||| is ||| multiplied by ||.
6. An even number is that which is divisible into two equal parts.
7. An odd number is that which is not divisible into two equal parts, or that which differs by a unit from an even number.
8., 9., 10. talk about multiplication of odd and even numbers (e.g. an odd by an even is an even).
11. A prime number is that which is measured by a unit alone.
Thus |||||| is measured by |, ||, |||, so it is not prime.
||||| is only measured by |. 12. Numbers prime to one another are those which are measured by an unit as a common measure. |||| is measured by |, || ||||||||| is measured by |, ||| thus the only common measure is |. Thus |||| and ||||||||| are prime to one another. 13., 14. are about numbers that are not prime (to each other). A number that is not prime is composite. 14 In 15. he describes multiplication as we did (repeated addition). 16. And when two numbers having multiplied one another make some number, the number so produced is called plane, and its sides are the numbers which have been multiplied. Here Euclid seems to want to think of the operation of multiplication in geometric terms: an area. In 17 the product of three numbers is looked upon as a solid. 18,.19. def ine a square and a cube as we do. We will study these concepts in the next chapter. 20. Numbers are proportional when the f irst is the same multiple, or the same part, or the same parts, of the second that the third is to the fourth. ||| |||||| and |||| |||||||| are proportional. This is a relationship between two pairs of numbers. It is essentially our way of looking at rational numbers. In 21. there is a discussion of similar plane and solid numbers. 22. A perfect number is one which is equal to its parts. The parts of |||||| are |, ||, ||| and | + || + ||| = ||||||. So it is perfect. |||| is not. To us this is not a very basic concept. Perfect numbers are intriguing (28 is one,what is the next one?) but it is hard to see any practical reason for their study. We shall see that Euclid gave a method for generating perfect numbers. 1.3.2 Some Propositions Having disposed of the def initions, Books VII,VIII,IX consist of a series of Propositions. Number one is: B_____________F __A D________G__C __E Two unequal numbers being set out, and the less being continually subtracted in turn from the greater, if the number which is left never measures the one before it until an unit is left, the original numbers will be prime to one another. Let us try this out. Take 27 for the larger and 8 for the smaller. Subtract 8 from 27 and get 19, subtract 8 and get 11, subtract 8 and get 3, subtract 3 from 8 and get 5 subtract 3 from 5 and get 2 subtract 2 from 3 and get 1. Thus the numbers are relatively prime. We will now describe the Euclidian proof. The numbers are denoted AB and CD and Euclid draws them as vertical intervals. He assume on the contrary that AB and CD are not prime to each other. Then there would be a number E that measures both of them. We now come to the crux of the matter: Let 15 CD measuring BF leaving F A less than itself. (Here it is understood that BF + F A = BA and that BF is evenly divisible by CD) This assertion is now called the Euclidean algorithm. It says that if m, n are whole numbers with m < n then we can write n = dm or n = dm + q with q a whole number strictly less than m. For some reason he feels that this assertion needs no proof. To Euclid this is evident. If n is measured by m it is obvious. If it is not then subtract m from n and get q1 if q1 < m we are done. q1 cannot be measured by m hence q1 6= m and so q1 > m. We now subtract m from q1 and get q2 . If q2 < m then we are done otherwise as before q2 > m. Subtract m again. This process must eventually lead to a subtractend less than m since if not then after n steps we would be able to subtract nm from n so mn < n. But this is impossible since m > 1 so mn = n + n + ... + n (m times). Hence we are asserting n > mn n + n. 
Since it is obvious that n + n > n we see that the process must give the desired conclusion after less than n steps. We now continue the proof. Let AF measuring DG leaving GC less than itself. E measures CD hence BF and E measures AB so E measures F A. Similarly, E measures GC. Since the procedure described in the proposition now applies to AF and GC, we eventually see that E will eventually measure a unit. Since E has been assumed to be a number (that is made up of more that one unit) we see that this is impossible. In Euclid this unbounded procedure (f inite for each example) is only done three times. Throughout his arguments he does the case of three steps to represent the outcome of many steps. The second proposition is an algorithm for calculating the greatest common divisor (greatest common measure in to Euclid). Given two numbers not prime to one another, to f ind the greatest common measure. Given AB and CD not prime to one another then and CD the smaller then if CD measures AB then it is clear that CD is the greatest common measure. If not consider AB CD, CD. There are now two possibilities. The f irst is that AB CD is smaller than CD. In this case if AB CD measures CD then it must measure AB and so is the greatest common measure. In the second case CD is the smaller and if CD measures AB CD then it must measure AB and so it is the greatest common measure. If not the previous proposition implies that if we continually subtract the smaller from the larger then we will eventually come to the situation when the smaller measures the larger. We thus have the following procedure: we continually subtract the smaller from the larger stopping when the smaller measures the larger. Proposition 1 implies that the procedure has the desired end. Here is an example of proposition 2. Consider 51 and 21. Then 51 21 = 30 (30,21) , 30 21 = 9(21,9), 21 9 = 12 (12,9), 12 9 = 3 (9,3) so the greatest common divisor is 3. Why is it so important to understand the greatest common divisor? One important reason is that it is the basis of understanding fractions or rational numbers. Suppose that we are looking at the fraction 21 . Then we have seen 51 16 that the greatest common divisor of 21 and 51 is 3. Dividing both 21 and 51 by 7 3 we see that the fraction is the same as 17 . This expression is in lowest terms 7 21 42 and is unique. 17 = 51 = 102 = ... We will emphasize his discussion of divisibility and skip to Proposition 31. Any composite number is divisible by some prime number. We will directly quote Euclid. Let A be a composite number; I say that A is measured by some prime number. For since A is composite, some number will measure it. Let a number measure it, and let it be B. Now, if B is prime then we are done. If it is composite then some number will measure it. Let a number measure it and call it C. Since C measures B and B measures A, C measures A. If C is prime then we are done. But if it is composite then some number will measure it. Thus, if the investigation is continued in this way, some prime number will be found which will measure the number before it, which will also measure A. For if it were not found an inf inite series of numbers will measure the number, A, which is impossible in numbers. Notice that numbers are treated more abstractly as single symbols A, B, C and not as intervals. (Although they are still pictured as intervals.) More important is the inf inite series of divisors of A. No real indication is given about why this is impossible for numbers. 
However, we can understand that Euclid considered this point obvious. If D is a divisor of A and not equal to A then D is less than A. There are only a f inite number of numbers less than a given number n, 1, 2, ..., n 1. This argument uses a version of what is now called mathematical induction which we will call the method of descent. Suppose we have statements Pn labeled by 1, 2, 3, .... If whenever Pn is assumed false we can show that there is an m with 1 < m < n with Pm false then Pn is always true. The proof that this method works is that if the assertion for some n were false then there would be 1 < m1 < n for which Pm1 is false. But then there would be 1 < m2 < m1 for which Pm2 is false and this procedure would go on forever. Getting numbers m1 > m2 > ... > mn > ... with all the numbers bigger than 1. Let us try in out. The assertion Pn is that if n is not a unit then n is divisible by some prime. If Pn is false that n is not a prime and not a unit. Hence it is composite so it is a product of two numbers a, b neither of which is a unit and both less than n. If Pa were true then a would be divisible by some prime. But that would imply that n is divisible by some prime. This is contrary to our assumption. Thus if Pn is false then Pa is false with 1 < a < n. The method of descent now implies that Pn is true for all n. We will now jump to Book IX and Proposition 20. Prime numbers are more than any assigned multitude of prime numbers. 17 Here we will paraphrase the argument. Start with distinct primes A,B,C. Let D be the least common multiple of A, B, C (this has been discussed in Propositions 18 and 19 of Book IX). In modern language we would multiply them together. Now consider D + 1. If D + 1 were composite then there would be a prime E dividing it. If E were one of A,B,C then E would divide 1. Notice that we are back with three taking the place of arbitrarily large. The modern interpretation of this Proposition is that there are an inf inite number of primes. What is really meant is that if the only primes are p1 ,...,pk then we have a contradiction since p1 pk + 1 is not divisible by any of the primes and this contradicts the previous proposition. After Book IX, Proposition 20 there are Propositions 21-27 that deal with combining even and odd numbers and seem to be preparatory to Euclids method of generating perfect numbers. For example, Proposition 27 (in modern language) says that if you subtract an even number from an odd number then the result is an odd number. Here one must be careful and also prove that if you subtract an odd number from an even number you get an odd number (Proposition 25). We would say that the two statements are essentially the same since one follows from the other by multiplication by 1. However, since negative numbers were not in use in the time of Euclid Proposition 25 and 27 are independent. We now record one implication of Proposition 31 (and Proposition 30 which is discussed below) that is not explicit in Euclid (we will see why in the course of our argument). This Theorem is usually called the fundamental theorem of arithmetic. If A is a number (hence is not a unit) then A can be written uniquely (up to order) in the form pe1 pe2 per with p1 , ..., pr distinct primes and e1 , ..., er r 1 2 numbers (here B m is B multiplied by itself m times). We rst show that any number is a product of primes using a technique analogous to the method of Euclid in his proof of Proposition 31. If A is prime then we are done. 
Otherwise A is composite hence by Proposition 31 A = q1 A1 with A1 not a unit and q1 a prime. If A1 is a prime then we are done. Otherwise, A1 = q2 A2 with q2 a prime and A2 not a unit. If A2 is prime we are done since then A = q1 q2 A2 . Otherwise we continue this procedure and either we are done in an a nite number of steps or we have A1 > A2 > ... > An > ... an innite sequence of positive numbers. This is impossible for numbers according to Euclid. We have mentioned in our discussion of Proposition Let us show how the principle of descent can be used to prove the assertion that every number is a product of primes. Let Pn be the assertion that if n is not the unit then is a product of primes. If Pn is false then n is not a unit and not prime so n is composite. Hence n = ab with neither a nor b a unit. If both Pa and Pb were true then a is a product of primes and b is a product of primes so ab is a product of primes. Thus one of Pa or Pb would be false. If 18 Pa is false set m = a otherwise Pb is false and set m = b. Then 1 < m < n and Pm is false. The principle implies that Pn is true for all n. This principle can be made into a direct statement which we call the principle of mathematical induction.. The idea is as follows if S1 , S2 , ... are assertions and if S1 is true and if the truth of Sm for all 1 < m < n implies that Sn is true then Sn is true for all n. This is intuitively clear since starting with S1 which we have shown is true we have. S1 is true so S1 and S2 are true so S3 is true, etc. For example suppose that the statement Sn is the assertion 1 + 2 + ... + n = n(n + 1) . 2 Then S1 says that 1 = 1 which is true. We now assume that Sm is true for all 1 < m < n. Then 1 + 2 + ... + (n 1) + n = (1 + 2 + ... + (n 1)) + n = n(n + 1) n(n 1) 2n + = 2 2 2 which is the assertion Sn . Let us see how the method of descent implies the principle of mathematical induction. Suppose we have a statement Sn for n = 1, 2, ... and suppose that we know that S1 is true and whenever we assume Sm is true for 1 < m < n the Sn is true. Assume that Sn is false. Then n cannot be the unit. If Sm were true for all 1 < m < n then we would know that Sn were true. Since we are assuming the contrary we must have Sm is false for some m with 1 < m < n. Thus the method of descent implies that Sn is always true. One can show that the two principles are equivalent but we have traversed to far away from Euclid already. Returning to the fundamental theorem of arithmetic we have shown that if A is not the unit then A can be written as q1 q2 qn with qi a prime for i = 1, ..., n. Since the qi are not necessarily distinct we can take p1 , ..., pr to be the distinct ones and group those together to get A = pe1 pe2 per (here e1 is the number r 1 2 of i such that p1 = qi , e2 is the number of i such that p2 = qi ,...). We are now ready to prove the uniqueness. The crux of the matter and is Proposition 30 of Book VII which says: If two numbers by multiplying one another make some number, and any prime number measure the product, it will measure one of the original numbers. Let us see how this proposition implies our assertion about uniqueness. We will prove it using the principle of mathematical induction. The assertion Pn is that if n is not one then up to order there is only one expression of the desired form. Notice that P1 doesnt say anything so it is true (by default). 19 n(n 1) +n= 2 Suppose we have proved Pn for 1 m < n. Assume that n = pe1 pe2 per and r 1 2 f f f n = q11 q22 qs s . 
with p1 , ..., pr distinct primes and q1 , ..., qs distinct primes. Then we must show that r = s and we can reorder q1 , ..., qr so that qi = pi and fi = ei for all i = 1, ..., r. Since p1 divides n we must have p1 divides f f f f f f q1 (q11 1 q22 qs s ). Thus p1 divides q1 or q11 1 q22 qs s by Proposition 30. f If it divides q1 it is equal to q1 . Otherwise since it cannot divide q11 1 it divides f2 fs q2 qs . Proceeding in this way we eventually see that there must be an index i so that p1 = qi . Relabel so that i = 1. Then we see that if m = n/p1 then f f e f m = p11 1 pe2 per , and m = q11 1 q22 qs s . If m = 1 then n = p1 = q1 . r 2 Otherwise 1 < m < n so Pm is true. Hence s = r and f1 1 = e1 1 and the other qi can be rearranged to get the conclusion qi = pi and fi = ei for i = 2, ..., r. So to complete the discussion of our Proposition we need only give a proof of Proposition 30 Book VII. This proposition rests on his theory of proportions (now rational numbers). We will give an argument which uses negative numbers (jumping at least 1500 years in our story). We will assume here that the reader is conversant with integers (0, 1, 2, ...). Our argument is based on Propositions 1 and 2 Book VII given in the following form: If x,y are numbers that are relatively prime (prime to each other) then there exist integers a, b such that ax + by = 1. We follow the procedure in the argument that demonstrates Propositions 1 and 2 of Book VII . If x > y then the f irst step is x y. We assert that at each stage of this subtraction of the lesser from the greater we have a pair of numbers ux+vy and zx+wy with u, v, z, w integers. At step one this is clear. So suppose this is so at some step we show that it is so at the next step. So if ux + vy and zx + wy are what we have at some step then if (say) ux + vy > zx + wy then at the next step we have (ux + vy) (zx + wy) and zx + wy. That is (u z)x + (v w)y and zx + wy. According to Propositions 1 and 2 Book VII this will eventually yield 1. We will now demonstrate Proposition 30 Book VII. Suppose that p is a prime, a, b are numbers and p divides ab, but p does not divide a. Then p and a are relatively prime. Thus there exist integers u, v so that up + va = 1. Now b = upb + vab since and ab = pc we see that b = ubp + vcp = (ub + vc)p. We will also describe how Euclid proves Proposition 30. Let C be the product of A and B and assume that D is a prime dividing C then C is the product of D and E. Now assume that A and D are prime to each other (since D is prime this means that D does not measure A). Then D, A and B, E are in the same proportion. Since D is prime and A all pairs in the same proportion to D, A are given as multiples F D, F A (this is a combination of Propositions 20 and 21 in Book VII) thus D measures B. 20 1.3.3 Exercises. 1. Use the method of Propositions 1 and 2 of Book VII to calculate the greatest common measure of 315 and 240 and of 273 and 56.. 2. Read the original proof of Proposition 30 Book VII. Explain how it differs from the argument given here. Also explain in what sense the two proofs are the same. 3. Use the principal of mathematical induction to show (a) 1 + 4 + 9 + .... + n2 = n(n+1)(2n+1) . 6 (b) 1 + 2 + 4 + ... + 2n = 2n+1 1. 4. Use the material of this section to show that if a is a fraction then it can b c be written uniquely in the form d with c, d in lowest terms (relatively prime). In other words complete the discussion of the proof of Proposition 30 Book VII). 5. Assume that 1 + 2m + ... 
+ nm = pm (n) with pm a polynomial of degree m + 1 in n. Set up a formula of the form of (a),(b) for the sum of cubes. Prove it by induction. Why do you think that the assertion about pm is true? 6. Use the method of descent to prove that there is no rational number a b 2 so that a = 2. Hint: Let Pn be the statement that there is no m such that b n2 = 2m2 . Use Proposition 30 show that if n2 = 2m2 then n is even. Use this to show that if Pn is false then Pm is false for m such that n2 = 2m2 . 1.4 1.4.1 Perfect numbers and primes. The result in Euclid. Perfect numbers are not a central topic in mathematics. However, their study has led to some important consequences. As we saw Euclid devoted one of his precious 22 def initions in Book VII to this concept. We recall that a perfect number is a number that has the property that the sum of its divisors (including 1 but not itself) is equal to itself. Thus 1 has as divisor 1 which is itself so it is not perfect. 2 has divisor 1 other than itself as does 3 and 5 so 2,3,5 are not perfect. Four has divisors 1,2 other than itself so it is not perfect. 6 has divisors 1,2,3 other than itself so it is perfect. Thus the smallest perfect number is 6. One can go on like this the next is 28 whose factors other than itself 1,2,4,7,14. It is still not known if there are only a f inite number of perfect numbers. Euclid in Proposition 36 Book IX gave a method that generates perfect numbers. Let us quote the proposition. If as many numbers as we please beginning from an unit be set out continuously in double proportion, until the sum becomes prime, and if the sum multiplied by the last make some number, the product will be perfect. This says that if a = 1 + 2 + 4 + ... + 2n is prime then 2n a is perfect. Notice that as Euclid gives the result it allows us to discover perfect numbers if we know that certain numbers are prime. We will now try it out. Euclid does not think of 1 as prime. 1 + 2 = 3 is prime. 2 3 = 6 is thus perfect. 1 + 2 + 4 = 7 is 21 prime so 4 7 = 28 is perfect. 1 + 2+ 4 + 8 = 15 not prime. 1 + 2 +4 + 8 + 16 = 31 is prime so 16 31 = 496 is perfect. We now check this because it tells us why the proposition is true. Write out the prime factorization of 496 (which we have seen is unique in the last section as 24 31. Thus the divisors of 496 other than itself are 1, 2, 22 = 4, 23 = 8, 24 = 16, 31, 2 31 = 62, 22 31 = 124, 23 31 = 248. Add them up and we see that Euclid was correct. The example of 496 almost tells us how to demonstrate this assertion of Euclid. If a = 1+2+...+2n is prime then the factors of 2n a are 1, 2, ..., 2n , a, 2a, ..., 2n1 a. So the sum of the factors is 1 + 2 + ... + 2n + a + 2a + ... + 2n1 a. This is equal to a + (1 + 2 + ... + 2n1 )a. Now we observe that Exercise 2 (c) of section 1.4 implies that 1 + 2 + ... + 2n1 = 2n 1. Thus the sum of the factors is a + (2n 1)a = a + 2n a a = 2n a. 1.4.2 Some examples. This proposition is beautiful in its simplicity and we will see that the Swiss mathematician Leonhard Euler (1707-1783) proved that every even perfect number is deducible from this Proposition. The catch is that we have to know how to test whether a number is prime. We have noted that 1 + 2 + 4 + ... + 2n = 2n+1 1. Thus we are looking for numbers of the form 2m 1 that are prime. Let us make an observation about this point. If m = 2k were even then 2m 1 = 22k 1 = (2k + 1)(2k 1). If k = 1 then we have written 3 = 3 1 so if m = 2, 2m 1 is prime. If k > 1 then 2k 1 > 1 and 2k + 1 > 1 so the number is not prime. 
We therefore see that if 2m 1 is prime and m > 2 then m must be odd. To get 496 we used 25 1 = 31. The next number to check is 27 1 = 127. We now check whether it is prime. We note that if a = bc and b c then b2 a. This is so because if b c then b2 bc = a. Thus we need only check whether 127 is divisible by 2,3,5,7,11 (since 122 = 144 > 127). Since it is not we have another perfect number 127 64 = 8128. Our next candidate is 29 1 = 511 = 7 73. We see that 22 1, 23 1, 25 1, 27 1 are prime but 29 1 is not. One might guess from this that if 2m 1 is prime then m must be prime. Obviously we are guessing on the basis of very little information. However, this is the way mathematics is actually done. So suppose that m = ab, a > 1, b > 1 we wish to see if we can show that 2ab 1 is composite. Set x = 2a then our number is xb 1. We assert that xb 1 = (x 1)(1 + x + ... + xb1 ). One way to do this is to remember long division of polynomials the other is to multiply out (x 1)(1 + x + ... + xb1 ) = x + x2 + ... + xb 1 x ... xb1 . Then notice that x, x2 , ..., xb1 subtract out and we have xb 1 left. Armed with this observation we can show the following proposition. If p = 2m 1 is prime then m is prime. If m = ab and a > 1, b > 1 then setting x = 2a we see that p = xb 1 = (x 1)(1 + x + ... + xb1 ) = cd, c = x 1 > 1 and d = 1 + x + x2 + ... + xb1 > 1. 22 Our next candidate is 11. But 211 1 = 2047 = 23 89. Using Mathematica (or any program that allows one to do high precision arithmetic) one can see that among the primes less than or equal to 61, 2p 1 is prime for exactly p = 2, 3, 5, 7, 13, 17, 19, 31, 61. Notice the last yields a prime 261 1 = 2305843009213693951. We note that at this writing (2002) the largest known prime of the form 2p 1 is 213466917 1(Michael Cameron, 2001 with the help of GIMPS -Great Internet Mersenne Prime Search). 1.4.3 A theorem of Euler. We give the theorem of Euler that shows that if a is a perfect even number then a is given by the method in Euclid. Write a = 2m r with r > 1 odd. Suppose that r is not prime. Let 1 < a1 < ... < as < r be the factors of r. Then the sum of the factors of a other than a is (1 + 2 + ... + 2m ) + (1 + ... + 2m )a1 + ... + (1 + ... + 2m )as + (1 + ...2m1 )r = 2m+1 1 + (2m+1 1)a1 + ... + (2m+1 1)as + (2m 1)r. Since we are assuming that a is perfect this expression is equal to a. Thus (2m+1 1)(1 + a1 + ... + as ) + (2m 1)r = 2m r. We therefore have the equation (2m+1 1)(1 + a1 + ... + as ) = r. From this we conclude that 1 + a1 + ... + as is a factor of r other than r. But then 1 + a1 + ... + as as . This is ridiculous.. So the only option is that r is prime. Now we have (2m+1 1) + (2m 1)r = 2m r. So as before, r = 2m+1 1. This is the assertion of the proposition. 1.4.4 The Sieve of Eratosthenes. In light of these results of Euler and Euclid, the search for even perfect numbers involves searching for primes p with 2p 1 a prime. So how can we tabulate primes? The most obvious way is to make a table of numbers 2, ..., n and check each of these numbers to see if it is divisible by an earlier number on the list. This soon becomes very unwieldy. However, we can simplify our problem by observing that we can cross off all even numbers, we can then cross off all numbers of the form 3 n then 5 n then 7 n, etc. This leads to the Sieve of Eratosthenes (230 BC) 1 2 3 4 5 6 7 8 9 10 11 12 13 ... 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30... 23 3 6 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60... 
We cross out all numbers in the f irst row that are in the second or third row and have 1 5 7 11 13 17 19 23 25 29 31 35 37 41 43 47 49 53 55 59 61 ... The next sequence of numbers to check is the multiples of 5 5 10 15 20 25 30 35 40 45 50 55 60 65 ... This reduces the f irst row to 1 7 11 13 17 19 23 29 31 37 41 43 47 49 53 59 61 ... Now the number to check is 7 7 14 21 28 35 42 49 56 63 ... Deleting these gives 1 11 13 17 19 23 29 31 37 41 43 53 59 61 ... The next to check is thus 11 11 22 33 44 55 66 ... We note that 112 = 121 > 61. Thus we see that the primes less than or equal to 61 are 2,3,5,7,11,13,17,19,23,29,31,37,41,43,53,59,61. The main modern use of the Sieve of Eratosthenes is as a benchmark to compare the speed of different digital computer systems. Most computational mathematics programs keep immense tables of primes and this allows them to factor relatively large numbers. For example to test that 231 1 is prime one notes that this number is of the order of magnitude of 2.4 109 thus we need only check whether it is divisible by primes less then or equal to about 5104 so if our table went to 50,000, the test would be almost instantaneous. However, for 261 1 which is of the order of magnitude of 2.4 1019 the table would have to contain the primes less than or equal to about 50 billion. This is not reasonable for the foreseeable future. Thus other methods of testing primality are necessary. Certainly, computer algebra systems use other methods since, say Mathematica, can tell that 261 1 is a prime in a few seconds. Using Mathematica one can tell that 289 1 = 618970019642690137449562111 is a prime. This gives the next perfect number 288 (289 1) which is (base 10) 191561942608236107294793378084303638130997321548169216. We will come back to the question of how to produce large primes and factoring large numbers later. In the next section we will give a method of testing if a number is a prime. We will see that the understanding of big primes has led to practical applications such as public key codes which today play an important role in protecting information that is transmitted over open computer networks. 1.4.5 Exercises. 1. Use the Sieve of Eratosthenes to list the primes less than or equal to 1000. 2. Write a program in your favorite language to store an array of the primes less than or equal to 500,000. Use this to check that 231 1 is prime. 3. The great mathematician Pierre Fermat (1601-1665) considered primes of the form 2m + 1. Show that if 2m + 1 is prime then m = 2k . (Hint: If m = ab 24 with a > 1 and odd then set x = 2b . 2m +1 = xa +1. Now ((x)a 1) = xa +1 since a is odd. Use the above material to show that 2m + 1 factors.) This gives 21 + 1 = 3, 22 + 1 = 5, 24 + 1 = 17, 28 + 1 = 257,... (so far so good). Fermat m guessed that a number of the form 22 +1 is prime. Use a mathematics program (e.g. Mathematica) to show that Fermat was wrong. 4. The modern mathematician George Polya gave an argument for the proof that there are an inf inite number of primes using the Fermat numbers Fn = n 22 + 1. We sketch the argument and leave the details as this exercise. He asserted that if n 6= m then Fn and Fm are relatively prime. To see this he observes that x2r 1 = (xr 1)(xr + 1) and so x2 1 = (x2 k k1 1)(x2 k1 + 1) = (x2 k2 1)(x2 k2 + 1)(x2 k1 + 1) = ... = (x2 1)(x2 + 1)(x4 + 1) (x2 If n > m then n = m + k so 2n = 2m 2k . Hence m k k1 + 1) = k1 (x 1)(x1 + 1)(x2 + 1)(x4 + 1)(x8 + 1) (x2 Fn = (22 )2 + 1. + 1). 
So (setting x = 22 ) Fn 2 = (x + 1)K with K = (x1 + 1)(x2 + 1)(x4 + 1)(x8 + 1) (x2 k1 m + 1). Thus Fn 2 = Fm K. So if p is a prime dividing Fn and Fm then p must divide 2. But Fn is odd. So there are no common factors. Now each Fn must have at least one prime factor, pn . We have p1 , p2 , ..., pn , ... all distinct. 5. We say that a number, n, is k perfect if the sum of all of its factors (1, ..., n) is kn. Thus a perfect number is 2-perfect. There are 6 known 3perfect numbers. Can you nd one? 1.5 The Fermat Little Theorem. In the last section we saw how the problem of determining perfect numbers leads almost immediately to the question of testing if a large number is a prime. The most obvious way of testing if a number a is prime is to look at the numbers b with 1 < b2 a and check if b divides a. If one is found then a is not a prime. It doesnt take much thought to see that this is a very time consuming method of a is really big. One modern method for testing if a is not a prime goes back to a theorem of Fermat. The following Theorem is known as the Fermat Little Theorem. 25 1.5.1 The theorem. If p is a prime and if a is a number that is not divisible by p then ap1 1 is divisible by p. Let us look at some examples of this theorem. If p = 2 and a is not divisible by 2 then a is odd. Hence ap1 1 = a 1 is even so divisible by 2. If p = 3 and a is not divisible by p then a = kp + 1 or a = kp + 2 by the Euclidean algorithm. Thus ap1 1 is either of the form (3k +1)2 1 or (3k +2)2 1. In the f irst case if we square out we get 9k2 + 6k + 1 1 = 3(3k 2 + 2). In the second case we have 9k 2 + 12k + 4 1 = 3(3k2 + 4k + 1). We have thus checked the theorem for the f irst 2 primes (2,3). Obviously, one cannot check the truth of this theorem by looking at the primes one at a time (we have seen that Euclid has demonstrated that there are an inf inite number of primes). Thus to prove the theorem we must do something clever. That is demonstrate divisibility among a pair of numbers about which we are almost completely ignorant. 1.5.2 A proof. We now give such an argument. If a is not divisible by p then ia is not divisible by p for i = 1, ..., p 1 (Euclid, Proposition 30 Book VII). Thus the Euclidean algorithm implies that if 1 i p 1 then ia = di p + ri with 1 ri p 1. If i > j and ri = rj then ia ja = di p + ri dj p rj = (di dj )p. So (i j)a is divisible by p. Since we know that this is not true (1 ij p1) we conclude that if i 6= j then ri 6= rj . This implies that r1 , ..., rp1 is just a rearrangement of 1, ..., p 1. Before we continue the proof let us give some examples of the rearrangements. We look at a = 2, p = 3. Then a = 0 3 + 2, 2 a = 4 = 1 3 + 1. Thus r1 = 2, r2 = 1. Next we look at a = 3 and p = 5. Then 3 = 0 5 + 3, 6 = 1 5 + 1, 9 = 1 5 + 4, 12 = 2 5 + 2. Thus r1 = 3, r2 = 1, r3 = 4, r4 = 2. We can now complete the argument. Let us denote by sj for 1 j p 1 numbers given by the rule that rsj = j. Thus in the case a = 2, p = 3, s1 = 2, s2 = 1. In the case a = 3, p = 5 we have s1 = 2, s2 = 4, s3 = 1, s4 = 3. Then we consider a (2 a) (3 a) ((p 1) a). We can write this in two ways. One is 1 2 (p 1) ap1 . The second is (ds1 p + 1) (ds2 p + 2) (dsp1 p + (p 1)). If we multiply this out we will get many terms but by inspection we can see that the product will be of the form 1 2 (p 1) + c p. 26 We are getting close! This implies that 1 2 (p 1) ap1 = 1 2 (p 1) + c p. If we bring the term 1 2 (p 1) to the left hand side and combine terms we have 1 2 (p 1) (ap1 1) = c p. Thus p divides the left hand side. 
Since p cant divide any one of 1,2,...,p 1, we conclude that p divides ap1 1. 1.5.3 The tests. This leads to our test. If b is an odd number and 2b1 1 is not divisible b then b is not prime. If b is odd and 2b1 1 is divisible by b then we will call b a pseudo prime to base 2. It is certain that if b is odd and not a pseudo prime to base 2 then b is not a prime. The aspect that is amazing about this test is that if we show that b does not divide 2b1 1 then there must be a number c with 1 < c < b that divides b about which we are completely ignorant! On the other hand this test might seem ridiculous.. We are interested in testing whether a number b is prime. So what we do is look at the (generally) very much bigger number 2b1 1 and see if b divides it or not. This seems weird until you think a bit. In principal to check that a number is prime we must look at all numbers a > 1 with a2 b and check whether they divide b. Our pseudo prime test involves long division of two numbers that we already know. That is the good news. The bad news is that the smallest pseudo prime to base 2 that is not a prime is 341 = 11 31 and it can be shown that if b is a pseudo prime to base 2 then so is 2b 1. Thus there are an innite number of pseudo primes to base 2. For example if p is prime then 2p 1 is also a pseudo prime to base 2 (see Exercise 4 below). Note that we could add to the test as follows. If b is odd and 2b1 1 is divisible by b we only know that b is a pseudo prime. We could then check whether 3 divides b and if it does we would know it is not a prime. If it doesnt we could check whether b divides 3b1 1. The smallest number that is not a prime but passes both tests is 1105 = 5 13 17. One can then do the same thing with 5. We note that if we do this test for 2, 3, 5 the non-primes less than 10, 000 that pass the test are {1729, 2821, 6601, 8911}. This leads to a rened test that was rst suggested by Miller-Rabin. Choose at random a number a between 1 and b 1. If the greatest common divisor of a and b is not one then b is not prime. If a and b are relatively prime but ab1 1 is not divisible by b then b is not prime. If one repeats this k times and the test for being composite fails then the probability of b being composite is less than or equal to 21 . Thus if k is 20 the probability is less than one in k a million. Obviously if we check all elements a less than b then we can forget about the Fermat part of the test. The point is that the number b is very big and if we do 40 of these tests we have a probability of better than 1 in 1012 that we have a prime. 27 A number, p, that is not a prime but satises the conclusion of Fermats theorem for all choices of a that are relatively prime to p is called a Charmichael number the smallest such is 561. Notice that 561 = 3 11.17. One further sharper test(the probabilities go to 0 faster and have strictly less failures than the Miller-Rabin test) is the Solovay-Strassen probabilistic test. We can base it on the proof we gave of Fermats Little Theorem. Suppose that a and b are relatively prime and b is odd and bigger than 1. For each 1 j < b we write ja = mj b + rj with 0 rj < b. We note that rj cant be zero since then b divides ja. Since b has no prime factures in common with a this implies that b divides j. This is not possible since 0 < j < b. Thus, as before the numbers r1 , ..., rb1 form a reordering of 1, 2, ..., b 1. We denote by the number that is gotten by multiplying together the numbers rj ri with j > i and j < b. 
Then since r1 , ..., rb1 is just a rearrangement of 1, ..., b 1 we see that is just 1 times the number we would get without a rearrangement. We write J(a, b) for 1 if the products are the same and 1 if not. We now consider the product ja ia) for j > i and 1 j < b then as we argued above we see that this number is + cb with c a number. range then This says that if is the product of j i over the same a (b1)(b2) 2 = J(a, b) + cb. (b1)(b2) 2 This implies that if b is prime that a J(a, b) is divisible by b. Now n 2 is odd so J(a, b) = J(a, b)n2 . Hence we see that if b is prime then a b1 2 J(a, b) = db for some number d. This leads to the test. We say that a number a between 2 and b 1 is a witness that b is not prime if a and d are not relatively prime or b1 a 2 J(a, b) is not divisible by b. One can show that if there are no witnesses then b is prime. One can also prove that if b is not prime then more than half of the numbers a between 2 and b 1 are witnesses. The test is choose a number a between 2 and b 1 at random. If a is a not a witness that b is not prime then the probability is strictly less than 1 that b is composite. Repeating the 2 test say 100 times and not nding a witness will allow us to believe with high probability that b is prime. The point of these statistical tests is that if we dene log2 (n) to be the number of digits of n in base 2 then the prime number theorem (J. Hadamard and de Valle Poussin 1896we will talk about this later) implies that if N is a large number then there is with high probability a prime between N and log2 (N ). For example, N = 56475747478568 then log2 (N ) = 45 and 56475747478601 is a prime. Thus to search for a prime with high probability with say 256 digits base 2 choose one such number (at random), N , then use the statistical tests on the numbers between N and N + 256. 28 The reader who has managed to go through all of this might complain that the amount of calculation indicated in these tests is immense. When we talk about modular arithmetic we will see that this is not so. In fact these tests can be implements very rapidly. As a preview we consider the amount of calculation to test that a number b is a is a 2-pseudo prime. We calculate 2b1 as follows: We write out b 1 in base 2 say b 1 = c1 2 + c2 4 + ... + cn 2n with ci either 0 or 1. We then 2 2b1 = (22 )c1 (24 )c2 (22 )cn . As we compute the products indicated we note that if m is one of the intermediate products and if we apply division with remainder we have m = ub + r with 0 r < b. In the test we can ignore multiples of b. Also we use the m+1 m m = (22 )2 . And the 22 can be replaced by its remainder after fact that 22 m division by b. Let rm be the remainder for 22 . Thus if we have multiplied the rst k terms and reduced to a number less than b using division with remainder to have the number say s if ck+1 = 1 we multiply s by rk and then take the remainder after division by b. We therefore see that there are at most n operations of division with remainder by b and never multiply numbers as big as b. We will see that a computer can do such a calculation very fast even if b has say 200 binary digits. We give an example of this kind of calculation consider the number n = 65878161. Then 2n1 is an immense number but if we follow the method described we have the binary digits of n 1 (written with the powers of 2 in increasing order) are {0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1}. 
The method says that each time we multiply we take only the remainder after division by n. We thereby get for the powers {2, 4, 16, 256, 65536, 12886831, 1746169, 1372837, 38998681, 33519007, 56142118, / 28510813, 45544273, 49636387, 27234547, 48428395, 5425393, 65722522, 46213234, 3252220, 64423528, 16511530, 46189534, 45356743, 15046267, 47993272}. Now the intermediate products are (we include the terms where the digit is 0) {1, 1, 1, 65536, 65536, 65536, 46555867, 46555867, 46555867, 46555867, 18206659, 42503458, 24542662, 24542662, 24542662, 54699517, 54699517, 29732113, 728509, 728509, 38913619, 36121177, 1794964, 30837667, 23401021}. The last number is not one so the number n is not a prime. This seems like a lot of computation but most modern personal computers can do these calculations instantly. It turns out the n = 7919 8319. So nding a factor by trial and error would have involved more computations. We also observe that the same m+1 m method can be used for any choice of a using a2 = (a2 )2 . 29 1.5.4 Exercises. 1. Make a large list of pseudo primes base 2 less than or equal to 1000. Compare this with a list of primes less than or equal to 1000. (You will want to use a computer for this.) 2. If n is any positive integer show that there exists a consecutive list of composite integers of length n. (Hint: If we set (n + 1)! = (n + 1)n(n 1) 2 then (n + 1)! + 2, (n + 1)! + 3, ..., (n + 1)! + n + 1 are all composite.) For each n = 2, 3, 4, 5, 6, 7, 8, 9 nd the consecutive list of primes that starts with the smallest number (for example if n = 3 the answer is 8, 9, 10). Why do we need to only check n odd? 3. Calculate the rearrangement of 1,2,...,6 that corresponds to a = 2 and p = 7 as in the proof of the Little theorem. Use this to calculate J(2, 7). 4. Given a and p as in Fermats Little theorem and r1 , ..., rp1 and s1 , ..., sp1 show that if 1 < a < p then r1 = a and s1 r1 = u p + 1 with u a whole number. 5. Show that if p is a pseudo prime to base 2 then so is 2p 1. (Hint: If q = 2p 1 then p p1 2q1 1 = 22 2 1 = 22(2 1) 1. Now p divides 2p1 1. So 2p1 1 = cp. Thus 2q1 1 = 22cp 1 = x2c 1 with x = 2p .) 6. We note that 1 + 22 + 1 = 6 (so divisible by 3), 1 + 24 + 34 + 44 + 1 = 355 (so divisible by 5). Show more generally that if p is prime then 1p1 + 2p1 + ... + (p 1)p1 + 1 is divisible by p. It has been shown that if p satises this condition (that it divides the above sum) then it has been shown by Giuga(Giuga, G. Su una presumibile propertiet caratteristica dei numeri primi. Ist. Lombardo Sci. Lett. Rend. A 83, 511-528, 1950) that p is a Charmichael number. He also conjectured that such a number must, in fact be prime. This has been checked for p < 1013800 (Borwein, D.; Borwein, J. M.; Borwein, P. B.; and Girgensohn, R. Giugas Conjecture on Primality. Amer. Math. Monthly 103, 40-50, 1996). 7. Use a package like Mathematica or Maple to show that 341 is a pseudo prime to base 2 and that 21104 1 and 31104 1 are both divisible by 1105. 8. To do this problem you should use a computer mathematics system. Calculate the remainder of dividing 2n1 by n for n = 57983379007789301526343247109869421887549849487685892237103881017000 7677183040183965105133072849587678042834295677745661721093871. Use the outgrowth of the calculation to deduce that n is not a prime. 1.6 Large primes and cryptography. In the last section we saw that large primes appear naturally in the unnatural problem of f inding perfect numbers. Large primes have also become an important part of secure transmission of data. 
Most modern cryptographic systems 30 involve two keys one to be used to encode and the other to decode messages. Public key systems have a novel aspect in that the information necessary to encode a message is in principle known to everyone. But the information to decode the message is only known to the person the intended recipient of the message. In other words, even if you know how to encode a message you still do not know how to decode a dierent message encoded by that method. Alternatively, even if you find a method of deciphering one message deciphering another is not easier. This is a seeming contradiction and although most believe that the methods now in use have this contradictory property there is no mathematical proof that this is so. This type of cryptography was f irst described by W.Die and M.E.Helman New directions in cryptography, IEEE Transactions in Information Theory IT-22 (1976),644-654. One of the f irst practical implementations was due to Rivest, Shamir and Adelman (1978) and is called RSA. It is based on the hypothesis that the factoring of large numbers is much harder than multiplying large numbers. We will discuss this point and describe the implementation of RSA later in this section. 1.6.1 A problem equivalent to factorization. In the RSA system a person (usually called Alice) chooses (or is assigned) two very large primes p and q. Alice calculates n = pq and makes n public. She also chooses a number e (for encode) that has greatest common divisor 1 with the number m = (p 1)(q 1) and such that 1 < e < m. This number is also made public. The rest of the system involves enciphering messages using these two numbers (n, e). The point of the methods of enciphering is that to decode the message one must know a number 1 < d < m (for decode) such that ed = rm+1 for some integer r (note that the form of Proposition 1 Book VII in Euclid tells us that d exists. It is hypothesized that one cannot nd d without knowing m. There are also probabilistic arguments that indicate that with high probability if we know d then we know m. The main point is thus the following Proposition: If we know the number m then it is easy to factor n. Before we demonstrate this we will interpret the line of thought. This assertion then says that with a high probability, deciphering the RSA cipher is at the same level of di iculty as factoring n. Since we have hypothesized that this is impractically hard we have implemented a public key system. As for the Proposition, if we know m then we know (p1)(q1) = pqpq+ 1. Since we know n = pq we therefore know p + q. Now, (p + q)2 2pq = (p q)2 we see that we know (p q)2 at the same level of di iculty as squaring (which the ancient Egyptians thought was relatively easy) that we have hypothesized is much easier than factoring. The last step is to see that there is an easy method of recovering a if we know a2 . We will see that this is so below. Thus with little di iculty we have calculated p + q and p q. We can recover p and q by adding and subtracting and dividing by 2. 31 1.6.2 What do we mean by hard and easy? Before we describe an implementation of RSA we will give a working explanation of the terms hard and easy. In what follows we will use the notation log2 (n) to mean the smallest k such that 2k is greater than or equal to n. In other words log2 (n) is the number of operations necessary to write the number n in base 2. 
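Before making the terms hard and easy precise, it may help to see how little work the proposition of the previous subsection actually requires. The Python sketch below (the names and layout are ours) recovers p and q from n = pq and m = (p − 1)(q − 1), using an integer square root of the kind discussed later in this section; Python 3.8 and later also provide math.isqrt.

```python
def isqrt(a):
    # the integer square root: the b with b*b <= a < (b+1)*(b+1)
    b = a
    guess = (b + 1) // 2
    while guess < b:
        b = guess
        guess = (b + a // b) // 2
    return b

def factor_from_m(n, m):
    # n = p*q and m = (p-1)*(q-1) = n - (p + q) + 1
    s = n - m + 1                 # s = p + q
    d = isqrt(s * s - 4 * n)      # d = p - q, since (p+q)^2 - 4pq = (p-q)^2
    return (s + d) // 2, (s - d) // 2

print(factor_from_m(6887, 6720))  # (97, 71), the small example used later in the text
```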
We will say that a procedure depending on integers N1 , ..., Nd is easy if the there is a method for implementation (an algorithm) that takes a time to complete that is proportional to a xed power (depending on the procedure) of (log2 (N1 ) + ... + log2 (Nd )). If an operation is not easy then we say that it is hard. The study of hard and easy belongs to complexity theory. It is a formalism that is useful for testing whether good computational methods exist (or dont exist). We will just touch the surface. As our rst example we consider the problem of comparing two numbers M and N . We assert that this takes at most 4(log2 (N ) + log2 (M )) operations. We will go through most of the (gruesome details for this case since it is the simplest. The reader should have patience). Indeed, it takes log2 (N ) + log2 (M ) operations to write the two numbers. Once we have done this we know log2 (N ) and log2 (M ). We prove by induction on r = log2 (N ) + log2 (M ) that it now takes at most 4(log2 (M ) + log2 (N )) operations to test whether N is bigger than M is smaller than N or is equal to N . If r 1 then all we must do is look at the two indicated numbers which are 0 or 1. Assume for r s (the induction hypothesis). We now show that it is true for s. We rst check that if log2 (N ) > log2 (M ) (or log2 (N ) < log2 (M )) then N > M or (N < M ). This by the induction hypothesis we need at most 4(log2 (log2 (N )) + log2 (log2 (M )) steps to check this. If we have strict comparison of the logs we are done in 2(log2 (log2 (N ))+log2 (log2 (M )) steps. Otherwise we now know that log2 (N ) = log2 (M ) we now check the digits one by one from the top and look for the rst place with one of M or N having a 1 and the other a 0 the one with the 1 is the larger. If we do the full number of steps we have equality. Thus we have done the comparison in at most (log2 (N ) + log2 (M )) additional steps. Now we observe that log2 (n) n . If n 2. If n = 2 this says that 1 1. If it is 2 true for n and if n 6= 2k 1 then log2 (n + 1) = log2 (n) n < n+1 . Otherwise, 2 2 n = 2k 1. So log2 (n + 1) = k + 1. We are left with observing that 2k k + 1, for k = 1, 2, ... For k = 1 we have equality. If 2k k + 1 then 2k+1 = 2(2k ) 2(k + 1) = 2k + 2 k + 2. This implies that 2(log2 (log2 (N )) + log2 (log2 (M )) log2 (N )) + log2 (M ) So the total number of steps is at most (log2 (N ) + log2 (M )) + 2(log2 (N )) + log2 (M )) + (log2 (N ) + log2 (N )) the rst term for writing the two numbers, the second for comparing the number of digits and the third for the main comparison. Thus comparison in easy (as we should guess). 32 We now look at addition. We have numbers a and b write out the numbers in base 2 assume that a is the larger and ll out the digits of b by 0 (easy). This involves 2log2 (a) operations. We write n = log2 (a). Now add the lowest digits, if one is 0, then put the other digit in the lowest position of the answer otherwise both are 1 so put a 0 in the lowest position and then look at the next digit of a if it is 0 change it to 1 if it is 1 change it to 0 then do the same operation on the next digit continue until you get to a 0 digit of a or to the top one which must be 1 and we would change it to 0 and add one more digit to a. This happens only if all the digits of a are 1 in this case a = 2n 1. So to add a and b you need only change the lowest digit of b to 0 and then add 2n which involves at most 3 steps. This implies that we are either done in 3 steps or we need at most n operations to add the lowest digits. 
We then go to the next digit. We see that if we are adding at the rth digit we will need to at most the larger of 3 and n r easy operations. Thus the number of operations is at most n + (n 1) + ... + 1 = n(n+1) easy operations. So addition is easy. 2 The next case is that division with remainder is easy. To see this we look at M and N and we wish to divide M into N . Comparison is easy. So if M > N the division yields 0 with remainder N . If M = N we get division 1 and remainder 0. Thus we nay assume M < N . Let m be the number of digits of M and n that of N . If n = m then the division is 1 with remainder N M (subtraction is easy, you will do this in an exercise). Thus we can assume that n > m. Now multiply M by 2nm and (this just means putting n m zeros at the end of the base two expansion of m) subtract this from N . Getting N1 with less than n digits. If N1 M we are done otherwise do the operation again. After at most n of these steps we are done. Thus we must do at most n easy operations. So division with remainder is easy. We also note that similar considerations imply that addition, subtraction and multiplication are easy. Consider Euclidian method of calculating the greatest common divisor (g.c.d.)of two numbers n > m > 1. f irst subtract m from n repeatedly until one has m or one has a number that is less than m. If the number is m then the g.c.d. is m. If not put n1 = m and m1 equal to the number we have gotten and repeat. If m1 = 1 then we know that the g.c.d. is 1. Thus The initial step involves about n/m subtractions. It also involves one division with remainder. If n is not divisible by m then m1 is the remainder after division. Thus, if we use division rather than subtraction each step involves one division with remainder. Since each step reduces the bigger number to a number less than or equal to one half its size we see that the number of such operations is at most log2 (n). Thus it takes no more than log2 (n) times the amount of time necessary to calculate the division with remainder of n by m. By a hard operation on n or on n > m we will mean an operation that involves more than a multiple of log2 (n)k steps for each k = 1, 2, 3, ... (the multiple could depend on k). Thus calculating the g.c.d. is easy. To complete the line of reasoning in the previous subsection we show that if a is a positive integer then the calculation of the positive integer b such that b2 a < (b + 1)2 is easy b is called the integer square root of a.. The idea is 33 to write out a to base 2. If the number of digits is 4 or less look it up in a table. If the number of digits is n which is odd n = 2k + 1 then take as the f irst approximation to b the number 2k if this satisf ies the upper inequality we are done otherwise try 2k + 2k1 if it satisf ies both inequalities we are done otherwise if it doesnt satisfy the lower one replace by 2k + 2k2 and continue the same testing to see if we leave the bit on or not. The involves calculating 2n squares so since n 1 = log2 (a) and we have decided that squaring is easy we have shown that in this case calculating b is easy. If n = 2k is even then look at the f irst 2 bits of a (the coe icients of the highest and next highest power of 2) then start with 2k1 and use the same procedure. Is anything hard? The implementation of RSA assumes that factoring a large number is hard. There is no proof of this assertion, but the best known methods of factorization take the order of magnitude of 2C(log2 (N )) 3 steps. 1.6.3 An implementation of RSA. 
Suppose that you are shopping on the internet and you must transmit your credit card number, C, to the merchant. You know that it is possible that Joe Hacker is watching for exactly this sort of transaction. Obviously, you would like to transmit the number in such a way that only the merchant can read it. Here is an RSA type method that might accomplish this task. The merchant chooses two big primes p and q (so big that they are both bigger than any credit card number) and then forms the numbers n = pq and m = (p − 1)(q − 1). He also chooses a number e randomly between 1 and m that has greatest common divisor 1 with m. He transmits the numbers n and e to your computer (and probably Joe's computer). Your computer then calculates the remainder that is gotten when $C^e$ is divided by n. Call this number S. Your computer sends S to the merchant. This is what Joe sees. The merchant calculates the number d that has the property that $de = 1 + mk$ for some k. He then calculates the remainder after division by n of $S^d$ and has C; we will explain this in the next paragraph. If Joe can calculate d then he also knows C. However, if the primes are very large we have seen that this is very improbable.

We now explain why $S^d = C + nh$ for some h. Neither p nor q divides C since it is too small. By definition of S, $C^e = S + ng$ for some g. Thus $S = C^e - ng$. We therefore have $S^d - C^{de} = (C^e - ng)^d - C^{de}$. One can check the formula

$x^d - y^d = (x - y)(x^{d-1} + x^{d-2}y + \cdots + xy^{d-2} + y^{d-1})$

by direct multiplication:

$(x - y)(x^{d-1} + x^{d-2}y + \cdots + xy^{d-2} + y^{d-1}) = x^d + x^{d-1}y + \cdots + xy^{d-1} - x^{d-1}y - \cdots - xy^{d-1} - y^d = x^d - y^d.$

If we make the replacement $x = C^e - ng$ and $y = C^e$ in this formula we find that $S^d - C^{de}$ is a multiple of $x - y = -ng$ and is thus divisible by n. Thus the remainder after dividing by n of $S^d$ and of $C^{de}$ is the same. We note that $(C^e)^d = C^{de} = C^{1+mk} = C(C^m)^k$. Now $m = (p-1)(q-1)$ and $(C^{k(q-1)})^{p-1} = 1 + ap$ by the Fermat Little Theorem. Similarly, $C^{mk} = 1 + bq$. Thus $C^{mk} - 1$ is divisible by both p and q hence by n. (See Exercise 2 below.) Thus $S^d = C(1 + cn) = C + un$ for some whole number u.

We will now do an example of this, but with smaller numbers than those that would be used in a practical implementation. We take p = 71 and q = 97. Then n = 6887 and m = 6720. Choose e = 533. Then the decoder is d = 6077. If C = 45 then the remainder after division by n of $C^e$ is 116. We note that $116^d$ has remainder 45 after division by n.
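The arithmetic of this example is easy to reproduce. Here is a minimal Python sketch of the scheme just described (the function names are ours); it recomputes the decoder d from e and m with the extended Euclidean algorithm rather than taking it on faith, and then runs the example with p = 71 and q = 97.

```python
def egcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def rsa_keys(p, q, e):
    n, m = p * q, (p - 1) * (q - 1)
    g, x, _ = egcd(e, m)
    assert g == 1, "e must have greatest common divisor 1 with m"
    return n, x % m                     # d = x mod m satisfies e*d = 1 + k*m

n, d = rsa_keys(71, 97, 533)
C = 45
S = pow(C, 533, n)                      # what the customer sends and what Joe sees
print(n, d, S, pow(S, d, n))            # 6887 6077 116 45
```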
1.6.4 Fermat factorization.

RSA is based on the assumption that factoring big numbers is hard. How would we go about doing a factorization of a big number? If we knew that the number came from RSA we would then know that it has only two prime factors. Does this make the problem easier? Fortunately for the internet this doesn't seem to be the case. We will, however, look at a pretty good method of factoring now. Suppose that n is an odd number and that n = ab with 1 < a < b. Set $t = \frac{a+b}{2}$ and $s = \frac{b-a}{2}$. Note that a and b are odd, so t and s are whole numbers. We have $t^2 - s^2 = ab$. The reverse is also true, that is, if $1 \le s < t$ and if $n = t^2 - s^2$, then with $a = t - s$ and $b = t + s$ we have $n = ab$. This leads to a method. Start with the number n and let g be its integer square root. If $g^2 = n$ we have factored the number into two smaller factors. Otherwise try $t = g + 1$ and calculate $t^2 - n$; if this number is a perfect square $s^2$ then apply the above observation. Otherwise replace t by t + 1 and try again. Keep this up until $t^2 - n = s^2$. This is practical only if n has two factors that are very close together. This tells us that for the sake of the security of RSA one must choose p and q far apart. We will try this factorization out for the example we used above, n = 6887: the integer square root is 82 and $82^2 = 6724$; $83^2 - n = 2$; $84^2 - n = 169 = 13^2$. So taking $a = 84 - 13 = 71$ and $b = 84 + 13 = 97$ we have found our original p = a, q = b. There are many variants of this method that involve significant improvements in the number of operations necessary to do the factorization. However, the best known methods are hard in the sense of this section. In the next section we will show how a change in the rules allows for an easy factorization algorithm.

1.6.5 More approaches to factorization.

In 1994, Peter Shor published a proof that if a computer that obeys the rules of quantum mechanics could be built then it would be possible to factor large numbers easily. The subject of quantum computing would take us too far afield. However, one of the ingredients of Shor's approach can be explained here. We start with a large number, N. Choose a number y at random. We calculate the remainder of division by N of $y^x$ for $x = 0, 1, 2, \ldots$ and call that number $f(x)$. Then there is a minimal number $1 < T < N$ such that $f(x + T) = f(x)$ for all x. We call T the smallest period. If T is even we assert that $y^{T/2} + 1$ and N have a common factor larger than 1. We can thus use the Euclidean algorithm (which is easy in our sense above) to find a factor of N. Before we demonstrate that this works consider N = 30 and y = 11. Then $f(0) = 1$, $f(1) = 11$, and $11^2 = 121 = 1 + 4 \cdot 30$, so $f(2) = 1 = f(0)$. Thus T = 2. Now $11^{T/2} + 1 = 12$. The greatest common divisor of 12 and 30 is 6.

We will next check that this assertion about y, T, N is correct. We first note that $(y^{T/2} + 1)^2 = y^T + 2y^{T/2} + 1$. But $y^T = 1 + mN$ by the definition of T. Thus after division by N one gets the same remainder for $(y^{T/2} + 1)^2$ and for $2(y^{T/2} + 1)$. This implies that $(y^{T/2} + 1)^2 - 2(y^{T/2} + 1)$ is evenly divisible by N. Thus so is $(y^{T/2} + 1)\bigl((y^{T/2} + 1) - 2\bigr) = (y^{T/2} - 1)(y^{T/2} + 1)$. Thus if $y^{T/2} + 1$ and N have no common factor then $y^{T/2} - 1$ is evenly divisible by N. This would imply that T/2, which is smaller than T, satisfies $f(x + T/2) = f(x)$. This contradicts the choice of T as the minimal period.

There are several problems with this method. The most obvious is: what happens if the minimal period is odd? It can be shown that the probability is small that one would make many consecutive choices of y with odd period. Thus the method is probabilistic. However, if you could decode RSA with probability, say, .6, then you would be able to decode about 60% of the secure internet commerce. There is, however, a much more serious problem. There is no easy algorithm for computation of such periods. The standard ways of finding the T above are as difficult as the factoring algorithms. This is where quantum computing comes in. Shor's contribution was to assume that his computer allowed for superpositions (be patient, we will see what this means later; for now, if you do not know what this means, read "quantum mechanical operations") of digits and that these superpositions obeyed the rules of quantum mechanics. Under these assumptions he proved that he could find the period easily.

1.6.6 Exercises.

1. Why are subtraction, multiplication and division with remainder easy (in the sense above)?
2. Show that if p, q are distinct primes and if p and q divide a then pq divides a.
3.
Use Fermat factorization to factor each of the following numbers into a product of two factors 3819, 8051, 11921. 4. Suppose that you have intercepted a message that has been encoded in the following variant of RSA. Each letter in the message is translated into a number between 1 and 26. We will ignore case and all punctuation but spaces and a space is assigned 27. So a and A become 1 , z and Z become 26. Thus we would write NUMBER as 14, 21, 13, 2, 5, 18. We think of this as a number base in base 28. (Here this number is 14+2128+13282 +2283 +5284 +18285 = 312 914 602. We expand the number and write it to base 60. getting 22, 43, 40, 8, 24. We then encode each digit using RSA with n = 8051 and e = 1979. This gives 269, 294, 7640, 652, 198. Suppose that you know that 402, 2832 was coded in this way. What did the original message say? (Even for relatively small numbers such as these you will almost certainly need a computer algebra package to do the arithmetic.) 5. A form of RSA is the standard method of sending secure information on the internet. Do you agree that it is secure? 6. Consider all y between 10 and 20 and N = 30. Calculate the periods, T in the sense of the Shor algorithm (see the previous section). 37 2 2.1 2.1.1 The concept of geometry. Early geometry. Babylonian areas. In section 1.3 we alluded to the fact that Euclid did not look upon arithmetic as an outgrowth of simple counting. He rather looked upon it as arising from measurement of intervals with respect to a unit. The word geometry when analyzed has two parts geo for earth and metry for measurement. The earliest known record of geometry can be found in Babylonian tablets dated from about 3000 B.C. These tablets are concerned with calculating areas. One starts (as did Euclid) by measuring intervals with respect to a unit interval. The subject of these tablets was the calculation of areas bounded by four straight lines. If we think a bit about this question and decide that a square with side given by the chosen unit has unit area and if we take two of them on put then side by side (ore one on top of the other) then we have a rectangle with sides 2 and 1. It is reasonable to think that this rectangle has area 2. Similarly we can put six such unit squares together and make a rectangle of sides 2 and 3 which has area 6. Thus if we have a rectangle of sides a and b then the area should be a b (square units). Obviously, not every area is as regular as a rectangle and the Babylonians concerned themselves with four sided gures that could be determined by 2,3 or 4 measurements.. Thus a rectangle of sides a and b is determined by two measurements. What about 3 measurements? Here imagine a rectangle of sides a, b and on one of the sides b an distance c from the side of length a is marked. One then joins the marked point with the endpoint of the other side of length b. One now has a gure that is sometimes called a rectangular trapezoid. Let us deduce the corresponding area. 38 The gure has sides labeled by a, b, c and the diagonal if we fold it over as in the picture above the two trapezoids t together to make a rectangle of sides a and b + c. Thus the trapezoid is half of that rectangle and so we have shown that its area is 1 (b + c)a. This is the Babylonian formula. 2 It is still a subject of debate as to what the Babylonians meant by a gure determined by four measurements.. 
However, what seems to be agreed is that the formula that was used for the area does not jibe with any general notion of four measures since if the measurements are a, b, c, d then the formula they give is (a+c)(b+d) . This seems to be what they thought was the area of a general 4 four sided gure with sides of lengths a, b, c, and d. 2.1.2 Right triangles. 2.1.3 As we saw the Babylonians understood Pythagorean triples. They in fact seemed to be aware of what we call the Pythagorean Theorem. In 1916 the German historian of mathematics Ernst Weidner translated a tablet from 2000 BC that contained the assertion that if a right triangle has legs a and b then the other side has length b2 c=a+ . 2a This is not correct, in general, however we should recall that the Babylonians used the approximation p v u2 + v = u + . 2u If we apply this formula we nd that they are using p c = a2 + b2 . Some Egyptian Geometry. In the Moscow Papyrus (approximately 1700 BC) there is the following problem The area of a rectangle is 12 , and the width is three quarters of the length, what are the dimensions? The solution was given in the following way. If we attach a rectangle of side one third of the smaller to the longer side to make the gure into a square then 39 the area of the square would 16. Thus the longer side must be 4 and the shorter 3. This method is what we now call completing the square. This example indicates that the Egyptians understood rectilinear areas. However in the same Papyrus there is the following: If you are told: A truncated pyramid of 6 for the vertical height by 4 on the base by 2 on the top. You are to square this 4, result 16. You are to double 4 result 8. You are to square 2 result 4. You are to add the 16, the 8, and the 4, result 28. You are to take one third of 6, result 2. You are to take 28 twice result 56. You will nd it right. Here the scribe clearly has in mind a truncated pyramid of the type that we know that the Egyptians built. That is the base is square and the top is parallel to the base and centered over it.. If we write u for the height (6) and a for the side of the base (4) and b for the side of the top (2). Then the scribe has written: 2 b + ab + a2 ( u ). Which is the correct formula. We will just indicate how it 3 follows from the formula for the volume of a pyramid of base a and height h, ha2 3 . Consider the picture below Then the total volume is (u+v)a . Now the theory of similar triangles (see Thales 3 fourth Theorem below or Proposition 4 Book VI in Euclid) we have u+v v = . a b The desired volume is thus va3 vb3 (u + v)a2 vb2 = . 3 3 3b 3b We now rewrite the identity just used as v u v + = a a b 40 2 that is u =v a 1 1 b a v= = (a b)v ab this gives b . ab Substituting this into the formula we have for the desired area yields u a3 b3 . 3(a b) We now note that this implies the Egyptian formula once it is understood that (a b)(a2 + ab + b2 ) = a3 b3 . The consensus is that the Egyptians were aware of this identity.. The later Egyptian geometry seems to have been inuenced by the Babylonians since on the tomb of Ptolomy XI who died in 51 BC the inscription contained the incorrect formula for the area of a quadrilateral of sides a, b, c and d (a + c)(b + d) . 4 The Babylonians and the Egyptians also had an understanding of the geometry of circles. 2.1.4 Exercises. 1. Can you nd any quadrilaterals have area in accordance with the Babylonian formula? 2. What is the area of a rectangular trapezoid with dimensions 2,4,3? 3. 
A problem on the Moscow Papyrus says: One leg of a right triangle is two and a half times the other and the area 20. What are its dimensions? Use the Egyptian method of completing to a rectangle to solve the problem (this is the way it was done on the papyrus). 4. Calculate the volume of a right pyramid (not truncated) of base 9 and height 15. 2.2 Thales and Pythagorus. The geometry of the ancient civilizations is important but pales next to the developments in early Greece. Perhaps one reason why we are so aware of Greek mathematics is because of their rich literature and historical writing dating from the earliest eras of their civilization. The rst Olympic games were held in 776 BC (a documented historic event). The works of Homer and Hesiod (still read) predate this event. During the sixth century BC there is a record 41 of two great mathematicians Thales and Pythagorus. Their individual achievements are only documented by secondary sources which, perhaps, exaggerate the accomplishments of these two mathematicians. The Greek world in 600 BC had spread from its original boundaries of the Aegean and Ionian seas to scattered settlements along the Black and Mediterranean Seas. Most of the mathematics that has been recorded comes from these outskirts. One possible reason for this is that they interacted with the older cultures of the Babylonians and the Egyptians. Thales of Milatus (624-548 BC) and Pythagorus of Samos (580-500 BC) were known to have travelled to the ancient centers of Babylonia and Egypt to study their mathematics. 2.2.1 Some theorems of Thales. Eudemus of Rhodes (320 BC a student of Aristotle) wrote a history of mathematics that is now lost but a summary of this history (also lost) was incorporated by Proclus (410-485 AD) in his early pages of commentary on the rst book of the Elements by Euclid. Proclus reports as follows: ... (Thales) rst went to Egypt and thence introduced this study to Greece. He discovered many propositions himself and instructed his successors in the principles underlying many others, his methods of attack being in some cases more general in others more empirical. Later quoting (the quote of) Eudemus he attributes that following ve theorems (found in the Elements) to Thales. 1. A circle is bisected by its diameter. 2. The base angles of an equilateral triangle are equal. 3. If two lines intersect the two opposite angles are equal. 4. If two triangles have all their angles equal then the corresponding sides are in proportion. 5. If two triangles have one side and the two adjacent angles equal then they are equal. We will consider these theorems in our discussion if Euclidean geometry. Thales was a practical man whose motto according to Proclus was know thyself. 2.2.2 Pythagorus. Pythagorus, on the other hand, was a mystic and a prophet. His motto (and that of the Pythagoreans) was all is number. As with Thales, only secondary sources still exist (Aristotle was known to have written a biography of Pythagorus). The Pythagoreans were a vegetarian sect since they believed (possibly inuenced by a trip of Pythagorus to India) in the migration of souls. The term mathematics comes from Pythagorus and literally means that which is to be learned. Proclus, in his introduction to the books of Euclid says: 42 Pythagorus, who comes after him [Thales], transformed this science into a liberal form of education, examining its principles from the beginning and probing the theorems in an immaterial and intellectual manner[meaning abstract]. 
He described the theory of proportions and the construction of cosmic figures. Johannes Kepler (1571-1630) wrote: Geometry has two great treasures: one is the Theorem of Pythagorus; the other, the division of a line into extreme and mean ratio. The first we may compare to the measure of gold; the second we may name a precious jewel. The meaning of this quotation will be clearer after reading the next section.

2.2.3 The golden ratio.

First we look at a square of side a with its diagonals drawn. This picture is quite similar to one on the Babylonian tablet Yale 7289. We note that if we use the method of the Babylonians, each of the triangles with legs b, b and hypotenuse a has area $\frac{b^2}{2}$; since 4 of them make up the square of area $a^2$, we see that $a^2 = 4 \cdot \frac{b^2}{2} = 2b^2$. Note that the obvious symmetry implies that all of the angles in the center are equal and so each must be a right angle. The drawing is therefore an elegant proof of the Pythagorean theorem in this case. It is reasonable to ask what happens for a pentagon? Consider the two figures.

If we rotated the pentagon so that a vertex would go to a vertex then the figure would look exactly the same. This says that all the segments labeled by an a are equal to the same value, which we will call a. Similarly for the ones marked b and c. Now each of the triangles with base c and two sides a full diagonal (a + b + a) rotates onto the others; for example, AEC and ABD. Each of the triangles with base b and sides a (for example, E′A′C) is similar to the triangles with base c and sides a + b + a. So

$\frac{2a+b}{c} = \frac{a}{b}.$

We note that the line BD is parallel to AE and that AC is parallel to ED. This implies that AE has the same length as E′D. So c = a + b. We therefore have

$\frac{2a+b}{a+b} = \frac{a}{b}.$

Cross multiplying gives $2ab + b^2 = a^2 + ab$. Hence

$b^2 + ab = a^2.$

If we divide both sides of this equation by $a^2$ and set $x = \frac{b}{a}$ then the equation becomes

$x^2 + x = 1.$

Thus the ratio $\frac{b}{a}$ satisfies the above equation. This ratio was called the golden ratio by the Pythagoreans. It is this division that Kepler called a precious jewel. The ancients believed that a rectangle whose sides are in this ratio was the most pleasant to the eye. The Greeks designed the Parthenon so that its sides conformed to this ratio. The number $y = \frac{1}{x}$ is called the golden section and satisfies $y^2 = 1 + y$. A rectangle whose smaller side is in the proportion of the golden ratio to the larger is called a golden rectangle. It has the property that if we take a golden rectangle ABDF as in the picture below and we mark the point C so that the length of BC equals the length of the shorter side AB, then one has a subdivision into a square ABCH and a rectangle CDFH. The rectangle is another golden rectangle. To see this we observe that if b is the length of AB and if a is that of BD then CD has length a − b and FD has length b. We assert that

$\frac{a-b}{b} = \frac{b}{a}.$

To see this cross multiply: we are trying to see if $a(a - b) = b^2$, that is, if $a^2 = ab + b^2$, which is just the assertion that $\frac{b}{a}$ is the golden ratio. The point here is that we can now look at the new golden rectangle as a square and a golden rectangle. In fact, we can continue this forever. If in each of the squares we draw the part of the circle of radius equal to the length of a side starting at the far corner (relative to our labeling), we have a spiral. The arcs seem to fit smoothly. They do (but not as smoothly as the picture indicates), and we will understand why after we discuss the infinitesimal calculus.

2.2.4 Relation with the Fibonacci sequence.
If we recall the problem of Fibonacci: A rabbit takes one month from birth to become sexually mature. Each month a mature pair gives birth to two (assume a male and a female) rabbits. If you have a pair of newborn rabbits, how many pairs of rabbits will you have in a year? We start with 1 pair; after a month they have just become mature so there is still only one pair. They give birth at the end of the next month, so there are then 2 pairs. In one month the original pair will give birth again but the second pair will have just become sexually mature, so there are 3 pairs: 2 of the pairs sexually mature and 1 newborn. The next month the two pairs of mature rabbits give birth and the newborn pair matures; there are now 5 pairs of rabbits, 3 mature and 2 newborn. Next month there will be 5 mature and 3 newborn. The pattern (first apparently pointed out by Kepler) is: if the number of pairs at the beginning of month k is denoted $F_k$, with $N_k$ newborn and $M_k$ mature, then $M_{k+1} = F_k$ (every pair that existed at the beginning of month k is mature in one month) and $N_{k+1} = M_k$ (only the mature give birth in one month). Since $F_{k+1} = M_{k+1} + N_{k+1}$, we see that $F_{k+1} = F_k + M_k = F_k + F_{k-1}$. Viz. $F_0 = N_0 = 1$ (this is where we start), $F_1 = M_1 = 1$, $M_2 = 1$, $N_2 = 1$ so $F_2 = 2$. Now $F_3 = F_2 + F_1 = 3$. Similarly, $F_4 = F_3 + F_2 = 3 + 2 = 5$. Continuing in this way we have the sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ....

If we start with two squares of side one, put a square of side two on top of them, then a square of side 3 to the left, after that a square of side 5 below, etc. (as in the picture), and if we draw circles as in the case of the golden spiral, we find that we have a spiral that is almost identical. This leads us to consider the ratios $\frac{F_k}{F_{k+1}}$; we have 1, .5, .666..., .6, .625, .6153846, .6190476, .6176471, .61818..., .6179775, .618055... for the first 12 ratios to at least an accuracy of 7 digits. We note that the golden ratio is .61803398875 to 11 decimal places. This seems to indicate that if we use the notation $\gamma$ for the golden ratio and if we write $Q_k = \frac{F_k}{F_{k+1}}$ then

1. $Q_{2k-1} < \gamma < Q_{2k}$ for $k = 1, 2, 3, \ldots$
2. $Q_{2k} - Q_{2k-1}$ becomes arbitrarily small with k.
3. $Q_{2k-1} < Q_{2k+1}$, $Q_{2k} > Q_{2k+2}$.

If these observations are true then the Fibonacci sequence gives an effective way of calculating the golden ratio to arbitrary precision. These observations are true (as we shall soon see), but a much more surprising relationship is true. We first observe that the quadratic formula implies that

$\gamma = \frac{\sqrt{5} - 1}{2}.$

We also note that if we use the notation $\Gamma$ for the golden section then

$\Gamma = \frac{\sqrt{5} + 1}{2}.$

A theorem of J. P. M. Binet, proved in 1843, says

$F_k = \frac{\Gamma^{k+1} - (-\gamma)^{k+1}}{\sqrt{5}}, \qquad k = 0, 1, 2, \ldots$

Let us check this for some small k. If k = 0 then the numerator is $\frac{\sqrt 5 + 1}{2} + \frac{\sqrt 5 - 1}{2} = \sqrt 5$. So the formula is correct. If k = 1 then the numerator is $\left(\frac{\sqrt 5 + 1}{2}\right)^2 - \left(\frac{\sqrt 5 - 1}{2}\right)^2 = \sqrt 5$. To prove the formula for all k we will use mathematical induction. The assertion $S_n$ is that the formula is true for all k between 0 and n. We know that $S_0$ and $S_1$ are true. Let us assume that $S_n$ is true. We must show that $S_{n+1}$ is true. To do this we need only show that the formula is correct for $F_{n+1}$, and we may assume that $n + 1 \ge 2$. Thus $F_{n+1} = F_n + F_{n-1}$. Our assumption implies that

$F_n = \frac{\Gamma^{n+1} - (-\gamma)^{n+1}}{\sqrt 5}, \qquad F_{n-1} = \frac{\Gamma^{n} - (-\gamma)^{n}}{\sqrt 5}.$

If we add these two terms together we find that

$F_n + F_{n-1} = \frac{\Gamma^{n}(\Gamma + 1) - (-\gamma)^{n}(1 - \gamma)}{\sqrt 5}.$

We now observe that $\gamma^2 = 1 - \gamma$ and $\Gamma^2 = \Gamma + 1$. So

$F_n + F_{n-1} = \frac{\Gamma^{n}\Gamma^2 - (-\gamma)^{n}\gamma^2}{\sqrt 5} = \frac{\Gamma^{n+2} - (-\gamma)^{n+2}}{\sqrt 5}.$

This is the desired formula for $F_{n+1}$. Notice that we have given no indication as to why we thought that such a theorem might be true. The method of mathematical induction can only be used to prove assertions that we have guessed in advance or, perhaps, can derive using deeper insight. We will describe an alternate, more direct approach when we study matrix theory.
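A few lines of Python (ours, not part of the text) make the agreement between the recursion, Binet's formula, and the ratios visible.

```python
from math import sqrt

Gamma = (sqrt(5) + 1) / 2          # the golden section
gamma = (sqrt(5) - 1) / 2          # the golden ratio

F = [1, 1]                         # F_0 = F_1 = 1
while len(F) < 13:
    F.append(F[-1] + F[-2])

for k in range(12):
    binet = (Gamma ** (k + 1) - (-gamma) ** (k + 1)) / sqrt(5)
    # F_k from the recursion, F_k from Binet's formula, and the ratio Q_k = F_k / F_{k+1}
    print(k, F[k], round(binet), F[k] / F[k + 1])
```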
The method of mathematical induction can only be used to prove assertions that we have guessed 47 in advance or, perhaps, we can derive in using deeper insight. We will describe an alternate more direct approach when we study matrix theory. The formula of Binet easily implies the 3 assertions above. We have (using 1 = ) that if k is even than Qk = if k is odd then Qk = Also Q2k Q2k1 = + 2k+3 > 1 2k+4 2k+3 < . 1 + 2k+4 4k ( + 23 + 5 ) . (1 + 4k+2 )(1 4k+4 ) This implies 2. We leave 3. to the reader. We note that Binets formula can be used to prove many results about the Fibonacci sequence. One very nice formula is 2 Fn+1 Fn1 Fn = (1)n+1 . To check this we do the substitution of Binets formula in the left hand side of the equation: 2 Fn+1 Fn1 Fn = (( n+2 ()n+2 )( n ()n ) ( n+1 ()n+1 )2 )/5. If we multiply out the terms in the braces the left hand side of this equation is equal to n+2 ()n ()n+2 n + 2 n+1 ()n+1 . 5 We now use the facts that = 1 and 2 + 2 = 3. So the above display is indeed (1)n+1 . 2.2.5 Phyllotaxies. In this subsection we will discuss an apparent relationship between the Fibonacci numbers and the spiraling that occurs in plants. It has been observed that the number of petals of a specic type of ower is usually a Fibonacci number. Lilies have 3, buttercups 5, marigolds 13, asters 21 most daisies 34,55, or 89. The head of a ower (like a sunower or a daisy) can be seen to have two families of interlaced spirals, one winding clockwise and the other counterclockwise. The pair of numbers is (see the gure below) 34 and 55 or 55 and 89 for the sunower. Another such phenomenon is the spiralling of pine cones. Among the pine cones found in a cursory look in Del Mar, California one can nd 5,8 and 8,13 pine cones. 48 There are many attempts at an explanation as to why the Fibonacci numbers appear in so many ways in nature. The most convincing are related to the assertion that the golden ratio is the most irrational number. We will give one explanation of this statement in terms of continued fractions. First we will explain the idea of a continued fraction. If we have a number 0 < a < 1 then an expression for a as a continued fraction is a= 1 a1 + 1 a2 + a 1 3 +... With a1 , a2 , ... positive integers. This means that we should consider 1 1 , a1 a1 + 1 a2 , 1 a1 + 1 1 a2 + a 3 , 1 a1 + 1 a2 + 1 a3 + 1 a4 , ... as better and better approximations to a. These rational numbers are called the convergents (here we have the rst,second,third and fourth convergent). 1 Here is the method for nding a1 , a2 , .... Dene r1 = a , and a1 to be the largest integer less than or equal to r1 . In general, assuming rn and an have been dened and rn > an then dene rn+1 = 1 rn an and an+1 to be the largest integer less than or equal to rn+1 . If rn = an . Then the nth convergent is equal to a and we stop. If a is irrational this procedure will never stop. If a 1 then we set a0 equal to the largest integer less than or equal to a and we write a0 + a1 + 1 1 for the continued fraction and the a2 + 1 a3 +... 49 convergents are a0 + 1 1 , a0 + a1 a1 + 1 a2 , a0 + 1 a1 + 1 1 a2 + a 3 , a0 + 1 a1 + 1 a2 + 1 a3 + 1 a4 , ... n n Then one can show that if pn is the nth convergent then pn is in lowest terms q q and is closer to a then any fraction in lowest terms with denominator at most 2 qn . If a is the Golden Mean then r1 = 51 = 5+1 . Thus 1 < r1 < 2 so 2 a1 = 1. 1 2 r2 = = 5+1 51 1 2 thus r2 = r1 so a2 = a1 = 1. We note that this goes on forever, rn+1 = rn for n = 1, 2, ... and thus an+1 = an = ... = a1 = 1. 
Thus the partial fraction expansion of the Golden Mean is 1 1+ The convergents are 1 2 3 5 8 1, , , , , , ... 2 3 5 8 13 1 1+ 1 1+ 1 1+.... . n1 Which we recognize as FFn for n = 1, 2, 3, ... Recent explanations of phyllotaxies involve this irrationality and these convergents. The suggested theory is that if blobs of expanding matter that radiate from a central source are such that as they radiate out they repel then they should be propelled initially at a slope that is badly approximated rationally thus allowing the most space for the radiated initial blobs which are all assumed to be the same size. The fact that the golden ratio is so badly approximated makes it a likely candidate for this angle of radiation. The corresponding ratio of the counts of spirals then give a rational approximation to this number. One of the rst observations of this phenomenon is in the work of 1837 Auguste and Louis Bravais who observed this angle in the ratio of the left and right spiraling of leaves on many trees. In 1872 P.G. Tait extended the work of the Barvais brothers to the spiralling we have been discussing. A controlled experiment was performed by Stpane Douady and Yves Couder in 1993(La Reserche, 24 (1993), 26-25) which conrms these ideas. In their experiment they had a medium of liquid silicon on a disk and from the center of the disk they shot blobs of magnetized liguid. On the edge of the disk they had a strong magnetic source which would cause the blobs to radiate. They found that the count of the spiraling depended on the rate of radiation. The most likely count was a pair of consecutive Fibanocci numbers. However, by changing the rate they found other sequences such as 1, 3, 4, 7, 11, .... This sequence satises the same recursion as the Fibonacci sequence. An amusing discussion of this work can be found in the Mathematical Recreations column of Ian Stewart in the January 1996 Scientic American. 50 2.2.6 Exercises. 1. Join the vertices of a regular six sided gure. Can you see any interesting ratios, etc.? 2. Explain why the golden spiral looks smooth. 3. Show that Qk+1 = Qk1+1 . Use this to show assertion 3. above directly. Suppose that we have a sequence ak of positive rational numbers satisfying 1 ak+1 = 1+ak . Show that if limk ak exist then the limit is . 4. Use mathematical induction to show that n+1 = Fn +Fn1 , n = 1, 2, .... What is the analogous formula for ? 5. We dene a sequence E0 = 1, E1 = 1 and En+1 = E0 + E1 + ... + En . What can you say about this sequence? 6. Consider the sequence dened by the following rules, A0 = 3, A1 = 0, A2 = 2, An+1 = An1 + An2 . This sequence is called the Perrin sequence. In 1991 Steven Arno proved that if n is prime then n divides An . (A3 = 3, A4 = 2, A5 = 5, A6 = 5, A7 = 7, A8 = 10, A9 = 12, A10 = 17, A11 = 22, ...). It has been shown that calculating the remainder of the division of An by n is easy (in the sense of section 7 of Chapter 1). Devise a primality test based on this result. 2 7. Use the formula Fn+1 Fn1 Fn = (1)n+1 to show that consecutive Fibonacci numbers are relatively prime. 8. Find as many examples (or counter examples) to the phenomenon described in the above section (phyllotaxies). 9. Show that the nth convergent of the Golden Mean is Fn1 Fn . 22 7 10. Let a = . Show that the 0th convergent is 3 and the rst is can use 3.1416 as an approximation for ). (you 2.3 The Geometry of Euclid. When we think of the work of Euclid we think about his Thirteen Books of the Elements and plane geometry. 
We have already seen that this is a misconception. Books VII,IX and XI are concerned with number theory. Solid geometry also appears in several places Books X,XI,XII and XIII. He also wrote books on other topics. Some of his work still exists including his Optics and a book called Phenomena which is a treatise on spherical geometry as it applies to astronomy. He also wrote The Elements of Music which is unfortunately lost. However, his book Sectio Canonis on the Pythagorean theory of music still exists. Without a doubt, his reputation rests on his masterpiece: The Elements. Since the geometry in the elements is much better known than the number theory, we will make an even less complete study of it than we did of the number theory. As in the case of the number theory Book I begins with denitions 23 in this case. There are then 5 Postulates and 5 Common Notions. 51 2.3.1 The denitions. 1. A point is that which has no part. Like his rst few denitions in Book VII this denition must be taken with a grain of salt. He seems to mean that points are the smallest objects that we will consider. 2. A line is breadthless length. As we shall see a line is not necessarily a straight line. In fact, we will see an attempt in Denition 3 to dene a straight line. In modern terminology Euclids line would be a curve. (Denition 15 denes a circle as a part of a line.) 3. The extremities of a line are points. 4. A straight line is a line which lies evenly with the points on itself. This is Euclids expression for a line as we know it. It seems clear that he is asking us to picture a straight line and is just saying that our picture is correct. In a nutshell, a straight line is a line that has some sort of uniformity that should imply straightness. 5 denes a surface, 6 says that the extremities of a surface are lines and 7 denes a plane surface. These denitions are completely analogous to what he does for lines and straight lines. 8. A plane angle is the inclination to one another of two lines that meet each other and do not lie on a straight line. Here he is giving us the notion of an angle between (what we would call two curves). He doesnt seem to think that a denition of inclination is necessary. Furthermore he must be thinking of lines that have exactly one point in common (where they meet) but both do not lie on the same straight line. This is a bit confusing since the lines are not necessarily straight. We can conceive of curves that are partially in a straight line and partially o of it. With the use of the methods of Calculus one can give a notion of angle between two curves. But these curves must be well approximated by straight lines near the point where they meet. 9. And when the lines containing the angle are straight, the angle is called rectilineal. This is dening what we usually mean by an angle (that is between two straight lines). Next he denes a right angle. 10. When a straight line set up on a straight line makes the adjacent angles equal to one another, each of the equal angles is right and the straight line standing on the other is called a perpendicular to that on which it stands. Here we are asked to know what it means for two angles to be equal. Euclid seems to have no need to dene such a concept. It seems clear that he feels that 52 he must introduce some terminology but that all he is doing is describing objects with which we are already familiar. The next denitions dene obtuse angle to be one greater than a right angle and acute angle to be one less than a right angle. 
Euclid does not seem to feel that he has any need to explain the meaning of the terms less than or greater than in the context of angles. Denitions 13 and 14 are concern boundary and gure. A boundary is dened to be an extremity but an extremity in this context is not dened. Although there is an indication in Denition 3. What he seems to mean is that the boundary is swept out by extreme points of lines. A gure is that which is contained in a boundary. 15. A circle is a plane gure contained by one line such that all straight lines falling upon it from one point among those lying within the gure are equal to one another. 16. And the point is the center of the circle. So a circle is contained by one line. So a line is really what we think of as a curve. There is a point so that if we take a straight line with one extremity at this point and the other on the circle getting a straight line L then do the same for another point on the circle getting a straight line M and if we lie the two lines one on top of the other they are the same. Denitions 17 and 18 dene diameter and semicircle. We note that one of the parts of the denition of diameter is the rst Theorem that Eudemus attributed to Thales. We should also note that Euclid felt no need to prove this part of the denition. 19,20,21,22 dene various types of gures using the terminology with which we are all familiar. Denition 23 involves a concept that is needed in the statement of the fth Postulate. 23. Parallel straight lines are straight lines which, being in the same plane and being produced indenitely in both directions, do not meet one another in either direction. The point of the denitions seems to be to attach names to concepts that we already know. Euclids denitions are not denitions as they are understood in modern mathematics. 2.3.2 The Postulates. Here Euclid describes assumptions that he feels must be made as the basis of geometry. These are of two types. The rst 3 describe constructions that are possible. 1. To draw a straight line from any point to any point. In other words if we have two points there is always a straight line that joins them. 2. To produce a nite straight line continuously in a straight line. This can mean several things. He seems to want it to mean that we can choose any point on a straight line and have that point be one of the endpoints of a straight line of xed length. 53 3. To describe a circle with any center and distance. We can draw a circle with any center and any radius (in any plane). The next two are assertions about angles. 4. That all right angles are equal. 5. That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indenitely, meet on the side on which are the angles less than the two right angles. (In the picture below the angles in question are a and b. This is the famous parallel postulate. It seems obviously true and is conrmed by every picture we draw. We will see that it was the subject of intense speculation into the nineteenth century. The brunt of the study was to see if it could be deduced from the other 4 using the denitions and the common notions that we will now describe. This work on the parallel postulate will be studied in greater detail after we have developed a more sophisticated groundwork for our analysis. 2.3.3 The Common Notions. These are the basic axioms for equality and inequality. 1. Things that are equal to the same thing are equal to each other. 2. 
If equals be added to equals the wholes are equal. This common notion is a geometric assertion. It applies to areas, geometric gures and numbers (as in the denitions before Book VII). The next common notion should be interpreted in this way also. 3. If equals be subtracted from equals the remainders are equal. 4. Things which coincide with one another are equal. This is the basic method of showing that things are equal in the Elements. The proofs devise a method of laying one object onto another object in such a way that they coincide. That is they t together perfectly. This can be seen graphically in Proposition 4 of Book I which shows that if two triangles have to 54 pairs of equal sides and the included angles are equal then if you lay the angle made by the corresponding sides of one triangle onto that for the other the two triangles coincide. 5. The whole is greater than the part. 2.3.4 Some Propositions. Euclid is now in business. All terms he will need in Book I are dened (we should assume to his satisfaction also other books such as book II will dene more terms). The rest of Book I involves basic plane geometry. We give the avor of the proofs by looking at two examples, in detail. Proposition 1 and Proposition 47 (The Pythagorean Theorem) in Book I. We will rst look at Proposition 1. On a given nite straight line construct an equilateral triangle. This means that we are asked to show that if we are given a nite straight line (an interval) we can construct an equilateral triangle with one side equal to the given one. We will now give the proof as given in Euclid The argument is as follows. We have the line AB. We use Postulate 3 twice to make the two circles shown the rst with center A the second with center B and both with distance AB. Let C be the intersection of the two circles. Then AC = AB by the denition of circle (Denition 15) and BC = AB for the same reason (that AC and BC exist is Postulate 1). Thus AC = BC by 1. in the Common Notions. The triangle thus has all of its sides equal. This is ne except for one assertion that Euclid does not deem necessary to be proved: That the circles intersect. This is more serious than the lack of the need to prove what we called the Euclidean property in section 1.4. A proof of the existence of this intersection involves more sophisticated mathematics. At a minimum it involves real denitions of some of the terms. The crux of the matter has to do with the fact that a circle has an inside and an outside and 55 that a line (or a circle) that contains a point in the inside of the circle and a point on the outside must have a point on the circle itself. Proposition 5 is the assertion that the base angles of an isosceles (legs equal) triangle are equal. This is a strengthening of Thales second Theorem as we quoted from Eudomus. Proposition 15 is the third Theorem of Thales that we quoted. Proposition 26 is the fth Theorem of Thales in our list. The other Proposition that we will analyze in detail is number 47 in Book I. We call it the Pythagorean theorem. The proof below seems to be original to the Elements (in other words most of the other proofs are transcriptions of other peoples arguments). In right-angled triangles the square of the side subtending the right angle is equal to the squares on the sides containing the right angle. The basic idea is to show that the triangles ABD and F BC are equal as are the triangles AEC and BCK. 
To see how this proves the theorem we note that since the triangle ABD has base BD and height DL as does the rectangle with sides BD and DL (Euclid simply calls it the parallelogram BL). We conclude (as did the Egyptians, Babylonians and Proposition 41, Book I) that the rectangle BL is twice the triangle ABD. The same argument shows that the square ABF G is twice the triangle BCF . Hence since doubles of equals are equal to each other (this is a statement in braces without any further reference) this implies that the square ABF G is equal to the rectangle BL. Similarly, the square ACKH is equal to the rectangle CL. Since, BL and CL make up the square BCED the Proposition follows. (Euclid says: Therefore etc. Q.E.D.). We are left with the assertion about the triangles. We will consider the rst pair notice that AB = BF , BD = BC thus in light of Proposition 4 Book I 56 (Thales fth theorem in our list above) we need only show that the angles ABD and F BC are equal. To see this we observe that the angle DBA is the sum of DBC and ABC. The angle F BC is the sum of ABF and ABC. Since all right angles are equal ABF = DBC. So the assertion about the angles follows from Common Notions 2. 2.3.5 Exercises. 1. Prove the converse of the Pythagorean theorem. That is, if the square of one side of a triangle is equal to the sum of the squares of the other two sides then the angle opposite this side is a right angle. (This is Proposition 48 in Book I. Explain the proof in the Elements and give a proof using, say, trigonometry). 2. In Proposition 11 of Book II of the elements show that Euclid is showing that one can construct the Golden ratio. 2.4 Archimedes. Archimedes lived during the period 287-212 BC. He was a citizen of Syracuse. In his youth he is thought to have traveled to Egypt and while there he invented the water screw as a way of lifting large amounts of water. He developed a theory of levers and made the famous boast : Give me a place to stand on and I can move the Earth. He is said to have backed this up by raising a ship out of the water using one arm. He was also a military engineer who invented many weapons during the defense of Syracuse against the Romans. He is said to have used giant lenses to focus the sunlight to burn down the Roman eet. The history of his practical inventions is largely second hand since he wrote commentary on only one of these (On sphere making which is lost). Most of Archimedes writings on mathematics have been preserved. He wrote his work in the form of letters to his friends: Conon of Samos and Eratosthenes. After Conon died he sent his letters to Conons student Dositheus of Pelusium. When the Romans eventually invaded Syracuse in 212 BC their general, Marcellus, ordered that Archimedes and his household be spared in the ensuing massacre. However, when a soldier went to escort Archimedes to an audience with Marcellus, Archimedes was concentrating on a geometric problem. He told the soldier that he would come once he solved the problem. The soldier was furious and killed Archimedes: perhaps the greatest mathematician who ever lived. At this point we will discuss two of Archimedes works: The Sand-Reckoner and Measurement of the Circle. The rst is in the nature of a study of very large numbers. The second is the genesis of the elegant approximation 22 of 7 . In later chapters we will be looking at Archimedes work on what we call calculus (although the work alluded to above on the calculation of involves ideas that are usually associated with calculus. 
57 2.4.1 The Sand-Reckoner. This paper begins with an introduction written in the form of a letter to King Gelon of Syracuse and ends with a conclusion also addressed to Gelon. Let us quote from the initial material in the paper (as translated by Heath). There are some, King Gelon, who think that the number of the sand is innite in multitude; and I mean by sand not only that which exists about Syracuse and the rest of Sicily but also that which is found in every region whether inhabited or uninhabited. Again there are some who, without regarding it as innite, yet think that no number has been named that is great enough to exceed its multitude...But I will try to show you by geometrical proofs, which you will be able to follow, that, of the numbers named by me and given in the work I sent to Zeuxippus, some exceed ... that of the mass equal to the magnitude of the universe. [ Here the point that will be critical is the phrase no number has been named that is great enough...] He then discusses the various possibilities for the size of the universe with the idea that whatever sizes are believed he will always take one bigger. Included in the models that he considers is that of Aristarchus of Samos of which Archimides says: His hypothesis are that the xed stars and the Sun remain unmoved, that the Earth revolves about the Sun in the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of the xed stars, situated above the same center as the sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the xed stars as the center of the sphere bears to its surface. Archimedes goes on to discount this theory for technical reasons. However, his point is not to establish a theory of the universe but just to get an upper bound on its size. Now comes the point of the whole exercise. There was no known notation or theory of big numbers. Recall that the Egyptians really didnt get past 10 million. The Romans would be constantly inventing new symbols and would eventually run out of letters in the alphabet. The biggest number that the Greeks used was a myriad which is 10,000. Archimedes considers what happens if we multiply two myriads. One then has a myriad myriads. Then he proposes to take a myriad myriads and treat it as a basic unit (a number of the rst order) then he can multiply it by a myriad myriads. One can continue this way a myriad number of times and get a number that Archimedes called P (probably but we reserve this symbol for something else) a number of the second order. In modern notation P = (100000000)100000000 . He then observes that he can continue this process by taking P to be a number of the rst order and consecutively multiply 100000000 P by itself P times getting P P = (100000000100000000 )100000000 . This new number can now be treated as a number of the rst order and the process repeated once again. He then gives a reasonable argument that one of his new found immense numbers is big enough. He in fact argues that the number of particles in the universe is less than 1063 (much smaller that P ). The modern estimates are somewhat nearer to 10100 . 58 In modern mathematics we the ideas of this paper lead to the Archimedian Property that is given any number (we mean here a rational or real number) there is an integer that is strictly bigger. 2.4.2 Exercises. 1. 
Prove the following is a theorem in The Sand-Reckoner by induction (this is the way Archimedes proved it): If there be any number of terms in continued proportion say A1 , A2 , ..., An , ...and if the rst is 1 the second is 10 [so the third is 100] and if the mth term is multiplied by the nth term the distance [i.e. the number of terms between them] from this term to An is the same as the distance from 1 to Am . 2. In Archimedes paper he takes as the diameter of the Sun 30 times the diameter of the moon. Do you agree with this? (He quotes Euxedus, his own father Pheidias and Aristarchus for estimates of 9, 12 and 20 times. Thus he was estimating higher than anyone else at the time.) 2.4.3 Archimedes calculation of . We will next look at Archimedes study of the number . He is the rst to prove that 22 is a remarkably good approximation to the ratio of the circumference of 7 a circle to its diameter. In his paper Measurement of a Circle he in shows that 3 10 1 <<3 . 71 7 His method (as we shall see) could yield to arbitrary precision. The important point to note is that he has lower and upper bounds of (in decimal notation) 3.1408 and 3.1429 thus is 3.14.. to an accuracy of at least 0.002. After we study this remarkable result we will look at various ramications of Archimedes work that span about 2200 years. Before we begin his we will discuss the understanding of before Archimedes did his work. The Babylonians routinely used the value 3 for the ratio of the circumference to the diameter of a circle however in some tablets the other values closer to and perhaps including 22 were 7 indicated . In the Rhind Papyrus the value was taken to be 3 1 and sometimes 6 16 2 = 3.16... At least once in the Bible (Revised Standard Version 1952 the 9 King James Bible is a bit more poetic but has the same meaning) in 1 Kings 7-23 it says: Then he made the molten sea; it was round, ten cubits from brim to brim, and ve cubits high, and a line of thirty cubits measured its circumference. The ten cubits from brim to brim is the diameter and the circumference is thirty so the ratio is 3. One can argue that the Bible wouldnt bother with fractions. But even so 31 or 32 would be much closer to the correct value. It is interesting that the rst convergent of the partial fraction expansion of is 22 7 59 (see section 2.2.5). This means that there is no better rational approximation with denominator less than of equal to 7. We will now give a discussion of Archimedes method. The rst proposition involves the following diagram. The area of any circle is equal to a right-angled triangle in which one of the sides about the right angle is equal to the radius, and the other to the circumference of the circle. This proposition says the if a circle has radius r and circumference c then the area is rc . We know this in a dierent way. We know that c = 2r so the 2 proposition says that the area is r2 . However, this result allows us to calculate the area without knowing . The argument is truly ingenious. Let K denote the area of the triangle and let a be the area inside of the circle. Archimedes observes that there are three possibilities K < a, K > a, K = a. The point is to show that the rst two possibilities cannot occur. He rst assumes a > K and shows that this leads to a contradiction. He draws the inscribed square ABCD. He then bisects the arcs AB, BC, CD and DA and draw the lines from the center of the circle through the bisectors. If necessary bisect again and continue until the area of the inscribed gure is greater than K. 
To see that (under the hypothesis a > K) this is possible since all we need do is take the subdivision so ne that the sum of the maximal distances from the sides of the gure to the circle is less than a K. That this can be done is obvious from the picture and although the Archimedesmethod of determination of the subdivision involves an assertion equivalent with the desired one that is unproved. He now observes that it is easily seen that the area of each of the polygonal gures is less than K.. In fact the area is the sum of the triangles whose vertices are 60 consecutive vertices of the gure and the center. The height of each of these triangles is less than r and the sum of all the sides is less than c. Thus the area is less than K. So the case a > K is impossible. To show that the case a < K is impossible he argues as above using circumscribed polygons (see the picture). As we have pointed out there are still a few points in this argument that have not been proved (these are easily checked using trigonometry). However, if m is the area of any of the inscribed polygons as in the argument and if M is the area of any of the circumscribed polygons then we have M >K>m and M > a > m. To prove the result we must observe that for each (small) E there is a subdivision such that M m < E. This is basically what Archimedes is asserting. Notice how close to modern calculus this is. 61 Archimedes next proves the upper bound for using the above gure (the part OAC is to be thought of as part of the hexagon above it). In this gure the line BA is part of a diagonal of the circle. The line AC is tangent to the circle at A. He starts by taking the angle AOC equal to 1 of a right angle. He then 3 observes that OA > 265 . Fortunately, another Greek mathematician named AC 153 Eutocius inserted an explanation of this inequality. the actual ratio is 3. So 265 2 we must just check that 153 < 3. One checks that the square is 2.9999 to 4 decimal places. To see where the 3 comes from consider the following picture (the curve AN should be an arc of the circle of radius OA) The angles OAC and OM N are right angles. The angle AOC is 1 of a right 3 angle so the angles ON M and OCA are each 2 of a right angle. This implies 3 that N M = 1 (ON = OA since both are radii of the same circle). Now ON 2 ON 2 = N M 2 + OM 2 so OM 2 = 3 ON 2 = 3 OA2 . Using the fact that OAC 4 4 3 OM OM and OM N are similar triangles we see that MN = OA . Since MN = 2 = 3. 1 AC The assertion follows. Similarly, OC = 2 = 306 . Next we bisect the angle AOC CA 153 which yields the line OD in the picture above. Now Proposition 3 in Book VI of the Elements implies that CO = CD (see Exercise 1 in 2.4.4). Now we OA DA CA have CO+OA = CO + 1 = CD + 1 = CD+DA = DA . Now multiplying the OA OA DA DA OA two ends of this string of equations by OA we have CO+OA = OD . We now CA CA OA 265 OC 306 OA use the inequalities AC > 153 and CA = 2 = 153 . Thus OD = CO+OA > CA 265 306 571 2 2 2 153 + 153 = 153 . He now applies the Pythagorean theorem OD = AD + AO . 2 2 2 2 2 +AO +153 450 > 5711532 = 349409 (1532 = 23 409). Now Archimedes So OD2 = ADAD2 AD 23 apparently guesses another very good lower bound for a square root ((591+1/8)2 591 1 = 3494 28.7 66...) yielding OD > 1538 . The point here is that he now has a OA good lower bound for the ratio OD instead of OC . He can now bisect the angle DA CA AOD getting the point E in the main diagram above and argue as before getting 1172 1 a lower bound on OE > 1532 . He then bisects again and yet again. 
At his EA 1 time he has an angle the size of 48 of a right angle and at this fourth bisection he only does half the argument and gets an inequality OA OG 2 > 4673 1 8 153 . Now the 62 diameter of the circle is 2OA and HG = 2AG. Thus the ratio of the diameter of the circle to of the circumference of this 96 sided circumscribed regular gure 4673 1 2 is at least 15396 . The reciprocally of this then gives an upper bound for of = 3 + 467321 < 3 + 467221 = 3 + 1 . 7 2 2 The next task is to derive a lower bound for . Archimedes does this by starting with a regular inscribed hexagon and bisecting 4 times just as he did for the upper bound. 14688 4673 1 2 667 1 667 1 He starts with the above picture with the angle equal BAC to one third of a AC right angle. As before, CB = 3. This time Archimedes needs an upper 2 bound for 3. He chose 1351 since 1351 = 3.0000016... As before he bisects 780 780 the angle BAC getting the straight line AD. He observes that AD intersects BD at the point d. We note that the angles at C and D are right angles. Thus since dAC and BAD are the two halves of the angle just bisected we see that AD the triangles ADB, ACd and dBD are similar. Thus DB = BD = AC . Now Dd Cd CA Cd we observe (see Exercise 1 below) that AB = dB . Thus AC = AB (this implies Cd dB AD that (AC)(dB) = (AB)(Cd) which we will use in a moment). So DB = AB . Bd We also note that (AB)(Bd) + (AB)(Cd) = (AB)(Bd) + (AC)(Bd) (see the parenthetic remark). Thus AB = AB+AC (cross multiply). The denominator Bd Bd+Cd of the right hand side of this equation is Bd + Cd = BC. We therefor have AB AB+AC AB+AC AD . The outgrowth of all of this is DB = AB+AC . Bd = Bd+Cd = BC BC AC 1351 BA 1560 AD We now note that BC < 780 and BC = 2 = 780 . So DB < 2911 . We 780 now use the Pythagorean theorem for the right triangle BDA. Finding that 2911 2 2 2 2 AB 2 +AD2 AB 2 = BD2 + AD2 . So BD2 = BDBD2 < 780 + 1 = 2911 +780 . As 7802 before Archimedes must approximate a square root and he takes an upper bound 3013 3 AB 4 BD < 780 . He now bisects the angle BAD getting the line AE and proceeds AB in exactly the same way to get an upper bound for BE . He bisects two more times getting the line AG. Using the same technique he gets the estimate 63 66 < 66 4 . So GB > 2017 1 . Since GB is a side of a regular inscribed polygon AB 4 with 96 sides we see that the ratio of the perimeter of the polygon to the radius of the circle is greater than 6696 > 3 10 . 71 2017 1 4 There are many theories as to how Archimedes found his accurate upper and lower bound for 3 one that is very convincing can be found in A.Weil, Number Theory, Birkhuser, Boston, 1984. He suggests that Archimedes was applying the formula (5x + 9y)2 3(3x + 5y)2 = 2(x2 3y 3 ). AB BG 2017 1 Then according to Weil he started with x = 1 and y = 0 and since 52 3 32 = 2. We have 2 5 2 =3 . 3 9 5x + 9y 3x + 5y , 2 2 The iteration involves two parts (x, y) then 2 1 The rst iteration (x = 5, y = 3) yields 262 3(15)2 = 1, that is 26 = 3+ 152 . 15 In the second (x = 26, y = 15) one has 526+915 = 265, 326+515 = 153 2 2 and (265)2 3(153)2 = 2 thus 265 = 3 (153)2 . This gives Archimedes 153 lower bound. The upper bound is obtained by putting x = 265 and y = 153 into the rst formula. Getting the pair 5 265 + 9 153 3 265 + 5 153 = 1351, = 780 2 2 the upper bound used by Archimedes. 2.4.4 Exercises. (x, y) (5x + 9y, 3x + 5y) . 1. Consider the diagram (the curve AN should be an arc of the circle of radius OA) 64 with OD the bisector of the angle AOC show that OC = CD . 
(Hint: Use trigonometry. Let θ be the angle AOC. Then we are asked to show that

tan θ / tan(θ/2) − 1 = 1/cos θ.

Do this by using the usual trigonometric identities sin θ = 2 sin(θ/2) cos(θ/2) and cos θ = (cos(θ/2))² − (sin(θ/2))².)

2. In the calculation of the upper bound for π, Archimedes replaced the better estimate 3 + (667 1/2)/(4673 1/2) by 3 1/7. Why do you think he did it?

3. The Babylonians preferred the upper bound 3 1/6 over 3 1/7 for π. Can you give a reason for this?

4. Do the indicated iterations at the end of this section for sharper and sharper approximations to √3. What would you do to get good approximations of √5?

2.4.5 The iteration in Archimedes' calculation.

It is clear that Archimedes could in principle have continued his bisection procedure indefinitely. However, the calculations become more and more complicated, and even a powerful arithmetician (as Archimedes obviously was) would be stymied by the calculation after two more bisections. Furthermore, little would have been gained, since his initial choices of approximate square roots of 3 limited him to about 4 digits of accuracy. We now live in an age of cheap high-speed calculation power and can therefore implement many more iterations of Archimedes' method. Let us first abstract the iteration that Archimedes does 4 times for the upper bound and 4 times for the lower.

We will use the diagram of Exercise 1 above; we take the angle AOC to be such that 2m times it makes exactly one rotation. Let θ denote that angle. Then 2AC is a side of the regular m-gon circumscribed about the circle of radius OA, and 2NM is the side of the regular m-gon inscribed in the same circle. Thus the circumference of the circumscribed m-gon is 2mAC and the circumference of the inscribed one is 2mNM. We observe that AC/OA = tan θ and NM/OA = sin θ. Thus the circumscribed circumference divided by the diameter (2OA) is a = m tan θ, and the circumference of the inscribed divided by the diameter is b = m sin θ. We now bisect the angle AOC, getting AOD. Then using the same argument we find that the circumference of the circumscribed 2m-gon divided by the diameter is a′ = 2m tan(θ/2), and the corresponding ratio for the inscribed 2m-gon is b′ = 2m sin(θ/2). The key point is

a′ = 2ab/(a + b),   b′ = √(a′b).

Let us check this with standard trigonometry. We will use sin θ = 2 sin(θ/2) cos(θ/2) and cos θ = (cos(θ/2))² − (sin(θ/2))². If we use the first identity we find that

a′b = 2m tan(θ/2) · 2m sin(θ/2) cos(θ/2) = 4m² (sin(θ/2))² = (b′)².

This shows that the second identity is true. As for the first,

2ab/(a + b) = 2m² sin θ tan θ / (m(sin θ/cos θ + sin θ)) = 2m sin θ/(1 + cos θ) = 4m sin(θ/2) cos(θ/2)/(1 + (cos(θ/2))² − (sin(θ/2))²).

We now use the fact that 1 = (cos(θ/2))² + (sin(θ/2))², so the denominator of the last expression is 2(cos(θ/2))². Substituting, we find that the last expression is

4m sin(θ/2) cos(θ/2)/(2(cos(θ/2))²) = 2m tan(θ/2) = a′.

We will now use these observations to set up the iteration implied in Archimedes. We start with m = 6; then Archimedes has shown that tan θ = √3/3 and sin θ = 1/2. Thus the corresponding ratios (which we denote by a0 and b0) are a0 = 2√3 and b0 = 3. If we do the first bisection then we have a1 and b1 with

a1 = 2a0b0/(a0 + b0)  and  b1 = √(a1 b0).

In general, after n + 1 bisections we have

a_{n+1} = 2a_n b_n/(a_n + b_n)  and  b_{n+1} = √(a_{n+1} b_n).

Archimedes is using the upper and lower bounds b4 < π < a4 and extremely clever choices of approximate square roots. If you do the calculation using a computer you find that a4 = 3.14271... and b4 = 3.14103..., whereas Archimedes' estimates are 22/7 = 3.142857... and 3 10/71 = 3.140845... The estimates of Archimedes are therefore truly remarkable.
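Since the text appeals to machine computation here, the following is a minimal Python sketch of the iteration (our own code, not part of the original text; the function name is purely illustrative). It reproduces the values of a4 and b4 just quoted, as well as the values for 10 and 16 bisections discussed immediately below.

```python
# A sketch of the Archimedean iteration of this section (not Archimedes' own
# arithmetic): a_n and b_n are the perimeters of the circumscribed and inscribed
# regular 6*2^n-gons divided by the diameter, so b_n < pi < a_n.
from math import sqrt

def archimedes_bounds(steps):
    a = 2 * sqrt(3)   # a_0 = 6*tan(30 degrees), circumscribed hexagon
    b = 3.0           # b_0 = 6*sin(30 degrees), inscribed hexagon
    for _ in range(steps):
        a = 2 * a * b / (a + b)   # a_{n+1} = 2 a_n b_n / (a_n + b_n)
        b = sqrt(a * b)           # b_{n+1} = sqrt(a_{n+1} * b_n)
    return a, b

print(archimedes_bounds(4))     # (3.1427145..., 3.1410319...): the 96-gon bounds
print(22 / 7, 3 + 10 / 71)      # Archimedes' bounds: 3.142857..., 3.140845...
print(archimedes_bounds(10))    # about six correct decimals, matching the values below
print(archimedes_bounds(16))    # a_16 and b_16 agree to roughly eight decimals
```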
We note that a10 = 3.141592930... and b10 = 3.141592519... Thus 4 iterations gives 2 decimal place accuracy and 10 gives 6 decimal place accuracy. One nds that after 16 iterations a16 and b16 agree to 8 decimal places. This predicts that 2k iterations should give an accuracy of k decimal places (indeed a calculation shows that one has 50 digit accuracy after 100 iterations). This can be (essentially) proved as follows. We note that s 2an bn 2an b2 n an+1 bn+1 = = an + bn an + bn 2an bn (an bn ). (an + bn )( 2an + an + bn ) One can check that note that for all n 0 2an bn (an +bn )( 2an + an +bn ) p an+1 bn . 1 2+ 2 < 1 . To see this we rst 3 bn < an indeed if n = 0 this is the assertion that 2 3 > 3. Assuming this for n we note that the iteration implies that bn = Thus bn+1 = s an+1 (an + bn ) . 2an r an + bn < an+1 2an a2 (an + bn ) n+1 = an+1 2an since an > bn . Thus the principle of mathematical induction proves the assertion. In 2an bn (an + bn )( 2an + an + bn ) we divide the numerator and denominator by 2an bn and get We now observe that than 1 1 2(1+ 1 ) an bn 1 q . n 1 + an 1 + an +bn b 2an q an +bn 2an > 1 and > 1 . 2 Thus the expression is less = 1 . 2+ 2 67 There is something very odd about this recursion that is that the expression for bn+1 involves an+1 . Consider the following change in the recursion: an+1 bn+1 2an bn an + bn p = an bn . = One checks that with this recurrence starting with the same initial values we have agreement between a5 and b5 to 50 decimal places. There is only one problem with replacing Archimedes iteration with this one. It converges to 3.219546022.... Which could be an interesting number but it is not . What is it? The next number will unravel this mystery and lead to a method of determining to very high orders of accuracy. Before we go on to these developments we will make one general observation about the above iteration we rst note that. The iteration implies that we have p an bn bn+1 an+1 = (an + bn 2 an bn ) = (an + bn ) 2 an bn p bn an . (an + bn ) This implies that if n > 1 then bn > an . We note that bn an bn + an = bn an . We therefore have an bn 2 bn+1 an+1 = 2 (bn an ) . (an + bn ) bn + an We estimate the expression an bn 2. (an +bn )( bn + an ) For simplicity we assume that a0 1 and b0 1 we assert that an 1 and bn 1 for all n. If n = 0 this is our assumption. If we assume this assertion for n then an bn an and an bn bn so 2an bn > an + bn hence an+1 1 also an bn 1 implies that an bn 1 so 2 bn+1 1. We have already seen that an + bn 2 an bn = bn an so an + bn 2 an bn . Thus we have 1 an bn an + bn 2 2 bn + an (1 + 1)2 = 4. Thus and since an 1 and bn 1 we see that 1 an bn 2 8 . (an + bn ) bn + an This implies that in the modied iteration we have bn+1 an+1 68 1 (bn an )2 8 for n 0 if a0 and b0 are both at least 1. Actually one has a similar estimate if we only assume that a0 and b0 are bigger than 0 (we will see why in the next section when we relate this iteration to the arithmetic-geometric mean iteration.. This accounts for the rapid convergence. Starting with a0 = 2 3, b0 = 3. Then b1 a1 1 (a0 b0 )2 = 0.0269238... Now 8 b2 a2 b3 a3 2.4.6 1 1 (b1 a1 )2 3 (b0 a0 )4 = 0.00009063..., 8 8 n 1 1 (b0 a0 )2 , ... (b2 a2 )2 = 7 (b0 a0 )8 , ..., bn an 8 8 82n+1 The arithmetic-geometric mean iteration of Gauss. Recall the new iteration of the previous subsection: an+1 bn+1 2an bn an + bn p = an bn . = With a0 , b0 positive real numbers. If we write an = have the recursion un+1 vn+1 un + vn 2 = un vn . 
= 1 un and bn = 1 vn then we The rst is the arithmetic mean of un and vn and the second is their geometric mean. If u and v are positive then their mean or arithmetic mean is their average a(u, v) = u+v their geometric mean or multiplicative average is 2 m(u, v) = (uv) 2 . We note that a(u, v)2 m(u, v)2 = (uv) 0. Thus if we 4 start the iteration with u0 , v0 0 then un vn for all n 1. This iteration was discovered independently by J.L.Lagrange (1736-1813) and C.F.Gauss (1777-1855). Lagrange alluded to it in 1785 and Gauss studied it in 1790 (when he was about 14). We attach the name of Gauss since he did the most profound work on it in particular answering the question we asked at the end of the last section. The iteration is called the AGM On May 30, 1799 Gauss wrote (in his diary) that if we start the iteration with u0 = 1 and v0 = 2 then u1 and v1 n n are equal to Z dt 2 1 0 1 t4 to at least 11 decimal places for n large. Notice that he is predicting the value of the original variant of the Archimedean iteration. He was absolutely certain that the limit of the sequences was in fact this number. In his diary he said that should this be true then it will surely open a whole new eld of analysis. The area of analysis that was opened is the theory of elliptic functions which is still one of the most important areas of mathematics that has permeated every 69 1 2 aspect of the science and about which we shall hear much more later. Before giving Gausss solution to the general problem we will explain a possible reason why he might believe that the limit in the above case might be given by an integral of the above sort. We rst return to the Archimedean iteration of the previous section an+1 bn+1 2an bn an + bn p = an+1 bn = with a0 = tan , b0 = sin and 0 < < . Then as above we see that there is a 2 number L such that bn < L < an and that an bn can be made as small as we wish by increasing n. The amazing fact is that the number L is . Thus for 1 example if we start with a0 = 1, b0 = 2 then = . This could have lead him 4 to look at his iteration multiplied by . He was no doubt certain that integrals 4 of the type of his projected formula for his limit could not be calculated using elementary methods (e.g. modern Freshman or Sophomore Calculus). The integral Z 1 dt 1 t4 0 is one of the simplest of the type that he would have studied. He therefore would have known that it was about 1.311028777. From this and his no doubt 2 very accurate approximation to he could have easily come up with the ap8 proximation in his diary entry of 1799. Gauss later derived a formula for the limit of the AGM which can be found in volume 3 of his collected works. The solution is given in terms of a completed elliptic integral. We will just quote the formula (a very nice discussion can be found in Borwein and Borwein, Pi and the AGM. Consider the AGM then p since xa+xb = x a+b and (xa)(xb) = x ab for x, a, b > 0 we see that if we 2 2 denote be M (a, b) the limit of the AGM with u0 = a, v0 = b. Then if a, b > 0, b M (a, b) = aM (1, a ). It is therefore enough to calculate M (1, x) for x > 0. Here is the formula Z 1 2 2 d q = . M (1, x) 0 1 (1 x2 ) sin2 2.4.7 Exercises. 1. Consider the following iteration an + bn 2 2an bn . bn+1 = an + bn With a0 > 0, b0 > 0. Show by induction that an bn = a0 b0 . Also that an+1 = an+1 bn+1 = 70 (an bn )2 . 2(an + bn ) Use these observations to derive a very fast method of calculating square roots. 
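One way to read this exercise, sketched below in Python with exact rational arithmetic (our own illustrative code, not part of the exercise): since a_n b_n = a_0 b_0 stays constant while the gap a_n − b_n is roughly squared at each step, both sequences converge very rapidly to √(a_0 b_0). The output reproduces the fractions quoted in the note that follows.

```python
# Sketch of the square-root method suggested by Exercise 1: the product a_n*b_n
# equals a_0*b_0 at every step, and a_{n+1} - b_{n+1} = (a_n - b_n)^2 / (2(a_n + b_n)),
# so both sequences rush toward sqrt(a_0 * b_0).
from fractions import Fraction

def sqrt_by_means(a0, b0, steps):
    """Iterate a -> (a+b)/2, b -> 2ab/(a+b); both converge to sqrt(a0*b0)."""
    a, b = Fraction(a0), Fraction(b0)
    for _ in range(steps):
        a, b = (a + b) / 2, 2 * a * b / (a + b)
    return a, b

# With a0 = 3/2 and b0 = 2 the common limit is sqrt(3), since a0*b0 = 3.
for n in range(1, 4):
    a, b = sqrt_by_means(Fraction(3, 2), 2, n)
    print(n, a, b, float(b) ** 2)
# n = 2 gives a = 97/56, b = 168/97; n = 3 gives a = 18817/10864, b = 32592/18817,
# and (32592/18817)^2 = 2.9999999...
```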
(Note that if we start with a0 = 3 , b0 = 2 then a2 = 97 , b2 = 168 and 2 56 97 a3 = 18817 , b3 = 32592 , further (b3 )2 = 2.999999992... 10864 18817 2. Use the method in section 2.4.5 of the derivation of the Archimedean iteration to show that if = m then the Archimedean iteration (the main iteration in 2.4.5) starting with a0 = tan , and b0 = sin eyelids in the limit. 3. Make the appropriate change of variables to show that the value of M (1, 2) using Gausss general formula agrees with the one he predicted in 1799. 2.4.8 A short history of calculations of . As we have seen, Archimedes is the author of the famous approximation 22 for 7 . We have also seen that most ancient peoples who were aware of used 3. The Babylonians used the somewhat better approximation 3 1 . We will end 6 this short history with a method of approximation based on the AGM. After Archimedes the iteration described above was used to nd approximations to until the 17th century.perhaps the best usage (and perhaps the last) was by Ludolph van Ceulen (1540-1610) who used the method to calculate 34 digits of . This method converges relatively slowly so the computational overhead overwhelms hand computation. It wasnt until the advent of calculus that more precise approximations were found using more rapidly converging sequences. Until the middle of the twentieth century the approaches involved clever uses of a formula attributed to James Gregory (1638-1675) which says that x2 x4 x6 arctan(x) = x + + ... 3 5 7 If, in this series, we set x = 1 then we have 1 1 1 = 1 + + ... 4 3 5 7 p Edmond Halley (1656-1743) used x = 1/3 in Gregorys series to produce the series 1 1 1 1 = (1 + 2 3 + ...). 6 33 3 5 3 7 3 He used this to nd to 71 decimal places to do this he needed to sum at least p 143 terms of the series. The approximation is q3 with p = 2975751933403306660762781037851580732457179725218341337 8517664256040092164338566715216074032725294059375304662800 8784897242437840350479660097317139924948757850741584109975 3560486485710547648 and 71 q = 5468726754975023858173190008331026443083349550027969750 06063504744927456329014146000945985504325020793071970588029 48449190349218434866194124401527196795946520854577134466195 5929457343724625 This is an amazing achievement using hand calculation. Later variants of these methods are related to the formula of John Machin (1680-1752). He observed that 1 1 = 4 arctan( ) arctan( ) 4 5 239 the point here is that the rst term is easily calculated and the second is an alternation of terms that become very small rapidly. Machin used his formula to nd 100 digits of . In 1961 using an IBM mainframe D. Shanks and J.W.Wrench produced 100,000 digits of using two 3 term variants of Machins formula. (one to check the other). Using the same method one million digits were computed in 1973 by Guillard and Bouyer. Further precision has for the most part been based on the AGM. As of 1999 the record is held by Kanada, Takahashi 1999 with 206158430000 digits. We include here an iterative scheme for calculating that was discovered by Borwein and Borwein in the 1980s that is derived from the AGM. The iteration is as follows: 1 xn + 1 xn+1 = n0 2 xn yn xn + 1 yn+1 = n1 xn (1 + yn ) xn + 1 n1 n = n1 yn + 1 with x0 = 2, 0 = 2 + 2, y1 = 4 2. One can show that if n 2 102 n+1 n 0. This says that after 10 iterations we have to the accuracy of over 2000 decimal digits. After 20 we heve over 2000000 digits. 2.4.9 Exercise. 1. 
Use a computer algebra system to check that the iteration does indeed give the asserted accuracy for n = 2, 3, 4, 5 (asserted 8, 16, 32, 64 digits). Devise an algorithm combining 2.4.7 Exercise 1 with the above iteration to get a high precision algorithm for with no square roots. 72 3 The emergence of algebra As we saw in chapter one the ancient Babylonians and Egyptians had an understanding of much of what we would now call Algebra. For the most part their algebra comes down to us in the form of problems that are not very dierent from those that are assigned to modern students. The ancients certainly had an understanding of how to solve linear and quadratic equations in one variable. The Babylonians with the use of copious tables could solve some cubic equations. But they were hampered in their lack of two basic formalisms that we take for granted. The rst is that they had no concept of negative numbers and they had no notation such as our modern algebra which allowed them to handle an unknown quantity as if it were a number. There was still a basic distinction between the role of numbers for counting and numbers for measurement. As we shall see, the nal synthesis involves the identication of the two notions of number. 3.1 Algebra in Euclids Elements. In Euclids elements Book II can be considered to be devoted to algebra. For example, Proposition 4 book II says: If a straight line be cut at random, the square on the whole is equal to the squares on the segments and twice the rectangle contained by the segments. We will not go through Euclids proof here (which is surprisingly long). We will just point out that what it says is that the square ADEB is made up of the two squares CBIG, DF GH and the two equal rectangles ACGH and F EIG. In more modern notation the side of the big square is AB = x + y. The side of the square CBIG is y that of DF GH is x and the two adjacent sides of the two rectangles are x, y. Thus the content of the Proposition is (x + y)2 = x2 + 2xy + y 2 . We will see that until the time of Descartes, the part of mathematics that we consider to be algebra was consistently phrased in geometric terms. In Euclids number theory a number was a concatenation of unit intervals. He 73 only considers whole numbers. In his geometric algebra (Book II) he considers lengths and areas but gives no direct relationship with the concept of number which comes later (Book VII). Thus intervals, squares and rectangles, cubes are dealt with as if they are what we consider to be numbers. The addition and subtraction meant putting together gures as in the one above. The amazing aspect of all of this is that within these constraints mathematicians were able to do serious work in algebra such as solving a polynomial of degree3 or 4. The constraints were broken by the seventeenth century French mathematicians. 3.2 The Arabian notation. In chapter 1 we studied the methods that were used by several early cultures to represent numbers. We also looked at our own decimal system. This positional system was used by the peoples of the middle east and comes to us under the name Arabic notation. This notation when it appeared in Europe was very similar to our modern notation (however it is likely that it had its genesis in India). One of the most important and earliest western advocates of this system was Leonardo of Pisa (alias Fibonacci) who used the system in his book Liber Abaci (published in 1228). In this book he used the arabic notation for everything but fractions. 
For fractions he used sexagesimal, Egyptian fractions and common fractions (that is a/b in lowest terms). He preferred the latter two types. We have seen that he devised an algorithm to convert common fractions to Egyptian fractions. We also observed that he gave a complete characterization of Pythagorean triples. 3.2.1 The completion of the characterization of Pythagorean triples. The following argument involves the understanding of squares of integers. Fibonacci was so enamored of squares that he wrote a book Liber Quadratorum (Book of Squares) which contained the proof of the following theorem (see also Euclid Book X, Lemmas 1,2 before Proposition 29): If a, b, c are positive integers such that a2 +b2 = c2 then one of a and b, say, b must be even and there exist numbers m, n, x such that a = x(m2 n2 ), b = 2xmn, c = x(m2 + n2 ). If a, b are relatively prime we can take x = 1. To prove this assertion we rst show that one of a or b must be even. Suppose not. Then a = 2r+1, b = 2s+1 and so c2 = a2 +b2 = 4r2 +4r+1+4s2 +4s+1 = 4(r2 + r + s2 + s) + 2. We can thus conclude that c2 is even. This can only be so if c is even. But then c = 2t. We conclude that 4t2 = 4(r2 +r +s2 +s) +2. This leads to the conclusion that 4 divides 2. We can thus assume that b is even. To complete the hard part of the argument we rst observe that the last assertion implies the main assertion. Indeed, if x divides a and b then x2 divides c2 so x divides c. (You should be starting to see why the book had its title.). Thus if a, b, c is a Pythagorian triple and if x is the greatest common divisor of a 74 a b c and b then x , x , x is a Pythagorian triple. So we are left with showing the last assertion. We thus may assume a and b are relatively prime and that b is even. Since a2 + b2 = c2 it follows that c2 a2 = b2 . So (c a)(c + a) = b2 . We notice that if c were even then a must be even. But then a and b would have 2 as a common factor. Thus a and c are odd. If y is odd and divides both c + a and c a then y divides their sum and dierence which are 2c and 2a. But then x2 divides b2 . So x divides b which is contrary to our assumption. Thus if p is an odd prime so that pr divides b then p divides exactly one of c + a and c a thus p2r divides one of the factors. Hence if b = 2t pr1 pru is a prime u 1 factorization of b then we can reorder the indices so that c + a = 2v p2r1 p2rs s 1 2rs+1 and c a = 2w ps+1 p2ru with v + w = t. If v and w were both bigger than u 1 then 4 would divide both 2c and 2a. Since both are at least one we see that one of w and v must be one. This implies that the other must be odd. If v = 1 then c + a = 2m2 and c a = 2n2 . It is clear that if w = 1 we come to the same conclusion. Thus b2 = 4m2 n2 so b = 2mn. Also 2c = (c+a)+ (ca) = 2(m2 +n2 ) and 2a = (c + a) (c a) = 2(m2 n2 ). So a, b, c are of the desired form. 3.2.2 Exercises. 1. Observe that if m = 2, n = 1 then m2 n2 = 3, 2mn = 4, and m2 + n2 = 5. If m = 3, n = 1 then m2 n2 = 8, 2mn = 6 and m2 + n2 = 10. Thus aside from the factor of 2 and the order the two give the same Pythagorian triple. Show that if m, n are relatively prime then the greatest common divisor of any pair of the Pythagorian triple is either 1 or 2. 2. Show that if x, y are rational numbers such that x2 + y 2 = 1 then there exist integers m, n such that either (x, y) = ( or (x, y) = ( 2mn m2 n2 , ) m2 + n2 m2 + n2 2 a2 (Hint: y = 1 x2 . If x = a in lowest terms then 1 x2 = c c2 . If the c square root is rational then c2 a2 must be the square of an integer b. 
Thus a2 + b2 = c2 .) 3. What is the overlap between the sets described by the two formulas in problem 2? 4. If we divide the numerators and denominators of the rst expression in problem 2 by n2 and write t = m then we have n (x.y) = ( 2t 1 t2 , ). 1 + t2 1 + t2 2mn m2 n2 , ). m2 + n2 m2 + n2 Show that the only pair of real numbers (x, y) not covered by a value of t is (1, 0). This is called the rational parametrization of the circle. 75 5. Look at Euclid Book X. Lemmas 1,2 before Proposition 29 and write out what the assertions mean algebraically. 3.2.3 Polynomials of higher degree. Not as well known is the fact that Fibonacci studied cubic equations. Studying algebraic equations was quite dicult in his time due to a lack of appropriate notation and since the algebra of Fibonacci was still the geometric algebra of Euclid. Furthermore, cube roots were not constructed in the plane geometry so they were somewhat more mysterious. In his book the Flos (1225) he studied some cubic equations in particular x3 + 2x2 + 10x = 20. He proved that this equation has no rational roots and even no roots of the form u + v with u and v rational. He also gave an approximate solution to the equation in sexagesimal (1;22,7,42,33,4,40). The Middle Eastern mathematicians had methods of calculating roots of polynomials to arbitrary precision. Most notable is the work of Abul Kamil (850-930). Also Omar Khayyam (1050-1130) had interpretations of roots of certain cubics as intersections of conic sections. One reason for the slow progress in general methods of solution of polynomial equations was the lack of good notation (which persisted into the seventeenth century) and a lack of the ability to manipulate unknowns and indeterminates. For example, the unknown quantity x made sense to them (even to the ancients) as a quantity one could also say a cube a side and a face none of which are known. Then a description of a cubic equation could be given as a cube added to 2 times a face added to 6 times as side is equal to 12. We would write this as x3 + 2x2 + 6x = 12. The mathematicians developed clever short hand notations for such expressions. However, they did not go to the next stage and replace the 2, 6, 12 by indeterminates a, b, c thus getting x3 + ax2 + bx = c. Rather, they dealt with the specic equation with explicit coecients and used techniques that could work with many other coecients. We have seen this approach in Euclid. It persisted into the seventeenth century and the work of Vite and Descartes. 3.2.4 Exercises. 1. Show that there are no rational solutions to the equation x3 + 2x2 + 10x = 20. 76 (Hint: If x = a b in lowest terms then a3 + 2a2 b + 10ab2 = 20b3 . Thus every prime divisor of b divides a. Hence b = 1. Now conclude a divides 20 and check all of the cases.) 2. Convert Fibonaccis approximation to decimal and check that it is a good approximation. 3.3 3.3.1 The solution of the cubic and quartic The Tartaglia, Cardano approach to the cubic. In spite of the fact that modern algebraic notation did not exist in the sixteenth century the general solution to the cubic (degree 3) and to the quartic (degree 4) was deduced by the Italian mathematicians Niccolo Tartaglia (1500-1577) and Gironomo Cardano (1501-1576) for the cubic and Ludovico Ferrari (15221565) for quartic. The history of that endeavor is not the most savory in the annals of mathematics and in fact it is almost certain that the solution to the cubic is in fact due to Scipione del Ferro (1465-1526) but unpublished. 
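Before following that story further, here is a quick check of Exercise 2 of section 3.2.4 above (a small Python sketch of our own, not part of the text): it converts Fibonacci's sexagesimal value 1;22,7,42,33,4,40 to decimal and substitutes it into x³ + 2x² + 10x = 20.

```python
# Convert Fibonacci's sexagesimal approximation 1;22,7,42,33,4,40 to decimal
# and check how close it comes to solving x^3 + 2x^2 + 10x = 20.
digits = [1, 22, 7, 42, 33, 4, 40]          # integer part, then base-60 places
x = sum(d / 60**k for k, d in enumerate(digits))
print(x)                                    # 1.3688081078...
print(x**3 + 2*x**2 + 10*x)                 # about 20.000000000: good to roughly nine decimals
```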
It seems that neither Tartaglia nor Cardano were morally as strong as they were mathematicians. It also seems that Tartaglias role in the solution of the cubic was much more substantial than that of Cardano, although he seems to have been inuenced by the rumors that a solution by Ferro existed.. We will leave these historical questions aside and just point out that there is an English translation of the Ars Magna published by the M.I.T Press (1968), translated by T.R.Witner, also the book A History of Mathematics by C.B.Boyer, et al has an interesting discussion of this history and further references. What seems to be well documented is that Tartaglia could solve a general cubic of the form x3 + ax = b. It is quite conceivable that Cardanos contribution was to reduce the general cubic x3 + cx2 + dx = e to this form. In modern notation this is a fairly simple task. Set x = y u. Then the equation says y 3 3uy 2 + 3u2 y u3 + cy 2 2cuy + cu2 + dy du = e. So if u = c 3 then the equation is given in y as y 3 + ay = b with a = d c3 and b = dc + e 2c . This step seems to be truly minor in 3 27 our notation. But in the sixteenth century the methods used to derive such formulas were completely geometric. Recall that cubes had to be interpreted 77 2 3 as volumes and squares as areas. Also the formulas used had to be given as geometric properties of areas of geometric gures. The nal problem was that negative numbers were still not allowed and so there were many variants of the equations that needed to be analyzed. For example x3 + ax + c = 0 was not seen as the same x3 + ax = b with b = c. Similarly, the term ax might be on the right hand side of the equation. In addition to all of these complications, there was still no direct way of dealing with general quantities such as a and b above (the x was better understood). Thus rather than write the equation above Cardano would consider (say) x3 + 3x = 4. He would than say: Let the cube plus 3 times the side equal 4. The 3 and the 4 would take the place of the a and the b. He would then go through an equivalent geometric discussion to the one below with the special values of a and b. With these provisos we will now derive a solution to the above reduced form of the cubic. The critical idea is to write x = u v . Then substituting in the equation we have u3 3u2 v + 3uv 2 v 3 + a(u v) = b. u3 3uv(u v) v 3 + a(u v) = b. u3 v 3 = b. Now v = a 3u . That is If we take 3uv = a then the equation becomes So upon substitution we have a 1 u3 ( )3 3 = b. 3 u Multiply through by u3 and we have a u6 bu3 ( )3 = 0. 3 Apply the quadratic formula to solve for u3 and get r p b b2 + 4( a )3 b a b 3 3 u = = ( )2 + ( )3 . 2 2 2 3 Notice that Cardano must choose the plus sign. Now v 3 = have (at least as a possibility) sr sr 3 3 b 2 a 3 b b a ( ) +( ) + ( )2 + ( ) 3 x= 2 3 2 2 3 78 u3 b. Thus we b . 2 r !3 r !3 q q 3 3 b 2 a 3 b b 2 a 3 b Thus x + ax = (2) + (3) + 2 (2) + (3) 2 = b. 3 This is one of Cardanos solutions (depending on various signs as we have pointed out). Notice that in the course of this development we have made choices. However, if we assume that a > 0 and that the only cube roots we can have are positive then we can reverse the steps sr sr 3 b 2 a 3 b 3 b a b ( ) +( ) + ( )2 + ( ) 3 2 3 2 2 3 2 r r b 2 a 3 b 2 a a 3 ( ) + ( ) ( ) = 2 ( )3 = . = 2 3 2 3 3 3.3.2 Some examples. First let us give some examples of Cardanos formula. Consider the equation x3 + x2 + x = 14. The rst step is to eliminate the x2 . 
According to the recipe above, taking c = 1, d = 1, e = 14 we are reduced to y³ + ay = b with a = 2/3 and b = 385/27. We can now plug into the formula:

y = ∛( √((2/9)³ + (385/54)²) + 385/54 ) − ∛( √((2/9)³ + (385/54)²) − 385/54 ).

Observe that this expression involves only square roots of positive numbers, so at least it makes sense geometrically. If you do the calculation indicated you are looking at

y = ∛( (17/18)√57 + 385/54 ) − ∛( (17/18)√57 − 385/54 ).

We now know that y − 1/3 is a solution to our original equation. If you use a calculator and evaluate this expression numerically you will find that y − 1/3 is approximately 2, and if you substitute 2 into the original equation you will find that 2 is indeed a solution. This indicates that there could be serious difficulties in the use of the elegant formula above. We will look at several other such difficulties in the exercises.

Cardano and his contemporaries were much more worried about another problem, which we will now describe. First we must consider the form they were forced to use to write the solution to

x³ = ax + b

without recourse to negative numbers. We would simply replace a by −a and use the previous solution. Cardano did something equivalent using the substitution x = u + v. This gave rise to the solution

x = ∛( b/2 + √((b/2)² − (a/3)³) ) + ∛( b/2 − √((b/2)² − (a/3)³) ).

If the term under the square root sign was non-negative then he had no trouble understanding the solution. However, consider (as Cardano did) x³ = 15x + 4. The formula yields

∛(2 + √−121) + ∛(2 − √−121),

which made no sense to Cardano. If you ask a mathematical software package to evaluate this expression numerically it yields 4.0. A direct check shows that x = 4 is indeed a solution. The mathematics package would be hard pressed to see that this expression is exactly 4. (In Maple V version 4 the simplify operation doesn't yield 4; however, the factor operation does. Mathematica 4 (but not 3) actually returns 4 when it encounters this expression.) We note that x³ − 15x − 4 = (x − 4)(x² + 4x + 1). This implies that the equation has three distinct roots: 4 and the roots corresponding to the quadratic factor, which involve only square roots of positive numbers. Getting ahead of ourselves, we will see that if all three roots of a cubic are real and distinct then Cardano's formula always involves a square root of a negative number (see exercise 3 below).

3.3.3 Exercises.

1. Show that the number ∛( (17/18)√57 + 385/54 ) − ∛( (17/18)√57 − 385/54 ) − 1/3 is equal to 2 using Cardano's formula. (Show that the only real root of the corresponding equation is 2.)

2. Observe that if x is real then ∛(−x) = −∛x. Use this to see that the choice made in the derivation of Cardano's formula didn't change the outcome.

3. Consider the equation x³ − 2x = 5. Calculate the solution given by an appropriate variant of Cardano's formula. Next use a calculator or a computer to do Newton's iteration to derive an approximate solution. (Newton actually did this calculation to 5 decimal places in 1669.) The Newton method is to guess a solution x0; the iteration is

x_{n+1} = x_n − f(x_n)/f′(x_n).

Here f(x) = x³ − 2x − 5 and f′(x) = 3x² − 2. Thus if we start with the approximate root 2 then x1 = 2 + 1/10.
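For those who want to carry the iteration out by machine, here is a minimal Python sketch (our own illustration; Newton, of course, computed by hand):

```python
# Newton's iteration x_{n+1} = x_n - f(x_n)/f'(x_n) for f(x) = x^3 - 2x - 5,
# starting from the guess x_0 = 2, as in Exercise 3.
def newton_step(x):
    f = x**3 - 2*x - 5
    fprime = 3*x**2 - 2
    return x - f / fprime

x = 2.0
for n in range(1, 5):
    x = newton_step(x)
    print(n, x)
# x_1 = 2.1 exactly; x_2 = 2.094568... is within 2e-5 of the true root
# 2.0945514815..., and x_3 agrees with it to better than 1e-9.
```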
Thus if there are 3 3 real roots then the formula cannot be written directly in terms of real numbers. 3.3.4 The early attempts to explain the paradox. q q 3 3 2 + 121 + 2 121 This strange expression that must be 4 was a thorn in the side of the remarkable achievement of solving the cubic (and relatively soon the quartic). The resolution of this paradox that the expression must be 4 but involves meaningless objects would not be fully resolved for about 400 years. As we shall see it goes to the heart of what we understand of numbers. Cardano had in earlier studies encountered other types of equations with solutions had a form (u + v) + (u v) and he knew that 2u was indeed a solution. Such numbers are now called conjugate complex numbers and we know that they do indeed add up to a number closer to the sense of Cardano and his contemporaries. Rafael Bombelli (1526-1573) made a proposal the explain the paradox. He suggested the following wild thought. Suppose the cube roots p a pair of conof jugate numbers were conjugate? That is suppose we could write 3 2 + 121 = p u + v and 3 2 121 = u v. Then the irksome sum would be 2u. Since he knew that by all rights 2u should be 4 he chose u = 2. He then comp puted (2 + v)3 = 8 6v + (12 v)2 v. Thus if Bombellis wild thought is to work he must have 8 6v = 2. He was therefore forced to have v = 1 and he found (probably to his own amazement) that 2 + 121 = (2 + 1)3 and 2 121 = (2 1)3 . He now felt justied to plug his newfound cube roots into the Cardano formula and found that with his interpretation the Cardano solution was indeed equal to 4. This brilliant analysis was no doubt very convincing at the time. However, from our perspective it leaves open more questions than it answers. However, before we begin to attempt to study the larger issues we will need to bite the bullet and understand what is meant by numbers. This will be begun in the next section when we discuss analytic geometry. First we will give a short discussion of Ferraris solution of the quartic and another related problem that arises from that result. 81 3.3.5 The solution of the quartic. We rst describe Ferraris reduction of the solution of the quartic to the cubic. We are considering x4 + ux3 + vx2 + w = mx. Cardanos technique for eliminating the square term in the cubic can be used to eliminate the cube. That is replace x by x u . Then the equation is in the 4 form x4 + ax2 + b = cx. The idea of Ferrari is to complete x4 + ax2 to a square by adding sides. We are thus looking at a a2 (x2 + )2 = ( b) + cx. 2 4 The critical step is to throw in another parameter (say) y in the left hand side and to observe that (x2 + a + y)2 2 a a = (x2 + )2 + 2(x2 + )y + y 2 2 2 a2 = ( b) + cx + 2x2 y + ay + y 2 . 4 2 a2 4 to both For the last equation we have substituted ( a b) + cx for (x2 + a )2 . This 4 2 term can be written in the form Ax2 + Bx + C. With A = 2y, B = c, C = 2 ( a b) + ay + y 2 . We solve for y so that the quadratic equation has exactly 4 one root. That so that B 2 4AC = 0 (i.e. we eliminate the term in the quadratic formula). Substituting the values of A, B, C we have B 2 4AC = c2 8y(( a2 b) + ay + y 2 ). 4 This is a cubic equation in y. Let u be a root of this equation (which we presumably can nd using Cardanos formula). Then for this value we have (x2 + a + u)2 2 a2 b) + cx + 2x2 u + au + u2 4 B 2 = Ax2 + Bx + C = A(x ( )) 2A c 2 = 2u(x + ) . 4u = ( a c 2 + u)2 = 2u(x + ) . 2 4u This says that to nd a solution x we need only solve (x2 + x2 + a c + u = 2u(x + ). 
2 4u 82 That is To do this we can apply the quadratic formula. The point of this is that in light of Cardanos formula we can write a solution to the quartic as an algebraic expression that involves arithmetic operations on square roots and cube roots of arithmetic operations on the coecients of the equation. This is also true for the cubic and the quadratic formula does the same for degree 2. Of course, we must take into account the same provisos as we did for the cubic. When we study analytic functions of a complex variable we will come back to the sense in which Cardanos solution to the cubic (and thereby Ferraris of the quadric) is actually a well dened solution. In spite of these possible misgivings these results came at least 4000 years after the Babylonians understood how to solve the quadratic equation. The next natural problem was to nd a solution of the quintic (fth degree polynomial) in terms if arithmetic operations (addition, subtraction, multiplication, and division) and square roots, cube roots and fth roots (radicals). The greatest mathematicians of that time and in fact for about the next 200 years could not nd any clever method that would solve this problem. The answer to this problem was given by two of the most tragic cases in the history of mathematics. We will rst discuss the solution of the problem for the quintic. 3.3.6 The quintic The success of the Italian algebraists of the sixteenth century was extraordinary. The next step would be the quintic and then, of course, equations of higher degree. To the surprise of the mathematical community, there were no clever methods that they could nd to reduce the solution of the quintic to that of the quartic, cubic and quadratic and extraction of fth roots (or for that matter roots of any order). The prevailing idea had always been that one should be able to nd the roots of any polynomial by doing arithmetic operations and extraction of roots. However, in 1799, Paolo Runi (1765-1822) published his two volume treatise Teorie Generale delle Equazioni in which he included an argument to show that there was no such method of solving the quintic. As happens in the history of mathematics, announced proofs of major new results are often incomplete or even wrong. Runi, in fact was on the right track but wrong in detail. One can imagine the scrutiny to which this treatise was subjected. The proof of the impossibility for the quintic was given a rigorous proof by Nicolas Abel (1802-1829) at the age of 19 (notice that he lived at most 27 years!). He published his proof in the form of a pamphlet at his own expense in 1824. Due to his limited funds, he had to keep the pamphlet brief and for that reason it was extraordinarily dicult to understand. He later proved a theorem that applied to all equations of degree 5 and higher. It states If n 5 then there is no formula for a solution involving arithmetic operations and extraction of roots on the coecients of the equation xn + a1 xn1 + a2 xn2 + ... + an = 0. This assertion is now known as the Abel-Runi theorem. A great deal of mathematics ocurred in the time that intervened between the work of Cardano, 83 et. al. and the work of Abel. Most notably, the algebraic notation, which we now take for granted, was invented. Also the understanding and general usefulness if negative numbers was nally a standard part of mathematics. Another major development in the interim was the invention of complex numbers. 
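Getting slightly ahead of that development, modern complex arithmetic makes Bombelli's computation from section 3.3.4 routine. The following Python sketch (our own illustration, using the built-in complex type) checks that (2 + √−1)³ = 2 + √−121 and that Cardano's expression for x³ = 15x + 4 really does come out to 4 when the two cube roots are taken as a conjugate pair.

```python
# Bombelli's observation, checked with modern complex arithmetic.
# Here 1j denotes sqrt(-1), so 11j stands for sqrt(-121).
u = 2 + 1j
print(u**3)               # (2+11j): so (2 + sqrt(-1))^3 = 2 + sqrt(-121)
print((2 - 1j)**3)        # (2-11j): the conjugate cube

# Cardano's expression for x^3 = 15x + 4 is
#   cbrt(2 + sqrt(-121)) + cbrt(2 - sqrt(-121)).
# Taking the cube roots as the conjugate pair (2 + i) and (2 - i), the sum is 4:
x = (2 + 1j) + (2 - 1j)
print(x, x.real**3 - 15*x.real - 4)   # (4+0j) 0.0

# The same answer appears if one simply takes principal complex cube roots.
r1 = (2 + 11j) ** (1 / 3)   # approximately 2+1j
r2 = (2 - 11j) ** (1 / 3)   # approximately 2-1j
print(r1 + r2)              # approximately (4+0j)
```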
We will make a rst (relatively geometric) attempt at explaining complex numbers in this chapter and will approach this concept more analytically in the next chapter. These numbers were used in so-called conjugate pairs in Bombellis solution of the apparent paradox in the solution of the cubic. Within the system of complex numbers Carl Friedrich Gauss (1777-1855) proved the fundamental theorem of algebra which states Within the complex numbers every equation xn + a1 xn1 + a2 xn2 + ... + an = 0 with n > 0 has a root. Here we have two seemingly contradictory theorems. Abel asserts that there is no way of writing a formula (involving extraction of roots) for a solution if n 5 and Gauss asserts that even so there is a solution. We will encounter such apparently paradoxical situations throughout our investigations. It should also be pointed out that Abel sent his pamphlet to Gauss who was acknowledged to be the most important mathematician of his time. Gauss was furious with the brevity of the work and made scathing remarks about it. Abels article on this subject was eventually published in Crelles Journal (founded in 1826) in the rst issue which in addition contained 21 other articles by Abel. The theorem of Gauss will be studied in more detail in the later chapters (however, we will discuss an important special case later in this chapter). This theorem appeared in the thesis of Gauss and he went on to give numerous alternative proofs. The most notable aspect of the theorem is that it is not really a theorem in algebra. The reason for this is that, as we shall see, complex numbers (and in fact real numbers) are of a very dierent nature from integers and rational numbers. This dierence is the basis of mathematical analysis (modern calculus). Before we leave this subject in order to learn enough mathematics to discuss it in more depth, we should point out that the theorem of Abel only proves that there is no general formula. Obviously there are equations that we can solve by radicals. For example, x6 2 = 0. It is thus reasonable to ask the question: What equations can be solved by radicals? Here we mean that even though we cant nd a formula we can still write out a solution in terms of the coecients of the equation using arithmetic (that is addition, multiplication and division) and extraction of roots (square roots, cube roots, fth roots, etc.). Abel studied this problem also and laid the partial foundation of the mathematical theory of groups (terms like abelian groups are in the honor of Abels work). However, the solution of this problem was completed by another teenager, Evariste Galois (1811-1832). Galois gave a complete criterion as to when an equation can be solved by radicals. But we are now well ahead of ourselves in this story. We now return to the situation at hand 84 and begin the development of a broader notion of number that would at least include the expressions described (recall Bombellis analysis). In this broader formulation Gausss fundamental theorem of algebra will hold. We will next need to introduce the theory of Galois (now called appropriately Galois Theory) which lays the basis of the theory of groups. The latter will be studied in later chapters. To begin the analysis of the rst problem we must step back to the seventeenth century and study the work if Descartes. 3.4 Analytic Geometry. Ren Descartes (1596-1650) is now mainly known for the notorious x, y axis and Cartesian coordinates (which we will see he never directly used) and for the quotation I think therefore I am. 
Both are oversimplications of what he actually did and what he actually meant. We will discuss his geometry which was an important beginning to what we now take for granted in algebra. The work that we will discuss is The Geometry which was published in 1637 as an addendum to his treatise The Discourse on Method. The Geometry consists of three books (we would probably call them sections). The rst establishes a basis for the meaning of number in terms of geometry and establishes the notation that we still use today for polynomials with indeterminate coecients. In the second he shows how one can use his algebraic methods to analyze plane gures in terms of polynomial equations. The third analyzes 3 and higher dimensions. It relates his notation with the earlier works of Cardano, et. al. and for example writes out Cardanos formula in exactly the same way we do. There were several people whose work predated Descartes who understood the idea of independent variable and dependent variable. Nicole dOresme (1323-1382) actually did graphing of data much the way we do today (he did explicitly use what we call Cartesian coordinates thus a more accurate but cumbersome name might be Oresmian coordinates). We will have more to say about the work of this amazing man in the next chapter. Also, Francoise Vite (15401603) established our formalism of unknown quantities that we manipulate in the same way as if they were known numbers. Bombelli, in addition to his work on the mysteries of the cubic and the foundations of complex numbers wrote a treatise on algebra in which he also handled unknowns algebraically. Bombelli did not label his unknowns by letters, instead he invented a symbol for an unknown (not unlike our "frowning face" -( rotated ninety degrees). Fortunately that notation didnt catch on. 3.4.1 Descartes notation and interpretation of numbers. He begins his rst book with the following assertion (we will use the translation given by David Eugene Smith and Marcia Latham): Any problem in geometry can easily be reduced to such terms that a knowledge of the lengths of certain straight lines is sucient for its construction. Just as arithmetic consists of only four or ve operations, namely, addition, 85 subtraction, multiplication and the extraction of roots, which may be considered a form of division, so in geometry, to nd required lines it is merely necessary to add or subtract other lines; or else, taking one line which I shall call unity in order to relate it as closely as possible to numbers, and which can in general be chosen arbitrarily, and having given two other lines, to nd a fourth which shall be to one of the given lines as the other is to unity (which is equivalent to multiplication);... Before going on let us see what he means here. First he chooses a line segment which he calls unity and in the diagram below (which is essentially the same picture that occurs on page one in The Geometry) is denoted AB on the inclined line he measures BC (the rst line) and on the line containing AB he measures BD. He then joins the points A and C. From D he draws the parallel line to AC which intersects the line containing BC at E. He then says that if we consider the ratios of BC and BD to AB then the ratio of BE to AB is the product of the corresponding ratios. Let us demonstrate the correctness of this assertion (Descartes feels no need to explain any more than what we have already said). The triangles ABC and DBE are similar. 
Thus the corresponding sides are all in the same proportion (Euclid, Elements, Book VI, Proposition 10). Thus BE BC BD = AB . If we think of BE as c times a unit, BC as a times a unit, BD as b times a unit and AB as 1 times a unit then the assertion is just that a b = c. Descartes also had a method for doing division geometrically (we will give it as an exercise). We now come to an important point. Often it is not necessary thus to draw lines on paper, but it is sucient to designate each by a single letter. Thus to add the lines BD and GH, I call one a and the other b and write a + b. Then a b will indicate that the line b is subtracted from a; ab is the line a multiplied by b;... Here he is saying that a+b is just the line corresponding to hooking together BD and GH on the same line. ab denotes the geometric operation of multiplication. The symbol a is a bit more abstract than a line since it corresponds to a line measured by a unit (shades of Euclid!). He writes division as we do and aa is a2 and if this is multiplied by a then we have a3 , etc. The square 86 root of a2 + b2 is denoted a2 + b2 . The cube root of a3 b3 + ab2 is written 3 a3 b3 + ab2 and as he says similarly for other roots. Notice that so far he has only taken square roots of combinations of squares and cube roots of combinations of cubes or rectangular solids. Now comes the crux: Here it should be observed that by a2 , b3 and similar expressions, I ordinarily mean only simple lines, which however I name squares, cubes, etc., so that I may make use of the terms employed by algebra. This means that even though we might be looking at a line segment of length 2 times the unit and then considering a cube that has one edge that segment we can think of the corresponding volume as a unitless number 8. This is obvious to us but it was not standard at the time. Further, it leads to our modern approach to numbers being produced by geometry. Descartes goes on to study the variants of the quadratic formula that are formed by changing the signs of the coecients. Again his approach is quite modern and he writes such things as x2 = ax + b2 . To him negative numbers have a right to existence. However, since he still interprets the solution of the equation geometrically he looks three cases the above, x2 = ax + b2 and x2 = ax b2 . We show how Descartes handles the rst two. Consider the following gure: He takes LM N to be a right triangle with LM = b, LN = a . Now prolong M N 2 to M Q a distance equal to N L. Then x = M Q is the solution. If on the other hand we were considering the equation x2 = ax + b then we would use the same gure from the point N we lay o N P on the line N M with the length of N P = N L. This time x = P M is the answer. The last case is perhaps more interesting we are looking at x2 = ax b2 . For this we consider the following gure: 87 Here LM is of length b, LN is of length a perpendicular to LM and the circle 2 is of radius N L. The line through M is parallel to N L. There are three possibilities. The rst is that the circle cuts the line through M in 2 points R, Q and both M Q and M R are solutions. The second is that the circle touches in one point, say Q, then M Q is the solution. The third is that b > a that is the 2 circle doesnt touch the line through M . Then he asserts that the equation has no solution. The main distinction between Descartes and his contemporaries is that his reason for the geometric constructions is the quadratic formula. So in the rst example x = M Q = QN +N M. 
QN = a and the Pythagorian theorem 2 q q p 2 2 says that N M = QN 2 + LM 2 = a + b2 so x = a + a + b2 . Which is 4 2 4 the positive solution given by the quadratic formula. We will leave the other cases to the reader in the exercises. The point here is that in Descartes formalism numbers and their units have been separated. Vite had allowed for the handling of unknowns as numbers but he still considered a product of two numbers to be an area, of three a volume. Thus x2 + 2x + 1 makes sense to him as making an area that is a disjoint combination of a square of side x, a rectangle of side x and side 2 and a gure of area 1. For Descartes this "homogenization" is unnecessary. Even Descartes makes a distinction between positive solutions (actual) and negative (false). In the third case he has a third possibility no solution. In the The Geometry he has several results about counting actual solutions or converting false solutions into actual ones. Thus although he did not believe that negative numbers could be actual solutions to geometric problems he was aware of their existence in his algebraic formalism. Book 2 of The Geometry is a study of curves in the plane. Although, the familiar x, y axes of analytic geometry do not appear explicitly in Descartes work they are certainly implicit. We will come back to these ideas in the next chapter. In the next subsection we will consider his approach to what we now call analytic geometry. 3.4.2 Exercises. 1. Show how to divide using the same diagram as Descartes used to give a geometric interpretation of multiplication. 88 2. Consider the gure below. We quote Descartes: If the square root of GH is desired, I add F G equal to unity; then bisecting F H at K, I describe the circle with radius F K with center K, and draw from G the perpendicular and extend it to I, and GI is the required root. Show that this is indeed a geometric interpretation of the square root of GH. 3. Show that Descartes geometric method does indeed describe the solutions to the quadratic equations described above. 3.4.3 Conics and beyond. The second book of The Geometry contains the meat of the Cartesian method of algebraic geometry it has the title On the nature of Curved lines. The opening paragraph says: The ancients were familiar with the fact that the problems of geometry may be divided into three classes, namely plane, solid, and linear problems. This is equivalent to saying that some problems require only circles and straight lines for their construction, while others require a conic section and still others require more complex curves. I am surprised, however, that they did not go further, and distinguish between dierent degrees of these more complex curves ... The chapter ends with (the perhaps unwarranted) paragraph: And so, I think I have omitted nothing essential to an understanding of curved lines. Descartes begins by rejecting the study of certain curves such as spirals by saying that they really belong only to mechanics. We will study one example from that chapter that rst shows how congurations involving two lines (we will make this more precise) yield conic sections which can be described by quadratic equations and that if one if one allows a line to be a conic section then one has a cubic equation. We will also show how to use Descartes method to derive an equation for an ellipse. We start with the following picture. 89 The angles at A, B, L are right angles. 
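Before following what Descartes does with the sliding figure just described, readers who want to try exercises 1-3 of the previous subsection numerically may find a short computational check helpful. The Python sketch below is only an illustration: the function names and the sample values of a and b are ours, not Descartes'. It verifies that the similar-triangle construction really multiplies (taking AB as the unit) and that the point MQ of the construction for x^2 = ax + b^2 satisfies that equation.

    import math

    # Multiplication by similar triangles: taking AB as the unit, BC = a and
    # BD = b give BE with BE/BD = BC/AB, i.e. BE = a*b.
    def descartes_product(a, b):
        AB, BC, BD = 1.0, a, b
        return BD * BC / AB

    # The construction for x^2 = a*x + b^2: LMN is a right triangle with
    # LM = b and LN = a/2, and x = MQ = MN + NQ with NQ = NL = a/2.
    def descartes_root(a, b):
        MN = math.hypot(a / 2.0, b)
        return MN + a / 2.0

    a, b = 3.0, 2.0
    x = descartes_root(a, b)
    print(descartes_product(2.0, 5.0))            # 10.0
    print(abs(x * x - (a * x + b * b)) < 1e-12)   # True: MQ solves x^2 = ax + b^2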
Descartes looks at the curve traced out as follows (this is just a paraphrase of what he actually says the algebra, however is the same as his). The points G and A are xed. The dimensions of the gure KN L are xed and the side KL is slid up and down the line AB. As it slides we look at the path that the point of intersection, C, of the line joining G to L and the line extending KN . He then observes that the two quantities BC and AB determine the point C. Since they are unknown he uses the notation y for BC and x for AB. (This is as close as he gets to the x, y axes.) The quantities that are known (or xed) are AG which he calls a, KL which he calls b and BK LN which he calls c. He then uses similar triangles to observe that KL = BC. . NL b BL+b b Thus c = y . That is, BL = c y b. He uses similar triangles again to y a see that GA = BC . That is, x+BL = BL . This gives aBL = y(x + BL). So AL BL ab b b 2 c c y ab = yx + y( c y b) = yx + c y by. Multiplying through by b we have c 2 ay ac = b yx + y cy. We therefore have c y 2 = (a + c)y xy ac. b Which Descartes observes is the equation of a hyperbola. He then observes that if the gure to be slid were say a hyperbola then one would have gotten a higher order equation. Let us do one more example. We will follow his method to calculate an equation for the locus of points so that the sum of the distances from two xed points is xed (we now call the gure an ellipse). Consider the following diagram 90 P is the point on the ellipse. The two xed points are A and B. C is the midpoint of the line segment AB joining A and B. The line P D is perpendicular to AB. We will call the constant value of the sum of the distances from P to A and to B, 2a. We will also denote by c the length of the segment AC. Then AP + P B = 2a. We set x = CD and y = P D. Then the Pythagorian theorem says that AP 2 = (c + x)2 + y 2 , P B 2 = (c x)2 + y 2 . Thus AP 2 P B 2 Now AP 2 P B 2 = (AP P B)(AP + P B) = 2a(AP P B). We therefore have 2xc . a Since AP P B + AP + P B = 2AP and by the above AP P B + AP + P B = 2xc a + 2a we have xc AP = a + . a Similarly, AP + P B (AP P B) = 2P B. Thus AP P B = PB = a We have a This gives xc 2 a xc . a x2 c2 a2 = (c + x)2 (c x)2 = (c2 + 2cx + x2 ) (c2 2cx + x2 ) = 4xc. = y 2 + (cx)2 . Hence a2 2xc + x2 + y 2 = a2 c2 + c2 2 x . a2 = y 2 +c2 2xc+x2 . 91 It is convenient to write b2 = a2 c2 . Then we have x2 + y 2 = b2 + This implies that b2 2 x + y 2 = b2 . a2 Dividing both sides of this equation by b2 we have y2 x2 + 2 = 1. 2 a b Notice that here the x, y axes are not explicitly drawn but they are used implicitly. 3.4.4 Solution to higher degree equations in The Geometry. a2 b2 2 b2 x = b2 + x2 2 x2 . a2 a The Third Book of The Geometry has the title On the Construction of Solid and Supersolid Problems. This chapter establishes our normal notation for higher degree equations. It also lays the foundation for polynomial algebra. Descartes rst order of business is to make clear that only positive roots of equations are true. We quote: It often happens, however, that some of the roots are false or less than nothing. He gives the example, rst considering (x2)(x3)(x4) = x3 9x2 + 26x 24 with roots 2, 3, 4. He then multiplies by x + 5 that has false root 5 (notice a false root in his sense is still described by a positive number and labeled as false). Thus he has x4 4x3 19x2 + 106x 120 which has three true roots 2, 3, 4 and one false root 5. He uses this as an example for his celebrated Law of Signs. 
Which says (the assertions refer to the signs occurring before the coecients the coecient of x4 should be taken as +1). An equation has as many true roots as it contains changes of sign from + to - or from - to +; and as many false roots as the number of times two + or two - are found in succession. We note that 0 should be ignored. Thus the quartic example above the signs are +, , , +, thus it has 3 changes + + and two in succession so the theorem asserts that the number of true roots (i.e positive) is 3 and the number of false (negative) is 1. Thus the theorem is correct without any interpretation in this case. In general, there are some caveats. First we must count a root with multiplicity since x2 2x + 1 = 0 has signs +,-,+. Hence 2 sign changes. But it only has one root 1. The method that he nds the example by successively multiplying seems to be all he feels is necessary for a proof of his assertion. In fact, proofs are noticeably absent from Descartes 92 book. However, there are several detailed derivations of formulas. In this case one can easily see that the above assertion is false as stated. If we consider x2 2x + 2 = 0 then the sequence of signs is +,-,+ so he predicts two true roots and no false ones. However there are no real roots of this equation by the quadratic formula. We must therefore interpret the result to be about equations with all roots true or false. The next really substantive part of this chapter involves what happens when one starts with a polynomial in an unknown x and substitutes x = y a (in fact he uses the example of a = 3 in a variant of the polynomial he has been studying. He says that he has increased by 3 every true root and decreased by 3 every false root that is greater than 3. He then gives a full discussion of how to carry out the substitution on a specic quartic. He then considers the same calculation but this time diminishing by a that is substituting x = y + a (again a = 3 and he looks at a specic quartic). These calculations take up a full half page of this extremely terse book. But then we come to the point he says: ...we can remove the second term of an equation by diminishing its true roots by the known quantity of the second term divided by the number of dimensions of the rst term, if these terms have opposite signs; or if they have like signs by increasing the roots by the same quantity. In other words the reduction used in Cardanos Ars Magna. Descartes formulas look just like ours but his text is still far from our approach of considering negative numbers as true objects. It is also important to note that it is here that he considers equations with indeterminate coecients (we know them but they are arbitrary). He later applies these considerations to the cubic and for all practical purposes gives a derivation of Cardanos formula for the solution of the cubic geometrically but writes the formula in exactly the same way that we do. We note that Descartes attributes it to Ferro (he in fact says that ...the rule, attributed by Cardan to one Scipio Ferreus ...). In the rest of the book he considers equations of higher degree mainly 6 and gives methods of solving specic equations. This small book lays the foundation of a synthesis of algebra (polynomials) and geometry. It lays out the power of a notational scheme that has lasted through our time. 3.4.5 Exercises. 1. Prove Descartes rule of signs for polynomials of degree 1,2,3,4 with only real roots. 2. In the derivation of the equation for the ellipse it is necessary to have a > c. 
However, if we had a < c then we would have had to write b2 = a2 c2 . Follow 93 the line of argument from there to see that y2 x2 2 = 1. a2 b Can you make any sense out of this? 3.5 Higher order equations. As we indicated Abel proved that, in general, an equation of degree 5 or higher cannot be solved using algebraic operations combined with extraction of roots (i.e. solvable by root extraction). Galois gave a method of determining which equations could be solved. Before we study what these two prodigies actually did in later chapters. In this chapter we will content ourselves with a better understanding of their accomplishment by improving our understanding of the concept of number. We will also resolve some ambiguities that arise in Cardano formula when we include (as we must) complex numbers. We will not attempt, as yet, to be completely rigorous (perhaps we never will) with this concept but will build on Descartes ideas. We rst introduce the concept of complex number more carefully than we did when we studied Bombellis explanation of Cardanos strange example. 3.5.1 Complex numbers. To Descartes, once a unit square is chosen the square of a number a, a2 must be considered to be the area of the square of side a. Thus the square of a number can never be negative. However, in the Cardano formula we must include the possibility of taking the square root of a negative number. We note that if we wish to allow for this possibility we must only nd a meaning for 1 since if a < 0 then a = b with b > 0. So square root of a could be taken to be 1 b. Thus if we wish to allow square roots of any number we need only make up a symbol for 1. Engineers generally use j and mathematicians use i (for imaginary no doubt). Since there is no real number with our desired property we must throw in our new number i. Now we have a more complex type of number that looks like c = a + bi. We would like to maintain the rules of arithmetic so we are forced into (a + bi) + (c + di) = (a + c) + (b + d)i and (a + bi)(c + di) = (a + bi)c + (a + bi)di = ac + bci + adi + bdi2 = (ac bd) + (bc + ad)i. In other words with only our symbol i thrown in we can with apparent consistency dene an addition and a multiplication. If we assume (as we must) that 94 there is no relation of the form a + bi = 0 with a or b non-zero then we have a system that is consistent with arithmetic. We also note that if a or b is not 0 then (a + bi)(a bi) = a2 (bi)2 = a2 (i)2 (b2 ) = a2 + b2 > 0. Thus if we set a + bi = a bi. Then if c = a + bi, cc = a2 + b2 thus c c . = 2 cc a + b2 So c c c c= 2 =1 a2 + b2 a + b2 c This tells us that if c 6= 0 then 1 exists and is given by a2 +b2 . We now have c a number system that contains the square root of every real number. Let us call (as does everyone else) these numbers complex numbers. We assert that every complex number has a square root. In fact, the Fundamental Theorem of Algebra (mentioned earlier) asserts that every non-constant polynomial with complex coecients has at least one root. The proofs of this theorem involve a deeper understanding of numbers than we have as yet and we will defer this to the next chapter where we will come to grips with the problem of rigorously explaining numbers. We will content ourselves, in this chapter, to showing that every complex number has n-th roots for all n = 2, 3, ... For this we need trigonometry. 3.5.2 Exercises. 1. Show that (1 + i)2 = 2i. 2. If a and b 6= 0 are given real numbers and if c = show that (c + di)2 = a + ib. 3. Use the formula in 2. to calculate i. 
q a+ a2 +b2 2 and d = b 2c then 3.5.3 Trigonometry. We have seen in our discussion of Euclids Elements that the Greeks were very interested in the properties of circles. They also had a notion of angle and studied methods of bisecting and trisecting angles. Our trigonometry is based on the understanding that angles can be represented by points on the unit circle. This is motivated by the following calculation. Consider (x + iy)(u + iv) = (xu yv) + (xv + yu)i = t + is. We calculate t2 +s2 = (xu yv)2 + (xv +yu)2 = x2 u2 2xyuv + y 2 v 2 +x2 v 2 +2xyuv +y 2 u2 = 95 x2 u2 + y 2 v 2 + x2 v 2 + y 2 u2 = (x2 + y 2 )(u2 + v 2 ). The conclusion we have been aiming at is that if x2 + y 2 = 1 and u2 + v 2 = 1 then the point that corresponds to (x + iy)(u + iv) also has this property. If we dene the unit circle to be the set of all complex numbers z = x + iy such that x2 + y 2 = 1. Then we conclude that the product of two elements of the unit circle is on the unit circle. We also note that y 0 and x = iy is on the unit if circle then y = 1 x2 . If y < 0 then y = 1 x2 . Thus up to the sign we have parametrized the unit circle in terms of the value of the x coordinate and a sign. We are looking for a better parametrization in terms of a parameter . We wish to have z() = x() + iy() with z(1 )z(2 ) = z(1 + 2 ). If we multiply out we have x(1 )x(2 ) y(1 )y(2 ) = x(1 + 2 ) and x(1 )y(2 ) + x(2 )y(1 ) = y(1 + 2 ). Also we have assumed that x()2 + y()2 = 1. There is an amazing fact that we will prove in our discussion of calculus. It says that if we have two functions of a real parameter satisfying the above three conditions and one more that asserts that if we make a small change in the value of then this induces a small change in the value of each of x() and y() then there is a xed real number c such that x() = cos(c) and y() = sin(c). This is why the rst two equations look so familiar. For the moment we will assume that we are all experts in trigonometry. We have therefore observed that the points of the unit circle can be described as z() = cos() + i sin(). We also know that there is a number with the property that if we consider the values z() for 0 < 2 then every point of the circle has been parametrized with a unique parameter. We also note that if have set up our parameter so that we traverse the circle counter-clockwise then we most have z(0) = z(2). Then using the property z(a + b) = z(a)z(b) we must have z(0)2 = z(0). Now this implies (z(0) 1)z(0) = 0. Since z(0) is not zero (its on the unit circle). We must have z(0) = 1. We can now make an observation due to Abraham De Moivre (1667-1754) (and perhaps to Jean dAlembert p (1717-1783)). If z is a complex number then we can write z = rz() with r = x2 + y 2 and z() = z/r. We presume that we can take arbitrary roots of non-negative real numbers. So if we want an n-th 1 root of z we can take r n z( n ). This says that a complex number has at least one n-th root for each n. We have seen that a positive real number has two square roots x the square-root symbol always stands for the non-negative square root. The point here is the square roots of 1 are 1. The same sort of 96 thing happens in general. If an = bn = z then (a/b)n = 1. Thus the ambiguity in taking roots is contained in the n-th roots of unity. Here is the observation: z( 2k n ) = z(2k) = z(2)k = z(0)k = 1k = 1. 
n This is explained in terms of the following picture (here we have plotted 8 equally spaced points on the circle 2k with k = 0, 1, 2, 3, 4, 5, 6, 7) 8 We can see that we are just putting n (in this place 4) equally spaced points on the circle with the rst one 1. Multiplication by the second one clockwise cycles the points clockwise around the circle. Interpretation of Cardanos formula. We now see that there is a real problem with Cardanos formula (and Ferraris for that matter). In Cardano there are two cube roots and two square roots thus there is a possible thirtysixfold ambiguity in the formula. The only way to make it a formula again is to give a rule for how to choose the roots. Lets look at the equation x3 = ax + b again. The formula says v v s s u u 2 2 ub a 3 u b a 3 3 3 b b t t + + 2 2 2 2 2 2 b 2 3 is a root. We note that the formula involves both square roots of 2 a 2 symmetrically. So the ambiguity involves only how we choose cube roots. We b 2 3 will therefore use the same square root, v,of 2 a in both parts of the 3 b formula. We write u = 2 . Then we must choose a cube root of u+v and a cube 97 root of u v so that x = + is a solution to the equation. Let us calculate. With , arbitrary choices of cube roots. Then x3 = 3 + 32 + 3 2 + 3 . We therefore have x3 = u + v + 32 + 3 2 + u v = b + 32 + 3 2 . 2 Now 32 + 3 = 3( + ). We note that 3 3 = (u + v)(u v) = b 2 b 2 3 a 3 u2 v 2 = 2 2 a = 3 . Thus to help resolve the ambiguity of 3 cube roots we choose and so that = a . (Notice that we can do this by 3 multiplying one of or by a complex number whose cube is 1.) We now note that + = x (by our denition) and 3 = 3 a = a. Thus 32 + 3 2 = ax. 3 With these choices and x = + then the equation x3 = ax + b is satised. We also note that since the choice of forces that of and vice-versa there is now only a threefold ambiguity. Which is what we would have if a = 0. The general polynomial of degree three has 3 roots. 3.5.4 Exercises. 1. What are the 4 eighth roots of 1? 2. Find a fourth root of 1 and show that its powers give up to reader the 8 equally spaced points around the circle in the picture above. 3. Resolve the ambiguity in Cardanos formula for a solution of x3 + ax = b. 4. Write out the three roots of the equation x3 + 2x = 4. 3.5.5 Polynomials of degree 5 or higher. We will begin this section with a special case of Gausss fundamental theorem of algebra. A fuller explanation of the argument in the next subsection will be given in the next chapter. Also the following theorem will be an ingredient in our development of the full theorem. We will see that the result is based on a deeper understanding of the concept of a real number. We will be studying the full fundamental theorem of algebra in the next chapter. Here we will give an argument for polynomials with real coecients of odd degree that uses methods of analysis (the subject of the next chapter). The proof involves a deep property of real numbers which we will assume. The reader who has not had any introduction to the manipulation of inequalities might nd that the proof below is gibberish. Try reading it anyway. The mysteries will be expanded on in the next chapter. 98 Polynomials of odd degree. The purpose of this subsection is to discuss the following Let f (x) = a0 + a1 x + ... + an xn be a polynomial with an 6= 0, a0 , ..., an real numbers and n odd. Then there exists a real number c such that f (c) = 0. Notice this assertion is not about arbitrary polynomials but only ones of odd degree and having real coecients. 
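Before sketching the argument, here is a minimal numerical illustration of the theorem (a Python sketch; the bisection helper, the starting interval, and the example polynomial are our choices, and the same polynomial reappears in the exercises below). Once an interval on which the polynomial changes sign is found, the intermediate value property discussed below guarantees a root inside it, and repeatedly halving the interval approximates that root.

    def bisect_root(f, lo, hi, steps=60):
        # assumes f(lo) and f(hi) have opposite signs
        for _ in range(steps):
            mid = (lo + hi) / 2.0
            if (f(lo) < 0) == (f(mid) < 0):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    # an odd degree example with real (in fact rational) coefficients
    f = lambda x: x**5 + 2 * x**2 + x + 1
    r = bisect_root(f, -2.0, 2.0)   # f(-2) < 0 < f(2), so there is a sign change
    print(r, f(r))                  # r is about -1.227 and f(r) is essentially 0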
In particular, if the coecients are rational then there is a real root. As we observed above this result is a consequence of a deep property of real numbers which will be delved into more deeply in the next chapter. We rst note that we may assume that an = 1 since we can divide through by an . Thus we are looking at f (x) = xn + (a0 + a1 x + ... + an1 xn1 ). We assume that C > |ai | for all i = 0, ..., n1. Then |a0 +a1 x+...+an1 xn1 | |a0 | + |a1 ||x| + ... + |an1 ||x|n1 . Then |a0 + a1 x + ... + an1 xn1 | C + C|x | + ... + C|x|n1 . Thus if |x| > 1 then we have |a0 + a1 x + ... + an1 xn1 | nC|x|n1 . We now note that if x is real then f (x) xn + nC|x|n1 . Now suppose that x < 0 and |x| > 2nC. Then since n is negative xn = |x||x|n1 . Thus f (x) |x||x|n1 + nC|x|n1 < 2nC|x|n1 + nC|x|n1 = nC|x|n1 < 0. We conclude that if x < 0 and |x| > 2nC then f (x) < 0. We note that if a and b are real numbers then a + b a |b|. Indeed if b 0 then this is an equality and if b > 0 then b > |b|. Thus if x > 2nC and x > 1 then f (x) xn |a0 + a1 x + ... + an1 xn1 | xn nC|x|n1 = |x||x|n1 nC|x|n1 2nC|x|n1 nC|x|n1 = nC|x|n1 > 0. We have thus shown that if x < nC and x < 1then f (x) < 0 and if x > 2nC and x > 1 then f (x) > 0. Fix u < 1 and u < nC. Fix v > 1 and v > 2nC. Then f (u) < 0 and f (v) > 0. The deep property that we need is that if g is a polynomial and if we have two real numbers a < b then for every real number, c, between g(a) and g(b) there exits a real number, y, with a y b such then g(y) = c. We apply this to f . Then since f (u) < 0 and f (v) > 0 the number 0 is between f (u) and f (v) and thus there exists y with u y v and f (y) = 0. The property we have used is called the intermediate value property and it applies in much greater generality than polynomials (as we shall see in the next chapter). 99 Why havent we found a paradox? In the previous subsection we have observed that if we have a polynomial of odd degree with real coecients then it has a real root. The existence of the root is demonstrated using a deep property of the real numbers not by giving a formula. This leads to the question: If we have a polynomial of degree 5 with rational coecients then what is the nature of the real numbers that are its real roots? The point is that for degrees 2, 3 and 4 the roots were found it the collection of complex numbers that can be found by the following operations on rational numbers: 1. Arithmetic (addition, subtraction, multiplication and division). 2. Extraction of roots ( 2 x, 3 x, ...) which we now understand how to do using trigonometry. For example in Cardanos formula we must extract a square root of an expression involving the coecients of the polynomial, do arithmetic with that not necessarily rational number combined with a further coecient and then extract cube roots and then subtract these numbers. The amazing outcome of the work of Runi, Abel and Galois is that these operations are not enough to nd all roots of polynomials with rational coecients of degree at least 5. This goes far beyond showing that we cannot nd an explicit formula using only operations of type 1. or 2. It shows that roots of polynomials with rational coecients form an algebraic object that is much more subtle than was imagined. In the next chapters we will endeavor to explain the analysis that is involved in Gausss fundamental theorem of algebra. 
This analysis comes from the foundations of the dierential and integral calculus which had been developed for totally dierent purposes (the determination of velocities, accelerations and tangents). Also the study of roots of equations led to what is now called abstract algebra. The abstraction of addition, multiplication and division. The whole is a startling edice that will be one of the main subjects of the remaining chapters. We will content ourself with a discussion of why the Abel, Runi, Galois theory was a surprise to the mathematicians of the eighteenth and nineteenth centuries (as the theory developed). This involves the question of why mathematicians with only the knowledge that there is a formula for a solution of an equation of degree two with (say) rational coecients seemed to expect that there would be analogous formulas in higher degrees? Since it apparently took mankind about 4000 years from the realization that there was a quadratic formula to the time of Cardano and Ferrari when formulae for degrees 3 and 4 were discovered, why were mathematicians not looking for reasons why this couldnt be done? The expectation that it could, in fact, be done was correct for degrees 3 and 4 but denitely incorrect for higher degree. This time it took approximately 200 years to come to the realization that one could not do for degree 5 what was done for 2,3 and 4. This is not unlike the prevailing feeling of mathematicians before the nineteenth century that the parallel postulate was a consequence of the other axioms of Euclids geometry. The point is that 100 the mathematicians had decided that they believed the validity of a statement that they could not prove. Since there was no justication for this belief one should perhaps call it a prejudice. In the latter part of the twentieth century and the beginning of this (the twenty rst) century there has been a new debate on the question of truth without proof Brilliant expositors of mathematics justify their views of computability and articial intelligence with just such an idea. Indeed, the argument is that if a human being can discern the truth of an assertion without a proof then he can nd true statements that could never be found by a computer. Thus human beings must be more than biological computers. We will not enter this fray which is poised at a higher level than we have scaled as yet. Rather, the history of the search for formulae for the solution of polynomial equations and the prejudice that this could be done is perhaps related to a problem with the exibility of the human mind. That so many believe that assertions must be true even though we cant prove them may be related to the prejudices that were at the root core of the horrible events of the twentieth century. 3.5.6 Exercises. 1. Does the intermediate value property: If a polynomial, f (x) with real coecients, takes two values f (a) > 0 and f (b) < 0 then there exists a real number, c, between a and b such that f (c) = 0. Seem obvious? Is it it true if we replace the word real by rational? 2. Let f (x) = x5 + 2x2 + x + 1 show that there is a real root between 1.23 and 1.22 by calculating the two values and seeing that one is negative and the other is positive. If you have access to a computer algebra package you could use it to check that the intermediate root does indeed exist. 4 The dawning of the age of analysis. 
Archimedes (287-212 BC) proved that the area of a circle is equal to the area, A, of a right triangle with one side equal in length to the radius and the hypotenuse equal to the circumference (see in discussion in Chapter 2). His method was to observe that the area of the triangle in question is either equal to, strictly less than or strictly greater then the area of the circle. He then inscribes regular polygons with the number of sides increasing indenitely and shows that they are eventually bigger in area than any number strictly less than A he then circumscribes regular polygons of increasingly many sides and shows that eventually they have area less than any number strictly larger than A. He then concludes that the only possibility left is that A is the area. This type of argument replacing direct calculation with upper and lower bounds is the method of modern analysis. The main impetus for the development of a rigorous branch 101 of mathematics which we now call analysis was the need for a consistent underpinning for (what we now call) Calculus (Isaac Newton(1642-1727), Gottfried Leibniz(1646-1716). The term calculus is a generic term that roughly means a method of calculation. It was a revolutionary idea that led to simple methods of calculating areas and tangents in geometry and velocities, accelerations and trajectories in mechanics. In particular, the clever method of Archimedes becomes unnecessary within the framework of Calculus. Unfortunately, the early methods were completely formal implicitly assuming that one can deal with quantities that were so small that their squares could be treated as 0 (uents in the terminology of Newton, innitesimals to others). Although there was no rigorous notion of innitesimal in the seventeenth and eighteenth century the idea led to such amazing simplictions of dicult problems that the theory led to a revolution in mathematics. As we shall see in later chapters a more rigorous approach to solving the same problems was developed in the nineteenth century and was based on the ideas of modern analysis. We wiill see in this chapter that Fermat (1601-1665) had developed methods consistant with modern analysis to compute certain important areas and tangents. But he had no general calculus based on modern analysis. In the twentieth century a more rigorous version of the innitesimal calculus was developed by Abraham.Robinson(19181974) based on a deep understanding of logic which made the formal methods of the seventeenth and eighteenth centuries more acceptable in the twentieth centuries. All attempts at understanding a rm basis of calculus are in the end based on attempts to understand the real number system. This was part of our goal in the previous chapter. There, we showed how Euclid and Descartes had developed numbers out of geometry. Descartes went much further and showed that the algebraic manipulation of numbers could replace the clever methods of geometry. However, Descartes numbers did not have any existance beyond geometry. We also saw that the basic question of whether ther exist roots of polynomial equations and whether or not we can calculate them also devolves on the question: What is a real number and thereby what is a complex number. We will be studying these points in this chapter. The modern formulation of real and complex numbers will have to wait for the next chapter. 4.1 4.1.1 Early aspects of analysis. Zenos paradox. We will begin this chapter with a standard puzzle usually attributed to Zeno (490-425 B.C.). 
Suppose there were a tortoise and a hare (sometimes it is Achilles) such that the hare moves twice as fast as the tortoise. To simplify things we assume that the tortoise can move 1 unit in a second and the hare can move 2 units in a second. Suppose that the tortoise starts moving first along a straight line and travels a distance d before the hare begins moving along the same line. We then have the following situation: in the first d/2 seconds the hare is d units from the starting point and the tortoise is at 3d/2. In the next d/4 seconds the hare has moved to 3d/2 units and the tortoise is at 7d/4, that is, the tortoise is still ahead by d/4 units. After the next d/8 seconds the tortoise will be ahead by d/8 units, etc. Thus the hare will never catch the tortoise! We know that there is something wrong here since it is obvious that the hare will eventually pass the tortoise. Aristotle (384-322 B.C.) used this paradox as evidence for the premise that infinity is meaningless. This is certainly a practical point of view. We cannot do an infinite number of operations, each of which takes at least a fixed amount of time to accomplish. But this is not what is happening in our discussion of the tortoise and the hare. If we redid the steps and did the measurement in fixed units of time, say one second, then after n seconds the tortoise would be at d + n units from the start and the hare would be at 2n units. Thus after (say) d + 1 seconds the tortoise would be at 2d + 1 units from the start and the hare would be at 2d + 2 units. That is, they will pass each other before d + 1 seconds elapse. Let us look at our original analysis and make the problem more concrete by taking d to be 100. Then after 10 steps the tortoise is 100/1024 units ahead of the hare. After 20 iterations of this procedure it is 100/1048576 ahead. If, say, the units were meters then this is less than .0001 meters, and the amount of time for the tortoise to travel that far would be that many seconds. This is absurd. There is no way we can measure that small an interval of time (let alone what we would have a few iterations further along). The time intervals are becoming so small as to be meaningless. However this is not a solution to the paradox. For example, it is possible for the tortoise to move as far as he wishes even if he moves in certain increments of time that become arbitrarily small. Here we look at just the tortoise and ask where he is from the start after 1 second, a 1/2 second later, a 1/3 second later, ... Then after 2 such time intervals he would have gone 3/2 = 1.5 units, after 4 he would have gone 25/12, about 2.08, after 8 it would be 761/280, about 2.72, after 16 it would be 2436559/720720, about 3.38, after 200 it would be about 5.88, and after 10000 it would have gone about 9.79 units. We will show that the numbers defined in this way increase without bound (see the section immediately below on the harmonic series). Thus just saying that the time increments are becoming too small to measure does not resolve the puzzle. Many look upon this puzzle as indicating a need for a better understanding of infinity. We will take a different approach and explain how the techniques of modern analysis show that the puzzle is merely a misunderstanding of the finite.

4.1.2 The harmonic series.
In this section we will use the method of Nicole d'Oresme (1323-1382) to show that the numbers 1 + 1/2 + 1/3 + ... + 1/n increase without bound with n.
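Before giving Oresme's argument it is easy to check these claims numerically. The short Python computation below (the function name is ours; exact fractions are used so there is no rounding to worry about) reproduces the positions of the tortoise quoted above and shows the sums creeping upward, as the argument that follows explains.

    from fractions import Fraction

    def position_after(n):
        # distance traveled after the time intervals 1, 1/2, 1/3, ..., 1/n
        return sum(Fraction(1, k) for k in range(1, n + 1))

    for n in (2, 4, 8, 16):
        p = position_after(n)
        print(n, p, float(p))
    # 2   3/2                1.5
    # 4   25/12              2.083...
    # 8   761/280            2.717...
    # 16  2436559/720720     3.380...

    print(float(position_after(200)), float(position_after(10000)))  # about 5.88 and 9.79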
The idea of Oresme can be seen as follows:

1 + 1/2 = 1 + 1/2,

1 + 1/2 + 1/3 + 1/4 > 1 + 1/2 + 1/4 + 1/4 = 1 + 1/2 + 1/2

(here we have observed that 1/3 > 1/4),

1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 = 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) > 1 + 1/2 + 1/2 + (1/8 + 1/8 + 1/8 + 1/8) = 1 + 1/2 + 1/2 + 1/2.

The pattern is now clear: if we add up

1/(2^n + 1) + 1/(2^n + 2) + ... + 1/2^(n+1)

we have 2^n terms that are all at least as big as 1/2^(n+1). Thus they add up to a number that is at least 2^n · 1/2^(n+1) = 1/2. The conclusion is that the sum

1 + 1/2 + 1/3 + ... + 1/(2^n − 1) + 1/2^n ≥ 1 + n/2.

If we define the integral logarithm in base 2 by I log2(N) = n if 2^(n−1) < N ≤ 2^n, then we have

1 + 1/2 + 1/3 + ... + 1/n ≥ (1 + I log2(n))/2.

This beautiful argument actually gives a very good idea of how this series of numbers grows with n. One can show that there exists a constant (Euler's constant) that is usually denoted γ and another constant we will call μ (which, we shall see, is just the natural logarithm of 2) such that if we substitute increasingly larger values of n in the expression

1 + 1/2 + 1/3 + ... + 1/2^n − γ − μn

it becomes smaller than any preassigned (small) number. We will discuss this in more detail when we talk about logarithms. The constant γ occurs in many contexts in mathematics and has been calculated to high precision. However, it is not known if it is a rational number.

Exercises.
1. Suppose you have blocks each 1 unit thick, 4 units wide and 12 units long (the unit could be inches or centimeters, the dimensions are not terribly important) made of a uniform material. Suppose you were to pile the blocks one on top of the other so that the second overhangs the first, the third overhangs the second, etc. How big an overhang could we achieve?
2. Use a computer algebra package or calculator with high precision and natural logarithms (ln) to calculate

1 + 1/2 + 1/3 + ... + 1/n − ln(n)

for large n. What is the value of γ that your calculation predicts?
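Exercise 2 can be done with a few lines of code rather than a computer algebra package. The Python sketch below (the function name and the particular values of n are our choices) shows the difference between the partial sums and ln(n) settling down near 0.5772..., the familiar decimal expansion of γ; replacing n by 2^n and ln(n) by n·ln(2) illustrates the constant μ = ln 2 mentioned above.

    import math

    def harmonic(n):
        # 1 + 1/2 + ... + 1/n in floating point, adding the small terms first
        return sum(1.0 / k for k in range(n, 0, -1))

    for n in (10**3, 10**5, 10**7):
        print(n, harmonic(n) - math.log(n))
    # the printed differences decrease toward 0.57721..., Euler's constant

    n = 20
    print(harmonic(2**n) - n * math.log(2))   # again close to 0.57721...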
Suppose that A is a number such that whenever G is a set that has an area B that contains F then B A and whenever L is a set that is contained in F and has an area C then C A. Then F has area A. The rst assumption is part of the order properties of the real numbers. The second is a property that must be satised if we are to have a reasonable notion of area. The third has to do with the fact that in the contexts that are least weird to mathematicians not every set can be allowed to have an area with condition 2. (and a few more equally obvious conditions) satised. We also indicated that there was a problem with Archimedes method. The problem is that it is a method that proves that a value asserted for an area is the correct one. It gives no method of nding what the value should be. We know that Archimedes was aware that the area of the circle of radius r is r2 . He also clearly knew that the circumference is 2r. The right triangle with 105 sides of length r and and the other of length the circumference has area 1 r (2r) = r2 . 2 Thus he has proved the formula that we believe. (Actually what he has done is reduced the problem of calculating the area to the problem of calculating the circumference or vice-versa.) A general method for calculation was one of the main aims in the development of the innitesimal and integral calculus. Archimedes was the rst to calculate the area of a segement of a parabola. We will not go into his derivation but say that his basic axioms of area were also used and the area was given in terms of the area of a triangle that one can only feel was an outgrowth of an amazing insight. After we discuss calculus we will explain Archimedes remarkable formula. Recall that a parabola is a conic section that is determined by a point, P , and a line, L. The curve is the locus of points whose distance to P is equal to its distance to the line L. For instance The point P is called the focus of the parabola and the line perpendicular to L through P is called the axis of the parabola. The Greeks understood that for a conic section a line could intersect it in two, one and no points. A line that intersected the curve at one point would be called the tangent line to that point (it was known to be unique). Archimedes in ??? set about to calculate the area of what he called a section of the parabola, that is,the set is cut out by a line intersecting with the parabola at two points A and B. 106 He assumes that the point A and B are on dierent sides of the axis and A is closer. He then considers the triangle ABC formed by the tangent line through B the segment AB and the line parallel to the axis of the parabola through A. The theorem of Archimedes is: The area of the parabolic segment is one third the area of the triangle ABC. This theorem is one of the high points of Greek geometry. Archimedes approach (as we have pointed out) involved a guess of the area and then trhough brilliant upper and lower estimates proving that his asserted area is correct. In fact, he had a method of deciding what the appropriate value should be that involved what he called The Method. This method was based on what is now called statics in physics involving the theory of levers and pulleys. So in addition to being one of the greatest mathematicians who ever lived he was also a great physicist. The story of how The Method was rediscovered after seeming to be lost for over a thousand years is also very interesting. We refer to more standard texts in the history of mathematics for this story (e.g. C. Boyer et. 
al. A History of Mathematics). 107 Exercises. 1. Derive an equation for the parabola using the plane coordinates (x, y) if we take the line L to be given by y = 1 and the focus to be the point (0, 1 ). 4 4 2. For the parabola in problem 1. show that if the line AB is parallel to the axis then the endpoints are given with A having x-coordinate a and B having x-coordiane a and the area of the indicated triangle is 4a3 so the area of the 3 sector is 4a . 3 3. Show that the theorem of Archimedes shows that the area of the parabolic segment AB depends only on the sum of the distances of A and B to the axis. 4.2 Precursors to calculus. As mentioned above, Nicole dOresme had developed methods for studying the growth of innite sequences of numbers. He also understood fractional powers of positive numbers and most astonishingly used graphical methods to plot data (using a horizontal axis for the independent variable and the verticle axis for dependent variable. However, very little progress was made in the years between Archimedes and Oresme in the calculation of areas bounded by curves. One major drawback was that the mathematical notation was still quite cumbersome and the methods of Oresme to visualize were not widely used. In the last chapter we mentioned the work of Vite which explained how to deal with unknown quantities and thereby led to the concept of function. However, his work did not separate the notion of number from geometry. Thus, positive numbers were lengths of intervals, products of positive numbers were areas of rectangles, triple poducts were volumes, etc. He also used cumbersome notation for powers writing something like xcube for what we would write as x3 . Thus he would have x xsquare = xcube. This did not aord a useful formalism for doing algebraic manipulation of polynomials. In our notation, a polynomal: x3 + 2x2 + 5x + 1 would be understood to represent a volume so the 2 would would be in units of length, the 5 in units of area and the 1 would be a volume. You could then visualize a cube of side x a rectangular box with base of side x and height 2, a rectangular box with base of area 5 and height x and a three dimensional gure of volume 1 all attached to each other in some way. This all changed with the French mathematicians of the rst half of the seventeenth century. We have already written about Descartes explanation of how to interpret products of positive numbers as intervals and thereby freed polynomials of units. Of the great mathematicians of this time the one who arguably came the closest to calculus was Fermat. He, in fact, was more in the tradition of Archimedes than the later formal methodology of calculus. That is, more in line with the modern notion of analysis. 108 4.2.1 The Pascal triangle and the Leibniz harmonic triangle. We recall that (x + y)2 = x2 + 2xy + y 2 . Multiplying out we see that (x + y)3 = x3 + 3x2 y + 3xy 2 + y 3 . We can continue to multiply indenitely and we see that (x + y)n has n + 1 terms the ith a multiple of xni y i . If we lay out the coecients we have for n = 0, 1, 2, 3, 4, 5. 1 1 1 1 1 1 5 4 10 3 6 10 2 3 4 5 1 1 1 1 1 Blaise Pascal (1623-1662) observed the pattern that one had a triangle with the two legs all ones and the interior values gotten by adding together the two adjacent values one row up for the interior points. Thus for the fth power we would get 1, 5, 10, 10, 5, 1 and, say, 10 = 4 + 6. The standard method of writing these coecients is n so 5 = 10. With the conventions that n = 0 if i < 0 i 2 i n or i > n. 
It is convenient to write n = 1 and n = 1 (these account for the 0 outer legs of the triangle). We have n n1 n n2 2 n (x + y)n = xn + x y+ x y + ... + xy n1 + y n . 1 2 n1 If we multiply this identity by (x + y) then we have n n n (x+y)xn1 y+ (x+y)xn2 y 2 +...+ (x+y)xy n1 +y n . (x+y)xn + 1 2 n1 Now n (x + y)xni y i = n xn+1i y i + n xn+1i1 y i+1 . This says that in the i i i product the coecient of xn+1i y i is n n .+ i1 i Which is the pattern Pascal observed. We will call this the generating identity for the binomial coecients. We note that if 0 i n then n 6= 0. i Some years after Pascals discovery Christiaan Huygens (1629-1695) asked Leibniz to sum the series: 2 2 2 + + ... + + ... 1(1 + 1) 2(2 + 1) n(n + 1) that is the sum of the reciprocals 1 , n = 1, 2, 3, .... The rigorous theory of (n+1) 2 summing such series had not as yet been developed. However, Leibniz came up with as solution that incontrovertibly summed the series to 2. Here is what he did. He observed that 1 1 1 = . n(n + 1) n n+1 109 This says that if we sum the rst, say, 5, terms we have 1 1 1 1 1 1 1 1 2 1 +2 +2 +2 +2 =2 . 2 1 2 2 3 3 4 4 5 5 6 6 2 If we sum the rst n terms we get 2 n+1 . Thus if we sum a million terms the sum is 2 to 5 signicant gures. The more terms we add the closer the value is to 2. This is essentially the modern version of sumation of innite series. We should, however, point out that one must be careful about formal manipulation of innite series. For example, suppose we want to sum 1 1 + 1 1 + 1 + ... If we sum the rst 2n terms we get (1 1) + (1 1) + ... + (1 1) = 0. If we sum the rst 2n + 1 terms we get (1 1) + (1 1) + ... + (1 1) + 1 = 1. Leibniz felt that a reasonable value for the sum of this series should be 1 . 2 Returning to the reciprocals of the binomial coecients n+1 Leibniz made 2 a beautiful discovery that allowed him to compute many more innite series using exactly the same trick.. He rst observed Pascals triangle could be written somewhat dierently as 1 1 1 1 1 1 1 . . . 1 2 3 4 5 6 7 . . . 1 3 6 10 15 21 28 . . . 1 1 1 1 4 5 6 7 10 15 21 28 20 35 56 84 35 70 126 210 56 126 252 462 84 210 462 964 . . . . .. . . . . . . . . . One observes that except for the rst row and column if we look at an entry in this double array then it is the dierence between the entry directly below and the entry one below and one to the left. Thus 6 in the third row has 10 directly below and four one down and one to the left We have 6 = 10 4. To this see property is true we note that the entries in the rst row are 0 , 1 , 2 , 3 .... 0 0 0 0 Those in the second row are 1 , 2 , 3 , 4 ... in the third 2 , 3 , 4 , 5 ,..., 1 1 1 1 2 2 2 2 j+i2 etc. Thus the entry in the i, j position is i1 . In other words the element in the 3, 5 positions is 82 = 15 that in the 5, 4 position is 7 = 35. The 2 3 j+i2 j+i1 j+i2 moving the negative term to assertion above is just i1 = i i the right hand side we see that this is the generating identity for the binomial coecients. Now if we use the method of Leibniz above we nd that if we consider an entry not on the rst row or column and add up all the entries in the column directly to its left that are either on the same row or a higher row then we get the original entry . For example: 462 = 210 + 126 + 70 + 35 + 15 + 5 + 1. 
110 The harmonic triangle of Leibniz is given by 1 1 2 1 3 1 4 1 5 1 6 1 7 1 2 1 6 1 12 1 20 1 30 1 42 1 56 1 3 1 12 1 30 1 60 1 105 1 168 1 252 1 4 1 20 1 60 1 140 1 280 1 504 1 840 1 5 1 30 1 105 1 280 1 630 1 1260 1 2310 1 6 1 42 1 168 1 504 1 1260 1 2772 1 5544 1 7 1 56 1 252 1 840 1 2310 1 5544 1 12012 . . . . . . . . . . . . . . . . . . . . . . .. . 1 Here the rst row consists of the numbers 1, 1 , 3, , 1 , ... that is the terms in 2 4 1 the harmonic series. The second row consists of the numbers n(n+1) that is 1 1 1 1 1 1 1 1 1 2 , 6 , 12 , 20 , ... The third row consists of the numbers 3(n+2) that is 3 , 12 , 30 , 60 , ... 3 1 The k-th row has entries k n+k1 these entries can be read o from Pascals ( k ) triangle. Leibniz observation was that the sum of the entries in the k-th row from the n-th poistion on is given by the number in the k 1-st row in the n-th position. Thus the sum 1 1 1 1 1 + + + + ... = . 60 105 168 252 20 That is we are summing the entries in the third row starting with the fourth entry the sum is the fourth entry in the second row. In particular the answer to Huygens question is twice the sum of the entries in the second row which is twice the rst entry in the rst row which is 2. Although the Leibniz method was ingeneous it was a severely limited ,method of summing innite series. A series that looked very similar to the series 1+ is 1 2 + ... + + ... 3 n(n + 1) 1 1 + ... + 2 + ... 4 n This series baed Leibniz who was asked to sum it in 1673 by Henry Oldenburg (1615-1677) and, in fact, all mathematicians until Euler determined its sum in about 1736. We will come back to this series later in this chapter. 1+ Exercises. 1. Use the harmonic triangle to sum the series 1 1 1 + + ... + + ... 6 24 (n + 2)(n + 1)n 2. Use the method that Leibniz used in answering Huygens to show that the entire harmonic triangle works as advertised. Hint: we need to show that 1 1 1 n+k1 n+k = . k k k (k + 1) n+k k k+1 111 3. You may wonder why we called Leibniz array a triangle. If you rotate the rectangular version of Pascals triangle 45 degrees to the right (clockwise) then it it is a triangle. Do the same with the harmonic version. Explain the geometyr of summing series in terms of the version that is given as a triangle. 4.2.2 Fermats calculation of areas. Fermat considered the problem of calculating the area of a gure bounded by a line pL1 , a line L2 perpenducular to L1 and a curve which we would write as y = x q with p, q > 0 relatively prime integers. He also allowed p to be negative but his method failed for p = 1. We will discuss this case later, although it q was done chronologically earlier. In fact, the case of q = 1 had been handled by several authors who came before Fermat. Here is a picture of Fermats area 5 corresponding to the curve y = x 3 with the base of length 2. Let the length be denoted m. His idea was as follows consider a number 0 < E < 1 then one has the points E k m for k = 0, 1, 2, .... Which start with m and decrease to indenitely becoming arbitrarily close to 0. He then drew the corresponding rectangles corresponding to the vertical lines through these points he would then have two collections of rectangles one inside and one outside the area in question. In our example above with E = 1 this looks like:v 2 He then sums the areas of the corresponding rectangles: The inner being (Em) q (m Em) + (E 2 m) q (Em E 2 m) + ... + (E k m) q (E k1 m E k m) + ... p p p 112 and the outer being (m) q (m Em) + (Em) q (Em E 2 m) + ... + (E k1 m) q (E k1 m E k m) + .... 
The idea of Fermat is to add up the rst k terms of these sums we rst look at the outer sum and write it out m q +1 Em q +1 + E q +1 m q +1 E q +2 m q +1 + ... + E q +k m q +1 E p p p p p = m q +1 (1 E) + m q +1 (1 E)E q +1 + ... + m q +1 (1 E)E k q +k + .. = m q +1 (1 E)(1 + E q +1 + ... + E k q +k + ...). p p p p p p p p p p p kp p kp q +k+1 p p p m q +1 p If we set F = E q +1 then the outer sum is given as m q +1 (1 E)(1 + F + F 2 + ... + F k ) with F = E q +1 . Fermat writes this as F = E close this expression 1 + F + F 2 + ... + F k = He now has the expression m q +1 p p p+q q . We now recall that we can 1 F k+1 . 1F (1 E)(1 F k+1 ) . 1F We look at G = E q then F = Gp+q and 1 Now comes the brilliant trick. E = Gq . Thus we have 1 Gq 1 G 1 Gq 1E = . = p+q 1F 1G 1 G 1 Gp+q p 1 Gq 1G (1 E)(1 F k+1 ) = m q +1 (1 F k+1 ) 1F 1 G 1 Gp+q which is equal to So m q +1 p m q +1 (1 F k+1 ) p 1 + G + ... + Gq1 . 1 + G + ... + Gp+q1 Now the total sum over all values (i.e. not stopping at k) is larger than the indicated area. But the only part of the above expression that depends on k is the term 1 F k+1 . Which is always less than one. We conclude that for all values of E with 0 < E < 1 the number m q +1 p 1 + G + ... + Gq1 1 + G + ... + Gp+q1 p q is an upper bound for the area. If we evaluate this for E = 1 we get m q +1 p+q as an upper bound for the area. If one looks at the expression for the sum of 113 the inner rectangles it is just E q times the expression for the outer ones. We therefore nd that p p 1 + G + ... + Gq1 E q m q +1 1 + G + ... + Gp+q1 is a lower bound for the area. We can evaluate this at E = 1 to see that the p p q q area is at least m q +1 p+q and at most m q +1 p+q . Hence it must be equal to q m q +1 p+q . If we write r = p p p q then we have the familiar expression (for those who know r+1 some calculus) that the area is m . r+1 Fermat used a similar method for negative powers, r = p . Here one should q look at the curve over the half line of all numbers x > m. The method involves taking E > 1 and looking at the points m < Em < E 2 m < .... This time the upper sum for the points m, Em, ..., E k m is: m q (Em m) + (Em) q (E 2 m Em) + (E 2 m) q (E 3 m E 2 m) + p p p p p ... + (E k m) q (E k+1 m E k m). p p p 2p This time we can factor out m q +1 and have m q +1 (E1+E q +2 E q +1 +E q m p +1 q +3 E q 2p +2 +...+E kp q +k+1 E kp q +k )= (E 1)(1 + E p +1 q p +E 2 p +2 q +E 3 p +3 q + ... + E k p +k q )= m q +1 (E 1) with F = E 1 q = E p pq q . This time we write G = E q and we have p 1 F k+1 1F 1 m q +1 E (1 Gq )(1 G(k+1)(pq) ) . (1 Gpq ) Now if p > q then as k is evaluated at increasinly large values the only term involving k is closer and closer to 1. As in the earlier case we have m q +1 E(1 G(k+1)(pq) ) p 1 + G + ... + Gq1 . 1 + G + ... + Gpq1 We see that the upper sum is always at most m q +1 E p 1 + G + ... + Gq1 . 1 + G + ... + Gpq1 Now eveluating at E = 1 (thus G = 1) we have as an upper bound on the area m p +1 q q m q +1 = p . pq q 1 p 114 If 0 < p < 1 one can see that this formula is also true In the case of positive q powers it is clear that if we wish an area for 0 < a < x < m then one can subtract the area between 0 and a and get ar+1 mr+1 r+1 r+1 p for r = q . We can see that the formula for the area over the same interval for r = p < 0 but not 1 is q mr+1 ar+1 . r+1 r+1 Exercises. 5 1. For the indicated case of y = x 3 , and m = 2 calculate (using a high precision calculator or math software package) the upper sums for E = 1 , 1 , 1 2 3 5 and say 100 terms. 
Compare with the answer. 2. Why didnt the method above work for r = 1? 3. Complete the argument for the inner sum in the rst part of the discussion. 4. Complete the argument for r < 1 by analyzing the lower sum. 5. What do you think Fermat did for rational numbers r with 0 > r > 1? 4.2.3 Fermats derivation of tangents. Notice that this method only works for p > 1. If we need the area over a q nite interval 0 < m < M then we can just subtract the area above M from the area above m and get p p m q +1 M q +1 p . p q 1 q 1 In addition to his calculation of the area under the curves y = xr with r rational but not 1. Fermat also calculated the tangent lines. Here he also used methods that were clear precursors to what we call calculus. He observed that if one has a curve given as y = xn then the slope of the line through the points (x, xn ) and (x + E, (x + E)n ) is (x + E)n xn (x + E)n xn = . (x + E) x E The gure below is y = x3 and A and B are two such points. 115 He observed that if E is chosen progressively smaller the connecting line would rotate to a tangent line (for the moment we will take this to mean that any line through A gotten by slightly rotating the tangent line intersects the curve at a nearby point, we will come back to the idea of a tangent line). Fermat (and probably many others) observed that if n is an integer then (x + E)n xn = nExn1 + n(n 1) 2 n2 + ... + E n . E x 2 Thus every term is divisible by E. We therefore have n(n 1) n2 (x + E)n xn + ... + E n1 . = nxn1 + Ex E 2 He could then put E = 0 and nds the slope of the tangent line at (a, an ) to be nan1 . However, Fermat did more, he in fact calculated the slope of the tangent if n is only rational. Here we write n = p and assume that p, q > 0. q We are looking at (x + E) q x q ((x + E) q )p (x q )p = . E E We note that if E > 0 then (x + E) q = x q + F with F > 0. Thus taking q-th poweres of both sides of this equation we have x + E = x + qF x1 q + 1 1 1 p p 1 1 q(q 1) 2 1 2 F x q + ... + F q . 2 Thus subtracting x form both sides of this equation and dividing by F we have 1 2 q(q 1) 1 q E + ... + F q1 . = qx1 q + Fx F 2 This means that we can substitute E equals zero in this equation since if E = 0 1 then F = 0. We therefore have if E = 0 then we can evaluate E and get qx1 q . F We now have p p 1 1 (x + E) q x q (x q + F )p (x q )p F = E F E in both we can substitute E = 0 and get p(x q )p1 1 x q 1 = nxn1 . q 1 This certainly shows that Fermat knew a great deal of what we usually think of as basic calculus. However, he did not invent calculus. The point here is that by its very name calculus is a method of computation. Fermat relies on brilliant relationships between rational and integral powers. He is not in the tradition of Archimedes either since he does not use true limits but rather uses a more algebraic formalism that allows substitution. We will discuss these distinctions more carefully when we get to our discussion of calculus. 116 4.2.4 Further precursors to calculus. Mathematics ourished in the seventeenth century, Mathematicians nally had a notational system that had enough exibility that they could study very general mathematical relationships. Also numbers had nally been divorced from units. Thus numbers could be manipulated algebraically without recourse (unless so desired) to geometric constructs. In Europe mathematicians were analysing areas, volumes, and tangents as they had never been before. 
As an example, we will take a look at the work of Isaac Barrow (1630-1663) who held the Lucasian Chair at Cambridge before Newton. He was more a geometer than an algebraist and had a low regard for abstract manipulation. His approach to the tangents studied by Fermat would be substantially as in the following discussion (we will, however, replace his geometric arguments with more algebraic ones). He would consider two positive relatively prime integers p and q yielding the curve that is the locus of points (x, y) with y q xp = 0. To calculate the tangent to this curve at the point (a, b), xed and on the courve, he would substitute x = a + u, y = b + v. Thus he would have y q xp = (b + v)q (a + u)p = with E(u, v) a sum of terms involving ur or v s with r, s 2. The term ap bq = 0 by assumption. Thus if u, v had been chosen very small and such that (a + u, b + v) is on the curve then the quantity qbq1 v pap1 u must be very close to 0 (since if u is smaller than 1 then u2 is smaller than u). This indicates that if we had a particle moving along the curve then at the point (a, b) it would be moving in the direction of the line qbq1 v pap1 u = 0. That is along the line y= p1 p bq ap + qbq1 v pap1 u + E(u, v) pap1 x. qbq1 since aq1 = a q 1 . This agrees with Fermats solution. Barrows approach is b now called implicit dierentiation. Exercises. 1. Complete the calculation that Barrows method gives the same answer as Fermats. 2. Use Barrows method to calculate the tangent to the ellipse x2 + y 2 = 1. 117 4.3 Calculus. As we have seen, the rst half of the seventeenth century was brimming with activity on calculations of areas and tangents. A substantial part of what we call calculus had already been discovered before either Newton or Leibniz had begun their work. However, it was these exceptional mathematicians who actually established the calculus. The term calculus means a method of calculation. This is precisely what they developed. Their method unied what had been done before and established rules which if followed would lead to solutions to problems which heretofore were solved using ingeneious methods. As we have seen, one reason for the explosion of activity was the development of a notational system and an abstract formalism that simplied the task of communicating mathematics. In most aspects of the rivalry between Newton and Leibniz (actually the rivalry was between their adherents and desciples) the history gives the edge to Newton. However, when it comes to the notation that would be used in communicating and working with the calculus Leibniz wins hands down. Their independent work was published in several places. Leibniz published A new method for maxima minima as well as tangents in Acta Eruditorum, 1684. A year later Newton published De methis Fluxionen and claimed that the paper was written in 1671. Newtons masterpiece Principia Mathematica was published in 1687. In the introduction to the rst edition he said: In letters that passed between me and that most excellent geometer G.W.Leibniz 10 years ago, when I signied that I knew a method of determining maxima and minima, of drawing tangents and the like, and when I concealed it in transposed letters... the most distinguished man wrote back that he had also fallen on a method of the same kind, and communicated his method which hardly diered from mine except in his forms of symbols. 
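Before following the story into the calculus proper, Exercise 1 of the quadrature section above can be carried out with a few lines of code rather than a calculator. The sketch below is an added illustration: the 100-term cutoff is taken from the exercise, the extra values of E near 1 are our own choice, and the sum accumulates Fermat's outer rectangles for y = x^(5/3) on the interval from 0 to m = 2, to be compared with the value m^(r+1)/(r+1) derived earlier.

```python
# Fermat's upper (outer-rectangle) sums for y = x**(5/3) on [0, 2].
def fermat_upper_sum(r, m, E, terms=100):
    total = 0.0
    for k in range(terms):
        right = E**k * m        # larger endpoint of the k-th interval
        left = E**(k + 1) * m   # smaller endpoint
        total += right**r * (right - left)
    return total

r, m = 5/3, 2.0
exact = m**(r + 1) / (r + 1)    # Fermat's value for the area
for E in (1/2, 1/3, 1/5, 0.9, 0.99, 0.999):
    print(f"E = {E:.3f}  upper sum = {fermat_upper_sum(r, m, E):.6f}  (area = {exact:.6f})")
```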
The rst calculus text was published in 1696 by the Marquis de LHospital called Analyse des innement petits which was a compendium of lessons by his private tutor John Bernouli. 4.3.1 Newtons method of uxions. In this subsection we will describe Newtons approach to dierential calculus. Since Leibniz approach is essentially the same, we will emphasize the notational dierences in the next subsection. Suppose we x the independent variable to be x and y varies with x according to a predetermined rule. We think of the symbol o as indicating a very small change in x this symbol is a uxion and at rst we will take it to be an independently varying very small value. Then we think of x + o as a very small change in x. Now when x has moved to x + o the value of y changes to a new value y + z (not Newtons notation). This z is an arbitrarily small change in y and so to Newton it should be proportional to o. That is z = yo and this proportionality should be a new function of x. The term yo is called a uent and y is the derivative. We will now look at an example. y = xn and n a positive integer. Then 118 using the binomial formula we have (x + o)n = xn + nxn1 o + higher powers of o. Since o is to be thought of as arbitrarily small the terms beyond the rst power cannot contribute to the uent. Thus the uent if nxn1 o and y = nxn1 . This isnt much dierent from what Fermat might do. We nor look at the case when p p n = a .Then y = x q so y q = xp (looks a bit like Barrows start). This says that (y + yo)q = (x + o)p Now (x + o)p = xp + pxp1 o and (y + yo)q = y q + qy q1 yo thus equating coecients of o we have qy q1 yo = pxp1 o. This implies that yo = p p p p p 1q p1 y x o = x(1q) q xp1 o = x q 1 . q q q so Notice that we have neglected the higher powers of o. This is a consistant part of the method. The point here is that it is a method and not just a clever trick. We look at one more example (which is a special case of the chain rule). 1 Consider y = 1x . Then (1 x)y = 1 (1 x o)(y + yo) = 1. (1 x)y oy + (1 x)yo = 1. (1 x)yo = oy yo = y 1 o o= 1x (1 x)2 y= 1 . (1 x)2 Expanding we have Using the relation we have so and we conclude Exercises. 1. Calculate y for y = x3 + 3x + 1 using the method of uxions. 2. Suppose that you know y use the method of uxions to calculate z if 1 z = y. 3. Compare for the case of y = xn with n rational compare the method in this subsection with that of Fermat and that of Barrow. 119 4.3.2 The Leibniz notation. The notation of Leibniz is now the standard method of expression in calculus. He wrote dx for Newtons xo and when y = f (x) then what we denoted by y dy (this is not Newtons notation) he wrote dx . In his notation one sees whar us called the chain rule in modern calculus immediately if y = f (z) and z = g(x) then dy dy dz = . dx dz dx Leibniz is also credited with the product rule (also called the Leibniz rule). Suppose that f (x) = g(x)h(x). That is y = uv witt u = g(x) and v = h(x) then du dv dy = v+u . dx dx dx In fact y + dy = (u + du)(v + dv) = uv +vdu +udv + dudv = y + vdu + udv. Now subtract y from both sides of the equation and divide by dx. This condition has come to be called Leibnizs rule. Exercise. 1. Do problem 2. of the previous section using the chain rule. 4.3.3 Newtons binomial formula Newton thought of o as small to rst order, that is o2 is negligable and he understood that one could equally well introduce objects small tosecond order say u with u, u2 not negligable but. One would have 1 f (x + u) = f (x) + f(x)u + g(x)u2 2 with g(x) to be dertermined. 
He looked at f (x) = x m then f 0 (x) = We now expand out m m (x)u + 1 g(x)u2 f (x + u) = f (x) + f = 2 m m1 (x)u + 1 g(x)u2 f (x) + mf (x) f 2 1 m(m 1) + f (x)m2 (f(x)u + g(x)u2 )2 . 2 2 Expanding in powers of u we have x + u = f (x + u)m = x + mx 1 1m 1 m(m 1) m2 x m u + g(x)u2 ) + x m m 2 2 m m1 m 1 1 x+u+ u2 . x m g(x) + x 2 2m m1 m 1 1 1 1 m . mx ( 1 1m x m m 2 u2 = 120 m 1 1 m m1 x m g(x) + x =0 2 2m 1 we can solve the equation and get g(x) = m1 x m 2 . Observe that m2 So the third term is m1 1 1 = ( 1). m2 m m 1 1 m(m Thus 1) 1 2 xm . 2 Proceding in this way Newton derived his formula that the k + 1 term is 1 1) ( m k + 1) 1 k xm k1 . Newton intruduced a notation analogous to ours for binomial coecients to denote this expression. In modern notation we write a a(a 1) (a k + 1) = . k k! 1 1 m(m From this derivation he asserted that 1 1 1 1 1 1 1 1 m x m 1 t + m x m 2 t2 + ... + m x m k tk + ... (x + t) m = x m + 1 2 k This formally derived formula he checked by taking the m-th power. One can then do the same for rational powers and get a a1 a a2 2 a ak k x t+ x t + ... + x t + ... (x + t)a = xa + 1 2 k This is Newtons binomial series. Exersizes. 1. Consider t or be the independent variable and calculate the derivative of y = (x + t)a . Then dierentiate the individual terms in Newtons series. Check that the two series for the same thing agree. 4.3.4 The fundamental theorem of calculus. We consider rhe following picture 121 If we think of h as the innitesimal o then we have the area Ao under the curve above the interval is between yo and (y + yo)o. But we can ignore the o2 terms so Ao = yo. This says the A = y. In other words the area under the curve y = f (x) from a to x thought of as a function of x has derivative f (x) at x. Thus the area under the curve between a and b with a < b is F (b)F (a) for any function such that F (x) = f (x). This is the fundamental theorem of calculus which was rst enunciated by Leibniz. Of course, this argument is not in any way complete but it gives a method and that method yields the correct answer in all cases where another technique 1 could be used. For example if f (x) = xm with m 6= 1 then F (x) = m+1 xm+1 and so we have the same outcome as Fermat (which was completely justied). Exercises. 1. Use the fundamental theorem of calculus to derive the special case of Archimedes theorem in exercise 2 of subsection 1.3 of this chapter. 2. This problem is dicult and can be considered a research project. Use the fundamental theorem of calculus to derive the theorem of Archimedes on the area of a sector of a parabola. 4.3.5 Logarithms As we saw in the last subsection the Newton-Leibniz method has no problem calculation areas once a function is found with the appropriate derivative. The 1 appropriate function for xr is r+1 xr+1 except, of course, for r = 1. We will use the notation y 0 for what we wrote as y and f 0 (x) for f (x). The question then 1 remains what about y 0 = x ? This is serious since it is necessity if we wish to calculate areas related to the hyperbola uv = 1. As it turns out the appropraite function had been discovered before calculus and for dierent reasons. We will digress from our main line and study the history of the missing function. We rst consider x as a function of y. Then y(x(y)) = y. Thus the chain rule says 1 that y 0 (x(y))x0 (y) = 1. But y 0 (x(y)) = x(y) . So x0 (y) = x(y). 1 In other words if we nd a function such that f 0 (x) = x then we would also nd 0 a function such that g (x) = g(x). 
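This inverse relationship is easy to check numerically. In the added sketch below, the grid size, the bisection bounds, and the step eps are arbitrary choices: L(x) approximates the area under 1/t from 1 to x by a crude midpoint rule, g is its inverse obtained by bisection, and the printed difference quotient of g at y = 1 comes out close to g(1) itself (a number close to 2.718, which will reappear shortly under the name e).

```python
import math

def L(x, n=20000):
    # midpoint-rule approximation of the area under 1/t from 1 to x
    h = (x - 1) / n
    return sum(h / (1 + (k + 0.5) * h) for k in range(n))

def g(y, lo=1e-6, hi=100.0):
    # invert L by bisection: find the x with L(x) = y
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if L(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

y, eps = 1.0, 1e-4
print(g(y))                              # about 2.71828...
print((g(y + eps) - g(y)) / eps, g(y))   # growth rate of g versus g itself
```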
Such a function with value g(0) = 1 has the remarkable property that g(a + b) = g(a)g(b). That is, it changes addition into multiplication or vice-versa. We are ahead of our story. John Napier (1550-1617) had an interest in making mechanical devices that would allow one to do complicated calculations precisely and easily. Before him there were several methods found of converting multiplication and division to addition and subtraction. One based on trigonometry that was perfected by the Arab mathematicians called prosthapaeresis. We will not go into this method here but suce to say, it helped Tycho Brahe do his intricate calculations and was based on tables involving for sets of trigonometric identities involving multiplication addition and subtraction. Others, notably Michael Stifel (1487?-1567) 122 had observed that if we x a number a then ax ay = ax+y , ax = axy . ay This certainly changes multiplication and division into addition and subtraction. However, for Stifel there were only caculations of such powers for a, x, y integers and for rational numbers one would need very accurate methods of extracting roots. Napier decided to just use integral powers of a number that had the property that the successive powers were close enough together that if one drew a straight line between the successive values (interpolated) the value would still be within a desired tolerance. This would allow one to use only integral powers in the tables or on the device to be constructed. Napier chose the number N = 0.9999999. He then considered N L to have logarithm L. In order to avoid small decimals he in fact considered 10000000N L as having logarithm L.Now to multiply 10000000N L by 10000000N K all you need do is add K + L look at the table to nd the number with logarithm K + L (or interpolate to get it) and then shift by 7 decimal positions. This was implemented in a slide rule type mechanism. Note that if L = 1 corresponds to the number 9999999 and L = 0 to 10000000. Now if we calculate he value 10000000N L for L = 10000000 we get 3678794 to seven digit accuracy. It therefore gave an ecient method of doing 7 digit multiplication and division. But except for turniing multiplication into division what does it have to do with 1 the problem of nding a function whose derivative is x ? A hint can befound in the following observation. If f (x) satisfyies f 0 (x) = f (x) and f (0) = 1 then to seven signicatnt gures f (1) = 2.718281. The reciprical of this number is 0.3678794 to 7 decimal places. This cannot be an accident. 1 The upshot is that a function whose derivative is x is very dierent then n a function whose derivative is x for any integer other than 1. The function that has this derivative and value 0 at 1 is usually denoted ln(x) and is called the natural logarithm. It is also denoted simply as log(x) when logarithms to base 10 are note being used. Convarting a base involves the simple maneuver of ln(x) multiplying by the logarithm of the inverse. Thus log(x) = ln(10) . The number that yields a natural logarithm of 1 is usually denoted by e. This number is not rational and as observed above it is 2.718281 to seven decimal places. Exercise. Suppose Napier had used 100000000 = 108 so he would have been looking 8 at powers of N = .99999999. What would the Napier logarithm of 108 N 10 be? 4.3.6 The trigonometric functions. We have seen that the ancient Greeks had an extensive knowledge of trigonometry. We have given an interpretation of trigonometry in section 3.5.3. 
In particular, the two basic trigonometric functions cos(x) and sin(x). Notice that we are using x for the variable rather than a more traditional Greek letter and 123 thinking of the functions as being attributes of angles. We have seen that these functions have the following properties: 1. 2. 3. 4. cos(0) = 1, sin(0) = 0. cos(x)2 + sin(x)2 = 1. sin(x + y) = sin(x) cos(y) + sin(y) cos(x). cos(x + y) = cos(x) cos(y) sin(x) sin(y). We will calculate the derivatives of these functions using techniques of analysis. We will use the prime rather than the dot notation. Consider the picture below of a circle of radius 1. The lengh of AB is the sin() where is the angle AOB. The length of OA is sin() cos(). The length of the arc CB is and the length of CD is tan() = cos() . We note that the area of the triangle COB is sin() . The area of the triangle 2 COD is tan() (see exercise 1below) and the area of the part of the interior of 2 the circle COB is (at least for positive and a rational multiple of that is 2 2 at most ). To see this last assertion note that the area of a quadrant is 2 4 (since the area of the interior of the circle is ). To get a quadrant we take = . If = then we would get half the area which is = . If = 2k 2 4 8 2 1 with k a positive integer than we would have area k times that of the area of the quadrant. That is . If we multiply by a positive integer m and is very 2 small then we get the area of m equal pieces corresponding to . Thus the area is m . We therefore see that if is a positive rational multiple of less than 2 2 or equal to then the area of the piece of the circle is . We now observe that 2 2 since the three areas are nested we have sin() sin() < < 2 2 2 cos() in the range 0 < < . We therefore see that sin() < 1 and sin() > cos() = 2 p p 1 sin()2 . Using 0 < sin() < we see that sin() > 1 2 . From this we see that as we choose positive and progressively smaller the value of sin() is 124 being crushed to 1. This says that the slope of the tangent line to y = sin(x) at x = 0 is 1.That is sin0 (0) = 1. It is easier to see that cos0 (0) = 0. Indeed, since cos(x)2 + sin(x)2 = 1 we can use Leibniz rule to see that 2 cos0 (x) cos(x) + 2 sin0 (x) sin(x) = 0. Subsituting x = 0 gives 2 cos0 (0) cos(0) + 2 sin0 (0) sin(0) = 0. We have cos(0) = 1 and sin(0) = 0. So cos0 (0) is indeed 0. To calculate all derivatives we note sin(x + o) = sin(x) cos(o) + cos(x) sin(o). Now cos(o) = cos0 (0)o = 0 and sin(o) = sin0 (0)o = o. So sin(x + o) = cos(x)o. Similarly, cos(x + o) = cos(x) cos(o) sin(x) sin(o) = sin(x)o. This yields 4. cos0 (x) = sin(x) and sin0 (x) = cos(x). In p above derivation we used the fact that by choosing is small we can the make 1 2 as close to 1 as we wish. This is true and you might think that it is obvious but it does need proof in modern mathematics. We will discuss this point in the exercises. Exersizes, 1. Use the theory of similar triangles to deduce that in the gure above CD = tan , (Hint: CD = OC . AB OA p 2. Here we will sketch that assertion that if 1 1 then 1 1 2 < 2 . First check that p p 2 = (1 1 2 )(1 + 1 2 ). p Next observe that in the range indicated 1 + 1 2 1. Conclude 1 4.3.7 p 1 2 = 2 p 2 . 2 (1 + 1 ) The exponential function. In this section we will discuss Eulers unication of logarithms and trigonometry. This was essential done section 3.5.3. We will rst take another look at logarithms. We saw that in Napiers work on logarithms the number u = (1 1 n ) n with n = 10000000 = 107 played an important role. 
Also, we pointed out that if f (x) were a function with f 0 (x) = f (x) and f (0) = 1 then to seven signicant 1 gures f (1) = u . Euler established the standard notation e = f (1). Now, f (x) satises f (x + y) = f (x)f (y). This is reminicent of the known formula ax+y = 125 ax ay for x, y rational. Further, a0 = 1 is the standard interpretation of the 0-th power and we have by denition a1 = a. This led to the notation f (x) = ex . The distinction is that this function is dened for all real numbers. If we have a = eL then we say that L = ln(a). This denes the natural logarithm that is up to a sign and a shift essentially Napiers logarithm (that is to 7 signicant 1 gures). We have seen that ln0 (x) = x giving the missing derivative. We note that this new function allows us to dene ax for a > 0 and any real number, x, by ax = eln(a)x . In section 3.5.3 we also saw that in the realm of complex numbers if we set z() = cos() + i sin() then the trigonometric identities of the previous section can be written z( + ) = z()z(). We also note that z(0) = 1. This led Euler to dene eix = cos(x) + i sin(x). This allowed the exponential function to be dened for all complex numbers as ex+iy = ex (cos(y) + i sin(y)) . The basic properties are still satised in this context. 1.e0 = 1. 2.ez+w = ez ew . Euler was especially intrigued with the formula ei + 1 = 0 which he called the relationship between the 5 most important constants of mathematics. At this point we are ahead of our story. We need to learn a few things from Eulers teachers. Exercises. 1. What are the 5 constants in Eulers formula? 2. What value would you assign to (ei )i ? How would you interpret the value? 4.3.8 Power series expansions. We have already encountered Newtons bininomial formula which is an innite series. This formula showed how one might express a function as an innite 1 series. In this case it is the function (1x)a with a rational and 1 < x < 1. One notes that if we dierentiate this k times one gets a(a + 1) (a + k 1) . (1 x)k+a 126 Thus if we call the function f (x) then Newtons binomial formula becomes f (x) = f (0) + f 0 (0)x + f 00 (0) 2 f (k) (0) k x + ... + x + .... 2 k! Here f (k) is gotten by dierentiating f repeatedly k times. This result was generalized by many authors but most notably Brook Taylor (1685-1773) and later Colin Maclaurin(1698-1746) who are interchangeably named for the generalization. It says that a function can be expanded in the form f (c + x) = f (c) + f 0 (c)(x c) + f 00 (c) f (k) (c) (x c)2 + ... + (x c)k + .... 2 k! We will call this series the Taylor series of f (x) at c. For example of f (x) = ex . Then f 0 (x) = f (x) and f (0) = 1. Thus f 00 (x) = 0 0 (f ) (x) = f 0 (x) = f (x). So f (k) (0) = 1 for all k. This says that the Taylor series of ex is x2 x3 xk 1+x+ + + ... + + ..... 2 6 k! Similarly we have cos0 (x) = sin(x) and sin0 (x) = cos(x). This gives cos00 (x) = sin0 (x) = cos(x). and sin00 (x) = cos0 (x) = sin (x). This says that even repeated derivatives are given as follows cos(2k) (x) = (1)k cos(x) and sin2k (x) = (1)k sin(x). The odd repeated derivatives are given as cos2k+1 (x) = (1)k cos0 (x) = (1) k+1 sin(x) and sin(2k+1) (x) = (1)k sin0 (x) = (1)k cos(x). Since cos(0) = 1 and sin(0) = 0 we have the Taylor series cos(x) = 1 and sin(x) = x x2 x4 x2k + ... + (1)k + ... 2 4! (2k)! x2k+1 x3 x5 + ... + (1)k + .... 6 5! (2k + 1)! If we add together cos(x) + i sin(x) then we have 1 + ix x4 x5 x2k x2k+1 x2 ix3 + + i + ... + (1)k + (1)k + .... 2 3! 4! 5! (2k)! (2k + 1)! 
127 if we write z = ix then z 2k = (i2k )x2k = (i2 )k x2k = (1)k x2k and z 2k+1 = ix(z 2k ) = ix(1)k x2k = (1)k ix2k+1 . Thus in terms of z we have 1+z+ z2 z3 zk + + ... + + .... 2 3! k! This says that Eulers interpretation of eix as cos(x) + i sin(x) is completely consistant with Taylor series. 4.3.9 Eulers summation of a series. As we mentioned the value of the series 1 1 1 1 + 2 + 2 + ... + 2 + ... 2 3 n was a mystery to some of the greatest minds of the seventeenth and early eighteenth centuries. We have discussed the method of Leibniz that summed the series 1 2 2 1+ + + ... + + ... = 2. 3 34 n(n + 1) One notes that each term of the rst series is less than the corresponding one in the second. This implies that the series sums to a number that is infact less than 2. Before we give Eulers ingeneous deduction of the sum there is another sum that the previous section allows us to calculate. 1+ 1 1 1 + + ... + + ... = e. 2 6 n! The terms in the sum are fairly simple but the number e is not a simple rational number. One doesnt guess such a value and in fact it had no name until Euler named it. It is therefore not a reasonable idea to just guess an answer. He rst observes that if we have a polynomial of the form 1 a1 x + a2 x2 + ... + an xn and if this polynomial has roots r1 , ..., rn counting multiplicity then if this roots are non-zero we have 1 1 1 + + ... + . a1 = r1 r2 rn To see this we observe that the polynomial with value 1 at 0 and roots r1 , ..., rn is x x x 1 1 1 . r1 r2 rn Now compare the coecient of x. We will come back to this in the exercises. Eulers leap was to apply this observation to an innite series (as in the last section). He considers sin(x) = x x3 x5 x2n+1 + ... + (1)n + ... 3! 5! (2n + 1)! 128 Thus x2 x4 x2n sin(x) =1 + ... + (1)n + ... x 3! 5! (2n + 1)! as series with only even powers. Assuming that we can expand it as a polynomial the roots being n wint n = 1, 2, 3, ... So we could expect that this is given by x x x x x x (1 + ) 1 (1 + ) 1 (1 + ) 1 2 n n 2 x2 x2 x2 1 2 1 2 2 = 1 2 4 n which also has even powers. Thus if you consider the series 1 x x2 xn + ... + (1)n + ... 3! 5! (2n + 1)! then it is reasonable to think that it is given by x x x 1 2 1 2 2 1 2 4 n 1 1 1 1 = 2 + 2 + ... + 2 2 + ... 3! 4 n This yield s Thus if we apply the observation (valid for polynomials) we have 1 1 1 2 = 1 + + + ... + 2 + ... 6 4 9 n Although no one doubted this as the sum of the series after they saw the marvelous argument the reader should be cautioned that this is not a proof of the formula (as Leibniz derivation is of his value for his series). As it turns out this argument can be made rigorous using a theory of innite products. Indeed, the above innite products can be be proved to converge in a well dened sense to the desired function and the suggested formal manipulation actually gives the Taylor series. Exercise. 1. If f is a polynomial of degree n with roots r1 , ..., rn (allowing for repititions) then f is a multiple of (x r1 )...(x rn ). Assuming that f (0) = 1 show that x x x f (x) = 1 1 1 . r1 r2 rn Hint: The multiple is (1)n r1 r2 rn . 129 4.3.10 The question of rigor. There were two controversies that arose in the development of the Calculus. The rst was the question of priority between Newton and Leibniz. It can be said that Newton came out ahead on that issue (although few doubt the independence of Leibniz contribution). However, Newtons apparent victory was one of the causes of the eclipse of English mathematics during the eighteenth century. 
There are many explanations of this, but one of the strongest (to our mind) is just that the Leibniz notation was superior. The second controversy had to do with the very roots of the Calculus. The scientific community knew that calculus gave them an entirely new arsenal of tools to study problems in simple mechanical ways that had only been handled in special cases by methods that were extremely clever and complicated. However, the method of both Newton and Leibniz involved the multiplication and division of objects that were not exactly numbers. Newton's symbol o was an object that one should consider to be such that o^2 can be neglected in expressions where it occurs. The ratio (f(x+o) - f(x))/o = f'(x) is the derivative. Leibniz's approach was similar: he had dx and one should think of (dx)^2 = 0, but dy/dx was the derivative. Many scientists, philosophers, etc. felt that there was a dangerous lack of foundation for these methods. However, the methods always gave correct answers to the problems to which they were applied. However, the application of the methodology was becoming more and more of a specialty.

For example, in the derivation that we gave in the previous section we saw an argument that, as it turns out, gives the correct answer but is based on a premise that has not been checked. One starts with (at least the hope that) something like the following statement is true: 1. f(0) = 1. 2. f'(0) = a. Then -a is the sum of the reciprocals of the roots of f (the numbers c such that f(c) = 0). This is true for polynomials if one includes complex roots. However, it is definitely false for even very nice functions. For example, e^x has the properties 1. and 2. with a = 1. But it is never 0 (this includes using the extended definition e^(x+iy) = e^x(cos y + i sin y) of Euler). This says that the argument of Euler is only rigorous if he shows that the function that he defined has the property that the sum of the reciprocals of its roots is the negative of its derivative at 0. This can be done, as we indicated in the previous section, by giving a rigorous meaning to the product formula for sin(x)/x.

Even before Euler, at the very beginning of the development of Calculus, there were skeptics about the foundations (not the applications). One of the most serious attacks was made by Bishop George Berkeley (1685-1753). In his pamphlet The Analyst in 1734 he expressed doubts about the foundations of Calculus, in particular of Newton's fluxions. His point was that you cannot have something that behaves like a very small but non-zero number and still has the property that its square is 0. It is clear that you must exercise great care when you divide by something whose square is 0. He labeled such objects infinitesimals
A more radical consistant theory which allowed for objects like the ones that Berkeley disparaged was developed by Abraham Robinson in his theory of non-standard analysis. Roughly speaking, he hypthesized the existance of non-standard numbers that were allowed to t between actual numbers. To describe these numbers we must understand our usual number system in a more rigorous manner than we have so far. There were other problems with the foundations of calculus that were less apparent in the seventeenth and eighteenth centuries. This had to do with how careful one must be in the choice of functions that are analyzable using the methods at hand. For example if we considered the function |x| (x if x 0 and x if x < 0) Then if x > 0 we have |x + o| |x| x+ox = =1 o o and if x < 0 we have (x + o) (x) o |x + o| |x| = = = 1. o o o However if we consider x = 0 then we are dealing with |o| . o In other words we must gure out a meaning for |o|. much worse phenomena are possible and can actually occur in useful applications of mathematics. Calculus had to be given a rm footing, 131 Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with the best way to expand their education. Below is a small sample set of documents: Boise State - MATH - 124 from Dialogues on Mathematics, Alfred Renyi, Holden Day Publishers, San Francisco, 1967A DIALOGUE ON THE APPLICATIONS OF MATHEMATICSARCHIMEDES Your Majesty! What a surprise at this late hour! To what do I owe the honor of a visit from King Hieron Boise State - MATH - 124 from The Mathematical Traveler: Exploring the Grand History of Numbers Calvin C. Clawson, Perseus Publishing, 1994CHAPTER 2 Early Counting We will now investigate how long people have been counting. If counting is a recently acquired skill for huma Boise State - MATH - 124 Gdel and the limits of logic 19972004, Millennium Mathematics Project, University of Cambridge. Permission is granted to print and copy this page on paper for noncommercial use. For other uses, including electronic redistribution, please contact us Boise State - MATH - 124 from Mathematics in Civilization, H.L. Resnikoff and R.O. Wells Jr., Dover Publishing, New York, 1973CHAPTER 1 Number Systems and the Invention of Positional Notation 1.1 .The tally notches incised 30,000 years ago in the wolf's bone described in Boise State - MATH - 124 The Origins of Mathematics1The origins of mathematics accompanied the evolution of social systems. Many, many social needs require calculation and numbers. Conversely, the calculation of numbers enables more complex relations and interactions betwee Boise State - MATH - 124 The origins of proof III: Proof and puzzles through the ages 1997-2004, Millennium Mathematics Project, University of Cambridge. Permission is granted to print and copy this page on paper for non-commercial use. For other uses, including electronic Boise State - MATH - 124 from Dialogues on Mathematics, Alfred Renyi, Holden Day Publishers, San Francisco, 1967A SOCRATIC DIALOGUE ON MATHEMATICSSOCRATESAre you looking for somebody, my dear Hippocrates?HIPPOCRATES No, Socrates, because I have already found him, nam Boise State - MATH - 124 from Mathematics in Western Culture Morris Kline, Oxford University Press, New York, 19531 Introduction.True and False ConceptionsStay your rude steps) or e'er your feet invade The Muses' haunts) ye sons of War and Trade! 
{"url":"http://www.coursehero.com/file/3015612/Concept-of-Numbers/","timestamp":"2014-04-21T04:49:50Z","content_type":null,"content_length":"321138","record_id":"<urn:uuid:af99b485-b75c-4592-bfcf-0665f118e2f3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Discharge (hydrology) From Wikipedia, the free encyclopedia In hydrology, discharge is the volume rate of water flow, including any suspended solids (e.g. sediment), dissolved chemicals (e.g. CaCO[3][(aq)]), and/or biologic material (e.g. diatoms), which is transported through a given cross-sectional area.^1 Frequently, other terms synonymous with discharge are used to describe the volumetric flow rate of water and are typically discipline dependent. For example, a fluvial hydrologist studying natural river systems may define discharge as streamflow, whereas an engineer operating a reservoir system might define discharge as outflow, which is contrasted with inflow. GH Dury and MJ Bradshaw are two hydrologists who devised models showing the relationship between discharge and other variables in a river. The Bradshaw model described how pebble size and other variables change from source to mouth; while Dury considered the relationships between discharge and variables such as slope and friction. The units that are typically used to express discharge include m³/s (cubic meters per second), ft³/s (cubic feet per second or cfs) and/or acre-feet per day.^2 For example, the average discharge of the Rhine river in Europe is 2,200 cubic metres per second (78,000 cu ft/s) or 190,000,000 cubic metres (150,000 acre·ft) per day. A commonly applied methodology for measuring, and estimating, the discharge of a river is based on a simplified form of the continuity equation. The equation implies that for any incompressible fluid, such as liquid water, the discharge (Q) is equal to the product of the stream's cross-sectional area (A) and its mean velocity ($\bar{u}$), and is written as: • $Q$ is the discharge ([L^3T^−1; m^3/s or ft^3/s) • $A$ is the cross-sectional area of the portion of the channel occupied by the flow ([L^2; m^2 or ft^2) • $\bar{u}$ is the average flow velocity ([LT^−1; m/s or ft/s) Catchment discharge The catchment of a river above a certain location is determined by the surface area of all land which drains toward the river from above that point. The river's discharge at that location depends on the rainfall on the catchment or drainage area and the inflow or outflow of groundwater to or from the area, stream modifications such as dams and irrigation diversions, as well as evaporation and evapotranspiration from the area's land and plant surfaces. In storm hydrology, an important consideration is the stream's discharge hydrograph, a record of how the discharge varies over time after a precipitation event. The stream rises to a peak flow after each precipitation event, then falls in a slow recession. Because the peak flow also corresponds to the maximum water level reached during the event, it is of interest in flood studies. Analysis of the relationship between precipitation intensity and duration, and the response of the stream discharge is aided by the concept of the unit hydrograph which represents the response of stream discharge over time to the application of a hypothetical "unit" amount and duration of rain, for example 1 cm over the entire catchment for a period of one hour. This represents a certain volume of water (depending on the area of the catchment) which must subsequently flow out of the river. Using this method either actual historical rainfalls or hypothetical "design storms" can be modeled mathematically to confirm characteristics of historical floods, or to predict a stream's reaction to a predicted storm. 
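As a quick illustration of the continuity relation Q = A·ū given above, the following sketch computes a discharge from a made-up channel geometry; the width, depth, and velocity are invented numbers, not measurements.

```python
# Discharge from the simplified continuity equation Q = A * u_bar.
width_m = 12.0           # hypothetical channel width
mean_depth_m = 1.8       # hypothetical mean depth of the flow
mean_velocity_ms = 0.9   # hypothetical cross-section averaged velocity

area_m2 = width_m * mean_depth_m
discharge_m3s = area_m2 * mean_velocity_ms
print(f"A = {area_m2:.1f} m^2, Q = {discharge_m3s:.2f} m^3/s")
print(f"Q = {discharge_m3s * 35.3147:.0f} ft^3/s")   # same discharge in cubic feet per second
```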
The relationship between the discharge in the stream at a given cross-section and the level of the stream is described by a rating curve. Average velocities and the cross-sectional area of the stream are measured for a given stream level. The velocity and the area give the discharge for that level. After measurements are made for several different levels, a rating table or rating curve may be developed. Once rated, the discharge in the stream may be determined by measuring the level, and determining the corresponding discharge from the rating curve. If a continuous level-recording device is located at a rated cross-section, the stream's discharge may be continuously determined.

Flows with larger discharges are able to transport more sediment downstream.

References

1. ^ Buchanan, T.J. and Somers, W.P., 1969, Discharge Measurements at Gaging Stations: U.S. Geological Survey Techniques of Water-Resources Investigations, Book 3, Chapter A8, 1p.
2. ^ Dunne, T., and Leopold, L.B., 1978, Water in Environmental Planning: San Francisco, Calif., W.H. Freeman, 257-258 p.
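Returning to the rating-curve procedure described above, the idea can be sketched in a few lines of code; the stage–discharge pairs below are invented calibration points, and linear interpolation is only the simplest possible fit.

```python
# Toy stage-discharge rating table and lookup by linear interpolation.
stages_m   = [0.5, 1.0, 1.5, 2.0, 2.5]     # measured water levels (stage)
discharges = [2.0, 7.5, 18.0, 34.0, 55.0]  # measured discharge in m^3/s at each stage

def rated_discharge(stage):
    for i in range(len(stages_m) - 1):
        s0, s1 = stages_m[i], stages_m[i + 1]
        if s0 <= stage <= s1:
            q0, q1 = discharges[i], discharges[i + 1]
            return q0 + (q1 - q0) * (stage - s0) / (s1 - s0)
    raise ValueError("stage outside the rated range")

print(rated_discharge(1.2))   # estimated discharge for an observed 1.2 m stage
```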
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Discharge_(hydrology)","timestamp":"2014-04-20T18:50:21Z","content_type":null,"content_length":"74116","record_id":"<urn:uuid:dbe1099f-cf41-4664-827b-d0b0f156253c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
Franklin, MA Math Tutor

Find a Franklin, MA Math Tutor

...I hold both a master's and a bachelor's degree in economics. In addition to math, I have also taught economics both at the high school and community college level. I have taught Algebra I at the honors and standard levels for almost 15 years. It is one of my favorite subjects to teach and tutor.
6 Subjects: including trigonometry, algebra 1, algebra 2, geometry

...I eat roadblocks for breakfast! Your child will learn the paths around these blocks as I give them tools to solve problems they feel are impossible. As they see small victories, they find their steps around these road blocks, and they leave them behind!
10 Subjects: including algebra 1, prealgebra, SAT math, ACT Math

...There is hard work involved, but now you know, you will be in a position to understand, your subject matter better, if this is Mathematics or Science or Engineering (college students) and get results! The practice sessions, I shall plan for you, will be straight to the point and will yield best ...
6 Subjects: including precalculus, algebra 1, trigonometry, prealgebra

...With all students, I am excited to use student-centered approaches to encourage critical thought and facilitate academic success. In other words, I love to teach! I love getting to know my students, and helping them succeed.
16 Subjects: including SAT math, algebra 1, elementary (k-6th), grammar

...In many cases I will break the learning to the most basic elements or at a concrete level and utilize a parts to whole approach. I taught students with special needs in my classroom for over 20 years and also worked professionally for Special Olympics, giving me a wealth of experience. I was a high school basketball coach at Dartmouth High School for approximately 15 years.
31 Subjects: including algebra 2, English, precalculus, algebra 1
{"url":"http://www.purplemath.com/Franklin_MA_Math_tutors.php","timestamp":"2014-04-16T13:25:06Z","content_type":null,"content_length":"23888","record_id":"<urn:uuid:9dfe8633-ab07-4ead-896c-ced23ed9b08f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Galois cohomology for surgical fields

In 1995, Pillay and Poizat introduced the notion of a surgical structure (translated from the French chirurgicale): a structure such that to each definable set there is attached an element of a poset (denoted by its dimension) in such a way that if there is a partition of a set X into finitely many pieces and each of these pieces can be sent via some finite-to-one map to another set Y, then dim(X) is bounded above by dim(Y). Moreover, an accumulation character is required of this assignment, i.e., given a definable equivalence relation on a definable set, there are only finitely many classes of dimension equal to the dimension of the ambient set. Under these weak assumptions, they proved that a field interpreted in such a structure is perfect and has a small absolute Galois group. I will show how these techniques can be extended to consider certain Galois cohomological groups relative to the field, and discuss their geometrical meaning. The talk is intended to be self-contained and for a general audience in Model Theory.
{"url":"http://www.newton.ac.uk/programmes/MAA/Pizarro.html","timestamp":"2014-04-18T15:41:22Z","content_type":null,"content_length":"2960","record_id":"<urn:uuid:aa95ca45-f1db-40f3-ac0c-e16c84e8d4b3>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Rank two vector bundles on a curve of genus two

I recently learned of an interesting result of Narasimhan and Ramanan from 1969, which says that the moduli space of rank two vector bundles with trivial determinant on a curve $X$ of genus two is naturally isomorphic to $\mathbb PH^0(\operatorname{Pic}^1X,\mathcal L_\Theta^{\otimes 2})$ (this vector space has dimension four). I'd like to understand this isomorphism better in the context of the Narasimhan--Seshadri theorem.

First, fix a closed topological surface $X$ of genus two. Let: $$M_{X,SU(2)}=\operatorname{Hom}(\pi_1(X),SU(2))//SU(2)$$ denote the $SU(2)$ character variety of $X$. For a complex structure $\sigma$ on $X$, let $M_{X,\sigma,\operatorname{rk}2}$ denote the moduli space of rank two vector bundles on $X$ with trivial determinant (actually, there is a technical stability/equivalence relation which should be included, but I will ignore this). According to the Narasimhan--Seshadri theorem, $M_{X,SU(2)}$ and $M_{X,\sigma,\operatorname{rk}2}$ are naturally diffeomorphic (again, there are some qualifications to this which I am ignoring; in particular both spaces are usually singular).

Now I want to recall the isomorphism $M_{X,\sigma,\operatorname{rk}2}\to\mathbb PH^0(\operatorname{Pic}^1X,\mathcal L_\Theta^{\otimes 2})$. To any rank two vector bundle $E$, we consider the subvariety $C_E$ of $\operatorname{Pic}^1X$ consisting of bundles $\xi$ for which there is an exact sequence: $$0\to\xi\to E\to\xi^{-1}\to 0$$ (that is, $E$ is an extension of $\xi^{-1}$ and $\xi$). Then Narasimhan and Ramanan prove that $C_E$ is a divisor on $\operatorname{Pic}^1X$ and is linearly equivalent to $2\Theta$. This gives a map $M_{X,\sigma,\operatorname{rk}2}\to\mathbb PH^0(\operatorname{Pic}^1X,\mathcal L_\Theta^{\otimes 2})$, which Narasimhan and Ramanan go on to show is an isomorphism. (That was only a rough outline.)

OK, now let's reinterpret the map $M_{X,\sigma,\operatorname{rk}2}\to\mathbb PH^0(\operatorname{Pic}^1X,\mathcal L_\Theta^{\otimes 2})$ in terms of the Narasimhan-Seshadri theorem. Remember that $\operatorname{Jac}X$ is $\operatorname{Hom}(\pi_1(X),U(1))$. Thus for a homomorphism $\rho:\pi_1(X)\to SU(2)$ (corresponding to a vector bundle of rank two), we can define the subvariety $C_E$ as those $U(1)$ representations $\alpha:\pi_1(X)\to U(1)$ for which $\rho$ can be conjugated to have the form: $$\left(\begin{matrix}\alpha&\beta\cr 0&\alpha^{-1}\end{matrix}\right)$$ This is a subset of $\operatorname{Hom}(\pi_1(X),U(1))=\operatorname{Jac}X$. Now according to Narasimhan and Ramanan, it should be a subvariety of $\operatorname{Jac}X$ for any complex structure on $X$. This seems a bit unlikely to me, because there is a large moduli of complex structures on $X$. Also, somehow I've constructed naturally $C_E\subseteq\operatorname{Jac}X$, but according to the construction in Narasimhan and Ramanan I should be getting $C_E\subseteq\operatorname{Pic}^1X$, which is really not the same thing canonically. I suppose I'm getting confused in applying the Narasimhan-Seshadri theorem. Any assistance is appreciated!

Tags: ag.algebraic-geometry, algebraic-curves, vector-bundles, character-varieties

1 Answer

Dear unknown, by NR, there exists always a family of degree -1 line subbundles for every holomorphic vector bundle of rank two with trivial determinant on a genus 2 surface. As they have degree -1, they do not admit flat connections.
This means that when your representation $\rho$ has the upper triangular form you wrote down, the representation $\alpha$ will correspond to a degree $0$ subbundle $L$ and not to a degree $-1$ subbundle. Of course, this implies that your bundle $E$ is only semi-stable and not stable, or equivalently, $\rho$ is reducible, which is not the generic case. (In that case, $C_E$ can be explicitly computed as $L\Theta\cup L^*\Theta$ which clearly depends on the Riemann surface structure.)

Comments:

- Thanks, I understand now. Is there a way of describing the degree -1 line subbundles in terms of the SU(2) representation? – John Pardon Aug 8 '12 at 16:59
- I think there is no way because that would give an explicit identification of the character variety with the moduli space of (semi)stable holomorphic structures. This seems to be as difficult as computing the monodromy of a given flat connection (e.g. a Fuchsian system) explicitly. – Sebastian Aug 8 '12 at 17:22
- You may find it very instructive to work out the identification from the NS theorem in the case of genus one and structure group $SU(2)$. – Peter Dalakov Aug 8 '12 at 18:10
{"url":"http://mathoverflow.net/questions/104254/rank-two-vector-bundles-on-a-curve-of-genus-two?sort=newest","timestamp":"2014-04-16T11:13:19Z","content_type":null,"content_length":"57019","record_id":"<urn:uuid:b098c0c4-234a-4b16-87db-ab764ca5f442>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Book II [ Book I | Main Euclid page | Book III ] Book II Byrne's edition - page by page 51 52-53 54-55 56-57 58-59 60-61 62-63 64-65 66-67 68-69 70 Proposition by proposition With links to the complete edition of Euclid with pictures in Java by David Joyce, and the well known comments from Heath's edition at the Perseus collection of Greek classics. David Joyce's Introduction to Book II Definitions from Book II Byrne's edition - Definition 1 Byrne's edition - Definition 2 David Joyce's Euclid Heath's comments Proposition II.1 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.2 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.3 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.4 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.5 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.6 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.7 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.8 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.9 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.10 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.11 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.12 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.13 Byrne's edition David Joyce's Euclid Heath's comments Proposition II.14 Byrne's edition David Joyce's Euclid Heath's comments
{"url":"http://www.math.ubc.ca/~cass/euclid/book2/book2.html","timestamp":"2014-04-19T17:22:06Z","content_type":null,"content_length":"9823","record_id":"<urn:uuid:5eb69223-3598-4d2e-99f6-ae5d993acc04>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Who did Einstein’s Mathematics?: A Response to Troemel-Ploetz

In an article in Time magazine in July 2006 Walter Isaacson, president of the Aspen Institute and former chairman of CNN, stated that Einstein's first wife Mileva Marić was a "Serbian physicist who had helped him with the math of his 1905 [special relativity] paper".[1] From the unequivocal way that this information was presented by Isaacson, readers would be forgiven for assuming that this is a straightforward factual statement. Yet this is far from the case. For a start, the mathematics in the 1905 relativity paper was quite elementary: as Jürgen Renn, an editor of the Albert Einstein Collected Papers, observes, "If he had needed help with that kind of mathematics, he would have ended there."[2] Then there is the fact that, contrary to myth, Einstein was highly proficient at mathematics.

Einstein's precocious talent in mathematics has been recorded by Max Talmey, a medical student who knew the Einstein family when Albert was in his early teens. After Einstein had worked through Euclid by himself around the age of 11, he tackled books on analytical geometry, algebra and calculus, and Talmey reports that soon "the flight of his mathematical genius was so high that I could no longer follow."[3] When he left the Luitpold Gymnasium in Munich at the age of 15 to join his parents who had emigrated to Italy, his mathematics teacher provided him with a letter stating that his mathematical knowledge was already at matriculation level.[4] This letter was instrumental in his being allowed to take the entrance examination for the prestigious Zurich Polytechnic the following year when he was 16, some two years below the normal age.[5] Having spent a year without formal education, he failed this exam, but his grades in physics and mathematics were exceptional.[6] At the end of a year spent at the high school in Aarau in Switzerland to bring his other subjects up to the required standard, his school record shows that, though a year younger than his fellow students, in 1896 he obtained maximum grades in geometry, arithmetic and algebra.[7] Despite neglecting mathematics to follow his extra-curricular interests in physics, in the mathematical component of the final examination for the physics and mathematics teaching Diploma at the Zurich Polytechnic he achieved grade 11 (maximum 12).[8]

Set against this is the fact that, although she graduated from her Swiss high school with excellent grades in mathematics, at the Zurich Polytechnic Mileva Marić fared rather less well. Her yearly grades were moderately good,[9] but she struggled with the geometry course taught by Wilhelm Fiedler,[10] and obtained only grade 5 (on a scale 1-12) in the mathematics component (theory of functions) of her final diploma examination, less than half that of the other four candidates in their group.[11] Almost certainly her poor mathematics grades were the reason for her failing to be awarded a diploma in 1900 and again in 1901.[12]

The above information alone suffices to dispose of the notion that Einstein would have needed help with the rather elementary algebra and calculus he used in his 1905 special relativity paper, and further confirmation comes in the glowing report on his mathematical abilities in the "Expert Opinion" on his Ph.D. thesis submitted to Zurich University in 1905.
The Professor of Physics Alfred Kleiner wrote: “The arguments and calculations to be carried out are among the most difficult ones in hydrodynamics, and only a person possessing perspicacity and training in the handling of mathematical and physical problems could dare to tackle them.” The mathematical difficulties were such that the opinion of Professor of Mathematics Heinrich Burkardt was sought, and he reported that he found Einstein’s calculations “correct without exception, and the manner of treatment demonstrates a thorough command of the mathematical methods involved”(emphasis in original).[13] So how did the notion that Mileva Marić assisted Einstein with the mathematics of the 1905 special relativity paper (and much more) become widely circulated? The most likely direct source of the claim is a paper published in 1990 by the linguist Senta Troemel-Ploetz with the title “Mileva Einstein-Marić: The Woman Who Did Einstein’s Mathematics”,[14] and it seems that in our era of mass communications it is only necessary to make such claims in the public domain for them to become widely accepted regardless of the paucity of the evidence. And the evidence provided by Troemel-Ploetz is very feeble indeed, and, as we shall see, is almost entirely dependent on the highly unreliable claims of Marić’s Serbian biographer Desanka Trbuhović-Gjurić.[15] In the course of her article Troemel-Ploetz falsely describes Marić as “a mathematician”, and even inflates Marić’s abilities to that of a “mathematical genius”, (pp. 420, 421) while correspondingly depreciating Einstein’s. Nowhere does she cite the fact that Marić badly failed the mathematics component of the Zurich Polytechnic teaching diploma, though at the time her article was published this information was available in the first volume of the Einstein Collected Papers (which she actually cites elsewhere in her article in a different context [p. 417]). Nor is she able to cite a single documented example of Marić’s achievements in mathematics other than in the course of her education – her evidence lies elsewhere. But first let’s look at the evidence she provides for Einstein’s supposed relatively poor mathematical ability. First part of the case made by Troemel-Ploetz One part of the case made by Troemel-Ploetz consists of a purported demonstration that Einstein was a poor mathematician. For instance, she states (p. 420) that Einstein “needed at various points someone ‘to solve his mathematical problems’.” She continues, starting with a quote attributed to Einstein: “I encountered mathematical difficulties which I cannot conquer. I beg for your help, as I am apparently going crazy” (Trbuhović-Gjurić, 1983, p. 96) he wrote to a friend Marcel Grossman, who then helped him. Now Trbuhović was in error when she stated that this quotation comes from a letter Einstein wrote to Grossman (an old friend of Einstein’s from his student days who had become professor of mathematics at Zurich University) – it comes from a report by Louis Kollros, another of Einstein’s old student friends, of something Einstein said to Grossman after they had met up again when Einstein returned to Zurich in late 1912 to take up a post at Zurich Polytechnic (now ETH).[16] (I leave aside that the quotation is an embellished version. In common with many of the quotations in Trbuhović’s book, it is not specifically referenced, so it is impossible to know where she got it from, or how accurately she has reproduced it from that source.) 
More important, the claims of Troemel-Ploetz (following Trbuhović) that Einstein’s reported words reveal his general dependency on other people for solving mathematical problems only serves to illustrate her ignorance of Einstein’s actual achievements, and the reason he requested help from Grossman. In 1912 Einstein had reached a stage in his attempts to develop a theory which incorporates accelerated systems into a general theory of relativity for which he required an esoteric branch of mathematics involving tensor calculus. His old friend Grossman was able to seek out for him what he needed, and to provide assistance in applying it to the work Einstein was doing. That this help was needed illustrates the difficult level of mathematics necessary for the purpose, not that Einstein was weak in mathematics. In fact in a letter supporting Einstein’s candidacy for a chair of mathematical physics at ETH (previously Zurich Polytechnic) the year before, Marie Curie had written that she believed that “mathematical physicists are at one in considering his work as being in the first rank”[17] (Curie had met Einstein at the 1911 Solvay Conference, to which he had been invited most of the leading European physicists, including Nernst, Planck, Lorentz, Poincaré, Rutherford and de Broglie.) Troemel-Ploetz opens her article with a reference to Trbuhović-Gjurić’s biography of Mileva Marić, and much of what follows is based on claims made in that volume. However, as I have noted elsewhere, [18] most of Trbuhović’s contentions are based on third or fourth-hand reminiscences of friends and acquaintances of the Marić family and remaining family members, reported more than 50 years after the events in question, with all the unreliability and inaccuracies inherent to such recollections. Having introduced Trbuhović’s book, Troemel-Ploetz immediately reports (p. 415) that “Einstein’s admission, ‘My wife does my mathematics,’ is general knowledge at the ETH in Zurich…”. The “admission” alluded to is a paraphrased version of words that Trbuhović claims were uttered by Einstein (of which more below), but what is interesting is that Troemel-Ploetz clearly implies that the “general knowledge” is recognized as a joke – “…although it serves only as a starter for jokes along the same lines”– and one can imagine Einstein self-deprecatingly making such a quip. Later in the article (p. 418) Troemel-Ploetz gives what is presumably the original source of her paraphrased quotation: “He [Einstein] told a group of Serbian intellectuals in 1905: ‘I need my wife. She solves all the mathematical problems for me’ (Trbuhović-Gjurić, 1983, p. 106).” This is stated as if it were a documentable fact. Examining the source one finds that the words reported by Trbuhović supposedly were said by Einstein at a reunion of young intellectual friends of Miloš Marić, brother of Mileva, at some unspecified occasion on which Einstein was supposedly present. The report apparently comes from one Dr Ljubomir-Bata Dumić (of whom no information is supplied by Trbuhović), who is also quoted as having written: We raised our eyes towards Mileva as to a divinity, such was her knowledge of mathematics and her genius… Straightforward mathematical problems she solved in her head, and those which would have taken specialists several weeks of work she completed in two days… We knew that she had made [Albert], that she was the creator of his glory. She solved for him all his mathematical problems, particularly those concerning the theory of relativity. 
Her brilliance as a mathematician amazed us.[19] I leave readers to decide on the reliability of such reminiscences from a proud fellow-Serb. As supposed evidence for Einstein’s serious mathematical limitations, Troemel-Ploetz writes (p. 421) that “it is interesting to look at some self-evaluations of Albert Einstein before he had to play the role [sic] of genius of the century”, and she provides an extract from a passage that Trbuhović quotes from Einstein’s late “Autobiographical Sketch”[20]: …higher mathematics didn’t interest me in my years of studying. I wrongly assumed that this was such a wide area that one could easily waste one’s energy in a far-off province. Also, I thought in my innocence that it was sufficient for the physicist to have clearly understood the elementary mathematical concepts and to have them ready for application while the rest consisted of unfruitful subtleties for the physicist, an error which I noticed only later. My mathematical ability was apparently not sufficient to enable me to differentiate the central and fundamental concepts from those that were peripheral and unimportant. (Trbuhović-Gjurić, 1983, p. 47) In her ignorance of the subject matter, Troemel-Ploetz fails to understand that by the standards necessary for most of physics at that time, Einstein’s knowledge of, and ability at, mathematics was extremely good. What he is doing here is explaining why, when he was a student at Zurich Polytechnic, he neglected to investigate more advanced pure mathematics. He expresses this perhaps more clearly in the “Autobiographical Notes” (1979 [1949]) that he contributed to the volume Albert Einstein: Philosopher-Scientist (1949). After reporting that “At the age of twelve through sixteen I familiarized myself with the elements of mathematics together with the principles of differential and integral calculus”, he said of his time at Zurich Polytechnic: There I had excellent teachers (for example, Hurwitz, Minkowski), so that I should have been able to obtain a mathematical training in depth…The fact that I neglected mathematics to a certain extent had its cause not merely in my stronger interest in the natural sciences than in mathematics but also in the following peculiar experience. I saw that mathematics was split up into numerous specialties, each of which could easily absorb the short lifetime granted to us. Consequently, I saw myself in the position of Buridan’s ass, which was unable to decide upon any particular bundle of hay. Presumably this was because my intuition was not strong enough in the field of mathematics to differentiate clearly the fundamentally important, that which is really basic, from the rest of the more or less dispensable erudition. Also, my interest in the study of nature was no doubt stronger; and it was not clear to me as a young student that access to a more profound knowledge of the basic principles of physics depends on the most intricate mathematical methods. This dawned upon me only gradually after years of independent scientific work.[21] To put this more specifically, in the decade after graduating from the Polytechnic the mathematical knowledge he had acquired sufficed for his purposes. It was only then that he found he had need of more specialist fields of mathematics if he were to make progress with developing his general theory of relativity. 
Misinterpreting the words of Einstein’s she has quoted as indicating that he regarded himself as weak in mathematical ability, Troemel-Ploetz goes on to assert that “others agreed with his evaluation”. She then quotes (translating from Trbuhović [1983]) a Zurich Polytechnic professor, Jean Pernet, saying to Einstein: “Studying physics is very difficult. You don’t lack diligence and good will but simply knowledge. Why don’t you study medicine, law, or literature?” As is frequently the case, Trbuhović provides no reference for this quotation, and its source has to be hunted down to examine the context (and the accuracy) of the report. Evidently it comes originally from a commemorative article written by a former student at Zurich Polytechnic at the time Einstein studied there, Margarete von Üxküll.[22] (According to the Einstein biographer Carl Seelig, Einstein told the story to Üxküll some thirty years after the event,[23] and it was recalled some years later, so the accuracy of the quotation cannot be regarded as reliable.) Missing from Trbuhović’s reporting of Pernet’s words is the fact that Einstein was out of sympathy with the teaching methods of the professor in question; he frequently skipped Pernet’s classes (among others) to follow up his own extra-curricular interests in physics, and received an official reprimand on the instigation of Pernet.[24] Evidently Einstein’s independent attitude provoked Pernet into making the disparaging comments to him, so obviously at variance with Einstein’s later achievements. Troemel-Ploetz (p. 421) now recounts that a former student of Einstein’s recalled an occasion when he “got stuck in the middle of a lecture missing a ‘silly mathematical transformation’ which he couldn’t figure out.” He told the class to leave a space and just gave them the final result. “Ten minutes later he discovered a small piece of paper and put the transformation on the blackboard, remarking, ‘The main thing is the result not the mathematics, for with mathematics you can prove anything’. (Trbuhović-Gjurić, 1983, p. 88).” Though Trbuhović provided no reference for this report to enable its accuracy to be checked, she cites Dr Hans Tanner as the source. Fortunately a lengthy quotation from Tanner’s recollections of Einstein is provided by Seelig in his biography of Einstein.[25] The first thing to note is that there is no mention of Einstein’s discovering “a small piece of paper” in Tanner’s account of the incident in question (the only one of its kind he could recall). On the contrary, he says: “Some ten minutes later Einstein interrupted himself in the middle of an elucidation. ‘I’ve got it.’…During the complicated development of his theme he had still found time to reflect upon the nature of that particular mathematical transformation. That was typical of Einstein.” So whence comes the piece of paper? A couple of paragraphs earlier Tanner had reported that in the lectures given by the newly appointed Einstein as professor of theoretical physics at Zurich University in 1909, “The only script he carried was a strip of paper the size of a visiting card on which he had scribbled what he wanted to tell us. Thus he had to develop everything himself and we obtained some insight into his working technique.” It is evident that Trbuhović garbled the account, so that she erroneously has the piece of paper playing a role in the classroom incident she recounts. 
The next thing of note is that the words “The main thing is the result… with mathematics you can prove anything” was not reported by Tanner in the context of the incident Trbuhović recounts, but in a completely different social setting, when Einstein had invited some of his students to return with him to his apartment to examine some work he had received from Planck in which he had perceived there had to be a mistake. Tanner was one of two students who accepted the invitation, and who told Einstein that they could find no error and that he must be mistaken. Einstein responded by pointing out why, on the grounds of “a simple dimensional datum”, there must be an error somewhere. When Tanner suggested writing to Planck to inform him of the mistake, Einstein reportedly said: “…we won’t write and tell him that he’s made a mistake. The result is correct, but the proof is faulty. We’ll simply write and tell him how the real proof should run.” It is at this point he is reported as having said: “The main thing is the content, not the mathematics. With mathematics one can prove anything.” This puts a very different complexion on Einstein’s latter remark than that which Troemel-Ploetz presents. Equally important, here we have an instance where we are able to check Trbuhović’s report, uncritically recycled by Troemel-Ploetz, and find that it misrepresents the context of Einstein’s remark about mathematics. (This leaves aside that we cannot be sure of the accuracy of the reported words, recalled many years after the event.) Troemel-Ploetz, however, having misinterpreted the quotation in question as a further indication of Einstein’s supposed deficiencies in mathematics, follows it with the evidence-free assertion that he “did not have to worry about the [mathematical] proofs because Mileva Einstein-Marić was doing them.” Summing up this passage in Troemel-Ploetz’s article, she is recycling an unreferenced report by Trbuhović which is both inaccurate and also misrepresents the context of the quoted remark attributed to Einstein. As a result she completely fails to understand the rationale of the remark from a scientific point of view. This is a further illustration of how unreliable are the numerous unverifiable quotations Trbuhović sprinkles throughout her book – she cannot even be relied upon to recount accurately the reports she is reproducing for her readers (frequently themselves from an unreliable third-hand source). Yet Troemel-Ploetz relies heavily on Trbuhović for the great bulk of the evidence that she provides to support her central thesis. More direct evidence (allegedly) Continuing our examination of Troemel-Ploetz’s case, she writes (pp. 419-420) that a biographer of Einstein, Peter Michelmore, who “had much information from Albert Einstein”, said: “Mileva helped him solve certain mathematical problems. She was with him in Bern and helped him when he was having such a hard time with the theory of relativity.” (Trbuhović-Gjurić, 1983, p. 72 [1991, p. 103]) Consulting the citation Troemel-Ploetz provides, one finds that only the first sentence of the words attributed to Michelmore are given by Trbuhović; the rest is added by Troemel-Ploetz herself. The first quoted sentence certainly can be found in Michelmore’s book (though, characteristically, no page reference is given by Trbuhović). It occurs in the middle of a somewhat imaginative account of the period encompassing Einstein’s production of the celebrated papers of 1905. 
According to Michelmore, after the publication of the paper on the photoelectric effect Einstein wrestled with the problem of relativity: “Frustration drove him to wander the farm lands around Berne. He took time off from the office. Mileva helped him solve certain mathematical problems, but nobody could assist with the creative work, the flow of fresh ideas.”[26] Michelmore provides no evidence for his claim that Marić helped Einstein solve mathematical problems, nor does he give the least indication what these might be. (Recall that nothing in the mathematics that he required for his work at that time would have taxed Einstein’s knowledge and abilities.) Earlier Michelmore had made assertions relevant to this issue that are manifestly false. He writes, referring to Marcel Grossman, who was in Einstein’s group at Zurich Polytechnic, but majored in mathematics: “Generously, Grossman took detailed notes on all lectures and drummed them into Einstein at the week-ends… His [Einstein’s] other close friend was Mileva Maric… She was as good at mathematics as Marcel and she, too, helped in the week-end coaching sessions.”[27] Most of this is imaginative fiction. The only time Einstein made use of Grossman in this way was immediately prior to his diploma examinations, when he borrowed his meticulous notes for self-study. [28] This puts the notion that Marić assisted in these supposedly regular weekend sessions well into the realms of fiction. (If anything, the indications are that it was Einstein who assisted Marić in her studies: In a letter in December 1901 Einstein wrote to her: “Soon you’ll be my ‘student’ again, like in Zurich.”[29]) Even more fantastical is the assertion that Marić was as good at mathematics as Grossman. This is negated by a comparison of their respective grades at both intermediate and final diploma examinations: Marić received lower grades than Grossman in every single mathematics topic that they both took for these exams.[30] Moreover, whereas Marić failed her diploma exam, almost certainly because of her poor mathematics grade, Grossman went on to become a professor of mathematics at Zurich Polytechnic at the early age of 29. He also, of course, assisted Einstein in the application of highly abstruse mathematics to general relativity theory. Clearly Michelmore is not a reliable source of information about any supposed contribution Marić made to Einstein’s mathematical work. The assertion by Troemel-Ploetz that he “had much information from Albert Einstein” is erroneous. The book was published some seven years after Einstein’s death, and in his “Author’s Note” Michelmore makes no mention of ever having met Einstein. He did spend two days interviewing Einstein’s elder son, but acknowledges that neither his notes, nor the book manuscript, were checked for accuracy by Hans Albert Einstein.[31] In any case, Hans Albert was an infant at the time Einstein wrote his 1905 papers, and could not have passed on any first-hand knowledge of relevant events. As we have seen from the above material, Michelmore’s account is too unreliable to take from it any definitive statement about alleged contributions by Marić to Einstein’s mathematical work. One may add that Michelmore’s propensity to invent dialogue disqualifies his book as a serious work of biography. 
For instance, he has Einstein saying, at the end of the evening when Einstein had a crucial discussion with his friend Michele Besso prior to his breakthrough to the special theory of relativity: “I’ve decided to give it up – the whole theory.”[32] This is totally at variance with Einstein’s own account, in which he reports how Besso’s perspicacious contributions led, that evening, to his coming to understand where the key to the problem lay.[33]

Troemel-Ploetz next cites (p. 420) the great mathematician Hermann Minkowski, one of Einstein’s professors at Zurich Polytechnic, who, she writes, “knew him well and was his friend”, and who is reported as having remarked to Max Born in relation to Einstein’s producing the theory of [special] relativity: “This was a big surprise to me because Einstein was quite a lazybones and wasn’t at all interested in mathematics” (Trbuhović-Gjurić, 1983, p. 47 [1991, p. 104]) In her book Trbuhović cites Carl Seelig for this quotation, and in fact it can be found in Seelig’s biography. (The English language edition has a slightly different translation of Minkowski’s words.)[34] Leaving aside the erroneous assertion that Minkowski knew Einstein well as a friend (he was at Göttingen University in Germany from 1902 until his death in 1909, and they scarcely met or corresponded), his reportedly saying of Einstein that “he never bothered about mathematics at all” is consistent with what we know – that Einstein neglected mathematical studies at Zurich Polytechnic, preferring to spend his time on his own extracurricular interests in physics. It bears not at all on the issue of Einstein’s ability to make use of mathematics when he needed it.

This is followed by a statement (p. 420) that “Bodanović, a mathematician in the Ministry of Education in Belgrade who was well acquainted with Mileva Einstein-Marić, is reported to have said that she had always known that Mileva Einstein-Marić had helped her husband a great deal, especially with the mathematical foundations of his theory, but Mileva Einstein-Marić had always avoided talking about it (Trbuhović-Gjurić, 1983, p. 164).” One wonders what value one should put on something that someone is reported to have said by another party about information she was not privy to, and which the person concerned had not spoken about! Consulting Trbuhović’s book we find that she actually claims that Milica Bodanović recalled that it was Malvina Gogić, a mathematics inspector at the Ministry of Education at Belgrade, who was the one who reportedly had said that Marić helped with the mathematical foundations of his theory [what theory?], but that Marić refused to talk about it.[35] But much more important than this minor error is the fact that Troemel-Ploetz should deem it worth recycling a report of such vagueness and doubtful reliability as if it were of genuine evidential value. (Alberto Martinez places such reports at the very bottom of a twenty point scale of historical reliability in his article on “Handling evidence in history”.[36]) Troemel-Ploetz naturally reports (p.
419) the (erroneous) claim made by Trbuhović that the Soviet physicist Abram Joffe “wrote in his Errinnerungen an Albert Einstein (Joffe 1960) that the original manuscripts were signed Einstein-Marić”.[37] In fact an examination of what Joffe actually wrote shows that he does not say he had seen the original manuscripts, as both Martinez and Stachel have demonstrated.[38] In any case, as Stachel writes, how do we get from the claim that the three articles cited by Joffe had one signature to the claim that this one signature represents two authors? On the basis of this false information Troemel-Ploetz had earlier (p. 418) written in relation to Einstein: “Why did he not immediately insist on a correction when Mileva Einstein-Marić’s name was dropped as an author of the articles that appeared in the Leipzig Annalen der Physik?” In addition to the points made above, Stachel notes that the three papers in question contain many authorial comments in the first person singular. This means that, were one to accept Troemel-Ploetz’s underlying assumption here, the distinguished editors of the Annalen der Physik (Max Planck and Paul Drude) would have had not merely to omit a co-author’s name, they would have had to have made appropriate changes of first person plural pronouns to first person singular throughout the articles. It is also worth observing that physics papers co-authored by spouses would not have set a precedent; Marie and Pierre Curie had published such papers, and together had been awarded a share in the 1903 Nobel Prize for physics. (For a comprehensive refutation of all the claims made by Trbuhović and others in relation to Joffe, readers should consult Stachel’s editorial Introduction to the 2005 edition of Einstein’s Miraculous Year: Five Papers That Changed the Face of Physics, pp. liv-lxxii.) A full critique of the whole of Troemel-Ploetz’s article would take many more words, and be on much the same lines as the above. (Some additional items have been examined in my article “Mileva Marić: Einstein’s Wife”: http://www.esterson.org/milevamaric.htm.) But it is worth looking at just one more passage (p. 420), in which Troemel-Ploetz translates the words of Trbuhović (1983) commenting on the 1905 special relativity paper:[39] It’s so pure, so unbelievably simple and elegant in its mathematical formulation – of all the revolutionary progress physics has made in this century, this work is the greatest achievement. Even today when reading these yellowing pages printed almost 80 years ago, one feels respect and cannot but be proud that our great Serbian Mileva Einstein-Marić participated in the discovery and edited them. Her intellect lives in those lines. In their simplicity, the equations show almost beyond a doubt the personal style she always demonstrated in mathematics and in life in general. Her manner was always devoid of unnecessary complications and pathos. As Fölsing points out,[40] there is not a single known document containing any mathematical work by Marić for us to compare with the paper in question, so Trbuhović’s statement that the equations show almost beyond a doubt Marić’s personal style inhabits the realms of fantasy. That Troemel-Ploetz recycles it uncritically is one more illustration of the unscholarly nature of her article. 
Most egregiously, she repeatedly reproduces Trbuhović’s reports without any attempt to check sources to judge their accuracy or reliability, and fails to raise even the faintest question mark about the reliability of Trbuhović’s numerous unverifiable third-hand reports obtained many decades after the events in question and provided by far from disinterested sources. One can only arrive at the conclusion that her deeply flawed article does not remotely bear out her claims about Marić’s alleged contribution to Einstein’s mathematical work. For further discussion of the issues raised in Troemel-Ploetz’s article, including a few not touched upon above, readers should consult the comprehensive articles in John Stachel’s book Einstein from ‘B’ to ‘Z’ , pp. 26-38, 39-55. November 2006 NOTES (Citations refer to books and articles listed in the Bibliography.) 1. Time, 12 July 2006 2. Quoted in Highfield, R. and Carter, P. (1993), pp. 114-115. 3. Talmey, M. (1932), pp. 162-164. 4. Reiser, A. (1930), pp. 42-43; Frank (1948), p. 27. 5. Collected Papers Vol.1 [Eng. trans], 1987, p. 7. 6. Fölsing, A. (1997), p. 37. 7. Collected Papers Vol. 1 [Eng. trans.], 1987, pp. 9-10. 8. Collected Papers Vol. 1 [Eng. trans.], 1987, p. 141. 9. Trbuhović-Gjurić, D. (1983), p. 43; Trbuhović-Gjurić, D. (1991), pp. 49-50. 10. Renn, J. & Schulmann, R. (1992), p. 12. 11. Collected Papers Vol. 1 [Eng. trans.], 1987, p. 141. 12. Stachel, J. (2002), p. 29. 13. Collected Papers, Vol. 5 [Eng. trans.], 1995, pp. 22-23. 14. Troemel-Ploetz, S. (1990). Women’s Studies Int. Forum, 13(5), pp. 415-432. 15. Trbuhović-Gjurić, D. (1983). Im Schatten Albert Einsteins: Das tragische Leben der Mileva Einstein-Marić. Bern: Paul Haupt [German translation of the original book by D. Trbuhović-Gjurić, published in Yugoslavia in 1969]; Trbuhović-Gjurić, D. (1991). Mileva Einstein: Une Vie (trans. from the German). Paris: Antoinette Fouque. 16. Pais, A. (1983), pp. 212, 226n; Fölsing, A. (1997), pp. 314; 778, n.45. 17. Clark, R. W. (1971), p.191. 18. Esterson, A. (2006). Mileva Marić: Einstein’s Wife 19. Trbuhović-Gjurić, D. (1983), p. 93; (1991), p. 106 [my translation – A. E.]. 20. Einstein, A. (1956 [1954]). “Autobiographische Skizze.” In C. Seelig (ed.), Helle Zeit – Dunkle Zeit: In memoriam Albert Einstein, Zurich, 1956. 21. Einstein, A. (1979 [1949]), p. 15. 22. Clark, R. 1971, pp. 61, 788n. 23. Seelig, C. (1956), pp. 40-41. 24. Fölsing, 1997, p. 57. 25. Seelig, C. (1956), pp. 100-106. 26. Michelmore, P. (1962), p. 41. 27. Michelmore, P. (1962), p. 31. 28. Fölsing, A. (1997), pp. 53, 248 n.11. 29. Renn, J. & Schulmann, R. (1992), p. 71. 30. Collected Papers, Vol. 1 [Eng. trans.], 1987, pp. 125, 140; Trbuhović-Gjurić, D. (1991), p. 70. 31. Michelmore, P. (1962), p. ix. 32. Michelmore, P. (1962), p. 41. 33. Fölsing, A. (1997), pp. 155, 176, 177. 34. Seeling, C. (1956), p. 28. 35. Trbuhović-Gjurić, D. (1983), p. 164; (1991), p. 215. 36. Martinez, A. A. (2005), p. 54. 37.. Trbuhović-Gjurić, D. (1983), p. 79; (1991), p. 111. 38. Martinez, A. A. (2005), pp. 51-52; Stachel, J. (2005), pp. liv-lxxii. 39. Trbuhović-Gjurić, D. (1983), p. 71; (1991), p. 109. 40. Fölsing, A. (1990). Keine ‘Mutter der Relativitätstheorie’. Die Zeit, Nr. 47, 16 November 1990. Clark, R. (1971). Einstein: The Life and Times. New York: World Publishing Company. Einstein, A. The Collected Papers of Albert Einstein. Princeton University Press. Einstein, A. (1979 [1949]). “Autobiographical Notes.” Trans. by P. S. Schilpp. La Salle, Illinois: Open Court. Einstein, A. 
(1956 [1954]). “Autobiographische Skizze.” In C. Seelig (ed.), Helle Zeit – Dunkle Zeit: In memoriam Albert Einstein, Zurich: Europa Verlag, 1956. Esterson, A. (2006). “Mileva Maric: Einstein’s Wife” Esterson, A. (2006). Critique of Evan Harris Walker’s Letter in Physics Today, 1991 Frank, P. (1948). Einstein: His Life and Times. London: Jonathan Cape. Fölsing, A. (1990). Keine ‘Mutter der Relativitätstheorie’. Die Zeit, Nr. 47, 16 November 1990. Fölsing, A. (1997). Albert Einstein. (Trans. by E. Osers.) New York: Penguin Books. Highfield, R. and Carter, P. (1993). The Private Lives of Albert Einstein. London: Faber and Faber. Joffe, A. F. (1955). Pamiati Alberta Einsteina. Uspekhi fizicheskikh nauk, 57 (2), 187. Martínez, A. A. (2005). Handling Evidence in History: The Case of Einstein’s Wife. School Science Review, March 2005, 86 (316), pp. 49-56. Michelmore, P. (1962). Einstein: Profile of the Man. New York: Dodd, Mead. Pais, A. (1994). Einstein Lived Here. Oxford University Press. Renn, J. and Schulmann, R. (eds.) (1992). Albert Einstein and Mileva Maric: The Love Letters. Trans. by S. Smith. Princeton University Press. Reiser, A. (1930). Albert Einstein: A Biographical Portrait. New York: Boni. Seelig, C. (1956). Albert Einstein: A Documentary Biography. London: Staples Press. Stachel, J. (1996). Albert Einstein and Mileva Marić: A Collaboration that Failed to Develop. In H. M. Pycior, N. G. Slack, and P. G. Abir-Am (eds.), Creative Couples in the Sciences, Rutgers University Press. Reprinted in Stachel, J. (2002), Einstein from ‘B’ to ‘Z’, Boston/Basel/Berlin: Birkhauser, pp. 39–55. Stachel, J. (2002). Einstein from ‘B’ to ‘Z’. Boston/Basel/Berlin: Birkhäuser. Stachel, J. (ed.) (2005). Einstein’s Miraculous Year: Five Papers That Changed the Face of Physics. Princeton University Press. Talmey, M. (1932). “The Relativity Theory Simplified And the Formative Period of its Inventor.” New York: Falcon Press. Trbuhović-Gjurić, D. (1983). Im Schatten Albert Einsteins: Das tragische Leben der Mileva Einstein-Marić. Bern: Paul Haupt. (The German language edition is an edited version of the book by Trbuhović-Gjurić originally published in Serbo-Croat in Yugoslavia in 1969.) Trbuhović-Gjurić, D. (1991), Mileva Einstein: Une Vie (French translation of Im Schatten Albert Einsteins: Das tragische Leben der Mileva Einstein-Marić). Paris: Antoinette Fouque. Troemel-Ploetz, S. (1990). Mileva Einstein-Marić: The Woman Who Did Einstein’s Mathematics. Women’s Studies International Forum, Vol. 13, No. 5, pp. 415-432.
{"url":"http://www.butterfliesandwheels.org/2006/who-did-einstein-s-mathematics-a-response-to-troemel-ploetz/","timestamp":"2014-04-16T15:59:52Z","content_type":null,"content_length":"64437","record_id":"<urn:uuid:45ed9e05-926b-48c1-96f7-2bf634c951fb>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Does probability of a derangement go up under passing to subgroups?

This is prompted by my attempts to work on this question. Let $H \subset G \subseteq S_d$ be transitive permutation groups. Recall that an element of $S_d$ is called a derangement if it has no fixed points.

Is the proportion of derangements in $H$ always greater than in $G$?

If $H$ doesn't have to be transitive, then the answer is "no"; just let $H$ be trivial. But a quick sampling of examples with $G$ and $H$ both transitive doesn't turn up any counterexamples.

UPDATE: Never mind. For $A_4$ inside $S_4$, the probability of a derangement goes down from $3/8$ to $1/4$.

gr.group-theory co.combinatorics permutation-groups

I'm voting to close as no longer relevant, because, if I understand the MO software correctly, if the question remains open and (as I would expect) no answers (in the sense of the software) are given, then the question will keep reappearing on the front page from time to time. – Andreas Blass Jul 24 '12 at 16:14

@Andreas Blass: I do not think this is the case. I never saw a question without an answer reappear just so. Those that reappear are those with answer(s) that are still in the 'unanswered' category, i.e. no accepted answer and none with positive (or perhaps +2) score. (I once spent some time to check ca. 2000 items in the unanswered category looking for each question modified by the MO-user, i.e. having been bumped; very few have no answer, and for a couple of those that currently have none I checked, with kind help of Felipe Voloch, that they used to have one.) – quid Jul 24 '12 at 19:18

closed as no longer relevant by Andreas Blass, Qiaochu Yuan, j.c., Will Sawin, Noah Stein Jul 24 '12 at 17:39
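The $A_4 \subset S_4$ numbers in the UPDATE are easy to confirm by brute force. Below is a minimal Python sketch (the helper names are illustrative only; parity is tested by counting inversions, so the even permutations make up $A_4$):

```python
from itertools import permutations

def is_even(p):
    # even permutation <=> even number of inversions
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2 == 0

def derangement_fraction(group):
    der = [p for p in group if all(p[i] != i for i in range(len(p)))]
    return len(der), len(group)

S4 = list(permutations(range(4)))
A4 = [p for p in S4 if is_even(p)]

print(derangement_fraction(S4))  # (9, 24)  -> 3/8
print(derangement_fraction(A4))  # (3, 12)  -> 1/4
```

The three derangements in $A_4$ are exactly the double transpositions, while $S_4$ also picks up the six $4$-cycles.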
{"url":"http://mathoverflow.net/questions/103009/does-probability-of-a-derangement-go-up-under-passing-to-subgroups","timestamp":"2014-04-16T10:41:41Z","content_type":null,"content_length":"42734","record_id":"<urn:uuid:6df102f2-9703-4968-bb9e-784c8af3fdba>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Franklin, MA Math Tutor Find a Franklin, MA Math Tutor ...I hold both a master's and a bachelor's degree in economics. In addition to math, I have also taught economics both at the high school and community college level.I have taught Algebra I at the honors and standard levels for almost 15 years. It is one of my favorite subjects to teach and tutor. 6 Subjects: including trigonometry, algebra 1, algebra 2, geometry ...I eat roadblocks for breakfast! Your child will learn the paths around these blocks as I give them tools to solve problems they feel are impossible. As they see small victories, they find their steps around these road blocks, and they leave them behind! 10 Subjects: including algebra 1, prealgebra, SAT math, ACT Math ...There is hard work involved, but now you know, you will be in a position to understand, your subject matter better, if this is Mathematics or Science or Engineering (college students) and get results! The practice sessions, I shall plan for you, will be straight to the point and will yield best ... 6 Subjects: including precalculus, algebra 1, trigonometry, prealgebra ...With all students, I am excited to use student-centered approaches to encourage critical thought and facilitate academic success. In other words, I love to teach! I love getting to know my students, and helping them succeed. 16 Subjects: including SAT math, algebra 1, elementary (k-6th), grammar ...In many cases I will break the learning to the most basic elements or at a concrete level and utilize a parts to whole approach. I taught students with special needs in my classroom for over 20 years and also worked professionally for Special Olympics, giving me a wealth of experience. I was a high school basketball coach at Dartmouth High School for approximately 15 years. 31 Subjects: including algebra 2, English, precalculus, algebra 1 Related Franklin, MA Tutors Franklin, MA Accounting Tutors Franklin, MA ACT Tutors Franklin, MA Algebra Tutors Franklin, MA Algebra 2 Tutors Franklin, MA Calculus Tutors Franklin, MA Geometry Tutors Franklin, MA Math Tutors Franklin, MA Prealgebra Tutors Franklin, MA Precalculus Tutors Franklin, MA SAT Tutors Franklin, MA SAT Math Tutors Franklin, MA Science Tutors Franklin, MA Statistics Tutors Franklin, MA Trigonometry Tutors Nearby Cities With Math Tutor Attleboro Math Tutors Bellingham, MA Math Tutors Cumberland, RI Math Tutors Foxboro, MA Math Tutors Mansfield, MA Math Tutors Medway, MA Math Tutors Milford, MA Math Tutors Needham, MA Math Tutors Norfolk, MA Math Tutors North Attleboro Math Tutors Norwood, MA Math Tutors Plainville, MA Math Tutors Walpole, MA Math Tutors Woonsocket, RI Math Tutors Wrentham Math Tutors
{"url":"http://www.purplemath.com/Franklin_MA_Math_tutors.php","timestamp":"2014-04-16T13:25:06Z","content_type":null,"content_length":"23888","record_id":"<urn:uuid:9dfe8633-ab07-4ead-896c-ced23ed9b08f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
mathematical induction for an inequality

February 3rd 2010, 06:16 PM
mathematical induction for an inequality
I need to show 1^3 + 2^3 + ... + n^3 < (1/2)n^4 for all n in N and n >= 3. I want to use mathematical induction, but I don't know if I need to use the first principle of mathematical induction or the second one?

February 3rd 2010, 06:34 PM
Archie Meade
If $1^3+2^3+....+n^3 < \frac{1}{2}n^4,\ n\ge3$ then hopefully $1^3+2^3+....+n^3+(n+1)^3 < \frac{1}{2}(n+1)^4$

$\frac{1}{2}n^4+(n+1)^3=\frac{1}{2}n^4+n^3+3n^2+3n+1$, which is less than $\frac{1}{2}(n+1)^4$, because $\frac{1}{2}(n+1)^4-\frac{1}{2}n^4=2n^3+3n^2+2n+\frac{1}{2}$ exceeds $n^3+3n^2+3n+1$ whenever $n^3>n+\frac{1}{2}$, which certainly holds for $n\ge3$.

Hence, if $1^3+2^3+...+n^3 < \frac{1}{2}n^4$ then $1^3+2^3+....+(n+1)^3$ is certainly $< \frac{1}{2}(n+1)^4$.

The inductive process is validated for the hypothesis.

True for n=3? $1^3+2^3+3^3=36$ and $\frac{1}{2}3^4=40.5$, so yes.

Proven

February 4th 2010, 08:45 AM
We are to prove that $1^3+2^3+3^3+...+n^3 < \frac{1}{2}n^4$, for $n\geq 3$

The sum of the cube series $1^3+2^3+3^3+...+n^3$ is equal to $[\frac{n(n+1)}{2}]^2$

We can restate the question as follows: $[\frac{n(n+1)}{2}]^2< \frac{1}{2}n^4$, for $n\geq 3$

Base case: For the integer $k=3$, $[\frac{k(k+1)}{2}]^2=36< \frac{1}{2}k^4=\frac{81}{2}$

Induction hypothesis: Suppose that for some integer $k\geq 3$, $k\in \mathbb{N}$, $[\frac{k(k+1)}{2}]^2< \frac{1}{2}k^4$. We must show that $(k+1)^3+[\frac{k(k+1)}{2}]^2< \frac{1}{2}(k+1)^4$.

LHS: $(k+1)^3+[\frac{k(k+1)}{2}]^2$ = $k^3+3k^2+3k+1+\frac{k^2(k+1)^2}{4}$ = $k^3+3k^2+3k+1+\frac{k^2(k^2+2k+1)}{4}$ = $k^3+3k^2+3k+1+\frac{k^4+2k^3+k^2}{4}$ = $\frac{k^4}{4}+\frac{3k^3}{2}+\frac{13k^2}{4}+3k+1$

RHS: $\frac{1}{2}(k+1)^4$ = $\frac{k^4}{2}+2k^3+3k^2+2k+\frac{1}{2}$

Putting LHS and RHS together: $\frac{k^4}{4}+\frac{3k^3}{2}+\frac{13k^2}{4}+3k+1< \frac{k^4}{2}+2k^3+3k^2+2k+\frac{1}{2}$, which simplifies to $\frac{k^2}{4}+k+\frac{1}{2}<\frac{k^4}{4}+\frac{k^3}{2}$

Hence, by the induction hypothesis, $1^3+2^3+3^3+...+n^3 < \frac{1}{2}n^4$, for all $n\geq 3$ in $\mathbb{N}$

February 4th 2010, 11:17 AM
After you put the LHS and RHS together, how do you simplify the inequality? Thanks for your detailed work

February 4th 2010, 12:34 PM
Simple algebra. Move the 1st and 2nd terms to the right hand side, and move the last term from the right hand side to the left, etc.

February 4th 2010, 12:39 PM
question about inequality
Thanks!! it makes sense. My new question is that if what we are trying to prove is the < (inequality), it seems that the RHS < LHS, because some terms are larger than others. However, there are other terms that are smaller than others. So, even though it seems logical that the inequality is true, is it enough to state it like that, or is there a need of more explanation to justify the inequality?

February 4th 2010, 02:09 PM
Quote:
Thanks!! it makes sense. My new question is that if what we are trying to prove is the < (inequality), it seems that the RHS < LHS, because some terms are larger than others. However, there are other terms that are smaller than others. So, even though it seems logical that the inequality is true, is it enough to state it like that, or is there a need of more explanation to justify the inequality?
That's all. If the base case and induction hypothesis are true, then the proposition is true for all positive integers---end of proof.
February 4th 2010, 02:28 PM
Archie Meade
Quote:
Putting LHS and RHS together: $\frac{k^4}{4}+\frac{3k^3}{2}+\frac{13k^2}{4}+3k+1< \frac{k^4}{2}+2k^3+3k^2+2k+\frac{1}{2}$, which simplifies to $\frac{k^2}{4}+k+\frac{1}{2}<\frac{k^4}{4}+\frac{k^3}{2}$ Hence, by the induction hypothesis, $1^3+2^3+3^3+...+n^3 < \frac{1}{2}n^4$, for all $n\geq 3$ in $\mathbb{N}$
If all the steps to here are understandable, inthequestforproofs, then you can say $\frac{k^4}{4} > \frac{k^2}{4}$, since definitely $k^4 > k^2$ for $k>1$.

Also, we can ask... is $\frac{k^3}{2} > k+0.5$ ? So, is $\frac{k^3}{2} > \frac{2k+1}{2}$ ? Is $k^3 > 2k+1$ ? Is $k(k^2) > 2(k+0.5)$ ?

If $k > 2$, then $k^2 > k+0.5$, so indeed $k(k^2) \ge 2k^2 > 2(k+0.5)$, and the required inequality holds.
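For a quick numerical sanity check of both the original statement and the reduced inequality from the induction step, a small Python loop (the range of n tested is arbitrary) will do:

```python
# check 1^3 + 2^3 + ... + n^3 < n^4/2 for n >= 3, plus the reduced step inequality
for n in range(3, 51):
    cubes = sum(k**3 for k in range(1, n + 1))        # equals (n*(n+1)//2)**2
    assert cubes < n**4 / 2, n
    # reduced inequality: k^2/4 + k + 1/2 < k^4/4 + k^3/2
    assert n**2 / 4 + n + 0.5 < n**4 / 4 + n**3 / 2, n
print("both inequalities hold for n = 3 .. 50")
```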
{"url":"http://mathhelpforum.com/discrete-math/127062-mathematical-induction-inequality-print.html","timestamp":"2014-04-19T16:10:53Z","content_type":null,"content_length":"19358","record_id":"<urn:uuid:55719c00-0aad-40e9-b8e8-ee5be7d83315>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof Complexity of Paris-Harrington Tautologies

Seminar Room 1, Newton Institute

We study the proof complexity of Paris-Harrington’s Large Ramsey Theorem for bicolorings of graphs. We prove a conditional lower bound in Resolution and an upper bound in bounded-depth Frege. The lower bound is conditional on a (very reasonable) hardness assumption for a weak (quasi-polynomial) Pigeonhole principle in Res(2). We show that under such an assumption, there is no refutation of the Paris-Harrington formulas of size quasi-polynomial in the number of propositional variables. The proof technique for the lower bound extends the idea of using a combinatorial principle to blow up a counterexample for another combinatorial principle beyond the threshold of inconsistency. A strong link with the proof complexity of an unbalanced Ramsey principle for triangles is established. This is obtained by adapting some constructions due to Erdős and Mills.
{"url":"http://www.newton.ac.uk/programmes/SAS/seminars/2012032611001.html","timestamp":"2014-04-19T17:07:37Z","content_type":null,"content_length":"4372","record_id":"<urn:uuid:67aa85d8-9f3d-4ed8-b60b-67a44df46a9d>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
# --8<--8<--8<--8<--
#
# Copyright (C) 2012 Smithsonian Astrophysical Observatory
#
# This file is part of Math::Rational::Approx::ContFrac
#
# Math::Rational::Approx::ContFrac is free software: you can
# redistribute it and/or modify it under the terms of the GNU General
# Public License as published by the Free Software Foundation, either
# version 3 of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
# -->8-->8-->8-->8--

package Math::Rational::Approx::ContFrac;

use strict;
use warnings;

use Carp;

our $VERSION = '0.01';

use Math::BigFloat;

use Moo;
use MooX::Types::MooseLike::Numeric ':all';
use Params::Validate qw[ validate_pos ARRAYREF ];

use Math::Rational::Approx qw[ contfrac contfrac_nd ];

has x => (
    is       => 'ro',
    isa      => sub { die( "must be a positive number\n" ) unless is_PositiveNum( $_[0] ) },
    required => 1,
);

has n => (
    is       => 'rwp',
    isa      => PositiveInt,
    required => 1,
);

has _terms => (
    is       => 'rwp',
    init_arg => undef,
    default  => sub { [] },
);

sub terms { [ @{ $_[0]->_terms } ] }

has _resid => (
    is       => 'rwp',
    init_arg => undef,
    lazy     => 1,
    builder  => '_build_resid',
);

sub resid { $_[0]->_resid->copy }

sub _build_resid { Math::BigFloat->new( $_[0]->x ) }

sub approx {

    my $self = shift;

    my ( $n ) = validate_pos( @_,
        {   optional  => 1,
            callbacks => {
                'positive integer' => sub { is_PositiveInt( $_[0] ) },
            },
        } );

    $self->_set_n( $self->n + $n )
      if defined $n;

    my ( undef, $x )
      = contfrac( $self->_resid, $self->n - @{ $self->_terms }, $self->_terms );
    $self->_set__resid( $x );

    return contfrac_nd( $self->_terms );
}

1;

__END__

=head1 NAME

Math::Rational::Approx::ContFrac - Rational number approximation via continued fractions

=head1 SYNOPSIS

  use Math::Rational::Approx::ContFrac;

  $x = Math::Rational::Approx::ContFrac->new( x => 1.234871035, n => 10 );
  ( $n, $d ) = $x->approx;

  # continue for an additional number of steps
  ( $n, $d ) = $x->approx( 3 );

=head1 DESCRIPTION

This module is an object oriented front end to the B<contfrac> function
in L<Math::Rational::Approx>.

=head1 INTERFACE

=over

=item new

  $obj = Math::Rational::Approx::ContFrac->new( %attr );

Construct an object which will maintain state for the continued
fraction.  The following attributes are available:

=over

=item x

The number to approximate.  It must be positive.

=item n

The number of terms to generate.  This may be augmented in calls to
the B<approx> method.

=back

=item approx

  ( $n, $d ) = $obj->approx;
  ( $n, $d ) = $obj->approx( $n );

Calculate the continued fraction and return the associated numerator
and denominator.  If C<$n> is not specified, the number of terms
generated is that specified in the call to the constructor, plus any
terms requested by additional calls to B<approx> with C<$n> specified.
C<$n> specifies the number of additional terms to generate beyond what
has already been requested.

=item x

  $x = $obj->x;

The original number to be approximated.

=item n

  $n = $obj->n;

The number of terms generated.

=item terms

  $arrayref = $obj->terms;

Returns an arrayref of the current terms.

=item resid

The residual of the input number as a B<Math::BigFloat> object.  This
is I<not> the difference between the input number and the rational
approximation.
=back

=head1 DEPENDENCIES

=for author to fill in:
    A list of all the other modules that this module relies upon,
    including any restrictions on versions, and an indication whether
    the module is part of the standard Perl distribution, part of the
    module's distribution, or must be installed separately. ]

Math::BigFloat, Moo, MooX::Types::MooseLike::Numeric, Params::Validate

=head1 INCOMPATIBILITIES

=for author to fill in:
    A list of any modules that this module cannot be used in
    conjunction with.  This may be due to name conflicts in the
    interface, or competition for system or program resources, or due
    to internal limitations of Perl (for example, many modules that use
    source code filters are mutually incompatible).

None reported.

=head1 BUGS AND LIMITATIONS

=for author to fill in:
    A list of known problems with the module, together with some
    indication whether they are likely to be fixed in an upcoming
    release.  Also a list of restrictions on the features the module
    does provide: data types that cannot be handled, performance issues
    and the circumstances in which they may arise, practical
    limitations on the size of data sets, special cases that are not
    (yet) handled, etc.

No bugs have been reported.

Please report any bugs or feature requests to C, or through the web
interface at L.

=head1 SEE ALSO

L, L, L, L, L.

=head1 AUTHOR

Diab Jerius E<lt>djerius@cpan.orgE<gt>

=head1 LICENSE AND COPYRIGHT

Copyright (c) 2012 The Smithsonian Astrophysical Observatory

Math::Rational::Approx is free software: you can redistribute it and/or
modify it under the terms of the GNU General Public License as published
by the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program.  If not, see L<http://www.gnu.org/licenses/>.
{"url":"http://cpansearch.perl.org/src/DJERIUS/Math-Rational-Approx-0.01/lib/Math/Rational/Approx/ContFrac.pm","timestamp":"2014-04-24T11:17:20Z","content_type":null,"content_length":"6553","record_id":"<urn:uuid:d1b12c95-9ef6-497b-925d-7012bf4355d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Smallest dimension of nontrivial representation of a simple Lie algebra over $\mathbb{C}$

The question involved here is natural and very classical, but I'm unsure what has been formally stated and proved in the literature. The only approach I know involves assembling facts that apparently weren't all known until the 1970s.

Start with a simple Lie algebra $\mathfrak{g}$ over $\mathbb{C}$, along with some Cartan subalgebra $\mathfrak{h}$ and the resulting weight lattice $X$ and root sublattice $X_r$ inside $\mathfrak{h}^*$ (Bourbaki's $P \supset Q$). Fix simple roots $\alpha_1, \dots, \alpha_\ell$ and corresponding fundamental dominant weights $\varpi_1, \dots, \varpi_\ell$. [The symbol \varpi gives an old version of the handwritten letter pi, for "poids".] Finite dimensional simple modules $L(\lambda)$ are then parametrized by dominant weights $\lambda$ with $L(0)$ the trivial module, while arbitrary finite dimensional modules are direct sums of these. Everything here just depends up to isomorphism on $\mathfrak{g}$, by standard conjugacy theorems.

Now the natural problem is: Find the smallest nontrivial (hence faithful) finite dimensional modules. By complete reducibility, it's enough to consider simple modules. A detailed answer requires the Killing-Cartan classification of simple Lie algebras, but there is a classification-free result: All weights of $L(\lambda)$ lie in a single coset of $X_r$ in $X$, and these weights consist of Weyl group orbits of various dominant $\mu \leq \lambda$ in the standard ordering (possibly with multiplicity $>1$ if $\mu < \lambda$).

(1) Fix a coset not containing 0. Then there is a unique smallest nontrivial $L(\lambda)$ corresponding to a "minuscule" $\lambda$ (always one of the fundamental weights), with weights consisting of the Weyl group orbit of $\lambda$.

(2) Fix the coset containing 0. Then there is a unique smallest nontrivial $L(\lambda)$, whose weights consist of 0 together with the Weyl group orbit of $\lambda$. Here $\lambda^\vee$ is the highest root in the dual root system.

Is this written down and proved somewhere? The history might also be interesting (or just messy). Once this result is in hand, it's not difficult to fill in the details for each simple Lie algebra, but much of that is found only in exercises of books by Bourbaki or me.

REFERENCES: As I commented to Sasha Premet, I've only seen fragments of the story in earlier textbooks. There is a short note by H. Freudenthal in Proc. Amer. Math. Soc. 7 (1956), 175-176, concerning occurrence of the 0 weight; but this is superseded by later results. For general background and details on (1) and (2), I'll refer to [B1] = Bourbaki, Chap. 6 (1968), my book [H] = GTM 9 (1972), [B2] = Bourbaki, Chap. 8 (1975).

[B1]: $\S1$, Exer. 24, and $\S4$, Exer. 15.

[H]: Exer. 13.10 (just using root system axioms) and Prop. 21.3 (using basic representation theory), Exer. 21.3. Also Exer. 13.13 ("minimal nonzero" = "minuscule").

[B2]: $\S7.2$, $\S7.3$, and $\S7$, Exer. 22.

reference-request rt.representation-theory lie-algebras ho.history-overview

Something a little weird. The set of cosets $X/X_r$ is $Z(G)^*$, where $G$ is the simply-connected group. The set of minuscule representations plus the trivial representation corresponds to the pointiest corners of the Weyl alcove, which in turn correspond to $Z(G)$. So your (1) seems to be asserting a bijection between $Z(G)$ and its dual. Right?
– Allen Knutson Oct 10 '12 at 13:09

Yes, this involves the whole story about affine Weyl groups relative to the root system and its dual: see [B1] $\S6.2$ and $\S4.9$ of my book on Coxeter groups along with the papers by Verma and Iwahori-Matsumoto. Everything here is connected to everything else, but for the representation theory it's hard to find a conceptual pathway through the maze. – Jim Humphreys Oct 15 '12 at 22:01

1 Answer

In Bourbaki, Ch. VIII, $\S$ 7, Sect. 2, one can find the notion of an $R$-saturated set, and Corollary to Prop. 4 in that section proves that for every $R$-saturated set $\mathcal X$ there is a finite dimensional $\mathfrak g$-module whose set of weights coincides with $\mathcal X$. Prop. 6 in the next section proves that the smallest $R$-saturated sets have the form $W.\lambda$ where $\lambda$ is minuscule. This answers Question (1). For Question (2) we take for $\mathcal X$ the union of $0$ and the set of all short roots in $R$. It is easy to see that this is an $R$-saturated set, as verifying this reduces to root systems of rank 2 where everything is clear (even in type ${\rm G}_2$). Applying the above-mentioned Corollary we get a $\mathfrak g$-module with the desired properties. If $\beta$ is the dominant short root then, by construction, our set will coincide with the set of weights of $L(\beta)$, as that set cannot be any smaller.
{"url":"http://mathoverflow.net/questions/108468/smallest-dimension-of-nontrivial-representation-of-a-simple-lie-algebra-over","timestamp":"2014-04-19T17:20:57Z","content_type":null,"content_length":"62297","record_id":"<urn:uuid:a2274c9f-1768-480e-81e8-3f88c0197db1>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00506-ip-10-147-4-33.ec2.internal.warc.gz"}