Landover Hills, MD Algebra 2 Tutor Find a Landover Hills, MD Algebra 2 Tutor ...I live in Quincy, MA. My career objective is to further my education in the field of biostatistics — preferably in the areas of genetic data analysis and applied statistics in clinical trials. I intend to improve my knowledge and skills, gain valuable experience, and make a genuine contribution in solving public health problems around the world. 10 Subjects: including algebra 2, calculus, statistics, algebra 1 ...I offer tutoring for any high school math subject up to and including AP Calculus AB and BC. I also help students improve their scores for the quantitative portions of the SAT and ACT. I did very well on the math portion of the SAT by scoring 720. 11 Subjects: including algebra 2, calculus, geometry, algebra 1 I have been tutoring math for over 4 years with students from diverse backgrounds. My tutoring approach includes the following: (1) talk to students on a peer-level to better understand why they are experiencing difficulties in their subject; (2) work with students to improve the skills that will h... 17 Subjects: including algebra 2, reading, writing, biology ...When I took the ACT and SAT in October 2008, I scored in the 99% and 95% percentiles respectively for the English portions. I took both AP English and AP Literature in high school and have taken English classes in college. In addition to my B.A. in political science, I have been tutoring for about 5 years with great success. 17 Subjects: including algebra 2, English, reading, calculus ...I teach basic through advanced mathematics and sciences. I am a research chemist by profession and hold a PhD in Physical Chemistry with a BS in both Mathematics and Chemistry. I was a teaching assistant in graduate school, teaching primarily Chemistry. 14 Subjects: including algebra 2, chemistry, physics, algebra 1 Related Landover Hills, MD Tutors Landover Hills, MD Accounting Tutors Landover Hills, MD ACT Tutors Landover Hills, MD Algebra Tutors Landover Hills, MD Algebra 2 Tutors Landover Hills, MD Calculus Tutors Landover Hills, MD Geometry Tutors Landover Hills, MD Math Tutors Landover Hills, MD Prealgebra Tutors Landover Hills, MD Precalculus Tutors Landover Hills, MD SAT Tutors Landover Hills, MD SAT Math Tutors Landover Hills, MD Science Tutors Landover Hills, MD Statistics Tutors Landover Hills, MD Trigonometry Tutors Nearby Cities With algebra 2 Tutor Bladensburg, MD algebra 2 Tutors Cheverly, MD algebra 2 Tutors Colmar Manor, MD algebra 2 Tutors Cottage City, MD algebra 2 Tutors Edmonston, MD algebra 2 Tutors Glenarden, MD algebra 2 Tutors Landover, MD algebra 2 Tutors Lanham algebra 2 Tutors Lanham Seabrook, MD algebra 2 Tutors Mount Rainier algebra 2 Tutors New Carrollton, MD algebra 2 Tutors North Brentwood, MD algebra 2 Tutors Riverdale Park, MD algebra 2 Tutors Riverdale Pk, MD algebra 2 Tutors Riverdale, MD algebra 2 Tutors
{"url":"http://www.purplemath.com/landover_hills_md_algebra_2_tutors.php","timestamp":"2014-04-16T16:43:21Z","content_type":null,"content_length":"24585","record_id":"<urn:uuid:81380cce-27c2-4899-94e5-9a4476fcbe2c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Clear the Runway

1. The problem statement, all variables and given/known data
To take off from the ground, an airplane must reach a sufficiently high speed. The velocity required for the takeoff, the takeoff velocity, depends on several factors, including the weight of the aircraft and the wind velocity. A plane accelerates from rest at a constant rate of 5.00 m/s^2 along a runway that is 1800 m long. Assume that the plane reaches the required takeoff velocity at the end of the runway.

2. Relevant equations
What is the time needed to take off?
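Not part of the original post; under the stated assumptions (constant acceleration from rest, with the full 1800 m of runway used), the standard constant-acceleration kinematics gives

d = \tfrac{1}{2} a t^{2} \;\Rightarrow\; t = \sqrt{\frac{2d}{a}} = \sqrt{\frac{2 \cdot 1800\ \text{m}}{5.00\ \text{m/s}^2}} \approx 26.8\ \text{s}, \qquad v = a t \approx 134\ \text{m/s}.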
{"url":"http://www.physicsforums.com/showthread.php?t=337152","timestamp":"2014-04-18T18:12:23Z","content_type":null,"content_length":"24310","record_id":"<urn:uuid:d2243bcb-c626-4463-9740-0711ae3c0f59>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Plotting bifurcation diagram
Replies: 0

Sanaa (Posts: 154, Registered: 3/20/12)
Plotting bifurcation diagram
Posted: Jan 30, 2013 9:10 AM

Hi,
I am plotting the bifurcation diagram for the map
x_(n+1) = x_n + (t - n*r) + rho*x_n*(1 - x_(n-1)), n = 0,1,2,...,
where t is the time, which should be between n*r and (n+1)*r. What's wrong in the code? I should have got a diagram, not a point!

% define the vector of colors, to plot the data of each value of r in
% different color
color_vec = ['b'];
for rho = 0: 0.001:4
% define the number of discrete times in interval [k*r, (k+1)*r]
n = 10;
% define the number of iterations k = 1,.....,Nit
Nit = 50;
% define the initial vector which is a vector of size n
x0 = 0.3*ones(1,n);
% define vectors x_next and x_previous
x_next = zeros(1,n);
x_previous = zeros(1,n); %x_n
x_pp=zeros(1,n); %x_n-1
time = zeros(1,n);
% initialize x_previous
%x_previous = x0;
x_pp=0.2*ones(1,n); %x_n-1
hold on
for i = 1:Nit
time = linspace(i*r,(i+1)*r,Nit);
x_next = x_previous+(time(i)-i.*r)*rho*x_pp.*(1-x_pp);
%x_previous = x_next;

I need your help if you please. Thanks a lot in advance.
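Not part of the original post. In the snippet as captured, r is never assigned, the state variables (x_previous, x_pp) are never updated inside the loop (the update line is commented out), and no plot command appears, so at most a single point would be drawn. Below is a minimal sketch, in Python with numpy and matplotlib, of the bookkeeping a bifurcation diagram needs: for each parameter value, iterate the map, discard a transient, record the remaining iterates, and plot them all. A plain logistic map stands in for the poster's recurrence; the parameter range, initial value, and iteration counts are arbitrary choices.

import numpy as np
import matplotlib.pyplot as plt

def step(x, rho):
    # Stand-in one-step map (logistic); the poster's recurrence would go here instead.
    return rho * x * (1.0 - x)

rho_values = np.arange(2.5, 4.0, 0.005)
n_transient, n_keep = 300, 100

for rho in rho_values:
    x = 0.3
    for _ in range(n_transient):          # discard transient iterates
        x = step(x, rho)
    kept = []
    for _ in range(n_keep):               # record iterates on the attractor
        x = step(x, rho)
        kept.append(x)
    plt.plot([rho] * n_keep, kept, ',k')  # one column of points per parameter value

plt.xlabel('rho')
plt.ylabel('x')
plt.show()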
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2431763","timestamp":"2014-04-19T05:14:57Z","content_type":null,"content_length":"14571","record_id":"<urn:uuid:759ce2b3-3539-494b-b226-adef4dd8f771>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: symbol tables and search tries

From: Matthias Blume <blume@tti-c.org>
Newsgroups: comp.compilers
Date: 31 Jan 2006 21:22:30 -0500
Organization: private
References: 06-01-085 06-01-111 06-01-117 06-01-125
Keywords: symbols, performance
Posted-Date: 31 Jan 2006 21:22:30 EST

haberg@math.su.se (Hans Aberg) writes:

> John Levine wrote:
>> I make my hash tables big enough that the chains are short, i.e.,
>> design so that k>=N so it's O(1). Hash headers need only be one word
>> pointers, so pick a number larger than the number of symbols you
>> expect in a large program, say, 10,000, and use that. So it adds 40K
>> to the size of the program, that's down in the noise these days.

> Even though one in a practical application can choose k large enough,
> hoping that somebody will not choose a larger N, from the theoretical
> point, when computing the complexity order, N is not limited. So, the
> adjustment of k must be taken into account when computing the
> complexity order. Then, by doubling k, one gets down to logarithmic
> time.

Hans, please, do the calculation! It does /not/ give you logarithmic time, it gives you constant time (in the amortized sense).

Suppose, for simplicity, you set the load threshold to 1 (i.e., you double the size when there are as many elements as there are buckets). Clearly, access is O(1) on average (assuming a reasonable hash function without major collisions). So let's look at insertions: In the worst case, the very last insert just pushes the load over the edge and causes another doubling. Thus, every element in the table has suffered through at least one rebuild. Half the elements have suffered through at least one additional rebuild, one quarter has suffered through at least a third rebuild, and so on. This gives rise to a sum of a geometric series: 1 + 1/2 + 1/4 + ..., which is 2 in the limit. Thus, in the limit, each element has (on average) suffered through 2 rebuilds. The time each rebuild takes is constant per element that participates in that rebuild. Since each element that is in the table must have been inserted at some point, we charge the amortized constant amount of work due to rebuilds to that insert operation. As a result, the time for one insert is constant on average.

(In the best case, the last insert just fills the table without causing another rebuild. In this case the series to be summed is 1/2 + 1/4 + ... = 1, i.e., the average cost of insertions due to rebuilds is only half as big. In either case, it is constant.)

> I also think one cannot do any better, because the best sorting
> algorithm, over all choices is O(N log N), so if one has better
> inserting complexity in a container than O(log N), one can beat that
> by merely insert elements one by one.

A hash table does not sort, so this line of reasoning is not applicable.
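Not part of the original article; a small Python sketch of the accounting Blume describes. It counts only the copy work caused by rebuilds of a table that doubles whenever it is full (load threshold 1, as in the post) and shows that this work stays proportional to the number of inserts. The function and variable names are mine.

def insert_all(n_inserts, initial_capacity=1):
    # Count the element copies performed by rebuilds during n_inserts insertions.
    capacity, size, copy_work = initial_capacity, 0, 0
    for _ in range(n_inserts):
        if size == capacity:        # load threshold 1: the table is full
            copy_work += size       # a rebuild copies every existing element
            capacity *= 2
        size += 1
    return copy_work

for n in (1_000, 10_000, 100_000):
    work = insert_all(n)
    print(n, work, work / n)        # the ratio stays below 2, i.e. O(1) amortized per insert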
{"url":"http://compilers.iecc.com/comparch/article/06-01-145","timestamp":"2014-04-20T11:29:52Z","content_type":null,"content_length":"9227","record_id":"<urn:uuid:063ed0c8-09fe-4922-a1e8-e5d18dcdcb66>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 420/507 Math 420/507, Section 101 Problems, Solutions, Handouts Note: PDF files may be read with Acrobat Reader, which is available for free from Adobe. • Cardinality [pdf] • Review of Measure Theory [pdf] • Review of Measurable Functions [pdf] (version of November 14, 2008) • Review of Integration [pdf] (version of December 2, 2008) • Monotone Classes [pdf] (version of November 7, 2008) • Review of Signed Measures and the Radon-Nikodym Theorem [pdf]
{"url":"http://www.math.ubc.ca/~feldman/m420/","timestamp":"2014-04-20T11:03:57Z","content_type":null,"content_length":"5577","record_id":"<urn:uuid:73a82917-adbc-4cba-80a7-cf88465407ad>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph question

June 2nd 2010, 09:34 AM, #1 (member since Oct 2009)
Graph question
I definitely know the answer is not C, because there are two 100 shots. What is the correct answer? I am guessing B?

June 2nd 2010, 10:06 AM, #2 (A riddle wrapped in an enigma; member since Jan 2008; Big Stone Gap, Virginia)
Hi eliteplague, Answer B would seem to be correct.

June 2nd 2010, 11:03 AM, #3 (member since May 2010)
The answer is B because for x = 16 you have 2 different values of y, which does not satisfy the definition of a function.
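Not part of the thread; a tiny illustration of the criterion used in the last reply. The sample points are made up, but they show why a repeated x-value with two different y-values rules out a function:

# A relation is a function only if each x-value is paired with a single y-value.
shots = [(14, 80), (16, 90), (16, 100)]   # hypothetical (x, y) data points
xs = [x for x, _ in shots]
print(len(xs) == len(set(xs)))            # False: x = 16 appears with two y-values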
{"url":"http://mathhelpforum.com/algebra/147507-graph-question.html","timestamp":"2014-04-19T02:04:33Z","content_type":null,"content_length":"34817","record_id":"<urn:uuid:b9bcda45-cad5-4c52-8f9c-ecd8691fe8d3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Rejected Post: 3 Msc. Kvetches on the Blog Bagel Circuit In the past week, I’ve kvetched over at 3 of the blogs on my blog bagel (instead of using the time to work). Here are the main ones, you can follow up on their blogs if you wish: I. I made a brief comment on a blatant error in Mark Chang’s treatment of my Birnbaum disproof on Xi’an’s Og. Chang is responding to Christian Robert’s critical review of his book, Paradoxes in Scientific Inference (2013) Mayo Says: December 27, 2012 at 9:08 am (actually posted Dec.26,~1:30 a.m.) I have only gotten to look at Mark Chang’s book a few days ago. I have many concerns regarding his treatment of points from Mayo and Spanos (2010), in particular the chapters by Cox and Mayo (2010) and Mayo (2010). Notably, having set out, nearly verbatim (but without quotes), my first variation of Birnbaum’s argument (Mayo 2010, 309), Chang takes, as evidence that “Mayo’s disproof is faulty”, assertions that I make only concerning the second variation of the Birnbaum argument (310-11). Chang has written (Chang, 138) the first version in detail, but obviously doesn’t understand it. The problem with the first version is that the two premises cannot both be true at the same time (the crucial term shifts its meaning in the two premises). The second formulation, by contrast, allows both premises to be true. I label the two premises of the second variation as (1) and (2)’. The problem in the second formulation is: “The antecedent of premise (1) is the denial of the antecedent of premise (2)’.”(Mayo 2010, 311). (Note the prime on (2)’. )These are both conditional claims, hence they have antecedents. Chang gives this quote, but has missed its reference. I might mention that I don’t see the relevance of Chang’s point about sufficiency to either variations of Birnbaum’s proof (bottom para, Chang 138). A less informal and clearer treatment of my Birnbaum argument may be found in a recent paper: On the Birnbaum Argument for the Strong Likelihood Principle. I am inviting comments for posting (some time in January) as explained at this link. I invite Chang to contribute, perhaps with a newly clarified attempt to reject my disproof of Birnbaum. II. I am comment #17 on Normal Deviate’s clever post offering “New Names For Statistical Methods”[ii] Mayo 
Posted December 24, 2012 at 9:22 pm N.D.: I’m all for some well-needed name changes, but I would like to voice (a) some gripes/drawbacks with a few of these, and (b) some glaring omissions.
I think in general it’s best not to hang a person’s name on these things, particularly if that name wasn’t already there (so I agree with another commentator). There are enough irrelevant attacks “against the man” slipping into the assessment of statistical tools. The p-value, or the “significance probability” or “significance level” will surely not benefit by being called the “Fisher statistic”, what with Fisher’s achievements being derogated, references to him as “The Wasp”, and as a man who wore rumpled clothes and smoked too much…It already appears as the main character in U-tube clips with titles like “what the p-value”, do we really need “what the #@$% Fisher statistic”? Bayesian Inference—(N.D. suggests Laplacian Inference): why not just go back to inverse probability? I know many people who are irked that it was ever changed. Bayesian Nets—(N.D. suggests Pearl graph): I do think a name change is very much needed. Pearl indicated to me long ago that he was intending just to refer to probability. So what’s wrong with a probabilistic net, or a DAG (as many already use), for a directed acyclic graph endowed with probability distribution? Confidence Interval —(N.D. suggests Coverage set). I think the interval aspect, or even just the use of “bounds” or “limits” are essential. There are counterintuitive “sets” that can have reported coverage. Also, the fact that there is a “confidence concept” (Birnbaum, Fraser) and confidence distributions, might suggest retaining it. A sensible confidence interval should give corroboration bounds, in the sense of indicating well-corroborated values of a parameter. So it seems best to stick with CI bounds or corroboration bounds or the like. Causal Inference. —(N.D. suggests formal inference): This would get confused with formal deductive inference which obviously needn’t be causal; if anything, causal inference is (strictly speaking) informal inference (in the sense often used in philosophy, i.e., inductive/qualitative). The Central Limit Theorem and the Law of Large Numbers–N.D. thinks these are boring, but I think the LLN is extremely cool (so I agree with another commentator), and it already already has a BLLN version. The CLT is informative, where de Moivre is not. Stigler’s law of eponymy. 
New Name: Stigler’s law of eponymy. I think there is something self-referentially wrong here, since Stigler did name it. That is, if Stigler is right, it should be named after non-Stigler.[i] [i] additional note: Yes I know Stigler claims it was noted by Merton, but Merton didn’t coin Stigler’s law. Neural nets. N.D. says, “Let’s call them what they are.
 (Not so) New name: Nonlinear regression”. Now for (b): frequentist statistics, sampling theory, and “classical statistics”—must these remain as an equivocal mess? None of these work well. “Sampling theory” does make sense since the key is the use of the sampling distribution for inference, but it doesn’t capture it. Since sampling distributions are used for error probabilities (of methods), one might try error probability statistics, or error statistics for short. That’s the best I could come up with. (I know some people find “error probabilities” overly behavioristic, but I do not.) You can find N.D.’s post and many other comments here. [ii] How do you get comments to be numbered on WordPress? Other trouble I’ve gotten into this week (just on blogs I mean): III. I made some comments on a statistics chapter in Nate Silver’s book, discussed over at Gelman’s blog: Here’s my last, in response to one of the comments. Mayo says: December 25, 2012 at 9:33 pm Dear E.J.
 Gelman and I have posted frank comments on each other’s blogs for a while now, and I had just read that chapter when I noticed Gelman was posting some reviews on his blog. I was not reviewing Silver’s book, just commenting on the chapter on frequentist statistics on a blog. Had I been reviewing it I would have read and discussed the other chapters. (I haven’t even discussed Silver on any of my blogs, though maybe now I will, having taken the time to write so much here.)[i] I think one of the main reasons I was so disappointed with that Silver chapter is that I was expecting/hoping to like the book. Two months ago, an artist who was helping to paint a faux finish mural in my condo asked what I did. Philosophy of science is typically considered pretty esoteric, but when she heard I was interested in various statistical methods/knowledge, she asked, to my surprise, if I knew Nate Silver, and we discussed his 538 blog, and the day’s comments and dialogue. I thought it was great that he seemed to be bringing statistics into the popular culture, even if it was just polling and baseball. So I was dismayed at his negative remarks on frequentist statistics and his kooky, gratuitious ridicule of R.A.Fisher. I am not alone in thinking this (see Kaiser below). I guess I’m just sorry Silver comes off looking so clownish here, because I had thought he’d be interesting and insightful. I’ll look at the other chapters at some point, I’ve loaned out my copy… You are wrong to suppose that I am “angry because Nate wants to euthanize” methods that are widespread across the landscape of physical and social sciences, in experimental design, drug testing, model selection, law, etc., etc. These methods won’t die so long as science and inquiry remain. You see, science could never operate with a uniform algorithm (his “Bayes train”) where nothing brand new enters, and nothing old gets falsified and ousted. We’d never want to keep within such a bounded universe of possibilities: inquiry is open-ended. Getting beyond the Bayesian “catchall” hypothesis is an essential part of pushing the boundaries of current understanding and theorizing in science and in daily life.* Frequentist statistics, like science/learning in general, provides a cluster of tools, a storehouse by which the creative inquirer can invent, and construct, entirely new ways to ask questions, often circuitously and indirectly, while at the same time allowing (often ingenious) interconnected ways to control and distinguish patterns of error. As someone once said, statistics is “the study of the embryology of knowledge, of the processes by means of which truth is extracted from its native ore in which it is fused with much error.”**
 They are methods for the critical scrutiny of the data and models entertained at a given time (not a hope for the indefinite long-run). The asymptotic convergence from a continual updating from more and more data (on the same query) that Silver happily touts, even in the cases where we imagine the special assumptions it requires are satisfied, is irrelevant to the day to day scrutiny/ criticism of actual scientific results. I agree that foundational discussions are good, but much more useful when coupled with published material or at least blogposts.
 Good Luck. *An analogy made yesterday by Isaac Chatfield: frequentist statistical methods are a bit like surgical tools; they are designed to be used for a variety of robust probes which are open to being adaptively redesigned to solve new problems while in the midst of an operation/investigation.
 **Fisher 1935, DOE, 39. You can dig out as much of the back and forth as you care to, or can stand, here. [i] As you can see, I did. 2 thoughts on “Rejected Post: 3 Msc. Kvetches on the Blog Bagel Circuit” You got quite involved with comments on Gelman’s blog (about reviews of Nate Silver’s book)! I did not realize (until now) how heated comments between readers can get. Nonetheless, I did offer comment on that blog, with the intention of making your case stronger! (However, I am only a beginning Master’s student, and they may look down on me for my lack of experience.) Also, I read Normal Deviate’s post on offering new names for statistical methods, and do not like most of the names he offers (I agree with 1 or maybe 2 – but that’s it!). Though, his point about the importance of having good names is appreciated. Nicole: Thanks for this, I’d like it if more graduate students submitted comments or even posts. I’ve corrected “Deviate”. If he succeeds in changing “Bayes nets” to “probabilistic nets”, “DAGs”, or frankly, anything else, we would not have a lot of people erroneously assuming they are Bayesians (simply because they use conditional probability). Glymour, one of the leaders in causal, modeling concurs. Pearl has moved away from Bayesianism anyway, but I forget which of the weekly, weakly Bayesian stances he’s up to… Remember this is in “rejected posts” by the way. Categories: danger, Misc Kvetching, phil stat 2 Comments
{"url":"http://rejectedpostsofdmayo.com/2012/12/27/rejected-post-3-msc-kvetches-on-the-blog-bagel-circuit/","timestamp":"2014-04-21T12:08:24Z","content_type":null,"content_length":"66132","record_id":"<urn:uuid:ccc2e4cc-d677-44fa-b43e-4b4dba905ff6>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Distribution Plotter - File Exchange - MATLAB Central

Probability Distribution Plotter (ProbDistPlot) is a GUI tool which plots statistical distributions commonly used in reliability engineering. The program plots the probability density function (pdf), the cumulative distribution function (cdf) and the hazard rate of each distribution. The user can gain a better understanding of the effect the distribution parameters have on the distribution by interactively changing them with sliders. ProbDistPlot is also able to create quick and easy figures by allowing multiple distributions to be plotted on the one figure, with automatic legend annotations (updating parameter values), LaTeX and TeX labels, and export to Excel charts, the clipboard, jpg, pdf, .mat and the printer. Exporting to Excel actually transfers the data into an Excel spreadsheet and creates an organic Excel chart which can be further manipulated in Excel.

Yair M. Altman, altmany(at)gmail.com, is acknowledged as the author of the function findobj.m, which allowed the properties of the slider objects to be modified.

This software uses distributions from the book: O'Connor, A.N., Probability Distributions Used In Reliability Engineering, RIAC, 2011. Programmed and Copyright by Andrew O'Connor: AndrewNOConnor@gmail.com.

Change log: 1.0 - 12 Apr 2010: First Version.
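Not part of the File Exchange page; for readers without MATLAB, the three curves the tool plots (pdf, cdf, hazard rate) can be reproduced in a few lines of Python with scipy and matplotlib. The Weibull distribution and its parameters here are arbitrary choices for illustration:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import weibull_min

dist = weibull_min(c=1.5, scale=2.0)     # shape and scale chosen only for illustration
x = np.linspace(0.01, 8, 400)
pdf, cdf = dist.pdf(x), dist.cdf(x)
hazard = pdf / dist.sf(x)                # hazard rate = pdf / (1 - cdf)

fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for ax, y, title in zip(axes, (pdf, cdf, hazard), ("pdf", "cdf", "hazard rate")):
    ax.plot(x, y)
    ax.set_title(title)
plt.tight_layout()
plt.show()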
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/27227-probability-distribution-plotter","timestamp":"2014-04-25T03:52:51Z","content_type":null,"content_length":"39722","record_id":"<urn:uuid:2026a205-248c-4bae-b54e-7f1323c60036>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
determine the coefficient of x^5 in the expression (2x-3)^8 help pls!!

Best Response:
going with binomial theorem \[(a+b)^n=\sum_{k=0}^{n} \binom{n}{k} a^k b^{n-k}\] and setting a=2x, b=-3 and n=8 ... will be your start point

Best Response:
\[\sum_{k=0}^{8} \left(\begin{matrix}8 \\ 5\end{matrix}\right) 2^5 (-3)^3 x^5\] Is this is?

Best Response:
u just need to evaluate one term of the sum and its where k equals to 5 so just \[\left(\begin{matrix}8 \\ 5\end{matrix}\right) 2^5 (-3)^3 x^5\]
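Not part of the thread; evaluating the single k = 5 term that the replies point to gives the numeric coefficient:

from math import comb

# Coefficient of x^5 in (2x - 3)^8: the k = 5 term of the binomial expansion.
coefficient = comb(8, 5) * 2**5 * (-3)**3
print(coefficient)   # 56 * 32 * (-27) = -48384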
{"url":"http://openstudy.com/updates/50cd023ee4b0031882dc2a05","timestamp":"2014-04-21T12:53:07Z","content_type":null,"content_length":"32576","record_id":"<urn:uuid:ba2a2896-f08f-43cc-ba59-4518d49d438d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Re: On the Nature of Mathematical Objects
Timothy Y. Chow tchow at alum.mit.edu
Mon Aug 9 13:02:14 EDT 2004

Dmytro Taranovsky <dmytro at mit.edu> wrote:
> One can try to define what a star is by noting some properties that all
> stars have (such as being heavy) and some properties that no star has,
> but the definition will be in some cases ambiguous (for example, are
> brown dwarfs stars? what about neutron stars?). By contrast, a
> mathematical definition, such as that of an even integer, has no
> ambiguity.

So for example, if

      / the empty set,          if the continuum hypothesis holds, and
x = <
      \ the class of all sets,  otherwise,

then it is unambiguous whether x is a set?

> The reach of particular physical objects is ambiguous (for
> example, is solar corona a part of the sun?), but no such ambiguity
> exists, say, for the set of prime numbers: The set reaches those only
> those natural numbers that have exactly two different divisors.

And no ambiguity exists about whether x (as defined above) "reaches," say, the set of all integers?

Whether you answer yes or no to these questions, it seems that the answer is far from uncontroversial.
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-August/008393.html","timestamp":"2014-04-18T06:15:11Z","content_type":null,"content_length":"3707","record_id":"<urn:uuid:98e59282-3c2c-405d-a2aa-54fabd52c4a0>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Fort Collins Math Tutor Find a Fort Collins Math Tutor ...On top of that, I also worked for a national test prep agency where I was certified to teach: GRE, GMAT, ACT, SAT, and the Biology section of the MCAT. I have over 100 hours of training with this agency. The training taught me study skills for all types of subjects, tests, and learning styles. 26 Subjects: including precalculus, SAT math, ACT Math, trigonometry ...I enjoy every level of Algebra student from the one that needs help on a few concepts to the student who has felt lost since the beginning of the semester. If you ask any of my students what most of them will tell you is that I am able to explain things in ways they understand. If Algebra 2 help is what you are looking for, I look forward to getting you back on the right track. 14 Subjects: including algebra 1, algebra 2, calculus, SAT math ...I have taken Introduction to Music History, Music History I, Music History II, and I am currently taking a graduate level music history course on Romanticism at Colorado State University. I received an A in Introduction to Music History, an A in Music History II, and a B in Music History I. I h... 23 Subjects: including algebra 1, prealgebra, Spanish, reading ...I began trumpet in the 4th grade, and continued through high school. I was involved in jazz band, orchestra, and marching band. In junior high, I also started the French horn, and continued playing that through high school. 33 Subjects: including algebra 1, English, prealgebra, SAT math ...Finally, for my thesis project I used econometric techniques to model the behavior of markets for replenishing resources. While it is a powerful tool, a strong understanding of the assumptions behind the equations is imperative to prevent misuse. Linear algebra is the study of all forms of math... 25 Subjects: including discrete math, SPSS, logic, probability
{"url":"http://www.purplemath.com/fort_collins_co_math_tutors.php","timestamp":"2014-04-21T10:55:42Z","content_type":null,"content_length":"23916","record_id":"<urn:uuid:9decfe38-e89a-4588-bc29-c64c2f9d24d1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Matlab Programmer
Username: ronobir1
Has not made a deposit. Has verified their email address. Has completed their profile. Has not verified their secure phone number.
• Verified Payment is verified.
Location: Dhaka, Bangladesh
Member since: January 2011

• $35 USD
“ Honest and talented person .... MAtlab expert and great communication ... will hire him again ”
Project Description: Need someone good in numerical analysis and matlab to help me. Half of the stuff needs to be done in matlab and rest regularly. Its a urgent project so make sure you are available and finish within the deadline...

• $35 CAD
Project Description: U need to solve a partial differential equation. For that you have to find weak formulation first and then after creating necessary meshing of boundary you need to solve the equation using FEM method...

• $100 USD asazul42 [ Incomplete Report ] Jan 13, 2014
“ The project is not completed according to the dispute ”
Project Description: Hello, attached is a spherical geometry problem I am having difficulty with. it involves spherical surface and piecewise curve integration

• $700.83 USD Nov 29, 2013
“ ronobir1 did really a great job. His numerical scheme to solve the given problem gave really accurate result and was very fast. Recommend him to every employer who wants a perfect job in reasonable price. Hope to hire him again. ”
Project Description: You need to make simulation of a fluid dynamics model which is like to transfer fluid through a pipe. You have to formulate some partial differential equations which describes behavior of the fluid when it ejects from the pipe...

• $83 USD valleyview [ Incomplete Report ] Sep 26, 2013
“ The project is not completed according to the dispute ”
Project Description: Simple Magnet Perpetual Motion Motor. Design and build a Small Simple Magnet Perpetual Motion Motor, and you must test it and make a video of it working and test it. You must convince me you can do this...

• $307 AUD Sep 12, 2013
“ Ronobir is a very knowledgeable freelancer. His knowledge in Maths was an invaluable contribution to this project. ”
Project Description: I have 2 scissor jacks, one at 75mm x 40mm and the other 130mm X 40mm. Both are to be mounted somehow to the underside of a platform lying flat. I need an inexpensive automated way to turn the first Jack up on its side, so as to extend directly downwards...

• $204.5 USD May 15, 2013
“ Ronobir made some quite difficult mathematical calculations concerning design and construction under extreme circumstances, and also calculations on heat transfer in different material. It was very helpful, and it has been a large help for me. Thenk you Ronobir. I hope you will help me again when I need an expert ! ”
Project Description: In a deep wide pit in the ground, 5 - 10 meter in diameter, the pressure will increas by increasing depth. The debth will be ut to 10.000 meter. The pit walls need to be supported in order to prevent the pit from collaps...

• $70 USD
Project Description: Can anyone please help me in solving a differential equation by FEM method and make a programming code in matlab? Please pm me if u are intereseted.

• $230 USD Dec 27, 2012
“ Knowledgeable in the areas of Matlab and computing. Work was performed on time and on budget. Will definitely hire again. ”
Project Description: Need by Wednesday. Willing to pay $30. If you do good work I may have some more in coming weeks. thanks

• $70 USD Nov 25, 2012
“ Very good freelancer. The work was done on very high level. I think he is good for math. projects. I will definitely think to hire him again ”
Project Description: as we agreed before

ronobir1 has not completed any projects.

Jan 2011 - Present (3 years)
I am doing works in freelancer from january 2011. My most works are related to matlab programming.

Jahangirnagar University

International Math Olympiad, Tehran, Iran
Obtained 23rd position

Analytical and Numerical solution of viscid Burger's equation
Journal of Science, Jahangirnagar University
The paper presents analytical solution of viscous Burger's equation as an IVP. Due to the complexity of the analytical solution the paper studies explicit and implicit finite difference schemes and determines stability condition for explicit finite difference scheme. The research also studies accuracy and numerical feature of convergence of the explicit and implicit numerical schemes for specific initial and boundary values by estimating their relative errors.
{"url":"http://www.freelancer.com/u/ronobir1.html?ref_project_id=5039416","timestamp":"2014-04-20T06:01:38Z","content_type":null,"content_length":"453851","record_id":"<urn:uuid:bb3b1708-d231-4dcb-92ea-d160f29cc7fd>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: Binary128 - PI and E The current proposed specification in COBOL for the intrinsic function PI under standard-binary arithmetic (which follows the rules for IEEE Std 754-2008 binary128 format) is the arithmetic _expression_ (16,312,081,666,030,376,401,667,486,162,748,272 / (2 ** 112)). What I personally like about that is this: 1) The dividend is an integer that completely and exactly specifies the binary content of the significand field (including the implicit leading bit being set) 2) The divisor is an integer that completely and exactly specifies the binary content of the exponent field after bias is applied. Although it's not obvious, there's no need for the application of arithmetic operations to produce this value -- assuming a datum of binary128 format, the significand bit-pattern is set to the binary integer representation of the dividend, and the exponent bit-pattern is set to the negative of the power specified in the divisor _expression_. This, it seems to me, is the closest one can come using decimal numbers to an EXACT representation of PI. The question is, whether this value is the CLOSEST approximation to PI that can be represented in binary128 format, and that basically means "is the last digit of the integer correct?". -Chuck Stevens > Date: Wed, 11 May 2011 23:06:30 +0200 > From: 7_born@xxxxxx > To: stds-754@xxxxxxxxxxxxxxxxx > Subject: Fwd: Re: Binary128 - PI and E > -------- Original-Nachricht -------- > Betreff: Re: Binary128 - PI and E > Datum: Wed, 11 May 2011 23:03:32 +0200 > Von: Thorsten Siebenborn <7_born@xxxxxx> > An: William M Klein <wmklein@xxxxxxxxxxxxx> > Am 11.05.2011 21:27, schrieb William M Klein: > > This may be a stupid question and I don't know if I am even expre4ssing it correct, BUT Can anyone tell me or point me to some place that documents what the exact DECIMAL fixed-point value would be for the most accurate values of PI and E that can be stored in a binary128 data item? Would this just be (for PI) PI with 35 digits to the right of the decimal point? > The quad-precision format (binary128) has with the implicit hidden bit > 113 bit stored which is for pi equivalent to > 11.0010010000111111011010101000100010000101101000110000100011010011000100 > 11000110011000101000101110000000110111000 > This is nearly 0,25 ulps lower than the correct value. > The correctly rounded value is > 3.1415926535897932384626433832795028 > *Exact* conversion binary to decimal (with garbage > digits due to conversion) > 3.141592653589793238462643383279502797479068098137295573004504331874296718662975536062731407582759857177734375 > Bit equivalent for e: > 10.1011011111100001010100010110001010001010111011010010101001101010101111 > 11011100010101100010000000100111001111010 > Nearly 0,49 ulps lower than the correct value. > Correctly rounded value: > 2.7182818284590452353602874713526623 > *Exact* conversion binary to decimal (with garbage > digits due to conversion) > 2.71828182845904523536028747135266231435842186719354886266923086032766716801933881697550532408058643341064453125 > Best regards, > Thorsten
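Not part of the original message; one way to test the question Stevens raises (is the last digit of that integer right, i.e., is this the closest binary128 value to pi?) is to measure the error in units in the last place with arbitrary-precision arithmetic. The sketch below uses Python with the third-party mpmath package; a printed magnitude below 0.5 ulp would confirm correct rounding, and the figures quoted by Siebenborn suggest roughly a quarter of an ulp.

from mpmath import mp, mpf

mp.prec = 200   # working precision well beyond binary128's 113-bit significand

candidate = 16312081666030376401667486162748272
value = mpf(candidate) / mpf(2)**112   # the proposed value; exact here, and on the
                                       # binary128 grid because the candidate is even
ulp = mpf(2)**-111                     # spacing of binary128 numbers in [2, 4)
print((value - mp.pi) / ulp)           # |result| < 0.5 means correctly rounded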
{"url":"http://grouper.ieee.org/groups/754/email/msg04206.html","timestamp":"2014-04-17T03:56:30Z","content_type":null,"content_length":"8763","record_id":"<urn:uuid:42547220-0dd8-4ec2-bfc1-526ad6b41022>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: 49:Ulm Theory/Reverse Math FOM: June 25 - July 31, 1999 [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] [FOM Postings] [FOM Home] FOM: 49:Ulm Theory/Reverse Math This is the 49th in a series of self contained postings to fom covering a wide range of topics in f.o.m. Previous ones are: 1:Foundational Completeness 11/3/97, 10:13AM, 10:26AM. 2:Axioms 11/6/97. 3:Simplicity 11/14/97 10:10AM. 4:Simplicity 11/14/97 4:25PM 5:Constructions 11/15/97 5:24PM 6:Undefinability/Nonstandard Models 11/16/97 12:04AM 7.Undefinability/Nonstandard Models 11/17/97 12:31AM 8.Schemes 11/17/97 12:30AM 9:Nonstandard Arithmetic 11/18/97 11:53AM 10:Pathology 12/8/97 12:37AM 11:F.O.M. & Math Logic 12/14/97 5:47AM 12:Finite trees/large cardinals 3/11/98 11:36AM 13:Min recursion/Provably recursive functions 3/20/98 4:45AM 14:New characterizations of the provable ordinals 4/8/98 2:09AM 14':Errata 4/8/98 9:48AM 15:Structural Independence results and provable ordinals 4/16/98 16:Logical Equations, etc. 4/17/98 1:25PM 16':Errata 4/28/98 10:28AM 17:Very Strong Borel statements 4/26/98 8:06PM 18:Binary Functions and Large Cardinals 4/30/98 12:03PM 19:Long Sequences 7/31/98 9:42AM 20:Proof Theoretic Degrees 8/2/98 9:37PM 21:Long Sequences/Update 10/13/98 3:18AM 22:Finite Trees/Impredicativity 10/20/98 10:13AM 23:Q-Systems and Proof Theoretic Ordinals 11/6/98 3:01AM 24:Predicatively Unfeasible Integers 11/10/98 10:44PM 25:Long Walks 11/16/98 7:05AM 26:Optimized functions/Large Cardinals 1/13/99 12:53PM 27:Finite Trees/Impredicativity:Sketches 1/13/99 12:54PM 28:Optimized Functions/Large Cardinals:more 1/27/99 4:37AM 28':Restatement 1/28/99 5:49AM 29:Large Cardinals/where are we? I 2/22/99 6:11AM 30:Large Cardinals/where are we? II 2/23/99 6:15AM 31:First Free Sets/Large Cardinals 2/27/99 1:43AM 32:Greedy Constructions/Large Cardinals 3/2/99 11:21PM 33:A Variant 3/4/99 1:52PM 34:Walks in N^k 3/7/99 1:43PM 35:Special AE Sentences 3/18/99 4:56AM 35':Restatement 3/21/99 2:20PM 36:Adjacent Ramsey Theory 3/23/99 1:00AM 37:Adjacent Ramsey Theory/more 5:45AM 3/25/99 38:Existential Properties of Numerical Functions 3/26/99 2:21PM 39:Large Cardinals/synthesis 4/7/99 11:43AM 40:Enormous Integers in Algebraic Geometry 5/17/99 11:07AM 41:Strong Philosophical Indiscernibles 42:Mythical Trees 5/25/99 5:11PM 43:More Enormous Integers/AlgGeom 5/25/99 6:00PM 44:Indiscernible Primes 5/27/99 12:53 PM 45:Result #1/Program A 7/14/99 11:07AM 46:Tamism 7/14/99 11:25AM 47:Subalgebras/Reverse Math 7/14/99 11:36AM 48:Continuous Embeddings/Reverse Mathematics 7/15/99 12:24PM There is a nearly completed manuscript which includes the following results. It will eventually appear on my website. Consider the following statements for countable Abelian groups. 1. Either G is embeddable into H* or H is embeddable into G*. 2. There is a direct summand K of G and H such that every direct summand of G and H is embeddable into K. 3. There is a direct summand J of G* and H* such that every direct summand of G* and H* is a direct summand of J. 4. In every infinite sequence of groups, one group is embeddable in a later (different) group. 5. In every infinite decreasing chain of groups, one group is embeddable in a later group. Here G* is the direct sum of countably many copies of G. In 5, a decreasing chain of groups is a sequence of groups G1,G2,..., where each Gi+1 is a subgroup of Gi. Here we mean literal subgroup, not just up to isomorphism. Reduced means no divisible subgroup. 
Torsion group means every element is of finite order. THEOREM 1. For reduced p-groups, each of 1-5 are provably equivalent to ATR_0 over RCA_0. This is also true for any specific prime p. For reduced torsion groups, each of 2,3 are provably equivalent to ATR_0 over RCA_0. 1,4,5 are false for reduced torsion groups. THEOREM 2. For p-groups, each of 1-5 are provably equivalent to ATR_0 over RCA_0. This is also true for any specific prime p. For torsion groups, each of 2,3 are provably equivalent to ATR_0 over RCA_0. 1,4,5 are false for torsion groups. Theorem 2 may be somewhat surprising since obvious proofs lie in pi-1-1-CA_0. Additional work is needed to stay within ATR_0. The reversals of 4 and 5 rely on Richard Shore's: On the strength of Fraisse's conjecture, in: Logical Methods, In Honor of Anil Nerode's Sixtieth Birthday, Brikhauser, 1993, 782-813. Actually, we need a small refinement of Shore for 5, which looks I am indebted to Paul Eklof for valuable discussions, especially concerning the paper: Jon Barwise and Paul Eklof, Infinitary Properties of Abelian Torsion Groups, Annals of Mathematical Logic, 1970, 25-68. PS: The proof of 2,3 for countable Abelian p-groups in ATR_0 relies on a technical generalization of Ulm's theorem which I sent out to be checked for verification. [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] [FOM Postings] [FOM Home]
{"url":"http://www.personal.psu.edu/t20/fom/postings/9907/msg00019.html","timestamp":"2014-04-17T15:31:16Z","content_type":null,"content_length":"7222","record_id":"<urn:uuid:88ae446a-0911-4069-810a-d393d6acd0b3>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
I have another problem that involves some sort of substitution.

Start with the basics:

y = mx + b

We know this has to pass through the point (x, y) = (1, 2), so let's plug those in: 2 = m + b, or m = 2 - b.

That didn't seem to get us very far, did it? But at least we have a relationship for m and b; this might come in handy later. So let's go back. What we want is a general equation for the area of the triangle. Well, we know the formula 1/2 * base * height. So let's try to find those variables.

The height of the triangle is going to be the y-intercept, b. Easy enough. The base of the triangle is going to be the x-intercept. Since y = mx + b, and y must be 0 (definition of the x-intercept), 0 = mx + b. We want to solve this for x, so that would be x = -b/m.

So the area of the triangle is 1/2 * b * (-b/m), or -b^2 / (2m). But wait, isn't this going to be negative? Negative area? Nope: remember the line that we are drawing has a negative slope, so m is negative, making -b^2 / (2m) positive.

So we want to find the least area of the function -b^2 / (2m). Huh, two variables; that's going to be pretty tricky without multivariable calculus. But wait, doesn't m = 2 - b? Told you that would come in handy. So -b^2 / (2(2 - b)) is the area, or -b^2 / (4 - 2b). Try to find the minimum of that function. This will tell you what b is, then you can find m because m = 2 - b.

And for extra credit, what kind of triangle does this make?

Last edited by Ricky (2005-12-16 15:39:47)

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
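Not part of the original post; if you want to check your hand calculation afterwards, a short sympy computation locates the minimum (the package choice and variable names are mine):

import sympy as sp

b = sp.symbols('b', real=True)
area = -b**2 / (4 - 2*b)                  # area in terms of the y-intercept b
critical = sp.solve(sp.diff(area, b), b)  # roots 0 and 4; b = 0 gives no triangle
print(critical)                           # [0, 4]
print(area.subs(b, 4), 2 - 4)             # minimum area 4, with slope m = -2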
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=20781","timestamp":"2014-04-19T05:01:20Z","content_type":null,"content_length":"31029","record_id":"<urn:uuid:83f562cc-b7ff-4277-a7df-9dd8e832e4d0>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Browse call numbers: QA370 .E64 | The Online Books Page Call number Item Q Science (Go to start of category) QA Mathematics and Computer Science (Go to start of category) QA370 .E64 Electronic Journal of Differential Equations (full serial archives) QA371 .F73 T7 A Treatise on Differential Equations, by Andrew Russell Forsyth (frame- and JavaScript-dependent page images at Cornell) QA371 .H12 Four Lectures on Mathematics Delivered at Columbia University in 1911, by Jacques Hadamard (frame- and JavaScript-dependent page images at Cornell) QA371 .H99 A Treatise on Differential Equations, and on the Calculus of Finite Differences, by J. Hymers (frame- and JavaScript-dependent page images at Cornell) QA371 .I52 Inside Out: Inverse Problems and Applications (2003), ed. by Gunther Uhlmann (PDF files with commentary at msri.org) QA371 .T68 Elementary Differential Equations With Boundary Value Problems (free online edition, 2013), by William F. Trench (PDF at trinity.edu) QA372 .C88 A Treatise on Linear Differential Equations, by Thomas Craig (frame- and JavaScript-dependent page images at Cornell) QA372 .M37 The Hopf Bifurcation and Its Applications (1976), by Jerrold E. Marsden and Marjorie McCracken (PDF files at Caltech) QA372 .O81 Examples of Differential Equations, With Rules for Their Solution, by George A. Osborne (frame- and JavaScript-dependent page images at Cornell) QA372 .P13 Ordinary Differential Equations, by James Morris Page (frame- and JavaScript-dependent page images at Cornell) QA372 .P6 1922 Differential Equations (New York: John Wiley and Sons, 1922), by H. B. Phillips (PDF at djm.cc) QA374 .S56 Hilbert Space Methods for Partial Differential Equations, by R. E. Showalter (PDF files at ams.org) QA377 .S683 Existence, Multiplicity, Perturbation, and Concentration Results for a Class of Quasi-Linear Elliptic Problems (EJDE monograph #7, 2006), by Marco Squassina (PDF with commentary at 20046 ams.org) QA377 .W36 Palais-Smale Approaches to Semilinear Elliptic Equations in Unbounded Domains (EJDE monograph #6, 2004), by Hwai-chiuan Wang (PDF with commentary at ams.org) QA377 .W454 An Introduction to Multigrid Methods, by Pieter Wesseling (PDF files at mgnet.org) QA385 .C18 Introductory Treatise on Lie's Theory of Finite Continuous Transformation Groups, by John Edward Campbell (frame- and JavaScript-dependent page images at Cornell) QA385 .C67 An Introduction to the Lie Theory of One-Parameter Groups, by Abraham Cohen (frame- and JavaScript-dependent page images at Cornell) QA401 .C48 The Analysis of Linear Systems (New York et al.: McGraw-Hill, c1963), by Wayne H. Chen (page images at HathiTrust) QA401 .W68 Mathematics for the Physical Sciences (c1962), by Herbert S. Wilf (PDF with commentary here at Penn) QA402 .U535 Unsolved Problems in Mathematical Systems and Control Theory (with problem solutions submitted since first publication in 2004), ed. by Vincent Blondel and Alexandre Megretski (PDF 2004 files with commentary at Princeton) QA402.3 .V26 Lecture Notes on Optimization (electronic edition of "Notes on Optimization", 1998), by Pravin Pratap Varaiya (PDF at Berkeley) QA402.5 .A27 Optimization Algorithms on Matrix Manifolds (2008), by P.-A. Absil, R. Mahony, and R. Sepulchre (PDF files with commentary at Princeton) QA404.5 .B69 Chebyshev and Fourier Spectral Methods (second edition), by J. P. Boyd (PDF with commentary at Citeseer) QA406 .F38 An Elementary Treatise on Spherical Harmonics and Subjects Connected With Them, by N. M. 
Ferrers (frame- and JavaScript-dependent page images at Cornell) QA409 .C54 A Presentation of the Theory of Hermite's Form of Lamé's Equation, by J. Bruce Chittenden (frame- and JavaScript-dependent page images at Cornell)
{"url":"http://onlinebooks.library.upenn.edu/webbin/book/browse?type=lccn&key=QA370%20.E64","timestamp":"2014-04-19T09:26:28Z","content_type":null,"content_length":"21730","record_id":"<urn:uuid:8329e8c0-362f-4924-8dd0-53cf601f39d4>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
NA Digest Sunday, March 29, 1992 Volume 92 : Issue 13 NA Digest Sunday, March 29, 1992 Volume 92 : Issue 13 Today's Editor: Cleve Moler The MathWorks, Inc. Submissions for NA Digest: Mail to na.digest@na-net.ornl.gov. Information about NA-NET: Mail to na.help@na-net.ornl.gov. From: David H. Bailey <dbailey@nas.nasa.gov> Date: Thu, 19 Mar 92 09:26:00 -0800 Subject: Fast Hartley Transform Readers of the NA Net Digest may have noted the recent posting of a seminar on the Hartley transform. It is widely believed, and sometimes stated or implied in print, that for purely real inputs the fast Hartley transform (FHT) is twice as fast as the fast Fourier transform (FFT), since the FHT operates with real data instead of complex data. However, there exist well-known variants of the FFT that compute the discrete Fourier transform (DFT) on real input data in only half the operations required for the standard complex FFT. One way is to employ formulas that convert an FFT on real input data into a complex FFT of half the size. An even more efficient scheme is to use Edson's algorithm, which computes the DFT result using real arithmetic, and which does not require a pre- or post-processing step. Almost all vendor FFT libraries include efficient implementations of such schemes, and these routines usually run faster than implementations of the FHT. Thus the FHT is not fundamentally more efficient than well-known variants of the FFT, even for real data, and the Hartley transform itself has no known physical significance. Also, the FHT software as developed at Stanford is patented, and commercial usage requires consent from Stanford. In summary, we frankly do not understand the continuing interest in the FHT. Can anyone offer solid scientific reasons for using the FHT in place of the real-to-half complex FFT? David H. Bailey, NASA Ames Paul N. Swarztrauber, NCAR From: Stephen Vavasis <vavasis@cs.cornell.edu> Date: Mon, 23 Mar 92 14:43:48 -0500 Subject: Mesh Generation Bibliography Over two years ago in this forum I posted a notice that I was interested in writing a bibliography on automatic mesh generation for finite elements. The task quickly overwhelmed me, and I never finished it. The good news is that Marshall Bern of Xerox PARC and David Eppstein of U.C. Irvine have written a very nice illustrated summary of automatic mesh generation. Their summary is in the form of an annotated book chapter, and is also available as a Xerox technical report. The summary looks at the problem primarily from the computer science viewpoint (rather than from the applications viewpoint). The title of the report is "Mesh Generation and Optimal Triangulation"; if you are interested please contact Marshall Bern directly: bern@parc.xerox.com. -- Steve Vavasis From: John Coleman <John.Coleman@durham.ac.uk> Date: Thu, 26 Mar 92 15:30:57 GMT Subject: Who Solves ODEs of the form y''=f(x,y)? I would like to hear from you if you solve second-order differential equations of the special form y''=f(x,y), where y and f may be vectors, or if you know of anyone who does. Many numerical methods have been proposed for these equations, in papers which may contain one or two simple problems for which method n+1 is 'better' than method n. I would like to make contact with people who solve such problems to find an answer rather than to test a method. 
I am particularly interested in the sub-class of problems having oscillatory solutions -- such as orbit problems in classical mechanics, quantum mechanical scattering problems governed by the radial Schrodinger equation, and perhaps others about which I know nothing. In your response please tell me: (a) the form and the origin of the problem, (b) the numerical method or methods currently used, (c) the accuracy required and how you know if you have got it, (d) what difficulties (if any) arise in the solution process, (e) how satisfactory you consider your present approach to be, (f) any relevant references. I believe that the information I seek would be of interest to several people involved in the numerical analysis of ODEs. If I get a good response I intend to submit a summary to the NA Digest. My thanks in advance to all who can contribute in any way. John Coleman. E-mail: John.Coleman @ durham.ac.uk na.jcoleman @ na-net.ornl.gov From: E.V.Glushkov <evg@kgu.kuban.su> Date: 26 Mar 92 16:02:58 GMT Subject: Three Dimensional Singular Elements Hello from Russia, As is well-known, use of so-called singular elements in a 2-dim. finite element method improves the convergence and accuracy very much. But it is very difficult to find the order of singularity at the 3-dim. corner points, and therefore to construct the singular elements. In our lab there was eleborated, realized as a computer program and tested a method of the singularity orders at a top of an arbitrary elastic polihedron extracting. The method is based on the spectral points of some integral operators seaching. We are not specialists in FEM and, being restricted in our com- puter's output, cannot carry out a regular and thorough numerical analysis. Untill now we've obtained results for a top of a cube with one fixed side and for some other geometries. So, if somebody is interested in employing 3-dim. singularities, please contact us and we'll discuss how a co-operation could be Drs. E.V.Glushkov & N.V.Glushkova E-mail: evg@kgu.kuban.su Kuban State University Institute of Mechanics and Applied Mathematics Krasnodar 350640, Russia From: James R. Bunch <jrb@sdna3.ucsd.edu> Date: Thu, 26 Mar 92 11:53:23 -0800 Subject: Temporary Change of Address for Jim Bunch I will be at the IMA during April 3 - June 6, 1992: Prof. James R. Bunch IMA, Univ. of Minnesota 514 Vincent Hall 206 Church St SE Minneapolis, MN 55455-0436 From: SIAM <ddilisi@siam.org> Date: Tue, 24 Mar 92 10:25:37 EST Subject: New Jersey Section of SIAM Spring Meeting New Jersey Section of SIAM: Spring Meeting The spring meeting of the New Jersey Section of SIAM will be held on Saturday morning, April 25, 1992 from 8:30 to noon. It will be held on the Busch Campus of Rutgers University in Piscataway. There will be two distinguished speakers who will provide an excellent mix of applied mathematics and computer science: o Diane Souvaine of the Computer Science Department and DIMACS, Rutgers University, on "Finding Maximum Inscribed Triangles and Shortest Aquarium Keeper Tours Using Shortest o Bruce L. Bush of the Molecular Systems Department, Merck Research Laboratories, Rahway, NJ, on "Some Headaches in Biomolecular Modeling: Are Mathematical Remedies on the Mathematicians in industry are encouraged to attend the meeting. The meeting will provide a unique opportunity for applied mathematicians in local industries, research laboratories, and academic institutions to meet and share problems, methods, and solutions. 
The meeting is not only for SIAM members but is open to all interested people. You are encouraged to bring along your colleagues. Graduate and undergraduate students are most For a copy of the schedule of the meeting, titles and abstracts of the talks, and detailed directions, send your mailing address and request for same to siam@siam.org. Richard B. Pelz President, New Jersey Section of SIAM From: George Anastassiou <ANASTASG@hermes.msci.memst.edu> Date: 24 Mar 92 16:54:59 CDT Subject: Conference on Approximation and Probability The following is of concern primarily to approximation theory and probability people, but some people from numerical analysis might be interested. An international conference on "Approximation,Probability & Related Fields" Place: Univ. California at Santa Barbara Dates: 20,21,22 May 1993 Days: Thurs.,Friday,Saturday George Anastassiou Dept. Mathematical Sciences Memphis State University Memphis TN 38152 E-mail anastasg@hermes.msci.memst.edu Dept. Statistics & Applied probability program University of california at Santa Barbara Santa Barbara CA 93106 U.S.A E-mail ZARIRACH@BERNOULLI.UCSB.EDU From: D. Sloan <CAAS10@vaxb.strathclyde.ac.uk> Date: Thu, 26 Mar 92 10:47 GMT Subject: 35th British Theoretical Mechanics Colloquium 35th British Theoretical Mechanics Colloquium 5th - 8th April 1993 Venue: University of Strathclyde in Glasgow Start Time: Approx. 4.00pm on Monday 5th April 1993 Finish Time: Lunch on Thursday 8th April 1993 Invited Speakers Computational Mathematics Professor Bengt Fornberg (Exxon Corporate Research, NJ, USA & University of Strathclyde) Mathematical Biology Professor Bob May, FRS (University of Oxford) Solid Mechanics Professor Ingo Muller (Technische Universitat Berlin) Differential Equations Professor Larry Payne (Cornell University) Professor Ken Walters, FRS (University College of Wales, Aberystwyth) Stewartson Memorial Lecture Professor Alex Craik (University of St Andrews) Mini- Symposia Liquid Crystals Industrial Mathematics Inverse Problems Nonlinear Dynamical Systems Office Bearers Chairman Professor Frank Leslie Tel: 041 552 4400 EXT 3655 Secretary Dr. Ian Murdoch Tel: 041 552 4400 EXT 3657 Treasurer Dr. John Parkes Tel: 041 552 4400 EXT 3720 E-mail address bamc93@uk.ac.strath Postal Address: British Applied Mathematics Colloquium Department of Mathematics University of Strathclyde Glasgow G1 1XH U. K. From: James R. Bunch <jrb@sdna3.ucsd.edu> Date: Thu, 26 Mar 92 12:04:13 -0800 Subject: Parlett-Kahan Meeting A conference will be held at MSRI in Berkeley on Saturday, October 17, 1992, in honor of the 60th birthdays of Beresford Parlett and William Kahan. The Organizing Committee consists of: James Bunch, UC San Diego, jbunch@ucsd.edu (March 28-midJune: bunch@ima.umn.edu)James Demmel, UC Berkeley, demmel@imafs.ima.umn.edu (after midJune: demmel@ Horst Simon, NASA Ames, simon@nas.nasa.gov The speakers at the conference will be: Scott Baden, UC San Diego James Bunch, UC San Diego James Demmel, UC Berkeley Gene Golub, Stanford Anne Greenbaum, Courant Larry Nazareth, Washington State Bahram Nour-Omid, San Francisco John Reid, Rutherford Appleton Laboratory, England David Scott, Intel G. W. Stewart, Maryland Peter Tang, Argonne There will be a banquet on the evening of October 17. The banquet speaker will be Richard Lau, ONR. There will be a Special Issue of the Journal of Numerical Linear Algebra with Applications dedicated to Parlett and Kahan. The deadline for manuscripts will be October 17, 1992. 
James Bunch will be the editor of the Special Issue. Anyone interested in submitting a manuscript should obtain a copy of "Guidelines for Contributors" from James Bunch. HOTEL INFORMATION: We have been able to reserve only 40 rooms for the conference. Some other departments are having conferences that same weekend. You must make your own reservation directly with the hotel, but mention the 20 single rooms reserved: Durant Hotel, 2600 Durant Ave, Berkeley 94704; at the campus; 510-845-8981. $75/single. (Only singles are available.) 20 rooms reserved: Marriott Hotel, 200 Marina Blvd., Berkeley; at the Berkeley Marina, 1 1/2 miles from campus, city bus service is convenient. 510-548-7920. $85 flat rate per room for a single double, triple, or quad. Other possibilities: The Women's Faculty Club, 510-642-4175, on campus. The Men's Faculty Club, 510-642-1993, on campus. The Shattuck Hotel, 510-845-7300, downtown Berkeley. (They weren't willing to reserve a block of rooms.) There are various motels in the area. We will get a list later. From: Gene Golub <golub@a31.ima.umn.edu> Date: Thu, 26 Mar 92 20:52:42 CST Subject: XI Parallel Circus XIth Parallel Circus sponsored by the Department of Computer Science, Cray Research, Inc., and the Minnesota Supercomputer Institute April 24-25, 1992 The Department of Computer Science, Cray Research, Inc., and the Minnesota Supercomputer Institute, continuing the tradition started at Yale University in 1986, will be hosting the XIth Parallel Circus at the Minnesota Supercomputer Institute in Minneapolis, MN on Friday and Saturday, April 24-25, 1992. The Parallel Circus is an informal meeting which emphasizes parallel algorithms for scientific computing. There is no set agenda. At the beginning of each session the attendees reach a consensus as to each day's program of presentations. This format allows attendees to discuss the very latest results as well as interesting work in progress. (The Tenth Circus is described in the NA Digest, v 92, # 10, March 8, 1992.) Graduate students are especially welcome to attend. There is modest support from the National Science Foundation for student travel to the Parallel Circus XI. Those students requesting support should give reasons for attending the meeting, and a budget for expenses. The student(s) should indicate their research interests and plans. A letter verifying that the student is in good standing should be sent independently by a faculty adviser. This letter should give the student's GPA. We will be pleased to consider joint proposals which would include the expenses of several students. The dealine for application has been extended to April 1, 1992. Correspondence by e-mail is desirable. FAX: (612) 625-0572 (Write "Student Support" on Fax.) E-mail: circus92@cs.umn.edu To register for XIth Parallel Circus, contact: Michael Olesen Symposium Administrator Minnesota Supercomputer Institute 1200 Washington Avenue South Minneapolis, MN 55415 Tel: (612) 624-1356 FAX: (612) 624-8861 e-mail: mikeo@s1.msi.edu.umn The registration fee of $25 includes lunches on both days of the symposium as well as a banquet the evening of Friday, April 24, 1992. Registration is required by April 10, 1992. Post deadline registrations will be accepted on a space available basis. Special conference airfares are available through Daisy Travel of St. Paul, MN. For details or to make reservations call toll free (800) 553-1660 (8:00 a.m. to 4:30 p.m. 
CST) and mention that you are participating in the XIth Parallel Circus at the Minnesota Supercomputer Institute. Arrangements have been made for a block of discount rooms at the Days Inn University, 2407 niversity Avenue S.E., Minneapolis, MN 55414, for April 23, 24, and 25. The price per night is $39 for single occupancy and $45 for double. To receive these rates contact them at (612) 623-3999 and mention that you are participating in the XIth Parallel Circus at the Minnesota Supercomputer Institute. Each presentation will be about 30 minutes long, 25 minutes plus 5 minutes for questions and discussions. Their actual length will depend on the number of participants wishing to make presentations. The Circus will begin on Friday morning at 9:00 a.m. Although the program is not yet set, it will probably conclude early Saturday Organizers: Gene Golub, Bill Harrad, Ahmed Sameh, Apostolos Gerasoulis Local Committee: Don Truhlar, Michael Olesen From: U. Vermont <zwick@hal.uvm.edu> Date: Thu, 26 Mar 92 22:08:07 GMT Subject: Positions at University of Vermont Applications are invited for two tenure-track faculty positions in Computer Science at the level of Assistant Professor beginning in the l992-93 academic year. Responsibilities will include instruction in mainstream computer science and the development of a quality research program. Candidates should show promise of excellence in both teaching and research; have demonstrable expertise in networks and distributed systems, parallel algorithms and systems, or database and knowledge base systems; and have a strong interest in interdisciplinary research in the mathematical sciences. Faculty are encouraged to supervise graduate students in related fields as well as in computer science. A doctorate in computer science or a closely related field is required. Applications will be accepted until the positions are filled. Please submit resume and description of current research interests, and have three letters of recommendation sent directly to Dr. Richard Foote, Search Committee Chairperson, 101 Votey Building, College of Engineering & Mathematics, University of Vermont, Burlington, VT 05405. Inquiries may be made by mail to the above address or by email to cssrch@uvm.edu. The University of Vermont is an Affirmative Action/Equal Opportunity employer and encourages applications from women and members of minority groups. Applications are sought in Computer Science for the Dorothean Chair in the College of Engineering and Mathematics beginning in the 1992-93 academic year. It is anticipated that the position will be filled at the level of Full Professor. The successful candidate is expected to assume a leadership role, teach mainstream computer science, and develop an externally funded research program. An established record of excellence in teaching and research in computer science is required. Candidates should have demonstrable expertise in networks and distributed systems, database and knowledge base systems, or parallel algorithms and systems, together with a strong interest in interdisciplinary research in the mathematical sciences. Faculty are encouraged to supervise graduate students in related fields as well as in computer science. A doctorate in computer science or a closely related field is required. Applications will be accepted until the position is filled. 
Please submit resume and description of current research interests, and have three letters of recommendation sent directly to Dorothean Search Committee, 101 Votey Building, College of Engineering & Mathematics, University of Vermont, Burlington, VT 05405. Inquiries may be made by mail to the above address or by email to dorsrch@uvm.edu. The University of Vermont is an Affirmative Action/Equal Opportunity employer and encourages applications from women and members of minority groups. From: Hans Mittelmann <beck@plato.la.asu.edu> Date: Thu, 26 Mar 92 17:15:20 mst Subject: Visiting Positions at Arizona State There is some chance that the Department of Mathematics at Arizona State University may be able to support one or more visiting faculty for the 1992/93 academic year (8/16/92-5/15/93). While these visitors are needed to cover the department's teaching obli- gations and probably will teach 3-4 courses during the year, preference will be given to those candidates that contribute also to the department's research efforts. In particular, anyone interested in cooperation with members of the Computational Mathematics Group (Feldstein, Jackiewicz, Mittelmann, Ringhofer, Welfert; Renaut will be on leave) should send their vita to my attention. By e-mail (TeX, LaTeX, ASCII): na.mittelmann or mittelmann@math.la.asu.edu or by FAX (602) 965 8119. Hans D. Mittelmann Department of Mathematics Arizona State University Tempe, AZ 85287-1804 From: Beth Gallagher <gallaghe@siam.org> Date: Mon, 23 Mar 92 12:22:15 EST Subject: Contents: SIAM Scientific and Statistical Computing SIAM Journal on Scientific and Statistical Computing July 1992 Volume 13, Number 4 Parallel Methods for Solving Nonlinear Block Bordered Systems of Equations Xiaodong Zhang, Richard H. Byrd, and Robert B. Schnabel Solution of Structured Geomotric Programs in Sample Survey Design Faiz A. Al-Khayyal, Thom J. Hodgson, Grant D. Capps, James A. Dorsch, David A. Kriegman, and Paul D. Pavnica An Efficient Scheme for Unsteady Flow Past an Object with Boundary Conformal to a Circle Mo-Hong Chou Block M-Matrices and Computation of Invariant Tori Luca Dieci and Jens Lorenz Analysis of Initial Transient Deletion for Parallel Steady-State Simulations Peter W. Glynn and Philip Heidelberger An Implementation of the Fast Multipole Method without Multipoles Christopher R. Anderson On the Spectrum of a Family of Preconditioned Block Toeplitz Matrices Takang Ku and C.-C. Jay Kuo Domain Decomposition with Local Mesh Refinement William D. Gropp and David E. Keyes An O(n log n) Time Algorithm for the Minmax Angle Triangulation Herbert Edelsbrunner, Tiow Seng Tan, and Roman Waupotitsch Which Cubic Spline Should One Use? R. K. Beatson and E. Chacko Integrating Products of B-Splines A. H. Vermeulen, R. H. Bartels, and G. R. Heppler End of NA Digest
{"url":"http://netlib.org/na-digest-html/92/v92n13.html","timestamp":"2014-04-19T19:44:17Z","content_type":null,"content_length":"26317","record_id":"<urn:uuid:d1b13c54-441b-4de4-b7a4-9e79270b7362>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
SPSSX-L archives -- October 2002 (#255), LISTSERV at the University of Georgia

Date: Tue, 22 Oct 2002 11:43:38 -0700
Reply-To: Michael Healy <healym@earthlink.net>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Michael Healy <healym@earthlink.net>
Subject: Statistical Questions about MDS/INDSCAL
Content-type: text/plain; charset="US-ASCII"

Hello SPSS list readers,

I have a statistical question about MDS/INDSCAL modeling that I hope this list might be able to help me with. I obtained a matrix unconditional INDSCAL model that recovered the predicted 2-dimensional structure. However, the angle of rotation of the group configuration space was such that the dimensions were not clearly defined in the CONFIG weights. That is, the group configuration space was rotated about 45 degrees from Dimension 1 being parallel to the x-axis. My interest was in examining the subject weights in this space in relation to several external measures. Since Dimensions 1 and 2 were not in line with the x- and y-axes, it seems that the Dimension 1 and Dimension 2 weights would be sharing some of the same information. To correct for this, I rotated the group configuration space so that Dimension 1 was parallel to the x-axis and re-fit the INDSCAL model by telling ALSCAL to use the rotated configuration and that this configuration was FIXED. My question is whether doing this rotation/re-fitting is acceptable or whether I have introduced some sort of error or bias into my solution.

Another question I have is that the data I am modeling are essentially z-scores, although I believe them to be only fairly stable estimates of distances given the data collection method. I am fitting these models at both the RATIO and INTERVAL levels of measurement, and I am finding that the solutions differ depending upon the measurement level--especially in the subject weights. My concern is that the RATIO level is more appropriate for the data, but because the ratio fit is more restrictive, the difference in the solution/subject weights is reflecting lack of fit to a greater extent than the interval-level solutions. Are there any guidelines for deciding which level of measurement is appropriate?

Thanks again for your help; any feedback would be appreciated.

Michael R. Healy, M.A., A.B.D.
Claremont Graduate University/Pitzer College
Department of Psychology
170 E. Tenth St.
Claremont, CA 91711-6163
Claremont Memory and Aging Project: 909-607-4499
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0210&L=spssx-l&F=&S=&P=28809","timestamp":"2014-04-19T06:53:09Z","content_type":null,"content_length":"10864","record_id":"<urn:uuid:7b6a660d-5430-4944-a3d8-c7649c9ee054>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Gallery of jsMath in Use MathsNet An on-line A-level mathematics information site. This has tons of information, interactive drill and practice, and worked problems for the student studying for A-level exams. It uses applets, JavaScript and multi-media technologies to help the student learn the material. Introduction to This text by Franz J. Vesely of the University of vienna makes extensive use of jsMath for its mathematical notation. Computational Physics The cold atom micromaser This is a short paper that describes the physics of a micro-maser, or one-atom maser by John Martin. Fonctions d'Airy en This is a short paper (in French) that describes the physic of a particle in a gravitational field, by John Martin. mécanique quantique Reynold's Transport This is a one-page document giving a proof of Reynold's transport theorem. Alpheccar's blog A French-language blog on science and freedom that has incorporated jsMath. For example, see the pages on chaos and constructivism and Galois connections. Differential Geometry A site that uses TiddlyWiki to discuss the differential geometry used in physics. For example, see the pages on exponentiation, basis vectors, and the volume form. Road Sign Math The motto of this site is "Driving + Math = Fun", and it includes pictures of roadsigns whose numeric values combine to form equations or important mathematical constants. Readers send in their own pictures and compete for the the honor of being displayed on the site. Thai Math Center This is a mathematics web board in the Thai language that has incorporated jsMath into the software that handles the message board. See in particular the discussion of displaying math on the web board, and a sample topic that includes mathematics in the discussion. Cauchy Example This page used jsMath and a Geometer's Sketchpad applet to illustrate some mathematics, but it is in Greek, so I don't know exactly what it is. Here is another page about the chain rule. Big bang modelling This description of the big bang and cosmology uses jsMath to typeset some (but not all) of its mathematics. +FQ na rede This is a Spanish-language site based on TiddlyWiki that uses jsMath to display chemical reaction equations. (Try the March 8, 2006 entry.) InfoPedia This Italian-language site uses jsMath on its mathematics and computer science pages. See for example the one on linear equations. Dimensional Analysis This is a page from a physics course by Dr. Tim Niiler that uses jsMath. UW-ACE Paul Kates seems to be incorporating jsMath into web pages and on-line testing facilities at the University of Waterloo. See the links available after pressing "continue" at this on-line course. LiteMat The Math Department at Linköpings University (Sweden) is using jsMath to display mathematics in their on-line announcements. Mephi-33 The Moscow Engineering Physical Institute runs a (Russian-language) forum using phpBB for their mathematical software in engineering program. See this example page and another example page. tttan This is a Chinese-language bulletin board that uses jsMath (see this example page). I don't know what it is all about, but it looks like fun. PukiWiki This is a Japanese-language bulletin board based on PukiWiki that uses jsMath (see this example page). OJB This is another Thai-language bulletin board, listing "synopsis, discussion, and exchange for academic literature" as its mission. Math E-Book Another Thai language site that uses jsMath. This one seems to have an associated electronic math text. 
Some of the webboard pages seem to use jsMath. WeBWorK This is an open-source on-line homework delivery system that uses jsMath as one choice for viewing the mathematical equation in the problems it assigns. The student can control which viewing option he or she wishes to use. Moodle This is an open-source course-management tool that has a module that allows students to enter mathematics using TeX notation within their discussions. The module can use jsMath as a means of rendering those equations. LON-CAPA The Learning Online Network with CAPA is an open-source distributed-learning content-management and assessment system. It has incorporated jsMath as one of its methods of displaying mathematics. (See the LON-CAPA mailing list for details.) TiddlyWiki Bob McElrath has created a jsMath plugin for TiddlyWiki, a non-linear web notebook system. This makes it easy to include jsMath in your own pages, and several other sites listed here are using this plugin. Axiom There is a project to build a browser-based interface to this computer algebra system that uses jsMath to display the mathematics generated by Axiom. (See the axiom developer mailing list Wikka This Wiki engine can be modified to use jsMath for displaying mathematics in the pages it generates. See its instruction page for how to do so. Wacko Fork This seems to be a project to modify Wacko Wiki to include lots of enhancements, including a jsMath module. Here is a German site that uses it. Poorman's CMS This is a bulletin board system from the University of Texas at Austin. A problem of the week seems to use jsMath.
{"url":"http://www.math.union.edu/~dpvc/jsMath/gallery.html","timestamp":"2014-04-18T05:31:10Z","content_type":null,"content_length":"13597","record_id":"<urn:uuid:cf975034-a694-41bb-ad24-c50e1abe2563>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Problems from the Scottish Book

Which of the problems from the Scottish Book (pdf of English version) by Stefan Banach are still open? I know that one of the problems was solved by Per Enflo, for which he got a live goose from Stanislaw Mazur.

The book version edited by Daniel Mauldin (from 1982) has commentaries on the problems as of that date.

Luis Montejano solved problem 68 in 1990 (he also solved the limiting 0 density case of problem 19, but I think this is already mentioned in Mauldin's book). The paper is called "About a problem of Ulam concerning flat sections of manifolds" and appeared in Commentarii Mathematici Helvetici.

Second problem in The Scottish Book (Edit: is open? GRP). Let $X$ be a compact metric space. Does there exist a finitely additive Borel measure $\mu$ such that $\mu(X)=1$ and $\mu(A)=\mu(B)$ whenever $A,B\subset X$ are congruent? Remark. We say that the sets $A,B \subset X$ are congruent if there exists a distance-preserving bijection from $A$ to $B$, not necessarily defined on the whole space $X$.

2nd edit. I think the best reference related to the above mentioned problem is the book by Stan Wagon, "The Banach-Tarski Paradox". On page 31 of this book, recent progress (up to 1985) toward a solution of the problem is described.
{"url":"http://mathoverflow.net/questions/122115/problems-from-the-scottish-book","timestamp":"2014-04-19T02:29:33Z","content_type":null,"content_length":"61164","record_id":"<urn:uuid:70541419-e2a9-44e1-9170-96b8285d3af5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
A Linear Homogeneous Partial Differential Equation with Entire Solutions Represented by Laguerre Polynomials Abstract and Applied Analysis Volume 2012 (2012), Article ID 609862, 10 pages Research Article A Linear Homogeneous Partial Differential Equation with Entire Solutions Represented by Laguerre Polynomials ^1College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China ^2Department of Mathematics, Shandong University, Shandong, Jinan 250100, China Received 11 November 2011; Revised 19 March 2012; Accepted 19 March 2012 Academic Editor: Agacik Zafer Copyright © 2012 Xin-Li Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We study a homogeneous partial differential equation and get its entire solutions represented in convergent series of Laguerre polynomials. Moreover, the formulae of the order and type of the solutions are established. 1. Introduction and Main Results The existence and behavior of global meromorphic solutions of homogeneous linear partial differential equations of the second order where are polynomials for , have been studied by Hu and Yang [1]. Specially, in [1, 2], they have studied the following cases of(1.1) and showed that the solutions of (1.2) and (1.3) are closely related to Bessel functions and Bessel polynomials, respectively. Hu and Li [3] studied meromorphic solutions of homogeneous linear partial differential equations of the second order in two independent complex variables: where . Equation (1.4) has a lot of entire solutions on represented by Jacobian polynomials. Global solutions of some first-order partial differential equations (or system) were studied by Berenstein and Li [4], Hu and Yang [5], Hu and Li [6 ], Li [7], Li and Saleeby [8], and so on. In this paper, we concentrate on the following partial differential equation (PDE) for a real . We will characterize the entire solutions of (1.5), which are related to Laguerre polynomials. Further, the formulae of the order and type of the solutions are obtained. It is well known that the Laguerre polynomials are defined by which are solutions of the following ordinary differential equations (ODE): Moreover, Hu [9] pointed out that the generating function of is a solution of the PDE (1.5). Based on the methods from Hu and Yang [2], we get the following results. Theorem 1.1. The partial differential equation (1.5) has an entire solution on , if and only if has a series expansion such that If is an entire function on , set we define its order by where Theorem 1.2. If is defined by (1.9) and (1.10), then where Valiron [10] showed that each entire solution of a homogeneous linear ODE with polynomial coefficients was of finite order. By studying (1.2) and (1.3), Hu and Yang showed that Valiron’s theorem was not true for general partial differential equations. Here by using Theorems 1.1 and 1.2, we can construct entire solution of (1.5) with arbitrary order . If , we define the type of by Theorem 1.3. If is defined by (1.9) and (1.10), and , then the type satisfies Lindelöf-Pringsheim theorem [11] gave the expression of order and type for one complex variable entire function, and for two variable entire function the formulae of order and type were obtained by Bose and Sharma in [12]. Hu and Yang [2] established an analogue of Lindelöf-Pringsheim theorem for the entire solution of PDE (1.2). 
But from Theorems 1.2 and 1.3, we find that the analogue theorem for the entire solution of (1.5) is different from the results due to Hu and Yang. 2. An Estimate of Laguerre Polynomials Before we prove our theorems, we give an upper bound of , which will play an important role in this paper. The following asymptotic properties of can be found in [13]: (a) holds for in the complex plane cut along the positive real semiaxis; thus, for , we obtain that holds when is large enough. (b) holds uniformly on compact subsets of , where is the Bessel function and combining with (2.3), for we can deduce that holds when is large enough. Then (2.2) and (2.5) imply where 3. Proof of Theorem 1.1 Assuming that is an entire solution on satisfying (1.5), we have Taylor expansion where Hence is an entire solution of (1.7). By the method of Frobenius (see [14]), we can get a second independent solution of (1.7) which is where are constants. So there exist and satisfying Because of the singularity of at , we obtain . That shows Now we need to estimate the terms of . Since is an entire function, we have Since we easily get Conversely, the relations (1.7), (1.9), and (1.10) imply that holds for all . Since (2.6) implies we have Combining (1.10), (3.10) with (3.12), we can get that is obviously an entire solution of (1.5 ) on . 4. Proof of Theorem 1.2 Firstly, we prove . If , the result is trivial. Now we assume and prove for any . The relation (1.15) implies that there exists a sequence such that By using Cauchy’s inequality of holomorphic functions, we have together with the formula of the coefficients of the Taylor expansion we obtain . Since , we have then Putting , we have which means . Then we can get . Next, we will prove . Set . The result is easy for ; then we assume . For any , (1.15) implies that there exists , when , we have where . For any , there exists such that when , combining with (2.6) and (4.7), we get where is a constant but not necessary to be the same every time. Set , which means that for . Further set , which yields that when , Obviously, we can choose such that for . Then We also have Therefore, when , we have which means . Hence follows by letting . 5. Proof of Theorem 1.3 Set At first, we prove . The result is trivial for , we assume and take with , set Equation (5.1) implies that there exists a sequence satisfying combining with (4.4), we can deduce that Taking , we get , which yields , so . Next, we prove . We may assume . Equation (5.1) implies that for any , there exists , such that when , For any , we choose such that when , combining with (2.6), we have Set , when , we deduce . Set , it is obvious that for . Since , there exists such that when , Then We note that for , , , then we have This shows Therefore when , Together with and the definition of type, we can get , which yields by letting . The authors sincerely thank the reviewers for their valuable suggestions and useful comments that have led to the present improved version of the original paper. The first author was partially supported by Natural Science Foundation of China (11001057), and the third author was partially supported by Natural Science Foundation of Shandong Province. 1. P.-C. Hu and C.-C. Yang, “Global solutions of homogeneous linear partial differential equations of the second order,” Michigan Mathematical Journal, vol. 58, no. 3, pp. 807–831, 2009. View at Publisher · View at Google Scholar 2. P.-C. Hu and C.-C. 
Yang, “A linear homogeneous partial differential equation with entire solutions represented by Bessel polynomials,” Journal of Mathematical Analysis and Applications, vol. 368, no. 1, pp. 263–280, 2010. View at Publisher · View at Google Scholar 3. P.-C. Hu and B.-Q. Li, “Unicity of meromorphic solutions of partial differential equations,” Journal of Mathematical Sciences, vol. 173, pp. 201–206, 2011. 4. C. A. Berenstein and B. Q. Li, “On certain first-order partial differential equations in ${ℂ}^{n}$,” in Harmonic Analysis, Signal Processing, and Complexity, vol. 238, pp. 29–36, Birkhäuser Boston, Boston, Mass, USA, 2005. View at Publisher · View at Google Scholar 5. P. C. Hu and C.-C. Yang, “Malmquist type theorem and factorization of meromorphic solutions of partial differential equations,” Complex Variables, vol. 27, no. 3, pp. 269–285, 1995. 6. P.-C. Hu and B. Q. Li, “On meromorphic solutions of nonlinear partial differential equations of first order,” Journal of Mathematical Analysis and Applications, vol. 377, no. 2, pp. 881–888, 2011. View at Publisher · View at Google Scholar 7. B. Q. Li, “Entire solutions of certain partial differential equations and factorization of partial derivatives,” Transactions of the American Mathematical Society, vol. 357, no. 8, pp. 3169–3177, 2005. View at Publisher · View at Google Scholar 8. B. Q. Li and E. G. Saleeby, “Entire solutions of first-order partial differential equations,” Complex Variables, vol. 48, no. 8, pp. 657–661, 2003. View at Publisher · View at Google Scholar 9. P.-C. Hu, Introduction of Function of One Complex Variable, Science Press, Beijing, China, 2008. 10. G. Valiron, Lectures on the General Theory of Integral Functions, Ëdouard Privat, Toulouse, France, 1923. 11. Y.-Z. He and X.-Z. Xiao, Algebroid Functions and Ordinary Differential Equations, Science Press, Beijing, China, 1988. 12. S. K. Bose and D. Sharma, “Integral functions of two complex variables,” Compositio Mathematica, vol. 15, pp. 210–226, 1963. 13. G. Szegő, Orthogonal Polynomials, vol. 23 of of American Mathematical Society Colloquium Publications, American Mathematical Society, Providence, RI, USA, 4th edition, 1975. 14. Z.-X. Wang and D.-R. Guo, Introduction to Special Function, Peking University Press, Beijing, China, 2000.
{"url":"http://www.hindawi.com/journals/aaa/2012/609862/","timestamp":"2014-04-16T13:14:46Z","content_type":null,"content_length":"443289","record_id":"<urn:uuid:96d6c251-bb5c-4b24-81ec-b3d4759b7737>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
- Power and Torque - Torque is measured; Power is calculated In order to discuss powerplants in any depth, it is essential to understand the concepts of POWER and TORQUE. HOWEVER, in order to understand POWER, you must first understand ENERGY and WORK. If you have not reviewed these concepts for a while, it would be helpful to do so before studying this article. CLICK HERE for a quick review of Energy and Work. It often seems that people are confused about the relationship between POWER and TORQUE. For example, we have heard engine builders, camshaft consultants, and other technical experts ask customers: "Do you want your engine to make HORSEPOWER or TORQUE?" And the question is usually asked in a tone which strongly suggests that these experts believe power and torque are somehow mutually exclusive. In fact, the opposite is true, and you should be clear on these facts: 1. POWER (the rate of doing WORK) is dependent on TORQUE and RPM. 2. TORQUE and RPM are the MEASURED quantities of engine output. 3. POWER is CALCULATED from torque and RPM, by the following equation: HP = Torque x RPM ÷ 5252 (At the bottom of this page, the derivation of that equation is shown, for anyone interested.) An engine produces POWER by providing a ROTATING SHAFT which can exert a given amount of TORQUE on a load at a given RPM. The amount of TORQUE the engine can exert usually varies with RPM. TORQUE is defined as a FORCE around a given point, applied at a RADIUS from that point. Note that the unit of TORQUE is one pound-foot (often misstated), while the unit of WORK is one foot-pound. Referring to Figure 1, assume that the handle is attached to the crank-arm so that it is parallel to the supported shaft and is located at a radius of 12" from the center of the shaft. In this example, consider the shaft to be fixed to the wall. Let the arrow represent a 100 lb. force, applied in a direction perpendicular to both the handle and the crank-arm, as shown. Because the shaft is fixed to the wall, the shaft does not turn, but there is a torque of 100 pounds-feet (100 pounds times 1 foot) applied to the shaft. Note that if the crank-arm in the sketch was twice as long (i.e. the handle was located 24" from the center of the shaft), the same 100 pound force applied to the handle would produce 200 lb-ft of torque (100 pounds times 2 feet). POWER is the measure of how much WORK can be done in a specified TIME. In the example on the Work and Energy page, the guy pushing the car did 16,500 foot-pounds of WORK. If he did that work in two minutes, he would have produced 8250 foot-pounds per minute of POWER (165 feet x 100 pounds ÷ 2 minutes). If you are unclear about WORK and ENERGY, it would be a benefit to review those concepts HERE In the same way that one ton is a large amount of weight (by definition, 2000 pounds), one horsepower is a large amount of power. The definition of one horsepower is 33,000 foot-pounds per minute. The power which the guy produced by pushing his car across the lot (8250 foot-pounds-per-minute) equals ¼ horsepower (8,250 ÷ 33,000). OK, all that’s fine, but how does pushing a car across a parking lot relate to rotating machinery? Consider the following change to the handle-and-crank-arm sketch above. The handle is still 12" from the center of the shaft, but now, instead of being fixed to the wall, the shaft now goes through the wall, supported by frictionless bearings, and is attached to a generator behind the wall. Suppose, as illustrated in Figure 2, that a constant force of 100 lbs. 
is somehow applied to the handle so that the force is always perpendicular to both the handle and the crank-arm as the crank turns. In other words, the "arrow" rotates with the handle and remains in the same position relative to the crank and handle, as shown in the sequence below. (That is called a "tangential force"). Figure 2 If that constant 100 lb. tangential force applied to the 12" handle (100 lb-ft of torque) causes the shaft to rotate at 2000 RPM, then the power the shaft is transmitting to the generator behind the wall is 38 HP, calculated as follows: 100 lb-ft of torque (100 lb. x 1 foot) times 2000 RPM divided by 5252 is 38 HP. The following examples illustrate several different values of TORQUE which produce 300 HP. Example 1: How much TORQUE is required to produce 300 HP at 2700 RPM? since HP = TORQUE x RPM ÷ 5252 then by rearranging the equation: TORQUE = HP x 5252 ÷ RPM Answer: TORQUE = 300 x 5252 ÷ 2700 = 584 lb-ft. Example 2: How much TORQUE is required to produce 300 HP at 4600 RPM? Answer: TORQUE = 300 x 5252 ÷ 4600 = 343 lb-ft. Example 3: How much TORQUE is required to produce 300 HP at 8000 RPM? Answer: TORQUE = 300 x 5252 ÷ 8000 = 197 lb-ft. Example 4: How much TORQUE does the 41,000 RPM turbine section of a 300 HP gas turbine engine produce? Answer: TORQUE = 300 x 5252 ÷ 41,000 = 38.4 lb-ft. Example 5: The output shaft of the gearbox of the engine in Example 4 above turns at 1591 RPM. How much TORQUE is available on that shaft? Answer: TORQUE = 300 x 5252 ÷ 1591 = 991 lb-ft. (ignoring losses in the gearbox, of course). The point to be taken from those numbers is that a given amount of horsepower can be made from an infinite number of combinations of torque and RPM. Think of it another way: In cars of equal weight, a 2-liter twin-cam engine that makes 300 HP at 8000 RPM (197 lb-ft) and 400 HP at 10,000 RPM (210 lb-ft) will get you out of a corner just as well as a 5-liter engine that makes 300 HP at 4000 RPM (394 lb-ft) and 400 HP at 5000 RPM (420 lb-ft). In fact, in cars of equal weight, the smaller engine will probably race BETTER because it's much lighter, therefore puts less weight on the front end. AND, in reality, the car with the lighter 2-liter engine will likely weigh less than the big V8-powered car, so will be a better race car for several reasons. Measuring Power A dynamometer determines the POWER an engine produces by applying a load to the engine output shaft by means of a water brake, a generator, an eddy-current absorber, or any other controllable device capable of absorbing power. The dynamometer control system causes the absorber to exactly match the amount of TORQUE the engine is producing at that instant, then measures that TORQUE and the RPM of the engine shaft, and from those two measurements, it calculates observed power. Then it applies various factors (air temperature, barometric pressure, relative humidity) in order to correct the observed power to the value it would have been if it had been measured at standard atmospheric conditions, called corrected power. Power to Drive a Pump In the course of working with lots of different engine projects, we often hear the suggestion that engine power can be increased by the use of a "better" oil pump. Implicit in that suggestion is the belief that a "better" oil pump has higher pumping efficiency, and can, therefore, deliver the required flow at the required pressure while consuming less power from the crankshaft to do so. 
While that is technically true, the magnitude of the improvement number is surprisingly small. How much power does it take to drive a pump delivering a known flow at a known pressure? We have already shown that power is work per unit time, and we will stick with good old American units for the time being (foot-pounds per minute and inch-pounds per minute). And we know that flow times pressure equals POWER, as shown by: Flow (cubic inches / minute) multiplied by pressure (pounds / square inch) = POWER (inch-pounds / minute) From there it is simply a matter of multiplying by the appropriate constants to produce an equation which calculates HP from pressure times flow. Since flow is more freqently given in gallons per minute, and since it is well known that there are 231 cubic inches in a gallon, then: Flow (GPM) x 231(cubic inches / gal) = Flow (cubic inches per minute). Since, as explained above, 1 HP is 33,000 foot-pounds of work per minute, multiplying that number by 12 produces the number of inch-pounds of work per minute in one HP (396,000). Dividing 396,000 by 231 gives the units-conversion factor of 1714.3. Therefore, the simple equation is: Pump HP = flow (GPM) x pressure (PSI) / 1714. That equation represents the power consumed by a pump having 100% efficiency. When the equation is modified to include pump efficiency, it becomes: Pump HP = (flow {GPM} x pressure {PSI} / (1714 x efficiency) Common gear-type pumps typically operate at between 75 and 80% efficiency. So suppose your all-aluminum V8 engine requires 10 GPM at 50 psi. The oil pump will have been sized to maintain some preferred level of oil pressure at idle when the engine and oil are hot, so the pump will have far more capacity than is required to maintain the 10 GPM at 50 psi at operating speed. (That's what the "relief" valve does: bypasses the excess flow capacity back to the inlet of the pump, which, as an added benefit, also dramatically reduces the prospect cavitation in the pump inlet line.) So suppose your 75%-efficient pump is maintaining 50 psi at operating speed, and is providing the 10 GPM needed by the engine. It is actually pumping roughly 50 GPM ( 10 of which goes through the engine, and the remaining 40 goes through the relief valve ) at 50 psi. The power to drive that pressure pump stage is: HP = ( 50 gpm x 50 psi ) / ( 1714 x 0.75 efficiency ) = 1.95 HP Suppose you succumb to the hype and shuck out some really big bucks for an allegedly 90% efficient pump. That pump (at the same flow and pressure) will consume: HP = ( 50 gpm x 50 psi ) / ( 1714 x 0.90 efficiency ) = 1.62 HP. WOW. A net gain of a full 1/3 of a HP. Can YOUR dyno even measure a 1-HP difference accurately and repeatably? General Observations In order to design an engine for a particular application, it is helpful to plot out the optimal power curve for that specific application, then from that design information, determine the torque curve which is required to produce the desired power curve. By evaluating the torque requirements against realistic BMEP values you can determine the reasonableness of the target power curve. Typically, the torque peak will occur at a substantially lower RPM than the power peak. The reason is that, in general, the torque curve does not drop off (%-wise) as rapidly as the RPM is increasing (%-wise). 
For a race engine, it is often beneficial ( within the boundary conditions of the application ) to operate the engine well beyond the power peak, in order to produce the maximum average power within a required RPM band. However, for an engine which operates in a relatively narrow RPM band, such as an aircraft engine, it is generally a requirement that the engine produce maximum power at the maximum RPM. That requires the torque peak to be fairly close to the maximum RPM. For an aircraft engine, you typically design the torque curve to peak at the normal cruise setting and stay flat up to maximum RPM. That positioning of the torque curve would allow the engine to produce significantly more power if it could operate at a higher RPM, but the goal is to optimize the performance within the operating An example of that concept is shown Figure 3 below. The three dashed lines represent three different torque curves, each having exactly the same shape and torque values, but with the peak torque values located at different RPM values. The solid lines show the power produced by the torque curves of the same color. Figure 3 Note that, with a torque peak of 587 lb-ft at 3000 RPM, the pink power line peaks at about 375 HP between 3500 and 3750 RPM. With the same torque curve moved to the right by 1500 RPM (black, 587 lb-ft torque peak at 4500 RPM), the peak power jumps to about 535 HP at 5000 RPM. Again, moving the same torque curve to the right another 1500 RPM (blue, 587 lb-ft torque peak at 6000 RPM) causes the power to peak at about 696 HP at 6500 RPM Using the black curves as an example, note that the engine produces 500 HP at both 4500 and 5400 RPM, which means the engine can do the same amount of work per unit time (power) at 4500 as it can at 5400. HOWEVER, it will burn less fuel to produce 450 HP at 4500 RPM than at 5400 RPM, because the parasitic power losses (power consumed to turn the crankshaft, reciprocating components, valvetrain) increases as the square of the crankshaft speed. The RPM band within which the engine produces its peak torque is limited. You can tailor an engine to have a high peak torque with a very narrow band, or a lower peak torque value over a wider band. Those characteristics are usually dictated by the parameters of the application for which the engine is intended. An example of that is shown in Figure 4 below. It is the same as the graph in Figure 3 (above), EXCEPT, the blue torque curve has been altered (as shown by the green line) so that it doesn't drop off as quickly. Note how that causes the green power line to increase well beyond the torque peak. That sort of a change to the torque curve can be achieved by altering various key components, including (but not limited to) cam lobe profiles, cam lobe separation, intake and/or exhaust runner length, intake and/or exhaust runner cross section. Alterations intended to broaden the torque peak will inevitable reduce the peak torque value, but the desirability of a given change is determined by the application. Figure 4 Derivation of the Power Equation (for anyone interested) This part might not be of interest to most readers, but several people have asked: "OK, if HP = RPM x TORQUE ÷ 5252, then where does the 5252 come from?" Here is the answer. 
By definition, POWER = FORCE x DISTANCE ÷ TIME (as explained above under the POWER heading) Using the example in Figure 2 above, where a constant tangential force of 100 pounds was applied to the 12" handle rotating at 2000 RPM, we know the force involved, so to calculate power, we need the distance the handle travels per unit time, expressed as: Power = 100 pounds x distance per minute OK, how far does the crank handle move in one minute? First, determine the distance it moves in one revolution: DISTANCE per revolution = 2 x π x radius DISTANCE per revolution. = 2 x 3.1416 x 1 ft = 6.283 ft. Now we know how far the crank moves in one revolution. How far does the crank move in one minute? DISTANCE per min. = 6.283 ft .per rev. x 2000 rev. per min. = 12,566 feet per minute Now we know enough to calculate the power, defined as: POWER = FORCE x DISTANCE ÷ TIME Power = 100 lb x 12,566 ft. per minute = 1,256,600 ft-lb per minute Swell, but how about HORSEPOWER? Remember that one HORSEPOWER is defined as 33000 foot-pounds of work per minute. Therefore HP = POWER (ft-lb per min) ÷ 33,000. We have already calculated that the power being applied to the crank-wheel above is 1,256,600 ft-lb per minute. How many HP is that? HP = (1,256,600 ÷ 33,000) = 38.1 HP. Now we combine some stuff we already know to produce the magic 5252. We already know that: TORQUE = FORCE x RADIUS. If we divide both sides of that equation by RADIUS, we get: (a) FORCE = TORQUE ÷ RADIUS Now, if DISTANCE per revolution = RADIUS x 2 x π, then (b) DISTANCE per minute = RADIUS x 2 x π x RPM We already know (c) POWER = FORCE x DISTANCE per minute So if we plug the equivalent for FORCE from equation (a) and distance per minute from equation (b) into equation (c), we get: POWER = (TORQUE ÷ RADIUS) x (RPM x RADIUS x 2 x π) Dividing both sides by 33,000 to find HP, HP = TORQUE ÷ RADIUS x RPM x RADIUS x 2 x π ÷ 33,000 By reducing, we get HP = TORQUE x RPM x 6.28 ÷ 33,000 33,000 ÷ 6.2832 = 5252 HP = TORQUE x RPM ÷ 5252 Note that at 5252 RPM, torque and HP are equal. At any RPM below 5252, the value of torque is greater than the value of HP; Above 5252 RPM, the value of torque is less than the value of HP.
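For readers who want to experiment with these relationships, here is a small C program (purely an illustrative sketch, not part of any engine-design software) that evaluates the formulas developed above: power from measured torque and RPM, the torque needed for a target power, and the pump-drive power estimate.

#include <stdio.h>

/* HP = torque (lb-ft) x RPM / 5252 */
static double horsepower(double torque_lbft, double rpm)
{
    return torque_lbft * rpm / 5252.0;
}

/* TORQUE = HP x 5252 / RPM */
static double torque_required(double hp, double rpm)
{
    return hp * 5252.0 / rpm;
}

/* Pump HP = flow (GPM) x pressure (PSI) / (1714 x efficiency) */
static double pump_horsepower(double gpm, double psi, double efficiency)
{
    return gpm * psi / (1714.0 * efficiency);
}

int main(void)
{
    /* Example 1 above: torque needed for 300 HP at 2700 RPM */
    printf("Torque for 300 HP at 2700 RPM: %.0f lb-ft\n", torque_required(300.0, 2700.0));

    /* The crank-and-handle example: 100 lb-ft at 2000 RPM */
    printf("100 lb-ft at 2000 RPM: %.1f HP\n", horsepower(100.0, 2000.0));

    /* The oil pump example: 50 GPM at 50 PSI, 75% and 90% efficient pumps */
    printf("Pump power at 75%% efficiency: %.2f HP\n", pump_horsepower(50.0, 50.0, 0.75));
    printf("Pump power at 90%% efficiency: %.2f HP\n", pump_horsepower(50.0, 50.0, 0.90));

    return 0;
}

Run as-is, it reproduces the figures used in the examples above (584 lb-ft, 38.1 HP, and roughly 1.9 and 1.6 HP for the two pumps).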
{"url":"http://www.epi-eng.com/piston_engine_technology/power_and_torque.htm","timestamp":"2014-04-16T21:52:55Z","content_type":null,"content_length":"28657","record_id":"<urn:uuid:1383fcc8-b8ce-46a0-8595-a3232fb2d7d9>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem with a sequence

The first term is 1, the second term is 2, and for all n>=3 the nth term is equal to term (n-1) plus term n/2, where n/2 is rounded down. The program should prompt for n, and then display the nth term in the sequence. It can be assumed n is 50 or less. The program must contain a loop that keeps reading new values for n and then prints the nth term. The program should stop when n=0.

//tj wright 11.1
#include <stdio.h>
void main (void)
int q,m,n,x[50];
for (m=1;m<=6;m++)
printf("enter n");
scanf ("%d", &n);
if (n==0) goto end;
for (q=3;q<=50;q++)
printf("The number is %d\n",x[n]);

When I run the program, it asks me to enter n, then I go to scan it in, and the program exits and points me to the x[2]=2; line. I am trying to trace the program and see where it is going wrong, but I cannot tell. Can anyone tell me where I am going wrong?

void main is wrong, see the FAQ. Using goto in such a short program is very poor style. Use appropriate loop structures. The indentation is shoddy. Cleaning this up will help you follow the program flow.

> for (q=3;q<=50;q++)
This steps off the end of the array.

"This steps off the end of the array" -- what does that mean?

Obvious mistake: I can't even get it to compile, it gives me:
error at line 23 (end
warning at line 3 (void main(void)) return type of 'main' is not 'int'
Last time I checked, "goto" was highly discouraged. Here, you can either use:
return;   // supposing you don't fix the return type
return 0; // supposing you fix the return type to be int like it's supposed to be
exit(0);  // works either way, but I'd generally encourage you to use "return 0;"

I could use break; instead of the goto, and it would work fine.

If you have something like:

int nums[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
int i;
for(i = 0; i < 11; ++i)
    printf("%d\n", nums[i]);

When i gets to 10 in the loop, it's stepped off the end of the array. The only valid indexes are 0 through 9 for that array.

A better solution would be to change your main loop from a for loop to a while loop. You could change this:

for (m=1;m<=6;m++)

to this:

while(n != 0)

Then you wouldn't even need to use a goto or break to end the program.

Thank you all very much for your help and sharing your knowledge!

Yeah, it should. I got a little carried away with the initializer.

sourceFile.cpp:5: `main' must return `int'
sourceFile.cpp: In function `int main(...)':
sourceFile.cpp:27: label must be followed by statement

A goto label must precede a statement. You can't just have one at the end of a block. However, you can add a NULL statement if you wish:

end: ;

Not that I'm advocating goto. It's a bad idea to use goto.
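Putting the suggestions in this thread together, a cleaned-up version might look something like this (an untested sketch of the approach described above; the array is declared one element larger so that terms 1 through 50 can be indexed directly):

#include <stdio.h>

int main(void)
{
    int n, q;
    int x[51];   /* x[1]..x[50] are used, so the array needs 51 slots */

    x[1] = 1;
    x[2] = 2;
    for (q = 3; q <= 50; q++)          /* q <= 50 stays inside the array */
        x[q] = x[q - 1] + x[q / 2];    /* integer division already rounds down */

    printf("enter n: ");
    while (scanf("%d", &n) == 1 && n != 0)
    {
        if (n >= 1 && n <= 50)
            printf("The number is %d\n", x[n]);
        printf("enter n: ");
    }

    return 0;
}

The whole table is filled in once up front, and the while loop ends cleanly when 0 is entered, so no goto or break is needed.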
{"url":"http://cboard.cprogramming.com/c-programming/90367-problem-sequence.html","timestamp":"2014-04-23T16:22:02Z","content_type":null,"content_length":"83442","record_id":"<urn:uuid:97d0af69-5548-4baa-a02c-f7a25195019d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Connes' embedding problem and Tsirelson's problem

Navascués Cobo, Miguel and Pérez García, David and Junge, Marius and Palazuelos Cabezón, Carlos and Scholz, V. B. and Werner, R. F. (2011) Connes' embedding problem and Tsirelson's problem. Journal of Mathematical Physics, 52 (1). ISSN 0022-2488

Official URL: http://arxiv.org/abs/1008.1142

We show that Tsirelson's problem concerning the set of quantum correlations and Connes' embedding problem on finite approximations in von Neumann algebras (known to be equivalent to Kirchberg's QWEP conjecture) are essentially equivalent. Specifically, Tsirelson's problem asks whether the set of bipartite quantum correlations generated between tensor product separated systems is the same as the set of correlations between commuting C*-algebras. Connes' embedding problem asks whether any separable II$_1$ factor is a subfactor of the ultrapower of the hyperfinite II$_1$ factor. We show that an affirmative answer to Connes' question implies a positive answer to Tsirelson's. Conversely, a positive answer to a matrix-valued version of Tsirelson's problem implies a positive one to Connes' problem.
{"url":"http://eprints.ucm.es/12154/","timestamp":"2014-04-21T14:50:47Z","content_type":null,"content_length":"26452","record_id":"<urn:uuid:0e50c56a-1f5c-4520-aed0-9246e281a0ac>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with the demand function when price is changing?

October 14th 2010, 10:18 AM #1

How do you write the demand function when there has been a change in demand? The question asks:

A store is selling t-shirts at $40 per shirt and at this price customers have been buying 60 shirts per month. The owner estimates that for every $1 price increase, 3 fewer shirts will be sold. If the relationship between price per shirt and number sold is linear ...
a) find the demand function expressing price, p, in terms of shirts sold, x
b) find the total revenue R(x) in terms of shirts sold

October 14th 2010, 01:27 PM #2

You have two variables, 'sales' dependent on 'price'. For (p = price, s = sales) you are given the following information: (40, 60) and (41, 57). The linear function will be $s = m\times p+c$ where $m = \frac{s_2-s_1}{p_2-p_1}$. After this step use one of (40, 60) and (41, 57) to solve for c.
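For anyone who wants to see how the numbers work out, here is one way to finish that calculation (worth checking against your own working): $m = \frac{57-60}{41-40} = -3$, and using the point (40, 60), $60 = -3(40)+c$ gives $c = 180$, so the sales line is $s = 180 - 3p$. Writing $x$ for the number of shirts sold, $x = 180 - 3p$, which rearranges to the demand function $p = 60 - \frac{x}{3}$ for part a). For part b), revenue is price times quantity, so $R(x) = x\,p = 60x - \frac{x^2}{3}$.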
{"url":"http://mathhelpforum.com/algebra/159624-help-demand-funtion-when-price-changing.html","timestamp":"2014-04-18T19:35:42Z","content_type":null,"content_length":"34379","record_id":"<urn:uuid:19a9ff51-4caa-4a82-8e61-bc8a595e3b1b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
2012 AIME 1 Problem #12 I was checking out Mess or Math?’s cool blog and her latest post wonders whether problem 12 on the 2012 AIME 1 has a simple solution. Simple is a subjective term. The problem’s setup certainly suggests invoking things like the angle bisector theorem and if you know that theorem, then a solution is not far around the corner. But here’s a solution that is simple in the sense that it uses very little machinery, just a little linear algebra and a dash of trig (the dash of trig being that you have to know or be able to figure out the tangents of 30 and 60 degrees). For the problem statement, check out her blog. We set up the problem as in the figure. Notice that $\tan B = 1/m$. It often comes in handy on contests to know that the slope of a line is the tangent of the angle that line makes with the x-axis. Since A and B are complementary, their tangents are reciprocals of each other. Points D and E are intersections of lines whose equations are known, so we can find their coordinates with a little linear algebra…and, in fact, we only need to determine their x-coordinates because the condition of the problem is equivalent to D‘E‘ : E‘C = 8 : 15, which is the same as D‘C : E‘C = 23 : 15. The x-coordinate of the intersection of the lines $y = m(x+a)$ and $y = px$ is easy to find: we set $px = m(x+a)$ and solve for $x$ and find that $x = ma/(p-m)$. So the x-coordinate of D‘ is $-\frac{ma}{\tan 30 + m}$ and the x-coordinate of E‘ is $-\frac{ma}{\tan 60 + m}$. Putting these into the condition D‘C : E‘C = 23 : 15 yields: $\frac{ma}{\tan 30 + m}$ : $\frac{ma}{\tan 60 + m}$ = 23 : 15. This simplifies to the linear equation $23(\tan 30 + m) = 15(\tan 60 + m)$. Since $\tan 60 = \frac{1}{\tan 30} = \sqrt{3}$, we find that $m = \frac{11\sqrt{3}}{12}$. We just have to remember that we really wanted the reciprocal of $m$, so that the answer to the problem is $\frac{12}{11\sqrt{3}} = \frac{4\sqrt{3}}{11}$. Sometimes, coordinate geometry is unpleasant and not very illuminating, but in this case, it yields a rather painless solution.
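A quick numerical check of the algebra above (added for illustration, not part of the original post):

import math

m = 11 * math.sqrt(3) / 12
t30, t60 = math.tan(math.radians(30)), math.tan(math.radians(60))

print(23 * (t30 + m))  # ~49.80
print(15 * (t60 + m))  # ~49.80, so the two sides of the linear equation agree
print(1 / m)           # ~0.6298, i.e. 4*sqrt(3)/11, the reciprocal of m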
{"url":"http://girlsangle.wordpress.com/2012/04/15/2012-aime-1-problem-12/","timestamp":"2014-04-20T06:32:45Z","content_type":null,"content_length":"68103","record_id":"<urn:uuid:79ac04a9-0e29-40e9-8813-4397de96bfef>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] Using loops to run functions over a list of variables rapton mattcomm at gmail.com Tue May 12 17:55:31 CEST 2009 I have a data set with many variables, and often I want to run a given function, like summary() or cor() or lmer() etc., on many combinations of one or more of these variables. For every combination of variables I want to analyze I have been writing out the code by hand, but given that I want to run many different functions over dozens and dozens of variable combinations it is taking a lot of time and making for very inelegant code. There *has* to be a better way! I have tried looking through numerous message boards but everything I've tried has failed. It seems like loops would solve this problem nicely. (1) Create a list of variables of interest (2) Iterate through the list, running a given function on each variable I have a data matrix which I have creatively called "data". It has variables named "focus" and "productive". If I run the function summary() directly, for instance as summary(data$focus) or summary(data$productive), it works fine: both of these work. If I try to use a loop like: factors <- c("data$focus", "data$productive") for(i in 1:2){ summary(get(factors[i])) } It gives the following errors: Error in get(factors[i]) : variable "data$focus" was not found Error in summary(get(factors[i])) : error in evaluating the argument 'object' in selecting a method for function 'summary' But data$focus *does* exist! I could run summary(data$focus) and it works. What am I doing wrong? Even if I get this working, is there a better way to do this, especially if I have dozens of variables to analyze? Any ideas would be greatly appreciated!
{"url":"https://stat.ethz.ch/pipermail/r-help/2009-May/198088.html","timestamp":"2014-04-17T06:44:24Z","content_type":null,"content_length":"4671","record_id":"<urn:uuid:4204ad58-43fb-40a2-af8d-b7362ee6af0c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts Tagged with 'Mathematica'—Wolfram|Alpha Blog Wolfram|Alpha isn't just the wolframalpha.com website; it's a whole range of technologies. While the website may be the most familiar way to access these technologies, there are many potential uses and interfaces for the Wolfram|Alpha technology. We've already seen a few. Mobile apps for Google's Android and Apple's iOS make Wolfram|Alpha accessible anywhere. Widgets allow users to tap portions of Wolfram|Alpha and bring them into their own webpages. The Wolfram|Alpha API allows programmers to integrate Wolfram|Alpha's data and computation abilities in their own programs. There are even private custom versions of Wolfram|Alpha used to analyze confidential corporate data. But now there's another interface to Wolfram|Alpha, one which brings with it a whole new set of capabilities: Mathematica. With the new Mathematica 8, you can access the Wolfram|Alpha engine directly from within Mathematica. Inside a Mathematica notebook document, just type == at the beginning of a line; you'll get an orange Spikey icon indicating that Mathematica is ready to perform a Wolfram|Alpha query. Now simply type anything that you would type into the Wolfram|Alpha website. You'll get back the same results as on the website—and more! Using the full power of the Mathematica software, this interface to Wolfram|Alpha allows new levels of interactivity and detail. In Mathematica, all graphics can be resized, and three-dimensional graphics can be rotated. Moreover, since Mathematica receives the underlying vector graphic from Wolfram|Alpha and not simply a bit-mapped image, this means that enlarging a graphic provides greater detail instead of a boxy image. For example, let's look at everyone's favorite three-dimensional surface, the Mathematica Spikey. By simply clicking and dragging, you can rotate the Spikey. To resize, click the resize points on the frame that appear after clicking on the graphic. Last week we shared with you a highlight from Stephen Wolfram's keynote at the International Mathematica User Conference 2009. The highlight included a look at what's in the research and development pipeline for Mathematica and future directions of Wolfram|Alpha. In this final video of our series, Stephen shares how the developments of Wolfram|Alpha will be integrated with Mathematica. (For more of Stephen's keynote, please see parts 1 and 2 on the Wolfram Blog and part 3, "Future Directions for Wolfram|Alpha," here on the Wolfram|Alpha Blog.) Are you interested in learning more about Mathematica—the powerful technology engine that makes Wolfram|Alpha possible, from its advanced computational algorithms to web deployment? We are pleased to announce that the International Mathematica User Conference 2009 will be held October 22–24 in Champaign, Illinois, USA. This is a great opportunity for anyone interested in learning more about Mathematica to meet and hear from Mathematica users from around the globe and all walks of life. If you'd like to learn more about Mathematica and all it brings to Wolfram|Alpha, we'd love to see you at this year's conference. Please visit the Wolfram Blog for more details. Starting later today, we'll be launching Wolfram|Alpha (you can see the proceedings on a live webcast). This is a proud moment for us and for the whole Mathematica community. (We hope the launch goes well!) Wolfram|Alpha defines a new direction in computing—that would simply not have been possible without Mathematica, and that in time will add some remarkable new dimensions to Mathematica itself. In terms of technology, Wolfram|Alpha is a uniquely complex software system, which has been entirely developed and deployed with Mathematica and Mathematica technologies.
{"url":"http://blog.wolframalpha.com/tag/mathematica/","timestamp":"2014-04-18T18:12:03Z","content_type":null,"content_length":"41381","record_id":"<urn:uuid:7370a837-0804-4148-9c15-bdf674f2b992>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Interior Solution Definition of Interior Solution: An interior solution is a choice made by an agent that can be characterized as an optimum located at a tangency of two curves on a graph. A classic example of an interior solution is the tangency between a consumer's budget line (characterizing the maximum amounts of good X and good Y that the consumer can afford) and the highest possible indifference curve. The condition at that tangency is: (marginal utility of X)/(price of X) = (marginal utility of Y)/(price of Y) Contrast interior solution with corner solution. (Econterms)
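As an added illustration (not part of the original glossary entry), take the standard Cobb-Douglas case: maximize $U(x,y) = x^{\alpha}y^{1-\alpha}$ subject to the budget $p_x x + p_y y = M$. The tangency condition above becomes
$$\frac{MU_x}{p_x} = \frac{MU_y}{p_y} \quad\Longleftrightarrow\quad \frac{\alpha}{1-\alpha}\cdot\frac{y}{x} = \frac{p_x}{p_y},$$
which, combined with the budget line, gives the interior solution $x^* = \alpha M/p_x$ and $y^* = (1-\alpha)M/p_y$. Both quantities are strictly positive, so the optimum is not at a corner.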
{"url":"http://economics.about.com/library/glossary/bldef-interior-solution.htm","timestamp":"2014-04-17T00:48:54Z","content_type":null,"content_length":"35030","record_id":"<urn:uuid:9ddad354-86e9-42de-b1e9-6c3df989190f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate the derivative of the function (using the Chain Rule): f(x) = √(5x + x²), with everything under the root. • one year ago
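For reference, a worked answer to the question above (added; it is not from the original thread): write $f(x) = (5x+x^2)^{1/2}$ and apply the chain rule with inner function $u = 5x + x^2$:
$$f'(x) = \frac{1}{2}(5x+x^2)^{-1/2}\cdot(5+2x) = \frac{5+2x}{2\sqrt{5x+x^2}}.$$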
{"url":"http://openstudy.com/updates/5091eab4e4b0ad620537f5b4","timestamp":"2014-04-16T13:32:07Z","content_type":null,"content_length":"44736","record_id":"<urn:uuid:fca6c7a8-27d2-4ab4-a080-104661c54b76>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Braingle: '1000 Point Star' Brain Teaser 1000 Point Star Logic puzzles require you to think. You will have to be logical in your reasoning. Puzzle ID: #32838 Category: Logic Submitted By: lessthanjake789 Corrected By: cnmne Typical "stars" are drawn in connected, but not repeated, line segments. For example, a 5-point star is drawn as such - line segments AC, CE, EB, BD, DA. The segments must always alternate a constant number of points (in the above case, skipping 1 point in between). Given the information that there is only 1 way to draw a 5-point star, and that there is NO way to draw a 6-point star (in continuous lines, that is), and there are 2 ways to draw a 7-point star, how many different ways are there to draw a 1000-point star? Show Hint First, let's examine the 5-point star and why there is only that one, described, way to draw it. Starting at A, you cannot go straight to point B, otherwise you will have a pentagon, not a star. Same with point E. AC and AD (and their progressions) yield identical pictures (notice that the last segment in the AC one is DA, literally reverse all the letters to get AD ending in CA). Now, for a 6-point star, there are points A, B, C, D, E, and F. Starting at A, we cannot go to B or F, because then we have a hexagon. If we go AC, then we must go CE, then CA... making a triangle, not a 6-point star. Going backwards, AE, EC, EA; mirror image. If we skip 2 points, we go AD, then DA, a straight line, hardly a 6-point star. For a 7-point star, A-G, AB and AG are not possible, as that's a heptagon. AC, CE, EG, GB, BD, DF, FA is one way to draw the star, skipping one point in between. Also, AD, DG, GC, CF, FB, BE, EA, skipping two points in between each, is the other way. The last two line possibilities, AF and AE are mirror images of the two working scenarios. Examining the above information a little more deeply, we see that we must divide all possible drawings by two, to get rid of the mirror images. Also, we cannot use the point immediately to the left, that is AB as a segment. This explains why there is only 1 way for a 5-point, and 2 for a 7-point, but does not explain the 6-point. Of those remaining points (dealing only with the first half, again, mirror images), those that are factors of the number of total points do not work, because you will end up back at the starting point before hitting every number. AC is a factor in six because it passes point B, so it's 2. AD is a factor of 3 (B, C and D). Not only factors, but multiples of factors, as well (for example, skipping 4 points in the 6-point still does not work). Using the knowledge of mirrors, first point impossibility (AB) and factors, we see that the number 1000 is made up of 10^3, or 2^3*5^3. Without going through every factor, it's quickly and easily discernible that of the numbers 1-1000, 2 or multiples of it encompass 500 numbers, all evens. Any multiple of 5 includes 200 numbers, half of which end in 0 and are already included with the 2's, so there are 600 unique factors and multiples between 1 and 1000. That leaves us with 400 that are *not*, which should explain how many unique stars we can make. Remember to cut it in half because of mirroring, leaving us with 200, and to discount the AB segment (factor of 1), leaving us with 199. (If you discount AB, and THEN halve it, remember you needed to remove A[Z], the mirror image of that point). Hide What Next? HiImDavid Wow, very nice. It looks liek you invested a lot of time into that one!!! 
Very nice Sep 04, 2006 Punk_Rocker Someone has wayyy too much free time. xD Sep 05, 2006 Interesting, though...That was cool. soccerfreak how ling did it take u to do this? Sep 05, 2006 jazzmusician46 My goodness! Sep 06, 2006 Sep 18, 2006 Great job. Must've been very time-consuming. googoogjoob Got me on this one! I've been interested in multi-pointed stars ever since a 9-pointed star showed up in a math contest I once entered in middle school. So this one was a lot of fun Sep 20, 2006 for me, even though I came up with a wrong answer. ulan Good puzzle, not so diffucult of one know about phi Oct 19, 2006 Matio_Mario Ugh! Math is my best subject, but at the end I'm like "WHAT?!?!?!? Is this English???" Oct 29, 2006 NomadShadow In the course of my job I usually have to do alot of modulus arithmetics, so this took me less than 10 seconds to figure out. Nov 08, 2006 Cool one though, this is the first time I think of stars as ranges. Keep up the good work qwertyopiusa Nice problem! Nov 29, 2006 stil Confusing use of "star" and "repeated line segment." Dec 02, 2006 Hope to effect correction. McBobby1212 ...what?... Mar 02, 2007 sftball_rocks13 wow...that was hard Mar 25, 2007 the answer was long. very long. it scared me good teaser scary teaser Odessius i dunno, a million? yet i think that defeats the purpose of brain logic. still just grab a pencil and that ones pretty easy. for some people Apr 26, 2007
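For reference, the count worked out in the answer above can be written compactly with Euler's totient function: the number of distinct $n$-point stars (up to mirror image, and excluding the plain polygon) is
$$\frac{\varphi(n)-2}{2}.$$
For $n = 1000$, $\varphi(1000) = 1000\cdot\tfrac{1}{2}\cdot\tfrac{4}{5} = 400$, so the count is $(400-2)/2 = 199$, in agreement with the answer above.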
{"url":"http://www.braingle.com/brainteasers/teaser.php?id=32838&op=2&comm=1","timestamp":"2014-04-16T16:08:55Z","content_type":null,"content_length":"37516","record_id":"<urn:uuid:74be9687-aa49-42de-ad43-ad85bf267f01>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Application of non-parametric methods for monitoring of tool chipping. Chipping of cutting tools usually occurs when a tool is exposed to cyclical stresses, i.e. in intermittent machining processes such as milling. It is a strictly nonlinear process, and sensor signals demonstrate its stochastic behaviour. A signal analysis (Lessard, 2006) in the frequency domain could result in relations connecting fast process changes and signal changes. Since such relations are not known a priori, it is necessary to perform estimation. Methods to be used for the spectra estimation could be parametric and nonparametric (Lijoi et al., 2007). While parametric methods are based on a process model, nonparametric methods are based on the calculation of the autocorrelation function and the calculation of the Fourier transform. Two problems should be pointed out here: the amount of available data is not unlimited, and the data is often corrupted by noise, or contaminated with an interfering signal. Hence, the goal of a spectral estimation based on a finite set of data is to describe the distribution (over frequency) of the power contained in a signal. Among possible nonparametric methods for the estimation of the power spectral density of a random signal, a periodogram and the MATLAB Signal Processing Toolbox (MATLAB user's guide) are applied in this study. 2. PERIODOGRAM A periodogram is based on the fact that the power spectral density of a stationary random process is the Fourier transform of the autocorrelation function. The autocorrelation function of an ergodic process in the limited interval n = 0, ..., N-1, k = 0, ..., N-1 is calculated as a finite sum (Maric, 2002), eq. 2: $\hat{r}_x(k) = \frac{1}{N}\sum_{n=0}^{N-1-k} x(n+k)\,x^*(n)$ (2) Taking the discrete Fourier transform of the autocorrelation function gives the estimate of the power spectral density, which is the periodogram defined in eq. 3: $\hat{P}_x(e^{j\omega}) = \sum_{k=-(N-1)}^{N-1} \hat{r}_x(k)\,e^{-j\omega k}$ (3) To express the periodogram directly from the process x(n), we multiply x(n) with the rectangular window $\omega_R(n)$, thus limiting x(n) to the interval [0, N-1]. The result is the process $x_N(n)$ defined on the interval [0, N-1], together with its autocorrelation function. Applying the Fourier transform gives the final expression for the periodogram, eq. 4: $\hat{P}_x(e^{j\omega}) = \frac{1}{N}\left| X_N(e^{j\omega}) \right|^2$ (4) where $X_N(e^{j\omega})$ is the discrete Fourier transform of the process $x_N(n)$. A periodogram can be used for the spectral estimation of the signal structure. Some of the properties of the periodogram are: omission of the spectrum, resolution, bias, and consistency. The approximation of the power spectrum by a periodogram requires the application of time windows. The basic window shape is rectangular, but there are a number of other fixed and adjustable window shapes. In this study, Welch's method is used to improve the periodogram. The method is based on the division of a set of data into segments (which may overlap). Subsequently, a modified periodogram is calculated for each segment (it is possible to use different windows for each segment, i.e. a modified periodogram). Finally, the mean value of the obtained estimates of the power spectral density is calculated. The method of nonparametric estimation of the spectrum power is applied for the estimation of the tool chipping process in the milling of CK 45 steel. The parameters of the machining process were constant. The process of milling was carried out to the point when the tool became worn.
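A minimal sketch of the Welch periodogram and the area-under-the-curve power coefficient used in this paper, written in Python with SciPy rather than the authors' MATLAB toolbox; the signal x and the sampling rate fs below are placeholders, not the paper's data:

import numpy as np
from scipy.signal import welch

fs = 10_000.0                 # assumed sampling rate in Hz (placeholder)
x = np.random.randn(int(fs))  # placeholder record; replace with a measured force/current/AE signal

# Welch's method: split the record into overlapping segments, window each segment,
# and average the modified periodograms to obtain the PSD estimate
f, Pxx = welch(x, fs=fs, window="hann", nperseg=1024, noverlap=512)

# power coefficient: area below the estimated PSD curve
power_coefficient = np.trapz(Pxx, f)
print(power_coefficient)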
In one of the performed experiments, tool chipping was noted and the current, acoustic, and force signals were recorded. In each pass, the tool wear parameter VB was measured. Forces were measured by a Kistler three-component force sensor. An acoustic sensor was put close to the site where the milling process was carried out, and signals of the current were taken directly from the control unit (Mulc et al., 2004), Fig. 1. [FIGURE 1 OMITTED] In order to reveal the signal structure, a detailed analysis of the structure of force, current and acoustic signals was carried out. Figure 2 shows the diagrams of forces in which the occurrence of tool edge chipping can be easily noted. The change is abrupt and hard to detect in the time domain. [FIGURE 2 OMITTED] Changes in the signal of the current are similar to those in force signals: they are visible but they do not enable us to draw the right conclusion on the degree of wear. The power contained in the acoustic signal increases significantly with the degree of wear of the tool and it exhibits periodic behaviour. The influence of the signal stochastic behaviour makes a timely determination of the basic state more difficult. A periodogram was used in this study to perform a spectral analysis of the signal. A coefficient that expresses the area below the periodogram curve has been selected. The results of the experiment are given in Tab. 1. [FIGURE 3 OMITTED] The analysis of the results shows that the occurrence of chipping could be recorded by means of power coefficients, and thus, the chipping process could be stopped before the tool breakage. The force signal in the y-axis direction and the current signal in the x-axis direction exhibit the biggest change in the power coefficient of the spectral range. Therefore, a more detailed analysis of the periodogram structure for these two signals has been performed. The frequency range which reacts to changes in the cutting edge wear and to phenomena caused by abrupt changes in chipping or by tool breakage has to be determined. The diagrams in Fig. 3 show changes in the current spectrum in the x-axis direction. One can see that the frequency structure of the current signal is maintained with small changes in the fall in amplitude when the tool cutting edge is worn, while the frequency structure of the signal is disturbed when tool chipping and tool breakage occur. At the same time, the power contained in the signal is increased in the frequency range with a higher degree of wear, which can be a good indicator for the initialization of chipping. 4. CONCLUSION A successful design of the monitoring process of machining when tool chipping occurs requires the knowledge of recorded spectra of signals and the spectra of their power. In this study, a nonparametric estimate of the power spectrum in the monitoring process of cutting tool edge chipping is given. A periodogram was used, and the area below the curve was used as a coefficient of comparison. The biggest changes in the coefficient, which can be used for the detection of the initialization of the tool chipping, occurred on the force signal in the y-axis and on the current signal in the x-axis direction. This proves that a periodogram can be a satisfactory estimator in particular situations. The periodogram can be improved by different procedures of window optimization (Thomson, 1998). There is not "the best" method; rather, the selection of "the best" method depends on the signal and on the estimation parameter. 
In addition, a relatively large set of data is required for the application of nonparametric methods in order to obtain as good results as with parametric methods. Further research conducted by means of spectral analysis estimation would follow a process of a more detailed description of the machining process and an analysis of different conditions of the chipping process. 5. REFERENCES Lessard, C. S. (2006). Signal processing of random physiological signals, Morgan & Claypool, ISBN: 9781598290387, Texas A&M University Lijoi, A.; Mena, R. & Prunster, I. (2007). A Bayesian nonparametric method for prediction in EST analysis, BMC Bioinformatics, 8:339 Mulc, T.; Udiljak, T.; Cus, F. & Milfelner, M. (2004). Monitoring Cutting-Tool Wear Using Signals from the Control System, Journal of Mechanical Engineering, 50 (2004), 12, 568-579, ISSN: 0039-2480 Thomson, D. J. (1998). Multiple-Window Spectrum Estimates, Proceedings of Highlights of Statistical Signal and Array Processing, pp. 344-347, IEEE SP Magazine, June 1998, Portland, OR, USA Signal Processing Toolbox for use with MATLAB, User's Guide, Version 7.10, The MathWorks, Inc., 2010, http://www.mathworks.com/products/signal/ Maric, V. (2002). Nonparametric spectrum estimation (in Croatian), Available from: http://spus.zesoi.fer.hr/projekt/2001_2002/maric/estimacija.htm, Accessed: 2010-05-17

Tab. 1. Power coefficient in the frequency range: area below the PSD curve (x 10^5)

Signal            Sharp tool   Worn tool (VB = 0.6 mm)   Chipping
Force Fx          1.9800       0.5488                    1.4380
Force Fy          3.0410       0.9609                    8.2550
Force Fz          0.5081       0.1570                    1.0546
Current Ix        0.1385       0.1340                    4.7485
Current Iy        11.6380      21.2650                   9.3290
Current Is        131.9700     151.5500                  105.9000
Acoustic AE       0.0506       2.1707                    0.7911
Acoustic AERMS    4.9107       25.0000                   23.6650
{"url":"http://www.freepatentsonline.com/article/Annals-DAAAM-Proceedings/246013714.html","timestamp":"2014-04-17T09:57:20Z","content_type":null,"content_length":"29881","record_id":"<urn:uuid:9322cace-61d1-4fe4-8913-c09502575dad>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
I'm the host, you're the player. I shuffle 3 cards, 2 of which have the word "LOSE" on them, one has "WIN". You randomly select a card but you're not allowed to turn it over and I do not turn over my 2 cards. AT THIS POINT, WHO IS MORE LIKELY TO HOLD THE WINNING CARD? I look at my cards and reveal a losing card. NOW, WHO IS MORE LIKELY TO HOLD THE WINNING CARD! I ALLOW YOU TO SWITCH TO THE REMAINING FACE DOWN CARD. SHOULD YOU? I WOULD! Hey, I figured I'd try my "hand" at this classic too! An important point here is whether my model of the original puzzle is equivalent. Your thoughts? Sent from my Verizon Wireless 4GLTE Phone 12 comments: Mr. Chase said... Yes, I think this is the same as the Monty Hall Problem. Though you're right, it's easy to tweak the Monty Hall Problem in small, nuanced ways, so that it's no longer equivalent. "You randomly select a card but you're not allowed to turn it over and I do not turn over my 2 cards. AT THIS POINT, WHO IS MORE LIKELY TO HOLD THE WINNING CARD?" You are more likely to hold the winning card. The probability of MY card being the right card is 1/3. The probability of YOU having the winning card is 2/3. "I look at my cards and reveal a losing card. NOW, WHO IS MORE LIKELY TO HOLD THE WINNING CARD!" Still the same. No new information. The probability of my card being the right card is still 1/3. The probability of one of your two cards being the right card is still 2/3. That hasn't changed. We can point to your face-down card and now say that IT is more likely to be the winning card, so... "I ALLOW YOU TO SWITCH TO THE REMAINING FACE DOWN CARD. SHOULD YOU?" ...absolutely, I should switch. The original probability that MY card was right was 1/3. The probability that YOUR face-down card is the winning card is 2/3. Thanks for your support of my argument! I've read and viewed almost every explanation and there always seems to be something missing that would convince me and students. I remember coming up with a mathematical approach a few years ago that really made sense to me but unfortunately I left it in the margins of some book I was reading! We would need a little more information about the set up to be truly monty hall. We need to know that you will ALWAYS reveal a losing card and that you will ALWAYS give us the chance to switch. Without this information then in your posted scenerio it could be that you only give us the chance to switch if we have the winning card. Which means switching will always be a bad move. Thank you, Mr. B. Yes, I should have explicitly stated those rules rather than imply them. That's why I asked for suggestions! With that said, does this version of the problem make it more or less transparent? Again, all the videos and explanations I've seen online and in print do not give a satisfactory explanation to dispel the myth that the contestant's chances of winning do NOT become 1/2 when Monty Hall reveals the goat. After all, the contestant gets to make a SECOND DECISION so why shouldn't the odds change. Here's how I see it. Instead of looking at it from the contestant's point of view, consider Monty's chances of keeping the car (a win for him!). If contestant DOES NOT SWAP, Monty will win 2 out of 3 times (if you chose Goat 1 or Goat 2, he wins), which means your chances of winning are 1/3. BUT IF YOU SWAP, YOUR CHANCES OF WINNING BECOME Darn it! Five minutes after I post a comment, I realize I missed the point I had intended to make! 
YES, AFTER MONTY SHOWS HIS CARD OR OPENS A DOOR TO REVEAL A GOAT, THERE ARE TWO POSSIBLE OUTCOMES REMAINING --- BUT THEY ARE NOT EQUALLY LIKELY! THAT IS THE KEY. IF CONTESTANT DOES NOT SWITCH, THE OUTCOME THAT MONTY HOLDS THE WINNING CARD OR REMAINING DOOR REVEALS THE CAR IS 2/3 AS I EXPLAINED IN PREVIOUS COMMENT. THE ONLY WAY THE CONTESTANT CAN IMPROVE HIS ODDS IS TO SWITCH. SO, FIVE MIN FROM NOW, I'LL HAVE ANOTHER EXPLANATION! 1. You're more likely to get the wrong card than the right card in the beginning, correct? (Since there are 2 wrong cards and 1 right card, so you have a 1/3 chance of getting the right card and 2/3 chance of getting the wrong card) 2. This doesnt change if 1 of the 2 wrong cards is revealed, correct? (Because your decision is still based off the circumstances described in #1) 3. Important Question: Would it change if you switched your decision? Yes, because you know there is 1 right card and 1 wrong card now. Your odds have increased from 1/3 to 1/2 of getting the right card. The circumstances have officially changed. I think the hardest part to understand is WHY should the person switch. I'm would ask, why not just go with eeny, meeny, miny, moe and pick one? And even if it does land on the same card you picked in the beginning, wouldn't it still have been 50/50 chance? I guess one can explain it by saying the first card you picked is "tainted" with the increased likelihood of being the wrong choice (since again, you picked it under the circumstances where you would be more likely to lose rather than win). Since that card is "tainted", your best call is to switch your decision to the remaining card? I hope I got the logic correct. Oh, and long time no see Mr. Marain! You are not forgotten! Hi YzW731! I remember you too. Read over my previous comment re why it would not be 1/2. Of course I may be wrong in my logic but I keep trying! Indeed this is difficult to understand fully... do you think that my "tainted" explanation could be valid somehow? Hi YzW731! 'Tainted' is an interesting idea but how do you define it quantitatively? I keep coming back to staying gives a 1/3 chance whereas switching gives you twice the chance. This is late, but I thought I'd join in. First, as is mentioned, it must be made VERY clear that you will ALWAYS turn over a losing card. Second, there's actually nothing wrong with the logic that says you originally had a 1 in 3 chance of picking the right card, and that now you have a 1 in 2 chance. This aspect is much clearer when you consider the following scenario: Suppose there were 10 cards, 9 with LOSE, 1 with WIN. The player chooses a card. At that point, you flip over 8 cards, all of which say LOSE (again, they *must* all say LOSE). There are now two cards left unturned, the player's original pick and one of your cards. Would you switch? I believe in that case, most people would realize that their odds went from 1 in 10 to 1 in 2, and that the odds are much better for switching. So that's what I call the "1 in 2" explanation. However, if you prefer the "2 in 3" explanation (using the original scenario), here's a good way to explain it: The player is offered a chance to switch or stay. If the player had perfect knowledge, when would he stay? He'd stay when he had the WIN card. So what is the probability of him first choosing the WIN card? 1 in 3. So, given imperfect knowledge, it is more likely that he did *not* choose the WIN card, and thus he should always switch. Again, this is much clearer if you use 10 cards. 
I'm assuming you are going to give them the cards to try this experimentally? Dear educationrealist, Thank you for your carefully reasoned comments and, yes, of course, I would have students working in pairs perform this experimentally and then play me! I find going to extreme cases as you recommended to be highly effective. Why not start with an ordinary deck of 52 cards and tell the students that the winning card is the Ace of Spades. It should be evident to most that the Ace is far more likely to be in the remaining cards! If the Ace is among those, then you have NO CHANCE of winning unless you switch!
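Since the whole point of the post is convincing skeptical students, a short simulation helps; this is an illustrative Python sketch, not something from the original post or its comments:

import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        cards = ["WIN", "LOSE", "LOSE"]
        random.shuffle(cards)
        pick = random.randrange(3)
        # the host always reveals a LOSE card that is not the player's pick
        revealed = next(i for i in range(3) if i != pick and cards[i] == "LOSE")
        if switch:
            pick = next(i for i in range(3) if i not in (pick, revealed))
        wins += cards[pick] == "WIN"
    return wins / trials

print("stay:  ", play(switch=False))   # about 1/3
print("switch:", play(switch=True))    # about 2/3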
{"url":"http://mathnotations.blogspot.com/2012/05/full-monty-hall-revealed.html","timestamp":"2014-04-17T15:42:10Z","content_type":null,"content_length":"210249","record_id":"<urn:uuid:bb27deca-b271-4ab0-b387-2126c29342d3>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Countable Hom/Ext implies finitely generated Today I learned this interesting fact from Jerry Kaminker: If $A$ is an abelian group such that $\mathrm{Hom}(A,\mathbb{Z})$ and $\mathrm{Ext}(A,\mathbb{Z})$ are both countably generated, then in fact $A$ is finitely generated. This is known in the literature, in some old papers by Nunke-Rotman, Chase, and Mitchell. It makes me interested in possible generalizations. Suppose that $M$ is a left module over a ring $R$ and that $\mathrm{Ext}^k(M,R)$ is countably generated for all $k$. For which $R$ can you conclude that $M$ is finitely generated or, better, finitely resolved? Any commutative Noetherian ring with finite projective dimension? Is there a countability restriction missing from this proposed generalization? What about non-commutative rings? The result has been stated for any countable PID rather than just for $\mathbb{Z}$. In fact Mitchell says that if $R$ is a countable PID and $M$ is infinitely generated, then $$|\mathrm{Hom}(M,R)|\cdot|\mathrm{Ext}(M,R)| = 2^{|M|}.$$ Victor in the comments asks for an application for this result for abelian groups. The original motivation was to extract information about what is possible for the cohomology of a topological space. The homology can be anything, provided that $H_0(X)$ is free and non-trivial. There are various obvious and not-so-obvious impossible choices for cohomology. For instance, this result implies that $H^*(X)$ cannot be countably infinitely generated as an abelian group. (And if it is so as a ring, then the degrees of the generators have to go to $\infty$.) I suspect that Jerry needs it for a similar reason. I found the result interesting because it gives an "external" criterion for whether a countable abelian group is finitely generated. Even though the forgetful functor to Set does not distinguish countable groups, it does sometimes distinguish their dual or derived dual groups. One of the things that it can determine is whether the original group was finitely generated. Also, assuming that the result holds for all PIDs, it is a natural and non-trivial generalization of the fact that the algebraic dual of an infinite-dimensional vector space $V$ satisfies $$\dim V^* = 2^{\dim V}.$$ Everyone learns this for vector spaces, so it's cool to have a module version. ac.commutative-algebra ra.rings-and-algebras 2 For the case of abelian groups, does this characterization have any interesting applications? – Victor Protsak Jul 23 '10 at 7:56 5 In addition to the original sources, a textbook exposition of the abelian group case can be found in Proposition 3F.12 of my algebraic topology book. (Don't you just hate it when authors are constantly referring to their own books?) – Allen Hatcher Jul 23 '10 at 16:11 3 @Allen: I certainly don't hate it if it's your book! – Greg Kuperberg Jul 23 '10 at 16:21
{"url":"https://mathoverflow.net/questions/33042/countable-hom-ext-implies-finitely-generated","timestamp":"2014-04-21T12:59:04Z","content_type":null,"content_length":"51275","record_id":"<urn:uuid:1cd24d03-3cad-431b-84ec-27cedae91c8d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 24 , 1994 "... Let G be an arbitrary cyclic group with generator g and order jGj with known factorization. G could be the subgroup generated by g within a larger group H. Based on an assumption about the existence of smooth numbers in short intervals, we prove that breaking the DiffieHellman protocol for G and ..." Cited by 69 (6 self) Add to MetaCart Let G be an arbitrary cyclic group with generator g and order jGj with known factorization. G could be the subgroup generated by g within a larger group H. Based on an assumption about the existence of smooth numbers in short intervals, we prove that breaking the DiffieHellman protocol for G and base g is equivalent to computing discrete logarithms in G to the base g when a certain side information string S of length 2 log jGj is given, where S depends only on jGj but not on the definition of G and appears to be of no help for computing discrete logarithms in G. If every prime factor p of jGj is such that one of a list of expressions in p, including p \Gamma 1 and p + 1, is smooth for an appropriate smoothness bound, then S can efficiently be constructed and therefore breaking the Diffie-Hellman protocol is equivalent to computing discrete logarithms. , 1998 "... Both uniform and non-uniform results concerning the security of the Diffie-Hellman key-exchange protocol are proved. First, it is shown that in a cyclic group G of order jGj = Q p e i i , where all the multiple prime factors of jGj are polynomial in log jGj, there exists an algorithm that re ..." Cited by 38 (3 self) Add to MetaCart Both uniform and non-uniform results concerning the security of the Diffie-Hellman key-exchange protocol are proved. First, it is shown that in a cyclic group G of order jGj = Q p e i i , where all the multiple prime factors of jGj are polynomial in log jGj, there exists an algorithm that reduces the computation of discrete logarithms in G to breaking the Diffie-Hellman protocol in G and has complexity p maxf(p i )g \Delta (log jGj) O(1) , where (p) stands for the minimum of the set of largest prime factors of all the numbers d in the interval [p \Gamma 2 p p+1; p+2 p p+ 1]. Under the unproven but plausible assumption that (p) is polynomial in log p, this reduction implies that the Diffie-Hellman problem and the discrete logarithm problem are polynomial-time equivalent in G. Second, it is proved that the Diffie-Hellman problem and the discrete logarithm problem are equivalent in a uniform sense for groups whose orders belong to certain classes: there exists a p... - DESIGNS, CODES, AND CRYPTOGRAPHY , 1999 "... The 1976 seminal paper of Diffie and Hellman is a landmark in the history of cryptography. They introduced the fundamental concepts of a trapdoor one-way function, a public-key cryptosystem, and a digital signature scheme. Moreover, they presented a protocol, the so-called Diffie-Hellman protoco ..." Cited by 26 (0 self) Add to MetaCart The 1976 seminal paper of Diffie and Hellman is a landmark in the history of cryptography. They introduced the fundamental concepts of a trapdoor one-way function, a public-key cryptosystem, and a digital signature scheme. Moreover, they presented a protocol, the so-called Diffie-Hellman protocol, allowing two parties who share no secret information initially, to generate a mutual secret key. This paper summarizes the present knowledge on the security of this protocol. , 2004 "... We re-examine the reduction of Maurer and Wolf of the Discrete Logarithm problem to the Di#e--Hellman problem. 
We give a precise estimate for the number of operations required in the reduction and use this to estimate the exact security of the elliptic curve variant of the Di#e--Hellman protocol for ..." Cited by 7 (0 self) Add to MetaCart We re-examine the reduction of Maurer and Wolf of the Discrete Logarithm problem to the Di#e--Hellman problem. We give a precise estimate for the number of operations required in the reduction and use this to estimate the exact security of the elliptic curve variant of the Di#e--Hellman protocol for various elliptic curves defined in standards. 1. "... . In this paper we investigate stable operations in supersingular elliptic cohomology using isogenies of supersingular elliptic curves over finite fields. Our main results provide a framework in which we give a conceptually simple new proof of an elliptic cohomology version of the Morava change of r ..." Cited by 6 (3 self) Add to MetaCart . In this paper we investigate stable operations in supersingular elliptic cohomology using isogenies of supersingular elliptic curves over finite fields. Our main results provide a framework in which we give a conceptually simple new proof of an elliptic cohomology version of the Morava change of rings theorem and also gives models for explicit stable operations in terms of isogenies and morphisms in certain enlarged isogeny categories. We are particularly inspired by number theoretic work of G. Robert, whose work we reformulate and generalize in our setting. Introduction In previous work we investigated supersingular reductions of elliptic cohomology [5], stable operations and cooperations in elliptic cohomology [3, 4, 6, 8] and in [9, 10] gave some applications to the Adams spectral sequence based on elliptic (co)homology. In this paper we investigate stable operations in supersingular elliptic cohomology using isogenies of supersingular elliptic curves over finite fields; this is ... - DIMACS Workshop on Unusual Applications of Number Theory , 1997 "... The security of many cryptographic protocols depends on the difficulty of solving the so-called "discrete logarithm" problem, in the multiplicative group of a finite field. Although, in the general case, there are no polynomial time algorithms for this problem, constant improvements are being ma ..." Cited by 3 (0 self) Add to MetaCart The security of many cryptographic protocols depends on the difficulty of solving the so-called "discrete logarithm" problem, in the multiplicative group of a finite field. Although, in the general case, there are no polynomial time algorithms for this problem, constant improvements are being made -- with the result that the use of these protocols require much larger key sizes, for a given level of security, than may be convenient. An abstraction of these protocols shows that they have analogues in any group. The challenge presents itself: find some other groups for which there are no good attacks on the discrete logarithm, and for which the group operations are sufficiently economical. In 1985, the author suggested that the groups arising from a particular mathematical object known as an "elliptic curve" might fill the bill. In this paper I review the general cryptographic protocols which are involved, briefly describe elliptic curves and review the possible attacks - ACTA ARITHMETICA , 1998 "... Let p ? 3 be a prime. 
In the ring of modular forms with q-expansions defined over Z (p) , the Eisenstein function Ep+1 is shown to satisfy (Ep+1) p\Gamma1 j \Gamma ` \Gamma1 p ' \Delta (p 2 \ Gamma1)=12 mod (p; Ep\Gamma1 ): This is equivalent to a result conjectured by de Shalit on the po ..." Cited by 3 (2 self) Add to MetaCart Let p ? 3 be a prime. In the ring of modular forms with q-expansions defined over Z (p) , the Eisenstein function Ep+1 is shown to satisfy (Ep+1) p\Gamma1 j \Gamma ` \Gamma1 p ' \Delta (p 2 \Gamma1)= 12 mod (p; Ep\Gamma1 ): This is equivalent to a result conjectured by de Shalit on the polynomial satisfied by all the j-invariants of supersingular elliptic curves over F p . It is also closely related to a result of Gross and Landweber used to define a topological version of elliptic cohomology. - J. Number Th "... We show that finite fields over which there is a curve of a given genus g ≥ 1 with its Jacobian having a small exponent, are very rare. This extends a recent result of W. Duke in the case g = 1. We also show when g = 1 or g = 2 that our bounds are best possible. 1 ..." Cited by 3 (1 self) Add to MetaCart We show that finite fields over which there is a curve of a given genus g ≥ 1 with its Jacobian having a small exponent, are very rare. This extends a recent result of W. Duke in the case g = 1. We also show when g = 1 or g = 2 that our bounds are best possible. 1 "... Abstract. In this paper we describe an algorithm that outputs the order and the structure, including generators, of the 2-Sylow subgroup of an elliptic curve over a finite field. To do this, we do not assume any knowledge of the group order. The results that lead to the design of this algorithm are ..." Cited by 3 (1 self) Add to MetaCart Abstract. In this paper we describe an algorithm that outputs the order and the structure, including generators, of the 2-Sylow subgroup of an elliptic curve over a finite field. To do this, we do not assume any knowledge of the group order. The results that lead to the design of this algorithm are of inductive type. Then a right choice of points allows us to reach the end within a linear number of successive halvings. The algorithm works with abscissas, so that halving of rational points in the elliptic curve becomes computing of square roots in the finite field. Efficient methods for this computation determine the efficiency of our algorithm. 1. , 2004 "... Soit Φ un Fq[T]-module de Drinfeld de rang 2, sur un corps fini L Fq, une extension de degré n d’un corps fini On abordera plusieurs points d’analogie avec les courbes elliptiques. Nous specifions les conditons de maximalite et de non maximalite pour l’anneau d’endomorphismes EndLΦ en tant que Fq[T] ..." Cited by 2 (1 self) Add to MetaCart Soit Φ un Fq[T]-module de Drinfeld de rang 2, sur un corps fini L Fq, une extension de degré n d’un corps fini On abordera plusieurs points d’analogie avec les courbes elliptiques. Nous specifions les conditons de maximalite et de non maximalite pour l’anneau d’endomorphismes EndLΦ en tant que Fq[T]-ordre dans l’anneau de division EndLΦ⊗Fq[T]Fq(T), on s’intéressera ensuite aux polynôme caractéristique et par son intermédiaire on calculera le nombre de classes d’iogénies. Let Φ be a Drinfeld Fq[T]-module of rank 2, over a finite field L, a finite extension of n degrees of a finite field with q elements Fq. Let m be the extension degrees of L over the field Fq[T]/P, P is the Fq[T]-characteristic of L, and d the degree of the polynomial P. 
We will discuss about a many analogies points with elliptic curves. We start by the endomorphism ring of a Drinfeld Fq[T]-module of rank 2, EndLΦ, and we specify the maximality conditions and non maximality conditions as a Fq[T]-order in the ring of division EndLΦ⊗Fq[T] Fq(T), in the next point we will interest to the characteristic polynomial of a Drinfeld module of rank 2 and used it to calculate the number of isogeny classes for such module, at last we will interested to the Characteristic of Euler-Poincare χΦ and we will calculated the cardinal of this ideals. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=211638","timestamp":"2014-04-20T17:48:27Z","content_type":null,"content_length":"37300","record_id":"<urn:uuid:1a7c4e71-6bd7-4de3-8948-1d6a1565eed2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Belmont, MA Algebra 2 Tutor Find a Belmont, MA Algebra 2 Tutor I am a motivated tutor who strives to make learning easy and fun for everyone. My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. 16 Subjects: including algebra 2, French, elementary math, algebra 1 ...I am especially strong in math, science and computing. DanielleI have taken the MCAT successfully, and have gained admission to Medical School. I not only have experience with all of the subjects covered on the MCAT, but I have perspective on how they are used and applied to medical school material. 17 Subjects: including algebra 2, Spanish, geometry, biology ...When I was an elementary school student, I played with the middle school orchestra. Since elementary school, I was concert mistress and principal violinist of my elementary, middle, and high school orchestras. I was nominated to play in the NYSSMA Festival (New York State School Music Association), the All-County Orchestra, and the Long Island String Festival. 11 Subjects: including algebra 2, Spanish, accounting, ESL/ESOL ...I didn't take any precalculus courses but all the material in this subject is covered by calculus. So is trigonometry really relevant in your day to day activities? You bet it is. 10 Subjects: including algebra 2, calculus, physics, geometry ...Two of these years was as a houseparent, while the third as Crisis Counselor. As Crisis counselor, I would assist other houseparents with behavior management, intervene in acting out situations and support academic training. People with Dyslexia use parts of the brain others do not when trying to learn. 45 Subjects: including algebra 2, chemistry, reading, physics Related Belmont, MA Tutors Belmont, MA Accounting Tutors Belmont, MA ACT Tutors Belmont, MA Algebra Tutors Belmont, MA Algebra 2 Tutors Belmont, MA Calculus Tutors Belmont, MA Geometry Tutors Belmont, MA Math Tutors Belmont, MA Prealgebra Tutors Belmont, MA Precalculus Tutors Belmont, MA SAT Tutors Belmont, MA SAT Math Tutors Belmont, MA Science Tutors Belmont, MA Statistics Tutors Belmont, MA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Allston algebra 2 Tutors Arlington Heights, MA algebra 2 Tutors Arlington, MA algebra 2 Tutors Brighton, MA algebra 2 Tutors Lexington, MA algebra 2 Tutors Medford, MA algebra 2 Tutors Melrose, MA algebra 2 Tutors Newton Center algebra 2 Tutors Newton Centre, MA algebra 2 Tutors Newtonville, MA algebra 2 Tutors Waltham, MA algebra 2 Tutors Watertown, MA algebra 2 Tutors Waverley algebra 2 Tutors West Medford algebra 2 Tutors Winchester, MA algebra 2 Tutors
{"url":"http://www.purplemath.com/belmont_ma_algebra_2_tutors.php","timestamp":"2014-04-21T15:09:55Z","content_type":null,"content_length":"24182","record_id":"<urn:uuid:667d75b4-1f20-41c0-bc87-c87f7b8ec365>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Real Analysis September 19th 2006, 10:25 AM Real Analysis Having trouble proving this. Any suggestions? For all a, b (they are real numbers) show that: max (a,b) = (1/2)[a+b+abs(a-b)] min (a,b) = (1/2)[a+b-abs(a-b)] September 19th 2006, 10:37 AM Without loss of generality suppose a >= b; then abs(a-b) = a-b, so (1/2)[a+b+abs(a-b)] = (1/2)[a+b+(a-b)] = a = max(a,b). If b were the larger we would observe that max(a,b) = max(b,a), then we would have: max (b,a) = (1/2)[a+b+abs(b-a)], and the result would again follow. min (a,b) = (1/2)[a+b-abs(a-b)] This is similar.
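A small numerical spot-check of the two identities (added for illustration; Python):

import random

for _ in range(5):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(max(a, b) - 0.5 * (a + b + abs(a - b))) < 1e-12
    assert abs(min(a, b) - 0.5 * (a + b - abs(a - b))) < 1e-12
print("identities hold on the sampled pairs")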
{"url":"http://mathhelpforum.com/calculus/5648-real-analysis-print.html","timestamp":"2014-04-21T14:51:00Z","content_type":null,"content_length":"4458","record_id":"<urn:uuid:019b1613-f83e-459d-9ba2-9fb80f92fff8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Diameter estimate of distance sphere of positively curved manifold Let $M$ be an $n$-dimensional Riemannian manifold with sectional curvature lower bound 1. Fix a point, say $O\in M$, and let $S(r)$ denote the distance sphere centered at $O$ with radius $r$. The classical Hessian comparison theorem says that the principal curvatures of $S(r)$ are less than those of the standard sphere ${S}^n(1)$. And the Toponogov triangle comparison implies that given any two points in $S(r)$, their distance in $M$ is less than or equal to the corresponding distance in the round sphere with the same opening angle at $O$. So is there any way to get an upper bound on the intrinsic diameter of $S(r)$ (i.e. its diameter in the length metric induced from the ambient metric)? How about the Ricci curvature case? riemannian-geometry dg.differential-geometry mg.metric-geometry
Its length can be arbitrary close to $\pi$; so if the center is near the middle and say $r=\tfrac\pi3$ then $S_r$ is formed by two circles near surrounding the ends of the cigar. – Anton Petrunin Mar 25 '13 at 20:20 show 4 more comments Not the answer you're looking for? Browse other questions tagged riemannian-geometry dg.differential-geometry mg.metric-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/125202/diameter-estimate-of-distance-sphere-of-positive-curved-manifold","timestamp":"2014-04-20T01:30:11Z","content_type":null,"content_length":"59620","record_id":"<urn:uuid:ad9e2abc-a04e-4545-a63e-98d6ea5956b1>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Fields and Their Particles: With Math Getting a rough understanding the basics of particle physics — our current understanding of the most elementary aspects of the universe — isn’t that hard. If you’ve had a class on physics at the advanced pre-university or beginning university level, it’s even easier. If math terrifies you, try the non-math version of this presentation [sorry, that version won't be ready for a while yet.] But if you can handle algebra, sines and cosines, and (perhaps not even necessary) the simplest aspects of calculus, then you can learn how fields work and how particles arise. There’s one leap of faith you’ll need, which involves learning a tiny bit about what quantum mechanics does. I won’t explain it in math, I’ll just tell you the answers. But once you accept that one point, everything else will follow. Here are the articles 1,2. The ball attached to a spring: 3,4,5. Waves (classical formula and equations of motion, and quantum waves) 6,7. Fields and their particles 8. How fields and particles interact with each other Once you’ve read these, don’t miss How the Higgs Field Works 15 responses to “Fields and Their Particles: With Math” 1. If people could just realize that [tex]\epsilon_0[/tex] and [tex]\mu_0[/tex] are the essential properties of the space. Just like the lengths are the properties of the space. That is all. But it turned out to be the hardest thing to realize. Length, time, [tex]\epsilon_0[/tex] and [tex]\mu_0[/tex], is everything we need to start with, concerning space. To accept them as axioms, for which Maxwell already discovered the fundamental The change of a photon position in time is equal to the reciprocal value of the [tex]\displaystyle \sqrt{ \epsilon_0 \cdot \mu_0}[/tex] [tex]\epsilon_0[/tex] and [tex]\mu_0[/tex] are something that was discovered, measured, long time ago. They are electromagnetic properties. Of the space. Photon also has electromagnetic properties. [i]Electromagnetic[/i] is that what [i]attaches[/i] energy and space, what enables the propagation of a photon, what enables photon’s existence in the way it exists – as a linearly propagating EM-energy-oscillation, which has the wavelength (spatial property), and the period of oscillation (time property). The time in which a photon makes one full EM-oscillation is [tex]\displaystyle \ Delta t = \Delta s \cdot \sqrt{\epsilon_0 \cdot \mu_0}[/tex]. Any photon will propagate with the velocity [tex]\displaystyle \frac{1}{\sqrt{ \epsilon_0 \cdot \mu_0}}[/tex], regardless of its energy. A photon’s energy is [tex]\Delta E = h \cdot \nu \Rightarrow \Delta E \cdot \Delta t = h[/tex]. The equation [tex]\Delta E \cdot \Delta t = h[/tex] is the law that each photon has to obey. In the above text are given all that is necessary to derive all of the most important equations in physics, using simple infinitesimal calculus, because all of the essential properties of space and of a photon can and do change continually. A photons mass, non-inertial mass, is [tex]\Delta m = \Delta E \cdot \epsilon_0 \cdot \mu_0[\tex], that is, it is the measure of coupling, the convolution of photons elementary energy and Best regards, □ Best regards to you too, but please keep this silliness to your own website. 2. There is nothing silly about it. The last sound science was the explanation of photoelectric effect. After that, the whole century of silliness passed. Enough with that. Stop embarrassing □ thank you for your comment. 
I’d like to see you build a transistor or a laser or a GPS system with your theory of the world. And meanwhile, what business is it of yours if I embarrass myself? Go run your own website. 3. The transistor was made with clever experimenting, based upon accidental discovery, and only after it was made and tested, it was also modeled mathematically, using the “in”, “fancy” theory of that time, the QM. That model is as good as Ptolemy-helicoids which described the movement of the heaven-bodies – it describes, but explains nothing. Does not enable any theoretical analysis on which one could rely in order to make modifications/improvements. The same was with the laser. And, concerning the GPS systems, the equations that I have derived, simply, accurately and comprehensibly, from their fundamental, elementary level physical origin, are the equations upon which the future GPS systems will be made, and the engineers will know completely and exactly why and what are they doing when they make them. The kids in high-school will soon, easily, and with understanding, derive the Newton’s principles, relativity equations, Newton’s gravitation law, … . And, as an electrotechnics engineer (electronics with telecommunications as the main course of study) and Computer Systems expert, I’ve significantly contributed in making the state-of-the-art thermo-optical DWDM TC devices (nxn SVTs, EDFA GTC, several versions of AWGs, several versions of ROADMs), through which, me and you exchange these messages today. I spent 4 years, 10-12h per day in the test&measurement lab, measuring, testing, evaluating, programming the lasers, power monitors, spectrum analysers, polarizers, robotic-measurement-stages, performing measurements, creating and automating the evaluation procedures,… And I can tell you, not even the q of quantum mechanics was used to develop them. Only experiments, and good old classical physics. Because, practically, QM is of no use. It is only for showing off, when presenting results in journals and on conferences. I won’t bother you any more, but, please, stop embarrassing yourself. It is not what a clever man should do. □ Its not true that QM was used to explain the workings of the transistor only after it was made. There was this PBS documentary in the early 2000s which described, for example, how Bardeen worked out the effect of surface electron states on the resistance at the interfaces between the semiconductors. Besides, even to understand the propagation of electrons in simple metals, as well as the inertness of certain elements and insulators, one needs QM. In fact for a lot of macroscopic properties and phenomenon (magnetism, rigidity of solids, chemical reactivity of elements, etc) one needs to invoke some QM principles even at the qualitative level! 4. We agree on one thing: one of us is embarrassing himself. 5. Pingback: Sábado, reseña: “El bosón de Higgs” de Alberto Casas y Teresa Rodrigo « Francis (th)E mule Science's News 6. miu miu 手帳 7. Thank you for your efforts … The comments, questions and your responses contain learning value. So, any comment (including this) which is not relevant or contains negative learning value should be deleted or placed in ‘lol’ category. lol!
{"url":"http://profmattstrassler.com/articles-and-posts/particle-physics-basics/fields-and-their-particles-with-math/?like=1&_wpnonce=5e79d40c83","timestamp":"2014-04-18T15:40:38Z","content_type":null,"content_length":"109739","record_id":"<urn:uuid:f5dad788-e77d-481f-9002-2ff48e07819b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponents and Logarithms

1. Solve $3^{2x} = 5^{1-x}$
2. Solve $4^{2x} - 8(2^{2x}) - 9 = 0$

Take logarithms on both sides of the first one; you get $2x\log 3 = (1-x)\log 5$, which is a simple linear equation in one variable. For the second one, use the substitution $y = 2^{2x}$.

For the second one, the answer must be given to 3 significant figures, which I can't seem to get... Instead I get, after substitution, $y^2 - 8y - 9 = 0$, so $y = 9$ or $y = -1$.

Oh man, how could I have missed the substitution part on question 2... okay, I get it :) What about question 1, where the bases are different?

Hm... let's do it.
$3^{2x} = 5^{1-x}$
Take logs on both sides:
$\log 3^{2x} = \log 5^{1-x}$
$2x \log 3 = (1-x)\log 5$
$2x \log 3 = \log 5 - x\log 5$
Bring $x\log 5$ to the left:
$2x \log 3 + x \log 5 = \log 5$
Factor $x$:
$x(2\log 3 + \log 5) = \log 5$
Can you continue now? (Happy)
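A possible way to finish both problems (the numerical values are my own working, rounded to 3 significant figures):

For question 1, $x = \dfrac{\log 5}{2\log 3 + \log 5} = \dfrac{\log 5}{\log 45} \approx 0.423$.

For question 2, reject $y = -1$ since $2^{2x} > 0$; then $2^{2x} = 9$ gives $2x\log 2 = \log 9$, so $x = \dfrac{\log 9}{2\log 2} = \dfrac{\log 3}{\log 2} \approx 1.58$.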
{"url":"http://mathhelpforum.com/algebra/161584-exponents-logarithms-print.html","timestamp":"2014-04-18T15:10:01Z","content_type":null,"content_length":"8809","record_id":"<urn:uuid:a31df659-d791-406f-9d38-af8968d4ae38>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Acorn Farms - Square Footage Calculator

* To determine the area of your yard, multiply the length by the width (both in feet). The answer will be in square feet.
* To determine the diameter of a circle (such as a tree trunk): circumference divided by 3.14. To measure the circumference of a tree trunk, wrap a fabric tape measure (or a piece of string) once around the trunk, about waist high.
* To determine the area of a circle: 3.14 times the radius squared. When measuring the area beneath a tree, the radius can be found by extending the ruler from the trunk to the drip line (the furthest extension of the tree branches).
* Approximately one cubic yard of mulch will cover 100 square feet with three inches of mulch. A more exact formula: area (in square feet) times the depth of mulch or compost you want to apply (in inches), divided by 324, gives the number of cubic yards to purchase. Also see our mulch calculator.
* 27 cubic feet equal one cubic yard.
* Three teaspoons equal one tablespoon. Two tablespoons equal one ounce. 16 tablespoons (eight ounces) equal one cup.
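The mulch formula above translates directly into code. Here is a small illustrative Python sketch (the function names are my own, not part of the Acorn Farms page):

def area_of_yard(length_ft, width_ft):
    # length times width, both in feet, gives square feet
    return length_ft * width_ft

def area_under_tree(circumference_ft):
    # circumference / 3.14 gives the diameter; half of that is the radius
    radius = circumference_ft / 3.14 / 2
    return 3.14 * radius ** 2

def cubic_yards_of_mulch(area_sq_ft, depth_in):
    # area (sq ft) times depth (inches), divided by 324, is cubic yards to buy
    return area_sq_ft * depth_in / 324

# Example: 100 sq ft covered three inches deep needs roughly one cubic yard
print(cubic_yards_of_mulch(100, 3))   # about 0.93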
{"url":"http://www.acornfarms.com/acornfarmssquarefootagecalc.htm","timestamp":"2014-04-21T14:47:15Z","content_type":null,"content_length":"24103","record_id":"<urn:uuid:a8196e23-ece6-471c-8187-dcd2ca783aae>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof Help

hi helpmewithproofs

Welcome to the forum.

"How would you put that in a 2 column proof with statements - reasons?"

Post 2 has most of what you need. You could throw in the word 'given' for statements that were given in the problem, and the congruency reason is angle/angle/side. So the only other things you need are a piece of paper, a ruler to draw the columns, and a pencil. Good luck with it.

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=281599","timestamp":"2014-04-19T01:54:52Z","content_type":null,"content_length":"16620","record_id":"<urn:uuid:74791153-1c76-47bc-b751-c5704aec02bb>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
determine least number of keypress Author determine least number of keypress hi guys, Feb 27, My brother brought home a test question that he wasnt able to answer. i tried to look at it and decided to try it for my practice but tweak the problem a bit. what i basically want to do 2008 is this: a user enters a word/s and the program computes the total number of moves one has to make using the arrow keys. but heres the kicker, you cant use the arrow keys so its basically Posts: 2 just computing the least number moves you have to make. for example: "GPS" * letter A is always the starting point * no new lines or carriage return in the input string to key in the word "GPS" the user would move DOWN 1 to select "G" then move RIGHT 3 and DOWN 1 to select "P", then move DOWN 1 and LEFT 3 to select "S". in total the least number of moves is 9 to form the word GPS. there is no problem with that since G is at the top of the virtual keyboard, P is below G and S is below P so its ascending. and no problem for an array. but the thing is when you need to move back like for example "ECHO ROCK". if you count the least number of steps it total to 25 but when i do it in my program, its 33. i tried deducting some moves and it ended at 20 which is still off here is my code to better understand: btw, this is test was given to highschool kids. i feel really embarrassed not being able to solve it any ideas? Ranch Hand Joined: I would suggest that you should look at your problem and your code. Loops are the wrong way to do this. Jan 01, 2007 Basicly you have 30 charactors that are of interest, and a 6 by 5 grid (6x6 with the final row not used (helps with the maths), as a clue, this is basicly vectors, each move between Posts: 333 letters is a vector. So if your grid co-ords start at 0 and you number each cell in the grid starting at 0. All you have to do is translate the charactor value to the cell value (a simple offset sum, with if statements for SPACE, -, . and ENT). Then you can calculate the grid co-ord from the cell, then for each move/vector, its just a matter of calculating each component (x and y) of the vector, inverting any negative numbers and summing the compont, and adding this to your running total. (There might be an ease way to add matrix's in Java but I dont know it) For example A -> E equals 4. 4,0 - 0,0. You will only have to do as many iterations as you have letters in the text? Is this for a text entry system?? Ranch Hand I would go about this much differently - IE only one loop to loop through the characters of the incoming string - and create a data structure to get the position based on the character Joined: instead of looping through the keyboard each time. Aug 31, 2006 However using your code - you need to calculate the distance (moves) from one key to the next, I'm not sure what the code is trying to do - however when moving from E to C. Posts: 226 tmp is 4 and x+y = 2 so the else runs and does total = 4 - 4 - giving zero total = 0 + 4 - (0 + 2) - giving two So your algorithm gives 2 moves for the string "EC"; To me the critical bit is to create a formula that given two positions returns the moves required to move between them. Then you just do this for all keys versus the previous key (starting with 0,0 for the first calculation) and add up these values. Sheriff [mat]: a user enters a word/s and the program computes the total number of moves one has to make using the arrow keys. 
but heres the kicker, you cant use the arrow keys so its basically just computing the least number moves you have to make. Jan 30, I'm not sure what the second sentence means here. It's the number of moves to make useing the arrow keys, except we can't use the arrow keys? Posts: Ignoring that second sentence, everything else seems to make sense. I agree with Tim's comments about how best to go about this. "I'm not back." - Bill Harding, Twister hi guys! Feb 27, thank you for your input, i came up with this: Posts: 2 it does what i want it to do. yes its just a small exercise for myself. ideally, if you dont have a keyboard and you just have arrow keys to select letters you would use the arrow keys to navigate along the virtual keyboard. i wanted to find out the least number of arrow key moves required when typing something in the virtual keyboard. so yes, its sort of a text entry exercise. when you said vectors, is it the same as map or hashmap? if i recall properly, vectors are like arrays but you can add and remove elements? is it the same as arraylist or is it more like hashmap or map? we have the same thing in mind -guess? have a look at my revised code and see if thats what you meant sorry for the confusion. its like this, you have a device with no physical keyboard and only arrow keys. the only visible keyboard is the on-screen keyboard and the way to navigate that on-screen keyboard is using the arrow keys. i just wanted to find out the least number of moves a user can make using the arrow keys to navigate the virtual keyboard Ranch Hand Hi Matt, Joined: No, in this sense I meant the maths construct (vector), i.e. a value (such as velocity) that has direction and magnitude, as oppoise to a normal value (such as area) that has just Jan 01, magnitude and is a scalar value. 2007 I would expect this to be a GCSE maths questions, so yes I would assume this is ok for high school. Do'nt know if kids learn programming at GCSE level here, so cant say if its a high Posts: 333 school computing question. However, in answer to your question Vectors are pretty much the same as ArraList, only Vectors are syncronised and thus considered thread safe. This increases their access time. If we ignore the puntuation marks, this can be done without any data sturtures, and in fact you could probably do this using recursion so would not require any loops. What I did was to convert the numerical value of the number to a cell value in the grid, and use that cell value to compute the co-ordinates then work out the vectors. Now you could just go stright from the numerical value of the number to the vectors, but that makes the maths look a lil untidy to my mind. Here is my code: Glad you found a solution. PS, the instance variable _gridSize represents the size of teh grid in the x axis, I think this is because we are moving in the X direction as we build our grid, so if we had two long columns rather then a square (ish) grid we would have a _gridSize of 2... I think. [ March 03, 2008: Message edited by: Gavin Tranter ] [ March 03, 2008: Message edited by: Gavin Tranter ] [ March 03, 2008: Message edited by: Gavin Tranter ] [ March 03, 2008: Message edited by: Gavin Tranter ] subject: determine least number of keypress
{"url":"http://www.coderanch.com/t/409583/java/java/determine-number-keypress","timestamp":"2014-04-21T00:36:42Z","content_type":null,"content_length":"39857","record_id":"<urn:uuid:9195f6d7-68bd-437b-97be-020685bf43e1>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: Splitting string variables without parse strings

From     "Nick Cox" <n.j.cox@durham.ac.uk>
To       <statalist@hsphsun2.harvard.edu>
Subject  st: RE: Splitting string variables without parse strings
Date     Fri, 8 May 2009 14:28:32 +0100

You are correct about -split-. As the original author of -split- I can comment. It was designed specifically to cope with strings containing one or more parse strings, typically but not necessarily single characters such as spaces or commas. I thought quite a bit about extending it to your kind of problem, but could see no easy way to do that which (a) did not complicate the syntax mightily and (b) was an improvement on direct use of -substr()-.

-substr()- is, and has long been, the method of choice for your kind of problem.

forval i = 1/4 {
    gen sitc_`i' = substr(sitc, `i', 1)
}

is a solution to your problem, modulo your exact variable names. Thus it isn't very tricky at all.

Ben Carpenter wrote:

I have got a problem splitting one string variable into four new string variables, each containing one part, i.e. one digit, of the former 4-digit string variable. My strings look like "103A" or "009X" (without the ""), i.e. I have got 4-digit codes which contain numbers, and I want to generate four new variables each consisting of one digit of the former 4-digit string variable.

An example: sitc (variable name of the 4-digit string var):
103A
009X

After the split the data should look like:

sitc_1st_digit  sitc_2nd_digit  sitc_3rd_digit  sitc_4th_digit
1               0               3               A
0               9               X               X

As far as I know, the split command needs characters as parse_strings to know when to "cut" the string. But I don't have any parse strings within my 4-digit string variable. How can I cope with that? Is there another command (or combination of commands) apart from "split" which can deal with the problem? I would very much appreciate help with this tricky problem.

*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-05/msg00312.html","timestamp":"2014-04-18T08:21:00Z","content_type":null,"content_length":"7557","record_id":"<urn:uuid:fd875d32-813c-4a21-9cdc-b1c4405b0b2a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by Amelia on Thursday, October 9, 2008 at 7:19pm.

Hi guys, would appreciate some help with this problem:

When a light turns green, a car speeds up from rest to 50.0 mi/h with a constant acceleration of 8.80 mi/h·s. In the next bike lane, a cyclist speeds up from rest to 20.0 mi/h with a constant acceleration of 14.5 mi/h·s. Each vehicle maintains a constant velocity after attaining its cruising speed. (a) For how long is the bicycle ahead of the car? (b) By what maximum distance does the bicycle lead the car?

• Physics - drwls, Thursday, October 9, 2008 at 7:26pm

Write equations for position vs. time for both car and bike. Solve for the time at which the car passes the bike; in other words, solve X(car) = X(bike). That will answer part (a). For part (b), differentiate X(bike) - X(car) with respect to time and find out where the derivative is zero. That will be the time of maximum separation. Show your work if you need additional help.

• Physics - Damon, Thursday, October 9, 2008 at 8:33pm

First make everything feet and seconds:
1 mi = 5280 ft, 1 h = 3600 s
50 mi/h × 5280 ft/mi × 1 h/3600 s = 73.3 ft/s
8.8 mi/h·s × 5280 ft/mi × 1 h/3600 s = 12.9 ft/s²
20 mi/h → 29.3 ft/s
14.5 mi/h·s → 21.3 ft/s²

The bike gains on the car as long as the bike is moving faster than the car. Once the car reaches 20 mi/h, the car will start to catch up. So question (b) comes down to: when does the car reach 20 mi/h (29.3 ft/s), and how far apart are the two at that instant?

29.3 = 12.9 t, so t = 2.27 s for the car to reach the bike's top speed; from then on the car catches up.

By then the car has gone d = (1/2)(12.9)(2.27)² = 33.2 ft.
The bike accelerates until 29.3 = 21.3 t, i.e. t = 1.38 s, covering d = 0.5(21.3)(1.38)² = 20.1 ft, and then cruises at 29.3 ft/s for the remaining 2.27 - 1.38 = 0.89 s, another 26.1 ft, so the bike has gone 20.1 + 26.1 = 46.2 ft.
The maximum lead is therefore about 46.2 - 33.2 ≈ 13 ft (answer, part b).

Now when does the car finally pass the bike? How long does the car accelerate?
73.3 = 12.9 t, so t = 5.68 s.
During that acceleration the car goes d = 0.5(12.9)(5.68)² = 208 ft; from then on the car moves at 73.3 ft/s.
The bike accelerates for 1.38 s, covering 20.1 ft; from then on the bike moves at 29.3 ft/s.

The car probably passes the bike while the car is still accelerating, so try that possibility first:
d(car) = 0.5 (12.9) t² and d(bike) = 20.1 + 29.3 (t - 1.38)
6.45 t² = -20.3 + 29.3 t
6.45 t² - 29.3 t + 20.3 = 0
t = 3.69 or 0.85
At 0.85 s the bike is ahead of the car and still gaining, so that root is not the passing time. At 3.69 s the car has gone d = (1/2)(12.9)(3.69)² = 87.8 ft and the bike has gone d = 20.1 + 29.3(3.69 - 1.38) = 87.8 ft, and 3.69 s < 5.68 s, so the car was indeed still accelerating.

So the bike is ahead of the car for 3.69 s (answer, part a).
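A quick numeric check of the answers above, as an illustrative Python sketch of my own (units converted to feet and seconds, as in Damon's reply):

MPH = 5280.0 / 3600.0                      # ft/s per mi/h

a_car, v_car = 8.8 * MPH, 50.0 * MPH       # ft/s^2, ft/s
a_bike, v_bike = 14.5 * MPH, 20.0 * MPH

def position(t, a, v_max):
    # distance covered after accelerating from rest to v_max, then cruising
    t_acc = v_max / a
    if t <= t_acc:
        return 0.5 * a * t * t
    return 0.5 * a * t_acc ** 2 + v_max * (t - t_acc)

t, dt, max_lead, t_pass = 0.0, 0.001, 0.0, None
while t_pass is None:
    t += dt
    lead = position(t, a_bike, v_bike) - position(t, a_car, v_car)
    max_lead = max(max_lead, lead)
    if lead < 0:
        t_pass = t

print(round(t_pass, 2), "s")       # about 3.7 s, part (a)
print(round(max_lead, 1), "ft")    # about 13 ft, part (b)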
{"url":"http://www.jiskha.com/display.cgi?id=1223594383","timestamp":"2014-04-20T04:10:36Z","content_type":null,"content_length":"10805","record_id":"<urn:uuid:2cf3dd73-cc7b-457e-8b0d-26a00bf22a35>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Severna Park Math Tutor Find a Severna Park Math Tutor ...I played organized baseball from ages 8-24. I played all positions except pitcher. I coached youth leagues while my children were growing up. 24 Subjects: including algebra 1, SAT math, prealgebra, physics ...Integrals A. Interpretations and Properties of Integrals. B. 21 Subjects: including calculus, ADD/ADHD, world history, statistics ...I passed both Praxis I & Praxis II physical science pedagogy (chemistry) tests in one attempt. Since then, I have been involved in helping many colleagues with study skills to help them sit for Praxis. I have extensive teacher as a high school math and chemistry teacher, skills I have honed over long time of dedicated service and practice. 15 Subjects: including algebra 1, algebra 2, chemistry, geometry I am a cum laude graduate of Norfolk State University with a Bachelor of Science degree in Applied Mathematics. Because of my passion for Math, I find it fun to help others understand and appreciate it as well. For two of my college years, I tutored my fellow students in all levels of Algebra and Calculus. 8 Subjects: including calculus, precalculus, trigonometry, statistics ...I enjoy challenging people, and giving them the tools to further themselves in their lives. My education philosophy is fairly straight forward: speak clearly, be excited about the material at hand, be supportive and encouraging, and give students the tools and opportunity to show what they can do. I grew up in a musical family, taking piano lessons from a young age. 15 Subjects: including geometry, piano, statistics, linear algebra Nearby Cities With Math Tutor Annapolis, MD Math Tutors Arnold, MD Math Tutors Crofton, MD Math Tutors Crownsville Math Tutors Elkridge Math Tutors Essex, MD Math Tutors Lake Shore, MD Math Tutors Middle River, MD Math Tutors Millersville, MD Math Tutors New Carrollton, MD Math Tutors Odenton Math Tutors Pasadena, MD Math Tutors Rosedale, MD Math Tutors Severn, MD Math Tutors South Bowie, MD Math Tutors
{"url":"http://www.purplemath.com/Severna_Park_Math_tutors.php","timestamp":"2014-04-17T04:10:24Z","content_type":null,"content_length":"23617","record_id":"<urn:uuid:5cf79e01-a661-447f-9d30-158e26d94deb>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
easiest way to find greatest common factor for multiple large numbers Author Message Weshel Posted: Tuesday 26th of Dec 13:01 hi People out there I really hope some math wiz reads this. I am stuck on this assignment that I have to submit in the coming week and I can’t seem to find a way to finish it. You see, my tutor has given us this homework on easiest way to find greatest common factor for multiple large numbers, exponential equations and 3x3 system of equations and I just can’t understand it. I am thinking of going to some private tutor to help me solve it. If one of you guys can give me some suggestions, I will be obliged. From: Netherlands or Holland, your IlbendF Posted: Wednesday 27th of Dec 10:13 Don’t fret my friend. It’s just a matter of time before you’ll have no problems in answering those problems in easiest way to find greatest common factor for multiple large numbers. I have the exact solution for your math problems, it’s called Algebrator. It’s quite new but I assure you that it would be perfect in assisting you in your math problems. It’s a piece of software where you can solve any kind of math problems with ease . It’s also user friendly and displays a lot of useful data that makes you understand the subject matter fully. From: Netherlands Troigonis Posted: Friday 29th of Dec 07:49 Hi , Algebrator is one awesome thing! I started using it when I was in my college . It’s been years since then, but I still use it occasionally. Take my word for it, it will really help you. From: Kvlt of Ø mulphjs Posted: Saturday 30th of Dec 15:35 I am so reassured to hear that there is hope for me. Thanks a lot . Why did I not think about this? I would like to begin on this right now . How can I get hold of this program? Please give me the particulars of where and how I can get this program. From: London, UK Vnode Posted: Monday 01st of Jan 10:49 geometry, multiplying fractions and sum of cubes were a nightmare for me until I found Algebrator, which is truly the best algebra program that I have ever come across. I have used it through many algebra classes – Algebra 2, Intermediate algebra and Intermediate algebra. Just typing in the algebra problem and clicking on Solve, Algebrator generates step-by-step solution to the problem, and my algebra homework would be ready. I highly recommend the program. From: Germany ZaleviL Posted: Tuesday 02nd of Jan 09:49 It is really good if you think so. You can find the software here http://www.easyalgebra.com/square-roots.html. From: floating in the light, never
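Leaving the software recommendations aside, the task in the thread title, the greatest common factor of several large numbers, only needs the Euclidean algorithm applied pairwise, since gcd(a, b, c, ...) = gcd(gcd(a, b), c, ...). A small illustrative Python sketch (my own, not from the thread):

from functools import reduce
from math import gcd

def gcd_of_many(numbers):
    # fold the two-argument gcd over the whole list
    return reduce(gcd, numbers)

print(gcd_of_many([13915, 23716, 50336]))   # 121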
{"url":"http://www.easyalgebra.com/elementaryalgebra/graphing-lines/easiest-way-to-find-greatest.html","timestamp":"2014-04-21T07:23:24Z","content_type":null,"content_length":"22973","record_id":"<urn:uuid:2129a914-aa1a-46fe-9995-d0c9ee6a6aab>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Portsmouth, NH Math Tutor Find a Portsmouth, NH Math Tutor ...Music history has also been fully integrated into my teaching of AP music theory, having students study and emulate the writing styles of the composers who personified each era. In addition to my academic and teaching credentials, I am an expert in teaching the Zoltan Kodaly system of Sight Sing... 46 Subjects: including calculus, precalculus, trigonometry, statistics ...I use computer simulations and mathematics to conduct physics research, so math in geometry is basic for my physics research. I graduated from Nagoya University, Japan, in 1996, and received a Master's degree of physics from Nagoya University in 1998 and a PhD in Science (physics) in 2001. I wa... 16 Subjects: including algebra 1, algebra 2, calculus, geometry ...I start the first session by giving the student some work that is easy for him or her, then increasing the difficulty until I can find the student's maximum potential. From there, I am better able to understand how the student learns, memorizes, and solves problems by looking at his or her progr... 17 Subjects: including algebra 1, reading, prealgebra, algebra 2 ...I have been interested in Math for about 15 years and I am always looking for ways to make this sometimes worrisome subject much more enjoyable for everyone. I have worked with children at an elementary school level through substitute teaching and volunteering for Big Brother Big Sister, and I h... 15 Subjects: including calculus, geometry, precalculus, statistics I currently work at Woburn Memorial High School in the special education department. I specialize in Mathematics. I graduated from UNH in 2008 with a BS in Mathematics and an option in Middle School Education. 11 Subjects: including calculus, precalculus, trigonometry, special needs Related Portsmouth, NH Tutors Portsmouth, NH Accounting Tutors Portsmouth, NH ACT Tutors Portsmouth, NH Algebra Tutors Portsmouth, NH Algebra 2 Tutors Portsmouth, NH Calculus Tutors Portsmouth, NH Geometry Tutors Portsmouth, NH Math Tutors Portsmouth, NH Prealgebra Tutors Portsmouth, NH Precalculus Tutors Portsmouth, NH SAT Tutors Portsmouth, NH SAT Math Tutors Portsmouth, NH Science Tutors Portsmouth, NH Statistics Tutors Portsmouth, NH Trigonometry Tutors
{"url":"http://www.purplemath.com/portsmouth_nh_math_tutors.php","timestamp":"2014-04-18T13:54:50Z","content_type":null,"content_length":"23771","record_id":"<urn:uuid:d1657e7b-f37c-4bfd-85a8-7f92a59aefd4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Useful inequalities cheat sheet Back to László Kozma's homepage Useful inequalities cheat sheet This is a collection of some of the most important mathematical inequalities. I tried to include non-trivial inequalities that can be useful in solving problems or proving theorems. I omitted many details, in some cases even necessary conditions (hopefully only when they were obvious). If you are not sure whether an inequality can be applied in some context, try to find a more detailed source for the exact definition. For lack of space I omitted proofs and discussions on when equality holds. I didn't include inequalities which require lengthy definitions, inequalities involving complex functions, number theory, inequalities that rely on advanced calculus (most integral inequalities) and I also omitted inequalities with a pure geometric character. Many of the inequalities are special cases of others, but I tried to resist the temptation of including only the most general form (which may not be the most easily applicable). Useful Inequalities: Download PDF Download zipped PostScript References: Books • J. M. Steele: The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities. • D. S. Bernstein: Matrix Mathematics: Theory, Facts, and Formulas. • G. H. Hardy, J. E. Littlewood, G. Pólya: Inequalities. • D. E. Knuth: The Art of Computer Programming (Volume 4) • D. S. Bernstein: Matrix Mathematics: Theory, Facts, and Formulas. • D. S. Mitrinovic: Analytic Inequalities. • D. S. Mitrinovic, J. E. Pecaric, A. M. Fink: Classical and New Inequalities in Analysis. • P. S. Bullen: A Dictionary of Inequalities. • P. S. Bullen: Handbook of Means and Their Inequalities. • J. Herman, R. Kucera, J. Simsa: Equations and Inequalities. • A. Lohwater: Introduction to Inequalities. • D. Dubhashi, A. Panconesi: Concentration of Measure for the Analysis of Randomized Algorithms. • S. Jukna: Extremal Combinatorics. • J. H. Spencer: Ten Lectures on the Probabilistic Method. Other sources • G. J. Woeginger: When Cauchy and Hölder Met Minkowski: A Tour through Well-Known Inequalities. • Zhi-Hong Sun: Inequalities for binomial coefficients. • LaTeX template: http://www.stdout.org/~winston/latex/ • Suggestions, observations: Sebastian Ziesche (product diff. ineq.), ... Please send corrections, completions, suggestions to Lkozma@gmail.com. I will keep uploading the newest version to this page. Licensed under CC Attribution-ShareAlike 3.0
{"url":"http://www.lkozma.net/inequalities_cheat_sheet/","timestamp":"2014-04-20T08:13:52Z","content_type":null,"content_length":"4087","record_id":"<urn:uuid:11b4665e-a0ae-4694-a0b0-6c132951a5fd>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
RSA Cryptography November 16, 2010 In 1973, Clifford Cocks invented a method of public-key cryptography in which the recipient of a message holds a private key, known only to himself, and publishes a public key, which may be used by anyone to send him a message that only he will be able to read. Cocks worked for the British security service GCHQ, so he was unable to publish his work. In 1978, three MIT professors, Ronald L. Rivest, Leonard M. Adelman and Adi Shamir invented the same algorithm, independently, which they patented in 1983. The algorithm is now known as RSA, after their initials. The basic idea of RSA starts with two large prime numbers of equal bit-length, p and q; their product n becomes the modulus of the cryptosystem. The totient of n is computed as φ(pq) = (p−1) × (q−1). Then two keys are chosen, the encryption key e and the decryption key d, such that de ≡ 1 (mod φ(pq)) and gcd(e, φ(pq)) = 1. Then, given a message m, an integer on the range 0 < m <n, the message is encrypted by computing m^e (mod n) and the resulting cipher-text c is decrypted by computing c^d (mod n). In practice, the keys are generated by selecting a bit-length k and and arbitrary encryption key e. A longer bit-length provides more security than a shorter bit-length; a 768-bit key has recently been factored, albeit with extreme effort (about 2000 PC-years), most commercial users of RSA are probably using 1024- or 2048-bit keys these days, and high-security users (banks, military, government, criminals) are probably using 4096- or 8192-bit keys. E must be odd, is frequently chosen to be prime to force the condition that it is co-prime to the totient, and is generally fairly small; e = 2^16+1 = 65537 is common. Then the key generator chooses p and q at random, computes d as the modular inverse of e, and reports n and d. In that way, nobody, not even the person generating the keys, knows p and q. Here is a simple example from Wikipedia: Choose p = 61 and q = 53. Then n = p×q = 3233 and the totient φ(pq) = 60×52 = 3120. Choose e=17 which is co-prime to 3120 since 17 is prime and 17∤3120; the corresponding d is the inverse of 17 with respect to the modulus 3120, so d = 2753. Then the message m = 65 is encrypted as c = m^e (mod n) = 65^17 (mod 3233) = 2790, and the cipher-text 2790 is decrypted as m = c^d (mod n) = 2790^2753 (mod 3233) = 65. The standard definition of RSA cryptography is known as PKCS #1. It provides a method for converting a text message to a number m suitable for encryption, and converting it back to the original text message, but we won’t examine that algorithm today. It is also possible to use RSA to provide non-forgeable signatures; the basic idea is that the sender encrypts a message hash with his decryption key, so the receiver can decrypt the message hash with the sender’s public key, which works because only the sender knows his private decryption key. Your task is to write an RSA key generator and procedures to encrypt and decrypt messages using the RSA algorithm as described above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below. Pages: 1 2 5 Responses to “RSA Cryptography” 1. 
November 16, 2010 at 10:15 AM My Haskell solution (see http://bonsaicode.wordpress.com/2010/11/16/programming-praxis-rsa-cryptography/ for a version with comments): keygen :: Integer -> Integer -> IO (Integer, Integer) keygen bits key = do p <- gen (div bits 2) q <- gen (div bits 2) let d = inv key ((p - 1) * (q - 1)) return (p * q, d) where gen k = fmap (until valid succ) $ randomRIO (2 ^ (k - 1), 2 ^ k) valid v = gcd key (v - 1) == 1 && mod v 4 == 3 && isPrime v crypt :: Integer -> Integer -> Integer -> Integer crypt = flip . expm 2. November 17, 2010 at 4:33 AM Racket (with shameless use of libraries): #! /bin/sh #| Hey Emacs, this is -*-scheme-*- code! exec racket -l errortrace –require “$0″ –main — ${1+”$@”} ;; http://programmingpraxis.com/2010/11/16/rsa-cryptography/ but of ;; course also http://en.wikipedia.org/wiki/Rsa #lang racket (require rackunit rackunit/text-ui (planet soegaard/math/math) (define *bit-length* (make-parameter 10)) (define (random-prime) (let* ([min (expt 2 (sub1 (*bit-length*)))] [r (+ min (big-random min))]) (next-prime r))) (provide make-keys) (define (make-keys [defaults? #f]) (let* ( [p (if defaults? 61 (random-prime))] [q (if defaults? 53 (random-prime))] [n (* p q)] [φ (* (sub1 p) (sub1 q))] [e (if defaults? 17 65537)] [d (inverse e φ)] (values (cons n e) ;public (cons d (cons n e)) ;private (provide encrypt-integer) (define (encrypt-integer m pubkey) (match-define (cons n e) pubkey) (with-modulus n (^ m e))) (provide decrypt-integer) (define (decrypt-integer c privkey) (match-define (cons d (cons n e)) privkey) (with-modulus n (^ c d))) (define-test-suite praxis-tests (let () ;; example from Programming Praxis (define-values (pub priv) (make-keys #t)) (check-equal? (encrypt-integer 65 pub) 2790))) (define-test-suite random-tests (for ([_ (in-range 5)]) (define-values (pub priv) (make-keys)) (define plaintext (big-random (car pub))) (check-equal? (decrypt-integer (encrypt-integer plaintext pub) priv) plaintext) (printf “W00t~%”))) (define-test-suite all-tests (provide main) (define (main . args) ;; This indirectly affects big-random. Feh. (random-source-randomize! default-random-source) (exit (run-tests all-tests ‘verbose))) 3. November 19, 2010 at 1:58 PM Once again, my submission is entirely too large to fit here nicely. The problem: I needed to either import some number-theoretic functions (Extended Euclidean Algorithm, primality test), or define them myself if the incredible gmpy library is unavailable for the Python installation being used. Though I’m certainly biased, I’m a huge fan of the mathematical exercises here! Sorry it took so long for me to get to 4. January 10, 2011 at 5:44 PM Works for any N = 18, the integers get too big. import java.math.BigInteger; import java.security.SecureRandom; * Cryptography. 
* Generates public and private keys used in encryption and * decryption * Author: Jamie * Version: Dec 27, 2011 public class RSA private final static BigInteger one = new BigInteger(“1″); private final static SecureRandom random = new SecureRandom(); // prime numbers private BigInteger p; private BigInteger q; // modulus private BigInteger n; // totient private BigInteger t; // public key private BigInteger e; // private key private BigInteger d; private String cipherText; * Constructor for objects of class RSA public RSA(int N) p = BigInteger.probablePrime(N/2, random); q = BigInteger.probablePrime(N/2, random); // initialising modulus n = p.multiply(q); // initialising t by euler’s totient function (p-1)(q-1) t = (p.subtract(one)).multiply(q.subtract(one)); // initialising public key ~ 65537 is common public key e = new BigInteger(“65537″); public int generatePrivateKey() d = e.modInverse(t); return d.intValue(); public String encrypt(String plainText) String encrypted = “”; for(int i = 0; i < plainText.length(); i++){ char m = plainText.charAt(i); BigInteger bi1 = BigInteger.valueOf(m); BigInteger bi2 = bi1.modPow(e, n); m = (char) bi2.intValue(); encrypted += m; cipherText = encrypted; return encrypted; public String decrypt() String decrypted = ""; for(int i = 0; i < cipherText.length(); i++){ char c = cipherText.charAt(i); BigInteger bi1 = BigInteger.valueOf(c); BigInteger bi2 = bi1.modPow(d, n); c = (char) bi2.intValue(); decrypted += c; return decrypted;
{"url":"http://programmingpraxis.com/2010/11/16/rsa-cryptography/?like=1&source=post_flair&_wpnonce=55bb07b550","timestamp":"2014-04-20T08:33:07Z","content_type":null,"content_length":"73351","record_id":"<urn:uuid:de364bd3-39f8-4dea-9be4-ee1029cd8552>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Extractors for a Constant Number of Polynomially Small Min-Entropy Independent Sources Anup Rao Department of Computer Science, University of Texas at Austin March 22, 2006 We consider the problem of randomness extraction from independent sources. We construct an extractor that can extract from a constant number of independent sources of length n, each of which have min-entropy n for an arbitrarily small constant > 0. Our extractor is obtained by composing seeded extractors in simple ways. We introduce a new technique to condense independent somewhere-random sources which looks like a useful way to manipulate independent sources. Our techniques are different from those used in recent work [BIW04, BKS+ 05, Raz05, Bou05] for this problem in the sense that they do not rely on any results from additive number theory. Using Bourgain's extractor [Bou05] as a black box, we obtain a new extractor for 2 independent block-sources with few blocks, even when the min-entropy is as small as polylog(n). We also show how to modify the 2 source disperser for linear min-entropy of Barak et al. [BKS+
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/103/2986379.html","timestamp":"2014-04-19T04:23:10Z","content_type":null,"content_length":"8317","record_id":"<urn:uuid:b2c095d9-b8e8-4da2-b749-0eb095cce3d0>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayesian Probability & Swinburne's P-Inductive & C-Inductive Arguments (For God's Existence) Bayesian Probability & Swinburne’s P-Inductive & C-Inductive Arguments (For God’s Existence) Because I am jet-lagged (and fundamentally lazy) here is a classic post, modified to include a homework assignment. Don’t be like me: do the work. This post contains some fundamental derivations from Bayes’s Theorem which are of great interest, and not just in “proving” the existence of God (I use scare quotes because if your “proof” only has God as “probable” then it is not a proof). Anxious readers may start at Notation. This originally appeared November, 2012. Because my ignorance is vast—and, as many readers will argue, probably increasing—I only just yesterday heard of Swinburne’s “P-inductive” and “C-inductive” arguments for the existence of God. I had heard of Richard Swinburne, but I thought the name historical, a contemporary of John Calvin or perhaps Jonathan Edwards. Well, it sounds Eighteenth Century Protestanty, doesn’t it? Boy, was I off base. Anyway, let’s make up for lost time. This post is merely a sketch. I want to explore his probability language only; we won’t today use his arguments to prove God’s existence. David Stove (in The Rationality of Induction) showed us how easy it is skip gleefully down the wrong path by misusing words. How misuse causes misunderstanding, and misunderstanding becomes the seed of “paradoxes” and philosophical “problems”, such as the supposed “problem of induction.” This is an academic belief which says that induction is not “justified” or rational. Yet, of course, every academic who proclaims that induction is irrational also uses it. We all do, and must use, induction. Now, an inductive argument is an invalid one. That is, the conclusion of an inductive argument does not follow infallibly from its premises. Example (from Hume): (premise) all the many flames I have seen before have been hot, therefore (conclusion) this flame will be hot. Induction is also why tenured faculty do not leap from tall buildings and expect to live. It might be that this flame won’t be hot and that holding our palm over it will turn it to ice instead of cooking our flesh. But there isn’t anybody who would make a bet on this “might.” Also, we can’t say we have deduced this flame will be hot, so the argument is invalid, but we have induced it will be. So let’s be careful with Swinburne’s language and talk of P and C probability arguments. His notation: all new arguments begin with some conclusion or hypothesis we want to judge. Call this H. All arguments, including probability arguments, are conditional on premises or evidence. Thus all arguments must have a list of premises, or evidence. Call new evidence E and old evidence or knowledge K. We write the probability H is true given K as Pr( H | K ). Let K = “We have a two sided object which will be tossed once, one side of which is labeled ‘head’, and when tossed only one side can show.” Let H = “A ‘head’ shows.” Then Pr( H | K ) = 1/2. Note very carefully that we constructed H. We could have let H = “Barbara Streisand votes for Romney” and then Pr( H | K ) = unknown. We know K but suppose we learn E = “A magician will toss the object, and when he tosses the object always comes up ‘head’.” Then Pr( H | E & K ) = 1, it is certain a ‘head’ will show. Notice that we added E to the list of premises, i.e. to K. 
We got to this equation through the use of Bayes’s theorem, which goes like this: Pr( H | E & K ) = Pr( E | H & K ) * Pr( H | K ) / Pr( E | K ) (see the classic posts link for the statistics teaching articles for why this is so). We already know Pr( H | K ) = 1/2 and that Pr( H | E & K ) = 1. But what of Pr( E | H & K ) and Pr( E | K )? Well, Pr( E | H & K ) / Pr( E | K ) must equal 2 (because 2 * 1/2 = 1). Let’s be careful: Pr( E | H & K ) says given we know K and we know a ‘head’ showed, what is the probability that a magician with this talent exists? I haven’t any idea what this number would be except to say it can’t be 1, because if it were it means we deduced E is true given (only) H & K, which of course we can’t. Now Pr( E | K ) says, given K, what is the probability a magician with this talent exists? Again, I don’t know, except to say that this probability is between 0 and 1. But we can deduce that Pr( E | H & K ) = 2 * Pr( E | K ). That is, after learning a ‘head’ did show (and knowing K), we know that E is twice as likely as before we knew the outcome. This is a great example, because it shows that not all probability is precisely quantifiable, but that we can sometimes bound it. And that it is futile to search for answers where none can be found. Even in situations that seem trivially easy, as this one. We need one more piece of notation. H is our conclusion or hypothesis. Saying “H” is shorthand for “H is true”. We need a way to say “H is false.” How about “~H”? Eh, not perfect, kinda ugly, but it’s common enough. Thus, Pr(H | K) + Pr(~H | K) = 1, right? P-probability arguments Swinburne (modified) says a P-probability argument makes its conclusion more probable than not. That means, Pr( H | E & K ) > 1/2 and the inequality is strict. This implies that a “good” P-probability argument starts with Pr( H | K ) <= 1/2. In other words, adding evidence E pushes us from “maybe not” into the realm of “maybe so.” The example we used above is a good P-probability argument. C-probability arguments Swinburne (modified) says a C-probability argument raises the probability of its conclusion. That means Pr( H | E & K ) > Pr( H | K ) where again the inequality is strict. A C-probability argument is thus weaker than a P-one, because it could be that 1/2 > Pr( H | E & K ) > Pr( H | K ). A C-probability argument can sometimes be a P-probability argument, but only when Pr( H | E & K ) > 1/2. Our example is also a good C-probability argument. Increasing probabilities Point of both these (unnecessary?) complications is to examine arguments which increase the probability of H after adding evidence E. When does that happen? Look again at Reverend Bayes, rearranged: Pr( H | E & K ) / Pr( H | K ) = Pr( E | H & K ) / Pr( E | K ). Now if Pr( H | E & K ) > Pr( H | K ), Pr( H | E & K ) / Pr( H | K ) > 1, and thus Pr( E | H & K ) / Pr( E | K ) > 1. In other words, Pr( H | E & K ) > Pr( H | K ) when the evidence (or premise) E becomes more likely if we assume H is true. And this is true for both P-probability and C-probability arguments. We already saw that this was true for our example. There is one more result to be had. It isn’t as easy to get to. Ready? Two rules of probability let us write: Pr(E | K) = Pr(E & H | K) + Pr(E & ~H | K) [total probability] and that Pr(E & H | K) = Pr(E | H & K) * Pr(H | K); [conditional prob.] Pr(E & ~H | K) = Pr(E | ~H & K) * Pr(~H | K). We already proved that if H is more probable after knowing E that Pr( E | H & K ) / Pr( E | K ) > 1. 
Thus substituting for Pr(E | K) and multiplying both sides by this substitution, we have Pr( E | H & K ) > Pr(E | H & K) * Pr(H | K) + Pr(E | ~H & K) * Pr(~H | K). From which Pr( E | H & K ) – Pr(E | H & K) * Pr(H | K) > Pr(E | ~H & K) * Pr(~H | K). Gathering terms and because (1 – Pr(H | K)) = Pr(~H | K) we conclude Pr( E | H & K ) > Pr(E | ~H & K), and that this must hold when the probability of H increases when adding E. In other words, the probability that E is true given we assume H is true is larger than the probability E is true given we assume H is false. In other, other words, the probability the evidence is true is larger assuming H true than assuming H false. For our example this says it is more likely a tricky magician exists if we see a ‘head’ than if we do not see it, which I hope makes sense. There is no conclusion yet, except for these mathematical consequences (well known to readers of Jaynes, for example). We’ll have to return to these results when we look at Swinburne’s implementation of them. But not today. There is no understanding without doing, which should be the only reason for homework. Your task: propose a K, H, and one or more E such that the probability of H increases on E. Bonus points for non-quantifiable K, H, and E (let him that readeth understand). Bayesian Probability & Swinburne’s P-Inductive & C-Inductive Arguments (For God’s Existence) — 32 Comments 1. I met Prof. Swinburne once at cocktail reception whilst a student of Philosophy and Theology at Oxford. He was retired at that point and aged but ever the gentleman and (after drinks and years) sharper than me (at my youngest and most sober). We don’t produce many like him anymore. 2. Self awareness of one’s increasing ignorance comes with wisdom. I think that English is the best language. I also think that it is quite poor. 3. “Barbara Streisand votes for Romney” sounds like K. Has always, is now and will always be gushing effusives regarding Romney. Or if not K what possible knowledge could we have that would have any “!” with H. Psyto… nad. I’t might have been interesting to have another psychotic as President but I think Obama will serve a whole second term. 4. Flames that are produced while some material is burning are hot because the burning process releases lots of energy. And lots of flames you see are the result of that burning process, and that is the reason they are not. That is not induction, but prediction. It is of course quite possible that you can make something cold that looks like a flame, or even find a burning process that generates very little energy, and will therefore have quite cool Nevertheless, being burned is not a popular experience so there is no harm in being careful around flames. 5. If probability is regarded as the degree of belief or confirmation, basically, this post explains the following intuitions using probability calculus. An extra argument/evidence/information (E) will have no influence on, or strengthen, or weaken the degree of your belief in a certain argument/event (H). E is called a C-probability argument if it strengthens your belief in H, and a P-probability argument if it increases your degree of belief in H to more than 50%. Loosely speaking, in this case, we know E as positive evidence for H. In a way, E and H are positively correlated. So, given you know H is true, your degree of belief in E will increase, and vice versa. Given ~H (non H) is true, it will weaken your belief in E. 
Furthermore, your degree of belief in E given H is true is larger than the one given H is not true. My 2 cents. 6. “that is the reason they are not. ” >> that is the reason they are hot. 7. @JH In case of the magician, as soon as he throws hist first tails, it is clear he’s not a magician. Belief is then not lessened, but shattered. And the interesting bit is, that before the throw of the tail, belief was apparently increasing. Regarding Pr(E| H & K), it is possible to compute this. Nobody would believe you to be a magician, unless you have already thrown quite a lot of heads and no tails at all. Like nobody would believe you if you said a coin with two sides, one of which would be head. If the magician was to supply his own coin, and that coin would have two sides with ‘head’, then it would be very easy to prove one was a magician. So instead of having just a magician, you have a magician with a verified number of heads and no tails on record. The change of such a person being a magician is easy to calculate, (1/2) to the power of N. And it would be a good idea to check the coin after each throw, if the other side still is not ‘head’. 8. Sander van der Wal, Indeed, the negative evidence that the magician throws a tail will destroy the belief that he/she is a magician as defined in H by Briggs. In other words, the additional negative evidence has weakened the degree of belief in H (before the throw of tails) to zero. 9. Once proposition E is introduced. I am not sure it really matters if P(E) is small but it is distracting me from following the rest of the arguement. 10. Doug M, Your second line is a mistake. You can say: Pr(H|K) = Pr(H & E | K) + Pr(H & ~E | K), Pr(H|K) = Pr(H | K & E) + Pr(H | K & ~E) is false in general. 11. Correction noted. 12. And when Jesus had finished eating he turned to his disciples and said, “Brethren, I should have passed on the refried beans. Because I’ve got a fart the size of Palestine about to explode out of my ass.’–Jesus Christ, as told to Kirk Cameron. 13. All, I left L.W. Dickel’s comment intact, only to show a representative example of what happens when you marry self-esteem and ignorance. Coincidentally, this is the slogan of our modern-day educational system. 14. “When you marry self esteem and ignorance.” Isn’t that how religions are formed? Only you might add: deluded, retarded and prone to believing in superstitious bullshit. 15. Briggs, My God, how could you think that increasing ignorance isn’t a very good sign. It means at the least that you are paying attention. I have very little use for people whose ignorance is not increasing – it usually means that they simply aren’t trying. 16. define “god” 17. Not an example, more of a refutation. Instead of assuming that the guy throwing a coin is a magican, lets assume we think he is a fraud throwing a coin with heads only. Now, each throw that turns up a head increases the change he is a fraud, in exactly the same way as in the belief that the guy is a magican. But you cannot be both a magician and a fraud at the same time, these hypotheses are mutually exclusive. But from the measurements, it is impossible to say which of the two hypotheses is correct, as the same meaurement makes both hypotheses equally more likely. 18. Early on you say that because the coin has two sides and only one will appear that P(H|K)=1/2. However, this probability does not follow from the premise K. We simply don’t have enough information to give P(H|K) a value. 19. Charles, why not? 
If you have a two sided object of which only one side can show when tossed, then each side has a 1/2 chance of showing. If I add more evidence I can change my 1/2 probability to be more refined. Isn’t this the principle of indifference in action? 20. Would this work as an answer to the homework? With knowledge K “I have a shuffled deck of cards, and one card will be drawn from the top.” The conclusion H is “The card drawn will be an Ace.” I add evidence E “This is a pinochle deck.” If an Ace comes up on the first draw, my probability that I’m dealing with a pinochle deck has increased greater than if an Ace did not come up. I can’t necessarily quantify it since regular decks have 52 cards, but pinochle decks have 48 face cards & aces, and I don’t know how many cards there are in the deck. Does this make sense? 21. @Sander, I don’t think that’s what’s being demonstrated by your example. He may be a fraud, but it doesn’t matter. Logically, the probability of a magician being the one that tossed the coin given that we saw a heads is greater than the probability of a magician being the one that tossed the coin if a tails comes up. We’re not trying to find the correctness of the tosser being a magician. We simply know that if a heads comes up, we are more confident in the person being a magician than if a tails came up. 22. If you can think of only one hypothesis that explains the reason why this guy is only throwing heads, then you will become more confident. But as soon as you think of a second hypotheses, you need to add that second hypotheses to the equation. You get a combined hypothesis, the guy is either a magician, of a fraud, but noth both. Now the guy throws heads, and we become more confident that the guy is either a magician, or a fraud, but not both. And then we think of a new hypothesis, the guy is just very lucky and his first tail will come after 10*2000 throw. Which means we are now testing a new hypothesis again, magician, fraud or very luck. And after a new heads we are even more confident of the guy being a magician, a fraud or just very lucky. And a fourth hypothesis, the guy has now run out of his luck and the next throw is tails. The testable hypothesis now is, magician, fraud, lucky or out of luck. And he throws tails, so again we have become more confident in our combined hypothesis, magician, fraud, lucky, or out of luck, because the guy has thrown a tail. Clearly, exor-ing hypotheses do not work. If you have two, or more hypotheses you must test them separately, and as long as the experiments do not contradict the hypotheses your confidence in all of them will grow, and grow, and grow. But how is that possible if these hypotheses are mutually exclusive, i.e. P(H1) + P(H2) + … + P(Hn) = 1, or 0, the sum being 1 of one of these hypotheses is indeed true, or 0, if none of the proposed hypotheses is true. Clearly if you sum over all possible hypotheses, one of them is true so the sum is 1, and we do become more confident of the true hypothesis. And if we are lucky and have formulated the true hypothesis among our finite number of hypotheses we will become more confident in that one too is we run enough experiments. But if we have not formulated the true hypothesis, then our confidence will grow in all the hypotheses that have not yet being proven wrong, knowing all the time that at most one of these is true indeed. The only confidence that is growing is that one of them might be true. 
Hence the idea that you need to find an experiment that will disprove a hypothesis as quickly as possible. So you inspect the coin after each throw, to see if it is a proper one, with one head and one tails. 23. Followup to Nate: All we know is the object has two sides and one side will appear. Therefore P(H|K) is between 0 and 1 inclusive. Also P(T|K)=1-P(H|K). That’s all we know. Anything else represents investigator bias. For instance, we can apply a “minimax” idea: if we select P(T|K)=p, then we are wrong by at most the larger of p and 1-p. The minimum over p of the max(p,1-p) says choose p=0.5. In practice, this may be a perfectly reasonable way of dealing with our uncertainty, but WMB is discussing a problem of logic. IMHO, he can’t claim Bayesian probability is logical and apply arbitrary criteria for eliminating the uncertainty. E.g., if we knew the object was a fairly ordinary looking coin, then a Bayesian might reasonably assume P(H|K) = 0.5, or perhaps assume a beta distribution centered on 0.5 for P(H|K). These assumptions are reasonable because we’ve flipped many coins in the past and “know” P(H|K) is approximately 0.5 for a great many coins, so therefore it is likely to 0.5 for this coin. But we have no such additional information in this problem. 24. So basically, @Sander, you are saying that Pr( E | H & K ) > Pr( E | ~H & K ) is true, but not very useful because we only have one possible hypothesis, and only the set of evidence given? What if you can’t easily find evidence that will easily falsify something? I’m confused as to your terminology. Briggs posted that K is our knowledge (coin with heads n tails), H is our hypothesis (coin shows heads) and E is our evidence. Sure, in a thought experiment we can invent all sorts of E, but aren’t we just trying to see improve our credence of E? And when a heads is thrown, we are more confident in E, no matter what the other E could be? But don’t we all become more confident differently? If I can’t think if another reason than that he’s a magician, and can’t find more evidence, I may make a bet after he throws heads 20 times in a row that he’s a magician. But if you think that the coin is weighted and the man is a fraud, or he could be a magician, you might not bet at all, since your confidence that he’s one of the two has gone up, but you need evidence that allows you to distinguish them. I’m not sure we disagree, I’m just not quite understanding your point – that we may end up overconfident if we only allow one piece of evidence to enter our minds? 25. Charles, So you’re saying that all we know is: 0 <= Pr(H|K) <= 1 0 <= Pr(T|K) <= 1 Pr(H|K) + Pr(T|K) = 1 So I guess my question is: We don't know anything except that one of these two outcomes may occur. If I don't assign equal probabilities to Pr(H|K) and Pr(T|K), then i'm assiging non-equal proabilities to them, and I don't have any evidence to do that. I have to have a degree of belief in Pr(H|K) and Pr(T|K) – given that I know nothing except that one of two outcomes appears, shouldn't I, based on the evidence, believe in either one equally? 26. Nate, you say you should choose either one equally “based on the evidence”. There is no evidence supporting any probability choice. Any value, whether 0.00001, 0.2354, or 0.5, or any other number would represent your bias, not evidence. The point WMB is making is that all probabilities are dependent on the premises you make. 
The point I'm making is the premises have to be sufficiently strong to justify assigning numbers to those probabilities. This is a fundamental problem in arguing Bayesian probability is an extension of ordinary logic. In almost every problem one can think of, there is no amount of empirical evidence that allows one to assign probabilities uniquely (so that every knowledgeable person would assign the same probabilities). At some point, the practitioner asserts his bias, as you do arguing we should choose probabilities "equally". 27. @Nate I am saying that having a single hypothesis is a bad idea, especially when you are only looking for experimental outcomes which agree with the hypothesis. The equations do show that the chance the hypothesis is true becomes a bit bigger. But what the equation does not show is that the same is true for all other competing hypotheses. You have good reason to be a bit less unconfident about a theory that has not been disproven by more experimental data, but no reason at all to be more confident. This is a glass being almost empty. 28. Sander & Charles, thank you for your detailed answers to my questions. 29. Well, if Dickel can get a personal response from Briggs, so can I. At the very least, it increases the probability of getting a personal response from Briggs to > 0 30. Charles Boncelet, We do in fact have enough information. The probability is assigned via the "statistical syllogism". Click on my "Who" and navigate down to the list of papers on assigning probabilities. Hi there. 31. I've read your paper on assigning probabilities ("On the non-arbitrary assignment of probability"). Admittedly, I need to read it again, but … I find the statistical syllogism argument just as arbitrary as any other (indifference, ignorance, etc). All these mechanisms (you reject) assign equal probability but they are bad, while the statistical syllogism assigns the same equal probabilities but it is good. May I suggest the argument might be more compelling if you could provide examples where the statistical syllogism assigns different (and better) probabilities than the methods you reject. Viewed as a branch of mathematics, probability is agnostic to the assignment of actual values, i.e., any value of p works. The equations are self-consistent. What makes probability interesting is its connection to real-life experiments. I.e., probability is predictive. It tells us something about what might happen. We can even make strong statements in many experiments (e.g., the various laws of large numbers). I maintain that we have no basis for assigning a numerical value to P(H|K) absent some physical understanding of the object and/or absent some knowledge of past history of the experiment. Arbitrarily assigning P(H|K)=0.5 can result in terrible performance in, say, a data compression system or a drug testing regimen. Just because we want to assign a value to P(H|K) doesn't mean we should. (BTW, I get far less worked up on the "arbitrariness" or "subjectivity" of Bayesian priors than others do. As long as the prior is "reasonable" the system should quickly adapt to data and the resulting posterior probabilities should depend weakly on the prior.) 32. — Swinburne (modified) says a P-probability argument makes its conclusion more probable than not. That means, Pr( H | E & K ) > 1/2 and the inequality is strict. This implies that a "good" P-probability argument starts with Pr( H | K ) < 1/2. Pr( E | H & K ) > Pr(E | ~H & K).
This says that the chance of some guy being a magician is bigger if he throws heads than if he throws tails, using a proper coin. Pr(E | ~H & K) = Pr(E & ~H & K) / Pr(~H & K). So the chance that you are a magician *if* you have thrown tails using a proper coin is equal to the chance you are a magician *after* you have thrown tails using a proper coin, divided by the chance that a proper coin comes up with tails. By definition you are not a magician if you throw tails, so Pr(E & ~H & K) == zero. Pr(~H & K ) is a proper number, 1/2, and zero divided by 1/2 is zero. The statement therefore says that the chance you are a magician *if* you throw tails using a proper coin is zero too. Which is again what we would expect. Let's throw again. We have more knowledge: K1 = H & K. We have the same hypothesis E, and a new throw H1. Pr( H1 | E & K1 ) > 1/2, i.e. Pr( H1 | E & H & K ) > 1/2. And indeed, the chance that a magician will throw two heads in a row must be bigger than 1/2. Pr( E | H1 & K1 ) > Pr(E | ~H1 & K1), i.e. Pr(E | H1 & H & K) > Pr(E | ~H1 & H & K). Which becomes Pr(E | ~H1 & H & K) = Pr(E & ~H1 & H & K) / Pr(~H1 & H & K). So the chance that you are a magician *if* you throw heads first, then tails, using a proper coin, is the chance that you are a magician *after* you have thrown heads, then tails, using a proper coin. And that chance is zero too. Adding more tests doesn't matter. You only increase the knowledge K, and every new H is as big a hurdle as the first. Ok, so what about adding the outcome to E: E1 = E & H: Pr( H1 | E1 & K ) > 1/2, i.e. Pr(H1 | E & H & K) > 1/2, which can be written as Pr(H1 | E & K1) > 1/2, and we have already shown what happens then.
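To make the homework at the end of the post and the magician example from the comments concrete, here is a small numerical check. It is only an illustration: the hypothesis H is taken as "the tosser is a trick magician who always throws heads", E is the sequence of observed tosses, K is the ordinary two-sided-coin background, and the prior Pr(H | K) = 0.01 is an arbitrary assumption, not anything from the post.

    # Bayesian updating for the magician example (illustrative assumptions only)
    def update(prior, toss):
        # Pr(toss | H & K) = 1 for heads (a magician always shows heads), 0 for tails;
        # Pr(toss | ~H & K) = 1/2 either way for a proper coin.
        like_h = 1.0 if toss == "H" else 0.0
        like_not_h = 0.5
        evidence = like_h * prior + like_not_h * (1.0 - prior)
        return like_h * prior / evidence

    pr_h = 0.01  # assumed prior Pr(H | K)
    for i, toss in enumerate("HHHHHHHHHT", start=1):
        pr_h = update(pr_h, toss)
        print(f"after toss {i} ({toss}): Pr(H | E & K) = {pr_h:.4f}")

Each head raises Pr(H | E & K) a little, and the single tail at the end drops it to zero, which is the point made in the comments about the first tail shattering the belief.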
{"url":"http://wmbriggs.com/blog/?p=6500","timestamp":"2014-04-18T13:09:30Z","content_type":null,"content_length":"107109","record_id":"<urn:uuid:bd2902eb-e953-464c-bb4b-8acc48ed6038>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Maimi, OK Trigonometry Tutor Find a Maimi, OK Trigonometry Tutor ...I have taught ninth grade and tenth grade math. I am familiar with the curriculum of Geometry and Algebra. I have previously helped and tutored students to improve their FCAT scores with great 18 Subjects: including trigonometry, chemistry, calculus, geometry ...I am experienced in preparing and editing APA style papers on any subject and of any length.My geometry lessons include formulas for lengths, areas and volumes. The Pythagorean theorem will be explained and applied. We will learn terms like circumference and area of a circle; also, area of a triangle, volume of a cylinder, sphere, and a pyramid. 46 Subjects: including trigonometry, Spanish, reading, chemistry ...Thank you for your understanding and I hope to assist you in the best possible way! Sincerely, RocioI would like to be certified in ESL/ESLO because I have been teaching ESL for adults for two years at Stony Point High School. I have also taken the training with Literacy of Austin group. 16 Subjects: including trigonometry, Spanish, chemistry, biology ...I don't think I needed to take this class, but I believe it is a very good start into Algebra. I can tutor this subject without a problem. Math is my strong subject. 11 Subjects: including trigonometry, chemistry, physics, calculus ...Also I am certified by the State of Florida as teacher. This certification, which may be checked, covers from K6 up to K12 and Community Colleges as well. Fully bilingual Spanish English, I have more than 40 years of experience in the Engineering construction and design fields and also 5 years as teaching assistant, instructor or Tutor at the University level. 12 Subjects: including trigonometry, Spanish, calculus, physics Related Maimi, OK Tutors Maimi, OK Accounting Tutors Maimi, OK ACT Tutors Maimi, OK Algebra Tutors Maimi, OK Algebra 2 Tutors Maimi, OK Calculus Tutors Maimi, OK Geometry Tutors Maimi, OK Math Tutors Maimi, OK Prealgebra Tutors Maimi, OK Precalculus Tutors Maimi, OK SAT Tutors Maimi, OK SAT Math Tutors Maimi, OK Science Tutors Maimi, OK Statistics Tutors Maimi, OK Trigonometry Tutors Nearby Cities With trigonometry Tutor Coconut Grove, FL trigonometry Tutors Coral Gables, FL trigonometry Tutors Doral, FL trigonometry Tutors El Portal, FL trigonometry Tutors Hialeah Lakes, FL trigonometry Tutors Key Biscayne trigonometry Tutors Mia Shores, FL trigonometry Tutors Miami trigonometry Tutors Miami Beach trigonometry Tutors Palmetto Bay, FL trigonometry Tutors Pinecrest, FL trigonometry Tutors South Miami, FL trigonometry Tutors Sweetwater, FL trigonometry Tutors West Miami, FL trigonometry Tutors West Park, FL trigonometry Tutors
{"url":"http://www.purplemath.com/Maimi_OK_Trigonometry_tutors.php","timestamp":"2014-04-17T01:35:19Z","content_type":null,"content_length":"24083","record_id":"<urn:uuid:a31b5098-3683-413b-be97-3aff9e15e3e0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Curriculum Vitae Wilfrid Augustine Hodges born 27 May 1941 M.A. D.Phil. FBA. Website http://wilfridhodges.co.uk E-mail my first and last names with a dot between them, at btinternet.com Current appointments etc. • 1992- Scientific Council, European Association for Computer Science Logic • Scientific Advisory Committee, ILLC, University of Amsterdam. • 2007- Consultant on logic, Oxford English Dictionary. • 2008- WoLLIC Steering Committee. • 2009- Fellow of the British Academy. • 2009- Emeritus Professor, Queen Mary, University of London. • 2010- Board, IfCoLog. • 2010- Associate Member, Iranian Institute of Philosophy, Tehran. • 2011- Chair, British Academy Section H12 Philosophy. See also below under Editorial. • 1959-65 full-time student, New College Oxford • 1961 Classical Moderations (first class honours) • 1963 Literae Humaniores (first class honours) • 1965 Theology B.A. (first class honours) • 1970 D.Phil. in Faculty of Lit. Hum., Oxford, "Some questions on the structure of models" (under supervision of J. N. Crossley). • 1967-8 Acting Assistant Professor in Department of Philosophy, University of California at Los Angeles. • 1968-74 Lecturer in Departments of Philosophy and Mathematics, Bedford College, University of London. • 1974-81 Lecturer in Department of Mathematics, Bedford College. • 1979-80 Visiting Associate Professor, Department of Mathematics, University of Colorado at Boulder. • 1981-4 Reader in Mathematical Logic, University of London, at Bedford College. • 1984-7 Reader in Mathematical Logic, University of London, at Queen Mary College. • 1987-2006 Professor of Mathematics, University of London, at Queen Mary College. • 1987-9 Deputy Head of Department, School of Mathematical Sciences, Queen Mary College. • 1990-3 Dean of Informatics and Mathematical Sciences, Queen Mary and Westfield College. • 1999-2003 Deputy Head of Department, School of Mathematical Sciences, Queen Mary, University of London. • 2006-8 Professorial Fellow, School of Mathematical Sciences, Queen Mary, University of London. Teaching and supervision Courses taught include: Analysis and calculus; College algebra; Complexity and optimisation in operational research; Discrete techniques for computing; Differential equations; Ethics; Galois theory; Geometry; Graph theory and applications; Logic; Logic for mathematical writing; Module theory; Moral and political philosophy; Philosophy of religion. Courses taught include: Sets, logic and categories; Prolog; Rings and modules; Universal algebra. Number of Ph.D. theses completed under my supervision: 13, viz. 1. 1972 A. J. Wilkie, Models of number theory. 2. 1973 D. A. Bryars, On the syntactic characterisation of some model theoretic relations. 3. 1973 S. C. Jackson, The model theory of abelian groups. 4. 1975 M. Mortimer, Some topics in model theory. 5. 1977 P. K. Rogers, The model theory of abelian and nilpotent groups. 6. 1977 A. Pillay, The number of countable models of a first-order theory. 7. 1980 E. Nemesszeghy, On the notion of negation in certain non-classical logics. (In Philosophy department) 8. 1980 C. Kalfa, Decision problems concerning sets of equations. 9. 1982 A. L. Pope, Some applications of set theory to algebra. 10. 1983 S. R. Thomas, Classification theory of simple locally finite groups. 11. 1985 I. M. Hodkinson, Building many uncountable rings by constructing many different Aronszajn trees. 12. 1992 G. M. Weetman, Groups acting flag-transitively on simplicial complexes. 13. 1995 J. 
Clark, Model-theoretic classification of topological structures. M.Phil. theses completed under my supervision 1. 1980 S. W. Salem, Sheaves in model theory. 2. 1988 Pan Geng, Interpreting groups in linear orderings. Teaching prize In 1999 I was one of two recipients of the first Drapers' Prize for Excellence in Teaching, Queen Mary, University of London. College and departmental committees and responsibilities • Secretary, Philosophy Departmental Board 1971-3. • Secretary, Mathematics Departmental Board 1973-6. • Space Committee 1973-4. • Academic Board (Faculty of Science representative) 1977-9. • Organising Committee, Mathematics Sixth-form Conference 1973-83. Queen Mary • Admission of MSc. students for pure mathematics 1984-90. Admission of Ph.D. students for pure mathematics 1987-90. • Various committees as dean 1990-3. • Undergraduate admissions to School of Math. Sciences 1993-9. • Clerical Review panel 1993-6. • Working party on criteria for promotion to senior lecturer 1993-4. • Chair, Examination Offences Panel 1994-2000. • Chair, Clerical Review Committee 1994-5. • Working party on future of the Registry 1997-8. • Director of Undergraduate Studies, School of Mathematical Sciences 1999-2003. • Quality Enhancement Committee 1999-2003. • Teaching and Learning Quality Enhancement Group 1999-2003. • Continuation Audit Group 1999-2000. Grants received • 1978 Science Research Council Grant GR/A/52997: Senior Visiting Fellowship on behalf of Professor Saharon Shelah. • 1981 Royal Society grant (Israel Academy Programme) for visit to Model Theory Year, Institute for Advanced Studies, Jerusalem, 25 March - 15 April 1981. • 1983 Science and Engineering Research Council Grant GR/C/30672: Visiting Fellowship on behalf of Dr Roman Kossak. • 1984 British Council grant for two weeks' visit to Department of Mathematics, University of Paris VII, in Spring 1984. • 1985 British Council grant for travel to Department of Mathematics, University of Paris VI, in Spring 1985. • 1985 Science and Engineering Research Council Grant GR/D/33298: £23,870 for research assistant for two years, to work in "Permutation groups applied to model theory". • 1987 Science and Engineering Research Council Grant GR/E/06732: Visiting Fellowship of £1901 on behalf of Professor Bruno Poizat for three months' visit to Queen Mary College. • 1987 Science and Engineering Research Council Grant GR/E/06466: Visiting Fellowship of £1110 on behalf of Dr Ehud Hrushovski for one month's visit to Queen Mary College. • 1987 with P. J. Higgins and P. M. Neumann: Science and Engineering Research Council Grant GR/E/36982: £21,715 for Durham Symposium, July 1988, on Model Theory and Groups, organised with Neumann and Otto Kegel. • 1988 Science and Engineering Research Council Grant GR/E/67412: Visiting Fellowship of £1385 on behalf of Dr Simon R. Thomas for two months' visit to Queen Mary College. • 1988 Science and Engineering Research Council Grant GR/E91639: Visiting Fellowship of £1011 on behalf of Professor Saharon Shelah for one month's visit to Queen Mary College. • 1989 Science and Engineering Research Council Grant GR/F22234: Visiting Fellowship of £1667 on behalf of Professor M. Rubin for one month's visit to QMW. • 1989 Science and Engineering Research Council Grant GR/F22258: Visiting Fellowship of £2588 on behalf of Professor Anand Pillay for two months' visit to QMW. • 1989 (with Dugald Macpherson) Science and Engineering Research Council Grant GR/F22241 for a research assistant (Deirdre Haskell) for two years. 
• 1990 Royal Society grant for visit to Karl-Weierstrass-Institut, Akademie der Wissenschaften der DDR, 2-26 September 1990. • 1992 (with Dugald Macpherson) Science and Engineering Research Council Grant GR/H25324: Visiting Fellowship of £2770 on behalf of Dr Ludomir Newelski for a month's visit to QMW. • 1994 INTAS grant of up to 20,000 ecus for network of research groups in Kazakhstan and western Europe, on 'Combinatorial questions in model theory'. • 1994 Joint with John Bell, EPSRC grant for research assistant for 3 years on 'Implementing nonmonotonic logics'. • 1999 London Mathematical Society grant of £500 for visiting lecturer Yi Zhang. • 2000 Royal Society grant of £2620 for visitor Bektur Baizhanov (Almaty). • 2000 Royal Society grant of £1970 for visitor Serban Basarab (Bucharest). Invited talks Conference addresses • 1975 Conference on Model Theory, Université Catholique de Louvain. • 1977 Logic Workshop, Free University, Berlin. • 1978 Conference 'Modelltheorie der Gruppen', Oberwolfach, W. Germany. Mathematical Logic Seminar, University of Oslo. • 1980 Association for Symbolic Logic joint meeting with American Mathematical Society, Boulder, Colorado, USA. Midwest Model Theory Seminar, Bowling Green, Ohio, USA. • 1982 Meeting of Deutsche Ver. Math. Logik Grundl., Oberwolfach. Meeting on Universal Algebra, Oberwolfach. Conference sur les ensembles ordonnées, Lyon. • 1983 25th Arbeitstagung in Universal Algebra, Darmstadt. British Mathematical Colloquium, Aberdeen. European meeting of Association for Symbolic Logic, Aachen. Conference on model theory and complexity, Institute Poincaré, Paris. • 1984 Wintersymposium, Nederlandse Vereniging voor Logica en Wijsbegeerte der Exacte Natuurwetenschappen, Utrecht. 2nd East German Easter Conference in Model Theory, Lutherstadt-Wittenberg. • 1985 3rd East German Easter Conference in Model Theory, Gross Köris. Conference on PROLOG and its relevance to philosophy (Nuffield Foundation), King's College London. • 1986 Conference on "Model theory, model-theoretic algebra and models of arithmetic", Notre Dame, Indiana. • 1987 European meeting of Association for Symbolic Logic, Granada. Mid-Atlantic Mathematical Logic Seminar, Rutgers University. • 1988 6th East German Easter Conference in Model Theory, Wendisch-Rietz. VII Congrés Català de Lógica, Barcelona. • 1989 Annual Conference, Belgian Mathematical Society, Brussels. Conference on "Abelsche Gruppen", Oberwolfach. Conference on Algebra and Logic, in memory of A. I. Mal'tsev, Novosibirsk. • 1991 9th Easter Conference in Model Theory, Berlin. Semantik von Programmiersprachen und Modelltheorie, Dagstuhl Castle, Saarland. Mid-Atlantic Mathematical Logic Seminar, Rutgers University. Encuentro de Logica y Filosofia de la Ciencia, Madrid. • 1992 British Colloquium for Theoretical Computer Science, Newcastle University. Association for Logic Programming UK, City University, London. Mathematics and Music, A Weekend Course, Department of Continuing Education, Oxford. • 1993 Théorie des modèles, Association Henri Poincaré, Séminaire d'histoire des mathématiques, Université Paris 6. LOGFIT Final Workshop, Leeds. Dagstuhl Conference on Semantics of Programming Languages. Oberwolfach meeting on Abelsche Gruppen. • 1994 Oberwolfach meeting on Model Theory. British Mathematical Colloquium, Cardiff. Conference 'Logic and its applications', University of Amsterdam. Colloquium Logicum, Deutsche Logikverein, Neuseddinland, Berlin. Théorie des Modèles, Université d'Angers. 
• 1995 Conference in Model Theory, University of Seville. Workshop on Games, Processes and Logic, Isaac Newton Institute, Cambridge. • 1996 Workshop on Logic, Language, Information and Computation, Salvador, Brazil. Mathematics Summer School on Neural Networks, Kings College London. Universal Algebra, Szeged. FoLLI Workshop on Logics for Linguistics, Philosophy, and Cognitive Science, Chiba University, Japan. • 1997 Conference on A level Mathematics, King's College London. Conference in Model Theory, Luminy. Logic Colloquium '97, Leeds. Amsterdam Logic Conference, Workshop on Logical Games. • 1998 German-Polish Workshop on Logic and Logical Philosophy, Zagan (Poland). Plenary lecture, 10th European Summer School in Language, Logic and Information, Saarbruecken. • 1999 Meeting on Model Theory and Combinatorics, Hattingen. Frontiers of Logic, King's College London. Meeting in memory of Mostowski and Rasiowa, Warsaw. CEIC section, Future of Electronic Communication, Berkeley. • 2000 Logic Colloquium 2000, Paris. Meeting on Intensionality, Munich. Second Augustus De Morgan meeting on History of Logic, Kings College London. Meeting in honour of Jaakko Hintikka, Helsinki. • 2001 Model Theory conference, Istanbul. Centenary Conference on Alfred Tarski, Warsaw. • 2002 Meeting in honour of Melvin Fitting, City University of New York Sémantique et Epistémologie, Casablanca. Spinoza Lecture, ESSLLI, Trento. ICM 2002 Satellite Conference on Electronic Information and Communication in Mathematics, Beijing. • 2003 Retirement conference for Daniel Lascar, Paris. Logica, Kravsko Castle, Moravia. International Congress in Logic, Methodology and Philosophy of Science, Oviedo. First Order Logic (75th year), Humboldt University, Berlin. • 2004 Models in Science and Technology, Dutch-Flemish Network for Philosophy of Science and Technology, Ravenstein. New Aspects of Compositionality, Paris. • 2005 Logic 2005, Indian Institute of Technology, Bombay. Peri Hermeneias Symposium, Cambridge. Methods of Logic in Mathematics, Euler International Institute, St Petersburg. Annual Meeting, Logic Association of Kolkata. Interactive Logic: Games and Social Software, Kings College London. Workshop on Semantic Processing, Logic and Cognition, Tuebingen. • 2006 The Topics in the Arabic and Latin Traditions, CRASSH, Cambridge. Logic, Models and Computer Science, in memory of Sauro Tulipani, Camerino. Tools for Teaching Logic, Salamanca. • 2007 International Conference on Logic, Navya-Nyaya and Applications, Homage to Matilal, Kolkata. International Seminar on Necessity and Contingency, Dept of Philosophy, University of Calcutta. 2nd World Congress on Universal Logic, Xi'an. A Day of Mathematical Logic, Amsterdam. Aesthetics and Mathematics, Utrecht. • 2008 Retirement conference for Gabriel Sabbagh, Paris VII. • 2009 Workshop on use of ideal and imaginary elements and methods in mathematics, Pont-à-Mousson. Workshop on practice-based philosophy of logic and mathematics, Amsterdam. Memorial conference for Maria Panteki, Thessaloniki. Sixtieth birthday conference for Oleg Belegradek, Istanbul. • 2010 MiDiSoVa, Amsterdam. Fundamental Structures of Algebra, for Şerban Basarab 70th birthday, Constanta, Romania. History of Science in Practice, Athens. Antalya Algebra Days, Antalya, Turkey. Workshop "Modern Formalisms for Pre-Modern Indian Logic and Epistemology", Hamburg. A snapshot of logic: some recent results in mathematical logic, University of East Anglia (London campus). YuriFest, Brno. 
Meeting in Honor of Jouko Väänänen's 60th Birthday, Helsinki. World Philosophy Day, Tehran. 'Logic and philosophy of logic, traditional and modern', Iranian Institute of Philosophy, Tehran. SIHSPAI International Colloquium on 'Philosophy and science in classical islamic civilisation', London. • 2011 British Post Graduate Model Theory Conference, Leeds. Conversazione in Philosophy of Mathematics, Cambridge. Presidential Address, 14th International Congress of Logic, Methodology and Philosophy of Science, Nancy, France. Workshop on History of Logic, Brussels. Workshop on later Arabic logic and philosophy of language, Cambridge. Workshop on the Roots of Deduction, Groningen. • 2012 Ancient and Arabic Logic, CNRS, Paris. Tarski Workshop, Amsterdam. Medieval modal logic, St Andrews. • 2013 Dependence Logic, Dagstuhl. Proof-Theoretic Semantics, Tübingen. Society meetings • 1983 London Mathematical Society. • 1986 Aristotelian Society. • 1987 London Mathematical Society Popular Lecture, in Leeds and London, and on videotape. British Association for the Advancement of Science, Belfast. • 1988 Mathematical Association, Leicestershire branch. • 1991 British Society for the Philosophy of Science. • 1994 London Mathematical Society lecture for Week of Science and Technology, Norwich. • 1997 Welsh Pure Mathematics Society, Gregynog. • 1998 British Association for the Advancement of Science, Cardiff. Institute of Physics. • 2001 Cognitive Science Society, 23rd Annual Meeting, Edinburgh. Joint Session of Aristotelian Society and Mind Association, York. Annual meeting, British Logic Colloquium, Manchester. • 2003 Annual meeting of HoDoMS, Greenwich. • 2007 Annual meeting, British Logic Colloquium, London. • 2009 Cameleon, Cambridge. University colloquia Talks given at Universities of: Birmingham, Bristol, Cambridge, Canterbury, Edinburgh, Essex, Exeter, Hull, Kent, Leeds, Leicester, Liverpool, London (including Computer Science LiveNet seminar), Manchester, Newcastle, Nottingham, Oxford, St Andrews, Sheffield, Surrey, Sussex, Swansea, UMIST, Warwick, York, the Open University; Amsterdam; Barcelona; Calgary, Alberta; Delft University of Technology; Humboldt-University, Berlin; University of Bonn; University of California at Irvine; University of California at Los Angeles; University of Colorado at Boulder; University of Illinois at Chicago Circle; University of Freiburg; Hebrew University, Jerusalem; University of Heidelberg; University of Helsinki; Wesleyan University, Middletown, Connecticut; International University, Moscow; University of Munich; University of New Mexico at Albuquerque; University of New Mexico at Las Cruces; Notre Dame University, Indiana; Mathematical Institute of Russian Academy of Sciences, Novosibirsk, Siberia; Stanford University; Steklov Institute, St Petersburg; University of Ohio at Columbus; State University of Omsk, Siberia; University of Paris VI; University of Paris VII; University of Passau, Germany; Bar Ilan University, Tel Aviv; University of Tuebingen; University of Tver', Russia; Simon Fraser University, Vancouver BC, Canada. • 1988 10-hour course on Stability Theory, given at the University of Utrecht, for Dutch Mathematical Society. • 1990 12-hour course on "Structures oméga-catégoriques", given at the University of Paris 7. • 1991 10-hour course on "Definitions", 3rd European Summer School in Language, Logic and Information, Saarbruecken. • 1998 10-hour course on "Model theory", 10th European Summer School in Language, Logic and Information, Saarbruecken. 
• 1999 Short course on "Advanced logic", Summer School in Logic, Universal Algebra and Theoretical Computer Science, Rand Afrikaans University, Johannesburg. • 2003 Short course on "Composition of meanings", University of Duesseldorf. • 2003 Short course on "Logic", Conference on Logic and Linguistics, Tbilisi. Schools and student societies • 1985 Sixth-form mathematics conference, Royal Holloway and Bedford New College. • 1986 Eton College. • 1988 Student Mathematical Society, University of Surrey. • 1989 Weekend conference for students from King's College, London. • 1989 Student Mathematical Society, King's College London. • 1991 Skegness Grammar School. • 1992 Omsk State University, Siberia (on British universities). • 1992 International University, Moscow (on British universities). • 1993 North London Collegiate School. • 1994 Conference for women prospective students, University College, London. • 1995 Cerberus Club, Balliol College, Oxford. • 1996 Royal Grammar School, High Wycombe. • 1996 Popular Schools Lecture, Liverpool Mathematics Association. • 1996 The PPE Society, Oxford. • 1997 The PPE Society, Oxford. • 1997 Invariants, Oxford. • 1998 Borehamwood Gifted Children's Association. • 2001 Kings College Mathematics Society, Cumberland Lodge, Windsor. • 2001 De Morgan Society, University College London. • 2002 Cardinal Vaughan Memorial School, Notting Hill. • 2007 Kings College Mathematics Society, Cumberland Lodge, Windsor. • 2010 Edinburgh University Philosophy Society. Invited institute visits • 2001 Mittag-Leffler Institute, Stockholm, 'Mathematical Logic', Dag Norman et al. (one month). • 2006 Isaac Newton Institute, Cambridge, 'Logic and Algorithms', Anuj Dawar and Moshe Vardi (one month). Other lectures • 2001 Coulter McDowell Annual Lecture, Royal Holloway (on Pythagoras and music). • 2001 Christmas Lecture, Culham Science Centre (on geometry of music). • 2003 Third Annual Venn Lecture, Hull (on geometry of music). • 2008 Big Ideas Group, The Wheatsheaf, Goodge Street London (on logic versus rationality). • 2009 Greenwich Time Symposium, National Maritime Museum, Greenwich (on music and time). • 2009 Gresham College (on geometry of music). • 2013 Lindström Lectures (on Ibn Sina's logic), Göteborg. Examining, assessment and degree administration • Ph.D. examining and assessment for the universities of Birmingham (2000), Bristol (1971), Helsinki (1991), IC London (Computing) (1986, 1988, 1991, 1993), Leeds (1994, 2002), Manchester (1977), Middlesex (1999), Oxford (1979, 1983, 1985, 1990, 1991, 1993), Paris (1991, 1994, 1998), Sussex (2001), Tübingen (1988), Uppsala (2000). • M.Phil. examining for London School of Economics, London University (1983), Kings College London (2000). • External examiner in Logic for Mathematics M.Sc., University of Leeds 1976-9. • External examiner in Logic for Mathematics M.Sc., University of Nottingham 1979-1982. • London University panel of examiners in Mathematics for course-unit courses in colleges of education 1978-9, 1980-6. • Subject coordinator in Logic and Foundations for London University M.Sc. in Pure and Applied Mathematics 1977-9, 1980-6. • Block coordinator for Block III (Pure: Algebra) for the London University M.Sc. in Mathematics 1986-90. • Coordinator of M.Sc. in Mathematics for SERC-funded students at London University 1982-90. • SERC interviewing panel for Advanced Fellowships in Mathematics 1986. • London University Scholarships Committee 1989-93. • External examiner for BSc Mathematics, University of Leeds 1992-4. 
• Visiting examiner in Pure Mathematics, Imperial College London 1994-7. • EPSRC Panel to Award Earmarked and Case Studentships, February 1995. • EPSRC College in Mathematics, 1995-2005. • External expert, appointment committee for Developmental Chair in Logic, University of Leeds, 1995. • Chair, Panel of judges for the IGPL/FoLLI Prize for the best idea in pure or applied logic in the year 1995, 1996. • EPSRC Pure Mathematics Research Grants panel, July 1996. • Chair of External Assessors, Institute of Logic, Language and Computation, University of Amsterdam, September 1996. • Chair, EPSRC Pure Mathematics Research Grants panel, November 1996. • Invited member of Scientific Committee of the Equipe de Logique Mathématique (UPRESA 7056) of the CNRS for review, 10 March 1998. • External examiner in Pure Mathematics, University of Birmingham 1997-2000. • Chair of Subject Panel in Mathematics and Convener of Specialist Group in Mathematics, University of London 1998-2002. • Elector for Chair in Mathematical Logic, Oxford University 1998-9. • External examiner for MSc in Computational Linguistics, Kings College, London 2000-3. • External examiner in pure mathematics for BSc, University of Hull, 2000-4. • External advisor in Mathematics and Computation, Open University, 2001-3. • Chair of Subject Area Board B, University of London, 2001-3 (Deputy Chair till 2005). • EPSRC MathFIT panel 2003. • Advising Committee for a personal chair at ILLC, Amsterdam, 2003. • Kurt Goedel Fellowship Committee 2007-8. • AERES Evaluation Committee for the Institute of Mathematics at the University of Lyon, 2010. • AERES Evaluation Committee for the Institut de Mathématiques de Jussieu, Paris 7, 2013. Organising and committee work, mostly in the field of logic • 1974- Editorial Board of Journal of Philosophical Logic. • 1979-87 Editor, Journal of Symbolic Logic. • 1981-2 Elected coordinator of editorial board, Journal of Symbolic Logic. • 1983-7 Editor for survey/expository papers, Journal of Symbolic Logic (with brief to establish new section of journal). • 1988-96 Advisory editor in Logic and Set Theory, London Mathematical Society journals. • 1989- Editorial Board of journal Logic and Computation. • 1991-2005 Editorial Board, Perspectives in Logic (book series of Association for Symbolic Logic; formerly the Springer series Perspectives in Mathematical Logic); Managing Editor from 1999. • 1992-2002 Editor, Mathematical Logic Quarterly. • 1994- Editorial Board, Bulletin of the Interest Group in Pure and Applied Logics (now the Logic Journal of the IGPL). • 1996-9 Project Manager, LMS Journal of Computation and Mathematics. • 1998-2010 Editorial Board, de Gruyter book series Logic and its Applications. • 2003- Editorial Board, Journal of Applied Logic. • 2005- Editorial Board, FoLLI Publications on Logic, Language and Information, Springer-Verlag. • 2007 with Johan van Benthem and Helen Hodges: Guest Editor, volume on Logic and Cognition, Topoi 26 (2007) pp. 1-165. • 2007 Advisory Board, Texts in Logic and Games, Amsterdam University Press. • 2009 with Ruy de Queiroz: Guest Editor, WoLLIC 08 special issue of Journal of Computer and System Sciences. • 2009- Advisory Board, Journal of Computer and System Sciences. Meeting organisation and Programme Committees • 1970 Conference in Mathematical Logic - London '70, Bedford College, London (organising secretary). • 1977, 1978, 1981 Meeting of Model Theorists, Bedford College, London (organiser). • 1984 Conference in Mathematical Logic, Manchester (programme committee). 
• 1985 Conference in Mathematical Logic, Paris (programme committee). • 1986 Conference in Mathematical Logic, Hull (programme committee). • 1988 Oberwolfach meeting on Model Theory of Modules (organising committee); Durham Symposium on Model Theory and Groups (organising committee). • 1990 Conference in Mathematical Logic, Helsinki 1990 (programme committee). • 1992 4th European Summer School in Language, Logic and Information, Colchester (programme committee). • 1993 European meeting of Association for Symbolic Logic, Keele (programme committee chair); LMS day conference 'Benefits of a unified computing and mathematics undergraduate course', London • 1994 Conference in Mathematical Logic, Alma-Ata (programme committee). • 1995 Conference of Kurt Goedel Society and Italian Logic Association, Florence (programme committee); Human Capital and Mobility summer school on Model Theory of Groups and Automorphism Groups, Blaubeuren (programme committee chair); Section on Model Theory, Set Theory and Formal Systems, for Logic, Methodology and Philosophy of Science, Florence (section chair). • 1997 European meeting of Association for Symbolic Logic, Leeds (programme committee). • 1998 Logic and Philosophy of Logic, 20th World Congress of Philosophy, Boston (section chair). • 1999 Joint meeting of Belgian Mathematical Society and London Mathematical Society, Brussels (programme committee); International Congress in Logic, Methodology and Philosophy of Science, Kraków (section chair); European meeting of Association for Symbolic Logic, Utrecht (programme committee chair); First Southern African Summer School and Workshop on Logic, Universal Algebra and Theoretical Computer Science, Johannesburg (advisory board) • 2004 Workshop on Knowledge and Games, Liverpool (programme committee). • 2005 Logic and Language, Batumi (Georgia) (programme committee). • 2007 International Congress in Logic, Methodology and Philosophy of Science, Beijing (section programme committee); WoLLIC 2007 (programme committee). • 2008 WoLLIC 2008, Edinburgh (programme committee chair). • 2010 ESSLLI 2010, Copenhagen, workshop on Dependence and Independence in Logic (programme committee). • 2013 International Congress of History of Science and Technology, Manchester: symposium on Arabic Foundations of Science (with Ahmad Hasnaoui). • 1977-9 Executive Committee for European Affairs, Association for Symbolic Logic. • 1978-80 Council, Association for Symbolic Logic. • 1981 with A. J. Ostaszewski, set up Polish Mathematical Book Fund (supported by London Mathematical Society from 1982). • 1982 Nominating committee, Association for Symbolic Logic. • 1983-4 Ad hoc committee of Association for Symbolic Logic on the future of the Association. • 1984- Committee on professional rights of logicians, Association for Symbolic Logic. (Chair 1984-91) • 1988 Nominating committee, Association for Symbolic Logic. • 1989-2000 London Mathematical Society Computer Science Committee (Chair from 1990 to 1994). • 1989-95 Scientific Council, European Foundation for Logic, Language and Information (FoLLI). • 1989 Ad hoc committee of Association for Symbolic Logic, on revision of the Newsletter. • 1990-5 President, British Logic Colloquium. • 1990-6 Council, London Mathematical Society. • 1990-1 London Mathematical Society representative on Executive Committee of Save British Science. • 1993-5 Working Party on Former Soviet Union, LMS Council. • 1993-8 Personnel and Office Management Committee, LMS Council. 
• 1994-6 Executive Committee, Association for Symbolic Logic. • 1994-5 Nominating Committee, Association for Symbolic Logic. • 1994-5 Hon. Secretary, British National Committee for Logic, Methodology and Philosophy of Science. • 1994 EPSRC meeting to advise on earmarked areas between Mathematics and Computer Science. • 1995-6 President, European Association for Logic, Language and Information (FoLLI). • 1996-7 Vice-President, London Mathematical Society. • 1998-02 Committee on Electronic Publishing and Communication (CEIC) of International Mathematical Union. • 2009- Philosophy Section Standing Committee, British Academy. • 2011- Chair, British Academy Section H12 Philosophy. Author : Wilfrid Hodges Last updated 11 November 2013
{"url":"http://wilfridhodges.co.uk/cv.html","timestamp":"2014-04-20T13:29:46Z","content_type":null,"content_length":"33389","record_id":"<urn:uuid:683a1ecf-5e61-4702-9157-4952e68124f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Topic: Geometry - Proba - year 12 - 2 units - URGENT EXAM on Monday Replies: 0 Posted by Ced (Posts: 1, From: Sydney, Registered: 10/22/10) on Oct 22, 2010 11:52 PM. Hi, I don't know if I should post my question here. This is for my son; here it is. An enclosure is made of 3 enclosures, 1 big rectangle and 2 small ones of the same measure:

< y ><---------------- 6y ---------------->
x  PIG  |   Paddock for calves
x  PIG  |
________brick wall________________________

Farmer Brown wishes to construct 3 rectangular enclosures. The paddock for the calves is to be 6 times as long and twice as wide as a pig pen. One pig pen and the calves' paddock have an existing brick wall as a boundary fence as shown. All other fences are to be constructed from 56 metres of wire mesh. 1. Let x metres be the width of a pig pen and y metres be its length. Show that y = 7 - (3/4)x. 2. Hence show that the total area A square metres contained in the 3 enclosures is given by A = 14x(7 - (3/4)x). 3. Show that A is a maximum when half the wire fencing has been placed parallel to the brick wall.

1. A die whose faces are numbered 1, 2, 3, 4, 5, 6 is tossed twice. The sum S of the numbers which appear uppermost on the die is calculated. Find the probability that S is greater than 8. 2. It is known that a 4 appears on the die at least once in the 2 throws. Find the probability that S is greater than 8. Thank you very much
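A sketch of how the constraint in part 1 might be set up, assuming the layout implied by the diagram (two x-by-y pig pens stacked against the brick wall, with the 2x-by-6y paddock beside them); this is an illustration, not an answer from the thread. Wire is needed for the top run (y + 6y), the divider between the two pig pens (y), and three vertical runs of length 2x (the outer left edge, the divider between the pens and the paddock, and the outer right edge); the bottoms of the lower pen and of the paddock are the brick wall. So 8y + 6x = 56, giving y = 7 - (3/4)x. The total area is A = 2(xy) + (2x)(6y) = 14xy = 14x(7 - (3/4)x). Setting dA/dx = 98 - 21x = 0 gives x = 14/3 and y = 7/2, and the wire running parallel to the wall is then 8y = 28 m, half of the 56 m. For the dice question, 10 of the 36 equally likely pairs give S > 8, so P(S > 8) = 10/36 = 5/18; restricting to the 11 pairs containing at least one 4, only (4,5), (5,4), (4,6) and (6,4) give S > 8, so the conditional probability is 4/11.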
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2161057","timestamp":"2014-04-20T06:56:34Z","content_type":null,"content_length":"15075","record_id":"<urn:uuid:5a42cc3c-1e65-44f6-8ab5-88f84393e0d0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from October 8, 2008 on The Unapologetic Mathematician What is it that makes the exponential what it is? We defined it as the inverse of the logarithm, and this is defined by integrating $\frac{1}{x}$. But the important thing we immediately showed is that it satisfies the exponential property. But now we know the Taylor series of the exponential function at ${0}$: $\exp(x)=\sum\limits_{k=0}^\infty\frac{x^k}{k!}$. In fact, we can work out the series around any other point the same way. Since all the derivatives are the exponential function back again, we find $\exp(x)=\sum\limits_{k=0}^\infty\frac{\exp(a)}{k!}(x-a)^k$. Or we could also write this by expanding around $a$ and writing the relation as a series in the displacement $b=x-a$: $\exp(a+b)=\sum\limits_{k=0}^\infty\frac{\exp(a)}{k!}b^k$. Then we can expand out the $\exp(a)$ part as a series itself: $\exp(a+b)=\sum\limits_{k=0}^\infty\left(\sum\limits_{j=0}^\infty\frac{a^j}{j!}\right)\frac{b^k}{k!}$. But then (with our usual handwaving about rearranging series) we can pull out the inner series since it doesn't depend on the outer summation variable at all: $\exp(a+b)=\left(\sum\limits_{j=0}^\infty\frac{a^j}{j!}\right)\left(\sum\limits_{k=0}^\infty\frac{b^k}{k!}\right)$. And these series are just the series defining $\exp(a)$ and $\exp(b)$, respectively. That is, we have shown the exponential property $\exp(a+b)=\exp(a)\exp(b)$ directly from the series expansion. That is, whatever function the power series $\sum\limits_{k=0}^\infty\frac{x^k}{k!}$ defines, it satisfies the exponential property. In a sense, the fact that the inverse of this function turns out to be the logarithm is a big coincidence. But it's a coincidence we'll tease out tomorrow. For now I'll note that this important exponential property follows directly from the series. And we can write down the series anywhere we can add, subtract, multiply, divide (at least by integers), and talk about convergence. That is, the exponential series makes sense in any topological ring of characteristic zero. For example, we can define the exponential of complex numbers by the series $\exp(z)=\sum\limits_{k=0}^\infty\frac{z^k}{k!}$. Finally, this series will have the exponential property as above, so long as the ring is commutative (like it is for the complex numbers). In more general rings there's a generalized version of the exponential property, but I'll leave that until we eventually need to use it.
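As a quick numerical illustration (not part of the original post), the Cauchy-product argument can be checked with truncated series; the truncation order 30 and the sample values of a and b are arbitrary choices. A minimal Python sketch:

    # Compare the truncated exponential series at a+b with the product of the
    # truncated series at a and at b.
    from math import factorial

    def exp_series(x, terms=30):
        # Partial sum of x^k / k! for k = 0 .. terms-1
        return sum(x**k / factorial(k) for k in range(terms))

    a, b = 0.7, -1.3
    print(exp_series(a + b), exp_series(a) * exp_series(b))  # agree to roundoff

    # The same code runs unchanged for a complex argument, matching the remark
    # about defining the exponential of complex numbers by the series:
    print(exp_series(3.141592653589793j))  # approximately -1+0j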
{"url":"http://unapologetic.wordpress.com/2008/10/08/","timestamp":"2014-04-19T09:27:17Z","content_type":null,"content_length":"41999","record_id":"<urn:uuid:d3d98701-5771-434f-9490-d672a2ce38d9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Help: calculate elasticity tensor for hyperelastic material with plane stress configuration
Submitted by Haofei Liu on Wed, 2012-10-03 15:03. Dear all, I am confused by the calculation of the elasticity tensor of a hyperelastic material in a plane stress configuration. Your help is very much appreciated. Below is my question: The incompressible strain energy function is expressed as, say, U=***+p(J-1). Therefore, the second Piola-Kirchhoff stress is: S=***+pC^-1 (the Lagrange multiplier p is treated as an arbitrary constant here, right?). Using the plane stress condition, S33=0, so that p is updated as a function of deformation. Now, to calculate the elasticity tensor, H=2*dS/dC. Should I continue to treat p as a constant, independent of deformation, and get: (1): H=***+2*p*d(C^-1)/dC or should I treat it here as a function of deformation and get: (2): H=***+2*(C^-1*(dp/dC)+p*d(C^-1)/dC)? I am not sure which one is correct (or neither of them?). The elasticity tensor H possesses both major and minor symmetry (is this always correct?); if H takes the form (1), major symmetry holds, but if H takes the form (2), it loses its major symmetry. Any comment about this? Thank you all very much indeed! Best regards Haofei Liu Submitted by on Mon, 2012-10-08 20:42. Hi Haofei, I am new to this field. Your question is very interesting. However, I would like to write some comments about it. First, you have a full 3-d incompressible hyperelastic model with p undetermined. Then you apply a boundary condition to it and find p. The model is then reduced from 3-d to 2-d, so the 2nd order tensors reduce to 2 by 2 matrices. Then the tensor requirements may not apply to the new 2-dimensional model. It is because that 2-dimensional model is not a tensor. It is also because this 2-dimensional model is not a strict constitutive relationship of the material once a boundary condition has been applied. The boundary condition could change in other cases. About your question on major and minor symmetry, I don't see why the 2nd one doesn't satisfy major symmetry while the first one does. Can you explain a little bit? I am also not sure that major and minor symmetry is a valid criterion if stress and strain are not tensors. Lixiang Yang Hi Liu, I am learning Non-linear Solid Mechanics. Let me make an attempt to answer your questions. In the case of an incompressible material, 'p' is a Lagrange multiplier. It is a constant. When we write the strain energy density, we add the constraint J=1 as p(J-1) to the strain energy function. To compute the stress (say 2nd PK = S), we take the derivative of the strain energy density wrt 'C' (right Cauchy-Green tensor). When the derivative of p(J-1) is taken wrt 'C', it becomes p dJ/dC, where 'p' is a constant. Here, 'p' is estimated from boundary conditions. As you rightly said, the plane stress condition => S33 = 0. This gives the value of the pressure. When the elasticity tensor, H, is computed, we take the derivative of 'S' wrt 'C'. 'S' has two parts - isochoric (Siso) and volumetric (Svol). Svol = pC^-1. When Svol is differentiated wrt 'C' to get Hvol, 'p' again remains constant, since 'p' (constant) is a Lagrange multiplier. Hence your first equation is correct and 'H' is symmetric. If the material is compressible, then the strain energy has two parts = isochoric (Uiso) + volumetric (Uvol). Note that Uvol = U(J). In this case, 'p' is obtained by taking the derivative of U(J) wrt 'J'. Here, 'p' is a variable. Hence the second equation is correct for compressible material. In the second equation, 'J' also will appear along with 'p'. 
Then 'H' becomes symmetric. When there is no body couple and rotary inertia is neglected, the shear stresses are complementary. This gives the minor symmetry. This holds for all materials. Major symmetry is material specific and exists for hyperelastic materials. In hyperelastic materials, a potential exists. Stresses can be computed by taking the derivative of the potential (strain energy density) wrt strains. With best regards, - Ramadas
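For reference, two standard ingredients that come up here, written in the same notation as the thread (general formulas, not from the thread and not specific to the poster's energy function):

    d(C^-1)_IJ / dC_KL = -(1/2) * (C^-1_IK * C^-1_JL + C^-1_IL * C^-1_JK)

    H_bar_abcd = H_abcd - H_ab33 * H_33cd / H_3333   (a, b, c, d in {1, 2})

The first is the derivative of the inverse right Cauchy-Green tensor. The second is the plane-stress static condensation of a full 3-D tangent H built before enforcing S33 = 0 (ignoring any 13/23 coupling for simplicity); it inherits major symmetry from the 3-D tangent because H_ab33 = H_33ab for a hyperelastic material. So deformation-dependent pressure terms do enter a consistent linearization, but a consistently condensed 2-D tangent need not lose its symmetry.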
{"url":"http://imechanica.org/node/13360","timestamp":"2014-04-18T23:32:15Z","content_type":null,"content_length":"26272","record_id":"<urn:uuid:04898175-ac55-47a2-8e93-bf1561eb5166>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
help with level curve October 8th 2011, 11:44 AM #1 (Joined: Nov 2008) help with level curve I have x^2 + xy. I want to sketch level curves for it but I'm stuck at simplifying it. I can factor out the x or complete the square, but how do I find out what the curve is? So another form is (x+½y)^2 - 0.25y^2, so if I let c = 2 for example I'll get (x+½y)^2 - 0.25y^2 = 2. What do I do now? October 8th 2011, 12:35 PM #2 Re: help with level curve If $f(x,y)=x^2+xy$ then to plot level curves we choose values of a constant $c$ such that $f(x,y)=c$. If $f=c$, then $c=x^2+xy\Rightarrow{y}=\frac{c-x^2}{x}$. Now, graph $y$ as a function of $x$ in the plane for varying values of $c$.
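If a picture helps, the level curves can also be plotted directly without solving for y. A small Python/matplotlib sketch (the plotting ranges and the levels shown are arbitrary choices, just for illustration):

    # Contour plot of f(x, y) = x^2 + x*y for a few levels c
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-4, 4, 400)
    y = np.linspace(-8, 8, 400)
    X, Y = np.meshgrid(x, y)
    Z = X**2 + X*Y

    cs = plt.contour(X, Y, Z, levels=[-2, -1, 1, 2, 4])
    plt.clabel(cs, inline=True)
    plt.xlabel("x")
    plt.ylabel("y")
    plt.title("Level curves of x^2 + xy")
    plt.show()

The completed-square form (x + y/2)^2 - y^2/4 = c shows what the plot displays: for c = 0 the level set degenerates to the two lines x = 0 and y = -x, and for other values of c the curves are hyperbolas.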
{"url":"http://mathhelpforum.com/calculus/189846-help-level-curve.html","timestamp":"2014-04-19T02:13:08Z","content_type":null,"content_length":"33252","record_id":"<urn:uuid:62a452ec-e601-4f16-9e84-742bb5bd5bf3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Scientia Agricola Services on Demand Related links Print version ISSN 0103-9016 Sci. agric. (Piracicaba, Braz.) vol.63 no.6 Piracicaba Nov./Dec. 2006 SOILS AND PLANT NUTRITION A statistical basis for selecting parameters for the evaluation of soil penetration resistance Critérios estatísticos na seleção de parâmetros para avaliação da resistência do solo à penetração Tairone Paiva Leão^I; Álvaro Pires da Silva^II, ^* ^IUniversity of Tennessee, Department of Earth and Planetary Sciences, 306 Earth and Planetary Sciences Building, TN - 37996-1410 - USA ^IIUSP/ESALQ - Depto. de Ciência do Solo, C.P. 9 - 13418-900 - Piracicaba, SP-Brasil Measurements of soil penetration resistance (SR) have been frequently used for the evaluation of soil structural quality for plant growth. However, different data analysis approaches have been used, without a previous evaluation of their statistical quality. In this study we tested the hypothesis that the mean is the parameter with best statistical properties to evaluate alterations in soil penetration resistance in response to soil use and management, as compared to other SR statistical parameters. Undisturbed (5 × 5 cm) soil cores were collected from three sampling sites with different degrees of compaction: an undisturbed site under native scierophylous forest (NC); a site under short-duration grazing with post-grazing residue maintained at 2.0 to 2.5 Mg (Total Dry Matter) TDM ha^-1 (LR); and a site under short-duration grazing with post-grazing residue maintained at 3.0 to 3.5 Mg TDM ha^-1 (HR). The statistical quality of the parameters from undisturbed soil samples of SR profiles: mean (x), median (M), maximum (max), percentage of linear penetrability at 2 MPa (PLP[2MPa]), and the parameters from Probit analysis intercept (n) and slope (m) was evaluated using the ANOVA and LSD tests. Results from the F ratio, P > F values and LSD tests show that mean, median and maximum were the parameters with better statistical properties as criteria to evaluate alterations in soil penetration resistance in response to soil use and management as compared to other statistical SR parameters, validating the hypothesis of the research. Key words: LSD, probit analysis, means comparison test, soil physical quality assessment A quantificação da resistência do solo à penetração (RP) é freqüentemente utilizada na avaliação da qualidade estrutural do solo para o crescimento de plantas. Diferentes abordagens têm sido utilizadas na análise de dados de RP, e na maioria dos casos, sem uma avaliação prévia da qualidade estatística dos parâmetros. Neste trabalho foi testada a hipótese de que a média é o parâmetro com melhores propriedades estatísticas como critério para avaliação de alterações na resistência à penetração do solo em resposta ao uso e manejo do solo, em relação a outros parâmetros estatísticos derivados de conjuntos de dados de RP. Amostras indeformadas de solo (5 × 5 cm) foram coletadas em locais com diferentes graus de compactação: Cerrado nativo (CN); pastejo rotacionado, com nível de resíduo pós-pastejo mantido entre 2,0 e 2,5 Mg (Matéria Seca Total) MST ha^-1 (RB); e pastejo rotacionado, com nível de resíduo pós-pastejo mantido entre 3,0 e 3,5 Mg MST ha^-1 (RA). 
An ideal methodology to characterize the resistance of the soil matrix (SR) to deformation by a growing root should take into account as many of the unique features of root development as possible. These would include: root size, a lubrication system, an ability to develop both axial and radial pressures, rate of root tip advancement, and mainly the ability to seek out pores of appropriate size and resistance to deformation (Groenevelt et al., 1984; Whiteley & Dexter, 1984; Bengough & MacKenzie, 1994). Statistical parameters such as the mean (i.e. arithmetic average) or the maximum SR value of a soil layer can only give a rough idea of the actual limitation of the soil matrix to root growth. To minimize the influence of soil structural heterogeneity on the soil structural quality evaluation, Groenevelt et al. (1984) defined the Percentage Linear Penetrability (PLP) as the percentage of the linear trajectory through soil for which the actual tip resistance is lower than a set critical value. The PLP concept was further developed by incorporating linear regression analysis on Probit (Finney, 1952; Tietjen, 1986) transformed PLP data (Perfect et al., 1990). The regression procedures resulted in two empirical parameters, the slope (m) and the intercept (n) of the lines. It has been suggested that the slope, m, reflects the dispersion in the aggregate penetration resistance, while the intercept, n, reflects the soil macroporosity (Perfect et al., 1990). The mean is the statistical parameter most commonly used to evaluate resistance to penetration in soils. In this paper we tested the hypothesis that the mean is the parameter with the best statistical properties as a criterion to evaluate alterations in soil penetration resistance in response to soil use and management, as compared to the SR statistical parameters: median, maximum, m, n, and PLP at the critical value of 2 MPa (PLP[2MPa]). The objective of this research was to evaluate comparatively the statistical quality of the parameters from the penetration resistance profile: mean, median, maximum, m, n, and PLP at the critical value of 2 MPa (PLP[2MPa]).

The study was conducted with soil cores collected in the Mato Grosso do Sul state, Brazil (20º26'48'' S, 54º43'19'' W). According to Köppen's classification, the climate is defined as a transition between Cfa (subtropical humid) and Aw (tropical wet-dry), with a mean annual precipitation of 1500 mm and a mean temperature of 22ºC. The soil is classified as a Typic Acrudox with 399 g kg^-1 clay, 66 g kg^-1 silt, and 535 g kg^-1 sand. Three sampling sites were selected for the study: (i) Native Cerrado (NC): an undisturbed savanna site under native sclerophyllous forest called "Cerradão"; (ii) Short-duration Grazing with Lower Post-Grazing Residue (LR): the site (0.18 ha) cultivated with Tanzania grass (Panicum maximum cv.
Tanzania) was grazed for seven days followed by resting periods of 35 days. This site is part of an intensive short-duration grazing system experiment established in 1999. The experiment was designed to maintain soil base saturation from 45 to 50%; available phosphorus levels in Mehlich-1 (Nelson et al., 1953) from 4 to 8 mg dm^-3; and potassium from 60 to 80 mg dm^-3, at the 0 to 0.2 m layer. The nitrogen fertilization rate was 100 kg N ha^-1 yr^-1. The post-graze residue at this site was maintained at 2.0 to 2.5 Mg (Total Dry Matter) TDM ha^-1. Average annual stocking rates were 4.12 (Animal Units) AU ha^-1 and 2.26 AU ha^-1 for the wet and dry seasons, respectively. (iii) Short-duration Grazing with Higher Post-Grazing Residue (HR): same features as the LR site, except that the post-graze residue at this site was maintained at 3.0 to 3.5 Mg TDM ha^-1. Average annual stocking rates were 4.8 AU ha^-1 and 2.26 AU ha^-1 for the wet and dry seasons, respectively.

Twenty-seven undisturbed (5 × 5 cm) soil cores were collected at each site. The cores were subdivided into three groups of nine samples. Each group was dried to three water content ranges, namely the pre-field capacity range (bfc), field capacity range (fcr) and post-field capacity range (pfc). The bfc consisted of samples equilibrated at a tension of 4 kPa, the fcr at 10 kPa and the pfc at 0.5 MPa. After equilibration, the cores were stored in a refrigerator for one month and used in the evaluation of the soil penetration resistance (SR) profiles. The SR profile is the set of penetration resistance values collected along the vertical trajectory of the penetrometer in an undisturbed soil sample (Figure 1). These profiles were collected using an electronic penetrometer device directly connected to a computer parallel port. The cone penetrometer used has a semi-angle of 30º and a basal diameter of 3.86 mm. The penetration rate was 1.0 cm min^-1, with values recorded every 0.7 s. Immediately after the SR tests, soil cores were oven dried at 105ºC for 24 hours and the volumetric water content (θv) and bulk density (Db) were calculated. For each SR profile the statistical parameters mean (x), median (M) and maximum (max), the Probit analysis parameters slope (m) and intercept (n), and the percentage linear penetrability at the critical value of 2 MPa (PLP[2MPa]) (Groenevelt et al., 1984; Perfect et al., 1990) were calculated. For each water content range, the set of SR statistical parameters for the sampling sites (NC, HR, LR) was evaluated using the F ratio from ANOVA (Gravetter & Wallnau, 1995) and mean comparisons by the Least Significant Difference test (LSD) (Hsu, 1996). All statistical analyses were performed using the SAS/STAT software package (SAS Institute, 1999).

Mean values for volumetric water content (θv), dry bulk density (Db), and the statistical parameters from the soil penetration resistance (SR) profiles - mean (x), median (M), maximum (max), slope (m), intercept (n) and percentage linear penetrability at the critical value of 2 MPa (PLP[2MPa]) - are presented in Table 1. Parameters are grouped by sampling site and water content range; therefore the statistical parameters from the SR profiles are evaluated not only according to their sensitivity to soil use and management, but the influence of soil water content on these relationships is also considered. The sites used in this study were selected so that three different and clearly distinct degrees of compaction could be used in the SR parameter evaluations.
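As a rough illustration of how these summaries are obtained from a single SR profile, the computation amounts to a few simple statistics over the recorded readings. The sketch below is illustrative Scala only; the function and variable names are assumptions, not from the paper, and the Probit regression step that yields m and n is omitted:

    // Sketch: summary parameters of one penetration resistance (SR) profile.
    // srProfile holds the successive penetrometer readings (MPa) of one core.
    def profileParameters(srProfile: Seq[Double], critical: Double = 2.0): (Double, Double, Double, Double) = {
      val sorted = srProfile.sorted
      val mean   = srProfile.sum / srProfile.size
      val median = sorted(srProfile.size / 2)   // simple midpoint; adequate for a sketch
      val max    = srProfile.max
      // Percentage Linear Penetrability: share of the trajectory with SR below the critical value
      val plp    = 100.0 * srProfile.count(_ < critical) / srProfile.size
      (mean, median, max, plp)
    }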
Mean Db values were lower in the NC site, Db = 0.97 g cm^-3, and increased with the stocking rate in the grazed sites, Db = 1.30 g cm^-3 in HR and Db = 1.42 g cm^-3 in LR. For clay Oxisols such as the one evaluated here, these Db values are usually associated with loose soil (NC, normally undisturbed native vegetation), slightly compacted (HR), and highly compacted (LR) sites, respectively. Given that the difference in compaction degree between the sites was known beforehand, the statistical quality of each SR profile parameter was initially evaluated by the magnitude of the F ratio from the ANOVA and by its statistical significance level (P > F) (Gravetter & Wallnau, 1995). According to classical statistical theory, the higher the F ratio and the lower the P > F value, the better the statistical quality of the indicator (Gravetter & Wallnau, 1995). However, it is worth noting that not only the statistical quality of the parameter was investigated, but also its physical significance in the process was taken into account in this discussion (Webster, 2001). The F ratio, P > F, and r^2 from the ANOVA used for comparison of SR profile parameters between sampling sites are presented in Table 2, Table 3, and Table 4 for the water content ranges bfc, fcr and pfc, respectively.

At the three evaluated water content ranges, the highest F ratios and lowest P > F values were found for the statistical parameters x, max, M and PLP[2MPa] (Tables 2 to 4). The mean and maximum are the SR parameters most commonly used for the evaluation of soil physical quality, or compaction degree (Corrêa & Reichardt, 1995; Sojka et al., 2001; Villamil et al., 2001). The results presented here only confirm that these parameters are adequate to evaluate SR in soil quality studies. However, the PLP[2MPa] may be a more meaningful parameter since it indicates the percentage of the linear trajectory (PLP) through a soil sample (or layer) that would be penetrable with resistance values lower than 2 MPa (Groenevelt et al., 1984). Mean, maximum and median values give only a rough idea of the degree of mechanical resistance of a soil. For the mean and maximum, besides the fact that they represent a whole soil layer with a single value, they can be highly affected by extreme observations. Despite not being affected by extreme observations, the median has not been frequently used as an indicator of soil resistance, which is probably related to the fact that it cannot give a true representation of the range of resistances in a soil sample/profile nor give an indication of high SR values in layers/spots which can limit root growth under field conditions. The PLP, on the other hand, estimates the fraction of the linear trajectory through a soil that is readily penetrable by a growing root, given a certain critical penetration resistance value. Therefore, the PLP could give a better picture of the degree of penetrability of a soil layer than any other index (Groenevelt et al., 1984). The critical SR value of 2 MPa for the PLP was chosen since there is experimental evidence that root growth is severely restricted at this condition (Taylor et al., 1966; Materechera et al., 1991). Thus, at the PLP[2MPa], a given percentage of the soil profile would be penetrable at values lower than 2 MPa. For any water content range, the PLP[2MPa] was greater at the NC site (ranging from 88.74 to 98.78%) and much lower at the LR site (ranging from 0 to 26.5%).
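The F ratio referred to above is the usual one-way ANOVA statistic comparing the replicate cores of the three sites. As a rough sketch of the computation (illustrative Scala only; the function name and data layout are assumptions, and the authors' actual analysis was carried out with the SAS/STAT package):

    // Sketch: one-way ANOVA F ratio for one SR parameter measured at the three sites.
    // Each inner sequence holds the parameter values for the replicate cores of one site.
    def fRatio(groups: Seq[Seq[Double]]): Double = {
      val k = groups.size
      val n = groups.map(_.size).sum
      val grandMean = groups.flatten.sum / n
      val ssBetween = groups.map(g => g.size * math.pow(g.sum / g.size - grandMean, 2)).sum
      val ssWithin  = groups.map(g => { val m = g.sum / g.size; g.map(x => math.pow(x - m, 2)).sum }).sum
      (ssBetween / (k - 1)) / (ssWithin / (n - k))   // larger F = clearer separation of the sites
    }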
However, it is not known what percentage of the soil profile would need to be penetrable under a certain mechanical resistance for adequate root growth to occur. What is known is that plant roots are able to "search" for rooting paths in pores, fissures and zones of lower mechanical resistance through soil (Groenevelt et al., 1984; Whiteley & Dexter, 1984; Dexter, 1986a; 1986b). It has been suggested that the parameters m (slope) and n (intercept) from the Probit regression analysis could represent this soil structural heterogeneity (Perfect et al., 1990). The m parameter would reflect the spread in the distribution of aggregate strengths while n would reflect the macroporosity (Perfect et al., 1990). However, their statistical quality as criteria for the evaluation of SR differences among different soil management strategies was lower than that of the other evaluated parameters (Tables 2 to 4).

To further investigate the ability of the parameters as criteria to categorize SR among sampling sites with different compaction degrees, mean comparison tests were performed for the SR parameters with the highest F ratios and lowest P > F values. Tables 5, 6, 7 and 8 present the LSD means comparison tests for the PLP[2MPa], x, max and M soil resistance parameters, respectively, between sampling sites. Theoretically, the more sensitive the parameter to SR alterations by soil management, the more easily the LSD test could identify statistically significant differences among sites. The parameters from the SR profile, PLP[2MPa], x, max and M, identified the same trend of soil physical quality decrease at any water content range (Tables 5 to 8). However, the interpretation of the PLP[2MPa] results is different, since the sampling sites with the best soil physical quality will have higher values (Table 5). Therefore, at any water content range, the PLP[2MPa] value was lower at LR, but the values did not differ statistically between the NC and HR sites for the bfc and fcr water content ranges. At pfc the PLP[2MPa] did not differ statistically between the LR and HR sites. This behavior can be explained by the fact that in compacted sites the SR increases sharply with the decrease in water content, while the magnitude of this increase is lower in loose soil (Imhoff et al., 2000). The same trends were observed between sampling sites for the max and M. At any water content range the max and M values were greater at the LR site. As one might expect, the LR site was associated with the worst soil structural properties, a trend which was also identified by other physical parameters, like Db (Table 1), the range of critical bulk density values and the least limiting water range (Leão et al., 2004). At higher water contents (bfc and fcr) the LSD test for the max and PLP[2MPa] parameters identified the same statistical trend (Tables 5 and 7). The SR properties were better and not statistically different for the NC and HR sites. On the other hand, even at higher water contents, the LR site soil had low structural quality as evaluated by any SR profile parameter (Tables 5 to 8). At any water content range, the x and M parameters identified the same statistical trend between sites (Tables 6 and 8). With drier soil (pfc) the sensitivity of the LSD test was equal for the x, max, and M parameters (Tables 6, 7 and 8). At this water content range (pfc) the LSD test for the x, max and M parameters identified the three sites as statistically different, the value of the parameter decreasing from the LR to the NC site. The SR characteristics of a soil are mainly influenced by bulk density and water content and by their interactions (Sojka et al., 2001).
However, as one might observe in surface plots of SR as a function of both bulk density and water content (Imhoff et al., 2000; Sojka et al., 2001), the influence of bulk density on SR characteristics increases as the soil water content decreases. The decrease in SR with the increase in θv is due to the reduction in cohesion and in the angle of internal friction as soil water content increases (Camp & Gill, 1969). The increase in SR with Db is attributed to the effect of compaction on the soil matrix, and to the increase in interparticle friction as soil particles become closer with soil compaction (Vepraskas, 1984; Sojka et al., 2001). At lower water contents, the x, max and M parameters were more reliable in identifying these relationships than any other SR profile parameter.

To test whether the PLP[2MPa] could be estimated from easily measured soil physical properties (i.e. water content, bulk density, and resistance parameters) for the specific conditions of the soil under evaluation, a simple stepwise multiple regression exercise was carried out. The PLP[2MPa] was entered in the stepwise regression model as a function of bulk density, volumetric water content and the soil resistance parameters maximum, minimum, mean and median. The PLP[2MPa] for the soil under evaluation can be estimated with a high degree of accuracy from the bulk density and the mean soil resistance (i.e. arithmetic average), where: PLP[2MPa] = percentage of linear penetrability at the soil resistance of 2 MPa (%); Db = soil bulk density (g cm^-3); SR = mean soil resistance (MPa); Db > 0; 0 < PLP[2MPa] < 100; SR > 0.

The mean, maximum and median are the parameters with the best statistical properties as criteria to evaluate alterations in soil penetration resistance in response to soil use and management, as compared to the statistical SR parameters m, n, and PLP at the critical value of 2 MPa (PLP[2MPa]). The traditional indicator of SR, the mean, is adequate to evaluate the differences in SR among sites with different use and management; however, it would be more effective at lower water contents. A new parameter, PLP[2MPa], is also proposed, but its sensitivity to SR changes should be further investigated.

BENGOUGH, A.G.; MACKENZIE, C.J. Simultaneous measurements of root force and elongation for seedling pea roots. Journal of Experimental Botany, v.45, p.95-102, 1994.
CAMP, C.R.; GILL, W.R. The effect of drying on soil strength parameters. Soil Science Society of America Proceedings, v.33, p.641-644, 1969.
CORREA, J.C.; REICHARDT, K. Efeito do tempo de uso das pastagens sobre as propriedades de um Latossolo Amarelo da Amazônia Central. Pesquisa Agropecuária Brasileira, v.30, p.107-114, 1995.
DEXTER, A.R. Model experiments on the behaviour of roots at the interface between tilled seed-bed and compacted sub-soil. II. Entry of pea and wheat roots into sub-soil cracks. Plant and Soil, v.95, p.135-147, 1986a.
DEXTER, A.R. Model experiments on the behavior of roots at the interface between tilled seed-bed and compacted sub-soil. III. Entry of pea and wheat roots into cylindrical biopores. Plant and Soil, v.95, p.149-161, 1986b.
FINNEY, D.J. Probit analysis: a statistical treatment of the sigmoid response curve. Cambridge: University Press, 1952. 318p.
GRAVETTER, F.J.; WALLNAU, L.B. Essential statistics for the behavioral sciences. 2.ed. Minneapolis: West Publishing Company, 1995. 431p.
GROENEVELT, P.H.; KAY, B.D.; GRANT, C.D. Physical assessment of a soil with respect to rooting potential.
Geoderma, v.34, p.101-114, 1984.
HSU, J.C. Multiple comparisons: theory and methods. London: Chapman & Hall, 1996. 277p.
IMHOFF, S.; SILVA, A.P.; TORMENA, C.A. Aplicações da curva de resistência no controle da qualidade física de um solo sob pastagem. Pesquisa Agropecuária Brasileira, v.35, p.1493-1500, 2000.
LEÃO, T.P.; SILVA, A.P.; MACEDO, M.C.M.; IMHOFF, S.; EUCLIDES, V.P.B. Intervalo hídrico ótimo na avaliação de sistemas de pastejo contínuo e rotacionado. Revista Brasileira de Ciencia do Solo, v.28, p.1-8, 2004.
MATERECHERA, S.A.; DEXTER, A.R.; ALSTON, A.M. Penetration of very strong soils by seeding of different plant species. Plant and Soil, v.135, p.31-41, 1991.
NELSON, W.L.; MEHLICH, A.; WINTERS, E. The development evaluation and use of soil tests for phosphorus availability. In: PIERRE, W.H., NORMAN, A.G. (Ed.). Soil fertilizer phosphorus in crop nutrition. New York: Academic Press, 1953. p.153-188.
PERFECT, E.; GROENEVELT, P.H.; KAY, B.D.; GRANT, C.D. Spatial variability of soil penetrometer measurements at the mesoscopic scale. Soil and Tillage Research, v.16, p.257-271, 1990.
SAS INSTITUTE. SAS/STAT user's guide: version 8. Cary, 1999.
SOJKA, R.E.; BUSSCHER, W.J.; LEHRSCH, G.A. In situ strength, bulk density, and water content of a durinodic xeric haplocalcid soil. Soil and Tillage Research, v.166, p.520-529, 2001.
TAYLOR, H.M.; ROBERSON, G.M.; PARKER, J.J. Soil strength-root penetration relations to coarse textured materials. Soil Science, v.102, p.18-22, 1966.
TIETJEN, G.L. A topical dictionary of statistics. New York: Chapman and Hall, 1986. 171p.
VEPRASKAS, M.J. Cone index of loamy sands as influenced by pore size distribution and effective stress. Soil Science Society of America Journal, v.48, p.1220-1225, 1984.
VILLAMIL, M.B.; AMIOTTI, N.M.; PEINEMANN, N. Soil degradation related to overgrazing in the semi-arid southern caldenal of Argentina. Soil Science, v.166, p.441-452, 2001.
WEBSTER, R. Statistics to support soil research and their presentation. European Journal of Soil Science, v.52, p.331-340, 2001.
WHITELEY, G.M.; DEXTER, A.R. The behaviour of roots encountering cracks in soil. II. Development of a predictive model. Plant and Soil, v.77, p.141-149, 1984.

Received May 17, 2006
Accepted October 06, 2006
* Corresponding author <apisilva@esalq.usp.br>
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-90162006000600007&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-18T02:02:03Z","content_type":null,"content_length":"56113","record_id":"<urn:uuid:c390f6e7-312b-4dad-8833-7f61a381c9b4>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
710 in Gmatprep but 540 on GMAT day-OCT 21st 2011

Intern, 24 Oct 2011, 14:40:
I agree with rjdunn about timing. Also your math score seems a bit low compared to your practice scores; maybe you were just having a bad day. Guessing on 7 answers coupled with maybe just bad luck would account for a 20 in verbal. I'd recommend working on timing. You can definitely achieve 700+!

Senior Manager, 24 Oct 2011, 18:22:
I think the original poster answered his question on why his score dipped. He stated that he guessed about 7 questions in the verbal section, most toward the end. The way I've approached this test is to simply keep a close eye on the time and work at a steady pace at about 2 min/question in math. I don't care about the experimental to non-experimental ratio. The only test that really boggles people on that is the LSAT, where there is an experimental section and the test takers bite their fingernails for three weeks hoping that the section they felt they bombed was the experimental.

satishchandraaily, 24 Oct 2011, 18:44:
jko wrote:
    satishchandraaily wrote: But the fact is - I did not screw my exam; the exam screwed me.
    In a 37 question problem set, guessing on the last 7 questions is a huge hit, but for a CAT problem set it's even worse. Not to mention whatever problems you missed in the first 30 problems. The plus side here is that you know what you have to work on. Time management. Spend a month working on your pacing and doing more verbal problems and you should be able to dramatically improve.
I agree with you. I can't be guessing the last 10. I will work on this deficiency.
Posted from my mobile device

satishchandraaily, 24 Oct 2011, 18:55:
dmnk wrote:
    You mentioned that you had completed more than 20 prep exams before taking the real GMAT. Is it possible that you did the same prep test more than once and that therefore your scores in the latter part of your preparation period could have been inflated?
Good point. You are trying to evaluate my argument by raising the best question. Had it been CR, I would have picked your question as my answer. However, it is not the case. These are the tests I have given so far:
MGMAT: 6 tests
Powerprep: 2 tests
2 tests
800scoregmat: 5 tests
GMATprep: 5 tests
Veritasprep: 1 test
I could see 3 or 4 repeated questions in my 4th and 5th GMATprep exams. I did not evaluate my wrong answers in the first 3 GMATprep tests before writing the 4th and 5th tests. I took snapshots and reviewed all wrong answers in the last 15 days before my official GMAT. So the chances of getting inflated scores are low. If so, my GMATprep score might have been inflated by 10 or at most 20 points, but not 150 points for sure.
Posted from my mobile device

satishchandraaily, 24 Oct 2011, 19:20:
gmatpapa wrote:
    I know how frustrating it can get. Honestly, though, 46 is not a score that reflects an IITian's quant ability. Considering how big quant studs you guys are, maybe overconfidence? You guessed the last 10 questions on verbal - so that took care of your score going from 30 to 20 in pure arithmetic progression. Statistically, you have a 20% chance of guessing a question correctly, so even if you got 2/10 right, spending 2-3 seconds per question, the CAT would surely know that you've guessed on the questions. So most probably no points there as well (could be disputed, but it's just my opinion). Timing, timing, timing. Do as many official questions as possible and solve them by logic. That is what will get you through. No templates. No shortcuts. All my best wishes to you.
Hi dude. I must thank you for critically thinking about where I could have gone wrong and advising me with your opinion. First, let's assume your logic about the A.P. and the algorithm detecting guessed questions. If this is true, then why do they give GMATprep and Powerprep with different algorithms? I did the same in MGMAT tests. Why would MGMAT present their tests with a different algorithm? My friend, who gave the exam in August and scored 600 then, got 540 on my exam day with 24 in verbal. She had scored 34 in August. She followed the same strategy as mine. She guessed from questions 32 to 41 but still she could score 34. In these two months, I know how much she improved. Nothing got reflected in the test. Last one: I got 46 in quant when I started my prep. I did not know all the quant formulae; I forgot them after studying them in 12th and after studying for campus placements. When I got 46 at the start of my prep, I could not finish quant; I had to guess the last 4 questions. I did solve very good, tough quant questions with ease. FYI, quant also had 3 data interpretation questions. I solved them with ease too. I was expecting 50 or 49. My questions about whether a particular day can have any effect on the score are still unanswered. I ask this again and again because everyone we knew who took the exam on Oct 21st got 150 or more points less.
Posted from my mobile device

sda (potential future applicant), 24 Oct 2011, 20:59:
GMAC is shady. I know someone who did >100 worse from an actual GMAT to another later actual GMAT, with more study. They've had bugs in grading before and they're still completely opaque, so who knows what it's doing with your answers. Extra shady if they're a for-profit organization.

satishchandraaily, 25 Oct 2011, 22:30:
sda wrote:
    GMAC is shady. I know someone who did >100 worse from an actual GMAT to another later actual GMAT, with more study. They've had bugs in grading before and they're still completely opaque, so who knows what it's doing with your answers. Extra shady if they're a for-profit organization.
Dude, I did not completely understand what you meant to say.
Posted from my mobile device

satishchandraaily, 25 Oct 2011, 22:47:
rjdunn03 wrote:
    Yeah, from what I understand, and I believe it is posted on mba.com as well as in the OG, not finishing a section of the test dramatically drops your score. As others have mentioned, the algorithm likely took into account when you put the same answer for all of the last 7 questions and probably scored it the same as if you hadn't answered them at all. The point of the GMAT isn't just to test how well you can answer verbal and quant questions, but how well you can manage the test. I'm sure any of us - given enough time - could get most GMAT questions right, but trying to answer them in under 2 minutes is much more challenging and it forces you to pick your battles and make educated guesses by eliminating answer choices etc. The fact that you were only able to answer 30 V questions in time, and that you say that is how it has been in your practice exams, tells me that you haven't learned to consistently manage your time. I'll echo what the above poster suggested: reschedule the test for a month from now and focus your efforts on timing. Practice timing strategy and learn to make smart guesses when you are starting to get behind. You'll be surprised how much your score can increase by employing these tactics.
Thanks for your analysis of my test. I don't know whether I have misread your post or not, but I want to clarify that I did finish my test. I left no question unanswered. My questions are: how is answering several questions in a row with the same option treated the same as leaving questions unanswered? It could happen that I read the last three and thought the answer should be 'D' in a row. I may be wrong on any question. But does the program think I guessed and treat those questions as unanswered ones? It's hard to believe and accept this as true, mate. However, I still feel spending equal time on all questions and managing time properly is a great strategy. I will work upon it. Also, do you think any particular day can have an effect on scores, if the majority of top scorers give the exam on the same day?

satishchandraaily, 25 Oct 2011, 22:54:
Abhishek14 wrote:
    I gave GMAT today .. I saw shocker 550 M45/V21 .. Cant believe .. Guys plzz help .. I am not able to figure out where it went wrng...
Where and when did you give your exam? Are you by any chance the same Abhishek who gave it in Pune on Oct 21st? I could not talk to that Abhishek after my exam as he left early and forgot his jacket in my locker.

Senior Manager, 27 Oct 2011, 14:29:
satishchandraaily wrote:
    Thanks for your analysis of my test. I don't know whether I have misread your post or not, but I want to clarify that I did finish my test. I left no question unanswered. My questions are: how is answering several questions in a row with the same option treated the same as leaving questions unanswered? It could happen that I read the last three and thought the answer should be 'D' in a row. I may be wrong on any question. But does the program think I guessed and treat those questions as unanswered ones? It's hard to believe and accept this as true, mate. However, I still feel spending equal time on all questions and managing time properly is a great strategy. I will work upon it. Also, do you think any particular day can have an effect on scores, if the majority of top scorers give the exam on the same day?
Well, nobody knows precisely how the algorithm works; I was just giving a possible guess as to why you would score so low. Also, seven questions is a good chunk of the verbal, and after each subsequent incorrect answer the test gives an easier question. So the last few questions you answered D on were likely low-level questions and could have caused a large drop in your score. As far as a particular day having an effect on your test - no, I don't think so. The reason being that your scores are given as a percentile compared to the last 3 years of test takers, so you aren't only being compared to those who sit for the test the same day as you. BUT the day surely can have an effect on you - if you are tired, nervous etc, you may perform more poorly than you would in a different setting. So in that regard your scores will probably fluctuate from day to day.
{"url":"http://gmatclub.com/forum/710-in-gmatprep-but-540-on-gmat-day-oct-21st-122257-20.html","timestamp":"2014-04-18T19:00:10Z","content_type":null,"content_length":"168868","record_id":"<urn:uuid:852f417b-4750-450e-8791-db2701cb6246>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Substructural logic programming

This page collects work done by me and others on substructural logic programming. It focuses on forward-chaining substructural logic programming in which logic programming rules act roughly like rewriting rules - there are a lot of interesting potential connections between forward-chaining substructural logic programming and rewriting logic semantics. This is distinct from backward-chaining or goal-oriented logic programming in substructural logics.

A closely related project is the Security Protocol Specification Language MSR; Cervesato and Scedrov's 2009 journal paper "Relating State-Based and Process-Based Concurrency through Linear Logic," mentioned in the list of papers below, describes the connection. Reasoning about substructural logic programs is one of the goals of the Meta-CLF project (and one of the goals of my thesis!). Because the Meta-CLF project page reviews work on this topic, I am only reviewing papers that discuss the specification or implementation of substructural logic programming languages. Papers are listed in roughly chronological order.

• Monadic concurrent linear logic programming, Pablo López, Frank Pfenning, Jeff Polakow, and Kevin Watkins, PPDP 2005. This was an initial ambitious attempt to make a logic programming language combining both forward-chaining and backward-chaining based on Watkins et al.'s logical framework CLF. It introduced the Lollimon implementation; instructions for downloading and running Lollimon can be found at the bottom of this page. (Again, I didn't have anything to do with Lollimon, but there didn't seem to be instructions anywhere else on the web for how to download and run it!) The paper is available from the ACM Digital Library and Frank Pfenning's web page.

• Linear Logical Algorithms, Robert J. Simmons and Frank Pfenning, ICALP 2008. Frank wrote up a short note, titled On Linear Inference, that discusses the philosophical basis for the style of forward-chaining we use in this paper. The extended abstract published in ICALP 2008 is owned by Springer; the content in that extended abstract is also covered in the CMU tech report, which has the official CMU tech report number CMU-CS-08-104 but didn't ever quite make it into the official CMU tech report archive, I believe.

• Celf – A Logical Framework for Deductive and Concurrent Systems (System Description), Anders Schack-Nielsen and Carsten Schürmann, IJCAR 2008. Describes Celf.

• Relating State-Based and Process-Based Concurrency through Linear Logic, Iliano Cervesato and Andre Scedrov, IC 2009.

• Substructural operational semantics as ordered logic programming, Frank Pfenning and Robert J. Simmons, LICS 2009. Introduces Ollibot.

This is the presentation of Linear Logical Algorithms from ICALP. I let myself fall slightly out of practice and was a bit under the weather when I recorded this; consider it an experiment, given that the presentation doesn't quite make sense without narration attached to the slides.

Lollimon is an experimental programming language that supports forward and backward chaining linear logic programming. It was originally described in the paper Monadic concurrent linear logic programming. The Linear Logical Algorithms paper describes a fragment of the Lollimon language, and so every LLA can be run in Lollimon. Note that Lollimon most definitely does not support the run-time guarantees described in the Linear Logical Algorithms paper. The following commands will allow you to check out and build Lollimon.
These instructions assume you have OCaml installed on your system.

    shell> cvs -d :pserver:guest_lf@cvs.concert.cs.cmu.edu:/cvsroot login
    (press Enter if prompted for a password - no password is necessary)
    shell> cvs -d :pserver:guest_lf@cvs.concert.cs.cmu.edu:/cvsroot checkout lollimon
    shell> cd lollimon
    shell> make

If everything goes correctly, you can then run the examples as follows. Starting Lollimon will cause you to enter Lollimon's top-level environment.

    shell> ./lollimon
    LolliMon> #load "examples/linlogalg/spantree.lo"
    [Loading file /Users/rjsimmon/Repos/lollimon/examples/linlogalg/spantree.lo]
    Looking for 4 solutions to query:
    edge a b => edge a c => edge a d => edge b c => edge b e => edge d e =>
    vertex a -o vertex b -o vertex c -o vertex d -o vertex e -o {! (tree X Y)}
    Attempt 1, Solution 1 with [X := d; Y := e]
    Attempt 1, Solution 2 with [X := a; Y := b]
    Attempt 1, Solution 3 with [X := a; Y := c]
    Attempt 1, Solution 4 with [X := a; Y := d]
    spantree.lo is Ok.
    [Closing file /Users/rjsimmon/Repos/lollimon/examples/linlogalg/spantree.lo]

In this case, the four lines "Attempt 1, Solution N" tell us that Lollimon's execution of the linear logical algorithm for spanning trees on the five-element undirected graph from the ICALP talk computed the spanning tree (a,b), (a,c), (a,d), (d,e). In this manner, you can run all of the examples from the presentation and the ICALP paper, and most from the technical report. See the "README" file in the "lollimon/examples/linlogalg" directory.
{"url":"http://www.cs.cmu.edu/~rjsimmon/sublogicprog.html","timestamp":"2014-04-21T05:28:43Z","content_type":null,"content_length":"8640","record_id":"<urn:uuid:6231bcb7-5715-44ff-9dce-3087df828374>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about scala on Darren Wilkinson's research blog

In previous posts I have discussed general issues regarding parallel MCMC and examined in detail parallel Monte Carlo on a multicore laptop. In those posts I used the C programming language in conjunction with the MPI parallel library in order to illustrate the concepts. In this post I want to take the example from the second post and re-examine it using the Scala programming language.

The toy problem considered in the parallel Monte Carlo post used $10^9$ $U(0,1)$ random quantities to construct a Monte Carlo estimate of the integral $\displaystyle I=\int_0^1\exp\{-u^2\}du$. A very simple serial program to implement this algorithm is given below:

    import java.util.concurrent.ThreadLocalRandom
    import scala.math.exp
    import scala.annotation.tailrec

    object MonteCarlo {

      // accumulate exp(-u*u) over "its" uniform draws
      @tailrec
      def sum(its: Long, acc: Double): Double = {
        if (its == 0) acc
        else {
          val u = ThreadLocalRandom.current().nextDouble()
          sum(its - 1, acc + exp(-u * u))
        }
      }

      def main(args: Array[String]) = {
        val iters = 1000000000
        val result = sum(iters, 0.0)
        println(result / iters)
      }

    }

Note that ThreadLocalRandom is a parallel random number generator introduced into recent versions of the Java programming language, which can be easily utilised from Scala code. Assuming that Scala is installed, this can be compiled and run with commands like

    scalac monte-carlo.scala
    time scala MonteCarlo

This program works, and the timings (in seconds) for three runs are 57.79, 57.77 and 57.55 on the same laptop considered in the previous post. The first thing to note is that this Scala code is actually slightly faster than the corresponding C+MPI code in the single processor special case! Now that we have a good working implementation we should think how to parallelise it…

Parallel implementation

Before constructing a parallel implementation, we will first construct a slightly re-factored serial version that will be easier to parallelise. The simplest way to introduce parallelisation into Scala code is to parallelise a map over a collection. We therefore need a collection and a map to apply to it. Here we will just divide our $10^9$ iterations into $N=4$ separate computations, and use a map to compute the required Monte Carlo sums.

    import java.util.concurrent.ThreadLocalRandom
    import scala.math.exp
    import scala.annotation.tailrec

    object MonteCarlo {

      @tailrec
      def sum(its: Long, acc: Double): Double = {
        if (its == 0) acc
        else {
          val u = ThreadLocalRandom.current().nextDouble()
          sum(its - 1, acc + exp(-u * u))
        }
      }

      def main(args: Array[String]) = {
        val N = 4
        val iters = 1000000000
        val its = iters / N
        val sums = (1 to N).toList map { x => sum(its, 0.0) }
        val result = sums.reduce(_ + _)
        println(result / iters)
      }

    }

Running this new code confirms that it works and gives similar estimates for the Monte Carlo integral as the previous version. The timings for 3 runs on my laptop were 57.57, 57.67 and 57.80, similar to the previous version of the code. So far so good. But how do we make it parallel? Like this:

    import java.util.concurrent.ThreadLocalRandom
    import scala.math.exp
    import scala.annotation.tailrec

    object MonteCarlo {

      @tailrec
      def sum(its: Long, acc: Double): Double = {
        if (its == 0) acc
        else {
          val u = ThreadLocalRandom.current().nextDouble()
          sum(its - 1, acc + exp(-u * u))
        }
      }

      def main(args: Array[String]) = {
        val N = 4
        val iters = 1000000000
        val its = iters / N
        val sums = (1 to N).toList.par map { x => sum(its, 0.0) }
        val result = sums.reduce(_ + _)
        println(result / iters)
      }

    }

That's it! It's now parallel. Studying the above code reveals that the only difference from the previous version is the introduction of the 4 characters .par on the line that builds the list of sums. R programmers will find this very much analogous to using lapply() versus mclapply() in R code.
The function par converts the collection (here an immutable List) to a parallel collection (here an immutable parallel List), and then subsequent maps, filters, etc., can be computed in parallel on appropriate multicore architectures. Timings for 3 runs on my laptop were 20.74, 20.82 and 20.88. Note that these timings are faster than the timings for N=4 processors for the corresponding C+MPI code…

Varying the size of the parallel collection

We can trivially modify the previous code to make the size of the parallel collection, N, a command line argument:

    import java.util.concurrent.ThreadLocalRandom
    import scala.math.exp
    import scala.annotation.tailrec

    object MonteCarlo {

      @tailrec
      def sum(its: Long, acc: Double): Double = {
        if (its == 0) acc
        else {
          val u = ThreadLocalRandom.current().nextDouble()
          sum(its - 1, acc + exp(-u * u))
        }
      }

      def main(args: Array[String]) = {
        val N = args(0).toInt
        val iters = 1000000000
        val its = iters / N
        val sums = (1 to N).toList.par map { x => sum(its, 0.0) }
        val result = sums.reduce(_ + _)
        println(result / iters)
      }

    }

We can now run this code with varying sizes of N in order to see how the runtime of the code changes as the size of the parallel collection increases. Timings on my laptop (in seconds, three runs per value of N) are summarised in the table below.

    N    T1     T2     T3
    1    57.67  57.62  57.83
    2    32.20  33.24  32.76
    3    26.63  26.60  26.63
    4    20.99  20.92  20.75
    5    20.13  18.70  18.76
    6    16.57  16.52  16.59
    7    15.72  14.92  15.27
    8    13.56  13.51  13.32
    9    18.30  18.13  18.12
    10   17.25  17.33  17.22
    11   17.04  16.99  17.09
    12   15.95  15.85  15.91
    16   16.62  16.68  16.74
    32   15.41  15.54  15.42
    64   15.03  15.03  15.28

So we see that the timings decrease steadily until the size of the parallel collection hits 8 (the number of processors my hyper-threaded quad-core presents via Linux), and then increase very slightly, but not by much as the size of the collection grows further. This is better than the case of C+MPI where performance degrades noticeably if too many processes are requested. Here, the Scala compiler and JVM runtime manage an appropriate number of threads for the collection irrespective of the actual size of the collection. Also note that all of the timings are faster than the corresponding C+MPI code discussed in the previous post.

However, the notion that the size of the collection is irrelevant is only true up to a point. Probably the most natural way to code this algorithm would be as:

    import java.util.concurrent.ThreadLocalRandom
    import scala.math.exp

    object MonteCarlo {

      def main(args: Array[String]) = {
        val iters = 1000000000
        val sums = (1 to iters).toList map { x =>
          ThreadLocalRandom.current().nextDouble()
        } map { x => exp(-x * x) }
        val result = sums.reduce(_ + _)
        println(result / iters)
      }

    }

or as the parallel equivalent

    import java.util.concurrent.ThreadLocalRandom
    import scala.math.exp

    object MonteCarlo {

      def main(args: Array[String]) = {
        val iters = 1000000000
        val sums = (1 to iters).toList.par map { x =>
          ThreadLocalRandom.current().nextDouble()
        } map { x => exp(-x * x) }
        val result = sums.reduce(_ + _)
        println(result / iters)
      }

    }

Although these algorithms are in many ways cleaner and more natural, they will bomb out with a lack of heap space unless you have a huge amount of RAM, as they rely on having all $10^9$ realisations in RAM simultaneously. The lesson here is that even though functional languages make it very easy to write clean, efficient parallel code, we must still be careful not to fill up the heap with gigantic (immutable) data structures…
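One way to keep the clean "map over everything" style without materialising all of the draws is to fold the work directly over a parallel range, for example using aggregate. This is only a sketch of the idea rather than code from the post itself, and it assumes the same Scala 2.x parallel collections used above:

    import java.util.concurrent.ThreadLocalRandom
    import scala.math.exp

    object MonteCarloAggregate {
      def main(args: Array[String]) = {
        val iters = 1000000000
        // aggregate folds each chunk sequentially (no per-draw collection is built)
        // and then combines the per-chunk partial sums.
        val result = (1 to iters).par.aggregate(0.0)(
          (acc, _) => { val u = ThreadLocalRandom.current().nextDouble(); acc + exp(-u * u) },
          _ + _)
        println(result / iters)
      }
    }

Here each parallel chunk accumulates its own partial sum and only the per-chunk totals are combined, so no billion-element collection is ever built.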
{"url":"http://darrenjw.wordpress.com/tag/scala/","timestamp":"2014-04-17T18:27:46Z","content_type":null,"content_length":"113521","record_id":"<urn:uuid:10a9fa06-a1c7-4ee0-8b86-92b27b1999c4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis Ch5 pg3 ex7 Full Regression: Measles

Full Regression Model: Measles Immunization

To run the full regression with the two interaction variables (MEAS_AGE and LOWED_HT), just follow these steps, then scrutinize your results to see if you think the model is appropriate:

1. Open keast4j.sav
2. Click on Statistics, Regression, Linear.
3. For the Dependent variable, enter waz using the arrow.
4. For the Independent variables, enter age, agesq, hmeasyn, dlowedn, htresp, meas_age, and lowedn_ht.
5. Click on OK.

Do your results look like these? You are probably already realizing that there is something likely to be wrong with this analysis, since you have a negative effect from measles according to the B coefficient (-0.253). As we have seen in an earlier section, there is certainly some confounding of age and measles immunization, since the younger children do not get the immunization and they are also likely to be better nourished. Here, it appears that the interaction variables are both non-significant, which would indicate that you should try running the model again without them. Because measles and age are thought to have a strong likelihood of interacting, we will keep that interaction in the model even though it does not have a significant p-value. Really, the next best step to see what is happening with measles and nutritional status is to look at these variables in a model that is stratified by age. But first, here is a look at the results of the full model without the interaction term for education and height.
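The two interaction variables are just products of the corresponding main-effect variables (measles immunization times age, and low education times height of respondent). As a rough sketch of how such columns can be constructed before fitting - the record type, function name, and the reference to a generic least-squares routine are illustrative assumptions, not part of the exercise:

    // Sketch: building the interaction columns for the regression design matrix.
    // Field names follow the tutorial's variable names; data values are assumed.
    case class Child(waz: Double, age: Double, agesq: Double,
                     hmeasyn: Double, dlowedn: Double, htresp: Double)

    def designRow(c: Child): Array[Double] = {
      val measAge = c.hmeasyn * c.age     // MEAS_AGE: measles immunization x age
      val lowedHt = c.dlowedn * c.htresp  // LOWED_HT: low education x height of respondent
      Array(1.0, c.age, c.agesq, c.hmeasyn, c.dlowedn, c.htresp, measAge, lowedHt)
    }
    // Each row would then be fed, together with waz as the response, to any
    // ordinary least-squares routine to reproduce the full model above.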
{"url":"http://www.tulane.edu/~panda2/Analysis2/Multi-way/full_reg.htm","timestamp":"2014-04-18T08:35:45Z","content_type":null,"content_length":"3382","record_id":"<urn:uuid:649e385b-91dc-4960-990c-d2dc055e98c9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Method and apparatus for timing recovery - Patent # 5675612

Inventor: Solve, et al.
Date Issued: October 7, 1997
Application: 08/502,317
Filed: July 13, 1995
Inventors: Fertner; Antoni (Solna, SE); Solve; Torkel C. J. (Bromma, SE)
Assignee: Telefonaktiebolaget LM Ericsson (Stockholm, SE)
Primary Examiner: Chin; Stephen
Assistant Examiner: Vo; Don
Attorney Or Agent: Nixon & Vanderhye P.C.
U.S. Class: 375/233; 375/326; 375/350
Field Of Search: 375/355; 375/231; 375/232; 375/233; 375/343; 375/350; 375/326; 364/724.2; 364/724.16; 331/18; 331/25R; 331/32; 381/103
International Class: H04L 7/02
U.S. Patent Documents: Re34206; 3962637; 4028626; 4061978; 4815103; 4896334; 4995031; 5020078; 5276711; 5450457
Foreign Patent Documents: 0 330 282; 0 476 487
Other References:
ICC 1975, vol. 2, 16 Jun. 1975, San Francisco, pp. 34-24-34-37, XP000579359, H. Sailer, "Timing Recovery in Data Transmission Systems Using Multilevel Partial Response Signaling".
References:
Proceedings of the ACM Symposium on Problems in the Optimization of Data Communication Systems, Pine Mountain, GA, USA, 13-16 Oct. 1969, 1969, New York, NY, USA, Assoc. Computing Machinery, USA, pp. 347-367, XP000600005, R.W. Chang, "Joint automatic equalization for data communication".
IEEE Transactions on Communications, Feb.-Apr. 1994, USA, vol. 42, No. 2-4, pt. 2, Feb. 1994, ISSN 0090-6778, pp. 1409-1414, XP000447364, T. Aboulnasr et al., "Characterization of a symbol rate timing recovery technique for a 2B1Q digital receiver".
ICC '93 Geneva, IEEE International Conference on Communications '93. Technical Program, Conference Record (Cat. No. 93CH3261-5), Proceedings of ICC '93--IEEE International Conference on Communications, Geneva, Switzerland, 23-26 May 1993, ISBN 0-7803-0950-2, 1993, New York, NY, USA, IEEE, USA, pp. 1804-1808, vol. 3, XP000448433, B. Daneshrad et al., "A carrier and timing recovery technique for QAM transmission on digital subscriber loops".
Daneshrad et al., "A Carrier and Timing Recovery Technique for QAM Transmission on Digital Subscriber Loops"; IEEE International Conference on Communications '93; May 23-26, 1993, Geneva, Switzerland; pp. 1804-1808.
Chang, "Joint Automatic Equalization for Data Communication"; Proceedings of the ACM Symposium on Problems in the Optimization of Data Communications Systems; Oct. 13-16, 1969; pp. 347-367.
Sailer; "Timing Recovery in Data Transmission Systems Using Multilevel Partial Response Signaling"; 1975 International Conference on Communications; vol. II, IEEE Catalog No. 75 Cho 971-2 CSCB; pp. 34-24-34-27.
Bergmans et al., "A Class of Data-Aided Timing-Recovery Schemes," IEEE Transactions on Communications, vol. 43, No. 2/3/4, Feb./Mar./Apr. 1995, pp. 1819-1827.
Qureshi, "Timing Recovery for Equalized Partial-Response Systems," IEEE Transactions on Communications, Dec. 1976, pp. 1326-1331.
Kobayashi, "Simultaneous Adaptive Estimation and Decision Algorithm for Carrier Modulated Data Transmission Systems," IEEE Transactions on Communications Technology, vol. COM-19, No. 3, Jun. 1971, pp. 268-280.
Mueller et al.; "Timing Recovery in Digital Synchronous Data Receivers"; IEEE Transactions on Communications, vol. COM-24, No. 5, May 1976, pp. 516-531.
Aboulnasr et al.; "Characterization of a Symbol Rate Timing Recovery Technique for a 2B1Q Digital Receiver"; IEEE Trans. on Comm., vol. 42, No. 2/3/4, Feb./Mar./Apr. 1994; pp. 1409-1414.
Gottlieb et al.; "The DSP Implementation of a New Timing Recovery Technique for High-Speed Digital Data Transmission"; IEEE; CH2847, 1990, pp. 1679-1682.. Tzeng et al.; "Timing Recovery in Digital Subscriber Loops Using Baud-Rate Sampling"; IEEE Journal on Selected Areas in Communications, vol. SAC-4, No. 8, Nov. 1986, pp. 1302-1311.. Agazzi et al.; "Timing Recovery in Digital Subscriber Loops"; IEEE Trans. on Comm., vol. COM-33, No. 6, Jun. 1985, pp. 558-569.. Agazzi et al.; "A Single-Chip ANSI Standard ISDN U-Interface Transceiver"; IEEE 1992 Custom Integrated Circuits Conference; pp. 29.5.1-29.5.4.. Lin et al.; "Adaptive Nonlinear Decision Feedback Equalization with Channel Estimation and Timing Recovery in Digital Magnetic Recording Systems"; IEEE Trans. on Circuits and Systems-II; Analog and Digital Signal Processing, vol. 42, No. 3, Mar.1995; pp. 196-206.. Abstract: A method and apparatus for recovering a timing phase and frequency of a sampling clock signal in a receiver are disclosed for determining a desired timing phase by minimizing a mean squared error due to uncancelled precursor intersymbol interference. A detected symbol error is correlated with a signal obtained from the received signal. This correlation function provides an approximate of the time instant where the mean squared error approaches its minimum at which point an unambiguous zero crossing of the correlation function signal is obtained. From such an unambiguous zero crossing, e.g., only one zero crossing, a desired sampling timing instant is determined. Claim: What is claimed is: 1. A timing recovery method in a digital communications system for determining a desired sampling instant in a digital receiver, comprising: sampling a received signal at a controlled sampling instant; filtering the sampled signal in a filter; equalizing the filtered signal; detecting a symbol value corresponding to the sampled signal using the equalized signal; determining an error between the equalized signal and the detected symbol; controlling subsequent sampling instants by correlating the error with an unequalized signal obtained from the filter; and adjusting the sampling instant to minimize a magnitude of a correlation result. 2. The method in claim 1, wherein the error includes uncancelled precursor intersymbol interference of the received signal. 3. The method in claim 1, wherein the controlling step includes: adjusting the timing instant based on whether a correlation result is a positive or a negative value such that the timing instant is advanced if the correlation result is one of the positive or negative value and retarded if the correlationresult is the other of the positive or negative value. 4. The method in claim 1, wherein the controlling step includes: correlating a sign of the error with a sign of another signal, and adjusting the sampling instant according to a sign of the correlation. 5. A timing recovery method in a digital communications system for determining a desired sampling instant in a digital receiver, comprising: sampling a received signal at a controlled sampling instant; detecting a symbol value corresponding to the sampled signal; determining an error between the sampled signal and the detected symbol; controlling subsequent sampling instants using the error including correlating the error with another signal; and adjusting the sampling instant according to a correlation result, wherein the correlation produces only one zero crossing from which the desired sampling instant is determined. 6. 
A timing recovery method in a digital communications system for extracting a desired phase of a sampling clock signal in a receiver, comprising: sampling a received signal at controlled timing instants and converting the received signal into a digital signal; determining a timing recovery correlation function from the received signal that correlates a first signal which is based on an error between the sampled signal and a value detected for the sampled signal with a second signal obtained from thesampled signal; and minimizing a magnitude of the timing correlation function to provide an unambiguous zero crossing of the received signal from which a desired sampling timing instant is determined. 7. The method in claim 6, wherein the second signal is obtained from the received signal before the received digital signal is processed in the processing step. 8. The method in claim 6, wherein the processing step includes filtering the received signal in a digital filter and equalizing the filtered signal, and wherein the second signal is obtained from the digital filter before the received digital signal is equalized such that the correlation of the first signal and the second signal provides a mean or an approximate mean squared error value thataccounts for uncancelled precursor intersymbol interference of the received signal. 9. The method in claim 8, wherein the second signal is a weighted combination of the signal input to the digital filter and one or more earlier filter input signals. 10. The method in claim 8, wherein the second signal is a combination of the first and second earlier received filtered signals. 11. The method in claim 6, further comprising: filtering the received signal in a digital filter; and equalizing the filtered signal, wherein the first signal is a combination of the determined error and a previously determined error and the second signal is obtained from the digital filter. 12. The method in claim 11, wherein the digital filter includes a predetermined number of delay stages and the second signal is an earlier signal input to the digital filter which is output from one of the predetermined number of delay stages. 13. The method in claim 11, wherein the second signal is a delayed version of a weighted combination of the signal input to the digital filter and one or more earlier filter input 14. The method in claim 6, wherein the second signal is selected so that the correlation provides the unambiguous zero crossing. 15. The method in claim 6, further comprising: minimizing a magnitude of the correlation to obtain an optimal or near sampling timing instant. 16. The method in claim 6, wherein a sign of the correlation determines whether the phase of the timing needs to be advanced or retarded. 17. The method in claim 6, wherein the processing step includes: filtering the received signal with a digital filter to suppress a precursor portion of the received signal, the filtering including (1) multiplying the received signal by a first precursor coefficient thereby generating a first product, and (2)multiplying an earlier received signal having been delayed in one of plural filter delay stages by a second precursor coefficient thereby generating a second product, and wherein the second signal is a sum of the first and second products for the received signal and the first and second products for the earlier received signal. 18. 
The method in claim 6, further comprising: averaging results of the timing recovery function over a time interval; comparing the averaged results with a threshold; and generating either an advance signal or a retard signal to initiate advance and retard, respectively, of the sampling instant. 19. The method in claim 6, wherein the first signal or a sign of the first signal and the second signal or a sign of the second signal are correlated. 20. A timing recovery method in a digital communications system for extracting a desired phase of a sampling clock signal in a receiver, comprising: sampling a received signal at controlled timing instants and converting the received signal into a digital signal; determining a timing recovery correlation function from the received signal that provides an unambiguous zero crossing of the received signal from which a desired sampling timing instant is determined; processing the received signal to compensate for distortions; detecting a value of the received signal from the processed signal; determining an error between the detected value and the processed signal, the timing recovery function being a correlation between a first signal based on the error and a second signal; summing the first signal determined for a current sampling period with a previous first signal for a previous sampling period; and multiplying the sum by the second signal. 21. A timing recovery method in a digital communications system for extracting a desired phase of a sampling clock signal in a receiver, comprising: sampling a received signal at controlled timing instants and converting the received signal into a digital signal; determining a timing recovery correlation function from the received signal that provides an unambiguous zero crossing of the received signal from which a desired sampling timing instant is determined; processing the received signal to compensate for distortions; detecting a value of the received signal from the processed signal; determining an error between the detected value and the processed signal, the timing recovery function being a correlation between a first signal based on the error and a second signal; summing the second signal determined for a current sampling period with a previous second signal for a previous sampling period; and multiplying the sum by the first signal. 22. A timing recovery method in a digital communications system for extracting a desired phase of a sampling clock signal in a receiver, comprising: sampling a received signal at controlled timing instants and converting the received signal into a digital signal, and determining a timing recovery correlation function from the received signal that provides an unambiguous zero crossing of the received signal from which a desired sampling timing instant is determined, wherein the timing phase is not adjusted if a magnitude of the correlation does not exceed a threshold. 23. 
A data communications transceiver in a digital communications system comprising: a transmitter for transmitting digital information encoded as one of plural symbols over a communications channel; a receiver including: an analog to digital converter for sampling a received signal at controllable, predetermined timing instants, the digital information in the received signal being distorted as a result of transmission over the communications channel; a detector for comparing samples of the received signal to a threshold and generating a symbol corresponding each sample based on the comparison; and a timing recovery controller for determining an optimum or near optimum sampling instant using a nonambiguous zero crossing of a timing recovery correlation function that is based on a mean squared error or an approximate mean squared errorbetween the sample and its corresponding detected symbol and varying the phase of the sampling instant to a point where the mean or approximate mean squared error is at or near a minimum. 24. The data communications transceiver in claim 23, wherein the error represents at least in part uncancelled precursor intersymbol interference of the received signal. 25. The data communications transceiver method in claim 23, wherein the controlling step includes: correlating the error with another signal, and adjusting the timing instant based on whether a correlation result is a positive or negative value such that the timing instant is advanced if the correlation result is one of the positive or negative value and retarded if the correlation resultis the other of the positive or negative value. 26. The data communications transceiver in claim 23, wherein the timing recovery controller correlates the error with another signal, and adjusts the sampling instant according to a correlation result. 27. The data communications transceiver in claim 26 wherein in a steady state condition, the correlation result produces only one zero crossing from which the optimum or near optimum sampling instant is determined. 28. The data communications transceiver in claim 26, wherein the timing recovery controller correlates a sign of the error with a sign of another signal. 29. Apparatus for digital communications timing recovery, comprising: a sampler for sampling a received signal at controlled timing instants and converting the received signal into a digital signal; processing circuitry for processing the received signal including: a digital filter for filtering the digital signal, and an equalizer for equalizing the filtered signal; detector for detecting a value of the received signal from the processed signal; combiner for determining an error between the detected value and the processed signal; and a timing recovery controller for determining a timing recovery function from the received signal that provides an unambiguous zero crossing of the received signal from which a desired sampling timing instant is determined, wherein the timing recovery function is a correlation between first and second signals, the first signal being based on the error and the second signal being an unequalized signal obtained from the digital filter. 30. The apparatus in claim 29, wherein the second signal is a weighted combination of the signal input to the digital filter and one or more earlier received digital signals. 31. The apparatus in claim 29, wherein the second signal is obtained from a combination of first and second earlier filtered signals. 32. 
The apparatus in claim 29, wherein the digital filter includes a predetermined number of delay stages and the second signal is an earlier signal input to the digital filter which is output from one of the predetermined number of delay stages. 33. The apparatus in claim 29, wherein the second signal is selected so that the correlation provides the unambiguous zero crossing. 34. The apparatus in claim 29, wherein the timing recovery controller minimizes a magnitude of the correlation to obtain an optimal or near optimal sampling timing instant. 35. The apparatus in claim 29, wherein the timing recovery controller uses a sign of the correlation to determine whether the phase of the timing is to be advanced or retarded. 36. The apparatus in claim 35, wherein the timing phase is not adjusted if a magnitude of the correlation does not exceed a threshold. 37. The apparatus in claim 29, wherein the digital filter filters the received signal to suppress a precursor portion of the received signal by (1) multiplying the received signal by a first precursor coefficient thereby generating a first product, and (2) multiplying an earlier received signal having been delayed in one of plural filter delay stages by a second precursor coefficient thereby generating a second product, and wherein the second signal is a sum of the first and second products for the received signal and the first and second products for the earlier received signal. 38. The apparatus in claim 29, wherein the timing recovery controller averages results of the timing recovery function over a time interval, compares the averaged results with a threshold, and generates either an advance signal or a retard signal to initiate advance and retard, respectively, of the sampling instant. 39. The apparatus in claim 29, wherein the correlation is between the first signal or a sign of the first signal and the second signal or a sign of the second signal. 40. The apparatus in claim 29, wherein the second signal is a delayed version of a weighted combination of the signal input to the digital filter and one or more earlier filter input signals. 41. Apparatus for digital communications timing recovery, comprising: a sampler for sampling a received signal at controlled timing instants and converting the received signal into a digital signal; a timing recovery controller for determining a timing recovery function from the received signal that provides an unambiguous zero crossing of the received signal from which a desired sampling timing instant is determined; processing circuitry for processing the received signal to compensate for distortions; a detector for detecting a value of the received signal from the processed signal; a combiner for determining an error between the detected value and the processed signal, the timing recovery function being a correlation between first and second signals; and a digital filter for filtering the received signal, wherein the first signal is a combination of the error and a previously determined error and the second signal is obtained from the digital filter. 42. 
Apparatus for digital communications timing recovery, comprising: a sampler for sampling a received signal at controlled timing instants and converting the received signal into a digital signal; a timing recovery controller for determining a timing recovery function from the received signal that provides an unambiguous zero crossing of the received signal from which a desired sampling timing instant is determined; processing circuitry for processing the received signal to compensate for distortions; a detector for detecting a value of the received signal from the processed signal; a combiner for determining an error between the detected value and the processed signal, the timing recovery function being a correlation between a first signal based on the error and a second signal; a summer for summing the first signal determined for a current sampling period with a previous first signal for a previous sampling period; and a multiplier for multiplying the sum by the second signal. 43. Apparatus for digital communications timing recovery, comprising: a sampler for sampling a received signal at controlled timing instants and converting the received signal into a digital signal; a timing recovery controller for determining a timing recovery function from the received signal that provides an unambiguous zero crossing of the received signal from which a desired sampling timing instant is determined; processing circuitry for processing the received signal to compensate for distortions; a detector for detecting a value of the received signal from the processed signal; a combiner for determining an error between the detected value and the processed signal, the timing recovery function being a correlation between a first signal based on the error and a second signal; a summer for summing the second signal determined for a current sampling period with a previous second signal for a previous sampling period; and a multiplier for multiplying the sum by the first signal. 44. A data communications receiver in a digital communications system comprising: means for generating a clocking signal; means for sampling a received signal at predetermined timing instants in response to the clocking signal; means for determining a correlation between an error signal and a signal obtained from the received signal that provides only a single zero crossing at an optimal or near optimal timing instant for sampling the received signal; and means for minimizing the magnitude of the correlation; means for adjusting the generating means based on the minimized correlation. 45. The data communications receiver in claim 44, wherein the means for adjusting adjusts a phase of the clocking signal used by the means for sampling such that a magnitude of the correlation is minimized toward zero. 46. The data communications receiver in claim 45, further comprising: means for detecting a value of the received signal at a predetermined timing instant, wherein the means for determining includes: means for calculating an error between the received signal input to the means for detecting and the detected value output by the means for detecting; and means for correlating the error with at least some portion of the received signal thereby generating a correlated signal. 47. The data communications receiver in claim 46, wherein the means for adjusting adjusts a phase of the clocking signal so that a magnitude of the correlation is minimized toward zero and the received signal is sampled at a desired timing instant. 48. 
The data communications receiver in claim 47, wherein a sign of the correlation determines whether the phase of the clocking signal is advanced or retarded. Description: FIELD OF THE INVENTION This invention relates to high speed, digital data transmission systems, and in particular, to timing recovery in transceiver circuits. BACKGROUND OF THE INVENTION Communication over a digital subscriber line or other communications loop requires very low error or even error free transmission of coded binary data, e.g., a bit error rate (BER) equal to or less than 10.sup.-7 is required for use in the integrated services digital network (ISDN) basic access interface for subscriber loops. Such low BERs are difficult to obtain given unknown delays, attenuation, dispersion, noise, and intersymbol interference (ISI) introduced by and/or on the communications channel. An essential part of very low error transmission of coded binary data is symbol synchronization at the digital data receiver. In general, the receiver clock of a receiving transceiver interface must be continuously adjusted to track and compensate for frequency drift between the oscillator in the transmitter located at the opposite end of the communications loop and the receiver clock as well as to track and compensate for changes in the transmission media. Digital receivers rely on digital processing to recover the transmitted digital information. In other words, the received signal is sampled at discrete time intervals and converted to its digital representation. As a result, a timing recovery function is required to synchronize the receiver clock so that received symbols can be sampled at an appropriate sampling instance (e.g., an optimum sampling instance would be at the peak of the sampled pulse for Pulse Amplitude Modulated (PAM) codes). This task is further complicated because the received pulses are distorted. One source of disturbance is the coupling of transmitted pulses from the transmitting portion of the transceiver directly across a hybrid circuit; these pulses are detected at the receiver portion of the transceiver as echoes. Such transmit pulse echoes are typically removed by an echo canceler (e.g., a transversal filter which models the transmit signal and subtracts it from the received signal). But even after the echo canceler removes the echoes of transmitted pulses, the received pulses are still distorted as a result of the transmission path characteristics and intersymbol interference as mentioned above. The result is that relatively square, narrow pulses transmitted from the far end transceiver are "smeared," (i.e., widened and distorted) by the time they are received at the near end transceiver. To detect the value of the received pulses, the receiver performs a number of functions in addition to echo cancellation. For example, the receiver tries to cancel intersymbol interference (ISI) caused by symbol pulses received before the current symbol pulse of interest. Such ISI is caused by the delay and pulse shaping characteristics of the transmission path such that when symbols are transmitted, the "tail" of one symbol pulse extends into the time period of the next transmitted symbol pulse, making it difficult to determine the correct amplitude of the pulse actually transmitted during that symbol period. High speed digital communication systems may employ decision feedback equalizers (DFE) to suppress ISI. 
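As an illustration of the decision feedback equalization idea just mentioned (this sketch is not part of the patent disclosure; the function name, tap vector and 2B1Q levels are assumptions chosen for the example), the following Python fragment shows how estimated postcursor ISI can be subtracted from each baud-rate sample using past symbol decisions before the sample is sliced to the nearest nominal level:

    import numpy as np

    def dfe_detect(samples, postcursor_taps, levels=(-3.0, -1.0, 1.0, 3.0)):
        # samples: echo-cancelled, gain-adjusted baud-rate samples
        # postcursor_taps: estimated postcursor impulse response h[1..N]
        # levels: nominal 2B1Q symbol amplitudes
        taps = np.asarray(postcursor_taps, dtype=float)
        past = np.zeros_like(taps)              # most recent decisions, newest first
        decisions, errors = [], []
        for x in samples:
            isi = np.dot(taps, past)            # postcursor ISI predicted from old decisions
            equalized = x - isi                 # subtraction of the DFE correction signal
            symbol = min(levels, key=lambda a: abs(equalized - a))  # nearest-level slicer
            errors.append(equalized - symbol)   # decision error, reused for adaptation/timing
            decisions.append(symbol)
            past = np.roll(past, 1)
            past[0] = symbol
        return np.array(decisions), np.array(errors)

Only the subtraction of past decisions is essential to the idea; how the tap values are obtained and adapted is described later in the detailed description.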
After performing various corrective/compensating functions (some of which were briefly described above), the receiver then decides (1) where in time and (2) at what amplitude to quantize or "slice" the received signals to convert them back to desired pulse or symbol values. In order to perform these slicing functions, the receiver must determine the timing instant to sample the signal as well as determine the signal level at that sampling instant. Since digital signal processing circuitry cost and complexity typically increase with sampling rate, it is desirable and typical to sample the incoming signal at the lowest possible rate, i.e., the baud rate. Accordingly, the timing phase is crucial in minimizing errors due to noise and intersymbol interference. Timing recovery is further complicated if a "baud rate" timing recovery algorithm is employed where received symbol pulses are sampled only once per symbol or baud. Such a sampling rate timing recovery algorithm was proposed by Mueller and Muller in "Timing Recovery in Digital Synchronous Data Receivers," IEEE Trans. Comm., Vol. COM-24, No. 5, pp. 516-531, May 1976. The Mueller and Muller timing recovery algorithm selects a "timing function" which is zero at the optimum sampling phase. The objective is to find the phase that makes this timing function equal to zero. Detecting when the function is zero is accomplished by detecting when the function's amplitude crosses zero, i.e., a zero crossing. This objective is only theoretical, however, because such a timing function cannot be computed exactly and has to be estimated from the received signal samples. The sampling phase is then adjusted until the estimate is equal to zero. In practice, derivation/estimation of the timing function is quite difficult. For example, previously proposed timing function estimates are expressed as an equivalent system of equations. Many such equations do not have a unique solution and become intractable when the number of equations exceeds 3. Another and perhaps more serious problem is that the Mueller et al timing function estimates may not converge to a single zero crossing for many transmission paths and instead exhibit multiple zero crossings. Thus, false timing instants may easily be selected, which may adversely influence the timing recovery process. The problem of stably recovering timing information from an incoming digital signal sampled at the baud rate therefore remains. SUMMARY OF THE INVENTION It is an object of the present invention to provide a stable timing recovery algorithm that permits accurate sampling of incoming digital signals at the symbol baud rate. A further object of the present invention is to achieve clock synchronization between transmitter and receiver clocks as well as to track and adjust phase drift between those clocks using an efficient timing recovery algorithm that can be implemented in very large scale integrated (VLSI) circuitry at low cost. It is a further object of the present invention to provide a timing recovery algorithm which selects the timing phase based on the characteristics of the communications channel to minimize bit error rate to a very low value. A still further object of the present invention is to provide a timing recovery algorithm that cohesively interacts with other receiver elements/parameters such as the decision feedback equalizer. 
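For orientation only, a commonly quoted decision-directed form of the baud-rate timing function associated with the Mueller and Muller approach discussed above is a_{k-1}*y_k - a_k*y_{k-1}; the exact estimators analyzed in that paper differ, so the small Python sketch below is an assumption illustrating the general idea of driving a baud-rate timing function toward zero, not a statement of the prior-art algorithm itself:

    def mueller_muller_estimate(y_k, y_km1, a_k, a_km1):
        # y_k, y_km1: current and previous baud-rate samples
        # a_k, a_km1: corresponding detected symbols
        # The receiver adjusts its sampling phase to drive this estimate toward zero;
        # as noted in the background above, such estimates can exhibit multiple zero
        # crossings on some channels, which is the problem the invention addresses.
        return a_km1 * y_k - a_k * y_km1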
To this end, a timing recovery function is disclosed for determining a desired timing phase by minimizing a mean squared error due to uncancelled precursor intersymbol interference. In general, the error is calculated as a difference between theequalized signal and the corresponding detected symbol. The optimum or near optimum timing phase for sampling is achieved when the mean squared error approaches its minimum. A timing recovery method in a digital communications system is disclosed for recovering a timing phase of a sampling clock signal in a receiver. A received signal is sampled at controlled timing instants to convert the received signal into adigital signal. A timing recovery function is generated using a correlation between two signals that produces an unambiguous zero crossing. With such an unambiguous zero crossing, e.g., only one zero crossing, a desired and reliably accurate samplingtiming instant is determined. The received signal is processed to compensate for various distortions, and a value of the received signal from the processed signal is detected in a signal detector. Then, an error between the input to the detector and the detector output iscalculated. The timing recovery function is defined as the correlation between the error and some other signal. That other signal is selected so that the correlation provides the unambiguous zero crossing. Typically, the other signal is a signalobtained or otherwise derived from the received signal. The correlation is zero at the optimal or near optimal sampling timing instant. In one implementation, the "sign" of the correlation result, i.e., a positive or negative correlation result,determines whether the phase of the timing needs to be advanced or retarded. The present invention also describes a data communications transceiver in a digital communications system for implementing the timing recovery technique includes a transmitter for transmitting digital information encoded as one of plural symbolsover a communications channel and a receiver. The receiver includes an analog to digital converter for sampling a received signal at controllable, predetermined timing instants. A detector compares each signal sample to a threshold and generates acorresponding symbol based on the comparison. A timing recovery controller evaluates the correlation between an error and a signal obtained from the received signal that provides a single zero crossing at an optimal or near optimal timing instant. Inone embodiment, the phase of the receiver clocking signal is adjusted so that the sum of the squares of precursor error values is effectively minimized. The receiver symbol detector detects a value of the received signal at a predetermined timing instant. The error is calculated between the received signal input to the detector and the detected value output by the detector. The timing recoverycontroller then correlates that error with some combination of the received signal thereby generating a signal that approximates the sum of squares of the uncancelled precursor intersymbol interference values. By providing a timing recovery correlation function that produces an unambiguous zero crossing, the present invention produces a reference point that can be readily detected. 
The timing recovery clock is advanced, retarded, or maintained basedon the correlation product sign, e.g., a positive correlation instructs retarding the clock, a negative correlation instructs advancing the clock, and a correlation magnitude value (positive or negative) below a threshold value instructs maintaining thecurrent clock phase. That reference point gives an optimal or near optimal sampling instant for sampling the pulse. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description of the invention and the accompanying drawings which set forth an illustrative embodiment in whichthe principles of the invention are utilized. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a function block diagram of an example of a digital ISDN communications system in which the present invention may be applied; FIG. 2 is a function block diagram of a U-type transceiver that may be used in an ISDN; FIG. 3 is a graph of a typical symbol pulse when initially generated by a transmitter; FIG. 4 is a graph of a typical received pulse after transformer and receive filtering; FIG. 5 is a graph of a typical received pulse after filtering in a feedforward precursor filter in a receiver; FIG. 6 is a graph of a theoretically computed autocorrelation function .epsilon..sub.k (solid line) and its derivative (dotted line); FIG. 7 is another block diagram of the U-type transceiver of FIG. 2 with additional details of the signals used in an example embodiment of the timing recovery technique in accordance with the present invention; FIG. 8 is a comparative graph evaluating an example of a timing function in accordance with the present invention; FIG. 9 is a block diagram showing in more detail the timing recovery unit shown in FIG. 7; and FIGS. 10-13 are diagrams showing example signal shaping approaches for providing various suitable correlation signals used in various example embodiments of the present invention. DETAILED DESCRIPTION OF THE DRAWINGS In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular interfaces, circuits, techniques, etc. in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known methods, devices, and circuits areomitted so as not to obscure the description of the present invention with unnecessary detail. FIG. 1 shows an overall block diagram of one data communications environment, i.e., the integrated services digital network (ISDN) 10, to which the present invention may be applied. A building 12 may, for example, include telephone subscribers(16 and 18) and data subscribers (personal computer 14) linked over a local area network to a U-transceiver 20 (via an S-transceiver not shown). The U-transceiver 20 is connected by a 2-wire "subscriber loop" transmission line 22 to anotherU-transceiver 26 at telephone switching and services network 24 which provides digital switching and other messaging/call processing services. One important function of the U-transceivers 20 and 26 is the accurate and stable recovery of timinginformation from an incoming digital signal sampled at the baud rate so that symbol synchronization is achieved between the two transceivers. 
For purposes of illustration and description only, the present invention is described hereafter in the context of such an ISDN network that uses U-transceivers and 2B1Q line codes. In the ISDN, the 2-binary, 1-quaternary (2B1Q) line code is usedwhich employs a four level, pulse amplitude modulation (PAM), non-redundant code. Each pair of binary bits of information to be transmitted is converted to a quaternary symbol (-3, -1, +1 and +3). For example, "00" is coded to a -3, "01" is coded to a-1, "10" is coded to a +3, and "11" is coded to a +1. However, as will be appreciated by those skilled in the art, the present invention may be applied to other types of data communication networks and other types of line codes/symbols. Reference is now made to FIG. 2 which illustrates a U-interface transceiver 30 comprising a transmitter and receiver. Again, although the present application is being described in conjunction with a U-interface transceiver for use in conjunctionwith an ISDN digital communications network, the present invention of course could be applied to other high speed data environments such as high bit rate digital subscriber lines (HDSL), etc. Binary data for transmission is applied to a scrambler 31which encodes the data into pseudo-random bit stream formatted by a framer 32 into frames of 240 bits or 120 (2B1Q) symbols in accordance with ISDN specification T1D1. The framer inserts a 9-symbol signalling word used for frame synchronization in eachframe of data so that 111 symbols are left for the scrambled data. The framed and scrambled binary signal is applied to a 2B1Q encoder where it is converted into a parallel format by a serial-to-parallel converter which produces digits in the combinations of 00, 01, 10, and 11. Digit-to-symbol mapping in theencoder produces the four corresponding symbol levels -1, +1, -3, and +3. Digital-to-analog converter (DAC) 38 converts the encoded signal to a voltage level suitable for application to the hybrid 44 which is connected to subscriber loop 45. Thetransmit filter 40 removes high frequencies from the digital pulses output by the digital-to-analog converter 38 to reduce cross-talk and electromagnetic interference that occur during transmission over the subscriber loop 45. Incoming signals from the subscriber loop 45 are transformed in hybrid 44 and processed by the receiver which, at a general level, synchronizes its receiver clock with the transmitter clock (not shown) so that the received signal can be sampledat the symbol/baud transmission rate, i.e., the rate at which symbols were transmitted at the far end of the loop. More specifically, the receiver includes an anti-aliasing filter 46 which removes high frequencies. The filtered signal is converted intoa digital format using analog-to-digital converter (ADC) 48. The sampling rate of the analog-to-digital converter 48, which is tied to the receiver clock, is adjusted using a control signal from timing recovery circuit 70. For example, A-to-D converter48 may sample at a sampling rate of 80 kHz even though it has a built-in higher frequency clock permitting phase adjustment in smaller intervals, e.g., a period of 15.36 MHz. The control signal from timing recovery circuit 70 adjusts the phase of thebaud rate recovery clock by stepping the clock signal forward or backward. The digitized samples are filtered by a receive filter 50, the output of which is provided to summing block 52. 
Receive filter 50 increases the signal-to-noise ratio of the received signal by suppressing the "tail" of the received signal. Theother input to summer 52 is an output from echo canceler 36. As described above, pulses transmitted onto subscriber loop 45 result in echo on the receiver side of the hybrid 44 due to impedance mismatch. Unfortunately, it is difficult to separate theechoes of these transmitted pulses (using for example a filter) from the pulses being received from subscriber loop 45. Accordingly, echo canceler 36 generates a replica of the transmitted pulse waveform and subtracts it at summer 52 from the receivedpulses. The echo canceler is adjusted based upon an error signal .epsilon. between the received symbol and the detected symbol output at summer 66. Such an adaptive echo canceler is typically realized as a traversal, finite impulse response (FIR)filter whose impulse response is adapted to the impulse response of the echo path. The error .epsilon. is used to adjust the filter coefficients to "converge" the filter's response to the impulse response model of the communications channel. The echo cancelled signal is processed by adaptive gain controller 54 to adjust the amplitude to levels specified for the symbols in the 2B1Q line code. In general, the gain applied to the input signal is adapted by comparison of the inputsignal to fixed amplitude thresholds and increasing or decreasing the gain as necessary to achieve the amplitudes standardized for symbols -3, -1, +1, and +3. The output of the adaptive gain controller is provided to a feedforward filter 56 which inphysical terms enhances high frequencies of pulses in the received signal which translates into an increase in the steepness or slope of the rising edge of the digital pulse. In functional terms, known digital communications systems refer to thisfeedforward filter 56 as a precursor filter because its purpose is to suppress the precursor portion of received pulses. In this regard, reference is made to the pulse waveforms shown in FIGS. 3-5. FIG. 3 shows a typical, isolated, transmitted pulse waveform before it is distorted over the transmission path. FIG. 4 illustrates a typical, isolated, received pulseafter filtering in receive filter 50 and echo cancellation in summer 52. The pulse amplitude is significantly attenuated compared with the transmitted pulse in FIG. 3 and the overall pulse width is significantly increased. FIG. 5 shows the pulse afterfiltering by the feed forward filter 56 with increased steepness/slope of the rising edge of the received pulse. In FIG. 4, the initial portion of the pulse before it starts to rise is flat at zero amplitude. The optimal time to sample the pulse amplitude and measure its value at or near its peak is one symbol period "T" after the pulse begins its steeprise from zero amplitude to its peak amplitude in order to avoid precursor interference. With the initial flat portion shown in FIG. 4, it is difficult in practice to detect that initial point in time when the pulse starts to steeply rise and thereforedetect the point from which one symbol period should be measured. Furthermore, for pulses with a slow rise rate, as in the case of long transmission loops, the pulse amplitude at the sampling instant defined as above, will be much less than peakamplitude, resulting in deterioration of SNR due to other noise sources. One advantageous by-product of the feedforward/precursor filter in this regard is that it introduces precursor zero crossings. 
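To make the adaptive echo cancellation described above more concrete, the following Python sketch shows one transversal-filter update step; the LMS-style correction, the step size and the names are assumptions for illustration, since the text specifies only that the coefficients are adjusted using the error signal from summer 66 until the filter models the echo path:

    import numpy as np

    def echo_canceler_step(coeffs, tx_history, rx_sample, err, step=1e-3):
        # coeffs: FIR coefficients modelling the echo path (numpy array)
        # tx_history: most recently transmitted symbols, newest first (numpy array)
        # rx_sample: received sample still containing the transmit echo
        # err: decision error from the detector (summer 66 in the description)
        # step: adaptation constant (an LMS-style update is assumed here)
        coeffs = np.asarray(coeffs, dtype=float)
        tx_history = np.asarray(tx_history, dtype=float)
        echo_replica = np.dot(coeffs, tx_history)   # replica of the coupled transmit waveform
        cancelled = rx_sample - echo_replica        # echo removed, as at summer 52
        updated = coeffs + step * err * tx_history  # correlate the error with past transmit symbols
        return cancelled, updated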
In the example waveform shown in FIG. 5, the pulse has two readily detectable zero crossings after precursor filtering, approximately spaced by the sampling interval T. Zero crossing 82 in particular defines a subsequent sampling position 84 (the main cursor sampling position) one sampling period T after the zero crossing 82. As can be seen, the main cursor sampling point occurs slightly before the peak of the pulse waveform 86. Nonetheless, the main cursor sample is sufficiently close to the pulse peak to provide an accurate pulse amplitude sample. For purposes of the present description, the term "main cursor" is the pulse height or amplitude at the sampling position 84. A "precursor" refers to pulse heights at sampling positions just before the main cursor sampling position 84. Thus, the second precursor corresponds to the pulse height at a second sampling position 80 before the main cursor sampling position 84. The first precursor corresponds to the pulse height at a first sampling position 82 immediately preceding the main cursor sampling position 84. Ideally, the distance between the precursor zero crossings 80 and 82 as well as the distance between the first precursor zero crossing 82 and the main cursor sampling position 84 should be spaced by the sampling interval T, corresponding, for example in baud rate sampling, to the symbol transmission period. Achieving (even approximately) such cursor spacing permits sampling at points where precursor intersymbol interference (ISI) caused by preceding and succeeding pulses is near zero. To eliminate the effect of such precursor ISI, the sampling instants should be aligned with the zero crossings of the precursors. In practice, however, it is difficult to obtain such spacing for all transmission paths on the network using a single feedforward/precursor filter. Consequently, it is not possible to completely eliminate precursor ISI. However, a satisfactory result is achieved when this condition is at least approximated. As is described further below, the timing recovery algorithm in accordance with the present invention uses this residual precursor ISI to adjust the phase of the receiver sampling clock so that the main cursor is sampled at the point where the mean squared error or the approximate mean squared error due to the residual precursor ISI is minimized. This point corresponds to the optimal or near optimal sampling time instant at or sufficiently near the pulse peak. While the present invention is described in terms of mean squared error due to residual precursor ISI, the present invention is not limited to residual precursor ISI. Other received signals or portions of received signals, e.g., the post cursor ISI, may be used to calculate the timing function. Referring again to FIG. 2, a correction signal from a decision feedback equalizer (DFE) 68 is subtracted from the filtered sample at summer 58 to provide an equalized version of the pulse at symbol detector 60. As a result of the channel characteristics of the subscriber loop and signal processing, the "tail" of the single symbol pulse persists into a large number of symbol sample periods after the main cursor is sampled and therefore interferes with the subsequent symbols. This intersymbol interference caused by the tail of the symbol pulse is removed by the decision feedback equalizer 68. The decision feedback equalizer is implemented as a digital transversal filter and is adapted much in the same manner as the echo canceler. The detector 60 converts the corrected pulses of the received signal to symbol logic levels. 
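The relationship between sampling phase and residual precursor ISI described in the preceding paragraphs can be illustrated with a small search over candidate phases of an oversampled pulse. This is only a sketch of the underlying idea; the names, the oversampling assumption and the brute-force search are illustrative and are not the patent's control loop, which operates at the baud rate:

    import numpy as np

    def phase_minimizing_precursor_isi(pulse, sps, peak_index):
        # pulse: oversampled single-pulse response after the feedforward filter,
        #        assumed to contain at least two symbol periods before the peak
        # sps: samples per symbol (oversampling factor)
        # peak_index: index of the pulse peak within `pulse`
        best_phase, best_cost = None, np.inf
        for phase in range(sps):
            main = peak_index - sps // 2 + phase     # candidate main-cursor sample
            h_m1 = pulse[main - sps]                 # first precursor, one period earlier
            h_m2 = pulse[main - 2 * sps]             # second precursor, two periods earlier
            cost = h_m1 ** 2 + h_m2 ** 2             # squared precursor ISI to be minimized
            if cost < best_cost:
                best_phase, best_cost = phase, cost
        return best_phase, best_cost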
The timing recovery circuit 70 then must choose the correct sampling phase so that pulse values are detected. As mentioned above, a suitable sampling instant is determined by timing recovery circuit 70 at the instant where the mean squared error due to precursor interference reaches its minimum corresponding to a lowest probability of error. Thereafter, the timing recovery unit 70 tracks the changes in the phase of the received signal to ensure synchronization with the transmitted signals. To that end, an error signal .epsilon. is generated at summer 66 based on the detector input D.sub.i and the detector output D.sub.o as follows: .epsilon.=D.sub.i -D.sub.o. In a simplified mathematical expression, the detector input D.sub.i can be approximated by the following equation: D.sub.i =h.sub.0 *a.sub.k +h.sub.-1 *a.sub.k+1 +h.sub.-2 *a.sub.k+2 (1), where k is the current sampling instant, a is the symbol amplitude value (which for a 2B1Q code corresponds to .+-.1 and .+-.3), h.sub.0 is the main cursor amplitude, h.sub.-1 is the first precursor amplitude, and h.sub.-2 is the second precursor amplitude, all measured at time k. The output of the detector is of course a selected one of the .+-.1 and .+-.3 symbols. The first term h.sub.0 *a.sub.k corresponds to the main cursor of the signal to be detected and therefore is essentially D.sub.o. The last two terms correspond to the error generated by the ISI overlap caused by the first and second precursors with the future two symbols a.sub.k+1 and a.sub.k+2. Stated another way, the sum of these first and second precursor terms represents the degree to which the precursor crossings (at least for the first and second precursors used in our example) do not correspond with the sampling times. Of course, the two precursor example is only that, and the present invention may be implemented using any number of precursor terms. The inventors of the present invention recognized therefore that the error terms in the above equation (1) provide information that can be advantageously employed to adjust the sampling time to the optimal or near optimal value. If the error generated by precursor ISI is driven to zero, h.sub.-1 and h.sub.-2 are sampled at zero crossings, which as described above sets up a suitable reference from the first precursor zero crossing for sampling the main cursor value h.sub.0 one period T after the first precursor zero crossing at an amplitude at or sufficiently near the pulse peak. If the first and second precursor values h.sub.-1 and h.sub.-2 are not zero or nearly zero, then the precursors are not being sampled at or near a zero crossing and the timing phase needs to be adjusted to move the error closer towards zero. If the first and second precursor values h.sub.-1 and h.sub.-2 cannot be zero simultaneously because they are not spaced exactly one sampling interval T, the sampling instance should be adjusted to move the error as close as possible to zero. The timing recovery function of the present invention is different from conventional timing estimation functions such as proposed in the Mueller et al article described above. Those timing functions typically depend upon both the precursor and main cursor and do not use an error signal as described above. Moreover, none of the Mueller et al based timing estimation techniques employ correlation properties to extract timing recovery information. Based on the knowledge that the precursor values (i.e., h.sub.-1, h.sub.-2, etc.) 
are heavily influenced by the choice of sampling phase (i.e., when the sampling instants occur at or near zero crossings the precursors have zero or near-zero values), the error term .epsilon..sub.k is correlated with some "other" selected signal, labelled f.sub.k for convenience, representative of the received signal to generate a mean or an approximate mean squared error value. The reason why the mean squared error is used rather than just the error .epsilon..sub.k is because, recalling from the block diagram in FIG. 2, the transmitted symbols are scrambled, which means that in equation (1) above the symbol variables a.sub.k+1 and a.sub.k+2 are uncorrelated. As such, the error .epsilon..sub.k provides no useful information. However, useful timing information can at least in theory be derived from the square of the error as will be explained in conjunction with FIG. 6. FIG. 6 shows as a solid line the mean squared error (i.e., the autocorrelation of .epsilon..sub.k). Note that the solid line is plotted with the horizontal axis representing distance from the maximum or peak pulse value and relative amplitude on the vertical axis. The mean squared error achieves a minimum near the maximum or peak pulse value, and therefore, it may be used to detect the optimal or near optimal sampling instant of the received pulse. Unfortunately, the mean squared error term keeps the same sign (i.e., it does not cross zero) irrespective of whether the signal is sampled before or after the optimum sampling instance. In other words, without a zero crossing from positive to negative or negative to positive, it is difficult to determine whether to advance or retard the receiver sampling clock phase. What is needed is a clear, easily detectable zero crossing at or about distance "0" shown in FIG. 6. If the derivative of the mean squared error term is calculated (see the dashed line in FIG. 6), the zero crossing near distance "0" could be used, but the derivative of the mean squared error function results in multiple zero crossings with all but one being "false" zero crossings. Consequently, in some circumstances, the timing recovery algorithm may become "locked" on a false sampling instance and may prevent the decision feedback equalizer from converging. The present invention therefore correlates the error .epsilon..sub.k not just with itself alone but instead with some other signal derived or obtained from the received signal which includes the error term plus additional information about the signal, resulting in additional cross-correlation components. The additional cross-correlation components may be used to remove the false zero crossings from the correlation product. For purposes of this description, the "other signal" is defined as a signal which when correlated with the error .epsilon..sub.k produces an unambiguous zero crossing, e.g., a single zero crossing, at or near the optimal sampling instant. This other signal can be obtained from a signal containing uncancelled precursor or from some other suitable signal. For simplicity of description and not limitation, the following other signal examples are obtained from the feedforward precursor filter and therefore are based on the uncancelled precursor. A first embodiment of the invention is described where the other signal to be used in the correlation is labelled f.sub.k : f.sub.k =.mu..sub.k +.mu..sub.k-1 (3), where .mu..sub.k is obtained from the feedforward filter 56 at the point shown in FIG. 7 and .mu..sub.k-1 is a delayed version of .mu..sub.k. 
The timing recovery correlation function is then defined as: .DELTA..theta.=E(f.sub.k * .epsilon..sub.k), where .DELTA..theta. is the timing adjustment. FIG. 8 plots as a solid line this correlation function using similar axes as used in FIG. 6. Advantageously, the solid line has only one zero crossing at approximately distance "0" from the maximum value of the signal. In other words, there are no false zero crossings. Thus, the other correlation signal f.sub.k should be carefully selected and tested to ensure that the timing correlation function produces a single zero crossing. Accordingly, the present invention generates a timing recovery correlation function such that when the error is reduced toward zero, the sampling period is at the optimal or near optimal point. The optimal or near optimal timing phase is that which minimizes the mean squared error, due to for example the uncancelled precursor intersymbol interference, which is approximately achieved when the correlation between the error and the other signal f.sub.k is at zero or within a "deadband zone" explained further below. The correlation function zero crossing then determines the steady state locations of the desired sampling timing instants. In implementing this first example embodiment of the timing recovery correlation function, the timing recovery correlation function .DELTA..theta.=E(f.sub.k * .epsilon..sub.k) is calculated for a current received pulse and provides a timing phase adjustment signal to the receiver sampling clock. Optimally (although not necessarily), only the sign or direction of that correlated timing phase adjustment value .DELTA..theta. is used to correct timing phase. For example, if the .DELTA..theta. value is negative, the clock is "lagging," and the timing recovery circuit 70 generates an "advance" signal for advancing the phase of the sampling clock provided to the A-to-D converter 48 and echo canceler 36 by an incremental time value. If the value is positive, then the clock is "leading," and the timing recovery circuit 70 outputs a "retard" signal which delays the clock by an incremental time value. If the calculated timing phase adjustment value is zero or less than a deadband threshold, a "hold" signal is output from the timing recovery circuit 70 meaning that the clock is not adjusted for the time being. Since the transmission channel characteristics on a subscriber loop usually change slowly, it is desirable though not necessary to adjust the receiver sampling clock only in small steps (the increments noted above), and only after a phase correction in a particular direction is detected over many samples, i.e., an integrating time period. For example, a 2000 sample time period is appropriate. The correlation function used for timing recovery, which minimizes the mean squared error as given by the equation .DELTA..theta.=E(f.sub.k * .epsilon..sub.k), has significant advantages. First, the cross-correlation function exhibits only one zero crossing, thereby avoiding the possibility of locking on a sampling instance other than the optimal or near optimal sampling instance or the risk of locking the system in an uncontrolled oscillatory state. A second advantage is that as a result of the single zero crossing, the timing recovery correlation function converges unconditionally to the optimal or near optimal sampling instant regardless of the initial sampling point. 
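A minimal Python sketch of the correlation-based phase decision just described is given below; it assumes f.sub.k =.mu..sub.k +.mu..sub.k-1, averages the product with the detector error over one integrating period, and maps the averaged value through a deadband to an advance, retard or hold decision. The variable names, the optional sign-only approximation and the exact sign convention are assumptions for illustration rather than the patent's implementation:

    def timing_phase_decision(mu, eps, deadband, sign_only=False):
        # mu: successive feedforward-filter signals mu_k over one integrating period
        #     (on the order of 2000 baud-rate samples)
        # eps: corresponding detector errors epsilon_k
        # deadband: threshold of the "hold" (dead zone) region
        # sign_only: approximate the product by signs only, as the text permits
        acc = 0.0
        for k in range(1, len(mu)):
            f_k = mu[k] + mu[k - 1]                  # "other" signal f_k = mu_k + mu_{k-1}
            e_k = eps[k]
            if sign_only:
                acc += (1.0 if f_k >= 0 else -1.0) * (1.0 if e_k >= 0 else -1.0)
            else:
                acc += f_k * e_k                     # correlate the error with f_k
        avg = acc / (len(mu) - 1)                    # integrate-and-dump estimate of E(f_k * eps_k)
        if abs(avg) <= deadband:
            return "hold"                            # within the dead zone: leave the clock alone
        return "retard" if avg > 0 else "advance"    # positive: clock leading; negative: lagging

In a real design the returned decision would step a digital voltage-controlled oscillator or counter by one phase increment per integrating period, which is the small-step adjustment strategy described in the text.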
The output from the adaptive gain control unit 54, which includes a number of overlapping symbols, is processed in feedforward filter 56. The filter input and delay elements z.sup.-1 employed in filter 56 provide signals s.sub.k, s.sub.k+1, and s.sub.k+2 at the current sampling instant k. The signal s.sub.k+2 is multiplied by a precursor coefficient or "tap" pc.sub.2 while the signal s.sub.k+1 is multiplied by precursor coefficient or tap pc.sub.1. While a two tap, feedforward filter (corresponding to taps pc.sub.1 and pc.sub.2) is shown and described below for purposes of explanation, those skilled in the art will appreciate that a one tap filter or a more than two tap filter could also be used if desired. The two weighted signals are summed together to provide a signal .mu..sub.k which is then summed with signal s.sub.k to generate the filtered signal x.sub.k which looks like the typical received and filtered signal shown in FIG. 5. The output of detector 60, a.sub.k, is fed into decision feedback equalizer 68 to provide an estimate of intersymbol interference. The intersymbol interference is removed at summer 58 shown in FIG. 7 so that the current symbol pulse can be accurately detected by detector 60. The output from the detector a.sub.k is also subtracted from the input of the detector at summer 66 to provide the error signal .epsilon..sub.k, which is used to update the coefficients of the equalizer 68 and is also provided to the timing recovery block 70. As shown in FIG. 10, described later, the filter signal .mu..sub.k is provided to signal shaper 90 to provide the other signal f.sub.k according to formula (3) set forth above, i.e., f.sub.k =.mu..sub.k +.mu..sub.k-1. The signal to be correlated f.sub.k is then provided to timing recovery block 70 which performs the cross-correlation between the error signal .epsilon..sub.k and the other signal to be correlated f.sub.k. The input signals to the function generator are defined as set forth above. Since the output from the feed forward filter x.sub.k equals s.sub.k +.mu..sub.k, one can readily see that signal .mu..sub.k is very much related to the first and second precursors of symbol x.sub.k. Thus, when f.sub.k is correlated with .epsilon..sub.k, only the precursor portions of both signals correlate in a steady state, i.e., are approximately squared. FIG. 9 shows the main stages of the timing recovery circuit 70 in block diagram form. Signal .mu..sub.k from the feedforward filter 56 is processed by signal shaper 90 which essentially provides a signal shaping function that adds .mu..sub.k to its delayed version .mu..sub.k-1. FIG. 10 shows an optional sign block which may be used to simplify the correlation calculation. One or both of the correlated signals may be approximated with its sign value, i.e., +1 or -1, which avoids higher data processing overhead by replacing multiplications with simpler combinations of signs. In other words, irrespective of whether the correlation result is (0.2)(-0.7)=-0.14 or (0.2)(-1.0)=-0.2 or even (1.0)(-1.0)=-1.0 for that matter, a correct decision (on average) may be made to advance or retard the sampling instant based purely on the sign. This approach is particularly useful when a preferred implementation of at least the timing recovery circuit 70 is performed by a programmable digital processor. Signals f.sub.k and .epsilon..sub.k are then provided to correlator 91 where they are multiplied in a multiplier 92 and then filtered in loop filter 94. 
Loop filter 94 averages (integrate and dump) the correlation result over, for example, 2000 samples, and theaveraged value is used to adjust timing. For example, the sampling phase would be adjusted once every 2000 samples dependent upon the new value .DELTA..theta..sub.k. The output from the loop filter is applied to a phase quantizer 96 which interprets the loop filter output to make adecision as to whether to "advance," "retard," or "hold" the timing recovery baud rate clock. Phase quantizer 96 may correspond to a multi-level slicer having a positive threshold and a negative threshold with the region therebetween being referred toas a hold or dead zone region. Depending upon the polarity of the signal it receives, the quantizer 96 outputs an advance or retard signal which shifts the phase of the recovered baud rate clock and hence adjusts the sampling instant to an optimalvalue. As mentioned above, a digital, voltage controlled oscillator (VCO) may be used typically in the form of an up/down counter. Reference is now made to FIGS. 10-13 which illustrate examples of signals to be correlated. As already described, the correlation function used for timing recovery minimizes (or at least nearly minimizes) the mean squared error following theequation .DELTA..theta.=E(f.sub.k *.epsilon..sub.k). The issue is how to ensure that such a correlation function has only one zero crossing. As already described above in the context of FIG. 6, the autocorrelation of the error, i.e.,.epsilon..sub.k.sup.2 and its derivative, are unsatisfactory. The strategy adopted by the inventors of the present invention for choosing particular signal combinations to develop an optimal or near optimal correlation function relies on the principleof superposition which applies in any linear system. The timing functions described further below, such as one illustrated in FIG. 9, may be seen as a linear combination of the correlation functions. In general, once a particular combination of signals to be correlated is adopted, a program for evaluating the correlation function, (developed using commercially available software such as MATLAB), is executed to check whether a single zerocrossing is achieved. In other words, each possible correlation function for various combinations of signals, (including in some fashion the error signal .epsilon..sub.k), is evaluated to determine whether or not it fulfills the objects of approximatelyminimizing the mean squared error and providing only one zero crossing. For example, the MATLAB program was used to generate the graphs in FIGS. 6 and 8, where FIG. 8 shows a suitable correlation function which has only one zero crossing. Whilespecific, suitable correlation signals are not known in advance, the inventors of the present invention determined that the detected error signal .epsilon..sub.k contains information about precursor noise. The correlation function is used to extractthis information which is then used for timing recovery. Turning to FIG. 10 referred to previously in conjunction with FIG. 7, the error signal .epsilon..sub.k is correlated with another correlation signal f.sub.k generated from signal .mu..sub.k obtained from the feedforward filter 56. Thefeedforward filter signal .mu..sub.k is input to signal shaper 90 where it is summed with a delayed version of itself .mu..sub.k-1. The resulting correlation function therefore is E((.mu..sub.k +.mu..sub.k-1).epsilon..sub.k). 
As described above, in some digital processing operations, the correlation function may be implemented without a multiplication by simply adopting the sign (+ or -) of the summer output as correlation signal f.sub.k. A mathematically equivalent combination of signals, shown in FIG. 11, which when correlated satisfies the above objectives, is the combination of a delayed signal .mu..sub.k, that is .mu..sub.k-1, and the sum of the error .epsilon..sub.k and its delayed version .epsilon..sub.k-1, which sum produces the correlation signal f.sub.k. The resulting correlation function therefore is .DELTA..theta.=E((.epsilon..sub.k +.epsilon..sub.k-1).mu..sub.k-1). As with FIG. 10, the data processing may be simplified using the sign of one or both of f.sub.k and .mu..sub.k-1. FIG. 12 shows a third example correlation function in which a delayed unfiltered signal s.sub.k+1 from the feedforward filter 56 is input to signal shaper 90. The resulting output signal f.sub.k is correlated with the error .epsilon..sub.k. Alternatively, since signals s.sub.k and s.sub.k+1 are readily available, they may be for example combined in the summer to produce signal f.sub.k for correlation with .epsilon..sub.k. The resulting correlation function therefore is .DELTA..theta.=E((s.sub.k +s.sub.k+1).epsilon..sub.k). Again, the sign of f.sub.k (+ or -) could simply be correlated with the error .epsilon..sub.k or its sign to simplify the data processing operation. Another combination of signals, shown in FIG. 13, which when correlated satisfies the above objectives, is the combination of an unfiltered signal s.sub.k and the sum of the error .epsilon..sub.k and its delayed version, which sum produces the correlation signal f.sub.k. The signal to be correlated f.sub.k is then combined with a signal s.sub.k from the feedforward filter 56. The resulting correlation function therefore is .DELTA..theta.=E((.epsilon..sub.k +.epsilon..sub.k-1)s.sub.k). As with FIG. 10, the correlation of f.sub.k with s.sub.k may be adequately approximated using the sign of one or both of f.sub.k and s.sub.k to simplify the data processing operation. Each of these four example timing correlation functions satisfies the objectives above such that when the mean squared error is minimized, only a single zero crossing is obtained, as confirmed by observation using the MATLAB program. Of course, these illustrated timing recovery functions are simply examples to which the present invention is not limited. Other various combinations of signals that satisfy the above-identified objectives would also be suitable correlation functions for achieving timing recovery in accordance with the present invention. A more rigorous mathematical explanation of the invention follows. To meet the above formulated requirements, the sum of the signals .mu..sub.k-1 and .mu..sub.k is correlated with the error .epsilon..sub.k. ##EQU1## where h.sub.u,i depicts the channel partial impulse response function, .mu..sub.k-1 is simply the delayed version of .mu..sub.k, and the data symbols {a.sub.k } are assumed to be an uncorrelated sequence. The error .epsilon..sub.k can be mathematically described as: ##EQU2## where N is the number of taps in the equalizer 68, i and k.sub.1 are time indexes, d.sub.i are estimated coefficients of the equalizer 68, and .eta..sub.k is a noise value at time instant k. Evaluating the correlation during the time the equalizer 68 converges results in the following expression for correcting the timing phase: ##EQU3## where h.sub.i denotes the sampled impulse response function at the decision instant. 
The first term .GAMMA..sub.k,t represents a contribution due to uncancelled precursor intersymbol interference. Hence, it contains information which can be utilized to optimize and track the optimum or near optimum sampling instance. Since .GAMMA..sub.k,t is the term that actually depends on the sampling phase in steady-state conditions, .GAMMA..sub.k,t is referred to as the timing function.

The second term .GAMMA..sub.k,g represents a contribution due to incorrect previous decisions. It vanishes assuming no decision errors are made, i.e., in the steady-state. This does not apply at the initial phase of the transmission, when timing recovery controller 70 and equalizer 68 operate jointly, because the equalizer taps cannot be set to optimal values by independent adjustment.

The third and fourth terms .GAMMA..sub.k,d and .GAMMA..sub.k,e represent contributions due to imperfect channel equalization. Ideally, those two terms vanish completely after convergence to the correct channel impulse response, d.sub.i =h.sub.i. In practice, these terms cause zero-mean random fluctuations around steady state.

The fifth term .GAMMA..sub.k,.mu. represents the unequalized part of the channel impulse response. The sixth term .GAMMA..sub.k,.eta. represents additive white noise. The first, fifth and sixth terms do not depend on whether equalizer 68 has converged or not. Nor are they functions of time.

From the description given above, it can be seen that:

.vertline..GAMMA..sub.k,t .vertline.<.vertline..GAMMA..sub.k,g .vertline. during the time the equalizer 68 converges, since the feedforward filter 56 reduces the amplitude of the pulse precursors such that (h.sub.-1 .apprxeq.0, h.sub.-2 .apprxeq.0, . . . ; h.sub.-m =0, m.ltoreq.M). On the contrary, .GAMMA..sub.k,g contains the largest values of the sampled impulse response function.

.GAMMA..sub.k,d can be neglected provided the correct adjustment of the decision threshold (automatic gain control 54), since d.sub.o =h.sub.o.

.vertline..GAMMA..sub.k,e .vertline.<.vertline..GAMMA..sub.k,g .vertline. since .vertline..DELTA.h.sub.i .vertline. is small and a.sub.k-i a.sub.k-1 has a mean value equal to zero.

.GAMMA..sub.k,.mu. can be neglected in comparison with .GAMMA..sub.k,g provided a large number of taps in the equalizer 68.

The level of external noise is assumed to be low enough to allow the proper operation of the transceiver with a bit error rate (BER)<10.sup.-7, thus .GAMMA..sub.k,.eta. is negligibly small in comparison with .GAMMA..sub.k,g and .GAMMA..sub.k,t.

.GAMMA..sub.k,g keeps the same sign during the time the equalizer converges, since h.sub.i and f.sub.i have either the same or opposite sign and since a.sup.2.sub.k-i is always positive. This holds because both h.sub.i and f.sub.i are non-oscillatory, monotonic for almost all i except for very small values of h.sub.i and f.sub.i. On the other hand, it is possible to find some particular sampling instances such that for i=0 the product h.sub.o f.sub.o does not have the same sign as for the rest of the pulse tail.

The timing function is positive when the timing instance is advanced and negative when the timing instance is retarded, according to the timing function shown in FIG. 10. The term .GAMMA..sub.k,g is always negative. Furthermore, the sum of both, corresponding to the initial phase of the transmission, when equalizer 68 has not yet converged, is also negative and does not exhibit zero-crossings.
This means that if no training sequence is assumed, and the equalizer 68 and timing recovery controller 70 start operation simultaneously, the increment of the timing phase depends on .GAMMA..sub.k,t +.GAMMA..sub.k,g. The timing phase is therefore continuously retarded during that phase of the transmission. Subsequently .DELTA..theta..sub.k converges to .GAMMA..sub.k,t at the pace at which the equalizer converges and the term .GAMMA..sub.k,g decreases successively towards zero. Hence, there is little risk that the equalizer will diverge or that the system will lock unpredictably on a false zero-crossing.

The term .GAMMA..sub.k,g ultimately vanishes when the equalizer 68 reaches a zero-error state, i.e., when it makes the correct decision. The term .GAMMA..sub.k,e is assumed to be eliminated through averaging, since its expected value is zero. The terms .GAMMA..sub.k,.mu. and .GAMMA..sub.k,.eta. are neglected since they are relatively very small. Accordingly, the phase correction .DELTA..theta..sub.k from equation (7) depends mainly on the timing function .GAMMA..sub.k,t.

In steady state conditions, the term .GAMMA..sub.k,e does not vanish even when the equalizer 68 models the communication channel correctly. Also in steady-state, the error in the channel modelling or identification .DELTA.h.sub.i depends on the adaptive updating of the equalizer filter coefficients: where .mu. is the equalizer adaptation constant, and .epsilon..sub.k is a random process dominated by external noise sources. Inserting equation (14) in equation (12) results in: ##EQU4##

If one assumes that .epsilon..sub.k is a zero mean, non-impulsive random process with a variance .sigma..sub.k.sup.2, the term .GAMMA..sub.k,e may be considered as an approximately gaussian noise source with a variance: ##EQU5## In practice this term is negligibly small since it depends on .mu..sup.2.

Referring to .GAMMA..sub.k,d, i.e., the third term in the expression for .DELTA..theta..sub.k, equation (11), one sees the possibility of introducing a bias into the estimator of the steady-state location of the timing instants. In particular, when the automatic gain control block takes an incorrect value for the gain, it causes a lasting discrepancy between the signal level and the decision threshold, .DELTA.h.sub.o .noteq.0. The term .GAMMA..sub.k,d translates this into a permanent bias of the estimate of .DELTA..theta..sub.k. This phenomenon, however, may be eliminated by the proper design of the automatic gain control block.

A change of the sampling phase immediately gives rise to an undesired correlation described by the terms .GAMMA..sub.k,d and .GAMMA..sub.k,e. The term .GAMMA..sub.k,g does not contribute to the correlation function provided that the phase increments are small enough that they do not cause incorrect decisions, since E(a.sub.k -a.sub.k)=0. The terms .GAMMA..sub.k,d and .GAMMA..sub.k,e cannot be eliminated, but their influence diminishes with the small phase increments that usually occur in steady state conditions. In steady-state then, .GAMMA..sub.k,g =0, .GAMMA..sub.k,d =0 and .GAMMA..sub.k,e =0, and the expression for the phase correction .DELTA..theta..sub.k simplifies to:

The components .GAMMA..sub.k,.mu. and .GAMMA..sub.k,.eta. may be regarded as the bias of the estimate of the timing function .GAMMA..sub.k,t. The term .GAMMA..sub.k,.mu. is caused by the cross-product of the uncancelled far end signal tails and the tail of the signal to be correlated f.sub.k, and maintains a constant mean value during the operation of the timing recovery circuit.
The magnitude of the term .GAMMA..sub.k,.mu. depends on the combination of the correlated signals. However, for the high signal to noise ratios required to achieve BER=10.sup.-7, the tail of the far-end signal must be cancelled almost perfectly, thus the influence of this term is negligibly small.

The term .GAMMA..sub.k,.eta. depends on the external noise level. Assuming that .eta..sub.k and .eta..sub.f,k have a gaussian probability density function, it may be shown that where .sigma..sup.2 is the noise variance at the input of the detector, and .alpha. is a constant depending on the chosen precursor filter coefficients. The contribution of this term is also negligibly small.

Under the assumptions above, .DELTA..theta..sub.k depends almost entirely on .GAMMA..sub.k,t. The timing information can therefore be extracted from the estimate of the correlation coefficient between the error .epsilon..sub.k and some adequately chosen signal, as described above.

In practical implementations, such as that described above, time averaging is employed. Variations of .GAMMA..sub.k,t will cause oscillations around the optimum or near optimum sampling instance, i.e., jitter. In order to avoid unnecessary phase correction, the correction of the actual sampling instance .DELTA..theta..sub.k may be restricted to values of .GAMMA..sub.k,t greater than some threshold magnitude. The magnitude of the threshold may be evaluated using the fact that values of .GAMMA..sub.k,t depend on the number of samples in the average estimate.

The present invention provides a practical and efficient approach to accurately track and adjust the phase drift between transmitter and receiver clocks. Timing information is extracted at the symbol baud rate and optimal or near optimal sampling is achieved using a correlation function that passes through zero at or near the desired sampling phase. A correlation function that cross-correlates two signals at the symbol rate is selected such that it minimizes precursor interference through the choice of sampling instance. The timing recovery information is provided from a zero crossing of the correlation function and is used to determine the optimum or near optimum location of the pulse sampling instant. The signals to be correlated are chosen so that false zero crossings are avoided. In one of the embodiments disclosed above, the correlated signals included a symbol detection error signal and a signal from the feedforward filter. As a result, the present invention avoids pitfalls of previous timing recovery algorithms including locking onto false zero crossings, oscillatory behavior, and susceptibility to spurious phenomena.

While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

* * * * *
{"url":"http://www.patentgenius.com/patent/5675612.html","timestamp":"2014-04-19T02:05:44Z","content_type":null,"content_length":"99042","record_id":"<urn:uuid:84830ce2-b698-4759-a40b-1893f312a4fd>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
H. R. Grigoryan Theory Group Physics Division, Building 203 Address: Argonne National Laboratory 9700 South Cass Avenue Argonne, IL 60439, USA Phone: +1 (630) 252-6223 Fax: +1 (630) 252-3903 E-mail: grigoryan@anl.gov Curriculum Vitae My research interests mainly lie in Particle and Nuclear Physics. My immediate interests include applying Quantum Field Theory and String Theory methods to study Hadronic Physics and some nonperturbative aspects of QCD at zero and finite temperatures. Also, I am interested in Physics Beyond the Standard Model emerging from the setups involving Extra Dimensions. 2008-Present: Director's Postdoctoral Fellow, Theory Group, Argonne National Laboratory, USA 2003-2008: Research Assistant at Thomas Jefferson National Accelerator Facility, Newport News, VA The list of all papers and citation summary are available from SPIRES Photoproduction through Chern-Simons Term Induced Interactions in Holographic QCD, S.K. Domokos, H.R. Grigoryan and J.A. Harvey, accepted in Phys. Rev. D (2009) [arXiv:0905.1949[hep-ph]] Electromagnetic Nucleon-to-Delta Transition in Holographic QCD, H.R. Grigoryan, T.-S.H. Lee and Ho-Ung Yee, Phys. Rev. D 80 , 055006 (2009) [arXiv:0904.3710[hep-ph]] Pion in the Holographic Model with 5D Yang-Mills Fields, H.R. Grigoryan and A.V. Radyushkin, Phys. Rev. D 78, 115008 (2008) [arXiv:0808.1243[hep-ph]] Anomalous Form Factor of the Neutral Pion in Extended AdS/QCD Model with Chern-Simons Term, H.R. Grigoryan and A.V. Radyushkin, Phys. Rev. D 77, 115024 (2008) [arXiv:0803.1143 [hep-ph]] Dimension Six Corrections to the Vector Sector of AdS/QCD Model, H.R. Grigoryan, Phys. Lett. B 662, 158 (2008) [arXiv:0709.0939 [hep-ph]] Pion Form Factor in Chiral Limit of Hard-Wall AdS/QCD Model, H.R. Grigoryan and A.V. Radyushkin, Phys. Rev. D 76, 115007 (2007) [arXiv:0709.0500 [hep-ph]] Structure of Vector Mesons in Holographic Model with Linear Confinement, H.R. Grigoryan and A.V. Radyushkin, Phys. Rev. D 76, 095007 (2007) [arXiv:0706.1543 [hep-ph]] Form Factors and Wave Functions of Vector Mesons in Holographic QCD, H.R. Grigoryan and A.V. Radyushkin, Phys. Lett. B 650, 421 (2007) [arXiv:hep-ph/0703069] Vector Meson Mass Corrections at O(a^2) in PQChPT with Wilson and Ginsparg-Wilson quarks, H.R. Grigoryan and A.W. Thomas, Phys. Lett. B 632, 657 (2006) [arXiv:hep-lat/0507028] PQChPT with Staggered Sea and Valence Ginsparg-Wilson Quarks: Vector Meson Masses, H.R. Grigoryan and A.W. Thomas, J. Phys. G 31, 1527 (2005) [arXiv:hep-lat/0511022]
{"url":"http://www.phy.anl.gov/theory/staff/hgrigoryan.html","timestamp":"2014-04-19T04:32:26Z","content_type":null,"content_length":"28638","record_id":"<urn:uuid:b0965fd1-5603-4149-9d86-28b92813322a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
BioSysBio:abstracts/2007/Daniel Mateus From OpenWetWare Modeling Genetic Regulatory Networks from specified behaviors Author(s): Daniel Mateus (1), Jean-Paul Comet (2), Jean-Pierre Gallois (1), Pascale Le Gall (2) Affiliations: (1) CEA/LIST Saclay, France, (2) IBISC, université d'Evry, France Contact:email: daniel.mateus at cea.fr Keywords: 'Gene networks' 'Qualitative dynamical models' 'temporal properties' 'kinetic parameters' Tools for modeling and simulation are needed to understand the functioning of genetic regulatory networks. The difficulty of determining the parameters of the models motivates the use of automatic methods able to find the parameters of the models whose dynamics match the behavior of the actual system. We propose a method applied on the qualitative modeling approach developed by R. Thomas. The logical parameters of the model, which are related to the kinetic parameters of a differential description, can be unknown. Translating the model into a symbolic transition system, and the known behaviors into temporal logic formulas, the method gives the constraints on the logical parameters corresponding to all the models having the specified behavior. This work has been implemented in the AGATHA tool, which is also used in the validation process of industrial specifications [1, 2]. The asynchronous and multivalued logical modeling of regulatory networks, which has been developed by R. Thomas and co-workers [3, 4, 5], generalizes the previously introduced asynchronous boolean modeling [6]. This generalized formalism has been used to model various gene networks. A logical description is constituted of n variables, each representing the concentration of a constituent of the actual network, mainly the proteins produced by the genes of the network. Each variable xi can take an integer value between 0 and bi (bi is the maximum value of xi, and is less than or equal to the number of variables regulated by xi). A logical state E=(E1 ,…, En ) is a vector of values of the variables. With each state E, and each variable xi, is associated a logical parameter K(xi , E), which has an integer value between 0 and bi. The logical parameter is the value toward which the associated variable tends in the associated logical state. It means that in the logical state E: • if K(xi , E)>Ei , then (E1 ,…, Ei +1,…, En ) is a successor of E; • if K(xi , E)<Ei , then (E1 ,…, Ei -1,…, En ) is a successor of E; • if K(xi , E)=Ei , for all i, then E is called a steady state, and has only itself as successor. The graph of sequences of states is constituted of the logical states, and the transitions between each state and its successors. Pseudomonas aeruginosa are bacteria that secrete mucus (alginate) in lungs affected by cystic fibrosis, but not in common environment. As it increases respiratory deficiency, this phenomenon is a major cause of mortality in this disease. The simplified regulatory network, as proposed in [7], contains the protein AlgU (product of algU gene), and an inhibitor complex anti-AlgU (product of muc genes) (see figure 1. on the left: x stands for AlgU, y for anti-AlgU. The mucus production occurs when x=2). Bacteriophage lambda is a virus whose DNA can integrate into bacterial chromosome and be faithfully transmitted to the bacterial progeny. After infection, most of the bacteria display a lytic response and liberate new phages, but some display a lysogenic response, i.e. survive and carry lambda genome, becoming immune to infection. Figure 2. 
on the right is the graph of interactions described in [8] and involves four genes called cI, cro, cII and N. The lytic response leads to the states (cI,cro,cII,N) is (0,2,0,0) or (0,3,0,0,) where cro is fully expressed. The lysogenic response leads to the state (2,0,0,0), where cI is fully expressed, and the repressor produced by cI blocks the expression of the other viral genes, leading to immunity. In this two cases the logical parameters are unknown. Given a constraint C on the logical parameters, and an initial logical state E, we generate a symbolic transition system (STS). Then the symbolic execution of the STS is made. This method constructs a tree of sequences of logical states, with the following rules: • The root of the tree is the initial state E; • For each possible successor of E, there can be a path constructed, if and only if the condition D on the logical parameters that makes a logical state E’ a successor of the initial state is compatible with C; then E’ is constructed, and an edge is constructed from E to E’; • E’ is associated with a new constraint C’, which is the conjunction of C and D; • The process is repeated with the successors of E’ and the constraint C’; • If a new logical state has already been reached in the same path, then the execution of this path stops; • The symbolic execution is over when all the possible paths have been treated. We see that every state in the tree is associated with a constraint, which is called path condition, and is the constraint on the parameters which is necessary to the existence of the associated path in the logical model of the network. To search a specific path in the symbolic execution tree we have adapted model-checking techniques for Linear Temporal Logic (LTL) [9]. A LTL formula expresses properties of a path. The method we use selects all the paths verifying the LTL formula, and synthesizes the disjunction of the path conditions associated with the last state of each path. The resulting constraint represents all the parameters compatible with the behavior specified by the formula. It has been observed that mucoid P. aeruginosa can continue to produce mucus isolated from infected lungs. The common explanation is that the mucoidy of P. aeruginosa is due to a mutation which cancels the inhibition of algU gene. But the hypothesis that this mucoid state occurs in reason of an epigenetic modification, i.e. without mutation, has been made [7, 10, 11]. With the method described here it is possible to find the constraints such that the resulting models has two stable behaviors, one mucoid (where x=2) and one non-mucoid (where x<2): 8 models are compatible with the epigenetic hypothesis. In the case on lambda-phage, there are 2156 different models that have the following behaviors: lytic and lysogenic states are stable, and there is a pathway from initial state to lysis and to lysogeny. But in all these models, there is a common path to lysis, and one of two different paths to lysogeny. Modeling genetic regulatory networks is generally confronted by the partial knowledge on the system: usually there is not only one model that is certainly accurate whereas the others are certainly false. Even with a qualitative formalism, different models can fit with experimental results. With our method, it is possible to manipulate not only one model, but a set of models compatible with experimental results. Then it is possible to verify if a hypothetic behavior is possible considering all the models (as the epigenetic modification in P. 
aeruginosa) or to see common behaviors over all the possible models (as possible pathways to lysis or lysogeny in lambda-phage); this kind of result is difficult to reach with only one complete model, as it is generally impossible to justify the unobserved behaviors it reveals. Moreover, by keeping a set of possible models, when a new behavior is discovered experimentally, the new result can be added to restrict the set of models, refining the knowledge of the system.
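As an illustration of the formalism recalled in the Background section (a sketch written for this summary, not code from the paper; function and variable names are made up), the asynchronous successor rule for a logical state follows directly from the definition of the logical parameters K(x_i, E):

    def successors(state, K):
        # state: tuple (E1, ..., En) of current logical values
        # K(i, state): logical parameter K(x_i, E) toward which x_i tends
        succ = []
        for i, Ei in enumerate(state):
            target = K(i, state)
            if target > Ei:
                succ.append(state[:i] + (Ei + 1,) + state[i + 1:])
            elif target < Ei:
                succ.append(state[:i] + (Ei - 1,) + state[i + 1:])
        return succ or [state]   # a steady state has only itself as successor

In the symbolic execution described above, K is not fixed to numerical values; instead, each branch of the tree carries a path condition, i.e. a constraint on the K(x_i, E) that makes the corresponding transition possible.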
{"url":"http://openwetware.org/wiki/BioSysBio:abstracts/2007/Daniel_Mateus","timestamp":"2014-04-23T10:56:01Z","content_type":null,"content_length":"35289","record_id":"<urn:uuid:609481c3-110a-46a4-8d38-fc1f6bf6c091>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help Forum

December 6th 2009, 02:32 PM, #1:
Prove that every element $v\in (\mathbb{C}^{2})^{\otimes 3}$ is the sum of two pure tensors $u_{1}\otimes u_{2}\otimes u_{3}$, where $u_{1},u_{2},u_{3}\in\mathbb{C}^{2}$. Thanks in advance.

December 6th 2009, 11:26 PM, #2 (MHF Contributor):
you're sure that two is not three then? the reason for asking this is that, in general, for any 2-dimensional vector space $V$ over a field $F,$ the "maximum rank" of $V^{\otimes 3}$ is 3 and not 2, i.e. every element of $V^{\otimes 3}$ is a sum of at most 3 simple tensors.

December 7th 2009, 01:26 AM, #3:
Thks, but the question requires 2, not 3. They give some hints: consider $v$ as a linear map from $\mathbb{C}^2$ to $M_2$ (the 2x2 matrices) (WHY? and HOW?) and consider the two possibilities for the dimension of the image (1 or 2). But I really dont get it.

December 7th 2009, 11:42 AM, #4:
Oh, sorry, $v$ here must be in a dense open subset. Is there any idea then?

December 15th 2009, 10:24 AM, #5:
I think the first case (1 dim) is not so hard. But the second case is ... not easy. Thks in advance.
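For what it is worth, here is one way to read the hint from post #3 (my own sketch, not taken from the thread): slice $v$ along the first factor and use a generalized eigenvalue argument.

    % Sketch, assuming v is generic in the sense below (a dense open condition).
    % Write v = e_1 \otimes A + e_2 \otimes B with A, B \in \mathbb{C}^2 \otimes \mathbb{C}^2
    % identified with M_2(\mathbb{C}); assume A is invertible and A^{-1}B has two
    % distinct eigenvalues, A^{-1}B = P \,\mathrm{diag}(d_1, d_2)\, P^{-1}.
    % With Q = AP (columns q_1, q_2) and P^{-1} having rows p_1^T, p_2^T:
    \[
      A = Q P^{-1} = q_1 p_1^{T} + q_2 p_2^{T}, \qquad
      B = Q \,\mathrm{diag}(d_1, d_2)\, P^{-1} = d_1 q_1 p_1^{T} + d_2 q_2 p_2^{T},
    \]
    \[
      v = e_1 \otimes A + e_2 \otimes B
        = (e_1 + d_1 e_2) \otimes q_1 \otimes p_1 + (e_1 + d_2 e_2) \otimes q_2 \otimes p_2 ,
    \]
    % a sum of two pure tensors. "A invertible and A^{-1}B with distinct eigenvalues"
    % cuts out a dense open subset; the rank-1 image case of the hint corresponds to
    % the degenerate situations excluded here.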
{"url":"http://mathhelpforum.com/advanced-algebra/118915-tensor.html","timestamp":"2014-04-17T14:38:54Z","content_type":null,"content_length":"41502","record_id":"<urn:uuid:7379d7eb-c1de-4c5a-ac4e-ad08f41a118a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
How to rewrite this code from python loops to numpy vectors (for performance)?

I have this code:

    for j in xrange(j_start, self.max_j):
        for i in xrange(0, self.max_i):
            new_i = round(i + ((j - j_start) * discriminant))
            if new_i >= self.max_i:
                continue
            self.grid[new_i, j] = standard[i]

and I want to speed it up by throwing away slow native python loops. There is a possibility to use numpy vector operations instead; they are really fast. How to do that?

j_start, self.max_j, self.max_i, discriminant: int, int, int, float (constants).
self.grid: two-dimensional numpy array (self.max_i x self.max_j).
standard: one-dimensional numpy array (self.max_i).

1 Answer (accepted)

Here is a complete solution, perhaps that will help.

    jrange = np.arange(self.max_j - j_start)
    joffset = np.round(jrange * discriminant).astype(int)
    i = np.arange(self.max_i)
    for j in jrange:
        new_i = i + joffset[j]
        in_range = new_i < self.max_i
        self.grid[new_i[in_range], j + j_start] = standard[i[in_range]]

It may be possible to vectorize both loops but that will, I think, be tricky. I haven't tested this but I believe it computes the same result as your code.

Comments:
- unfortunately it's more complicated :( thanks you for the contribution, but it doesn't help me. – aspect_mkn8rd Dec 5 '12 at 19:43
- what are you looking for in a solution? A general method for vectorization of anything is "beyond the scope of this discussion". – GaryBishop Dec 5 '12 at 23:24
- I updated the solution to make it do just what your code does (I think). Does that help? – GaryBishop Dec 6 '12 at 0:14
- ok, thx u. -) It still tells 'IndexError: arrays used as indices must be of integer (or boolean) type' at the last line, but it might help me now. I think I accept your answer a little later, when I solve the task. – aspect_mkn8rd Dec 6 '12 at 7:10
- I added a call to astype(int) on the np.round call. That should fix the type. – GaryBishop Dec 6 '12 at 11:41
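The answer above vectorizes the inner loop only. For completeness, a fully vectorized variant is possible with broadcasting; the sketch below is not from the thread, uses made-up function and argument names, and mirrors the accepted answer's convention of rounding the per-column offset (which can differ from round(i + ...) in rare half-way cases):

    import numpy as np

    def fill_grid_vectorized(grid, standard, j_start, discriminant):
        # Fully vectorized sketch of the double loop in the question.
        max_i, max_j = grid.shape
        joffset = np.round(np.arange(max_j - j_start) * discriminant).astype(int)
        i = np.arange(max_i)
        new_i = i[:, None] + joffset[None, :]     # shape (max_i, max_j - j_start)
        rows, cols = np.nonzero(new_i < max_i)    # keep only in-range targets
        grid[new_i[rows, cols], cols + j_start] = standard[rows]
        return grid

Within a single column the target rows are distinct, so the fancy-index assignment never has to resolve duplicate writes.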
{"url":"http://stackoverflow.com/questions/13714790/how-to-rewrite-this-code-from-python-loops-to-numpy-vectors-for-perfomance","timestamp":"2014-04-24T20:46:32Z","content_type":null,"content_length":"70213","record_id":"<urn:uuid:8838ca93-5c51-43d5-a1ed-fd00cf023dbb>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Torsion Gravity from String Theory

Posted by Urs Schreiber

I was asked by A. Pelster, who has worked on torsion gravity together with H. Kleinert and F. Hehl, if I consider it worthwhile thinking about torsion gravity in the context of string theory.

Of course everybody knows that there is the Kalb-Ramond field in string theory whose field strength acts like a torsion in many situations. For instance in the highly important (S)WZW models the Kalb-Ramond field provides the parallelizing torsion of the group manifold that the string is propagating on. Apart from that, the Kalb-Ramond field of string theory is perhaps more prominently known for its relation to noncommutative field theory, as described in the seminal paper

N. Seiberg and E. Witten, String Theory and Noncommutative Geometry.

Now A. Pelster points me to papers by Richard Hammond, who has done very detailed studies of torsion gravity in general as well as its relation to string theory in particular. The most comprehensive review article is apparently

R. Hammond, Torsion gravity.

When challenged, I realized that I couldn't satisfactorily answer why I had heard so little about the role of the Kalb-Ramond field as providing spacetime torsion. Together with R. Hammond, A. Pelster likes to argue that, since the Kalb-Ramond field is not at order ${\alpha }^{\prime }$ in the string action but on par with the gravitational terms, it should actually have measurable effects even at relatively modest energies - shouldn't it?

Indeed, R. Hammond, who discusses experimental signatures of torsion in great detail, argues that any detection of torsion would have direct implications for the experimental verification of string theory. In his above paper he concludes:

"We have seen that torsion is called on stage by many directors, from string theory to supergravity, yet the audience has not yet settled on the correct interpretation of its role. I believe any direct, or even indirect, observation of torsion would be one of the greatest breakthroughs in many decades, and would certainly help settle these questions."

I should try to better understand the big picture of torsion gravity in string theory.

Posted at February 15, 2004 7:53 PM UTC
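In formulas (my own gloss, not from the post, and with sign and normalization conventions that vary between references): the Kalb-Ramond two-form $B$ has field strength $H = dB$, and it enters the low-energy effective theory as a totally antisymmetric modification of the connection,

    \[
      H = dB, \qquad
      \Gamma^{\lambda}{}_{\mu\nu} \;\longrightarrow\;
      \Gamma^{\lambda}{}_{\mu\nu} \pm \tfrac{1}{2}\, H^{\lambda}{}_{\mu\nu},
    \]

so the torsion of the shifted connection is proportional to $H$. On a group manifold, the WZW value of $H$ makes this connection flat, which is the parallelizing torsion mentioned above.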
{"url":"http://golem.ph.utexas.edu/string/archives/000310.html","timestamp":"2014-04-20T13:36:27Z","content_type":null,"content_length":"11183","record_id":"<urn:uuid:c4eb2fac-30cb-42a7-b8e4-f0eb870e7b28>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: "testing" a cluster analysis [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: "testing" a cluster analysis From Ronán Conroy <rconroy@rcsi.ie> To statalist@hsphsun2.harvard.edu Subject Re: st: "testing" a cluster analysis Date Wed, 7 Feb 2007 10:34:58 +0000 On 7 Feabh 2007, at 00:06, Adam Seth Litwin wrote: Hello. I just ran a cluster analysis, not a technique I use frequently. I have seven binary variables forming, at the moment, five clusters. I thought a useful exercise would be the following: For each of the seven variables, examine its mean in all five clusters. Then, run an F-test to show that the means are not equal across all five clusters. So, for example, I type - tabstat var1, by(CLUSTER) stat(n mean) But, I'm not sure how to run the F-test. Careful. An analysis of variance is a hypothesis test. The model is specified in advance and the anova calculates the values of the model parameters. In your case, the model was generated from the data. The usual interpretation of the F ratio does not apply. Cluster analysis is an exploratory technique. You need to think about validating the clustering by showing that the clusters differ on variables which were not used in the clustering but which are theoretically related to the cluster process. For example, if you use clustering to define five clusters of people based on the type and frequency of their social interactions, then you would expect that the clusters would differ on things like loneliness and perceived social support, and you would hope that they differed in dimensions like mood or (headline from this month's Archives of General Psychiatry) risk of Alzheimer's disease. So I'd forget the F-test and start validating the clusters. Your hypothesis is that the clusters are different from each other in some respect other than the variables you clustered on. Ronán Conroy Royal College of Surgeons in Ireland +353 (0) 1 402 2431 +353 (0) 87 799 97 95 * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-02/msg00185.html","timestamp":"2014-04-20T03:24:13Z","content_type":null,"content_length":"8169","record_id":"<urn:uuid:650bbe8e-2251-4aa7-b6d8-fcb3bf935d22>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
A124068 - OEIS A124068 Fixed points for operation of repeatedly replacing a number by the sum of the seventh power of its digits. 6 0, 1, 1741725, 4210818, 9800817, 9926315, 14459929 (list; graph; refs; listen; history; text; internal format) OFFSET 0,3 COMMENTS The sequence "Fixed points for operation of repeatedly replacing a number by the sum of the sixth power of its digits" has just 3 terms: 0,1,548834. LINKS Table of n, a(n) for n=0..6. EXAMPLE 1741725=1^7+7^7+4^7+1^7+7^7+2^7+5^7 CROSSREFS Cf. A046197, A052455, A052464, A124069, A226970, A003321. Sequence in context: A185844 A234130 A177695 * A237307 A090054 A186823 Adjacent sequences: A124065 A124066 A124067 * A124069 A124070 A124071 KEYWORD base,fini,full,nonn AUTHOR Sebastien DUMORTIER (sdumortier(AT)ac-limoges.fr), Nov 05 2006 STATUS approved
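A brute-force verification of the terms is short (a sketch, not part of the OEIS entry). Any fixed point has at most 8 digits, because a d-digit number is at least 10^(d-1) while the sum of the seventh powers of its digits is at most d*9^7:

    def s7(n):
        # sum of the seventh powers of the decimal digits of n
        return sum(int(d) ** 7 for d in str(n))

    fixed_points = [n for n in range(8 * 9 ** 7 + 1) if s7(n) == n]
    print(fixed_points)
    # -> [0, 1, 1741725, 4210818, 9800817, 9926315, 14459929]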
{"url":"http://oeis.org/A124068","timestamp":"2014-04-21T16:14:03Z","content_type":null,"content_length":"14656","record_id":"<urn:uuid:7d6a0b35-6eea-4671-b6cd-9a158c4d9a03>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
5th USENIX Conference on File and Storage Technologies - Paper
Pp. 1-16 of the Proceedings

Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?

Bianca Schroeder, Garth A. Gibson
Computer Science Department, Carnegie Mellon University
{bianca, garth}@cs.cmu.edu

Abstract

Component failure in large-scale IT installations is becoming an ever larger problem as the number of components in a single cluster approaches a million. In this paper, we present and analyze field-gathered disk replacement data from a number of large production systems, including high-performance computing sites and internet services sites. About 100,000 disks are covered by this data, some for an entire lifetime of five years. The data include drives with SCSI and FC, as well as SATA interfaces. The mean time to failure (MTTF) of those drives, as specified in their datasheets, ranges from 1,000,000 to 1,500,000 hours, suggesting a nominal annual failure rate of at most 0.88%. We find that in the field, annual disk replacement rates typically exceed 1%, with 2-4% common and up to 13% observed on some systems. This suggests that field replacement is a fairly different process than one might predict based on datasheet MTTF. We also find evidence, based on records of disk replacements in the field, that failure rate is not constant with age, and that, rather than a significant infant mortality effect, we see a significant early onset of wear-out degradation. That is, replacement rates in our data grew constantly with age, an effect often assumed not to set in until after a nominal lifetime of 5 years. Interestingly, we observe little difference in replacement rates between SCSI, FC and SATA drives, potentially an indication that disk-independent factors, such as operating conditions, affect replacement rates more than component specific factors. On the other hand, we see only one instance of a customer rejecting an entire population of disks as a bad batch, in this case because of media error rates, and this instance involved SATA disks. Time between replacement, a proxy for time between failure, is not well modeled by an exponential distribution and exhibits significant levels of correlation, including autocorrelation and long-range dependence.

1 Introduction

Despite major efforts, both in industry and in academia, high reliability remains a major challenge in running large-scale IT systems, and disaster prevention and cost of actual disasters make up a large fraction of the total cost of ownership. With ever larger server clusters, maintaining high levels of reliability and availability is a growing problem for many sites, including high-performance computing systems and internet service providers. A particularly big concern is the reliability of storage systems, for several reasons. First, failure of storage can not only cause temporary data unavailability, but in the worst case it can lead to permanent data loss. Second, technology trends and market forces may combine to make storage system failures occur more frequently in the future [24]. Finally, the size of storage systems in modern, large-scale IT installations has grown to an unprecedented scale with thousands of storage devices, making component failures the norm rather than the exception [7]. Large-scale IT systems, therefore, need better system design and management to cope with more frequent failures. One might expect increasing levels of redundancy designed for specific failure modes [3,7], for example.
Such designs and management systems are based on very simple models of component failure and repair processes [22]. Better knowledge about the statistical properties of storage failure processes, such as the distribution of time between failures, may empower researchers and designers to develop new, more reliable and available storage systems. Unfortunately, many aspects of disk failures in real systems are not well understood, probably because the owners of such systems are reluctant to release failure data or do not gather such data. As a result, practitioners usually rely on vendor specified parameters, such as mean-time-to-failure (MTTF), to model failure processes, although many are skeptical of the accuracy of those models [4,5, 33]. Too much academic and corporate research is based on anecdotes and back of the envelope calculations, rather than empirical data [28]. The work in this paper is part of a broader research agenda with the long-term goal of providing a better understanding of failures in IT systems by collecting, analyzing and making publicly available a diverse set of real failure histories from large-scale production systems. In our pursuit, we have spoken to a number of large production sites and were able to convince several of them to provide failure data from some of their systems. In this paper, we provide an analysis of seven data sets we have collected, with a focus on storage-related failures. The data sets come from a number of large-scale production systems, including high-performance computing sites and large internet services sites, and consist primarily of hardware replacement logs. The data sets vary in duration from one month to five years and cover in total a population of more than 100,000 drives from at least four different vendors. Disks covered by this data include drives with SCSI and FC interfaces, commonly represented as the most reliable types of disk drives, as well as drives with SATA interfaces, common in desktop and nearline systems. Although 100,000 drives is a very large sample relative to previously published studies, it is small compared to the estimated 35 million enterprise drives, and 300 million total drives built in 2006 [1]. Phenomena such as bad batches caused by fabrication line changes may require much larger data sets to fully characterize. We analyze three different aspects of the data. We begin in Section 3 by asking how disk replacement frequencies compare to replacement frequencies of other hardware components. In Section 4, we provide a quantitative analysis of disk replacement rates observed in the field and compare our observations with common predictors and models used by vendors. In Section 5, we analyze the statistical properties of disk replacement rates. We study correlations between disk replacements and identify the key properties of the empirical distribution of time between replacements, and compare our results to common models and assumptions. Section 6 provides an overview of related work and Section 7 concludes. Table 1: Overview of the seven failure data sets. Note that the disk count given in the table is the number of drives in the system at the end of the data collection period. For some systems the number of drives changed during the data collection period, and we account for that in our analysis. The disk parameters 10K and 15K refer to the rotation speed in revolutions per minute; drives not labeled 10K or 15K probably have a rotation speed of 7200 rpm. 
│ Data set │ Type of │ Duration │ #Disk │ # Servers │ Disk │ Disk │ MTTF │ Date of first │ ARR │ │ │ cluster │ │ events │ │ Count │ Parameters │ (Mhours) │ Deploym. │ (%) │ │ HPC1 │ HPC │ 08/01 - 05/06 │ 474 │ 765 │ 2,318 │ 18GB 10K SCSI │ │ 08/01 │ 4.0 │ │ " │ " │ " │ 124 │ 64 │ 1,088 │ 36GB 10K SCSI │ │ " │ 2.2 │ │ HPC2 │ HPC │ 01/04 - 07/06 │ 14 │ 256 │ 520 │ 36GB 10K SCSI │ │ 12/01 │ 1.1 │ │ HPC3 │ HPC │ 12/05 - 11/06 │ 103 │ 1,532 │ 3,064 │ 146GB 15K SCSI │ 1.5 │ 08/05 │ 3.7 │ │ " │ HPC │ 12/05 - 11/06 │ 4 │ N/A │ 144 │ 73GB 15K SCSI │ 1.5 │ " │ 3.0 │ │ " │ HPC │ 12/05 - 08/06 │ 253 │ N/A │ 11,000 │ 250GB 7.2K SATA │ 1.0 │ " │ 3.3 │ │ HPC4 │ Various │ 09/03 - 08/06 │ 269 │ N/A │ 8,430 │ 250GB SATA │ │ 09/03 │ 2.2 │ │ " │ HPC │ 11/05 - 08/06 │ 7 │ N/A │ 2,030 │ 500GB SATA │ │ 11/05 │ 0.5 │ │ " │ clusters │ 09/05 - 08/06 │ 9 │ N/A │ 3,158 │ 400GB SATA │ │ 09/05 │ 0.8 │ │ COM1 │ Int. serv. │ May 2006 │ 84 │ N/A │ 26,734 │ 10K SCSI │ │ 2001 │ 2.8 │ │ COM2 │ Int. serv. │ 09/04 - 04/06 │ 506 │ 9,232 │ 39,039 │ 15K SCSI │ │ 2004 │ 3.1 │ │ COM3 │ Int. serv. │ 01/05 - 12/05 │ 2 │ N/A │ 56 │ 10K FC │ │ N/A │ 3.6 │ │ " │ " │ " │ 132 │ N/A │ 2,450 │ 10K FC │ │ N/A │ 5.4 │ │ " │ " │ " │ 108 │ N/A │ 796 │ 10K FC │ │ N/A │ 13.6 │ │ " │ " │ " │ 104 │ N/A │ 432 │ 10K FC │ │ 1998 │ 24.1 │ 2 Methodology 2.1 What is a disk failure? While it is often assumed that disk failures follow a simple fail-stop model (where disks either work perfectly or fail absolutely and in an easily detectable manner [22,24]), disk failures are much more complex in reality. For example, disk drives can experience latent sector faults or transient performance problems. Often it is hard to correctly attribute the root cause of a problem to a particular hardware component. Our work is based on hardware replacement records and logs, i.e. we focus on disk conditions that lead a drive customer to treat a disk as permanently failed and to replace it. We analyze records from a number of large production systems, which contain a record for every disk that was replaced in the system during the time of the data collection. To interpret the results of our work correctly it is crucial to understand the process of how this data was created. After a disk drive is identified as the likely culprit in a problem, the operations staff (or the computer system itself) perform a series of tests on the drive to assess its behavior. If the behavior qualifies as faulty according to the customer's definition, the disk is replaced and a corresponding entry is made in the hardware replacement log. The important thing to note is that there is not one unique definition for when a drive is faulty. In particular, customers and vendors might use different definitions. For example, a common way for a customer to test a drive is to read all of its sectors to see if any reads experience problems, and decide that it is faulty if any one operation takes longer than a certain threshold. The outcome of such a test will depend on how the thresholds are chosen. Many sites follow a ``better safe than sorry'' mentality, and use even more rigorous testing. As a result, it cannot be ruled out that a customer may declare a disk faulty, while its manufacturer sees it as healthy. This also means that the definition of ``faulty'' that a drive customer uses does not necessarily fit the definition that a drive manufacturer uses to make drive reliability projections. 
In fact, a disk vendor has reported that for 43% of all disks returned by customers they find no problem with the disk [1]. It is also important to note that the failure behavior of a drive depends on the operating conditions, and not only on component level factors. For example, failure rates are affected by environmental factors, such as temperature and humidity, data center handling procedures, workloads and ``duty cycles'' or powered-on hours patterns. We would also like to point out that the failure behavior of disk drives, even if they are of the same model, can differ, since disks are manufactured using processes and parts that may change. These changes, such as a change in a drive's firmware or a hardware component or even the assembly line on which a drive was manufactured, can change the failure behavior of a drive. This effect is often called the effect of batches or vintage. A bad batch can lead to unusually high drive failure rates or unusually high rates of media errors. For example, in the HPC3 data set (Table 1) the customer had 11,000 SATA drives replaced in Oct. 2006 after observing a high frequency of media errors during writes. Although it took a year to resolve, the customer and vendor agreed that these drives did not meet warranty conditions. The cause was attributed to the breakdown of a lubricant leading to unacceptably high head flying heights. In the data, the replacements of these drives are not recorded as failures. In our analysis we do not further study the effect of batches. We report on the field experience, in terms of disk replacement rates, of a set of drive customers. Customers usually do not have the information necessary to determine which of the drives they are using come from the same or different batches. Since our data spans a large number of drives (more than 100,000) and comes from a diverse set of customers and systems, we assume it also covers a diverse set of vendors, models and batches. We therefore deem it unlikely that our results are significantly skewed by ``bad batches''. However, we caution the reader not to assume all drives behave identically. 2.2 Specifying disk reliability and failure frequency Drive manufacturers specify the reliability of their products in terms of two related metrics: the annualized failure rate (AFR), which is the percentage of disk drives in a population that fail in a test scaled to a per year estimation; and the mean time to failure (MTTF). The AFR of a new product is typically estimated based on accelerated life and stress tests or based on field data from earlier products [2]. The MTTF is estimated as the number of power on hours per year divided by the AFR. A common assumption for drives in servers is that they are powered on 100% of the time. Our data set providers all believe that their disks are powered on and in use at all times. The MTTFs specified for today's highest quality disks range from 1,000,000 hours to 1,500,000 hours, corresponding to AFRs of 0.58% to 0.88%. The AFR and MTTF estimates of the manufacturer are included in a drive's datasheet and we refer to them in the remainder as the datasheet AFR and the datasheet MTTF. In contrast, in our data analysis we will report the annual replacement rate (ARR) to reflect the fact that, strictly speaking, disk replacements that are reported in the customer logs do not necessarily equal disk failures (as explained in Section 2.1). Table 1 provides an overview of the seven data sets used in this study. 
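For concreteness, the datasheet arithmetic just described can be reproduced in a few lines (a sketch, not code from the paper; it assumes the 100% powered-on duty cycle mentioned above, i.e. 8760 power-on hours per year):

    def datasheet_afr(mttf_hours, powered_on_hours_per_year=8760):
        # Annualized failure rate implied by a datasheet MTTF.
        return powered_on_hours_per_year / mttf_hours

    print(round(100 * datasheet_afr(1_000_000), 2))  # ~0.88 (percent per year)
    print(round(100 * datasheet_afr(1_500_000), 2))  # ~0.58 (percent per year)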
Data sets HPC1, HPC2 and HPC3 were collected in three large cluster systems at three different organizations using supercomputers. Data set HPC4 was collected on dozens of independently managed HPC sites, including supercomputing sites as well as commercial HPC sites. Data sets COM1, COM2, and COM3 were collected in at least three different cluster systems at a large internet service provider with many distributed and separately managed sites. In all cases, our data reports on only a portion of the computing systems run by each organization, as decided and selected by our sources. It is important to note that for some systems the number of drives in the system changed significantly during the data collection period. While the table provides only the disk count at the end of the data collection period, our analysis in the remainder of the paper accounts for the actual date of these changes in the number of drives. Second, some logs also record events other than replacements, hence the number of disk events given in the table is not necessarily equal to the number of replacements or failures. The ARR values for the data sets can therefore not be directly computed from Table 1. Below we describe each data set and the environment it comes from in more detail. HPC1 is a five year log of hardware replacements collected from a 765 node high-performance computing cluster. Each of the 765 nodes is a 4-way SMP with 4 GB of memory and three to four 18GB 10K rpm SCSI drives. Of these nodes, 64 are used as filesystem nodes containing, in addition to the three to four 18GB drives, 17 36GB 10K rpm SCSI drives. The applications running on this system are typically large-scale scientific simulations or visualization applications. The data contains, for each hardware replacement that was recorded during the five year lifetime of this system, when the problem started, which node and which hardware component was affected, and a brief description of the corrective action. HPC2 is a record of disk replacements observed on the compute nodes of a 256 node HPC cluster. Each node is a 4-way SMP with 16 GB of memory and contains two 36GB 10K rpm SCSI drives, except for eight of the nodes, which contain eight 36GB 10K rpm SCSI drives each. The applications running on this system are typically large-scale scientific simulations or visualization applications. For each disk replacement, the data set records the number of the affected node, the start time of the problem, and the slot number of the replaced drive. HPC3 is a record of disk replacements observed on a 1,532 node HPC cluster. Each node is equipped with eight CPUs and 32GB of memory. Each node, except for four login nodes, has two 146GB 15K rpm SCSI disks. In addition, 11,000 7200 rpm 250GB SATA drives are used in an external shared filesystem and 144 73GB 15K rpm SCSI drives are used for the filesystem metadata. The applications running on this system are typically large-scale scientific simulations or visualization applications. For each disk replacement, the data set records the day of the replacement. The HPC4 data set is a warranty service log of disk replacements. It covers three types of SATA drives used in dozens of separately managed HPC clusters. For the first type of drive, the data spans three years, for the other two types it spans a little less than a year. The data records, for each of the 13,618 drives, when it was first shipped and when (if ever) it was replaced in the field. 
COM1 is a log of hardware failures recorded by an internet service provider and drawing from multiple distributed sites. Each record in the data contains a timestamp of when the failure was repaired, information on the failure symptoms, and a list of steps that were taken to diagnose and repair the problem. The data does not contain information on when each failure actually happened, only when repair took place. The data covers a population of 26,734 10K rpm SCSI disk drives. The total number of servers in the monitored sites is not known.

COM2 is a warranty service log of hardware failures recorded on behalf of an internet service provider aggregating events in multiple distributed sites. Each failure record contains a repair code (e.g. ``Replace hard drive'') and the time when the repair was finished. Again there is no information on the start time of each failure. The log does not contain entries for failures of disks that were replaced in the customer site by hot-swapping in a spare disk, since the data was created by the warranty processing, which does not participate in on-site hot-swap replacements. To account for the missing disk replacements we obtained numbers for the periodic replenishments of on-site spare disks from the internet service provider. The size of the underlying system changed significantly during the measurement period, starting with 420 servers in 2004 and ending with 9,232 servers in 2006. We obtained quarterly hardware purchase records covering this time period to estimate the size of the disk population in our ARR analysis.

The COM3 data set comes from a large external storage system used by an internet service provider and comprises four populations of different types of FC disks (see Table 1). While this data was gathered in 2005, the system has some legacy components that were as old as from 1998 and were known to have been physically moved after initial installation. We did not include these ``obsolete'' disk replacements in our analysis. COM3 differs from the other data sets in that it provides only aggregate statistics of disk failures, rather than individual records for each failure. The data contains the counts of disks that failed and were replaced in 2005 for each of the four disk populations.

2.4 Statistical methods

We characterize an empirical distribution using two important metrics: the mean, and the squared coefficient of variation (C^2), i.e. the variance divided by the square of the mean. We also consider the empirical cumulative distribution function (CDF) and how well it is fit by four probability distributions commonly used in reliability theory: the exponential distribution; the Weibull distribution; the gamma distribution; and the lognormal distribution. We parameterize the distributions through maximum likelihood estimation and evaluate the goodness of fit by visual inspection, the negative log-likelihood and the chi-square tests.

We will also discuss the hazard rate of the distribution of time between replacements. In general, the hazard rate of a random variable t with probability density function f(t) and cumulative distribution function F(t) is defined as h(t) = f(t)/(1-F(t)) [25]. Intuitively, if the random variable denotes the time between failures, then the hazard rate describes how the instantaneous failure rate changes as a function of the time since the most recent failure. The hazard rate is often studied for the distribution of lifetimes. It is important to note that we will focus on the hazard rate of the time between disk replacements, and not the hazard rate of disk lifetime distributions.

Since we are interested in correlations between disk failures we need a measure for the degree of correlation.
The autocorrelation function (ACF) measures the correlation of a random variable with itself at different time lags. Another aspect of the failure process that we will study is long-range dependence. Long-range dependence measures the memory of a process, in particular how quickly the autocorrelation coefficient decays with growing lags. The strength of the long-range dependence is quantified by the Hurst exponent. A series exhibits long-range dependence if the Hurst exponent, H, is between 0.5 and 1. We use the tool described in [14] to obtain estimates of the Hurst parameter using five different methods: the absolute value method, the variance method, the R/S method, the periodogram method, and the Whittle estimator. A brief introduction to long-range dependence and a description of the Hurst parameter estimators is provided in [15].

3 Comparing disk replacement frequency with that of other hardware components

Table 3: Relative frequency of hardware component replacements for the ten most frequently replaced components in systems HPC1, COM1 and COM2, respectively. Abbreviations are taken directly from service data and are not known to have identical definitions across data sets.

┌─────────────────┬──────┬────────────────┬──────┬─────────────────┬──────┐
│ HPC1            │      │ COM1           │      │ COM2            │      │
├─────────────────┼──────┼────────────────┼──────┼─────────────────┼──────┤
│ Component       │ %    │ Component      │ %    │ Component       │ %    │
├─────────────────┼──────┼────────────────┼──────┼─────────────────┼──────┤
│ Hard drive      │ 30.6 │ Power supply   │ 34.8 │ Hard drive      │ 49.1 │
│ Memory          │ 28.5 │ Memory         │ 20.1 │ Motherboard     │ 23.4 │
│ Misc/Unk        │ 14.4 │ Hard drive     │ 18.1 │ Power supply    │ 10.1 │
│ CPU             │ 12.4 │ Case           │ 11.4 │ RAID card       │ 4.1  │
│ PCI motherboard │ 4.9  │ Fan            │ 8.0  │ Memory          │ 3.4  │
│ Controller      │ 2.9  │ CPU            │ 2.0  │ SCSI cable      │ 2.2  │
│ QSW             │ 1.7  │ SCSI Board     │ 0.6  │ Fan             │ 2.2  │
│ Power supply    │ 1.6  │ NIC Card       │ 1.2  │ CPU             │ 2.2  │
│ MLB             │ 1.0  │ LV Power Board │ 0.6  │ CD-ROM          │ 0.6  │
│ SCSI BP         │ 0.3  │ CPU heatsink   │ 0.6  │ Raid Controller │ 0.6  │
└─────────────────┴──────┴────────────────┴──────┴─────────────────┴──────┘

The reliability of a system depends on all its components, and not just the hard drive(s). A natural question is therefore what the relative frequency of drive failures is, compared to that of other types of hardware failures. To answer this question we consult data sets HPC1, COM1, and COM2, since these data sets contain records for all types of hardware replacements, not only disk replacements. Table 3 shows, for each data set, a list of the ten most frequently replaced hardware components and the fraction of replacements made up by each component.
We observe that while the actual fraction of disk replacements varies across the data sets (ranging from 20% to 50%), it makes up a significant fraction in all three cases. In the HPC1 and COM2 data sets, disk drives are the most commonly replaced hardware component accounting for 30% and 50% of all hardware replacements, respectively. In the COM1 data set, disks are a close runner-up accounting for nearly 20% of all hardware replacements.

While Table 3 suggests that disks are among the most commonly replaced hardware components, it does not necessarily imply that disks are less reliable or have a shorter lifespan than other hardware components. The number of disks in the systems might simply be much larger than that of other hardware components. In order to compare the reliability of different hardware components, we need to normalize the number of component replacements by the component's population size. Unfortunately, we do not have, for any of the systems, exact population counts of all hardware components. However, we do have enough information in HPC1 to estimate counts of the four most frequently replaced hardware components (CPU, memory, disks, motherboards). We estimate that there is a total of 3,060 CPUs, 3,060 memory dimms, and 765 motherboards, compared to a disk population of 3,406. Combining these numbers with the data in Table 3, we conclude that for the HPC1 system, the rate at which in five years of use a memory dimm was replaced is roughly comparable to that of a hard drive replacement; a CPU was about 2.5 times less often replaced than a hard drive; and a motherboard was 50% less often replaced than a hard drive.

Table 2: Node outages that were attributed to hardware problems broken down by the responsible hardware component. This includes all outages, not only those that required replacement of a hardware component.

┌──────────────────────┐
│ HPC1                 │
├─────────────────┬────┤
│ Component       │ %  │
├─────────────────┼────┤
│ CPU             │ 44 │
│ Memory          │ 29 │
│ Hard drive      │ 16 │
│ PCI motherboard │ 9  │
│ Power supply    │ 2  │
└─────────────────┴────┘

The above discussion covers only failures that required a hardware component to be replaced. When running a large system one is often interested in any hardware failure that causes a node outage, not only those that necessitate a hardware replacement. We therefore obtained the HPC1 troubleshooting records for any node outage that was attributed to a hardware problem, including problems that required hardware replacements as well as problems that were fixed in some other way. Table 2 gives a breakdown of all records in the troubleshooting data, broken down by the hardware component that was identified as the root cause. We observe that 16% of all outage records pertain to disk drives (compared to 30% in Table 3), making it the third most common root cause reported in the data. The two most commonly reported outage root causes are CPU and memory, with 44% and 29%, respectively.

For a complete picture, we also need to take the severity of an anomalous event into account. A closer look at the HPC1 troubleshooting data reveals that a large number of the problems attributed to CPU and memory failures were triggered by parity errors, i.e. the number of errors is too large for the embedded error correcting code to correct them. In those cases, a simple reboot will bring the affected node back up.
On the other hand, the majority of the problems that were attributed to hard disks (around 90%) led to a drive replacement, which is a more expensive and time-consuming repair.

Ideally, we would like to compare the frequency of hardware problems that we report above with the frequency of other types of problems, such as software failures, network problems, etc. Unfortunately, we do not have this type of information for the systems in Table 1. However, in recent work [27] we have analyzed failure data covering any type of node outage, including those caused by hardware, software, network problems, environmental problems, or operator mistakes. The data was collected over a period of 9 years on more than 20 HPC clusters and contains detailed root cause information. We found that, for most HPC systems in this data, more than 50% of all outages are attributed to hardware problems and around 20% of all outages are attributed to software problems. Consistent with the data in Table 2, the two most common hardware components to cause a node outage are memory and CPU. The data of this recent study [27] is not used in this paper because it does not contain information about storage replacements.

4 Disk replacement rates

Figure 1: Comparison of datasheet AFRs (solid and dashed line in the graph) and ARRs observed in the field. Each bar in the graph corresponds to one row in Table 1. The dotted line represents the weighted average over all data sets. Only disks within the nominal lifetime of five years are included, i.e. there is no bar for the COM3 drives that were deployed in 1998. The third bar for COM3 in the graph is cut off - its ARR is 13.5%.

In the following, we study how field experience with disk replacements compares to datasheet specifications of disk reliability. Figure 1 shows the datasheet AFRs (horizontal solid and dashed line), the observed ARRs for each of the seven data sets, and the weighted average ARR for all disks less than five years old (dotted line). For HPC1, HPC3, HPC4 and COM3, which cover different types of disks, the graph contains several bars, one for each type of disk, in the left-to-right order of the corresponding top-to-bottom entries in Table 1.

Since at this point we are not interested in wearout effects after the end of a disk's nominal lifetime, we have included in Figure 1 only data for drives within their nominal lifetime of five years. In particular, we do not include a bar for the fourth type of drives in COM3 (see Table 1), which were deployed in 1998 and were more than seven years old at the end of the data collection. These possibly ``obsolete'' disks experienced an ARR of 24% during the measurement period. Since these drives are well outside the vendor's nominal lifetime for disks, it is not surprising that the disks might be wearing out. All other drives were within their nominal lifetime and are included in the figure.

Figure 1 shows a significant discrepancy between the observed ARR and the datasheet AFR for all data sets. While the datasheet AFRs are between 0.58% and 0.88%, the observed ARRs range from 0.5% to as high as 13.5%. That is, the observed ARRs, by data set and type, are up to a factor of 15 higher than datasheet AFRs. Most commonly, the observed ARR values are in the 3% range. For example, the data for HPC1, which covers almost exactly the entire nominal lifetime of five years, exhibits an ARR of 3.4% (significantly higher than the datasheet AFR of 0.88%).
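To make the relationship between these quantities concrete, the short Python sketch below converts a datasheet MTTF into the implied annual failure rate and computes an annual replacement rate from field counts. Only the 1,000,000-hour MTTF figure comes from the text; the replacement and drive counts in the example call are placeholders, and the per-drive-year formula is one common way of computing an ARR rather than necessarily the exact procedure used for Figure 1.

    # Sketch: datasheet MTTF -> AFR, and field counts -> ARR (both in percent).
    HOURS_PER_YEAR = 8760.0

    def datasheet_afr(mttf_hours):
        """Annual failure rate implied by a datasheet MTTF."""
        return 100.0 * HOURS_PER_YEAR / mttf_hours

    def observed_arr(replacements, drives, years):
        """Annual replacement rate observed in the field, per drive-year."""
        return 100.0 * replacements / (drives * years)

    print(datasheet_afr(1_000_000))      # ~0.88%, the datasheet AFR shown in Figure 1
    print(observed_arr(500, 3000, 5))    # placeholder counts -> ARR of ~3.3%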
The average ARR over all data sets (weighted by the number of drives in each data set) is 3.01%. Even after removing all COM3 data, which exhibits the highest ARRs, the average ARR was still 2.86%, 3.3 times higher than 0.88%.

It is interesting to observe that for these data sets there is no significant discrepancy between replacement rates for SCSI and FC drives, commonly represented as the most reliable types of disk drives, and SATA drives, frequently described as lower quality. For example, the ARRs of drives in the HPC4 data set, which are exclusively SATA drives, are among the lowest of all data sets. Moreover, the HPC3 data set includes both SCSI and SATA drives (as part of the same system in the same operating environment) and they have nearly identical replacement rates. Of course, these HPC3 SATA drives were decommissioned because of media error rates attributed to lubricant breakdown (recall Section 2.1), our only evidence of a bad batch, so perhaps more data is needed to better understand the impact of batches on overall quality.

It is also interesting to observe that the only drives that have an observed ARR below the datasheet AFR are the second and third type of drives in data set HPC4. One possible reason might be that these are relatively new drives, all less than one year old (recall Table 1). Also, these ARRs are based on only 16 replacements, perhaps too little data to draw a definitive conclusion.

A natural question arises: why are the observed disk replacement rates so much higher in the field data than the datasheet MTTF would suggest, even for drives in the first years of operation? As discussed in Sections 2.1 and 2.2, there are multiple possible reasons. First, customers and vendors might not always agree on the definition of when a drive is ``faulty''. The fact that a disk was replaced implies that it failed some (possibly customer-specific) health test. When a health test is conservative, it might lead to replacing a drive that the vendor's tests would find to be healthy. Note, however, that even if we scale down the ARRs in Figure 1 to 57% of their actual values, to estimate the fraction of drives returned to the manufacturer that fail the latter's health test [1], the resulting AFR estimates are still more than a factor of two higher than datasheet AFRs in most cases.

Second, datasheet MTTFs are typically determined based on accelerated (stress) tests, which make certain assumptions about the operating conditions under which the disks will be used (e.g. that the temperature will always stay below some threshold), the workloads and ``duty cycles'' or powered-on hours patterns, and that certain data center handling procedures are followed. In practice, operating conditions might not always be as ideal as assumed in the tests used to determine datasheet MTTFs. A more detailed discussion of factors that can contribute to a gap between expected and measured drive reliability is given by Elerath and Shah [6].

Below we summarize the key observations of this section.

Observation 1: Variance between datasheet MTTF and disk replacement rates in the field was larger than we expected. The weighted average ARR was 3.4 times larger than 0.88%, corresponding to a datasheet MTTF of 1,000,000 hours.

Observation 2: For older systems (5-8 years of age), data sheet MTTFs underestimated replacement rates by as much as a factor of 30.
Observation 3: Even during the first few years of a system's lifetime (less than five years), field replacement rates were larger than what datasheet MTTFs suggested, by a factor of 2-10.

Observation 4: In our data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks. This may indicate that disk-independent factors, such as operating conditions, usage and environmental factors, affect replacement rates more than component specific factors. However, the only evidence we have of a bad batch of disks was found in a collection of SATA disks experiencing high media error rates. We have too little data on bad batches to estimate the relative frequency of bad batches by type of disk, although there is plenty of anecdotal evidence that bad batches are not unique to SATA disks.

4.2 Age-dependent replacement rates

Figure 2: Lifecycle failure pattern for hard drives [33].

One aspect of disk failures that single-value metrics such as MTTF and AFR cannot capture is that in real life failure rates are not constant [5]. Failure rates of hardware products typically follow a ``bathtub curve'' with high failure rates at the beginning (infant mortality) and the end (wear-out) of the lifecycle. Figure 2 shows the failure rate pattern that is expected for the life cycle of hard drives [4,5,33]. According to this model, the first year of operation is characterized by early failures (or infant mortality). In years 2-5, the failure rates are approximately in steady state, and then, after years 5-7, wear-out starts to kick in.

The common concern that MTTFs do not capture infant mortality has led the International Disk drive Equipment and Materials Association (IDEMA) to propose a new standard for specifying disk drive reliability, based on the failure model depicted in Figure 2 [5,33]. The new standard requests that vendors provide four different MTTF estimates, one for the first 1-3 months of operation, one for months 4-6, one for months 7-12, and one for months 13-60.

The goal of this section is to study, based on our field replacement data, how disk replacement rates in large-scale installations vary over a system's life cycle. Note that we only see customer-visible replacements. Any infant mortality failures caught in manufacturing, system integration or installation testing are probably not recorded in production replacement logs.

The best data sets to study replacement rates across the system life cycle are HPC1 and the first type of drives of HPC4. The reason is that these data sets span a long enough time period (5 and 3 years, respectively) and each covers a reasonably homogeneous hard drive population, allowing us to focus on the effect of age.

Figure 3: ARR for the first five years of system HPC1's lifetime, for the compute nodes (left) and the file system nodes (middle). ARR for the first type of drives in HPC4 as a function of drive age in years (right).

Figure 4: ARR per month over the first five years of system HPC1's lifetime, for the compute nodes (left) and the file system nodes (middle). ARR for the first type of drives in HPC4 as a function of drive age in months (right).

We study the change in replacement rates as a function of age at two different time granularities, on a per-month and a per-year basis, to make it easier to detect both short-term and long-term trends. Figure 3 shows the annual replacement rates for the disks in the compute nodes of system HPC1 (left), the file system nodes of system HPC1 (middle) and the first type of HPC4 drives (right), at a yearly granularity. We make two interesting observations.
First, replacement rates in all years, except for year 1, are larger than the datasheet MTTF would suggest. For example, in HPC1's second year, replacement rates are 20% larger than expected for the file system nodes, and a factor of two larger than expected for the compute nodes. In year 4 and year 5 (which are still within the nominal lifetime of these disks), the actual replacement rates are 7-10 times higher than the failure rates we expected based on datasheet MTTF.

The second observation is that replacement rates rise significantly over the years, even during early years in the lifecycle. Replacement rates in HPC1 nearly double from year 1 to year 2, and again from year 2 to year 3. This observation suggests that wear-out may start much earlier than expected, leading to steadily increasing replacement rates during most of a system's useful life. This is an interesting observation because it does not agree with the common assumption that after the first year of operation, failure rates reach a steady state for a few years, forming the ``bottom of the bathtub''.

Next, we move to the per-month view of replacement rates, shown in Figure 4. We observe that for the HPC1 file system nodes there are no replacements during the first 12 months of operation, i.e. there is no detectable infant mortality. For HPC4, the ARR of drives is not higher in the first few months of the first year than in the last few months of the first year. In the case of the HPC1 compute nodes, infant mortality is limited to the first month of operation and is not above the steady-state estimate of the datasheet MTTF. Looking at the lifecycle after month 12, we again see continuously rising replacement rates, instead of the expected ``bottom of the bathtub''.

Below we summarize the key observations of this section.

Observation 5: Contrary to common and proposed models, hard drive replacement rates do not enter steady state after the first year of operation. Instead, replacement rates seem to steadily increase over time.

Observation 6: Early onset of wear-out seems to have a much stronger impact on lifecycle replacement rates than infant mortality, as experienced by end customers, even when considering only the first three or five years of a system's lifetime. We therefore recommend that wear-out be incorporated into new standards for disk drive reliability. The new standard suggested by IDEMA does not take wear-out into account [5,33].

5 Statistical properties of disk failures

In the previous sections, we have focused on aggregate statistics, e.g. the average number of disk replacements in a time period. Often one wants more information on the statistical properties of the time between failures than just the mean. For example, determining the expected time to failure for a RAID system requires an estimate of the probability of experiencing a second disk failure in a short period, that is, while reconstructing lost data from redundant data. This probability depends on the underlying probability distribution and may be poorly estimated by scaling an annual failure rate down to a few hours.

The most common assumption about the statistical characteristics of disk failures is that they form a Poisson process, which implies two key properties:

1. Failures are independent.

2. The time between failures follows an exponential distribution.

The goal of this section is to evaluate how realistic the above assumptions are. We begin by providing statistical evidence that disk failures in the real world are unlikely to follow a Poisson process.
We then examine each of the two key properties (independent failures and exponential time between failures) independently and characterize in detail how and where the Poisson assumption breaks. In our study, we focus on the HPC1 data set, since this is the only data set that contains precise timestamps for when a problem was detected (rather than just timestamps for when repair took place).

5.1 The Poisson assumption

Figure 5: CDF of the number of disk replacements per month in HPC1, computed across the entire lifetime of HPC1 (left) and computed for only years 2-3 (right).

The Poisson assumption implies that the number of failures during a given time interval (e.g. a week or a month) is distributed according to the Poisson distribution. Figure 5 (left) shows the empirical CDF of the number of disk replacements observed per month in the HPC1 data set, together with the Poisson distribution fit to the data's observed mean. We find that the Poisson distribution does not provide a good visual fit for the number of disk replacements per month in the data, in particular for very small and very large numbers of replacements in a month. For example, the Poisson distribution significantly underestimates the probability of seeing months with a very large number of replacements. A chi-square test reveals that we can reject the hypothesis that the number of disk replacements per month follows a Poisson distribution at the 0.05 significance level. All of the above results are similar when looking at the distribution of the number of disk replacements per day or per week, rather than per month.

One reason for the poor fit of the Poisson distribution might be that failure rates are not steady over the lifetime of HPC1. We therefore repeat the same process for only part of HPC1's lifetime. Figure 5 (right) shows the distribution of disk replacements per month, using only data from years 2 and 3 of HPC1. The Poisson distribution achieves a better fit for this time period and the chi-square test cannot reject the Poisson hypothesis at a significance level of 0.05. Note, however, that this does not necessarily mean that the failure process during years 2 and 3 does follow a Poisson process, since this would also require the two key properties of a Poisson process (independent failures and exponential time between failures) to hold. We study these two properties in detail in the next two sections.

5.2 Correlations

In this section, we focus on the first key property of a Poisson process, the independence of failures. Intuitively, it is clear that in practice failures of disks in the same system are never completely independent. The failure probability of disks depends on many factors, such as environmental conditions like temperature, that are shared by all disks in the system. When the temperature in a machine room is far outside nominal values, all disks in the room experience a higher than normal probability of failure. The goal of this section is to statistically quantify and characterize the correlation between disk replacements.

We start with a simple test in which we determine the correlation of the number of disk replacements observed in successive weeks or months, by computing the correlation coefficient between the number of replacements in a given week or month and that in the previous week or month. For data coming from a Poisson process we would expect correlation coefficients to be close to 0. Instead, we find significant levels of correlation, both at the monthly and the weekly level.
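A minimal sketch of this successive-interval test is given below in Python with NumPy. It assumes the weekly replacement counts are available as a one-dimensional array; that layout and the variable name are assumptions made for illustration, not a description of the original data format.

    # Sketch of the successive-interval correlation test described above.
    import numpy as np

    def lag1_correlation(weekly):
        """Correlation between each interval's replacement count and the previous one."""
        weekly = np.asarray(weekly, dtype=float)
        return np.corrcoef(weekly[:-1], weekly[1:])[0, 1]

    # For counts generated by a Poisson process the result should be close to 0;
    # values well above 0 mean a busy (or quiet) week tends to follow another one.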
The correlation coefficient between consecutive weeks is 0.72, and the correlation coefficient between consecutive months is 0.79. Repeating the same test using only the data of one year at a time, we still find significant levels of correlation, with correlation coefficients of 0.4-0.8.

Statistically, the above correlation coefficients indicate a strong correlation, but it would be nice to have a more intuitive interpretation of this result. One way of thinking of the correlation of failures is that the failure rate in one time interval is predictive of the failure rate in the following time interval. To test the strength of this prediction, we assign each week in HPC1's life to one of three buckets, depending on the number of disk replacements observed during that week, creating a bucket for weeks with small, medium, and large numbers of replacements, respectively.^1 The expectation is that a week that follows a week with a ``small'' number of disk replacements is more likely to see a small number of replacements than a week that follows a week with a ``large'' number of replacements. However, if failures are independent, the number of replacements in a week will not depend on the number in a prior week.

Figure 7: Expected number of disk replacements in a week depending on the number of disk replacements in the previous week, computed across the entire lifetime of HPC1 (left) and computed for only year 3 (right).

Figure 7 (left) shows the expected number of disk replacements in a week of HPC1's lifetime as a function of which bucket the preceding week falls in. We observe that the expected number of disk replacements in a week varies by a factor of 9, depending on whether the preceding week falls into the first or third bucket, while we would expect no variation if failures were independent. When repeating the same process on the data of only year 3 of HPC1's lifetime, we see a difference of close to a factor of 2 between the first and third bucket.

Figure 6: Autocorrelation function for the number of disk replacements per week, computed across the entire lifetime of the HPC1 system (left) and computed across only one year of HPC1's operation (right).

So far, we have only considered correlations between successive time intervals, e.g. between two successive weeks. A more general way to characterize correlations is to study correlations at different time lags by using the autocorrelation function. Figure 6 (left) shows the autocorrelation function for the number of disk replacements per week computed across the HPC1 data set. For a stationary failure process (e.g. data coming from a Poisson process) the autocorrelation would be close to zero at all lags. Instead, we observe strong autocorrelation even for large lags in the range of 100 weeks (nearly 2 years).

We repeated the same autocorrelation test for only parts of HPC1's lifetime and find similar levels of autocorrelation. Figure 6 (right), for example, shows the autocorrelation function computed only on the data of the third year of HPC1's life. Correlation is significant for lags in the range of up to 30 weeks.

Another measure of dependency is long-range dependence, as quantified by the Hurst exponent (recall Section 2.4). Applying the Hurst parameter estimators described in Section 2.4 to the HPC1 data, we determine a Hurst exponent between 0.6-0.8 at the weekly granularity. These values are comparable to Hurst exponents reported for Ethernet traffic, which is known to exhibit strong long-range dependence [16].

Observation 7: Disk replacement counts exhibit significant levels of autocorrelation.
Observation 8: Disk replacement counts exhibit long-range dependence.

5.3 Distribution of time between failure

Figure 8: Distribution of time between disk replacements across all nodes in HPC1.

In this section, we focus on the second key property of a Poisson failure process, the exponentially distributed time between failures. Figure 8 shows the empirical cumulative distribution function of the time between disk replacements as observed in the HPC1 system, and four distributions matched to it. We find that visually the gamma and Weibull distributions are the best fit to the data, while the exponential and lognormal distributions provide a poorer fit. This agrees with the negative log-likelihood values, which indicate that the Weibull distribution is the best fit, closely followed by the gamma distribution. Performing a chi-square test, we can reject the hypothesis that the underlying distribution is exponential or lognormal at a significance level of 0.05. On the other hand, the hypothesis that the underlying distribution is a Weibull or a gamma distribution cannot be rejected at a significance level of 0.05.

Figure 8 (right) shows a close-up of the empirical CDF and the distributions matched to it, for small time-between-replacement values (less than 24 hours). The reason that this area is particularly interesting is that a key application of the exponential assumption is in estimating the time until data loss in a RAID system. This time depends on the probability of a second disk failure during reconstruction, a process which typically lasts on the order of a few hours. The graph shows that the exponential distribution greatly underestimates the probability of a second failure during this time period. For example, the probability of seeing two drives in the cluster fail within one hour is four times larger under the real data than under the exponential distribution. The probability of seeing two drives in the cluster fail within the same 10 hours is two times larger under the real data than under the exponential distribution.

Figure 9: Distribution of time between disk replacements across all nodes in HPC1 for only year 3 of operation.

The poor fit of the exponential distribution might be due to the fact that failure rates change over the lifetime of the system, creating variability in the observed times between disk replacements that the exponential distribution cannot capture. We therefore repeated the above analysis considering only segments of HPC1's lifetime. Figure 9 shows, as one example, the results from analyzing the time between disk replacements in year 3 of HPC1's operation. While visually the exponential distribution now seems a slightly better fit, we can still reject the hypothesis of an underlying exponential distribution at a significance level of 0.05. The same holds for other 1-year and even 6-month segments of HPC1's lifetime. This leads us to believe that even during shorter segments of HPC1's lifetime the time between replacements is not realistically modeled by an exponential distribution.

While it might not come as a surprise that the simple exponential distribution does not provide as good a fit as the more flexible two-parameter distributions, an interesting question is what properties of the empirical time between failures make it different from a theoretical exponential distribution. We identify as a first differentiating feature that the data exhibits higher variability than a theoretical exponential distribution.
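The variability and distribution-fitting analysis discussed next can be sketched in Python with SciPy as shown below. The synthetic interarrival times are placeholders standing in for the real HPC1 replacement data, and the fitting calls illustrate the general approach rather than the exact procedure behind the figures.

    # Sketch: variability and distribution fits for times between replacements.
    import numpy as np
    from scipy import stats

    # Placeholder interarrival times (hours); the real analysis would use the
    # observed gaps between successive disk replacements in the cluster.
    rng = np.random.default_rng(0)
    gaps = rng.weibull(0.75, size=1000) * 200.0

    def squared_cv(x):
        """Squared coefficient of variation; equals 1 for an exponential distribution."""
        return np.var(x) / np.mean(x) ** 2

    print(squared_cv(gaps))                                # values > 1 indicate higher variability

    # Maximum-likelihood fits; floc=0 pins the location parameter at zero.
    shape, _, scale = stats.weibull_min.fit(gaps, floc=0)  # shape < 1 => decreasing hazard rate
    rate = 1.0 / np.mean(gaps)                             # MLE of the exponential rate
    print(shape, scale, rate)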
For an exponential distribution the squared coefficient of variation equals 1; the squared coefficient of variation of the observed times between replacements is noticeably larger than that. A second differentiating feature is that the time between disk replacements in the data exhibits decreasing hazard rates. Recall from Section 2.4 that the hazard rate function measures how the time since the last failure influences the expected time until the next failure. An increasing hazard rate function predicts that if the time since a failure is long then the next failure is coming soon, while a decreasing hazard rate function predicts the reverse.

The table below summarizes the parameters for the Weibull and gamma distributions that provided the best fit to the data.

│                        │ Weibull          │ Gamma            │
│                        │ Shape  │ Scale   │ Shape  │ Scale   │
│ HPC1 compute nodes     │ 0.73   │ 0.037   │ 0.65   │ 176.4   │
│ HPC1 filesystem nodes  │ 0.76   │ 0.013   │ 0.64   │ 482.6   │
│ All HPC1 nodes         │ 0.71   │ 0.049   │ 0.59   │ 160.9   │

Disk replacements in the filesystem nodes, as well as in the compute nodes, and across all nodes, are best fit by gamma and Weibull distributions with a shape parameter less than 1, a clear indicator of decreasing hazard rates.

Figure 10: Illustration of decreasing hazard rates.

Figure 10 illustrates the decreasing hazard rates of the time between replacements by plotting the expected remaining time until the next disk replacement (Y-axis) as a function of the time since the last disk replacement (X-axis). We observe that right after a disk was replaced, the expected time until the next disk replacement was around 4 days, both for the empirical data and the exponential distribution. In the case of the empirical data, after surviving for ten days without a disk replacement the expected remaining time until the next replacement had grown from initially 4 to 10 days; and after surviving for a total of 20 days without disk replacements the expected time until the next failure had grown to 15 days. In comparison, under an exponential distribution the expected remaining time stays constant (also known as the memoryless property).

Note that the above result is not in contradiction with the increasing replacement rates we observed in Section 4.2 as a function of drive age, since here we look at the distribution of the time between disk replacements in a cluster, not at disk lifetime distributions (i.e. how long a drive lived until it was replaced).

Observation 9: The hypothesis that time between disk replacements follows an exponential distribution can be rejected with high confidence.

Observation 10: The time between disk replacements has a higher variability than that of an exponential distribution.

Observation 11: The distribution of time between disk replacements exhibits decreasing hazard rates, that is, the expected remaining time until the next disk replacement grows with the time since the last disk replacement.

6 Related work

There is very little work published on analyzing failures in real, large-scale storage systems, probably as a result of the reluctance of the owners of such systems to release failure data. Among the few existing studies is the work by Talagala et al. [29], which provides a study of error logs in a research prototype storage system used for a web server and includes a comparison of failure rates of different hardware components. They identify SCSI disk enclosures as the least reliable components and SCSI disks as one of the most reliable components, which differs from our results. In a recently initiated effort, Schwarz et al.
[28] have started to gather failure data at the Internet Archive, which they plan to use to study disk failure rates and bit rot rates and how they are affected by different environmental parameters. In their preliminary results, they report ARR values of 2-6% and note that the Internet Archive does not seem to see significant infant mortality. Both observations are in agreement with our findings.

Gray [31] reports the frequency of uncorrectable read errors in disks and finds that their numbers are smaller than vendor data sheets suggest. Gray also provides ARR estimates for SCSI and ATA disks, in the range of 3-6%, which is in the range of ARRs that we observe for SCSI drives in our data sets.

Pinheiro et al. analyze disk replacement data from a large population of serial and parallel ATA drives [23]. They report ARR values ranging from 1.7% to 8.6%, which agrees with our results. The focus of their study is on the correlation between various system parameters and drive failures. They find that while temperature and utilization exhibit much less correlation with failures than expected, the values of several SMART counters correlate highly with failures. For example, they report that after a scrub error drives are 39 times more likely to fail within 60 days than drives without scrub errors, and that 44% of all failed drives had increased SMART counts in at least one of four specific counters.

Many have criticized the accuracy of MTTF-based failure rate predictions and have pointed out the need for more realistic models. A particular concern is the fact that a single MTTF value cannot capture life cycle patterns [4,5,33]. Our analysis of life cycle patterns shows that this concern is justified, since we find failure rates to vary quite significantly over even the first two to three years of the life cycle. However, the most common life cycle concern in published research is underrepresenting infant mortality. Our analysis does not support this. Instead we observe significant underrepresentation of the early onset of wear-out.

Early work on RAID systems [8] provided some statistical analysis of time between disk failures for disks used in the 1980s, but did not find sufficient evidence to reject the hypothesis of exponential times between failure with high confidence. However, time between failure has been analyzed for other, non-storage data in several studies [11,17,26,27,30,32]. Four of the studies use distribution fitting and find the Weibull distribution to be a good fit [11,17,27,32], which agrees with our results. All studies looked at the hazard rate function, but come to different conclusions. Four of them [11,17,27,32] find decreasing hazard rates (Weibull shape parameter less than 1), while others find hazard rates that are constant [30] or increasing [26]. We find decreasing hazard rates with a Weibull shape parameter of 0.7-0.8.

Large-scale failure studies are scarce, even when considering IT systems in general and not just storage systems. Most existing studies are limited to only a few months of data, covering typically only a few hundred failures [13,20,21,26,30,32]. Many of the most commonly cited studies on failure analysis stem from the late 80's and early 90's, when computer systems were significantly different from today [9,10,12,17,18,19,30].

7 Conclusion

Many have pointed out the need for a better understanding of what disk failures look like in the field. Yet hardly any published work exists that provides a large-scale study of disk failures in production systems.
As a first step towards closing this gap, we have analyzed disk replacement data from a number of large production systems, spanning more than 100,000 drives from at least four different vendors, including drives with SCSI, FC and SATA interfaces. Below is a summary of a few of our results.

• Large-scale installation field usage appears to differ widely from nominal datasheet MTTF conditions. The field replacement rates of systems were significantly larger than we expected based on datasheet MTTFs.

• For drives less than five years old, field replacement rates were larger than what the datasheet MTTF suggested by a factor of 2-10. For five to eight year old drives, field replacement rates were a factor of 30 higher than what the datasheet MTTF suggested.

• Changes in disk replacement rates during the first five years of the lifecycle were more dramatic than often assumed. While replacement rates are often expected to be in steady state in years 2-5 of operation (the bottom of the ``bathtub curve''), we observed a continuous increase in replacement rates, starting as early as in the second year of operation.

• In our data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks. This may indicate that disk-independent factors, such as operating conditions, usage and environmental factors, affect replacement rates more than component specific factors. However, the only evidence we have of a bad batch of disks was found in a collection of SATA disks experiencing high media error rates. We have too little data on bad batches to estimate the relative frequency of bad batches by type of disk, although there is plenty of anecdotal evidence that bad batches are not unique to SATA disks.

• The common concern that MTTFs underrepresent infant mortality has led to the proposal of new standards that incorporate infant mortality [33]. Our findings suggest that underrepresentation of the early onset of wear-out is a much more serious factor than underrepresentation of infant mortality, and we recommend including wear-out in new standards.

• While many have suspected that the commonly made assumption of exponentially distributed time between failures/replacements is not realistic, previous studies have not found enough evidence to prove this assumption wrong with significant statistical confidence [8]. Based on our data analysis, we are able to reject the hypothesis of exponentially distributed time between disk replacements with high confidence. We suggest that researchers and designers use field replacement data, when possible, or two-parameter distributions, such as the Weibull distribution.

• We identify higher levels of variability and decreasing hazard rates as the key features that distinguish the empirical distribution of time between disk replacements from the exponential distribution. We find that the empirical distributions are fit well by a Weibull distribution with a shape parameter between 0.7 and 0.8.

• We also present strong evidence for the existence of correlations between disk replacement interarrivals. In particular, the empirical data exhibits significant levels of autocorrelation and long-range dependence.

Acknowledgments

We would like to thank Jamez Nunez and Gary Grider from the High Performance Computing Division at Los Alamos National Lab and Katie Vargo, J. Ray Scott and Robin Flaus from the Pittsburgh Supercomputing Center for collecting and providing us with data and helping us to interpret the data.
We also thank the other people and organizations who have provided us with data but would like to remain unnamed. For discussions relating to the use of high end systems, we would like to thank Mark Seager and Dave Fox of the Lawrence Livermore National Lab. Thanks go also to the anonymous reviewers and our shepherd, Mary Baker, for the many useful comments that helped improve the paper. We thank the members and companies of the PDL Consortium (including APC, Cisco, EMC, Hewlett-Packard, Hitachi, IBM, Intel, Network Appliance, Oracle, Panasas, Seagate, and Symantec) for their interest and support. This material is based upon work supported by the Department of Energy under Award Number DE-FC02-06ER25767^2 and on research sponsored in part by the Army Research Office, under agreement number

Footnotes

^1 More precisely, we choose the cutoffs between the buckets such that each bucket contains the same number of samples (i.e. weeks), by using the 33rd percentile and the 66th percentile of the empirical distribution as cutoffs between the buckets.

^2 This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.

References

[1] Personal communication with Dan Dummer, Andrei Khurshudov, Erik Riedel, Ron Watts of Seagate, 2006.

[2] G. Cole. Estimating drive reliability in desktop computers and consumer electronics systems. TP-338.1. Seagate.

[3] Peter F. Corbett, Robert English, Atul Goel, Tomislav Grcanac, Steven Kleiman, James Leong, and Sunitha Sankar. Row-diagonal parity for double disk failure correction. In Proc. of the FAST '04 Conference on File and Storage Technologies, 2004.

[4] J. G. Elerath. AFR: problems of definition, calculation and measurement in a commercial environment. In Proc. of the Annual Reliability and Maintainability Symposium, 2000.

[5] J. G. Elerath. Specifying reliability in the disk drive industry: No more MTBFs. In Proc. of the Annual Reliability and Maintainability Symposium, 2000.

[6] J. G. Elerath and S. Shah. Server class drives: How reliable are they? In Proc. of the Annual Reliability and Maintainability Symposium, 2004.

[7] Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. The Google file system. In Proc. of the 19th ACM Symposium on Operating Systems Principles (SOSP'03), 2003.

[8] Garth A. Gibson. Redundant disk arrays: Reliable, parallel secondary storage. Dissertation. MIT Press.

[9] J. Gray. Why do computers stop and what can be done about it. In Proc. of the 5th Symposium on Reliability in Distributed Software and Database Systems, 1986.

[10] J. Gray. A census of tandem system availability between 1985 and 1990. IEEE Transactions on Reliability, 39(4), 1990.

[11] T. Heath, R. P. Martin, and T. D. Nguyen. Improving cluster availability using workstation validation. In Proc. of the 2002 ACM SIGMETRICS international conference on Measurement and modeling of computer systems, 2002.

[12] R. K. Iyer, D. J. Rossetti, and M. C. Hsueh. Measurement and modeling of computer reliability as affected by system activity. ACM Trans. Comput. Syst., 4(3), 1986.

[13] M. Kalyanakrishnam, Z. Kalbarczyk, and R. Iyer. Failure data analysis of a LAN of Windows NT based computers. In Proc. of the 18th IEEE Symposium on Reliable Distributed Systems, 1999.

[14] T. Karagiannis. Selfis: A short tutorial. Technical report, University of California, Riverside, 2002.

[15] Thomas Karagiannis, Mart Molle, and Michalis Faloutsos. Long-range dependence: Ten years of internet traffic modeling. IEEE Internet Computing, 08(5), 2004.

[16] Will E. Leland, Murad S. Taqqu, Walter Willinger, and Daniel V. Wilson. On the self-similar nature of ethernet traffic. IEEE/ACM Transactions on Networking, 2(1), 1994.

[17] T.-T. Y. Lin and D. P. Siewiorek. Error log analysis: Statistical modeling and heuristic trend analysis. IEEE Transactions on Reliability, 39(4), 1990.

[18] J. Meyer and L. Wei. Analysis of workload influence on dependability. In Proc. International Symposium on Fault-Tolerant Computing, 1988.

[19] B. Murphy and T. Gent. Measuring system and software reliability using an automated data collection process. Quality and Reliability Engineering International, 11(5), 1995.

[20] D. Nurmi, J. Brevik, and R. Wolski. Modeling machine availability in enterprise and wide-area distributed computing environments. In Euro-Par'05, 2005.

[21] D. L. Oppenheimer, A. Ganapathi, and D. A. Patterson. Why do internet services fail, and what can be done about it? In USENIX Symposium on Internet Technologies and Systems, 2003.

[22] David Patterson, Garth Gibson, and Randy Katz. A case for redundant arrays of inexpensive disks (RAID). In Proc. of the ACM SIGMOD International Conference on Management of Data, 1988.

[23] E. Pinheiro, W. D. Weber, and L. A. Barroso. Failure trends in a large disk drive population. In Proc. of the FAST '07 Conference on File and Storage Technologies, 2007.

[24] Vijayan Prabhakaran, Lakshmi N. Bairavasundaram, Nitin Agrawal, Haryadi S. Gunawi, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. Iron file systems. In Proc. of the 20th ACM Symposium on Operating Systems Principles (SOSP'05), 2005.

[25] Sheldon M. Ross. Introduction to probability models. 6th edition. Academic Press.

[26] R. K. Sahoo, A. Sivasubramaniam, M. S. Squillante, and Y. Zhang. Failure data analysis of a large-scale heterogeneous server environment. In Proc. of the 2004 International Conference on Dependable Systems and Networks (DSN'04), 2004.

[27] B. Schroeder and G. Gibson. A large-scale study of failures in high-performance computing systems. In Proc. of the 2006 International Conference on Dependable Systems and Networks (DSN'06), 2006.

[28] T. Schwarz, M. Baker, S. Bassi, B. Baumgart, W. Flagg, C. van Ingen, K. Joste, M. Manasse, and M. Shah. Disk failure investigations at the internet archive. In Work-in-Progress session, NASA/IEEE Conference on Mass Storage Systems and Technologies (MSST2006), 2006.

[29] Nisha Talagala and David Patterson. An analysis of error behaviour in a large storage system. In The IEEE Workshop on Fault Tolerance in Parallel and Distributed Systems, 1999.

[30] D. Tang, R. K. Iyer, and S. S. Subramani. Failure analysis and modelling of a VAX cluster system. In Proc. International Symposium on Fault-tolerant computing, 1990.

[31] C. van Ingen and J. Gray. Empirical measurements of disk failure rates and error rates. MSR-TR-2005-166, 2005.

[32] J. Xu, Z. Kalbarczyk, and R. K. Iyer. Networked Windows NT system field failure data analysis. In Proc. of the 1999 Pacific Rim International Symposium on Dependable Computing, 1999.

[33] Jimmy Yang and Feng-Bin Sun. A comprehensive review of hard-disk drive reliability. In Proc. of the Annual Reliability and Maintainability Symposium, 1999.
{"url":"https://www.usenix.org/legacy/event/fast07/tech/schroeder/schroeder_html/","timestamp":"2014-04-19T12:02:42Z","content_type":null,"content_length":"107328","record_id":"<urn:uuid:a79e8ba7-1f53-40f8-b134-04021316a818>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Pasadena, TX ACT Tutor Find a Pasadena, TX ACT Tutor ...I worked on a daily basis with elementary school kids and grew very fond of them. I look forward to getting a chance to tutor them in the future! I have the national "E" coaching license. 22 Subjects: including ACT Math, chemistry, calculus, physics ...I've taught hundreds of students and logged over 1000 hours with a major test prep company. But I was tired of my company and others like it taking advantage of hard-working families by charging them hundreds of dollars *per hour* for instructors who got paid peanuts and received limited profess... 22 Subjects: including ACT Math, English, college counseling, ADD/ADHD ...I can teach you basic facts, terms and concepts - the vocabulary of psychology -so you will be prepared to master more advanced areas, including performing and writing up psychological research in scientific format. Probability is the basis for statistics; if you can solve problems in probabilit... 20 Subjects: including ACT Math, writing, algebra 1, algebra 2 ...I was a graduate teaching assistant for Genetics, Statistics, Molecular biology,and Tissue culture. I tutored undergraduates genetics in University of Houston for 2 years with great success. Many of those I taught are currently in Medical school, Dental school and PA program. 10 Subjects: including ACT Math, biology, anatomy, microbiology ...Whether you are in junior high or high school, Pre Algebra is important. It gives you the understanding that will be built on throughout your high school and college career. I have taught algebra long enough to know how to help your student and prepare them for what comes next! 20 Subjects: including ACT Math, Spanish, geometry, algebra 1 Related Pasadena, TX Tutors Pasadena, TX Accounting Tutors Pasadena, TX ACT Tutors Pasadena, TX Algebra Tutors Pasadena, TX Algebra 2 Tutors Pasadena, TX Calculus Tutors Pasadena, TX Geometry Tutors Pasadena, TX Math Tutors Pasadena, TX Prealgebra Tutors Pasadena, TX Precalculus Tutors Pasadena, TX SAT Tutors Pasadena, TX SAT Math Tutors Pasadena, TX Science Tutors Pasadena, TX Statistics Tutors Pasadena, TX Trigonometry Tutors
{"url":"http://www.purplemath.com/pasadena_tx_act_tutors.php","timestamp":"2014-04-19T20:19:31Z","content_type":null,"content_length":"23568","record_id":"<urn:uuid:3a4d6537-15c6-4eb6-be5d-176070ee0f2f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: 5^2 - the square root of 12 + 2^2 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/503fd937e4b032264058f6e0","timestamp":"2014-04-16T13:43:41Z","content_type":null,"content_length":"39412","record_id":"<urn:uuid:bfcd9a9e-6a1c-489b-abd8-0cf10f0107e7>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Need help on a Algebra 2 Honors Question! • one year ago • one year ago Best Response You've already chosen the best response. 4.f(x)= -1 if x ≤ -6 5 if -6 < x ≤ -2 2x + 9 if x > -2 (1 Point) Best Response You've already chosen the best response. I also have part of a table filled out Best Response You've already chosen the best response. Best Response You've already chosen the best response. I just don't understand what I am supposed to do for the first 4 columns. Best Response You've already chosen the best response. if i was good at this ill help but me im not that good Best Response You've already chosen the best response. I just know about the las column never did any like the first two. I could post other I have already done. Best Response You've already chosen the best response. would that help you help me Best Response You've already chosen the best response. like what r u suppose to do? Best Response You've already chosen the best response. plug in numbers for coordinates and graph it. Best Response You've already chosen the best response. Example fx=2x+9 if x > -2 Best Response You've already chosen the best response. Next you would chose a value for x that is greater than -2 which is 0 Best Response You've already chosen the best response. oh i have no idea but i can ask my bro to help u Best Response You've already chosen the best response. so now its 2(0)+9 which gives us 9 if we solve it and that is our y value Best Response You've already chosen the best response. Best Response You've already chosen the best response. giving us the coordinates for a point at (0,9) if we do a few more we get a graph for two of the columns Best Response You've already chosen the best response. I just don't now what to do if there is no x value for the equation. Best Response You've already chosen the best response. yeahh im confused on the same thin ur talkin bout o dont get it either Best Response You've already chosen the best response. This function works just like any other in the other two sections. For -6 < x < -2, the function is just y = 5, so for any x value in that range you'll get 5 as your result. Similarly, for any x < -6, y = 1. Best Response You've already chosen the best response. so any number would be -1 for the first two columns Best Response You've already chosen the best response. For the first two columns, you'll want to use different x values, but the y value will always be -1. So for example, the first two columns might look like: \[\left[ -6 | -1 \right]\]\[\left[ -7 | -1 \right]\]\[\left[ -8 | -1 \right]\] Best Response You've already chosen the best response. and the graph would be a horizontal line on y=-1 right Best Response You've already chosen the best response. Best Response You've already chosen the best response. thanks a lot I relly needed help understanding this and this is the only question that did those with nothing in the lesson that talked about it. Thanks again Best Response You've already chosen the best response. No problem, feel free to message me if you have any other questions. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. 
• ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50aae12ce4b064039cbd6910","timestamp":"2014-04-21T07:49:35Z","content_type":null,"content_length":"83783","record_id":"<urn:uuid:83413451-dfc2-4f51-b69d-edb7e36130c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Santa Monica Algebra Tutors ...I am a certified Montessori Teacher PreK-8 from UCSD. I taught high school for over 25 years so I know how to prepare students. I know what's coming so I know what to emphasize. 72 Subjects: including algebra 2, American history, calculus, chess ...I have worked as a draftsman, model-maker, designer and illustrator. I have worked for Walt Disney Imagineering, Landmark Entertainment, Warner Brothers and other Film companies. I know how to draw and paint concept art, design development drawings, draft, do perspective drawings, to name some of my skills. 10 Subjects: including algebra 2, algebra 1, Spanish, trigonometry ...I come from a family of teachers and related educational professionals. During my time as a teacher I have developed lesson plans that incorporate real life situations, sports, and music within many math topics. I have had the opportunity to work with students of many ages. 14 Subjects: including algebra 2, algebra 1, geometry, SAT math ...I was awarded my Master?s degree in math from UCLA, tutoring to earn a living as I went through graduate school. I went on to a career in aerospace as an engineer and a second career as VP for Technical Marketing and Sales for several very large firms in the electronics industry. As part of my ... 14 Subjects: including algebra 1, algebra 2, calculus, statistics ...More than 12-year experience in teaching chemistry from middle school-level to graduate school-level students. Extensive experience on 1-1 tutoring as well. More than 10 years of experience in teaching math and calculus from middle school-level, high school-level and college-level students. 10 Subjects: including algebra 1, algebra 2, chemistry, calculus
{"url":"http://www.algebrahelp.com/Santa_Monica_algebra_tutors.jsp","timestamp":"2014-04-20T00:40:30Z","content_type":null,"content_length":"25024","record_id":"<urn:uuid:8216c5ae-d670-4532-a182-74b812245d83>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Papers Published 1. Dan, N. and Bejan, A. and Cacuci, D.G. and Schuetz, W., Evolution of a mixture of hot particles, steam, and water immersed in a water pool, Numerical Heat Transfer; Part A: Applications, vol. 34 no. 5 (1998), pp. 463 - 478 . (last updated on 2007/04/06) This article describes numerically the time evolution of an expanding mixture of hot spherical particles, steam, and water. It is assumed that at time t = 0 the mixture components are distributed uniformly through the mixture volume. The mixture expands against a body of water in which it is immersed. The expansion is due to steam generation; it is assumed that the hot particles remain equidistant as the mixture expands. The numerical procedure is based on the moving finite element method. The fluid particles are distributed throughout the domain and are moved in time in a Lagrangian manner to simulate the change of the domain configuration. Mathematically, the problem is formulated as a nonlinear initial boundary value problem with unknown quantities of an objective function (velocity potential) and the profile of the domain. The governing equation is discretized spacewise using the Galerkin finite element method. During expansion, the number of mesh elements remains unchanged, while the location of the nodes changes. The movement of the mesh nodes is attached to the movement of the flow. The focus is on the energy conversion efficiency of the process, i.e., on the extent to which the heat released by the hot material is converted into kinetic energy. The results document the effects of changing the hot-particle size and water pool size. It is shown that the efficiency decreases almost inversely with time and that for times in the 1-ms range it has values of the order of 1%. Thermal expansion;Finite element method;Lagrange multipliers;Computer simulation;Initial value problems;Functions;Equations of motion;Galerkin methods;Energy conversion;Particles (particulate
{"url":"http://fds.duke.edu/db/pratt/mems/faculty/abejan/publications/56905","timestamp":"2014-04-21T05:17:29Z","content_type":null,"content_length":"15647","record_id":"<urn:uuid:04885a8d-4d67-4aab-8e71-1a21a4ad7d42>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: interpolate Replies: 12 Last Post: Dec 31, 2012 9:33 AM Messages: [ Previous | Next ] dpb Re: interpolate Posted: Dec 27, 2012 10:55 AM Posts: 7,872 Registered: 6/7/07 On 12/26/2012 7:45 PM, maryam wrote: > dpb <none@non.net> wrote in message <kbg5ol$ugl$1@speranza.aioe.org>... >> On 12/26/2012 5:03 PM, maryam wrote: >> > dpb <none@non.net> wrote in message <kbfhar$1lo$1@speranza.aioe.org>... >> >> doc interp1 >> >> >> >> >> X = 0:10; V = sin(X); Xq = 0:.25:10; >> >> >> Vq = interp1(X,V,Xq); >> > thank you very much for your answer but I mean we have a matrix n-by-m: >> > assume M= [a b c;d e f; g h i] >> > I want to interpolate its rows so that is equal to >> > M=[a a+d a+2d a+3d ... 7d b b+d ... b+7d c... c+7d ; d... d+7d... >> f+7d ; >> > g ... i+7d] >> > Can you explain me how I could do it? >> I'm virtually certain you asked this same question some days ago and I >> told you then the same thing... >> Use interp1() as the above example w/ an interpolating vector of the >> points at which you want interpolants. Here's why I think I remember >> the question and the response, as I pointed out then, since interp1() >> works on columns you'll need to transpose, interpolate, then transpose >> back. >> I can't interpret the above [a a+d a+2d ... ) precisely but assuming d >> here is a different 'd' than that of the value in the original M(2,1) >> and is just a delta then the interpolant would seem to be >> linspace(0,N*d,N+1) where N is the number of intervals (7 above) added >> to a, b, c, ..., etc. > thank you so much, I'm Beginner in matlab, > I apologize for my request > Is it possible give me matlab code of my question? > I couldn't use 'for' command or M(:,columns) ... > my matrix is 1000-by-1200 d=delta=1/16 > its values (arrays) is arbitrary here That's identical to the example I gave except on an array instead of a vector...and a still-to-be-determined precisely interpolant. What does the 1/16 delta refer to--is it a constant or a fraction of the distance between columns, or what? Your description isn't precise (or at least I can't decipher what it is you want clearly enough to actually write a piece of specific code). Can you give a very small example of a dataset that you could compute the desired result of by hand? It wouldn't have to be but one row and a few values to illustrate what you're actually trying to do...
{"url":"http://mathforum.org/kb/message.jspa?messageID=7944253","timestamp":"2014-04-19T05:12:09Z","content_type":null,"content_length":"31463","record_id":"<urn:uuid:30098141-1627-4623-8647-fe474521c630>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Workshop/Conference on Retrodigitization of mathematical journals and their integration into searchable digital libraries
Institute for Experimental Mathematics, Essen, August 1-3, 2000

Tuesday, August 1
9.15 - 9.20   Opening
9.20 - 10.05  R.K. Dennis: A global mathematics library
10.10 - 11.00 O. Ninnemann: Jahrbuch Project and the Electronic archive of mathematical literature
11.30 - 12.20 S. Thomas (Ann Arbor): JSTOR's experience with retrodigitization of mathematics literature
12.30 - 13.00 M. Höppner: What is the digital library NRW good for a mathematician?
14.30 - 15.20 M. Köttstorfer: DIEPER - Digitised European Periodicals
15.30 - 16.20 R. Fateman: Practical ways of digitizing a math library
16.50 - 17.40 M. Fock-Althaus: AGORA, the new digital library system of SRZ and the SB in Göttingen based on XML/RDF format
17.45 - 18.30 M. Enders: The Digitization process for mathematical journals - An introduction to the workflow at the GDZ

Wednesday, August 2
9.15 - 10.00  D. Nastoll: MILESS
10.05 - 10.45 E. Mittler: Retrodigitized and born digital material - how to bring it together?
11.15 - 12.05 M. Okamoto: Recognition of mathematical symbols
12.10 - 13.00 K. Yokota: Layout analysis of mathematical documents
14.30 - 15.00 G. Michler: Retrodigitization of "Archiv der Mathematik"
15.00 - 15.45 R. Staszewski: Analysis of mathematical documents
16.15 - 17.00 R. Nörenberg: Recognition of mathematical formulas
17.10 - 18.00 B. Luchner: The Cross-Ref project. An initiative of leading scientific publishers based on the DOI system

Thursday, August 3
9.15 - 10.00  M. Suzuki: A design of optical recognition system of mathematical documents - An experimental implementation
10.00 - 10.30 Y. Etou: Optical recognition of mathematical expressions using spanning tree of minimal cost
11.00 - 11.45 F. Tamari: On the segmentation of Japanese Area/Mathematical Area
11.45 - 12.00 M. Nakayama: Demonstration
12.00 - 12.45 R. Fateman: Experiments in creating and using scanned journals on-demand
14.30 - 15.20 G. Schneider: Aspects of long term preservation of digital libraries
15.50 - 17.00 Final discussion (Chair: Dr. J. Bunzel, DFG Bonn)

Organizers: Prof. R. Fateman, Prof. Dr. G. Michler, Prof. Dr. E. Mittler
The conference is supported by the Deutsche Forschungsgemeinschaft
{"url":"http://www.exp-math.uni-essen.de/algebra/veranstaltungen/retro.htm","timestamp":"2014-04-21T09:36:37Z","content_type":null,"content_length":"10303","record_id":"<urn:uuid:5fa8dfb7-7ae3-4c1a-a651-31913d189a09>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Fernwood, PA Calculus Tutor Find a Fernwood, PA Calculus Tutor ...I took courses in combinatorics, algorithms, introduction to discrete math, Ramsey theory, integer programming and other miscellaneous discrete math courses. I taught Intro to Discrete Mathematics at Rochester Institute of Technology. I hold a PhD in Algorithms, Combinatorics and Optimization. 18 Subjects: including calculus, geometry, statistics, GRE ...I completed math classes at the university level through advanced calculus. This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-euclidean geometry. I taught Algebra 2 with a national tutoring chain for five years. 12 Subjects: including calculus, writing, geometry, algebra 1 ...I have experience in both derivatives and integration. I have taken several courses in geometry and have experience with shapes and angles. I have tutored many students in pre algebra and have experience dealing with different types of equations and variables I have spent the past two years at Jacksonville University tutoring math. 13 Subjects: including calculus, geometry, GRE, algebra 1 I am graduate student working in engineering and I want to tutor students in SAT Math and Algebra and Calculus. I think I could do a good job. I studied Chemical Engineering for undergrad, and I received a good score on the SAT Math, SAT II Math IIC, GRE Math, and general math classes in school. 8 Subjects: including calculus, geometry, algebra 1, algebra 2 ...I can help them establish a plan. It usually begins with a written daily ledger of homework assignments, upcoming quizzes and tests, and any long-term projects that need periodic attention. Students learn to update their ledgers every day, including on weekends. 32 Subjects: including calculus, English, geometry, biology Related Fernwood, PA Tutors Fernwood, PA Accounting Tutors Fernwood, PA ACT Tutors Fernwood, PA Algebra Tutors Fernwood, PA Algebra 2 Tutors Fernwood, PA Calculus Tutors Fernwood, PA Geometry Tutors Fernwood, PA Math Tutors Fernwood, PA Prealgebra Tutors Fernwood, PA Precalculus Tutors Fernwood, PA SAT Tutors Fernwood, PA SAT Math Tutors Fernwood, PA Science Tutors Fernwood, PA Statistics Tutors Fernwood, PA Trigonometry Tutors Nearby Cities With calculus Tutor Briarcliff, PA calculus Tutors Bywood, PA calculus Tutors Carroll Park, PA calculus Tutors Darby, PA calculus Tutors East Lansdowne, PA calculus Tutors Eastwick, PA calculus Tutors Kirklyn, PA calculus Tutors Lansdowne calculus Tutors Llanerch, PA calculus Tutors Overbrook Hills, PA calculus Tutors Primos Secane, PA calculus Tutors Primos, PA calculus Tutors Secane, PA calculus Tutors Westbrook Park, PA calculus Tutors Yeadon, PA calculus Tutors
{"url":"http://www.purplemath.com/Fernwood_PA_calculus_tutors.php","timestamp":"2014-04-18T19:06:11Z","content_type":null,"content_length":"24211","record_id":"<urn:uuid:7464d619-9adb-42cb-b8ad-d20a2e96127b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
New Mexico Public Education Department Questions and Answers About Adequate Yearly Progress (AYP) 1. What is Adequate Yearly Progress (AYP)? AYP represents the annual academic targets in reading and math and other indicators that the state, school districts and schools must reach to be considered on track with the federally mandated goal of 100% proficiency by school year 2013-2014. 2. Why does Adequate Yearly Progress (AYP) exist? AYP is part of state and federal statute. The Federal No Child Left Behind (NCLB), 2001. Sec. 1111 (b)(F), states that, “Each state shall establish a timeline for adequate yearly progress. The timeline shall ensure that no later than 12 years after the 2001-2002 school year all students in each group described in subparagraph (C)(v) will meet or exceed the state’s proficient level of academic achievement on the state’s assessments.” New Mexico Statute Article 2C Assessment and Accountability Sec. 22-2C-8 NMSA 1978, Adequate Yearly Progress – “The state shall institute an ‘adequate yearly progress program’ that measures public schools’ improvements…” 3. Who has to make Adequate Yearly Progress (AYP)? • The state • School districts • Schools • Subgroups within schools. The subgroups include the following categories All Students African American Native American Economically Disadvantaged Students with Disabilities who have IEPs (Individual Education Plan) English Language Learners 4. Can a given student be included in more than one subgroup? Yes. For example, a Caucasian student who also receives free lunch would be counted in the All Students, Caucasian, and Economically Disadvantaged groups. 5. What happens if a school does not make AYP? If a school does not make AYP in the same area (e.g. in math, or reading) for 2 consecutive years, then the school receives an NCLB designation as a School In Need of Improvement (SINOI). There are five levels of improvement that carry progressive requirements for monitoring and enhancement. 6. How can schools be removed from improvement status? If a school makes AYP for two consecutive years, that school will no longer be in need of redevelopment, and any improvement designations will be removed. 7. What do schools have to do in order to make AYP? Schools need to: a) Achieve a 95% participation rate on state assessments. b) Reach targets for proficiency. c) Reach targets for one other indicator – for elementary and middle schools that is the attendance rate, and for high schools it is the graduation rate 8. What groups must achieve a 95% participation rate? Participation rates are calculated for all schools that have 40 or more students. In addition, each subgroup with 40 or more students must meet the target of 95% participating in the standards based assessment. 9. If any participation rate (school wide or subgroups over 40 students) for a school does not achieve a 95% participation rate, can it still make AYP? 10. Which assessments may be utilized in calculating participation rates of students? The Standards Based Assessment (SBA) has been used for 4 years, and was designed to assess whether students meet grade-specific standards developed by New Mexico professionals. The New Mexico Alternate Performance Assessment (NMAPA) was similarly designed for special education students who meet qualifications for specialized testing. 11. Who must be tested? All public school students enrolled in grades 3-8, and 11 must participate. The school year 2006-2007 was the last year that 9th graders were tested. 
Assessment is not required for home and private schooled students. 12. Will the AYP proficiency targets stay the same over time? No. The proficiency targets increase over time as we work towards the goal of 100% proficiency by 2014. At a minimum, a federally mandated increase must occur every 3 years. The targets are called Annual Measurable Objectives (AMOs) and their trajectories are viewable on the NMPED website. 13. How were proficiency starting points established? The NCLB Act prescribed the process for determining starting points for reading and math. Baselines for New Mexico students were established using these steps: 1. Schools were ranked from lowest to highest based upon assessment performance in school year 2003-2004. 2. Beginning with the lowest ranked school, the enrollment for each school was added to the enrollment of the next higher performing school. 3. This continued until 20% of the statewide enrollment had been reached. 4. The starting point became the rate (percentage) of students who scored proficient for the school at that level. 14. How is proficiency defined for the purpose of determining AYP? Assessments rank students as Beginning Step, Nearing Proficient, Proficient, or Advanced Proficient. Students achieving Proficient or Advanced Proficient are considered proficient for AYP and a rate is calculated for the school and district. Proficiency is computed for subgroups only if they have 25 or more students. 15. How are AYP end points established? The NCLB goal is to have 100% of all students proficient by school year 2013-2014. During the intervening years, AYP targets (AMOs) are set to help us move toward meeting that goal. The path to the 100% end point for different schools is viewable on the NMPED website. 16. Do all groups and schools have to meet the same targets? All schools of the same configuration/grade span and subgroups have to meet the same proficiency targets. 17. What are the targets for schools that do not have one of the tested grades? New Mexico tests students only in grades 3-8, and 11. However NCLB statute requires that all schools receive an AYP rating, even if they do not have a tested grade. A Feeder School method is used to assign scores from alumnae of the feeder school to compute AYP. For example a kindergarten-only school (feeder school) will receive the rating and designation from their exited students in grade 3. Where exited students cannot be found in the tested population, district ratings are given to the school. 18. Which students count for proficiency? Only students who are continuously enrolled in the school for a full academic year (FAY) are counted. FAY is defined as continuous enrollment in the same school from test cycle to test cycle (e.g. Spring 2007 to Spring 2008). 19. What about students whose parents refuse to let their children take the test? Those students are included as non-participants when determining AYP participation rates for a school. The students also do not receive a valid score for the assessment, which is considered below proficiency. Therefore these students will adversely affect a school’s ability to meet AYP. 20. How do I calculate the proficiency rate for a school? Use the following formula: 1. Numerator: The number of students scoring proficient or advanced, who were enrolled for a full academic year (FAY). 2. Denominator: The total number of FAY students tested. 3. Divide the numerator by the denominator. The result is the school’s AYP proficiency rate. 4. 
This rate is calculated separately for reading and for math. 21. How do I calculate the proficiency rate for a subgroup? For any subgroup (other than the All Students subgroup) with 25 or more students enrolled for a full academic year, repeat steps in the prior question. 22. Do schools have to reach AYP targets in both reading and math to make AYP? Yes, all schools must meet both targets, regardless of the size of the school. Separate AYP determinations are made for both reading and math that depend on the school’s meeting proficiency and participation targets in that subject. If the school does not make AYP in either reading or math, the school will not make AYP. 23. Do all subgroups have to reach their proficiency target in reading and math in order to make AYP? Yes. All subgroups of 25 or more FAY students must meet their respective AYP target. If a subgroup does not make AYP in math, the school will not make AYP in math, and the school will not make AYP overall. 24. Do subgroups have different targets? All subgroups are held to the same AYP proficiency standard for the school. Differences occur only because of the size of the subgroup. When groups are very small, computations take this into account and adjust the acceptable target boundary. The boundary is indicated on reports as the “Lower Bound Confidence Interval,” and the formula for its calculation is appended at the end of this document. 25. What is the “other academic indicator”? In addition to reading and math, every school must meet AYP targets in one other academic indicator. For schools with a 12th grade, that indicator is their overall graduation rate. For schools that do not have a 12th grade, attendance is the indicator. 26. Must schools reach their target for the “other academic indicator” to make AYP? Yes. Just like reading and math, the other indicator is rated separately for AYP. If a school does not meet their target for the other indicator, they do not make AYP. 27. What is the target for attendance? The target for attendance rate is 92.0% 28. How do I calculate attendance? All students in kindergarten through 8th grades are included in the calculation, except in cases where these grades are not present (i.e. a 9th grade academy) and then all available grades are used. The calculation uses these steps: 1. All students ever enrolled up to the 120th day of school are included 2. For each student take the number of days enrolled (ENROLLED) 3. For each student take the number of days attended (ATTENDED) 4. For each student compute ATTENDED divided by ENROLLED 5. Average the numbers from step 4, and multiply by 100 to get the percentage 29. How was the target for attendance established? The attendance target was negotiated with the federal government when AYP was first established. These and other federally approved rules can be viewed in the New Mexico Accountability Workbook which is available on the NMPED website. 30. What is the target for graduation? The 2007-2008 target for graduation rate is 90%. Each high school will meet AYP if it: 1) achieves a 90% graduation rate, 2) equals or exceeds the previous year’s graduation rate, or 3) if the graduation rate averaged over three years (this year’s rate and the two previous academic years) equals or exceeds the rate of the previous year. 31. How are graduation rates calculated? School year 2007-2008 will be the last year that this method is being used in New Mexico. The rate for each school and district is found by: 1. 
Identifying the number of seniors who graduated (GRAD) 2. Identifying the number of seniors who were enrolled in the school on the 40th day of the same year (ENROLLED) 3. Compute GRAD divided by ENROLLED 4. Multiply by 100 to arrive at the percentage 32. How will graduation rates be calculated next year? In 2009 New Mexico will begin using a formula called the Four-Year Cohort rate. This rate represents the proportion of graduates that came from the pool of 9th graders 4 years earlier. Students who leave to another educational setting during that 4 year period are excused from the calculation (i.e. students who transfer to an out-of-state school, or to a private or home school). Similarly students who transfer into public high schools from another non-public school will join their respective cohort and be counted. 33. Why aren’t we using the Four-Year Cohort rate in 2008? The Four-Year Cohort rate requires that 9th graders be given four full years to graduate, including the summer following their senior year. Because New Mexico did not institute a unique student ID tracking system until 2004, the first cohort that could be tracked to the senior year will not be complete until September of 2008. The federal government has allowed New Mexico, like other states, to begin reporting graduation rates that are lagged by one year, in order to fully account for these summer graduates. Therefore, the first cohort will be reported in the Spring of 2009, and it will represent the graduates of the previous year. Subsequent years will follow that pattern (i.e. the graduation rate of 2010 will derived from the cohort 2005-2009). 34. What happens when students attend more than one high school? The treatment of student mobility in the Four-Year Cohort calculation is a complex issue that is currently being deliberated by a task force at NMPED. This and other details of the calculation will be published as they become available. 35. Can some students take longer to graduate? Specialized groups may be allowed 5 or 6 years to graduate, such as special education students with IEPs, or recent immigrants who do not speak English. These rules have not yet been formalized. 36. Who is considered a “graduate”? Graduates are students who graduate with a standard diploma (including the Career and Ability pathways). Students who get a GED or a Certificate of Completion (complete course requirements but do not pass all portions of the New Mexico High School Comprehensive Examination) are considered “non-graduates” in the computation of the graduation rate. 37. What is the 2% Proxy Method? New Mexico is implementing the one-year 2% proxy method using the 2007-2008 data. This method was approved as part of the Accountability Workbook I by The U.S. Department of Education on June 21, 2008. Schools that did not make AYP solely on the proficiency of Special Education students can add to the proportion of proficient scores by an equivalent of 2% of the population eligible for an alternate assessment. This is a proxy for students who would have scored at the proficient level had a modified assessment been available to them. This flexibility is for only the current year and will be handled through the appeals process. Once a modified assessment is developed by New Mexico and approved by the U.S. Department of Education, this proxy will no longer be needed. More details of the procedure for calculating the proxy can be reviewed on page 27 of the June 16, 2008, Accountability Workbook. 38. What is the 1% Cap? 
School districts and states are limited to the number of proficient scores generated by the alternate assessment for significantly cognitively disabled students. The limitation of 1% of the tested population is applied to students who scored at the proficient level on the New Mexico Alternate Performance Assessment (NMAPA). Districts that exceed this cap will have 1% of their NMAPA proficient scores randomly selected and applied to AYP calculations at the district levels. This cap is not applied at the school level and does not change a student’s score. 39. What is Safe Harbor? Safe harbor is an opportunity for a school to show growth for subgroups that did not make AYP. If a subgroup did not show AYP by meeting the proficiency target (by percent proficient or the confidence interval) a school may demonstrate that the subgroup made AYP by all other measures (participation, attendance/graduation rate) and has diminished the proportion of non-proficient students in that subgroup by 10%. 40. What is a Confidence Interval? As the number of test scores and students diminishes so does our confidence in interpreting results. The U.S Department of Education has allowed us to apply a 99% confidence interval. If the AYP target is 35% proficient in Mathematics, for example, and 101 students are tested, then the target lowers to 24.97, which is the lower bound of the confidence interval. This is similar to the margin of error mentioned in surveys and election results (“give or take 3%”) . The smaller the number of scores used in an analysis the wider the confidence interval (margin of error). Below is the formula New Mexico uses to calculate the confidence interval around the AYP goal depending on the number of students analyzed. The following formula is used to compute confidence factors for AYP targets: n = the number of students z = the critical value (PED is using a 99% confidence level, so z= 2.33) p = AYP target (Annual AYP Goal), expressed as a proportion (e.g., .3370) q = 1-p Source: Statistical Methods in Education and Psychology, Glass and Hopkins 1996
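The formula image referenced in Q40 did not survive extraction; only the variable definitions and the worked example remain. Working backward from that example (a 35% target and 101 students lowering the target to 24.97%), the numbers match the lower bound of a Wilson score interval with z = 2.33, so the sketch below uses that expression as a reconstruction, not as an official statement of the PED formula.

    % Hedged reconstruction of the Q40 confidence bound (Wilson score lower bound).
    % With the quoted example it reproduces the 24.97% figure.
    p = 0.35;          % AYP target as a proportion
    n = 101;           % number of students
    z = 2.33;          % critical value for the 99% confidence level
    q = 1 - p;
    lower = (p + z^2/(2*n) - z*sqrt(p*q/n + z^2/(4*n^2))) / (1 + z^2/n);
    fprintf('Adjusted target: %.2f%%\n', 100*lower);   % prints 24.97%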
{"url":"http://www.ped.state.nm.us/ayp2008/faq.html","timestamp":"2014-04-20T06:18:16Z","content_type":null,"content_length":"33885","record_id":"<urn:uuid:b9658db7-a172-4a98-8b8d-4571ffb06068>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
RLO: Probability associated with inferential statistics Chance is also known as probability, which is represented numerically. Probability as a number lies between 0 and 1 . A probability of 0 means that the event will not happen. For example, if the chance of being involved in a road traffic accident was 0 this would mean it would never happen. You would be perfectly safe. A probability of 1 means that the event will happen. If the probability of a road traffic accident was 1 there would be nothing you could do to stop it. It will happen. In practice probabilities associated with everyday life events lie somewhere between 0 and 1. A probability of 0.1 means there is a 1 in 10 chance of an event happening, or a 10% chance that an event will happen. Weather forecasters might tell us that there is a 70% chance of rain.
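As an illustration of the frequency reading of these numbers (not part of the original learning object), a short simulation sketch; the 0.70 value and the number of trials are arbitrary choices.

    % A probability is a number between 0 and 1; over many repetitions it behaves
    % like a long-run frequency. Simulating 10,000 "days" with a 70% chance of rain
    % should give close to 7,000 rainy days.
    p_rain = 0.70;
    days   = 10000;
    rainy  = rand(days, 1) < p_rain;
    fprintf('Simulated fraction of rainy days: %.3f\n', mean(rainy));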
{"url":"http://www.nottingham.ac.uk/nmp/sonet/rlos/statistics/probability/3.html","timestamp":"2014-04-20T13:24:05Z","content_type":null,"content_length":"7285","record_id":"<urn:uuid:77107995-5db1-425e-88ca-6d70af81577a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
An Introduction to Error Analysis An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements The need for error analysis is captured in the book's arresting cover shot - of the 1895 Paris train disaster. The early chapters teach elementary techniques of error propagation and statistical analysis to enable students to produce successful lab reports. Later chapters treat a number of more advanced mathematical topics, with many examples from mechanics and optics. End-of-chapter problems include many that call for use of calculators or computers, and numerous figures help readers visualise uncertainties using error bars. User ratings 5 stars 4 stars 3 stars 2 stars 1 star Review: Introduction To Error Analysis: The Study of Uncertainties in Physical Measurements User Review - dead letter office - Goodreads Good (very) basic introduction to the analysis of uncertainty. Written with undergraduate physicists in mind, and he does a good job communicating the intuition involved in handling uncertainty. Good ... Read full review Review: An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements User Review - Sergey - Goodreads At first the book seemed to be a little too school-ish and simple. Yet it contains all the necessary basis information to start understanding uncertainties. Just what I needed. Read full review References from web pages An introduction to error analysis : the study of uncertainties in ... All about An introduction to error analysis : the study of uncertainties in physical measurements by John R. Taylor. librarything is a cataloging and social ... www.librarything.com/ work/ 57180 Taylor: An Introduction to Error Analysis Taylor: Chapters 1-3. nPrecision vs. Accuracy. nSources of Measurement Uncertainty. nReporting Uncertainty Quantitatively. nQuantitative Determination of ... physics.gac.edu/ ~mellema/ Taylor/ Taylor_files/ frame.htm ERROR ANALYSIS. Reference: An Introduction to Error Analysis, 2nd Ed., John R. Taylor (University Science Books, Sausalito, 1997). ... class.phys.psu.edu/ p559/ experiments/ html/ error.html Introduction to Error Analysis for the Physical Chemistry Laboratory Introduction to Error Analysis for the Physical. Chemistry Laboratory. January 22, 2007. 1 Introduction. In the physical chemistry laboratory you will make ... www.colorado.edu/ Chemistry/ chem4581_91/ Error%20Analysis.pdf Treatment of Errors and Uncertainties in the 2 year Lab Treatment of Errors and Uncertainties in the 2. nd. year Lab. References:. 1. An Introduction to Error Analysis, jr Taylor (Oxford University) 1982 (there ... hug.phys.huji.ac.il/ PHYS_HUG/ MAABADA/ Mabada_b/ Error%20Analysis%20in%20Maabada%20Bet.pdf Error Analysis Exercise - Physics 111-Lab Wiki jr Taylor, "An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements, 2nd Edition", University Science Books (1996). ... www.advancedlab.org/ mediawiki/ index.php?title=Error_Analysis_Exercise INTRODUCTION TO ERRORS AND ERROR ANALYSIS To many students and to ... INTRODUCTION TO ERRORS AND ERROR ANALYSIS. To many students and to the public in general, an error is something they have done ... www.ryerson.ca/ physics/ current/ lab_information/ experiments/ IntroToErrorsFinal.pdf [ CLICK HERE & TYPE TITLE OF PAPER ] Sixth International Symposium on Hydrological Applications of Weather Radar. 1. Radar Uncertainties in Cell-based TITAN Analyses ... 
www.bom.gov.au/ bmrc/ basic/ old_events/ hawr6/ errors_radar_measurements/ RUIZ_Radar%20Uncertainties.pdf A Computational Materials Science Course for Undergraduate Majors For further study, students were assigned readings and selected problems in either An Introduction to Error Analysis: The Study of Uncertainties in Physical ... www.tms.org/ pubs/ journals/ JOM/ 0312/ Rickman/ Rickman-0312.html Marktplaats.nl > John R. Taylor - An introduction to error ... John R. Taylor - An introduction to error analysis, the study of uncertainties in physical measurements, second edition, University science books ... boeken.marktplaats.nl/ studieboeken-en-cursussen/ 154432508-john-r-taylor-an-introduction-to-error-analysis.html Bibliographic information
{"url":"http://books.google.com/books?id=giFQcZub80oC&pg=PA94","timestamp":"2014-04-21T00:05:20Z","content_type":null,"content_length":"102210","record_id":"<urn:uuid:59b4c47b-4f87-43a1-89c8-bde4b0784bc7>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Physics 208 Exam 3 Name_______________________________________________

You are graded on your work, with partial credit. See the last pages of the exam for formula sheets. Please be clear and well-organized, so that we can easily follow each step of your work.

1. (a) (15) A metal airplane with a wingspan of 10 m moves through the earth's magnetic field, whose magnitude we will approximate as 0.5 T. If the airplane has a velocity of 300 km/hour perpendicular to the magnetic field, what is the induced emf across its wing?
(b) (5) Suppose the airplane were made of insulating plastic rather than conducting metal. Would there still be an induced emf across its wing? Explain in a few words.

2. Two point charges are moving as shown in the figure. At a given moment in time, the charge of q = +3.00 µC is located on the y axis at a distance of 0.20 m from the origin, and is moving to the right with a velocity v = 2.00 × 10^6 m/s. At the same time, the charge of q' = −2.00 µC is located on the x axis at a distance of 0.10 m from the origin, and is moving upward with a velocity v = 1.00 × 10^6 m/s.
(a) (6) Determine the magnitude of the magnetic field at the origin due to the first charge q.
(b) (2) What is the direction of this field? [up, down, to left, to right, into paper, out of paper]
(c) (6) Determine the magnitude of the magnetic field at the origin due to the second charge
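As a rough numerical check of the magnitudes involved (a sketch only, not the worked solutions the exam asks for), the lines below evaluate 1(a) and 2(a); they assume the velocities are in m/s and that each charge's velocity is perpendicular to the line from the charge to the origin, which is what the stated geometry implies.

    % Problem 1(a): motional emf across the wing, emf = B*L*v
    B   = 0.5;                % T
    L   = 10;                 % m (wingspan)
    v   = 300*1000/3600;      % 300 km/hour in m/s, about 83.3 m/s
    emf = B*L*v;              % about 417 V

    % Problem 2(a): field at the origin from the first moving charge,
    % B1 = (mu0/(4*pi)) * |q| * v * sin(theta) / r^2, with sin(theta) = 1 here
    k  = 1e-7;                % mu0/(4*pi) in T*m/A
    q1 = 3.00e-6;  v1 = 2.00e6;  r1 = 0.20;
    B1 = k*q1*v1/r1^2;        % 1.5e-5 T
    fprintf('emf = %.0f V, B1 = %.2e T\n', emf, B1);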
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/500/2957768.html","timestamp":"2014-04-18T04:52:48Z","content_type":null,"content_length":"8585","record_id":"<urn:uuid:1bf1618b-d384-443d-a21d-aa94fef09981>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
finding volume using double and triple integrals
November 14th 2010, 06:01 PM #1 Senior Member Aug 2009

OK, so here is the problem: I'm supposed to let G be the solid inside the sphere rho = 3 and outside the cylinder r = 3/(2+cos(theta)), and I can use any coordinate system. I wasn't sure how to post my work, so I am going to attach it. I just want to make sure my integrals are set up correctly, because if they are, then I can finish the problem. If someone could check my work I would appreciate it.
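Without the attachment the limits cannot be checked directly, but one practical way to validate a setup like this is to compare the iterated integral against a brute-force Monte Carlo estimate of the same volume. The sketch below does that for the region inside rho = 3 and outside r = 3/(2+cos(theta)); the sample count is an arbitrary choice.

    % Monte Carlo estimate of the volume inside the sphere rho = 3
    % and outside the cylinder r = 3/(2 + cos(theta)).
    N   = 1e6;
    pts = -3 + 6*rand(N,3);                   % uniform points in the bounding cube [-3,3]^3
    x = pts(:,1); y = pts(:,2); z = pts(:,3);
    r     = hypot(x, y);                      % cylindrical radius
    theta = atan2(y, x);
    keep  = (x.^2 + y.^2 + z.^2 <= 9) & (r >= 3 ./ (2 + cos(theta)));
    V = 6^3 * mean(keep);                     % cube volume times hit fraction
    fprintf('Monte Carlo volume estimate: %.2f\n', V);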
{"url":"http://mathhelpforum.com/calculus/163267-finding-volume-using-double-triple-integrals.html","timestamp":"2014-04-17T11:15:32Z","content_type":null,"content_length":"29624","record_id":"<urn:uuid:16fe1a20-3f65-40f5-8944-589f5e82fb35>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
list of technical reports and preprints. My list of technical reports and preprints. This is a list of my technical reports and preprints that you may look at: Table of Contents: Petter E. Bj{\o}rstad and Maksymilian Dryja April 1998. This paper studies a variant of the Additive Average Schwarz algorithm \cite{Bjorstad:1997:ASM} where the coarse space consists of two subspaces. This approach results in a simpler and more parallel algorithm while we retain the essential convergence properties of the original method. Our theory is confirmed by numerical experiments showing that the new algorithm is often superior to the original variant. BIBTEX reference: author="Petter E. Bj{\o}rstad and Maksymilian Dryja ", title="A Coarse Space Formulation with good parallel properties for an Additive Schwarz Domain Decomposition Algorithm", journal="Submitted to Numerische Mathematik", Petter E. Bj{\o}rstad and Bj{\o}rn Peter Tj{\o}stheim May 1998. We solve the biharmonic eigenvalue problem $\Delta^2u = \lambda u$ and the buckling plate problem ${\Delta}^2u = - {\lambda}\Delta u$ on the unit square using a highly accurate spectral Legendre-Galerkin method. We study the nodal lines for the first eigenfunction near a corner for the two problems. Five sign changes are computed and the results show that the eigenfunction exhibits a self similar pattern as one approaches the corner. The amplitudes of the extremal values and the coordinates of their location as measured from the corner are reduced by constant factors. These results are compared with the known asymptotic expansion of the solution near a corner. This comparison shows that the asymptotic expansion is highly accurate already from the first sign change as we have complete agreement between the numerical and the analytical results. Thus, we have an accurate description of the eigenfunction in the entire domain. BIBTEX reference: AUTHOR="Petter E. Bj{\o}rstad and Bj{\o}rn Peter Tj{\o}stheim", Title="A note on high precision solutions of two fourth order eigenvalue problems", Journal="Submitted to Computing", Petter E. Bj{\o}rstad November 1997. This article, written for the science student paper at the University of Bergen in Norwegian (sorry!), gives a brief overview of computer evolution and the use of high performance computers to advance scientific research. The entire text is written in nontechnical style for a general audience. BIBTEX reference: AUTHOR="Petter E. Bj{\o}rstad", Title="Computational Science, en tredje vei for forskning", Note="In UiB student paper QED " Jeremy Cook, Petter E. Bj{\o}rstad, Jon Br{\ae}khus April 1996. We discuss the use of high performance computing equipment in large, commercial structural analysis programs. In particular, we consider the strategy for modifying a standard industrial code from a pure F77 version to a form suitable for a range of parallel computers. The code is parallelized using the PVM message passing library for communication between processes and the ScaLAPACK and BLACS libraries for parallel linear algebra operations. The parallel code is suitable for a range of parallel computers, however for the purposes of verification and benchmarking, two specific hardware architectures were targeted in this work. These are an 8-node DEC Alpha cluster with 233MHz EV45 processors and FDDI/GIGASwitch interconnect, and a 32-node Parsytec GC/PowerPlus with 64 PowerPC-601 processors and a Transputer interconnect. 
BIBTEX reference: AUTHOR="Jeremy Cook and Petter Bj{\o}rstad and Jon Br{\ae}khus", Title="Multilevel Parallel Solution of Large, Sparse Finite Element Equations from Structural Analysis", BOOKTITLE="High-Performance Computing and Networking", Note="Lecture Notes in Computer Science" Editor = "H. Liddel and A. Colbrook and B Hertzberger and P. Sloot", Petter E. Bj{\o}rstad November 1996 This paper will highlight, by way of examples, a few seemingly very different mathematical problems and show how they have direct relevance to the construction of efficient computational procedures for the simulation of oil reservoirs on parallel computers. BIBTEX reference: AUTHOR="Petter E. Bj{\o}rstad", TITLE=" Mathematics, parallel computing and reservoir simulation", BOOKTITLE="Proceedings of the Second European Congress of Mathematic", NOTE="Budapest July 1996", EDITOR="Antal Balog", PUBLISHER="Birkhauser Verlag", Petter E. Bjørstad, Maksimillian Dryja and Eero Vainikko Oktober 1996 We describe and compare some recent domain decomposition algorithms of Schwarz type with respect to parallel performance. A new, robust domain decomposition algorithm -- Additive Average Schwarz is compared with a classical overlapping Schwarz code. Complexity estimates are given in both two and three dimensions and actual implementations are compared on a Paragon machine as well as on a cluster of modern workstations. BIBTEX reference: AUTHOR="Petter E. Bj{\o}rstad and Maksimilian Dryja and Eero Vainikko", TITLE="Parallel implementation of a Schwarz Domain Decomposition Algorithm", BOOKTITLE="Applied Parallel Computing in Industrial Problems and Optimization", EDITOR="Jerzy Wasniewski and Jack Dongarra and Kaj Madsen and Dorte Olesen", NOTE="Lecture Notes in Computer Science volume 1184", Petter E. Bjørstad and Bjørn Peter Tjøstheim: January 1996. We show that one can derive an $O(N^3)$ spectral-Galerkin method for fourth order (biharmonic type) elliptic equations based on the use of Chebyshev polynomials. The use of Chebyshev polynomials provides a fast transform between physical and spectral space which is advantageous when a sequence of problems must be solved e.g., as part of a nonlinear iteration. This improves the result of Shen which reported an $O(N^4)$ algorithm inferior to the $O(N^3)$ method developed earlier based on Legendre polynomials, but less practical in the case of multiple problems. We further compare our method with an improved implementation of the Legendre-Galerkin method based on the same approach. BIBTEX reference: AUTHOR="Petter E. Bj{\o}rstad, Bj{\o}rn Peter Tj{\o}stheim", TITLE="Efficient algorithms for solving a fourth order equation with the Spectral-Galerkin method", JOURNAL="SIAM J. Sci. Comp.", YEAR = "1997", P. Bjørstad and T. Sørevik: Nov. 92 This paper discusses the implementation and performance of the computational BLAS kernels in a data-parallel setting. Two different programming languages are compared and several compiler issues are BIBTEX reference: AUTHOR="P. E. Bj{\o}rstad and T. S{\o}revik", TITLE="Two Different Data-Parallel Implementations of the {BLAS}", BOOKTITLE=" Software for Parallel Computation", EDITOR="J. S. Kowalik", YEAR=" 1993" P. Bjørstad and T. Sørevik: Nov. 92 We consider a data-parallel implementation of LU-factorization based on the LAPACK routine DGETRF. We analyze the performance of the required BLAS routines and show that high performance is inhibited by current compiler limitations. BIBTEX reference: AUTHOR="P. E. Bj{\o}rstad and T. 
S{\o}revik", TITLE="Data-parallel {BLAS} as a basis for {L}apack on massively parallel computers", BOOKTITLE="Linear Algebra for Large Scale and Real-Time Applications", EDITOR="M. Moonen and G. Golub and B. De Moor", PUBLISHER="Kluwer Academic Publishers" Petter E. Bjørstad and Terje Kårstad: July 94 A prototype black oil simulator is described. The simulator has a domain-based data structure whereby the reservoir is represented by a possibly large number of smaller reservoirs each having a complete local data structure. This design is essential for effective use of preconditioning techniques based on domain decomposition. The chapter describes a splitting technique for the solution of the nonlinear system and an effective implementation of the algorithm on massively parallel computer systems. Most communication is localized and long range communication is kept at a minimum. Results from an implementation of the method are reported for a 16384 processor MasPar MP-2. Petter E. Bjorstad and Robert Schreiber: May 94 Unstructured grids lead to unstructured communication on distributed memory parallel computers, a problem that has been considered difficult. Here, we consider adaptive, offline communication routing for a SIMD processor grid. Our approach is empirical. We use large data sets drawn from supercomputing applications instead of an analytic model of communication load. The chief contribution of this paper is an experimental demonstration of the effectiveness of certain routing heuristics. Our routing algorithm is adaptive, nonminimal, and is generally designed to exploit locality. We have a parallel implementation of the router, and we report on its performance. Petter E. Bjørstad, Jon Brækhus and Anders Hvidsten: This paper reviews the development that has occurred in order to adopt a large scale structural analysis package, SESAM (SESAM is marketed by Veritas Sesam Systems Inc.), to the rapid changes in computer architecture as well as to the algorithmic advances that has been made in the past few years. We describe a parallel implementation of the sophisticated direct solution algorithm, based on processing substructures in parallel, but also allowing a high degree of parallelism in the computation within each substructure. We further discuss the use of iterative solution strategies. The paper concludes that iterative techniques should be further investigated and that the current knowledge about such methods makes them attractive for certain special classes of problems already today. We provide a few numerical examples in support of our conclusions. BIBTEX reference: AUTHOR="Petter Bj{\o}rstad and Jon Br{\ae}khus and Anders Hvidsten", TITLE="Parallel substructuring algorithms in structural analysis, direct and iterative methods", BOOKTITLE="Fourth International Symposium on Domain Decomposition Metods for Partial Differential Equations", EDITOR="R. Glowinski and Y. A. Kuznetsov and G\'{e}rard Meurant and J. P\'{e}riaux and O. B. Widlund", Petter E. Bjørstad, Randi Moe and Morten Skogen: Jan. 90 Algorithms for the solution of partial differential equations based on a subdivision of the spatial domain, has received much interest in recent years. To a large extent this has been motivated by the new generation of parallel computers. This algorithmic approach can introduce independent parallel tasks of variable granularity, depending on the subdivision and can therefore be adapted to a wide range of parallel computers. 
We review some of the progress that has been made and report on numerical experiments that illustrate the convergence behavior. We also describe parallel implementations on both shared memory computers (Alliant FX/8) and on local memory systems (Intel iPSC/2 and network connected workstations). BIBTEX reference: AUTHOR=" Petter E. Bj{\o}rstad and Randi Moe and Morten Skogen", TITLE=" Parallel Domain Decomposition and Iterative Refinement Algorithms ", EDITOR="Wolfgang Hackbusch", BOOKTITLE="Parallel Algorithms for PDEs, Proceedings of the 6th GAMM-Seminar held in Kiel, Germany, January 19--21, 1990" ADDRESS="Braunschweig, Wiesbaden", Petter E Bjørstad and Erik Boman: Nov. 90 We apply a preconditioned conjugate gradient algorithm to the SLALOM benchmark, proving that the solution phase of the benchmark is reduced to $O(N^2)$. The algorithm therefore performs well over the full range of input parameters specified by the benchmark. This has a dramatic impact on the benchmark for all computers. We illustrate this with new implementations on an 8192 processor MP-1 system and a Cray X/MP. BIBTEX reference: AUTHOR="Petter Bj{\o}rstad and Erik Boman", TITLE="A New Algorithm for the SLALOM Benchmark", INSTITUTION="Institutt for Informatikk, University of Bergen", NUMBER = "55", YEAR = "1991", NOTE = "Also published in Supercomputing Review" Petter E. Bjørstad and Morten D. Skogen: We discuss implementation of additive Schwarz type algorithms on SIMD computers. A recursive, additive algorithm is compared with a two-level scheme. These methods are based on a subdivision of the domain into thousands of micro-patches that can reflect local properties, coupled with a coarser, global discretization where the `macro' behavior is reflected. The two-level method shows very promising flexibility, convergence and performance properties when implemented on a massively parallel SIMD computer. BIBTEX reference: AUTHOR=" Petter E. Bj{\o}rstad and Morten Skogen", TITLE="Domain Decomposition Algorithms of {S}chwarz Type, Designed for Massively Parallel Computers", EDITOR="David E. Keyes and Tony F. Chan and G{\'e}rard A. Meurant and Jeffrey S. Scroggs and Robert G. Voigt ", BOOKTITLE="Fifth International Symposium on Domain Decomposition Methods for Partial Differential Equations", ADDRESS="Philadelphia, PA", Petter E Bjørstad and Olof B Widlund: Sep. 88 More than one hundred years ago, H.A. Schwarz introduced a domain decomposition method in which the original elliptic equation is solved on overlapping subregions, one after another, in an iterative process. A few years ago, Chan and Resasco, introduced a method that they classified as a domain decomposition method using nonoverlapping subdomains. In this note, it is shown that their method is an accelerated version of the classical method. It is also shown that the error propagation operator of the method can be expressed in terms of Schur complements of certain stiffness matrices and that techniques previously developed for the study of iterative substructuring algorithms can be used to derive estimates on the rate of convergence. BIBTEX reference: AUTHOR=" Petter E. Bj{\o}rstad and Olof B. Widlund ", TITLE=" To Overlap or Not to Overlap: {A} Note on a Domain Decomposition Method for Elliptic Problems ", JOURNAL=" SIAM J. Sci. Stat. Comput.", PAGES= "1053--1061" Petter E Bjørstad, Frederik Manne, Tor Sørevik and M. Vajtersic: Aug. 91. We describe efficient algorithms for matrix multiplication on SIMD computers. 
We consider SIMD implementations of Winograd's algorithm in the case where additions are faster than multiplications, as well as classical kernels and the use of Strassen's algorithm. Actual performance figures using the MasPar family of SIMD computers are presented and discussed. BIBTEX references: AUTHOR="P. Bj{\o}rstad and F. Manne and T. S{\o}revik and M. Vajter\v{s}ic", TITLE="Efficient Matrix Multiplication on {SIMD} Computers", JOURNAL="SIAM J. Matrix Anal. Appl.", KEY="Matrix multiplication, Winograd's algorithm, Strassen's algorithm, SIMD computer, parallel computing" Petter E. Bjorstad, Jeremy Cook, Hans Munthe-Kaas and Tor Sorevik: Oct. 1991 We describe the results of a three-week project to evaluate and implement a SAR algorithm from FFI on a MasPar MP-1208 at para//ab in Bergen. The report shows that a quite complex program written in Fortran, can be rewritten in C and made to run efficiently on a massively parallel SIMD computer with relatively little effort. The performance shows that general purpose SIMD computers are very competitive with specially designed computers for SAR processing. BIBTEX references: AUTHOR=" Petter E. Bj{\o}rstad and Jeremy Cook and Hans Munthe-Kaas and Tor S{\o}revik", TITLE="Implementation of a {SAR} processing algorithm on {M}as{P}ar INSTITUTION="Institute of Informatics, University of Bergen ", ADDRESS="H{\o}yteknologisenteret,N-5020 Bergen, Norway", Petter E. Bjørstad and Jan Mandel: Dec. 1989 Many parallel iterative algorithms for solving symmetric, positive definite problems proceed by solving in each iteration, a number of independent systems on subspaces. The convergence of such methods is determined by the spectrum of the sums of orthogonal projections on those subspaces, while the convergence of a related sequential method is determined by the spectrum of the product of complementary projections. We study spectral properties of sums of orthogonal projections and in the case of two projections, characterize the spectrum of the sum completely in terms of the spectrum of the product. BIBTEX references: AUTHOR=" Petter E. Bj{\o}rstad and Jan Mandel", TITLE=" On the Spectra of Sums of Orthogonal Projections with Applications to Parallel Computing ", Petter E Bjørstad and Jeremy Cook: Nov. 92 We discuss the use of a massively parallel computer in large scale structural analysis computations. In particular, we consider the necessary modifications to a standard industrial code and a strategy where the code may be allowed to evolve gradually from a pure F77 version to a form more suitable for a range of parallel computers. We present preliminary computational results from a DECmpp computer, showing that we can achieve good price performance when compared to traditional vector supercomputers. BIBTEX references: AUTHOR="Petter Bj{\o}rstad and Jeremy Cook", TITLE="Large Scale Structural Analysis on Massively Parallel Computers", BOOKTITLE="Linear Algebra for Large Scale and Real-Time Applications", EDITOR="M. Moonen and G. Golub and B. De Moor", PUBLISHER="Kluwer Academic Publishers", NOTE="NATO ASI Series" Petter E. Bjørstad: Jan. 88 We consider the classical Schwarz alternating algorithm and an additive version more suitable for parallel processing. The two methods are compared and analyzed in the case of two domains. We show that the rate of convergence for both methods, can be directly related to a generalized eigenvalue problem, derived from subdomain contributions to the global stiffness matrix. Analytical expressions are given for a model case. 
BIBTEX references: AUTHOR=" Petter E. Bj{\o}rstad", TITLE="{M}ultiplicative and {A}dditive {S}chwarz {M}ethods: {C}onvergence in the 2 domain case", BOOKTITLE="Domain Decomposition Methods ", EDITOR="Tony Chan and Roland Glowinski and Jacques P{\'e}riaux and Olof Widlund", ADDRESS="Philadelphia, PA", Petter E. Bjørstad, W. M. Coughran, Jr. and Eric Grosse: May 94 Modeling semiconductor devices is an important technological problem. The traditional approach solves coupled advection-diffusion carrier-transport equations in two and three spatial dimensions on either high-end scientific workstations or traditional vector supercomputers. The equations need specialized discretizations as well as nonlinear and linear iterative methods. We will describe some of these techniques and our preliminary experience with coarse-grained domain decomposition techniques applied on a collection of high-performance workstations connected at a 100Mb/s shared network. BIBTEX references: AUTHOR="Petter E. Bj{\o}rstad and Coughran, Jr., W. M. and Eric Grosse", TITLE="Parallel Domain Decomposition Applied to Coupled Transport Equations", BOOKTITLE="Seventh International Conference of Domain Decomposition Methods in Scientific and Engineering Computing", EDITOR="David E. Keyes and Jinchao Xu", SERIES="Contemporary Mathematics", NOTE="Held at Penn State University, October 27-30, 1993.", Petter E. Bjørstad: Apr. 89 Algorithms for the solution of partial differential equations based on a subdivision of the spatial domain, has received much interest in recent years. To a large extent this has been motivated by the new generation of parallel computers. This algorithmic approach can introduce independent parallel tasks of variable granularity, depending on the subdivision and can therefore be adapted to a wide range of parallel computers. We review some of the progress that has been made and supply a few new numerical examples that illustrate the convergence behavior. BIBTEX references: AUTHOR= "Petter E. Bj{\o}rstad", TITLE= "Parallel Domain Decomposition and Iterative Refinement Algorithms", Petter E. Bjørstad The laboratory for parallel computing - Parallab actively pursues the porting of significant industrial codes to parallel computing platforms. This work started already in 1985 with the acquisition of Europe's first 64 processor Intel hypercube. As of today, the laboratory has two MIMD machines, an Intel Paragon and a Parsytec GC/Power-Plus. As part of the Europort effort, four industrial codes are currently being ported by Parallab scientists. The paper will describe this effort and the issues, trends and experiences learned from working with industrial codes using MIMD type parallel BIBTEX references: AUTHOR="Petter E. Bj{\o}rstad", TITLE="Experience with industrial applications on MIMD machines", PUBLISHER="K.G. Saur Verlag", NOTE="Proceedings from Supercomputer '95, June 22-24, Mannheim", Petter E. Bjørstad, Maksymilian Dryja and Eero Vainikko Oct. 1995 We develop two additive Schwarz methods with new coarse spaces. The methods are designed for elliptic problems in 2 and 3 dimensions with discontinuous coefficients. The methods use no explicit overlap of the subdomains, subdomain interaction is via the coarse space. The first method has a rate of convergence proportional to (H/h)^(1/2) when combined with a suitable Krylov space iteration. This rate is independent of discontinuities in the coefficients of the equation. 
The method has good parallelization properties and do not require a coarse grid triangulation, that is, one is free to use arbitrary, irregular subdomains. The second method uses a diagonal scaling in addition to the standard coarse space. This method is not as robust and flexible as the first method. BIBTEX references: AUTHOR="Petter Bj{\o}rstad and Maksymilian Dryja and Eero Vainikko", BOOKTITLE="Domain Decomposition Methods in Sciences and Engineering", TITLE="Additive Schwarz Methods without Subdomain Overlap and with New Coarse Spaces", EDITOR="R. Glowinski and J. P\'{e}riaux and Z. Shi and O. B. Widlund", PUBLISHER="John Wiley \& Sons", NOTE="Proceedings from the Eight International Conference on Domain Decomposition Metods, May 1995, Beijing", Petter E. Bjørstad Oct. 1995 The laboratory for parallel computing - Parallab actively pursues the porting of significant industrial codes to parallel computing platforms. This work started already in 1985 with the acquisition of Europe's first 64 processor Intel hypercube. As of today, the laboratory has three MIMD machines, an Intel Paragon, a Parsytec GC/Power-Plus and a DEC $\alpha$-cluster. As part of the Europort effort, four industrial codes are currently being ported by Parallab scientists. The paper will describe this effort and the issues, trends and experiences learned from working with industrial codes using MIMD type parallel computers. BIBTEX references: AUTHOR="Petter E. Bj{\o}rstad", TITLE="Industrial Computing on {M}{I}{M}{D} machines", EDITOR="Magne Haveraaen", NOTE="Proceedings from NIK'95, Nov 20--22, Gran, Norway", Petter E. Bjørstad, Randi Moe, Rudi Olufsen, Eero Vainikko May 1995 We have studied the parallelization methods for the pressure equation in the reservoir simulator FRONTSIM\footnotemark[3]\ for 3-dimensional reservoirs. In order to obtain a faster simulator and finer resolution, we have studied the use of domain decomposition methods combined with the use of parallel processing. Both the additive Schwarz method and the multiplicative Schwarz method are implemented together with coarse-grid solver techniques. The purpose of the investigation is to find the best strategies for different problem sizes and different approximate solvers. The code is parallelized using the PVM message passing library for communication and has been tested on a cluster of workstations. This effort is part of the EC-ESPRIT III / EUROPORT2 project. Some implementation results and directions for further developments are given. BIBTEX references: AUTHOR="Petter E. Bj{\o}rstad and Randi Moe and Rudi Olufsen and Eero Vainikko", BOOKTITLE="Parallel Programming and Applications: Proceedings of the Workshop on Parallel Programming and Computation (ZEUS '95) and the 4th Nordic Transputer Conference (NTUG '95)", TITLE="Domain Decomposition Techniques in Parallelization of the 3-dimensional FRONTSIM code", Overlap and New Coarse Spaces", EDITOR="Peter Fritzon and Leif Finmo", PUBLISHER="IOS Press", ORGANIZATION="Link\"{o}ping University, Department of Computer and Information Science", Petter E. Bj{\o}rstad, Maksymilian Dryja and Talal Rahman May 2002. Two variants of the additive Schwarz method for solving linear systems arising from the mortar finite element discretization on nonmatching meshes of second order ellip\-tic problems with discontinuous coefficients are designed and analyzed. 
The methods are defined on subdomains without overlap, and they use special coarse spaces, resulting in algorithms that are well suited for parallel computation. The condition number estimate for the preconditioned system in each method is proportional to the ratio $H/h$, where $H$ and $h$ are the mesh sizes, and it is independent of discontinuous jumps of the coefficients. For one of the methods presented the choice of the mortar (nonmortar) side is independent of the coefficients. BIBTEX reference: AUTHOR="Petter E. Bj{\o}rstad, Maksymilian Dryja and Talal Rahman", Title="Additive Schwarz Methods for Elliptic Mortar Finite Element Problems", Journal="Submitted to Numerische Mathematik", Petter E. Bjørstad, petter@ii.uib.no. Maintained by: Petter E. Bjørstad, petter@ii.uib.no, &copy 1995
{"url":"http://www.ii.uib.no/~petter/reports.html","timestamp":"2014-04-16T07:14:56Z","content_type":null,"content_length":"40082","record_id":"<urn:uuid:c226c2fc-2f97-412f-a9bf-89cbf27d94df>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving that f^2 is convex if f is convex and f>=0

How do I go about this proof? Should I use the definition of convexity: f(ax+(1-a)y)<=af(x)+(1-a)f(y)?

You know that for 0<a<1, f(ax+(1-a)y)<af(x)+(1-a)f(y). Since f>0, you can square this inequality to obtain f²(ax+(1-a)y)<a²f²(x)+(1-a)²f²(y)+2a(1-a)f(x)f(y). Now remember that 2f(x)f(y)<f²(x)+f²(y) (because (f(x)-f(y))²>0). The inequality becomes f²(ax+(1-a)y)<a²f²(x)+(1-a)²f²(y)+a(1-a)(f²(x)+f²(y)) = f²(x)(a²+a-a²)+f²(y)((1-a)²+a(1-a)) = af²(x)+(1-a)f²(y), and f² is convex by definition.

Note that if f is C², f convex <=> f''>0. In this case you have a simpler proof: (f²)'=2ff', so (f²)''=2(f')²+2ff''>0.

(Note that I used < for $\leq$ all over the place.)

Ahhh, thank you for your help. I didn't think about 2f(x)f(y)<=f^2(x)+f^2(y). Thank you.
{"url":"http://mathhelpforum.com/differential-geometry/111795-proving-f-2-convex-if-f-convex-f-0-a.html","timestamp":"2014-04-16T11:56:27Z","content_type":null,"content_length":"33942","record_id":"<urn:uuid:4735346f-e69d-4325-8c02-c5f01b48ecb4>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Coplay Science Tutor Find a Coplay Science Tutor ...I played on Lower Merion High School's Varsity Tennis team for four years and Muhlenberg College's varsity tennis team for one year. I also individually taught group tennis clinics through Lower Merion Township to children ranging from 7-17 years old. Ever since I was little I have been an avid tennis player and have taken lessons to improve my skills. 26 Subjects: including psychology, reading, calculus, statistics ...This past year, I spent time in the Poconos and Delaware Water Gap tagging bears. I tracked their movements, habits and hibernation cycles. My studies in my Research Psychology degree also gave me more time to spend working with various rodents and amphibians. 15 Subjects: including biology, reading, writing, English ...Score reports are available upon request. Since I am working to be a teacher, I have all my clearances in order (also available upon request) and updated annually, most recently in August, 2010. References are also available from the Academic Resource Center at Muhlenberg College where I have tutored Organic and General Chemistry students for the college as recently as Spring of 14 Subjects: including organic chemistry, biochemistry, physical science, physiology ...Finding that relation and realizing how Social Studies concepts have shaped society can be not only valuable but also intriguing. I received A's in all Social Studies courses I have taken. I helped various students understand the material as well. 28 Subjects: including psychology, reading, English, precalculus ...During these long term jobs, I have constructed diversified lesson plans according to PA Grade Standards, which included STEM activities, inclusion adaptations, and integrated curriculum. I was involved in planning, conducting, and judging Science Fair activities, including PJAS (Pennsylvania J... 9 Subjects: including zoology, botany, genetics, biology
{"url":"http://www.purplemath.com/Coplay_Science_tutors.php","timestamp":"2014-04-17T04:40:48Z","content_type":null,"content_length":"23840","record_id":"<urn:uuid:f404d686-18d3-4338-b281-ee9b95b3d843>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 83 D-1 APPENDIX D DESIGN EXAMPLES CONTENTS D-2 INTRODUCTION TO DESIGN EXAMPLES D-4 DESIGN EXAMPLE 1: AASHTO TYPE III GIRDER 1. Introduction, D-4 2. Description of Bridge, D-5 3. Design Assumptions and Initial Computations, D-6 4. Analysis and Design of Girders for Continuity, D-12 5. Reinforcement for Positive Moments at Interior Supports, D-24 6. Reinforcement for Negative Moments at Interior Supports, D-35 7. Design with Nonlinear Analysis, D-39 D-42 DESIGN EXAMPLE 2: PCI BT-72 GIRDER 1. Introduction, D-42 2. Description of Bridge, D-42 3. Design Parameters, D-42 4. Reinforcement for Positive Moments at Interior Supports, D-44 D-48 DESIGN EXAMPLE 3: 51-IN. DEEP BOX GIRDER (SPREAD) 1. Introduction, D-48 2. Description of Bridge, D-48 3. Design Parameters, D-48 4. Reinforcement for Positive Moments at Interior Supports, D-50 D-54 DESIGN EXAMPLE 4: AASHTO BIII-48 BOX GIRDER (ADJACENT) 1. Introduction, D-54 2. Description of Bridge, D-55 3. Design Assumptions and Initial Computations, D-56 4. Analysis and Design of Girders for Continuity, D-60 5. Reinforcement for Positive Moments at Interior Supports, D-71 6. Reinforcement for Negative Moments at Interior Supports, D-79 D-84 REFERENCES FOR APPENDIX D D-85 SUBAPPENDIX A: INPUT DATA FOR RESTRAINT D-95 SUBAPPENDIX B: INPUT AND OUTPUT FROM RESPONSE 2000 D-105 SUBAPPENDIX C: INPUT AND OUTPUT FROM QCONBRIDGE OCR for page 83 D-2 INTRODUCTION TO DESIGN EXAMPLES The following design examples demonstrate the design OTHER FEATURES OF DESIGN EXAMPLES of precast girder bridges made continuous. The design con- forms to the AASHTO LRFD Bridge Design Specifications Other basic features of the design examples are summa- and the proposed design specifications developed as part of rized as follows. While the bridges in all design examples this research project (see Appendix C). have two equal spans, the design is similar with additional spans, unequal spans, or both. For bridges with more than two spans, live load will cause positive moments at the inte- rior supports, which do not develop in these examples. LIST OF DESIGN EXAMPLES The precast/prestressed concrete girders are made con- tinuous by the placement of a continuity diaphragm at the All design examples are for bridges with two equal spans. interior support, which fills the gap between ends of gird- The different girder and bridge types considered in the design ers from adjacent spans. For all examples with a compos- examples are listed below, with a brief description of distin- ite deck, the continuity diaphragm is placed with the deck guishing features for each: concrete. Therefore, the bridge is considered to be contin- uous for all loads applied after the continuity diaphragm is · Design Example 1: AASHTO Type III Girder--This in place. 
is a detailed design example for a bridge with relatively Only those details of design that are affected by the use small conventional prestressed concrete girders and a of continuity are presented in the design examples. The composite concrete deck. To demonstrate the signifi- focus of the examples is on flexural design, which is most cant effect of girder age when continuity is established, significantly affected by the consideration of restraint designs are performed assuming continuity is established moments. Design shears, reactions, and deflections are also at girder ages of 28 days, 60 days, and 90 days. affected by continuity, but the procedures for design are not · Design Example 2: PCI BT-72 Girder--This is a brief altered. design example for a bridge with deeper prestressed con- Continuous bridges may be subject to restraint moments crete girders and a composite concrete deck. Only signif- caused by the time-dependent effects of creep and shrink- icant differences from Design Example 1 (DE1) are pre- age. Two approaches have been proposed for considering sented. The design assumes that continuity is established these effects in design of precast concrete bridges made con- at a girder age of at least 90 days. Therefore, according to tinuous: the general and simplified. For the complete design the simplified approach in the proposed specifications, examples (DE1 and DE4), both approaches are considered. restraint moments are not computed. For the brief design examples, the simplified approach is · Design Example 3: 51-in. Deep Box Girder (Spread)-- employed, which does not require the evaluation of restraint This is a brief design example for a bridge with deep moments. box girders and a composite concrete deck. The girders Where required, the restraint moments due to creep and are spaced apart for greater efficiency. Only significant shrinkage are computed using the RESTRAINT spreadsheet differences from DE1 are presented. The design assumes developed in this research project. Design moments from that continuity is established at a girder age of at least sources such as temperature gradient or support settlement 90 days. Therefore, according to the simplified approach may also be considered as required by the owner. Moments in the proposed specifications, restraint moments are not from these sources would be combined with the effects con- computed. sidered in the design examples and compared with the same · Design Example 4: AASHTO BIII-48 Box Girder design criteria. (Adjacent)--This is a detailed design example for a For the detailed design examples--DE1 and DE4--a bridge with box girders placed adjacent to each other. No simple-span design is performed to compare with the two- composite deck is used. An asphalt wearing surface and spans made continuous design. membrane is placed on the box girders to achieve the Both mild reinforcement and pretensioning strands are used desired cross slope. To demonstrate the significant effect to provide the positive moment connection between the pre- of girder age when continuity is established, designs are cast girder and the continuity diaphragm. performed assuming continuity is established at girder Reinforcement in the composite concrete deck is propor- ages of 7 days, 28 days, and 90 days. tioned to resist negative design moments for design examples OCR for page 83 D-3 DE1, DE2, and DE3. 
For the final design example--DE4-- girder spacing and span length are fixed for each design exam- negative moments are resisted by a connection between the ple. The examples consider only interior girders. tops of the box girders. Each example provides reinforcement details for the con- The design examples represent typical bridges for the cross nections at the continuity diaphragm. Constructability is con- sections considered. The bridge typical section for DE1 and sidered in developing the details. DE2 are the same. The bridge typical section for DE3 is wider, Typical design loads are used in the designs. Conventional while the bridge typical section for DE4 is narrower. The materials are used for all designs. OCR for page 83 D-4 DESIGN EXAMPLE 1: AASHTO TYPE III GIRDER 1 INTRODUCTION Negative moments at the interior pier are caused by dead loads applied to the composite continuous structure, live loads, This design example demonstrates the design of a typical and restraint moments. However, negative restraint moments continuous two-span bridge using the specifications pro- are neglected in the design as allowed by the proposed spec- posed as part of this research. The precast/prestressed con- ifications. In this example, negative moments are resisted by crete girders are made continuous by the placement of a con- mild reinforcement added to the deck slab, which is the most tinuity diaphragm at the interior support, which fills the gap common approach to providing a negative moment connec- between ends of girders from adjacent spans. For this exam- tion. The reinforcement in the negative moment connection ple, the continuity diaphragm is placed with the deck, so the is proportioned using strength design methods. bridge becomes continuous for loads placed on the structure after the deck and continuity diaphragm are in place. Once made continuous, the bridge is subject to restraint moments that may develop from the time-dependent effects 1.1 Age of Girders at Continuity of creep and shrinkage. Restraint moments are caused by To demonstrate the significant effect of girder age when restrained deformations in the bridge. Analysis indicates that continuity is established, designs will be performed assum- the restraint moments vary linearly between supports. For this ing that continuity is established at the following girder ages: two-span bridge, the restraint moments reach maximum val- ues at the center of the interior pier. Reinforcement is provided at the interior pier to resist moments caused by time-dependent · 28 days, effects and applied loads. Restraint moments also affect the · 60 days, and moments within the spans. Therefore, girder designs must · 90 days. be adjusted to account for the additional positive moments caused by restraint. If contract documents specify the minimum girder age at con- Variations in temperature also cause restraint moments in tinuity, the minimum age is known. If the minimum girder age continuous bridges. However, this condition will not be con- at continuity is 90 days, the proposed specifications allow the sidered in this example. If moments from temperature effects designer to neglect the effect of restraint moments. This is were included, girder designs would have to be adjusted in the referred to as the "simplified approach." If the minimum girder same way as they are for restraint moments in this example. 
age at continuity is not specified, the designer must use the Only those details of design that are affected by the use of "general approach," which considers the effect of restraint continuity are presented in this design example. Therefore, moments. See Section 4 for a discussion of the two approaches. the focus of this example will be on flexural design, which is Since positive restraint moments have the most significant most significantly affected by the consideration of restraint effect on designs, assuming an early age at continuity will moments. While design shears, reactions, and deflections are result in higher positive restraint moments. Two early ages for also affected when compared with design for simple-span continuity (less than 90 days) are considered in this example bridges, the procedures for design are not altered; therefore, to provide information for the designer to make decisions design for these quantities will not be presented. regarding whether to set a minimum girder age at continuity In a two-span bridge with simple-span girders made con- and what that age would be. tinuous, positive restraint moments may develop at the inte- rior support. Positive moments do not develop from live loads for a two-span bridge, and the effect of support settlement is not considered. The positive design moments at the interior 1.2 Design Programs Used support are resisted by mild reinforcement or pretensioning strands that extend into the continuity diaphragm from the Most of the design calculations were performed using a bottom flange of the girder. This positive moment connection commercially available computer program. This was supple- is proportioned using strength design methods to resist any mented by hand and by spreadsheet computations to obtain restraint moments that may develop or to provide a minimum the quantities needed for this design. Restraint moments were quantity of reinforcement. The positive moment connection estimated using the Restraint Program. Fatigue design loads is also provided to enhance the structural integrity of the were computed using the QConBridge Program, which is bridge. Construction details for the positive moment connec- available at no cost from the Washington State DOT website tion are discussed in this example. (see Subappendix C). Moment-curvature relationships for use OCR for page 83 D-5 in the nonlinear analysis portion of Restraint were obtained The connection is made when the deck slab is cast. The gird- using the Response 2000 Program, which is available at no ers are therefore considered continuous for all loads applied cost from a website at the University of Toronto (see Sub- to the composite section. appendix B). The distance between centers of bearings (85.00 ft) is used for computing effects of loads placed on the simple- span girders before continuity is established. After conti- nuity, the design span for the continuous girders is assumed 2 DESCRIPTION OF BRIDGE to be from the center of bearing at the expansion end of the girder to the center of the interior pier, or 86.00 ft. See Fig- The bridge is a typical two-span structure with AASHTO ure D-2-2. The space required between ends of girders to Type III girders and a composite deck slab. The span length accommodate the positive reinforcement connection should for this bridge is approaching the maximum achievable for be considered when laying out the bridge (see Section 5.3). this girder and spacing. 
The geometry of the bridge is shown The following design example demonstrates the design of in Figures D-2-1 through D-2-3. an interior girder. Design of an exterior girder would be sim- The girders are made continuous by a continuity diaphragm ilar except for loads. For this bridge, the interior girder design that connects the ends of the girders at the interior support. governs. Figure D-2-1. Plan view of bridge. Figure D-2-2. Longitudinal section view of bridge. OCR for page 83 D-6 Figure D-2-3. Typical section of bridge. 3 DESIGN ASSUMPTIONS AND INITIAL See Sections 3.3 and 3.6 for values used in computing COMPUTATIONS these factors. The factors are different for the indicated girder ages when continuity is established because the 3.1 Specifications designs require different values for fc. · Girder self weight: The unit weight of girder concrete AASHTO LRFD Bridge Design Specifications, 2nd Edi- is 0.150 kcf. tion with Interims through 2002 is the primary publication to = 0.583 klf which this appendix will refer (American Assoc. of State · Deck slab (structural): The structural thickness of the Highway and Transportation Officials, 1998). References to deck is 7 3/4 in. (see Section 3.5). articles, equations, and tables in the AASHTO LRFD Spec- = 0.097 ksf on the tributary area for girders ifications will be preceded by the prefix "LRFD" to differen- = 0.751 klf for an interior girder (noncomposite tiate them from other references in this design example. section) Proposed revisions have been developed as part of this · Weight of additional deck thickness: The additional research project (see Subappendix C). References to articles deck slab thickness is 1/4 in. (see Section 3.5). and equations in the proposed specifications will be preceded = 0.003 ksf on the tributary area for girders by the prefix "proposed" to differentiate them from refer- = 0.024 klf for an interior girder (noncomposite ences to items in the AASHTO LRFD Specifications. section) · 2 1/2 in. build-up: The full build-up thickness of 21/2 in. 3.2 Loads is used for dead load computations (see Section 3.5). = 0.042 klf for all girders (noncomposite section) Loads are as follows. · Stay-in-place (SIP) deck forms: = 0.016 ksf on formed area between girders · Live load: HL-93 with 33% dynamic allowance (IM) = 0.103 klf for an interior girder (noncomposite on the design truck. Live-load distribution factors are section) computed using equations in LRFD Table 4.6.2.2.2b-1 · Parapet load: for section type (k) (see LRFD Table 4.6.2.2.1-1): = 0.371 klf per parapet, or 0.742 klf for both parapets = 0.148 klf for each girder (composite section) · Future wearing surface: Distribution of Live Load Moment in Interior Beams One Design Lane Loaded 0.465 lanes / girder = 0.025 ksf on roadway width 28 Days Two or More Design Lanes 0.654 lanes / girder = 0.170 klf for each girder (composite section) · Dead load: dead loads placed on the composite girder 60 & 90 One Design Lane Loaded 0.461 lanes / girder Days are distributed equally to all girders in the cross section Two or More Design Lanes 0.648 lanes / girder (LRFD Article 4.6.2.2.1). OCR for page 83 D-7 3.3 Materials and Material Properties computed, or they may be obtained from a table of section properties: Material properties used for design are given below. A = 559.5 in2, and 3.3.1 Girder Concrete p = 137.9 in. 3.3.1.1 Basic Properties. 
Girder concrete strengths are dif- LRFD Article 5.4.2.3.2 suggests that only the surface area ferent for the indicated girder ages at continuity because of design requirements. Initial design was performed using prop- exposed to atmospheric drying should be included in the erties shown for a girder age at continuity of 90 days (see computation of the V/S ratio. For the girder, the only sur- Table D-3.3.1.1-1). face that is not exposed to drying, for the life of the mem- ber, is the top surface of the top flange, which will be in 3.3.1.2 Time-Dependent Properties. Time-dependent con- contact with the composite deck slab in the completed crete properties (creep and shrinkage) are needed only if structure. However, the girder will be entirely exposed restraint moments are being included in the analysis and prior to placement of the deck slab concrete. Furthermore, design. Therefore, the following computations are not required the width of the contact area is relatively small compared if the simplified approach is being used (see Section 4). Mea- with the total girder perimeter. Therefore, the suggestion of sured values of the ultimate creep coefficient and the ultimate LRFD Article 5.4.2.3.2 will be disregarded for this girder. shrinkage strain for the concrete should be used if possible. It appears appropriate to neglect the reduction for the con- However, measured creep and shrinkage properties are rarely tact area between girder and deck in most cases, especially available; these quantities are usually estimated. For this design where top flanges are wide and thin. The reduction may be example, the equations in LRFD Article 5.4.2.3 are used to appropriate if the deck will be cast at an early girder age or estimate creep and shrinkage. See the AASHTO LRFD Spec- if the section is stocky, such as a box girder. Please note that ifications for secondary equations and complete definitions for box girders, a fraction of the perimeter of the interior of the terms used in the calculations that follow. Restraint void is included in the perimeter calculation. See DE4 for moments are very sensitive to variations in creep and shrink- discussion. age values, so the best possible estimates should be used. The V/S ratio is Other methods for estimating creep and shrinkage properties may be used as permitted by LRFD Article 5.4.2.3.1. V/S = A/p = 559.5/137.9 = 4.057 in. 3.3.1.2.1 Volume­to­surface area ratio. Both creep and The Commentary (LRFD Article C5.4.2.3.2) indicates that shrinkage equations are dependent upon the volume­to­ the maximum value of the V/S ratio considered in the devel- surface area (V/S) ratio. Since the equations are sensitive to opment of the equations for the creep and shrinkage factors this quantity and the analysis for restraint moments is sensi- in which V/S appears was 6.0 in. This value should be con- tive to creep and shrinkage values, it is important to carefully sidered a practical upper limit for the ratio when using the consider the computation of this ratio. equations in the Specifications. The V/S ratio is generally computed using the equivalent ratio of the cross-sectional area to the perimeter. This quan- 3.3.1.2.2 Ultimate creep coefficient. The creep coefficient tity can be easily computed for most sections. For the stan- may be taken as follows: dard AASHTO Type III girder, the area and perimeter can be (t , t i ) ( ) H -0.118 (t - t i ) LRFD Eq. 5.4.2.3.2-1 0.6 TABLE D-3.3.1.1-1 Girder concrete properties = 3.5k c k f 1.58 - ti 0.6. 
120 10.0 + (t - t i ) Girder Age at Continuity 28 Days 60 Days 90 Days Significant load is placed on the girder at release. Therefore, ' f ci (ksi) 7.50 6.00 5.50 ti , the age of concrete when load is initially applied, is taken f 'c (ksi) 8.50 7.00 7.00 to be the age of the girder at release, or typically at 1 day. fr (ksi) 0.700 0.635 0.635 To determine the ultimate value for the creep coefficient, Eci (ksi) 5,520 4,696 4,496 u , where t = , the final term in the equation is assumed to Ec (ksi) 5,589 5,072 5,072 approach unity: wc (kcf) 0.150 0.150 0.150 Note: Eci and Ec are computed using LRFD Eq. 5.4.2.4-1. u = (, 1 day) = 1.80 (for continuity at 60 and 90 days) fr (modulus of rupture) is computed using LRFD Art. 5.4.2.6. = 1.62 (for continuity at 28 days), OCR for page 83 D-8 where 3.3.2.2 Time-Dependent Properties. See Section 3.3.1.2.1 for discussion. kc = factor for V/S ratio, LRFD Eq. C5.4.2.3.2-1 = 0.781, 3.3.2.2.1 V/S ratio. As discussed in Section 3.3.1.2.1, kf = factor for the effect of LRFD Eq. 5.4.2.3.2-2 concrete strength, determination of the V/S ratio should be carefully considered. = 0.691 (using f c = 7.00 ksi, The V/S ratio for the composite deck is computed using the for continuity at 60 and equivalent ratio of the cross-sectional area to the perimeter. 90 days), For a composite deck slab, the area is computed as the = 0.619 (using f c = 8.50 ksi, product of the full depth of the deck and the width of the deck for continuity at 28 days), extending to the center of the bay between girders or to the H = relative humidity, exterior edge of the deck. The area of the build-up could also = 75% (assumed), and be included in the deck area. However, such a refinement of V/S = 4.057 in. (used to determine kc). the computation is not generally justified, since the calcula- tion is not precise. Therefore, the area of the deck for the inte- 3.3.1.2.3 Ultimate shrinkage strain. While it is not always rior girder being considered will be the product of the girder known whether the girder will be steam cured during fabri- spacing, S, and the total deck thickness, hf : cation, the initial strength gain is generally accelerated when compared with "normal" concretes. Therefore, it is reason- A = Shf = 7.75 ft × 8 in. = 7.75(12)(8) = 744 in.2. able to use the shrinkage equation for steam-cured concrete. The shrinkage strain may therefore be taken as The perimeter of the deck slab used to compute the V/S ratio is subject to some refinement based on the recommendation sh = - ks kh (55.0t + t)0.56 × 10 -3 . LRFD Eq. 5.4.2.3.3-2 of LRFD Article 5.4.2.3.2, which indicates that only the sur- face area exposed to atmospheric drying should be included in the computation of the V/S ratio. Since the area of the deck To determine the ultimate shrinkage strain, shu , where t = , that is in contact with the girder will never be exposed to dry- the term in the equation that contains it is assumed to ing, it may be eliminated from the computed perimeter. For approach unity: simplicity and since the top flange of the Type III girder is relatively narrow, this correction will not be taken. It may be shu = sh () = -395 × 10-6 in./in. appropriate to use the correction for the contact area where the contact area with the girder is wide, such as a bulb-T or where box girder. For interior girders, the deck thickness is not con- sidered in computing the perimeter because it is an imaginary ks = size factor = 0.760, and LRFD Eq. C5.4.2.3.3-1 boundary not exposed to drying. 
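As an aside (this snippet is not part of the original report), the ultimate creep coefficient and ultimate shrinkage strain quoted above for the girder concrete can be re-evaluated in a few lines. The correction factors kc, kf, ks and kh are taken as the values quoted in the text rather than re-derived from the LRFD expressions, and the time-dependent terms are taken at their limiting value of 1.0.

# Sketch only: re-evaluates the girder ultimate creep coefficient
# (LRFD Eq. 5.4.2.3.2-1 with the time ratio taken as 1.0) and the ultimate
# shrinkage strain for steam-cured concrete (Eq. 5.4.2.3.3-2).
def ultimate_creep(kc, kf, H, ti):
    # psi(inf, ti) = 3.5 * kc * kf * (1.58 - H/120) * ti**-0.118
    return 3.5 * kc * kf * (1.58 - H / 120.0) * ti ** -0.118

def ultimate_shrinkage(ks, kh, coeff=0.56e-3):
    # eps_sh(inf) = -ks * kh * coeff, with 0.56e-3 for steam-cured concrete
    return -ks * kh * coeff

H, ti = 75.0, 1.0                                      # relative humidity (%), age at release (days)
print(ultimate_creep(kc=0.781, kf=0.691, H=H, ti=ti))  # ~1.80 (f'c = 7.00 ksi, continuity at 60/90 days)
print(ultimate_creep(kc=0.781, kf=0.619, H=H, ti=ti))  # ~1.62 (f'c = 8.50 ksi, continuity at 28 days)
print(ultimate_shrinkage(ks=0.760, kh=0.929))          # ~-3.95e-4 in./in.

Changing the humidity or the age at loading in this sketch shows directly how sensitive the ultimate values are, which is the sensitivity the text warns about when estimating restraint moments.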
Therefore, for the interior kh = humidity factor = 0.929. LRFD Eq. C5.4.2.3.3-2 girder being designed, the perimeter is taken as twice the girder spacing, S: 3.3.2 Deck and Continuity Diaphragm Concrete p = 2S = 2(7.75)(12) = 186.0 in. The same concrete properties are used for the deck slab and continuity diaphragm because they are placed at the The V/S ratio is same time in this example. The subscript d is used to indicate properties related to the deck slab or diaphragm concrete. V/S = A/p = 744/186 = 4.00 in. 3.3.2.1 Basic Properties. The same deck slab concrete This calculation demonstrates that, for a uniform thickness strength is used for all designs: deck slab with no deducted surface area, V/S is simply half of the thickness of the deck. d = 4.00 ksi, fc Since stay-in-place deck forms may be used on this bridge, the bottom of the deck slab may not be exposed to drying. frd = 0.480 ksi, LRFD Art. 5.4.2.6 This would increase the V/S ratio to 8.00 in., which exceeds wcd = 0.150 kcf, and the V/S limit used to develop the equations for correction fac- tors, kc and ks The increased V/S will reduce the corrections Ecd = 3,834 ksi. LRFD Eq. 5.4.2.4-1 factors, but not significantly. Therefore, the effect of the deck forms is neglected. According to proposed Article 5.14.1.2.7j, design at the con- tinuity diaphragm will use the concrete strength of the pre- 3.3.2.2.2 Ultimate creep coefficient. The age of the deck cast girder where noted. concrete at loading is not as well defined as it is for the girder. OCR for page 83 D-9 An early age of 14 days is assumed to provide a conservative 0.5-in.- or 0.6-in.-diameter low-relaxation seven-wire strand; estimate of deck creep behavior (a larger creep coefficient). An early age at loading is also a reasonable assumption because Aps = 0.153 in.2 (0.5-in.-diameter strand, for continuity at some load will be transferred to the deck shortly after casting 60 and 90 days) and because it restrains the continued downward deflection of the = 0.217 in.2 (0.6-in.-diameter strand, for continuity at girder under the load of the deck (due to creep): 28 days); (t , t i ) fpu = 270 ksi; ( ) H -0.118 (t - t i ) 0.6 LRFD Eq. 5.4.2.3.2-1 = 3.5k c k f 1.58 - ti 0.6. 120 10.0 + (t - t i ) fpy = 0.90 fpu = 243 ksi; As described previously, t = is used to obtain the ultimate fpj = 0.75 fpu = 202.5 ksi; and value for the creep coefficient, u : Ep = 28,500 ksi. u = (, 14 days) = 1.70, 3.3.3.1 Transfer Length. At the ends of pretensioned gird- where ers, the force in the pretensioning strands is transferred from the strands to the girder concrete over the transfer length. The kc = factor for V/S ratio LRFD Eq. C5.4.2.3.2-1 stress in the strands is assumed to vary linearly from zero at = 0.775, the end of the girder to the full effective prestress, fpe , at the kf = factor for the effect of LRFD Eq. 5.4.2.3.2-2 transfer length. The transfer length, t , may be estimated as concrete strength = 0.897 (using f c d = 4.00 ksi), t = 60 db, LRFD Art. 5.11.4.1 H = relative humidity = 75% (assumed), and = 60(0.5 in.) V/S = 4.00 in. (used for kc; this is a simplified value, based = 30 in. (for continuity at 60 on the full 8-in.deck thickness and neglecting any effect of SIP metal forms and the area of contact and 90 days), and with the girder). = 60 (0.6 in.) = 36 in. (for continuity at 28 days), 3.3.2.2.3 Ultimate shrinkage strain. Since deck slab con- crete is normally moist cured, the equation for shrinkage for where moist-cured concrete is used: db = nominal strand diameter. 
sh = - ks kh (35.0t + t)0.51 × 10 -3 . LRFD Eq. 5.4.2.3.3-1 The location at a transfer length from the end of the girder is a critical stress location at release. Therefore, moments and stresses computed for this location are shown in various tables To determine the ultimate shrinkage strain, shu , where t = , in this example. These locations are identified in the tables the term in the equation that contains t is assumed to with the heading "Trans" or "Transfer." Values in tables dif- approach unity: fer for continuity at 28 days and for continuity at 60 and 90 days because the different strand size results in a different shu = sh () = -353 × 10-6 in./in., transfer length. where 3.3.4 Mild Reinforcement ks = size factor = 0.745, and LRFD Eq. C5.4.2.3.3-1 kh = humidity factor = 0.929. LRFD Eq. C5.4.2.3.3-2 Mild reinforcement is as follows: The restraining effect of longitudinal reinforcement in the fy = 60 ksi, and deck slab on the free shrinkage is not considered in this design example. Proposed Article C5.14.1.2.7c states that the effect Es = 29,000 ksi. may be computed by proposed Equation C5.14.1.2.7c-1. 3.3.3 Prestressing Strand 3.4 Stress Limits The material properties of the prestressing strand are as The following stress limits are used for the design of the follows: girders for the service limit state. For computation of girder OCR for page 83 D-10 stresses, the sign convention will be compressive stress is prestressed. Instead, it is designed to satisfy the specified positive (+) and tensile stress is negative (-). Signs are not requirements at the strength limit state. shown for limits in the following, they will be applied in later stress comparisons. Compression: LRFD Table 5.9.4.2.1-1 fc1 = 0.60 w f c , for full-service loads (w = 1 for girders); 3.4.1 Pretensioned Strands fc2 = 0.45 f c , for effective prestress (PS ) and full dead loads The stress limits for low relaxation strands are as follows: (DL); and Immediately prior to transfer: LRFD Table 5.9.3-1 , for live load plus one-half of effective PS and fc3 = 0.40 f c full DL. fpi = 0.75 fpu = 202.5 ksi. Tension: LRFD Table 5.9.4.2.2-1 At service limit state after losses: LRFD Table 5.9.3-1 For the precompressed compression zone, ft1 = 0.19 f c , fp = 0.80 fpy = 199.4 ksi. assuming moderate corrosion conditions. For locations other than the precompressed compression zone, such as at The stress limits above are not discussed in this example the end of the girder where the top of the girder may go into because they do not govern designs. tension under the effect of the negative live-load moment, the LRFD Specifications give no stress limits. Therefore, the following limits have been proposed, which take the 3.4.2 Concrete same form as those for temporary tensile stresses at release 3.4.2.1 Temporary Stresses at Release. See Table given in LRFD Table 5.9.4.1.2-1, but with the specified D-3.4.2.1-1. concrete compressive strength, f c, rather than the concrete : compressive strength at release, f ci Compression: LRFD Art. 5.9.4.1.1 ft2 = 0.0948 f c 0.2 ksi, or fcR = 0.60 f c i . ft3 = 0.24 f c with reinforcement to resist the tensile force Tension: LRFD Table 5.9.4.1.2-1 in the concrete. ftR1 = 0.0948 f c i 0.2 ksi, or Numerical values for the stress limits are given in Table D-3.4.2.2-1. ftR2 = 0.24 f ci with reinforcement to resist the tensile force in the concrete. 3.5 Other Design Assumptions 3.4.2.2 Final Stresses at Service Limit State after Losses. 
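The tabulated stress limits can likewise be checked directly from their coefficients. This is only an illustrative sketch, with tension negative per the sign convention stated above; the report rounds the tabulated results to two significant digits.

# Sketch only: concrete stress limits used for the girder design (ksi).
# Compression positive, tension negative.
def release_limits(fci):
    # Temporary limits at release, from f'ci.
    return {
        "fcR": 0.60 * fci,                        # compression
        "ftR1": -min(0.0948 * fci ** 0.5, 0.2),   # tension, no bonded reinforcement
        "ftR2": -0.24 * fci ** 0.5,               # tension, with reinforcement
    }

def service_limits(fc):
    # Final limits after losses, from f'c (w = 1 for girders in fc1).
    return {
        "fc1": 0.60 * fc,                         # full service loads
        "fc2": 0.45 * fc,                         # effective PS and full DL
        "fc3": 0.40 * fc,                         # live load plus one-half of effective PS and full DL
        "ft1": -0.19 * fc ** 0.5,                 # precompressed tensile zone
        "ft2": -min(0.0948 * fc ** 0.5, 0.2),
        "ft3": -0.24 * fc ** 0.5,                 # with reinforcement for the tensile force
    }

# Continuity at 90 days: f'ci = 5.50 ksi, f'c = 7.00 ksi
print(release_limits(5.50))    # fcR = 3.30, ftR2 ~ -0.56
print(service_limits(7.00))    # fc1 = 4.20, ft1 ~ -0.50, ft3 ~ -0.63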
Intermediate diaphragms are not used in the design exam- The following stress limits are given for the girder concrete. ple. Temporary steel or timber cross frames are generally Compressive stresses may be checked for the deck slab, but required during erection to stabilize girders. However, the never govern, so they are not included here. Tensile stresses weight of these temporary components is minor and is in the deck slab at interior supports should not be compared neglected in these calculations. with limits for the service limit state because the deck is not TABLE D-3.4.2.2-1 Final stress limits after losses TABLE D-3.4.2.1-1 Temporary stress limits at Girder Age at Continuity release 28 Days 60 Days 90 Days ' Girder Age at Continuity f c (ksi) 8.50 7.00 7.00 28 Days 60 Days 90 Days fc1 (ksi) 5.100 4.200 4.200 f ' (ksi) 7.50 6.00 5.50 fc2 (ksi) 3.830 3.150 3.150 ci fcR (ksi) 4.500 3.600 3.300 fc3 (ksi) 3.400 2.800 2.800 ftR1 (ksi) ­0.200 ­0.200 ­0.200 ft1 (ksi) ­0.550 ­0.500 ­0.500 ftR2 (ksi) ­0.660 ­0.590 ­0.560 ft2 (ksi) ­0.200 ­0.200 ­0.200 Note: Values are rounded to two significant digits. ft3 (ksi) ­0.700 ­0.630 ­0.630 OCR for page 83 D-11 The top 0.25 in. of the deck is assumed to be a sacrificial Sb = 6,185 in3, and wearing surface; the structural deck thickness is taken as 7.75 in. for design purposes. The weight of the remaining St = 5,071 in.3. 0.25 in. of the deck is included as additional load on the non- composite girder. For simplicity, the full thickness of the build-up is applied 3.6.2 Composite Section (Girder with Deck Slab) to the full length of the girder for dead-load computations. In most design situations, a value is used that is less than the 3.6.2.1 Effective Deck Width. The effective width for the specified build-up thickness at the center of the bearings composite deck at all limit states is determined according to because the actual thickness will vary along the length of the LRFD Article 4.6.2.6.1. The effective deck width for an inte- girder from the maximum of 2 1/2 in. at the center of bearings. rior beam is the least of the following: The build-up is neglected when computing composite sec- tion properties that are used to calculate stresses for service 1. One-quarter of the span length (260 in.); limit state design since the build-up will vary along the bridge. 2. Average spacing of adjacent girders (93 in.) GOV- However, for computation of section properties and strength ERNS; and calculations related to the reinforcement at the continuity 3. Twelve times the average thickness of the slab (93 in.), diaphragm, the build-up is included. This is done because the plus the greater of build-up is specified at the center of the bearings, so the full a. The web thickness (7 in.) or build-up will be provided at the continuity diaphragm location. b. One-half of the width of the top flange of the girder Potential deck cracking, if considered in the analysis of this (8 in.). continuous bridge, could increase positive design moments. However, potential deck cracking is neglected as allowed in In the case of this design example, the average spacing of LRFD Article 4.5.2.2. adjacent girders controls, resulting in an effective deck width of 93 in. 3.6 Section Properties 3.6.2.2 Transformed Effective Deck Width. The com- 3.6.1 Noncomposite Section (Girder Only) posite deck slab is transformed using the modular ratio, n, for computing stresses at the service limit state. See Table The section properties for a standard AASHTO Type III D-3.6.2.3-1. 
girder are as follows (see Figure D-3.6.1-1): 3.6.2.3 Section Properties. The build-up is neglected when h = 45.00 in., computing section properties because the build-up height varies along the length of the girder, with the minimum height A = 559.5 in.2, at or near midspan. See Section 3.5. Composite section prop- erties vary for different girder ages at continuity because the I = 125,390 in.4, girder concrete strength is different. yb = 20.27 in., TABLE D-3.6.2.3-1 Composite section properties yt = 24.73 in., Girder Age at Continuity 28 Days 60 Days 90 Days hc (in.) 52.75 52.75 52.75 n = Ecd/Ec 0.686 0.756 0.756 beff (in.) 93.00 93.00 93.00 beff tr = n beff (in.) 63.80 70.31 70.31 Ac (in2) 1,053.9 1,104.3 1,104.3 Ic (in4) 342,585 353,928 353,928 ybc (in.) 33.69 34.38 34.38 ytc (in.) 11.31 10.62 10.62 ytcd (in.) 19.06 18.37 18.37 Sbc (in3) 10,168 10,293 10,293 Stc (in3) 30,294 33,340 33,340 Figure D-3.6.1-1. AASHTO * Stcd (in3) 26,203 25,493 25,493 *Note: Stcd = (Ic / ytcd) / n, so that fcd = M / Stcd. Type III girder. OCR for page 83 D-100 Geometric Properties Strain Discontinuity in Concrete Gross Conc. Trans (n=6.56) Area (in2 ) 1169.1 1295.5 Concrete 93.0 9 - #5 Inertia (in4 ) 405953.4 465041.1 Types 2 layers of y t (in) 4000 9 - #6 18.7 18.9 9 - #5 y b (in) 36.5 36.4 4 - S.5 p = 7.10 ms 55.3 St (in3 ) 7000 21693.1 24641.0 base 7.0 38 - S.5 type p = 7.10 ms Sb (in3 ) 11110.9 12783.8 8 - #5 Crack Spacing 22.0 2 x dist + 0.1 db / Loading (N,M,V + dN,dM,dV) 0.0 , -100.0 , 0.0 + 0.0 , 1.5 , 0.0 Concrete Rebar P-Steel fc' = 7000 psi fu = 90 ksi fpu = 266 ksi All dimensions in inches Clear cover to reinforcement = 1.75 in a = 0.75 in fy = 60 Low Relax End of Girder - 60 Day ft = 308 psi (auto) c' = 2.22 ms s = 100.0 ms p = 43.0 ms TJT 8/20/2002 Figure DB.2.2.2-1. Cross section and material information. 6000.0 5000.0 4000.0 Moment (ft-kips) 3000.0 2000.0 1000.0 0.0 -90.0 0.0 90.0 180.0 270.0 360.0 450.0 540.0 -1000.0 -2000.0 -3000.0 Curvature (rad/106 in) Figure DB.2.2.2-2. Moment curvature analysis plot. OCR for page 83 D-101 Geometric Properties Gross Conc. Trans (n=8.22) Area (in2 ) 1312.0 1427.4 93.0 Inertia (in4 ) 433853.5 473331.8 9 - #6 9 - #5 y t (in) 17.1 16.7 9 - #5 9 - #6 y b (in) 38.1 38.5 55.3 St (in3 ) 25332.4 28342.6 7.0 Sb (in3 ) 11380.2 12278.5 8 - #5 Crack Spacing 22.0 2 x dist + 0.1 db / Loading (N,M,V + dN,dM,dV) 0.0 , -100.0 , 0.0 + 0.0 , 1.5 , 0.0 Concrete Rebar fc' = 4000 psi fu = 90 ksi All dimensions in inches Clear cover to reinforcement = 1.69 in a = 0.75 in fy = 60 Diaphragm - 60 Day ft = 246 psi (auto) c' = 1.93 ms s = 100.0 ms TJT 8/20/2002 Figure DB.2.2.3-1. Cross section and material information. 1000.0 500.0 -200.0 0.0 200.0 400.0 600.0 800.0 0.0 Moment (ft-kips) -500.0 -1000.0 -1500.0 -2000.0 -2500.0 -3000.0 Curvature (rad/106 in) Figure DB.2.2.3-2. Moment curvature analysis plot. OCR for page 83 D-102 Geometric Properties Strain Discontinuity in Concrete Gross Conc. 
Figure DB.2.3.1-1. Cross section and material information (Girder Midspan - 28 Days).
Figure DB.2.3.1-2. Moment curvature analysis plot.
Figure DB.2.3.2-1. Cross section and material information (End of Girder - 28 Days).
Figure DB.2.3.2-2. Moment curvature analysis plot.
Figure DB.2.3.3-1. Cross section and material information (Diaphragm - 28 Days).
Figure DB.2.3.3-2. Moment curvature analysis plot.

SUBAPPENDIX C: INPUT AND OUTPUT FROM QCONBRIDGE

The examples discussed within this subappendix are as follows:
· Design Example 1: AASHTO Type III Girder Bridge.
· Design Example 2: PCI BT-72 Girder Bridge.
· Design Example 3: 51-IN.-Deep Spread Box Girder Bridge--The design spans for the bridge in this example are the same as Design Example 1; therefore, the output from Design Example 1 (see Section DC.1) was used for this example.
· Design Example 4: AASHTO BIII-48 Adjacent Box Girder Bridge--The design spans for the bridge in this example are the same as Design Example 1; therefore, the output from Design Example 1 (see Section DC.1) was used for this example.

DC.1 PROGRAM INFORMATION

The program QConBridge was used to compute the fatigue load effects. While QConBridge also reports load effects for other types and combinations of loadings, these were not used in this study because the design for these load effects was performed using another computer program. Therefore, these other results have been deleted from the output that follows. The program QConBridge was developed by the Washington State DOT and is available free of charge on the department website: www.wsdot.wa.gov/eesc/bridge/software/index.cfm. The version of the program used for this study is shown in the figure below taken from the program.
{"url":"http://www.nap.edu/openbook.php?record_id=13746&page=83","timestamp":"2014-04-20T19:10:13Z","content_type":null,"content_length":"112973","record_id":"<urn:uuid:0653e52e-4930-46c2-be9e-263a82067c50>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalized Partial Credit Model - D

The generalized partial credit model (GPCM) is an IRT model developed to analyze partial credit data, where responses are scored 0, 1, …, k, where k is the highest score category for the item. Masters (1982) developed the partial credit model (PCM) by formulating a polytomous rating response model based on the Rasch (1960) dichotomous model. In Masters’ formulation, the probability of choosing the kth category over the (k − 1)th category of item j is governed by the dichotomous response model. Let P[jk](θ) denote the specific probability of selecting the kth category from m[j] possible categories of item j. For each pair of adjacent categories, the probability of the specific categorical response k over k − 1 is given by the conditional probability P[jk](θ) / (P[j],k−1(θ) + P[jk](θ)), where k = 1, 2, ..., m[j]. Note that in this model, all items are assumed to have uniform discriminating power.

Muraki (1992) extended Masters’ PCM by relaxing the assumption of uniform discriminating power of test items, based on the two-parameter (2PL) logistic response model. In Muraki’s formulation, the probability of choosing category k over category k − 1 is given by the same conditional probability, now modeled with the 2PL form, where k = 1, 2, ..., m[j]. After normalizing each P[jk](θ) so that Σ[k] P[jk](θ) = 1, the GPCM is obtained, in which D is a scaling constant set to 1.7 to approximate the normal ogive model, a[j] is a slope parameter, b[j] is an item location parameter, and d[j],v is a category parameter. The slope parameter indicates the degree to which categorical responses vary among items as the θ level changes. With m[j] categories, only m[j] − 1 category parameters can be identified. Indeterminacies in the parameters of the GPCM are resolved by setting d[j],0 = 0. The quantity b[j] − d[j],k is the point on the θ scale at which the plots of P[j],k−1(θ) and P[jk](θ) intersect and so characterizes the point on the θ scale at which the response to item j has equal probability of falling in response category k − 1 and falling in response category k.

The figure below illustrates the item category response functions (what are known as item characteristic curves for dichotomous items) for a GPCM item with parameters a = 1 and b = (0, −2, 4). It is clear from this graph that the point of intersection of P[j],k−1(θ) and P[jk](θ) corresponds to an equal probability (i.e., P = .5) of being in category k − 1 or category k.
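For reference, a standard way of writing the GPCM described above (Muraki-style; the indexing below, with categories k = 0, 1, ..., m_j − 1 and d_{j0} = 0, is one common convention and may differ slightly in notation from the source):

\[
P_{jk}(\theta) \;=\;
\frac{\exp\left(\sum_{v=0}^{k} D a_j\,(\theta - b_j + d_{jv})\right)}
     {\sum_{c=0}^{m_j-1} \exp\left(\sum_{v=0}^{c} D a_j\,(\theta - b_j + d_{jv})\right)},
\qquad k = 0, 1, \ldots, m_j - 1,
\]

and the adjacent-category (conditional) probability discussed in the text reduces to the 2PL form

\[
\frac{P_{jk}(\theta)}{P_{j,k-1}(\theta) + P_{jk}(\theta)}
= \frac{\exp\bigl(D a_j (\theta - b_j + d_{jk})\bigr)}{1 + \exp\bigl(D a_j (\theta - b_j + d_{jk})\bigr)}.
\]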
{"url":"http://am.air.org/help/NAEPTextbook/htm/dgeneralizedpartialcreditmodel.htm","timestamp":"2014-04-17T03:55:10Z","content_type":null,"content_length":"10338","record_id":"<urn:uuid:1fc34977-ace5-484b-957f-c0d8a54850c9>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
June 2012 Archives While finding material for the previous post I stumbled across the following interesting paper. W. Wang, B. Jüttler, D. Zheng, and Y. Liu, “Computation of rotation minimizing frames,” ACM Transactions on Graphics, vol. 27, no. 1, pp. 1-18, Mar. 2008. It describes a technique similar to the use of Frenet Frames, but which avoids the nasty twisting that can occur at curvature inflection points. I’ve not looked into the details yet, but rotation minimizing frames might possibly be worth considering for integration into Functy if they give a better visualisation.
However, unfortunately this technique can’t be easily parallelised since the orientation of each piece depends on the last, meaning that it couldn’t be translated easily into shader code. For Functy I therefore needed a different solution. Luckily for me this isn’t a new problem, and the solution came in the form of Frenet Frames. A Frenet Frame is an orthogonal set of axes that’s defined based on the curvature and torsion of the curve at a particular point. Since it’s (in general) canonically defined at each point on the curve, it can be calculated independently from the other points, by calculating the derivatives and second-derivatives of the curve. More specifically, it requires that the tangent, normal and binormal vectors of the curve be calculated. There’s a decent explanation in the Wikipedia section “Other expressions of the frame”, and there’s also a neat Wolfram Demonstration too. Since these three vectors can be calculated using the derivative of the curve, there’s no need to iterate along the curve, which makes it perfect for calculation using shaders. This is now implemented in Functy, and it seems to work pretty well. On my laptop, which has a decent but not mindblowing graphics card, animating a Frenet curve on the GPU using shader code is considerably faster than using the CPU. The only problem is that this method has a tendency to generate curves with twists in. That is, the axis can make sudden rotations around the direction of the curve. In general this isn’t a problem, but can cause the curve to ‘pinch’ if the resolution of the pieces is too low. Below is a particularly extreme example. Usually it’s not as bad as this, but it’s a shame nonetheless. For the benefit to be had from parallelisation I’m willing to live with it.
{"url":"http://functy.sourceforge.net/?m=201206","timestamp":"2014-04-20T08:26:29Z","content_type":null,"content_length":"20289","record_id":"<urn:uuid:a8ab9c72-9d1d-4755-b303-118c4f3e7bf0>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - math-teach Discussion: math-teach A discussion of teaching mathematics, including conversations about the NCTM Standards. It is not officially sponsored by or affiliated with the NCTM. To subscribe, send email to majordomo@mathforum.org with only the phrase subscribe math-teach in the body of the message. To unsubscribe, send email to majordomo@mathforum.org with only the phrase unsubscribe math-teach in the body of the message.
{"url":"http://mathforum.org/kb/forum.jspa?forumID=206&start=19110","timestamp":"2014-04-16T10:47:41Z","content_type":null,"content_length":"38654","record_id":"<urn:uuid:2eea222b-36ad-44a8-8532-4c94c1f5fea8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
This annotated week in baseball history: July 24-July 30, 2011

On July 30, 2011 Richard will be running a half-marathon. In honor of his run, he looks at the distance travelled by some notable players.

When not writing about baseball—I know, hard to believe I have an existence otherwise—I spend much of my time running around parts of Manhattan and Long Island in training for various road races. This Saturday, I will be doing a half-marathon whose course, in part, will take me around Citi Field. This got me thinking about distances, and baseball distances in particular. (For another take on this question, I highly recommend W.P. Kinsella’s collection of short stories, Go the Distance, which has a really excellent story on this point.)

Unlike some sports—football, notably—baseball is not really a game about distances. Many of them, like the base paths and the space between home plate and the mound, are standardized. And the distances unique to each ballpark, the fences and foul ground, while certainly capable of changing a game, are almost exclusively relevant to the flight of the ball, rather than that of people.

For example, last year Roy Halladay threw 3,568 (pretty effective) regular-season pitches. For the sake of argument, we’ll say that each pitch travelled the full mound distance. That means last year Halladay threw pitches that went a grand total of 215,864 feet. That is equivalent to nearly 41 miles worth of pitches, or more than a full marathon and a half marathon combined. And that’s not even counting his work in the postseason.

CC Sabathia throwing a small part of his 47 miles of pitches in 2009 (Icon/SMI)

(For the record, the distance leader for 2010, postseason inclusive, was Tim Lincecum, who threw more than 45 miles worth of pitches, although that is less than the workload done by CC Sabathia, who pitched over 47 miles in 2009.)

At least in recent memory, the single game high in pitch distance was Edwin Jackson’s 2010 no-hitter when he threw 149 pitches—or roughly a mile and three-quarters. Or, put another way, Jackson needed more than 300 feet of pitches to retire each batter.

Of course, Jackson was allowed to throw all those pitches because he had allowed no hits. In normal circumstances, this would reduce the distance traveled by the batters on the other team. In his no-hitter, Jackson walked eight men, and hit B.J. Upton, which made for a relatively busy day on the basepaths. (Indeed, the victorious Diamondbacks actually had fewer baserunners than the no-hit Rays.)

But while individual game distances are interesting, I am more curious about how far a player would travel in a strong offensive season. The modern record for runs scored is 177 by Babe Ruth during his remarkable 1921 season. Since a trip around the bases is 360 feet, this means Ruth ran—or jogged, anyway, though this was the young Ruth—63,720 feet, which is just more than 12 miles.

But of course, Ruth did reach base several times when he did not end up coming around to score. It is at this point we reach a relatively obvious problem, which probably should have occurred to me sooner—like before I started writing this column and had to figure it out. In any case, counting the runs and then simply adding the distance covered by base hits would lead to some double counting.

Now, there are two ways to go about doing this, one of which would give us an exact answer but which is extremely difficult—bordering on impossible—and the other way, which I will be doing.
The exact way, for the record, would involve reviewing play-by-play (which is limited or missing in many cases for baseball prior to the Second World War) and determining exactly when Ruth scored and how he got on base in those cases. With that possibility ruled out, we can instead come around to the easier way, in which case we will determine the (approximate) minimum distance Ruth traveled in 1921.

We know that Ruth scored 177 runs (at 360 feet per run), and that accounts for the distance covered in his 59 home runs. That leaves us with 113 hits: 36 doubles, nine triples and 68 singles. Ruth also walked 150 times—no surprise there—and was hit by four pitches.

At this point things get a little tricky—very tricky if, like me, you find calculating a 15 percent tip to be strenuous work—but we will assume, for the sake of finding the minimum distance, that Ruth scored on all his triples and doubles in 1921. Since the Babe had a combined 104 extra-base hits that year, this leaves us with 73 trips around the bases yet uncounted. Now we subtract 73 from Ruth's 150 walks (being easier than doing some wacky math when one subtracts it from his 68 singles) and we are left with 149 times when Ruth reached first and went no farther in our count (77 remaining walks, plus his 68 singles and four hit-by-pitches). At 90 feet apiece, added to the 63,720 feet from his runs, that gives him a distance traveled—at the least—of 77,130 feet in 1921 simply on running the bases alone. (You'll notice that I left stolen bases out of this equation. For the record, Ruth stole 17, but was caught 13 times. I was working on the math for this when I realized I'd gone cross-eyed, so I stopped. Apologies.)

That comes to, roughly, 14.6 miles. That seems unimpressive compared to the totals of Halladay or Lincecum, but it is only fair to remember that Ruth himself was doing the distances in this case, rather than the ball.

Finally—because I'm a masochist—I did the same math for Ruth's career total. Ruth traveled—again this is roughly the minimum distance—1,035,000 feet on the bases in his career. That comes to just shy of 200 miles and is greater than the distance (as the crow flies) between Yankee Stadium and Fenway Park.

Of course, these are just a few of the distances one can calculate in baseball, and Ruth's 77,130 feet in 1921 is unlikely to replace his 238 OPS+ as the popular measure of that season's dominance. But it is an interesting way to look at the numbers and give us a new and different sense of a season or player we seemingly knew in every way.
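Not part of Richard's piece, but for anyone who wants to retrace the arithmetic, here is a small sketch that reproduces the two headline figures above, Halladay's roughly 41 miles of pitches and Ruth's 77,130-foot minimum, using only totals quoted in the article plus the standard 60-foot-6-inch pitching distance:

#include <cstdio>

int main()
{
    // Halladay, 2010: 3,568 pitches, each credited with the full 60-foot-6-inch mound distance.
    const double halladay_feet = 3568 * 60.5;    // 215,864 feet
    printf("Halladay: %.0f feet = %.1f miles\n", halladay_feet, halladay_feet / 5280.0);

    // Ruth, 1921, under the article's minimum-distance assumptions.
    const int runs = 177, singles = 68, walks = 150, hbp = 4;
    const int extra_base_hits = 104;             // 59 HR + 36 2B + 9 3B

    int ruth_feet = runs * 360;                  // every run scored is a full 360-foot lap
    // Treat every extra-base hit as having ended in a run, so 177 - 104 = 73 runs are
    // charged against the walks; the remaining walks, the singles and the hit-by-pitches
    // count only as 90-foot trips to first base.
    const int reached_first_only = (walks - (runs - extra_base_hits)) + singles + hbp;   // 149
    ruth_feet += reached_first_only * 90;

    printf("Ruth 1921: %d feet = %.1f miles\n", ruth_feet, ruth_feet / 5280.0);
    return 0;
}

The printout comes to about 40.9 and 14.6 miles, the same figures quoted above.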
{"url":"http://www.hardballtimes.com/this-annotated-week-in-baseball-history-july24-july-30-2011/","timestamp":"2014-04-16T04:47:45Z","content_type":null,"content_length":"47153","record_id":"<urn:uuid:ff383333-030a-418d-b7d0-2930ada5fb3d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
THE SLOPE OF A STRAIGHT LINE

Definition of the slope | "Up" or "down"? | Horizontal and vertical lines | The slope-intercept form | The general form | Parallel and perpendicular lines | 2nd level: The point-slope formula | The two-point formula

IN THE PREVIOUS LESSON, we discussed the equation of a straight line. Sketching the graph of the equation of a line should be a basic skill.

Consider this straight line. The (x, y) coördinates at B have changed from the coördinates at A. By the symbol Δx ("delta x") we mean the change in the x-coördinate. That is, Δx = x₂ − x₁. (As for using the subscripts 1 and 2, see Lesson 32, the section, The distance between any two points.) Similarly, Δy ("delta y") signifies the resulting change in the y-coördinates: Δy = y₂ − y₁. Δx is the horizontal leg of that right triangle; Δy is the vertical leg.

By the slope of a straight line, then, we mean this number:

Slope = Vertical leg / Horizontal leg = Δy/Δx

For example, if the value of y changes by 2 units when the value of x changes by 3, then the slope of that line is 2/3. What does slope 2/3 mean? It indicates the rate at which a change in the value of x produces a change in the value of y. 2 units of y per -- for every -- 3 units of x. For every 3 units that line moves to the right, it will move up 2. That will be true between any two points on that line. Over 6 and up 4, over 15 and up 10. Because a straight line has one and only one slope. (Theorem 8.1 of Precalculus.)

In each line above, the x-coördinate has increased by 1 unit. In the line on the left, however, the value of y has increased much more than in the line on the right. The line on the left has a greater slope than the line on the right. The value of y has changed at a much greater rate. If the x-axis represents time and the y-axis distance, as is the case in many applications, then the rate of change of y with respect to x -- of distance with respect to time -- is called speed or velocity. So many miles per hour, or meters per second.

Up or down?

Which line do we say is sloping "up"? And which is sloping "down"? Since we imagine moving along the x-axis from left to right, we say that the line on the left is sloping up, and the line on the right, down. What is more, a line that slopes up has a positive slope, while a line that slopes down has a negative slope. For, both the x- and y-coördinates of B are greater than the coördinates of A, so that both Δx and Δy are positive. Therefore their quotient, which is the slope, is positive. But while the x-coördinate of D is greater than the x-coördinate of C, so that Δx is positive, the y-coördinate of D is less than the y-coördinate of C, so that Δy, 5 − 8, is negative. Therefore that quotient is negative.

Problem 1. What number is the slope of each line?

Horizontal and vertical lines

What number is the slope of a horizontal line -- that is, a line parallel to the x-axis? And what is the slope of a vertical line?

A horizontal line has slope 0, because even though the value of x changes, the value of y does not. Δy = 0, so Δy/Δx = 0/Δx = 0. (Lesson 5)

For a vertical line, however, the slope is not defined. The slope tells how the y-coördinate changes when the x-coördinate changes. But the x-coördinate does not change -- Δx = 0. A vertical line does not have a slope: Δy/Δx = Δy/0 = No value. (Lesson 5)

Problem 2. a) Which numbered lines have a positive slope? 2 and 4.
b) Which numbered lines have a negative slope? 1 and 3.

c) What slope has the horizontal line 5? 0.

d) What slope has the vertical line 6? It does not have a slope.

Example 1. Calculate the slope of the line that passes through the points (3, 6) and (1, 2).

Solution. To do this problem, here again is the definition of the slope. It is the number

Δy/Δx = Difference of y-coördinates / Difference of x-coördinates

Therefore, the slope of the line passing through (3, 6) and (1, 2) is:

Δy/Δx = (6 − 2)/(3 − 1) = 4/2 = 2/1 = 2.

Note: It does not matter which point we call the first and which the second. But if we calculate Δy starting with (3, 6), then we must calculate Δx also starting with (3, 6). As for the meaning of slope 2: On the straight line that joins those two points, for every 1 unit the value of x changes, the value of y will change by 2 units. That is the rate of change of y with respect to x. 2 for every 1.

Problem 3. Calculate the slope of the line that joins these points.

a) (1, 5) and (4, 17): (17 − 5)/(4 − 1) = 12/3 = 4

b) (−3, 10) and (−2, 7): (10 − 7)/(−3 − (−2)) = 3/(−1) = −3

c) (1, −1) and (−7, −5): (−5 − (−1))/(−7 − 1) = −4/(−8) = ½

d) (2, −9) and (−2, −5): (−9 − (−5))/(2 − (−2)) = −4/4 = −1

The slope-intercept form

This linear form y = ax + b is called the slope-intercept form of the equation of a straight line. Because, as we can prove (Topic 9 of Precalculus): a is the slope of the line, and b is the y-intercept.

Problem 4. What number is the slope of each line, and what is the meaning of each slope?

a) y = 5x − 2. The slope is 5. This means that y increases 5 units for every 1 unit x increases. That is the rate of change of y with respect to x.

b) The slope is −2/3. This means that y decreases 2 units for every 3 units of x.

Problem 5. a) Write the equation of the straight line whose slope is 3 and whose y-intercept is 1. y = 3x + 1

b) Write the equation of the straight line whose slope is −1 and whose y-intercept is −2. y = −x − 2

c) Write the equation of the straight line whose slope is 2 and which passes through the origin. y = 2x. The y-intercept b is 0.

Problem 6. Sketch the graph of y = −2x. This is a straight line of slope −2 -- over 1 and down 2 -- that passes through the origin: b = 0. (Compare Lesson 33, Problem 15.)

The general form

This linear form Ax + By + C = 0, where A, B, C are integers (Lesson 2), is called the general form of the equation of a straight line.

Problem 7. What number is the slope of each line, and what is the meaning of each slope?

a) x + y − 5 = 0. This line is in the general form. It is only when the line is in the slope-intercept form, y = ax + b, that the slope is a. Therefore, on solving for y: y = −x + 5. The slope therefore is −1. This means that the value of y decreases 1 unit for every unit that the value of x increases.

b) 2x − 3y + 6 = 0. −3y = −2x − 6, so y = (2/3)x + 2, on dividing every term by −3. The slope therefore is 2/3. This means that the line goes up 2 for every 3 units it goes over.

c) Ax + By + C = 0. By = −Ax − C, so y = −(A/B)x − C/B, on dividing every term by B. We can view that as a formula for the slope when the equation is in the general form. For example, if the equation is 4x − 5y + 2 = 0, then the slope is −A/B = −4/(−5) = 4/5.

Parallel and perpendicular lines

Straight lines will be parallel if they have the same slope. The following are equations of parallel lines: y = 3x + 1 and y = 3x − 8. They have the same slope 3.
Straight lines will be perpendicular if 1) their slopes have opposite signs -- one positive and one negative, and 2) they are reciprocals of one another. That is: If m is the slope of one line, then a perpendicular line has slope −1/m. To be specific, if a line has slope 4, then every line that is perpendicular to it has slope −¼. (We will prove that below.)

Problem 8. Which of these lines are parallel and which are perpendicular? a) y = 2x + 3 b) y = −2x + 3 c) y = ½x + 3 d) y = 2x − 3
a) and d) are parallel. b) and c) are perpendicular.

Problem 9. If a line has slope 5, then what is the slope of a line that is perpendicular to it? −1/5

Problem 10. If a line has slope −2/3, then what is the slope of a perpendicular line? 3/2

Problem 11. If a line has equation y = 6x − 5, then what is the slope of a perpendicular line? −1/6

Theorem: The slopes of perpendicular lines

If two straight lines are perpendicular to one another, then the product of their slopes is −1. That is: If the slope of one line is m, then the slope of the perpendicular line is −1/m.

Let L₁ be a straight line, and let the perpendicular straight line L₂ cross L₁ at the point A. Let L₁ have slope m₁, and let L₂ have slope m₂. Assume that m₁ is positive. Then m₂, as we will see, must be negative.

Draw a straight line AB of length 1 parallel to the x-axis, and draw BC at right angles to AB equal in length to m₁. Extend CB in a straight line to join L₂ at D. Now, since the straight line L₂ has one slope m₂ (Theorem 8.1 of Precalculus), the length of BD will be |m₂|. For in going from A to D on L₂, we go over 1 and down |m₂|. That is, m₂ is a negative number.

(Same figure.) Angle CAD is a right angle. Therefore angle α is the complement of angle β. But triangle ABD is right-angled, and therefore the angle at D is also the complement of angle β; therefore the angle at D is equal to angle α.

The right triangles ABC, ABD therefore are similar (Topic 5 of Trigonometry), and the sides opposite the equal angles are proportional:

m₁/1 = 1/|m₂|

This implies m₁|m₂| = 1. But m₂ is negative. Therefore, m₁m₂ = −1. Which is what we wanted to prove.
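The following short program is not part of the original lesson; it simply restates the rules above in code, using the numbers from Example 1, Problem 7 b), and Problem 8. The helper names are my own:

#include <cstdio>

// Slope through two points: Delta-y over Delta-x; undefined when x2 == x1 (a vertical line).
double slope_from_points(double x1, double y1, double x2, double y2)
{
    return (y2 - y1) / (x2 - x1);
}

// Slope of the general form Ax + By + C = 0 is -A/B; undefined when B == 0 (a vertical line).
double slope_from_general_form(double A, double B)
{
    return -A / B;
}

int main()
{
    // Example 1: the line through (3, 6) and (1, 2) has slope 2.
    printf("slope through (3,6) and (1,2): %g\n", slope_from_points(3, 6, 1, 2));

    // Problem 7 b): 2x - 3y + 6 = 0 has slope -A/B = -2/(-3) = 2/3.
    printf("slope of 2x - 3y + 6 = 0: %g\n", slope_from_general_form(2, -3));

    // Problem 8: slopes -2 and 1/2 multiply to -1, so lines b) and c) are perpendicular.
    const double m1 = -2.0, m2 = 0.5;
    printf("m1 * m2 = %g\n", m1 * m2);
    return 0;
}

A vertical line shows up as a division by zero in either helper, matching the lesson's point that a vertical line has no slope.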
{"url":"http://www.themathpage.com/alg/slope-of-a-line.htm","timestamp":"2014-04-20T12:33:18Z","content_type":null,"content_length":"40096","record_id":"<urn:uuid:0fb7c097-382a-4c20-bc7b-0b20c41204b4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Crypto Formula Malfunction Hi, probably not the best place for this but maybe someone can help. For one of my programs i’m trying to impliment my own version of the RSA algorithm, the encryption works fine, however the decrption does not, even though it uses basically the same algorithm. The code is as follows: // g++ main.c -o main #include <stdio.h> #include <stdlib.h> #include "math.h" int do_crypto(int M, int e, int N); int main() // TWO PRIMES FOR OUR KEY int p = 17; int q = 11; // THIRD PART OF OUR KEY (PUBLIC) int e = 7; // CALCULATE THE OTHER PART OF THE PUBLIC KEY int N = p * q; // = 187 // THE CHARACTER TO ENCODE AS ASCII int M = 88; //int M = "M"; // ENCRYPT A CHARACTER //int C = int ( M * exp(e) ) % N; int C = do_crypto(M, e, N); // int M, int e, int N) printf("C = %d \n", C); // CALCULATE THE DECRYPT KEY //int d = ( 1 % ( (p-1) * (q-1) ) ) / e; int d = ( ( (p-1) * (q-1) ) / e ); printf("d = %d \n", d); // DECRYPT THE CHARACTER int m = do_crypto(11, 23, 187); // int C, int d, int N) printf("m = %d \n", m); return 0; int do_crypto(int M, int e, int N) int iret = int ( M * exp(e) ) % N; return iret; When decrypting I have put in actual values for the keys etc, and the result should be 88. The actual formulas are: ENCRYPT: C = Me (Mod N) DECRYPT: M = Cd (Mod N) *** Please note the ‘e’ and ‘d’ are supposed to be superscript, e.g. raised to the power of… M * exp(e) That’s really not doing what you expect. Me can be computed with pow(M, e). I did try that first. I’ve re-tried with the following results: C = -145 d = 22 m = -145 If your getting different results please show code? The result of pow(11, 23) is gigantic, it’s even too large to store in a double without loosing precision. On top of that, the primes you’re using are way too small; the encryption can be broken really easily. So what’s the intent of this? :huh: I did wonder whether this was a problem. I tried using unsigned long’s to see if result was slightly different, it wasn’t. I’m currently looking at http://www.efgh.com/software/rsa.htm, this defines it’s own data type from what I can tell, using an unsigned, or unsigned short, as they call it ‘mpuint’. To be honest i’m new to such large numbers, yet aware. In one book I have it mentions <complex> numbers, but not really looked at them yet. For now i’m just messing around trying to write programs from reading ‘The Code Book’ by Simon Singh (this is why the primes are small). The final use is for a simple encryption method for an in-game chat client. Also i’m aware of alternative existing methods for secure communication, but hey I like to try… Cheers for any help! It’s definitely worth it to try to implement RSA for your own experience. But if you want to use encryption for something serious like chatting in a game, it would be far easier to find an existing encryption library. However, under the assumption that you do want to do RSA yourself, you will need a data type capable of storing very large numbers (bigger than 32 bits or even 64 bits). With a little bit of work, you can implement this yourself with a C++ class. You just need to break up the number into 32-bit chunks. It’s as if each chunk is a digit, and the number is being expressed in base 232. Addition, multiplication, and all the usual operations can be implemented on such numbers essentially by the same algorithms that you would use in grade school to work with multi-digit numbers in base 10. Also, be aware that even if you get it working perfectly RSA is rather slow. 
Most systems that use RSA to do data encryption actually only use RSA at the beginning of a conversation, to agree upon a symmetric encryption key. The rest of the conversation is encrypted using a symmetric method (the same key both encodes and decodes the data), which is much faster.
{"url":"http://devmaster.net/posts/10671/crypto-formula-malfunction","timestamp":"2014-04-16T04:27:01Z","content_type":null,"content_length":"21348","record_id":"<urn:uuid:cc49595b-f9aa-401c-aa32-7350f3b8135e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Inharmonicity due to Stiffness for Guitar Strings

Frequencies for a Stiff String

The equation of motion for a stiff string under tension is:

$T\frac{\partial^2 y}{\partial x^2} - YS\kappa^2\frac{\partial^4 y}{\partial x^4} = \rho S\frac{\partial^2 y}{\partial t^2}$

where the first term represents the net force on the string due to the tension T, the second term involves the net force due to the bending moments and shear forces and depends on the elastic modulus (Young's modulus Y), the cross-sectional area S and the radius of gyration κ, and the quantity on the right of the equal sign represents the inertia property (mass per unit length times acceleration) of the string. For a cylindrical string, the radius of gyration equals half the radius, κ = a/2.

At low frequencies (with long wavelengths), the string behaves as if it were completely flexible, and transverse waves on the string travel with a speed $c = \sqrt{T/\rho_\ell}$ associated with normal transverse waves on a flexible string. At high frequencies (with short wavelengths) the string acts more like a stiff bar, and transverse waves show a dispersive effect, traveling with the speed of flexural bending waves $v = \sqrt{\omega\kappa\sqrt{Y/\rho_\ell}}$ which depends on frequency.

If the string were perfectly flexible, then the resonance frequencies of the string would be harmonics, such that the higher frequencies would be exactly integer multiples of the fundamental frequency, $f_n = n f_o$. However, for a stiff string, the frequencies are not exactly integer multiples, and the string exhibits inharmonicity. Fletcher and Rossing^[1] define an inharmonicity constant

$B = \frac{\pi^2 Y S \kappa^2}{T L^2} = \frac{\pi^2 Y (\pi a^2)(a^2/4)}{T L^2}$

to demonstrate the effect that bending stiffness has on the frequencies for the strings of a musical instrument. For a string which is "fixed" at both ends with a pinned boundary condition (also called "simply supported"), the frequency of the n-th partial of the string is

$f_n = n f_o \sqrt{1 + B n^2}$

where $f_o$ is the fundamental frequency of the string. If the string has clamped boundary conditions at both ends, then the n-th partial of the string is given by

$f_n = n f_o \sqrt{1 + B n^2}\left(1 + \frac{2}{\pi}\sqrt{B} + \frac{4}{\pi}B\right)$

In both cases, the frequencies of the string are no longer harmonics.
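As a rough numerical illustration (not part of the original page), the short program below evaluates the inharmonicity constant B and the first few stretched partials from the pinned-end formula above. The string parameters are assumed, round-number values for a plain steel guitar string, chosen only to show the size of the effect:

#include <cmath>
#include <cstdio>

int main()
{
    // Assumed, round-number values for a plain steel guitar string (illustration only):
    const double Y  = 2.0e11;    // Young's modulus of steel, Pa
    const double a  = 0.25e-3;   // string radius, m (0.5 mm diameter)
    const double L  = 0.65;      // vibrating length, m
    const double T  = 70.0;      // tension, N
    const double f0 = 196.0;     // fundamental frequency, Hz

    const double pi     = std::acos(-1.0);
    const double S      = pi * a * a;       // cross-sectional area
    const double kappa2 = (a * a) / 4.0;    // (radius of gyration)^2 = (a/2)^2

    // Inharmonicity constant B = pi^2 * Y * S * kappa^2 / (T * L^2)
    const double B = pi * pi * Y * S * kappa2 / (T * L * L);
    printf("B = %.3e\n", B);

    // Pinned ("simply supported") ends: f_n = n * f0 * sqrt(1 + B * n^2)
    for (int n = 1; n <= 8; ++n) {
        const double fn = n * f0 * std::sqrt(1.0 + B * n * n);
        printf("partial %d: %8.2f Hz   (exact harmonic: %8.2f Hz)\n", n, fn, n * f0);
    }
    return 0;
}

Swapping in the clamped-end correction factor from the second formula is a one-line change, which makes it easy to compare the two boundary conditions.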
{"url":"http://www.acs.psu.edu/drussell/Demos/Stiffness-Inharmonicity/Stiffness-B.html","timestamp":"2014-04-20T20:55:21Z","content_type":null,"content_length":"12626","record_id":"<urn:uuid:2586eefc-5886-41e2-89b0-a5afe11dd60a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
Upper Darby Algebra 2 Tutor ...Before that, I tutored students in high school math as well as elementary math. I especially like to make students realize that math is not the enemy and that it is very useful for daily life. I served as an elementary school tutor during my first two years of college. 10 Subjects: including algebra 2, algebra 1, Latin, SAT math ...Let's Get Going! I'm a friendly guy with a lot of energy. The tutor student relationship is crucial to having a successful learning environment. 14 Subjects: including algebra 2, chemistry, physics, geometry ...In 1985, I earned my Bachelor of Arts degree in English Literature within 3.5 years. I then completed a certification course to become a paralegal. I worked with the same employer in Cherry Hill, NJ for 11 years. 23 Subjects: including algebra 2, reading, writing, geometry ...I have degree in mathematics from Rutgers University. Also, I have experience tutoring and teaching students in pre-algebra and algebra, as well as tutoring in calculus. I have a degree in 16 Subjects: including algebra 2, English, physics, calculus ...Math is a subject that can be a bit difficult for some folks, so I really love the chance to break down barriers and make math accessible for students that are struggling with aspects of math. I believe that I have a unique ability to present and demonstrate various topics in mathematics in a fu... 22 Subjects: including algebra 2, calculus, statistics, geometry
{"url":"http://www.purplemath.com/upper_darby_algebra_2_tutors.php","timestamp":"2014-04-21T10:48:06Z","content_type":null,"content_length":"23834","record_id":"<urn:uuid:a2e2fbda-662e-4f7f-83d0-608df3b93bba>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
University of Illinois Mathematics Colloquium — Special Lecture
Spring 2012

John Francis
Northwestern University

Factorization homology of topological manifolds

Factorization homology, or the topological chiral homology of Lurie, is a homology theory for manifolds conceived as a topological analogue of Beilinson & Drinfeld's algebraic theory of factorization algebras. I'll describe an axiomatic characterization of factorization homology, à la Eilenberg-Steenrod. The excision property of factorization homology allows one to see factorization homology as a simultaneous generalization of singular homology, the cohomology of mapping spaces, and Hochschild homology. Excision for factorization homology also facilitates a short proof of the nonabelian Poincaré duality of Salvatore and Lurie; this proof generalizes to give a nonabelian Poincaré duality for stratified manifolds, joint work with David Ayala & Hiro Tanaka. Finally, I'll outline work in progress with Kevin Costello, expressing quantum invariants of knots and 3-manifolds in terms of factorization homology.

Thursday, January 19, 2012, 245 Altgeld Hall, 4:00 p.m.
{"url":"http://www.math.illinois.edu/Colloquia/12SP/francis_jan19-12.html","timestamp":"2014-04-16T10:10:16Z","content_type":null,"content_length":"14391","record_id":"<urn:uuid:0aa7ed83-1e88-431c-b56d-ec3885aed9e1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Goldstein, Larry - Department of Mathematics, University of Southern California • 6. G. G. Lorentz, Approximation of Functions, 2nd ed., Chelsea, New York, 1986. 7. M. Uchiyama, Korovkin-type theorems for Schwarz maps and operator monotone functions in C- • Zero Biasing in One and Higher Dimensions, and Applications #+# • Total Variation Distance for Poisson Subset Numbers • Exposure strati ed case-cohort designs rnulf Borgan Larry Goldstein y Bryan Langholz z Janice Pogoda x • Functional BKR Inequalities, and their Duals, with Applications • Sequential Analysis, 25: 351363, 2006 Copyright Taylor & Francis Group, LLC • Biostatistics (2001), 0, 0, pp. 122 Printed in Great Britain • The Annals of Probability 2006, Vol. 34, No. 5, 17821806 • A Probabilistic Proof of the LindebergFeller Central Limit Theorem • JOURNAL OF COMPUTATIONALBIOLOGY Volume 1,Number2,4994 • Two Choice Optimal Stopping #+ David Assaf 0 , Larry Goldstein and Ester SamuelCahn 0 • Two Choice Optimal Stopping David Assaf 0 • The Annals of Applied Probability 2010, Vol. 20, No. 2, 696721 • Ratio Prophet Inequalities when the Mortal has Several Choices • Total Variation Distance for Poisson Subset Numbers # + • A Statistical Version of Prophet Inequalities 1 David Assaf 2 , Larry Goldstein 3 and Ester Samuel-Cahn 4 • Information and Asymptotic Eciency of the Case-Cohort Sampling Design in Cox's Regression • Stein's Method and the Zero Bias Transformation with Application to Simple Random Sampling • L 1 Bounds in Normal Approximation Larry Goldstein • Zero Biasing and a Discrete Central Limit Theorem Larry Goldstein # and Aihua Xia + • A Statistical Characterization of Regular Ian Abramson and Larry Goldstein • Ascertainment bias in rate ratio estimation from case-sibling control studies of variable age-at-onset • The Annals of Probability 2010, Vol. 38, No. 4, 16721689 • Size bias, sampling, the waiting time paradox, and infinite divisibility: when is the increment • On Optimal Allocation of a Continuous Resource Using an Iterative Approach and Total Positivity • Efficiency Calculations for the Maximum Partial Likelihood Estimator in Nested-Case Control Sampling • A Probabilistic Proof of the Lindeberg-Feller Central Limit Theorem • Zero Biasing in One and Higher Dimensions, and Applications • Berry Esseen Bounds for Combinatorial Central Limit Theorems and Pattern Occurrences, using Zero and • Normal Approximation for Hierarchical Structures Larry Goldstein • Distributional transformations, orthogonal polynomials, and Stein characterizations • 13 1 Goldstein L., Reinert G. Distributional transformations, orthogonal polynomials, and • LARRY GOLDSTEIN YOSEF RINOTT A Permutation Test For Matching and its • The Annals of Statistics 2005, Vol. 33, No. 
2, 871914 • Information and Asymptotic Efficiency of the Case-Cohort Sampling Design in Cox's Regression • Ascertainment bias in rate ratio estimation from case-sibling control studies of variable age-at-onset • ADVANCES IN APPLIED MATHEMATICS 8,194-207 (1987) Mapping DNA by Stochastic Relaxation*#' • Asymptotically Independent Markov Sampling: a new MCMC scheme for Bayesian Inference • Bernoulli 15(2), 2009, 569597 DOI: 10.3150/08-BEJ162 • Concentration of measure for the number of isolated vertices in the Erdos-Renyi random graph by size bias couplings • Bounds in Normal Approximation Larry Goldstein • August 5, 2006 9:3 WSPC/Trim Size: 9in x 6in for Proceedings assaf-cahn-goldstein-table-2 MAXIMIZING EXPECTED VALUE WITH TWO STAGE • Addendum to: A Statistical Characterization of Regular Simplices • A Statistical Version of Prophet Inequalities1 David Assaf2 • Applications of size biased couplings for concentration of measures Subhankar Ghosh and Larry Goldstein • Normal Approximation for Hierarchical Structures Larry Goldstein • Berry Esseen Bounds for Combinatorial Central Limit Theorems and Pattern Occurrences, using Zero and • Stein's Method and the Zero Bias Transformation with Application to Simple Random Sampling • A Curious Connection Between Branching Processes and Optimal David Assaf, Larry Goldstein and Ester Samuel-Cahn • Clubbed Binomial Approximation for the Lightbulb Process • Stochastic comparisons of stratified sampling techniques for some Monte Carlo estimators • Exposure stratified case-cohort designs Ornulf Borgan * Larry Goldstein y Bryan Langholz z Janice Pogodax • Berry Esseen Bounds for Combinatorial Central Limit Theorems and Pattern Occurrences, using Zero and • On Efficient Estimation of Smooth Functionals L. Goldstein* • A Curious Connection Between Branching Processes and Optimal Stopping • Information and Asymptotic Efficiency of the Case-Cohort Sampling Design in Cox's Regression • Normal Approximation for Hierarchical Structures Larry Goldstein • LARRY GOLDSTEIN -YOSEF RINOTT A Permutation Test For Matching and its • Functional BKR Inequalities, and their Duals, with Applications • Risk Set Sampling in Epidemiologic Cohort Studies Bryan Langholz • *y Two Choice Optimal Stopping • Stein's Method and the Zero Bias Transformation with Application to Simple Random Sampling • |August|5, 2006 9:3 WSPC/Trim Size: 9in x 6in for Proceedings assaf-* *cahn-goldstein-table-2|| • Total Variation Distance for Poisson * y • August 5, 2006 9:3 WSPC/Trim Size: 9in x 6in for Proceedings assafcahngoldsteintable2 MAXIMIZING EXPECTED VALUE WITH TWO STAGE • Local Central Limit Theorems, the High Order Correlations of Rejective • doi: 10.1111/j.1467-9469.2006.00542.x Board of the Foundation of the Scandinavian Journal of Statistics 2007. Published by Blackwell Publishing Ltd, 9600 Garsington • Exposure stratified case-cohort designs rnulf Borgan • Adv. Appl. Prob. 28,1051-1071 (19%) ' Printed in N. 
Ireland • Optimal Two Choice Stopping on an Exponential Larry Goldstein # • A Statistical Characterization of Regular Simplices • Methods for the analysis of sampled cohort data in the Cox proportional hazards model • Ratio Prophet Inequalities when the Mortal has *y • L1 Bounds in Normal Approximation Larry Goldstein • Zero Biasing in One and Higher Dimensions, and *yz • Distributional transformations, orthogonal polynomials, and Stein characterizations • Zero Biasing and a Discrete Central Limit Theorem Larry Goldstein* and Aihua Xiay • LARRY GOLDSTEIN --YOSEF RINOTT A Permutation Test For Matching and its • Addendum to: A Statistical Characterization of Regular Simplices • Ratio Prophet Inequalities when the Mortal has Several Choices y • Local Central Limit Theorems, the High Order Correlations of Rejective • Addendum to: A Statistical Characterization of Regular Simplices • A Statistical Version of Prophet Inequalities1 David Assaf2, Larry Goldstein3 and Ester Samuel-Cahn4 • Approximations to Profile Score Distributions Larry Goldstein1 • Optimal Two Choice Stopping on an Exponential Sequence • Ascertainment bias in rate ratio estimation from case-sibling control studies of variable age-at-onset • A Probabilistic Proof of the Lindeberg-Feller Central Limit Theorem • Multivariate normal approximations by Stein's method and size bias couplings • Functional BKR Inequalities, and their Duals, with Applications • USC School of Policy, Planning, and • December 6, 2010 2:15-3:15 PM • April 1, 2011 3:30 4:30 PM • November 16, 2010 12:00 3:00 pm • October 22, 2010 3:30 4:30 PM • September 23, 2011 3:30 -4:30 PM • October 17, 2011 3:30 -4:30 PM • March 26, 2012 2:15 PM-3:15 PM • March 28, 2011 2:15-3:15 PM • February 28, 2011 2:15-3:15 PM • April 22, 2011 3:30 -4:30 PM • October 29, 2010 3:30 4:30 PM • April 11, 2011 2:15-3:15 PM • October 7, 2011 3:30 -4:30 PM • February 6, 2012 2:15 PM-3:15 PM • November 15, 2010 2:15-3:15 PM
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/00/425.html","timestamp":"2014-04-21T15:40:07Z","content_type":null,"content_length":"23572","record_id":"<urn:uuid:8c1fc669-863f-402f-87f8-50fe391c7dc9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Please Help!!! Join Date Feb 2009 Rep Power This was my assignment for my computer science class, underneath I wrote a program for it but I can't seem to get it to compile, can anyone help :confused:: Write a program to allow the user to calculate the area and perimeter of a square, or the area and circumference of a circle, or the area of a triangle. To do this, the user will enter one of the following characters: S, C, or T. The program should then ask the user for the appropriate information in order to make the calculation, and should display the results of the calculation. See the example program execution shown in class. The program should use dialog boxes. When expecting an S, C, or T, the program should reject other characters with an appropriate message.Get extra points for allowing both the uppercase and lowercase versions of a valid character to work. Name the program ShapesCalc.java. import java.util.Scanner; public class ShapesCalc { public static void main(String[] args) { //Create a Scanner Scanner input = new Scanner(System.in); //Prompt the user to Enter S,C, or T System.out.print("Enter S, C, or T: "); char shape = input.nextChar(); if (shape == S); System.out.print("Enter length of side: "); double length = input.nextDouble(); //Calculate perimeter and area of square perimeter= (4*length); System.out.println("Perimeter is equal to: " + perimeter + "Area is equal to: " + area); else if (shape == C); System.out.print("Enter radius: "); double radius = input.nextDouble(); //Calculate circumference and area of circle circumference= (2*3.14*radius); System.out.println("Circumference is equal to: " + circumference + "Area is equal to: " + area2);} else if (shape == T); System.out.print("Enter length of base: "); double base = input.nextDouble(); System.out.print("Enter height of triange: "); double height = input.nextDouble(); //Calculate area of triangle System.out.println("Area is equal to: " + area3); System.out.println("Incorrect variable please enter S,C, or T only"); System.out.println("Justeena Leonard"); Join Date Sep 2008 Rep Power Join Date Feb 2009 Rep Power it has like 6,000 errors!!! It says: 'else' without 'if' illegal start of type ; expected <identifier> expected class, interface, or enum expected Join Date Sep 2008 Rep Power then i suggest you correct each error line by line. the compiler, most of the time, will tell you what you need to fix, including the exact spot on the errant line. for example: 'else' without 'if' = you are using an 'else' without and 'if'. if you did this deliberately, you should start learning from scratch all over again. if you didn't, then you probably messed up your curly bracers somewhere. matter of fact, while i haven't read your code, i'm sure that most of your errors are due to problems with your bracers. Join Date Feb 2009 Rep Power I have gone through it like by line I don't see the problem ... I am new to java and I really don't get it but i'm just trying to get through this class ... even if I wanted to figure out where I went wrong I wouldn't even know where to start Join Date Sep 2008 Rep Power then remove everything and add in line by line to see what works and what doesn't. Join Date Feb 2009 Rep Power OK I deleted everything and then added everything up to the if (shape == S) and its telling me it can't find the variable S Join Date Feb 2009 Rep Power o and it says there is something wrong with this line char shape = input.nextChar(); its 'S', not S, just S and it assumes its a variable. 
Use single quotes for chars. "; expected" is pretty self explanatory. Another problem, you have many statements per if/else if/else but no braces, and you put a ; right after the test expression for the ifs, that means nothing will happen.. Maybe I'll try to clean it up for you.... Tell me if you want a cool Java logo avatar like mine and I'll make you one. As much as your "Help!!" title annoyed me, I took a few minutes to make it compile. Java Code: import java.util.Scanner; public class ShapesCalc { public static void main(String[] args) { //Create a Scanner Scanner input = new Scanner(System.in); double area; //Prompt the user to Enter S, C, or T System.out.print("Enter S, C, or T: "); char shape = input.next().charAt(0); if (shape == 'S') { System.out.print("Enter length of side: "); double length = input.nextDouble(); //Calculate perimeter and area of square double perimeter= ( 4 * length); System.out.println("Perimeter is equal to: " + perimeter + "Area is equal to: " + area); } else if (shape == 'C') { System.out.print("Enter radius: "); double radius = input.nextDouble(); //Calculate circumference and area of circle double circumference = (2 * 3.14 * radius); area = (Math.PI * (radius * radius)); System.out.println("Circumference is equal to: " + circumference + "Area is equal to: " + area); } else if (shape == 'T') { System.out.print("Enter length of base: "); double base = input.nextDouble(); System.out.print("Enter height of triange: "); double height = input.nextDouble(); //Calculate area of triangle area = (base * height) / 2; System.out.println("Area is equal to: " + area); } else { System.out.println("Incorrect variable please enter S,C, or T only"); Look through it and see what you did wrong. A better way to do this would be with a switch statement. Tell me if you want a cool Java logo avatar like mine and I'll make you one. Join Date Feb 2009 Rep Power You are the most amazing person in the world!!!! Thank you so much!!!! Your welcome.. Here is how it would be done with a switch statement: Java Code: import java.util.Scanner; public class ShapesCalc { public static void main(String[] args) { //Create a Scanner Scanner input = new Scanner(System.in); double area; //Prompt the user to Enter S, C, or T System.out.print("Enter S, C, or T: "); char shape = input.next().charAt(0); switch(shape) { case 's':; case 'S': { System.out.print("Enter length of side: "); double length = input.nextDouble(); //Calculate perimeter and area of square double perimeter= ( 4 * length); System.out.println("Perimeter is equal to: " + perimeter + "Area is equal to: " + area); }; break; case 'c':; case 'C': { System.out.print("Enter radius: "); double radius = input.nextDouble(); //Calculate circumference and area of circle double circumference= (2*3.14*radius); area = (Math.PI * (radius * radius)); System.out.println("Circumference is equal to: " + circumference + "Area is equal to: " + area); }; break; case 't':; case 'T': { System.out.print("Enter length of base: "); double base = input.nextDouble(); System.out.print("Enter height of triange: "); double height = input.nextDouble(); //Calculate area of triangle area = (base * height) / 2; System.out.println("Area is equal to: " + area); }; break; default: { System.out.println("Incorrect variable please enter S,C, or T only"); notice I put cases for lowercase letters that did nothing, but had no break statement, so they fall through to the uppercase case, so entering lowercase letters will work too. 
Another way instead of putting lowercase cases would be to make the char uppercase beforehand. Last edited by MK12; 02-21-2009 at 04:09 PM. Tell me if you want a cool Java logo avatar like mine and I'll make you one.
{"url":"http://www.java-forums.org/new-java/16172-please-help.html","timestamp":"2014-04-16T23:06:26Z","content_type":null,"content_length":"100638","record_id":"<urn:uuid:fd414dc5-eff1-4ce4-a54d-02411d9ffe0c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00199-ip-10-147-4-33.ec2.internal.warc.gz"}