st: Compute mean for groups leaving one member out

From: kokootchke <kokootchke@hotmail.com>
To: statalist <statalist@hsphsun2.harvard.edu>
Subject: st: Compute mean for groups leaving one member out
Date: Fri, 23 Oct 2009 01:38:35 -0400

Dear all:

I have a panel dataset of 40 countries at quarterly frequency from 1990:1 to 2006:4. I would like to compute the average of a variable (called icrg) for all countries in a given time period. My problem is that in doing this computation, I would like to leave one country out of the calculation. For example, in period 1990:1 for Mexico, I would like to compute the average of icrg for ALL other countries in this period EXCLUDING Mexico.

Could you suggest a simple way of doing this?

Thank you very much.
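A common way to get a leave-one-out mean without looping is to compute each period's total and count once, then subtract the row's own value: loo_mean = (total - own) / (count - 1). Below is a minimal sketch in Python/pandas; the column names (country, period, icrg) are assumptions based on the question, and the same subtract-own-value idea carries over directly to Stata's egen total() and egen count().

import pandas as pd

# Toy panel; the real data would have 40 countries x 68 quarters.
df = pd.DataFrame({
    "country": ["MEX", "BRA", "ARG", "MEX", "BRA", "ARG"],
    "period":  ["1990q1"] * 3 + ["1990q2"] * 3,
    "icrg":    [70.0, 65.0, 60.0, 72.0, 66.0, 61.0],
})

# Leave-one-out mean: (group sum - own value) / (group size - 1)
grp = df.groupby("period")["icrg"]
df["icrg_loo_mean"] = (grp.transform("sum") - df["icrg"]) / (grp.transform("count") - 1)
print(df)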
{"url":"http://www.stata.com/statalist/archive/2009-10/msg01064.html","timestamp":"2014-04-16T16:05:35Z","content_type":null,"content_length":"7031","record_id":"<urn:uuid:8bba7a4b-dbc5-4416-a2f0-19d806276474>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a North Easton ACT Tutor

...I know that all children can have success with math and internalize a positive, healthy idea that "I'm getting pretty good at this, and I even find this interesting." This is one of the most important attitudes to develop in young minds that will last through their school years and into college ...
9 Subjects: including ACT Math, geometry, algebra 2, algebra 1

...They lose their confidence, say negative things to themselves, and give up. My approach is to rebuild a student's confidence and enthusiasm for learning so that he/she will be able to carry on after the tutoring is finished. Learning English for ESL/ESOL students is about developing the necessary ...
30 Subjects: including ACT Math, reading, writing, English

...Students often worry about the ACT Science section, but the truth is, they don't need to study any science at all. I work with students on understanding that this section is basically reading comprehension using scientific experiments. The method I teach is similar to what I teach for the reading: ...
26 Subjects: including ACT Math, English, linear algebra, algebra 1

I love math! I'm that geeky math-loving girl that was also a cheerleader, so I pride myself in being smart and fun!! I was an Actuarial Mathematics major at Worcester Polytechnic Institute (WPI), and worked in the actuarial field for about 3.5 years after college. Since then I have been a nanny and a tutor and a cheerleading coach, while also starting a family.
17 Subjects: including ACT Math, calculus, actuarial science, linear algebra

...I have the philosophy that anything can be understood if it is explained correctly. Teachers and professors can get caught up using too much jargon, which can confuse students. I find real-life examples and a crystal clear explanation are crucial for success.
19 Subjects: including ACT Math, Spanish, chemistry, calculus
{"url":"http://www.purplemath.com/north_easton_act_tutors.php","timestamp":"2014-04-17T21:55:31Z","content_type":null,"content_length":"23911","record_id":"<urn:uuid:e9fa1d95-a0aa-4feb-b0e1-b903001960f6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Carrington Rotation Start and Stop Times

Traditionally the start and stop times of Carrington rotations are derived assuming that the solar rotation period as viewed from the Earth is a constant. This assumption is perfectly adequate for most studies. We've had several inquiries asking how we derived these times, so for academic purposes we describe below both the original derivation and the presently-used technique that accounts for the variable length of Carrington rotations.

The following quote (emphasis added) is from the "Carrington and Bartels Calendars" page at Stanford University: Lord Carrington determined the solar rotation rate by watching low-latitude sunspots in the 1850s. He defined a fixed solar coordinate system that rotates in a sidereal frame exactly once every 25.38 days (Carrington, Observations of the Spots on the Sun, 1863, p 221, 244). The synodic rotation rate varies a little during the year because of the eccentricity of the Earth's orbit; the mean synodic value is about 27.2753 days. See the back of an Astronomical Almanac for details.

The rotation of the Sun and the orbital motion of the Earth are in the same direction (both counterclockwise when viewed from north of Earth's orbital plane). As a consequence the apparent (synodic) solar rotation period is a bit longer than the true (sidereal) period. (Hint: it may help to think of the sideREAL period as the 'real' period.) If you think about it you can convince yourself that the sidereal period is equal to the synodic period divided by (1+frac), where frac = 27.2753/365.25 = the fraction of a year the Earth moves in its orbit during one synodic solar rotation. The Earth is farthest from the Sun in July and closest in January. Objects that are farther from the Sun have a smaller angular velocity (a longer 'year') than do closer objects. Hence the synodic period is a bit smaller in July than in January.

Previously we determined the start and stop times of Carrington rotations by assuming a constant period of 27.2753 days. The phase was taken from the Astronomical Almanac, which gives a value for Carrington longitude of 349.03 degrees at 0000 UT on 1 January 1995. One can then derive the Carrington longitude in degrees (call it OLD) as a function of time:

OLD = 349.03 - (360 * X / 27.2753), where X is the number of days since 1 January 1995.

It is understood that OLD is to be taken modulo 360. Note that the Carrington longitude decreases as time increases. If one now compares the values of OLD with the values listed in the Almanac one finds reasonable agreement, with maximum discrepancies of about 4 hours. To get a better estimate of the start and stop times, we find the difference between OLD and the values listed in the Astronomical Almanac, and then fit the difference (ALMANAC - OLD) with a sine-cosine series:

Fit#1 = f + X/g + a*SIN(2*π*X/e) + b*SIN(4*π*X/e) + h*SIN(6*π*X/e) + c*COS(2*π*X/e) + d*COS(4*π*X/e) + i*COS(6*π*X/e)

In Figure 1 the red data points are the values of (ALMANAC - OLD), and the blue line represents Fit#1. The improved estimate is then

NEW = OLD + Fit#1

The values of (NEW - ALMANAC) are plotted in Figure 2. The maximum discrepancy is now about 2 minutes. Notice that the data points between Days ~ 350 and 700 appear disjoint from the rest of the data set. This appears to be caused by a small offset in the Almanac tables for 1996. Each volume of the Almanac presents data for the stated year, for the last day of the previous year and the first day of the next year.
The last day of the previous year is referred to as "January 0", and the first day of the subsequent year is called "December 32". This repetition of data allows a comparison of data from the 1995, 1996, and 1997 almanacs, shown in Table 1. It appears that the longitudes listed for 1996 are systematically very slightly larger than for either 1995 or 1997. The fitting procedure described above can be generalized to allow for a different offset in 1996. We define:

Fit#2 = Fit#1 + j for the year 1996
Fit#2 = Fit#1 for all other years
NEW2 = OLD + Fit#2

Values of (ALMANAC - NEW2) are shown in Figure 3. The residuals are less than 1 minute nearly all of the time. The function NEW2 is then used to generate the Carrington rotation start and stop times in our plots and listings.
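For readers who want to reproduce the constant-period estimate, here is a minimal Python sketch of OLD(X) together with the shape of Fit#1. The coefficients a through i (and the 1996 offset j) are placeholders, since the page does not publish the fitted values:

import math

def old_longitude(x_days):
    # Constant-period Carrington longitude in degrees; X = days since 1995-01-01 00:00 UT.
    return (349.03 - 360.0 * x_days / 27.2753) % 360.0

def fit1(x, a, b, c, d, e, f, g, h, i):
    # Sine-cosine correction series fitted to (ALMANAC - OLD); coefficient
    # values are not listed on the page, so none are supplied here.
    w = 2.0 * math.pi * x / e
    return (f + x / g
            + a * math.sin(w) + b * math.sin(2 * w) + h * math.sin(3 * w)
            + c * math.cos(w) + d * math.cos(2 * w) + i * math.cos(3 * w))

# The improved estimate is NEW = OLD + Fit#1 (plus the 1996 offset j for NEW2).
print(old_longitude(0.0))  # 349.03 at 0000 UT on 1 January 1995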
{"url":"http://umtof.umd.edu/pm/crn/CARRTIME.HTML","timestamp":"2014-04-17T09:41:54Z","content_type":null,"content_length":"5860","record_id":"<urn:uuid:5495dcf8-290f-4884-a08c-58bcad5fa715>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Please go through this as if I am a moron!

Number of seats per plane: 120
Average load factor (percentage of seats filled): 75%
Average full passenger economy fare: $70
Average variable cost per passenger: $30
Fixed operating expenses per month: $1,200,000

What is the break-even point in number of flights (hint: you need to use load factor)? And please one at a time!

Number of seats on a plane - 120. Average load factor - 75%. So in other words, every flight on average has 90 people on it. Average fare per passenger $70, average costs per passenger $30. In other words, we make on average $40 per passenger. So for a flight, we usually have 90 people, and we make $40 per person, so 90 * $40 means $3600 profit per flight. We need to cover $1,200,000 in expenses per month, making $3600 per flight. 1200000 / 3600 = 333.3 flights before you break even (i.e. at the 333rd flight you're still at a loss, by the 334th you've made a small profit).

Moral of the story, the aviation business sucks.
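The same arithmetic as a few lines of Python, with the numbers taken straight from the problem statement:

import math

seats, load_factor = 120, 0.75
fare, var_cost = 70, 30
fixed_monthly = 1_200_000

passengers = seats * load_factor                    # 90 passengers per flight on average
margin_per_flight = passengers * (fare - var_cost)  # 90 * $40 = $3600

breakeven = fixed_monthly / margin_per_flight
print(breakeven, math.ceil(breakeven))              # 333.33..., so the 334th flight turns a profit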
{"url":"http://mathhelpforum.com/algebra/88639-please-go-through-if-i-am-moron.html","timestamp":"2014-04-16T04:34:51Z","content_type":null,"content_length":"27748","record_id":"<urn:uuid:707b1c99-8308-4937-9f30-6bddeb75c851>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponents (C++ programming forum thread, 03-16-2006)

Can someone explain how to write this as an exponential value since i have NO clue at all?

monPayment=(loanAmt*(monIntRate)/(1-(1-monIntRate)   <-- and then right here is the exponent: -noMonths)

How do I do it as an exponent for -noMonths? Something with pow I know... but I cannot for the life of me figure it out!

Reply: pow(x, y) computes x to the y'th power.

Okay, so wait... I would write it like this?

pow( (loanAmt*(monIntRate)/(1-(1-monIntRate), -noMonths)

Is that right?

Reply: Why don't you just calculate the term and its exponent on one line and use that result for the full calculation on another line? If a calculation is too complex for you, break it down. You could calculate the numerator on one line, then you could calculate the term with the exponent on another line, then you could calculate the full denominator on a third line, and then you could calculate the numerator over the denominator on a 4th line.

Reply: monPayment = (loanAmt * monIntRate) / (1 - pow(1 - monIntRate, -noMonths));
(you forgot a ')' on the end there)

Yeah I see that now, but that's because it was like that originally and I just copy-pasted it, so I will not take the blame for this.

Umm.. yeah, the parentheses were my fault. Looking at it again, I forgot one.

Anyway, does that (with an extra parenthesis) work for an exponent? I'm going to try it that way! You guys rock!!

Reply: Make sure you include <cmath> when you're using pow().
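As an aside, the standard annuity (amortization) formula is usually written with (1 + r)^(-n) rather than the (1 - r) that appears in the thread. A minimal sketch of that standard formula, in Python for brevity, with variable names borrowed from the thread:

def monthly_payment(loan_amt, mon_int_rate, no_months):
    # Standard amortization formula: payment = L*r / (1 - (1 + r)**(-n))
    return loan_amt * mon_int_rate / (1.0 - (1.0 + mon_int_rate) ** -no_months)

# e.g. a $10,000 loan at 0.5% per month over 36 months
print(round(monthly_payment(10_000, 0.005, 36), 2))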
{"url":"http://cboard.cprogramming.com/cplusplus-programming/77036-exponents.html","timestamp":"2014-04-16T08:52:26Z","content_type":null,"content_length":"70605","record_id":"<urn:uuid:168e3022-89b9-48bf-a2e6-67cd8af67d3f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Palos Hills SAT Math Tutor

...Developing vocabulary is a vitally important component in developing critical reading. My approach to developing vocabulary is through word study, to help students better identify common suffixes, prefixes and root words. As a teacher I can attest that the instruction of grammar has truly fallen by the wayside in education, from elementary school all the way through high school.
20 Subjects: including SAT math, reading, English, writing

...These are skills I bring to all my tutoring engagements. I have an undergraduate and masters degree in Mechanical Engineering. As a result I have extensive experience in physics, statics, dynamics, heat & mass transfer, thermodynamics and controls, as well as the mathematics that support these ...
17 Subjects: including SAT math, physics, calculus, GRE

...Have a wonderful day. I am quite able with computers, in both hardware and software. I have helped numbers of people with computers, my friends and mostly my parents. Having also worked as a computer consultant for my university, I have a lot of experience helping clients with computer and technical problems that they encounter on campus.
16 Subjects: including SAT math, English, chemistry, writing

...In addition, I also work with test preparation including district-wide tests and will be working with students preparing to improve their scores on NWEA and the new PARCC test which will be replacing the ISAT next academic year. If you want someone with a record of success with established growth...
76 Subjects: including SAT math, English, Spanish, reading

...I have extensive teaching experience as a Stanley Kaplan Test Prep instructor teaching students how to prepare for the GRE. Most students that I have worked with have gone on to do well on the quantitative sections of the GRE. I also have a B.S and M.S in mathematics so I'm uniquely qualified to tutor the quantitative sections of the GRE.
19 Subjects: including SAT math, calculus, geometry, statistics
{"url":"http://www.purplemath.com/Palos_Hills_SAT_Math_tutors.php","timestamp":"2014-04-19T23:49:16Z","content_type":null,"content_length":"24181","record_id":"<urn:uuid:6d5fac07-dab6-400b-b69f-6f2651f4dc4c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Recommended text on principal component analysis?

I need to learn principal component analysis from scratch for my research project, but it seems to be such a staple technique that no single text on it stands out in particular... I found one by Jolliffe and another by Izenman (Modern Multivariate Statistical Techniques, Springer) with a chapter on linear dimensionality reduction and principal component analysis. Is there anything you would recommend? Also bear in mind that I don't have much background in applied statistics... Thanks in advance!
{"url":"http://www.physicsforums.com/showthread.php?t=453434","timestamp":"2014-04-17T04:08:04Z","content_type":null,"content_length":"20140","record_id":"<urn:uuid:84077b76-7cfe-4786-ab85-a24cd3b49df4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
F Programming Books Programmer's Guide to F by Walt Brainerd, Charlie Goldberg, and Jeanne Adams, ISBN 0-9640135-1-7, approximately 400 pages, published by The Fortran Company, US$33. This is a tutorial introduction suitable for a beginner with many complete example programs. It is similar in style to Programmer's Guide to Fortran 90 by the same authors. essential Fortran 90 & 95 by Loren Meissner, ISBN 0-9640135-3-3, published by Unicomp, US$39. This is a textbook designed with emphasis on numerical computing in science and engineering. Here is a list of contents. It is based on Meissner's book Fortran 90. All of the examples conform to both F and Essential Lahey Fortran and there is an appendix comparing the two subset languages. Lots of examples from the book are available in electronic form. They are all in one big file along with test input data. F Tutorial by Robert Moniot of Fordham University is a brief tutorial on F for those especially interested in numerical computations. It is available only in PDF format, purchased from the Fortran Store and delivered by e-mail.
{"url":"http://www.fortran.com/F/books.html","timestamp":"2014-04-18T20:59:26Z","content_type":null,"content_length":"2640","record_id":"<urn:uuid:b9e5f982-838a-485b-bde4-9521da46e259>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by brandy on Saturday, April 4, 2009 at 8:14pm.

What is exponential notation?

• 4th grade math - drwls, Saturday, April 4, 2009 at 8:18pm

An example of exponential notation would be writing 1.25*10^6 instead of 1,250,000. The "6" is written as an exponent (power) of 10. It is an easy and convenient way to write very large or very small numbers, and makes multiplying and dividing them easier.
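A quick way to experiment with exponential (scientific) notation, shown here in Python:

x = 1_250_000
print(f"{x:.2e}")    # 1.25e+06, i.e. 1.25 * 10^6
print(1.25e6 == x)   # True: the language accepts the notation directly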
{"url":"http://www.jiskha.com/display.cgi?id=1238890447","timestamp":"2014-04-19T10:27:14Z","content_type":null,"content_length":"8211","record_id":"<urn:uuid:bf71e516-7efc-4e79-a45f-f4418a4b61a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

The Fundamental Theorem of Algebra

The fundamental theorem of algebra: a polynomial of degree n with complex coefficients has n complex roots, counting multiplicity. Euler, Gauss, Lagrange, Laplace, Weierstrass, Smale, and many other mathematicians worked to prove this theorem, using complex analytic, topological, or algebraic methods.

Let p be such a polynomial. This Demonstration lets you vary the coefficients of p. The roots (or zeros) of p are shown as red points in a contour plot of the absolute value of p in the complex plane. Contour lines of the absolute value of p circle these roots. Mouseover a root to display its approximate value. Click a green-boxed zero to set that variable to zero.
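A quick numerical illustration of the theorem; the cubic z^3 - 1 is an arbitrary example, not one taken from the Demonstration:

import numpy as np

# p(z) = z^3 - 1 has degree 3, so exactly 3 complex roots (counting multiplicity)
coeffs = [1, 0, 0, -1]
roots = np.roots(coeffs)
print(roots)                                       # 1, and the two complex cube roots of unity
print(np.allclose(np.polyval(coeffs, roots), 0))   # every root satisfies p(z) = 0 (numerically)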
{"url":"http://demonstrations.wolfram.com/TheFundamentalTheoremOfAlgebra/","timestamp":"2014-04-21T02:41:37Z","content_type":null,"content_length":"43517","record_id":"<urn:uuid:dc0df111-678b-49ad-8e1c-d03de3409603>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating Proofs from a Decision Procedure

Results 1 - 10 of 11

, 2000 "... not be interpreted as representing the official policies, either expressed or implied, of NSF or the U.S. Government. This thesis describes the design of a meta-logical framework that supports the representation and verification of deductive systems, its implementation as an automated theorem prover, a ..." Cited by 81 (17 self) Add to MetaCart

not be interpreted as representing the official policies, either expressed or implied, of NSF or the U.S. Government. This thesis describes the design of a meta-logical framework that supports the representation and verification of deductive systems, its implementation as an automated theorem prover, and experimental results related to the areas of programming languages, type theory, and logics. Design: The meta-logical framework extends the logical framework LF [HHP93] by a meta-logic M+2. This design is novel and unique since it allows higher-order encodings of deductive systems and induction principles to coexist. On the one hand, higher-order representation techniques lead to concise and direct encodings of programming languages and logic calculi. Inductive definitions on the other hand allow the formalization of properties about deductive systems, such as the proof that an operational semantics preserves types or the proof that a logic is consistent. M+2 is a proof calculus whose proof terms are recursive functions that may be ...

- 16th International Conference on Rewriting Techniques and Applications , 2005 "... www.lsi.upc.es/{~roberto,~oliveras} Abstract. Many applications of congruence closure nowadays require the ability of recovering, among the thousands of input equations, the small subset that caused the equivalence of a given pair of terms. For this purpose, here we introduce an incremental congruen ..." Cited by 32 (2 self) Add to MetaCart

www.lsi.upc.es/{~roberto,~oliveras} Abstract. Many applications of congruence closure nowadays require the ability of recovering, among the thousands of input equations, the small subset that caused the equivalence of a given pair of terms. For this purpose, here we introduce an incremental congruence closure algorithm that has an additional Explain operation. First, two variations of union-find data structures with Explain are introduced. Then, these are applied inside a congruence closure algorithm with Explain, where a k-step proof can be recovered in almost optimal time (quasi-linear in k), without increasing the overall O(n log n) runtime of the fastest known congruence closure algorithms. This non-trivial (ground) equational reasoning result has been quite intensively sought after (see, e.g., [SD99, dMRS04, KS04]), and moreover has important applications to verification.

, 2006 "... Congruence closure algorithms for deduction in ground equational theories are ubiquitous in many (semi-)decision procedures used for verification and automated deduction. In many of these applications one needs an incremental algorithm that is moreover capable of recovering, among the thousands of input equations, the small subset that explains the equivalence of a given pair of terms.
In this paper we present an algorithm satisfying all these requirements. First, building on ideas from abstract congruence closure algorithms [Kapur (1997, RTA), Bachmair & Tiwari (2000, CADE)], we present a very simple and clean incremental congruence closure algorithm and show that it runs in the best known time O(n log n). After that, we introduce a proof-producing union-find data structure that is then used for extending our congruence closure algorithm, without increasing the overall O(n log n) time, in order to produce a k-step explanation for a given equation in almost optimal time (quasi-linear in k). Finally, we show that the previous algorithms can be smoothly extended, while still obtaining the same asymptotic time bounds, in order to support the interpreted function symbols successor and predecessor, which have been shown to be very useful in applications such as microprocessor verification.

- In Proceedings of the International Conference on Automated Deduction , 2000 "... The ability of a theorem prover to generate explicit derivations for the theorems it proves has major benefits for the testing and maintenance of the prover. It also eliminates the need to trust the correctness of the prover at the expense of trusting a much simpler proof checker. However, it is ..." Cited by 17 (0 self) Add to MetaCart

The ability of a theorem prover to generate explicit derivations for the theorems it proves has major benefits for the testing and maintenance of the prover. It also eliminates the need to trust the correctness of the prover at the expense of trusting a much simpler proof checker. However, it is not always obvious how to generate explicit proofs in a theorem prover that uses decision procedures whose operation does not directly model the axiomatization of the underlying theories. In this paper we describe the modifications that are necessary to support proof generation in a congruence-closure decision procedure for equality and in a Simplex-based decision procedure for linear arithmetic. Both of these decision procedures have been integrated using a modified Nelson-Oppen cooperation mechanism in the Touchstone theorem prover, which we use to produce proof-carrying code. Our experience with designing and implementing Touchstone is that proof generation has a relatively low c...

- Proceedings of Conference on Correct System Design, E.-R. Olderog and B. Steffen, (Eds.), LNCS 1710, Springer-Verlag , 1999 "... Translation validation is an alternative to the verification of translators (compilers, code generators). Rather than proving in advance that the compiler always produces a target code which correctly implements the source code (compiler verification), each individual translation (i.e. a run of th ..." Cited by 16 (1 self) Add to MetaCart

Translation validation is an alternative to the verification of translators (compilers, code generators). Rather than proving in advance that the compiler always produces a target code which correctly implements the source code (compiler verification), each individual translation (i.e. a run of the compiler) is followed by a validation phase which verifies that the target code produced on this run correctly implements the submitted source program. In order to be a practical alternative to compiler verification, a key feature of this validation is its full automation.
Since the validation process attempts to "unravel" the transformation effected by the translators, its task becomes increasingly more difficult (and necessary) with the increase of sophistication and variety of the optimization methods employed by the translator. In this paper we address the practicability of translation validation for highly optimizing, industrial code generators from Signal, a widely used synchronous...

- THEOR. COMPUT. SCI , 2000 "... We present a general framework for provably safe mobile code. It relies on a formal definition of a safety policy and explicit evidence for compliance with this policy which is attached to a binary. Concrete realizations of this framework are proof-carrying code (PCC), where the evidence for safety i ..." Cited by 5 (0 self) Add to MetaCart

We present a general framework for provably safe mobile code. It relies on a formal definition of a safety policy and explicit evidence for compliance with this policy which is attached to a binary. Concrete realizations of this framework are proof-carrying code (PCC), where the evidence for safety is a formal proof generated by a certifying compiler, and typed assembly language (TAL), where the evidence for safety is given via type annotations propagated throughout the compilation process in typed intermediate languages. Validity of the evidence is established via a small trusted type checker, either directly on the binary or indirectly on proof representations in a logical framework (LF).

- Pages 221–230 of: Symposium on Logic in Computer Science , 2001 "... We develop a uniform type theory that integrates intensionality, extensionality, and proof irrelevance as judgmental concepts. Any object may be treated intensionally (subject only to β-conversion), extensionally (subject also to βη-conversion), or as irrelevant (equal to any other object at the sam ..." Cited by 5 (3 self) Add to MetaCart

We develop a uniform type theory that integrates intensionality, extensionality, and proof irrelevance as judgmental concepts. Any object may be treated intensionally (subject only to β-conversion), extensionally (subject also to βη-conversion), or as irrelevant (equal to any other object at the same type), depending on where it occurs. Modal restrictions developed in prior work for simple types are generalized and employed to guarantee consistency between these views of objects. Potential applications are in logical frameworks, functional programming, and the foundations of first-order modal logics.

- 2nd International Workshop on Pragmatics of Decision Procedures in Automated Reasoning , 2004 "... Congruence closure algorithms are nowadays central in many modern applications in automated deduction and verification, where it is frequently required to recover the set of merge operations that caused the equivalence of a given pair of terms. For this purpose we study, from the algorithmic point o ..." Cited by 2 (0 self) Add to MetaCart

Congruence closure algorithms are nowadays central in many modern applications in automated deduction and verification, where it is frequently required to recover the set of merge operations that caused the equivalence of a given pair of terms. For this purpose we study, from the algorithmic point of view, the problem of extracting such small proofs.

- IN PROCEEDINGS OF THE DARPA INFORMATION SURVIVABILITY CONFERENCE AND EXPOSITION , 2000 "... We present a general framework for provably safe mobile code.
It relies on a formal definition of a safety policy and explicit evidence for compliance with this policy that is attached to a binary. Concrete realizations of this framework are proof-carrying code (PCC), where the evidence for safety i ..." Cited by 2 (1 self) Add to MetaCart

We present a general framework for provably safe mobile code. It relies on a formal definition of a safety policy and explicit evidence for compliance with this policy that is attached to a binary. Concrete realizations of this framework are proof-carrying code (PCC), where the evidence for safety is a formal proof generated by a certifying compiler, and typed assembly language (TAL), where the evidence for safety is given via type annotations propagated throughout the compilation process in typed intermediate languages. Validity of the evidence is established via a small trusted type checker, either directly on the binary or indirectly on proof representations in a logical framework (LF).

"... this document presents the details of our proposed research project. We organize this presentation into sections, with each section giving an overview of a specific major subproblem, its relationship to the overall research goal, and our plans for addressing it. These major subproblems are as follows: the development of resource-bound and access-control policies and enforcement mechanisms, the design of programming languages for application development, the design and development of certifying compilers, and the use of logical frameworks for efficient proof representation. We conclude the proposal with a brief discussion of our overall research plan and our approach to disseminating our software and research results
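The Explain operation described in these abstracts can be sketched with a union-find that labels each union with the input equation that caused it. The following is a simplified Python illustration of the proof-forest idea, not any of the authors' exact algorithms; in particular, a real implementation keeps the forest shallow and returns only the edges up to the nearest common ancestor:

class ExplainUnionFind:
    def __init__(self):
        self.parent = {}   # proof forest: child -> parent
        self.reason = {}   # child -> label of the equation that merged it

    def _root(self, x):
        self.parent.setdefault(x, None)
        while self.parent[x] is not None:
            x = self.parent[x]
        return x

    def union(self, a, b, reason):
        ra, rb = self._root(a), self._root(b)
        if ra != rb:
            self.parent[ra] = rb   # (a real implementation re-roots the smaller tree)
            self.reason[ra] = reason

    def explain(self, a, b):
        # Equations labeling the paths from a and b up to the shared root
        # (a superset of the minimal proof; NCA-based extraction would trim it).
        def path(x):
            labels = []
            while self.parent.get(x) is not None:
                labels.append(self.reason[x])
                x = self.parent[x]
            return labels
        return path(a) + path(b)

uf = ExplainUnionFind()
uf.union("a", "b", "eq1: a = b")
uf.union("b", "c", "eq2: b = c")
print(uf.explain("a", "c"))   # ['eq1: a = b', 'eq2: b = c']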
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4395","timestamp":"2014-04-17T19:53:53Z","content_type":null,"content_length":"38970","record_id":"<urn:uuid:40a63f96-bac1-4b46-9beb-3af8234cf32c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamics of a higher-order rational difference equation. (English) Zbl 1106.39005

The authors consider the dynamics of a higher-order rational difference equation with non-negative initial conditions $y_{-k}, \dots, y_{-1}, y_0$, where $k \in \mathbb{N}$ and the parameters $p$ and $q$ are non-negative. They study characteristics such as the global stability, the boundedness of positive solutions, and the character of semicycles of the difference equation. Firstly, they show that every solution of the difference equation is bounded from above and from below by positive constants. Next, the results about the character of semicycles are presented. Finally, they discuss the global asymptotic stability and show that, when $k$ is even, the unique positive equilibrium is globally asymptotically stable if and only if $q < 1$, and when $k$ is odd, the unique positive equilibrium is globally asymptotically stable for all values of the parameters $p$ and $q$. The results obtained solve an open problem from the monograph by M. R. S. Kulenovic and G. Ladas [Dynamics of second order rational difference equations with open problems and conjectures. Boca Raton, FL: Chapman & Hall/CRC (2002; Zbl 0981.39011)].

39A11 Stability of difference equations (MSC2000)
39A20 Generalized difference equations
{"url":"http://zbmath.org/?q=an:1106.39005","timestamp":"2014-04-17T13:12:46Z","content_type":null,"content_length":"22961","record_id":"<urn:uuid:7f718713-f969-4e1f-8a7d-5548c884b52c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
Math and the Law

Date: 11/24/98 at 17:14:46
From: Eduardo Alvarez
Subject: Algebra

First of all I would like to congratulate you because of these cool Web pages. My question is the following: How can we use algebra in real life? I'm kind of confused because I don't know how we use algebra in real life. Please respond to me as soon as possible because this is a

Thank you,
Eduardo Alvarez

Date: 11/25/98 at 14:12:31
From: Doctor Terrel
Subject: Re: Algebra

Hi there,

Lots of people need to use algebra in real life - doctors, bankers, even lawyers (believe it!). Look at this e-mail I once saw on the Internet, from Gerald VonKorff, dated October 29, 1998:

I'm a lawyer. I use mathematics all the time. Let me give you a few examples:

1. I am defending the State of Minnesota in an industrial tax appeal. The appraiser on the other side creates a regression analysis of so-called comparable sales. The regression analysis is voodoo econometrics. But if I have no understanding of variance, of statistical significance, etc., how do I challenge his junk science?

2. Several years ago, I defended a County in a major environmental litigation. The Army Corps of Engineers sought to fine my client $250,000 per day, because allegedly the County changed the course and current of a drainage ditch in a way that allegedly illegally increased drainage in violation of Section 504 of the Clean Water Act. Using hydrological equations, we were able to show, indeed to convince the Army Corps, that their mathematics and engineering was wrong, because they failed to properly apply Manning's formula. Result, a saving of millions of dollars to our client.

3. A car is in an accident. An expert for the other side reconstructs an accident using faulty physical principles. If the lawyer does not understand physics, the lawyer is handicapped in challenging the analysis.

Lawyers and their clients constantly use mathematics and statistics to analyze data to persuade. A lawyer who understands mathematics is a better and more persuasive lawyer. I believe that the same can be said for other professions. Engineering of course is obvious. But medicine and related fields too require an understanding of mathematics. Geography, environmental sciences, surveying - all of these fields now use mathematics.

For more on real life uses of math, see the Dr. Math FAQ:

- Doctors Terrel and Sarah, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/52324.html","timestamp":"2014-04-19T09:51:27Z","content_type":null,"content_length":"7611","record_id":"<urn:uuid:c18a42dd-1c7b-4e99-90ab-fbb5cf9169d2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
How the ECDSA algorithm works

To popular demand, I have decided to try and explain how the ECDSA algorithm works. I've been struggling a bit to understand it properly and while I found a lot of documentation about it, I haven't really found any "ECDSA for newbies" anywhere. So I thought it would be good to explain in simple terms how it works so others can learn from my research. I have found some websites that explain the basic principles but nowhere near enough to actually understand it, others that explain things without any basics, making it incomprehensible, and others that go way too deep into the mathematics behind it.

ECDSA stands for "Elliptic Curve Digital Signature Algorithm", it's used to create a digital signature of data (a file for example) in order to allow you to verify its authenticity without compromising its security. Think of it like a real signature, you can recognize someone's signature, but you can't forge it without others knowing.

The ECDSA algorithm is basically all about mathematics.. so I think it's important to start by saying: "hey kids, don't slack off at school, listen to your teachers, that stuff might be useful for you some day!" So the principle is simple, you have a mathematical equation which draws a curve on a graph, and you choose a random point on that curve and consider that your point of origin. Then you generate a random number, this is your private key, you do some magical mathematical equation using that random number and that "point of origin" and you get a second point on the curve, that's your public key.

When you want to sign a file, you will use this private key (the random number) with a hash of the file (a unique number to represent the file) into a magical equation and that will give you your signature. The signature itself is divided into two parts, called R and S. In order to verify that the signature is correct, you only need the public key (that point on the curve that was generated using the private key) and you put that into another magical equation with one part of the signature (S), and if it was signed correctly using the private key, it will give you the other part of the signature (R). So to make it short, a signature consists of two numbers, R and S, and you use a private key to generate R and S, and if a mathematical equation using the public key and S gives you R, then the signature is valid. There is no way to know the private key or to create a signature using only the public key.

Alright, now for the more in-depth understanding, I suggest you take an aspirin right now as this might hurt! Let's start with the basics (which may be boring for people who know about it, but is mandatory for those who don't): ECDSA uses only integer mathematics, there are no floating points (this means possible values are 1, 2, 3, etc.. but not 1.5..), also, the range of the numbers is bound by how many bits are used in the signature (more bits means higher numbers, means more security as it becomes harder to 'guess' the critical numbers used in the equation). As you should know, computers use 'bits' to represent data, a bit is a 'digit' in binary notation (0 and 1) and 8 bits represent one byte. Every time you add one bit, the maximum number that can be represented doubles: with 4 bits you can represent values 0 to 15 (for a total of 16 possible values), with 5 bits, you can represent 32 values, with 6 bits, you can represent 64 values, etc..
one byte (8 bits) can represent 256 values, and 32 bits can represent 4294967296 values (4 Giga).. Usually ECDSA will use 160 bits total, so that makes… well, a very huge number with 49 digits in it… ECDSA is used with a SHA1 cryptographic hash of the message to sign (the file). A hash is simply another mathematical equation that you apply on every byte of data which will give you a number that is unique to your data. Like for example, the sum of the values of all bytes may be considered a very dumb hash function. So if anything changes in the message (the file) then the hash will be completely different. In the case of the SHA1 hash algorithm, it will always be 20 bytes (160 bits). It’s very useful to validate that a file has not been modified or corrupted, you get the 20 bytes hash for a file of any size, and you can easily recalculate that hash to make sure it matches. What ECDSA signs is actually that hash, so if the data changes, the hash changes, and the signature isn’t valid anymore. Now, how does it work? Well Elliptic Curve cryptography is based on an equation of the form : y^2 = (x^3 + a * x + b) mod p First thing you notice is that there is a modulo and that the ‘y‘ is a square. This means that for any x coordinate, you will have two values of y and that the curve is symmetric on the X axis. The modulo is a prime number and makes sure that all the values are within our range of 160 bits and it allows the use of “modular square root” and “modular multiplicative inverse” mathematics which make calculating stuff easier (I think). Since we have a modulo (p) , it means that the possible values of y^2 are between 0 and p-1, which gives us p total possible values. However, since we are dealing with integers, only a smaller subset of those values will be a “perfect square” (the square value of two integers), which gives us N possible points on the curve where N < p (N being the number of perfect squares between 0 and p). Since each x will yield two points (positive and negative values of the square-root of y^2), this means that there are N/2 possible ‘x‘ coordinates that are valid and that give a point on the curve. So this elliptic curve has a finite number of points on it, and it’s all because of the integer calculations and the modulus. Another thing you need to know about Elliptic curves, is the notion of “point addition“. It is defined as adding one point P to another point Q will lead to a point S such that if you draw a line from P to Q, it will intersect the curve on a third point R which is the negative value of S (remember that the curve is symmetric on the X axis). In this case, we define R = -S to represent the symmetrical point of R on the X axis. This is easier to illustrate with an image : So you can see a curve of the form y^2 = x^3 + ax + b (where a = -4 and b = 0), which is symmetric on the X axis, and where P+Q is the symmetrical point through X of the point R which is the third intersection of a line going from P to Q. In the same manner, if you do P + P, it will be the symmetrical point of R which is the intersection of the line that is a tangent to the point P.. And P + P + P is the addition between the resulting point of P+P with the point P since P + P + P can be written as (P+P) + P.. 
This defines the “point multiplication” where k*P is the addition of the point P to itself k times… here are two examples showing this : Here, you can see two elliptic curves, and a point P from which you draw the tangent, it intersects the curve with a third point, and its symmetric point it 2P, then from there, you draw a line from 2P and P and it will intersect the curve, and the symmetrical point is 3P. etc… you can keep doing that for the point multiplication. You can also already guess why you need to take the symmetric point of R when doing the addition, otherwise, multiple additions of the same point will always give the same line and the same three intersections. One particularity of this point multiplication is that if you have a point R = k*P, where you know R and you know P, there is no way to find out what the value of ‘k‘ is. Since there is no point subtraction or point division, you cannot just resolve k = R/P. Also, since you could be doing millions of point additions, you will just end up on another point on the curve, and you’d have no way of knowing “how” you got there. You can’t reverse this operation, and you can’t find the value ‘k‘ which was multiplied with your point P to give you the resulting point R. This thing where you can’t find the multiplicand even when you know the original and destination points is the whole basis of the security behind the ECDSA algorithm, and the principle is called a “ trap door function“. Now that we’ve handled the “basics”, let’s talk about the actual ECDSA signature algorithm. For ECDSA, you first need to know your curve parameters, those are a, b, p, N and G. You already know that ‘a‘ and ‘b‘ are the parameters of the curve function (y^2 = x^3 + ax + b), that ‘p‘ is the prime modulus, and that ‘N‘ is the number of points of the curve, but there is also ‘G‘ that is needed for ECDSA, and it represents a ‘reference point’ or a point of origin if you prefer. Those curve parameters are important and without knowing them, you obviously can’t sign or verify a signature. Yes, verifying a signature isn’t just about knowing the public key, you also need to know the curve parameters for which this public key is derived from. So first of all, you will have a private and a public key.. the private key is a random number (of 20 bytes) that is generated, and the public key is a point on the curve generated from the point multiplication of G with the private key. We set ‘dA‘ as the private key (random number) and ‘Qa‘ as the public key (a point), so we have : Qa = dA * G (where G is the point of reference in the curve So how do you sign a file/message ? First, you need to know that the signature is 40 bytes and is represented by two values of 20 bytes each, the first one is called R and the second one is called S .. so the pair (R, S) together is your ECDSA signature.. now here’s how you can create those two values in order to sign a file.. first you must generate a random value ‘k‘ (of 20 byes), and use point multiplication to calculate the point P=k*G. That point’s x value will represent ‘R‘. Since the point on the curve P is represented by its (x, y) coordinates (each being 20 bytes long), you only need the ‘x‘ value (20 bytes) for the signature, and that value will be called ‘R‘. Now all you need is the ‘S‘ value. To calculate S, you must make a SHA1 hash of the message, this gives you a 20 bytes value that you will consider as a very huge integer number and we’ll call it ‘z‘. 
Now you can calculate S using the equation:

S = k^-1 (z + dA * R) mod N

Note here the k^-1 which is the 'modular multiplicative inverse' of k… it's basically the inverse of k, but since we are dealing with integer numbers, then that's not possible, so it's a number such that (k^-1 * k) mod N is equal to 1 (note that the modulus here is the curve order N, not the prime p). And again, I remind you that k is the random number used to generate R, z is the hash of the message to sign, dA is the private key and R is the x coordinate of k*G (where G is the point of origin of the curve parameters).

Now that you have your signature, you want to verify it, it's also quite simple, and you only need the public key (and curve parameters of course) to do that. You use this equation to calculate a point P:

P = S^-1 * z * G + S^-1 * R * Qa

If the x coordinate of the point P is equal to R, that means that the signature is valid, otherwise it's not. Pretty simple, huh? Now let's see why and how… and this is going to require some mathematics to verify. We have:

P = S^-1 * z * G + S^-1 * R * Qa

but Qa = dA*G, so:

P = S^-1 * z * G + S^-1 * R * dA * G = S^-1 (z + dA * R) * G

But the x coordinate of P must match R, and R is the x coordinate of k * G, which means that:

k*G = S^-1 (z + dA * R) * G

We can simplify by removing G, which gives us:

k = S^-1 (z + dA * R)

By inverting k and S, we get:

S = k^-1 (z + dA * R)

and that is the equation used to generate the signature.. so it matches, and that is the reason why you can verify the signature with it. You can note that you need both 'k' (random number) and 'dA' (the private key) in order to calculate S, but you only need R and Qa (public key) to validate the signature. And since R = k*G and Qa = dA*G and because of the trap door function in the ECDSA point multiplication (explained above), we cannot calculate dA or k from knowing Qa and R, this makes the ECDSA algorithm secure, there is no way of finding the private keys, and there is no way of faking a signature without knowing the private key.

The ECDSA algorithm is used everywhere and has not been cracked and it is a vital part of most of today's security. Now I'll discuss how and why the ECDSA signatures that Sony used in the PS3 were faulty and how it allowed us to gain access to their private key. So you remember the equations needed to generate a signature.. R = k*G and S = k^-1(z + dA*R) mod N.. well this equation's strength is in the fact that you have one equation with two unknowns (k and dA) so there is no way to determine either one of those. However, the security of the algorithm is based on its implementation and it's important to make sure that 'k' is randomly generated and that there is no way that someone can guess, calculate, or use a timing attack or any other type of attack in order to find the random value 'k'.

But Sony made a huge mistake in their implementation, they used the same value for 'k' everywhere, which means that if you have two signatures, both with the same k, then they will both have the same R value, and it means that you can calculate k using two S signatures of two files with hashes z and z' and signatures S and S' respectively:

S - S' = k^-1 (z + dA*R) - k^-1 (z' + dA*R) = k^-1 (z + dA*R - z' - dA*R) = k^-1 (z - z')

So:

k = (z - z') / (S - S')

Once you know k, the equation for S becomes one equation with one unknown and is then easily solved for dA:

dA = (S*k - z) / R

Once you know the private key dA, you can now sign your files and the PS3 will recognize it as an authentic file signed by Sony.
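To make all of this concrete, here is a toy-scale sketch in Python. The curve and numbers (p = 97, G = (3, 6), order n = 5) were chosen by hand purely for illustration; real curves use primes of 160 bits and more, and none of these values are the PS3's. Here n is the order of the base point G (5*G lands on the point at infinity), which plays the role of N in the equations above. It implements the point addition and double-and-add multiplication described earlier, the (R, S) signing and verification equations, and the reused-k private key recovery:

a, b, p = 2, 3, 97       # toy curve y^2 = x^3 + 2x + 3 (mod 97)
G = (3, 6)               # base point: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
n = 5                    # order of G: 5*G = point at infinity (None below)

def inv(x, m):
    return pow(x, -1, m)             # modular multiplicative inverse (Python 3.8+)

def add(P, Q):
    # Elliptic-curve point addition; None stands for the point at infinity.
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                  # P + (-P): vertical line, no third intersection
    if P == Q:                       # doubling: slope of the tangent at P
        s = (3 * P[0] ** 2 + a) * inv(2 * P[1] % p, p) % p
    else:                            # chord: slope of the line through P and Q
        s = (Q[1] - P[1]) * inv((Q[0] - P[0]) % p, p) % p
    x = (s * s - P[0] - Q[0]) % p
    return (x, (s * (P[0] - x) - P[1]) % p)   # mirror the third intersection

def mul(k, P):
    # Double-and-add: k*P by repeated point addition.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

def sign(z, dA, k):
    R = mul(k, G)[0] % n             # x coordinate of k*G, reduced mod the order
    S = inv(k, n) * (z + dA * R) % n
    return (R, S)

def verify(z, sig, Qa):
    R, S = sig
    P = add(mul(inv(S, n) * z % n, G), mul(inv(S, n) * R % n, Qa))
    return P is not None and P[0] % n == R

dA = 3                               # private key (a random number in a real system)
Qa = mul(dA, G)                      # public key
k = 4                                # nonce, chosen so R != 0 on this tiny curve...
z1, z2 = 2, 4                        # ...and reused for two "hashes": Sony's mistake
sig1, sig2 = sign(z1, dA, k), sign(z2, dA, k)
print(verify(z1, sig1, Qa), verify(z2, sig2, Qa))   # True True

# Same k gives the same R, so k = (z1 - z2)/(S1 - S2) and dA = (S1*k - z1)/R, mod n.
(R, S1), (_, S2) = sig1, sig2
k_rec = (z1 - z2) * inv((S1 - S2) % n, n) % n
dA_rec = (S1 * k_rec - z1) * inv(R, n) % n
print(k_rec == k, dA_rec == dA)                     # True True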
This is why it's important to make sure that the random number used for generating the signature is actually "cryptographically random". This is also the reason why it is impossible to have a custom firmware above 3.56, simply because since the 3.56 version, Sony have fixed their ECDSA algorithm implementation and used new keys for which it is impossible to find the private key.. if there was a way to find that key, then the security of every computer, website, system may be compromised since a lot of systems are relying on ECDSA for their security, and it is impossible to crack.

Finally! I hope this makes the whole algorithm clearer to many of you.. I know that this is still very complicated and hard to understand. I usually try to make things easy to understand for non-technical people, but this algorithm is too complex to be able to explain in any simpler terms. After all, that's why I prefer to call it the MFET algorithm (Mathematics For Extra Terrestrials). But if you are a developer or a mathematician or someone interested in learning about this because you want to help or simply gain knowledge, then I'm sure that this contains enough information for you to get started or to at least understand the concept behind this unknown beast called "ECDSA".

That being said, I'd like to thank a few people who helped me understand all of this, one particularly who wishes to remain anonymous, as well as the many wikipedia pages I linked to throughout this article, and Avi Kak thanks to his paper explaining the mathematics behind ECDSA, and from which I have taken those graph images above.

P.s: In this article, I used '20 bytes' in my text to talk about the ECDSA signature because that's what is usually used as it matches the SHA1 hash size of 20 bytes and that's what the PS3 security uses, but the algorithm itself can be used with any size of numbers. There may be other inaccuracies in this article, but like I said, I'm not an expert, I just barely learned all of this in the past.

158 Responses to How the ECDSA algorithm works

1. when are u releasing 4.00 jb im so excited
□ You dickhead. Stop asking that question. If he knew then he would tell us.

2. when are u releasing 4.00 jb im so excited please tell us

4. Nice article very clear and easy to understand, a lot of people don't understand the complexity of crypto algorithms but hopefully this will help. looks like its going to take a hardware hacker to rip out the keys

5. Just wait for the 4.00JB you all pushing nd nagging is not helping, took time for 3.55 so jus wait for 4.00.. Ps I'M EXITED TOO LOL

6. Assassin's Creed 3.55 Fix Please
□ Assassin's Creed RV 3.55 Fix Please

7. It is possible to change a file with dumb data to make a sha-1 collision (2 different files with the same hash) this way you can use one signature to sign another file. Do you know how to do it? I will search here and send you the code Sha1 and md5 are not used anymore because of this finding…
□ It is "possible" but it needs to be bruteforced and it's just as complicated and long as finding the private key directly (means it would take thousands of years on a super computer), so it's not a solution.

8. This Blog is monitored by SONY and Kaz Hirai personally!

9. Really love the effort that you guys are putting into all this. I'm a huge fan. It might be a dumb question here as I don't know much, but what if you try to modify the PS3 firmware files to not verify the signature instead of trying to replicate the algorithm.

10.
Makoman, if you break the sig, all the changes will be made in your pkgs what is much less dangerous than messing with the firmware code. There may be other reasons, but this one is my main

11. Hi,erm i know i should'nt ask you this,but this is kinda urgent for me.how is jb 4.0 progressing?because i am confusing whether to buy ps3 or xbox360,for some game reason,i choose ps3,but seems like it is unjailbreakable,so i dont know whether i should buy ps3 or not.thanks…
□ Choose the Xbox 360. OFW 3.66+ isn't pirate-able, at most Homebrew-able. So if your decision point about buying ps3 or 360 is be able to play some games without buying them. buy a 360. (Better yet, buy a Gaming PC)
☆ i got a lot of games in ps3,so i will buy ps3,and i believe kakaroto will solve the problem sooner or later…

12. KAkAroto Try REusing the Random Keys man

13. Hello Kakarot, his work is exceptional, for some time been following your articles and you're really a fantastic guy, congratulations. Sorry for english I'm Brazilian. I am using a translator. I have a doubt and would like you to help me. Well that's it, I have a PS3 FW 3.72 on this and I know the FW 3:56 onwards there is no release so far, right … I know that you are working hard to create for the CFW to serve the other FW. So my current situation in Stand-Alone if I upgrade to FW4.00 when you can unravel the mysteries of this algorithm (which we know you will) all the time until FW developed by Sony will be liable to customization?

14. I'm sick and tired of people asking me every day "please update the status" or "why didn't you update it in the last 2 hours" or "is the status correct ?" or "what does the letter I mean?" or "Why is that task still at 0%" or "why didn't that task change today?", etc… I thought I'd give you a status page so you can follow SILENTLY the progress, but all it did was flood me even more with people asking me questions all the time about it, so I'm taking it down, you don't deserve to know wtf is happening or where we are in fixing all the issues (not 'you' specifically, but all those who can't keep their mouth shut and need to fucking annoy me every hour). Sorry for the collateral damage. The current status is : IT"S BEING WORKED ON!!!! It will be release when it is ready, and asking me all the time about it IS NOT HELPING. I never answered anyone asking me about the status or when it will be released or all of that, so don't try the "maybe he'll answer me", no I won't, I just might block you instead. – KaKaRoTo
THANK U EVER1 FOR ANNOYING KAKAROTO SO MUCH THAT HE TOCK IT DOWN :'(

15.
15. Thank you for your constant support of the PS3 scene. I hope you are getting results from your hard work. I wish you the best, and we are all looking forward to your 4.0 CFW.
16. Hi, first of all, great explanation. I’m not sure I got everything right: how would the curves look in this case? Since we’re using integers, we only have defined values if x and y are integers. So we get some dots, and the function wouldn’t be continuous. We could calculate the tangent of point P using the derivative, but it’s not very probable to have an intersection. That would rule out more points, so it would be less than N/2 possible x coordinates (say M). Maybe they can be calculated. I also think k<=M. Depending on M, k could be calculated. But likely I got it all wrong…
□ Yeah, the curves would look like a discontinuous set of points; however, don’t forget we are using a 20-byte integer value (49 digits in decimal), so it’s a huge curve; if you zoom out enough, then you wouldn’t notice anymore that it’s discontinuous. Also, if you look at the math behind it, you’ll realize that it will always give you an intersection with another point; Avi Kak’s Lecture 14 explains the stuff pretty well. Also, I realized there was a mistake in the blog post: there are n possible x values, but each x gives 2 points (+/- sqrt), so that means there are 2*n points on the curve, where 2*n = N (the number of points). As for the value of N, it’s really huge… it’s 161 bits (since it would be 2*n where n is 160 bits)
17. They fixed the ECDSA algo in 3.56. Alright, but the applications made for 3.55 are still working. So we could use the old algo (with the fixed random number), calculate the old key, resign using the fixed random number, and the PS3 should accept it just like any old game O.o Or am I missing something?
□ yea u missed the fact that the new games’ eboots are signed with new keys, and to resign the eboot, you need to know the new keys in order to decrypt the eboot and then resign it; of course if u have the new keys it’s pointless to even resign em.
☆ Why do we need to resign them with the new keys? We would only need that to enable piracy, and since piracy is a bad thing we don’t need to resign them. (This blog entry & my reply is not about enabling new games to run on older firmwares!)
○ Yeah, what you missed is that the old games (for firmware 1.0 even) *always* had two signatures, not just one. They were just stupid enough to have a bug in firmware 3.55 that didn’t check for that second signature. We have one of the two keys for signing old games, so we’re still stuck
■ Alright, so that’s like we can fake the sign on the paper but not the seal on the envelope?
■ no.. it’s more like you want to fake a signature on a check, but you can’t fake the signature on the paper because the bank compares it with the original… so the idea is to simply change the original that the bank compares it to, so it thinks it’s good, but it’s in a vault and you don’t have access to the bank’s vault.
■ Forgive my ignorance, but is it possible to calculate the second key starting from the FW 3.55 security flaw? Congratulations on your effort; I’m sure for you it will be more or less like it was for me beating Ninja Gaiden on Master Ninja difficulty: it seemed like it would be impossible, but I beat it… Thank you brother !!!
18. Hey KaKaRoTo, awesome work you’re doing on cracking this thing. Please keep working hard on it, and don’t listen to the haters; they’re just impatient and can’t wait for something that they’re getting for free. I don’t think they know the saying “Don’t bite the hand that feeds you.” Good work dude!
19. I get the gist of that but my head really hurts pahahaha, I sat here for 2hrs tryna get my head around that. Good work man. You’re almost there
20. “if there was a way to find that key, then the security of every computer, website, system may be compromised since a lot of systems are relying on ECDSA for their security”
Not really. Most of the security today is based on RSA, not ECDSA, which are two different beasts.
□ not really, RSA is used for encryption, ECDSA for signatures. TLS supports ECDSA, certificates support ECDSA. Read this for example : http://docs.redhat.com/docs/en-US/Red_Hat_Certificate_System/8.0/html/Deployment_Guide/SSL-TLS_ecc-and-rsa.html And while you may be right that “most” of today’s security is based around RSA, it doesn’t exclude the fact that “a lot” of systems are relying on ECC for security.
☆ Bzz, wrong again. You don’t seem to know that RSA can also sign messages, not just encrypt.
○ Bzz, shutup again. While RSA can be used for signatures, it’s mostly used for encryption; ECC can also be used for encryption, but usually it’s used for signing. So, you lurk around the blog looking for where and when I might have said something that wasn’t 100% true? dude, you really have nothing better to do in your life. Go troll somewhere else, thank you.
■ Sure, I will. Good luck with your future efforts, you will need it.
21. I AM BRAZILIAN AND WOULD LIKE TO LEAVE A COMMENT … USERS MAY APPEAR HERE MAKING DISCOURAGING COMMENTS OR WANTING TO TAKE THE FOCUS AWAY. THIS IS NORMAL AND WE MUST LEARN TO FILTER! … THEY MAY BE EMPLOYEES OF SONY, TRYING TO DESTABILIZE. STRENGTH, COURAGE, AND SOON THE CFW 4.0 …. KAKAROTO WILL BE MAKING HISTORY. YOUR NAME WILL BE REMEMBERED FOR YEARS …. LIKE WILL SMITH IN …. “THE LEGEND”. CONGRATULATIONS AND DO NOT GIVE UP, BECAUSE WHAT ONE MAN MADE, ANOTHER MAN CAN UNDO.
22. Interesting read, although the math part of computer science was the part I hated lol
24. kakaroto, nice job and keep it up. however, given the complexity and robustness of the algo, I wonder, how hard would it be to bruteforce the solution? if you can somehow make this into a project with distributed computing, like folding@home, rosetta and the like, maybe bruteforcing a solution wouldn’t take that long. is that legal? is it feasible? because am betting you would have millions of ps3s munching this away for you.
25. …and how did math get the key needed,
□ he used fw3.15 and linux to get key
☆ no, that was geo. math never had the key; he proved to be a big fat ugly liar
26. Man i wish i will be like you one day
27. really f**k your mind u r big man karkar
28. just a short Q, y not instead of trying to make new cfw/jb with higher firmwares why not recreate a whole ps3 fw with ur own codes etc then encrypt/decrypt game eboots with ur fw? but u cant change the keys/codes on newer eboots, right, so why not find out how the eboot actually works, then make new eboots to work with the game and FW. (maybe u guys thought of this or it’s too hard/impossible, idk, guide me) hemi11p
29. just a short Q. y not instead of trying to make new cfw/jb with higher firmwares, recreate the entire firmware with ur own codes/keys/hashes etc., then encrypt/decrypt the newer game eboots with ur codes/keys/hashes? BUT if we could do that, we would just encrypt/decrypt the games to 3.55, right. so why not learn/find out how the eboots fully work, then recreate the whole eboot to ur fw codes/keys/hashes; that should work? well in my head it sounds very difficult to do, but what about you dude, hemi
30. sorry for double post lol
32. KaKaRoToKS, what are the known variables for the fixed value of K up until now?
33. thx for this brain’s food. Keep up the good work!
34. Contact me at sportspr94[at]gmail.com if you want a tip.
35. Just a couple of questions to see if I’m missing something. Can lines be tangent to a point? I think they can only be tangent to a point on a curve. Unless that is what you meant? Also, since we are dealing with integer numbers, then how can the concept of tangent lines apply? The set of integer numbers is not continuous and is, therefore, not differentiable no matter how you scale it. In the graph for “y^2 = x^3 – 4x + 2”, your point P appears to be , but 2P appears to be graphed as , which is certainly not + . I see similar things for the other graphs. Could you please Thank you for your time.
36. if the private key is a dead end , how about the per_console_root_key_0? metldr and bootldr are decrypted with this key.
37. [...] How the ECDSA algorithm works [...]
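To see concretely why the fixed "random number" discussed throughout this article and its comments was fatal, here is a small self-contained sketch in C++. Every number in it is a made-up teaching value on a tiny textbook curve (y^2 = x^3 + 2x + 2 over F_17, with a base point of order 19), not anything from the PS3; the point is only the algebra: two signatures that share the same k let anyone solve for k, and then for the private key d.

```cpp
#include <iostream>

// Toy curve y^2 = x^3 + 2x + 2 over F_17, base point G = (5, 1) of order 19.
// Classroom numbers only; real ECDSA uses ~160-bit values or larger.
const long p = 17, a = 2, n = 19;

long mod(long x, long m) { return ((x % m) + m) % m; }

long inv(long x, long m) {            // modular inverse by search (toy sizes)
    for (long i = 1; i < m; ++i)
        if (mod(x * i, m) == 1) return i;
    return 0;
}

struct Pt { long x, y; bool inf; };

Pt add(Pt P, Pt Q) {                  // affine elliptic-curve point addition
    if (P.inf) return Q;
    if (Q.inf) return P;
    if (P.x == Q.x && mod(P.y + Q.y, p) == 0) return {0, 0, true};
    long lam = (P.x == Q.x)
        ? mod((3 * P.x * P.x + a) * inv(mod(2 * P.y, p), p), p)   // doubling
        : mod((Q.y - P.y) * inv(mod(Q.x - P.x, p), p), p);        // chord
    long x3 = mod(lam * lam - P.x - Q.x, p);
    return {x3, mod(lam * (P.x - x3) - P.y, p), false};
}

Pt mul(long k, Pt P) {                // double-and-add scalar multiplication
    Pt R = {0, 0, true};
    while (k > 0) {
        if (k & 1) R = add(R, P);
        P = add(P, P);
        k >>= 1;
    }
    return R;
}

int main() {
    Pt G = {5, 1, false};
    long d = 7;                       // private key: what must stay secret
    long k = 5;                       // the "random" k -- reused, PS3-style
    long z1 = 11, z2 = 4;             // hashes of two different messages

    long r  = mod(mul(k, G).x, n);    // same k gives the same r both times
    long s1 = mod(inv(k, n) * (z1 + r * d), n);
    long s2 = mod(inv(k, n) * (z2 + r * d), n);

    // From two signatures sharing r, anyone recovers k, then the key d:
    long k_rec = mod((z1 - z2) * inv(mod(s1 - s2, n), n), n);
    long d_rec = mod((s1 * k_rec - z1) * inv(r, n), n);
    std::cout << "k = " << k_rec << ", d = " << d_rec << "\n";  // k = 5, d = 7
}
```

The recovery lines are just the signing equation s = k^(-1) * (z + r*d) mod n written for the two messages, subtracted to eliminate d, and then solved for k and d.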
{"url":"http://kakaroto.homelinux.net/2012/01/how-the-ecdsa-algorithm-works/","timestamp":"2014-04-21T07:03:35Z","content_type":null,"content_length":"105881","record_id":"<urn:uuid:b3da055a-cc2d-496e-95a7-c729b633c677>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Preference Learning with Extreme Examples In this paper, we consider a general problem of semi-supervised preference learning, in which we assume that we have the information of the extreme cases and some ordered constraints, our goal is to learn the unknown preferences of the other places. Taking the potential housing place selection problem as an example, we have many candidate places together with their associated information (e.g., position, environment), and we know some extreme examples (i.e., several places are perfect for building a house, and several places are the worst that cannot build a house there), and we know some partially ordered constraints (i.e., for two places, which place is better), then how can we judge the preference of one potential place whose preference is unknown beforehand? We propose a Bayesian framework based on Gaussian process to tackle this problem, from which we not only solve for the unknown preferences, but also the hyperparameters contained in our model. Fei Wang, Bin Zhang, Ta-Hsin Li, Wenjun Yin, Jin Dong, Tao Li
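As a rough illustration of how such a framework is typically set up (a generic Gaussian-process preference model; the paper's exact likelihoods and hyperparameter treatment may differ), one can posit a latent utility $f$ over places with
$$f \sim \mathcal{GP}\bigl(0,\ k(\cdot,\cdot)\bigr),$$
anchor the extreme examples with a Gaussian likelihood $y_i = f(x_i) + \epsilon_i$, where $y_i = \pm 1$ for the best and worst places and $\epsilon_i \sim \mathcal{N}(0,\sigma^2)$, and model each ordered constraint "place $x_a$ is better than $x_b$" with a probit likelihood
$$p(x_a \succ x_b \mid f) = \Phi\!\left(\frac{f(x_a) - f(x_b)}{\sqrt{2}\,\sigma}\right).$$
The posterior mean of $f$ at a place with unknown preference then gives its predicted score, and kernel hyperparameters can be learned by maximizing an (approximate) marginal likelihood.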
{"url":"http://ijcai.org/papers09/Abstracts/216.html","timestamp":"2014-04-21T07:43:34Z","content_type":null,"content_length":"1472","record_id":"<urn:uuid:1b8c1744-3cf0-4bc0-8c05-07372cf35501>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
How many in the intersection of two sets
May 12th 2013, 01:07 PM #1 Junior Member Oct 2011
How many in the intersection of two sets
I'm seeking confirmation that the correct answer to this question is $0$, which is one of the choices in the multiple-choice version of the question, and why.
Q: A dance studio has $35$ students. A poll shows that $12$ study tap and $19$ study ballet. What is the minimum number of students who are studying both tap and ballet?
I'm not clear if the question's original wording (which I don't have handy) meant that $n$ students studying tap means "$12$ students who study only tap" or "$12$ students who are studying at least tap and possibly also ballet" (and the same ambiguous wording for ballet). But note that $12+19$ is less than $35$ total students. So are there $4$ other students who might have been taking both ($4$ was not an answer choice)? Also, $12$ students was not an answer choice - as in there are $12$ students taking tap and all are also taking ballet, which has $5$ more students, not taking tap. Please explain the question and answer? Thanks
Re: How many in the intersection of two sets
[quote of the original post]
Unless you can post the exact wording of the original, it is pointless to even try to help. As posted, the numbers are inconsistent, unless there are other qualifiers left out.
For example: if 19 take only ballet and 12 take only tap, then this can be answered. Try to find the exact wording.
Last edited by Plato; May 12th 2013 at 01:47 PM.
Re: How many in the intersection of two sets I see four possibilities: the numbers can refer to students that study (1) those disciplines and possibly something else or (2) those disciplines only, and (a) there are other disciplines besides tap and ballet or (b) there are no other disciplines. (1a) The answer is 0 under the following scenario: 12 people study only tap. 19 people study only ballet and 4 people study salsa. (1b) This variant is impossible. (2a) The answer is 0 under the same scenario as in (1a). (2b) The answer is 4. The question as it is phrased strongly suggests (1a). You defined scenario (1a) to mean that those numbers represent people who are studying that discipline and possibly others. So for (1a) 12 ppl study tap and ballet, 19 ppl study ballet (7 of them study only ballet), and 4 ppl study salsa. Answer 12 (which is a choice - contrary to what I said in the OP). Scenario (2b) seems impossible: tap and ballet are the only disciplines and there are not even 35 students in those two classes. Re: How many in the intersection of two sets The original question's wording is ambiguous. That was my point. And it seems that it must be a typo or something. But since you asked I went and looked for the exact wording: In a dance school with 35 students, a poll shows that 12 are studying tap dance and 19 are studying ballet. What is the minimum number of students in the school who are studying both tap dance and ballet? A. 0 B. 7 C. 9 D. 12 E. 31 The only slightly ambiguous thing I see about this question is whether there are other disciplines studied in the school. There is no ambiguity with respect to studying tap or ballet exclusively. If I say that I am studying computer science, it means just that. It does not mean that I am not studying anything else besides CS. Similarly, if it is said that 12 people are studying tap dance, it means that they are studying tap dance for sure, but possibly other disciplines as well. Concerning other disciplines besides tap and ballet in the school, in the absence of any statement to the contrary it is natural to allow this possibility. Also, as post #3 says and as explained below, variant (1b) is impossible and is not one of the answer options. Therefore, the most natural interpretation is (1a). I see four possibilities: the numbers can refer to students that study (1) those disciplines and possibly something else or (2) those disciplines only, and (a) there are other disciplines besides tap and ballet or (b) there are no other disciplines. (1a) The answer is 0 under the following scenario: 12 people study only tap, 19 people study only ballet and 4 people study salsa. (1b) This variant is impossible. (2a) The answer is 0 under the same scenario as in (1a). (2b) The answer is 4. The question as it is phrased strongly suggests (1a). No, the phrase "possibly others" admits many scenarios. The 12 people who study tap dance don't have to study ballet as well: they may study nothing else, some of them may study tap dance and ballet, some of them may study tap dance and ballet while others tap dance and salsa, etc. However, the minimum number of students who study both tap dance and ballet is zero and is achieved, in particular, by the scenario in the quote above. Under interpretation (2b), 12 people study only tap and 19 people study only ballet. What do the rest 4 study? Since there are no other disciplines besides tap and ballet, they must study both. 
These 4 are not included in the groups of 12 and 19 who study one dance only: this is consistent with interpretation (2). On the other hand, (1b) is indeed impossible, because we cannot say that the remaining 4 people study both dances: otherwise, they would be included in the groups of tap and ballet students according to (1). However, this is just thinking about what might theoretically have been. Any interpretation except (1a) is extremely unlikely.
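For the record, the arithmetic behind answer (A) under interpretation (1a) is the standard bound
$$\min |T \cap B| = \max\bigl(0,\ |T| + |B| - |S|\bigr) = \max(0,\ 12 + 19 - 35) = \max(0,\,-4) = 0,$$
so with $35$ students (and possibly other disciplines available), the tap and ballet groups can be made completely disjoint, and the minimum is $0$.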
{"url":"http://mathhelpforum.com/algebra/218849-how-many-intersection-two-sets.html","timestamp":"2014-04-21T00:15:40Z","content_type":null,"content_length":"61339","record_id":"<urn:uuid:eafc255f-aaaa-45b9-84fe-a392cf2cd366>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Facts Website? February 6, 2013 at 3:53 AM
Hello ladies! So, as you know, I recently withdrew my son from conventional school. I have spent the last week or so testing and reviewing concepts to see exactly what he knows. I've discovered he has not memorized all of his multiplication facts. He is 10, a visual learner, and loves the computer. Do you know of a good, free site to practice math facts, specifically multiplication? Thanks!
February 6, 2013 at 12:58 PM
February 6, 2013 at 3:48 PM
I had my children make their own multiplication chart. I went to the dollar store and bought multiplication facts cards. We go to the library, and I make a worksheet on the ones he does not know. We make up multiplication games like multiplication bingo.
February 6, 2013 at 3:59 PM
Hi. :) Here is a link to a website that I think your son might like.... Also here is a search result for lessons and worksheets....
February 6, 2013 at 5:46 PM
The couple I use are listed! Good luck!
February 6, 2013 at 6:12 PM
Sheppard Software has several cool edu games! It's free online
{"url":"http://mobile.cafemom.com/group/114079/forums/read/18030992/Math_Facts_Website?use_mobile=1","timestamp":"2014-04-16T20:43:51Z","content_type":null,"content_length":"23379","record_id":"<urn:uuid:afb881ab-8895-4130-8ea2-e9295b9bbdd4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics and Chemistry Mathematics and Chemistry, September 1, 2008 - June 30, 2009 IMA Seminar on Mathematics and Chemistry A view of outstanding problems in density functional theory Fermi contact interactions evaluated from hidden relations in the Schrödinger equation Stability of traveling waves in quasi-linear hyperbolic systems with relaxation and diffusion Computational strategy options in tackling a problem of molecular excited state dynamics Fast rotating Bose-Einstein condensates in asymmetric harmonic traps Surface effect on the quantum size energy levels in semiconductor nanocrystals Functionals of the one-particle-reduced density matrix and their relation to the full Schrödinger equation Effective dynamics using conditional expectations Stochastic "interacting particle systems" models for reaction-diffusion systems Non-ergodicity of the Nosé-Hoover dynamics Adaptive methods for efficient sampling. Applications in molecular dynamics Adiabatic switching for degenerate ground states
{"url":"http://ima.umn.edu/2008-2009/seminars/","timestamp":"2014-04-20T06:20:47Z","content_type":null,"content_length":"41058","record_id":"<urn:uuid:362c0cad-0273-4ca5-b761-593934ec982a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
West Boxford Precalculus Tutor Find a West Boxford Precalculus Tutor ...Physical Science is often the first high school level science class students take and the first science class with a strong math emphasis. Topics such as measurement error, graphing data and significant figures are general skills that apply to many other more specific subjects like chemistry, ph... 12 Subjects: including precalculus, chemistry, algebra 2, calculus ...I am certified in MATLAB and in Programming because I've completed large data analysis and instrument control projects in various languages, such as C++, MATLAB, LABVIEW, Pascal and Mathematica. I've learned C++ through B. Stroustrup's book the C++ programming language. 47 Subjects: including precalculus, chemistry, calculus, reading ...Although I do not teach SAT Math per se through this venue, I will ensure that my coaching will also benefit the student in the Math SAT. Calculus is essential in engineering- I know this first hand. I routinely take calculus courses at local colleges to keep up to speed in this level of math. 13 Subjects: including precalculus, calculus, geometry, ASVAB ...I seek to identify the cause and provide strategies to address the underlying reason for difficulty. Having a BS in Speech Pathology and a MS in Human Services Administration, I am well prepared to address educational needs and recommend changes to IEPs or 504 plans, where appropriate. Special Education is not, however my limitation. 45 Subjects: including precalculus, chemistry, English, writing I am a motivated tutor who strives to make learning easy and fun for everyone. My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. 16 Subjects: including precalculus, French, elementary math, algebra 1
{"url":"http://www.purplemath.com/west_boxford_precalculus_tutors.php","timestamp":"2014-04-19T02:38:59Z","content_type":null,"content_length":"24400","record_id":"<urn:uuid:8530d46a-ccc9-44b1-aa9e-0d79c3a9de49>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
limit and supremum
January 17th 2012, 02:13 AM
limit and supremum
Ok, this might be a silly/easy question but I have to make sure. Let's say we have a sequence of positive numbers $\{x_n\}$ such that $x_n<x_{n+1}$ $\forall n$ and $\lim_{n\rightarrow \infty}(x_n/x_{n+1})=1$. Can I then, knowing the above, conclude that $\sup\{x_n/x_{n+1};\ n\in\mathbb{N}\}=1$? (Thinking)
January 17th 2012, 02:32 AM
Re: limit and supremum
[quote of the original post]
If $0<a<b$ then is it true that $\frac{a}{b}<1~?$
January 17th 2012, 04:08 AM
Re: limit and supremum
Yes, of course it's true.
January 17th 2012, 06:51 AM
Re: limit and supremum
Well then each term of $\left\{ {\frac{{a_n }}{{a_{n + 1} }}} \right\}$ is $< 1$. Thus $\sup \left( {\left\{ {\frac{{a_n }}{{a_{n + 1} }}} \right\}} \right) \le 1$. What is the contradiction if we suppose that $\sup \left( {\left\{ {\frac{{a_n }}{{a_{n + 1} }}} \right\}} \right)<1~?$
March 3rd 2012, 05:00 PM
Re: limit and supremum
[quote of the previous post]
$\sup \left( {\left\{ {\frac{{a_n }}{{a_{n + 1} }}} \right\}} \right)<\frac{a_{n}}{a_{n+1}}$ for some $n$
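To spell out the ending of this argument (a standard $\varepsilon$-step, written out for completeness): since every term satisfies $\frac{x_n}{x_{n+1}}<1$, we have $s:=\sup_n \frac{x_n}{x_{n+1}}\le 1$. Now suppose $s<1$ and take $\varepsilon=1-s>0$. Because $\lim_{n\to\infty}\frac{x_n}{x_{n+1}}=1$, there exists $N$ with
$$\frac{x_N}{x_{N+1}}>1-\varepsilon=s,$$
contradicting that $s$ is an upper bound. Hence $s=1$; this is exactly the last post's inequality, turned into the contradiction Plato asked for.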
{"url":"http://mathhelpforum.com/differential-geometry/195462-limit-supremum-print.html","timestamp":"2014-04-17T20:19:35Z","content_type":null,"content_length":"10653","record_id":"<urn:uuid:395cc64c-281f-44d2-a405-55b031ab2c54>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
How to calculate the speed of a stepper motor ?
1. 13th February 2008, 07:01 #1
Stepper motor doubt
in the data sheet of the stepper motor it is specified that step angle is 15
5 rpm @ 200pps
torque is 60 N/cm
Can anyone tell me how to calculate the speed of the motor ?
2. 13th February 2008, 07:14 #2
3. 13th February 2008, 07:29 #3
Re: Stepper motor doubt
The motor seems to have a 100:1 gear, provided a pulse in "200 pps" means a full step. You get 200*(15/360) rps = 200*(15/360)*60 rpm = 500 rpm at the motor shaft, or 5 rpm with the gear.
4. 13th February 2008, 08:03 #4
Stepper motor doubt
Can you please explain to me how you got the gear ratio? And I need to know the delay, as I read speed = stepangle/delay [rad/sec]. How can we determine that it is full step from 200pps? And what is meant by the gear ratio of a stepper motor?
5. 13th February 2008, 08:16 #5
6. 13th February 2008, 08:23 #6
Re: Stepper motor doubt
I don't know if your stepper motor has a gear; you should know better. Without a gear, the data don't make sense. However, if it has a gear, the reduction ratio is normally printed on the case or at least in the data sheet. I have written my calculation; the reduction ratio is 500:5 = 100 in my opinion. Step angle and speed of motors are usually not given in rad and rad/s, respectively.
7. 13th February 2008, 08:54 #7
Re: Stepper motor doubt
this may be helpful for you !
STEP MODES
Stepper motor "step modes" include Full, Half and Microstep. The type of step mode output of any motor is dependent on the design of the driver.
FULL STEP
Standard (hybrid) stepping motors have 200 rotor teeth, or 200 full steps per revolution of the motor shaft. Dividing the 200 steps into the 360º rotation equals a 1.8º full step angle. Normally, full step mode is achieved by energizing both windings while reversing the current alternately. Essentially one digital input from the driver is equivalent to one step.
HALF STEP
Half step simply means that the motor is rotating at 400 steps per revolution. In this mode, one winding is energized and then two windings are energized alternately, causing the rotor to rotate at half the distance, or 0.9º. (The same effect can be achieved by operating in full step mode with a 400 step per revolution motor.) Half stepping is a more practical solution, however, in industrial applications. Although it provides slightly less torque, half step mode reduces the amount of "jumpiness" inherent in running in full step mode.
A 15º step angle, how? If 200 pps, it should be 1.8 = 360/200, so for 1 revolution you need 200 steps, and between them there is some delay, which defines the actual rpm of the motor. This delay is adjusted according to your requirement. For optimum, the delay = 1 ms; for 1 complete revolution, the total delay = 200 ms. So, 1 revolution in 200 ms, therefore 5 revolutions in 1000 ms = 5 rps, and 5*60 = 300 rpm.
thank you !
8. 13th February 2008, 10:06 #8
Re: Stepper motor doubt
Hello manish12,
apart from all the useful information you gave, if a motor is specified to have a 15° step angle, it's most likely not a hybrid motor; there are other types as well. Be sure that I have used a lot of stepper motors with 15 or 7.5° step angle in various designs.
9. 13th February 2008, 15:48 #9
Re: Stepper motor doubt
360/15 = 24-teeth stepper motor with 6/5 wire, I think! But rotor jumping is greater here than 1.8. I always prefer a 4-winding 6/5 wire stepper motor: simple to design the driver circuit compared to bipolar.
10. 14th February 2008, 04:39 #10
Re: Stepper motor doubt
Hi all,
Thanks for your valuable opinions. I got only one sheet as the datasheet for the stepper motor which I selected, from which I got only a little info. Please find the attachment, and please explain whatever you understand from this.
11. 14th February 2008, 06:35 #11
Re: Stepper motor doubt
the motor is exactly as expected, including the 100:1 (or 1:100, as the manufacturer prefers to write) gear reduction ratio that I already calculated. Also voltage and coil resistance are given; a unipolar circuit has already been stated (what manish12 designates a "4 winding 6/5 wire stepper motor"). The only information not explicitly given in the datasheet is the maximum operation frequency. But implicitly, it's clear that with the suggested 200 pps (pulses per second = Hz), nominal torque values can be achieved. With this motor type and unipolar constant voltage drive, you could expect a fast decrease of torque above 200 Hz (maybe 350 or 400 Hz can be achieved with low torque). Also not explicitly stated, but to be expected for a gear motor with effectively no inertia seen by the motor, is to have 200 Hz as the start-stop frequency, not needing speed ramps.
There has apparently been doubt about how to calculate the rotation speed for this motor. However, as the datasheet confirms, the formula is simply given by the 5 rpm @ 200 pps expression. The required step frequency is the intended revolutions per minute multiplied by 40 Hz, usually up to a maximum of 5 rpm or 200 Hz, respectively a minimum step-to-step delay of 5 ms.
12. 14th February 2008, 07:00 #12
Re: Stepper motor doubt
Why was 40 Hz multiplied? Is it standard? How did you reach a delay of 5 ms?
13. 14th February 2008, 08:52 #13
Re: Stepper motor doubt
simply 5 rpm @ 200 Hz = 1 rpm @ 40 Hz, also 1/200 Hz = 5 ms
14. 14th February 2008, 09:02 #14
Re: Stepper motor doubt
Hi Frank,
Thanks a lot for your help. One last question: can this controller, the MM908E626 (stepper motor driver with H-bridge built in), drive this unipolar motor? Can the Vsup of the microcontroller be directly given to the stepper motor supply?
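The whole thread boils down to one formula: output speed = pulse rate * (step angle / 360) * 60 / gear ratio. A quick sketch of that arithmetic (function and variable names are mine, and the 100:1 gear is the ratio inferred above, not a quoted datasheet value):

```cpp
#include <iostream>

// Output shaft speed in rpm for a geared stepper driven at full steps.
double shaft_rpm(double pps, double step_deg, double gear_ratio) {
    return pps * (step_deg / 360.0) * 60.0 / gear_ratio;
}

int main() {
    // 200 pulses/s, 15 deg/step, 100:1 gearbox -> 500 rpm motor, 5 rpm output
    std::cout << shaft_rpm(200.0, 15.0, 100.0) << " rpm\n";   // prints 5
    // Minimum step-to-step delay at 200 Hz: 1000 ms / 200 = 5 ms
    std::cout << 1000.0 / 200.0 << " ms per step\n";
}
```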
{"url":"http://www.edaboard.com/thread117578.html","timestamp":"2014-04-16T22:13:36Z","content_type":null,"content_length":"99778","record_id":"<urn:uuid:84301f70-fa11-4337-a296-3053d068f36e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Computing a^((N-1)/2) mod N

Date: 12/07/98 at 19:30:32
From: Dana Perriello
Subject: How is a^((N-1)/2) mod N done?

Proth's theorem states that N (which is k*2^n+1) is prime if there is an integer a where a^((N-1)/2) = -1 (mod N). I want to write a program using that formula. Is there a shortcut for doing a^((N-1)/2) mod N?

Date: 12/08/98 at 20:36:01
From: Doctor Kate
Subject: Re: How is a^((N-1)/2) mod N done?

I'm glad to see you're exploring so much. There are a few shortcuts for trying to do large powers in mods. The first uses the nice property that

   if a = b(mod n) and c = d(mod n), then ac = bd(mod n).

You can work this out in the integers by using the definitions: a = b + nk and c = d + nl for some integers we'll call k and l. So then

   ac = (b + nk)(d + nl)
      = bd + nkd + nlb + n^2kl
      = bd + n(kd + lb + nkl)

which means that ac = bd(mod n), since kd + lb + nkl is just some integer.

For the rest of my letter, I'm going to write "reduced form of a mod n" to mean the remainder of a when divided by n. We call it "reduced" because it's pretty small.

Now what we know, in essence, is that we can get the same answer by reducing things before or after multiplying them together, when we're working in some mod n. Since multiplying smaller things is easier, reducing before we multiply will save us work (and stop the computer from choking on huge numbers).

So how can we use this for a shortcut? Well, if the exponent is really large, we can "split it up" because

   a^(p+q) = (a^p)(a^q)

So if we can figure out a^p and a^q, we can reduce them before we multiply them together to get a^(p+q). So we never have to deal with a number as big as a^(p+q) at all.

So you might suggest then that to find a^m, you could simply multiply a by itself, reduce, then multiply again by a, then reduce, and so on. However, this is still a lot of work. To save work, there's a really clever trick:

Suppose we want to compute a^m, where m is some power of 2. Say a^(2^k), for some integer k. So to begin, we multiply a by a, and reduce. We now know the reduced form of a^2. So instead of multiplying by a and reducing twice more, why don't we just multiply by a^2 instead? This will save a step. So then we find (a^2)*(a^2). When we reduce this, we have the reduced form of a^4, so we can save three steps. This is a really useful trick.

So you'll have a nice reduced form of a^(2^k) in k steps (try it!) instead of 2^k steps. That's much quicker!

But what if m isn't a power of 2? Well, then we can try to write it as a sum of powers of 2. For instance, suppose you have a^21 (mod n). Then,

   21 = 16 + 4 + 1

So we can write

   a^(21) = a^(16+4+1) = a^16 * a^4 * a

So we need only figure out reduced forms for a^16, a^4 and a (well, we already have one for a). It will take four steps to find a^16, and we'll find a^4 on the way.

So let's do a concrete example. Take 3^21 (mod 7).

We find 3^2: 3^2 = 9 = 2(mod 7), so 2 is the reduced form of 3^2.
We find 3^4: 3^4 = (3^2)^2 = 2^2 = 4(mod 7), so 4 is the reduced form of 3^4.
We find 3^8: 3^8 = (3^4)^2 = 4^2 = 16 = 2(mod 7), so 2 is the reduced form of 3^8.
We find 3^16: 3^16 = (3^8)^2 = 2^2 = 4(mod 7), so 4 is the reduced form of 3^16.

So now we have

   3^21 = 3^16 * 3^4 * 3 = 4 * 4 * 3 = 48 = 6 (mod 7).

It's so easy now that we can do it by hand, without the computer! This is a bit tricky to program on a computer, but it is the most useful shortcut we have available for this sort of thing.

Don't hesitate to write us again with questions, and good luck!
- Doctor Kate, The Math Forum Date: 03/23/2001 at 02:46:52 From: Mike Li Subject: The mistake Dr. Kate made about ac=bd(mod n) Hi - A few days ago I asked a question about modulus, but then I was lucky enough to find Dr. Kate's post in the archive section. It was a great explanation, it helped me to solve the question I asked. However, I believe there was a mistake in the reply: "a = b(mod n) and c = d(mod n), then ac = bd(mod n)". This is not right. Dr. Kate was right about: ac = (b + nk)(d + nl) = bd + nkd + nlb + n^2kl = bd + n(kd + lb + nkl) But then (kd + lb + nkl) might not be the appropriate value to give us the right remainder; in other words, ac still contains extra multiples of n. To get rid of them, we use ac(mod n); therefore the right equation should be: a = b(mod n) and c = d(mod n), then ac(mod n)= bd(mod n) which can be also expressed as a(mod n)^e(mod n)=a^e(mod n), the same equation as in my previous question. This site is great - I love it! Date: 03/23/2001 at 09:08:13 From: Doctor Peterson Subject: Re: The mistake Dr. Kate made about ac=bd(mod n) Hi, Mike. It appears you have a slight misunderstanding about the meaning of (mod n) as used here - an error that is probably due to the misuse of the term "mod" in the computer programming world, since I see that your previous question dealt with programming. In math, (mod n) is not an operator or modifier of a number (giving its remainder), but a modifier of the whole equation, or rather of the "equals" (congruence) symbol, which tells us in what sense two numbers are "equal." When we write a = b (mod n) (properly using the triple "=" rather than the ordinary "two lines" symbol we have to use here), it means that a and b are "congruent modulo n" - that is, that they differ by a multiple of n. Nothing is said about the size of a or b. (It's worth noting that "modulo n" is actually Latin for "with respect to the modulus, or divisor, n." So "mod n" is really an adverbial phrase modifying the verb "is congruent." And, contrary to common usage, the "modulus" here is n, not a.) In computer languages, "mod" (or a symbol such as %) is used as a remainder operator, so that "b mod n" means "the remainder when you divide b by n." Since this function must give a single number as its value, this is taken to be the principal value of the remainder, 0 <= b mod n < n. (There are conflicting opinions as to what it should mean when b and/or n are negative, sometimes leading to a need for two different operators or functions, MOD and REM; but I won't go into that issue.) In that case, if I said a = b mod n I would be making a statement about the size of a (that it is less than n), not only about the relation between a and b. In these terms, if I wanted to express my congruence above, I could say a mod n = b mod n since two numbers are congruent if and only if they have the same remainder. That appears to be what you are saying; but what you mean is exactly the same as what Dr. Kate said, using proper mathematical Here are some pages I found (by doing a search for "congruent remainder modulo") that may help clarify some of these ideas: Congruences (from Beachy/Blair, Abstract Algebra) Introduction to Modular Arithmetic - Trailpost 3 Congruences - Number Theory and Cryptography (Booth) (download PDF file) You can find a lot more, and probably better, discussions than these if you look around; these are just the first few I found. - Doctor Peterson, The Math Forum
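For anyone who wants Dr. Kate's trick as working code, here is a minimal sketch (the function name is mine, not from the letters). It squares and reduces at every step, so no intermediate value ever exceeds n^2:

```cpp
#include <iostream>

// Computes a^m mod n by square-and-multiply, reducing after every product.
long modpow(long a, long m, long n) {
    long result = 1;
    a %= n;
    while (m > 0) {
        if (m & 1)                    // this power of 2 appears in m
            result = (result * a) % n;
        a = (a * a) % n;              // a, a^2, a^4, a^8, ... reduced mod n
        m >>= 1;                      // move to the next binary digit of m
    }
    return result;
}

int main() {
    std::cout << modpow(3, 21, 7) << "\n";   // prints 6, matching the letter
}
```

For Proth-sized N (hundreds of digits) you would swap `long` for a big-integer type, but the loop itself is unchanged.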
{"url":"http://mathforum.org/library/drmath/view/55889.html","timestamp":"2014-04-16T21:58:29Z","content_type":null,"content_length":"12673","record_id":"<urn:uuid:147a6257-1780-4a5b-9969-30005dd69ad4>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Permanent magnet strength For the distances you mention, a reasonable approximation would be to consider the elliptical magnet to be a dipole m, and the face of the cylindrical magnet to be like a uniformly charged disk. A formula for the force would be [tex]F=\frac{2\pi R^2 Mm}{(d^2+R^2)^{3/2}}[/tex], where R is the radius (1 cm} of the cylindrical magnet, M is its magnetization, and d is the distance (1.5+1.1/2) from the face of the cylinder to the middle of the elliptical magnet. This is all in Gaussian-cgs units. You could measure M by the force to separate two identical cylindrical magnets given in post #2. You could measure the magnetic moment m by the torque in a known B field (in by torque=m B cos\theta. This approximation should be reasonable until you get too close together or too far apart, when more complicated formulas would be needed.
{"url":"http://www.physicsforums.com/showthread.php?p=3589239","timestamp":"2014-04-18T13:57:45Z","content_type":null,"content_length":"62442","record_id":"<urn:uuid:aae9549d-4bdc-47ab-bbb0-a329679c2d01>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
The Control Handbook Results 1 - 10 of 44 - AUTONOMOUS ROBOTS , 1998 "... Off-road autonomous navigation is one of the most difficult automation challenges from the point of view of constraints on mobility, speed of motion, lack of environmental structure, density of hazards, and typical lack of prior information. This paper describes an autonomous navigation software sys ..." Cited by 36 (11 self) Add to MetaCart Off-road autonomous navigation is one of the most difficult automation challenges from the point of view of constraints on mobility, speed of motion, lack of environmental structure, density of hazards, and typical lack of prior information. This paper describes an autonomous navigation software system for outdoor vehicles which includes perception, mapping, obstacle detection and avoidance, and goal seeking. It has been used on sev- eral vehicle testbeds including autonomous HMMWV's and planetary rover prototypes. To date, it has achieved speeds of 15 km/hr and excursions of 15 km. We introduce algorithms for optimal processing and computational stabilization of range imagery for terrain mapping purposes. We formulate the problem of trajectory generation as one of predictive control searching trajectories expressed in command space. We also formulate the problem of goal arbitration in local autonomous mobility as an optimal control problem. We emphasize the modeling of vehicles in ... - Journal of Machine Learning Research "... Lyapunov design methods are used widely in control engineering to design controllers that achieve qualitative objectives, such as stabilizing a system or maintaining a system's state in a desired operating range. We propose a method for constructing safe, reliable reinforcement learning agents ba ..." Cited by 16 (2 self) Add to MetaCart Lyapunov design methods are used widely in control engineering to design controllers that achieve qualitative objectives, such as stabilizing a system or maintaining a system's state in a desired operating range. We propose a method for constructing safe, reliable reinforcement learning agents based on Lyapunov design principles. In our approach, an agent learns to control a system by switching among a number of given, base-level controllers. These controllers are designed using Lyapunov domain knowledge so that any switching policy is safe and enjoys basic performance guarantees. Our approach thus ensures qualitatively satisfactory agent behavior for virtually any reinforcement learning algorithm and at all times, including while the agent is learning and taking exploratory actions. We demonstrate the process of designing safe agents for four dierent control problems. In simulation experiments, we nd that our theoretically motivated designs also enjoy a number of practical benets, including reasonable performance initially and throughout learning, and accelerated learning. Keywords: Reinforcement Learning, Lyapunov Functions, Safety, Stability 1. - Philosophical Transactions of the Royal Society of London A , 2003 "... Attempts to formulate realistic... In this paper I shall describe some of the aspects of these biological models that are likely to be useful for building robot control systems. In particular, I shall consider the evolution of appropriate innate starting points for learning/adaptation, patterns of l ..." Cited by 12 (10 self) Add to MetaCart Attempts to formulate realistic... 
In this paper I shall describe some of the aspects of these biological models that are likely to be useful for building robot control systems. In particular, I shall consider the evolution of appropriate innate starting points for learning/adaptation, patterns of learning rates that vary across different system components, learning rates that vary during the system's lifetime, and the relevance of individual differences across the evolved populations , 2007 "... We discuss a parallel library of efficient algorithms for the solution of linear-quadratic optimal control problems involving large-scale systems with state-space dimension up to O(10 4). We survey the numerical algorithms underlying the implementation of the chosen optimal control methods. The appr ..." Cited by 11 (10 self) Add to MetaCart We discuss a parallel library of efficient algorithms for the solution of linear-quadratic optimal control problems involving large-scale systems with state-space dimension up to O(10 4). We survey the numerical algorithms underlying the implementation of the chosen optimal control methods. The approaches considered here are based on invariant and deflating subspace techniques, and avoid the explicit solution of the associated algebraic Riccati equations in case of possible ill-conditioning. Still, our algorithms can also optionally compute the Riccati solution. The major computational task of finding spectral projectors onto the required invariant or deflating subspaces is implemented using iterative schemes for the sign and disk functions. Experimental results report the numerical accuracy and the parallel performance of our approach on a cluster of Intel Itanium-2 processors. - Dimension Reduction of Large-Scale Systems , 2005 "... We discuss the efficient implementation of model reduction methods such as modal truncation, balanced truncation, and other balancing-related truncation techniques, employing the idea of spectral projection. Mostly, we will be concerned with the sign function method which serves as the major computa ..." Cited by 9 (6 self) Add to MetaCart We discuss the efficient implementation of model reduction methods such as modal truncation, balanced truncation, and other balancing-related truncation techniques, employing the idea of spectral projection. Mostly, we will be concerned with the sign function method which serves as the major computational tool of most of the discussed algorithms for computing reduced-order models. Implementations for large-scale problems based on parallelization or formatted arithmetic will also be discussed. This chapter can also serve as a tutorial on Gramian-based model reduction using spectral projection methods. 1 , 1998 "... We consider uncertain linear systems where the uncertainties, in addition to being bounded, also satisfy constraints on their phase. In this context, we define the "phase-sensitive structured singular value" (PS-SSV) of a matrix, and show that sufficient (and sometimes necessary) conditions for stab ..." Cited by 6 (2 self) Add to MetaCart We consider uncertain linear systems where the uncertainties, in addition to being bounded, also satisfy constraints on their phase. In this context, we define the "phase-sensitive structured singular value" (PS-SSV) of a matrix, and show that sufficient (and sometimes necessary) conditions for stability of such uncertain linear systems can be rewritten as conditions involving PS-SSV. We then derive upper bounds for PS-SSV, computable via convex optimization. 
We extend these results to the case where the uncertainties are structured (diagonal or block-diagonal, for instance). 1 Introduction A popular paradigm for modeling control systems with uncertainties is illustrated in Fig. 1. Here P (s) is the transfer function of a stable linear system, and \Delta is a stable operator that represents the "uncertainties" that arise from various sources such as modeling errors, neglected or unmodeled dynamics or parameters, etc. Such control system models have found wide acceptance in robust con... - In Proc. of CP-2001 workshop on On-Line combinatorial problem solving and Constraint Programming, Paphos , 2001 "... The design of a problem solver for a particular problem depends on the problem type, the system resources, and the application requirements, as well as the specific problem instance. The difficulty in matching a solver to a problem can be ameliorated through the use of online adaptive control of sol ..." Cited by 5 (1 self) Add to MetaCart The design of a problem solver for a particular problem depends on the problem type, the system resources, and the application requirements, as well as the specific problem instance. The difficulty in matching a solver to a problem can be ameliorated through the use of online adaptive control of solving. In this approach, the solver or problem representation selection and parameters are defined appropriately to the problem structure, environment models, and dynamic performance information, and the rules or model underlying this decision are adapted dynamically. This paper presents a general framework for the adaptive control of solving and discusses the relationship of this framework both to adaptive techniques in control theory and to the existing adaptive solving literature. Experimental examples are presented to illustrate the possible uses of solver control. 1 , 2007 "... Model reduction is a common theme within the simulation, control and optimization of complex dynamical systems. For instance, in control problems for partial differential equations, the associated large-scale systems have to be solved very often. To attack these problems in reasonable time it is abs ..." Cited by 4 (4 self) Add to MetaCart Model reduction is a common theme within the simulation, control and optimization of complex dynamical systems. For instance, in control problems for partial differential equations, the associated large-scale systems have to be solved very often. To attack these problems in reasonable time it is absolutely necessary to reduce the dimension of the underlying system. We focus on model reduction by balanced truncation where a system theoretical background provides some desirable properties of the reduced-order system. The major computational task in balanced truncation is the solution of large-scale Lyapunov equations, thus the method is of limited use for really large-scale applications. We develop an effective implementation of balancing-related model reduction methods in exploiting the structure of the underlying problem. This is done by a data-sparse approximation of the large-scale state matrix A using the hierarchical matrix format. Furthermore, we integrate - In Proceedings of AAAI Spring Symposium on Intelligent Embedded and Distributed Systems , 2002 "... The remarkable increase in computing power together with a similar increase in sensor and actuator capabilities now under way is enabling a significant change in how systems can sense and manipulate their environment. 
These changes require control algorithms capable of operating a multitude of inter ..." Cited by 4 (0 self) Add to MetaCart The remarkable increase in computing power together with a similar increase in sensor and actuator capabilities now under way is enabling a significant change in how systems can sense and manipulate their environment. These changes require control algorithms capable of operating a multitude of interconnected components. In particular, novel “smart matter” systems will eventually use thousands of embedded, microsize sensors, actuators and processors. In this paper, we propose a new framework for a on-line, adaptive constrained optimization for distributed embedded applications. In this approach, on-line optimization problems are decomposed and distributed across the network, and solvers are controlled by an adaptive feedback mechanism that guarantees timely solutions. We also present examples from our experience in implementing smart matter systems to motivate our ideas. - In Leslie Hogben (Hrsg.), Handbook of Linear Algeba, Kapitel 57, Chapman & Hall/CRC , 2006 "... Summary and ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1410940","timestamp":"2014-04-21T01:13:46Z","content_type":null,"content_length":"38308","record_id":"<urn:uuid:f6a5bf69-29b7-472d-ac6e-fe0492510c57>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Please, someone help me construct a C++ program for the following question: construct a program in C++ to calculate the value when you perform the following operations, using functions and keyboard input: addition, subtraction, multiplication, integer division, real division.

First, you'll need a way to let the user enter values and commands using the keyboard. Do you know how to do that?

It would be cout<<"Enter the values"; wouldn't it?

No, cout is for output, not input. Please find out how to do keyboard input in C++, then come back.

cout<<"Enter the values";
cin>>values;

Please construct a complete program that will let the user enter a value, and then echo back that value.

Suppose the question is to construct a C++ program to display the registration number:

#include<iostream>
using namespace std;
int main()
{
    int regno;
    cout<<"Enter the Registration Number"<<endl;
    cin>>regno;
    cout<<regno<<endl;
    system("PAUSE");
    return 0;
}

Is this right?

Looks good so far (you can determine yourself whether it is correct by running it). Since the user is meant to be able to provide commands (as well as numbers), an int input will not suffice. Change your program to allow the user to enter something like "+ 3 4". You can change the prompt to say something like "Please enter a command" or whatever you like.

Hi @future_engineer, can you help me with this?

@atjari, hi. I'm new to the computer course and haven't learnt much about C yet. :(

It's OK. Thanks a lot.

The approach would be: declare the operations as functions, initiate a switch case over all of them, and declare a function for the user to input data. Then define each function. I know it's an old approach; I used it in C for some ADTs.

Could you just show me how that program will look, if you don't mind? I still am unable to get a clear idea of a suitable program for my question.

Which part of my last answer did you not understand? Yes, at some point you'll need to create functions that actually execute the operations, but they won't help you if the user cannot type in which operations he wants to choose.

Just a moment, let me write it and see.

By the way, "real division" strictly speaking cannot be done on a computer, because irrational numbers cannot be represented.

If we are to use the case statement, where do we insert it: in the main function or a sub-function?

You need to decide how the user will choose the operations. The question leaves out quite a few important specifications. What is the syntax, i.e. "+ 3 4" (prefix) or "3 + 4" (infix)? The first one is easier to program but harder to use. Are combinations allowed ("4 + 3 * 5")? Only pairs of operands, or would add(6,9,7) be allowed?

Do you mean "where do we insert it?"

Yes, because we need to ask the user to enter the choice number of the operation.

@nczempin why introduce complications for multiple operations? Let's get a simple program done first and then analyse these infix, prefix, postfix tweaks, eh?

@atjari the switch cases take care of that.

That's the problem I have now: I don't know where to insert it, @arcticf0x.

I am extremely sorry @nczempin, we haven't learnt infix, postfix etc. yet.

Nobody's introducing any complications; atjari will have to decide what to use, since the question doesn't specify it. If we are free to choose (no idea if we are), then the easiest to program would assume 2 operands and no combinations. In that case it would be easiest to choose the operation from 5 options (we can just use numbers) and then prompt for the 2 operands. Then he wouldn't need to parse a string.

Well, by now I had suspected this, so let's just assume the simplest possible way, in which you let the user input 3 numbers: operation operand1 operand2.

And then you can do a switch on the operation, and in each case you do the calculation. Or rather, in each case section you call the appropriate function, since the question asks you to use functions.

Have you learnt about enumerations?

Something like this:

switch (operation){
    case 1:
        result = add(operand1, operand2);
        break;
    case 2:
        result = subtract(operand1, operand2);
        break;
    ...
    default:
        // some kind of error handling
}

OK, then just use ints, like I did in the example. You get the idea?

Yes, a little.

Well, see how far you get with this rough outline; you'll have to declare the variables, the functions, the text printed out, etc.

Can you check the attachment and say if I have started correctly?

The functions need parameters, e.g. "int sum(int a, int b);". And of course you'll need to define them too at some point, but just declaring works for now (you'll get link errors).

And since you're supposed to support "real division" (by which I'm assuming they mean floating-point division), you may want to allow floats or doubles to be used, definitely in the realdiv function. Once you have that, it's up to you to decide whether you want to allow only integers as operands, or if floats/doubles are also allowed. Since you'll have to allow float or double for realdiv, this may make sense (although a program that only allows integers should also be fine): integerdiv(5, 2) = 2, realdiv(5, 2) = 2.5.

If you don't mind, can you do me one more favour? It's past midnight here; if you can write the whole program and show me, I will be much more grateful to you. Please, only if you don't mind. It's 1 a.m. in my place.

Sorry mate, I won't just program it for you. I have other things to do. Just go to bed and do it tomorrow.

It's OK, thanks anyway. I need to submit it in the morning. Thanks a lot for helping me.

Just put the switch in the main method, and do the functions like this:

int add(int a, int b);
int subtract(int a, int b);
...
float realdiv(int a, int b);

int main(){
    int operation;
    int operand1;
    int operand2;
    // here do the inputting, like you previously did for just 1 variable,
    // but here you collect all three values
    // here you insert my switch example
}

int add(int a, int b){
    return a + b;
}
// and the others
int integerdiv(int a, int b){
    return a / b;
}
float realdiv(int a, int b){
    return (float) a / (float) b;
}

Now just put all those pieces together and correct the errors.

So kind of you, mate. I will never forget this help of yours. Thanks a lot.

When I compile, I get "operation" reported as undeclared.

I marked the place where the cout and cin go in the comment "// here do the inputting".

And in the code I gave you it's declared: if you put the switch in the main function (where I put the comment that says "// here you insert my switch example") and also the "int operation;", it should be declared.

If I compile now, it says "case label not within switch statement".

Please try to figure out these compile errors on your own. The error message should help you.

Here, should result be an int or a float?

Please show some effort; don't just keep asking others for the answer. What do you think, int or float?

Extremely sorry for troubling you a lot. It's float.

When I compile now, the correct answer is not displayed.

Yes, because there is an error in your code.

But no error is displayed.

A programming error can still occur even if the program compiles correctly. Debug the code and see what happens. What makes you think that your code would display the correct answer?

I haven't learnt about debugging, except its definition.

Well, before your next programming assignment you should learn how to set breakpoints and step through your code in whatever IDE you are using. In the meantime, I'll just give you the answer: you are displaying the result before you are computing it.

I assure you that I will definitely learn it, but for the moment will you please tell me where to insert it?

Oh come on: where do you compute the result?

Now when I compile, it performs only subtraction for any choice given.

You forgot some break statements. That would explain you getting division on choices 3 and 4, but not subtraction. I gotta go now, sorry.
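For reference, the pieces discussed in this thread assemble into the following complete program. This is a sketch rather than the asker's actual submission (which never appears in the thread); the menu wording, the multiply function, and printing each result directly instead of storing it in a shared result variable are choices of mine:

#include <iostream>
using namespace std;

// One function per operation, as the assignment requires.
int add(int a, int b)        { return a + b; }
int subtract(int a, int b)   { return a - b; }
int multiply(int a, int b)   { return a * b; }
int integerdiv(int a, int b) { return a / b; }               // 5/2 = 2
float realdiv(int a, int b)  { return (float)a / (float)b; } // 5/2 = 2.5

int main() {
    int operation, operand1, operand2;
    cout << "Choose an operation (1=add, 2=subtract, 3=multiply, "
            "4=integer division, 5=real division): ";
    cin >> operation;
    cout << "Enter two operands: ";
    cin >> operand1 >> operand2;

    // Printing inside each case sidesteps the int-vs-float "result"
    // question discussed above; note the break after every case.
    switch (operation) {
        case 1: cout << add(operand1, operand2)        << endl; break;
        case 2: cout << subtract(operand1, operand2)   << endl; break;
        case 3: cout << multiply(operand1, operand2)   << endl; break;
        case 4: cout << integerdiv(operand1, operand2) << endl; break;
        case 5: cout << realdiv(operand1, operand2)    << endl; break;
        default: cout << "Unknown operation" << endl;
    }
    return 0;
}

// Division by zero is not checked here, to keep the sketch minimal.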
{"url":"http://openstudy.com/updates/4f537623e4b019d0ebb094f5","timestamp":"2014-04-18T21:06:33Z","content_type":null,"content_length":"211629","record_id":"<urn:uuid:265988f6-359d-42dc-b357-62947476dded>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Hull Moving Average Formula

• By Alan Hull. Website: AlanHull.com. The Hull Moving Average solves the age-old dilemma of making a moving average more responsive to current price activity whilst maintaining curve smoothness.
• TRADING SYSTEMS: Removing Lag, Forecasting Data. "Trading Indexes With The Hull Moving Average" by Max Gardner. Moving averages smooth data and make it …
• "Hello guys, need help here. Anyone have the Hull Moving Average for MT4? Hull Moving Average just like the regular indicator in VTtrader. Please share it with me."
• Traders have long used moving averages to measure momentum and define areas of support and resistance. Moving averages are calculated by averaging the value of …
• Hull Moving Average (HMA): largest database of free indicators, oscillators, systems and other useful tools for trading system developers. Amibroker (AFL) …
• Tornado activity: Hull-area historical tornado activity is slightly above the Iowa state average (179% greater than the overall U.S. average).
• Tornado activity: Hull-area historical tornado activity is slightly above the Georgia state average (83% greater than the overall U.S. average).
• In the statistical analysis of time series, autoregressive–moving-average (ARMA) models provide a parsimonious description of a (weakly) stationary stochastic …
• In time series analysis, the moving-average (MA) model is a common approach for modeling univariate time series. The notation MA(q) refers to the moving …
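None of the snippets above actually states the formula. Alan Hull's published definition chains three weighted moving averages (WMAs): with period n, HMA(n) = WMA(2*WMA(price, n/2) - WMA(price, n), sqrt(n)). A minimal C++ sketch of that definition (the function names and the integer rounding of n/2 and sqrt(n) are my choices; it assumes n >= 4 and enough price history):

#include <cmath>
#include <cstddef>
#include <vector>

// Weighted moving average of the n values ending at index i,
// with weights n, n-1, ..., 1 from newest to oldest. Requires i >= n-1.
double wma(const std::vector<double>& p, std::size_t i, std::size_t n) {
    double num = 0.0, den = 0.0;
    for (std::size_t k = 0; k < n; ++k) {
        double w = static_cast<double>(n - k);
        num += w * p[i - k];
        den += w;
    }
    return num / den;
}

// Hull Moving Average at index i with period n:
// HMA(n) = WMA( 2*WMA(n/2) - WMA(n), sqrt(n) ).
// Requires roughly i >= n + sqrt(n) values of history.
double hma(const std::vector<double>& p, std::size_t i, std::size_t n) {
    std::size_t half = n / 2;
    std::size_t root =
        static_cast<std::size_t>(std::lround(std::sqrt(static_cast<double>(n))));
    std::vector<double> diff;  // the intermediate 2*WMA(n/2) - WMA(n) series
    for (std::size_t j = i + 1 - root; j <= i; ++j)
        diff.push_back(2.0 * wma(p, j, half) - wma(p, j, n));
    return wma(diff, diff.size() - 1, root);
}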
{"url":"http://find.wiki.gov.cn/w_Hull+Moving+Average+Formula/","timestamp":"2014-04-16T13:09:35Z","content_type":null,"content_length":"12837","record_id":"<urn:uuid:56b62cc5-f76d-4738-bddc-cf00ef214f48>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
What are the next three terms in this sequence: -30, -14, -6, -2, 0?

Answer: 1, 1.5, 1.75. (There is a Mathematics group, by the way. Some geniuses are there.)
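The reasoning behind the answer, which the thread leaves out: the consecutive differences are +16, +8, +4, +2, each half the one before, so the next three differences are +1, +0.5, +0.25, giving 0 + 1 = 1, then 1 + 0.5 = 1.5, then 1.5 + 0.25 = 1.75.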
{"url":"http://openstudy.com/updates/506cb419e4b060a360fea9d1","timestamp":"2014-04-20T01:00:12Z","content_type":null,"content_length":"27643","record_id":"<urn:uuid:8ca0f71b-5056-4396-a2b3-77b7269a2eda>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Total Spaces of Quasicoherent Sheaves

You can construct a total space of a quasicoherent sheaf on a scheme by taking relative Spec of the symmetric algebra of the dual sheaf. For locally free sheaves, you get vector bundles, and every vector bundle arises this way. What about sheaves that are not locally free? Are there any other sheaves for which the total space is a useful construction?

ag.algebraic-geometry sheaf-theory

Comment: Do you mean Seifert bundles? They are bundles with nonreduced fibers; see Kollár for more info. – user1290 Oct 29 '09 at 21:17

Answer 1: We can back up a step and ask about the relative Spec of the symmetric algebra of any coherent sheaf. Then there are lots of nice examples. If $X=\mathrm{Spec}\ k[t]$ and $E$ is the skyscraper sheaf at $0$, then we get $\mathrm{Spec}\ k[t,u]/(tu)$. If $X$ is $\mathrm{Spec}\ k[x,y]$ and $E$ is the ideal sheaf of the origin, then the relative Spec of $\mathrm{Sym}_X (E)$ is $\mathrm{Spec}\ k[x,y,t,u]/(xu-ty)$. This has a one-dimensional fiber over the points of $X$ other than $(0,0)$, and a two-dimensional fiber over $(0,0)$. It looks like the fiber over $x \in X$ is the dual vector space to $E_x \otimes_{\mathcal{O}_x} k(x)$, but I am not sure whether this is always true.

If you insist that $E$ be the dual sheaf of some sheaf $F$, then you run into the problem that non-locally-free dual sheaves are hard to come by in dimensions 1 and 2. See my question, to which there are several good answers.

Answer 2: This sounds like it's related to my old question. In my answer, I needed that the sheaf is locally something nice in the étale topology, but if it is locally nice in the Zariski topology, then it will be a fiber bundle; the question is only which open sets are needed for local triviality.
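A quick check of the first example in Answer 1 (my verification, not part of the thread): for the skyscraper module $E = k[t]/(t)$ over $k[t]$, the degree-one generator $u$ of the symmetric algebra satisfies $t \cdot u = 0$, so $\mathrm{Sym}_{k[t]}(k[t]/(t)) \cong k[t][u]/(tu)$. The total space $\mathrm{Spec}\ k[t,u]/(tu)$ is the union of the two coordinate axes: a single point as fiber over each $t \neq 0$ and a full affine line over $t = 0$, consistent with the fiber over $x$ being the dual of $E_x \otimes k(x)$.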
{"url":"http://mathoverflow.net/questions/3315/total-spaces-of-quasicoherent-sheaves","timestamp":"2014-04-16T07:49:55Z","content_type":null,"content_length":"54613","record_id":"<urn:uuid:1a09b046-8d7a-460f-9253-bad2d0960457>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Jozsef Solymosi
Office: MATH 116

MATH 309 Section 201, Term II, 2011-12: Topics in Geometry

Prerequisite: One of MATH 152, MATH 221, MATH 223 and one of MATH 220, MATH 226, CPSC 121. Specifically, it will be assumed that students are familiar with basic techniques of mathematical proof and reasoning such as induction and proof by contradiction. Students will be expected to write logically correct and mathematically coherent proofs as part of homework and examinations.

The course syllabus will be as follows:
• Euler's formula. Convexity, Graphs, Trees, Convex Polyhedra, Planar Graphs.
• Planar Graphs. Drawings, Colouring, Structure, Kuratowski's Theorem.
• Geometric Graphs. Drawings, Intersections, Crossings.
• Crossing Number. Basic Probability, Upper and Lower Bounds.
• Point-Line Incidences. Szemerédi-Trotter Theorem (bounds on incidences).
• Metric Problems in Discrete Geometry. The unit distances problem, distinct distances.
• Combinatorics of Point Sets. Erdős-Szekeres Theorem, Halving Lines.
• Circle Packing. Planar circle packing, Lattice Packing, Sphere Packing.
• Geometry of Numbers. Pick's Theorem, Minkowski's Theorem, applications.

Evaluation: There will be two midterm exams and one final exam, as well as weekly homework assignments. Homework will be assigned on Thursdays, and due the following Thursday in class. Late homework will not be accepted. The course mark will be computed as follows:
Final Exam: 50 percent
Midterm Exams: 20 percent x 2 = 40 percent
Homework: 10 percent
You are required to be present at all examinations. No makeup tests will be given. Non-attendance at an exam will result in a mark of zero being recorded.

What to study? The book "Proofs from the BOOK" contains most of the material we've learned related to Euler's formula. You can read this book online via the UBC library. Read Chapter 12, "Three applications of Euler's formula", up to Sylvester's problem.

Here is the first practice midterm with solutions:

Question 1. Prove that every planar graph is five-colourable. (If you can't remember the proof, you will find it in many elementary graph theory books or on the web, e.g. http://en.wikipedia.org/wiki/Five_color_theorem, but try to prove it first.)
Solution: http://en.wikipedia.org/wiki/Five_color_theorem

Question 2. Illustrate the convex hull of the following point set in the plane. Which are the vertices of the convex hull? Write the other points as a convex combination of some of the vertices. {(0,1), (-2,-1), (4,-1), (1,1), (1,0)}.
Solution: I can't attach a picture now; check the practice questions similar to this one. (1,0) is a convex combination of the points (-2,-1), (4,-1), (1,1). (You can choose another triple if you want.) We are looking for nonnegative a1, a2, and a3 such that a1+a2+a3=1 and a1(-2,-1)+a2(4,-1)+a3(1,1)=(1,0). So we have three linear equations: one for convexity (the sum equals 1) and one for each coordinate in the equation above. You solve to get a1=1/4, a2=1/4, and a3=1/2.

Question 3. Given a planar graph on 17 vertices, suppose that there is a drawing of this graph with 17 faces (countries). Show that the graph has a vertex with degree 3 or less.
Solution: By Euler's formula we know that the number of edges is 32. The sum of the degrees is 32*2 = 64, so the average degree is 64/17 = 3.7647... < 4. Since the average is smaller than 4, there must be at least one vertex with degree less than 4.

Question 4. Prove that the Grötzsch graph is not planar.
Picture: http://en.wikipedia.org/wiki/File:Groetzsch-graph.svg
Solution: The graph has 11 vertices and 20 edges. We know that any simple triangle-free planar graph on v ≥ 3 vertices has at most 2v - 4 edges. This graph has no triangles and has 20 > 2*11 - 4 = 18 edges, therefore it can't be planar.

Question 5. Given a planar graph G such that all but four vertices have degree four or less, prove that G is four-colourable.
Solution: We prove the statement by induction. Every graph on four vertices is four-colourable. Now let us suppose that the statement holds for graphs on n vertices for some n > 4. We show that it then holds for graphs on n+1 vertices as well. Draw the (n+1)-vertex graph G in the plane and choose a vertex v with degree four or less. Colour the graph without v using four colours. If the neighbours of v were coloured with fewer than four colours, then we can colour v with an unused colour. If all four colours were used, we will modify the colouring so that eventually only three colours appear on the neighbours of v. Say the colours of the neighbours are black (v1), white (v2), red (v3), and blue (v4) in clockwise order. Let us consider the subgraph of G spanned by the black and red vertices. There are two cases:
a) The two vertices v1 and v3 are not connected in the graph spanned by the black and red vertices. In this case we can simply swap the colouring of the black and red vertices in the connected component of v1 in the spanned subgraph. After that, colour v black to obtain a four-colouring of G.
b) The two vertices v1 and v3 are connected in the graph spanned by the black and red vertices. In this case the two vertices v2 and v4 can't be connected in the graph spanned by the white and blue vertices. Swap the colouring of the white and blue vertices in the connected component of v2 in the spanned subgraph. After that, the neighbours of v use only black, red, and blue, so colour v white.

The notes below contain most of the material we've learned related to transformations, symmetries, and lattices. There is also a collection of practice problems:
Here are some practice questions for convexity. Here are the solutions.
Here are the solutions of the second midterm.
The notes below contain most of the material we've learned related to the crossing number and applications, including Szemerédi-Trotter:
"The crossing number inequality", What's new: terrytao.wordpress.com/2007/09/18/the-crossing-number-inequality/
Here are the practice questions for the crossing number. Here are two more practice questions for the crossing number.

Homework assignments

HW #1, Question 2. What is the minimum number of edges one should erase from K6 (the complete graph on 6 vertices) to make the remaining graph planar?
(a) Show that one should remove at least 3 edges.
Solution one: Suppose that there is a planar drawing of G, which is K6 minus 2 edges. Denote the number of countries (faces) by f; the number of edges is 13 and the number of vertices is 6. By Euler's formula we know that 6 - 13 + f = 2, so f = 9. However, every country has at least three borders, and the number of borders is exactly twice the number of edges (both sides of an edge define a border). So 3f must be at most twice the number of edges, yet 3*9 = 27 is larger than 2*13 = 26. This is a contradiction, so G cannot be planar.
Solution two: We saw in class that if G is a planar graph on n > 2 vertices then the number of edges is at most 3n - 6.
In our case that means that the number of edges is at most 3*6 - 6 = 12, as needed.

Question 3. What is the minimum number of edges one should erase from K3,4 (the complete bipartite graph on 3+4 vertices) to make the remaining graph planar?
(a) Show that one should remove at least 2 edges.
Solution: Suppose that one can remove one edge of K3,4 and obtain a planar drawing of the resulting graph. By Euler's formula 7 - 11 + f = 2, so f = 6. Since K3,4 is bipartite, every country has at least four borders. By the same argument as in Solution one of the previous question, 4f = 24 should be at most 2*11 = 22. This is a contradiction, so K3,4 minus an edge can't be planar.

HW #2
2. Exercises:
a. (2 points) Prove the following statement: if a tree has a vertex with degree k, then the tree has at least k vertices having degree one.
Solution one: Note that every tree (with at least two vertices) has at least two vertices with degree one. Proof: consider the longest path in the tree. The two end-vertices of the path have degree one, otherwise we could extend the path to a longer one. Now let us suppose that vertex v has degree k. If we remove v from the graph, then k trees remain. If such a tree has one vertex only, then that vertex had degree one in the original tree. If the tree has at least two vertices, then it has at least two degree-one vertices, and at most one of them was connected to v; the second one had degree one in the original graph.
Solution two: One can argue that we can start k walks from v using the k edges connected to v. Continue each walk as long as we can without going back along an edge we have already used. All k walks end when we can't go any further without turning back. The endpoints of these walks must have degree one, as we could continue the walk otherwise.

HW #3
Question 1. The Heawood graph is the graph of the skeleton of the Szilassi polyhedron. (You can find its picture at http://mathworld.wolfram.com/HeawoodGraph.html for example.) Prove that it is not planar.
Proof by contradiction: Let us suppose that it is planar and has a good embedding (drawing) into the plane. The smallest cycle has 6 vertices, therefore if the number of faces is denoted by f, then the number of borders = 2*edges is at least 6f. By Euler's formula we know that f = 21 - 14 + 2 = 9. But 2*edges = 42, which is less than 6f = 54. This is a contradiction.

Question 2. Prove the Four Colour Theorem for planar graphs with no triangles: show that if a planar graph is triangle-free then it can be coloured with four colours. (Hint: maybe such graphs always have a vertex with small degree.)
Proof by induction: One can show that a triangle-free planar graph always has a vertex with degree 3 or less (we will prove it below). Then the proof that it is four-colourable goes by induction. Base case: any graph on 4 (or fewer) vertices is 4-colourable. Suppose that every triangle-free planar graph on n vertices is 4-colourable, and take any triangle-free planar graph on n+1 vertices. It has a vertex with degree three or less; remove it, and the remaining graph can be 4-coloured by the induction hypothesis. The removed vertex had 3 or fewer neighbours, so we can colour it with a free fourth colour.
All we have to prove is that there is always a vertex having degree 3 or less. We can suppose that the graph is connected and has at least 4 vertices. (If it is not connected, then take any connected component of the graph.) Every face has at least 4 borders, therefore the number of edges is at least 2f.
Now v - e + f = 2 implies that v - e + e/2 is at least 2, so on one hand e <= 2v - 4. On the other hand, the sum of the degrees is 2e. If every vertex had degree 4 or larger, then 2e would be at least 4v, which is not possible since e <= 2v - 4.

Question 3. Suppose that a planar graph has two connected components (you draw two connected planar graphs next to each other, and this is your new graph). What is the right variant of Euler's formula in this case? Prove your formula!
Proof: The formula is v - e + f = 3. For the connected components G1 and G2 the formulas v1 - e1 + f1 = 2 and v2 - e2 + f2 = 2 hold, and v = v1 + v2, e = e1 + e2. Adding the two formulas gives v - e = 4 - (f1 + f2). The unbounded face that surrounds both components was counted twice, once for each component, so f = f1 + f2 - 1, and therefore v - e + f = 3.

Question 4. Prove that if a planar graph on n vertices has 2n - 5 faces, then it has at least 3 vertices with degree at most 5.
Proof: Using Euler's formula we see that this graph has 3n - 7 edges. If all but two vertices had degree 6 or larger, then the sum of degrees would be at least 6(n - 2), which is more than twice the number of edges, 2(3n - 7). But the two should be equal, so the graph has at least 3 vertices with degree 5 or less.

Here are the questions of HW#6. And here are the solutions.
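A remark tying together the edge bounds used in these solutions (my own summary, not from the course page): all three come from one inequality. If a connected planar graph on v >= 3 vertices has girth g, i.e. every cycle has length at least g, then every face has at least g borders, so gf <= 2e; substituting f = e - v + 2 from Euler's formula gives e <= g(v-2)/(g-2). Taking g = 3 gives e <= 3v - 6 (the K6 argument), g = 4 gives e <= 2v - 4 (the K3,4 and triangle-free arguments), and g = 6 gives e <= (3/2)(v-2) = 18 < 21 for the Heawood graph.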
{"url":"http://www.math.ubc.ca/~solymosi/309.htm","timestamp":"2014-04-17T09:44:52Z","content_type":null,"content_length":"23600","record_id":"<urn:uuid:5415c3fc-a5b7-4797-83c5-f844a39f8cb6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
module BatLazyList: sig .. end

Lazy lists of elements. Lazy lists are similar to lists, with the exception that their contents are only computed whenever requested. This makes them particularly useful in contexts where streams of data are to be handled.

Note: For this documentation, we will assume the existence of a lazy list syntax extension such that [^ ^] is the empty lazy list and [^ a;b;c ^] is the lazy list containing elements a, b, c.

Note: Enumerations (as featured in module BatEnum) and lazy lists (as featured in this module) are quite similar in purpose. Lazy lists are slightly higher level, insofar as no cloning is required to get them to work, which makes them slightly more useful in contexts where backtracking is common. Enumerations, on the other hand, are closer to traditional stream processing, and require more low-level marking whenever backtracking is required, but may be faster and more memory-efficient when used properly. Either choice is recommended over OCaml's built-in Stream.

Author(s): David Teller

exception Empty_list
Empty_list is raised when an operation applied on an empty list is invalid. For instance, hd nil will raise Empty_list.

exception Invalid_index of int
Invalid_index is raised when an indexed access on a list is out of list bounds.

exception Different_list_size of string
Different_list_size is raised when applying functions such as iter2 on two lists having different sizes.

exception No_more_elements

Note: The types are kept concrete so as to allow pattern-matching. However, it is generally easier to manipulate BatLazyList.nil and BatLazyList.cons.

type 'a t = 'a node_t Lazy.t
The type of a lazy list.

type 'a node_t = | Nil | Cons of 'a * 'a t
The type of an item in the list.

include BatEnum.Enumerable
include BatInterfaces.Mappable

val nil : 'a t
The empty list.

val cons : 'a -> 'a t -> 'a t
Build a list from a head and a tail.

val (^:^) : 'a -> 'a t -> 'a t
As cons: x^:^l is the lazy list with head x and tail l.

val peek : 'a t -> 'a option
peek l returns the first element of l, if it exists.

val get : 'a t -> ('a * 'a t) option
get l returns the head and tail of l, if l is not empty.

List creation

val from : (unit -> 'a) -> 'a t
from next creates a (possibly infinite) lazy list from the successive results of next. The function next should raise LazyList.No_more_elements to denote the end of the list.

val from_while : (unit -> 'a option) -> 'a t
from_while next creates a (possibly infinite) lazy list from the successive results of next. The list ends whenever next returns None.

val from_loop : 'b -> ('b -> 'a * 'b) -> 'a t
from_loop data next creates a (possibly infinite) lazy list from the successive results of applying next to data, then to the result, etc. The list ends whenever the function raises LazyList.No_more_elements.

val seq : 'a -> ('a -> 'a) -> ('a -> bool) -> 'a t
seq init step cond creates a sequence of data, which starts from init, extends by step, until the condition cond fails. E.g. seq 1 ((+) 1) ((>) 100) returns [^1, 2, ... 99^]. If cond init is false, the result is empty.

val unfold : 'b -> ('b -> ('a * 'b) option) -> 'a t
unfold data next creates a (possibly infinite) lazy list from the successive results of applying next to data, then to the result, etc. The list ends whenever the function returns None.

val init : int -> (int -> 'a) -> 'a t
Similar to Array.init, init n f returns the lazy list containing the results of (f 0), (f 1), ..., (f (n-1)). Raises Invalid_argument "LazyList.init" if n < 0.
val make : int -> 'a -> 'a t
Similar to String.make, make n x returns a list containing n elements x.

val range : int -> int -> int t
Compute lazily a range of integers a .. b as a lazy list. The range is empty if b < a.

Higher-order functions

val iter : ('a -> 'b) -> 'a t -> unit
Eager iteration. iter f [^ a0; a1; ...; an ^] applies function f in turn to a0; a1; ...; an. It is equivalent to begin f a0; f a1; ...; f an; () end. In particular, it causes all the elements of the list to be evaluated.

val iteri : (int -> 'a -> unit) -> 'a t -> unit
Eager iteration, with indices. iteri f [^ a0; a1; ...; an ^] applies function f in turn to a0; a1; ...; an, along with the corresponding 0,1..n index. It is equivalent to begin f 0 a0; f 1 a1; ...; f n an; () end. In particular, it causes all the elements of the list to be evaluated.

val map : ('a -> 'b) -> 'a t -> 'b t
Lazy map. map f [^ a0; a1; ... ^] builds the list [^ f a0; f a1; ... ^] with the results returned by f. Not tail-recursive. Evaluations of f take place only when the contents of the list are forced.

val mapi : (int -> 'a -> 'b) -> 'a t -> 'b t
Lazy map, with indices. mapi f [^ a0; a1; ... ^] builds the list [^ f 0 a0; f 1 a1; ... ^] with the results returned by f. Not tail-recursive. Evaluations of f take place only when the contents of the list are forced.

val fold_left : ('a -> 'b -> 'a) -> 'a -> 'b t -> 'a
Eager fold_left. LazyList.fold_left f a [^ b0; b1; ...; bn ^] is f (... (f (f a b0) b1) ...) bn. This causes evaluation of all the elements of the list.

val fold_right : ('a -> 'b -> 'b) -> 'b -> 'a t -> 'b
Eager fold_right. fold_right f b [^ a0; a1; ...; an ^] is f a0 (f a1 (... (f an b) ...)). This causes evaluation of all the elements of the list. Not tail-recursive. Note that the argument order of this function is the same as fold_left above, but inconsistent with other fold_right functions in Batteries. We hope to fix this inconsistency in the next compatibility-breaking release, so you should rather use the more consistent eager_fold_right.
Since 2.2.0

val eager_fold_right : ('a -> 'b -> 'b) -> 'a t -> 'b -> 'b
Eager fold_right. As fold_right above, but with the usual argument order for a fold_right. Just as fold_left on a structure 'a t turns an element-level function of type ('b -> 'a -> 'b), with the accumulator argument 'b on the left, into a structure-level function 'b -> 'a t -> 'b, fold_right turns a function ('a -> 'b -> 'b) (accumulator on the right) into a 'a t -> 'b -> 'b.

val lazy_fold_right : ('a -> 'b Lazy.t -> 'b) -> 'a t -> 'b Lazy.t -> 'b Lazy.t
Lazy fold_right. lazy_fold_right f (Cons (a0, Cons (a1, Cons (a2, nil)))) b = lazy (f a0 (lazy (f a1 (lazy (f a2 b))))). Forcing the result of lazy_fold_right forces the first element of the list; the rest is forced only if/when the function f forces its accumulator argument.
Since 2.1

val mem : 'a -> 'a t -> bool
mem x l determines if x is part of l. Evaluates all the elements of l which appear before x.

val memq : 'a -> 'a t -> bool
As mem, but with physical equality.

val find : ('a -> bool) -> 'a t -> 'a
find p l returns the first element x of l such that p x returns true. Raises Not_found if no such element has been found.

val rfind : ('a -> bool) -> 'a t -> 'a
rfind p l returns the last element x of l such that p x returns true. Raises Not_found if no such element has been found.

val find_exn : ('a -> bool) -> exn -> 'a t -> 'a
find_exn p e l returns the first element of l such that p x returns true, or raises e if no such element has been found.
val rfind_exn : ('a -> bool) -> exn -> 'a t -> 'a
rfind_exn p e l returns the last element of l such that p x returns true, or raises e if no such element has been found.

val findi : (int -> 'a -> bool) -> 'a t -> int * 'a
findi p l returns the first element ai of l along with its index i such that p i ai is true. Raises Not_found if no such element has been found.

val rfindi : (int -> 'a -> bool) -> 'a t -> int * 'a
rfindi p l returns the last element ai of l along with its index i such that p i ai is true. Raises Not_found if no such element has been found.

val index_of : 'a -> 'a t -> int option
index_of e l returns the index of the first occurrence of e in l, or None if there is no occurrence of e in l.

val index_ofq : 'a -> 'a t -> int option
index_ofq e l behaves as index_of e l except it uses physical equality.

val rindex_of : 'a -> 'a t -> int option
rindex_of e l returns the index of the last occurrence of e in l, or None if there is no occurrence of e in l.

val rindex_ofq : 'a -> 'a t -> int option
rindex_ofq e l behaves as rindex_of e l except it uses physical equality.

Common functions

val next : 'a t -> 'a node_t
Compute and return the next value of the list.

val length : 'a t -> int
Return the length (number of elements) of the given list. Causes the evaluation of all the elements of the list.

val is_empty : 'a t -> bool
Returns true if the list is empty, false otherwise.

val would_at_fail : 'a t -> int -> bool
would_at_fail l n returns true if l contains strictly less than n elements, false otherwise.

val hd : 'a t -> 'a
Return the first element of the given list. Raises Empty_list if the list is empty. Note: this function does not comply with the usual exceptionless error-management recommendations, as doing so would essentially render it useless.

val tl : 'a t -> 'a t
Return the given list without its first element. Raises Empty_list if the list is empty. Note: this function does not comply with the usual exceptionless error-management recommendations, as doing so would essentially render it useless.

val first : 'a t -> 'a
As hd.

val last : 'a t -> 'a
Returns the last element of the list. Raises Empty_list if the list is empty. This function takes linear time and causes the evaluation of all elements of the list.

val at : 'a t -> int -> 'a
at l n returns the element at index n (starting from 0) in the list l. Raises Invalid_index if the index is outside of l bounds.

val nth : 'a t -> int -> 'a
Obsolete. As at.

Association lists

These lists behave essentially as HashMap, although they are typically faster for small numbers of associations, and much slower for large numbers of associations.

val assoc : 'a -> ('a * 'b) t -> 'b
assoc a l returns the value associated with key a in the list of pairs l. That is, assoc a [^ ...; (a,b); ...^] = b if (a,b) is the leftmost binding of a in list l. Raises Not_found if there is no value associated with a in the list l.

val assq : 'a -> ('a * 'b) t -> 'b
val mem_assoc : 'a -> ('a * 'b) t -> bool
val mem_assq : 'a -> ('a * 'b) t -> bool

val rev : 'a t -> 'a t
Eager list reversal.

val eager_append : 'a t -> 'a t -> 'a t
Evaluate a list and append another list after this one. Cost is linear in the length of the first list, not tail-recursive.

val rev_append : 'a t -> 'a t -> 'a t
Eager reverse-and-append. Cost is linear in the length of the first list, tail-recursive.

val append : 'a t -> 'a t -> 'a t
Lazy append. Cost is constant. All evaluation is delayed until the contents of the list are actually read. Reading itself is delayed by a constant.
val (^@^) : 'a t -> 'a t -> 'a t
As lazy append.

val concat : 'a t t -> 'a t
Lazy concatenation of a lazy list of lazy lists.

val flatten : 'a t list -> 'a t
Lazy concatenation of a list of lazy lists.

val split_at : int -> 'a t -> 'a t * 'a t
split_at n l returns two lists l1 and l2, l1 containing the first n elements of l and l2 the others. Raises Invalid_index if n is outside of l size bounds.

val split_nth : int -> 'a t -> 'a t * 'a t
Obsolete. As split_at.

Dropping elements

val unique : ?cmp:('a -> 'a -> int) -> 'a t -> 'a t
unique cmp l returns the list l without any duplicate element. The default comparator ( = ) is used if no comparison function is specified.

val unique_eq : ?eq:('a -> 'a -> bool) -> 'a t -> 'a t
As unique, except it only uses an equality function. Use for short lists when comparing is expensive compared to equality testing.
Since 1.3.0

val remove : 'a -> 'a t -> 'a t
remove l x returns the list l without the first element x found, or returns l if no element is equal to x. Elements are compared using ( = ).

val remove_if : ('a -> bool) -> 'a t -> 'a t
remove_if cmp l is similar to remove, but with cmp used instead of ( = ).

val remove_all : 'a -> 'a t -> 'a t
remove_all l x is similar to remove but removes all elements that are equal to x, not only the first one.

val remove_all_such : ('a -> bool) -> 'a t -> 'a t
remove_all_such p l is similar to remove_if but removes all elements satisfying p, not only the first one.

val take : int -> 'a t -> 'a t
take n l returns up to the n first elements from list l, if available.

val drop : int -> 'a t -> 'a t
drop n l returns l without the first n elements, or the empty list if l has fewer than n elements.

val take_while : ('a -> bool) -> 'a t -> 'a t
take_while f xs returns the first elements of list xs which satisfy the predicate f.

val drop_while : ('a -> bool) -> 'a t -> 'a t
drop_while f xs returns the list xs with the first elements satisfying the predicate f dropped.

val to_list : 'a t -> 'a list
Eager conversion to a list.

val to_stream : 'a t -> 'a Stream.t
Lazy conversion to stream.

val to_array : 'a t -> 'a array
Eager conversion to array.

val enum : 'a t -> 'a BatEnum.t
Lazy conversion to enumeration.

val of_list : 'a list -> 'a t
Lazy conversion from lists. Albeit slower than eager conversion, this is the default mechanism for converting from regular lists to lazy lists. This is for two reasons:
* if you're using lazy lists, total speed probably isn't as much an issue as start-up speed
* this will let you convert regular infinite lists to lazy lists.

val of_stream : 'a Stream.t -> 'a t
Lazy conversion from stream.

val of_enum : 'a BatEnum.t -> 'a t
Lazy conversion from enum.

val eager_of_list : 'a list -> 'a t
Eager conversion from lists. This function is much faster than BatLazyList.of_list but will freeze on cyclic lists.

val of_array : 'a array -> 'a t
Eager conversion from array.

val filter : ('a -> bool) -> 'a t -> 'a t
Lazy filtering. filter p l returns all the elements of the list l that satisfy the predicate p. The order of the elements in the input list is preserved.

val exists : ('a -> bool) -> 'a t -> bool
Eager existential. exists p [^ a0; a1; ... ^] checks if at least one element of the list satisfies the predicate p. That is, it returns (p a0) || (p a1) || ... .

val for_all : ('a -> bool) -> 'a t -> bool
Eager universal. for_all p [^ a0; a1; ... ^] checks if all elements of the list satisfy the predicate p. That is, it returns (p a0) && (p a1) && ... .
val filter_map : ('a -> 'b option) -> 'a t -> 'b t
Lazily eliminate some elements and transform others. filter_map f [^ a0; a1; ... ^] applies f lazily to each a0, a1... If f ai evaluates to None, the element is not included in the result. Otherwise, if f ai evaluates to Some x, element x is included in the result. This is equivalent to match f a0 with | Some x0 -> x0 ^:^ (match f a1 with | Some x1 -> x1 ^:^ ... | None -> [^ ^]) | None -> [^ ^] .

val eternity : unit t
An infinite list of nothing.

val sort : ?cmp:('a -> 'a -> int) -> 'a t -> 'a t
Sort the list using an optional comparator (by default compare).

val stable_sort : ('a -> 'a -> int) -> 'a t -> 'a t

Operations on two lists

val map2 : ('a -> 'b -> 'c) -> 'a t -> 'b t -> 'c t
map2 f [^ a0; a1; ...^] [^ b0; b1; ... ^] is [^ f a0 b0; f a1 b1; ... ^]. Raises Different_list_size if the two lists have different lengths. Not tail-recursive, lazy. In particular, the exception is raised only after the shortest list has been entirely consumed.

val iter2 : ('a -> 'b -> unit) -> 'a t -> 'b t -> unit
iter2 f [^ a0; ...; an ^] [^ b0; ...; bn ^] calls in turn f a0 b0; ...; f an bn. Tail-recursive, eager. Raises Different_list_size if the two lists have different lengths.

val fold_left2 : ('a -> 'b -> 'c -> 'a) -> 'a -> 'b t -> 'c t -> 'a
fold_left2 f a [^ b0; b1; ...; bn ^] [^ c0; c1; ...; cn ^] is f (... (f (f a b0 c0) b1 c1) ...) bn cn. Eager. Raises Different_list_size if the two lists have different lengths.

val fold_right2 : ('a -> 'b -> 'c -> 'c) -> 'a t -> 'b t -> 'c -> 'c
fold_right2 f [^ a0; a1; ...; an ^] [^ b0; b1; ...; bn ^] c is f a0 b0 (f a1 b1 (... (f an bn c) ...)). Eager. Raises Different_list_size if the two lists have different lengths. Tail-recursive.

val for_all2 : ('a -> 'b -> bool) -> 'a t -> 'b t -> bool
Same as for_all, but for a two-argument predicate. Raises Different_list_size if the two lists have different lengths.

val exists2 : ('a -> 'b -> bool) -> 'a t -> 'b t -> bool
Same as exists, but for a two-argument predicate. Raises Different_list_size if the two lists have different lengths.

val combine : 'a t -> 'b t -> ('a * 'b) t
Transform a pair of lists into a list of pairs: combine [^ a0; a1; ... ^] [^ b0; b1; ... ^] is [^ (a0, b0); (a1, b1); ... ^]. Raises Different_list_size if the two lists have different lengths. Tail-recursive, lazy.

val uncombine : ('a * 'b) t -> 'a t * 'b t
Divide a list of pairs into a pair of lists.

module Infix: sig .. end
Infix submodule regrouping all infix operators.

Boilerplate code

val print : ?first:string -> ?last:string -> ?sep:string -> ('a BatInnerIO.output -> 'b -> unit) -> 'a BatInnerIO.output -> 'b t -> unit

Override modules

The following modules replace functions defined in LazyList with functions behaving slightly differently but having the same name. This is by design: the functions are meant to override the corresponding functions of LazyList.

module Exceptionless: sig .. end
Exceptionless counterparts for error-raising operations.

module Labels: sig .. end
Operations on LazyList with labels.
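A short usage sketch (mine, not part of the interface documentation above; it only uses functions documented in this module):

  (* from_loop applies next to the state repeatedly; this next never raises
     No_more_elements, so the resulting lazy list is infinite. *)
  let naturals : int BatLazyList.t =
    BatLazyList.from_loop 0 (fun n -> (n, n + 1))

  (* take gives a lazy prefix; to_list forces it eagerly. *)
  let first_five : int list =
    BatLazyList.to_list (BatLazyList.take 5 naturals)
  (* first_five = [0; 1; 2; 3; 4] *)

  (* seq builds the bounded range 1, 2, ..., 99, as in the doc example. *)
  let to_99 : int BatLazyList.t =
    BatLazyList.seq 1 ((+) 1) ((>) 100)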
{"url":"http://ocaml-batteries-team.github.io/batteries-included/hdoc2/BatLazyList.html","timestamp":"2014-04-17T01:15:23Z","content_type":null,"content_length":"56759","record_id":"<urn:uuid:fceeac44-4407-47e7-bbc1-4d988b5956b1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Shortest Path Algorithms for Nearly Acyclic Directed Graphs
Cited by 2 (2 self)
Shortest path problems can be solved very efficiently when a directed graph is nearly acyclic. Earlier results defined a graph decomposition, now called the 1-dominator set, which consists of a unique collection of acyclic structures with each single acyclic structure dominated by a single associated trigger vertex. In this framework, a specialised shortest path algorithm only spends delete-min operations on trigger vertices, thereby making the computation of shortest paths through non-trigger vertices easier. A previously presented algorithm computed the 1-dominator set in O(mn) worst-case time, which allowed it to be integrated as part of an O(mn + nr log r) time all-pairs algorithm. Here m and n respectively denote the number of edges and vertices in the graph, while r denotes the number of trigger vertices. A new algorithm presented in this paper computes the 1-dominator set in just O(m) time. This can be integrated as part of the O(m + r log r) time spent solving single-source, improving on the value of r obtained by the earlier tree-decomposition single-source algorithm. In addition, a new bi-directional form of 1-dominator set is presented, which further improves the value of r by defining acyclic structures in both directions over edges in the graph. The bi-directional 1-dominator set can similarly be computed in O(m) time and included as part of the O(m + r log r) time spent computing single-source. This paper also presents a new all-pairs algorithm under the more general framework where r is defined as the size of any predetermined feedback vertex set of the graph, improving the previous all-pairs time complexity from O(mn + nr^2) to O(mn + r^3).

- Research and Practice in Information Technology, 2005
Cited by 1 (0 self)
This paper presents new algorithms for computing shortest paths in a nearly acyclic directed graph G = (V, E). The new algorithms improve on the worst-case running time of previous algorithms. Such algorithms use the concept of a 1-dominator set. A 1-dominator set divides the graph into a unique collection of acyclic subgraphs, where each acyclic subgraph is dominated by a single associated trigger vertex. The previous time for computing a 1-dominator set is improved from O(mn) to O(m), where m = |E| and n = |V|. Efficient shortest...

Cited by 1 (1 self)
This paper provides a brief algebraic characterization of constraint violations in Optimality Theory (OT).
I show that if violations are taken to be multisets over a fixed basis set Con, then the merge operator on multisets and a 'min' operation expressed in terms of harmonic inequality provide a semiring over violation profiles. This semiring allows standard optimization algorithms to be used for OT grammars with weighted finite-state constraints in which the weights are violation-multisets. Most usefully, because multisets are unordered, the merge operation is commutative and thus it is possible to give a single graph representation of the entire class of grammars (i.e. rankings) for a given constraint set. This allows a neat factorization of the optimization problem that isolates the main source of complexity into a single constant γ denoting the size of the graph representation of the whole constraint set. I show that the computational cost of optimization is linear in the length of the underlying form with the multiplicative constant γ. This perspective thus makes it straightforward to evaluate the complexity of optimization for different constraint sets.

Abstract. If the given problem instance is partially solved, we want to minimize our effort to solve the problem using that information. In this paper we introduce the measure of entropy, H(S), for uncertainty in partially solved input data S(X) = (X1, ..., Xk), where X is the entire data set and each Xi is already solved. We propose a generic algorithm that merges the Xi's repeatedly, and finishes when k becomes 1. We use the entropy measure to analyze three example problems: sorting, shortest paths and minimum spanning trees. For sorting, Xi is an ascending run, and for minimum spanning trees, Xi is interpreted as a partially obtained minimum spanning tree for a subgraph. For shortest paths, Xi is an acyclic part in the given graph. When k is small, the graph can be regarded as nearly acyclic. The entropy measure, H(S), is defined by regarding p_i = |X_i|/|X| as a probability measure, that is, H(S) = -n Σ_{i=1}^{k} p_i log p_i, where n = Σ_{i=1}^{k} |X_i|. We show that we can sort the input data S(X) in O(H(S)) time, and that we can complete the minimum cost spanning tree in O(m + H(S)) time, where m is the number of edges. Then we solve the shortest path problem in O(m + H(S)) time. Finally we define dual entropy on the partitioning process, whereby we give the time bounds on a generic quicksort and the shortest path problem for another kind of nearly acyclic graphs.

- 1999
There are many algorithms for the all pairs shortest path problem, depending on variations of the problem. The simplest version takes only the size of the vertex set as a parameter. As additional parameters, other problems specify the number of edges and/or the maximum value of edge costs. In this
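To make the sorting claim in the entropy abstract above concrete: one standard way to realize a bound of the form O(n + H(S)) is to merge the runs in Huffman order, always combining the two shortest runs first; merging runs of lengths a and b costs O(a + b), so the total work is the weighted path length of a Huffman tree over the run lengths, which is within O(n) of H(S). This illustration is mine; the paper's own algorithm may differ in detail. A C++ sketch:

#include <algorithm>
#include <cstddef>
#include <functional>
#include <iterator>
#include <queue>
#include <utility>
#include <vector>

// Sort data given as k ascending runs by repeatedly merging the two
// shortest remaining runs (Huffman order).
std::vector<int> merge_runs(std::vector<std::vector<int>> runs) {
    if (runs.empty()) return {};
    using P = std::pair<std::size_t, std::size_t>;          // (length, slot)
    std::priority_queue<P, std::vector<P>, std::greater<P>> pq;
    for (std::size_t i = 0; i < runs.size(); ++i) pq.push({runs[i].size(), i});
    while (pq.size() > 1) {
        auto [la, a] = pq.top(); pq.pop();                   // shortest run
        auto [lb, b] = pq.top(); pq.pop();                   // second shortest
        std::vector<int> merged;
        merged.reserve(la + lb);
        std::merge(runs[a].begin(), runs[a].end(),
                   runs[b].begin(), runs[b].end(),
                   std::back_inserter(merged));
        runs[a] = std::move(merged);                         // reuse slot a
        runs[b].clear();
        pq.push({runs[a].size(), a});
    }
    return std::move(runs[pq.top().second]);
}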
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1330396","timestamp":"2014-04-16T15:19:35Z","content_type":null,"content_length":"24054","record_id":"<urn:uuid:68292b0c-c99b-441a-8d75-da5b87127708>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
On the notion of partial semigroup

A partial binary operation on a set $X$ is just a (partial) function $\varphi: X \times X \rightharpoonup X$ (I'm using \rightharpoonup for partial maps), and a partial magma is a pair $\mathbb M = (M, \star)$ such that $M$ is a set and $\star$ is a partial binary operation on $M$ (I say that $\mathbb M$ is a magma if $\star$ is total). But what about partial semigroups? At least in principle, many alternative definitions are possible. The only thing I would take for certain is that a partial semigroup must be a partial magma $\mathbb M = (M, \star)$ for which $\star$ satisfies some kind of associativity, and of course I've my personal list. Specifically, I say that $\mathbb M$ is

1. (properly) associative if for all $x,y,z \in M$ such that $(x \star y) \star z$ and $x \star (y \star z)$ are defined, it holds $(x \star y) \star z = x \star (y \star z)$.
2. left pre-associative if for all $x,y,z \in M$ such that $x \star y$ and $y \star z$ are defined, it holds that "$(x \star y) \star z$ is defined" implies "$x \star (y \star z)$ is defined and $(x \star y) \star z = x \star (y \star z)$".
3. right pre-associative if the dual of $\mathbb M$ is left pre-associative.
4. pre-associative if it is both left and right pre-associative.
5. strongly associative if for all $x,y,z \in M$ it holds that "$x \star y$ and $y \star z$ are defined" implies "$(x \star y) \star z$ and $x \star (y \star z)$ are defined, and also $(x \star y) \star z = x \star (y \star z)$".
6. left dissociative if for all $x,y,z \in M$ it holds that "$(x \star y) \star z$ is defined" implies "$x \star (y \star z)$ is defined and $(x \star y) \star z = x \star (y \star z)$".
7. right dissociative if the dual of $\mathbb M$ is left dissociative.
8. dissociative if it is both left and right dissociative.

In this taxonomy (which doesn't aim to be complete by any means), "being (properly) associative" corresponds to the weakest possible form of associativity, in the sense that it is implied by all the others. Moreover, all of the above properties collapse into each other if $\mathbb M$ is a magma. So, the (somewhat philosophical) question is: What should a partial semigroup be? Do you envisage any "higher logic" advocating for one instead of another choice?

My own answer is that a partial semigroup should be a strongly associative partial magma, in the sense of the above condition 5. But, on the one hand this doesn't seem to be the "standard" definition in the literature (see, e.g., R.H. Schelp, A partial semigroup approach to partially ordered sets, Proc. London Math. Soc. (1972), s3-24 (1), 46-58, where partial semigroups are pre-associative partial magmas, in the sense of the above condition 4), and on the other hand I can't give myself a reason why this should be better or worse than something different (which bothers me much...).

ra.rings-and-algebras semigroups magmas

Answer (Boris Novikov): There were many articles by Soviet semigroupists (Vagner, Lyapin and their pupils). E.g., see:
Evseev, A.E. A survey of partial groupoids. Transl., II. Ser., Am. Math. Soc. 139, 43-67 (1988); translation from Properties of semigroups, Interuniv. Collect. Sci. Works, Leningrad 1984, 39-76 (1984).
Lyapin, E.S. The possibility of a semigroup continuation of a partial groupoid. (English) Sov. Math. 33, No.12, 82-85 (1989); translation from Izv. Vyssh. Uchebn. Zaved., Mat. 1989, No.12 (331), 68-70 (1989).
Lyapin, E.S.; Evseev, A.E. The theory of partial algebraic operations. Transl. from the Russian by J. M. Cole. Mathematics and its Applications (Dordrecht) 414. Dordrecht: Kluwer Academic Press. x, 235 p. (1997).

Comments:
Boris, can you do homological stuff on partial groupoids? – Victor Mar 5 '13 at 14:36
Many thanks for the references. I will look at the papers tomorrow, but I don't think that the authors give any conceptual motivation to support their own definitions, right? – Salvo Tringali Mar 5 '13 at 14:44
@Victor: Sorry, I do not quite understand your question :-( – Boris Novikov Mar 5 '13 at 15:31
@Salvo Tringali: Maybe. As I knew Lyapin, he was a great "conceptualist" in semigroups. – Boris Novikov Mar 5 '13 at 15:35
@Boris: Thanks, but I may have problems finding a copy of the book that you mention as the last entry in your list. Yet, I see from the 1988 English translation of Evseev's survey that he's using the term groupoid in the sense of Bourbaki's magma. That's fine, but I find the definition of a subgroupoid provided in the paper inconceivable from any conceptual point of view: Evseev lets a submagma of a magma $\mathbb M=(M,\star)$ be a subset $N$ of $M$ s.t. $a\star b\in N$ for all $a,b\in N$ s.t. $a\star b$ is defined in $\mathbb M$. In my view, this is "wrong", for it doesn't agree with [...] – Salvo Tringali Mar 5 '13 at 16:08

Answer (Andreas Blass): The "right" definition of "partial semigroup" probably depends on the use one wants to make of these structures. Vitaly Bergelson, Neil Hindman, and I needed partial semigroups in our paper "Partition Theorems for Spaces of Variable Words", and we used the definition that says if either of $(x*y)*z$ and $x*(y*z)$ is defined then so is the other and they are equal. That seems quite a natural definition, and it worked well for our purposes. I don't know (though I may have known when working on the paper) whether some other definition would have worked as well. [The paper is in Proc. London Math. Soc. (3) 68 (1994) pp. 449-476, and a version of it is on my web site at http://www.math.lsa.umich.edu/~ablass/bbh.pdf.]

Comments:
Yours is precisely my notion of dissociativity, which I've not included in the list in the OP since I myself don't find it particularly "natural". And although we agree that one "right" definition, of course, does not exist, I'd be quite surprised if a "categorial" perspective on the question cannot provide a conceptual motivation, going beyond the scope of a mere "principle of local utility", for favoring one choice over another (which is really what I'm looking for). In any case, thank you much for your answer and the reference. – Salvo Tringali Mar 5 '13 at 14:54
@Salvo: Yes, it depends on what you like. There is probably no final choice, it depends. That is why we also have groups, groupoids etc. - A stronger reason for the above partial semigroup definition by @Andreas is that you do not need brackets any more in word expressions. I think that's the point of that definition. - On the other hand it is also somewhat limiting. For example, most (arbitrary) subsets of groups are not partial semigroups in this sense. Think about it in Z. – Hans Mar 5 '13 at 19:33
@Salvo: mhm, does this also hold if we take condition 2) in your list and (x y) z is defined, but y z is not defined? - ED: ok, you mean that we know that we must interpret xyz as (xy)z because the other possibility makes no sense. Ok, I see. Good argument. It will be hard to work with such expressions practically, but it's interesting.
Maybe they should indeed also be considered. – Hans Mar 5 '13 at 21:37
Yes, but now that I read myself, I must say that it's expressed in a wrong way. What I mean is that either (xy)z and x(yz) are both defined (and then they're equal to each other), one, and only one, of them is defined, or none of them is defined. In the first two cases, we have a unique unambiguous way to interpret the expression xyz. I'll delete my previous comment to avoid confusion. – Salvo Tringali Mar 6 '13 at 9:09

Answer (Gerhard Paseman): I agree with Andreas Blass: the best notion depends on what you need to do. If you are going to do many general algebraic constructions, you might benefit from George Graetzer's classic textbook "Universal Algebra", which develops much theory starting from partial algebras. My hunch is he uses your 1) to build varieties of partial semigroups, but you should check it out for yourself.
Gerhard "Ask Me About System Design" Paseman, 2013.03.05

Comments:
Then I will repeat that, on the practical level, I agree with you and him, but my question is slightly different. I know you know this very well, but let me stress the obvious: even if some questions are not formal, it can still make sense to look for answers which, in presence of many possible alternatives, are better motivated than others by some sort of abstract rationale, and hence canonical in a suitable sense, especially when you're working on foundational aspects and have to decide your basic definitions. On a similar note, if I'm given a set and asked to put a partial order (...) – Salvo Tringali Mar 5 '13 at 17:23
(...) on it, and if the questioner is not expressing an interest in any particular application, then I have no doubt as to the answer which I should provide. – Salvo Tringali Mar 5 '13 at
@Gerhard: I followed your suggestion, and checked it out for myself, but there's no occurrence of the strings "sociativ" or "semigroup" in the 2nd chapter of the 2008 edition of Grätzer's book (which is apparently the only chapter in the whole text devoted to partial algebras). – Salvo Tringali Mar 5 '13 at 18:22
You can pick your foundations and then build what you can; you can also pick your constructs and find what foundations work best to extend those constructs or explain them more thoroughly. Your question likely warrants more of a philosophical discussion than this Q&A forum is meant to provide. Alternately, you could say "I give up. I will become a general algebraist, use George's book, and follow his rationale and definitions; I accept his as the right definition of partial semigroup." Gerhard "It's Really Up To You" Paseman, 2013.03.05 – Gerhard Paseman Mar 5 '13 at 18:28
I am glad you followed my suggestion. However, I think that you need to look not just at substrings, but to consider his definition of variety, and possibly pseudo- or quasi-variety. Your definitions may turn out to have equational or Horn sentence equivalents, or require a certain set of all but finitely satisfiable sentences. I am unsure which would fit your needs best, and you may need to spend half an hour or more reading it to find what you need. For more on partial algebras, I think Burmeister has some material.
Gerhard "Relying On Decade Old Memories" Paseman, 2013.03.05 – Gerhard Paseman Mar 5 '13 at 18:35 show 1 more comment There is the notion of semigroupoid (see for example chapter 2 of Groupoid Metrization Theory by Mitrea, Mitrea, Mitrea and Monniaux or this paper by R.Exel) for the exact definition and up vote some motivation for it) which generalizes semigroups in the same way that groupoids generalize groups. The associativity condition in semigroupoids doesn´t seem to be in your list, as far as 0 down I can tell. Exel's notion of associativity (for what it's worth, I know that paper quite well) is just a disjunctive combination of "my" notions of strong associativity and left/right dissociativity, and you're right that I've not included the last two in the OP (basically, for the reason that I mention in the first comment to Andreas Blass' answer). On another hand, I don't agree that sgrpds, as they're defined in Exel's work, generalize sgrps in the same way as grpds do with grps, but this may depend on our relative definitions: For me, a grpd is a cat all of whose arrows are iso. I'm editing the OP. – Salvo Tringali Mar 5 '13 at 16:46 You mean "conjunctive combination"? Also, the algebraic definition of groupoid (which is equivalent to the cat definition according to en.wikipedia.org/wiki/Groupoid#Algebraic) has the same associativity notion as Exel´s notion. – Ramiro de la Vega Mar 5 '13 at 18:28 But I've never meant that the two definitions of grpd given by Wiki.en are not equivalent to each other. Instead, I said that I don't know which definition (of grpd) you are thinking of when you claim that Exel's sgrpds generalize sgrps in the same way as grpds do with grps: Some people, for instance, use the term "grpd" to refer to Bourbaki's magmas (e.g., J. Howie). Secondly, I'd rather say that grps are to grpds as monoids are to cats, and monoids are to cats as sgrps are to (Mitchell's) semicats, and not to Exel's sgrpds. As for the rest, Definition 2.1 in Exel's paper expresses (...) – Salvo Tringali Mar 5 '13 at 20:17 (...) associativity as the (inclusive) disjunction of three conditions: He says that a partial map $\cdot:\Lambda\times\Lambda\rightharpoonup\Lambda$ is associative if "[...] either of (i) ..., or (ii) ..., or (iii) ... [...]" holds. Now, if my English is not worse than I want to admit to myself, "if either of (i) ..., or (ii) ..., or (iii) ... holds" should mean "if at least one among (i) ..., (ii) ..., or (iii) ... holds". Right?! – Salvo Tringali Mar 5 '13 at 20:17 The assertion $\forall x [p(x) \lor q(x) \implies r(x)]$ is equivalent to the conjunction of $\forall x [p(x) \implies r(x)]$ and $\forall x [q(x) \implies r(x)]$ not to their disjunction. – Ramiro de la Vega Mar 5 '13 at 23:38 show 3 more comments Not the answer you're looking for? Browse other questions tagged ra.rings-and-algebras semigroups magmas or ask your own question.
{"url":"http://mathoverflow.net/questions/123614/on-the-notion-of-partial-semigroup/123631","timestamp":"2014-04-19T10:03:44Z","content_type":null,"content_length":"93959","record_id":"<urn:uuid:0187b879-b752-4bf0-952f-32fde8d5c2c2>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
The Metric System--Laying the Groundwork

Before Common Core, teaching the Metric System to 5th grade students was not part of the 5th grade math curriculum in Kansas. So, last year was the first year I taught the metric system to my students. That experience turned out to not be a pleasant one. I think several things came into play: First, I, as a citizen of the United States, have not been exposed to the metric system a lot. That unfamiliarity leads to an uncomfortableness when trying to teach. Second, students get so bogged down with the Customary Measurement System (making conversions in particular) that they try to make the metric system more difficult than what it really is.

Fast forward a year and it is time to teach metric system conversions to my students. This time, however, I am ready. For one thing, I, myself, am much more comfortable with the content. In addition, what I didn't realize is that I had been laying the groundwork for the concept all year long. Let me explain.

With the new Common Core standards there is a real emphasis on looking for patterns and on embracing more than one strategy to solve an equation or problem. Manipulating numbers so that one can use mental math is also a big idea. All year long we had been practicing these very skills. When I realized I could apply what I'd been teaching all year to metric conversions, my life and my students' lives got a whole lot easier.

Back in September, we spent a great deal of time looking at multiplication and division patterns when working with multiples of ten. I even created 2 different sets of materials--

Zero Can Be Your Hero - Multiplication Patterns with Multiples of 10
Zero Can Be Your Hero - Division Patterns with Multiples of 10

for practicing the skill. After introducing it early in the year, I then made a conscious effort to point out those types of problems anytime they showed up anywhere. After a month or two, all I would have to say is 'it's a Zero the Hero' problem and my kids knew exactly how to apply the pattern to solve the problem mentally. And then I had a 'light-bulb' moment...making metric conversions is like a big ole' Zero the Hero problem...well, kind of...I had to call upon another source of groundwork that had been laid earlier in the year...'the invisible decimal'.

When we worked on multiplying and dividing decimals...I once again referred to Zero the Hero. I told them that you can apply the multiple of 10 pattern but instead of adding or subtracting zeros, you move the decimal. So, what happens if you divide or multiply by a multiple of 10 and the dividend or the factors don't have a decimal? That, my friends, is where the 'invisible decimal' comes into play. I told my students that every number has a decimal point. Some you can see (1.24) and some you can't see (124). When a number doesn't appear to have a decimal, you must locate the 'invisible decimal' (124.0)

Again, I referred to the 'invisible decimal' any time it was appropriate. So, here is what my metric conversion lesson using these two groundwork pieces and the top part of my Metric Measure Conversion Mat sounded like in my classroom yesterday...

Example Problem - 5000 kilometers = how many meters

First we looked at the metric measures mat. I had pointed out that the center box is our starting point; anything to the left of that starting point is larger and increases by 10 times (a multiple of 10 number) in size with each box. Moving right does the opposite.
Then I reminded students what we learned when making customary conversions...to go from a large unit to a small one, multiply; from a small unit to a large one, divide.

I had them identify the measurements in the problem by placing a 'red marker' (small counter) on the kilometer box and a 'black marker' (small counter) on the meter box. Next, identify the fact that we are going from large to small, so we will multiply. To determine what we will multiply by, I had them 'hop' the 'red marker' to the 'black marker' and count those hops as they go. There were 3 hops, so I asked them, "What is the third power of 10?" 1000. So, I wrote that the problem will be 5,000 x 1000, pointed out it was a "Zero the Hero" problem, and they had 5,000,000 in no time.

Then I suggested another way...this uses both Zero the Hero and the Invisible Decimal...they liked this even better. Here's how it works...

Look at the number we are converting and find the 'invisible decimal'. Once you've found it, make it visible (5000.) Now look at your metric measures mat and find kilometers and meters. Which direction will you hop to make the conversion? (To the right.) How many hops? (3) Hop the invisible decimal 3 hops to the right and insert zeros to fill the gaps as you go (a skill we practiced earlier in the year). Stop after 3 hops. What number did you get? (5,000,000). Easy as that!

Now remember, my kids understand why this works because we have been doing this all year long. I would not recommend just showing this strategy without your students having an understanding of the pattern or concept. I'm not going to lie, some students picked it up right away and others took longer. As with any new math concept, the key is to practice the skill over and over. But oh my, how much smoother it went this year as opposed to last.

This summer I will be designing a bundle of products that will help you lay the groundwork and teach metric measure using this strategy. Frankly, I can't wait to get started.

1 comment:

1. I love this!! I have taught my students mental math about adding the zeros to the end, but haven't connected it to the metric system! Thanks for the light bulb moment!
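The whole strategy boils down to counting the hops and moving the decimal -- that is, multiplying or dividing by a power of ten. For readers who want to see the same idea written out symbolically, here is a tiny Python sketch (the prefix exponents are standard; the function name is my own):

```python
# Exponent of 10 for each prefix, relative to the base unit (meter, liter, gram).
PREFIX = {'kilo': 3, 'hecto': 2, 'deka': 1, 'base': 0,
          'deci': -1, 'centi': -2, 'milli': -3}

def convert(value, from_prefix, to_prefix):
    """Hop the decimal once per box between prefixes on the conversion mat."""
    hops = PREFIX[from_prefix] - PREFIX[to_prefix]
    return value * 10 ** hops

print(convert(5000, 'kilo', 'base'))   # 5000 km -> 5000000 m (3 hops to the right)
print(convert(250, 'centi', 'base'))   # 250 cm  -> 2.5 m     (2 hops to the left)
```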
{"url":"http://www.mrsbsbest.com/2013/04/the-metric-system-laying-groundwork.html","timestamp":"2014-04-19T08:02:16Z","content_type":null,"content_length":"105663","record_id":"<urn:uuid:870367a3-e5d1-4b79-a40d-431629f066da>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus of Variations (Cambridge Studies in Advanced Mathematics)

This textbook on the calculus of variations leads the reader from the basics to modern aspects of the theory. One-dimensional problems and the classical issues such as Euler-Lagrange equations are treated, as are Noether's theorem, Hamilton-Jacobi theory, and in particular geodesic lines, thereby developing some important geometric and topological aspects. The basic ideas of optimal control theory are also given. The second part of the book deals with multiple integrals. After a review of Lebesgue integration, Banach and Hilbert space theory and Sobolev spaces (with complete and detailed proofs), there is a treatment of the direct methods and the fundamental lower semicontinuity theorems. Subsequent chapters introduce the basic concepts of the modern calculus of variations, namely relaxation, Gamma convergence, bifurcation theory and minimax methods based on the Palais-Smale condition. The prerequisites are knowledge of the basic results from calculus of one and several variables. After having studied this book, the reader will be well equipped to read research papers in the calculus of variations.
{"url":"http://www.bookfinder.com/dir/i/Calculus_of_Variations_Cambridge_Studies_in_Advanced_Mathematics_/0521642035/","timestamp":"2014-04-19T18:17:22Z","content_type":null,"content_length":"25254","record_id":"<urn:uuid:b572b9d8-068f-4a6f-9d72-1f1cfc0ef0db>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 24

3 [7] + 2 - 7

4p + 6 if p = 1 1/2

how did you get 7 for an answer

21 - 3c if c = 7

2[7] - 9

27 - 15 divided by 3

Rachel's class recycled 8 3/5 boxes of paper in a month. if they recycled another 7 9/10 boxes the next month, what is the total amount they recycled?

an architect built a road 6 5/6 miles long. the next road he built was 3 9/10 miles long. what is the combined length of the two roads?

bill spent 7 9/10 hours working on his reading and math homework. if he spent 5 3/6 hours on his reading homework, how much time did he spend on his math homework?

Adding and subtracting fraction
On Monday Roger spent 8 1/10 hours studying. On Tuesday he spent another 1 3/10 hours studying. What is the combined time he spent studying?

3 1/4 + n = 9 1/12

Neoclassicism I english 11
I have these same exact questions! I need help!

The sides of a triangle measure 9, 15, and 18. If the shortest side of a similar triangle measures 6, find the length of the longest side of this triangle. a. 5 b. 10 c. 12 d. 15

if the measures of the angles of a triangle are in the ratio 1:3:5, the number of degrees in the measure of the smallest angle is a. 10 b. 20 c. 60 d. 180

If the measures of the angles of a triangle are in the ratio 1:3:5, what is the measure in degrees of the smallest angle? a. 10 b. 20 c. 60 d. 180 thanks, and would be greatly appreciated if answered ASAP.

world history
Because of a power struggle in Great Britain, a civil war erupted between the royalists who supported Charles, and the roundheads who supported Parliament. true or false

world history
Because of a power struggle in Great Britain, a civil war erupted between the royalists who supported Charles, and the roundheads who supported Parliament.

social studies
Why did the us go to war with spain.

Tanisha
idk wat to do

ur welcome!!

yes the answer is d

7th grade math
r u tryna write an equation ta figure the problem out?

what do you think it is? tell me and i will tell you if it is correct, and if it's wrong then i'll tell you what it is
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Chyna","timestamp":"2014-04-18T13:38:01Z","content_type":null,"content_length":"9245","record_id":"<urn:uuid:40b1e849-9cca-4f9a-b870-8563fcedfe71>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
If you have an n by m grid of points, you can connect 4 points to make a square in (2mn^3 - 2mn - n^4 + n^2)/12 different ways (taking n to be the smaller of the two dimensions, and counting tilted squares as well as axis-aligned ones).

Syndicated 2012-12-06 02:52:15 from David Barksdale - Google+ Posts
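The closed form is easy to sanity-check by brute force. A short Python sketch (names are mine; it counts every square, tilted or not, whose four vertices lie on the grid):

```python
def count_squares(n, m):
    """Count all squares with vertices on an n-by-m grid of points."""
    pts = {(x, y) for x in range(n) for y in range(m)}
    found = 0
    for (x1, y1) in pts:
        for (x2, y2) in pts:
            if (x1, y1) == (x2, y2):
                continue
            dx, dy = x2 - x1, y2 - y1
            # Rotate the edge by 90 degrees to locate the other two vertices.
            if (x2 - dy, y2 + dx) in pts and (x1 - dy, y1 + dx) in pts:
                found += 1
    return found // 4  # each square is hit once per directed edge of one orientation

for n in range(1, 8):
    for m in range(n, 8):  # the formula assumes n <= m
        assert count_squares(n, m) == n * (n * n - 1) * (2 * m - n) // 12
print("formula checks out up to 7 x 7")
```

Expanding n(n^2 - 1)(2m - n)/12 gives exactly (2mn^3 - 2mn - n^4 + n^2)/12, the corrected form above.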
{"url":"http://www.advogato.org/person/amatus/diary.html?start=92","timestamp":"2014-04-18T03:46:38Z","content_type":null,"content_length":"11708","record_id":"<urn:uuid:cd13c9e4-2468-4296-9a96-dda8dc613bee>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Set the precedence of unary `-` equal to the precedence of `-`, same for `+`

Bug #7331

Status: Rejected
Priority: Normal
Assignee: -
Category: core
Target version: -
ruby -v: 1.9.3
Backport:

I will not be surprised if this proposal is rejected, because the compatibility would be a mess, but i feel i need to start a discussion. This continues the discussion in #7328.

To the best of my knowledge, in mathematics the unary minus has the same precedence as the binary one. However, in the expression (({- a * b})) Ruby computes first (({-a})), and then the product. This means that if i want (for whatever reason) to compute the expression in the natural order, i have to put the parentheses: (({- (a * b)})), which looks very strange to me. I would consider this a bug.

• Status changed from Open to Rejected

unary minus has been higher precedence in the long history of programming languages. probably it's related to negative number literals.

@nobu : nice idea. By the way, as this proposal is rejected, i do not personally oppose anymore to giving the unary - and + the highest precedence (#7328). I've read that previously the main reason of giving the binary power operator ** higher precedence than the unary - was that "people with mathematical background wanted so". If this is the case, i might be able to convince them to change their mind. Probably many of them are just unaware how - a * b is parsed in Ruby, and that this behavior is not going to change.

This has no relevance to the issue, but I think your point of view is wrong. :-) You should do it the "mathematical" way not only for people with mathematical background but even more so for "normal" people. My common sense as a teacher leads me to believe that a "normal" person would tend to just take a formula from a textbook, e. g. to calculate compound interest or whatever, without ever thinking about subtleties like operator precedence. A mathematician is much more likely to make sure what the exact precedence rules in the used programming language are, or to use parentheses just to make sure, and he would also have less difficulty in adapting to a different set of rules. I observe a related phenomenon with students that happily calculate the volume of a sphere with (({4 / 3 * Math::PI * radius**3})), not thinking about subtleties like integer division. So, stay as close to the mathematical rules as possible. (And not primarily for the mathematician's sake!)

@stomar, do i understand correctly that you insist on reopening this feature request? I wouldn't mind it too, but apparently some people prefer to be able to write -2 * -2 instead of the proper way (-2) * (-2).

No, absolutely not! Such a change would certainly have a bigger impact than I could imagine. I just wanted to make a point for not deviating further from mathematical rules by changing -2**2 to result in 4.

@stomar, as mathematical parsing rules are not and apparently will not be observed in Ruby, i do not understand very well your point of view. Either a programmer/user knows the operation precedence in Ruby, or not. Unless you want to encourage use of Ruby without understanding how an expression the user writes will be parsed, let us assume that the user is required to know that when he/she writes -2 * -2, this actually means (-2) * (-2) for Ruby, and will be executed accordingly. Do you think that remembering at the same time that -2 ** -2 means -(2 ** (-2)) and not (-2) ** (-2) will be easy?
Does this parsing

-2 * -2 -> (-2) * (-2)
-2 ** -2 -> -(2 ** (-2))

make more mathematical sense than this one?

-2 * -2 -> (-2) * (-2)
-2 ** -2 -> (-2) ** (-2)

To me, it would be easier to remember that Ruby always implicitly adds parentheses around an expression of the form -a, than to remember that the unary minus has higher precedence than the binary one, and that for some reason the order of precedence of unary - and binary * is inverted. I do not quite understand in what metric the current Ruby parsing is closer to the subset of "mathematically natural ones" than would be the one proposed in #7328, given that the present proposal stays rejected. :)

Generally, adding "wrong" syntax just for consistency is IMO no option. But, looking over the thread again, I don't see your problem in the first place: - 2 * 3 means (-1) * 2 * 3 = (- 2) * 3 = - (2 * 3), which all results in -6, so what's wrong with that? Furthermore, -2 * -2 is mathematically invalid syntax, you would have to write -2 * (-2), but here also there is no ambiguity involved, - (2 * (-2)) = 4 = (-2) * (-2). The only thing that seems worth mentioning is that Ruby allows the shortcut 2 * - 3 for 2 * (-3).

stomar (Marcus Stollsteimer) wrote:

> Generally, adding "wrong" syntax just for consistency is IMO no option.

Well, when the wrong syntax is already present (- a * b is (- a) * b) i do not think #7328 would make the syntax as a whole "more wrong" (IMO it would even make it right in some sense).

> But, looking over the thread again, I don't see your problem in the first place: - 2 * 3 means (-1) * 2 * 3 = (- 2) * 3 = - (2 * 3), which all results in -6, so what's wrong with that?

The question is about parsing, not about the arithmetic. There is (i believe) no Ruby specification saying that (-a) * b must produce the same result and side-effects as -(a * b). If it was a part of Ruby specification, i wouldn't have reported this issue.

> Furthermore, -2 * -2 is mathematically invalid syntax, you would have to write -2 * (-2), but here also there is no ambiguity involved, - (2 * (-2)) = 4 = (-2) * (-2).

In Ruby, -a * -b is allowed, and means (-a) * (-b), not -(a * (-b)). Hence an ambiguity for an unsuspecting user.

> The only thing that seems worth mentioning is that Ruby allows the shortcut 2 * - 3 for 2 * (-3).

Yes, this is worth mentioning :). If there are other arguments, i think it would be better to continue the discussion in the thread for #7328.

> Well, when the wrong syntax is already present (- a * b is (- a) * b) i do not think #7328 would make the syntax as a whole "more wrong" (IMO it would even make it right in some sense).

1. Changing the precedence of `**` and unary `-` would affect the results and is in discrepancy with mathematical rules.
2. Changing the precedence of `*` and unary `-` would not affect the results and would not have any benefits.
3. The current parsing is NOT wrong, since -a = (-1) * a, so - a * b = (-1) * a * b, and you have essentially the same precedence, so the "natural" way is to go from left to right.

So most essentially, there is no argument for your statement that - (a * b) is a more "natural" order of evaluation than (-a) * b, which was the starting point of your "bug" report.

stomar (Marcus Stollsteimer) wrote:

> 1. Changing the precedence of `**` and unary `-` would affect the results and is in discrepancy with mathematical rules.

There are already discrepancies with mathematical rules, like allowing 2 * -3.

> 2. Changing the precedence of `*` and unary `-` would not affect the results and would not have any benefits.
My request was about changing parsing, i agree that for integers it would not affect the result. The benefit would be to eliminate a discrepancy with mathematical parsing rules and to not mislead a new user who is likely to believe that - a * b means - (a * b). In mathematics, (-1) * 2 * 3 = (- 2) * 3 = - (2 * 3) because it is a property of integers, but - 2 * 3 is a shorthand syntax for - (2 * 3).

> 3. The current parsing is NOT wrong, since -a = (-1) * a, so - a * b = (-1) * a * b, and you have essentially the same precedence, so the "natural" way is to go from left to right.

You are talking about the result (for integers), not about parsing. How is the current parsing not mathematically wrong if - a * b is parsed as (- a) * b? (I repeat, i am talking about parsing.)

I would propose the following experiment to evaluate which parsing would be less surprising to a new Ruby user :). Take 10 new Ruby users who do not know about this discussion and ask them to insert parentheses into the following arithmetic expressions in the way they believe Ruby does it, without experimenting with the interpreter:

- 2 * - 2
- 2 ** - 2
- 2 * - 2 ** - 2
- 2 ** - 2 * - 2

I have come to the opposite opinion. I do not think Ruby must reflect mathematical notation exactly. If you think about it, mathematical notations can be somewhat loose, a bit more like natural language rather than a rigorous syntax. There are plenty of examples. Personally I'd even rather see #** move down a notch so that unary operators always take precedence. I find such a simplification makes it easier to read the code -- official math parsing be damned.

@alexeymuranov: I understood that you were talking about parsing. I refute your primary supposition. In the mathematical expression -2·3, the unary `-` can be interpreted both as a multiplication, (-1)·2·3, and as subtraction, 0 - 2·3. Both interpretations are equally valid. Which one you choose has no effect on the result. I think there is no sense in discussing or speculating further, as long as you do not provide a citation that confirms your mere assumption that the "mathematical parsing rule" (I doubt there is one) is -(a * b). And which one feels more "natural" to you (or to any Ruby user) is no argument at all.
And BTW, I do not think anyone of us is really qualified enough in this field to decide on this issue.

It has to be interpreted one way or the other. This is what operation precedence is for. This is, for example, how Model Theory deals with formulas without all explicit parentheses. This is what parsing is in the sense of formal grammar. I believe that in mathematical language -2·3 is parsed (-(2·3)), as if 0 was taken out of (0-(2·3)). @trans, this is why mathematical notation is not loose, because there is operation precedence taught in elementary school.

I will think about a reference, it seems that Google doesn't know :).

What field are you talking about? I think i am qualified in mathematics, but Ruby is not mathematics, and it is not up to me to decide. I do not mind learning something new anyway.

Exactly, you believe, I do believe differently (namely that there is no general agreement on the precedence because it has no consequences in mathematics). We're both believers, but do not know...

After some thinking, i want to add my last word on this :). I have not seen any "official" rule on parsing "- 2 * 3" (unless i was taught this in elementary school, but not sure). However, i'll try to explain my reasons for believing that "- (2 * 3)" should be the "right" parsing.

In my opinion, it is not a proper way to teach a second-grade child to evaluate "- 2 * 3" by saying:

Hey! (or Yo!) Multiplication is associative and distributes over addition and subtraction, whether you compute "- 2 * 3" as "- (2 * 3)", as "(-2) * 3", or as "((-1)2)3", it will be all the same!

Because if i do, they may remain wondering what "- 2 * 3" means and how to evaluate it. So, i would simply say that the minus in front stands for "0 -", and the evaluation rules are the usual ones: first multiplications and divisions from left to right, then additions and subtractions from left to right. Which gives

- 2 * 3 = 0 - 2 * 3 = 0 - (2 * 3) = - (2 * 3).

Returning back to parsing expressions in programming languages, the reason to parse "2 * - 3" as "2 * (- 3)" is not the operator precedence, but the fact that this expression is "invalid" as written, and fortunately, as "-" can be a unary prefix but "*" cannot be a unary suffix, there exists a unique way to make this expression valid: add parentheses around "- 3".

Just to be sure that my opinion on parsing arithmetic unary minus is not complete nonsense, i asked two colleagues across the hall about their opinions. First they said that in "- 2 * 3" parentheses do not matter, and i can put them any way i like. So i started doubting whether my opinion is not a complete nonsense. However, then i changed my question into asking how to teach a child to compute and how to parse an expression in a programming language, and explained my point of view, and they both agreed. (But maybe Marcus would be able to convince them otherwise.)
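For reference, here is what the four expressions from the proposed experiment actually evaluate to under the current precedence rules (a quick irb check; the Rational results assume Ruby >= 1.9, where Integer#** with a negative exponent returns a Rational):

```ruby
p(-2 * -2)        # => 4       parsed as (-2) * (-2)
p(-2 ** -2)       # => (-1/4)  parsed as -(2 ** -2)
p(-2 * -2 ** -2)  # => (1/2)   parsed as (-2) * (-(2 ** -2))
p(-2 ** -2 * -2)  # => (1/2)   parsed as (-(2 ** -2)) * (-2)
```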
{"url":"https://bugs.ruby-lang.org/issues/7331","timestamp":"2014-04-16T08:15:02Z","content_type":null,"content_length":"44424","record_id":"<urn:uuid:d9b73816-5119-474d-8534-a357a02053ea>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
PLaneT Package Repository : Home > williams > science.plt > package version 4.8 #lang racket ;;; Science Collection ;;; random-source.rkt ;;; Copyright (c) 2004-2011 M. Douglas Williams ;;; This file is part of the Science Collection. ;;; The Science Collection is free software: you can redistribute it and/or ;;; modify it under the terms of the GNU Lesser General Public License as ;;; published by the Free Software Foundation, either version 3 of the License ;;; or (at your option) any later version. ;;; The Science Collection is distributed in the hope that it will be useful, ;;; but WITHOUT WARRANTY; without even the implied warranty of MERCHANTABILITY ;;; or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public ;;; License for more details. ;;; You should have received a copy of the GNU Lesser General Public License ;;; along with the Science Collection. If not, see ;;; <http://www.gnu.org/licenses/>. ;;; ----------------------------------------------------------------------------- ;;; This code adds some additional functionality to the PLT Scheme implementation ;;; of SRFI 27 provided with PLT Scheme V207 (and presumably later versions). ;;; The main additional functionality is to define a parameter, ;;; current-random-source, that provides a separate random stream reference for ;;; each thread. The default value for this random stream reference is ;;; default-random-stream as provided by SRFI 27. A guard procedure ensures that ;;; the value of current-random-source is indeed a random-source, otherwise a ;;; type error is raised. ;;; As of V371.1, there is a new implementation of SRFI 27 in PLT Scheme. The ;;; underlying PLT Scheme random source rountines were modified at some point to ;;; use the same algorithms as SRFI 27. The new SRFI 27 implemtation wrappers ;;; around this functionality. There are a few differences between the old ;;; implementation and the new one that required changes in this module. In ;;; particular, some of the SRFI 27 procedures are now macros. [This has been ;;; reverted in PLT Scheme.] ;;; The following is OBE. ;;; Instead of the set-random-source-state! procedure just being an alias for ;;; random-source-state-set!, it now calls it directly. This was done because ;;; the latter is now a macro and the aliasing does not work. However, this also ;;; breaks the ability to set the state of the default-random-source. ;;; ----------------------------------------------------------------------------- ;;; Version Date Description ;;; 0.9.0 08/05/04 This is the initial release of the random source module to ;;; augment SRFI 27. (MDW) ;;; 1.0.0 09/20/04 Marked as ready for Release 1.0. (MDW) ;;; 1.0.1 07/13/05 Added make-random-source-vector. (MDW) ;;; 1.0.2 10/18/05 Added optional second argument to ;;; make-random-source-vector. (MDW) ;;; 1.0.3 08/24/07 Updated to be compatible with the new SRFI 27 ;;; implementation. (MDW) ;;; 1.0.4 09/12/07 The SRFI 27 implementation is changing back to the same ;;; interface as before, i.e., no macros for the standard ;;; functionality. (MDW) ;;; 2.0.0 11/17/07 Added unchecked version of functions and getting ready for ;;; PLT Scheme V4.0. (MDW) ;;; 2.1.0 06/07/08 More PLT Scheme V4.0 changes. (MDW) ;;; 4.0.0 05/12/10 Changed the header and restructured the code. (MDW) (require srfi/27) ;;; Provide a parameter for the current random source - See PLT ;;; MzScheme: Language Manual, Section 7.7 Parameters. (define current-random-source (lambda (x) (when (not (random-source? 
x)) (raise-type-error 'current-random-source "random-source" x)) ;;; The macros with-random-source and with-new-random-source provide ;;; a convenient method for executing a body of code with a given ;;; random stream. The body is executed with current-random-source ;;; set to the specified random-source. (define-syntax-rule (with-random-source random-source body ...) (parameterize ((current-random-source random-source)) body ...)) (define-syntax-rule (with-new-random-source body ...) (parameterize ((current-random-source body ...)) ;;; (random-uniform-int r n) -> exact-nonnegative-integer? ;;; r : random-source? ;;; n : exact-positive-integer? ;;; (random-uniform-int n) -> exact-nonnegative-integer? ;;; n : exact-positive-integer? ;;; The procedure random-uniform-int returns an integer in the range ;;; 0 ... n-1 using the specified random-source or (current-random- ;;; source) is none is specified. Note that the random-integer and ;;; random-real functions from SRFI 27 DO NOT understand (current- ;;; random-source) and always use default random-source. (define random-uniform-int ((r n) ;; Note that random-source-make-integers returns a procedure ;; that must be applied to get the random integer. Thus the ;; extra set of parentheses. ((random-source-make-integers r) n)) (random-uniform-int (current-random-source) n)))) ;;; (random-uniform r) -> (and/c inexact-real? (real-in 0.0 1.0)) ;;; r : random-source? ;;; (random-uniform) -> (and/c inexact-real? (real-in 0.0 1.0)) ;;; The procedure random-uniform returns a double precision real in ;;; the range (0.0, 1.0) (non-inclusive) using the specified ;;; random-source or (current-random-source) if none is specified. ;;; Note that the random-integer and random-real functions from SRFI ;;; 27 DO NOT understand (current-random-source) and always use ;;; default-random-source. (define random-uniform ;; Note that random-source-make-reals returns a procedure that ;; must be applied to get the random number. Thus the extra ;; set of parentheses. ((random-source-make-reals r))) (random-uniform (current-random-source))))) ;;; Also provide alternatives to random-source-state-ref and ;;; random-source-state-set! from SRFI 27. (define random-source-state random-source-state-ref) (define set-random-source-state! random-source-state-set!) ;The following was needed at a point during the V371 timeframe when the ;SRFI 27 implementation was in flux. Basically, random-source-state-set! ; was implemented as a macro and was not a first-class object. ;(define (set-random-source-state! s state) ; (random-source-state-set! s state)) ;;; (make-random-source-vector n i) -> (vectorof random-source) ;;; n : exact-nonnegative-integer? ;;; i : exact-non-negative-integer? ;;; (make-random-source-vector n) -> (vectorof random-source) ;;; n : exact-nonnegative-integer? (define make-random-source-vector ((n i) (lambda (j) (let ((random-stream (make-random-source))) (random-source-pseudo-randomize! random-stream i j) (lambda (j) (let ((random-stream (make-random-source))) (random-source-pseudo-randomize! random-stream 0 j) ;;; Module Contracts (all-from-out srfi/27) (rename-out (random-uniform-int unchecked-random-uniform-int) (random-uniform unchecked-random-uniform) (random-source-state unchecked-random-source-state) (set-random-source-state! unchecked-set-random-source-state!) (make-random-source-vector unchecked-make-random-source-vector))) (case-> (-> random-source? exact-positive-integer? exact-nonnegative-integer?) (-> exact-positive-integer? 
exact-nonnegative-integer?))) (case-> (-> random-source? (and/c inexact-real? (real-in 0.0 1.0))) (-> (and/c inexact-real? (real-in 0.0 1.0))))) (-> random-source? any)) (-> random-source? any/c void?)) (case-> (-> exact-nonnegative-integer? exact-nonnegative-integer? (vectorof random-source?)) (-> exact-nonnegative-integer? (vectorof random-source?)))))
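A short usage sketch (mine, not part of the package; the require line assumes the PLaneT path and version 4:8 shown in this listing):

```racket
#lang racket
(require (planet williams/science:4:8/random-source))

;; Draw values from an explicit, reproducible source.
(define s (make-random-source))
(random-source-pseudo-randomize! s 0 42)
(with-random-source s
  (displayln (random-uniform-int 6))  ; integer in 0..5
  (displayln (random-uniform)))       ; real in (0.0, 1.0)

;; Snapshot the state of a source and restore it later.
(define saved (random-source-state s))
(set-random-source-state! s saved)
```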
{"url":"http://planet.racket-lang.org/package-source/williams/science.plt/4/8/random-source.rkt","timestamp":"2014-04-18T05:31:03Z","content_type":null,"content_length":"20059","record_id":"<urn:uuid:126416ef-16c4-4387-96f2-38a911c5912c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Dayton electric motor capacitor, help identifying On Sep 8, 11:50*pm, Tony Hwang wrote: Eric in North TX wrote: I have a 2hp Dayton (Grainger) model# *6K393N that has a missing capacitor, I pulled the covers to check them, and one side, the longer cover was empty, just the 2 wires. I'm hoping someone has one with that model # that would be kind enough to look to see what value the missing cap should be. There have to be 1000's of those out there, I got it in a deal a while back, & hadn't tried to use it till the other day, it didn't take long to find the problem, but now to find out what cap it takes..... Google is your friend. Run capacitor is 5 MFD, 370V AC rating. I spent quite a bit of time googleing before resorting to begging for help, can you tell me where you found it. The other cap is there, it is a 16mfd
{"url":"http://www.diybanter.com/home-repair/286614-dayton-electric-motor-capacitor-help-identifying.html","timestamp":"2014-04-19T19:34:30Z","content_type":null,"content_length":"70997","record_id":"<urn:uuid:86074ea4-e7f8-45e3-a692-158406d6dfe3>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
permutation meaning and example

i see, so it's just rearranging the list randomly, right?

No, it is NOT random. It is the set of all possible arrangements. For combinations, on the other hand, order does not matter, so the combinations abc OR bca OR cab, etc. --- they are all counted as the same thing.
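A quick way to see the difference in code (a minimal Python sketch):

```python
from itertools import permutations, combinations

letters = "abc"
# Permutations: order matters, so every arrangement is listed.
print([''.join(p) for p in permutations(letters)])
# -> ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']

# Combinations: order does not matter, so each selection appears once.
print([''.join(c) for c in combinations(letters, 3)])
# -> ['abc']
```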
{"url":"http://www.physicsforums.com/showthread.php?p=4214410","timestamp":"2014-04-20T00:49:29Z","content_type":null,"content_length":"33751","record_id":"<urn:uuid:c93f5b20-a376-45c4-b8f3-217e548c45c9>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
and Computation in General Logic

Results 1 - 10 of 14

, 1996
Cited by 217 (44 self)
We present the linear type theory LLF as the formal basis for a conservative extension of the LF logical framework. LLF combines the expressive power of dependent types with linear logic to permit the natural and concise representation of a whole new class of deductive systems, namely those dealing with state. As an example we encode a version of Mini-ML with references including its type system, its operational semantics, and a proof of type preservation. Another example is the encoding of a sequent calculus for classical linear logic and its cut elimination theorem. LLF can also be given an operational interpretation as a logic programming language under which the representations above can be used for type inference, evaluation and cut-elimination. 1 Introduction A logical framework is a formal system desig... [Appeared in the proceedings of the Eleventh Annual IEEE Symposium on Logic in Computer Science --- LICS'96 (E. Clarke, editor), pp. 264--275, New Brunswick, NJ, July 27--30, 1996.]

, 1991
Cited by 175 (50 self)
... this paper we describe Elf, a meta-language intended for environments dealing with deductive systems represented in LF. While this paper is intended to include a full description of the Elf core language, we only state, but do not prove here the most important theorems regarding the basic building blocks of Elf. These proofs are left to a future paper. A preliminary account of Elf can be found in [26]. The range of applications of Elf includes theorem proving and proof transformation in various logics, definition and execution of structured operational and natural semantics for programming languages, type checking and type inference, etc. The basic idea behind Elf is to unify logic definition (in the style of LF) with logic programming (in the style of Prolog, see [22, 24]). It achieves this unification by giving types an operational interpretation, much the same way that Prolog gives certain formulas (Horn-clauses) an operational interpretation. An alternative approach to logic programming in LF has been developed independently by Pym [28]. Here are some of the salient characteristics of our unified approach to logic definition and metaprogramming. First of all, the Elf search process automatically constructs terms that can represent object-logic proofs, and thus a program need not construct them explicitly. This is in contrast to logic programming languages where executing a logic program corresponds to theorem proving in a meta-logic, but a meta-proof is never constructed or used and it is solely the programmer's responsibility to construct object-logic proofs where they are needed. Secondly, the partial correctness of many meta-programs with respect to a given logic can be expressed and proved by Elf itself (see the example in Section 5). This creates the possibilit...

, 1999
Cited by 70 (13 self)
Research in dependent type theories [M-L71a] has, in the past, concentrated on its use in the presentation of theorems and theorem-proving. This thesis is concerned mainly with the exploitation of the computational aspects of type theory for programming, in a context where the properties of programs may readily be specified and established. In particular, it develops technology for programming with dependent inductive families of datatypes and proving those programs correct. It demonstrates the considerable advantage to be gained by indexing data structures with pertinent characteristic information whose soundness is ensured by typechecking, rather than human effort. Type theory traditionally presents safe and terminating computation on inductive datatypes by means of elimination rules which serve as induction principles and, via their associated reduction behaviour, recursion operators [Dyb91]. In the programming language arena, these appear somewhat cumbersome and give rise to unappealing code, complicated by the inevitable interaction between case analysis on dependent types and equational reasoning on their indices which must appear explicitly in the terms. Thierry Coquand's proposal [Coq92] to equip type theory directly with the kind of

, 1994
Cited by 68 (10 self)
LEGO is a computer program for interactive typechecking in the Extended Calculus of Constructions and two of its subsystems. LEGO also supports the extension of these three systems with inductive types. These type systems can be viewed as logics, and as meta languages for expressing logics, and LEGO is intended to be used for interactively constructing proofs in mathematical theories presented in these logics. I have developed LEGO over six years, starting from an implementation of the Calculus of Constructions by Gérard Huet. LEGO has been used for problems at the limits of our abilities to do formal mathematics. In this thesis I explain some aspects of the meta-theory of LEGO's type systems leading to a machine-checked proof that typechecking is decidable for all three type theories supported by LEGO, and to a verified algorithm for deciding their typing judgements, assuming only that they are normalizing. In order to do this, the theory of Pure Type Systems (PTS) is extended and f...

- In Sixth Annual IEEE Symposium on Logic in Computer Science, 1991
Cited by 61 (15 self)
We present algorithms for unification and antiunification in the Calculus of Constructions, where occurrences of free variables (the variables subject to instantiation) are restricted to higher-order patterns, a notion investigated for the simply-typed λ-calculus by Miller. Most general unifiers and least common antiinstances are shown to exist and are unique up to a simple equivalence. The unification algorithm is used for logic program execution and type and term reconstruction in the current implementation of Elf and has shown itself to be practical. The main application of the anti-unification algorithm we have in mind is that of proof generalization. 1 Introduction Higher-order logic with an embedded simply-typed λ-calculus has been used as the basis for a number of theorem provers (for example [1, 19]) and the programming language λProlog [16]. Central to these systems is an implementation of Huet's pre-unification algorithm for the simply-typed λ-calculus [12] which has shown it...

- Proceedings of the 11th International Conference on Automated Deduction, 1992
Cited by 32 (9 self)
We exhibit a methodology for formulating and verifying metatheorems about deductive systems in the Elf language, an implementation of the LF Logical Framework with an operational semantics in the spirit of logic programming. It is based on the mechanical verification of properties of transformations between deductions, which relies on type reconstruction and schema-checking. The latter is justified by induction principles for closed LF objects, which can be constructed over a given signature. We illustrate our technique through several examples, the most extensive of which is an interpretation of classical logic in minimal logic through a continuation-passing-style transformation on proofs. 1 Introduction Formal deductive systems have become an important tool in computer science. They are used to specify logics, type systems, operational semantics and other aspects of languages. The role of such specifications is three-fold. Firstly, inference rules serve as a high-level notation w...

- Journal of Logic and Computation, 1999
Cited by 23 (7 self)
Linear and other relevant logics have been studied widely in mathematical, philosophical and computational logic. We describe a logical framework, RLF, for defining natural deduction presentations of such logics. RLF consists in a language together, in a manner similar to that of Harper, Honsell and Plotkin's LF, with a representation mechanism: the language of RLF is the λΛ-calculus; the representation mechanism is judgements-as-types, developed for relevant logics. The λΛ-calculus type theory is a first-order dependent type theory with two kinds of dependent function spaces: a linear one and an intuitionistic one. We study a natural deduction presentation of the type theory and establish the required proof-theoretic meta-theory. The RLF framework is a conservative extension of LF. We show that RLF uniformly encodes (fragments of) intuitionistic linear logic, Curry's λI-calculus and ML with references. We describe the Curry-Howard-de Bruijn correspondence of the λΛ-calculus with a s...

, 1999
Cited by 8 (6 self)
The λΛ-calculus is a dependent type theory with both linear and intuitionistic dependent function spaces. It can be seen to arise in two ways. Firstly, in logical frameworks, where it is the language of the RLF logical framework and can uniformly represent linear and other relevant logics. Secondly, it is a presentation of the proof-objects of BI, the logic of bunched implications. BI is a logic which directly combines linear and intuitionistic implication and, in its predicate version, has both linear and intuitionistic quantifiers. The λΛ-calculus is the dependent type theory which generalizes both implications and quantifiers. In this paper, we describe the categorical semantics of the λΛ-calculus. This is given by Kripke resource models, which are monoid-indexed sets of functorial Kripke models, the monoid giving an account of resource consumption. We describe a class of concrete, set-theoretic models. The models are given by the category of families of sets, parametrized over a small monoidal category, in which the intuitionistic dependent function space is described in the established way, but the linear dependent function space is described using Day's tensor product.

, 1998
Cited by 6 (1 self)
We give a canonical program refinement calculus based on the lambda calculus and classical first-order predicate logic, and study its proof theory and semantics. The intention is to construct a metalanguage for refinement in which basic principles of program development can be studied. The idea is that it should be possible to induce a refinement calculus in a generic manner from a programming language and a program logic. For concreteness, we adopt the simply-typed lambda calculus augmented with primitive recursion as a paradigmatic typed functional programming language, and use classical first-order logic as a simple program logic. A key feature is the construction of the refinement calculus in a modular fashion, as the combination of two orthogonal extensions to the underlying programming language (in this case, the simply-typed lambda calculus). The crucial observation is that a refinement calculus is given by extending a programming language to allow indeterminate expressions (or 'stubs') involving the construction 'some program x such that P'. Factoring this into 'some x...'

- Theoretical Computer Science, 2000
Cited by 2 (1 self)
We introduce the main concepts and problems in the theory of proof-search in type-theoretic languages and survey some specific, connected topics. We do not claim to cover all of the theoretical and implementation issues in the study of proof-search in type-theoretic languages; rather, we present some key ideas and problems, starting from well-motivated points of departure such as a definition of a type-theoretic language or the relationship between languages and proof-objects. The strong connections between different proof-search methods in logics, type theories and logical frameworks, together with their impact on programming and implementation issues, are central in this context.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1936245","timestamp":"2014-04-20T17:29:37Z","content_type":null,"content_length":"40716","record_id":"<urn:uuid:a4e8ab96-1643-410c-bba6-f1e227ab2497>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
D.N.Perera, P.Harrowell, A two dimensional glass: microstructure and dynamics of a 2D binary mixture, J.Non-Cryst.Solids 235-237, 314-319, 1998. D.N.Perera, P.Harrowell, Solute-enhanced diffusion in a dense two-dimensional liquid, Phys.Rev.Lett. 80, 4446-4449, 1998. D.N.Perera, P.Harrowell, Origin of the temperature dependences of diffusion and structural relaxation in a supercooled liquid, Phys.Rev.Lett. 81, 120-12, 1998.
{"url":"http://anusf.anu.edu.au/annual_reports/annual_report98/Appendix_B/E_Harrowell_g79_98.html","timestamp":"2014-04-23T21:48:37Z","content_type":null,"content_length":"9462","record_id":"<urn:uuid:c926afcb-c0ff-4701-a014-7b5676d40e20>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear transformation question

April 1st 2010, 02:24 PM #1
Junior Member
Sep 2008

Linear transformation question

I am having a problem with this question:

T: P2 ---> R^2, T(a+bx+cx^2) = (a,b)

Find a basis for the kernel of T and a basis for the image of T.

According to the textbook, Kernel = { v : T(v) = 0 }, so the kernel should look like T(a+bx+cx^2) = (0,0). How exactly do you find the basis?

As for the image, to my understanding, is it the entire vector space P2 mapped into a subspace of R^2?

April 1st 2010, 02:27 PM #2

Quote: I am having a problem with this question: T: P2 ---> R^2, T(a+bx+cx^2) = (a,b). Find a basis for the kernel of T and a basis for the image of T. ...

Come on. Let $(a,b)\in\mathbb{R}^2$ be arbitrary. What is $T(a+bx)$? Also, $T(a+bx+cx^2)=(a,b)=(0,0)\implies a=b=0$
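For reference, here is the worked version of that hint (standard reasoning, spelled out beyond what the thread states):

$T(a+bx+cx^2)=(a,b)=(0,0)$ forces $a=b=0$ with $c$ free, so $\ker T=\{cx^2 : c\in\mathbb{R}\}$, with basis $\{x^2\}$.

Since $T(a+bx+0x^2)=(a,b)$ hits every pair, $\operatorname{im} T=\mathbb{R}^2$, with basis $\{(1,0),(0,1)\}$.

As a check, rank-nullity gives $\dim P_2 = 3 = 1 + 2$.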
{"url":"http://mathhelpforum.com/advanced-algebra/136886-linear-transformation-question.html","timestamp":"2014-04-17T02:19:28Z","content_type":null,"content_length":"34230","record_id":"<urn:uuid:8084f797-5e26-41dd-b00c-1ad155e292ec>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Experiment with forces - Formula question

> act a force of 7N on it

This was a vertically-directed force? What arrangement did you use to cause 7N to continuously act on the ball while it was falling? How did you neutralize gravity for your experiment?

> the time of ball going down was 3s i wanted to find the high and what i did is: ##high=force acted * time / mass##

So you invented your own equation? I'm surprised, too, but can't explain it. The equation you need is: ##s = ut + \frac{1}{2}at^2##
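For what it's worth, if the ball simply falls from rest (so ##u = 0##) for ##t = 3\,\text{s}## under gravity alone -- ignoring the unexplained 7 N force -- that formula gives ##s = \frac{1}{2}gt^2 = \frac{1}{2}(9.8\,\text{m/s}^2)(3\,\text{s})^2 \approx 44\,\text{m}##. Note also that the invented formula doesn't even have units of length: force × time / mass comes out in m/s, a velocity.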
{"url":"http://www.physicsforums.com/showthread.php?t=585040","timestamp":"2014-04-21T12:21:58Z","content_type":null,"content_length":"31739","record_id":"<urn:uuid:14e094a1-de87-4b0d-b91f-3342a29ace59>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
From OpenWetWare

Lotka-Volterra Modelling

First, we considered the Lotka-Volterra model and its consequences before actually modelling our system, because of the similarities.

• Document on how to effectively extract parameters from the Michaelis-Menten model, by Matthieu Bultelle

Theoretical Analysis

Analysis of the ODEs of the given system - a mathematical approach to study and characterise the system's behaviours

• To have a basic understanding of the Lotka-Volterra (LV) equations and how they result in an unstable oscillation
• To study the key parameters and their effects on the amplitude and the frequency of the output oscillation
• To further modify the LV equations to model our real system
• To further study the modified model based on the LV model
• To propose a set of realistic parameters which will achieve the system design aim

Progress:

1. Analysis report of the past progress --31/07/06
• We had underestimated the complexity within the simple Lotka-Volterra model
• However, we did gain a basic understanding of the LV ODEs; the following is the abstract of the above document:
  • Vector field - a powerful way to represent the system behaviour
  • Control modelling - using Simulink in Matlab
    □ We used control modelling techniques to model and simulate the system
    □ From the above model, we are able to change the parameters and initial conditions to study the behaviour of the system
  • We found that the LV system is not as easy as expected; in fact, it is very sensitive to noise - any small perturbation will trigger the system into a different cycle
  • It is also impossible to characterise the effect of changing a parameter on the output waves' frequency and amplitude.
  • However, the result from Simulink and Matlab programming was not promising; the output seems to be noisy. Simulink is very complex itself, and we failed to apply any advanced control method to simplify our system, hence the method of using Simulink was discarded

2. Back to simplicity -- another new Matlab programme has been done; we just used ode23 to solve the LV differential equations
• To avoid the complexity, we will only vary one parameter at a time, to find out about the resultant change in the output period and frequency
• Since, as mentioned above, the system behaviour might be totally different, here we are going to study a typical set of parameters, and hopefully we can use this approach to create a template to study our real system parameters.

Result:

-- click here for the first report --02/08/06
• This document shows graphs of the magnitude of the output against the changing of parameters. A typical graph:
Result:
-- click here for the first report (02/08/06)
• This document shows graphs of the magnitude of the output against the changing of parameters. A typical graph is shown there.
-- click here for the second report (04/08/06)
• This document is an improvement over the first report. A general trend can be concluded from this document; a few chosen graphs are shown there.
-- click here for the third report (07/08/06)
• This document studies the effect of changing the initial conditions; a few chosen graphs are shown there.
-- click here for the fourth report (07/08/06)
• This document verifies the local behaviours of the parameters by using a smaller range.
-- click here for the fifth report (10/08/06)
• Further verifies the local effects of a single parameter while setting the rest of the parameters to different values. The presentation of results has changed to a table format.
-- click here for the sixth report (10/08/06)
• Once we confirmed the local behaviour and chose a set of parameters, we ran a stress analysis to study how sensitive the system is to each parameter.

Conclusion:
• We have done the numerical analysis with a given set of ODEs and parameters
• We should now be able to analyse, predict and choose a set of parameters to achieve our system design aim
• However, as the ODE system becomes more and more complicated, we will need, if possible, a theoretical approach to understand how the system should behave before analysing it numerically

3. Jacobian analysis of stationary points for the stability of oscillation (click here, 04/08/06)

Method to determine the stability of a stationary point:
• Formulate the differential equations.
• Find the stationary points by setting the differential equations to 0.
• Form the Jacobian matrix using partial differentiation of the differential equations.
• Substitute the stationary point values into the matrix and obtain its eigenvalues.
• Determine whether the stationary point is stable.

The above document shows how a Matlab programme is used to generate the required data. We now have a Matlab tool to study a given set of ODEs; a worked instance of the recipe follows.
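As a worked instance of the recipe above (standard material, stated for the conventional LV parameterisation given earlier rather than the reports' own symbols): the LV system has stationary points (0, 0) and (d/c, a/b), with Jacobian

\[
J(x, y) = \begin{pmatrix} a - b y & -b x \\ c y & c x - d \end{pmatrix},
\qquad
J\!\left(\frac{d}{c}, \frac{a}{b}\right) = \begin{pmatrix} 0 & -\frac{b d}{c} \\ \frac{a c}{b} & 0 \end{pmatrix}.
\]

The eigenvalues at the interior point are \(\pm i\sqrt{a d}\), purely imaginary, so the linearisation is a centre rather than an attracting limit cycle. This is consistent with the noise sensitivity noted in the first report: any small perturbation moves the trajectory onto a neighbouring closed orbit.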
4. Introduction to the analysis of 2-D ODEs (click here, 14/08/06)
• Step-by-step analysis of the pure Lotka-Volterra ODEs, with detailed explanations of every step taken:
  • Stationary point: a point at which the system reaches a steady state (equilibrium)
  • Jacobian matrix: the best linear approximation to a differentiable function near a given point; it can be used to determine whether these points are stable
  • Eigenvalues of the Jacobian matrix: the signs of the real parts of the eigenvalues determine whether a given stationary point is stable
  • Hence we can use the trace and determinant of the Jacobian matrix to determine the stability of 2-D ODEs
  • Vector field representation: a powerful numerical method to help us visualise the stability of a given point
  • Thus we have defined our approach to studying a given set of ODEs, with appropriate explanations
-- revised version 1 (click here, 21/08/06)
• An improved version: rephrased various definitions, and added the reason for using eigenvalues to determine the stability of a stationary point.

5. Study and characterisation of a limit cycle (click here, 06/09/06)
• Shows a typical set of ODEs that results in a nice limit cycle; this should be the aim of our modelling. There is also a short discussion on how to characterise the limit cycle.

6. Stability analysis of modified Lotka-Volterra systems
• The pure Lotka-Volterra model is too ideal to achieve; after our careful analysis, we think that our model should be modified to be similar to Michaelis-Menten kinetics.
-- Version 1: production of the prey is modelled by pseudo Michaelis-Menten kinetics (click here, 16/08/06)
• After the analysis, there are two stationary points, [0, 0] and [d/c, V*c/(b*(k*c+d))], the first being unstable and the second being stable
-- Version 2: production of both the prey and the predator is modelled by pseudo MM kinetics (click here, 21/08/06)
• There are also two stationary points
• The following graphs show the cases k < m and k > m
-- Version 3: production and death of the prey are modelled by pseudo MM kinetics (click here, 06/09/06)
-- Version 4: combination of Versions 1, 2 and 3 (click here, 06/09/06)
-- Version 5: with E (click here)

Further study of the complicated modelling of Version 4

Special thanks to Matthieu Bultelle for all the mathematical analysis of this model!
-- Trial 1: defining R = B*C/D to simplify the stability analysis; click here for the analysis and click here for an appendix of full-size graphs (06/09/06)
-- Trial 2: interesting cases of Trial 1; click here for the cases with R = 1 and click here for an interesting, nice limit cycle (06/09/06)
-- Trial 3: re-defining R = C/D for better system analysis; click here for the analysis and click here for an appendix of full-size graphs (07/09/06)

List of templates designed by Matthieu Bultelle:
-- Part 1: Dynamic analysis of a system (click here)
-- Part 2: Poincare analysis for limit cycles (click here)
-- Part 3: Theoretical model analysis (click here)
-- Part 4: Presentation of results (click here)

Template for analysing dynamical systems: IGEM:IMPERIAL/2006/project/modelling_template

A powerful Java applet: Molecular Prey-Predator using JOde, with parameters a=1; a0=1; b=5; b0=0.5; c=0.01; c0=1; d=0.02; e=0.0
Instructions on using the JOde applet (applet written by Marek Rychlik)
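For readers without Matlab or Java, here is a minimal sketch (an editor's illustration, not project code) of the kind of experiment the reports above describe: fix a parameter set, integrate the LV equations, and inspect the oscillation. The team used Matlab's ode23; this toy uses a crude fixed-step Euler integrator, and the parameter values are placeholders, not the team's.

    -- Forward Euler for dx/dt = a*x - b*x*y, dy/dt = c*x*y - d*y
    lvStep :: Double -> (Double, Double) -> (Double, Double)
    lvStep h (x, y) = ( x + h * (a*x - b*x*y)
                      , y + h * (c*x*y - d*y) )
      where (a, b, c, d) = (1.0, 0.5, 0.5, 1.0)  -- placeholder parameters

    -- 20 time units at step 0.001, printing every 1000th point
    main :: IO ()
    main = mapM_ print (thin 1000 (take 20000 (iterate (lvStep 0.001) (2.0, 1.0))))
      where
        thin _ []     = []
        thin n (p:ps) = p : thin n (drop (n - 1) ps)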
{"url":"http://openwetware.org/index.php?title=IGEM:IMPERIAL/2006/project/Oscillator/Modelling/LV&oldid=80686","timestamp":"2014-04-18T16:13:59Z","content_type":null,"content_length":"40940","record_id":"<urn:uuid:08611228-431a-41d9-8c8e-e6312d90a4b7>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
7 projects tagged "Algorithms"

C Almost Generic Library (CAGL) is a set of C macros that generates typed arrays, lists (singly or doubly linked), hash tables, and balanced binary trees, as well as many useful functions to manipulate them. The containers grow automatically, and their memory is managed by the library. The container data, or elements, may also be managed by the library, depending on the options specified by the programmer. The aim is to free C programmers from the drudgery of implementing common data structures and algorithms. CAGL also provides some safety by making the containers typed rather than handled through void pointers. Although at most two macros are invoked to declare and define a container type, manipulation of the containers is done using functions generated by the macros. A simple naming convention is used to get around the limitation that C doesn't support function overloading.
{"url":"http://freecode.com/tags/algorithms?page=1&with=66&without=","timestamp":"2014-04-20T12:21:14Z","content_type":null,"content_length":"55146","record_id":"<urn:uuid:4b0a01af-d3c2-4bcf-be6e-033802fbe64c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
25 Oct 22:44 2012

Building all possible element combinations from N lists.

dokondr <dokondr <at> gmail.com>
2012-10-25 20:44:31 GMT

Hi all,

I am looking for an algorithm and code better than the one I wrote (please see below) to solve the problem given in the subject. Unfortunately, I will eventually need to implement this algorithm in Java. That's why I am interested not only in beautiful Haskell algorithms, but also in one that I can implement without much pain in a procedural PL like Java :(

Build all possible element combinations from N lists. A valid combination consists of k <= N elements, where each element of a single combination is taken from one of the N lists. When building a combination, any list can give only one element to this combination. In other words, you cannot take more than one element from one and the same list when building the current combination. Thus combinations between elements from the same input list are not allowed. Yet when building the next combination you can use any of the lists again. The order of elements in a combination does not matter.

For example:
input = [ ...
output = ... (see the bigger example at the end of the code)

The current implementation is based on the idea of using a combinations accumulator. After the first run the accumulator contains:
1) All elements from the first two lists. These two lists are taken from the input list of lists. The elements of these first two lists are put in the accumulator as is. For example: ["a"], ["b"], ["1"], ["2"]
2) All combinations of these elements: ["a","1"], ["a","2"], ["b","1"], ["b","2"]
Note: combinations between elements from the same input list are not allowed.
Next, on each run we add combinations built from the accumulator elements together with the next list taken from the input list of lists.

input = [ ...

addAll :: [[[a]]] -> [[a]]
addAll [] = []
addAll (x:[]) = x
addAll (x:y:[]) = accum x [y]
addAll inp = accum (initLst inp) (restLst inp)

initLst inp = fstLst ++ sndLst ++ (addLst fstLst sndLst)
    where
      fstLst = inp !! 0
      sndLst = inp !! 1

restLst inp = (tail . tail) inp

accum :: [[a]] -> [[[a]]] -> [[a]]
accum xs (r:rs) = accum (xs ++ r ++ addLst xs r) rs
accum xs _ = xs

addLst :: [[a]] -> [[a]] -> [[a]]
addLst (x:xs) ys = addOne x ys ++ addLst xs ys
addLst _ [] = []
addLst [] _ = []

addOne :: [a] -> [[a]] -> [[a]]
addOne x (y:ys) = [x ++ y] ++ addOne x ys
addOne x [] = []

For example:
input = [ ...
output = ...
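(An editor's note, not part of the thread: the whole task collapses to two standard library functions. subsequences chooses which of the N lists contribute an element, and sequence takes the cross-product that picks exactly one element from each chosen list.)

    import Data.List (subsequences)

    addAll' :: [[[a]]] -> [[a]]
    addAll' lists =
        [ concat combo
        | sub   <- subsequences lists  -- which lists contribute
        , not (null sub)               -- require k >= 1
        , combo <- sequence sub        -- one element from each chosen list
        ]

The same shape ports directly to Java: loop over the 2^N - 1 nonempty subsets of the lists, and enumerate the Cartesian product of each subset.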
{"url":"http://comments.gmane.org/gmane.comp.lang.haskell.cafe/101137","timestamp":"2014-04-19T19:53:06Z","content_type":null,"content_length":"22526","record_id":"<urn:uuid:c4a8853a-24f8-4558-8f2d-46880399a313>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
NCERT Solutions for Class 9th Maths: Chapter 14 Statistics

National Council of Educational Research and Training (NCERT) Book Solutions for Class 9
Subject: Maths
Chapter: Chapter 14 – Statistics

The Class 9 Maths Chapter 14 Statistics NCERT solution is given below. Click here to view solutions for all chapters of Class 9th Maths.
{"url":"http://schools.aglasem.com/?p=1754","timestamp":"2014-04-18T10:34:05Z","content_type":null,"content_length":"76951","record_id":"<urn:uuid:526f2160-de5b-4526-9f6b-07fa58319d31>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Wellington, FL Algebra 2 Tutor
Find a Wellington, FL Algebra 2 Tutor

...I have taught Mathematics and Physics in high school and college for over 10 years, and I have also taught Spanish privately in recent years. I have a degree in Physics and post-degree studies in Geophysics and Information Systems. My background as a physicist, my experience as a teacher in college...
10 Subjects: including algebra 2, Spanish, physics, geometry

...I have worked with children as they prepare for kindergarten and as they advance through elementary school. I have also done some work helping children with learning disabilities. I like to follow Montessori methods of teaching, but also believe that learning concepts by memorization is necessary to establish the fundamentals of learning.
41 Subjects: including algebra 2, reading, chemistry, calculus

...Before my retirement I worked at a prestigious preparatory school teaching Multivariable Calculus, Calculus (Honors and AP), Pre-Calculus, and Algebra 2. I specialized in building curricula that integrated graphing calculator technology. Since my retirement in 2009, I have been providing privat...
6 Subjects: including algebra 2, calculus, geometry, algebra 1

I am a highly experienced mathematics tutor and certified teacher with over 10 years of experience helping students succeed academically in advanced math subjects such as Algebra 1 & 2, Geometry, Pre-Calculus, Trigonometry, Calculus, SAT/ACT math, and beyond. I hold bachelors degrees in electrical ...
14 Subjects: including algebra 2, calculus, geometry, statistics

I have been tutoring in Boca Raton for the last 10 years and references would be available on request. I mostly tutor Math, junior high and high school subjects, and I also tutor college prep, both ACT and SAT. I have also tutored SSAT.
10 Subjects: including algebra 2, geometry, algebra 1, SAT math
{"url":"http://www.purplemath.com/Wellington_FL_algebra_2_tutors.php","timestamp":"2014-04-16T16:11:30Z","content_type":null,"content_length":"24481","record_id":"<urn:uuid:31fd0ce0-6d75-4784-af42-15174cb233bc>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
The Geomblog

Ed Pegg has a nice page describing some of the effort put in by CBS to achieve mathematical verisimilitude on Numb3rs. Some of the salient points:
• After CBS finished the pilot, they sent out letters to many mathematicians, asking for help in getting the math right (Ed was one of those who responded).
• They previewed the pilot at the 2005 Joint Math Meeting in Atlanta (the first TV pilot shown at a math conference!)
• They have mathematicians as advisors on the show.
I promise not to bore you with more Numb3rs posts (till next Friday at least).

I only just got around to watching the first two episodes of Numb3rs. My plan (for as long as I can stand it) is to record any mathematical nuggets of interest that are dropped in the show. Overall, the series is decent as a CSI-type action thriller. There is far more action than I had anticipated, which is probably a good thing: makes it more crowd-friendly. I don't know if it will last very long: the acting is a bit uneven so far, and the writers have introduced plot lines that need to be explored further (the tension between Don and his boss; the (non) romance between Charlie and his "graduate student"...)

Peeve alert: Apparently this grad student is from Madras, and her parents are arranging her marriage with a Hindu banker from Goa. Just Hindu ? Not a Vadama Iyer brahmin of an appropriate gotram ? And from Goa, of all places ? They eat meat there, for crying out loud. In any case, who lives in Goa anyway ? I thought only foreign tourists live there.

So here's the setup. Don is the FBI agent, Charlie is his "math genius almost 30yrold brother" with degrees from MIT and Stanford (not Princeton ?). There are various other irrelevant characters like the Obligatory Female Sidekick (OFS), Token agent of color (TAC), and gruff but caring dad (GCD). I don't expect to mention them much. There is also the crazy string theorist/mentor for Charlie, who for some reason needs help with his math, and has to suffer cracks like "Feynman and Witten did their own math". WARNING: there will be spoilers; don't read this if you have a problem with that.

1. Pilot: The main plot line revolves around determining from a set of points (locations of murders by a serial killer) the home base of the killer. Much rambling about sprinklers and how from the location of the drops one can reconstruct the location of a rotating sprinkler. The key dramatic element is familiar to anyone who knows clustering: Make sure you know what the "right" number of cluster centers is !!! Short story: they thought the killer had one base, but he actually had two.

Mathematical jargon: Not too bad overall. They even resisted the urge to claim that all mathematicians atrophy after age 30, merely having the mentor point out that Charlie was at the peak of his talents and most mathematicians have only 5-6 good years in them. There was some good stuff about the difficulty of humanly generating true randomness, and the usual bashing of the lottery (Don's boss makes fun of Charlie, but has a lottery ticket in his pocket). A useful mention of Galois (the mentor was worried that Charlie was getting distracted by "unclean" human problems). One of the writers on the show has been reading blogs. There was a line that looked like it came from Sean Carroll's Preposterous Universe: "Why do we remember the past and not the future". Charlie allegedly "built a weak force theory" with less than 96% probability. Whatever that means...
2. Uncertainty: If there was any doubt in your mind that you should watch this series, this episode should dispel all of that. This, my friends, is the famous P vs NP episode (earlier than episode 4!)

Plot: Charlie has a new equation that predicts, given a series of bank robberies, where the next one will be. When the episode opens, he is being rather cocky about it. As it turns out he is right about the next location, but the standoff ends in a bloodbath and Don is injured, shocking Charlie so much he goes into a deep depression. As it turns out, the robbers are after something more.

Mathematical jargon: The first inkling of what is to come appears after Charlie returns home, shell-shocked after seeing his brother wounded. He assembles blackboards like a madman in the garage, and as he walks by one board, you see the magic words: "SUBSET SUM". A few minutes later, you see another board covered with VERTEX COVER and COLORING and CLIQUE, with lines connecting them. Be still, my beating heart.... Mentor dude walks in, and Charlie first states a theorem (the first of the series): Minesweeper consistency is NP-Complete. And then he mumbles something about 3SAT and coloring algorithms. Finally Don arrives, and the mystery is solved. It turns out that Charlie works on P vs NP whenever he's depressed. As OFS puts it: 'beats binge drinking and strip clubs'. Charlie is apparently so obsessed with this that when his mother was dying of cancer he spent the entire time working on it. By the end of the episode all are happy and Charlie has realized that instead of working on P vs NP, he will work on problems that he has some hope of solving. Now there's a lesson for researchers everywhere !

What warmed the cockles of my heart was the constant referring to P vs NP as a "famous math problem". Now I would have been a little happier if this had been referred to as a "famous computer science problem", but the thought of "pure" mathematicians everywhere being driven crazy by this statement is good enough for me :)

Tags: numb3rs, review

Most of us are familiar with "Zipfian distributions" or "heavy-tailed" distributions, in which the probability of occurrence of the i-th most frequent element is roughly proportional to 1/i^a, where a is a constant close to 1. A related "law" is Benford's law, which states that

On a wide variety of statistical data, the first digit is d with probability log10(1 + 1/d).

An interesting discussion of this law reveals some underlying structure, specifically that if there is any law governing digit distributions, then it is scale invariant iff it is given by Benford's law.

Some other related laws that are more amusing, if less mathematically interesting, are:

Lotka's Law: The number of authors making n contributions is about 1/n^a of those making one contribution, where a is often nearly 2.

Bradford's Law: Journals in a field can be divided into three parts, each with about one-third of all articles: 1) a core of a few journals, 2) a second zone, with more journals, and 3) a third zone, with the bulk of journals. The number of journals is 1:n:n^2.
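A quick sanity check of Benford's formula (an editor's throwaway, not from the post): the nine first-digit probabilities are one line of Haskell, and they sum to 1 as a digit law must.

    -- Benford first-digit probabilities, P(d) = log10(1 + 1/d) for d = 1..9
    benford :: [Double]
    benford = [ logBase 10 (1 + 1/d) | d <- [1..9] ]
    -- roughly [0.301, 0.176, 0.125, 0.097, 0.079, 0.067, 0.058, 0.051, 0.046],
    -- and sum benford is 1.0 up to floating-point rounding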
As of today, I now have over 100,000 page views (starting from Mar 31). Phew....

The National Geographic Magazine has a spread on coffee, "the world's most popular psychoactive drug". Be sure to check out the multimedia tour of coffee organized by photographer Bob Sacha. And do the quiz. You'll be surprised at some of the answers. Courtesy Karen Ho.

You just can't win: when a conference is in the US, people have a hard time getting into the country, and when it isn't, people have a hard time getting back into the country. At least one SODA PC member had visa trouble precluding their coming to Vancouver. Moreover, one of the Best Student Paper awardees (Ravikrishna Kolluri) couldn't make it because of visa issues: someone else had to give the talk in his place.

It's nice to have outsiders peep into our cloistered theory community from time to time. At dinner, I was trying to explain some of the business meeting discussion to John Baez, and the increasingly obvious look of incredulity on his face and my ever-more-convoluted attempts to explain myself soon made it apparent that something was seriously wrong. Pretty much all of the annual handwringing we do over short/long/submissions/acceptance/quality/reviews/PC load and what have you would just disappear in a "publish in journal/talk in conference" model. I realize that this is far too radical (though not innovative) an idea to ever be implemented any time soon, but you have to wonder why we in algorithms (and in computer science in general) put ourselves through such misery for no good reason. I have yet to hear a convincing reason why our model is superior to the math model [1]. Admittedly our business meetings would be a lot shorter (not to mention less beer), but I am willing to pay this price :)

[1] The only reason I have heard is that our journals take too long to publish papers, but that assumes that this would not change if we had no conference reviewing.

Adam Buchsbaum now has slides for his SODA 2005 business meeting presentation. After opening remarks and some basic statistics, he looked at acceptance rates across topics, and found that overall there is no advantage to be gained by focusing on specific areas. He thankfully skipped the now de rigueur "silly" section of the business meeting, where exotic formulae and elaborate paper titles that optimize acceptance are proposed. The next section discussed diversity on the PC and in accepted papers, recapping the statistics that I talked about earlier.

Why Are Submissions Going Up ?

Next, he addressed what is a growing concern: increased submission levels and how to deal with them (this has become a serious enough problem that the ACM has a task force investigating the matter). His data comprises submission/acceptance information from 1999 forward (although for 1999 he only had long paper submission/acceptance rates). The most striking fact is the steady increase in submissions over the past 4 years. Note that this is not accounted for by the start of short paper submissions in 1999; current submission levels are well beyond that. Most of the increase can be attributed to an increased number of long submissions (but this will be addressed below). The main takeaway from this graph and from the subsequent plots is that the popular hypothesis, "submission increases can be explained by the dot-com bust and lots of people returning to academia", is not supported by the data. The increase in new authors (defined as people who had not submitted before year X) is steady, but not bursty. In fact the increase in paper submissions can be directly attributed to an increase in submitting authors, an increase that comes from both new authors and returning authors. Interestingly, drop-outs (people who never submit after year X) match returning authors quite well.
It does not appear to be the case that people are submitting more; both weighted (by number of authors) and plain "papers/author" counts show a small, but not significant, increase.

Short vs Long

The next part of the presentation is where things get really interesting. Two charts display statistics on the scores of papers. The first one shows the expected bell-like curve. The second chart (suggested by Piotr Indyk) plots score against reverse rank, and shows that except at the top 10% and at the bottom 10% of papers, there is no natural cutoff where an acceptance-rejection line can be drawn. This is significant, because acceptance rates are often claimed to be set by some "natural cutoff": for SODA this year, this appears not to be the case.

What all of this feeds into is a serious debunking of the value of short papers. Adam takes all the reasons that have been proposed for having short papers in the first place, and trashes them one by one:

1) Short papers increase acceptance of discrete math papers.
BEEP ! This is not the case. Papers labelled as "discrete math" (which also includes CS papers with a strong discrete math component) were in fact accepted at a higher rate (34.5%) than the overall rate (27%). They also broke down into long and short submission rates the same way as non-DM papers.

2) Short papers provide an anchor for page lengths not at 12 (so that authors don't feel that their 6-page submission is viewed as inferior to a 12-page submission).
PZZT ! Wrong answer. Acceptance rates were not correlated with paper length. In fact many good 7-10 page papers were accepted. Compounding this was the fact that many short submissions had 7 pages of appendices (!).

3) Short papers allow for nascent and inter-disciplinary work.
This is a noble idea, but since we have reached the limit (135) of papers we can accept, any such short paper has to compete directly against a long paper, and invariably loses out. Basically, it is a zero-sum game at this point.

There was much wailing and gnashing of teeth at this point, but IMHO Adam's rather convincing presentation of the data quelled much discussion. After all, it's hard to have a strong opinion when there is clear data that contradicts it.

Will short papers die ? It's not clear. After one of the over 10 billion straw votes that we had, it was decided that the SODA 2006 PC would "take into consideration" the sense of the community, which was:
a) No extra tracks
b) Short papers MUST DIE (err... have problems).
p.s SODA proceedings update: in Montana and moving steadily forward. If I had drunk as much as prescribed, I'd be weaving right now. It almost seemed like people were finally getting tired of the endless discussions. Adam Buchsbaum did a deadly job of killing all feeble attempts to keep short papers alive (death by statistics was the coroner's report), and we shall see what the outcome is. SODA 2006 will be in Miami (but not in Miami Beach alas). Cliff Stein will be the chair. It appears that SODA 2007 will either be in Monterrey, Mexico or New Orleans. Either way is fine with me :). A pseudo-triangulation is a polygon in the plane with only three convex vertices. An interesting question about pseudo-triangulations is the following: Is the number of triangulations of a planar point set at most the number of minimal pseudo-triangulations ? From what I understand, this is true with equality if the points are in convex position. Herve Bronnimann talked about an experimental approach to validating this conjecture for small values of n. p.s no wireless connection at the conference alas, so posting will be occasional. 1. The SODA (and ALENEX) name badge has on its flip side a form that reads: IN CASE OF EMERGENCY Name of person to contact Phone # to reach this person I didn't realize SODA had become such a heart-stopping adventure. Next thing you know we'll have FBI agents roaming the corridors. 2. The SODA proceedings will not be available at the conference because the train carrying them was derailed in Minnesota. I kid you not. p.s If ever we had an argument to reduce the number of papers accepted to SODA... p.p.s On the bright side, I don't have to throw my back out lugging copies of the proceedings around with me. So ALENEX started and ended today. The workshop started off with a keynote address by Bernard Moret on the Tree of Life and algorithmic issues in phylogeny. He set out an impressively complex landscape of problems, most of which required some serious algorithm engineering (fitting for ALENEX). One quote that I found rather amusing: Truth is not part of the vocabulary of algorithm design This was when discussing how the goal of algorithms (optimization) differs from the goal of biology (scientific truth). Later on, at the business meeting, we had the customary discussion of mindless statistics (how many papers were written by left-handed dwarfs from the eastern provinces of Transylvania ?). 15/60 papers were accepted, which is an impressive acceptance ratio. The submission rate appears to be relatively stable, and 15 is the maximum number of papers that can fit in a one-day workshop. Another mildly interesting stat was the fraction of papers submitted from Europe - 44.6%. This (to me) appears to be quite high, especially given that the workshop is in the West Coast. Cathy McGeoch brought out a beautiful poster representing the history of NP-Completeness. It has a top section describing NP-hardness and complexity, and the main area of the poster is taken up by a graph in which vertices are NP-hard problems and edges are reductions. The nodes are colored by their Garey+Johnson classification, and immediately you see how SAT pops out as the source problem of choice for so many reductions. The poster is being sold for $120 + $15 shipping (it's at least 36'' X 48''), and was designed by an ex-Amherst faculty member who now runs a graphic design company. No web pictures or thumbnails as of yet: I'll update this post when they appear. 
If you want to order a poster, send mail to Cathy [ ccm at cs dot REMOVEamherstTHIS dot edu ]. [Update: Click here for the poster website and a nifty Flash tool to navigate the poster.] One of the motivations that Cathy mentioned was the lack of computer science art. It would be cool if we could create a poster for Chad Brewbaker's Complexity Zoo map. In later postings I will mention some of the talks that I found interesting. I've been blogging from conferences for a while now, and I notice that I often feel the pressure to say something about every paper, for fear that by exclusion, I am making a statement about a talk/paper. Rest assured that this is not the case. In related news, Lance inspires yet another theorist to start a blog. He is indeed the Blogfather of theory :). I can understand fellow researchers being confused (OK I can't, but I can at least try), but the New York Times ? With four titles of his own, Mr. Eslambolchi, who also directs Bell Labs and AT&T's network operations, manages engineers, technicians and researchers in 11 divisions and speaks directly to dozens of customers a month to hear what they want Repeat after me: Bell Labs is Lucent, Shannon Labs is AT&T. Am at SODA/ALENEX right now: posting will be slow, but I will attempt to have some kind of summary of the daily activities. If anyone is reading this AND is at SODA AND doesn't have a blog of their own, feel free to email me to post conference updates here. It's been many days now since Afra Zamorodian announced his new monograph 'Topology and Computing'. Now that I finally got a chance to read an excerpt, I am encouraged ! Finally, I am hoping that there is a way to understand Betti numbers without needing an entire year's worth of study :). Great job, Afra ! Topology is an area that has many intriguing connections with geometry and algorithms in general (Kneser's conjecture, the Kahn-Saks-Sturtevant partial resolution of the evasiveness conjecture), and we need books that can bridge the gap. Another "geometric' text is the wonderful book by Matousek on the Borsuk-Ulam theorem, and a more 'combinatoric' text in this regard is the survey 'Topological Methods' by Björner in the Handbook of Combinatorics. Lett3rs. coming to a screen near you... This is a cool quote: We have to nip it in the bud or soon there will be no security left after these intellectuals get through with us. This is the source: alt.locksmithing Via BB. p.s Matt Blaze used to work at AT&T Labs. Tags: locks, cryptography I'm teaching a class, something that I don't often do, and have been discovering interesting differences (duh!!) between a one-hour talk and a 1.5hr lecture. A recent article on Slate excerpts a new book by Eric Liu called 'Guiding Lights', about teachers in different walks of life, and how they hone their craft. One quote from the article was interesting: Like any good teacher, Bryan is a master of misdirection: working on a fastball to improve a change-up, using dry work without a ball to sharpen performance with a ball, and talking about how to keep a quiet head when, in fact, we were talking about how to keep a quiet mind. It's an interesting way of describing the work a teacher does. 
A review from Publisher's Weekly at the Amazon.com page talks of other techniques, like receiving before transmitting—that is, tuning in to the student's unique qualities and motivations; unblocking and unlocking—helping students overcome their inner obstacles; and zooming in and out—breaking the subject down and then connecting it other matters. I'm curious to know what the full-time teachers think of this... Here would be a cool thing to hack: HubMed is an RSS interface to PubMed (the equivalent of arxiv, but much better, for the bio/medical community). Plug your PubMed search into HubMed, and lo and behold, out pops a search with an RSS feed you can subscribe to. Think of it as a specialized Feedster for PubMed. The CoRR now has an experimental full text search to go with their category searches. You could always subscribe to RSS feeds for new papers in specific categories. Wouldn't it be neat to have a Feedster-like interface for general text searches, so I could get all new papers that mention "art gallery" for example ? It seems like it is not hard conceptually: a web form to process requests for searches, and a crawler that queries CoRR (along with some parsing goo to make XML). "This is your last chance. After this, there is no turning back. You take the blue pill, the story ends, you awake in your bed and cut whatever you want to cut. You take the red pill, you stay in OrigamiLand, and I show you how deep the creases go. Remember: all I’m offering is the Fold, nothing more." Via emmous Two approaches: 1. If you are a student. 2. If you have a young child :) Welcome, one and all, to the Scian Melt #6. The Scian Melt is a carnival of science, with a heavy dose of Indian spices. For a second helping, visit the Scian Melt homepage. The terrible tsunami is constantly on our minds: Wikipedia has much more information about how a tsunami is formed, as well as links to the latest news on the disaster itself. If you are one of the ten remaining people on the planet who hasn't visited SEA-EAT, please do so, and contribute in whatever way you can. Tony Denmark has a page of before-and-after scenes of devastation in Sri Lanka and Indonesia. The devastation in Banda Aceh is truly horrifying. It is a cliche (but still true) that tragedies bring out the best in people, and remind us of the worth of those that we so easily dismiss in calmer times. Anna from sepiamutiny points out that the vast majority of men working to remove bodies in the most affected areas of Tamil Nadu are Dalits, or "untouchables" as they used to be called. Considering how so many others are shying away from dealing with the vast numbers of corpses and the imminent health catastrophe, they deserve our recognition. In the wake of the disaster, controversy raged over whether the tsunamis could have been predicted, or if anything could have been done to lessen its impact. Janaki Kremmer at the Christian Science Monitor pens an article about how mangrove forests saved hundreds of villagers by forming a natural barrier between villages and the sea. But along with true science comes the wackos. There are numerous conspiracy theories about the cause of the tsunami, with one of the more popular ones being an Israeli-Indian joint nuclear test. Arm your falsifier guns, folks ! In other news, Atanu Dey laments as to why Indians are prone to both reflexive self-bashing and a blind pride that obscures our true failings. He asks Does anyone ever ask the question: Why is India the way it is?... 
When was the last time you ever heard of a conference where serious people with lots of knowledge and understanding got together to examine that question? Well, as it turns out, the 92nd Indian Science Congress was held last week in Ahmedabad, with discussions on Indian pharmaceuticals, stem cell research, and disease prevention. It turns out that haldi (turmeric) is really really good for you. Research now shows that curcumin (an active ingredient of haldi) can be used to design effective anti-malarial drugs. What's more, curcumin has been shown to help stave off Alzheimer's, as well reduce plaque that may have already built up in the brain. Eat Indian food often, people ! To end this melt here's a nice page summarizing the history of Indian Mathematics (I am after all a computer scientist). The next Melt will be hosted by MadMan: please send your nominations to melt [at] thescian [dot] com or madman [at] madmanweb [dot] com These are organizations that have fired, threatened, disciplined, fined or not hired people because of their blogs: 1.) Delta Air Lines 2.) Wells Fargo 3.) Ragen MacKenzie 4.) Starbucks 5.) Microsoft (some say yay, some say nay) 6.) Friendster 7.) the Houston Chronicle 8.) the St. Louis Post-Dispatch 9.) Nunavut Tourism (Canada) 10.) the Committee on Degrees in Social Studies, Harvard University 11.) Maricopa County Superior Court of Arizona Self Help Center and Library 12.) Mike DeWine, US Senator (R-Ohio) 13.) the Durham Herald-Sun 14.) Kerr-McGee 15.) ESPN 16.) Apple (according to this blog entry AND this article) 17.) Statistical Assessment Service (DC nonprofit) 18.) Minnesota Public Radio 19.) The Hartford Courant 20.) the International Olympic Committee (barred athletes from blogging during the Olympics last summer) 21.) Health Sciences Centre, Winnipeg, Manitoba, Canada (?) 22.) the National Basketball Association (NBA) With the hammer being applied by myself :). At least I have a Ph.D ! From PHD ...even if you cannot prove it ? This was the question posed by Edge.org to a number of scientists, intellectuals, and thinkers of various hues. There have been numerous comments on this article, and a spread in the NYT, but what I found interesting was the choice of people interviewed. There were extremely few computer scientists in the bunch, and not a theoretician among them ! John McCarthy thinks that the continuum hypothesis is false... Any thoughts on what statements we believe to be true, even if we can't prove them ? P ? NP is not included :) A number of us were fretting about how Numb3rs would caricature mathematicians, while NBC slipped this right under our noses: NBC's new sitcom, "Committed," a series centered on the romance between Nate (Josh Cooke), an obsessive-compulsive math genius and his nutty girlfriend, Marni (Jennifer Finnigan), makes it clear: psychological disorders are the next big thing. As for the character 'Nate': Nate, a math genius whose family makes the Tennenbaums seem like the Partridge family, works in a used-record store and nurses his fixations: he has an obsessive fear of elevators, blocked emergency exits and throwing things out. One episode revolves around Nate's attempts to keep Marni from seeing his apartment, which looks like a cross between the CollyerBrothers' brownstone and the schizophrenic mathematician's garage in "A Beautiful Mind" The review ends with this rather cruel jab: Mr. Cooke is appealing in the role of Nate, but he seems a little wholesome for someone who makes Venn diagrams to map out a conversational point. 
Hey ! Watch it there ! Us borderline-psychotic obsessive-compulsive math geeks have feelings too. But it seems a bit odd that the first link of the google search "soda 2005" is a page from the Geomblog, and even worse, the official SODA 2005 page doesn't appear anywhere in the top 50 links for this search. The topic of acceptance ratios and PC sizes has been discussed to death, and I will start by saying that I have nothing new to add to this debate :). However, on the issue of reviewer load, one new idea had popped up a while ago: * To handle the case of recycled papers that go from conference to conference in the hope of finding reviewers that haven't seen them: - Create a repository where all submitted papers are registered. When a paper is submitted to a conference, a link is set up, and then when reviews return, they are filed with the paper. If the paper is submitted again, the reviews are there to see. I was reading the Dec issue of CACM (all right, I admit it: I do read the CACM, but ONLY because this issue was on the blogosphere!), and in David Patterson's letter was this nugget: Perhaps the most novel approach to the whole problem is being taken by the database community under the leadership of SIGMOD. The three large database conferences are going to coordinate their reviewing so that a paper rejected by one conference will be automatically passed along to the next one with the reviews. Should the author decide to revise and resubmit the paper, the original reviewers will read the revision in light of their suggestions. The next program committee would then decide whether or not to accept the revision. Hence, database conferences will take on many of the aspects of journals in their more efficient use of reviewers' efforts in evaluating revisions of a paper. I assume the three bigs are SIGMOD, PODS and VLDB ? It would be interesting to see how this works out. There are pros and cons in carrying prior reviews along with a paper, and a lab experiment such as this might reveal interesting side effects. Another interesting chart in this letter was a plot of submissions and acceptance ratios for four major conferences (SIGMOD/STOC/ISCA/PODC) over the past 5 years. What I find most striking about this graph is that on the one hand, SIGMOD submission counts have gone up (pretty much what one would expect), but the acceptance rate has held roughly steady, implying the conference has grown in size over the years. In STOC, on the other hand, submission counts rose, and then remained steady, but acceptance rates have dropped precipitously ! Seems strange that this would happen. One of my pet peeves has been hearing a slew of theories trotted out to "explain" the increased submission rate in a variety of conferences (though not STOC); my gripe has been that none of this theories have anything more to back them up than plausible sounding words. Interestingly, Patterson's letter has something to say about this as well: ACM's research conferences are run by its Special Interest Group (SIGs). I've been working with the SIG Governing Board to help form a task force to study this issue, looking at why submissions are increasing and documenting approaches like those discussed here, and to evaluate their effectiveness. They plan to report back in early 2005. If you have any comments or suggestions, please contact task force chair Alexander L. Wolf (alw@cs.colorado.edu). It appears that art and geometry go well together. Take George Hart's geometric sculptures, or the Voronoi building. 
I found this intriguing gallery of visualization via del.icio.us/math: The above picture is a visualization of nodes in a graph evolving by the rule: A. Move close to friends but not closer than some minimum distance. B. Distance self from non-friends as much as possible.
{"url":"http://geomblog.blogspot.com/2005_01_01_archive.html","timestamp":"2014-04-19T04:21:17Z","content_type":null,"content_length":"312541","record_id":"<urn:uuid:98cc1674-315a-4164-be63-d885352b1c8d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Open Problems related to computing obstruction sets
Isolde Adler
Humboldt University Berlin, Germany
13th September 2008

Robertson and Seymour proved Wagner's famous Conjecture and showed that every minor ideal has a polynomial-time decision algorithm. The algorithm uses the obstructions of the minor ideal. By Robertson and Seymour's proof we know that there are only finitely many such obstructions. Nevertheless, the proof is non-constructive, and for many minor ideals we do not know the obstructions. Since the 1980s, research has been done to overcome this non-constructiveness, but many interesting problems still remain unsolved. This is a small collection of open problems in the field of computing obstructions for minor ideals. We give a short introduction to the open problems from a paper by Adler, Grohe and Kreutzer [2], and to other open problems. This collection is meant to stimulate research in this area and it is far from exhaustive.

1 Introduction

Graphs are finite and simple. For a graph G we denote the vertex set by V(G) and the edge set by E(G). Let G be a graph and {u, v} ∈ E(G). By contracting the edge {u, v} we mean identifying u and v into a single new vertex whose neighbours are the former neighbours of u and v.
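For orientation (standard background, not part of the excerpt): if C is a minor ideal with finite obstruction set Obs(C), then membership is decided by

\[
G \in \mathcal{C} \iff \text{no } H \in \mathrm{Obs}(\mathcal{C}) \text{ is a minor of } G,
\]

and for each fixed H, testing whether H is a minor of G takes polynomial time in |G| by Robertson and Seymour. The classical example is Wagner's theorem: the obstruction set of the planar graphs is {K_5, K_{3,3}}.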
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/728/3141956.html","timestamp":"2014-04-16T07:32:38Z","content_type":null,"content_length":"8395","record_id":"<urn:uuid:207f19fc-dc46-4a57-a4a6-e6867e0202ac>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
This is just a fun activity, not relating directly to math, but perhaps relating peripherally to problem solving. Any one of these three pages will take most students a half hour or more. Students may find it fun to do this in small groups. A good activity for a day when only a few students show up and you want to challenge them with something they will enjoy.

This problem relates to math. Easy to understand, but with a little catch where some insight will help. A 10-15 minute activity. Can be done in groups.

For a short class period before the holiday, or as a 10 to 15 minute filler. It should be done individually IN PENCIL. Be sure to have extras available.

A matching test. The vocabulary is all mathematical, but the matching definitions aren't. Can be done in groups. A 15-20 minute activity.

A fill-in-the-blanks test. The vocabulary is all mathematical, but the matching definitions aren't. Can be done in groups. A 10-15 minute activity.

These are easy little "problems" or thought-provokers. Allow a half hour.

This is a problem that involves spatial relationships and may be difficult for students to do in their heads. It will be easier if blocks (1-inch cubes) are available for students to use in trying to solve the problem.

This is the original problem as stated. It is not as obvious or as easy as the problem statement makes it seem. Try it before you look at the solutions.

This is the original problem with a grid drawn as a visual reference.

These are more difficult problems that will usually require some time and thought to get the correct answer or the best answer. Think outside the box on both of these. I recommend giving them separately.

Easy to understand. This may have to be taught. I suggest doing one or even two of them together as a whole class before having students work on their own. These are easy enough that it is OK to give a whole sheet of them. Note there are two separate pages so they can be used on different days.

This puzzle has many solutions. There is an underlying principle for this problem which leads to any or all solutions quickly and easily. This problem can be used as a springboard to further discussion of the principle.

All of these problems are easy to understand. The level of difficulty is intermediate. They don't require high-level math skills, just problem-solving skills such as finding a pattern or working backward, or something as simple as trying the problem to see what happens. Problem #2 may be difficult.

These are simple to intermediate in difficulty. Working backward, making a list, or simply guessing and checking will solve these.

These are easy. Insight would make them easier.

These problems relate to topics in algebra (distance problems, exponents) or in geometry (circles, triangles, Pythagorean Theorem, polygons), and the calendar. They are interesting and some have counterintuitive answers. I suggest using these when your students are studying the topics of which these are examples.

These are all easy - if you reason them out. No math skills are involved. Problem 4 looks difficult or perhaps tricky, but is easy to solve if you don't try to do it in your head.

The third problem is easy. The first has a simple solution, but does require insight. The second problem will require some thought to reach the best solution.

Looking at Cubes Max - The maximum solution to the problem.
Looking at Cubes Min - The minimum solution to the problem.
Every Fifth Man - This appeared as a mystery short story and has some mathematical content.
Questions to God - This is just for fun.
Smullyan - This is an article about Raymond Smullyan, who authored several books of unusual problems and puzzles. Several of them are included with this article.
The Fun They Had - A single-page Isaac Asimov short story of the future, when machines did the teaching.
World According to STUDENT Bloopers - This is just for fun. Try to read it without laughing. I dare you!

The cartoons below all appeared in newspapers and relate to teaching or math.
Cartoon 04 - Peanuts
Cartoon 05 - Peanuts
Cartoon 06 - Funky
Cartoon 07 - Funky
Cartoon 11 - Peanuts
Cartoon 13 - Sally Forth
Arthur Gask - A cartoon story in the style of Jules Feiffer.

All of these items may be helpful or interesting to teachers. At the very least they should stimulate some thought.

This was written by a teacher for teachers. In addition to listing the "sins," there is an extremely brief discussion and a Corresponding Virtue for each of the "sins."

Some classroom teacher tools made up by teachers. They are included to provide some ideas. Included are: class rules, a holistic general scoring guide, a grade sheet for group grades for oral presentations, a homework grading guide, some possible questions you could use for students to write (3-minute essays) about their experience working in a group, a guide that an outside evaluator might use to rate your instruction, and a self-evaluation of your instruction.

This is here primarily because it lists 75 suggestions for classroom management. If you read through these suggestions and even 2 or 3 strike you as being helpful, this will be worth reading. Of course, it won't hurt much if you glance at the rest of the article, also.

This recounts the experience of a teacher trying for the first time to implement constructivist learning in the classroom. It contrasts it with a more traditional approach used by another teacher.

Far and away the best short discussion (8 pages) on implementing group learning. The other 12 pages are lists that may be helpful. All lists and tables relate directly to the article.

This could go with the 13 Sins and the management suggestions, in that it succinctly lays out some do's and don'ts in interacting with students.

An online discussion of what to do if students don't do their homework. Several teachers contributed their ideas or methods for dealing with this problem. Diverse ideas. Very thought-provoking.

A handout that you can consider sharing with parents.

A short list of suggestions for working with special education students.
{"url":"http://gphillymath.org/ResourceDisks/","timestamp":"2014-04-21T12:18:13Z","content_type":null,"content_length":"30913","record_id":"<urn:uuid:1cc53617-3e2e-4552-a801-5452c732f4d6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
fsmActions-0.4.0: Finite state machines and FSM actions

Finite state machines. Here an FSM is a map from symbols to actions. Symbols are parametric (they will usually be Strings or Chars). Actions specify the action of a symbol on each state, and are represented as lists of transitions: one per state. States are just numbers, from 0 to n, corresponding to indices on transition lists in Actions. Deterministic actions are then just Ints, identifying the state to transition to under that action; nondeterministic actions are lists of Ints: all the states to possibly transition to under that action.

Data types

State
  States are integers, counting from zero.

newtype DestinationSet
  Destination sets are just lists of States.

Action
  Actions are lists of DestinationSets, indexed by source State.
  destinationSets :: [DestinationSet]

FSM sy
  Finite state machine whose nodes are labelled with type sy.

Word sy
  Words are lists of symbols.

Simple FSM operations

fromList :: Ord sy => [(sy, Action)] -> FSM sy
  Create an FSM from a list of (symbol, Action) pairs.

toList :: FSM sy -> [(sy, Action)]
  Turn an FSM into a list of (symbol, Action) pairs.

delete :: Ord sy => sy -> FSM sy -> FSM sy
  Delete a symbol and its action from an FSM.

lookup :: Ord sy => sy -> FSM sy -> Maybe Action
  Look up a symbol's Action in an FSM.

fsmMap :: (sy -> Action -> a) -> FSM sy -> [a]
  Map a function over the FSM.

states :: FSM sy -> [State]
  Compute the list of states of the FSM. Only really meaningful if the FSM's well-formedness is not BadLengths. With the current implementation, this is just [0..n] for some n (or empty).

alphabet :: FSM sy -> [sy]
  Compute the alphabet of an FSM.

normalise :: FSM sy -> FSM sy
  Normalise an FSM, i.e. normalise all its Actions.

normaliseAction :: Action -> Action

Operations on actions

mkAction :: [[State]] -> Action
  Build an action given a nested list of destination states.

mkDAction :: [State] -> Action
  Build a deterministic action given a list of destination states.

append :: Action -> Action -> Action
  Append two Actions, i.e. compute the Action corresponding to the application of the first followed by the second.

actionLookup :: Action -> State -> DestinationSet
  Compute the DestinationSet reached by following some Action from some State.

action :: Ord sy => FSM sy -> Word sy -> Maybe Action
  Compute the Action for some Word over some FSM. The word might contain symbols outside the FSM's alphabet, so the result could be Nothing.

actionEquiv :: Ord sy => FSM sy -> Word sy -> Word sy -> Bool
  Test if two Words are action-equivalent over some FSM.

Destination sets

destinationSet :: Ord sy => FSM sy -> State -> Word sy -> Maybe DestinationSet
  Compute the DestinationSet for some Word at some State of an FSM. The word might contain symbols outside the FSM's alphabet, or the state might be out of range, so the result could be Nothing.

destinationEquiv :: Ord sy => FSM sy -> State -> Word sy -> Word sy -> Bool
  Test if two Words are destination-equivalent at some State of an FSM.

fsmIdentity :: FSM sy -> Action
  Compute the identity action for a given FSM.

identity :: Int -> Action
  Compute the identity action for a given number of states.

isDAction :: Action -> Bool
  Test if an Action is deterministic or not.

isDFSM :: FSM sy -> Bool
  Compute whether an FSM is deterministic or not.
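A minimal usage sketch pieced together from the signatures above (an illustration, not tested against the package; it assumes Word sy is a plain symbol list as the description suggests, and that the module is Data.FsmActions, per this page's path):

    import Data.FsmActions

    -- Two states 0 and 1: 'a' swaps them, 'b' sends both to state 0.
    toggle :: FSM Char
    toggle = fromList [ ('a', mkDAction [1, 0])
                      , ('b', mkDAction [0, 0]) ]

    -- Destinations reached from state 0 under the word "ab":
    -- 'a' takes 0 to 1, then 'b' takes 1 back to 0.
    demo :: Maybe DestinationSet
    demo = destinationSet toggle 0 "ab"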
{"url":"http://hackage.haskell.org/package/fsmActions-0.4.0/docs/Data-FsmActions.html","timestamp":"2014-04-16T13:46:34Z","content_type":null,"content_length":"32732","record_id":"<urn:uuid:1ffb963d-14e5-4bac-bcf4-38e7b73eca68>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: MSC abolishes Set Theory !!!
Stephen G Simpson simpson at math.psu.edu
Tue Jul 7 20:09:47 EDT 1998

Olivier Gerard 4 Jun 1998 07:58:35 +0200 writes:

> The Mathematics Subject Classification is being revised. Math
> Reviews and Zentralblatt will be moving to the revised system
> beginning in 2000. The proposed revision can be found at
> http://www.ams.org/mathweb/msc2000.html

I am finding that the links from this URL to the subject classifications are somehow broken. Does anyone have a better URL? For logic and foundations, it's better to go directly to

> I presume that some of you will be keen to look up if every
> aspects of modern FOM research is being adressed in this
> revised classification.

Yes, an excellent idea. I just had a look and was pleased to discover that Reverse Mathematics is getting a number of its own:

03B30 Foundations of classical theories (including reverse mathematics) [See also 03F35]
03F35 Second- and higher-order arithmetic and fragments [See also 03B30]

Another change is that the major classification number for set theory, 04XX, is being abolished! Actually, it is being merged into 03EXX, the set theory subclassification of the logic and foundations classification.

Set theorists, what do you think of this change? Is set theory a subject on its own, or is it really part of logic and foundations?

> I think that a debate on the purpose and relevance of a
> classification of mathematical subjects would be an appropriate
> topic for this list.

Absolutely, I agree. See also my little essay, in which a hierarchical classification of subjects plays a crucial role in defining what we mean by "foundations of mathematics".

Thank you, Olivier, for a very useful post. I'm only sorry I didn't follow up on it sooner.

-- Steve

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-July/001877.html","timestamp":"2014-04-20T01:09:33Z","content_type":null,"content_length":"4451","record_id":"<urn:uuid:9c3f03fd-a5e1-4636-9837-dea38dca3a5d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
operator precedence

Rahul:

What I don't understand in the code below is about operator precedence.

    class demo {
        static int x;

        public static void main(String[] args) {
            int i = 5 * (2 + 6);       // case 1
            int y = z() * (z() + z()); // case 2
            System.out.println(i);     // prints 40
            System.out.println(y);     // prints 5
        }

        // z() is a member of the class, not nested inside main
        private static int z() {
            return ++x;
        }
    }

This prints out 40 and 5. Case 1 and case 2 are similar except that methods have replaced variables. I know that Java evaluates the operands of each operator in a left-to-right manner, but if that were the whole story then in the first case the result should have been 16, i.e. 5*2=10, then 10+6=16; but it is not so. And if parentheses confer precedence, then case 1 is correct to print out 40, but then case 2 should print out 9.

zhewen:

Hi Rahul, I think I can answer it: first the operands are evaluated from left to right, and then the calculation is done in operator precedence order.

P.S. Could you check my newly posted problems?

Udyan:

Hi Rahul. You are right about the execution of the statement and that arithmetic operators group left to right. But remember that this grouping matters only for operators of equal precedence. Here * has greater precedence than +, and () has the highest precedence. Removing the () will give 16 and 9 respectively. But, I think, static members are initialized at class load time in order of appearance from left to right, so first x=1 and then 1*(2+3) because of the () operator. Looking at it this way the answer comes out to be 5. Please correct me if I am wrong; I am not sure on this one. I request the moderators to shed some light on it. Thanks in advance.

Ajay Kumar:

Hi Rahul. In the first case the () has the highest precedence, so the expression is evaluated as 5*(8)=40. In the second case, however, since you are invoking methods, the effect on x follows the order in which the methods are called:

    int y = z() * (z() + z());

The leftmost z() is evaluated first, as 1. The expression becomes 1*(z() + z()) with x=1. Now the first z() inside the parentheses is evaluated, returning 2, and the expression becomes 1*(2 + z()). Finally the last z() is evaluated, giving 1*(2 + 3) = 5. Hence the answer you get. Hope this helps.

Rahul:

Can someone point me to some sources where I can get more info on the subject above?

Rahul:

Hi Udyan, if () had the highest precedence then

    public class Tester {
        public static void main(String[] args) {
            int i = 0;
            byte b = 0;
            b += b - i + (i = 10); // operands evaluated left to right
            System.out.println(b); // prints 10
        }
    }

should print out 0, but it prints out 10, which is evaluation from left to right.

[This message has been edited by rahul_mkar (edited June 21, 2000).]

Reply:

I got the answer to your question when I changed your code a bit. I made the method invocation statement z() + z() * z(). The result of this code was 7. How this evaluates is:

1) The statement evaluates from left to right, so first z() is called and x=1.
2) Now the compiler sees that * has a higher priority than +, so the value of the first z(), i.e. x=1, is stored in a temporary location.
3) z() is called again and now x=2.
4) z() is called again and now x=3.
5) 2*3 gives 6, which is added to the 1 obtained earlier, and finally y=7.

All statements are fully evaluated before execution proceeds to the next statement. Extending the same explanation to the code you have put up, you will get your answer of b=10 and i=10.

Suma Narayan:

Hi Rahul,

This is my understanding and I hope it helps you. There are a few things one needs to remember before evaluating an expression:

1. Evaluation order: all operands are evaluated left to right, irrespective of the order of execution of the operations. You agree to this, right?

2. Evaluate the left-hand operand first: the left-hand operand of a binary operator appears to be fully evaluated before any part of the right-hand operand is evaluated.

Applying the above rules to both examples:

a. int i = 5*(2+6); // case 1. We evaluate from left to right, so considering 5*(2+6), the left-hand operand (which is 5) is evaluated. Since 5 is a constant, there is nothing to evaluate and we keep it as it is.

b. int y = z() * (z() + z()); // case 2. We evaluate from left to right, so considering z()*(z()+z()), the left-hand operand (which is z()) is evaluated. After evaluation (x=0, so return ++x gives 1) the result is x=1. So now the equation is int y = 1 * (z() + z()).

3. Evaluate operands before the operation: Java also guarantees that every operand of an operator (except the conditional operators &&, ||, and ?:) is fully evaluated before any part of the operation itself is performed. If the binary operator is an integer division / or integer remainder %, then its execution may raise an ArithmeticException, but this exception is thrown only after both operands of the binary operator have been evaluated and only if these evaluations completed normally.

a. int i = 5*(2+6); // case 1. So int i = 5*(2+6) stays as 5*(2+6), since 2 and 6 are constants and there is nothing to evaluate. Remember, we have still not started the operation, and hence the equation still has * and () intact.

b. int y = 1 * (z() + z()). z() is evaluated twice more and we get 2 and finally 3 from the evaluations of z(). So the equation is now int y = 1 * (2 + 3). Remember, once again we have still not started the operation, and hence the equation still has + and () intact.

4. Evaluation respects parentheses and precedence: Java implementations must respect the order of evaluation as indicated explicitly by parentheses and implicitly by operator precedence. So parentheses get the first preference in the operator precedence. Now we have:

a. int i = 5 * (8)
b. int y = 1 * (5)

and finally we get i = 40 and y = 5.

I hope the above explanation helps you, and if you want to know more I suggest that you read the topic Evaluation Order in the JLS.

[This message has been edited by Suma Narayan (edited June 21, 2000).]
{"url":"http://www.coderanch.com/t/191905/java-programmer-SCJP/certification/operator-precedence","timestamp":"2014-04-20T08:18:53Z","content_type":null,"content_length":"35546","record_id":"<urn:uuid:1797e9da-74d0-4994-9999-af0e14ff9600>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Fitting Statistical Distributions: The Generalized Lambda Distribution and Generalized Bootstrap Methods
Zaven A. Karian and Edward J. Dudewicz

Throughout the physical and social sciences, researchers face the problem of fitting statistical distributions to their data. With the development of new fitting methods, their increased use in applications, and improved computer languages, the fitting of statistical distributions to data has come a long way since the introduction of the generalized lambda distribution (GLD) in 1969.

A related title is the Handbook of Fitting Statistical Distributions with R (CRC Press), and a related paper is "A Method for Simulating Burr Type III and Type XII Distributions". In general, the indices of L-skew and L-kurtosis are bounded, and as in conventional moment theory, a symmetric distribution has L-skew equal to zero [18].
{"url":"http://felishanymiy.webs.com/apps/blog/show/30814818","timestamp":"2014-04-21T05:03:52Z","content_type":null,"content_length":"41463","record_id":"<urn:uuid:853b1a3d-a21e-439a-8ee4-6d1f517aec65>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
CGTalk - Matrix rotation?

Cactus Dan
03-14-2004, 04:24 PM

How do I calculate the rotation of a matrix around one of its local axes by a given angle?

For example: I have a function that will return an object's matrix. Then I have an angle (theta) that I've calculated, and I want the matrix to rotate around its current local Y axis by theta. I'm using my software's scripting language, which seems to only have a function to set an object's rotation in HPB world coordinates.

Thanks in advance for any help.

Cactus Dan
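The archived thread ends with the question, so what follows is not from the thread: a sketch of the standard approach, with assumptions spelled out. Rotating about an object's current local Y axis amounts to composing the object's matrix with a Y-axis rotation on the local side; which side that is depends on the package's row/column-vector convention, and converting the result back to HPB (heading/pitch/bank) Euler angles for the scripting API is software-specific and not shown.

    # Minimal sketch: rotate an object about its own local Y axis by theta.
    # Assumes column-vector matrices, so a local-space rotation is applied
    # by right-multiplication, M' = M * Ry; row-vector packages (common in
    # DCC tools) would use Ry * M instead.
    import math

    def rotate_about_local_y(m, theta):
        """m: the 3x3 rotation part of the object's matrix (nested lists)."""
        c, s = math.cos(theta), math.sin(theta)
        ry = [[c, 0.0, s],
              [0.0, 1.0, 0.0],
              [-s, 0.0, c]]
        # M' = M * Ry, written out as a plain matrix product.
        return [[sum(m[i][k] * ry[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]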
{"url":"http://forums.cgsociety.org/archive/index.php/t-129452.html","timestamp":"2014-04-20T19:09:12Z","content_type":null,"content_length":"8750","record_id":"<urn:uuid:e18fbffe-7f76-4d0a-adee-eebe9520441e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Houston Texans Message Board & Forum - TexansTalk.com - View Single Post - Sage wants a chance to be a starter.

Originally Posted by:

    I would suggest that we overvalue our own players. Look back in the past. Even now. Let's make some assumptions. First of all, IMHO, if Schaub isn't the answer, then the answer lies in college somewhere. In other words, I don't think management thinks that Sage is Super Bowl quality. Schaub may or may not be, but Sage isn't. That means I am assuming, based on the fact that we had Sage on our roster and still went and spent two picks on Schaub, that Sage isn't our QB of the future. Furthermore, I am assuming that if Schaub goes down for a significant period of time, we're in trouble this year. In order to make a serious run at the playoffs, we're going to have to have a) Schaub stay healthy and b) Schaub play better than he or Sage played last year.

Sage is not the "QB of the future," that much is true. But that doesn't mean we can't make a run at the playoffs or even the Super Bowl with Sage at the helm. If Trent Dilfer and Brad Johnson are Super Bowl quality QBs, then I don't see any reason to say Sage is not a Super Bowl quality quarterback. You do realize Sage was 4-1 as our starting quarterback. Winning 4 out of 5 games is not a bad record; if we can win at that pace the Texans would definitely be in the playoffs. So while your first assumption is rather dubious, your second assumption is downright incorrect.
{"url":"http://www.texanstalk.com/forums/showpost.php?p=940689&postcount=150","timestamp":"2014-04-21T11:19:18Z","content_type":null,"content_length":"16079","record_id":"<urn:uuid:9f932deb-a0d4-47d3-bdf1-a5ca812dd92d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHOD AND APPARATUS FOR SINGLE BURST EQUALIZATION OF SINGLE CARRIER SIGNALS IN BROADBAND WIRELESS ACCESS SYSTEMS

Inventors: Russell Mckown (Richardson, TX, US)
Assignees: ADVANCED RECEIVER TECHNOLOGIES, LLC
IPC8 Class: AH04L2701FI
USPC Class: 375232
Class name: Equalizers automatic adaptive
Publication date: 2013-03-21
Patent application number: 20130070834

Abstract:

A receiver implementing a single carrier single burst equalization (SC-SBE) method is capable of achieving near optimal reception of individual single carrier RF bursts by making an accurate estimate of the burst's propagation channel impulse response (CIR). The SC-SBE method uses a CIR based coefficient computation process to obtain filter coefficients for a minimum mean square error decision feedback equalizer (MMSE-DFE). The MMSE-DFE filter computation process computes a sufficiently large number of coefficients for the DFE filters, i.e., the feed forward filter (FFF) and feedback filter (FBF), so that each filter spans the maximum anticipated length of the CIR. In order to implement the filters efficiently, a coefficient selection process eliminates less significant computed FFF and FBF coefficients. The resulting FFF and FBF are sparse filters in that most of the taps in the filter delay lines do not have a filter coefficient.

Claims:

1. A method, comprising: estimating a channel impulse response; determining feed forward coefficients and feedback coefficients for at least one time domain filter based on the channel impulse response; and selecting a subset of the feed forward coefficients and a subset of the feedback coefficients, wherein a number of coefficients in each subset is less than a total number of the determined coefficients.

2. The method of claim 1, wherein selecting a subset of the feed forward coefficients and a subset of the feedback coefficients comprises: selecting a coefficient having an amplitude that is one of the largest coefficient amplitudes of the feed forward coefficients or the feedback coefficients.

3. The method of claim 2, wherein selecting a subset of the feed forward coefficients and a subset of the feedback coefficients further comprises: selecting a coefficient having an amplitude greater than K*σ_FFF or K*σ_FBF, wherein σ_FFF and σ_FBF are standard deviations of a subset of coefficients whose amplitudes are not among the largest coefficient amplitudes of the feed forward coefficients or the feedback coefficients, and wherein K is a threshold parameter that provides protection against coefficient computation noise.

4. The method of claim 3, wherein the selecting a subset of the feed forward coefficients and a subset of the feedback coefficients comprises: pre-selecting the computed feed forward coefficients and feedback coefficients.

5. The method of claim 4, wherein the pre-selecting the computed feed forward coefficients and feedback coefficients comprises: examining the computed feed forward coefficients and feedback coefficients; and storing each of the computed coefficients having an amplitude that is one of the largest coefficient amplitudes in a coefficient buffer that defines a coefficient search space for a minimization process.
6. The method of claim 4, wherein selecting a coefficient having an amplitude greater than K*σ_FFF or K*σ_FBF comprises: sorting the computed feed forward coefficients into a number of largest feed forward coefficients and a number of smallest feed forward coefficients; sorting the computed feedback coefficients into a number of largest feedback coefficients and a number of smallest feedback coefficients; and calculating σ_FFF and σ_FBF based on the number of smallest feed forward coefficients and the number of smallest feedback coefficients, respectively.

7. The method of claim 6, further comprising: selecting a coefficient to include in the subset of feed forward coefficients if the amplitude of the coefficient is one of the largest coefficient amplitudes of the feed forward coefficients and is greater than K*σ_FFF; and selecting a coefficient to include in the subset of feedback coefficients if the amplitude of the coefficient is one of the largest coefficient amplitudes of the feedback coefficients and is greater than K*σ_FBF.

8. An apparatus, comprising: circuitry for estimating a channel impulse response; circuitry for determining feed forward coefficients and feedback coefficients; and circuitry for selecting a subset of the feed forward coefficients and a subset of the feedback coefficients, wherein a number of coefficients in each subset is less than a total number of the determined coefficients.

9. The apparatus of claim 8, wherein the circuitry for selecting a subset of the feed forward coefficients and a subset of the feedback coefficients comprises: circuitry for selecting each coefficient in the subsets based on an amplitude of the coefficient satisfying at least one of: the amplitude being one of the largest coefficient amplitudes of the feed forward coefficients or the feedback coefficients; and the amplitude being greater than K*σ_FFF or K*σ_FBF, where σ_FFF and σ_FBF are standard deviations of a subset of coefficients whose amplitudes are not among the largest coefficient amplitudes of the feed forward coefficients or the feedback coefficients, and wherein K is a threshold parameter that provides protection against coefficient computation noise.

10. The apparatus of claim 8, wherein the circuitry for selecting a subset of the feed forward coefficients and a subset of the feedback coefficients comprises: circuitry for pre-selecting the feed forward coefficients and feedback coefficients.

11. The apparatus of claim 10, wherein the circuitry for pre-selecting the feed forward coefficients and feedback coefficients comprises: circuitry for examining the determined feed forward coefficients and feedback coefficients; circuitry for selecting a coefficient having an amplitude that is one of the largest coefficient amplitudes from the determined feed forward coefficients and feedback coefficients; and circuitry for storing that coefficient in a coefficient buffer that defines a coefficient search space for a minimization process.

12. The apparatus of claim 9, wherein the circuitry for selecting a subset of the feed forward coefficients and a subset of the feedback coefficients comprises: circuitry for sorting the determined feed forward coefficients into a number of largest feed forward coefficients and a number of smallest feed forward coefficients; and circuitry for calculating σ_FFF based on the number of smallest feed forward coefficients.
13. The apparatus of claim 9, wherein the circuitry for selecting a subset of the feed forward coefficients and a subset of the feedback coefficients comprises: circuitry for sorting the determined feedback coefficients into a number of largest feedback coefficients and a number of smallest feedback coefficients; and circuitry for calculating σ_FBF based on the number of smallest feedback coefficients.
The apparatus of claim 18, wherein the circuitry for sorting the feed forward filter coefficients and the feedback filter coefficients based on amplitudes of the coefficients comprises: circuitry for sorting circuitry for sorting the determined computed feedback coefficients into a number of largest feed forward coefficients and a number of smallest feed forward coefficients based on the The apparatus of claim 19, wherein the circuitry for computing standard deviations comprises circuitry for computing the standard deviations based on the smallest number of feed forward coefficients and the smallest number of feedback coefficients, and p1 the circuitry for selecting a subset of the feed forward coefficients and a subset of the feedback coefficients comprises a threshold comparison circuitry for comparing the number of largest feed forward coefficients and the number of smallest feedback coefficients to a product of the standard deviations and a threshold parameter that provides protection against coefficient computation noise. CROSS REFERENCE TO RELATED APPLICATIONS [0001] This application is a continuation of U.S. patent application Ser. No. 12/157,738, filed on Jun. 12, 2008, which in turn is a continuation of U.S. patent application Ser. No. 10/796,596, filed on Mar. 9, 2004, now issued U.S. Pat. No. 7,388,910, issued on Jun. 17, 2008, which claims priority to a U.S. Provisional Application No. 60/453,162, filed on Mar. 10, 2003, entitled METHOD AND APPARATUS FOR SINGLE BURST EQUALIZATION OF SINGLE CARRIER SIGNALS IN BROADBAND WIRELESS ACCESS SYSTEMS, the content of each of which is incorporated herein by reference. BACKGROUND OF THE DISCLOSURE [0002] The development of wireless metropolitan area networks (WMAN's) and wireless local area networks (WLAN's) for broadband wireless access (BWA) to voice and data telecommunication services is an area of considerable economic and technological interest. The WMAN systems typically employ a point-to-multipoint topology for a cost effective system deployment. For example, proposed WMAN systems operating in the 2 to 6 GHz radio frequency (RF) range consist of base station cell tower sites with 3 to 6 antenna/transceiver sectors, with capacity goals of 40 to 80 Megabits per sector, and with coverage goals of 5 to 15 kilometer cell radius. Example WLAN systems include installations at areas both inside and outside of residences or businesses and public areas such as trains, train stations, airports or stadiums. These WMAN and WLAN systems can also be integrated to form a wide area network (WAN) that can be national or even global in coverage. WMAN systems are primarily discussed because they are technically the most challenging. However, the invention may also be used in broadband wireless access systems in general. The primary problem in broadband wireless telecommunication is the considerable variation in the quality of the RF reception. The RF reception varies due to the type of terrain, due to the presence of obstacles between the base station and the subscriber station (SS), and due to the fairly high probability of receiving the same transmission by means of multiple RF propagation paths. The latter problem is referred to as "multipath" and the above set of reception problems is often collectively, and loosely, referred to as the non-line-of-sight (NLOS) reception problem. When the SS is moving, there is the additional problem of Doppler induced channel variability. 
A robust NLOS BWA system for fixed or mobile subscribers is a technical challenge. The WMAN systems of interest typically have RF channels that are the composite of multiple radio propagation paths over large distances. A consequence of these multipath propagation channels is that the received radio signal waveforms are distorted relative to the original transmitted radio signal waveforms. Prior art high data rate WMAN signaling technologies that are intended to mitigate the multipath performance degradations are orthogonal frequency division multiplexing (OFDM) and single carrier with frequency domain equalization (SC-FDE). FIGS. 1 and 2 are diagrams of known OFDM and SC-FDE methods of transmitting and receiving signals with digital data modulations through dispersive propagation channels that impose some degree of multipath signal distortions. These diagrams emphasize the method specific signal processing elements and illustrate their dependence on FFT block processing. FIG. 1 shows a block diagram illustrating certain processes performed by a system implementing the OFDM method. An inverse fast Fourier transform (inverse FFT) 110 transforms the data (symbols) 108 to be transmitted. Cyclic prefix insertion process 112 creates a serial output block with ends that are circular in content. These processes occur within OFDM transmitter 114. The transmitter output 115 passes through a propagation channel 116 to become input 117 to OFDM receiver 118. The method specific processes the OFDM receiver include a forward fast Fourier transform (FFT) process 120 that creates intermediate data symbols that have been distorted by the propagation channel, a process 122 to invert the channel and a process 124 to detect the original symbols, i.e., to provide the received data output 126. The OFDM symbol detection process 124 may, for example, include Viterbi decoding, symbol de-interleaving, and Reed-Solomon forward error detection/correction (FEC). The specific detection process 124 depends on the coding/interleaving that was applied to the transmitted data symbols 108. FIG. 2 shows a block diagram illustrating certain processes performed by a system implementing an SC-FDE method. The data symbols to be transmitted 128 are input to a preamble and cyclic prefix insertion process 130. The preamble sequence has good correlation properties to support channel estimation and the cyclic prefix insertion creates circular output blocks to simplify receiver FFT operations. These processes occur within SC-FDE transmitter 132. The transmitter output 133 passes through a propagation channel 134 to become input 135 to SC-FDE receiver 136. The method specific processes in SC-FDE receiver 136 include a forward FFT 138 process to transform the signal into the frequency domain, a frequency domain filter to invert the channel 140, an inverse FFT 142 to restore the signal to the time domain, and symbol detection 144 that provides the received data output 146. The SC-FDE symbol detection process 144 may include a non-linear decision feedback equalizer (DFE) in addition to decoding and de-interleaving operations. As in the OFDM method, the detection process 144 may, for example, include Viterbi decoding, de-interleaving, and a Reed-Solomon FEC, or functionally similar operations, depending on the coding/interleaving that was applied to the transmitted data symbols 128. Operationally, the OFDM and SC-FDE systems differ mainly in the placement of the inverse FFT. 
In the OFDM method the inverse FFT is at the transmitter to code the data into the sub-carriers. In the SC-FDE method the inverse FFT is at the receiver to get the equalized signal back into the time domain for symbol detection. Although FIG. 2 shows the SC-FDE signal to have a cyclic prefix insertion 130, this is actually an option for SC-FDE that trades useable bandwidth for a slightly decreased number of receiver computations and a potential performance improvement. In the OFDM method, the cyclic prefix insertion 112 and the associated loss of useable bandwidth are mandatory. For high data rate single carrier (SC) systems, WMAN multipath RF channel distorts the signal by mixing data symbols that were originally separated in time by anywhere from a few symbols to a few hundreds of symbols. This symbol mixing is referred to as inter-symbol interference (ISI) and makes the SC wireless link useless unless equalization is performed. It is generally agreed that traditional time domain adaptive equalization techniques are impractical to solve this problem since the computations per bit are proportional to the ISI span, which in the WMAN channels of interest can be hundreds of symbols. However, the FFT can be used to provide efficient frequency domain equalization for single carrier signaling. This is the basis of the single carrier frequency domain equalization (SC-FDE) method discussed above. SC-FDE is known to work well in terms of multipath mitigation and is practical in terms of transceiver computations per bit. A modem SC-FDE method is described by David Falconer, Lek Ariyavisitakul, Anader Benyamin-Seeyar and Brian Eidson in "Frequency Domain Equalization for Single-Carrier Broadband Wireless Systems", IEEE Communications Magazine, Vol. 40, No. 4, April 2002. For high data rate OFDM systems, WMAN multipath RF channels often result in severe spectral nulls. These spectral nulls make the OFDM wireless link useless unless interleaving and coding are performed. Coherent OFDM also requires equalization. However, OFDM with interleaving, coding, and equalization is known to work well in terms of maintaining a WMAN link in the presence of multipath and is equivalent to SC-FDE in terms of transceiver computations per bit. A critical comparison of the OFDM and SC-FDE techniques is given by Hikmet Sari, Georges Karam and Isabelle Jeanclaude in "Transmission Techniques for Digital Terrestrial TV Broadcasting", IEEE Communications Magazine, February 1995. SUMMARY OF THE DISCLOSURE [0011] The invention allows for use of time domain equalization in single carrier broadband wireless systems, thereby overcoming one or more problems associated with using traditional time domain equalization techniques and avoiding the disadvantages of OFDM and SC-FDE systems. An example of one use of a preferred embodiment of the invention is a receiver implementing a single carrier single burst equalization (SC-SBE) method. Such a receiver is capable of achieving near optimal reception of individual single carrier RF bursts. The receiver makes an estimate of the burst's propagation channel impulse response (CIR) and then uses a CIR-based coefficient computation process to obtain filter coefficients for a time domain equalization process. A subset comprising the most significant coefficients is selected for filters in the equalization process allowing more efficient implementation of the filters in the time domain. 
For example, if a time a minimum mean square error decision feedback equalizer (MMSE-DFE) used a MMSE-DFE filter computation process computes a sufficiently large number of coefficients for the DFE filters, i.e., the feed forward filter (FFF) and feedback filter (FBF), so that each filter spans the maximum anticipated length of the CIR. For NLOS WMAN systems, this results in hundreds of computed coefficients for both the FFF and the FBF. In order to implement the filters efficiently, a coefficient selection process eliminates less significant computed FFF and FBF coefficients. The resulting FFF and FBF are "sparse" filters in that the sense that most of the taps in the filter delay lines do not have a filter coefficient. This allows the filters to be efficiently implemented in the time domain. The availability of time domain, sparse equalization filters avoid problems associated with the prior art OFDM and SC-FDE methods which use block processing FFT procedures. These problems include a large block granularity that limits bandwidth efficiency. In contrast, the SC-SBE method allows the bandwidth efficiency to be maximized. The coefficient selection process improves the radio telecommunication link's performance for the majority of WMAN propagation channels. In the remaining WMAN channels the coefficient selection procedures assure that any performance degradation will be BRIEF DESCRIPTION OF THE DRAWINGS [0015] FIG. 1 is a block diagram of the transmitter and receiver implementing a prior art OFDM method. FIG. 2 is a block diagram of a transmitter and receiver that implement a prior art SC-FDE method. FIG. 3 is a block diagram of a transmitter and receiver implementing a SC-SBE method with a coefficient selection process and sparse filter DFE. FIG. 4 is a block diagram of a transmitter and receiver that illustrates the SC-SBE operations relative to other operations in a single carrier receiver. FIG. 5 is a block diagram of an SC-SBE processor. FIG. 6 is a block diagram illustrating a DFE performance based coefficient selection process with an exhaustive search. FIG. 7 is a block diagram illustrating a DFE performance based coefficient selection process with amplitude based pre-selection. FIG. 8 is a block diagram illustrating an amplitude threshold based coefficient selection process. DETAILED DESCRIPTION OF THE INVENTION [0023] A significant problem is created by both the OFDM and SC-FDE methods due to their reliance on the large block FFT operation. The problem, not recognized, is that the large block FFT operation restricts the efficiency of time division duplexing (TDD) and time division multiple access (TDMA) techniques. Modern TDD/TDMA techniques provide the opportunity for efficient use of a single RF channel for both downlink and uplink burst communication. For example, adaptive TDD/TDMA techniques, are defined in the IEEE Standard 802.16®-2001. In the adaptive TDD technique the position in time of the border separating a TDD frame's downlink and uplink traffic is adapted to best suit the relative amount of downlink and uplink traffic. It is well known that when properly implemented adaptive TDD is more spectrally efficient than the older frequency domain duplexing (FDD) technique which simply uses 2 RF channels, one for downlink and one for uplink. Proper utilization of TDD/TDMA techniques, however, requires flexibility with respect to the allowed (allocated) burst durations since the burst durations that are desired depend on the variable size of the data to be transferred. 
For WMAN systems, the OFDM and SC-FDE signaling techniques use an FFT whose size is typically in the 256 to 2048 sample point range. The problem is that the block FFT operations impose a large granularity on the TDD bandwidth allocation scheme that results in bandwidth inefficiency. The block restricted bandwidth allocation granularity is equivalent to the FFT size, in the range of 256 to 2048, or so, equivalent SC symbol time slots. This large TDD bandwidth allocation granularity significantly decreases the efficiency of a TDD/TDMA BWA system. Another problem with the OFDM and SC-FDE methods that results in bandwidth inefficiency is the periodic insertion of a cyclic prefix--this directly turns valuable bandwidth into overhead. In contrast, methods and apparatus for single burst equalization of single carrier RF communications signals (SC-SBE), described in connection with FIGS. 3-7, do not require FFT operations or cyclic prefixes and provides greater flexibility in TDD bandwidth allocation, providing an allocation granularity/resolution of one single carrier symbol time slot. This allows the time-frequency bandwidth efficiency to be optimized in a SC-SBE based TDD/TDMA BWA system. The SC-SBE method takes advantage of a coefficient selection process and a DFE that uses time domain sparse filters. Note that the SC-SBE method and apparatus, described below in connection with FIGS. 3-7, has implementations at the system level of BWA systems, e.g., WLAN, WMAN and WAN systems, in components that make up the BWA systems, e.g., base station and subscriber station receivers, and in the circuits that make up the receiver or transceiver components. Unless otherwise represented, circuitry for performing the functions or processes referenced below may be implemented as hardware, software, or a combination of hardware and software. Implementation examples include software executing on a digital signal processor (DSP) or a general purpose computer, as well as logic elements executing in a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC). Discrete functional blocks in the figures do not imply discrete hardware components. FIGS. 3, 4 and 5 illustrate SC-SBE concepts with reference to functional block schematic diagrams of example embodiments of transmitters and receivers. FIG. 3 is a block diagram of an example of a SC transmitter and SC-SBE receiver that facilitates a comparison with the OFDM and SC-FDE methods diagrammed in FIGS. 1 and 2. FIG. 4 is a block diagram of a SC-SBE receiver that illustrates the relation of the SC-SBE operations to other receiver operations, in particular the SC-SBE functions as a pre-processor to standard detection circuitry such as a Viterbi decoder. FIG. 5 is an example of a SC-SBE DFE with sparse, time domain filters. Referring to FIG. 3, the data symbols to be transmitted 158 by transmitter 162 are input to preamble insertion circuitry 160 that inserts a preamble symbol sequence having good correlation properties to facilitate an estimation of the propagation channel impulse response. The transmitter output 163 is an RF signal burst that passes through a propagation channel 164 to become the input 165 to receiver 166. In receiver 166, multipath channel equalization is accomplished with a sparse filter DFE 168, described below. The input signal is delayed by a predetermined amount by delay circuit 170. The delay is to allow time for the following to take place. A CIR estimate 173 is computed by process 172. 
Estimates 175 of signal power and noise power are computed by process 178. The signal power estimate is used to set an input gain 176 for the sparse filter DFE 168. The MMSE-DFE filter coefficients 179 are computed by process 178 based on the CIR estimate 173 and the noise power estimate 175. The filter coefficient subset 183 is selected by process 182 using methods described below. The selected filter coefficient subset 183 allows a high performance DFE 168 based on efficient time domain sparse filters. The DFE 168 provides channel equalized data 191 to symbol detection process circuitry 196 that performs symbol decoding and de-interleaving operations. The operations of symbol detection 196 are the inverse of the coding/interleaving operations applied to the transmitted data symbols 158. Symbol detection 196 provides the received data output 197 that is of interest to the media access control (MAC) layer of the BWA system's base stations or subscriber stations. It is preferred that the processing delay 170 is less than the minimal RF signal burst duration which is on the order of 50 microseconds. This requirement insures that the receiver is capable of maintaining real time throughput at the TDD frame time scale which is on the order of 2 to 4 milliseconds. FIG. 4 is a block schematic diagrams that illustrates the SC-SBE processing circuitry relative to other operations in one exemplary embodiment of a receiver. In this example an SC-SBE circuit 198 is inserted between a conventional SC receiver front-end/timing recovery circuit 200 and a conventional symbol detection circuit 196. Referring to FIG. 4, in the SC receiver front-end processing circuitry 200, an RF signal is acquired by antenna 201 and processed by analog amplification, filtering and frequency down conversion 202 to provide a much lower frequency analog baseband or pass-band signal 203 that can be converted into digital samples by sampling circuitry 204 for subsequent digital signal processing. Pulse shape filter 206 provides signal matched filtering that eliminates out of band noise and provides a signal 207 suitable for feed-forward timing recovery circuitry 208. Timing recovery in the SC receiver front-end processor 200 is desirable since it allows subsequent processing to be performed more efficiently using only one digitized sample per symbol. To achieve the timing recovery, the flow of the digitized burst signal is delayed by a predetermined amount by delay circuit 210. The delay is to allow time for symbol timing recovery circuitry 212 to compute the timing offset estimate 213. The timing estimate 213 controls interpolation circuitry 214 which outputs near optimally sampled data 215 to the SC-SBE circuit 198. As illustrated in FIG. 4, SC-SBE circuitry 198 can function as a pre-processor to a traditional symbol detection process or circuitry 196. For a more explicit example, symbol detection could be based on the transmit coding specified by the IEEE Standard 802.16a®-2003. In this case the symbol detection circuitry 196 would be a Viterbi decoder, a de-interleaver and a Reed-Solomon FEC, followed by a de-randomizer to recover the transmitted data of interest. The SC-SBE circuitry 198 does not depend on, nor require knowledge of, the type of symbol detection circuitry 196 that follows it. The detection circuitry 196 outputs the received data 197 that is sent to a MAC layer, not shown. Referring to FIG. 
4, the CIR estimation process 172 in the SC-SBE circuit 198 can, for example, be performed with known non-parametric cross-correlation techniques in which the received preamble data is cross-correlated with the transmitted preamble data. Example preambles that are well suited for a cross-correlation based CIR estimation are the so called constant amplitude zero autocorrelation (CAZAC) sequences. The IEEE Standard 802.16a®-2003 specifies a CAZAC sequence for this purpose. The signal (S) and noise (N) power estimation process 174 can be designed based on a variety of known methods. For example, a preamble composed of multiple CAZAC sequences can be Fourier transformed over a time interval spanning two contiguous sequences. The power at even harmonics of the total period is an estimate of the signal plus noise (S+N) power, whereas the power at odd harmonics is an estimate of the noise power, N 175. The signal only power can then be estimated by subtraction. The input gain 176 that is required by the sparse filter DFE 168 can be computed as the ratio of the desired signal level to the square root of the estimated signal power, S. The CIR estimate 173 and the noise power estimate 175 are input to the MMSE-DFE filter coefficient computation process 178. Three alternative fast methods are known that can compute the filter coefficients 179 for an MMSE DFE from a CIR estimate and a noise power estimate. These published, computationally efficient MMSE-DFE coefficient computation procedures are: Naofal Al-Dhahir and John M. Cioffi, "Fast Computation of Channel-Estimate Based Equalizers in Packet Data Transmissions", IEEE Transactions on Signal Processing, pp. 2462-2473, 11, 43 (November 1995); Bin Yang, "An Improved Fast Algorithm for Computing the MMSE Decision-Feedback Equalizer", Int. J. Electronic Communications (AEU), Vol. 53, No. 6, pp. 339-345 (1999); and N. R. Yousef and R. Merched, "Fast Computation of Decision Feedback Equalizer Coefficients", U.S. Patent Application 2003/0081668 (May 1, 2003). Each of these coefficient computation procedures can be used to efficiently compute the large number, e.g., hundreds, of filter coefficients 179 that define the long time span filters required by the DFE for WMAN systems. The reason channel equalization filters with long time spans are required is, that in order to be effective, the filters must span the maximum variability in the propagation delay in the multipath channel. This maximum variability is generally quantified by the `delay spread` parameter. In order to characterize a multipath channel, it's CIR must be measured for delays somewhat larger than the delay spread. For example, in some WMAN systems, the multipath delay spread can exceed 10 microseconds and, typically, a single carrier signal having a 20 MHz bandwidth can be modulated with 16 million symbols per second. If the DFE filters for this SC system support the conventional minimum of one coefficient per symbol, this equates to greater than 160 coefficients per filter. It is straightforward to estimate the CIR over a 10 or more microsecond time span, for example using the above mentioned CAZAC cross-correlation technique. Furthermore, the above mentioned three alternative fast coefficient procedures provide means of computing the coefficients 179. However, to directly implement filters with such a large number of filter coefficients in the time domain would require an inordinately large amount of computation. 
Indeed, this excessive time domain computation requirement was the motivation for the known SC-FDE method diagrammed in FIG. 2. However, as mentioned earlier, block FFT and cyclic prefix operations of the SC-FDE method create the problem of inefficient TDD/TDMA bandwidth utilization. The SC-SBE method and apparatus solve the computation problem by selecting a subset of the coefficients for a time domain filter. This time domain filter with sparsely populated coefficients (or "sparse" filter) provides approximation of the outputs of the time domain filter when all the calculated coefficients are used. Thus, the coefficient selection process simplifies the DFE filters and improves receiver performance. Furthermore, a sparse time domain filter DFE allows efficient TDD/TDMA bandwidth utilization. The coefficient selection process 182 examines the large number of computed MMSE-DFE coefficients 179 to identify a much smaller subset of coefficients 183 to be used by the DFE filters. The sparse, time domain filters of DFE 168 efficiently implement these coefficients, avoiding the need for FFT filter techniques that result in TDD/TDMA inefficiency. FIG. 5 shows, in more detail, a schematic block diagram of an example SC-SBE processing unit 198 in order to illustrate the sparse, time domain filter DFE circuit 168. Two outputs are shown from the coefficient computation process 178: a complete set of feed forward filter (FFF) coefficients 180 and a complete set of feedback filter (FBF) coefficients 181. The `complete set of coefficients` refers to the MMSE-DFE coefficient computation 178 providing the number of coefficients required to span the maximum time lag of the CIR estimate 173, which, given the above 20 MHz bandwidth WMAN system example, could result in 160 or more FFF and FBF coefficients, each. The coefficient selection process 182 inputs the complete set of coefficients, 180 and 181, and outputs a considerably smaller number of selected coefficients, 184 and 185, for use in defining a sparse FFF 186 and a sparse FBF 188, respectively. The sparse FFF 186 and the sparse FBF 188 preferably have delay lines equal in length to the number of coefficients in the complete sets, 180 and 181, say greater than 160 each. However, the number of non-zero coefficients in the sparse filters 186 and 188 are determined by the considerably smaller number of selected coefficients, 184 and 185, for example, 16 each. The delayed received signal 171 is input to a multiplier circuit 194 having input gain 176 as the multiplier coefficient. This creates a signal amplitude scaled input 195 to the sparse FFF 186. The amplitude scaling matches the amplitude of the signal component input to the FFF with the amplitude of the symbol decisions 193 that are input to the FBF 188. This allows the sparse FFF output 187 and the sparse FBF output 189 to be directly input to summation circuit 190 to form the MMSE-DFE output 191, the channel equalized signal that is input to the subsequent symbol detection process (not shown). The MMSE-DFE output 191 is also input to symbol decision circuitry 192 that provides the symbol decisions 193 that are in turn input to the sparse FBF 188. As drawn in FIG. 5, the sparse FFF and FBF filters, 186 and 188, and the nonlinear symbol decision circuitry 192 constitute an example of a hard decision-decision feedback equalizer (H-DFE) structure. The H-DFE is well known and has been extensively analyzed. 
Alternative decision feedback equalizer (DFE) structures exist that can readily take advantage of the coefficient selection and sparse time domain filter elements of the SC-SBE method by simply substituting the sparse coefficient FFF and FBF filters defined above for the full coefficient FFF and FBF filters of the alternative DFE structure. An example of an alternative DFE structure is the soft decision/delayed decision integration of the DFE and Viterbi decoder, referred to as the S/D-DFE and developed in S. Lek Ariyavisitakul, and Ye Li, "Joint Coding and Decision Feedback Equalization for Broadband Wireless Channels", IEEE J. on Selected Areas in Communications, Vol. 16, No. 9, December, 1998. They indicate the S/D-DFE provides an approximate 3 dB performance improvement over the H-DFE. The performance advantage of the S/D-DFE structure over the H-DFE structure can be expected to be retained when these SC-SBE methods and apparatus are applied to both structures. The coefficient selection process 182 is a pruning of the complete set of coefficients that are output from the MMSE-DFE coefficient computation process 178. The coefficient selection process provides three major benefits: the ability to perform computationally efficient time domain equalization of channels having large multipath delay spreads; the ability to implement arbitrary TDD/TDMA bandwidth allocations using the minimal allocation granularity of one single carrier symbol; and improved overall receiver performance by avoiding the use of FFF and FBF coefficients which decrease the performance. The first benefit is based on the fact that sparse time domain filters can be used to efficiently implement the reduced set of selected filter coefficients, the number of selected filter coefficients probably needs to be less than 32. The second benefit is based on the fact that time domain filters, in general, allow arbitrary TDD/TDMA bandwidth allocations with the minimal granularity, e.g., one single carrier (SC) symbol. As discussed above, in contrast to the block processing frequency domain filtering techniques, standard time domain filtering does not impose block restraints on the TDD/TDMA allocation granularity, i.e., there is no additional computation cost for arbitrary start/stop allocations into a TDD time frame. The third benefit is improved overall receiver performance, in terms of the nature of the typical CIR of a WMAN system and the coefficient selection process. That the symbol error rate versus signal to noise ratio (SER versus SNR) performance can be improved with fewer taps is evident from the trivial example of the ideal additive white Gaussian noise (AWGN) channel. The DFE filter configuration that achieves the optimum performance for an AWGN channel is known to be equivalent to an all-pass filter. For the DFE to be equivalent to an all-pass filter requires that the tap selection algorithm select one FFF tap and zero FBF taps. That the performance is typically improved is evident in the nature of the WMAN RF propagation CIR. An optimally sampled CIR typically has a few clusters of coefficients with each cluster consisting of only a few significant coefficients. This sparsely clustered feature of a WMAN propagation channel CIR is due to the reflections near either the transmit or receive antennas being convolved with reflections far from either antenna. The desired FFF and FBF filters mimic this sparsely clustered feature of the WMAN CIR. 
Setting maximums of 16 FFF and 16 FBF coefficients is a conservative design for such channels. However, the maximum allowed number of coefficients is not critical and can be left up to the implementation design engineer based on detailed engineering design considerations. For example, extensive computer simulations with accepted models for WMAN channels indicate that acceptable performance is obtained with the maximum number of allowed coefficients set anywhere between 8 and 32. This leaves the relatively rare cases where, in order to obtain the best DFE receiver performance, the WMAN CIR demands more than the allowed maximum of N_sparse=16 or so coefficients each in the FFF and FBF filters. Fortunately, since the example coefficient selection processes discussed below select the most significant coefficients, the performance degradation in these cases will be slight. The slightly diminished DFE performance associated with having only the N_sparse most important coefficients is a good trade for what the coefficient selection process provides: efficient time domain equalization for NLOS WMAN channels with very large delay spreads and the bandwidth efficiency associated with arbitrary TDD allocations. For example, consider an SC-SBE processor that estimates the CIR and computes the MMSE-DFE coefficients based on a received preamble composed of a 256 symbol length Frank CAZAC as defined in the IEEE Standard 802.16a®-2003. In this example, the MMSE-DFE coefficient computation 188 outputs NF=256 computed FFF coefficients 180 and NB=255 computed FBF coefficients 181. The coefficient selection procedure 182 inputs the complete set of computed coefficients and outputs, in this example at most 16 or so most significant FFF coefficients 184 and at most 16 or so most significant FBF coefficients 185. With these selected coefficients, the sparse time domain FFF and FBF filters, 186 and 188, efficiently span a delay spread of 512 symbols while retaining a TDD/TDMA allocation granularity of an individual symbol. FIG. 6 illustrates a schematic block diagram of one example embodiment of the coefficient selection process 182. In this embodiment, the coefficients are selected jointly for the FFF and FBF filters based on the minimization of a DFE performance cost function. The received (RX) signal from timing recovery circuitry 215 is stored in an RX signal buffer 300. The complete set of computed FFF and FBF coefficients, 180 and 181, are stored in a computed coefficient buffer 302. Portions of the RX signal that contain known data 301 and a test FFF and FBF coefficient subset 303 are input to a sparse filter DFE computation circuit 304 that provides the DFE output 305 for input to a cost function computation circuit 306. An example of a DFE performance cost function is the averaged error vector magnitude (EVM) that can be computed as the standard error of the DFE output relative to the known data. The DFE performance cost 307 is input to a minimization process 310 that identifies the coefficient selection that minimizes the cost subject to the constraints that the number of FFF coefficients cannot exceed N_FFF 308 and the number of FBF coefficients cannot exceed N_FBF 309. The minimization process 310 can be thought of as performing a non-linear parameter estimation where the parameters being estimated are the addresses defining the coefficient subset, e.g., the test selection 311, that minimizes the DFE performance cost 307. 
The results of the cost minimization coefficient selection, i.e., the sparse coefficient vectors 184 and 185, are output to the sparse FFF and FBF filters, 186 and 188, respectively. A potential problem with the above embodiment of the coefficient selection process 182, as illustrated in FIG. 6, is that its search is exhaustive and may, consequently, require excessive time and computation.
FIG. 7 illustrates a schematic block diagram of a modification of the DFE cost minimization based coefficient selection process that reduces the search time and the associated computation. In this example embodiment, a pre-selection of the computed coefficients to be searched is performed based on coefficient amplitude. The pre-selection consists of examining the computed FFF and FBF coefficients, 180 and 181, to perform a selection 312 of the N_FFF largest amplitude FFF coefficients 313 and a selection 314 of the N_FBF largest amplitude FBF coefficients 315. The N_FFF largest FFF coefficients 313 and the N_FBF largest FBF coefficients 315 are stored in a coefficient buffer 302 that defines the coefficient search space for the minimization process 310. This pre-selection based on amplitude can be expected to have little effect on the DFE coefficient selection, i.e., on the sparse coefficient vectors 184 and 185, since the exclusion of small amplitude filter coefficients generally has little effect on the filter output.
FIG. 8 illustrates a schematic block diagram of a third example embodiment of the coefficient selection process 182. This embodiment requires neither the above iterative DFE performance cost minimization procedures nor any input of the received signal. In this embodiment a computed coefficient is selected based on its amplitude satisfying two conditions: it is one of the N_FFF or N_FBF largest amplitude coefficients of the computed FFF or FBF coefficients, respectively; and it is larger than K*σ_FFF or K*σ_FBF, where σ_FFF and σ_FBF are the standard deviations of the NF−N_FFF smallest amplitude computed FFF coefficients and of the NB−N_FBF smallest amplitude computed FBF coefficients, respectively. A threshold parameter input 331, represented by the variable K, provides protection against coefficient computation noise. NF and NB are the numbers of computed FFF and FBF coefficients, 180 and 181, respectively. N_FFF and N_FBF, 308 and 309, are the maximum numbers of non-zero FFF and FBF coefficients, 184 and 185, respectively, to be selected. As illustrated in FIG. 8, the computed FFF coefficients 180 are sorted by amplitude sorting circuitry 316 into the N_FFF largest coefficients 317 and the NF−N_FFF smallest coefficients 318. Similarly, the computed FBF coefficients 181 are sorted by amplitude sorting circuitry 320 into the N_FBF largest coefficients 321 and the NB−N_FBF smallest coefficients 322. The sets of smallest coefficients, 318 and 322, are input to standard deviation circuitry, 324 and 326, that outputs the standard deviations σ_FFF 325 and σ_FBF 327. The threshold comparison circuitry 328 selects the non-zero sparse filter FFF coefficients 184 as the subset of the N_FFF largest computed FFF coefficients 317 that are also greater than K*σ_FFF. Similarly, the threshold comparison circuitry 330 selects the non-zero sparse filter FBF coefficients 185 as the subset of the N_FBF largest computed FBF coefficients 321 that are also greater than K*σ_FBF.
Comparing the above three example embodiments of the coefficient selection process 182, diagrammed in FIGS.
6, 7 and 8, the following observations can be made. All three embodiments input the complete set of computed coefficients, 180 and 181, from the MMSE-DFE coefficient computation process 178 and output the selected filter coefficients, 184 and 185, which contain at most N_FFF and N_FBF non-zero coefficients, respectively. The embodiments diagrammed in FIGS. 6 and 7 require input of the RX signal and perform an iterative non-linear cost minimization to achieve the coefficient selection. The embodiment diagrammed in FIG. 7 performs a pre-selection based on coefficient amplitude to reduce the coefficient search space and the associated computations. The embodiment diagrammed in FIG. 8 selects the coefficients based solely on coefficient amplitude; it can be expected to require the least computation and to be the simplest to implement. The embodiment of FIG. 6 is useful to illustrate the fundamental concept of the coefficient selection process 182, which is to eliminate computed coefficients so that sparse time domain filters can be used in the DFE with either a DFE performance improvement or an insignificant performance loss. However, the embodiment of FIG. 6 is anticipated to exhibit an excessive computational demand. On the other hand, the embodiments of FIGS. 7 and 8 are practical implementations that have been extensively tested by means of computer simulation using accepted WMAN CIR models. Some of these simulation results are described in Russell McKown, "802.16e Proposal: Link Performance of WirelessMAN-SCa Mobile Subscriber Stations", IEEE c802.16e-03/19r2, Mar. 11, 2003, which is incorporated herein by reference.
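To make the FIG. 8 style selection concrete, the following short sketch implements the amplitude-sort-plus-threshold rule in Python. It is an illustrative reconstruction, not the patented implementation: the function name, the synthetic test taps, and the use of numpy are invented for the example.

import numpy as np

def select_sparse_coeffs(w, n_keep, K):
    # Keep at most n_keep largest-amplitude taps that also exceed K times
    # the standard deviation of the remaining (smallest) taps; zero the rest.
    w = np.asarray(w, dtype=complex)
    order = np.argsort(np.abs(w))        # indices sorted by amplitude, ascending
    small, large = order[:-n_keep], order[-n_keep:]
    sigma = np.std(np.abs(w[small]))     # noise estimate from the small taps
    sparse = np.zeros_like(w)
    keep = large[np.abs(w[large]) > K * sigma]
    sparse[keep] = w[keep]
    return sparse

# Example: 256 computed taps with a sparsely clustered CIR-like response,
# pruned to at most 16 significant taps.
rng = np.random.default_rng(0)
taps = 0.01 * (rng.normal(size=256) + 1j * rng.normal(size=256))
taps[[0, 3, 40, 200]] = [1.0, 0.5, 0.3j, 0.2]
print(np.count_nonzero(select_sparse_coeffs(taps, n_keep=16, K=4.0)))

The same routine would be run separately on the FFF and FBF coefficient vectors, mirroring the two parallel sorting and threshold paths of FIG. 8.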
{"url":"http://www.faqs.org/patents/app/20130070834","timestamp":"2014-04-17T00:57:30Z","content_type":null,"content_length":"79662","record_id":"<urn:uuid:34fb09a9-814d-4968-968b-b08e28ca3b26>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Imre Simon Passed Away Recently
(Janos Simon emailed me this information that should interest readers of this Blog.)
Imre Simon, a distinguished Hungarian-born Brazilian Theoretical Computer Scientist, passed away August 12. Most of his scientific accomplishments were in "European" Theoretical Computer Science--his main research area was combinatorics of words: he was responsible for introducing the formal study, in Theoretical Computer Science, of the algebraic structure that is now known as the "Tropical Semiring". [The structure is the abstraction of the Kleene operations on languages: sum, product (without inverses, but with the usual distributive properties), and star. The name "tropical" is an allusion to Brazil.]
Still, his first result in Theory, initially published as a University of Waterloo Technical Report, is a study of the complexity of the Davis-Putnam procedure for proving tautologies (I. Simon, On the time required by the Davis-Putnam tautology recognition algorithm. Notices Amer. Math. Soc. 18 (1971) 970.) He shows that the naive implementation runs in exponential time. I believe that this was the first formal result in proof complexity in the West.
Another result, which may be known to our community, is his 1978 FOCS paper (I. Simon, Limited subsets of a free monoid, in Proceedings of the 19th Annual Symposium on Foundations of Computer Science, IEEE 19 (1978).) He considers the following problem on finite automata: Given a regular expression R, consider the language R* = I + R + R^2 + ... + R^i + .... It is easy to get a procedure that tests whether R = R*. (Exercise for the interested reader!) Now ask the "next" question: is R* = I + R + ... + R^i (instead of an infinite union, just the union of the first i powers)? This is also easy, for any fixed i. (Exercise for the reader who is still interested.) The question that Imre solved was: Is it decidable whether there is an i such that R* = I + R + ... + R^i?
The answer is Yes.
Of course, another question is "why does anyone care?" The answer to that is that the proof is very nice--actually Imre gave two proofs, one combinatorial and one algebraic. More importantly, this is a case where algebraic techniques can be imported to reveal hidden structure, and help attack questions of decidability and complexity. The strategy has proven to be quite important and successful in other contexts.
A relatively recent biographical sketch and bibliography can be found in the Festschrift for his 60th birthday that appeared as a RAIRO special issue.
Imre also had an important role in the development of the Theoretical Computer Science community and, more generally, academic Computer Science in Brazil--in particular the CS Departments at USP (Sao Paulo University) and UNICAMP (University of Campinas) have benefited from his energy, organization and dedication. He was a coauthor of the first monograph on Theoretical Computer Science [T. Kowaltowski, C. Lucchesi, I. Simon and J. Simon, Aspectos Teóricos da Computação. Projeto Euclides. Livros Técnicos e Científicos Editora, Rio de Janeiro (1979).
Prepublished on the occasion of the 11º Colóquio Brasileiro de Matemática, IMPA, Rio de Janeiro (1977)], was an advisor and mentor to numerous young Brazilian scientists, helped launch the LATIN series of Theory Conferences, was instrumental in bringing to Brazil visitors like Schützenberger, Bollobás, Adi Shamir, Lessig, and Benkler, and was an effective booster of the Brazilian Computer Science community both in the Brazilian science establishment and in the Brazilian government. He was also a generous and unselfish person, and a personal friend. He will be missed.
5 comments:
1. Imre Simon also had an interest in some social aspects of computer science and I've been fortunate enough to attend one of his lectures on this subject at UNICAMP, Brazil. He will certainly be missed.
2. Are Imre and Janos related?
3. No. But we were good friends.
4. Just portuguese-proofing the post: Aspectos Teóricos da Computação. Projeto Euclides. Livros Técnicos e Científicos (Theoretical Aspects of Computing. Euclides Project. Technical and Scientific Books).
5. I know this was posted ages ago but I just stumbled upon it. I'm an Applied Math student in Mexico City. I'm working on my thesis for my bachelor degree. It's about tropical cryptography. I just wanted to drop by and pay my respects :)
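A small aside for readers meeting the tropical semiring for the first time: replacing the usual (plus, times) by (min, +) turns matrix multiplication into shortest-path relaxation, which gives a feel for the structure named above. The sketch below is only illustrative (Python; the three-node graph is made up) and is not a rendering of Simon's results.

INF = float("inf")

def tropical_matmul(A, B):
    # Matrix product over the tropical semiring: "plus" is min, "times" is +.
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Weighted adjacency matrix (INF = no edge, 0 on the diagonal).
W = [[0, 5, INF],
     [INF, 0, 2],
     [1, INF, 0]]

W2 = tropical_matmul(W, W)
print(W2[0][2])   # cheapest 0 -> 2 route within two edges: 5 + 2 = 7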
{"url":"http://blog.computationalcomplexity.org/2009/08/imre-simon-passed-away-recently.html","timestamp":"2014-04-16T08:04:13Z","content_type":null,"content_length":"163891","record_id":"<urn:uuid:2c4b49ec-5e5c-484e-9cac-7e78ca7b6fcf>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: IV est. /w binary endogenous and binary dependent var
From jean ries <ries@ires.ucl.ac.be>
To statalist@hsphsun2.harvard.edu
Subject Re: st: IV est. /w binary endogenous and binary dependent var
Date Thu, 09 Dec 2004 14:07:56 +0100
I had a look at biprobit, too. The syntax seems compatible with what Guido wants to do, e.g.,
biprobit (y = w x1-xn) (w = z x1-xn)
but it wasn't clear to me from a quick perusal of the manual entry that biprobit will properly handle the fact that the w that appears in the first eqn is the dependent variable in the second. The help file refers to this as a "Seemingly unrelated bivariate probit model", which makes me think that it won't do what Guido wants. Any biprobit experts out there who can clarify this?
{"url":"http://www.stata.com/statalist/archive/2004-12/msg00321.html","timestamp":"2014-04-18T13:15:33Z","content_type":null,"content_length":"7334","record_id":"<urn:uuid:282f408d-61ad-45fb-9190-1b6ee893fd67>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Another Goodness of Fit test....GRRRR
August 18th 2008, 08:08 PM #1
I'm so confused with this goodness of fit stuff. I'm trying to solve the following problem...can anyone help??
The safety director of Honda USA took samples at random from the file of minor accidents and classified them according to the time the accident took place. Using the goodness-of-fit test and the .01 level of significance, determine whether the accidents are evenly distributed (uniform) throughout the day. Write a brief explanation of your conclusion.
Time of accident / Number of Accidents:
8 up to 9 AM
9 up to 10 AM
10 up to 11 AM
11 up to 12 PM
1 up to 2 PM
2 up to 3 PM
3 up to 4 PM
4 up to 5 PM
August 18th 2008, 09:20 PM #2
You can use a chi-square test for goodness of fit. The null hypothesis is H0: $p_1 = p_2 = p_3 = p_4 = p_5 = p_6 = p_7 = p_8 = \frac{1}{8}$, where $p_i$ is the probability of an accident in the ith hourly interval, for i = 1, 2, ..., 8. The chi-square test statistic will have 8 - 1 = 7 degrees of freedom. Now calculate the value of $X^2$ and test it for significance at the 0.01 level: $X^2 = \sum_{i=1}^{8} \frac{(n_i - n p_i)^2}{np_i}$, where n is the total number of accidents and $n_i$ is the number of accidents in the ith hourly interval (you should be familiar with this formula and where it comes from).
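To see the arithmetic end to end, here is a short Python sketch. The accident counts below are hypothetical, since the original post's counts are not shown, and the use of scipy.stats for the 0.01-level critical value is an assumption about available libraries.

from scipy.stats import chi2

counts = [6, 9, 7, 8, 12, 9, 10, 11]    # hypothetical n_i, one per hourly interval
n = sum(counts)                          # total number of accidents
expected = n / len(counts)               # n * p_i with p_i = 1/8

X2 = sum((obs - expected) ** 2 / expected for obs in counts)
critical = chi2.ppf(0.99, df=len(counts) - 1)   # 7 df, about 18.48

print(f"X^2 = {X2:.2f}, critical value = {critical:.2f}")
print("reject H0 (not uniform)" if X2 > critical else "fail to reject H0")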
{"url":"http://mathhelpforum.com/advanced-statistics/46216-another-goodness-fit-test-grrrr.html","timestamp":"2014-04-18T14:12:58Z","content_type":null,"content_length":"39752","record_id":"<urn:uuid:2058ab9c-44b7-4e52-805a-d5366801f418>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
arbitrary sort algorithm
April 5th, 2013, 07:15 PM #1
Hi, does anyone know of a C++ sort algorithm or library that can take input of a bunch of rows of data and then sort the rows by an arbitrarily defined order of one of the columns, i.e. sort rows by the value of the first column in this order (boba, bobc, bobe, bobx), etc.?
Last edited by Jarwulf; April 5th, 2013 at 08:31 PM.
April 7th, 2013, 02:08 AM #2
Re: arbitrary sort algorithm
> a C++ sort algorithm or library that can take input of a bunch of rows of data and then sort rows by an arbitrarily defined order of one of the columns
You can quite easily sort a 2D-array using the standard C++ sort algorithm. If the 2D-array has a normal C++ memory layout it's most efficient to first create an array holding pointers to the rows. Then this row pointer array is sorted using a sort criterion which decides the pairwise order between row pointers based on row contents. Finally the original 2D-array is copied to a new 2D-array according to the sorted row order available in the row pointer array. Alternatively you do this in place in the original using some swapping scheme.
Note that if the 2D-array is often handled row-wise it may be convenient not to use the standard C++ memory layout, but to switch to a 1D-array-of-pointers-pointing-to-1D-arrays-of-elements data structure. This very much simplifies the sorting process described above because the row pointer array now already exists as part of the 2D-array.
Last edited by nuzzle; April 7th, 2013 at 03:44 AM.
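The "arbitrarily defined order" part of the question is separate from the row-pointer trick: map each allowed key to its rank, then sort on that rank. Here is a minimal sketch of the idea (in Python for brevity; the same comparator idea carries over directly to std::sort in C++, and the row data are invented):

# Desired order of the first column: boba < bobc < bobe < bobx.
order = {key: rank for rank, key in enumerate(["boba", "bobc", "bobe", "bobx"])}

rows = [
    ["bobe", 3, "gamma"],
    ["boba", 1, "alpha"],
    ["bobx", 7, "delta"],
    ["bobc", 2, "beta"],
]

rows.sort(key=lambda row: order[row[0]])   # sort rows by rank of first column
print([row[0] for row in rows])            # ['boba', 'bobc', 'bobe', 'bobx']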
{"url":"http://forums.codeguru.com/showthread.php?536049-arbitrary-sort-algorithm&p=2112335","timestamp":"2014-04-17T11:17:31Z","content_type":null,"content_length":"67805","record_id":"<urn:uuid:f61654e9-08a1-4167-b7ee-515f5034cc79>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
This problem gives a fascinating insight into topology and various ideas from higher mathematics. It is ideal as a guided class activity or a solitary workout for the more able maths student. It will provide interesting insights to those interested in pursuing mathematics beyond school. The ideas raised in this problem would make for interesting visual displays of mathematics.
Possible approach
The problem naturally splits into two parts. First it is crucial that students understand how the square relates to the physical torus; then they can begin to concentrate on the colouring aspect. A good way to test understanding of the first part is to ask students to give explanations of the setup to each other. Clear explanations will give evidence of understanding; students must realise that closed loops drawn on the torus will yield lines which pass through opposite points on the squares. You can draw some patterned squares which don't give nice pictures on the torus to reinforce the point that opposite sides on the squares are to be identified (some suggestions are shown in the key questions).
The second part of the problem is best approached practically, with students being encouraged to draw designs on squares and work through the possible colourings. There are two levels of sophistication that can be used. At one level students can draw designs on the square and try to work out the minimal colouring. (Note again that for a design to be valid, the lines must intersect the opposite sides of the square at the correct corresponding points.) At the highest level, students can try to create patterns with the specific properties of needing 5, 6 and 7 colourings.
Although the thinking level in this problem is quite high, the content level is relatively low; the problem could be attempted by younger students, perhaps in a maths club context. The design possibilities for this task are interesting, and perhaps students could try to draw the designs from various squares onto tori, or vice versa.
Key questions
Do you understand how the squares relate to tori?
Can you see why the images below don't give rise to closed loops on tori?
Possible extension
There are, in fact, no designs which need more than 7 colours to fill. Although the proof will be beyond students, interested students could research this idea on the internet. Alternatively, students could try to find as many topologically different patterns as possible which require 7 colours to fill.
Possible support
You could first, or instead, try the problem Painting By Numbers which raises many of the interesting colouring ideas of this problem but without the problems raised by the topology of the torus.
{"url":"http://nrich.maths.org/7027/note?nomenu=1","timestamp":"2014-04-21T09:57:58Z","content_type":null,"content_length":"6063","record_id":"<urn:uuid:35366fec-da04-41c8-8192-8ba20f2640a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
László Rédei
Born: 15 November 1900 in Rákoskeresztur, Budapest, Hungary
Died: 21 November 1980 in Budapest, Hungary
László Rédei's name also appears as Ladislaus Rédei; in particular it is this version of his name which appears on his papers in the 1940s. He was brought up in Budapest, the town of his birth. He attended school in Budapest, then studied at the University of Budapest. He continued to undertake research at the University of Budapest and published his first paper in 1921. The paper was Existence theorem for the primitive root of the congruence x^(φ(p^α)) ≡ 1 (mod p^α) (Hungarian). In the following year he was awarded his doctorate but, before the award of his doctorate, Rédei had already become a secondary teacher of mathematics, taking up his first teaching appointment in 1921.
For nearly twenty years Rédei worked as a secondary school teacher. At first he taught in schools in small towns but eventually he was appointed to teach at a school in Budapest. It is remarkable that during these years, as well as undertaking the demanding job of school teaching, he was undertaking research and publishing papers. These papers are impressive both for their quality and for the number which he published during this period. By 1940, the year he moved from secondary school teaching to become a lecturer at Szeged University, Rédei had published over 35 papers on algebraic number theory, particularly on class groups of quadratic number fields. He may have remained outside universities for nearly 20 years, but he did achieve high recognition for his work in other ways. He was awarded an habilitation from Debrecen in 1932, studied at the University of Göttingen with a Humboldt scholarship in the academic year 1934-35, and was awarded the Julius König medal in 1940.
To understand how a vacancy occurred at the University in Szeged we need to look at the events which began while Rédei was undertaking research in Budapest. Cluj (known also by its German name, Klausenburg, and its Hungarian name, Kolozsvár) had, with the rest of Transylvania, been incorporated into Romania in 1919. The University in Cluj, which had been named the Franz Joseph University since 1881, became a Romanian institution and was officially opened as such by King Ferdinand on 1 February 1920. The Hungarian university in Cluj moved first to Budapest, then to Szeged. In 1940, Hungary captured the part of Romania containing Cluj, and the Hungarian university was moved back from Szeged to Cluj. A new university was founded in Szeged in the same year and many of the staff chose to remain in Szeged and work at the new university rather than move to Cluj. However Gyula Szőkefalvi-Nagy left the Bolyai Institute in Szeged and moved to Cluj, with Rédei becoming his replacement. In 1941 Rédei was appointed to the Chair of Geometry in Szeged but later he was appointed to the Chair of Algebra and Number Theory. He remained at the University of Szeged until 1967.
Let us look now at Rédei's work on algebraic number theory. His first papers were devoted to providing new proofs of the quadratic reciprocity law. He then moved on to what would be his main work for around 25 years. Gauss had proved that the number of even invariants of the class group of a quadratic number field is one less than the number of prime factors occurring in the discriminant of the number field.
However, when Rédei started work on the problem there was no information on the size of the cyclic components. In the early 1930s he obtained a formula for the number of cyclic components which have order at least 4. The paper [1] examines the work which led up to the solution of the problem by Rédei in 1953:-
In 1953 L Rédei published his famous article "Die 2-Ringklassengruppe des quadratischen Zahlkörpers und die Theorie der Pellschen Gleichung", after many years of investigation of Pell's equation. He gave a unified theory for the structure of class groups of real quadratic number fields and conditions for solvability of Pell's equation and other indeterminate equations.
This was not the only problem concerning quadratic number fields which Rédei investigated over this period. Between 1936 and 1942 he looked at the problem of determining which real quadratic number fields Q(√d) have a ring of integers which is a Euclidean ring. There are actually 21 such fields but Rédei did not achieve this classification. He did however publish a number of papers such as Über den Euklidschen Algorithmus in reell quadratischen Zahlkörpern (Hungarian) (1940), Über den Euklidischen Algorithmus in reellquadratischen Zahlkörpern (1941), and Zur Frage des Euklidischen Algorithmus in quadratischen Zahlkörpern (1942). In these he found several of the 21 cases, also showing that many others do not have a Euclidean ring of integers. He did, however, get one of these wrong, for he 'proved' in the last of the three papers we mentioned that Q(√97) has a ring of integers which is a Euclidean ring. This error was only spotted in 1952.
The other two main areas to which Rédei contributed are group theory and semigroup theory. In group theory he worked for many years on factorisations of finite abelian groups, looking at properties of abelian groups in which every element has a unique factorisation as a product of elements, one from each of a number of specified subsets of the group. His most general result in this area is the subject of the paper Neuer Beweis des Hajósschen Satzes über die endlichen Abelschen Gruppen (1955). He also worked on finite p-groups, and a major text Endliche p-Gruppen was published in 1989, nine years after his death. A reviewer writes:-
The classical approach to the study of p-groups consists in the investigation of their subgroups and central series. The author presents a new approach to the investigation of finite p-groups. It is based on the notion of a basis (of minimal length) of an arbitrary p-group.
One of Rédei's most important contributions to semigroup theory is his proof that every finitely generated commutative semigroup is finitely presented. This result appears in his book Theorie der endlich erzeugbaren kommutativen Halbgruppen (1963), which was translated into English as The theory of finitely generated commutative semigroups (1965). However, he contributed numerous other significant results about semigroups, for example classifying all semigroups whose proper subsemigroups are groups and providing a wide range of interesting examples of semigroups.
Let us mention other important books written by Rédei. In 1965 he wrote Begründung der euklidischen und nichteuklidischen Geometrien nach F Klein, which was translated into English and published in 1968 as Foundation of Euclidean and non-Euclidean geometries according to F Klein. There is also Lückenhafte Polynome über endlichen Körpern (1970), translated into English as Lacunary polynomials over finite fields (1973).
L Carlitz writes in a review:-
The author has written this book with great care. It is by no means easy to read. However the remarkable results and methods will repay careful study.
Perhaps his most famous textbook is Algebra, written in Hungarian and published in 1954, with a German translation being published in 1959. Paul Halmos, reviewing the Hungarian edition, writes:-
The book is intended to be both a university text and a reference volume. It treats its subject carefully, systematically, and exhaustively. The exposition is, in principle, self-contained. There are no exercises, but there are many examples and counterexamples designed to supply the concrete background needed for the understanding of the abstract material. Whenever possible, the author proceeds from the general to the special.
Finally we quote from [3] concerning Rédei as a teacher:-
During the almost thirty years he spent at Szeged University, he had a great influence on his pupils. In his lectures he was hardly interested in the volume of the presented material, but in putting it in the proper light. He lectured without the slightest trace of rhetoric, but most of what he said was clear even for the weaker students and at the same time it contained many stimulating remarks for the good ones. He was able to create an excellent scientific atmosphere around him. He always felt his pupils to be collaborators, and never refused to learn from them. A whole generation of Hungarian algebraists can be considered as more or less direct pupils of his.
Article by: J J O'Connor and E F Robertson
JOC/EFR © August 2007, School of Mathematics and Statistics, University of St Andrews, Scotland
{"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Redei.html","timestamp":"2014-04-17T12:42:43Z","content_type":null,"content_length":"17758","record_id":"<urn:uuid:522a4859-6104-4e22-9077-55bba7c9aec7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Cut And Shoot, TX Algebra 2 Tutor
Find a Cut And Shoot, TX Algebra 2 Tutor
...I care about my students, their needs, and their concerns. I have extensive experience in all aspects of computers. My experience includes installing and monitoring hardware and devices as well as setting up networking.
54 Subjects: including algebra 2, chemistry, reading, English
I am a former classroom teacher that is looking to continue to help children and adults learn and maximize their future opportunities. I taught 6-12th grade reading and language arts, math, pre-algebra and high school English. I also taught adult education courses to help adults earn their GEDs.
39 Subjects: including algebra 2, Spanish, English, chemistry
I stress the learning of problem-solving skills above and beyond just learning facts. If you are willing to try and are open to new approaches, I can help you develop skills that can last a lifetime. I have a Ph.D. in Analytical Chemistry and have many years of industry experience using both analytical and organic chemistry.
6 Subjects: including algebra 2, chemistry, geometry, algebra 1
...I have officially taught Theater and Speech at the High School level in an economically disadvantaged area. From that experience I developed a life coaching style of tutoring. Because of my background in theater and the military I was able to tutor students in many subjects.
21 Subjects: including algebra 2, English, reading, algebra 1
...I performed with the Stratford High School Orchestra during my four years in high school under the direction of Dr. Michael Alexander, who is now the head conductor at Baylor University, and received lessons from David Klingensmith, a Houston-based jazz bassist. While playing at Stratford, the ...
20 Subjects: including algebra 2, chemistry, biology, GRE
{"url":"http://www.purplemath.com/Cut_And_Shoot_TX_algebra_2_tutors.php","timestamp":"2014-04-20T11:32:54Z","content_type":null,"content_length":"24321","record_id":"<urn:uuid:bc23161c-a01e-47c5-9ad0-e62cc49647c3>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Tech Briefs 10 Million DOF CFD Solutions on a Laptop In our News of July 15, 2006, we illustrated the performance of our algebraic multi-grid solver in CFD. However, since then, we have significantly improved the solver for large industrial CFD For example, the turbulence k-epsilon exhaust manifold model (shown in the above figure), in which the steady-state fluid flow is solved with about 1.5 million 3D elements, is now solved in only 53 minutes clock time, using only 2 GB memory, on a single processor PC. The robustness of the solution procedure has also been improved so that the default solver parameters (i.e., relaxation factors and convergence tolerances) will work for most problems. To further illustrate the efficiency of our solver, the graphs below show the solution times (clock times) and memory usage when solving the steady-state manifold and turnaround duct problems, previously described in our September 28, 2004, and July 15, 2006 News. For the graphs below, a single processor 3.2GHz PC costing under $2,000 was used. These solution times can be reduced by using a multi-processor machine. Of course, the overall efficiency of our CFD solutions will also be of much benefit in the solution of complex FSI problems using ADINA.
{"url":"http://www.adina.com/newsgD014a.shtml","timestamp":"2014-04-18T08:22:02Z","content_type":null,"content_length":"14627","record_id":"<urn:uuid:a04100f7-8126-41cb-84b1-294fdf662aae>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Woodlynne, NJ Math Tutor
Find a Woodlynne, NJ Math Tutor
...I was originally an engineer for a helicopter company for nearly 4 years and I resigned to start a career in education. I found little fulfillment in the business world especially because I didn't believe I was having a strong positive impact on society. This is not to say that I did not have a...
16 Subjects: including precalculus, algebra 1, algebra 2, calculus
...Each teacher received special training on how to aide students with a variety of differences, including ADD and ADHD. There and since I have worked with several students with ADD and ADHD both in their math content areas and with executive skills to help them succeed in all areas of their life. I have tutored test taking for many tests, including the Praxis many times.
58 Subjects: including calculus, differential equations, biology, algebra 2
...I deeply enjoy helping students pick the best university for their learning and career goals. I have Pennsylvania Level II teaching certifications in English 7-12, Math 7-12, and Health K-12. I am a PhD student in education who has finished all of her coursework and preliminary exams.
47 Subjects: including SAT math, precalculus, ACT Math, piano
...A student who wants to learn music theory will inevitably learn a lot of ear training and sight singing skills. The emphasis for a theory student, though, would be on writing and analysis. Learn to compose in your own way, at your own pace, from a classically trained composer.
8 Subjects: including algebra 1, algebra 2, prealgebra, Java
...I have experience tutoring in math (up to Algebra II), reading, computer applications and SAT/ACT test prep tutoring. I also have two years of teaching experience as a teacher at The Young Women's Leadership School at Rhodes where I taught Middle School Math my first year and Algebra II this pas...
37 Subjects: including probability, GRE, SAT math, reading
{"url":"http://www.purplemath.com/woodlynne_nj_math_tutors.php","timestamp":"2014-04-19T17:35:40Z","content_type":null,"content_length":"24120","record_id":"<urn:uuid:ab838e3c-a7cc-4729-a14f-a996399702b7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Harper, WA Math Tutor
Find a Harper, WA Math Tutor
...I am detail oriented, and very focused on ensuring that whomever I am working with has a comprehensive, worthwhile and enjoyable experience. I have worked as a laboratory chemist and as an instructor at Tacoma Community College for several years. I have also taught high school level sciences and mathematics.
12 Subjects: including geometry, ASVAB, algebra 1, algebra 2
...I attended the University of Arizona where I received two majors in Marketing and Entrepreneurship and earned a degree in Business Administration. I am now applying to grad schools to receive my masters in teaching secondary mathematics. My original major in college was Math Education, and I switched to business my Freshmen year.
4 Subjects: including algebra 1, algebra 2, geometry, prealgebra
I am a former math teacher with 7 years of experience teaching middle and high school. I have 18 years of tutoring experience with students of all ages, including adults. I have a Bachelors Degree in Mathematics and a Masters Degree in Math Education.
16 Subjects: including SAT math, algebra 1, algebra 2, geometry
...The students I have tutored tackle Precalculus problems with a sense of confidence and deep understanding. I have a bachelor's degree in computer engineering. The course required high scores in calculus for acceptance and included two years of advanced math including calculus.
16 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry
...I'm very interested in continuing with these subjects as well as expanding to history, other sciences (chemistry), and software with which I am adept (MATLAB, Excel, LabView). My teaching style is very relaxed, but to the point. I want to help you understand the material, so I'm pretty heavy on c...
25 Subjects: including algebra 1, algebra 2, linear algebra, calculus
{"url":"http://www.purplemath.com/harper_wa_math_tutors.php","timestamp":"2014-04-16T10:10:29Z","content_type":null,"content_length":"23798","record_id":"<urn:uuid:3c926e35-6013-4bf4-862f-d12a2a43625d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Apollonius's Ellipse and Evolute Revisited - The Discriminant of the Related Quartic
One may approach the problem in a more algebraic way. If the arbitrary point within the ellipse is O(x_0, y_0) and a point on the ellipse is given parametrically by P(a cos(t), b sin(t)), then the desired geometric orthogonality condition is that the vector OP be perpendicular to the tangent vector T to the ellipse at P, or
\langle a\cos(t) - x_0,\ b\sin(t) - y_0 \rangle \cdot \langle -a\sin(t),\ b\cos(t) \rangle = 0.   (7)
Simplifying this condition (7) and eliminating sin(t) from it gives the location of the normal points as solutions to the quartic equation in cos(t):
(a^2-b^2)^2\cos^4(t) - 2ax_0(a^2-b^2)\cos^3(t) + (a^2x_0^2 + b^2y_0^2 - (a^2-b^2)^2)\cos^2(t) + 2ax_0(a^2-b^2)\cos(t) - a^2x_0^2 = 0.   (8)
This is equivalent to a quartic equation in x(t) = a cos(t), which in turn is the result of using the equation of the hyperbola of Apollonius to eliminate y from the pair of quadratic equations for the hyperbola and ellipse.
Older books on the theory of equations discuss the nature of the roots of third and fourth degree polynomials, e.g. [4,5]. They develop the theory of the discriminant of the cubic and quartic, generalizing what every student knows about quadratic equations, and which is conveniently programmed into modern computer algebra systems. In fact, as in the quadratic case, the discriminant can be defined as a product of the squared differences of all the distinct pairs of roots of the polynomial, modulo a normalizing constant, so it is zero precisely when there are multiple roots of the polynomial and nonzero otherwise, while its sign is correlated with the number of real roots. If q(x) = a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0, with a_4 \neq 0, is an arbitrary quartic with real coefficients, then one may define its discriminant in terms of these coefficients, the vanishing of which implies the existence of a multiple root as in the quadratic case. The discriminant of q is given by the formula
\Delta = 4\left(\frac{a_2^2}{3} + 4a_0a_4 - a_1a_3\right)^3 - 27\left(-a_1^2a_4 - a_0a_3^2 - \frac{2a_2^3}{27} + \frac{a_1a_2a_3}{3} + \frac{8a_0a_2a_4}{3}\right)^2.   (9)
In our case, from (8) we have
a_4 = (a^2-b^2)^2, \quad a_3 = 2ax_0(a^2-b^2) = -a_1, \quad a_2 = a^2x_0^2 + b^2y_0^2 - (a^2-b^2)^2, \quad a_0 = -a^2x_0^2.   (10)
Substituting (10) into (9) and imposing the condition (5) that (x_0, y_0) is on the evolute, one may show that \Delta = 0 for all 0 \le t < 2\pi. Thus for these points there are either three normals from the point, or two when the point is a cusp. One can also show that, except for the discriminant vanishing on the axes, where the cosine must have at least a repeated root by symmetry, one has \Delta < 0 for points outside the evolute, implying two distinct real roots and hence two normals, and \Delta > 0 for points within the evolute, implying either four distinct real roots (and hence four normals) or two distinct pairs of complex conjugate roots. However, the latter would imply that there are no normals, but as noted in the introduction there are always at least two, so this cannot happen.
The reader interested in seeing more details of this development, and a chance to investigate this problem or similar ones, may download a Maple worksheet at http://www3.villanova.edu/maple/misc/frenetellipse.htm or simply view the worksheet in web page format. This is a beautiful example of how, empowered by a computer algebra system, one can follow one's nose in uncovering elegant mathematical structure that would otherwise be unreachable in practice.
It is also a useful lesson on the increasing importance of the use of computer algebra systems in doing mathematics.
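As a quick numerical companion to (9) and (10), the sign test on the discriminant is easy to script. The Python sketch below uses the compact invariant form Delta = (4 I^3 - J^2)/27, which is algebraically the same as (9); the sample ellipse and test points are arbitrary.

def quartic_disc(a4, a3, a2, a1, a0):
    # Discriminant of a4*x^4 + ... + a0 via the standard invariants I and J.
    I = 12*a0*a4 - 3*a1*a3 + a2**2
    J = 72*a0*a2*a4 + 9*a1*a2*a3 - 27*a1**2*a4 - 27*a0*a3**2 - 2*a2**3
    return (4*I**3 - J**2) / 27

def normals_disc(a, b, x0, y0):
    # Delta for the quartic (8) in cos(t), with coefficients as in (10).
    a4 = (a**2 - b**2)**2
    a3 = 2*a*x0*(a**2 - b**2)
    a1 = -a3
    a2 = a**2*x0**2 + b**2*y0**2 - (a**2 - b**2)**2
    a0 = -(a*x0)**2
    return quartic_disc(a4, a3, a2, a1, a0)

# Ellipse with a = 2, b = 1: Delta > 0 inside the evolute (four normals),
# Delta < 0 outside it (two normals).
print(normals_disc(2, 1, 0.1, 0.1) > 0)   # point near the center: True
print(normals_disc(2, 1, 1.8, 0.2) < 0)   # inside ellipse, outside evolute: True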
{"url":"http://www.maa.org/publications/periodicals/convergence/apolloniuss-ellipse-and-evolute-revisited-the-discriminant-of-the-related-quartic","timestamp":"2014-04-18T13:07:47Z","content_type":null,"content_length":"107935","record_id":"<urn:uuid:7d746ef3-0bd4-4393-a8cd-7167cde19608>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
NASA: Practical Uses of Math And Science (PUMAS)
View PUMAS Example
Isoperimetric Geometry
The isoperimetric theorem states that: "Among all shapes with an equal area, the circle will be characterized by the smallest perimeter", which is equivalent to "Among all shapes with equal perimeter, the circle will be characterized by the largest area." The theorem's name derives from three Greek words: 'isos' meaning 'same', 'peri' meaning 'around' and 'metron' meaning 'measure'. A perimeter (= 'peri' + 'metron') is the arc length along the boundary of a closed two-dimensional region (= a planar shape). So, the theorem deals with shapes that have equal perimeters.
Grade Level: High School (9-12)
Curriculum Topic Benchmarks: M1.4.8, M2.4.9, M5.4.2, M5.4.10, M5.4.11
Subject Keywords: Geometry, Shape, Area, Perimeter
Author(s): Jan Bogaert
PUMAS ID: 01_22_03_1
Date Received: 2003-01-22
Date Revised: 2003-03-24
Date Accepted: 2003-03-26
Example Files
View this example (PDF Document, 89.89 KB, opens in new window)
View this example (Word Document, 60 KB)
Activities and Lesson Plans
As yet, no Activities/Lesson Plans have been accepted for this example.
Teachers' Assessment
On-Line Teachers' Assessment Form
Comment by Ismail Kocayusufoglu on May 24, 2004
"There is an easy way of proving the main part of this example. Here it is.
Let Area(Rectangle) = Area(Circle) (denote A(R) = A(C) for convenience). Show that Perimeter(Circle) < Perimeter(Rectangle) (denote P(C) < P(R)).
Proof. A(R) = A(C) gives r = √(ab/π). We need to show that P(C) < 2(a+b), that is: 2πr < 2(a+b), i.e. πr < a+b, i.e. π√(ab/π) < a+b. So it is enough to show that πab < (a+b)².
Lemma: Given a, b with a > b, then πab < (a+b)².
Proof of Lemma: Since (a-b) > 0, (a-b)² = a² + b² - 2ab > 0, so a² + b² > 2ab, and hence a² + b² > (π-2)ab, since 2 > π-2. Therefore a² + b² > πab - 2ab, so a² + b² + 2ab > πab, i.e. (a+b)² > πab, as claimed. This proves the theorem."
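As a one-off numeric check of the rectangle case treated in the comment above (Python; the side lengths are arbitrary):

import math

a, b = 3.0, 2.0                  # rectangle sides
r = math.sqrt(a * b / math.pi)   # radius of the circle with the same area

print(2 * math.pi * r)           # circle perimeter, about 8.68
print(2 * (a + b))               # rectangle perimeter, 10.0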
{"url":"https://pumas.gsfc.nasa.gov/examples/index.php?id=51","timestamp":"2014-04-21T10:10:36Z","content_type":null,"content_length":"12292","record_id":"<urn:uuid:2ba7c747-00bc-44eb-94d4-5bdc2e85184b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
On Tuesday 08 July 2008, Jay Vaughan wrote:
> > In the link you can see the coordinates and use the spreadsheet
> > attached
> > to his mail to calculate the right x,y,z.
> > This works very well, i've been able to get a fix easy now.
>
> Could you do a step-by-step guide for how to do this, and put it on
> the web somewhere? I think it would be easy to go from your guide to
> an automatic web-page/script that makes it easier for people to do
> this optimization themselves ..
It would be very easy to take the formulae from the spreadsheet in javascript. Alternatively you can use them directly in the script if you glue in the calcs in perl to calculate $posx, $posy and $posz as below. Variable $posacc is the estimated accuracy of the supplied location in m.

sub deg_to_rads {
    # degrees to radians; atan2(1, 1) is pi/4
    return $_[0] * atan2(1, 1) / 45.0;
}

## WGS84 constants and working variables
my $a;       # semi-major axis (m)
my $b;       # semi-minor axis (m)
my $lat;     # latitude
my $lon;     # longitude
my $h;       # height above ellipsoid (m)
my $e;       # first eccentricity
my $N;       # radius of curvature (m)
my ($posx, $posy, $posz, $posacc);

## set lat & lon in degrees
$lat = 47.3;
$lon = 8.5;

## convert to radians for sin / cos to work with
$lat = deg_to_rads($lat);
$lon = deg_to_rads($lon);
$h = 0.0;
$posacc = 150000.0;

# define WGS84 constants
$a = 6378137.0;
$b = 6356752.31424518;

# calc intermediate parameters
$e = sqrt(($a**2 - $b**2)/$a**2);
$N = $a / sqrt(1 - ($e**2 * (sin($lat))**2));

$posx = ($N + $h) * cos($lat) * cos($lon);
$posy = ($N + $h) * cos($lat) * sin($lon);
$posz = ((($b**2 / $a**2) * $N) + $h) * sin($lat);

The whole thing needs reimplementing in a redistributable form. I'll be using it as an excuse to learn python :-) It may take a while since the beer festival's on for the rest of the week. If anyone else wants to do something with it I've added links to the various useful resources to the wiki http://wiki.openmoko.org/wiki/GTA02_GPS
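Since the post closes by proposing a Python rewrite, a plausible translation of the same WGS84 formulas might look like the sketch below. It has not been checked against the original spreadsheet, so treat it as a starting point rather than a reference implementation.

import math

def wgs84_to_ecef(lat_deg, lon_deg, h=0.0):
    # Geodetic latitude/longitude (degrees) and height (m) to ECEF x, y, z (m).
    a = 6378137.0            # WGS84 semi-major axis
    b = 6356752.31424518     # WGS84 semi-minor axis
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    e2 = (a**2 - b**2) / a**2                      # first eccentricity squared
    N = a / math.sqrt(1 - e2 * math.sin(lat)**2)   # radius of curvature
    x = (N + h) * math.cos(lat) * math.cos(lon)
    y = (N + h) * math.cos(lat) * math.sin(lon)
    z = ((b**2 / a**2) * N + h) * math.sin(lat)
    return x, y, z

print(wgs84_to_ecef(47.3, 8.5))   # same test point as the Perl above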
{"url":"http://lists.openmoko.org/pipermail/community/2008-July/020941.html","timestamp":"2014-04-19T22:14:23Z","content_type":null,"content_length":"5053","record_id":"<urn:uuid:22812fe0-17e3-4b43-87bc-dd763ea6060a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Neural and adaptive systems: fundamentals through simulations
Develop New Insight into the Behavior of Adaptive Systems
This one-of-a-kind interactive book and CD-ROM will help you develop a better understanding of the behavior of adaptive systems. Developed as part of a project aimed at innovating the teaching of adaptive systems in science and engineering, it unifies the concepts of neural networks and adaptive filters into a common framework. It begins by explaining the fundamentals of adaptive linear regression and builds on these concepts to explore pattern classification, function approximation, feature extraction, and time-series modeling/prediction. The text is integrated with the industry standard neural network/adaptive system simulator NeuroSolutions. This allows the authors to demonstrate and reinforce key concepts using over 200 interactive examples. Each of these examples is 'live,' allowing the user to change parameters and experiment first-hand with real-world adaptive systems. This creates a powerful environment for learning through both visualization and experimentation.
Key Features of the Text
* The text and CD combine to become an interactive learning tool.
* Emphasis is on understanding the behavior of adaptive systems rather than mathematical derivations.
* Each key concept is followed by an interactive example.
* Over 200 fully functional simulations of adaptive systems are included.
* The text and CD offer a unified view of neural networks, adaptive filters, pattern recognition, and support vector machines.
* Hyperlinks allow instant access to keyword definitions, bibliographic references, equations, and advanced discussions of concepts.
The CD-ROM Contains:
* A complete, electronic version of the text in hypertext format
* NeuroSolutions, an industry standard, icon-based neural network/adaptive system simulator
* A tutorial on how to use NeuroSolutions
* Additional data files to use with the simulator
"An innovative approach to describing neurocomputing and adaptive learning systems from a perspective which unifies classical linear adaptive systems approaches with the modern advances in neural networks. It is rich in examples and practical insight." -James Zeidler, University of California, San Diego
{"url":"http://books.google.com/books?id=jgMZAQAAIAAJ&q=backpropagation&dq=related:ISBN0876648324&source=gbs_word_cloud_r&cad=5","timestamp":"2014-04-16T05:14:31Z","content_type":null,"content_length":"121606","record_id":"<urn:uuid:427ae6fd-9882-4593-a9af-e6286b920a87>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Sum to infinity
May 7th 2011, 11:01 PM #1
The sum to infinity of a geometric progression is 5 and the sum to infinity of another series formed by taking the first, fourth, seventh, tenth, ... terms (that is, U1+U4+U7+U10+...) is 4. Find the common ratio of the first series.
May 7th 2011, 11:11 PM #2
The ratio of the second series is the cube of the first one.
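For completeness, one way the hint plays out (the algebra below is a reconstruction, not part of the original thread): with first term $a$ and ratio $r$, $\frac{a}{1-r}=5$ and $\frac{a}{1-r^3}=4$, so $\frac{1-r^3}{1-r}=\frac{5}{4}$, i.e. $1+r+r^2=\frac{5}{4}$, giving $4r^2+4r-1=0$ and $r=\frac{-1\pm\sqrt{2}}{2}$. Only $r=\frac{\sqrt{2}-1}{2}\approx 0.207$ satisfies $|r|<1$, so it is the common ratio of the first series.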
{"url":"http://mathhelpforum.com/pre-calculus/179840-sum-infinity-print.html","timestamp":"2014-04-21T02:22:18Z","content_type":null,"content_length":"5200","record_id":"<urn:uuid:180fe056-1a8d-4cd0-beaf-0a88b7b7f6bc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
The Solution of the General Initial Value Problem for the Full Three Dimensional Three-Wave Resonant Interaction - Physica D, 1998
Cited by 16 (5 self)
"Hamiltonian Lie-Poisson structures of the three-wave equations associated with the Lie algebras su(3) and su(2, 1) are derived and shown to be compatible. Poisson reduction is performed using the method of invariants and geometric phases associated with the reconstruction are calculated. These results can be applied to applications of nonlinear waves in, for instance, nonlinear optics. Some of the general structures presented in the latter part of this paper are implicit in the literature; our purpose is to put the three-wave interaction in the modern setting of geometric mechanics and to explore some new things, such as explicit geometric phase formulas, as well as some old things, such as integrability, in this context."
- in The Arnoldfest, 1997
Cited by 4 (1 self)
"The integrable structure of the three-wave equations is discussed in the setting of geometric mechanics. Lie-Poisson structures with quadratic Hamiltonian are associated with the three-wave equations through the Lie algebras su(3) and su(2, 1). A second structure having cubic Hamiltonian is shown to be compatible. The analogy between this system and the rigid-body or Euler equations is discussed. Poisson reduction is performed using the method of invariants and geometric phases associated with the reconstruction are calculated. We show that using piecewise continuous controls, the transfer of energy among three waves can be controlled. The so called quasi-phase-matching control strategy, which is used in a host of nonlinear optical devices to convert laser light from one frequency to another, is described in this context. Finally, we discuss the connection between piecewise constant controls and billiards."
- 2002
"Abstract. We provide a brief review of some of the major research results arising from the method of the Inverse Scattering Transform."
- 708
"(will be inserted by the editor) How many types of soliton solutions do we know?"
- 1996
"The plane wave solutions of the three-wave resonant interaction in the plane are considered. It is shown that rank-one constraints over the right derivatives of invertible operators on an arbitrary linear space give solutions of the three-wave resonant interaction that can be understood as a Darboux transformation of the plane wave solutions. The method is extended further to obtain general Darboux transformations: for any solution of the three-wave interaction problem and vector solutions of the corresponding Lax pair, large families of new solutions, expressed in terms of Grammian type determinants of these vector solutions, are given."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4134866","timestamp":"2014-04-19T20:12:00Z","content_type":null,"content_length":"21876","record_id":"<urn:uuid:b5b16b98-9435-4eb1-ae89-11b4f189c2ac>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Charge movement in a magnetic field along the z-axis (into page/out of page)
ohhh! is that regardless of it being a positive or negative charge?
No. What I have described in relation to the RHR applies to positive charges. To consider negative charges you can do one of two things. You can either follow the same steps, but with your left hand, or do a RHR and reverse the resulting vector. I find it easier to use my left hand just so I don't confuse anything!
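The reversal is just the sign of q in F = q v x B; here is a two-line numeric check (Python with numpy assumed; the vectors are arbitrary):

import numpy as np

v = np.array([1.0, 0.0, 0.0])   # velocity along +x
B = np.array([0.0, 0.0, 1.0])   # field along +z (out of the page)

print(+1 * np.cross(v, B))      # positive charge: force along -y
print(-1 * np.cross(v, B))      # negative charge: force along +y (reversed)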
{"url":"http://www.physicsforums.com/showthread.php?t=228187","timestamp":"2014-04-20T16:08:05Z","content_type":null,"content_length":"63416","record_id":"<urn:uuid:d5849c9e-55a7-47d6-b15c-ac40751a4b6d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
The Singapore Model Method for Learning Mathematics
Marshall Cavendish Int (S) Pte Ltd, Singapore
Usually Ships Within One Business Day
Product Code: SMMLM
This monograph serves as a resource book on the Model Method. The main purpose is to make explicit how the Model Method is used to develop students' understanding of fundamental mathematics concepts and proficiency in solving basic mathematics word problems. Through the construction of a pictorial model to represent the known and unknown quantities and their relationships in a problem, students gain better understanding of the problem and develop their abilities in mathematical thinking and problem solving. This will provide a strong foundation for the learning of mathematics from the primary to secondary levels and beyond.
This monograph also features the Mathematics Framework of the Singapore mathematics curriculum and discusses the changes that it has undergone over the past two decades. These changes reflected the changing emphases, needs and challenges in the mathematics curriculum as we entered the 21st century. The monograph is organized into seven chapters.
● Chapter 1 provides an overview of the Mathematics Framework and the Model Method as key features of the Singapore mathematics curriculum.
● Chapter 2 highlights the evolution of the Mathematics Framework over the last two decades.
● Chapters 3 and 4 illustrate the use of pictorial models in the development of the concepts of the four operations as well as fraction, ratio and percentage.
● Chapter 5 explains and discusses how the Model Method is used for solving structurally complex word problems at the primary level.
● Chapter 6 illustrates how the Model Method can be integrated with the algebraic method to formulate algebraic equations for solving problems.
● Chapter 7 concludes the monograph by discussing some perspectives of problem solving that account for the success of the Model Method and the connection between the Model Method and the algebraic method.
Above is an extract from The Singapore Model Method for Learning Mathematics (reproduced with the permission of the publishers).
Our recommendation: This book is a valuable resource for anyone wanting a general overview of the Model Method that includes examples covering all levels of Primary Mathematics for grades 3-6, where it is primarily used, as well as ideas for how to use the Model Method to help students visualize and conceptualize a problem so that they can formulate an algebraic expression to solve it.
{"url":"http://www.singaporemath.com/The_Singapore_Model_Method_for_Learning_Mathematic_p/smmlm.htm","timestamp":"2014-04-20T13:19:15Z","content_type":null,"content_length":"56428","record_id":"<urn:uuid:6fe3eb7a-947f-4cd5-a113-ed4990928a3a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Hoyvin Glavin!

Prominent among my gripes about Rails is its handling of view helpers. These are mixed into the view object to provide easy access from template code, but the price of this convenience is a total lack of namespacing, reminiscent of ye olden PHP days.

What's more is that Rails seems to have an excessively narrow opinion about what constitutes a "view": view helpers are easy to call from HTML and email templates, but there's no consideration paid to other possible outlets from your app, including HTTP headers, command line output, file streams, or email subject lines.

Solutions to the latter problem are divers and arcane, frequently involving an undocumented, hard-to-remember expression that dispatches helpers through the controller instance. Of course, this works if you've actually got a controller instance handy, but isn't so great when you're, say, running a Rake task. And in the first place, since most helper methods are just text filters, it seems silly that they should be coupled to this or that object. Why can't we just call them from anywhere, as the generic utilities that they are?

As it turns out, most of the built-in helper methods are mixed into the ActionView::Helpers module as instance methods. In order to call them as globals, you need to host them in another module where they appear as class methods. And thus, my favorite workaround:

With this in place, you can call your helpers as globally-accessible methods, as follows:

The pleasant thing about this solution is that it's easy to remember, and works from whatever context you're in, whether it's the body of a Rake task, a controller action, a method on a model class, or even your view templates.
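The two code snippets referenced above did not survive extraction. A plausible reconstruction of the idea (a module that extends the helper modules, so their instance methods become class methods) might look like this; the module name Util and the particular helper modules are assumptions, not necessarily what the post used:

```ruby
require 'action_view'

# Host the helpers in a module of our own; `extend` turns the helpers'
# instance methods into class methods on Util.
module Util
  extend ActionView::Helpers::TextHelper
  extend ActionView::Helpers::NumberHelper
end
```

and the corresponding usage:

```ruby
Util.pluralize(3, 'error')       # => "3 errors"
Util.number_to_currency(9.99)    # => "$9.99"
```

Note that helpers which depend on controller or request state (e.g. url_for) won't work this way; the trick is best suited to the pure text filters the post is talking about.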
{"url":"http://hoyvinglavin.com/","timestamp":"2014-04-16T18:57:10Z","content_type":null,"content_length":"69395","record_id":"<urn:uuid:4eac0650-04ee-4de3-8f40-80db3c72a396>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Times Greater Than, Times As Much As

Date: 05/02/99 at 01:39:30
From: Kenny Ng
Subject: As much as & Greater than

I would like to ask you the following questions.

1) A number is five times greater than x. Will this number be 6x or 5x?
2) A number is five times as much as x. Will this number be 5x or 6x?

I myself think that the answer to question 1 should be 6x, and the answer to question 2 should be 5x. But my maths teacher disagrees. Can you help me?

Date: 05/03/99 at 17:06:19
From: Doctor Peterson
Subject: Re: As much as & Greater than

Hi, Kenny. Interesting question! I would say that both are 5x, though I see what you are saying, and there are similar phrases where I would definitely say 6x is correct:

3) increased by 500% (not the amount, but the increase, is 500% of the original value.)
4) 500% greater than x (Note that "50% greater than" clearly means "150% of".)
5) 5 times as much again (Likewise, "half as much again" means to add half to what you already have.)

These could be called "incremental multiplication," where the multiplication gives not the final number but the amount to be added to the original number. The reason I take (1) as only 5x is that I can't picture using "times" in this way in an incremental sense; I wouldn't say "1/2 times more."

This is why we restrict our use of words in math more than in everyday English (or any other language), to avoid ambiguity, so mathematicians are sometimes seen as being too picky about words. These phrases all really belong to the everyday world, and before we can really do math on them they have to be translated into more proper and careful phrases like "five times x." And I would try to avoid saying "five times greater," because it is definitely on the edge, and might be taken either way.

Out of curiosity I just did a quick search for the phrases "times more" and "times greater," and found things like "2.1 times more than" in an ad; "100 times more efficient," "10 times more detail," and "60 times more objects" describing telescopes, "100 times faster than" describing a super-computer, and "10 times greater" in an explanation of the Richter scale - the latter clearly is meant to mean "10 times as much," and I think all are intended that way. So if the phrase is correctly interpreted your way, it's certainly so common to misuse it that I would always ask "do you mean 10 times as much?" before doing the math.

I also searched the Dr. Math archives, and found one case where a question asks "how many times more likely is it", and the "doctor" quietly rephrases it in his answer, saying "58 times as likely." But in the following answer, the phrase "five and a half times greater" is taken as equivalent to "6 and a half times what it was," agreeing with you (with a warning that "many people don't quite grasp those phrases").

So here's my answer: "N times more than X" technically should mean (N+1)X, but is so commonly used to mean NX that it would be dangerous to follow the former interpretation without asking questions. I'm going to keep looking for a dictionary or other authoritative source to support one view or the other (or both, most likely).

- Doctor Peterson, The Math Forum
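Restating the conclusion in symbols (a summary of the answer above, not part of the original exchange):

```latex
N \text{ times as much as } x = N x, \qquad
N \text{ times more than } x \ (\text{incremental reading}) = x + N x = (N+1)x
```

so, for example, "50% greater than x" is 1.5x, while "5 times as much as x" is 5x.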
{"url":"http://mathforum.org/library/drmath/view/52334.html","timestamp":"2014-04-16T19:13:16Z","content_type":null,"content_length":"8601","record_id":"<urn:uuid:b023c922-c637-4652-bc82-a4c431cbf504>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Aventura, FL Math Tutor
Find an Aventura, FL Math Tutor

...Sincerely, Rocio. I would like to be certified in ESL/ESOL because I have been teaching ESL for adults for two years at Stony Point High School. I have also taken the training with the Literacy of Austin group. Teaching English to adults has been a rewarding and interesting experience and allowed me to expand my teaching methods and help the students to reach their potential in a second language.
16 Subjects: including SAT math, algebra 1, algebra 2, chemistry

...All of the students I have tutored thus far enjoy working with me and I have improved their grades significantly. All I ask of my students is simply a desire to learn and work hard. With this, we can overcome any obstacle hindering their progression in the field of science or mathematics.
27 Subjects: including geometry, ACT Math, discrete math, differential equations

...It is my goal to help students master those foundational skills of the elementary curriculum so that they can apply those key, basic principles to every new challenge they meet as they progress as young scholars. I actively engage my students and encourage parents to be an active participan...
17 Subjects: including geometry, English, writing, differential equations

...I currently hold basketball skills camps at local Miami gyms. I am currently working with three 15-year-old JV and Varsity basketball players. I also served as team manager for F.I.U. for two years under Donnie Marsh.
8 Subjects: including geometry, prealgebra, study skills, algebra 1

...Each lesson uses what was learned before. If you don't understand one of the foundational steps, you will get lost later on. Kelvin uses multiple techniques to help students understand the basics and does a thorough review so those building blocks stay fresh.
3 Subjects: including algebra 1, geometry, prealgebra
{"url":"http://www.purplemath.com/Aventura_FL_Math_tutors.php","timestamp":"2014-04-18T11:29:25Z","content_type":null,"content_length":"23918","record_id":"<urn:uuid:3a642525-9225-45fc-a92f-1119b0c9c1d8>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Title: Magnetoelasticity of Fe100-xGex (5 < x < 18) Single Crystals From 81 K to 300 K
Publication Type: Journal Article
Year of Publication: 2009
Authors: Petculescu G, LeBlanc JB, Wun-Fogle M, Restorff JB, Burton WC, Cao JX, Wu RQ, Yuhasz WM, Lograsso TA, Clark AE
Journal: IEEE Transactions on Magnetics
Volume: 45
Pages: 4149-4152
Date: 10/01
ISSN Number: 0018-9464
Accession Number: ISI:000270149700190
Keywords: alloys, constants, elastic constants, gallium alloys, germanium alloys, iron alloys, magnetoelasticity, magnetostriction, surfaces
Abstract: Temperature dependent magnetoelastic properties of Fe100-xGex (5 < x < 18) single crystals have been measured. Tetragonal magnetostriction (3/2)lambda(100) measurements at x = 5.7, 12.1, 14.9, and 17.7 were performed between 78 K and 426 K, and resonant ultrasound spectroscopy measurements were used to determine the shear elastic constant c' from 5 K to 300 K for x = 6.4, 7.2, 10.8, 14.6, 17.7, and 17.9. A clear distinction was observed between the temperature dependencies of lambda(100) for the A2 and D0(3) phases of Fe100-xGex. The elastic constant c' displays a monotonic decrease with concentration through the different phases (6 < x < 18) and at all temperatures. Experimental values of the tetragonal magnetoelastic coupling constant -b(1) at 81 K were remarkably consistent with theoretical values determined by density functional calculations at 0 K.
URL: <Go to ISI>://000270149700190
DOI: 10.1109/TMAG.2009.2025969
{"url":"https://www.ameslab.gov/node/946","timestamp":"2014-04-16T17:12:11Z","content_type":null,"content_length":"22429","record_id":"<urn:uuid:82ab4a0e-8e75-4956-a02f-219aaf942428>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Yan Cao, Associate Professor
Department of Mathematical Sciences
University of Texas at Dallas
P.O. Box 830688, EC 35
Richardson TX 75083-0688
Tel: (972) 883-6458
Email: yan 'dot' cao 'at' utdallas 'dot' edu

Research Interests

My research interests lie in the general areas of computer vision and pattern theory, especially biomedical applications. My current research is focused on shape analysis and shape models in two and three dimensions. People can recognize the same or similar shapes in any circumstance, and they have a good intuition on what is the main structure and what is the variation of a shape. We want the computer to do the same thing. There is a lot of mathematics to be done to achieve this goal. The theory in three dimensions is especially challenging. In two dimensions, the medial axis and the geometric heat equation are powerful tools, but both of these are less satisfactory in three dimensions. Understanding the geometry of three-dimensional shapes still presents many problems.

• Medical image analysis, computational anatomy
• Computer vision, pattern theory
• Shape analysis and shape models
• Geometric PDE
{"url":"https://www.utdallas.edu/~yxc069200/","timestamp":"2014-04-23T07:40:27Z","content_type":null,"content_length":"5383","record_id":"<urn:uuid:28a0eaad-8a1e-4158-8b32-be657210219b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Calling T.Engineer

1. You mean for arbitrary n, m.
2. What do you mean by z?
3. Can you tell me how to evaluate eq(15) to get this result: δ_n,m 2^n n! sqrt(pi)?

If you will show me how they get this result for Hn, Hm, then I can also evaluate it for my equation with Hn * cos(...). But this is my problem: I don't know how they get this general formula.
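For reference, the orthogonality relation being quoted is the standard result for Hermite polynomials with weight e^(-x^2); "eq(15)" itself is not shown in the thread:

```latex
\int_{-\infty}^{\infty} H_n(x)\, H_m(x)\, e^{-x^2}\, dx \;=\; \delta_{n,m}\, 2^n\, n!\, \sqrt{\pi}
```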
{"url":"http://www.physicsforums.com/showthread.php?t=179061&page=4","timestamp":"2014-04-16T07:47:08Z","content_type":null,"content_length":"53612","record_id":"<urn:uuid:11cb2766-e148-4ad5-86db-01d9086803a9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
C program to find the roots of a quadratic equation

This program is written in C, not C++. File should have a .c extension. Make broad use of comments so that I can understand the program. This program should be written with the use of "if" statements, not loops. I have a similar assignment to turn in, so the instructions must be thoroughly followed.

Details about what to do:

Use scanf to read values for double variables a, b, and c, which are assumed to be the coefficients of a quadratic equation. Print the input numbers in the form of a quadratic equation, using the form shown in the example section below. If a is equal to zero, print a message that this is a linear equation. In case b is not zero, there is a single root: -c/b, and if b does equal zero, there is no root. Print the value of this root or print that there is no root. In this case, your program should not do any of the following items. Otherwise, assuming a is not zero, calculate the discriminant d given by

d = b^2 - 4*a*c

If the discriminant is greater than zero, then there are two real roots. Print a message that says this, use the quadratic formula to calculate their values, r1 and r2, and print these two values. In case the discriminant is equal to zero, there is a single repeated real root. Print a message that says this, use the quadratic formula to calculate the value, and print this value. If the discriminant is less than zero, then there are two imaginary roots. Print a message that there are two imaginary roots and then print the real and imaginary part of each root. The real part of both roots is the same: -b/(2*a), while the imaginary parts are sqrt(-d)/(2*a) and -sqrt(-d)/(2*a). Notice that we have d < 0, so -d > 0, and it makes sense to take the square root of -d. (It is important for you to realize that the C language has no built-in facilities to handle complex numbers. We are simply working with real numbers, and calculating real and imaginary parts (both reals) of the roots as complex numbers.)

Sample input and output: You may use a separate run for each input triple of numbers, since we haven't studied loops yet. Below in bold are the three numbers that you enter for each separate run. Everything else is what your program should output. You should try each of these 8 sets of inputs and your answers should be the same. For full credit, your output should look exactly the same as the output shown.

Input 3 numbers for a, b, and c: 0.0 2.0 4.0
This is a linear equation (no quadratic term)
Root: -2.000000

Input 3 numbers for a, b, and c: 0.0 0.0 -2.0
This is a linear equation (no quadratic term)
There is no root

Input 3 numbers for a, b, and c: 1.0 -4.0 4.0
There is a single repeated root
Root: 2.000000

Input 3 numbers for a, b, and c: 1.0 -4.0 -12.0
There are two real roots
Root 1: 6.000000
Root 2: -2.000000

Input 3 numbers for a, b, and c: 1.0 -6.0 25.0
Roots are complex conjugates
Root 1: Real part: 3.000000, imaginary part: 4.000000
Root 2: Real part: 3.000000, imaginary part: -4.000000

Input 3 numbers for a, b, and c: 0.5 0.0 -3.0
There are two real roots
Root 1: 2.449490
Root 2: -2.449490

Input 3 numbers for a, b, and c: 3.5 -4.0 -2.75
There are two real roots
Root 1: 1.626059
Root 2: -0.483202

Input 3 numbers for a, b, and c: 1.0 -3.0 3.0
Roots are complex conjugates
Root 1: Real part: 1.500000, imaginary part: 0.866025
Root 2: Real part: 1.500000, imaginary part: -0.866025
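A minimal sketch that follows this spec and reproduces the sample transcripts is given here. It is an illustration written for this posting, not the BrainMass solution; the "print the equation" step is omitted since the sample runs do not show it.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a, b, c;  /* coefficients of a*x^2 + b*x + c = 0 */

    printf("Input 3 numbers for a, b, and c: ");
    scanf("%lf %lf %lf", &a, &b, &c);

    if (a == 0.0) {
        /* No quadratic term: the equation is linear */
        printf("This is a linear equation (no quadratic term)\n");
        if (b != 0.0)
            printf("Root: %f\n", -c / b);
        else
            printf("There is no root\n");
    } else {
        double d = b * b - 4.0 * a * c;  /* discriminant */

        if (d > 0.0) {
            /* Two distinct real roots from the quadratic formula */
            printf("There are two real roots\n");
            printf("Root 1: %f\n", (-b + sqrt(d)) / (2.0 * a));
            printf("Root 2: %f\n", (-b - sqrt(d)) / (2.0 * a));
        } else if (d == 0.0) {
            /* One repeated real root */
            printf("There is a single repeated root\n");
            printf("Root: %f\n", -b / (2.0 * a));
        } else {
            /* d < 0: complex conjugate roots, printed as real/imaginary parts */
            printf("Roots are complex conjugates\n");
            printf("Root 1: Real part: %f, imaginary part: %f\n",
                   -b / (2.0 * a), sqrt(-d) / (2.0 * a));
            printf("Root 2: Real part: %f, imaginary part: %f\n",
                   -b / (2.0 * a), -sqrt(-d) / (2.0 * a));
        }
    }
    return 0;
}
```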
{"url":"https://brainmass.com/math/basic-algebra/65662","timestamp":"2014-04-17T10:17:53Z","content_type":null,"content_length":"29492","record_id":"<urn:uuid:3dfa5fbd-3e28-4124-b8a0-371040ebc072>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
Uniform circular motion

Sections 5.1 - 5.2

When an object is experiencing uniform circular motion, it is traveling in a circular path at a constant speed. If r is the radius of the path, and we define the period, T, as the time it takes to make a complete circle, then the speed is given by the circumference over the period:

v = 2πr / T

A similar equation relates the magnitude of the acceleration to the speed:

a = 2πv / T

These two equations can be combined to give the equation:

a = v^2 / r

This is known as the centripetal acceleration; v^2 / r is the special form the acceleration takes when we're dealing with objects experiencing uniform circular motion.

A warning about the term "centripetal force"

In circular motion many people use the term centripetal force, and say that the centripetal force is given by:

F = ma = mv^2 / r

I personally think that "centripetal force" is misleading, and I will use the phrase centripetal acceleration rather than centripetal force whenever possible. Centripetal force is a misleading term because, unlike the other forces we've dealt with like tension, the gravitational force, the normal force, and the force of friction, the centripetal force should not appear on a free-body diagram. You do NOT put a centripetal force on a free-body diagram for the same reason that ma does not appear on a free body diagram; F = ma is the net force, and the net force happens to have the special form mv^2 / r.

The centripetal force is not something that mysteriously appears whenever an object is traveling in a circle; it is simply the special form of the net force.

Newton's second law for uniform circular motion

Whenever an object experiences uniform circular motion there will always be a net force acting on the object pointing towards the center of the circular path. This net force has the special form:

F_net = ma = mv^2 / r

It's useful to look at some examples to see how we deal with situations involving uniform circular motion.

Example 1 - Twirling an object tied to a rope in a horizontal circle. (Note that the object travels in a horizontal circle, but the rope itself is not horizontal). If the tension in the rope is 100 N, the object's mass is 3.7 kg, and the rope is 1.4 m long, what is the angle of the rope with respect to the horizontal, and what is the speed of the object?

As always, the place to start is with a free-body diagram, which just has two forces, the tension and the weight. It's simplest to choose a coordinate system that is horizontal and vertical, because the centripetal acceleration will be horizontal, and there is no vertical acceleration. The tension, T, gets split into horizontal and vertical components. We don't know the angle, but that's OK because we can solve for it. Adding forces in the y direction gives:

T sin(θ) - mg = 0

This can be solved to get the angle:

θ = sin^-1(mg / T) = sin^-1((3.7 × 9.8) / 100) = 21.3°

In the x direction there's just the one force, the horizontal component of the tension, which we'll set equal to the mass times the centripetal acceleration:

T cos(θ) = mv^2 / r

We know mass and tension and the angle, but we have to be careful with r, because it is not simply the length of the rope. It is the horizontal component of the 1.4 m (let's call this L, for length), so there's a factor of the cosine coming in to the r as well: r = L cos(θ). Rearranging this to solve for the speed gives:

v = sqrt(T L cos^2(θ) / m)

which gives a speed of v = 5.73 m/s.

Example 2 - Identical objects on a turntable, different distances from the center.
Let's not worry about doing a full analysis with numbers; instead, let's draw the free-body diagram, and then see if we can understand why the outer objects get thrown off the turntable at a lower rotational speed than objects closer to the center.

In this case, the free-body diagram has three forces, the force of gravity, the normal force, and a frictional force. The friction here is static friction, because even though the objects are moving, they are not moving relative to the turntable. If there is no relative motion, you have static friction. The frictional force also points towards the center; the frictional force acts to oppose any relative motion, and the object has a tendency to go in a straight line which, relative to the turntable, would carry it away from the center. So, a static frictional force points in towards the center.

Summing forces in the y-direction tells us that the normal force is equal in magnitude to the weight. In the x-direction, the only force present is the frictional force. The maximum possible value of the static force of friction is:

f_s,max = μ_s N = μ_s mg

As the velocity increases, the frictional force has to increase to provide the necessary force required to keep the object spinning in a circle. If we continue to increase the rotation rate of the turntable, thereby increasing the speed of an object sitting on it, at some point the frictional force won't be large enough to keep the object traveling in a circle, and the object will move towards the outside of the turntable and fall off.

Why does this happen to the outer objects first? Because the speed they're going is proportional to the radius (v = circumference / period), so the frictional force necessary to keep an object spinning on the turntable ends up also being proportional to the radius. More force is needed for the outer objects at a given rotation rate, and they'll reach the maximum frictional force limit before the inner objects will.
{"url":"http://physics.bu.edu/~duffy/py105/Circular.html","timestamp":"2014-04-21T09:36:29Z","content_type":null,"content_length":"6865","record_id":"<urn:uuid:4ee7631a-d2c3-4e69-ac47-4f67d4f72914>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating Rate of Return on Investment

Re: Calculating Rate of Return on Investment

$5000 into 1 million dollars AND what are the chances you end up going bankrupt if you don't reach the goal.

One more thing now. "Chances" refers to a probability, but your original question wanted how long. Which do you require? Also, you want the loss to cost 5.5% and the win to only gain 5%?

Last edited by bobbym (2013-02-14 09:03:39)

In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
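Taking the quoted 5% gain at face value and ignoring losses entirely (an assumption; the thread never settles the rules), the number of consecutive wins needed to turn $5000 into $1,000,000 is

```latex
5000 \times 1.05^{n} \ge 1{,}000{,}000
\;\Rightarrow\; 1.05^{n} \ge 200
\;\Rightarrow\; n \ge \frac{\ln 200}{\ln 1.05} \approx 108.7
```

i.e. 109 straight wins.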
{"url":"http://mathisfunforum.com/viewtopic.php?pid=253447","timestamp":"2014-04-20T13:51:41Z","content_type":null,"content_length":"27289","record_id":"<urn:uuid:4b8da492-fabe-431a-927d-ec1a209a00a9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Past research interests

Between 1997 and 2001, I was a PhD student in the Deployable Structures Laboratory of the Engineering Department at the University of Cambridge. My supervisor was Simon Guest. The project I worked on concerned composite ribs with a circular arc as cross-section. These ribs, which have a novel composite layup, appear to have two equilibrium states, an extended form and a coiled form. They are hence similar to the well-known tape-measure, apart from the fact that the composite structures are stable at both equilibrium states, whereas tape-measures are stable only in the extended form and require a spindle or casing to hold them in the coiled form.

The research I did derived several analytical models of the rib (as a beam, allowing linear deformations only, or both linear and non-linear, or as a shell), and investigated possible modes of deformation. This allowed me to determine the precise location of the second equilibrium state. The work then continued by investigating the stability of this second equilibrium state. These stability expressions also provided an explanation for the lack of stability of the standard tape measure (and other isotropic materials) in the coiled form.

Prior to that, I studied Engineering at the University of Cambridge, graduating with a Master of Engineering with Distinction in 1997. The first two years of this course are in General Engineering: the subjects studied are Mechanics, Materials, Structural Mechanics, Fluid Dynamics and Thermodynamics, Linear Circuits, Electrical Power, Digital Circuits, Linear Systems and Control and Mathematics. At the end of the second year one begins to specialise by choosing two elective courses; I chose to do mine in Civil Engineering and Information Engineering. In the third year I studied Soil Mechanics, Structures, Solid Mechanics, Materials and Management Science. At the end of the third year, there are two small projects; mine were in Structural Modelling (where one learns to use analysis packages such as Oasys, ABAQUS and Masterseries) and French (where I produced a report on the use of numerical analysis techniques such as Finite Element Analysis and Computational Fluid Dynamics in the design of the Airbus).

In the fourth year, half the year is spent studying modules, and half the year working on a major project. The modules I studied were Geotechnical Engineering, Structural Dynamics and Earthquake Engineering, Thinwalled Structures, Structural Steel, Designing with Composites, Finite Elements, French and Linear Algebra and Optimisation. My fourth year project was done in collaboration with the Ecole des Mines Materials' Centre, in Evry, and consisted of introducing shell elements (axisymmetric and 3-dimensional) into their finite element program, ZeBuLoN, and also implementing the Riks algorithm, thus permitting the analysis of, amongst other things, shell buckling problems. My final report on the project is available in pdf format. One of the problems that can thus be studied is that of a spherical cap under uniform pressure. Note that the buckling mode shown is not the most likely to occur; an anti-symmetrical mode is much more likely, but the axisymmetric shell element implementation did not contain the ability to use Fourier series, as there was insufficient time, and the 3D shell elements were less convincing in their correctness.
For work experience, I worked at the CEA at Saclay, in the Service d'Etudes Mecaniques et Thermiques, in the summer between my first two years at university, and at the Ecole des Mines in Paris, in the Centre des Materiaux in Evry, between my second and third year, and between my third and fourth year. At the CEA, I used the INCA module of the finite element analysis package, CASTEM, to investigate the failure of cracked cylindrical tubes subjected to an applied couple. At the Ecole des Mines, I worked initially on validating the new C++ version of their finite element code, ZeBuLoN, and (whilst working for a small spin-off company for a month) developed code in C to model spring and beam elements in the post-processor. The second summer I was there, I started work on my fourth year project. Please consult my CV for further information. Content and design by Diana Galletly Last updated January 2005.
{"url":"http://www.chiark.greenend.org.uk/~galletly/work/past.html","timestamp":"2014-04-16T19:17:44Z","content_type":null,"content_length":"8410","record_id":"<urn:uuid:cc28bc19-4418-49f1-9032-5ecbb1c9bff9>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
I came across this page today while installing SciPy, which I am using to plot 3D shapes in matplotlib. It is a collection of Windows binaries for many useful Python modules, and has proven very useful: Unofficial Windows Binaries for Python Extension Packages

I often need to quickly generate some random attribute values in a shapefile, either to test code, write an answer on GIS.SE or to mock up some data to test a workflow. There are a few ways to do this simply in pure Python using the various random functions, but if all I need to do is populate an attribute table, the ArcGIS field calculator will do the job. The syntax of the command is very un-pythonic, but does allow for a range of different random datasets to be generated. I recently used this command to create some percentage data at work; the call, reconstructed at the end of this page, clearly creates random integers between 0 and 100.

After reading through the helpfiles I found a list of all the different random distributions that can be used, which I have reproduced below for my own convenience:
□ UNIFORM {Minimum}, {Maximum}—A uniform distribution with a range defined by the {Minimum} and {Maximum}. Both the {Minimum} and {Maximum} are of type double. The default values are 0.0 for {Minimum} and 1.0 for {Maximum}.
□ INTEGER {Minimum}, {Maximum}—An integer distribution with a range defined by the {Minimum} and {Maximum}. Both the {Minimum} and {Maximum} are of type long. The default values are 1 for {Minimum} and 10 for {Maximum}.
□ NORMAL {Mean}, {Standard Deviation}—A Normal distribution with a defined {Mean} and {Standard Deviation}. Both the {Mean} and {Standard Deviation} are of type double. The default values are 0.0 for {Mean} and 1.0 for {Standard Deviation}.
□ EXPONENTIAL {Mean}—An Exponential distribution with a defined {Mean}. The {Mean} is of type double. The default value for {Mean} is 1.0.
□ POISSON {Mean}—A Poisson distribution with a defined {Mean}. The {Mean} is of type double. The default value for {Mean} is 1.0.
□ GAMMA {Alpha}, {Beta}—A Gamma distribution with a defined {Alpha} and {Beta}. Both the {Alpha} and {Beta} are of type double. The default values are 1.0 for {Alpha} and 1.0 for {Beta}.
□ BINOMIAL {N}, {Probability}—A Binomial distribution with a defined {N} and {Probability}. The {N} is of type long, and {Probability} is of type double. The default values are 10 for {N} and 0.5 for {Probability}.
□ GEOMETRIC {Probability}—A geometric distribution with a defined {Probability}. The {Probability} is of type double. The default value for {Probability} is 0.5.
□ NEGATIVE BINOMIAL {N}, {Probability}—A negative binomial distribution with a defined {N} and {Probability}. The {N} is of type long, and {Probability} is of type double. The default values are 10 for {N} and 0.5 for {Probability}.
There are, of course, many other ways to do this but I find that this weird little function does a pretty good job, once you get to grips with the odd syntax.

I wanted to create some contours from a DEM file today and decided to use GDAL instead of Arc or FME, and I was again pleasantly surprised by the simplicity and power of the GDAL environment. My first port of call was to read the documentation, which showed that this command's syntax does not differ greatly from the rasterize command discussed in the last post.
I decided to generate a set of 10m contours as a shapefile using the following command (reconstructed at the end of this page). The switches do the following:
• -b 1 selects the band of the image to process, which defaults to 1
• -a elevation is the name of the contour elevation attribute which will be created
• -snodata -9999 tells GDAL the value of nodata cells in the input raster, so they can be ignored
• ns67ne.tif contour.shp are the input and output files, respectively
• -i 10 is the spacing between each contour

I wanted to convert a vector outline of the GB coastline into a raster in order to use it as a background layer in MapGuide Maestro. I would usually do such a process in ArcMap, but I am trying to learn open source alternatives to these types of functions. As a result I have been playing around with GDAL and OGR for a week or so now, and have been very impressed with their power and versatility; I only wish I could run them at a UNIX shell at work instead of at the Windows command line. With these two packages, their Python bindings, numpy, pyshp and matplotlib I think I could begin to move away from ArcGIS without too much pain.

Version 1.8 and up of GDAL supports the creation of output raster files when running the GDAL_RASTERIZE command, which makes this operation a single-line process. There are a number of switches and options which can be used, but here I will only go through the switches used for this process.

The command I used to convert the vector outline to a raster file (also reconstructed below) takes the following switches:
• -a ID Indicates the attribute to be used as the key when converting the vector file. This can be extended using an SQL statement and the -SQL switch to allow for selections of only parts of a file to be converted.
• -l GB_Fix Is the name of the layer to be converted
• -a_nodata -9999 Assigns a raster nodata value of -9999
• -a_srs EPSG:3857 Sets the coordinate system using an EPSG code
• -tr 1000 1000 Sets the raster cell size; the smaller these values are the bigger the output file will be. I initially ran the command with 10 10 and got a file of over 15Gb
• -ot Float32 The datatype of the output file, either Byte, Float32 or Float64 depending on the file format
• GB_Fix.shp dest.tif The input and output filenames, including extensions

This is just scraping the surface of this one command but serves as a starting point for any future conversions I have to perform using GDAL.
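The three command snippets referenced in the posts above did not survive extraction. Plausible reconstructions, pieced together from the switch descriptions (exact quoting and option order are assumptions):

```
# ArcGIS field calculator (Python parser) -- random integers between 0 and 100:
arcgis.rand('Integer 0 100')

# 10 m contours from the DEM:
gdal_contour -b 1 -a elevation -snodata -9999 -i 10 ns67ne.tif contour.shp

# Vector coastline to raster (GDAL 1.8+):
gdal_rasterize -a ID -l GB_Fix -a_nodata -9999 -a_srs EPSG:3857 -tr 1000 1000 -ot Float32 GB_Fix.shp dest.tif
```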
{"url":"http://pygis.blogspot.co.uk/","timestamp":"2014-04-18T23:16:49Z","content_type":null,"content_length":"58947","record_id":"<urn:uuid:13a6e94d-fe74-4fc4-bff5-a7848701d082>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Difference Between Energy and Force

Energy vs Force

Force and energy are two fundamental concepts in both classical and relativistic mechanics. It is important to have a clear interpretation of these terms in order to excel in such fields. In this article we will discuss the basics of the two concepts, force and energy, their similarities and finally their differences.

Energy is a non-intuitive concept. The term "energy" is derived from the Greek word "energeia", which means operation or activity. In this sense, energy is the mechanism behind an activity. Energy is not a directly observable quantity, but it can be calculated by measuring external properties. Energy can be found in many forms: kinetic energy, thermal energy and potential energy, to name a few. Energy was thought to be a conserved property of the universe up until the special theory of relativity was developed. The theory of relativity, along with quantum mechanics, showed that energy and mass are interchangeable. This gives rise to the energy-mass conservation of the universe; the two quantities are two forms of the same thing. The famous equation E = mc^2 gives us the amount of energy that can be obtained from m amount of mass. However, when nuclear fusion or nuclear fission is not present, the energy of a system can be considered conserved. Kinetic energy is the energy associated with the motion of an object, potential energy arises due to the place where the object is placed, and thermal energy arises due to temperature.

Force is a fundamental concept in all forms of physics. In the most basic sense, there are four fundamental forces: the gravitational force, the electromagnetic force, the weak force and the strong force. These are also known as interactions and are non-contact forces. The day-to-day force we use when pushing an object or doing any work is a contact force. It must be noted that forces always act in pairs: the force from object A on object B is equal and opposite to the force from object B on object A. This is known as Newton's third law of motion. The common interpretation of force is the "ability to do work". It must be noted that to do work a force is required, but not every force necessarily does work. To apply a force, an amount of energy is required. This energy is then transferred to the object upon which the force acts; the force does work on that object. In this sense, force is a method of transferring energy. Classical mechanics was developed mainly by Sir Isaac Newton, and his three laws of motion are the foundation of all classical mechanics. In the second law, the net force acting upon an object is defined as the rate of change of the momentum of the object.

What is the difference between force and energy?
• Energy is an ability to operate or activate things, while force is a method of transferring energy.
• The energy and mass of a closed system are conserved, but there is no such conservation for force.
• Force is a vector quantity while energy is a scalar.
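In symbols, the second-law statement in the last paragraph reads

```latex
\vec{F}_{\text{net}} = \frac{d\vec{p}}{dt} = \frac{d(m\vec{v})}{dt} = m\vec{a} \quad \text{(for constant mass)}
```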
{"url":"http://www.differencebetween.com/difference-between-energy-and-vs-force/","timestamp":"2014-04-16T07:13:48Z","content_type":null,"content_length":"91435","record_id":"<urn:uuid:21c97efc-fdcd-4e53-9ea9-f0f899960228>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
some examples of induction schemes in ACL2 Major Section: INTRODUCTION-TO-THE-THEOREM-PROVER Here are some pages illustrating various induction schemes suggested by recursive functions. Classical Induction on Natural Numbers: see example-induction-scheme-nat-recursion. Induction Preserving Even/Odd Parity or Induction Downwards by 2 or Induction with Multiple Base Cases: see example-induction-scheme-down-by-2 for an induction in which the induction hypothesis decreases the induction variable by an amount other than 1. This illustrates that the induction hypothesis can be about whatever term or terms are needed to explain how the formula recurs. The example also illustrates an induction with more than one Base Case. Classical Induction on Lists: see example-induction-scheme-on-lists for an induction over linear lists, in which we inductively assume the conjecture for (cdr x) and prove it for x. It doesn't matter whether the list is nil-terminated or not; the Base Case addresses all the possibilities. Classical Induction on Binary (Cons) Trees: see example-induction-scheme-binary-trees for an induction over the simplest form of binary tree. Here the Induction Step provides two hypotheses, one about the left subtree and one about the right subtree. Induction on Several Variables: see example-induction-scheme-on-several-variables for an induction in which several variables participate in the case analysis and induction hypotheses. Induction Upwards: see example-induction-scheme-upwards for an induction scheme in which the induction hypothesis is about something ``bigger than'' the induction conclusion. This illustrates that the sense in which the hypothesis is about something ``smaller'' than the conclusion is determined by a measure of all the variables, not the magnitude or extent of some single variable. Induction with Auxiliary Variables or Induction with Accumulators: see example-induction-scheme-with-accumulators for an induction scheme in which one variable ``gets smaller'' but another is completely arbitrary. Such schemes are common when dealing with tail-recursive functions that accumulate partial results in auxiliary variables. This example also shows how to provide several arbitrary terms in a non-inductive variable of a Induction with Multiple Induction Steps: see example-induction-scheme-with-multiple-induction-steps for an induction in which we make different inductive hypotheses depending on which case we're in. This example also illustrates the handling of auxiliary variables or accumulators.
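For orientation, the first scheme listed (classical induction on the natural numbers) can be written as follows; this transcription is mine, not taken from the linked pages:

```latex
\bigl( P(0) \;\wedge\; \forall n\,[\, n \neq 0 \wedge P(n-1) \Rightarrow P(n) \,] \bigr) \;\Rightarrow\; \forall n\, P(n)
```

Here the Base Case and the Induction Step correspond to the two branches of a recursion on n.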
{"url":"http://www.cs.utexas.edu/users/moore/acl2/v6-2/EXAMPLE-INDUCTIONS.html","timestamp":"2014-04-20T07:30:57Z","content_type":null,"content_length":"3847","record_id":"<urn:uuid:e8184015-38d4-4cf4-bcfd-98134df817f9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability and non-probability sampling

What is the difference between probability and non-probability sampling? In the answer, provide examples of the use of probability sampling and non-probability sampling on real-life problems.

This solution discusses the difference between probability and non-probability sampling and gives examples of each, as well as a website for further exploration.
{"url":"https://brainmass.com/math/probability/217793","timestamp":"2014-04-21T02:29:58Z","content_type":null,"content_length":"25157","record_id":"<urn:uuid:11f23271-d2a3-411d-9b2e-984cd6d11ffe>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
geometric model for elliptic cohomology

There are well-known geometric models for some cohomology theories. For instance, topological K-theory is modeled by vector bundles. A geometric model for elliptic cohomology is supposed to be an analogous construction for elliptic cohomology or for tmf.

It is an old idea that, analogous to how differential K-theory is modeled by parallel transport in vector bundles and hence by functors $Bord_1(X) \to Vect$, elliptic cohomology should somehow be modeled by functors $Bord_2(X) \to Vect$, where $Bord_2(X)$ is a 1-category of cobordisms equipped with maps to $X$. Stephan Stolz and Peter Teichner have initiated a program studying this in the article What is an elliptic object?. The fundamental idea of this program is essentially to encode parallel transport along cobordisms with maps into a given space $X$, pretty much along the lines of functorial differential cohomology. One expects that this actually encodes the differential refinements of the corresponding cohomology theories, such as differential K-theory. However, currently in the program one divides out concordance, which effectively divides out the differential information and keeps just the underlying topological information.

The fundamental result of this program so far is that there is a useful notion of 2d FQFT over $X$ such that its partition function is indeed topological modular form-valued, so that it is a candidate for a model of tmf. The following is going to be an exposition of this partial result (see also the related entries).

A recent survey of the program is available. The program was initiated in the Stolz-Teichner article What is an elliptic object? mentioned above. After the original sketch in terms of extended FQFT, subsequent work concentrated on ordinary 1-categorical constructions, the goal being to make very clear, transparent and rigorous both the constructions involved and the claim that the partition function of a (2|1)-dimensional Euclidean FQFT is an integral modular form. Regarding the latest development on this:

warning: I am being told that this is by now outdated and to be replaced by an improved version, which however is apparently not available yet
{"url":"http://ncatlab.org/nlab/show/geometric+model+for+elliptic+cohomology","timestamp":"2014-04-18T18:14:56Z","content_type":null,"content_length":"42570","record_id":"<urn:uuid:de6097ad-bd45-4282-85d7-790cdf0371c2>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Subject Help: Math

I'll explain this question by question so you can understand. These formulas will apply throughout, and note that I included key words that will tell you when to use them:

probability = # of relevant outcomes / total # of outcomes
probability(and, with replacement) = relevant outcomes/total outcomes x relevant outcomes/total outcomes
probability(and, without replacement) = relevant outcomes/total outcomes x relevant outcomes (-1 if applicable)/total outcomes-1
probability(or) = relevant outcomes/total outcomes + relevant outcomes/total outcomes

1. First we will find the total number of outcomes, which is simply found by adding together all of the cards (32+25=57). Now, the relevant outcomes are 18 (Indians) and 10 (Bengals). To find the probability of choosing one of each (I'm assuming this is with replacement, so there isn't a missing card afterwards), you simply multiply. So it'd be the second formula from above: 18/57 x 10/57 = 180/3249. I'm figuring you know how to simplify, so I'm not going to explain that.

2. For this one, we are going to use our third formula. We already know from the given information that there are 25 football cards to choose from, so now we need to apply what we know about the types of cards here. The real difference is that because we are not replacing a card, the number of cards is going to go down to 24, because there is one less possibility (though because we want different types of cards, the top number remains unaffected. If we wanted two Bengals cards, it would also go down by 1). So we have: 10/25 x 15/24 = 150/600

3. First, we need to know the total number of outcomes for each roll, which is 8 as we were told. This won't change because we are using two different dice, each with 8 sides that cannot be taken away, so we are going to use formula two. Next, we need to know how many "3"s are on each die, which is 1 for each one. So our formula is: 1/8 x 1/8 = 1/64

4. As always, we determine the formula we are going to use. The keyword is "without replacement", so we will use formula three. Next, we want to find the total number of outcomes: 8 + 12 = 20. Now, we need to put in the relevant outcomes. Since our first pick is blue, the first number will be 8/20, and since the second pick is green, the second number will be 12/19 (notice that because the item of choice is different, we didn't subtract from the top number; we did for the bottom one, though, since one card is gone from the total). So, our formula is: 8/20 x 12/19 = 96/380

5. This seems confusing, but it really isn't. What we are doing is the same thing as above with the second formula; there are just different totals involved (9 and 4, respectively). Since there is only going to be one correct possibility for each, that means the relevant outcome for both is "1". So, our formula is: 1/9 x 1/4 = 1/36

You found this already, but it always helps to go over it again. The key thing about probability is just understanding what goes where. It's confusing regardless, but you just have to keep going at it. If this doesn't make sense, please ask me to clarify confusing points. Probability is a tough thing to explain.

Av & Sig Credit: Me
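If you want to double-check the five answers, here is a quick script (added for verification; Python's Fraction type reduces each product automatically):

```python
from fractions import Fraction as F

# One line per question, matching the work above.
print(F(18, 57) * F(10, 57))   # 1: with replacement    -> 20/361 (= 180/3249)
print(F(10, 25) * F(15, 24))   # 2: without replacement -> 1/4    (= 150/600)
print(F(1, 8) * F(1, 8))       # 3: two 8-sided dice    -> 1/64
print(F(8, 20) * F(12, 19))    # 4: blue then green     -> 24/95  (= 96/380)
print(F(1, 9) * F(1, 4))       # 5: two separate picks  -> 1/36
```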
{"url":"http://www.thedollpalace.com/forum/876620-post9.html","timestamp":"2014-04-18T13:38:13Z","content_type":null,"content_length":"13500","record_id":"<urn:uuid:bc0c97f9-ddce-447b-ac31-05851bc9f298>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
ARX Estimator

Estimate parameters of ARX model from SISO data in Simulink software, returning an idpoly object

The ARX block uses least-squares analysis to estimate the parameters of an ARX model and returns the estimated model as an idpoly object. For information about the default algorithm settings used for model estimation, see arxOptions.

Each estimation generates a figure with the following plots:
● Actual (measured) output versus the simulated or predicted model output.
● Error in the simulated model, which is the difference between the measured output and the model output.

Model Definition

The ARX model is defined as follows:

y(t) + a_1 y(t-1) + ... + a_na y(t-na) = b_1 u(t-nk) + ... + b_nb u(t-nk-nb+1) + e(t)

● y(t) is the output at time t.
● a_1, ..., a_na and b_1, ..., b_nb are the parameters to be estimated.
● na is the number of poles of the system.
● nb is the number of zeros of the system.
● nk is the number of input samples that occur before the inputs that affect the current output.
● y(t-1), ..., y(t-na) are the previous outputs on which the current output depends.
● u(t-nk), ..., u(t-nk-nb+1) are the previous inputs on which the current output depends.
● e(t) is a white-noise disturbance value.

The ARX model can also be written in a compact way using the following notation:

A(q) y(t) = B(q) u(t-nk) + e(t)

where

A(q) = 1 + a_1 q^-1 + ... + a_na q^-na
B(q) = b_1 + b_2 q^-1 + ... + b_nb q^-(nb-1)

and q^-1 is the backward shift operator, defined by q^-1 u(t) = u(t-1). The following block diagram shows the ARX model structure.

The block accepts two inputs, corresponding to the measured input-output data for estimating the model.
First input: Input signal.
Second input: Output signal.

The ARX Estimator block outputs a sequence of multiple models (idpoly objects), estimated at regular intervals during the simulation. The Data window field in the block parameter dialog box specifies the number of data samples to use for estimation, as the simulation progresses. The output format depends on whether you specify the Model Name in the block parameter dialog box.

Dialog Box

● Integers n[a] and n[b] specify the number of A and B model parameters, respectively, and n[k] is the input-output delay.
● Number of input data samples that specify the interval after which to estimate a new model. Default: 25.
● Sampling time for the model.
● Data window: number of past data samples used to estimate each model. A longer data window should be used for higher-order models. Too small a value might cause poor estimation results, and too large a value leads to slower computation. Default: 200.
● Model Name: name of the model. Whether you specify the model name determines the output format of the resulting models, as follows: if you do not specify a model name, the estimated models display in the MATLAB® Command Window in a transfer-function format; if you specify a model name, the resulting models are output to the MATLAB workspace as a cell array.
● Simulation: The algorithm uses only measured input data to simulate the response of the model.
● Prediction: Specifies the forward-prediction horizon for computing the response K steps in the future, where K is 1, 5, or 10.

Example

This example shows how you can use the ARX Estimator block in a Simulink® model.

1. Specify the data from iddata1.mat for estimation:
load iddata1;
IODATA = z1;

2. Create a new Simulink model, as follows:
● Add the IDDATA Source block and specify IODATA in the Iddata object field of the IDDATA Source block parameters dialog box.
● Add the ARX Estimator block to the model. Set the sample time in the block to 0.1 seconds and the simulation end time to 30 seconds.
● Connect the Input and Output ports of the IDDATA Source block to the u and y ports of the ARX Estimator block, respectively.

3. Run the simulation.
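For readers working outside Simulink, a comparable estimate can be obtained at the MATLAB command line with the System Identification Toolbox arx command. This is a sketch; the orders [2 2 1] are illustrative and not taken from this page:

```matlab
load iddata1                % provides the iddata object z1 used above
m = arx(z1, [2 2 1]);       % na = 2, nb = 2, nk = 1 -> returns an idpoly model
present(m)                  % display the estimated polynomials
compare(z1, m)              % measured vs. model output, as in the block's plots
```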
{"url":"http://www.mathworks.com.au/help/ident/ref/arxestimator.html?nocookie=true","timestamp":"2014-04-23T12:11:56Z","content_type":null,"content_length":"51957","record_id":"<urn:uuid:2a96f22a-e093-44bf-8e70-a7087231320a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
The QPair class is a template class that stores a pair of items.

#include <QPair>

Public Types
typedef first_type
typedef second_type

Public Functions
QPair ()
QPair ( const T1 & value1, const T2 & value2 )
QPair<T1, T2> & operator= ( const QPair<T1, T2> & other )

Public Variables
T1 first
T2 second

Related Non-Members
QPair<T1, T2> qMakePair ( const T1 & value1, const T2 & value2 )
bool operator!= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
bool operator< ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
QDataStream & operator<< ( QDataStream & out, const QPair<T1, T2> & pair )
bool operator<= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
bool operator== ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
bool operator> ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
bool operator>= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
QDataStream & operator>> ( QDataStream & in, QPair<T1, T2> & pair )

Detailed Description

The QPair class is a template class that stores a pair of items.

QPair<T1, T2> can be used in your application if the STL pair type is not available. It stores one value of type T1 and one value of type T2. It can be used as a return value for a function that needs to return two values, or as the value type of a generic container.

Here's an example of a QPair that stores one QString and one double value:

QPair<QString, double> pair;

The components are accessible as public data members called first and second. For example:

pair.first = "pi";
pair.second = 3.14159265358979323846;

QPair's template data types (T1 and T2) must be assignable data types. You cannot, for example, store a QWidget as a value; instead, store a QWidget *. A few functions have additional requirements; these requirements are documented on a per-function basis.

See also Container Classes.

Member Type Documentation

typedef QPair::first_type

The type of the first element in the pair (T1). See also first.

typedef QPair::second_type

The type of the second element in the pair (T2). See also second.

Member Function Documentation

QPair::QPair ()

Constructs an empty pair. The first and second elements are initialized with default-constructed values.

QPair::QPair ( const T1 & value1, const T2 & value2 )

Constructs a pair and initializes the first element with value1 and the second element with value2. See also qMakePair().

QPair<T1, T2> & QPair::operator= ( const QPair<T1, T2> & other )

Assigns other to this pair.

Member Variable Documentation

T1 QPair::first

The first element in the pair.

T2 QPair::second

The second element in the pair.

Related Non-Members

QPair<T1, T2> qMakePair ( const T1 & value1, const T2 & value2 )

Returns a QPair<T1, T2> that contains value1 and value2. Example:

QList<QPair<int, double> > list;
list.append(qMakePair(66, 3.14159));

This is equivalent to QPair<T1, T2>(value1, value2), but usually requires less typing.

bool operator!= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )

Returns true if p1 is not equal to p2; otherwise returns false. Two pairs compare as not equal if their first data members are not equal or if their second data members are not equal. This function requires the T1 and T2 types to have an implementation of operator==().

bool operator< ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )

Returns true if p1 is less than p2; otherwise returns false. The comparison is done on the first members of p1 and p2; if they compare equal, the second members are compared to break the tie.
This function requires the T1 and T2 types to have an implementation of operator<().

QDataStream & operator<< ( QDataStream & out, const QPair<T1, T2> & pair )

Writes the pair pair to stream out. This function requires the T1 and T2 types to implement operator<<().

See also Serializing Qt Data Types.

bool operator<= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )

Returns true if p1 is less than or equal to p2; otherwise returns false. The comparison is done on the first members of p1 and p2; if they compare equal, the second members are compared to break the tie.

This function requires the T1 and T2 types to have an implementation of operator<().

bool operator== ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )

Returns true if p1 is equal to p2; otherwise returns false. Two pairs compare equal if their first data members compare equal and if their second data members compare equal. This function requires the T1 and T2 types to have an implementation of operator==().

bool operator> ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )

Returns true if p1 is greater than p2; otherwise returns false. The comparison is done on the first members of p1 and p2; if they compare equal, the second members are compared to break the tie. This function requires the T1 and T2 types to have an implementation of operator<().

bool operator>= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )

Returns true if p1 is greater than or equal to p2; otherwise returns false. The comparison is done on the first members of p1 and p2; if they compare equal, the second members are compared to break the tie.

This function requires the T1 and T2 types to have an implementation of operator<().

QDataStream & operator>> ( QDataStream & in, QPair<T1, T2> & pair )

Reads a pair from stream in into pair. This function requires the T1 and T2 types to implement operator>>().

See also Serializing Qt Data Types.
{"url":"https://developer.blackberry.com/native/reference/cascades/qpair.html","timestamp":"2014-04-19T11:58:51Z","content_type":null,"content_length":"37341","record_id":"<urn:uuid:99a7ddbb-a068-4fc3-a4e6-b94f697c7647>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: September 2010 [00207]

Re: FindRoots?

• To: mathgroup at smc.vnet.net
• Subject: [mg112331] Re: FindRoots?
• From: "Ingolf Dahl" <ingolf.dahl at telia.com>
• Date: Thu, 9 Sep 2010 05:31:10 -0400 (EDT)

To Andrzej Kozlowski and MathGroup,

Yes, I was too harsh in using the word "snobbery". But I feel that it is not good to let the best (the Semenov method?) be an enemy of the good (RootSearch), when the Semenov method is not yet fully implemented. You have argued for the capabilities of Reduce to find zeroes, so I wonder of course whether a routine based on the Semenov method would fail in a similar way. So much better if it does not. Maybe we should ask MathGroup if there is anyone willing to implement your demonstrations as a more general package? The code does not look too difficult, even if I do not immediately grasp all details (I usually don't). And as usual, the devil might be in the details.

However, in order to implement the Semenov method and to obtain provable results, you state that there are "certain mild conditions" that must be fulfilled. These conditions seem to be that the program needs to obtain explicit and valid upper bounds for the first and second derivatives. The task of finding these bounds cannot be automated in a provable way in the general case, so one problem has effectively been replaced by another. One could put the responsibility to solve this second problem on the user, but I cannot see such a solution as superior. Nevertheless, there seem to be niches where the Semenov method might be very useful, especially in the multidimensional case, so it should be worth the effort to create a full implementation. I doubt that such an implementation will be option-free - would it not be nice to have an "Automatic" way of estimating the bounds of the derivatives? But then some roots might be missed.

Best regards,
Ingolf Dahl

> -----Original Message-----
> From: Andrzej Kozlowski [mailto:akozlowski at gmail.com]
> Sent: 8 September 2010 06:59
> To: mathgroup at smc.vnet.net
> Subject: [mg112299] Re: FindRoots?
>
> On 7 Sep 2010, at 18:40, Andrzej Kozlowski wrote:
> >
> > On 7 Sep 2010, at 10:02, Ingolf Dahl wrote:
> >
> >> But anyway, I consider it as snobbery to say that Reduce is superior to
> >> RootSearch. They do not play in the same division, and perform different
> >> tasks.
> >
> > Why should it matter to me what you call it? Is that an argument?
> >
> > If you read what I have written on this subject you may have noticed that there are ways to do
> essentially the same things that RootSearch attempts to do and obtain the complete set of
> roots (provably complete) (I am not referring to using Reduce). That, I do call "superior"
> and really I don't care if anyone thinks it's "snobbery" or not.
> >
> > Andrzej Kozlowski
>
> In case my meaning is still unclear, I will once again repeat something that I have written
> already on this forum several times on various occasions.
> If Wolfram really wanted a method that solves numerical transcendental equations, it
> could implement one of several existing sophisticated modern algorithms, including the one
> due to Semenev (2007) that is the subject of my demonstrations:
> http://demonstrations.wolfram.com/SolvingSystemsOfTranscendentalEquations/
> and
> ations/
> Unlike the RootSearch package, this algorithm finds the complete set of solutions of any
> system of n equations in n unknowns satisfying certain mild conditions.
> I have no problem
> at all calling it "superior" to what is implemented in the RootSearch package. Moreover, it
> can solve systems of equations (yes, systems) which Mathematica is at present unable
> even to attempt (one such example is given in the first demonstration above), and neither of
> course can RootSearch.
> My implementation of Semenev's algorithm is probably inefficient, as it was only meant to
> demonstrate the method and not to be used in practice. Semenev's paper gives accurate
> complexity estimates for the method which show that it is quite practical if the number of
> equations (and unknowns) is not too large.
> It would be trivial for Wolfram to implement this algorithm and it would not require licensing
> anything from anyone, with all the problems that this usually entails.
> Andrzej Kozlowski
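[An illustrative aside, not part of the archived thread: the role played by the derivative bounds discussed above can be sketched in a few lines of Mathematica. This is an editorial toy example of interval exclusion in one dimension — it is neither Semenev's algorithm nor the RootSearch code, and the function f and the bound M are chosen by hand.]

(* Discard subintervals that provably contain no root of f, using a
   known global bound M >= |f'| on the search interval. If |f[c]| > M h
   on an interval of half-width h around c, no root can lie inside it. *)
f[x_] := Sin[x] - x/4;
M = 5/4;  (* |f'[x]| = |Cos[x] - 1/4| <= 5/4 everywhere *)
isolate[a_, b_, tol_] := Module[{c = (a + b)/2, h = (b - a)/2},
  Which[
   Abs[f[c]] > M h, {},                  (* provably root-free: prune *)
   h < tol, {{a, b}},                    (* small enough: report candidate *)
   True, Join[isolate[a, c, tol], isolate[c, b, tol]]]]
isolate[-10., 10., 0.001]

[Estimating such a bound M automatically, in a provable way, is exactly the difficulty Ingolf Dahl raises above.]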
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Sep/msg00207.html","timestamp":"2014-04-17T18:40:06Z","content_type":null,"content_length":"29642","record_id":"<urn:uuid:9481d953-253a-4c50-a239-e9d082d68998>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Station Q Overview - Microsoft Research

Station Q Overview

Station Q is a Microsoft Research lab located on the campus of the University of California, Santa Barbara, focused on studies of topological quantum computing. The lab combines researchers, theorists, and experimentalists from mathematics, physics and computer science in partnership with academic and research institutions around the globe. Microsoft's Station Q and its collaborators are exploring theoretical and experimental approaches to creating the quantum analog of the traditional bit, the qubit. The group is led by Dr. Michael Freedman, a renowned mathematician who has won the prestigious Fields Medal, the highest honor in mathematics.

Microsoft Research is dedicated to conducting both basic and applied research in more than 55 areas of computing, including some of the most challenging computational and technical areas. Quantum computing is a promising yet challenging field of research at the intersection of computer science and physics. Devising solutions to its decades-old challenges would lead to a whole new paradigm in computing, enabling new kinds of computing power and computational solutions and services that are unattainable today.

What Is Quantum Computing?

Quantum computing is a field of research that applies the principles of quantum physics and new directions in materials science to building a new type of computer that leverages quantum effects in computation. Beyond creating quantum computers, the field also includes studies of algorithms that such computers can execute.

In a conventional computer, transistors manipulate bits of information, and each bit has a value of either 1 or 0. Electrical signals and electronic components are used to represent states that are set to 1 or 0. With a quantum computer, the classical bit gives way to something called a quantum bit, or qubit. A qubit is not represented in a transistor but in a quantum mechanical state of a particle, such as photon polarization, electron spin, or even more exotic degrees of freedom. According to the superposition principle of quantum mechanics, at any given moment the spin of an electron can simultaneously be both up and down, with specified "amplitudes" that are mathematically related to the individual probability of each direction. Quantum computing uses this principle – being able to represent multiple values with a separate probability for each value – to evaluate many possible solutions simultaneously rather than one at a time.

For some problems, quantum computers will be no faster than conventional computers. But for certain types of problems, like searching databases, breaking cryptographic codes, or simulating very large, complex physical systems, quantum machines would be dramatically faster than any classical computing system.

For more information, press only:
Rapid Response Team, Waggener Edstrom Worldwide, (503) 443-7070
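[Editorial footnote, not part of the original overview: in standard quantum-information notation, the superposition described above is written

$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$

where the complex amplitudes $\alpha$ and $\beta$ give the probabilities $|\alpha|^2$ and $|\beta|^2$ of observing 0 or 1 when the qubit is measured.]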
{"url":"http://research.microsoft.com/en-us/press/stationq_bg.aspx","timestamp":"2014-04-17T12:50:05Z","content_type":null,"content_length":"14159","record_id":"<urn:uuid:239b047e-ce91-44ef-932d-e2d545a522af>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Representation of consistent recursive rules.

Amo, Ana del and Montero de Juan, Francisco Javier and Molina, E. (2001) Representation of consistent recursive rules. European Journal of Operational Research, 130 (1). pp. 29-53. ISSN 0377-2217

Restricted to Repository staff only until 2020.

Official URL: http://www.sciencedirect.com/science/article/pii/S0377221700000321

This paper develops the recursive model for connective rules (as proposed in V. Cutello, E. Molina, J. Montero, Associativeness versus recursiveness, in: Proceedings of the 26th IEEE International Symposium on Multiple-valued Logic, Santiago de Compostela, Spain, 29-31 May, 1996, pp. 154-159; V. Cutello, E. Molina, J. Montero, Binary operators and connective rules, in: M.H. Smith, M.A. Lee, J. Keller, J. Yen (Eds.), Proceedings of NAFIPS 96, North American Fuzzy Information Processing Society, IEEE Press, Piscataway, NJ, 1996, pp. 46-49), where a particular solution in the Ordered Weighted Averaging (OWA) context (see V. Cutello, J. Montero, Recursive families of OWA operators, in: P.P. Bonissone (Ed.), Proceedings of the Third IEEE Conference on Fuzzy Systems, IEEE Press, Piscataway, NJ, 1994, pp. 1137-1141; V. Cutello, J. Montero, Recursive connective rules, International Journal of Intelligent Systems, to appear) was translated into a more general framework. In this paper, some families of solutions for the key recursive equation are obtained, based upon the general associativity equation as solved by K.T. Mak (Coherent continuous systems and the generalized functional equation of associativity, Mathematics of Operations Research 12 (1987) 597-625). A context for the representation of families of binary connectives is given, allowing the characterization of key families of connective rules. (C) 2001 Elsevier Science B.V. All rights reserved.

Item Type: Article
Uncontrolled Keywords: Fuzzy sets; Fuzzy connectives; Connective rules; Recursiveness; Associativity
Subjects: Sciences > Mathematics > Logic, Symbolic and mathematical
ID Code: 16779
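[Editorial note, not part of the repository record: for readers unfamiliar with the terminology in the abstract, an OWA (Ordered Weighted Averaging) operator in Yager's standard sense is a mapping $F(a_1,\ldots,a_n)=\sum_{i=1}^{n} w_i\, a_{\sigma(i)}$, where the permutation $\sigma$ sorts the arguments so that $a_{\sigma(1)} \ge \cdots \ge a_{\sigma(n)}$ and the weights satisfy $w_i \ge 0$ and $\sum_{i=1}^{n} w_i = 1$. The recursiveness studied in the paper concerns how such an $n$-ary rule, for every $n$, can be built up by iterating binary connectives.]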
References:

J. Aczel, Lectures on Functional Equations and their Applications, Academic Press, New York, 1966.
V. Cutello, E. Molina, J. Montero, Associativeness versus recursiveness, in: Proceedings of the 26th IEEE International Symposium on Multiple-valued Logic, Santiago de Compostela, Spain, 29-31 May, 1996, pp. 154-159.
V. Cutello, E. Molina, J. Montero, Binary operators and connective rules, in: M.H. Smith, M.A. Lee, J. Keller, J. Yen (Eds.), Proceedings of NAFIPS 96, North American Fuzzy Information Processing Society, IEEE Press, Piscataway, NJ, 1996, pp. 46-49.
V. Cutello, J. Montero, Recursive families of OWA operators, in: P.P. Bonissone (Ed.), Proceedings of the Third IEEE Conference on Fuzzy Systems, IEEE Press, Piscataway, NJ, 1994, pp. 1137-1141.
V. Cutello, J. Montero, Recursive connective rules, International Journal of Intelligent Systems, to appear.
H. Dyckhoff, Basic concepts for a theory of evaluation: hierarchical aggregation via autodistributive connectives in fuzzy set theory, European Journal of Operational Research 20 (1985) 221-233.
H. Dyckhoff, W. Pedrycz, Generalized means as model of compensative connectives, Fuzzy Sets and Systems 14 (1984) 143-154.
J. Dombi, Basic concepts for a theory of evaluation: the aggregative operator, European Journal of Operational Research 10 (1982) 282-293.
J. Dombi, A general class of fuzzy operators, the De Morgan class of fuzzy operators and fuzziness measures induced by fuzzy operators, Fuzzy Sets and Systems 8 (1982) 149-163.
J.C. Fodor, J.L. Marichal, M. Roubens, Characterization of the ordered weighted averaging operators, IEEE Transactions on Fuzzy Systems 3 (1995) 236-240 (prepublication 93.011).
J.C. Fodor, M. Roubens, Fuzzy Preference Modelling and Multi-criteria Decision Support, Kluwer, Dordrecht, 1994.
J.C. Fodor, R.R. Yager, A. Rybalov, Structure of uninorms, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 5 (1997) 411-427.
J.C. Fodor, S. Jenei, On reversible triangular norms, Fuzzy Sets and Systems 104 (1999) 43-51.
G.H. Hardy, J.E. Littlewood, G. Polya, Inequalities, Cambridge UP, 1954.
S. Jenei, New family of triangular norms via contrapositive symmetrization of residuated implications, Fuzzy Sets and Systems 110 (2000) 157-174.
T.C. Koopmans, Representation of preference ordering with independent components of consumption, in: C.B. McGuire, R. Radner (Eds.), Decision and Organization, North-Holland, Amsterdam, 1972, pp. 57-78 (2nd edition by the University of Minnesota Press, 1986).
K.T. Mak, Coherent continuous systems and the generalized functional equation of associativity, Mathematics of Operations Research 12 (1987) 597-625.
J.L. Marichal, P. Mathonet, E. Tousset, Characterization of some aggregation functions stable for positive transformations, Fuzzy Sets and Systems 102 (1999) 293-314.
M. Mas, G. Mayor, J. Suñer, J. Torrens, Generation of multi-dimensional aggregation functions, Mathware and Soft Computing 5 (1998) 233-242.
J. Montero, J. Tejada, V. Cutello, A general model for deriving preference structures from data, European Journal of Operational Research 98 (1997) 98-110.
R.R. Yager, On ordered weighted averaging aggregation operators in multi-criteria decision making, IEEE Transactions on Systems, Man and Cybernetics 18 (1988) 183-190.
R.R. Yager, Families of OWA operators, Fuzzy Sets and Systems 59 (1993) 125-148.
R.R. Yager, A. Rybalov, Uninorm aggregation operators, Fuzzy Sets and Systems 80 (1996) 111-120.
R.R. Yager, Fusion of ordinal information using weighted median aggregation, International Journal of Approximate Reasoning 18 (1998) 35-52.

Deposited On: 22 Oct 2012 10:15
Last Modified: 07 Feb 2014 09:35
{"url":"http://eprints.ucm.es/16779/","timestamp":"2014-04-19T20:15:58Z","content_type":null,"content_length":"40776","record_id":"<urn:uuid:b29a4e25-007e-405d-bd91-4d44f81bad0c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
2001 U.S. Championship Puzzles

1. Number Boxes

Place the digits 1 through 9 (each used exactly once) into the circles so that the numbers inside every rectangle have the same sum.

2. Domino Stacking

You are given a set of 10 dominoes (0-1, 0-2, 0-3, 0-4, 1-2, 1-3, 1-4, 2-3, 2-4, and 3-4). Place these dominoes into the diagram below so that each digit is different from the horizontally adjacent digits (if any) and each digit is larger than the one below it (if any).

3. Retrograde Battleship

Locate the position of the 10-ship fleet in the grid. There is one 4-unit battleship, two 3-unit cruisers, three 2-unit destroyers, and four 1-unit submarines. They do not touch each other, even diagonally. The possible placements of the ships are given. Which ones are the real ships?
{"url":"http://www2.stetson.edu/~efriedma/champ/US2001/","timestamp":"2014-04-21T00:52:40Z","content_type":null,"content_length":"1631","record_id":"<urn:uuid:dc13aee8-398d-413b-b675-79357341ef66>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Retsil Prealgebra Tutor

Find a Retsil Prealgebra Tutor

...I graduated from Liberty University (Virginia) in 2011 with a Bachelor's Degree in Spanish and a Virginia state teaching license. I am currently employed as the Spanish teacher at an area school. During my tutoring lessons, I will be using many materials personalized to the interests of the student, such as stories, music and translation activities.

6 Subjects: including prealgebra, English, Spanish, grammar

...I have a current Washington state teaching certificate and over 16 years' experience working with students with disabilities, including organizational and study skills. I have a valid Washington state teaching certificate with endorsements in elementary education, special education K-8 and readin...

22 Subjects: including prealgebra, reading, English, dyslexia

...I think understanding how they approach mathematics is the key to better explaining the topic and helping them overcome their struggles. I have passed the WyzAnt qualifications to be an Algebra 1 tutor. I have completed mathematics courses up to Calculus 2 and have past experience tutoring Algebra 1.

4 Subjects: including prealgebra, geometry, algebra 1, algebra 2

I love math! I'm that geeky math-loving girl who was also a cheerleader, so I pride myself on being smart and fun!! I was an Actuarial Mathematics major at Worcester Polytechnic Institute (WPI), and worked in the actuarial field for about 3.5 years after college. Since then I have been a nanny, a tutor, and a cheerleading coach, while also starting a family.

17 Subjects: including prealgebra, calculus, actuarial science, linear algebra

...I have the philosophy that anything can be understood if it is explained correctly. Teachers and professors can get caught up using too much jargon, which can confuse students. I find real-life examples and a crystal-clear explanation are crucial for success.

19 Subjects: including prealgebra, Spanish, chemistry, calculus

...At the first meeting, I work really hard to understand all my students as people, and not as a label. I personalize my lesson plans to meet your needs. Not only that, I design my lesson plans to best match your style of learning and comprehension.

38 Subjects: including prealgebra, reading, chemistry, writing

I recently became a CompTIA Certified Technical Trainer (Dec 2012), was certified in 2010 as a Navy Master Training Specialist, and was formerly an instructor in electronics theory and repair. My students' average age was 18-23, and I have experience with younger children as a father to a 10 yea...

11 Subjects: including prealgebra, reading, geometry, algebra 1
{"url":"http://www.purplemath.com/Retsil_Prealgebra_tutors.php","timestamp":"2014-04-19T23:37:20Z","content_type":null,"content_length":"24023","record_id":"<urn:uuid:547153f0-ee42-4bc1-968c-3a437ee0d64e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00023-ip-10-147-4-33.ec2.internal.warc.gz"}