BSc (Honours) Mathematics - Educational aims

This degree introduces you to mathematical concepts and thinking, and helps you to develop a mathematical approach. Our aims are that you should achieve:

• familiarity with the essential ideas of pure mathematics (particularly analysis, linear algebra and group theory), with the opportunity also to become acquainted with some of: number theory, mathematical logic, combinatorics, geometry, topology
• ability to apply the main tools of applied mathematics (particularly Newtonian mechanics, differential equations, vector calculus, numerical methods and linear algebra), with the opportunity also to meet some of: advanced calculus, fluid mechanics, advanced numerical analysis
• ability to model real-world situations and to use mathematics to help develop solutions to practical problems
• ability to follow complex mathematical arguments and to develop mathematical arguments of your own
• experience of study of mathematics in some breadth and depth
• understanding of some of the more advanced ideas within mathematics
• development of your capability for working with abstract concepts
• ability to communicate mathematical ideas, proofs and conclusions effectively
• ability to work with others on mathematical modelling problems and their validation
• skills necessary to use mathematics in employment, or to progress to further study of mathematics
• ability to use a modern mathematical computer software package in pursuance of the above aims.

You will also have the opportunity to develop knowledge of, and the ability to apply, some important concepts and techniques of Statistics.
Learning outcomes

The learning outcomes of this degree (the last two of which overlap considerably) are described in four areas:

Knowledge and understanding

On completion of this degree, you will have knowledge and understanding of:

• the elements of linear algebra, analysis and group theory
• the concepts behind the methods of Newtonian mechanics, differential equations, vector calculus, linear algebra and numerical analysis, and be able to model real-world situations using these

The degree programme is flexible at Level 3, offering you a considerable choice of mathematical topics and related topics, such as physics. You will further develop your mathematical knowledge and understanding in the topics you choose to study. Currently the following topics are available:

• pure mathematics: number theory, combinatorics, geometry, metric spaces, further group theory and analysis
• applied mathematics: advanced calculus, fluid mechanics, advanced numerical analysis, methods for partial differential equations, variational principles
• data analysis and statistical methods

and you will be able to model real-world situations using these methods. Depending on your Level 3 study, you will be able to apply your knowledge and understanding to practical problems or to further advancing your understanding of mathematics. (For example, after completing this degree you may wish to consider going on to the Mathematics MSc programme.) The topics may change from time to time; if they do, they will be replaced by others at a similar level providing similar learning outcomes.
Cognitive skills

On completion of this degree, you will have acquired:

• ability in mathematical and statistical manipulation and calculation, using a computer package when appropriate
• ability to assemble relevant information for mathematical and statistical arguments and proofs
• ability to understand and assess mathematical proofs and construct appropriate mathematical proofs of your own
• ability to reason with abstract concepts
• judgement in selecting and applying a wide range of mathematical tools and techniques
• qualitative and quantitative problem-solving skills.

Practical and/or professional skills

On completion of this degree, you will be able to demonstrate the following skills:

Application: apply mathematical and statistical concepts, principles and methods.
Problem solving: analyse and evaluate problems (both theoretical and practical) and plan strategies for their solution.
Information technology: use information technology with confidence to acquire and present mathematical and statistical knowledge, to model and solve practical problems and to develop mathematical insight.
Communication: communicate relevant information accurately and effectively, using a format, structure and style that suit the purpose (including an appropriate presentation).
Collaboration: work collaboratively with others on projects requiring mathematical knowledge and input.
Independence: be an independent learner, able to acquire further knowledge with little guidance or support.

Key skills

On completion of the degree, you will be able to demonstrate the following key skills:

• read and/or listen to documents and discussions having mathematical content, with an appropriate level of understanding
• communicate information having mathematical or statistical content accurately and effectively, using a format, structure and style that suits the purpose (including an appropriate presentation)
• work collaboratively with others on projects requiring mathematical knowledge and input.
Application of number

• exhibit a high level of numeracy, appropriate to a Mathematics graduate.

Information technology

• use information technology with confidence to acquire and present mathematical and statistical knowledge, to model and solve practical problems and to develop mathematical insight.

Learning how to learn

• be an independent learner, able to acquire further knowledge with little guidance or support.

Teaching, learning and assessment methods

Knowledge, understanding and application skills, as well as cognitive (thinking) skills, are acquired through distance-learning materials that include specially written module texts, guides to study, assignments and (where relevant) projects, and specimen examination papers; through a range of multimedia material (including computer software on some modules); and through feedback from tutors on your assignments. You will work independently with the distance-learning materials, but are encouraged (particularly at Level 1) to form self-help groups with other students, communicating face to face, by telephone, by email or through online forums. Students are generally supported by optional tutorials and day schools, which you are strongly advised to attend whenever possible. Written tutor feedback on assignments provides you with individual tuition and guidance. Modules at higher levels build on the foundations developed in recommended pre-requisite modules at lower levels. Some Level 1 modules have an examination, as do most modules at Levels 2 and 3. Generally, these permit you to bring and use the module handbook, reducing the need for memorisation and concentrating on your ability to apply concepts and techniques and express them clearly and coherently. For each individual module, you must pass both the continuous assessment and the examination (or end-of-module assessment) in order to obtain a pass.
At Level 2 and above, your pass will be graded, and the grades will contribute to the determination of the class of Honours degree that you are awarded. Skills are taught and assessed throughout the programme. Problem solving as described above is assessed particularly in the applied mathematics part of the programme. The use of computing and information technology is developed and assessed throughout this qualification. Communication skills are developed and assessed throughout the programme as you work on assignments and receive feedback from your tutor. Communication skills in the context of a presentation, as well as collaboration skills in assessed group work, are developed in some modules. The university experience, including distance learning using OU study materials, should develop your ability as a strong independent learner. The acquisition of the skills of communication, information technology and independence (or learning how to learn) has been covered above. Application of number is crucial for all higher-level mathematical skills. It is explicitly taught and assessed in the Access and Level 1 modules; the other modules in the programme assume that you already have this skill to an extent appropriate to the module level. On completion of the degree you will certainly have acquired a high level of numeracy.
{"url":"http://www3.open.ac.uk/study/undergraduate/qualification/learning-outcome/q31.htm","timestamp":"2014-04-20T01:17:38Z","content_type":null,"content_length":"23169","record_id":"<urn:uuid:c9dbd5c8-5501-4e10-ab8a-eeca212536ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
systems of equations algebra i puzzle

Author: Patlisk Cinmomc
Posted: Friday 29th of Dec 12:50

Hi dudes. I am in deep trouble. I simply don't know what to do. You know, I am having trouble with my math and need a helping hand with systems of equations algebra i puzzle. Anyone you know whom I can contact for help with distance of points, graphing functions and conversion of units? I tried hard to get a tutor, but my effort was in vain. They are hard to find and also pretty costly. It's also difficult to find someone quick enough, and I have this assignment coming up. Any advice on what I should do? I would very much appreciate a quick response.

Author: Jahm Xjardx
From: Odense, Denmark, EU
Posted: Saturday 30th of Dec 09:43

Don't fret, my friend. It's just a matter of time before you'll have no trouble answering those problems in systems of equations algebra i puzzle. I have the exact solution for your algebra problems: it's called Algebrator. It's quite new, but I assure you that it would be perfect in helping you with your math problems. It's a piece of software where you can answer any kind of algebra problem with ease. It's also user friendly and shows a lot of useful data that helps you learn the subject matter fully.

Author: Gog
From: Austin, TX
Posted: Saturday 30th of Dec 17:57

Algebrator is really a good software program that helps to deal with math problems. I remember facing trouble with interval notation, ratios and factoring. Algebrator gave a step-by-step solution to my algebra homework problem on typing it in and simply clicking on Solve. It has helped me through several math classes. I greatly recommend the program.

Author: CHS`
From: Victoria City, Hong Kong Island, Hong Kong
Posted: Saturday 30th of Dec 19:44

I advise using Algebrator. It not only helps you with your math problems, but also displays all the necessary steps in detail so that you can improve your understanding of the subject.

Author: totaelecdro
From: Nimes (South of France)
Posted: Monday 01st of Jan 17:46

Sounds appealing. Where can I get this software?

Author: CHS`
From: Victoria City, Hong Kong Island, Hong Kong
Posted: Tuesday 02nd of Jan 07:03

The software is a piece of cake. Just give it 15 minutes and you will be a professional at it. You can find the software here: http://www.algebra-equation.com/
{"url":"http://www.algebra-equation.com/solving-algebra-equation/like-denominators/systems-of-equations-algebra-i.html","timestamp":"2014-04-16T15:58:18Z","content_type":null,"content_length":"25606","record_id":"<urn:uuid:fd158dd9-6573-4be1-899e-e90c1bb21c86>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
A Neural Network in SQL Server

Why would anyone want to code a neural network in SQL Server? Just to see if it's possible, or for the fun of it, or to see how well the model works: you find your reason. In my case it was for fun, because I wanted to build the Entity Relationship Diagram (ERD) to model a neural network inside a database, and see if it worked like any real neural network would. Later I used the Analysis Services Neural Network algorithm to compare its results against mine. In this article I'll show you the results, the design and the basic concepts. In the next article I'll show you the code, how I worked with the Analysis Services Neural Network and Decision Trees algorithms, and how to understand the results obtained from them.

Neural Network Concepts

Basically, a neural network (NN) is a system based on the operation of biological neural networks; in other words, it is an emulation of the biological neural system in the brain. It is designed to "think" like a human brain would, in order to come to a conclusion based on information given to it. To do that, the facts are presented to it (in this case, the measured variables and the results obtained with them), so it can "learn" from them and later solve other similar problems. There's plenty of information on the web if you want to know more than this. You can start here http://

In the human brain, action potentials are the electric signals that neurons use to convey information to the brain; they travel through the net using what is called the synapse (the area of contact between the cells, even if there is no physical contact between them). As these signals are identical, the brain determines what type of information is being received based on the path that the signal took. The brain analyzes the patterns of signals being sent, and from that information it can interpret the type of information being received.
To emulate that behavior, the artificial neural network has several components: the node plays the role of the neuron, and the weights are the links between the different nodes, so they are what the synapse is in the biological net. The input signal is modified by the weights and summed to obtain the total input value for a specific node (see diagram below). An "activation function" is then applied to amplify that input and obtain the value of the particular node. There are three regions in a NN: the input region, which holds one node for each input variable; the hidden region, where there could be several internal layers; and the output region, which holds the result set. Below is a diagram to help you understand the model:

The nodes in the different layers are connected by weighted links. The figure shows the calculation that takes place between several input nodes Xi: the addition of their weighted values in the hidden layer for a particular node, Yi = Sum(Wij * Xi), the application of the activation function to get Vi, and the same process at the output node to get the value of the output variable. There are several algorithms used in neural networks; one of them is backpropagation, which is the one I used. Basically, what the backpropagation algorithm does is propagate backwards the error obtained in the output layer when comparing the calculated value in the nodes to the real or desired value. This propagation is made by distributing the error and modifying the weights, or links, between the previous and present nodes. Going backwards, the values of the nodes in the hidden layer can be modified, and so can the weights between the input and hidden layers, but not the values of the nodes in the input layer, as they are the values of the variables we are using. Once the algorithm gets to the input layer, it goes forward again with the new modified weights and calculates the results in the output layer again.
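The weighted-sum-and-activation step just described can be sketched in a few lines. This is only an illustration, not the article's code (which is T-SQL and appears in the follow-up article); the logistic sigmoid here is an assumed choice of activation function:

```python
import math

def activation(x):
    # Logistic sigmoid -- one common choice of activation function.
    return 1.0 / (1.0 + math.exp(-x))

def node_value(inputs, weights):
    # Total input for the node: Yi = Sum(Wij * Xi),
    # then amplified by the activation function to give Vi.
    total = sum(w * x for w, x in zip(weights, inputs))
    return activation(total)

# Two input nodes feeding one hidden node.
print(node_value([0.5, -1.0], [0.8, 0.2]))
```

The same computation is repeated for every hidden node and then once more for the output node.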
This process is repeated until an error of around 5% is reached (this was my end condition) or another end condition is met.

The data

I wanted to see if I could forecast rain with real numeric weather data. Normally, rain forecasting is done using NNs with graphical data from satellites, so I knew my approach wouldn't be that accurate; that didn't bother me at all, since I was just testing. I went to get real weather data from the National Meteorological Service in Buenos Aires, and they gave me five years of data from one of the observatories in the city. Below is the list of variables:

Variable     Description
Date         Date the measure was taken
Month        Month corresponding to date
MaxTemp      Maximum temperature
MinTemp      Minimum temperature
MeanTemp     Mean temperature
DueTemp      Due temperature
Wind         Velocity of the wind
CloudPct     Percent of the sky covered with clouds
Pressure     Mean pressure
Humidity     Percent of humidity in the air
SunDuration  Duration of the sun brightness in hours
Rain_mm      Millimeters of rain

As the predicted variable I used Probability of Precipitation (PoP) for the next day. To have the real value to use to make the net learn, I calculated PoP for the next day based on the Rain_mm variable already in there. This is a bit (0/1) variable: 1 means it will rain tomorrow, 0 means it won't. The first approach was to use all the variables for the input region, while the output region consists only of PoP. The middle region (the hidden layer) will always vary in size depending on the tests and the problem to solve. Usually we'd start with two times the size of the input region and go down until the net learns correctly and approaches the results in the best way. In my case I started with 20 nodes.
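The labelling step can be sketched like this (a hypothetical illustration; the article computes the labels inside the database, and treating any Rain_mm above 0 as "rain" is my assumption):

```python
def pop_labels(rain_mm):
    # rain_mm: daily rainfall in millimeters, in date order.
    # Day d is labelled 1 if it rains on day d+1, else 0.
    # The last day has no "tomorrow" in the data set, so it gets no label.
    return [1 if tomorrow > 0 else 0 for tomorrow in rain_mm[1:]]

print(pop_labels([0.0, 3.2, 0.0, 0.1]))  # [1, 0, 1]
```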
A portion of the data set is used by the network to learn; the rest is used, once the network weights have been calculated, to test whether the network can predict in an accurate way. The NN code programmed in SQL Server does the following:

1) Randomly generate the weights for the links between the nodes in the hidden region and the input region.
2) Calculate the values for the nodes in the hidden region, and follow the same process to get the value for the variable in the output region.
3) Compare the value of the output variable with the real value in the data set and go back to the hidden region to correct the weights, and so on, until the error between the output value and the real one is smaller than 5%.

The Entity Relationship Diagram

Below is the ERD that shows the design of the database that models the NN. The NN is modelled by the two entities Layers and Nodes. The Steps entity is used to track the number of times the algorithm goes through the NN as a whole. The RunNumber in the Weights entity is used to track the number of times the algorithm goes through the net within the same data set. The Stage can have two values: "Learning" and "Testing". DeltaW is the difference between the original weight and the weight modified by the calculated error. Here's the nice thing with SQL Server (or any DBMS): update triggers can be programmed on the Weights table to gather information about the weights being changed while the network was "thinking and learning", and later get an idea of how that was working from the inside, which is one of the reasons I wanted to do the NN inside SQL Server in the first place.

Experimentation and results

Now we get to the interesting part, where I show you the results. I was really surprised, because it wasn't working very well and at first it didn't even predict properly. I will show you the process and how I came to that conclusion.
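Steps 2 and 3 above, for a single-hidden-layer net with one output node, can be sketched as one backpropagation pass. This is a hedged Python illustration only; the article's implementation lives in SQL Server tables and procedures, and the sigmoid activation and the learning rate of 0.5 are my assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, target, w_hidden, w_out, lr=0.5):
    # Step 2: forward pass -- hidden node values, then the output value.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    out = sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))
    # Step 3: distribute the output error backwards, adjusting first the
    # hidden-to-output weights, then the input-to-hidden weights.
    d_out = (target - out) * out * (1.0 - out)
    new_w_out = [wo + lr * d_out * h for wo, h in zip(w_out, hidden)]
    d_hidden = [d_out * wo * h * (1.0 - h) for wo, h in zip(w_out, hidden)]
    new_w_hidden = [[w + lr * d * xi for w, xi in zip(ws, x)]
                    for ws, d in zip(w_hidden, d_hidden)]
    return new_w_hidden, new_w_out, out
```

Repeating train_step over the learning set until the error |target - out| drops below 0.05 corresponds to the 5% end condition in step 3.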
First of all, we can say that a NN is able to predict the results of a variable if the results we get from it are better than the results calculated using simple probability. In the case of PoP, as it could only have two values (1 or 0), the probability of error was 50%, so if the network was wrong 50% of the time, then it wasn't predicting at all. To analyze what was going on with my NN, I decided I needed extra help, so I turned to Analysis Services to really understand why it wasn't working, and used a Decision Tree algorithm with the same data I had. This way, I'd know which variables were good for predicting PoP and which wouldn't help at all. Here's what the decision tree Analysis Services gave me:

The tree shows that only a few variables are useful in the prediction: DueTemp, Wind, CloudPct, Humidity and Rain_mm. Also, we can see that the prediction is not going to be accurate, as the nodes in the tree are not pure (this can be seen in the blue and red bars below the nodes; they show the proportion of the cases with PoP=1 over the total cases for a certain range in the values of the variable shown in the node). So it's clear from here which variables should be used, and also that the neural network won't be able to be very accurate in the prediction. So I took the variables that weren't useful out of the input layer and ran the network again in my database, and here is what I found:

This is the classification matrix; it is used to compare the results of the predicted variable to the real values in the learning data. The columns show the real data and the rows the data from the network. In this case, there are 728 cases where the network correctly predicted it wasn't going to rain, but in another 60 cases it predicted it wasn't going to rain and it did (a false negative).
Also, there were 243 cases where the network predicted that it was going to rain and it didn't rain (a false positive). A false positive is a predicted value of 1 when the real value was 0, and a false negative is exactly the opposite, a predicted value of 0 when the real value was 1. Usually it is preferable to have false positives rather than false negatives, as they cause less damage. To show the impact, imagine we are using the NN to predict a disease in a person: a false positive implies that the person will need to undergo more tests to become certain of having the disease, whereas with a false negative the person will leave the hospital thinking he or she is healthy.

As the NN algorithm goes back and forth to get the correct weights that will allow it to predict the output variable, the weights vary in value from the initial randomly generated ones to the final ones that satisfy the 5% error bound between the desired output value and the predicted value. Below is the evolution of the weight between two nodes, which I could gather thanks to the data the trigger logged in the WeightLog table:

I hope this has been as interesting a ride for you as it was for me. I showed you the NN modeled in a database; then we went through the process of analyzing the initial results, interpreting them using SSAS decision trees, and making better use of the data to reach the goal of rain prediction from numeric data with the NN model. I also explained some of the concepts of the classification matrix and how to read its results, and you could see how an internode weight evolved while the network was learning. My intention is to continue this series by showing how to use SSAS to build the decision tree and the neural network with this same data, and also to show you the NN database, functions and procedures used to run and test the network.
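The counts read off the classification matrix can be computed directly from the predicted and real labels; here is a minimal sketch (the 0/1 encoding follows the article, while the function itself is hypothetical):

```python
def classification_matrix(predicted, actual):
    # Cross-tabulate predicted vs. real values of a 0/1 variable.
    pairs = list(zip(predicted, actual))
    return {
        "TP": sum(1 for p, a in pairs if p == 1 and a == 1),  # predicted rain, it rained
        "TN": sum(1 for p, a in pairs if p == 0 and a == 0),  # predicted dry, it stayed dry
        "FP": sum(1 for p, a in pairs if p == 1 and a == 0),  # false positive
        "FN": sum(1 for p, a in pairs if p == 0 and a == 1),  # false negative
    }

print(classification_matrix([1, 0, 1, 0], [1, 0, 0, 1]))
```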
{"url":"http://www.sqlservercentral.com/articles/SQL+Server/68139/","timestamp":"2014-04-20T20:25:09Z","content_type":null,"content_length":"50317","record_id":"<urn:uuid:b22b7c41-a585-4004-90c2-b057da391f8e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
f06pbc  Matrix-vector product, real rectangular band matrix
f06pdc  Matrix-vector product, real symmetric band matrix
f06pgc  Matrix-vector product, real triangular band matrix
f06pkc  System of equations, real triangular band matrix
f06sbc  Matrix-vector product, complex rectangular band matrix
f06sdc  Matrix-vector product, complex Hermitian band matrix
f06sgc  Matrix-vector product, complex triangular band matrix
f06skc  System of equations, complex triangular band matrix
f07bdc  LU factorization of real m by n band matrix
f07bec  Solution of real band system of linear equations, multiple right-hand sides, matrix already factorized by f07bdc
f07bgc  Estimate condition number of real band matrix, matrix already factorized by f07bdc
f07bhc  Refined solution with error bounds of real band system of linear equations, multiple right-hand sides
f07brc  LU factorization of complex m by n band matrix
f07bsc  Solution of complex band system of linear equations, multiple right-hand sides, matrix already factorized by f07brc
f07buc  Estimate condition number of complex band matrix, matrix already factorized by f07brc
f07bvc  Refined solution with error bounds of complex band system of linear equations, multiple right-hand sides
f07hdc  Cholesky factorization of real symmetric positive-definite band matrix
f07hec  Solution of real symmetric positive-definite band system of linear equations, multiple right-hand sides, matrix already factorized by f07hdc
f07hgc  Estimate condition number of real symmetric positive-definite band matrix, matrix already factorized by f07hdc
f07hhc  Refined solution with error bounds of real symmetric positive-definite band system of linear equations, multiple right-hand sides
f07hrc  Cholesky factorization of complex Hermitian positive-definite band matrix
f07hsc  Solution of complex Hermitian positive-definite band system of linear equations, multiple right-hand sides, matrix already factorized by f07hrc
f07huc  Estimate condition number of complex Hermitian positive-definite band matrix, matrix already factorized by f07hrc
f07hvc  Refined solution with error bounds of complex Hermitian positive-definite band system of linear equations, multiple right-hand sides
f07vec  Solution of real band triangular system of linear equations, multiple right-hand sides
f07vgc  Estimate condition number of real band triangular matrix
f07vhc  Error bounds for solution of real band triangular system of linear equations, multiple right-hand sides
f07vsc  Solution of complex band triangular system of linear equations, multiple right-hand sides
f07vuc  Estimate condition number of complex band triangular matrix
f07vvc  Error bounds for solution of complex band triangular system of linear equations, multiple right-hand sides
f08hcc  All eigenvalues and optionally all eigenvectors of real symmetric band matrix, using divide and conquer
f08hec  Orthogonal reduction of real symmetric band matrix to symmetric tridiagonal form
f08hqc  All eigenvalues and optionally all eigenvectors of complex Hermitian band matrix, using divide and conquer
f08hsc  Unitary reduction of complex Hermitian band matrix to real symmetric tridiagonal form
f08lec  Reduction of real rectangular band matrix to upper bidiagonal form
f08lsc  Reduction of complex rectangular band matrix to upper bidiagonal form
f08ufc  Computes a split Cholesky factorization of real symmetric positive-definite band matrix A
f08utc  Computes a split Cholesky factorization of complex Hermitian positive-definite band matrix A
f16rbc  1-norm, ∞-norm, Frobenius norm, largest absolute element, real band matrix
f16rec  1-norm, ∞-norm, Frobenius norm, largest absolute element, real symmetric band matrix
f16ubc  1-norm, ∞-norm, Frobenius norm, largest absolute element, complex band matrix
f16uec  1-norm, ∞-norm, Frobenius norm, largest absolute element, complex Hermitian band matrix

© The Numerical Algorithms Group Ltd, Oxford UK. 2002
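The routines above exist because a band system can be factorized and solved in far fewer operations than a dense one. As a rough, non-NAG illustration of the idea behind a band solver such as f07bec, here is the tridiagonal special case (bandwidth 1), solved by the Thomas algorithm in O(n) instead of the dense O(n^3); this sketch is my own and is not NAG code:

```python
def solve_tridiagonal(a, b, c, d):
    # Solve Ax = d where A has sub-diagonal a, diagonal b, super-diagonal c.
    # a[0] and c[-1] are unused. This is a band LU without pivoting, so it
    # assumes the system is well-conditioned (e.g. diagonally dominant).
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):  # forward elimination (the "factorize" stage)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution (the "solve" stage)
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 2x+y=4, x+2y+z=8, y+2z=8  ->  x=1, y=2, z=3
print(solve_tridiagonal([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```

General band routines work the same way, touching only the entries within the bandwidth, which is why the factorize/solve split (f07bdc then f07bec) pays off for multiple right-hand sides.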
{"url":"http://www.nag.com/numeric/CL/manual/html/indexes/kwic/band.html","timestamp":"2014-04-20T18:56:17Z","content_type":null,"content_length":"21176","record_id":"<urn:uuid:48f3f4fc-f8b5-4813-84a7-e2b318cd7735>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
A. Coarse-grained description and discretization
B. Representation through pairwise interactions and direct MC simulations
C. Field-theoretic representation and SCF theory
D. SCMF simulations
A. Homopolymer melts
   1. Molecular conformations
   2. Density fluctuations
   3. Long-ranged inter- and intramolecular correlations
B. Diblock copolymer melts
{"url":"http://scitation.aip.org/content/aip/journal/jcp/125/18/10.1063/1.2364506","timestamp":"2014-04-20T19:23:18Z","content_type":null,"content_length":"132681","record_id":"<urn:uuid:5b380927-33a5-4015-a42d-df2d9fae59e3>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Group Theory: Birdtracks, Lie's, and Exceptional Groups

This book has to be seen to be believed! The title, Group Theory, is nothing if not surprising, given that the material dealt with by Predrag Cvitanović in these roughly 250 pages requires a level of sophistication well beyond what is offered in the early stages of university algebra. In point of fact, the general theme of the book under review is Lie theory with representation theory in the foreground, and Cvitanović's revolutionary goal is to develop large parts of the subject strictly by means of calculi of diagrams (e.g., "birdtracks") and, for lack of a better word, the attendant diagrammatic yoga.

Indeed, it is the book's subtitle, Birdtracks, Lie's, and Exceptional Groups, that gives at least a hint of what is to follow and that it will take us far off the beaten track: what is a birdtrack, after all? Well, we are quickly told that it is a "notation inspired by the Feynman diagrams of quantum field theory," originally invented (in prototype) by Sir Roger Penrose. Birdtracks in fact present invariant tensors, and "invariant tensor diagrams replace algebraic reasoning in carrying out all group theoretic computations… [The indicated] diagrammatic approach is particularly effective in evaluating complicated coefficients and group weights, and revealing symmetries hidden by conventional algebraic or index notations." It is for these reasons that Group Theory boasts more than 4000 diagrams and illustrations, thus yielding an average of sixteen per page. Zowie!

Given the preceding brief sketch of birdtracks' raison d'être, it is proper, then, to compare them, as suggested, to the most famous, and supremely successful, ploy for replacing gruesome if not prohibitive calculations by a yoga of diagrams, namely, Richard Feynman's ultra-slick shorthand for doing perturbation computations (integration by parts on metabolic steroids) in Q(uantum)E(lectro)D(ynamics). Says Cvitanović on p.
42: “In developing the ‘birdtrack’ notation in 1975 I was inspired by Feynman diagrams and the elegance of Penrose’s ‘binors’… So why… ‘birdtracks’ and not ‘Feynman diagrams’? The difference is that here diagrams are not a mnemonic device, an aid in writing down an integral that is to be evaluated by other techniques… Here ‘birdtracks’ are everything — unlike Feynman diagrams, here all calculations are carried out in terms of birdtracks, from start to finish.” This having been said, it is clear that the reader of this monograph should already be rather familiar with Lie groups and representation theory (I do like Serre’s old book for learning this gorgeous material) and be disposed to adopt an utterly pictorial way of doing calculations in this area. The fact that Cvitanović’s Group Theory is not intended for rookies is revealed right off the bat by the list of chapters. We find, on p. 5 (!), the following passage: “…the first seven chapters [of 21] are largely a compilation of definitions and general results that might appear unmotivated on first reading. The reader is advised to work through the examples… in [the second] chapter, jump to the topic of possible interest… and birdtrack if able or backtrack if necessary.” Obviously this sage advice is not aimed at a novice; it’s even fair to say, I think, that Cvitanović has the in-crowd of Lie theorists (or those aspiring accordingly) as his target audience. Furthermore, this audience ought to be peppered with a decent number of physicists: consider, e.g., the following remarks on p. 166: “What are these ‘spinsters’? A trick for relating SO(n) antisymmetric reps to Sp(n) symmetric reps? 
That can be achieved without spinsters: indeed Penrose… had observed many years ago that SO(-2) yields Racah coefficients in a much more elegant manner than the usual angular momentum manipulations…" On the other hand, a few lines down the page we encounter metaplectic representations of the symplectic group, pointing to deep themes in analytic number theory (reciprocity laws for global number fields treated in the style of Weil and Kubota). It is worth mentioning in this connection that metaplectic covering groups, as such, originate in André Weil's famous 1964 Acta Math. paper, "Sur certains groupes d'opérateurs unitaires," which has the projective Weil representation at its core. But this paper was preceded by an article by I. E. Segal, introducing the prototype of this projective representation in a physics context, and nowadays it is often referred to as the Segal-Shale-Weil representation (when it's not simply called the oscillator representation). In any event, this theme, as also the very orientation of Cvitanović's Group Theory, properly belongs to the area where physics and mathematics meet. Thus, for the right reader, which is to say, an R^{>0}-linear combination of mathematician and physicist equipped with a zeal for novel combinatorics-flavored diagram-gymnastics, this book will be a treat and a thrill, and its new and radical way to compute many things Lie is bound to make its mark.

Michael Berg is Professor of Mathematics at Loyola Marymount University in Los Angeles, CA.
Roslindale Geometry Tutor Find a Roslindale Geometry Tutor ...I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. 16 Subjects: including geometry, French, elementary math, algebra 1 ...I have also found that study skills and organization play a large role in students' academic success, and that certain study techniques are particularly useful for math and physics. I like to check my students' notebooks and give them guidelines for note-taking and studying. Here too I stress a... 9 Subjects: including geometry, calculus, physics, algebra 1 ...Before I began a family I was in the actuarial field. I also worked at Framingham State University, in their CASA department, which provides walk-in tutoring for FSU students. I did this from 25 Subjects: including geometry, reading, English, writing ...I am also the advisor for the high school math club and the advisor of the National Honor Society at a local high school. I have taught: SAT Prep, Pre-Calculus, Trigonometry, Algebra 2 honors, Algebra 2 standard course, Geometry honors & standard, Algebra 1, MCAS Prep, Pre-Algebra and 4-8th grade... 12 Subjects: including geometry, algebra 1, GED, algebra 2 ...Being a good proofreader requires a keen eye for grammatical construction, common syntax and typology errors, as well as an understanding of style expectations and guidelines. I am willing to proofread manuscripts and papers in any and all subjects as well as help students hone their own proofre... 63 Subjects: including geometry, English, chemistry, physics
Computer Arithmetic Algorithms "This is one of the best available textbooks on computer arithmetic design" - review, Analog Dialogue See the Computer Arithmetic Algorithms Simulator - a companion website featuring Java and JavaScript simulators of many of the algorithms discussed in the book. Book features • Table of contents and features summary - GIF file or PostScript file • Main features (PDF file) • Review of the 1st edition of the book from IEEE Computer Magazine • Review of the 2nd edition of the book from Analog Dialogue • Review of the 2nd edition of the book from ACM SIGACT News Ordering information • Order the book from A. K. Peters (part of CRC Press) • Order the book from Amazon.com For students and instructors • Corrections for the 1st printing, 2002 • Powerpoint slides for instructors • Solutions to selected problems (chapters 1 - 10) - PostScript file, PDF file • Solutions to almost all the problems (for instructors only: for access contact the publisher susie.carlisle AT taylorandfrancis DOT com) Relevant computer arithmetic links
Finding Isomorphisms I have the group G whose elements are infinite sequences of integers (a1, a2, ...). These sequences combine termwise as such: (a1, a2,...)(b1, b2,...) = (a1+b1, a2+b2,...) I would like to find an isomorphism from G x Z (the direct product of G and the integers) to G as well as an isomorphism from G x G to G. So far, I have found several homomorphisms for both of these but all of them lack the injective property so fail to be isomorphisms. What sorts of functions can I construct that are isomorphisms to G for these two groups?
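One standard construction (this sketch is mine, not part of the original thread): since G is a direct product of countably many copies of Z, one can prepend or interleave coordinates:

```latex
% G x Z -> G: prepend the integer as a new first coordinate.
\varphi\bigl((a_1, a_2, \dots),\, n\bigr) \;=\; (n,\, a_1,\, a_2,\, \dots)

% G x G -> G: interleave the two sequences.
\psi\bigl((a_1, a_2, \dots),\, (b_1, b_2, \dots)\bigr) \;=\; (a_1,\, b_1,\, a_2,\, b_2,\, \dots)
```

Both maps respect termwise addition, and each has an evident inverse (drop the first coordinate; split into odd- and even-indexed subsequences), so both are bijective homomorphisms.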
Area of a polygon March 22nd 2007, 01:26 PM #1 Feb 2007 Area of a polygon Hello, I was wondering if anyone can help me with this problem. A) Let A_n be the area of a polygon with n equal sides inscribed in a circle with radius r. By dividing the polygon into n congruent triangles with central angle of (2pi)/n, show that A_n = [n(r^2)/2] sin((2pi)/n). B) Show that limit as n goes to infinity A_n = (pi)(r^2). For this one, I think that you can use the fact that the limit as x goes to 0 of (sin x)/x = 1. Appreciate it if anyone can contribute anything. You want to find lim n--> oo (1/2)n(r^2)*sin(2pi/n). Multiply and divide by 2pi/n to get (pi)(r^2) * lim n--> oo sin(2pi/n)/(2pi/n). But lim n--> oo sin(2pi/n)/(2pi/n) = 1, since 2pi/n --> 0 as n --> oo, so the limit is (pi)(r^2). March 22nd 2007, 02:12 PM #2 Global Moderator Nov 2005 New York City
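As a numerical sanity check of the limit worked out above (a sketch; variable and function names are mine):

```python
import math

def polygon_area(n, r):
    # Area of a regular n-gon inscribed in a circle of radius r:
    # n congruent triangles, each with area (1/2) r^2 sin(2*pi/n).
    return 0.5 * n * r * r * math.sin(2 * math.pi / n)

r = 3.0
for n in (6, 60, 600, 6000):
    print(n, polygon_area(n, r))

# As n grows, the area approaches pi * r^2.
assert abs(polygon_area(10**6, r) - math.pi * r * r) < 1e-6
```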
Maintainer: byorgey@cis.upenn.edu
Safe Haskell: None

primeLayout :: (Backend b R2, Renderable (Path R2) b) => [Colour Double] -> Integer -> Diagram b R2 -> Diagram b R2

primeLayout takes a positive integer p (the idea is for it to be prime, though it doesn't really matter) and a diagram, and lays out p rotated copies of the diagram in a circular pattern. There is a special case for p = 2: if the given diagram is taller than it is wide, then the two copies will be placed beside each other; if wider than tall, they will be placed one above the other.

The regular p-gon connecting the centers of the laid-out diagrams is also filled in with vertical bars of color representing the number p. In particular, there is one color for each decimal digit (the provided list should have length 10 and represents the digits 0-9), and the colors, read left to right, give the decimal expansion of p.

import Diagrams.TwoD.Factorization
primeLayoutEx = pad 1.1 . centerXY . hcat' (with & sep .~ 0.5) . map (sized (Width 1))
  $ [ primeLayout defaultColors 5 (circle 1 # fc black)
    , primeLayout defaultColors 103 (square 1 # fc green # lw 0)
    , primeLayout (repeat white) 13 (circle 1 # lc orange)
    ]

colorBars :: Renderable (Path R2) b => [Colour Double] -> Integer -> Path R2 -> Diagram b R2

Draw vertical bars of color inside a polygon which represent the decimal expansion of p, using the provided list of colors to represent the digits 0-9.

import Diagrams.TwoD.Factorization
colorBarsEx = colorBars defaultColors 3526 (square 1)

factorDiagram' :: (Backend b R2, Renderable (Path R2) b) => [Integer] -> Diagram b R2

Create a centered factorization diagram from the given list of factors (intended to be primes, but again, any positive integers will do; note how the example below uses 6), by recursively folding according to primeLayout, with the defaultColors and a base case of a black circle.
import Diagrams.TwoD.Factorization
factorDiagram'Ex = factorDiagram' [2,5,6]

factorDiagram :: (Backend b R2, Renderable (Path R2) b) => Integer -> Diagram b R2

Create a default factorization diagram for the given integer, by factoring it and calling factorDiagram' on its prime factorization (with the factors ordered from smallest to biggest).

import Diagrams.TwoD.Factorization
factorDiagramEx = factorDiagram 700

ensquare :: (Backend b R2, Renderable (Path R2) b) => Double -> Diagram b R2 -> Diagram b R2

Place a diagram inside a square with the given side length, centering and scaling it to fit with a bit of padding.

import Diagrams.TwoD.Factorization
ensquareEx = ensquare 1 (circle 25) ||| ensquare 1 (factorDiagram 30)

fdGrid :: (Renderable (Path R2) b, Backend b R2) => [[Integer]] -> Diagram b R2

fdGrid n creates a grid of factorization diagrams, given a list of lists of integers: the inner lists represent L-R rows, which are laid out from top to bottom.

import Diagrams.TwoD.Factorization
fdGridEx = fdGrid [[7,6,5],[4,19,200],[1,10,50]]

fdGridList :: (Renderable (Path R2) b, Backend b R2) => Integer -> Diagram b R2

fdGridList n creates a grid containing the factorization diagrams of all the numbers from 1 to n^2, ordered left to right, top to bottom (like the grid seen on the cover of Hacker Monthly, http://

import Diagrams.TwoD.Factorization
grid100 = fdGridList 10
grid100Big = grid100

fdMultTable :: (Renderable (Path R2) b, Backend b R2) => Integer -> Diagram b R2

fdMultTable n creates a "multiplication table" of factorization diagrams, with the diagrams for 1 to n along both the top row and left column, and the diagram for m*n in row m and column n.

import Diagrams.TwoD.Factorization
fdMultTableEx = fdMultTable 13
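Per the description above, factorDiagram's first step is a prime factorization ordered from smallest to largest. The library itself is Haskell; this Python sketch (function name mine) just illustrates that ordering step:

```python
def prime_factors(n):
    """Prime factors of n in nondecreasing order, with multiplicity --
    the kind of list factorDiagram would hand to factorDiagram'."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(700))  # [2, 2, 5, 5, 7]
```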
Periodic orbits of Hamiltonian systems and symplectic reduction

Ibort, A. and Martínez Ontalba, Celia (1996) Periodic orbits of Hamiltonian systems and symplectic reduction. Journal of Physics A: Mathematical and General, 29 (3). pp. 675-687. ISSN 0305-4470

Restricted to Repository staff only until 2020.

Official URL: http://iopscience.iop.org/0305-4470/29/3/018

The notion of relative periodic orbits for Hamiltonian systems with symmetry is discussed and a correspondence between periodic orbits of reduced and unreduced Hamiltonian systems is established. Variational principles with symmetries are studied from the point of view of symplectic reduction of the space of loops, leading to a characterization of reduced periodic orbits by means of the critical subsets of an action functional restricted to a submanifold of the loop space of the unreduced manifold. Finally, as an application, it is shown that if the symplectic form ω has finite integral rank, then the periodic orbits of a Hamiltonian system on the symplectic manifold (M, ω) admit a variational characterization.

Item Type: Article
Uncontrolled Keywords: relative periodic orbits; Hamiltonian systems; symmetry; variational principles with symmetries; periodic orbits; critical subsets; unreduced manifold
Subjects: Sciences > Mathematics > Differential equations
ID Code: 16809
Deposited On: 23 Oct 2012 08:27
Last Modified: 07 Feb 2014 09:36
Measurements of Two Adolescent Groups “The measurement of statistics can intrigue students when the subject is themselves and the measurements and statistics obtained concern subjects which interest them” was the claim I made at the start of the paper done last year in the Statistics I Seminar at the Yale-New Haven Teachers’ Institute. I became intrigued with the study too, and so this paper is a follow-up and comparison with the measurement and statistics of pregnant adolescent students at the Polly T. McCabe Center in 1984-1985. In order to broaden this unit and make it applicable to students in other schools, lesson plans are again included using some of the methods and theories learned in the Statistics Seminar. Most of these plans were “tried out” by the Biology and Physical Science students at the McCabe Center. The main objective of this unit is to use statistical methods to try to get a clearer picture of the size, scope, and perhaps changing patterns concerning the problem of teenage pregnancy in New Haven. Besides measuring geographical location, age, and grade placement of the students, surveys were filled out by the students and analyzed in order to understand the viewpoints of the teenagers directly involved in the problems of pregnancy and the nurturing and care of a baby, while continuing their education. The second objective is to use the lesson plans to integrate statistics into the science classroom, with students measuring, calculating, graphing, and making sense of their findings. The student enrollment at Polly T. McCabe Center for 1985-1986 was 200. This is almost the same enrollment as the preceding year, when the number of students was 206. The graphs and maps for the statistical comparison study done at the McCabe Center are at the end of this section. The issue of geographical location of students was of particular interest this year because of the relocation of the McCabe Center in September, 1985.
After nineteen years at 111 Whalley Avenue, the McCabe Center moved to a renovated facility at 390 Columbus Avenue, adjacent to the Hill Health Center. By comparing the two maps, one can see at a glance that there is a heavier concentration of students living in the arbitrarily designated circle called Area I in 1985-1986. The McCabe Center move to the center of Area I for the 1985-1986 school year may account for some of the increase of numbers of students in this area. The calculated 14% increase in Area I could mean that students who might ordinarily drop out because of the pregnancy have decided to attend McCabe Center because of the convenient location. However, without an exhaustive follow-up on this data, one cannot assess the real cause, or causes, of this 14% increase. It could simply mean that there has been an increase in the rate of teenage pregnancy in this area. Of course, the many causative factors that might cause a rate increase could also be studied. The other areas of the map for 1985-1986 show only a slight decrease of 1%, 2%, or 3% from the previous year, except for Area VI which showed a slightly larger decrease of 6% student concentration from the previous year. This may not be significant, or there may be some transportation problem involved, since Area VI represents the outlying areas of the city. (See Graph I) The comparison of the age distribution of the two groups from 1984-1985 and 1985-1986 showed very little difference. The average age of the 206 students in 1984-1985 was 15.81. This was calculated by adding all the ages and dividing by the number of students (mean = sum of the ages ÷ N). In 1985-1986 the average age of the students was 15.91.
When one looks at Graph #2, the difference that does exist is apparent because the 1984-1985 graph is more dispersed or spread out on both sides of the mean, while the 1985-1986 curve has lost the extreme values at each end (age 11 and age 20) that were present in the previous year, and shows less dispersion and a higher peak at the mean. These facts are shown mathematically by calculating and charting the standard deviation from the mean, which is 1.63 in 1984-1985, and 1.37 in 1985-1986. However, the average ages are so close, and the dispersions of the curves so similar, that one cannot make any inferences concerning statistical differences in these two populations. But there was a percentage drop in students under fifteen years of age from 19% in 1984-1985 to 15% in 1985-1986. Hopefully, this would be the very beginning of a trend in this direction. However, the t test, which assesses whether an observed difference between two sample means is statistically significant, showed that there was no statistical difference between these two groups. The comparison of grade distribution in the school between 1984-1985 and 1985-1986 was also quite similar, as one might expect because of the close correlation in ages between the two groups. However, Graph III shows the high point of each graph in a different grade. The grade that has the greatest number of students (the mode) is the ninth grade in 1984-1985. In 1985-1986, the mode has shifted to the tenth grade. This probably means that there were fewer students repeating the ninth grade, since the ages of the two groups are approximately the same. Graph IV represents the numbers of students who were enrolled in various high schools or middle schools before entering Polly T. McCabe Center. The years 1984-1985 and 1985-1986 are again contrasted on the bar graph.
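The t test mentioned above can be reproduced today from the summary statistics in the text (a sketch with modern tools; this uses Welch's form of the two-sample t statistic, and the function name is mine):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    # Two-sample t statistic computed from summary statistics (Welch's form).
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (m1 - m2) / se

# Mean ages and standard deviations reported in the unit.
t = welch_t(15.81, 1.63, 206, 15.91, 1.37, 200)
print(round(t, 2))  # -0.67: well inside the ~±1.96 cutoff,
                    # so no significant difference, as the text concludes.
```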
It is difficult to interpret the change in numbers in the three major high schools because Lee High School had already phased out some of its programs and these students had gone to the other high schools. In the middle schools there was little change within each school, but there was a total drop from 25 students in 1984-1985 to 18 students in 1985-1986. One statistic not studied in 1984-1985, but studied this year, was the place where students received pre-natal care. Approximately 44% of the students used the clinic at Yale New Haven Hospital, 23% used Hill Health Center, 11% used the Hospital of St. Raphael, 9% used Community Health Care Plan, 7% used private physicians, 6% used Fair Haven Community Health Clinic, and some students used various other places. The emphasis on pre-natal care is important to the young mothers and the babies. In a study by Ooms, about 10% of the teenagers with pre-natal care gave birth to low birthweight babies, “and of the teenage mothers with no prenatal care, 26% had low birthweight infants”. In the study done at McCabe last year on 118 babies, only 10.1% of the adolescent mothers had a low birthweight newborn (below 5.5 lbs.). One of the methods for preventing teenage pregnancy that is often mentioned is the teaching of sex education in the schools. Fifty-nine students at Polly T. McCabe Center participated in a survey designed and given by Lillian Townsend, a staff member at the school. Some sex education is taught in some of the New Haven schools, where it is called Family Life Education. In the unsigned surveys, 95% of the students said that they would want their children to have Family Life Education courses when they are in school. Five percent were undecided, and nobody said no. 86% thought boys and girls should take the classes together. 88% thought that parents should be notified that the classes were being taught, but only 60% thought the parents should be able to attend a class.
78% thought that parents should not be allowed to keep the child from attending the classes. Most thought the classes should begin in the middle schools. 17% thought 5th grade would be the best grade to start. Family Life Education could include not only sex education but also decision-making skills that would be so helpful to the pre-adolescent and the adolescent. Child care is of great concern to the adolescent mother and is necessary if she is to continue her education. A sample Child Care Survey which I wrote and gave to 43 students is included with a bar graph of some of the results. The first nine questions were only answered by the students who had already had their babies. The 23 students who were still pregnant only answered question 10. Some of the students wrote in answers on the back of the paper when asked in question 10 about an ideal plan for child care while they are in school. Most of the students said they needed someone they trusted, and most said they already had found someone to be trusted and take care of the baby the same way they would. For example: “I want someone to take care of him like I do. I feed him, I talk to him, I bathe him, I play with him.” This year, for the first time, eight students from Polly T. McCabe Center are able to bring their babies to the Mary Sherlock Day Care Center which is adjacent to the school. These young mothers are able to see their babies during the school day. There is, however, only a limited number of babies that can be cared for, because this Day Care Center also serves the surrounding community. More emphasis has been placed this year on the role of the male in preventing teenage pregnancy. There have been talks in the schools, posters and other publicity involving the males more in the decision-making process of this problem. In a survey that 27 McCabe students participated in called “The Teen Fathers Needs Assessment”, the emphasis was on involving the baby’s father in a job training program.
According to the results of the survey, 33% of the fathers were still in school. 67% of the young mothers thought the baby’s father might be interested in a job training program.

(figures available in print form)

____2. Have each student measure a sample lima bean lengthwise at its longest point, and pass this one sample around the room. Ask each student to write down his answer on a little slip of unsigned paper and to hand it in.

____3. The teacher can collect papers, and the next day post the cumulative results on the board, from which the class can make a histogram.

Purpose: To graph two distribution curves on the same graph, so that the pulse rate of the class, with a normal distribution, is graphed next to the higher rates of the pulses of the class after exercise, but still containing a normal distribution. These two groups would be interesting to compare using a t test, but the scope of this type of study would perhaps be better suited to a math class.

Materials: Clock or watch with a second hand.

Method:
1. Have students seated and resting.
2. Have them practice taking their own pulses with 1st, 2nd, and 3rd fingers lightly pressing the underside of the wrist near the thumb, or on the neck.
3. Time for 30 seconds and multiply by 2 to get the pulse rate. Repeat a few times, checking each answer against the previous answer, to make sure the students feel they are getting accurate counts. Repeat a third time, but this time ask the students to record the number on a paper in a table. (See example below.)
4. Ask the students to walk, or jog, in place, for 2 minutes.
5. After the 2 minutes, quickly repeat the pulse-taking procedure while standing.
6. Have students record results in the table.
7.
The teacher can then collect papers, and post results on the board. Students can help with the class tally. (See example.)
8. Graph the results of the pulse when sitting to get a histogram or curve, then graph the results after the class exercised to get a histogram or curve. Do both on the same graph.

NOTE: The variations within certain limits should be discussed with the class so nobody worries about the variety of pulse rates obtained.

Many variables could be added to this experiment. The average pulse rate of females is supposed to be slightly faster (by about 7 beats) than the average pulse rate of males. This would be fun to check in a laboratory lesson such as the above plan.

(figure available in print form)

1. Runyon, Richard P. and Haber, Audrey, Fundamentals of Behavioral Statistics. Addison-Wesley Publishing Co., 1984.
2. Donovan, Frank, Let’s Go Metric. New York: Weybright and Talley, 1974.
3. Green, Marvin H., International and Metric Units of Measurement. New York: Chemical Publishing Company, 1973.
4. Ooms, Theodora, Teenage Pregnancy in a Family Context. Philadelphia: Temple University Press, 1981.
5. Rowntree, Derek, Statistics Without Tears. New York: Charles Scribner’s Sons, 1981.
6. Steinberg, Laurence, Adolescence. New York: Alfred A. Knopf, Inc., 1985.
7. Watkins, Chang-Van Horn, and Kron, Biology for Living. Morristown, New Jersey, 1982.
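A teacher could also tally and summarize the pulse-rate data from the lesson above with a short program. This is a sketch; the readings below are hypothetical sample values, not data from the unit:

```python
import math

def mean_and_sd(values):
    # Mean and (population) standard deviation, as in the unit's age analysis.
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / n)
    return m, sd

# Hypothetical class readings (beats per minute), purely for illustration.
resting  = [68, 72, 75, 70, 66, 74, 71, 69]
exercise = [95, 102, 98, 110, 94, 105, 99, 101]

for label, data in (("resting", resting), ("after exercise", exercise)):
    m, sd = mean_and_sd(data)
    print(f"{label}: mean {m:.1f}, sd {sd:.1f}")
```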
a 2.0 uF and UF capacitor are connected in series across an 8.0 V dc source what is the charge on the 2.0 uF capacitor? Number of results: 96

Determine the ac voltage across each capacitor and the current in each branch of the circuit? 1. Vs = 10 V rms, f = 300 Hz, C1 = 0.01 uF, C2 = 0.022 uF, C3 = 0.015 uF, C4 = 0.047 uF, C5 = 0.01 uF and C6 = 0.015 uF. Thursday, May 27, 2010 at 6:16pm by UNKNOWN.

a = g sin(theta) = 9.8 sin 25° ≈ 4.14 m/s² Uf = Ui + at = 0 + 4.14(10) ≈ 41.4 m/s Sunday, October 19, 2008 at 11:59pm by Richard

What is the value of the total capacitive reactance in each circuit? A. f = 1 kHz and C = 1.047 uF. B. f = 1 Hz, C1 = 10 uF and C2 = 15 uF. C. F = 60 Hz, C1 = 1 uF and C2 = 1 uF. Thursday, May 27, 2010 at 6:09pm by UNKNOWN.

A 3.00 uF and a 5.00 uF capacitor are connected in series across a 30.0 V battery. A 7.00 uF capacitor is then connected in parallel across the 3.00 uF capacitor. a) Calculate the equivalent capacitance of the circuit I think the answer is 3.33 uF. b) Determine the voltage across ... Tuesday, February 16, 2010 at 7:07pm by ALan

For the 10 uF capacitor, Q' = C'V = (10*10^-6)*40 = 400 uC For the series 4 uF and 3 uF capacitors, which both acquire the same charge Q", 10 V = Q"/(4*10^-6) + Q"/(3*10^-6) = Q"/(1.714*10^-6) The two series capacitors act like a single 1.714 uF capacitor Q" = 17.14 uC is the ... Tuesday, April 1, 2008 at 1:37pm by drwls

MATH. (physics) I don't see why they both don't charge to the full 12 V, if they are charged separately or in parallel. What you seem to be talking about is charging them together in series. In that case, the sum of the voltages is the supply voltage (12 V), and they both hold the same charge... Thursday, May 27, 2010 at 3:58am by drwls

If I understand the circuit, the 3.0 and 4.0 uF capacitors are connected in series with a 10.0 uF capacitor connected in parallel and all of this across a 40 v cell. First determine the capacitance of the series 3.0 and 4.0: 1/C = 1/3.0 + 1/4.0, so C = 1.7 uF.
Then add in the ... Tuesday, April 1, 2008 at 1:37pm by DrBob222

You have a circuit with a 4 uF capacitor connected in series to 2 capacitors connected in parallel (2 uF and 1.5 uF). The battery is 12 V. The first question asked what the circuit's equivalent capacitance was. I calculated it to be 1.9 uF, which is correct. The next question... Saturday, February 19, 2011 at 8:13pm by Jon

A 3.0-uF capacitor and a 4.0-uF capacitor are connected in series across a 40.0-V battery. A 10.0-uF capacitor is also connected directly across the battery terminals. Find the total charge that the battery delivers to the capacitors. I NEED HELP WITH THIS PROBLEM. THANKS! Tuesday, April 1, 2008 at 1:37pm by Becky

An 8.3 uF and a 2.9 uF capacitor are connected in series across a 24 V battery. What voltage is required to charge a parallel combination of the two capacitors to the same total energy? Tuesday, October 15, 2013 at 11:09pm by Anonymous

Uranium-235 can be separated from U-238 by fluorinating the uranium to form UF6 (which is a gas) and then taking advantage of the different rates of effusion and diffusion for compounds containing the two isotopes. Calculate the ratio of effusion rates for 238-UF6 and 235-UF... Saturday, August 10, 2013 at 12:34pm by Anonymous

A parallel-plate capacitor is constructed from two circular metal plates of radius R. The plates are separated by a distance of 1.2 mm. 1. What radius must the plates have if the capacitance of this capacitor is 1.1 uF? 2. If the separation between the plates is increased, ... Sunday, February 20, 2011 at 3:38pm by Jon

a series RLC circuit has a peak current of 1 A with a frequency of 54 kHz. if the resistance of the circuit is 51 kΩ, the capacitance of the circuit is 19 uF and the inductance of the circuit is 25 uF, determine the average power of the circuit. Monday, March 12, 2012 at 12:15pm by fatih

Capacitor A has a capacitance CA = 29.4 uF and is initially charged to a voltage of V1 = 10.70 V. Capacitor B is initially uncharged. When the switches are closed, connecting the two capacitors, the voltage on capacitor A drops to V2 = 6.40 V. What is the capacitance CB (in uF... Wednesday, November 23, 2011 at 10:53pm by Jp

Series: Ct = 4.98*8.98/(4.98+8.98) = 3.20 uF. Parallel: Ct = 4.98 + 8.98 = 13.96 uF. Thursday, February 6, 2014 at 7:27pm by Henry

a. C = C1*C2/(C1+C2) = 30 uF: 33.6*C2/(33.6+C2) = 30; 33.6C2 = 1008 + 30C2; 3.6C2 = 1008; C2 = 280 uF in series. b. C1 + C2 = 30 uF: 29.3 + C2 = 30; C2 = 30 - 29.3 = 0.7 uF in parallel. Tuesday, October 15, 2013 at 11:33pm by Henry

Q could not change, no flow of current, but the initial capacitance Ci may change. Ui = 0.5 Q^2 / Ci. So what happens to C? K = C/Ci, so now C = K Ci. Uf = 0.5 Q^2 / (K Ci), so Uf/Ui = [0.5 Q^2 / (K Ci)] / [0.5 Q^2 / Ci] = 1/K, or B) Sunday, February 3, 2013 at 6:38pm by Damon

sorry I copy and pasted and it came out weird. Part a- take the area under the curve and multiply it by (-1) because W = -delta U (area under the curve). part b- use Kf + Uf = Ki + Ui. we know that Ui = 0 and Uf = 2J (just by looking at the graph) they gave you your final velocity ...
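Several of the threads above reduce series/parallel capacitor networks. A small sketch of that bookkeeping (helper names are mine; the values come from the question about a 4 uF capacitor in series with a parallel 2 uF and 1.5 uF pair):

```python
def series(*caps):
    # Capacitors in series: reciprocals add.
    return 1.0 / sum(1.0 / c for c in caps)

def parallel(*caps):
    # Capacitors in parallel: values add.
    return sum(caps)

# 4 uF in series with a parallel pair (2 uF and 1.5 uF), as in the thread above.
ceq = series(4.0, parallel(2.0, 1.5))
print(round(ceq, 2))  # 1.87 uF, matching the poster's rounded 1.9 uF
```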
Wednesday, December 2, 2009 at 10:52pm by Gia Ct = C1*C2/(C1+C2) = 8.3*2.9/(8.3+2.9) = 2.15 uF. Qt = Ct*E = 2.15 * 24 = 51.6 uC = Q1 = Q2. Energy = 0.5*Ct*E^2 = 0.5*2.15*24^2 = 619.2 uJ. Parallel Combination: Ct = C1 + C2 = 8.3 + 2.9 = 11.2 uF. Energy = 0.5*Ct*E^2 = 619.2 uJ; 0.5*11.2*E^2 = 619.2; 5.6E^2 = 619.2; E^2 = 110.6; E... Wednesday, October 16, 2013 at 2:01pm by Henry Physics Classical Mechanics T = (m*g)/(1+(m*b^2)/(1/2*m*R^2)); ωf = sqrt((2*l)/(1/2*m*R^2)*(m*g)/(1+(m*b^2)/(1/2*m*R^2))); τr = m*g+(2*m*b)/pi*((2*l)/(1/2*m*R^2)*(m*g)/(1+(m*b^2)/(1/2*m*R^2))) Thursday, November 14, 2013 at 7:05am by Greco 1. RC Low-pass Filter: Fc = 4000 Hz. R = 2000 Ohms (selected). C = 1/(2pi*F*Xc) = 1/(6.28*4000*2000) = 2*10^-8 Farads = 0.02 uF. The output is measured across C. 2. RC High-Pass Filter: R = 4700 Ohms (selected). C = 1/(6.28*2000*4700) = 1.69*10^-8 Farads = 0.0169 uF. The output is... Monday, April 9, 2012 at 11:05pm by Henry To plot the potential energy, simply plug your givens into the equation U = (1/2)kx^2. You should get a parabola that goes to 0 in the center. To determine the turning points of the block, use the law of conservation of energy. You know that Ui+Ki = Uf+Kf since there is no ... Sunday, October 7, 2012 at 11:23am by Erica Electrical Engineering At what frequency is a 0.0001 uF capacitor operating if its reactance is 45 k ohm? Sunday, April 4, 2010 at 1:34am by Jay How much energy is stored in the 120 uF capacitor of a defibrillator if the potential across the plates is 4 x 10^3 V?
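The series/parallel energy question recurring above (8.3 uF and 2.9 uF across 24 V) can be checked numerically with the standard equivalence formulas. A minimal sketch — note the energy comes out in microjoules because the capacitances are entered in uF:

```python
def series(*caps):
    """Equivalent capacitance of capacitors in series."""
    return 1.0 / sum(1.0 / c for c in caps)

def parallel(*caps):
    """Equivalent capacitance of capacitors in parallel."""
    return sum(caps)

C1, C2, V = 8.3, 2.9, 24.0           # uF, uF, volts
Cs = series(C1, C2)                  # about 2.15 uF
energy = 0.5 * Cs * V**2             # about 619 uJ (uJ, not J: C is in uF)

# Voltage needed for the parallel combination to store the same energy:
Cp = parallel(C1, C2)                # 11.2 uF
V_req = (2 * energy / Cp) ** 0.5     # about 10.5 V
```

The required parallel-combination voltage comes out near 10.5 V, which is where the worked answer's truncated E^2 = 110.6 step leads.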
Tuesday, December 6, 2011 at 6:19pm by Zeno for an RLC circuit with a resistance of 11 k*ohm, a capacitance of 2 uF, and an inductance of 24 H, what frequency is needed to minimize the impedance? Monday, March 12, 2012 at 11:46am by fatih sequence algebra find the nth term of the sequence 3/2 6/4 9/5 12/6 15/7 Tuesday, November 30, 2010 at 5:31am by dunstan Physics, Thinking check please The combined resistance of the circuit is correct. C = time constant/R = 4.8*10^-8 F = 0.048 uF. Capacitances in Farads tend to be small. Monday, February 16, 2009 at 8:33am by drwls A 4.0-uF capacitor is charged to 600 volts. how much energy is stored in the electric field of the capacitor? Sunday, March 9, 2014 at 1:16pm by Cameron A 1.5 uF capacitor is connected to a 9.0 V battery. Use PE=1/2C(change in V)^2 to find the energy stored in the capacitor. Sunday, June 5, 2011 at 11:01pm by Michael a series RLC circuit is composed of a 6.00-mH inductor and a 2.50-uF capacitor. what is the resonant frequency of this circuit? Monday, December 10, 2012 at 4:24pm by Anonymous A 6 uF capacitor is initially charged to 100V and then connected across a 600 ohm resistor. a) what is the initial charge on the capacitor? Sunday, March 3, 2013 at 9:53pm by Owen an LC circuit with a 3 uF capacitor and a 3 H inductor has a current I(t)=10*sin(2t) supplied to it. after 2 s, how much charge is stored on a plate of the capacitor? Monday, March 12, 2012 at 11:43am by fatih I cannot find a way, either. The most capacitance you can achieve is by arranging all three capacitors in parallel, which results in only 0.09 uF capacitance.
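For the two resonance questions above (the 24 H / 2 uF RLC circuit, and the 6.00 mH / 2.50 uF pair), a series RLC circuit's impedance is minimized at the resonant frequency f = 1/(2π√(LC)). A quick check:

```python
import math

def resonant_frequency(L, C):
    """Frequency (Hz) at which a series RLC circuit's impedance is minimal."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f1 = resonant_frequency(24.0, 2e-6)    # 24 H with 2 uF -> about 23 Hz
f2 = resonant_frequency(6e-3, 2.5e-6)  # 6.00 mH with 2.50 uF -> about 1.30 kHz
```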
Thursday, October 11, 2012 at 7:04pm by drwls A fully charged 98 uF capacitor is discharged through a 125 ohm resistor. How long will it take for the capacitor to lose 75% of its initial energy Wednesday, November 23, 2011 at 10:45pm by JM Find the Time Constant for a 50 k resistor in series with a 60 uF capacitor connected to a DC circuit. Also how long will it take for this capacitor to become charged? Monday, June 10, 2013 at 10:17am by John an RLC circuit has a resistance of 4 kOhm, a capacitance of 33 uF, and an inductance of 23 H. if the frequency of the alternating current is 2/pi kHz, what is the phase shift between the current and the voltage. Saturday, March 10, 2012 at 6:27pm by fatih a 2 uF capacitor is connected in series with a 1.2 M ohm resistor and a 5 V battery for a long time. What is the current in the resistor 1 s after disconnecting the battery? Wednesday, November 23, 2011 at 7:20pm by JM Three capacitors with C1=C2=C3= 10.00 uF are connected to a battery with voltage V= 194 V. What is the voltage across capacitor C1 Wednesday, November 23, 2011 at 11:50pm by JL In a trapezoid EFGH, side EF is parallel to side HG. The measure of ∠E is 90° and the measure of ∠F is 85°. What is the measure of ∠G? Thursday, February 2, 2012 at 10:09am by morgan a series RLC circuit has a peak current of 4 A with a frequency of 23 kHz. if the resistance of the circuit is 60 kohm, the capacitance of the circuit is 16 uF, and the inductance of the circuit is 24 uH, determine the average power of the circuit. Monday, March 12, 2012 at 1:35pm by fatih a series RLC circuit has a peak current of 4 A with a frequency of 23 kHz. if the resistance of the circuit is 60 kohm, the capacitance of the circuit is 16 uF, and the inductance of the circuit is 24 uH, determine the average power of the circuit.
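The discharge question above (a fully charged 98 uF capacitor through 125 ohms) follows from the RC time constant: the stored energy decays as e^(−2t/RC), so losing 75% of the energy takes t = (RC/2)·ln 4. A short check, which also covers the 50 k / 60 uF time-constant question:

```python
import math

R, C = 125.0, 98e-6              # ohms, farads
tau = R * C                      # time constant, about 12.25 ms

# U(t) = U0 * exp(-2t/tau); solve exp(-2t/tau) = 0.25 for t:
t_75 = 0.5 * tau * math.log(4)   # about 8.49 ms to lose 75% of the energy

tau2 = 50e3 * 60e-6              # 50 k ohm with 60 uF: tau2 = 3 s
# (a capacitor is conventionally "fully" charged after roughly 5 time constants)
```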
Tuesday, March 13, 2012 at 12:33pm by Fatih a series RLC circuit has a peak current of 4 A with a frequency of 23 kHz. if the resistance of the circuit is 60 kohm, the capacitance of the circuit is 16 uF, and the inductance of the circuit is 24 uH, determine the average power of the circuit. Include Steps Monday, March 12, 2012 at 4:29pm by fatih Water flows through a 4.0-cm-diameter horizontal pipe at a speed of 1.3 m/s. The pipe then narrows down to a diameter of 2.0 cm. Ignoring viscosity, what is the pressure difference between the wide and narrow sections of the pipe Thursday, November 19, 2009 at 8:15pm by john Math and science If a circuit has a 3.9 k ohm resistor and a 5 uF capacitor, find the current flow in the circuit at 0.005 seconds, if the maximum current flow in the circuit is 1.5 mA. Monday, November 1, 2010 at 2:44pm by Mario problem solving If a circuit has a 3.9 k ohm resistor and a 5 uF capacitor, find the current flow in the circuit at 0.005 seconds, if the maximum current flow in the circuit is 1.5 mA. Monday, November 1, 2010 at 4:13pm by milton at 3.68100017 kHz the reactance in a circuit of a 9 uF capacitor and an inductor are equal in magnitude. what is the value of the inductor? a. .000020 H b. .000020 mH c. .000021 uH d. .000021 mH Saturday, March 10, 2012 at 6:16pm by fatih Physics Help! A 220 uF capacitor with a 10% tolerance rating is charged.
The voltage across the capacitor is measured to be 110 +/- 5.5 volt. Calculate the electrical charge stored in the capacitor. Estimate the error in the charge by propagating the uncertainty in the capacitance and the ... Friday, January 27, 2012 at 10:08pm by Mary A 450 uF capacitor is charged to 295 V. Then a wire is connected between the plates. How many joules of thermal energy are produced as the capacitor discharges if all of the energy that was stored goes into heating the wire? I am totally confused I don't even know where to ... Monday, February 4, 2008 at 5:56pm by J Kf+Uf = Ki+Ui; .5*m*v^2 + 0 = .5*m*(vi)^2 + m*9.8*1.3 m. m = mass, v = velocity; you want to solve for vi, your initial velocity, for you 14 m/s. My values were a .16 kg ball, 13 m ceiling, 1.7 m off the ground; answer one for me was 14.88 m/s, answer 2 was 15.96 m/s. Sunday, March 27, 2011 at 4:15pm by john Rectangle ABCD is similar to rectangle WXYZ. If the area of rectangle ABCD is 90 square inches, what is the area of rectangle WXYZ? Thursday, January 20, 2011 at 5:31pm by nat Physic II Suppose a capacitor of C=2.0 uF with no excess charge on it is suddenly connected in series with a resistor of R=447 ohms, and an ideal battery of voltage V=25 V. How long will the capacitor take to gain 80% of the charge it will eventually have when current stops flowing ... Thursday, March 13, 2014 at 11:14am by Yolune
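The 3.9 k ohm / 5 uF current question posted above (find the current at 0.005 s given a maximum of 1.5 mA) uses the standard RC decay law i = i_max·e^(−t/RC); note that t/RC is dimensionless. A quick check:

```python
import math

R, C = 3.9e3, 5e-6               # ohms, farads
i_max = 1.5e-3                   # amperes
t = 0.005                        # seconds

tau = R * C                      # 19.5 ms
i = i_max * math.exp(-t / tau)   # about 1.16 mA at t = 5 ms
```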
Standing at the edge of a cliff 240 m tall. You throw a ball into the air, one directly upward at 5 m/s and another downward at -5 m/s. Use conservation of energy to show that they will have the same speed at the bottom. I know I have to do it twice.. And it should seem simple. I ... Saturday, February 8, 2014 at 9:52pm by Michelle Energy = 0.5C*V^2 = 473 J. C = Q/V; substitute Q/V for C: 0.5(Q/V)*V^2 = 473; 0.5QV = 473; V = 473/(0.5Q) = 946/Q = 946/0.0791 = 11960 volts. C = Q/V = 0.0791/11960 = 6.6*10^-6 F = 6.6 uF. Thursday, February 2, 2012 at 2:25pm by Henry What effect will be produced on a capacitor if the separation between the plates is increased? a. it will increase the charge b. it will decrease the charge c. it will increase the capacitance d. it will decrease the capacitance D A 0.25 uF capacitor is connected to a 9.0 V ... Saturday, March 14, 2009 at 9:13pm by physics Math and science RC = 3.9k * 5 uF = 19.5 ms = 0.0195 s. t/RC = 5 ms / 19.5 ms = 0.256. i = i_max / e^(t/RC), i = 1.5 mA / e^0.256 = 1.5 / 1.292 = 1.16 mA @ 5 ms (0.005 s). s = Seconds. ms = Monday, November 1, 2010 at 2:44pm by Henry F = Fr*sqrt(Xl/Xc) = 1470*sqrt(29/5) = 3540.23 Hz. a. WL = 29; L = 29/W = 29/22244 = 1.3*10^-3 H = 1.3 mH. b. 1/WC = 5; WC = 1/5 = 0.2; C = 0.2/22244 = 9*10^-6 F = 9 uF. NOTES 1. Sqrt = Square root 2. Fr = Resonant Freq. 3. W = 2PI*F. Monday, April 8, 2013 at 2:54pm by Henry Assuming that the area and plate spacing is the same for the two capacitors, the capacitance of Y is 50 uF.
In series, both will store the same charge Q but the Y capacitor will have 1/5 the voltage drop of X, since V = Q/C and C is five times greater for capacitor Y. Vx + Vy... Saturday, March 2, 2013 at 5:37am by drwls The voltage on A dropped from 10.70 to 6.40 V, or by 4.30 Volts. Capacitor A lost C(A)*deltaV = 1.264*10^-4 Coulombs of charge. That charge flowed to capacitor B, leaving it with 1.264*10^-4 Coulombs and a voltage of 6.40 V. C(B) = Q/V = 1.264*10^-4/6.4 = 19.8 uF Wednesday, November 23, 2011 at 10:53pm by drwls Doogleberry Physics In an L-R-C series circuit, the resistance is 380 Ohms, the inductance is 0.400 H, and the capacitance is 2.00×10^-2 uF. What is the resonant frequency of the circuit? (omega naught) The capacitor can withstand a peak voltage of 570 volts. If the voltage source operates ... Sunday, September 23, 2012 at 4:46am by Doogleberry R = 6.44*10^6 Ohms. C = 1.00 uF = 1*10^-6 Farads. n = t/(R*C) = 6/(6.44*10^6 * 1*10^-6) = 0.932. Vc = E-Vr. Vr = E/e^n = 2/e^0.932 = 0.788 Volts. I = V/R = 0.788/(6.44*10^6) = 1.22*10^-7 Amps. Energy = E*I*T = (2*1.22*10^-7)*6 = 1.47*10^-6 Joules. Saturday, November 10, 2012 at 10:53pm by Henry something tells me you divided instead of multiplying somewhere. In problems like this, be sure to keep track of the units, so you cancel out the ones you don't want. As for your answer of .00742 lbs, even a cursory sanity check would give a clue that it's bogus. Gold is heavy... Wednesday, September 12, 2012 at 1:00pm by Steve F = 13 kHz. L = 0.050 H. C = 5.0 uF. A = 0.2 rad * (360 Deg/6.28 rad) = 11.46 Deg. phase diff. Xl = 6.28*13000Hz*0.05 = 4082 Ohms. Xc = 1 / (6.28*13000Hz*5*10^-6) = 2.45 Ohms. tan(11.46) = (Xl-Xc)/R = 4080 / R. R = 4080 / tan(11.46) = 20,124 Ohm. Saturday, March 10, 2012 at 6:28pm by Henry F = 2 / 3.14 = 0.6369 kHz = 637 Hz. R = 4,000 Ohms. L = 23 H. C = 33 uF. Xl = 6.28*636.9*23 = 91994 Ohms. Xc = 1 / (6.28*637*33*10^-6) = 7.58 Ohms. tanA = (Xl-Xc) / R = 91986 / 4000 = 23. A = 87.5 Deg., Inductive.
Therefore, the current lags the voltage by 87.5 Deg. Saturday, March 10, 2012 at 6:27pm by Henry (a) How much charge does a battery have to supply to a 5.0 uF capacitor to create a potential difference of 1.5 V across its plates? How much energy is stored in the capacitor in this case? (b) How much charge would the battery have to supply to store 1.0 J of energy in the ... Monday, February 4, 2008 at 5:59pm by J a. L = 33.0 mH = 0.033 H. Xl = 2.2 kOhms = 2200 Ohms. Xl = 2pi*F*L = 2200 Ohms; F = 2200/(2pi*L) = 2200/(6.28*0.033) = 10,616 Hz. b. C = 1/(2pi*F*Xc) = 1/(6.28*10616*2200) = 6.818*10^-9 F = 0.00682 uF. c. Xl = 3 * 2200 = 6600 Ohms. d. Xc = (1/3)*2200 = 733.3 Ohms. Wednesday, March 21, 2012 at 12:08am by Henry Mr. Oakley, my wood shop teacher is kind of strange. He has a 12 inch strip of wood with only four marks in it that he uses as a ruler. Yesterday, I asked him about it. "What good is a ruler you use if it only has 4 marks on it? There are some lengths you can't measure!" "... Monday, November 15, 2010 at 6:39pm by Issa a. What electric potential difference exists across a 5.2 µF capacitor that has a charge of 2.1 x 10^-3 C? how do i use uF and C in a formula to find the answer?? b. An oil drop is negatively charged and weighs 8.5 x 10^-15 N. The drop is suspended in an electric field intensity of 5.3 ... Tuesday, February 5, 2008 at 5:18pm by kailey Blocks of mass m1 and m2 are connected by a massless string that passes over a frictionless pulley. Mass m1 slides on a frictionless surface. Mass m2 is released while the blocks are at rest. The pulley is a solid disk with a mass mp and a radius R.
Use conservation of energy ... Monday, November 24, 2008 at 9:15pm by Elizabeth Beginner Spanish I need help please Complete the sentences with the correct forms of conseguir, ir, pensar, querer, or suponer. Nosotros _______ de excursión a las montañas, Toño. ¡Yo____ que vas a dormir todo el fin de semana! No, Carmen. No voy a dormir. Voy a ver videos. Siempre____ ... Monday, October 28, 2013 at 9:30pm by Kelly A block of mass m1 = 22.0 kg is connected to a block of mass m2 = 40.0 kg by a massless string that passes over a light, frictionless pulley. The 40.0-kg block is connected to a spring that has negligible mass and a force constant of k = 260 N/m as shown in the figure below. ... Tuesday, March 12, 2013 at 1:53am by Anonymous a. Ct = C1*C2/(C1+C2) = 7*11/(7+11) = 4.28 uF = total capacitance. b. Qt = Ct*V = 4.28 * 9 = 38.5 uC = Q1 = Q2. Q1 = C1*V1 = 38.5; 7*V1 = 38.5; V1 = 5.5 Volts. C2*V2 = Qt; V2 = 38.5/11 = 3.5 Volts. c. Q1 = Q2 = Qt = 38.5 uC. Monday, October 14, 2013 at 1:47pm by Henry english homework As far as making a dialog funny, you have to focus on a few things: --Very clear characters. You have to make sure you know what the characters are like. --Establish a conflict that usually comes from how the people might differently handle the situation. --Be sure to make the... Monday, April 25, 2011 at 8:19am by MattsRiceBowl Two capacitors of 3 uF, one charged to 100 V and the other to 200 V. Determine the voltage between the plates if the capacitors are connected in parallel. Saturday, May 28, 2011 at 1:11pm by lisa plz any one i am abit confused can you answer these questions thx 05.
The space surrounding a charge within which the influence of its charge extends is known as. (a) Electric field (b) Magnetic field (c) Line of force (d) Electric intensity 06. A region around a stationary ... Tuesday, February 4, 2014 at 5:44am by Sania Someone please correct this. Ma soeur et je toujours ai se ressemblent comme deux gouttes d'eau. Tout le monde nous avons rencontré avons pensé nous étions des jumeaux. Elle est une année plus vieille que me. Nous avions des cheveux sombres et yeux grands et sombres. Cependant... Wednesday, March 28, 2007 at 10:01pm by Anonymous plz any one these are some mcqs i am preparing for a test i m not sure with the answers of these mcqs this is not a homework not an online course exam so feel free to answer these thankx MCQS 01. Charges of equal magnitude are separated by some distance. If the charges are in ... Thursday, January 30, 2014 at 2:47am by Sania
Thursday, January 30, 2014 at 2:48am by Sania
Patent US7092101 - Methods and systems for static multimode multiplex spectroscopy

This invention was made with Government support under Contract No. N01-AA-23103 awarded by NIH. The Government has certain rights in the invention.

The present invention relates to methods and systems for static multimode multiplex spectroscopy. More particularly, the present invention relates to methods and systems for static multimode multiplex spectroscopy using different multi-peak filter functions to filter spectral energy emanating from different points of a diffuse source and for combining the multi-peak measurements to estimate a property of the source.

Optical spectroscopy may be used to detect and quantify characteristics of a source, such as the average spectrum of the source or the density of chemical and biological species of the source. As used herein, the term “source” refers to the object being spectrally analyzed. In determining chemical composition of a source, the spectral signature of the target species may be encoded on the optical field by mechanisms including absorption, emission, inelastic scattering, fluorescence, and wave mixing.

Many sources are referred to as large etendue because they reflect or scatter light in a large solid angle over a large area. The etendue of a source is a measure of both the spatial extent of the source and the solid angle into which it radiates. Large etendue sources are also referred to as incoherent or diffuse sources because low spatial coherence is implicit in a large solid angle radiated over a large area. An example of a large etendue source is laser-illuminated biological tissue. One problem with measuring light radiated from large etendue sources using conventional spectrometers is that conventional spectrometers use filters that decrease optical throughput.
For example, a conventional multiplex spectrometer measures light emanating from a source using elemental detectors with a narrowband color filter placed over each detector. Using narrowband color filters over each detector reduces the optical throughput of the conventional spectrometer. As a result, conventional multiplex spectrometers that utilize narrowband filters are incapable of accurately determining the optical properties of diffuse sources.

In another type of conventional spectrometer, multimodal measurements are taken in series and the measurements are combined to estimate the optical properties of a diffuse source. The spectrometers that perform such measurements are referred to as scanning spectrometers. Using scanning spectrometers to measure diffuse sources is disadvantageous because such spectrometers require microelectromechanical and/or piezoelectric components in order to successively apply different spectral filters to the detectors, and such components are expensive and difficult to fabricate. A further disadvantage of scanning spectrometers is that taking multiple measurements in series increases measurement time. Increasing measurement time may be undesirable for some types of measurements, such as in vivo tissue measurements.

Yet another disadvantage of scanning spectrometers is lack of intelligent spectral filters. Conventional scanning spectrometers typically capture the full spectrum of electromagnetic radiation. Capturing the full spectrum is inefficient because some measurements contain radiation bands that are not of interest to analyzing a particular source. Accordingly, in light of the difficulties associated with conventional spectroscopy, there exists a need for improved methods and systems for multimode multiplex spectroscopy capable of accurately and efficiently measuring characteristics of large etendue sources.

The present invention includes methods and systems for static multimode multiplex spectroscopy.
According to one aspect of the invention, a method for static multimode multiplex spectroscopy includes receiving spectral energy emanating from a plurality of different points of a diffuse source and simultaneously applying different multi-peak filter functions to the spectral energy emanating from the different points to produce a multichannel spectral measurement for each point. The multichannel spectral measurements for the different points are combined to estimate a property, such as the average spectrum or the chemical composition, of the diffuse source.

One capability of the invention is capturing spectral signatures from sources that radiate into highly multimodal fields. By measuring spectral projections of different modes and different points in the field and using the projections to compute weighted measures of average spectral properties of the modes, the spectral signatures of multimodal sources can be captured. Examples of aggregate spectral properties that may be determined include the mean spectrum of the modes or the mean value of portions for filters on the modal spectra.

Unlike conventional scanning spectrometers, a multimode multiplex spectrometer of the present invention is static in that it simultaneously measures multimodal projections at the same instance in time. Measuring multimodal projections at the same instance in time reduces the need for moving parts and increases the speed at which spectral projections can be measured.

A multimodal multiplex spectrometer (MMS) according to the present invention analyzes a diffuse incoherent or partially coherent optical source. Such sources are “multimodal” because they are described by multiple modes in a coherent mode decomposition [1]. The MMS uses:

1. a spatially distributed array of optical detectors. Each detector may have a distinct spectral response or may have associated with it a distinct spectral filter.

2. targeted spectral response design.
The spectral responses of the joint array of detectors are designed to enable efficient and accurate spectral density estimation or chemical species density estimation. In some cases, as in quantum dot detectors or photonic crystal structures, the distributed spectral responses may be randomly achieved. In other cases, as in thin film filters or volume holographic filters, the spectral responses are actively designed to achieve target analysis.

3. algorithms for target or spectral estimation from detector array data.

An MMS of the present invention may include spectrally diverse multiplex filters or detectors for spectral and target estimation. Spectral diversity means that different measurement values depend on a diversity of spectral channels in the field. For spectrally diverse multiplex filters, different measurement values depend on the amplitude of more than one spectral channel in the field. The filtering function in such systems contains multiple peaks.

Multiplex spectroscopy generally relies on structured transformations, such as the Fourier transform of FT-IR spectroscopy. Using volume holograms, multichannel thin film filters, 3D structured photonic materials or circuits, or nanostructured electronic detectors, MMS systems can be programmed to achieve arbitrary spectral responses in each detector element. Complex “matched” multichannel design is also possible for other filter technologies, for example by structured spatio-temporal sampling of two-beam interferometers.

The present invention may also include estimation of mean spectra or mean chemical densities of diffuse sources by sampling different spectral projections on distributed spatial sampling points. Conventional spectroscopy uses tightly focused beams and spatial filtering to estimate spectral characteristics of only a single mode. In principle, the spectral densities of different modes may vary.
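The "spectrally diverse multiplex filters" idea can be sketched numerically: each detector sees its own multi-peak transmission function, and a measurement is the projection of the source spectrum onto that function. Everything below (the Gaussian-peak filter model, the channel counts, the test spectrum) is an illustrative assumption, not taken from the patent:

```python
import random
import math

def multipeak_filter(n_channels, peaks, rng):
    """One detector's transmission: a sum of Gaussian peaks over spectral channels."""
    h = [0.0] * n_channels
    for _ in range(peaks):
        center = rng.uniform(0, n_channels - 1)
        width = rng.uniform(2.0, 6.0)
        for k in range(n_channels):
            h[k] += math.exp(-((k - center) / width) ** 2)
    return h

rng = random.Random(0)
n_channels, n_detectors = 32, 32
H = [multipeak_filter(n_channels, peaks=4, rng=rng) for _ in range(n_detectors)]

# Each measurement is a projection m_i = sum_k H[i][k] * I[k] (a discretized
# version of the spectral projection integral), here on a made-up spectrum:
I_spec = [1.0 + math.sin(0.3 * k) for k in range(n_channels)]
m = [sum(H[i][k] * I_spec[k] for k in range(n_channels)) for i in range(n_detectors)]
```

Whether such a randomly drawn filter bank is invertible is exactly the design question raised here; in practice one would check the conditioning of the resulting 32x32 matrix H.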
MMS spectrometers are designed such that “multiplex” measurements combine data from both spectral and modal distributions. The measurements are combined to measure average spectral or chemical densities over the modal distribution. However, it is not required to estimate the independent spectra of specific points or modes. MMS systems according to the invention may also use randomly distributed spectral disparity from scattering of 3D structures or from nanostructure detector elements to achieve spatially distributed multiplex signals. MMS systems of the present invention may also use constrained optimization of distributed multiplex detectors to directly estimate source spectra or chemical densities.

Accordingly, it is an object of the invention to provide methods and systems for accurately and efficiently measuring spectral properties of diffuse sources. It is another object of the invention to provide methods and systems for static multimode multiplex spectroscopy that use different multi-peak filter functions on each detector. It is another object of the invention to provide methods and systems for combining measurements obtained by a multimode multiplex spectrometer of the invention to determine a property of a diffuse source.

Some of the objects of the invention having been stated hereinabove, and which are addressed in whole or in part by the present invention, other objects will become evident as the description proceeds when taken in connection with the accompanying drawings as best described hereinbelow. Preferred embodiments of the invention will now be explained with reference to the accompanying drawings of which:

FIG. 1 is a block diagram of a system for static multimode multiplex spectroscopy according to an embodiment of the present invention; FIG. 2 is a flow chart illustrating exemplary steps for determining spectral properties of a diffuse source according to an embodiment of the present invention; FIG.
3 is a schematic diagram illustrating the operation of a two-beam interferometer; FIG. 4 is a schematic diagram illustrating wave geometry in a static two-beam interferometer; FIG. 5 is a schematic diagram illustrating a static two-beam interferometer suitable for performing static multimode multiplex spectrometry according to an embodiment of the present invention; FIG. 6 is a schematic diagram of a sampled interferometer suitable for obtaining multi-peak spectral measurements from a plurality of different points of a diffuse source according to an embodiment of the present invention; FIG. 7A is a perspective view of a plurality of discrete filter elements and discrete detectors for obtaining multi-peak spectral measurements emanating from different points of a diffuse source according to an embodiment of the present invention; FIG. 7B is a side view of an elemental detector and a filter element illustrated in FIG. 7A; FIG. 8 is a spectral band diagram for a thin film filter suitable for obtaining multi-peak spectral measurements from different points of a diffuse source according to an embodiment of the present invention; FIG. 9 is a graph of a filter response for a single filter suitable for obtaining a multi-peak spectral measurement from a diffuse source according to an embodiment of the present invention; FIG. 10 is a graph of multiple uncorrelated filter responses suitable for obtaining different multi-peak spectral measurements from a plurality of points on a diffuse source according to an embodiment of the present invention; FIG. 11A is a graph of the Raman spectrum of ethanol; FIGS. 11B and 11C are graphs of transmittance functions for different thin film features for detecting concentration of ethanol in a diffuse source according to an embodiment of the present invention; FIG. 12 is a schematic diagram of a 3-D volume hologram suitable for obtaining multi-peak spectral measurements from different points on a diffuse source according to an embodiment of the present invention; FIG.
13 is a schematic diagram of an array of dielectric microspheres and detectors suitable for obtaining multi-peak spectral measurements according to an embodiment of the present invention; FIG. 13A is a graph illustrating exemplary multi-peak filter functions of a photonic crystal suitable for use with embodiments of the present invention; FIG. 14A is a schematic diagram of a multi-axis filter array suitable for obtaining multi-peak spectral measurements from different points on a diffuse source according to an embodiment of the present invention; and FIG. 14B is a schematic diagram of a stacked filter/detector array suitable for obtaining multi-peak spectral measurements according to an embodiment of the present invention.

The present invention includes methods and systems for simultaneously obtaining multi-peak spectral measurements from a diffuse source and for combining the multi-peak projections to determine a property of the source. FIG. 1 is a block diagram of a system for simultaneously obtaining different multi-peak spectral measurements from different points on a diffuse source and for combining the measurements to estimate a property of a diffuse source according to an embodiment of the present invention. Referring to FIG. 1, an exemplary system 100 includes a static filter or interferometer array 102, a detector array 104, illumination sources 105, and a multi-peak spectral measurements combination module 106 implemented on a computer 108. Static filter or interferometer array 102 includes a plurality of filters or interferometers for simultaneously obtaining different multi-peak spectral measurements emanating from a diffuse source 110. Detector array 104 converts the optical signals for each filtered multi-peak spectral measurement into electrical signals and inputs the signals to computer 108. Illumination sources 105 illuminate diffuse source 110 for spectral analysis.
Multi-peak spectral measurements combination module 106 combines the measurements to estimate a property of diffuse source 110. FIG. 2 is a flow chart illustrating the overall steps performed by the system 100 illustrated in FIG. 1 in measuring optical properties of diffuse source 110 using simultaneously-obtained multi-peak spectral measurements according to an embodiment of the present invention. Referring to FIG. 2, in step 200, the system 100 simultaneously receives spectral energy emanating from a plurality of different points of a diffuse source. In step 202, the system applies different multi-peak filter functions to the spectral energy emanating from each point. Examples of filter functions suitable for use in the present invention will be described in detail below. In step 204, the multichannel spectral measurements to which the filter functions were applied are combined to estimate a property of the diffuse source. For example, the multichannel spectral measurements may be combined to estimate the average spectrum of the source or the chemical composition of the source. Exemplary algorithms for combining the multichannel spectral measurements will be described in detail below.

Local Spectral Model

According to one embodiment of the invention, spectral content may be measured and analyzed separately for each spatial point of a diffuse source. In order to obtain a spectral measurement for each point in a diffuse source, an array of elemental detectors and filters may be used. The array of elemental detectors and filters may be positioned close to the source so that spectral energy emanating from each point can be distinguished. The spectral energy emanating from each point can be separately analyzed and used to compute a property of the diffuse source, such as the average spectrum.
The procedure for separately analyzing spectral energy emanating from each point of a diffuse source is referred to herein as the local model because local spectral measurements are obtained for each point. The system illustrated in FIG. 1 can be referred to as a static multimode multiplex spectrometer because it simultaneously measures different multi-peak spectral projections emanating from a diffuse source. The system illustrated in FIG. 1 may be used to measure spectral content of multiple spatial mode systems in which the spectral content of the modes is correlated. However, the system illustrated in FIG. 1 is not concerned with independently characterizing the spectral content of different modes. Rather, the system makes different multiplex measurements on different modes and uses this multiplex data to estimate global features of a diffuse source, such as the average mode spectrum or the density of specific target chemicals. Equation (1) shown below illustrates exemplary measurements made by each detector in detector array 104 illustrated in FIG. 1:

m(r) = ∫ I(ν,r) h(ν,r) dν    (1)

In Equation (1), I(ν,r) is the spectral intensity distribution as a function of frequency ν and position r, and h(ν,r) is a spatially localized spectral response or filter function. The measurement at a single point r of the form shown in Equation (1) is referred to herein as a spectral projection. As will be described in more detail below, each filter in filter array 102 may have a different multi-peak filter function h(ν,r). The filter functions h(ν,r) are preferably selected so that the filter functions are invertible, tailored to the expected properties of the source being analyzed, and different from each other. Exemplary filter functions suitable for use with embodiments of the invention will be described in detail below. Definitions for variables used in the equations herein are listed in the Appendix at the end of this specification.
Measurements of local projections of the power spectrum are much easier to make than measurements of the full power spectrum at each point or in each mode. Nevertheless, these reduced measurements are useful in estimating source parameters, such as the mean spatially integrated power spectrum. The mean spectrum may be defined as

S̄(ν) = (1/A) ∫_A I(ν,r) dr    (2)

where the integral is over a source area or volume A. As an example of how Equation (1) can be used to estimate the mean spectrum S̄(ν), it can be assumed that there exist contours in R² over which h(ν,r) is constant. Integrating along these contours, the following equation is obtained:

m(l) = ∫ S̄(ν) h(ν,l) dν    (3)

where l is a parameter along curves orthogonal to the contours. Assuming further that there exists h⁻¹(ν,l) such that ∫ h⁻¹(ν′,l) h(ν,l) dl = δ(ν′ − ν), S̄(ν) can be estimated using the following equation:

S̄(ν) = ∫ h⁻¹(ν,l) m(l) dl    (4)

For example, for a two-beam interferometer or multichannel fiber system, one might choose h(ν,r) = cos(2πανx/c). In this case the contours of integration are lines along the y axis, and the following measurements are obtained:

m(x) = ∫ [∫ S(ν,r) dy] cos(2πανx/c) dν = ∫ S̄(ν) cos(2πανx/c) dν    (5)

Equation (5) is the easily inverted Fourier cosine transformation of S̄(ν). In practice, this function is sampled at discrete values of x, which yields a discrete cosine transformation of the spectral density. More generally, both the target spectrum and the measured values may be considered discretely.
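As an illustration of this discrete picture, the sampled cosine-transform measurement described above and its least-squares inversion can be sketched as follows. All values (grid sizes, the scale factor α, and the synthetic two-line spectrum) are illustrative assumptions, not parameters from the specification:

```python
import numpy as np

c = 3.0e8                                 # speed of light (m/s)
alpha = 1.0                               # interferometer scale factor (assumed)
nu = np.linspace(4.0e14, 6.0e14, 64)      # optical frequency grid (Hz)
x = np.linspace(0.0, 1.0e-4, 512)         # detector sample positions (m)

# Forward operator: H[k, j] = cos(2*pi*alpha*nu_j*x_k / c), the discretely
# sampled cosine kernel of the measurement.
H = np.cos(2.0 * np.pi * alpha * np.outer(x, nu) / c)

s_true = np.zeros_like(nu)                # synthetic two-line mean spectrum
s_true[20] = 1.0
s_true[45] = 0.5

m = H @ s_true                            # simulated interferogram samples

# Invert the discrete cosine transformation by least squares.
s_est, *_ = np.linalg.lstsq(H, m, rcond=None)
```

With 512 detector samples against 64 spectral bins, the cosine matrix here is well conditioned and the noiseless system inverts exactly; a real design must also contend with detector noise and finite pixel areas.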
In this case the transformation between the target spectrum and the measurements takes the form of a linear relationship between a vector describing the target spectrum, s, and a vector describing the measurement state, m, of the form m = H s. Formally, one may estimate the target spectrum as s_e = H⁻¹ m, although nonlinear estimation algorithms may improve the reconstruction fidelity. In cases where the rank of H is less than the number of components in s, nonlinear techniques in combination with target constraints, such as requiring that the components of the vector be nonnegative or that the target source consist of a discrete number of active channels, may be needed for target estimation. Thus, multi-peak spectral measurements combination software 106 may have access to the inverse filter functions h⁻¹(ν,r) for each filter function h(ν,r) implemented by array 102. Software 106 may calculate the average spectrum for each filter function by multiplying the measurement for that filter function by the inverse of the filter function and integrating over the area of the source using Equation (4). The average spectra for the different sections of the source can then be added to determine the average spectrum of the source. In considering the local model for multimodal analysis, it is not necessary to assume that I(ν,r) is confined to a plane or even a manifold. It can be assumed that the present invention samples spectral projections at a sufficiently large set of points such that inversion according to Equation (4) is well conditioned, without requiring that the support of the sample points be compact.

Modal Spectral Model

Equation (1) assumes that a spectrally filtered version of the field can be captured at a point.
Obtaining spectrally filtered versions of a field at each point of a diffuse source may be accomplished by incorporating nanostructured electronics or non-homogeneous atomic systems in static filter array 102. For example, a quantum dot spectrometer [5] that achieves a spatially localized spectral projection will be described in detail below. In an alternate embodiment, spectral projections may be captured by focusing each spatial point of a source into a fiber and using a static interferometer on each fiber output to capture a projection. However, this approach would result in an unwieldy and complex instrument that would be difficult to manufacture. Exemplary implementations for capturing spectral projections at individual points of a diffuse source will be discussed in detail below in the section labeled "Implementations." In yet another alternate embodiment of the invention, rather than measuring spectral projections at each individual point of a diffuse source, it may be desirable to measure projections of different modes, apply different multi-peak filter functions to each mode, and combine the measurements from each mode to estimate a spectral property of the diffuse source. Modal measurements can be taken at a location spaced from the source and do not require elemental detectors. Hence, modal-based instruments may be less complex than the elemental-based instruments described above. In most spectrometers, spectrally selective measures of the field are obtained via propagation through an interferometer or filter rather than by local sampling. In these systems multi-peak spectral measurements combination module 106 may analyze the projection of the field measured by detector array 104 using a modal theory based on optical coherence functions. In one embodiment, software 106 uses the coherent mode decomposition of the cross-spectral density.
The cross-spectral density is defined as the Fourier transform of the mutual coherence function Γ(r₁,r₂,τ) [1]:

W(r₁,r₂,ν) = ∫ Γ(r₁,r₂,τ) e^(−i2πντ) dτ    (6)

W(r₁,r₂,ν) is Hermitian and positive definite in transformations on functions of r₁ and r₂, by which properties one can show that it can be represented by a coherent mode expansion of the form

W(r₁,r₂,ν) = Σ_n λ_n(ν) φ_n*(r₁,ν) φ_n(r₂,ν)    (7)

where λ_n(ν) is real and positive and where the family of functions φ_n(r,ν) is orthonormal, such that ∫ φ_m*(r,ν) φ_n(r,ν) d²r = δ_mn. As discussed above, the present invention may use MMS to estimate the mean spectrum of an intensity distribution on an input plane, I(ν,r). In terms of the cross-spectral density, this input intensity is I(ν,r) = W(r,r,ν). An MMS is a linear optical system that transforms the coherent modes under the impulse response h(r,r′,ν). After propagation through the system the cross-spectral density is

W̃(r₁,r₂,ν) = Σ_n λ_n(ν) ψ_n*(r₁,ν) ψ_n(r₂,ν)    (8)

where ψ_n(r,ν) = ∫ φ_n(r′,ν) h(r,r′,ν) d²r′ and the functions ψ_n(r,ν) are not necessarily orthogonal [6]. The MMS records measurements of the form

m_i = ∫_{A_i} ∫ W̃(r,r,ν) dν dr
    = ∫_{A_i} ∫ Σ_n λ_n(ν) ψ_n*(r,ν) ψ_n(r,ν) dν dr
    = ∫ Σ_n λ_n(ν) [∫∫ φ_n*(r′,ν) φ_n(r″,ν) H_i(r′,r″,ν) d²r′ d²r″] dν
    = ∫ Σ_n λ_n(ν) H̃_i^n(ν) dν    (9)

where A_i is the surface area of the i-th detector in detector array 104,

H_i(r′,r″,ν) = ∫_{A_i} h*(r,r′,ν) h(r,r″,ν) dr, and

H̃_i^n(ν) = ∫∫ φ_n*(r′,ν) φ_n(r″,ν) H_i(r′,r″,ν) d²r′ d²r″.
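Numerically, the coherent mode expansion of Equation (7) at a single frequency is just the eigen-decomposition of a Hermitian positive-semidefinite matrix. A minimal sketch, assuming a synthetic cross-spectral density built from three random complex fields (all sizes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
npts = 20                                   # spatial sample points r

# Hermitian, positive-semidefinite cross-spectral density W(r1, r2) at one
# frequency, synthesized from three random coherent fields (rank 3).
fields = rng.standard_normal((3, npts)) + 1j * rng.standard_normal((3, npts))
W = fields.conj().T @ fields

# Coherent mode expansion: eigen-decomposition yields real, nonnegative
# eigenvalues lambda_n and orthonormal mode functions phi_n.
lam, phi = np.linalg.eigh(W)

# Reassemble W from its modes: W = sum_n lambda_n phi_n phi_n^H.
W_rebuilt = (phi * lam) @ phi.conj().T
```

Because `W` is Hermitian and positive semidefinite, `eigh` returns real eigenvalues that are nonnegative up to round-off, and the columns of `phi` are orthonormal, mirroring the properties of λ_n(ν) and φ_n(r,ν) above.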
As with the local model, one goal of MMS is to estimate the mean spectrum, which in this case is

S̄(ν) = (1/N) Σ_n λ_n(ν).

The spectral content of the modes is assumed to be highly correlated, so it can be assumed that λ_n(ν) = S̄(ν) − Δλ_n(ν), such that

m_i = ∫ S̄(ν) H̄_i(ν) dν − ∫ Σ_n Δλ_n(ν) H̃_i^n(ν) dν ≈ ∫ S̄(ν) H̄_i(ν) dν    (10)

where H̄_i(ν) = Σ_n H̃_i^n(ν) and the approximation assumes that ⟨∫ Σ_n Δλ_n(ν) H̃_i^n(ν) dν⟩ = 0. The goal of MMS design is to create a sensor such that Equation (10) is well conditioned for inversion. Thus, similar to the local model discussed above, software 106 may estimate the average spectrum of a diffuse source using Equation (4).

Chemical or Biological Analysis Model

An MMS system according to the present invention may also be modeled as a direct measure of chemical or biological species. Let c_i(r) represent the concentration of spectral species i at position r, and suppose that species i generates a spectrum s_i(ν). An MMS system measures spectral projections at diverse positions r. The overall spectrum at r is

S(ν,r) = Σ_i c_i(r) s_i(ν).

Measurements take the form

m(r) = ∫ S(ν,r) h(ν,r) dν = Σ_i c_i(r) ∫ s_i(ν) h(ν,r) dν = Σ_i H_i(r) c_i(r)

where H_i(r) = ∫ s_i(ν) h(ν,r) dν. If the measurements are considered as discrete digital samples integrated over a finite spatial range (i.e., m_j = ∫_{A_j} m(r) dr, where A_j is the area of the j-th sensor) and it is assumed that the concentration distribution as observed by the sensor represents the mean, the transformation between the concentrations and the measurements takes the form

m_j = Σ_i H_ji c̄_i.

In some cases this transformation may be linearly invertible for the mean concentrations c̄_i.
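A minimal numerical sketch of this concentration model follows, under the assumption of synthetic Gaussian line spectra and random filter responses; none of the line positions, filter shapes or concentrations come from the specification:

```python
import numpy as np

rng = np.random.default_rng(1)
nu = np.linspace(0.0, 1.0, 400)     # normalized frequency grid
dnu = nu[1] - nu[0]

def line(center, width=0.01):
    """Gaussian stand-in for a spectral line (illustrative, not real data)."""
    return np.exp(-((nu - center) ** 2) / (2.0 * width ** 2))

# Hypothetical species spectra s_i(nu), each with a few peaks; shape (n_nu, 3).
S = np.stack([line(0.20) + 0.5 * line(0.55),
              line(0.35) + 0.8 * line(0.70),
              line(0.45) + 0.3 * line(0.85)], axis=1)

# Random multi-peak filter responses h_j(nu) standing in for physical filters.
Hfilt = rng.uniform(0.0, 1.0, size=(6, nu.size))

# H_ji = integral of s_i(nu) h_j(nu) dnu, approximated by a Riemann sum.
Hmat = Hfilt @ S * dnu

c_true = np.array([1.0, 0.2, 0.6])  # mean concentrations c_bar_i
m = Hmat @ c_true                   # one measurement per filter channel

# Invert the (overdetermined) linear transformation for the concentrations.
c_est, *_ = np.linalg.lstsq(Hmat, m, rcond=None)
```

Here six filter channels over-determine three species, so the noiseless system inverts exactly; the next paragraph addresses the more common over- or under-constrained cases.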
In most cases, however, the measurements either over constrain or under constrain the concentrations. In these cases, well-known algorithms such as partial least squares (PLS) may be used to estimate one or more target concentrations [7–11]. The spectral projection kernels h(ν,r) should be designed to make estimation of the c̄_i tractable and efficient. The exact filter design arises from the PLS algorithm. One may recursively optimize h(ν,r) in simulation using PLS to achieve maximal fidelity. Design of h(ν,r) may consist of sampling and geometric design in the case of interferometers, but is more likely to occur through hologram, thin film filter or photonic crystal design. The design methodology for these processes is discussed below.

Spectrometers may be subdivided into various classes, such as dispersive or multiplex and scanning or static [12]. A dispersive spectrometer separates color channels onto a detector or detector array for isomorphic detection. A multiplex spectrometer measures linear combinations of spectral and temporal channels. Arrays of multiplex data are inverted to estimate the spectral density. A scanning spectrometer functions by mechanical or electro-optic translation of optical properties, as in a rotating grating or a moving mirror. A static interferometer captures full spectra in a single time step by mapping wavelength or multiplex measurements onto a static sensor array. Static grating spectrometers based on linear detector arrays have been available for some time, while static multiplex spectrometers have emerged over the past decade [13–22]. Spectrometers may be characterized on the basis of many factors, including etendue and acceptance angle, throughput, spectral resolution and resolving power. The etendue is the integral, over the surface of the source, of the differential product of emitting area and solid angle of emission. The etendue may be considered roughly as the input area times the acceptance angle of the spectrometer.
The throughput is the photon efficiency of the instrument. The spectral resolution is the resolution of the reconstructed spectrum. The resolving power is the ratio of the center wavelength of the reconstructed spectrum to the spectral resolution. For grating spectrometers, the spectral resolution of an instrument and the etendue are proportional. Optical fields may be described in terms of spatial and temporal modes. The modes of a system form a complete set of self-consistent solutions to the boundary conditions and wave equations within that system. Spectroscopy measures the spectral content of optical fields by measuring the mode amplitudes as a function of wavelength. In general, spectrometers employ spatial filtering to restrict the number of spatial modes in the system. This restriction is necessary because mechanisms for determining spectral content usually assume that the field is propagating along a common axis through the system. Imaging spectrometers, in contrast, independently measure the spectrum of multiple spatial modes. Optical spectrometer design is motivated by the fact that optical detectors are not themselves particularly spectrally sensitive. Most electronic detectors have spectrally broad response over a wide range. Optical components, such as gratings, filters, interferometers, etc., preprocess the field prior to electronic detection to induce spectrally dependent features on the intensity patterns sensed by spectrally insensitive electronic devices. However, as will be described below, quantum dot detectors could change this situation [5]. The following methods may be used in filter/interferometer array 102 and detector array 104 to implement the transformation of Equation (1).

Two-Beam Interferometers

In one embodiment, static filter/interferometer array 102 and detector array 104 may be implemented using a two-beam interferometer.
A two-beam interferometer, such as a Michelson, Mach-Zehnder, Sagnac or birefringent system, separates the source with a beam splitter or polarizer and recombines it on a detector. Two-beam interferometers have long been used as scanning Fourier transform spectrometers. In this implementation, the relative optical path along the arms of the interferometer is scanned and the optical signal is measured as a function of time delay. If the transformation from a source plane to the interferometer output plane is imaging, these instruments can function as "hyperspectral" cameras in which each pixel contains high resolution spectra. Over the past decade, there has been increasing emphasis on "static" Fourier transform interferometers. In a static interferometer, the spectrum is measured in a single shot [8–13, 15, 17]. Advantages of static interferometers include reduced dependence on mechanical components, compact and stable implementation, and lowered cost. In one embodiment, static filter/interferometer array 102 and detector array 104 may be implemented using static interferometers for large etendue sources. A static two-beam interferometer captures a spectrum in a single shot by measuring the signal generated by colliding wavefronts. FIG. 3 illustrates an example of the operation of a static two-beam interferometer. In FIG. 3, the colliding wavefronts 300 and 302 induce an interference pattern 304 on sensor plane 306 that can be inverted to describe the spectrum of the source. The beam paths in two-beam interferometers delay and redirect the direction of propagation of the interfering waves. In FIG. 3, the wavefronts 300 and 302 intersect at their midpoints, which can be achieved in imaging and source doubling interferometers. Other designs, notably Michelson systems, introduce a shear in addition to the tilt between the wavefronts. Two-beam interferometers may be further subdivided into null time delay and true time delay instruments.
True time delay instruments transversely modulate the beams with a lens or other imaging instruments to reproduce the field with a delay. Despite the variety of mechanisms that create two-beam interference, uniform wavefront two-beam interferometers can be described using a relatively small number of parameters in a simple model. A two-beam interferometer interferes the beam radiated from a source with a rotated, translated and delayed version of the same beam (the possibility of a change in spatial scale is discounted for simplicity). FIG. 4 illustrates the relative geometry of two beams 400 and 402 in a two-beam interferometer. In FIG. 4, the signal produced on sensor plane 404 is the mean superposition of the fields along the two arms:

m(r) = Γ(r,r,0) + Γ(r′,r′,0) + Γ(r,r′,τ) + Γ(r′,r,τ)    (11)

where Γ(r,r′,τ) is the mutual coherence between points r and r′ for time delay τ. The effect of the interferometer is to rotate, displace and delay one beam relative to the other. The transformation is described by

r′ = R_θφ(r − r₀) + d = R_θφ r + d′    (12)

where R_θφ is a rotation about the center point r₀, d is a displacement, and d′ = d − R_θφ r₀. In a static instrument, R_θφ, d′, and τ are fixed. System design consists of selecting these parameters and the sampling points and integration areas for the measurements defined by Equation (11). As described above, an object of the present invention is measuring spectral content of spatially broadband "incoherent" sources. The coherent mode decomposition for such a source can be expressed in terms of any complete set of modes over the spatio-spectral support of the field.
Using a plane wave decomposition, the mutual coherence can be modeled as

Γ(r₁,r₂,τ) = Σ_{l,m,n} α_lmn exp(−j k_lmn·(r₁ − r₂)) exp(j c|k_lmn|τ)    (13)

where k_lmn = (2πl/L_x) i_x + (2πm/L_y) i_y + (2πn/L_z) i_z. One goal of static MMS according to the present invention is to measure the mean spectrum, which in this case is

S̄(ν) = (1/A) ∫∫ Γ(r,r,τ) exp(−j2πντ) dr dτ = Σ_{l,m,n} α_lmn δ(ν − c|k_lmn|/2π)    (14)

The quantity that a two-beam interferometer measures, however, is

Γ_p(r,r,τ) = Σ_{l,m,n} α_lmn exp(−j k_lmn·(I − R_θφ)·r) exp(j k_lmn·d′) exp(j c|k_lmn|τ)    (15)

τ may be non-zero, but it is fixed for a given measurement. Generally, an interferometer measures Γ(r,r′,τ) at a fixed time over a simple manifold, such as a plane. The power spectrum of the source is estimated by taking a spatial Fourier transform along one or more dimensions in the plane, which yields

S_p(u,τ) = Σ_{l,m,n} α_lmn exp(j k_lmn·d′) exp(j c|k_lmn|τ) sinc((u − [k′_lmn]_∥)A)    (16)

where A is the extent of the sensor plane, the origin of r is assumed to lie on the sensor plane, k′_lmn = k_lmn·(I − R_θφ), and [k′_lmn]_∥ is the component of k′_lmn parallel to the sensor plane. Any non-zero value for d′ substantially reduces the bandwidth in k_lmn over which Equation (16) may be used to extract the estimated spectrum of Equation (14). Since by definition a large etendue spectrometer must accept a large bandwidth in k_lmn, spectrometers for which the point of rotation for the interfering beams is not in the sensor plane are not well suited for implementing filter array 102 and detector array 104.
Static Sagnac, Mach-Zehnder and birefringent spectrometers are capable of producing a center of rotation within the sensor plane and are therefore suitable for use as filter array/spectrometer 102 and detector array 104 according to the present invention. For these sensors, Equation (15) may be used to estimate S̄(ν) to the extent that one can assume that [k′_lmn]_∥ can be associated with |k_lmn|. Since the spatial frequency resolution in S_p(u,τ) is 1/A, it can be assumed that no ambiguity results if the longitudinal bandwidth of k_lmn is less than 1/A. Since the spectral resolution in estimating S̄(ν) will then be c/A for a simple two-beam interferometer, spectral resolution and etendue are inversely related. A simple static interferometer, even with image transfer between the source and sensor planes and rotation in the sensor plane, cannot simultaneously maintain high spectral resolution and high etendue. The following sections consider alternative designs to overcome this challenge.

Static Two-Beam Interferometer for Multimode Multiplex Spectroscopy

FIG. 5 illustrates an example of a static two-beam interferometer suitable for multimode multiplex spectroscopy according to an embodiment of the present invention. Referring to FIG. 5, interferometer 500 includes a beam splitter 502 and a plurality of mirrors 504–512 located at different distances from a detector 514. In addition, interferometer 500 includes a mirror 516 and imaging optics 518 and 520. Beam splitter 502 may be any suitable type of beam splitter for splitting the optical power of a received signal. Mirrors 504–512 may be any suitable type of mirrors capable of reflecting optical energy. Mirrors 504–512 are preferably located at different distances from detector 514 so that the interference pattern produced by each detector for each point on diffuse source 522 is different. Mirror 516 may be any suitable mirror for reflecting incident energy back to beam splitter 502.
Imaging optics 518 and 520 may be lenses for projecting points of source 522 onto mirror 516 and detector array 514. Detector array 514 may be any suitable type of detector capable of detecting optical energy. In the illustrated example, beam splitter 502 receives light rays 524 and 526 emanating from points P1 and P2 on source 522. Light ray 524 enters beam splitter 502 and is split into components 528 and 530. Similarly, light ray 526 is incident on beam splitter 502 and is split into components 532 and 534. Light ray component 528 is reflected by mirror 504, proceeds back through beam splitter 502 and through optics 518, and is focused on detector array 514. Similarly, component 530 is reflected by mirror 516 and by beam splitter 502 through optics 518 and onto detector array 514. The interference of light ray components 528 and 530 produces an interference pattern for point P1 on detector array 514. Similarly, the interference of light ray components 532 and 534 produces an interference pattern for point P2 on detector array 514. According to an important aspect of the invention, the difference in distance between interference light paths for different points on source 522 preferably varies. For example, the distance traveled by light ray component 528 is preferably different from the distance traveled by light ray component 532 in reaching detector array 514. Assuming that the spectra of different points on source 522 are related, the interference patterns for the different points detected by detector array 514 can be used to estimate a property of the source 522, such as the chemical composition or the average spectrum.

Sampled Interferometers

The challenge of using measurements of the form represented by Equation (15) to estimate S̄(ν) can be addressed by revised sampling strategies for Γ(r,r′,τ). The most direct approach is to sample as a function of τ. True time delays can be introduced in the field by waveguiding or by imaging.
The waveguiding approach may include coupling each point in the source plane through a different fiber interferometer. FIG. 6 illustrates a sampled interferometer in which each optical fiber includes a different fiber interferometer according to an embodiment of the present invention. Referring to FIG. 6, a plurality of optical fibers 600, 602, and 604 measures spectral projections emanating from different points of a source of interest. Each optical fiber 600, 602, and 604 may include a different fiber interferometer with a different relative interference delay. For example, optical fiber 600 may include an in-line interferometer 606 with different optical path lengths that result in a time delay t2 − t1. Optical fiber 602 may include an in-line interferometer 608 with different optical path lengths such that the interference delay is equal to t4 − t3. Optical fiber 604 may include an in-line interferometer 610 with a delay of t6 − t5. In a preferred embodiment of the invention, (t6 − t5) ≠ (t4 − t3) ≠ (t2 − t1). Using different interference delays for each optical fiber enables different multi-peak filter functions to be obtained for each point of a diffuse source. Assuming that the spectra of the different points on the source are related, overall spectral properties of the source can be determined. In moving from a two-beam interferometer to a fiber array, the measurements implemented by filter array/interferometer 102 and detector 104 change from continuous transform systems to discrete sampling. While a two-beam interferometer samples by integrating on pixels across a plane, more general devices consist of discrete 3D structures and sample more general space-time points in Γ(r,r′,τ). A segmented two-beam interferometer is another example of a sampled system suitable for use with the present invention. Such an interferometer may include an array of static two-beam interferometers, based on Sagnac or Wollaston designs.
Each interferometer may use imaging optics to induce a true coarse time delay between the beams and tilt to induce a fine propagation time delay. The aperture of the interferometers is preferably matched to the etendue of the source. The main advantage of an array of two-beam interferometers is that the number of discrete devices is much reduced relative to the fiber array approach. One may view these approaches as a spectrum spanning the effective number of time delays per interferometer from one to N. The acceptance angle of the system falls as the number of time delays per interferometer increases. In view of this trade-off and the manufacturing complexity and cost of making an array of interferometers, this approach may be less preferable than other approaches. Filters may be more cost-effective to implement than sampled interferometers. Accordingly, exemplary filter implementations are described in detail below.

The filter approach to MMS seeks to directly implement measurements of the form shown in Equation (1). As in the previous section, measurements are implemented in discrete form. FIG. 7A illustrates an exemplary filtered detector array suitable for use with embodiments of the present invention. Referring to FIG. 7A, a plurality of filters 700 are located on a source plane 702. A detector 704 may be located on each filter 700. Each filter 700 may implement a different multi-peak filter function h_i(ν). In FIG. 7B, each filter 700 may include a plurality of layers. The layers for each filter implement the filter functions h_i(ν). Each detector makes a measurement of the form:

m_i = ∫∫_{A_i} I(ν,r) h_i(ν) dν dr    (17)

where A_i is the area of the i-th detector element. Filters 700 may include absorptive materials, as in color cameras, or interference filters. In contrast with cameras, many different filter functions h_i(ν) may be implemented.
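The discrete form of Equation (17) can be sketched as a matrix of sampled filter responses applied to a common spectrum. In this sketch, random multi-peak responses stand in for the physical filters, and the spatial integral is normalized away by assuming a spatially uniform source; all sizes and shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
nu = np.linspace(0.0, 1.0, 60)     # normalized frequency grid
n_det = 120                        # number of filter/detector elements

# Random multi-peak filter samples h_i(nu_j), stand-ins for physical
# filter responses.
Hfilt = rng.uniform(0.0, 1.0, size=(n_det, nu.size))

# Synthetic two-peak spectrum shared by all detector elements.
s_true = (np.exp(-((nu - 0.4) / 0.05) ** 2)
          + 0.5 * np.exp(-((nu - 0.7) / 0.03) ** 2))

m = Hfilt @ s_true                 # one integrated measurement per detector

# Invert the overdetermined multiplex measurement by least squares.
s_est, *_ = np.linalg.lstsq(Hfilt, m, rcond=None)
```

Each detector integrates a broad sample of the spectrum, so no single measurement resolves a line; the spectrum is recovered only by jointly inverting the full set of multiplex measurements.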
In order to achieve the throughput advantage of multiplex spectroscopy [23], each filter 700 integrates a broad sample of the source spectrum. Design of the filter functions for high fidelity and high resolution reconstruction of the source is a challenging design problem. The primary disadvantages of the absorptive approach are lack of spectral resolution and lack of programmability. The advantage of the absorptive approach is that the filters may in principle be very thin. The disadvantages of the interference approach are that the filters use propagation to filter and thus must be relatively thick. Interference filters may also be relatively challenging to fabricate to precise specifications, and their response may depend on angle of incidence, thus limiting the etendue over which they are effective. Accordingly, four approaches to filters suitable for use with the present invention will now be described: layered media, volume holograms, 3D structured materials and absorptive media. The simplest thin film filters consist of a layered stack of media of different refractive indices. Commonly, such filters are constructed using periodic stacks. Wavelengths and waves resonant with the filter are selectively reflected or transmitted. Limited angular acceptance is one of the primary disadvantages of conventional thin film filters, but they can be optimized for high angular degeneracy. Recently, a group at MIT has shown that a thin film filter may be designed to selectively reject all light at a given wavelength, independent of angle of incidence [24–39]. This filter acts as a very high etendue wavelength selector. For multiplex spectroscopy, a filter with high angular invariance but also broad and nearly random spectral response is preferred. Such filters have been shown to be possible through simulation. FIG. 8 shows the band diagram for a thin film filter consisting of 10 layers with alternating refractive indices of 1.5 and 2.5.
The layer thicknesses are uniformly and randomly distributed between zero and ten times the free space center wavelength. In FIG. 8, the values on the negative side of the horizontal axis reflect transmission of the TM mode as a function of frequency, scaled from 0.9 to 1.1 times the central frequency. The horizontal scale is the transverse wavenumber n sin θ running from normal incidence to incidence from air along the surface of the filter. The values on the positive side of the horizontal axis represent the transmission of the TE mode as a function of incident wavenumber and relative frequency. Ideally, for a high etendue filter, the bright bands of transmission in FIG. 8 would be horizontal. Curvature in these bands corresponds to variation in the transmission at a single wavelength as a function of angle of incidence. If the transmission across FIG. 8 is summed at each wavelength, the spectral response of the filter for spatially broadband (incoherent) sources can be estimated. FIG. 9 illustrates the results of summing the transmission bands in FIG. 8 at each wavelength. In FIG. 9, the spectral response is highly structured. Creation of an MMS system using multichannel thin film filters requires realizing a large number of filters with responses as structured as the response shown in FIG. 9. FIG. 10 illustrates the spectral responses of 5 thin film filters realized with random layer thicknesses. In FIG. 10, the different spectral responses are highly uncorrelated. Optimized design of the layer thicknesses in a set of spectrally varying filters is expected to substantially improve the orthogonality and inversion rank of multichannel thin film filters. As shown in FIG. 8, however, curvature in the spectral response is difficult to completely remove. Substantial flattening of this response is expected to require 3D modulated filters, such as volume holograms or photonic crystals.
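A calculation of the kind behind these figures can be sketched with the standard characteristic-matrix (transfer matrix) method. The sketch below is restricted to normal incidence with air on both sides; the layer count, the alternating indices of 1.5 and 2.5, and the random thicknesses up to ten center wavelengths follow the text, while the wavelength grid and random seed are illustrative assumptions:

```python
import numpy as np

def transmission(wavelengths, indices, thicknesses):
    """Normal-incidence transmission of a lossless layered stack via the
    characteristic (transfer) matrix method, with air on both sides."""
    T = np.empty(len(wavelengths))
    for k, lam in enumerate(wavelengths):
        M = np.eye(2, dtype=complex)
        for n, d in zip(indices, thicknesses):
            # Phase thickness of one layer (d in units of the center wavelength).
            delta = 2.0 * np.pi * n * d / lam
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        # With n = 1 on both sides, t = 2 / (m11 + m12 + m21 + m22).
        t = 2.0 / (M[0, 0] + M[0, 1] + M[1, 0] + M[1, 1])
        T[k] = abs(t) ** 2
    return T

rng = np.random.default_rng(1)
indices = np.where(np.arange(10) % 2 == 0, 1.5, 2.5)   # alternating 1.5 / 2.5
thicknesses = rng.uniform(0.0, 10.0, size=10)          # 0..10 center wavelengths
lams = np.linspace(0.9, 1.1, 201)                      # relative wavelength grid
T = transmission(lams, indices, thicknesses)
```

For a lossless stack the computed transmission stays between 0 and 1, and the highly structured response described in the text appears as rapid oscillations of T across the band.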
Biological or Chemical Filter Design

Molecules emit or absorb characteristic spectra in a variety of situations. Raman spectra, which are inelastic shifts of scattered radiation due to intermolecular vibrational resonances, are particularly characteristic of molecular sources. Ethanol, for example, has Raman lines at 436, 886, 1062, 1095, 1279 and 1455 inverse centimeters relative to the excitation source. FIG. 11A illustrates typical Raman spectra for ethanol in water. In diffuse multicomponent environments, many spectral signals will be present. Partial least squares (PLS) algorithms weight spectral components or individual measurements so as to enable estimation of target densities. In designing an MMS sensor, one balances physical realizability of a filter function against the design goals. A thin film filter, for example, can be designed to pass multiple wavelengths to selectively measure components of interest based on PLS optimization. For example, FIG. 11B shows the transmittance of a thin film filter designed for ethanol detection. A multiplex filter design for ethanol, however, does not necessarily have to match just the peaks of the ethanol Raman spectra. The design may be derived from the PLS optimization or other suitable multivariate optimization. The filter illustrated in FIG. 11B would be one of 4–16 different components in a multichannel MMS detector system. An example of a different multi-peak filter function that may be used in a multichannel MMS detector for detecting ethanol is shown in FIG. 11C. This process may be repeated for each filter element to form an MMS detector with different multi-peak filter functions for ethanol detection according to an embodiment of the invention. Once the different filters are created, measurements from the different filters can be combined using the equations described above to estimate the chemical composition of a spectrally diffuse source.
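As a toy illustration of a multi-peak filter function targeted at the ethanol Raman lines quoted above, one might superpose Gaussian passbands on those shifts. The Gaussian shape and the peak width are assumptions for illustration only; as the text notes, a real design would come from PLS or another multivariate optimization rather than from matching peaks directly:

```python
import numpy as np

# Ethanol Raman shifts quoted in the text (cm^-1 relative to the excitation).
ETHANOL_LINES = [436.0, 886.0, 1062.0, 1095.0, 1279.0, 1455.0]

def multi_peak_filter(shifts_cm, lines=ETHANOL_LINES, width_cm=15.0):
    """Illustrative multi-peak transmittance: a sum of Gaussian passbands
    centered on the target Raman lines (the width is an assumed parameter)."""
    shifts = np.asarray(shifts_cm, dtype=float)
    h = np.zeros_like(shifts)
    for line in lines:
        h += np.exp(-0.5 * ((shifts - line) / width_cm) ** 2)
    return np.clip(h, 0.0, 1.0)   # keep a physical transmittance in [0, 1]

shifts = np.linspace(0.0, 2000.0, 2001)   # 1 cm^-1 grid, assumed range
h = multi_peak_filter(shifts)
```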
For example, measurements may be simultaneously taken using the different multi-peak filters. Each measurement may be multiplied by the inverse of its respective filter function to yield the concentration of the compound of interest measured by each detector element. The concentrations may be combined to estimate the average concentration in the source using the equations described above. Variation in the filter response as a function of angle of incidence is a problem with thin film systems. Spatial filtering would be needed to restrict the angles of incidence to a range consistent with the desired spectral response.

3D Filters

In yet another embodiment of the invention, array 102 may be implemented using a 3D filter. A 3D filter modulates the index of refraction along all three dimensions. The simplest form of 3D spectral filter is a volume hologram, typically recorded via the photorefractive effect. Volume holograms can be extraordinarily selective spectrally, especially if they are recorded along the direction of propagation. The disadvantage of volume holograms is that they are based on very weak index modulations and that these weak modulations fall rapidly as the complexity of the hologram is increased [40]. The advantage of volume holograms is that the spatial and spectral response of the system can be precisely programmed. As an MMS system, a volume hologram recorded as a set of “shift multiplexed” [41] reflection gratings could be set to operate as an arbitrary multichannel filter on each source point. FIG. 12 illustrates an example of a volume hologram suitable for use with the present invention. In FIG. 12, a hologram layer 1200 includes a plurality of reflective elements for reflecting light emanating from different source points 1202. The light emanating from the different source points is detected by detector elements 1204.
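The combine-and-invert step can be sketched as a linear least-squares problem: stack the discrete filter responses into a matrix H, so the measurement vector is m = H s, and recover the spectral estimate by inverting the (generally non-square) transformation. The data below are randomly generated, noiseless toy values, not measurements from the patent:

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_bins = 16, 12      # more filters than spectral bins (assumed)

H = rng.random((n_channels, n_bins))   # rows: discretized filter responses h_i(nu)
s_true = rng.random(n_bins)            # discretized mean spectrum (assumed)
m = H @ s_true                         # vector of detector measurements

# Invert the overdetermined transformation by least squares; with noiseless
# data and full column rank this recovers the spectrum exactly.
s_est, *_ = np.linalg.lstsq(H, m, rcond=None)
```

With noisy measurements the same least-squares step still applies, but the quality of the estimate then depends on the conditioning (the "orthogonality and inversion rank") of H, which is why the text emphasizes uncorrelated filter responses.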
By using strong holographic materials, such as photopolymers, and combining holographic confinement with layered media, this approach may be successful in creating a high etendue MMS system. In yet another alternate embodiment of the invention, the spectral selectivity and programmability of volume holography can be exchanged for the ease of fabrication and manufacturing associated with photonic crystals or photonic glasses (quasi-random structured materials). The idea of using photonic crystals as multiplex spectral filters is particularly promising in the context of recent results on “superprism” effects [42–50]. The superprism effect yields high spectral dispersion on propagation through photonic crystals. The effective dispersion may be 1–2 orders of magnitude in excess of corresponding values for conventional dielectrics. For MMS applications, very thin samples of microcavities or gratings may be used.

Photonic Crystal Structures

FIG. 13 illustrates an example of a photonic crystal suitable for use with the present invention. In FIG. 13, a photonic crystal structure may include a plurality of dielectric spheres 1300 located in front of detector elements 1302. Dielectric spheres 1300 may be located in a thin film medium 1304 and detectors 1302 may be located in another medium 1306. In one exemplary embodiment, dielectric spheres 1300 may be made of glass. Light incident from a diffuse source is scattered by dielectric spheres 1300, captured by detectors 1302 and processed in order to determine properties of the source. The spatio-spectral mapping from the source to the detectors creates a quasi-random mapping for MMS analysis. FIG. 13A shows such a mapping for an inhomogeneous photonic crystal fabricated at Clemson University and tested by the inventors of the present invention at Duke University [51–55]. The goal of fabricating such a photonic crystal has been to design a filter that uniformly passes a single wavelength [51–55].
However, due to spatial non-uniformities, such a crystal can be used to determine spectral properties of a diffuse source by inverting the filter functions of different positions in the crystal and combining the measurements, as described above. Each curve in FIG. 13A corresponds to a spectral measurement at a different position, r[i], behind a photonic crystal filter. The measurements were made by illuminating the sample with a spatially incoherent, spectrally broadband source and then measuring the transmitted spectrum at points behind the photonic crystal using a fiber coupled spectrometer. The collection aperture of the fiber was 9 microns in diameter. The differences in the spectral response for each curve represent spectral diversity, as defined above. Over the spectral range from 650 to 750 nm, the spectral diversity of the different detection points is high. Detectors measuring the total optical power at each detection point over this range would measure $\vec{m} = \int \vec{h}(\nu)\,\overline{S}(\nu)\,d\nu$, where $\vec{h}(\nu)$ is a vector of functions. Each component function corresponds to one of the spectral response curves in FIG. 13A. As discussed above, this vector transformation may be inverted to estimate the mean spectrum or the chemical composition of a sample. In yet another alternate embodiment, rather than using dielectric spheres, rectilinear structures can be used to compress the modal propagation range and create spatio-spectral structure on the sensor plane. FIG. 14A illustrates a rectilinear crystalline structure suitable for use with the present invention. Referring to FIG. 14A, a plurality of rectilinear structures 1400–1410 are located in front of detectors 1412. Rectilinear structures 1400–1410 may include strips of absorptive or reflective material. Rectilinear structures 1400–1410 are preferably different from each other to produce a different filter response for different points of a diffuse source. FIG.
14B illustrates a stacked filter/detector array suitable for obtaining multi-peak spectral measurements according to an embodiment of the present invention. Detectors 1416 are embedded in the filter stack 1414 so that some source frequencies are absorbed at one detector layer while passing other frequencies to the subsequent detector layers.

Spectrally Sensitive Detectors

As noted above, multichannel filter operation can be implemented using spectrally selective absorbers, rather than interferometric filters. Ideally, these absorbers consist of a heterogeneous set of relatively narrow band species. For example, a detector formed from an array of quantum dots may be used, as in [5]. Rather than attempting to electrically isolate individual dot channels, however, one may integrate over a selection of spectral channels. If one made detectors containing numbers of dots proportional to or less than the number of spectral channels the dots absorbed, one would expect relatively random spectral responses in each detector. An array of such detectors might be used to reconstruct the mean spectrum. The disclosure of each of the following references is hereby incorporated herein by reference in its entirety. • [1] L. Mandel and E. Wolf, Optical coherence and quantum optics. Cambridge: Cambridge University Press, 1995. • [2] K. J. Zuzak, M. D. Schaeberle, E. N. Lewis, and I. W. Levin, “Visible reflectance hyperspectral imaging: Characterization of a noninvasive, in vivo system for determining tissue perfusion,” Analytical Chemistry, vol. 74, pp. 2021–2028, 2002. • [3] T. H. Pham, F. Bevilacqua, T. Spott, J. S. Dam, B. J. Tromberg, and S. Andersson-Engels, “Quantifying the absorption and reduced scattering coefficients of tissuelike turbid media over a broad spectral range with noncontact Fourier-transform hyperspectral imaging,” Applied Optics, vol. 39, pp. 6487–6497, 2000. • [4] T. H. Pham, C. Eker, A. Durkin, B. J. Tromberg, and S.
Andersson-Engels, “Quantifying the optical properties and chromophore concentrations of turbid media by chemometric analysis of hyperspectral diffuse reflectance data collected using a fourier interferometric imaging system,” Applied Spectroscopy, vol. 55, pp. 1035–1045, 2001. • [5] J. L. Jimenez, L. R. C. Fonseca, D. J. Brady, J. P. Leburton, D. E. Wohlert, and K. Y. Cheng, “The quantum dot spectrometer,” Applied Physics Letters, vol. 71, pp. 3558–3560, 1997. • [6] E. Wolf, “Coherent-mode propagation in spatially band-limited wave fields,” Journal of the Optical Society of America A, vol. 3, pp. 1920–1924, 1986. • [7] M. C. Denham, “Implementing Partial Least-Squares,” Statistics and Computing, vol. 5, pp. 191–202, 1995. • [8] D. M. Haaland and E. V. Thomas, “Partial Least-Squares Methods for Spectral Analyses. 1. Relation to Other Quantitative Calibration Methods and the Extraction of Qualitative Information,” Analytical Chemistry, vol. 60, pp. 1193–1202, 1988. • [9] A. Phatak and F. de Hoog, “Exploiting the connection between PLS, Lanczos methods and conjugate gradients: alternative proofs of some properties of PLS,” Journal of Chemometrics, vol. 16, pp. 361–367, 2002. • [10] J. A. Westerhuis, T. Kourti, and J. F. MacGregor, “Analysis of multiblock and hierarchical PCA and PLS models,” Journal of Chemometrics, vol. 12, pp. 301–321, 1998. • [11] S. Wold, J. Trygg, A. Berglund, and H. Antti, “Some recent developments in PLS modeling,” Chemometrics and Intelligent Laboratory Systems, vol. 58, pp. 131–150, 2001. • [12] J. F. James and R. S. Sternberg, The Design of Optical Spectrometers. London: Chapman & Hall, 1969. • [13] J. Courtial, B. A. Patterson, A. R. Harvey, W. Sibbett, and M. J. Padgett, “Design of a static Fourier-transform spectrometer with increased field of view,” Applied Optics, vol. 35, pp. 6698–6702, 1996. • [14] J. Courtial, B. A. Patterson, W. Hirst, A. R. Harvey, A. J. Duncan, W. Sibbett, and M. J.
Padgett, “Static Fourier-transform ultraviolet spectrometer for gas detection,” Applied Optics, vol. 36, pp. 2813–2817, 1997. • [15] E. V. Ivanov, “Static Fourier transform spectroscopy with enhanced resolving power,” Journal of Optics a-Pure and Applied Optics, vol. 2, pp. 519–528, 2000. • [16] C. C. Montarou and T. K. Gaylord, “Analysis and design of compact, static Fourier-transform spectrometers,” Applied Optics, vol. 39, pp. 5762–5767, 2000. • [17] M. J. Padgett and A. R. Harvey, “A Static Fourier-Transform Spectrometer Based on Wollaston Prisms,” Review of Scientific Instruments, vol. 66, pp. 2807–2811, 1995. • [18] B. A. Patterson, M. Antoni, J. Courtial, A. J. Duncan, W. Sibbett, and M. J. Padgett, “An ultra-compact static Fourier-transform spectrometer based on a single birefringent component,” Optics Communications, vol. 130, pp. 1–6, 1996. • [19] B. A. Patterson, J. P. Lenney, W. Sibbett, B. Hirst, N. K. Hedges, and M. J. Padgett, “Detection of benzene and other gases with an open-path, static Fourier-transform UV spectrometer,” Applied Optics, vol. 37, pp. 3172–3175, 1998. • [20] D. Steers, B. A. Patterson, W. Sibbett, and M. J. Padgett, “Wide field of view, ultracompact static Fourier-transform spectrometer,” Review of Scientific Instruments, vol. 68, pp. 30–33, • [21] S. Strassnig and E. P. Lankmayr, “Elimination of matrix effects for static headspace analysis of ethanol,” Journal of Chromatography A, vol. 849, pp. 629–636, 1999. • [22] G. Zhan, “Static Fourier-transform spectrometer with spherical reflectors,” Applied Optics, vol. 41, pp. 560–563, 2002. • [23] J. F. James and R. S. Sternberg, The design of optical spectrometers. London: Chapman & Hall, 1969. • [24] I. Abdulhalim, “Omnidirectional reflection from anisotropic periodic dielectric stack,” Optics Communications, vol. 174, pp. 43–50, 2000. • [25] D. Bria, B. Djafari-Rouhani, E. H. El Boudouti, A. Mir, A. Akjouj, and A.
Nougaoui, “Omnidirectional optical mirror in a cladded-superlattice structure,” Journal of Applied Physics, vol. 91, pp. 2569–2572, 2002. • [26] K. M. Chen, A. W. Sparks, H. C. Luan, D. R. Lim, K. Wada, and L. C. Kimerling, “SiO2/TiO2 omnidirectional reflector and microcavity resonator via the sol-gel method,” Applied Physics Letters , vol. 75, pp. 3805–3807, 1999. • [27] E. Cojocaru, “Omnidirectional reflection from Solc-type anisotropic periodic dielectric structures,” Applied Optics, vol. 39, pp. 6441–6447, 2000. • [28] E. Cojocaru, “Omnidirectional reflection from finite periodic and Fibonacci quasi-periodic multilayers of alternating isotropic and birefringent thin films,” Applied Optics, vol. 41, pp. 747–755, 2002. • [29] M. Deopura, C. K. Ullal, B. Temelkuran, and Y. Fink, “Dielectric omnidirectional visible reflector,” Optics Letters, vol. 26, pp. 1197–1199, 2001. • [30] Y. Fink, J. N. Winn, S. H. Fan, C. P. Chen, J. Michel, J. D. Joannopoulos, and E. L. Thomas, “A dielectric omnidirectional reflector,” Science, vol. 282, pp. 1679–1682, 1998. • [31] B. Gallas, S. Fisson, E. Charron, A. Brunet-Bruneau, G. Vuye, and J. Rivory, “Making an omnidirectional reflector,” Applied Optics, vol. 40, pp. 5056–5063, 2001. • [32] C. Hooijer, D. Lenstra, and A. Lagendijk, “Mode density inside an omnidirectional mirror is heavily directional but not small,” Optics Letters, vol. 25, pp. 1666–1668, 2000. • [33] S. H. Kim and C. K. Hwangbo, “Design of omnidirectional high reflectors with quarter-wave dielectric stacks for optical telecommunication bands,” Applied Optics, vol. 41, pp. 3187–3192, • [34] J. Lekner, “Omnidirectional reflection by multilayer dielectric mirrors,” Journal of Optics a-Pure and Applied Optics, vol. 2, pp. 349–352, 2000. • [35] Z. Y. Li and Y. N. Xia, “Omnidirectional absolute band gaps in two-dimensional photonic crystals,” Physical Review B, vol. 6415, pp. art. no.-153108, 2001. • [36] D. Lusk, I. Abdulhalim, and F. 
Placido, “Omnidirectional reflection from Fibonacci quasi-periodic one-dimensional photonic crystal,” Optics Communications, vol. 198, pp. 273–279, 2001. • [37] W. H. Southwell, “Omnidirectional mirror design with quarter-wave dielectric stacks,” Applied Optics, vol. 38, pp. 5464–5467, 1999. • [38] B. Temelkuran, E. L. Thomas, J. D. Joannopoulos, and Y. Fink, “Low-loss infrared dielectric material system for broadband dual-range omnidirectional reflectivity,” Optics Letters, vol. 26, pp. 1370–1372, 2001. • [39] X. Wang, X. H. Hu, Y. Z. Li, W. L. Jia, C. Xu, X. H. Liu, and J. Zi, “Enlargement of omnidirectional total reflection frequency range in one-dimensional photonic crystals by using photonic heterostructures,” Applied Physics Letters, vol. 80, pp. 4291–4293, 2002. • [40] D. Brady and D. Psaltis, “Control of Volume Holograms,” Journal of the Optical Society of America a-Optics Image Science and Vision, vol. 9, pp. 1167–1182, 1992. • [41] G. Barbastathis, M. Levene, and D. Psaltis, “Shift multiplexing with spherical reference waves,” Applied Optics, vol. 35, pp. 2403–2417, 1996. • [42] T. Baba and M. Nakamura, “Photonic crystal light deflection devices using the superprism effect,” Ieee Journal of Quantum Electronics, vol. 38, pp. 909–914, 2002. • [43] T. Ochiai and J. Sanchez-Dehesa, “Superprism effect in opal-based photonic crystals,” Physical Review B, vol. 6424, pp. art. no.-245113, 2001. • [44] M. Koshiba, “Wavelength division multiplexing and demultiplexing with photonic crystal waveguide couplers,” Journal of Lightwave Technology, vol. 19, pp. 1970–1975, 2001. • [45] A. Shinya, M. Haraguchi, and M. Fukui, “Interaction of light with ordered dielectric spheres: Finite- difference time-domain analysis,” Japanese Journal of Applied Physics Part 1-Regular Papers Short Notes & Review Papers, vol. 40, pp. 2317–2326, 2001. • [46] A. Sharkawy, S. Y. Shi, and D. W. Prather, “Multichannel wavelength division multiplexing with photonic crystals,” Applied Optics, vol. 
40, pp. 2247–2252, 2001. • [47] H. M. van Driel and W. L. Vos, “Multiple Bragg wave coupling in photonic band-gap crystals,” Physical Review B, vol. 62, pp. 9872–9875, 2000. • [48] H. B. Sun, Y. Xu, J. Y. Ye, S. Matsuo, H. Misawa, J. F. Song, G. T. Du, and S. Y. Liu, “Photonic gaps in reduced-order colloidal particulate assemblies,” Japanese Journal of Applied Physics Part 2-Letters, vol. 39, pp. L591–L594, 2000. • [49] H. Kosaka, T. Kawashima, A. Tomita, M. Notomi, T. Tamamura, T. Sato, and S. Kawakami, “Superprism phenomena in photonic crystals: Toward microscale lightwave circuits,” Journal of Lightwave Technology, vol. 17, pp. 2032–2038, 1999. • [50] H. Kosaka, T. Kawashima, A. Tomita, M. Notomi, T. Tamamura, T. Sato, and S. Kawakami, “Superprism phenomena in photonic crystals,” Physical Review B, vol. 58, pp. R10096–R10099, 1998. • [51] Foulger, S. H., P. Jiang, et al. (2002). “Photonic bandgap composites based on crystalline colloidal arrays.” Abstracts of Papers of the American Chemical Society 223: 044-POLY. • [52] Foulger, S. H., P. Jiang, et al. (2001). “Photonic bandgap composites.” Advanced Materials 13(24): 1898-+. • [53] Perpall, M. W., K. P. U. Perera, et al. (2002). “High yield precursor polymer for inverse carbon opal photonic materials.” Abstracts of Papers of the American Chemical Society 223: 222-POLY. • [54] Smith, D. W., S. G. Chen, et al. (2002). “Perfluorocyclobutyl copolymers for microphotonics: Thermo-optics, electro-optics, rare earth doping, and micromolding.” Abstracts of Papers of the American Chemical Society 224: 063-POLY. • [55] Smith, D. W., S. R. Chen, et al. (2002). “Perfluorocyclobutyl copolymers for microphotonics.” Advanced Materials 14(21): 1585-+. 
• r: radial coordinate in space
• A: area
• ν: frequency
• k: wave vector
• λ: wavelength
• c: speed of light
• t: time
• τ: time delay
• m(r): measurement at a single point r
• I(ν, r): spectral intensity distribution
• h(ν, r): filter function
• S̄(ν): mean power spectrum
• s⃗: target spectrum vector
• s⃗[e]: target spectrum vector estimate
• m⃗: measurement state vector
• H: transformation matrix
• W: cross spectral density function
• Γ: mutual coherence function
• ψ: mode distribution
• φ: orthonormal mode distribution
• δ: delta function
• c[i]: molecular concentration of the ith sample
• R[θφ]: rotation operator

It will be understood that various details of the invention may be changed without departing from the scope of the invention. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the invention is defined by the claims as set forth hereinafter.
Design and Use of the Microsoft Excel Solver

Daniel Fylstra, Frontline Systems Inc., P.O. Box 4288, Incline Village, NV 89450
Leon Lasdon, MSIS Department, McCombs School of Business, The University of Texas at Austin, Austin, TX 78712-0212
John Watson, Software Engines, 725 Magnolia St., Menlo Park, CA 94025
Allan Waren, Computer and Information Science Department, Cleveland State University, Cleveland, Ohio 44115

Published in INTERFACES, Vol. 28, No. 5, Sept-Oct 1998, pp. 29-55.

We describe the design and use of the spreadsheet optimizer that is bundled with Microsoft Excel. We explain why we and Microsoft made certain choices in designing its user interface, model processing, and solution algorithms for linear, nonlinear and integer programs. We describe some of the common pitfalls encountered by users, and remedies available in the latest version of Microsoft Excel. We briefly survey applications of the Solver and its impact in industry and education.

Since its introduction in February 1991, the Microsoft Excel Solver has become the most widely distributed and almost surely the most widely used general-purpose optimization modeling system. Bundled with every copy of Microsoft Excel and Microsoft Office shipped during the last eight years, the Excel Solver is in the hands of 80 to 90 percent of the 35 million users of office productivity software worldwide. The remaining 10 to 20 percent of this audience use either Lotus 1-2-3 or Quattro Pro, both of which now include very similar spreadsheet solvers, based on the same technology used in the Excel Solver. This widespread availability has spawned many applications in industry and government.
In education, increasing numbers of MBA and undergraduate business instructors have adopted the Excel Solver as their tool for introducing students to optimization; most management science textbooks now include coverage of the Excel Solver, and several recent texts use it exclusively in their optimization coverage. We review the background and design philosophy of the Excel Solver. We seek to explain why the Excel Solver works the way it does, to clear up some common misunderstandings and pitfalls, and to suggest ideas for good modeling practice when using spreadsheet optimization – described under the Modeling Practice headings. We also briefly survey applications of the Excel Solver in industry and education and describe how practitioners who are not affiliated with the OR/MS community use it. The example models in this paper are available on Practice Online at (http://silmaril.smeal.psu.edu/pol.html) and at http://www.frontsys.com/interfaces.htm. Much more information – over 200 web pages at this writing – is available on Frontline Systems’ World Wide Web site (http://www.frontsys.com). The Microsoft Excel Solver combines the functions of a graphical user interface (GUI), an algebraic modeling language like GAMS [Brooke, Kendrick, and Meeraus 1992] or AMPL [Fourer, Gay, and Kernighan 1993], and optimizers for linear, nonlinear, and integer programs. Each of these functions is integrated into the host spreadsheet program as closely as possible. Many of the decisions we and Microsoft made in designing the Solver were motivated by this goal of seamless integration. Optimization in Microsoft Excel begins with an ordinary spreadsheet model. The spreadsheet’s formula language functions as the algebraic language used to define the model. Through the Solver’s GUI, the user specifies an objective and constraints by pointing and clicking with a mouse, and filling in dialog boxes.
The Solver then analyzes the complete optimization model and produces the matrix form required by the optimizers in much the same way that GAMS and AMPL do. The optimizers employ the simplex, generalized reduced gradient, and branch and bound methods to find an optimal solution and sensitivity information. The Solver uses the solution values to update the model spreadsheet, and provides sensitivity and other summary information on additional report spreadsheets.

Background and Design Philosophy of the Excel Solver

The Microsoft Excel Solver and its counterparts in Lotus 1-2-3 97 and Corel Quattro Pro were not the first spreadsheet optimizers; that distinction belongs to What’sBest!, conceived by Sam Savage, Linus Schrage, and Kevin Cunningham in 1985 and marketed by General Optimization Inc. for the Lotus 1-2-3 Release 2 spreadsheet [Savage 1985]. What’sBest! is still available in versions for each of the major spreadsheets and is now sold and supported by Lindo Systems Inc. Other early spreadsheet optimizers included Frontline Systems’ What-If Solver [Frontline Systems 1990], Enfin Software’s Optimal Solutions [Enfin Software 1988], and Lotus Development’s Solver in earlier versions of 1-2-3 [Lotus Development 1990]. The design approach of What-If Solver, implemented in the graphical user interface of Excel, was chosen by Microsoft over several alternatives including What’sBest!; by Borland (the original developers of Quattro Pro) over an earlier solver developed internally by that company; and later by Lotus over their own internally developed solver. A major reason for this outcome, we believe, is that the Excel Solver had as its design goal "making optimization a feature of spreadsheets," whereas other packages, such as What’sBest!, "use the spreadsheet to do optimization." In many small ways, the Excel Solver caters to the tens of millions of spreadsheet users, rather than to the tens of thousands of OR/MS professionals.
Although OR/MS professionals readily learn to use the Excel Solver, they often find certain aspects of its design puzzling or at least different from their expectations. In most cases the differences are due to (1) the architecture of spreadsheet programs, (2) the expectations of the majority of spreadsheet users who are not OR/MS professionals, or (3) the desires of the spreadsheet vendors (Microsoft in the case of the Excel Solver).

The Architecture of Spreadsheet Programs

Because of the architecture of spreadsheet programs, it is easy to create spreadsheet models that contain discontinuous functions or even nonnumeric values. These models usually cannot be solved with classical optimization methods. The spreadsheet’s formula language is designed for general computations and not just for optimization. Indeed, Excel supports a rich variety of operators and several hundred built-in functions, as well as user-written functions. In contrast, GAMS, AMPL and similar modeling languages include only a small set of operators and functions sufficient for expressing linear, smooth nonlinear, and integer optimization models.

The Expectations of Spreadsheet Users

The Excel Solver was designed to meet the expectations of spreadsheet users – in particular, users of earlier versions of Excel – rather than traditional OR/MS professionals. An example is the terminology it uses in dialog boxes, such as "Target Cell" (for the objective) and "Changing Cells" (for the decision variables). We used these terms – at Microsoft’s request – to mirror the terms used in the Goal Seek feature, which predated the Solver in Excel and in other spreadsheet programs. The Goal Seek feature, which spreadsheet users often describe as "what-if in reverse," solves a nonlinear function of one variable for a specified value. Spreadsheet users see the Excel Solver as a more powerful successor to the Goal Seek feature [Person et al. 1997].
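The behavior of a Goal Seek style solver (one equation in one changing cell) can be sketched with simple bisection. Excel's actual Goal Seek uses its own iterative method; the bracketing interval and the example formula below are illustrative assumptions:

```python
def goal_seek(f, target, lo, hi, tol=1e-10, max_iter=200):
    """Bisection sketch of a Goal Seek: find x with f(x) == target,
    assuming f(lo) - target and f(hi) - target have opposite signs."""
    g = lambda x: f(x) - target
    if g(lo) * g(hi) > 0:
        raise ValueError("target not bracketed on [lo, hi]")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if abs(g(mid)) < tol or hi - lo < tol:
            return mid
        if g(lo) * g(mid) <= 0:   # root lies in the left half-interval
            hi = mid
        else:                     # root lies in the right half-interval
            lo = mid
    return 0.5 * (lo + hi)

# Example: "set cell" holding the formula x**3 - 5*x "to value" 10
# "by changing" x, searched over an assumed bracket [0, 5].
x = goal_seek(lambda v: v**3 - 5 * v, 10.0, 0.0, 5.0)
```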
Figure 1 shows Excel’s Goal Seek dialog box, and Figure 2 shows the Solver Parameters dialog box with its similar terminology.

Figure 1: The Goal Seek feature of Microsoft Excel predated the Solver. This feature uses iterative methods to solve a simple equation (formula in the "set cell" equal to the "value") in one variable (the "changing cell").

Figure 2: The Solver Parameters dialog is used to define the optimization model. The terms "set target cell" (for the objective) and "changing cells" (for the variables), and the "value of" option were derived from the earlier Goal Seek feature.

The Desires of the Spreadsheet Vendors

The influence of the spreadsheet vendors’ desires is reflected in the way the Solver determines whether the model is linear or nonlinear. By default, the Solver assumes that the model is nonlinear. The user must select the Assume Linear Model check box in the Solver Options dialog box to override this assumption; the Solver does not attempt to automatically determine whether the model is linear by inspecting the formulas making up the model. Most of Excel’s several hundred built-in functions and all user-written functions would have to be treated as "not linear" (smooth nonlinear or discontinuous, over their full domains) in an automatic test. But users sometimes create models using these functions and then add constraints that result in a linear model over the feasible region. Microsoft wanted a general approach that would support such cases and specified the use of the check box, as well as the use of the nonlinear solver as the default choice.

The Role of Bundled Spreadsheet Solvers

The "free" bundled version of the Excel Solver described in this paper and similar products, such as What’sBest! Personal Edition, represent the low end of the range of spreadsheet solver functionality, capacity, and performance. More powerful versions are available and these versions are most often used to solve problems in industry.
For example, where the standard Excel Solver supports just 200 decision variables, Frontline Systems’ Large-Scale LP Solver (a component of the Premium Solver Platform) supports up to 16,000 variables, and Lindo Systems’ What’sBest! Extended Edition supports up to 32,000 variables. Table 1 summarizes the characteristics of the Premium Solver products offered by Frontline Systems.

Table 1. The characteristics of the enhanced Excel Solvers are summarized in this table. For integer problems, "B&B" refers to Branch and Bound, "P&P" refers to Preprocessing and Probing. For nonlinear problems, "GRG" refers to the Generalized Reduced Gradient method and "SQP" refers to Sequential Quadratic Programming.

│                           │ Excel Built-In Solver                 │ Premium Solver                    │ Premium Solver Plus               │ Premium Solver Platform             │
│ NLP Variables/Constraints │ 200/100 + bounds                      │ 400/200 + bounds                  │ 400/200 + bounds                  │ 1000/1000 + bounds                  │
│ LP Variables/Constraints  │ 200/unlimited                         │ 800/unlimited                     │ 800/unlimited                     │ 2000/unlimited to 16,000/unlimited  │
│ Setup Performance         │ 1x                                    │ 1-50x                             │ 1-50x                             │ 1-50x                               │
│ NLP Performance           │ 1x                                    │ 1x                                │ 1.5x                              │ 2-10x                               │
│ LP Performance            │ 1x                                    │ 2-3x                              │ 2-3x                              │ Large Scale                         │
│ MIP Performance           │ 1x                                    │ 5-10x                             │ 25-50x                            │ 25-50x                              │
│ Selection of optimizers   │ Fixed set                             │ Fixed set                         │ Fixed set                         │ Multiple choices, field-installable │
│ LP/QP Methods             │ Simplex w/bounds                      │ Enhanced Simplex w/bounds         │ Enhanced Simplex, Dual, Quadratic │ Sparse Simplex, LU, Markowitz       │
│ MIP Methods               │ Branch & Bound                        │ Enhanced Branch & Bound           │ Enhanced B&B, P&P, Dual Simplex   │ Enhanced B&B, P&P, Dual Simplex     │
│ NLP Methods               │ GRG2                                  │ GRG2                              │ Enhanced GRG2                     │ LSGRG, SQP, etc.                    │
│ Reports                   │ Standard: Answer, Limits, Sensitivity │ Standard + Linearity, Feasibility │ Standard + Linearity, Feasibility │ Standard + Linearity, Feasibility   │

Like most optimization software, the Excel Solver has steadily improved in performance over the years.
Although solution times are model dependent, in overall terms the Solver in Excel 97 offers about five times the performance of that in Excel 5.0, and perhaps 20 times the performance of the earliest version in Excel 3.0 (assuming a constant hardware platform). The Premium Solver further improves mixed integer problem solution times by a factor of 25 to 50 over the Excel 97 Solver (Table 1). While spreadsheet solvers are unlikely to compete with dedicated optimizers, such as CPLEX and OSL, they do provide a practical platform for solving real-world optimization problems.

User Interface and Selection of Objective, Decision Variables and Constraints

In the Excel Solver, as in an algebraic modeling system, the optimization model is defined by algebraic formulas (which appear in spreadsheet cells). Excel’s formula language can express a wide range of mathematical relationships, but Excel has no facilities for distinguishing decision variables from other inputs, or objectives and constraints from other formulas. Hence, the Excel Solver provides both interactive and user-programmable ways to specify which spreadsheet cells are to serve each of these roles. In interactive use, the user selects Tools Solver… from the Excel menu bar, displaying the Solver Parameters dialog box (Figure 2). As noted earlier, this dialog box is patterned after the Goal Seek feature (Figure 1). The "Value of" option offers a way to solve goal-seeking problems directly using the Solver; when the user selects this option and enters a target value, an equality constraint is added to the optimization model, and there is no objective to be maximized or minimized. (Alternatively, one may simply leave the Set Target Cell edit box blank and enter an equality constraint in the Constraint list box.) In either case, the problem is solved with a (constant) dummy objective, and the Solver stops when the first feasible solution is found.
In this way, the Excel Solver fulfills spreadsheet users’ expectations of a more powerful Goal Seek capability that can be used to find solutions for systems of equations and inequalities.

Decision Variables and the Guess Button

Model decision variables are entered in the By Changing Cells edit box. Excel allows one to enter a so-called multiple selection, which consists of up to 16 ranges (rectangles, rows or columns, or single cells) separated by commas. Alternatively, one may press the Guess button to obtain an initial entry in the By Changing Cells edit box. This feature often puzzles OR/MS professionals; Ragsdale [1997] includes a sidebar saying that the "Solver usually guesses wrong" and advising students not to use it, but many spreadsheet users find it useful. When one presses the Guess button, the Solver places a selection in the By Changing Cells edit box that includes all input (nonformula) cells on which the objective formula depends. This selection will usually include the actual decision variables as a subset and may be edited to remove ranges of cells that are not decision variables (for example, those that are fixed parameters in the model).

The key issue in a spreadsheet solver’s user interface is the method of specifying constraints. What’sBest! originally used a "Rule of Constraints" that required every formula cell dependent on the variables to be nonnegative, but this form was not intuitive for typical spreadsheet users and was not acceptable to the spreadsheet vendors. (More recent versions of What’sBest! use a new constraint representation.) In the earlier Lotus-developed solver for 1-2-3, Lotus used logical expressions in the spreadsheet’s formula language, including the relational operators <=, = and >=, to represent constraints. The solver dialog box simply offered an edit box in which a range of cells containing such logical formulas could be entered, thereby taking full advantage of an existing spreadsheet feature.
In the Excel Solver, in consultation with Microsoft, we chose a different way of specifying constraints, for several reasons. First, spreadsheet logical formulas (expressions that evaluate to TRUE or FALSE in Excel, or 1 or 0 in Lotus 1-2-3) are more general than constraints. They allow such relations as <, >, and <> (not equal), which are not easily handled by current optimization methods, as well as such logical operators as AND, OR and NOT. Second, relations such as A1 >= 0 are evaluated by the spreadsheet as strictly satisfied or unsatisfied, whereas an optimization algorithm evaluates constraints within a tolerance. For example, if A1 = -0.0000005, the Excel Solver would treat A1 >= 0 as satisfied (using the default Precision setting of 10^-6 or 0.000001), but the logical formula =A1>=0 in a cell would display as FALSE. Third, constraints almost always come in blocks or indexed sets, such as A1:A10 >= 0, and it is very advantageous for users to be able to enter such constraints and later view and edit them in block form. Hence, the Excel Solver provides a Constraint list box in the Solver Parameters dialog box where users can add, change, or delete blocks of constraints by clicking the corresponding buttons. In accord with the GUI conventions used throughout Excel, one can select blocks of cells for decision variables and for left-hand sides and right-hand sides of constraints by typing coordinates or by clicking and dragging with the mouse. The latter method is far more often used. Excel also allows the user to define symbolic names for individual cells or ranges of cells (through the Insert Name menu option). The Excel Solver will recognize any names the user has defined for the objective, variables, and blocks of constraints and will display them in the Solver Parameters dialog box (Figure 3).

Figure 3: Excel users can define symbolic names for single cells or ranges of cells, which the Solver will use.
This dialog depicts the same model as in Figure 2 with the aid of defined names, resulting in a much more readable model. For those who prefer to use spreadsheet logical formulas for constraints, the Excel Solver will read and write constraints in this form when the Load Model and Save Model buttons in the Solver Options dialog box are used.

Solver Options

The user can control several options and tolerances used by the optimizers through the Solver Options dialog box (Figure 4). In the standard Excel Solver, all such options appear in one dialog box; in the Premium Solver products, where many more options and tolerances are available, each optimizer has a separate dialog box.

Figure 4: The Solver Options dialog box is used to select algorithmic options and to set tolerances for the Excel Solver's solution methods.

The Max Time and the Iterations edit boxes control the Solver’s running time. The Show Iteration Results check box instructs the Solver to pause after each major iteration and display the current "trial solution" on the spreadsheet. In lieu of these options, however, the user can simply press the ESC key at any time to interrupt the Solver, inspect the current iterate, and decide whether to continue or to stop. The Assume Linear Model check box determines whether the simplex method or the GRG2 nonlinear programming algorithm will be used to solve the problem. The Use Automatic Scaling check box causes the model to be rescaled internally before solution. The Assume Non-Negative check box places lower bounds of zero on any decision variables that do not have explicit bounds in the Constraints list box. The Precision edit box is used by all of the optimizers and indicates the tolerance within which constraints are considered binding and variables are considered integral in mixed integer programming (MIP) problems.
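The difference between the optimizer’s tolerance-based constraint test and the spreadsheet’s strict logical evaluation, discussed earlier for the constraint A1 >= 0, can be sketched in Python (an illustration with our own function names, not the Solver’s code; the PRECISION constant mirrors the Solver’s default Precision setting):

```python
# Illustrative sketch (not Solver code): how a tolerance-based constraint
# test differs from the spreadsheet's strict logical evaluation.

PRECISION = 1e-6  # mirrors the Solver's default Precision setting

def solver_satisfied(lhs, rhs, precision=PRECISION):
    """Constraint lhs >= rhs as an optimizer sees it: a violation
    smaller than the tolerance still counts as satisfied."""
    return lhs >= rhs - precision

def spreadsheet_formula(lhs, rhs):
    """The logical formula =lhs>=rhs, evaluated strictly."""
    return lhs >= rhs

a1 = -0.0000005  # -5e-7, within the tolerance of zero

print(solver_satisfied(a1, 0.0))     # True: treated as feasible
print(spreadsheet_formula(a1, 0.0))  # False: displays as FALSE
```

A violation of 5×10^-7 is smaller than the 10^-6 tolerance, so the optimizer accepts the point while the strict logical formula rejects it.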
The Tolerance edit box (a somewhat unfortunate name, but Microsoft’s choice) is the integer optimality or MIP-gap tolerance used in the branch and bound method. The GRG2 algorithm uses the Convergence edit box and the Estimates, Derivatives, and Search option button groups.

Modeling Practice

Excel, including the Solver, offers many convenient ways to select and manipulate blocks of cells for variables and constraints. Modelers should take advantage of this feature by laying out optimization models with indexed sets (for example, products, regions, or time periods) along the columns and rows of tables or blocks of cells. We also highly recommend the practice of defining names for indexed sets of variables and constraints, and even for single cells. For example, the structure of the model with names defined as shown in Figure 3 is far more easily grasped than the same model with cell coordinate ranges as shown in Figure 2. Blocks of constraint values can often be computed more easily with Excel’s array formulas, which provide some of the high-level features of algebraic modeling languages, though without all of the flexibility of such languages. For further suggestions on modeling practice for spreadsheet optimization, we encourage readers to consult Conway and Ragsdale [1997].

User Programmability

The user-programmable interface offered by the Excel Solver – a feature rarely found in other optimization modeling systems – is critically important to the many commercial users who are using Excel and Microsoft Office as a platform for developing custom applications. Every interactive, GUI-based action supported by the Excel Solver has a counterpart function call in Visual Basic for Applications (VBA), Excel’s built-in programming language. (The earlier Excel macro language is also supported, for backward compatibility.) All components of Excel share this feature, making Excel a flexible platform for decision support applications.
For example, the new marketing textbook [Lilien and Rangaswamy 1997] includes a number of Excel Solver models that are controlled by VBA programs.

Model Extraction and Evaluation of the Jacobian Matrix

Like an algebraic modeling system such as GAMS or AMPL, the Excel Solver extracts the optimization problem from the spreadsheet formulas and builds a representation of the model suitable for an optimizer. For a linear programming (LP) problem, the focus of this model representation is the LP coefficient matrix. In more general terms, this is the Jacobian matrix of partial derivatives of the problem functions (objective and constraints) with respect to the decision variables. In LP problems, the matrix entries are constant and need to be evaluated only once at the start of the optimization. In nonlinear programming (NLP) problems, the Jacobian matrix entries are variable and must be recomputed at each new trial point. The Jacobian matrix could be obtained analytically by symbolic differentiation of the spreadsheet formulas [Ng et al. 1979]; or during function evaluation through so-called automatic differentiation methods [Griewank and Corliss 1991]; or it could be approximated by finite differences [Gill et al. 1981]. This choice is a major design decision in any optimization modeling system, with many tradeoffs. What’sBest! can be regarded as using the symbolic algebraic approach; systems such as GAMS and AMPL use automatic differentiation; and the Excel Solver uses finite differences. The most important reason for choosing the finite difference approach for the Excel Solver was the requirement, set by Microsoft, that it support all of Excel’s built-in functions as well as user-written functions. Symbolic differentiation would have been difficult for many of Excel’s several hundred functions (in fact, What’sBest! rejects most of them) and impossible for user-written functions.
To use automatic differentiation, we would have had to modify the Excel recalculator and require user-written functions (often coded in other languages) to supply both function and derivative values, neither of which was possible. On the other hand, finite differences could be efficiently calculated using the finely tuned Excel recalculator as is. The Solver is concerned only with those formulas that relate the objective and constraints to the decision variables; it treats all other formulas on the spreadsheet as constant in the optimization problem. Excel, 1-2-3, and Quattro Pro all implement a form of minimal recalculation, in which only those formulas that are dependent on the cell values that have changed need to be recalculated. In calculating finite differences, the [i,j]th element of the Jacobian matrix is approximated by the formula

J[i,j] ≈ (g_i(x + eps·e_j) − g_i(x)) / eps

where g_i is the ith problem function and e_j is the jth unit vector. In this formula, eps is a perturbation factor, typically 10^-8, approximately equal to the square root of the machine precision [Gill et al. 1981]. After an initial recalculation to evaluate the problem functions at the current point x, the Solver perturbs each variable in turn, recalculates the spreadsheet, and obtains values for the jth column of the Jacobian matrix. Hence the process requires n+1 recalculations for an n variable problem; each recalculation after the first perturbs just one variable and resets another, thereby taking advantage of the spreadsheet’s minimal recalculation feature.

Modeling Practice

The use of finite differences in the Excel Solver has a number of implications for spreadsheet modelers. The Solver’s model processing allows users to employ any of Excel’s several hundred built-in functions, as well as user-written functions, in constructing the spreadsheet. While many of these functions have nonlinear or non-smooth values, they can be used freely to compute parameters of the model that do not depend on the decision variables, even if the optimization model is an LP.
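The n+1-recalculation scheme described above can be sketched in Python (an illustrative model of the process, not the Solver’s actual code; the function and variable names are ours):

```python
# Illustrative sketch (not the Solver's code) of forward-difference
# Jacobian evaluation: one base evaluation plus one per variable,
# perturbing a single variable at a time.

EPS = 1e-8  # perturbation factor, roughly sqrt(machine precision)

def jacobian_fd(funcs, x, eps=EPS):
    """Approximate J[i][j] = d funcs[i] / d x[j] by forward differences."""
    base = [g(x) for g in funcs]   # the initial "recalculation"
    jac = [[0.0] * len(x) for _ in funcs]
    for j in range(len(x)):
        xp = list(x)
        xp[j] += eps               # perturb variable j only
        for i, g in enumerate(funcs):
            jac[i][j] = (g(xp) - base[i]) / eps
    return jac                     # n+1 evaluations of each function

# Example: g0 = 3*x0 + 2*x1 (linear), g1 = x0*x1 (nonlinear)
J = jacobian_fd([lambda x: 3*x[0] + 2*x[1], lambda x: x[0]*x[1]],
                [1.0, 4.0])
print(J)  # approximately [[3, 2], [4, 1]]
```

In a real spreadsheet, each pass of the inner loop corresponds to one minimal recalculation in which only the formulas depending on the perturbed cell are re-evaluated.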
Indeed, it is often convenient to use IF, CHOOSE, and table LOOKUP functions in calculating parameters, and we frequently see these functions in models created by commercial users of Frontline Systems’ Premium Solver products. Computing finite differences does, however, take time to recalculate the spreadsheet. Bearing in mind that Excel will recalculate every formula on the current worksheet that depends on the decision variables – even those not involved in the optimization model – modelers can minimize this time by keeping auxiliary calculations on a separate worksheet. Because of the significant overhead in recalculating multiple worksheets, the Excel Solver currently requires that cells for the decision variables, the objective, and the left-hand sides of constraints appear on the active sheet, though model formulas and right-hand sides of constraints can refer to other sheets. For users with models that take a long time to recalculate, we strongly recommend an upgrade to Excel 97, the latest version of Excel at this writing. Recalculation performance is greatly improved in this version, and the Solver is correspondingly faster on the majority of models. Frontline Systems’ Premium Solver products offer additional ways to speed up evaluation of the Jacobian matrix (Table 1), and we plan further improvements in this area.

Solving Linear Problems

When a user checks the Assume Linear Model box (Figure 4), the Excel Solver uses a straightforward implementation of the simplex method with bounded variables to find the optimal solution. This code operates directly on the LP coefficient matrix (that is, the Jacobian), which is determined using finite differences. The standard Excel Solver stores the full matrix, including zero entries; however, no matrix rows are required for simple variable bounds.
Frontline Systems’ Large-Scale LP Solver (Table 1) relies on a sparse representation of the matrix and of the LU factorization of the basis with dynamic Markowitz refactorization, yielding better memory usage and improved numerical stability on large-scale problems.

Automatic Scaling and Related Pitfalls

Earlier versions of the standard Excel Solver had no provision for automatic scaling of the coefficient matrix; they used values directly from the user’s spreadsheet. Since it is easy to rescale the objective and constraint values on the spreadsheet itself, we did not think that automatic scaling would be needed, especially for linear problems. We were wrong. Over the years, we have received many spreadsheet models from users – including business school instructors – that did not seem to solve correctly. In virtually all of these cases, the model was very poorly scaled – for example, with dollar amounts in millions for some constraints and return figures in percentages for others – yet none of these users identified scaling as a problem. It seems that in the widespread move to emphasize modeling over algorithms, such issues as scaling (still important in using software) have been de-emphasized or forgotten. To improve performance of the nonlinear solver in Excel 4.0, we added the Use Automatic Scaling check box to the Solver Options dialog box. But this dug a deeper pitfall for users with linear problems, since this automatic scaling option had no effect on the linear solver – and users often overlooked the documentation of this fact in Excel’s online Help. In Excel 97, the Use Automatic Scaling box applies to both linear and nonlinear problems. If the user checks this box and the Assume Linear Model box, the Solver rescales columns, rows, and right-hand sides to a common magnitude before beginning the simplex method. It unscales the solution values before storing them into cells on the spreadsheet.
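The idea of rescaling rows to a common magnitude can be illustrated with geometric-mean scaling, a standard technique from the literature; we do not claim this is the exact formula the Excel Solver uses, so treat the sketch as illustrative only:

```python
import math

# Hedged sketch of geometric-mean row scaling; the Excel Solver's exact
# internal scaling method is not reproduced here.

def row_scales(A):
    """One scale factor per row, chosen so the rescaled nonzero entries
    of that row cluster around magnitude 1."""
    scales = []
    for row in A:
        nz = [abs(v) for v in row if v != 0.0]
        scales.append(1.0 / math.sqrt(min(nz) * max(nz)) if nz else 1.0)
    return scales

# Dollar amounts in millions in one row, percentage returns in another:
A = [[1.0e6, 2.0e6],
     [0.01, 0.04]]
scaled = [[v * s for v in row] for row, s in zip(A, row_scales(A))]
print(scaled)  # every entry now lies between 0.5 and 2 in magnitude
```

Entries that originally spanned eight orders of magnitude end up within a factor of four of each other, which is the kind of effect a simplex code needs for numerical stability. A full implementation would scale columns and right-hand sides as well.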
With this enhancement, the simplex solver is able to handle most poorly scaled models without any extra effort by the user.

Linearity Test and Related Pitfalls

For the reasons outlined earlier, the Excel Solver asks the user to specify whether the model is linear, but it does perform a simple numerical test to check the linearity assumption for reasonableness. This linearity test gave rise to another pitfall, again for poorly scaled models. Prior to Excel 97, the Solver performed this test after it had obtained a solution using the simplex method. It verified that the solution values, obtained by recalculating the spreadsheet, satisfied the following condition:

|g_i(x*) − ∇g_i · x*| ≤ ε (1 + |g_i(x*)|)

Here ∇g_i is the function gradient, that is, the appropriate row of the LP coefficient matrix, and ε is the Precision value in the Solver Options dialog box with a default value of 10^-6. Given that the model might contain any of the hundreds of Excel built-in functions as well as user-written functions, and that the test is performed at discrete points, this test cannot be perfect; very occasionally, a model with nonlinear or even discontinuous functions will pass the linearity test. In practice, however, this linearity test almost always detects situations in which the user has accidentally set up a model that doesn’t satisfy the linearity assumption – and truly linear models will always pass the linearity test, as long as they are well scaled. Unfortunately, linear models that are poorly scaled will sometimes fail this test. Since the resulting error message is "The conditions for Assume Linear Model are not satisfied," the user who is not conscious of the effect of poor scaling may not realize that this is the problem. (The only saving grace is that very poorly scaled models, which might otherwise yield incorrect answers in the absence of automatic scaling, almost always give this error message instead.) In Excel 97, we have substantially revised the linearity test.
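A test of this general form, which compares function values recalculated from the model against the values predicted by the extracted LP coefficients, can be sketched in Python (an illustration with our own function names, not the Solver’s code; constant terms are ignored for simplicity):

```python
# Illustrative sketch of a linearity check: the function value
# recalculated from the model must match the value predicted by the
# extracted LP coefficient row, within a relative tolerance.

PRECISION = 1e-6  # the Solver's default Precision value

def passes_linearity(g, grad, x, precision=PRECISION):
    """g: problem function; grad: its extracted LP coefficient row."""
    actual = g(x)
    predicted = sum(a * v for a, v in zip(grad, x))
    return abs(actual - predicted) <= precision * (1.0 + abs(actual))

# A truly linear function passes; a nonlinear one is caught, because its
# gradient (taken at the initial point) no longer predicts its value.
print(passes_linearity(lambda x: 3*x[0] + 2*x[1], [3.0, 2.0], [10.0, 5.0]))
print(passes_linearity(lambda x: x[0] * x[1], [5.0, 10.0], [10.0, 5.0]))
```

The relative form of the tolerance is what makes scaling matter: for badly scaled functions, ordinary floating-point round-off in `actual` can exceed the tolerance and cause a genuinely linear model to fail the test.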
The Solver performs a quick check before solving the problem by verifying that the problem functions, evaluated at several multiples of the initial variable values, satisfy the above condition. If the problem fails this test, the user is warned against using the simplex method. When the Solver finds an optimal solution using the simplex method, it performs a further check: it verifies that the objective function and constraint slacks, obtained by recalculating the spreadsheet at the optimal point, match the values provided by the LP solution within the Precision value in the Solver Options dialog. As long as the user selects the Use Automatic Scaling box, so that the values in the LP matrix are well scaled internally, this test should be robust even for poorly scaled models.

Modeling Practice

Students (and instructors) who use Excel 97, with its automatic scaling and its improved linearity test, can avoid the pitfalls described earlier. We strongly encourage business school instructors to upgrade to Excel 97 as soon as possible. Schools still using Windows 3.1 can obtain an academic version of Frontline Systems’ Premium Solver for Excel 5.0 with the same enhancements, but support for this 16-bit version will be limited in the future. Still, we emphasize that, while we have used scaling methods favored in the literature [Gill et al. 1981], no automatic scaling method is perfect. It will always be possible to create examples that cause problems in spite of automatic scaling, and we suggest that instructors devote at least some time to explaining the limitations of finite precision computer arithmetic to students. Ragsdale [1997], for instance, addresses scaling briefly but effectively. The example model in Figure 5, which is available for download on Practice Online, is a poorly scaled variant of the Working Capital Management worksheet distributed with Excel.
It will yield a non-optimal solution (of all zeroes) in Excel 5.0 and 7.0, and in Excel 97 if the Use Automatic Scaling box is cleared. It yields the correct solution in Excel 97 if the user checks the Use Automatic Scaling box.

Figure 5: This spreadsheet, which can be downloaded from Practice Online as FIGURE5.XLS, is a poorly scaled model that "fools" the linearity test in earlier Excel versions, yielding the message "The conditions for Assume Linear Model are not satisfied."

Solving Nonlinear Problems

When the Assume Linear Model box in the Solver Options dialog is cleared, the Excel Solver uses the generalized reduced gradient method, as implemented in the GRG2 code [Lasdon et al. 1978], to solve the problem. Like other gradient-based methods, GRG2 is guaranteed to find a local optimum only on problems with continuously differentiable functions, and then only in the absence of numerical difficulties (such as degeneracy or ill conditioning). However, GRG2 has a reputation for robustness, compared to other nonlinear optimization methods, on difficult problems where these conditions are not fully satisfied.

Problem Representation

GRG2 requires function values and the Jacobian matrix (which is not constant for nonlinear models). The Excel Solver approximates the Jacobian matrix using finite differences, as described earlier, and re-evaluates it at the start of each major iteration.

Automatic Scaling

A poorly scaled model can cause even more problems for GRG2 than for the simplex method. The earliest version of the Excel Solver used variable and constraint values directly from the spreadsheet, but as of Excel 4.0 (released in 1992), the Solver rescales both variable and function values internally if the user checks the Use Automatic Scaling box in the Solver Options dialog box. Unlike the simplex code, which uses gradient values for scaling (as of Excel 97), the GRG2 algorithm in Excel uses typical-value scaling.
In this approach, GRG2 rescales the decision variables and problem functions by dividing by their initial values at the beginning of the solution process. (We chose this approach because our tests showed that gradient-based scaling was not very effective on typical nonlinear spreadsheet models where scaling was a problem.)

GRG2 Stopping Conditions

Like the simplex method, the GRG2 algorithm will stop when it finds an optimal solution, when the objective appears to be unbounded, when it can find no feasible solution, or when it reaches the time limit or maximum number of iterations. For nonlinear models, an "optimal solution" means that the Solver has found a local optimum where the Kuhn-Tucker conditions are satisfied to within the convergence tolerance; the message displayed is "Solver found a solution." GRG2 also stops when the current solution meets a "slow progress" test: the relative change in the objective is less than the convergence tolerance for the last five iterations. In this case, the message displayed is "Solver converged to the current solution." In previous Excel versions, the convergence tolerance was fixed at 10^-4 or 10^-5 (depending on the version) and could not be changed by the user. In Excel 97, there is a new Convergence edit box (Figure 4) that sets this tolerance. The message "Solver could not find a feasible solution" occurs when the GRG2 algorithm terminates with a positive sum of infeasibilities. This almost always indicates a truly infeasible model, but with nonlinear problems it can occur (rarely) on feasible problems if GRG2 finds a local optimum of the phase one objective (the sum of the infeasibilities) or if GRG2 simply terminates in phase one due to slow progress.
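The slow-progress test can be sketched as follows (illustrative Python with our own function names; the five-iteration window and the Excel 97 default tolerance of 10^-4 are taken from the description above, while the exact form of the relative-change test is our assumption):

```python
# Sketch of a "slow progress" stopping rule: stop when the relative
# change in the objective stays below the convergence tolerance for
# five consecutive iterations.

CONVERGENCE = 1e-4  # Excel 97 default convergence tolerance

def slow_progress(objectives, tol=CONVERGENCE, window=5):
    """objectives: objective value after each major iteration, oldest first."""
    if len(objectives) < window + 1:
        return False
    recent = objectives[-(window + 1):]
    return all(abs(cur - prev) <= tol * (1.0 + abs(prev))
               for prev, cur in zip(recent, recent[1:]))

print(slow_progress([100.0, 90.0, 80.0, 70.0, 60.0, 50.0, 40.0]))  # False
print(slow_progress([100.0] + [79.7055] * 6))                      # True
```

The first sequence is still making large steps, so the algorithm continues; the second has stalled for five iterations and would stop with "Solver converged to the current solution."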
Remedies available through the Solver Options dialog box (Figure 4) include using automatic scaling, increasing the feasibility tolerance (Precision option), decreasing the convergence tolerance to make it more difficult to terminate in phase one, trying central differences, and trying other starting points.

Non-smooth Functions

The convergence results for gradient-based methods such as GRG2 depend on differentiability of the problem functions. The spreadsheet formula language is designed to express arbitrary calculations, and users can easily create optimization models that include non-smooth functions, that is, functions with discontinuous values or first partial derivatives at one or more points. Examples of such functions are ABS, MIN and MAX, INT and ROUND, CEILING and FLOOR, and the commonly used IF, CHOOSE, and LOOKUP functions. Expressions involving relations (outside the context of Solver-recognized constraints) and such Boolean operators as AND, OR, and NOT are discontinuous at their points of transition between FALSE and TRUE values. The presence of any of these (or many other) functions in a spreadsheet does not necessarily mean that the optimization model is non-smooth. For example, an IF function whose conditional expression is independent of the decision variables and whose result expressions are smooth is itself smooth. Similar statements apply to the other functions mentioned above. Even if the problem is non-smooth, GRG2 may never encounter a point of discontinuity; this depends on the path the algorithm takes, which in turn depends on the starting point. GRG2 may simply skip over a discontinuity or may never encounter a region where discontinuities occur. Problems occur when the finite difference process (which approximates partial derivatives) spans both sides of a discontinuity, for then the estimated derivatives are likely to be very large.
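A small Python example illustrates the pitfall just described: a forward difference whose perturbation crosses an IF-style discontinuity produces an enormous, meaningless derivative estimate (the step function here is our own stand-in for a spreadsheet IF formula):

```python
# Illustration: a forward difference that spans a discontinuity yields
# a huge derivative estimate, as described in the text.

EPS = 1e-8  # the usual perturbation factor

def step(x):
    # Stand-in for a spreadsheet formula like =IF(x >= 1, 100, 0),
    # discontinuous at x = 1
    return 100.0 if x >= 1.0 else 0.0

def fd(g, x, eps=EPS):
    """Forward-difference estimate of g'(x)."""
    return (g(x + eps) - g(x)) / eps

print(fd(step, 0.5))          # 0.0: smooth region, harmless
print(fd(step, 1.0 - 5e-9))   # ~1e10: the perturbation crosses the jump
```

Near the jump, the estimated derivative is on the order of the jump height divided by eps, which is exactly the kind of wildly inflated gradient that causes GRG2 to oscillate or terminate prematurely.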
If GRG2 is converging to a local solution where the objective is non-smooth, inaccurate derivative estimates near the solution are likely to cause it to oscillate about that point and to terminate because of a small fractional change in the objective.

Modeling Practice

The path GRG2 takes and the scaling factors it uses depend on the initial values of the variables. Users should take care to start the solution process with values for the variable cells that are representative of the values expected at the optimal solution, rather than with arbitrary values such as all zeroes. The example spreadsheet in Figure 6, which is available for download on Practice Online, is an Excel version of a product mix and pricing model from Fylstra [1992]. If the model is solved with initial values of zero for all four variables, GRG2 stops immediately, declaring this point to be an "optimal solution" (in fact, this point is a Kuhn-Tucker point). With initial values that make each quantity to build and the profit per unit positive, GRG2 finds the correct optimal solution. Alternatively, if one changes the constraint that requires production to be less than or equal to demand to an equality constraint, GRG2 is able to find the correct solution even with initial values of zero, since it can solve for certain variables in terms of others.

Figure 6: This spreadsheet, available for download as FIGURE6.XLS, causes the GRG2 nonlinear solver to stop at a non-optimal solution if the initial values of all variables are 0. GRG2 finds the correct optimal solution for initial variable values that make the profits per unit positive.

We encourage users who encounter difficulty with slow progress or who receive the message "Solver converged to the current solution" to upgrade to Excel 97, which allows them to control the convergence tolerance. The example model in Figure 7, also available for download on Practice Online, is a variant of the Quick Tour worksheet distributed with Excel.
If this model is solved in Excel 97 with the default convergence tolerance of 10^-4, the Solver stops with the message "Solver converged to the current solution" and an objective value of $79,705.55, just short of the true optimum. If the convergence tolerance is tightened to 10^-5, the Solver stops with "Solver found a solution" and an objective value of $79,705.62. (In Excel 5.0 and 7.0, solving this model yields the optimal objective of $79,705.62, because the convergence tolerance is hard-wired in these versions to 10^-5.)

Figure 7: This spreadsheet, which can be downloaded from Practice Online as FIGURE7.XLS, shows how the GRG2 nonlinear solver can stop with the message "Solver converged to the current solution." With a tighter convergence tolerance, it stops at a slightly better, optimal point with the message "Solver found a solution."

GRG2 uses the value in the Precision edit box shown in Figure 4 (default 10^-6) for its feasibility tolerance. Constraints are classified as active when they are within this (absolute) tolerance of one of their bounds and are violated when their bound violation exceeds this tolerance. The default value is rather tight for nonlinear problems, and users may find that they can solve some problems with nonlinear constraints faster, or even to a better result, if they increase this value. We recommend 10^-4 for nonlinear problems but caution against using values greater than 10^-2. Users requiring high accuracy may prefer the default value. For nonlinear problems, maximum accuracy results from choosing central differences and the default feasibility tolerance. When a model is non-smooth or non-convex, we recommend trying several different starting points. If GRG2 reaches roughly the same final point from each, one can be fairly confident that this is a global solution. If not, one can choose the best of the solutions obtained. For further information on reduced gradient methods and the GRG2 solver, see Lasdon et al. [1992].
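The classification of constraint values by the feasibility tolerance, described above, can be sketched in Python for a <= constraint (illustrative code with our own names, not the Solver’s):

```python
# Sketch of feasibility-tolerance classification for a constraint
# value <= upper: "active" within the tolerance of the bound,
# "violated" beyond it, "inactive" otherwise.

PRECISION = 1e-6  # the default feasibility tolerance

def classify(value, upper, tol=PRECISION):
    if value > upper + tol:
        return "violated"
    if value >= upper - tol:
        return "active"
    return "inactive"

print(classify(9.9999997, 10.0))  # active: within 1e-6 of the bound
print(classify(10.00001, 10.0))   # violated: exceeds bound by > 1e-6
print(classify(7.5, 10.0))        # inactive: well inside the bound
```

Loosening the tolerance (say, to 10^-4 for nonlinear problems, as recommended above) widens the "active" band, so GRG2 accepts points that violate constraints slightly rather than spending iterations chasing tiny infeasibilities.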
Solving Problems with Integer Constraints

When a problem includes integer variables, the Excel Solver invokes a branch and bound (B & B) algorithm that can use either the simplex method or GRG2 to solve its sub-problems. The user indicates which of the decision variables are integer by adding constraints, such as A1:A10 = integer (or, in Excel 97, A1:A10 = binary), where A1:A10 is a range of variable cells. (One enters such constraints by selecting "int" or "bin" from the Relation list in the Add or Change Constraints dialog box.) The branch and bound algorithm starts by solving the relaxed problem (without the integer constraints) using either GRG2 or the simplex method, yielding an initial best bound for the problem including the integer constraints. The algorithm then begins branching and solving sub-problems with additional (or tighter) bounds on the integer variables. A sub-problem whose solution satisfies all of the integer constraints is a candidate for the solution of the overall problem; the candidate with the best objective value so far is saved as the incumbent. The algorithm uses the best objective of the remaining nodes to be fathomed to update the best bound. Each time the algorithm finds a new incumbent, it computes the relative difference between its objective and the current best bound, yielding an upper bound on the improvement in the objective that might be obtained by continuing the solution process:

|best bound − incumbent objective| / |incumbent objective|

If this value is less than or equal to the Tolerance edit box value (Figure 4), the algorithm stops. Some users have failed to notice that the default tolerance amount is not zero but 0.05 and have therefore concluded that the Excel Solver was not finding the correct integer solution. We chose this default value, at Microsoft’s request, to limit the time taken by nontrivial integer problems.
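The gap test can be sketched as follows (illustrative Python for the maximization case; the guard against a zero denominator is our own assumption, not a documented detail of the Solver):

```python
# Sketch of the MIP-gap stopping test: stop branching once the best
# bound can improve on the incumbent by at most the Tolerance fraction.

TOLERANCE = 0.05  # the Solver's default Tolerance value (5%)

def within_gap(incumbent_obj, best_bound, tol=TOLERANCE):
    """True if continuing could improve the incumbent by at most tol."""
    gap = abs(best_bound - incumbent_obj) / max(abs(incumbent_obj), 1e-10)
    return gap <= tol

print(within_gap(100.0, 104.0))  # True: at most 4% improvement remains
print(within_gap(100.0, 110.0))  # False: up to 10% improvement possible
```

With the default 5% tolerance, an incumbent of 100 against a best bound of 104 is accepted as final even though a slightly better integer solution may exist, which is exactly the behavior that surprises users who expect the exact optimum.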
It often happens that the branch and bound algorithm finds a reasonably good solution fairly quickly, and then spends a great deal of time finding (or verifying that it has found) the true integer optimal solution. In the standard Excel Solver, the branch and bound algorithm uses a breadth-first search that branches on the unfathomed node with the best objective. Frontline Systems’ Premium Solver products use much more elaborate strategies (Table 1). These include a depth-first search that continues until it finds an incumbent, followed by a breadth-first search; more sophisticated rules for choosing the next node to be fathomed; rules for reordering the integer variables chosen for branching; use of the dual simplex method for the sub-problems; and preprocessing and probing (P & P) strategies for binary integer variables. These improvements often dramatically reduce solution time on integer problems (Table 1).

It is possible to solve nonlinear integer problems with the Excel Solver, but users should be aware of the intrinsic limitations of this process. On a linear problem, the simplex method can conclusively determine whether each sub-problem is feasible and, if so, return the globally optimal solution to that sub-problem. On nonlinear integer problems, the GRG algorithm (or any gradient-based method) may fail to find a feasible solution for a sub-problem even though one exists, or it may return a local optimum that is not global. This also means that the best bound used by the branch and bound algorithm will be based on local optima found by GRG2, and these may not be global optima. Because of this, the branch and bound algorithm is not guaranteed to find the true integer optimum for nonlinear problems, although it will often succeed in finding a "good" integer solution.

Modeling Practice

It is important for users to understand the role of the Tolerance edit box value.
In a classroom environment, instructors may wish to have students set this value to zero, to ensure that the Solver will continue branching until it finds the optimal integer solution. Users attempting to solve nonlinear integer problems should also take careful note of the limitations cited above for the branch and bound algorithm when used with GRG2. Even small, academic-size integer problems may require a great deal of solution time with the standard Excel Solver. Here again, we recommend an upgrade to Excel 97, which will improve solution times for both linear and nonlinear sub-problems. An even better alternative is Frontline Systems’ Premium Solver for Excel 97, which offers algorithmic improvements to reduce both the number of sub-problems and the time spent on each one. An academic version of the Premium Solver is available and has proven quite popular with business school instructors.

Saving the Solution and Producing Solver Reports

When one of the Excel Solver’s optimizers returns a solution, the Solver places the solution values into the decision variable cells, recalculates the spreadsheet, and displays the Solver Results dialog box (Figure 8). From this dialog box, the user can choose to keep the optimal solution, or discard it and restore the initial values of the variables. In addition, the user can select one or more reports, which the Solver will then produce in the form of additional worksheets inserted into the current workbook.

Figure 8: The Solver Results dialog box is displayed whenever the Solver stops. It allows the user to keep the solution or restore the original values of the variable cells and produce one or more of the Solver's reports.

Assuming that the user (or a Visual Basic program controlling the Solver) decides to keep the solution, the Solver updates all of the model’s results appropriately, including the objective, the constraints, and other auxiliary calculations that depend on the decision variables.
One can use any of these model values to draw charts and graphs, update external databases, and the like, using standard Excel facilities. A Visual Basic program may also inspect the values and may further manipulate them or store them for later use. For example, it is an easy classroom exercise to generate and graph the efficient frontier in a portfolio optimization problem in finance.

The standard Excel Solver can produce three types of reports: the Answer Report (Figure 9), the Sensitivity Report (Figure 10), and the Limits Report (Figure 11). The Premium Solver products (Table 1) can also produce a Linearity Report and a Feasibility Report. The Linearity Report highlights the constraints involved when an attempt to solve with the simplex method fails the linearity test described earlier. The Feasibility Report highlights an "irreducible inconsistent system" of constraints [Chinneck 1997] when an attempt to solve a linear problem yields no feasible solution.

Figure 10: The Sensitivity Report shows, for linear problems, reduced costs for the variables and shadow prices for the constraints, as well as the ranges of validity of these dual values.

Figure 11: The Limits Report shows the objective value obtained by maximizing and minimizing each variable in turn while holding the other variables' values constant.

The Answer Report provides the initial and final values of the variables and the objective, and optimal values for each constraint’s left-hand side, as well as slack values for non-binding constraints. The Sensitivity Report provides final solution values and dual values for variables and constraints in both linear and nonlinear models. For linear models, the dual values are labeled "reduced costs" and "shadow prices"; their values and ranges of validity are included in the report. For nonlinear models, the dual values are valid only for small changes about the optimal point, and they are labeled "reduced gradients" and "Lagrange multipliers."
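The efficient-frontier classroom exercise mentioned above can be sketched outside the spreadsheet as well. The following toy two-asset example (all numbers are invented for illustration) traces the attainable risk/return curve; a spreadsheet version would instead re-run the Solver for a series of target returns and record the optimal risk each time:

```python
import math

# Hypothetical two-asset data, invented for illustration only.
MU = (0.08, 0.12)       # expected returns
SIGMA = (0.15, 0.25)    # standard deviations
RHO = 0.3               # correlation between the two assets

def frontier(n=21):
    """Trace the attainable (risk, return) curve by sweeping the
    weight of the first asset from 0 to 1."""
    points = []
    for i in range(n):
        w = i / (n - 1)
        ret = w * MU[0] + (1 - w) * MU[1]
        var = (w * SIGMA[0]) ** 2 + ((1 - w) * SIGMA[1]) ** 2 \
              + 2 * w * (1 - w) * RHO * SIGMA[0] * SIGMA[1]
        points.append((math.sqrt(var), ret))
    return points
```

The resulting list of (risk, return) points is exactly the kind of data one would chart with standard Excel facilities, as described above.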
The Solver creates the Limits Report by rerunning the optimizer with each decision variable selected in turn as the objective, both maximizing and minimizing, while holding all other variables fixed at their optimal values. The report shows the resulting lower limit and upper limit for each variable and the corresponding value of the original objective function. OR/MS professionals are sometimes puzzled by the inclusion of this report, but Microsoft specified it for competitive reasons, since the former Lotus-developed solver in 1-2-3 featured a similar report.

Report Pitfalls

There are two pitfalls that users sometimes encounter with these reports. The more common problem arises from the fact that the report spreadsheets are constructed so that each cell "inherits" its formatting from the corresponding cell in the user’s model. This feature, which Microsoft specified, has the advantage that the report values are automatically formatted with dollars and cents, percent symbols, scientific notation, or whatever custom formatting was used in the model. The pitfall arises when users format their models to display variable and constraint values rounded to integers (say), which causes the corresponding dual values to be formatted as integers also. Not realizing this, some users think that the dual values are wrong. However, the Solver stores the dual values to full precision on the report spreadsheet; one can inspect each value by selecting it with the mouse, and one can easily reformat the values to whatever precision one desires.

The second pitfall relates only to the Sensitivity Report. The Excel Solver recognizes constraints that are simple bounds on the variables and passes them in this form to both the simplex and GRG2 optimizers, where they are handled more efficiently than if they were included as general constraints. If one of these constraints is binding at the solution, this actually means that the corresponding decision variable has been driven to its bound.
The dual value for this binding constraint will appear as a reduced cost for the decision variable, rather than as a shadow price for the constraint; it will be nonzero if the variable was nonbasic at the solution. (In fact, constraints that are simple bounds on the variables are never listed in the Constraints section of the Sensitivity Report.)

Modeling Practice

We encourage modelers to take advantage of the fact that the reports are spreadsheets. Not only can they view them, but they can also easily modify them, use them to draw charts and graphs, transfer them to other programs, or inspect them using Visual Basic programs. Since the reports show a text label as well as a cell reference for each variable and constraint, users can easily design their spreadsheet models so that meaningful labels appear on the reports. The algorithm for constructing these labels is very simple: starting from the variable or constraint cell on the model worksheet, the Solver looks left and up for the first text label in the same row and the first text label in the same column. It then concatenates these two labels to form the label that appears for that cell in the report.

Users should avoid the pitfalls cited above. Because the default formatting for cells is general, report values will appear to full precision unless the user defines custom formatting for the variable or constraint cells. If one wants such formatting, one must simply bear in mind its effect on the reports. To see the dual values for simple variable bounds in the Constraints section of the Sensitivity Report, one can modify the constraint right-hand side to be (say) the formula 0+5 rather than the constant 5. In this case the Solver will not recognize the constraint as a simple variable bound.
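The report-label construction rule described above can be sketched as follows. Here `grid` is a toy list-of-lists stand-in for a worksheet, and the order in which the two labels are concatenated is an assumption:

```python
def report_label(grid, row, col):
    """Build a report label for cell (row, col) the way the Solver
    does: take the first text label to the left in the same row and
    the first text label above in the same column, then concatenate
    them.  (Concatenation order is an assumption for this sketch.)"""
    def first_text(cells):
        for cell in cells:
            if isinstance(cell, str) and cell:
                return cell
        return ""
    left = first_text(grid[row][col - 1::-1] if col else [])
    above = first_text(grid[r][col] for r in range(row - 1, -1, -1))
    return (left + " " + above).strip()
```

For a worksheet with row labels in column A and month names in row 1, a variable cell would thus pick up a label such as "Plants Feb", which is what makes well-labeled models produce readable reports.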
In Frontline Systems’ Premium Solver products, we changed this report so that dual values always appear in the Constraints section of the report, even for constraints that are recognized as simple variable bounds, making this workaround unnecessary.

Use of the Solver in Industry

We have heard many opinions about use of the Excel Solver from OR/MS professionals. Many view spreadsheet solvers as suitable only for quite small problems, or only for educational rather than industrial use. Some wonder how such tools can be successfully employed by individuals with little if any formal training in OR/MS methods. Some, seeing little usage of the Excel Solver among their colleagues, think that the Solver is widely distributed but not very widely used. We do not have enough systematic data to project the actual number of users of the Excel Solver among the 30-million-plus copies of Microsoft Office and Excel distributed to date. But based on our contacts with users and the data we do have, we believe that OR/MS professionals are seeing only the proverbial tip of the iceberg, and that use of the Excel Solver is far more widespread than their comments would suggest.

Problem Size

Having worked with commercial users for more than five years, we are very confident that spreadsheet solvers are capable of solving the majority of industrial LP models, as well as many integer and nonlinear models. We base this belief on our own experience and on information about problem size gained in discussions with other vendors of (non-spreadsheet-based) optimization software. In fact, we believe that the median-size industrial LP model is smaller than many OR/MS professionals might expect, possibly as small as 2,000 rows and columns. Spreadsheet optimizers can readily handle problems well above this size.
Model Developers

OR/MS professionals usually create optimization models in situations where the modeling task is challenging enough and the economic value of the problem is large enough to justify expert consulting help. These problems are often much larger than our median size estimate. But this is a tiny part of the spectrum of optimization applications that we see. Many spreadsheet models are straightforward, successful adaptations of classic forms, such as transportation, blending, multi-period inventory, and portfolio-optimization problems. These models are created by functional managers who base them on the examples supplied with Excel or found in various books (indeed, such users often seek out the textbooks that we feature on the Frontline Systems Web site). In other cases, these spreadsheet optimization models are created by outside consultants with industry expertise, rather than OR/MS expertise per se.

OR/MS Training

Every day, we see successful Solver applications created by spreadsheet users with little or no formal OR/MS training. Users of Frontline Systems’ Premium Solver products are typically solving LP models in the range of several hundred to a few thousand (some as large as 10,000) decision variables and constraints, and integer and nonlinear problems of somewhat smaller size. Although this group is self-selected for applications more ambitious than those built with the standard Excel Solver, we estimate that 90 to 95 percent of these users have no affiliation with the OR/MS community. They are clearly "dispersed practitioners" [Geoffrion 1991]. Yet this is just another layer of the iceberg. A much larger number of Excel Solver users visit Frontline Systems’ World Wide Web site (www.frontsys.com), which receives more than ten thousand "hits" per day.
Survey data on these users indicates that a surprising number of Solver applications are below 200 variables in size, but are of sufficient value that their developers plan to distribute copies within their organizations or commercially. This survey data and our experience in technical support lead us to believe that this class of applications is at least five and perhaps ten times larger than the class of applications above 200 variables. Still deeper in the iceberg are the smaller spreadsheet solver applications that are developed for use within only one department or office and not for redistribution. These users may well find that the standard Excel Solver, Microsoft technical support, and the variety of trade books about Excel meet all of their needs. We believe that this group is the largest of all, but we are unable to estimate its size. In any case, we are reasonably certain that OR/MS professionals collectively are involved in, at most, a fraction of one percent of the Excel Solver applications actually in use.

Economic Value

Small optimization models may yield high economic value. In one case, a Fortune 50 company (which prefers to remain anonymous) used the standard Excel Solver to build a purchasing logistics model, used in negotiating contracts for over a billion dollars' worth of a single commodity. This model, whose size was a function of the number of supplier locations and company plants, fit within the 200-variable limit of the standard Solver. Savings from use of this model amounted to nearly $3 million in the first round of purchasing negotiations, and the company estimates future savings of $7 million per year. A major difference from the OR/MS successes of the 1970s was the time and effort required to formulate, test, and gain acceptance of this model.
One individual, with no formal OR/MS training, completed the entire project in three person-months, with about one month spent on the actual optimization model. The resulting spreadsheet is operated directly by the senior vice-president of purchasing. The return on investment in such application projects is extremely high.

Use of the Solver in Education

Spreadsheets have become the preferred tool for teaching quantitative methods to undergraduate and graduate business students. Their use is strongly endorsed in a recent report of the operating subcommittee of the INFORMS Business School Education Task Force [Jordan et al. 1997]. In July 1994, the Presidents of ORSA and TIMS, Dick Larson and Gary Lilien, chartered the INFORMS Business School Education Task Force in response to the decline of OR/MS content in business education that began in the early 1990s. The task force’s survey of business school OR/MS faculty (306 responses) revealed that many faculty members planned to increase their use of spreadsheets (Table 2) in order to strengthen the role of OR/MS in their MBA programs.

Table 2. Two questions and the most often selected responses from the INFORMS Business School Education Task Force's 1997 survey of business school OR/MS faculty (306 respondents).

Which of the following "fixes" have the highest potential to strengthen the role of MS/OR in your particular school of business?
  More use of cases and real-world examples                            60%
  More emphasis on modeling skills and numeracy, less on algorithms    55%
  Better math background for students                                  49%
  Use of spreadsheets instead of special-purpose OR/MS software        39%

What changes are you planning to make in your MS/OR course in the near future?
  More emphasis on modeling and less on the teaching of algorithms     55%
  Increasing the role of the computer in the course                    43%
  More use of spreadsheets in the course                               37%
  More case analyses                                                   34%

The subcommittee also conducted structured telephone interviews with program administrators at 21 of the leading MBA programs in the US. One of the questions they asked was "What particular sets of quantitative skills are in greatest demand from employers of your graduates?" The interpretation of responses was: "Demand for particular ‘hard’ OR/MS skills is very low. Where technique is needed, it involves statistics more than OR/MS. There is demand for general skill in model formulation and interpretation and in quantitative reasoning." A related question was "What level of competence is appropriate for MBAs?" and it had the summarized response "MBAs need to be able to use spreadsheets and statistical software at the level of the ‘educated consumer’."

The authors of the report conclude that OR/MS courses in business schools should focus on common, realistic business situations, acknowledge important nonmathematical issues, use spreadsheets, and emphasize model formulation and assessment more than model structuring. Recommendations include the following: embed analytical material strongly in a business context; use spreadsheets as a delivery vehicle for OR/MS algorithms; and stress the development of general modeling skills. There are now strong trends in these directions, most of which began well before the INFORMS report appeared. They are most prominent in the form of new textbooks for the basic OR/MS course for undergraduate or graduate business students. Such texts include those of Ragsdale [1997], Winston and Albright [1997], Hesse [1997], a forthcoming revision of the Eppen and Gould text, and a forthcoming book by Sam Savage. The authors use spreadsheet models as the focus around which they base all discussion and examples.
All use the Excel Solver for optimization, and several use spreadsheet add-ins for decision tree analysis and Monte Carlo simulation. All include a disk containing a complete set of spreadsheet files, bundled with the text and intended for student use, and an instructor’s disk or CD-ROM containing spreadsheets for each problem and case. Some contain a shell version of the instructor spreadsheets, in which the numbers and formulas are omitted. These greatly ease the instructor’s task of grading many spreadsheets, especially when he or she uses the grading macros that are provided for some problems. In the introductions to these texts, the authors advocate a course based on learning modeling by doing examples. They include many traditional examples from the operations management area of business: production and inventory planning, distribution, inventory models, and so forth. In addition, they include problems from finance (portfolio selection, options pricing, cash management) and marketing (sales force allocation). Problems in finance and marketing are often of more interest to MBA students than the traditional operations examples. Outside the traditional OR/MS course, new texts are also appearing with a focus on spreadsheets. For example, the marketing textbook by Lilien and Rangaswamy [1997] includes 17 models in Excel, most using the Solver, controlled by the authors’ programs written in Visual Basic for Applications. For more on configuring a successful OR/MS course for business students, see the articles by Bodily [1996] and Powell [1995], and many other articles in the Teachers forum section of Interfaces.

Conclusions and Directions for Future Work

We designed the Excel Solver to "make optimization a feature of spreadsheets."
Whereas OR/MS professionals tend to see it as simply another tool for doing optimization, managers in industry tend to see it as an extension of spreadsheet technology that enables them to solve resource-allocation problems in a new way, in their own work groups, without outside help. Our most important direction for future work is to extend the range of optimization problems that managers can solve without special OR/MS training or outside help.

Classical linear and smooth nonlinear functions are too restrictive for many of the problems our users want to solve. The use of integer variables and special constraints to express such constructs as fixed charges and either-or conditions is unnecessarily complex for users; familiar spreadsheet functions such as IF, CHOOSE, and LOOKUP (which may depend on the variables) could be used to express these concepts directly. In the future, we would like to support the creation of optimization models using as much of the full power of the spreadsheet formula language as possible. To do this, we expect to perform more analysis and transformation of the spreadsheet formulas, obtaining the Jacobian matrix through a combination of automatic differentiation of the most common operators and functions and the selective use of finite differences for others. We are also considering approaches to global optimization, as well as heuristic and algorithmic methods that yield good solutions that may not be provably optimal (for example, clustering methods, genetic algorithms, and simulated annealing), since our users have clearly indicated their interest in such methods.

Spreadsheets such as Excel have become so ubiquitous that they serve as a kind of lingua franca for quantitative models, understood by nearly every decision maker in industry, government, and education. Because of this universality, spreadsheet software has become an excellent delivery vehicle for such OR/MS techniques as optimization, as the Excel Solver clearly demonstrates.
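The automatic differentiation mentioned above can be illustrated generically with a textbook dual-number sketch (this is not the Solver's implementation; it simply shows how exact derivatives can be propagated alongside function values, with no finite-difference error):

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    each value carries its derivative with respect to the input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact derivative of f at x for formulas built from + and *."""
    return f(Dual(x, 1.0)).der
```

Applying the same idea operator by operator to a spreadsheet formula tree yields one column of the Jacobian per input variable, which is the combination of techniques described above.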
We encourage OR/MS professionals to gain experience with these tools and explore the world of spreadsheet-based problem solving that continues to grow outside the traditional boundaries of the field. We also encourage OR/MS professionals to communicate with Microsoft and with Frontline Systems about their desires for the Excel Solver. Email is the preferred method: Microsoft welcomes feedback on the Solver and other Excel features sent to xlwish@microsoft.com, while Frontline Systems welcomes feedback sent to info@frontsys.com. By making their voices heard, OR/MS professionals can influence the future direction of software such as the Excel Solver.

References

Bazaraa, M.; Sherali, H.; and Shetty, C. M. 1993, Nonlinear Programming: Theory and Algorithms, John Wiley and Sons, New York, New York.
Bodily, S. 1996, "Teaching MBA quantitative business analysis with cases," Interfaces, Vol. 26, No. 6, pp. 132-149.
Brooke, A.; Kendrick, D.; and Meeraus, A. 1992, GAMS: A User's Guide, Boyd and Fraser, Danvers, Massachusetts.
Chinneck, J. W. 1997, "Finding a useful subset of constraints for analysis in an infeasible linear program," INFORMS Journal on Computing, Vol. 9, No. 2, pp. 164-174.
Conway, D. and Ragsdale, C., "Modeling optimization problems in the unstructured world of spreadsheets," Omega.
Enfin Software Corporation 1988, Optimal Solutions User Manual.
Eppen, G. D.; Gould, F.; Schmidt, C.; Moore, J.; and Weatherford, L. 1998, Introductory Management Science: Decision Modeling with Spreadsheets, fifth edition, Prentice Hall, Englewood Cliffs, New Jersey.
Fourer, R.; Gay, D. M.; and Kernighan, B. W. 1993, AMPL: A Modeling Language for Mathematical Programming, Duxbury Press, Pacific Grove, California.
Frontline Systems Inc. 1990, What-If Solver User Guide.
Frontline Systems Inc. 1994, Solver User Guide: Premium, Quadratic, and Large-Scale LP Solvers.
Fylstra, D. 1992, The Student Edition of What-If Solver, Addison-Wesley Longman, Reading, Massachusetts.
Geoffrion, A. M.
1991, "Forces, trends, and opportunities in Management Science and Operations Research," Operations Research, Vol. 4, No. 3, pp. 423-445.
Gill, P. E.; Murray, W.; and Wright, M. H. 1981, Practical Optimization, Academic Press, San Diego, California.
Griewank, A. and Corliss, G. F. 1991, Automatic Differentiation of Algorithms: Theory, Implementation, and Application, SIAM Press, Philadelphia, Pennsylvania.
Hesse, R. 1997, Managerial Spreadsheet Modeling and Analysis, Richard D. Irwin, Burr Ridge, Illinois.
Jordan, E.; Lasdon, L.; Lenard, M.; Moore, J.; Powell, S.; and Willemain, T. 1997, "OR/MS and MBA’s - Mediating the mismatches," OR/MS Today, February 1997, pp. 36-41.
Lasdon, L. S.; Waren, A. D.; Jain, A.; and Ratner, M. 1978, "Design and testing of a generalized reduced gradient code for nonlinear programming," ACM Transactions on Mathematical Software, Vol. 4, No. 1, pp. 34-49.
Lasdon, L. S. and Smith, S. 1992, "Solving large sparse nonlinear programs using GRG," ORSA Journal on Computing, Vol. 4, No. 1, pp. 2-15.
Lilien, G. and Rangaswamy, A. 1997, Marketing Engineering: Computer-Assisted Marketing Analysis and Planning, Addison-Wesley Longman, Reading, Massachusetts.
Lotus Development Corp. 1990, 1-2-3/G User Guide.
Ng, E. and Char, B. W. 1979, "Gradient and Jacobian computation for numerical applications," Proceedings of the 1979 Macsyma User's Conference, Washington, DC, pp. 604-621.
Person, R. 1997, Using Microsoft Excel 97, Que Corp./Macmillan Computer Publishing, Indianapolis, Indiana.
Powell, S. G. 1995, "Teaching the art of modeling to MBA students," Interfaces, Vol. 25, No. 3, pp. 88-94.
Ragsdale, C. T. 1997, Spreadsheet Modeling and Decision Analysis, second edition, South-Western Publishing, Cambridge, Massachusetts.
Savage, S. L. 1985, What's Best! User Manual, General Optimization Inc., Chicago, Illinois.
Savage, S. L. 1997, INSIGHT Business Analysis Tools for Excel, Duxbury Press, Pacific Grove, California.
Winston, W. L. and Albright, S. C.
1997, Practical Management Science: Spreadsheet Modeling and Applications, Duxbury Press, Pacific Grove, California.
{"url":"http://www.utexas.edu/courses/lasdon/design3.htm","timestamp":"2014-04-19T08:05:44Z","content_type":null,"content_length":"82499","record_id":"<urn:uuid:ad0cf008-eb03-4f94-9ae5-70a42e80c195>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Naïve computational type theory - In Proc. 11th Int. Conf. on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2004), Lecture Notes in Computer Science , 2005 "... To produce a program guaranteed to satisfy a given specification one can synthesize it from a formal constructive proof that a computation satisfying that specification exists. This process is particularly effective if the specifications are written in a high-level language that makes it easy for de ..." Cited by 7 (4 self) Add to MetaCart To produce a program guaranteed to satisfy a given specification one can synthesize it from a formal constructive proof that a computation satisfying that specification exists. This process is particularly effective if the specifications are written in a high-level language that makes it easy for designers to specify their goals. We consider a high-level specification language that results from adding knowledge to a fragment of Nuprl specifically tailored for specifying distributed protocols, called event theory. We then show how high-level knowledge-based programs can be synthesized from the knowledge-based specifications using a proof development system such as Nuprl. Methods of Halpern and Zuck [1992] then apply to convert these knowledge-based protocols to ordinary protocols. These methods can be expressed as heuristic transformation tactics in Nuprl. 1 - Knowledge-Based Systems 15: 265–273 Joshi, A.K. (1983) Varieties of Cooperative Responses in Question-Answer Systems , 2005 "... The paper develops a semantics for natural language interrogatives which identifies questions— the denotations of interrogatives—with propositional abstracts. The paper argues that a theory of Questions as Propositional Abstracts (QPA), is a simple, transparently implementable theory that has signif ..." 
Cited by 1 (0 self) Add to MetaCart The paper develops a semantics for natural language interrogatives which identifies questions— the denotations of interrogatives—with propositional abstracts. The paper argues that a theory of Questions as Propositional Abstracts (QPA), is a simple, transparently implementable theory that has significant empirical coverage. However, until recently QPA has been abandoned in formal semantic treatments of questions, due to a number of significant problems QPA encountered when formulated within the type system of Montague Semantics. In recent work, Ginzburg and Sag provided a a situation theoretic implementation of QPA that succeeded in overcoming cerain of the original problems for QPA. However, Ginzburg and Sag’s proposal relied on a special purpose account of λ-abstraction, raising the question to what extent QPA can be sustained using standard notions of abstraction. In this paper such doubts are allayed by implementing QPA in a version of Type Theory that provides record types. These latter allow one to develop notions of simultaneous/vacuous abstraction with restrictions and an ontology with various ‘informational entities’. Moreover, the intrinsic polymorphism of this theory plays a crucial role in enabling the definition of a general type for questions, one of the main stumbling blocks for earlier versions of QPA. 1 - in Computational Type Theory Diploma thesis, Institut für Informatik, Universität Potsdam , 2009 "... Abstract. We present a hybrid proof calculus λµPRL that combines the propositional fragment of computational type theory with classical reasoning rules from the λµ-calculi. The calculus supports the top-down development of proofs as well as the extraction of proof terms in a functional programming l ..." Cited by 1 (0 self) Add to MetaCart Abstract. We present a hybrid proof calculus λµPRL that combines the propositional fragment of computational type theory with classical reasoning rules from the λµ-calculi. 
The calculus supports the top-down development of proofs as well as the extraction of proof terms in a functional programming language extended by a nonconstructive binding operator. It enables a user to employ a mix of constructive and classical reasoning techniques and to extract algorithms from proofs of specification theorems that are fully executable if classical arguments occur only in proof parts related to the validation of the algorithm. We prove the calculus sound and complete for classical propositional logic, introduce the concept of µ-safe terms to identify proof terms corresponding to constructive proofs and show that the restriction of λµPRL to µ-safe proof terms is sound and complete for intuitionistic propositional logic. We also show that an extension of λµPRL to arithmetical and first-order expressions is isomorphic to Murthy’s calculus P ROGK. , 2006 "... An efficient proof assistant uses a wide range of decision procedures, including automatic verification of validity of arithmetical formulas with linear terms. Since the final product of a proof assistant is a formalized and verified proof, it prompts an additional task of building proofs of formula ..." Add to MetaCart An efficient proof assistant uses a wide range of decision procedures, including automatic verification of validity of arithmetical formulas with linear terms. Since the final product of a proof assistant is a formalized and verified proof, it prompts an additional task of building proofs of formulas, which validity is established by such a decision procedure. We present an implementation of several decision procedures for arithmetical formulas with linear terms in the MetaPRL proof assistant in a way that provides formal proofs of formulas found valid by those procedures. We also present an implementation of a theorem prover for the logic of justified common knowledge S4 J n introduced in [Artemov, 2004]. 
This system captures the notion of justified common knowledge, which is free of some of the deficiencies of the usual common knowledge operator, and is yet sufficient for the analysis of epistemic problems where common knowledge has been traditionally applied. In particular, S4Jn enjoys cut-elimination, which introduces the possibility of automatic proof search in the logic of common knowledge.
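Decision procedures for linear arithmetic of the kind mentioned above can be illustrated with a toy example. The following is a generic Fourier–Motzkin satisfiability check over the rationals; it is an illustrative sketch only, not the MetaPRL implementation (which additionally has to produce formal proofs of the formulas it validates):

```python
from fractions import Fraction

def fm_satisfiable(ineqs, nvars):
    """Decide satisfiability over the rationals of a conjunction of linear
    inequalities sum(c[i]*x[i]) <= b, each given as a (coeffs, b) pair,
    by Fourier-Motzkin variable elimination."""
    ineqs = [([Fraction(c) for c in cs], Fraction(b)) for cs, b in ineqs]
    for j in range(nvars):
        pos = [(cs, b) for cs, b in ineqs if cs[j] > 0]   # upper bounds on x_j
        neg = [(cs, b) for cs, b in ineqs if cs[j] < 0]   # lower bounds on x_j
        rest = [(cs, b) for cs, b in ineqs if cs[j] == 0]
        # Combine each upper bound with each lower bound so that x_j cancels.
        for cp, bp in pos:
            for cn, bn in neg:
                lam, mu = -cn[j], cp[j]  # both positive scaling factors
                cs = [lam * a + mu * c for a, c in zip(cp, cn)]
                rest.append((cs, lam * bp + mu * bn))
        ineqs = rest
    # Only constant constraints of the form 0 <= b remain.
    return all(b >= 0 for _, b in ineqs)

# x >= 2 and x <= 1 is unsatisfiable; x >= 0 and x <= 1 is satisfiable.
print(fm_satisfiable([([-1], -2), ([1], 1)], 1))  # False
print(fm_satisfiable([([-1], 0), ([1], 1)], 1))   # True
```

Exact `Fraction` arithmetic avoids the rounding issues a floating-point version would have; the procedure terminates because each pass removes one variable.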
Theory of Non-linear Waves in Multi-dimensions: with Special Reference to Surface Water Waves
Prasad, Phoolan and Ravindran, Renuka (1977) Theory of Non-linear Waves in Multi-dimensions: with Special Reference to Surface Water Waves. In: Institute of Mathematics and its Applications, 20 (1). pp. 9-20.
Surface water waves are "modal" waves in which the "physical space" (t, x, y, z) is the product of a propagation space (t, x, y) and a cross space, the z-axis in the vertical direction. We have derived a new set of equations for long waves in shallow water in the propagation space. When the ratio of the amplitude of the disturbance to the depth of the water is small, these equations reduce to the equations derived by Whitham (1967) by the variational principle. We have then derived a single equation in (t, x, y)-space which is a generalization of the fourth-order Boussinesq equation for one-dimensional waves. In the neighbourhood of a wave front, this equation reduces to the multidimensional generalization of the KdV equation derived by Shen & Keller (1973). We have also included a systematic discussion of the orders of the various non-dimensional parameters. This is followed by a presentation of a general theory of approximating a system of quasi-linear equations following one of the modes. When we apply this general method to the surface water wave equations in the propagation space, we get the Shen-Keller equation.
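For reference, the classical one-dimensional KdV equation that the Shen-Keller equation generalizes can be written, in one common normalization (coefficient conventions vary between authors), as:

```latex
\frac{\partial u}{\partial t} + 6\,u\,\frac{\partial u}{\partial x} + \frac{\partial^{3} u}{\partial x^{3}} = 0
```

Here $u(x,t)$ is the wave amplitude; the nonlinear term and the third-derivative dispersive term balance to permit solitary-wave solutions.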
Items where Subject is "D - G > Group theory and generalizations"
Number of items at this level: 3.

Heath-Brown, D. R., Praeger, C. E. and Shalev, A. (2005) Permutation groups, simple groups and sieve methods. Israel Journal of Mathematics, 148. pp. 347-375.

Neumann, P. M. (1966) A study of some finite permutation groups. PhD thesis, University of Oxford.

Wharton, Elizabeth (2006) The model theory of certain infinite soluble groups. PhD thesis, University of Oxford.
Potentially Useful Stuff

January 29, 2014 - Voronoi tessellation on the surface of a sphere (python code)
Today I needed to perform a Voronoi tessellation. If I have a set of points on a surface, this is the way of splitting the surface up into areas that are closest to each of the points. Like this: The blue points are the set of points I started with, and the black lines show the edges of the Voronoi tessellation. Doing a planar tessellation is quite simple, but I wanted to do it on the surface of a sphere. It's conceptually quite simple, but the algorithm was really annoying to debug. So to save other people the same frustrations, I thought I'd post my python class.

September 25, 2013 - Rotation matrix from one vector to another in n-dimensions
Sometimes you need to find a rotation matrix that rotates one vector to face in the direction of another. Here's some code to do it for vectors of arbitrary dimension. The code is at the bottom of the post.

June 24, 2013 - Measure Theoretic Probability for Dummies: Part I
Nothing makes me empathise more with those struggling with probability theory than reading things like this on Wikipedia: "Let (Ω, F, P) be a measure space with P(Ω) = 1. Then (Ω, F, P) is a probability space, with sample space Ω, event space F and probability measure P." This is written so that only the people who already know what it is saying can understand it. The only possible value of this sentence would be to someone who managed to study measure theory without being exposed to its most widespread application; in other words: no one! Whilst the attitude this, and so many Wikipedia pages, display encourages people to be precise in a way that mathematicians cherish, it also alienates a lot of perfectly capable, intelligent people who just run out of patience in the face of the relentless influx of oblique statements.
Personally, I think that understanding probability spaces is very important, but for reasons including those I mention above, most people find the measure-theoretic formalisation daunting. Here I have tried to outline the most widely used formalisation, which has turned out to be far more work than I expected… read more »

June 1, 2013 - Friston's Free Energy for Dummies
People always want an explanation of Friston's Free Energy that doesn't have any maths. This is quite a challenge, but I hope I have managed to produce something comprehensible. This is basically a summary of Friston's Entropy paper (available here). A friend of jellymatter was instrumental in its production, and for this reason I am fairly confident that my summary is going in the right direction, even if I have not emphasised exactly the same things as Friston. I've made a point of writing this without any maths, and I have highlighted what I consider to be the main assumptions of the paper and marked them with a P.

July 1, 2012 - Visualizing the mutual information and an introduction to information geometry
For a while now I have had an interest in information geometry. The maxims that geometry is intuitive maths and information theory is intuitive statistics seem pretty fair to me, so it's quite surprising to find a lack of easy-to-understand introductions to information geometry. This is my first attempt; the idea is to get a geometric understanding of the mutual information and to introduce a few select concepts from information geometry.

March 13, 2012 - Printing .PDF stickynotes in linux
Not the usual kind of topic, but this needs to go out into the wide world.
How to print the annotations that sometimes one might get on a pdf file. First install or upgrade Adobe Reader to 9.0:

sudo apt-get install acroread
sudo apt-get upgrade acroread

Then back up and open the following file in your home directory: "~/.adobe/Acrobat/9.0/Preferences/reader_prefs"

cd ~/.adobe/Acrobat/9.0/Preferences/
cp reader_prefs reader_prefs.backup
gedit reader_prefs

gedit will complain about the encoding, but ignore it and click "edit anyway" (we have a backup if anything goes badly wrong). Find the bit where it says:

/printCommentPopups [/b false]

and change "false" to "true", so it looks like:

/printCommentPopups [/b true]

Save the file. Now you can just open up your file in Adobe Reader (acroread filename.pdf) and print, making sure to select the "Documents and Markups" option in the "Comments and Forms" combo box in the print dialogue.

January 4, 2012 - A secret message from another dimension
We've touched on the difference between chaos and randomness before. One strange property of chaotic systems is that they are able to synchronise to each other, so that in spite of their intrinsic tendency to vary wildly, a chaotic system can (actually quite easily) be persuaded to match the behaviour of another chaotic system. As this post will show, it is possible to use this property for a kind of secret message transmission.

December 1, 2011 - Faster JSON in Python
Retracted due to my massive fuckwittery.

March 31, 2011 - Drawing Confidence Ellipses and Ellipsoids
I've seen some really bad methods for drawing confidence ellipsoids recently; they all seem to make it really complicated and confusing (and specific). So I thought I would show how to calculate points on an ellipse corresponding to a covariance matrix – this method works for any number of dimensions without any need to change it.
For all those that don’t care why, the method to generate the points of an ellipsoid is as follows: 1) make a unit n-sphere (which for 2D is a circle with radius 1), call these points X: If it is an elipse you want, make a matrix with columns of $\sin(\theta)$ and $\cos(\theta)$ for some incrementing $\theta$ values between $0$ and $2\pi$(=开) 2) apply the following linear transformation to get the points of your ellipsoid (Y): $Y = M + kC(\Sigma)X$ where M is the vector of the means (center of the ellipsoid) and $\Sigma$ is the covariance matrix. C represents the Cholesky Decomposition, sort of a matrix square root. k is the number of standard deviations at which one wishes to draw the ellipse The Cholesky decomposition can be accessed as, “numpy.linalg.cholesky” in Python, “Cholesky” in R (matrix package), “chol” in MATLAB and “spotrf” (amongst others, I think) in LAPACK For those who care, here is why this works… read more »
A Book With a Theory of Everything? There's an oft-repeated story that when Stephen Hawking was writing "A Brief History of Time," he was told that every equation in the book would cut his readership in half. If there were any truth to this counsel, Roger Penrose's "The Road to Reality: A Complete Guide to the Laws of the Universe," his recent 1,100-page behemoth of a book, should attract a half dozen readers at most. It's an enormous equation-packed excursion through modern mathematics and physics that attempts, quixotically perhaps, to answer and really explain "What Laws Govern Our Universe?" Scattered about this impressive book are informal expository sections, but Penrose's focus is on the facts and theories of modern physics and the mathematical techniques needed to arrive at them. He doesn't skimp on the details, which, for different readers, is the book's strength and its weakness. Parts of it, in fact, seem closer in tone to a text in mathematical physics than to a book on popular science. An emeritus professor at Oxford, Penrose is a mathematician and physicist renowned for his work in many areas. In the 1960s he and Hawking did seminal research on "singularities" and black holes in general relativity theory. He also discovered what have come to be called Penrose tiles, a pair of four-sided polygons that can cover the plane in a non-periodic way. And about a decade ago he wrote "The Emperor's New Mind," in which he argued that "artificial intelligence" was a bit of a crock and that significant scientific advances would be needed before we could begin to understand consciousness.

Mathematical Preliminaries, Fractions to Fiber Bundles

The first 400 pages of "The Road to Reality" sketch the mathematics needed to understand the physics of the following 700 pages.
Like many mathematicians, Penrose is an avowed Platonist who believes that mathematical entities such as pi, infinite cardinal numbers, and the Mandelbrot set are simply "out there" and have an objective existence independent of us. Developing his mathematical philosophy a bit with some interesting speculations about the relations between the mathematical, physical, and mental worlds (but never descending to sappy theology), he very soon gets into the mathematical nitty-gritty. He expounds on Dedekind cuts, conformal mappings, Riemann surfaces, Fourier transforms, Grassmann products, tensors, Lie algebras, symmetry groups, covariant derivatives, and fiber bundles among many other notions. As suggested, the level of exposition and the topics covered make me wonder about the intended audience. Penrose writes that he'd like the book to be accessible to those who struggled with fractions in school, but this seems an almost psychotically optimistic hope. This is especially so because Penrose's approach to so many topics is so clever and novel. Another problem is that he doesn't generally proceed from the concrete to the theoretical, but more often in the other direction. (For the mathematicians: He introduces abstract 1-forms and only later the relatively more intuitive vector fields. Likewise he develops Maxwell equations via tensors and Hodge duals and never explicitly mentions more familiar notions like the curl of a field or Stokes' theorem.) 
The physics begins around page 400 and includes uncompromising discussions of space-time and Minkowskian geometry, general relativity theory of course, Lagrangian and Hamiltonian approaches to dynamics, quantum particles and entanglement including the standard illustrations (the two-slit experiment, Schrödinger's cat, and Einstein-Podolsky-Rosen non-locality), the measurement problem, Hermitian operators, black holes, the Big Bang, time travel, quantum field theory, the anthropic principle, and Calabi-Yau spaces, as well as many other topics of current research.

EPR Experiment, String Theory, Inflation and Everything Else

As in his previous works the author is not afraid to strike an iconoclastic pose. He sides with Einstein and against most modern physicists, for example, in thinking that the EPR experiment demonstrates that quantum theory is incomplete. The experiment, described very simplistically since this column has fewer words than Penrose's book has pages, involves identical particles moving rapidly apart. A physicist measures the spin of one of the particles, realizing that quantum theory stipulates that the particle doesn't have a definite spin -- it could go either way -- until it is measured and its wave function collapses. Astonishingly, the other particle, which by the time of the measurement may be in a different galaxy, has a wave collapse at the same moment that always results in its having the opposite spin. How does the second particle instantaneously "know" the first particle's spin? Eerie entanglement, an incomplete theory, something else? Penrose's skepticism extends to more modern developments as well. He is unenthused about inflation theory and particularly so about string theory. (Inflation, very roughly, refers to the lightning-fast expansion of a part of the very early universe, and string theory, even more roughly, refers to the notion that fundamental particles are composed of minuscule strings, vibrating and multi-dimensional.)
Inflation theory has considerable evidence backing it, but Penrose seems correct to emphasize that string theory and its offspring M-theory are largely speculative. Why their appeal? He offers an interesting discussion of the role of fads and fashion even in theoretical physics. The end of the book is devoted to a sketch of M-theory's main competitors, loop quantum gravity and twistor theory, the latter of which he invented decades ago and has been developing with colleagues ever since. He also seems less than impressed with Brian Greene's "The Elegant Universe," a book that is far more accessible. Coming every few pages, Penrose's well-done drawings and illustrations may ease the book's near-vertical learning curve. Like some New Yorker subscribers, many readers of this book will, I suspect, confine themselves largely to the pictures and the pages that are more broad-gauged and less technical. There is something to be said for inducing even this level of involvement in mathematics and physics, and if "The Road to Reality" succeeds in doing this, it may become a (sturdy) coffee-table book and popular success. My hunch, however, is that this truly magisterial book will be appreciated primarily by those who have already spent considerable time in school learning a substantial portion of what's in it. -- Professor of mathematics at Temple University, John Allen Paulos is the author of best-selling books, including "Innumeracy" and "A Mathematician Plays the Stock Market." His "Who's Counting?" column on ABCNews.com appears the first weekend of every month.
Computer Arithmetic Algorithms

"This is one of the best available textbooks on computer arithmetic design" - review, Analog Dialogue

See the Computer Arithmetic Algorithms Simulator - a companion website featuring Java and JavaScript simulators of many of the algorithms discussed in the book.

Book features
• Table of contents and features summary - GIF file or PostScript file
• Main features (PDF file)
• Review of the 1st edition of the book from IEEE Computer Magazine
• Review of the 2nd edition of the book from Analog Dialogue
• Review of the 2nd edition of the book from ACM SIGACT News

Ordering information
• Order the book from A. K. Peters (part of CRC Press)
• Order the book from Amazon.com

For students and instructors
• Corrections for the 1st printing, 2002
• Powerpoint slides for instructors
• Solutions to selected problems (chapters 1 - 10) - PostScript file, PDF file
• Solutions to almost all the problems (for instructors only: for access contact the publisher susie.carlisle AT taylorandfrancis DOT com)

Relevant computer arithmetic links
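As a flavor of the kind of algorithm a computer arithmetic text covers, here is a hedged sketch (not taken from the book) of classic unsigned shift-and-add multiplication, the software analogue of a sequential hardware multiplier:

```python
def shift_add_multiply(a: int, b: int) -> int:
    """Unsigned shift-and-add multiplication: scan the multiplier's bits
    LSB-first, adding a shifted copy of the multiplicand for each 1 bit."""
    product = 0
    shift = 0
    while b:
        if b & 1:                  # current multiplier bit is 1
            product += a << shift  # add multiplicand shifted into position
        b >>= 1                    # move to the next multiplier bit
        shift += 1
    return product

print(shift_add_multiply(13, 11))  # 143
```

A hardware implementation does the same thing with a shift register and an adder, one multiplier bit per clock cycle.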
Search results: 1 - 4 of 4

1. CJM 2013 (vol 66 pp. 303) Haar Null Sets and the Consistent Reflection of Non-meagreness
A subset $X$ of a Polish group $G$ is called Haar null if there exists a Borel set $B \supset X$ and a Borel probability measure $\mu$ on $G$ such that $\mu(gBh)=0$ for every $g,h \in G$. We prove that there exists a set $X \subset \mathbb R$ that is not Lebesgue null and a Borel probability measure $\mu$ such that $\mu(X + t) = 0$ for every $t \in \mathbb R$. This answers a question from David Fremlin's problem list by showing that one cannot simplify the definition of a Haar null set by leaving out the Borel set $B$. (The answer was already known assuming the Continuum Hypothesis.) This result motivates the following Baire category analogue. It is consistent with $ZFC$ that there exist an abelian Polish group $G$ and a Cantor set $C \subset G$ such that for every non-meagre set $X \subset G$ there exists a $t \in G$ such that $C \cap (X + t)$ is relatively non-meagre in $C$. This essentially generalises results of Bartoszyński and Burke-Miller.
Keywords: Haar null, Christensen, non-locally compact Polish group, packing dimension, Problem FC on Fremlin's list, forcing, generic real
Categories: 28C10, 03E35, 03E17, 22C05, 28A78

2. CJM 2012 (vol 65 pp. 961) A Hilbert Scheme in Computer Vision
Multiview geometry is the study of two-dimensional images of three-dimensional scenes, a foundational subject in computer vision. We determine a universal Gröbner basis for the multiview ideal of $n$ generic cameras. As the cameras move, the multiview varieties vary in a family of dimension $11n-15$. This family is the distinguished component of a multigraded Hilbert scheme with a unique Borel-fixed point. We present a combinatorial study of ideals lying on that Hilbert scheme.
Keywords: multigraded Hilbert scheme, computer vision, monomial ideal, Groebner basis, generic initial ideal
Categories: 14N, 14Q, 68

3.
CJM 2011 (vol 63 pp. 1107) Genericity of Representations of p-Adic $Sp_{2n}$ and Local Langlands Parameters Let $G$ be the $F$-rational points of the symplectic group $Sp_{2n}$, where $F$ is a non-Archimedean local field of characteristic $0$. Cogdell, Kim, Piatetski-Shapiro, and Shahidi constructed local Langlands functorial lifting from irreducible generic representations of $G$ to irreducible representations of $GL_{2n+1}(F)$. Jiang and Soudry constructed the descent map from irreducible supercuspidal representations of $GL_{2n+1}(F)$ to those of $G$, showing that the local Langlands functorial lifting from the irreducible supercuspidal generic representations is surjective. In this paper, based on above results, using the same descent method of studying $SO_{2n+1}$ as Jiang and Soudry, we will show the rest of local Langlands functorial lifting is also surjective, and for any local Langlands parameter $\phi \in \Phi(G)$, we construct a representation $\sigma$ such that $\phi$ and $\sigma$ have the same twisted local factors. As one application, we prove the $G$-case of a conjecture of Gross-Prasad and Rallis, that is, a local Langlands parameter $\phi \in \Phi(G)$ is generic, i.e., the representation attached to $\phi$ is generic, if and only if the adjoint $L$-function of $\phi$ is holomorphic at $s=1$. As another application, we prove for each Arthur parameter $\psi$, and the corresponding local Langlands parameter $\phi_{\psi}$, the representation attached to $\phi_{\psi}$ is generic if and only if $\phi_{\psi}$ is tempered. Keywords:generic representations, local Langlands parameters Categories:22E50, 11S37 4. CJM 2004 (vol 56 pp. 825) Differentiability Properties of Optimal Value Functions Differentiability properties of optimal value functions associated with perturbed optimization problems require strong assumptions. We consider such a set of assumptions which does not use compactness hypothesis but which involves a kind of coherence property. 
Moreover, a strict differentiability property is obtained by using techniques of Ekeland and Lebourg and a result of Preiss. Such a strengthening is required in order to obtain genericity results. Keywords:differentiability, generic, marginal, performance function, subdifferential Categories:26B05, 65K10, 54C60, 90C26, 90C48
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: First problem 100 thinking robots are given a challenge : they will communicate only by means of a single light bulb which can be either in the state 1 (light on) or in state 0 (light off). Each second a randomly chosen \[\bf ONE \quad single \] robot can see the bulb (others don't- in that second) and keep it in the previous state or change it to opposite state. Bulb is On at first. Each second one chosen randomly out of 100. The challenge is to know -WHEN every one of them has been at the bulb at least once (or more) times. This info has to reach all of 100 eventually. SEE THE • one year ago • one year ago Best Response You've already chosen the best response. Sounds like a statistics problem. To me there will always be a chance the same robot doesn't see the bulb, but it will be so small it's considered 0. We have to decide how small is too small. I really can't help any further then relaying my thoughts. sry Best Response You've already chosen the best response. you could set up a limit as x approaches 0 for the last robot. But i don't know how to do that for this problem Best Response You've already chosen the best response. Consider it given that there does occur a visit of each to the bulb. THIS IS NOT THE QUESTION !! The question is to communicate that event AFTER IT HAPPENNED JUST USING THE BULB ! Best Response You've already chosen the best response. oh ok. I did read it wrong. Is this a question you need an answer to, or a question for fun to the community? I would have to think about this for a while. still not much help Best Response You've already chosen the best response. For intellectual profit of the community (not fun, god forbid that !) Best Response You've already chosen the best response. 
Hi there @experimentX Best Response You've already chosen the best response. so you know the answer. Now I'm more interested in this puzzle Best Response You've already chosen the best response. Yo @Mikael what's up!! ... serves as bookmark. Best Response You've already chosen the best response. There is a follow up problem - which HAS practical serious applications and is very deep Best Response You've already chosen the best response. what if the first robot turns the bulb off and every robot you sees the bulb and it is their first time will turn it on. If it is their second time will turn it off. If the bulb is left off for a long time then we can assume every bot has seen it. I don't know how to determine the length of time Best Response You've already chosen the best response. Glad to see you \[ Yoda-Not , \bf @ganeshie8 \] Best Response You've already chosen the best response. :) so the state has to be communicated to all the 100 robots, but the bulb has only two states hmm Best Response You've already chosen the best response. The offered method has weakness - a robot doesn't know whether it was a multiple visit by some group that made the light off (if that is his observation) or the complete set of visits. Best Response You've already chosen the best response. lol .. .thanks!! Best Response You've already chosen the best response. Ok let's make it simpler: Best Response You've already chosen the best response. How each robot can be sure with probability of \[p= 1 -10^{-6}\] that all others HAVE visited the bulb - devise a method for that. Best Response You've already chosen the best response. @Algebraic! you are missed here , one needs some critical approach... Best Response You've already chosen the best response. Each robot has also a seconds-counting watch Best Response You've already chosen the best response. Well let's say @ChmE is "warm" in his attempt... Best Response You've already chosen the best response. 
I'm still thinking I see the problem you mentioned Best Response You've already chosen the best response. Just devise an approach with lower possibility of that Best Response You've already chosen the best response. So .. Best Response You've already chosen the best response. can a robot leave a bolt. so the first time they visit they leave a bolt and there is a robot that collects all the bolts when he has 100 including his own they are done Best Response You've already chosen the best response. and my first method will be complete in 100^101/100! years. lol Best Response You've already chosen the best response. Great beginning of BRAIN -STORMING , NOW that you have "I wish" method - try making "the bolt" only with the ligh on/Off and the conditions given Best Response You've already chosen the best response. I mean - the bulb , in some sense is a third-rate "bolt" Best Response You've already chosen the best response. the first turns it off. and he counts how many times he see it on. The other robots turn it on one of their 2 cycles. Best Response You've already chosen the best response. OK - push forward on the path of SIMPLIFYING !!! Best Response You've already chosen the best response. "the first turns it off".... Best Response You've already chosen the best response. and ignore (for a sec) the actual numbers of seconds Best Response You've already chosen the best response. I will continue to think about it. Gotta catch a bus Best Response You've already chosen the best response. So gentlemen and ladies - who's up to the challenge ?! Best Response You've already chosen the best response. I'm going to stay quiet for a while since I've seen the problem before (in slightly different terms). Best Response You've already chosen the best response. 
\[\bf \text{here is Second Problem - will be posted after this one is done and cleared.}\] \[\bf \color{red}{\text {Now the robots have to somehow}\,\,\\ {\Huge\color{green}{Choose \quad a \quad president}}\\{\text{EXACTLY IN THE SAME SITUATION }} }\] Best Response You've already chosen the best response. No tricks - real new democratic choice of president. Best Response You've already chosen the best response. the 1st robot turns it off and he only ever turns it off. Every other robot turns the light only once. If they have already turned it on they leave it off or if it is on they leave it on. The 1st robot counts how many times he turns it off til 99 Best Response You've already chosen the best response. if the robots see the light on but havent touched it then they leave it on til they are given the chance to turn it on Best Response You've already chosen the best response. All right - so here is The second problem. \[ \bf \text{Choosing a specific number by majority of votes.} \\ \text{ Each robot has has own number known to all.}\\ \text{ Using all of the above they have to choose one pf them.}\\ \text{He will be called the president.} \] Best Response You've already chosen the best response. Check my soln to problem 1. I think I've got it Best Response You've already chosen the best response. @ChmE This seems the solution "the 1st robot turns it off and he only ever turns it off. Every other robot turns the light only once. If they have already turned it on they leave it off or if it is on they leave it on. The 1st robot counts how many times he turns it off til 99" Best Response You've already chosen the best response. OMG!!! I swear I didn't cheat. Been thinking about this question on and off all day yesterday Best Response You've already chosen the best response. But you were a bit unclear - you have to STATE that he turns it off during 99 opportunities he is given Best Response You've already chosen the best response. 
Best Response You've already chosen the best response. By the way - he does know - but how do the others know that he DOES know ? Best Response You've already chosen the best response. what do you mean by that Best Response You've already chosen the best response. the robot knows but how to the other robots know he knows? Best Response You've already chosen the best response. First robot counted 99 light-on and reached the conclusion that all have been at the bulb. NOW - how will he communicate that fact to all the others ? Best Response You've already chosen the best response. haha. This isn't fun anymore. I gotta think about this Best Response You've already chosen the best response. This is NOT fun, but i will spare you this time : Only probabilistically they will know. How? by seeing the light in their SEVERAL personal visits off they conclude that the probability that the FIRST - the turning-off guy was right before them is too low. Best Response You've already chosen the best response. ok. thx. Is this a question that was proposed to you in one of your classes? Best Response You've already chosen the best response. Thanks - now let us call here the other people who has been here - so they appreciate our work. @bahrom7893 @sauravshakya @kingGeorge, @ganeshie8 @hartnn @experimentex Best Response You've already chosen the best response. Welcome to our humble abode Mrs @sauravshakya @hartnn and all ! Best Response You've already chosen the best response. Much appreciated ! Best Response You've already chosen the best response. ACTUALLY THIS SIMILAR PROBLEM WAS ALREADY SOLVED BY ME TOO. Best Response You've already chosen the best response. THAT IS THE SIMILAR QUESTION....... only number are different..AND ITS LOGICAL NO SERIES Best Response You've already chosen the best response. AS far as I remember. Best Response You've already chosen the best response. ya, i have also seen such problems... Best Response You've already chosen the best response. 
Dear visitor @sauravshakya - I bet you ten sayings of praise (of your choosing) that the following you have NOT solved before: http://openstudy.com/study#/updates/50631053e4b0583d5cd34249

Because I have met this situation in a real-life engineering problem.

Thanks, Highlander!

Hey @siddhantsharan

Hello. :)

Best of luck, and keep them cool and well nourished!

Haha. Yeah. Thanks. Nice problem, though.

I was just thinking, @Mikael. How is my solution/our solution correct, given that it relies on the robots communicating prior to seeing the lightbulb? Which by the given conditions cannot happen.

It is assumed they do communicate and agree on the protocol beforehand.
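The light-and-counter protocol agreed on in this thread is easy to check by simulation. A quick sketch in Python; the 100-robot count matches the "counts til 99" in the thread, but the uniformly random visit schedule is my own assumption for illustration:

```python
import random

def simulate(n_robots=100, seed=1):
    """Leader-counts protocol: robot 0 only ever turns the light OFF and
    counts; every other robot turns it ON exactly once, the first time it
    finds the light off. When the count reaches n_robots - 1, the leader
    knows everyone has been at the bulb."""
    rng = random.Random(seed)
    light_on = False
    has_signalled = [False] * n_robots  # robots 1..n-1: already turned it on?
    visited = set()
    count = 0
    steps = 0
    while True:
        steps += 1
        r = rng.randrange(n_robots)     # assumption: uniformly random visits
        visited.add(r)
        if r == 0:
            if light_on:
                light_on = False
                count += 1
                if count == n_robots - 1:
                    return steps, visited
        else:
            # If the light is on but they haven't touched it, they leave it on.
            if not light_on and not has_signalled[r]:
                light_on = True
                has_signalled[r] = True

steps, visited = simulate()
assert len(visited) == 100  # the leader's declaration is always correct
print(f"Leader declared after {steps} visits.")
```

Whenever the count reaches 99, ninety-nine distinct non-leader robots have each turned the light on once, and the leader himself has visited, so the declaration can never be premature.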
Lilburn Algebra 1 Tutor Find a Lilburn Algebra 1 Tutor ...I was then captain my senior year and played intramurals through college. I have been playing chess for more than 15 years now. Throughout high school and college I was involved in chess clubs, I play online and I have other resources to help master openings and endings of games. 11 Subjects: including algebra 1, chemistry, biology, physics ...I strongly believe in Socrates' method, which is asking the student questions that get progressively harder to help them come to the answer themselves. In this way, they not only remember the material better, but are able to reason out answers on the tests and quizzes they have in school. In high school and now college I work a lot with other people studying for tests and helping each 20 Subjects: including algebra 1, Spanish, French, geometry ...I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High School level in both private and public schools. I have chosen to leave the classroom to tutor from home so that I can be a stay at home mom. 10 Subjects: including algebra 1, geometry, algebra 2, precalculus ...Algebra 2 is one of my favorite subjects. I am 100% proficient in this area. I taught it for a few years and have been tutoring it for more than 20 years! 8 Subjects: including algebra 1, geometry, algebra 2, SAT math Hello! My name is Ben and I am a 24-year-old student pursuing a degree in Computer Science. My strengths include math, writing, reading and basic computer software (Microsoft Office, Adobe Creative Suite).I have a Bachelor of Science in Communication and have worked for small marketing firms in the past, where I was responsible for writing and editing content. 
14 Subjects: including algebra 1, reading, English, writing Nearby Cities With algebra 1 Tutor Avondale Estates algebra 1 Tutors Berkeley Lake, GA algebra 1 Tutors Chamblee, GA algebra 1 Tutors Clarkston, GA algebra 1 Tutors Covington, GA algebra 1 Tutors Doraville, GA algebra 1 Tutors Duluth, GA algebra 1 Tutors Grayson, GA algebra 1 Tutors Norcross, GA algebra 1 Tutors Pine Lake algebra 1 Tutors Scottdale, GA algebra 1 Tutors Snellville algebra 1 Tutors Stone Mountain algebra 1 Tutors Sugar Hill, GA algebra 1 Tutors Tucker, GA algebra 1 Tutors
omitted portion text book algebra geometry std ix - Top websites

Check for keyword. Result sites about "omitted portion text book algebra geometry std ix":

Think Quest - Oracle | Hardware and Software, Engineered ... (www.oraclearchive.org) - Full text of "A mathematical solution book containing systematic solutions to many of the most difficult problems. Taken from the leading authors on arithmetic and ..."

Math.com - World of Math Online (math.com) - Free math lessons and math homework help from basic math to algebra, geometry and beyond. Students, teachers, parents, and everyone can find solutions to their math ... Algebra · Basic Math · Fractions · Everyday Math · Practice

Quaternion - Wikipedia, the free encyclopedia (en.wikipedia.org/wiki/Quaternion) - Quaternion algebra was introduced by Hamilton in 1843. Important precursors to this work included Euler's four-square identity (1748) and Olinde Rodrigues ...

Elementary algebra - Wikipedia, the free encyclopedia (en.wikipedia.org) - Elementary algebra encompasses some of the basic concepts of algebra, one of the main branches of mathematics. It is typically taught to secondary school students and ...
Determines skew in MESON proof tree search limits.

This is one of several parameters determining the behavior of MESON, MESON_TAC and related rules and tactics. During search, MESON successively searches for proofs of larger and larger `size'. The ``skew'' value determines what proportion of the entire proof size is permitted in the left-hand half of the list of subgoals. The symmetrical value is 2 (meaning one half); the default setting of 3 (one third) seems generally better because it can cut down on redundancy in proofs.

Failure: not applicable.

Uses: for users requiring fine control over the algorithms used in MESON's first-order proof search. For more details of MESON's search strategy, see Harrison's paper ``Optimizing Proof Search in Model Elimination'', CADE-13, 1996.

See also: meson_brand, meson_chatty, meson_dcutin, meson_depth, meson_prefine, meson_split_limit,
← Lecture 19 | Lecture 21 →

The Transfer Function and the Impulse Response

We will now look at an important means of understanding the behaviour of circuits and other linear systems. Previously, we solved differential equations by finding first the general solution to the homogeneous equation, then finding a particular solution, and then matching the solution to the initial conditions. Unfortunately, the particular solution is not unique and therefore nothing physical can be interpreted from the particular solution or its effect on the coefficients of the general homogeneous solution. Instead, we will see that the Laplace transform allows us to divide the solution into two components:

• The response of the system due to the initial conditions, and
• The response of the system due to the input (or forcing function).

Consider the differential equation for the circuit shown in Figure 1.

Figure 1. An RC circuit.

If the charge on the capacitor (the output or response) is given by q(t), the circuit is described by

    R q'(t) + q(t)/C = v(t).

We may solve this for Q(s) by taking the Laplace transform, R(sQ(s) - q(0)) + Q(s)/C = V(s), which gives

    Q(s) = V(s)/(Rs + 1/C) + R q(0)/(Rs + 1/C).

Notice that the first term depends only on the forcing function (or input) and the second term depends only on the initial conditions. Using the inverse Laplace transform and the convolution theorem, we get that

    q(t) = (1/R) ∫₀ᵗ e^(-(t-τ)/(RC)) v(τ) dτ + q(0) e^(-t/(RC)).

In a stable system (necessary), the initial conditions are transient, that is, they decay to zero, and therefore the focus is on the response to the input.
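As a numerical sanity check on this split into a forced (zero-state) response and an initial-condition (zero-input) response, here is a sketch for the RC circuit driven by a constant step input. The component values R, C, V0 and the initial charge q0 are arbitrary illustrative choices, not from the lecture:

```python
import math

# Integrate R q' + q/C = v(t) for a step input v(t) = V0 and check that the
# solution equals zero-state part + zero-input part, as in the Laplace split.
R, C, V0, q0 = 2.0, 0.5, 1.0, 0.3   # illustrative values
tau = R * C

def q_analytic(t):
    # forced response (from the convolution term) + initial-condition response
    return C * V0 * (1 - math.exp(-t / tau)) + q0 * math.exp(-t / tau)

def q_numeric(t_end, n=100_000):
    # Forward Euler on q' = (v - q/C)/R; fine for this smooth, stable system.
    dt = t_end / n
    q = q0
    for _ in range(n):
        q += dt * (V0 - q / C) / R
    return q

for t in (0.5, 1.0, 3.0):
    print(f"t={t}: numeric={q_numeric(t):.6f}  analytic={q_analytic(t):.6f}")
```

As t grows, the initial-condition term decays and the charge settles at C·V0, exactly the "transient initial conditions" point made above.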
In this example, the response in the frequency domain is the Laplace transform of the input multiplied by a function of s called the transfer function, and is written as

    Q(s) = H(s)V(s),   where H(s) = 1/(Rs + 1/C).

Notice that the transfer function is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input, assuming all initial conditions are zero:

    H(s) = Y(s)/X(s).

Once we have the transfer function, it becomes straightforward to find the response of a system:

Example 1

Find the transfer function of a system where the output/response y(t) and the input x(t) are related by the differential equation

    y'' + 4y' + 3y = x' + 2x.

The Laplace transform (with zero initial conditions) gives us the expression

    (s² + 4s + 3)Y(s) = (s + 2)X(s).

Therefore, the transfer function, the ratio between the Laplace transforms of the output and the input, is

    H(s) = (s + 2)/(s² + 4s + 3) = (s + 2)/((s + 1)(s + 3)).

Notice that for a linear system with constant coefficients, the transfer function is a rational polynomial, that is, it is the ratio of two polynomials. The degree of the denominator is the order of the system, in this case, two. The zeros of the numerator are the zeros of the transfer function, while the zeros of the denominator are the poles of the transfer function. In this case:

• The characteristic equation of the transfer function is s² + 4s + 3 = 0,
• The poles of the transfer function are -1 and -3, and
• The zero of the transfer function is -2.

The zeros and the poles of the transfer function determine the stability of the system, and this will be a significant focus later in this class and in your signals and systems course.

Suppose now we have a forcing function which has Laplace transform X(s) = 1, that is, the unit impulse x(t) = δ(t). In this case, the response is simply Y(s) = H(s), and if we denote the inverse Laplace transform of the transfer function by h(t), then the response is y(t) = h(t).

Thus, the transfer function is the Laplace transform of the response of the system to the unit impulse forcing function, and this impulse response is usually denoted h(t). Put another way, the impulse response and the transfer function both contain all information necessary about the dynamics of a time-invariant system.

Conclusion: to determine the response of any linear time-invariant system, all it is necessary to do is to start in a quiescent state, excite it with an impulse, and measure the response.
Note: the adjective time-invariant simply indicates that it does not matter when you choose time t = 0: the response to an impulse now is the same as the response to an impulse ten seconds from now, merely shifted in time.

Example 2

Find the impulse response of the system described by the differential equation in Example 1. The transfer function is

    H(s) = (s + 2)/((s + 1)(s + 3)),

and therefore, we can find the inverse Laplace transform by using partial fraction decomposition,

    H(s) = (1/2)/(s + 1) + (1/2)/(s + 3),

and hence

    h(t) = (1/2)e^(-t) + (1/2)e^(-3t),   for t ≥ 0.

Two Important Transfer Functions

Consider the system y(t) = x'(t). That is, the response is the derivative of the input function. Taking the Laplace transform (with zero initial conditions), Y(s) = sX(s), and so the transfer function of a differentiator is H(s) = s.

Consider the system y(t) = ∫₀ᵗ x(τ) dτ. That is, the response is the integral of the input function. Taking the Laplace transform, Y(s) = X(s)/s, and so the transfer function of an integrator is H(s) = 1/s.

In the next section, we will see how engineers will often refer to an integrator or a differentiator of a signal by their transfer function.
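Taking the system of Example 1, with a zero at -2 and poles at -1 and -3, so that H(s) = (s+2)/((s+1)(s+3)), we can check the convolution theorem numerically: convolving the impulse response h(t) = ½e^(-t) + ½e^(-3t) with a unit step input should reproduce the analytic step response obtained from the partial fractions of H(s)/s. This is only a sketch; the time points and grid sizes are arbitrary choices:

```python
import math

def h(t):
    # Impulse response of H(s) = (s+2)/((s+1)(s+3)) by partial fractions:
    # residues 1/2 at s = -1 and 1/2 at s = -3.
    return 0.5 * math.exp(-t) + 0.5 * math.exp(-3 * t)

def step_response_numeric(t, n=20_000):
    # y(t) = integral_0^t h(t - tau) * u(tau) dtau with u = 1 (unit step),
    # evaluated with the trapezoid rule.
    dt = t / n
    total = 0.5 * (h(t) + h(0.0))
    for k in range(1, n):
        total += h(t - k * dt)
    return total * dt

def step_response_analytic(t):
    # Partial fractions of H(s)/s = (s+2)/(s(s+1)(s+3)):
    #   (2/3)/s - (1/2)/(s+1) - (1/6)/(s+3)
    return 2/3 - 0.5 * math.exp(-t) - (1/6) * math.exp(-3 * t)

for t in (0.5, 1.0, 5.0):
    num = step_response_numeric(t)
    exact = step_response_analytic(t)
    print(f"t={t}: convolution={num:.6f}  analytic={exact:.6f}")
```

Note the steady-state value 2/3 is just H(0), as expected for a unit step into a stable system.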
Hialeah Algebra 2 Tutor Find a Hialeah Algebra 2 Tutor ...I hope that I have been convincing enough in this description. I wish the best of luck to all the students and parents in their endeavors.Since high school, I was always helping classmates with their Algebra, whether it was for the class, as a refresher for a class that required it as a prerequi... 9 Subjects: including algebra 2, calculus, physics, geometry ...I am an experienced tutor in Physical Science, Chemistry, Biology, Physics and Math. I tutored a range of students from elementary to college. Each student will receive a personalized lesson catered to their learning style. 21 Subjects: including algebra 2, chemistry, physics, geometry ...I had to lecture any course given me by the department, as it was a requirement that every PhD candidate be able to lecture any course/subject assigned....You won't believe the first course/ subject I was assigned to lecture? STATISTICS! LOL! 5 Subjects: including algebra 2, statistics, algebra 1, prealgebra ...The SAT Reading is more about evaluating someone's level of understanding rather than the level of knowledge. Context, again, is the keyword. In dealing with SAT Reading, it is more important to be able to make sense of a word within the specific context of the given sentence or text than to have that word's definition memorized. 20 Subjects: including algebra 2, English, reading, ESL/ESOL ...As a volunteer at the Boys and Girls Club of Hollywood, I tutored in all subject matters for students ages 7-12. In addition, while in D.C. I tutored high school students in Geometry, Algebra I and II. 16 Subjects: including algebra 2, chemistry, reading, biology
MathGroup Archive: February 2012

[00499] [Date Index] [Thread Index] [Author Index]

Re: Probability Distribution Function

• To: mathgroup at smc.vnet.net
• Subject: [mg125194] Re: Probability Distribution Function
• From: Bill Rowe <readnews at sbcglobal.net>
• Date: Mon, 27 Feb 2012 06:43:00 -0500 (EST)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com

On 2/26/12 at 4:18 AM, niels.martinsen at gmail.com (Niles) wrote:

>I have a probablity distribution (Maxwell-Boltzmann) giving the
>probability of a classical particle having some velocity v.

No. The Maxwell-Boltzmann distribution is a distribution of particle speeds, not particle velocities. That is, the Maxwell-Boltzmann distribution is a continuous distribution of a scalar quantity, not a distribution of a vector quantity such as velocity. If I seem to be being a bit fussy about the difference between speed (a scalar) and velocity (a vector), it is because of what you post next.

>Now, what I have is a function to calculate the trajectory for a
>particle with some velocity v_i. I need to apply this function to the
>whole distribution. My question is regarding how I should do this.

There are several difficulties here. The trajectory of a particle will be determined by its velocity. But velocity, a vector quantity, is not described by a Maxwell-Boltzmann distribution. You could assign a random velocity to a given particle by selecting the speed of the particle as a random deviate from a given Maxwell-Boltzmann distribution and a direction selected from a uniform distribution. But this leads to the next difficulty.

It is far from clear what you mean by applying a function to "the whole distribution". What do you mean by the "whole distribution"?

>Originally what I had thought about doing is to partition the
>distribution into N small bins, and associate a velocity to each
>bin.
My plan was then to calculate the trajectory for each velocity >(= bin), and the "output-velocity" I weigh with the original >probability/ weight. This suggests you are thinking of say n particles with different velocities. You could look at the center of mass for your system of particles, rather than each particle individually. Done this way, you need not concern yourself with the behavior of each particle. But if you do want to model an individual particle, you are going to need to consider collisions between particles. It seems to me the way to approach this would be to treat individual particles trajectories as 3D Brownian motion with the trajectory of the center of mass for the system superimposed. >1) My first question is if this is a correct method I am using? I don't think so. But I can't be certain from your post. >2) I have already implemented this is Mathematica. However, for some >distinct bins some of the "output"-velocities are the same. So I >need to figure out some way to add them up, which I don't find that >easy. My problem is to determine how close two data points have to >be in order to be binned together.
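Bill Rowe's point that speed is a scalar drawn from a Maxwell-Boltzmann distribution while direction must be chosen separately can be sketched as follows (my own illustrative code, not from the thread). Drawing three independent Gaussian velocity components gives Maxwell-Boltzmann distributed speeds and uniformly random directions in one step:

```python
import math
import random

def random_velocity(sigma, rng):
    # Three independent N(0, sigma) components: the resulting speed |v|
    # follows a Maxwell-Boltzmann distribution with scale parameter sigma,
    # and the direction is uniform on the sphere.
    return (rng.gauss(0, sigma), rng.gauss(0, sigma), rng.gauss(0, sigma))

rng = random.Random(42)
sigma = 1.0
speeds = [math.sqrt(vx*vx + vy*vy + vz*vz)
          for vx, vy, vz in (random_velocity(sigma, rng) for _ in range(200_000))]

mean_speed = sum(speeds) / len(speeds)
expected = 2 * sigma * math.sqrt(2 / math.pi)  # Maxwell-Boltzmann mean speed
print(f"sample mean speed = {mean_speed:.4f}, theory = {expected:.4f}")
```

Each sampled velocity vector could then be fed to the trajectory function directly, avoiding the binning question entirely.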
Stanwood, WA Math Tutor Find a Stanwood, WA Math Tutor ...I am able to tutor statistics, algebra, geometry, and basic math for young children through high school level classes, as well. I received a 4.0 GPA during my undergraduate career in my French minor. With the natural fluency that comes from having French-Canadian family, I joyfully share this expertise with any children who struggle with this language. 36 Subjects: including geometry, probability, SAT math, statistics Hi My name is George. I graduated from Bergen Community College, NJ, in 2009 with Associate in Science degree in Engineering Science. I earned my Bachelor of Science degree in Mechanical and Aerospace Engineering from Rutgers University (New Brunswick, NJ) in 2012. 11 Subjects: including trigonometry, precalculus, algebra 1, algebra 2 ...I have been tutoring and teaching both professionally and through non-profit organizations for many years now in Seattle, Hawaii and Chicago. I have worked with students K-12 in the Seattle Public Schools, Chicago Public Schools, and Seattle area independent schools. I have also worked with students at the University level. 27 Subjects: including precalculus, reading, study skills, trigonometry ...I have a Bachelor's in music from Southern Virginia University. I've been playing the piano for 18 years. I have accompanied choirs and vocal students on the piano. 26 Subjects: including algebra 2, special needs, piano, dyslexia ...Let a professional Math Coach help! I have a Master's in Math Education and would love to demonstrate my ability to help you improve your skills and confidence. I currently teach at a College in Everett, but live in Oak Harbor. 
19 Subjects: including calculus, physics, differential equations, public speaking Related Stanwood, WA Tutors Stanwood, WA Accounting Tutors Stanwood, WA ACT Tutors Stanwood, WA Algebra Tutors Stanwood, WA Algebra 2 Tutors Stanwood, WA Calculus Tutors Stanwood, WA Geometry Tutors Stanwood, WA Math Tutors Stanwood, WA Prealgebra Tutors Stanwood, WA Precalculus Tutors Stanwood, WA SAT Tutors Stanwood, WA SAT Math Tutors Stanwood, WA Science Tutors Stanwood, WA Statistics Tutors Stanwood, WA Trigonometry Tutors
Renate Mayntz This paper draws heavily on three previous publications (Mayntz 1988, 1990, 1991), some passages from these publications having been adopted virtually unchanged. Over the past decades, attention in the natural sciences has increasingly turned to phenomena that defy analysis in terms of the traditional physical world view with its assumptions of linearity and reversibility, i.e., to the behaviour of systems remote from equilibrium and to discontinuous processes resulting from nonlinearity. After the recognition of the stochastic nature of many real processes, the attention paid to nonlinear processes means a further step away from the traditional mechanistic world view. Nonlinear systems display a number of behaviours that can be widely observed in the natural world. Their state variables can change discontinuously, producing phase jumps, i.e., sudden changes of state, as in the phenomenon of ferro-magnetism or in superconductivity. In such discontinuous processes, threshold and critical mass phenomena often play a decisive role. A threshold phenomenon exists where a dependent variable initially does not react at all, or only very little, to continuous changes of an independent variable, but beyond a given point it reacts suddenly and strongly. The threshold may be defined by a critical mass, e.g. the number of particles of a specific kind that must be present before a reaction sets in, but other kinds of threshold also exist. There are phase transitions from order to disorder and in the reverse direction. The behaviour of nonlinear systems can become completely irregular - or "chaotic" - if the values of given parameters move into a particular range. On the other hand, nonlinear systems can also move spontaneously from disorder to order, a stationary state far from equilibrium; this is called a dissipative structure, or self-organization. Furthermore, systems characterized by nonlinear dynamics can display a specific kind of irreversibility, i.e. 
hysteresis (path dependency of phase jumps), and a specific kind of indeterminateness expressed in the term bifurcation: a point where a trajectory can proceed in different directions. The analysis of nonlinear dynamics has been enhanced by the development of new mathematical methods, such as René Thom's catastrophe theory, and by the computational power of modern EDP,^1 which for instance made it possible to discover and formalize the phenomenon of (deterministic) chaos. The label "chaos theory" is presently being used both in a narrower and a more comprehensive sense. The narrowest interpretation of the term equates it with the mathematical theory of deterministic chaos (and its applications), and hence with the preconditions of a phase transition from order to disorder in nonlinear systems. In a wider interpretation, chaos theory may refer to discontinuous processes moving either from order to chaos or from chaos (or disorder) to order; the term would thus also cover phenomena of self-organization. Sometimes, the term chaos theory seems to be used in an even wider sense, referring to the whole field of research into nonlinear dynamics of non-equilibrium systems. In this paper, the second understanding of the term "chaos theory" is used. I shall be considering discontinuous processes, or phase transitions, going in both directions - from order to chaos and from chaos to order. However, I shall rather use the terms self-organization or nonlinear dynamics except where we deal with phase transitions from order to chaos, the narrowest meaning of "chaos theory." The following argument can be summarized in the form of a few theses: 1. Natural science theories of nonlinear dynamics were not necessary to turn the attention of social scientists to phenomena of sudden disruptions of order and system breakdown. 2.
Natural science theories and models of nonlinear dynamics have nevertheless had an impact both on formal modelling attempts and substantive theorizing in sociology. 3. The potential relevance of natural science theories of nonlinear dynamics lies in the promise to gain a better understanding of discontinuous changes at the macro-level as a consequence of micro-level processes. 4. The nature of social reality nevertheless seriously limits the potential applicability of natural science models of nonlinear dynamics.
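The transition from order to deterministic chaos that Mayntz refers to can be illustrated with the simplest textbook example, the logistic map x_{n+1} = r·x_n(1 - x_n). This is my own illustration; the paper itself names no specific model:

```python
def orbit(r, x0, n_transient=1000, n_keep=8):
    # Iterate the logistic map, discard a transient, return the tail.
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    out = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        out.append(x)
    return out

# r = 2.8: orderly regime, the orbit settles on the fixed point 1 - 1/r.
print([round(v, 4) for v in orbit(2.8, 0.2)])

# r = 4.0: chaotic regime, sensitive dependence on initial conditions:
# two orbits started 1e-9 apart become completely uncorrelated.
a = orbit(4.0, 0.2, n_transient=0, n_keep=40)
b = orbit(4.0, 0.2 + 1e-9, n_transient=0, n_keep=40)
print(f"max separation over 40 steps: {max(abs(x - y) for x, y in zip(a, b)):.3f}")
```

The same deterministic rule produces stable order for one parameter value and chaos for another, which is precisely the point about parameter ranges made above.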
Real life applications of integration

All these clever replies and not one specific example of how, during the course of, say, a mechanical engineer's day, what problems he would be trying to solve using a method of integration...? I'm currently learning all about these methods, and knowing to what practical purpose they are applied will help me greatly in understanding the values I arrive at. For example, when you find the area under a curve, what does it represent? There seem to be some clever people on here; now be smart.

What people are asking is for the poster of the thread to be smarter. He/she could find many examples in textbooks specific to the field of interest -- or even state the specific area of interest in the thread... note the TYPE of engineering was not even posted in the original thread, let alone a specific topic within the field -- that's why I was leaving this thread alone (perhaps hoping the original poster came back with more specification). Granted, maybe prior responders could be more polite (and some have been), but now you've butted in (with your first post to a "real" part of PF, I might add) criticizing the whole... and your post isn't really even smart. You ask about a mechanical engineer, but what is he/she in the process of designing... is the main interest of his design -- vibration, distortion under weight, or one of many types of "fluid flow" -- be it heat flow, air flow (over a wing), or water flow (over a ship's hull, through a pipe, or perhaps even groundwater flow regarding a contaminant), etc.? All these things can be either directly solved by integration (for simple systems), or some type of numerical integration (for complex systems). Note: I'm not even a mechanical engineer, but these are the things you'll often see in an introductory "boundary values" text.
The area under a curve (or for that matter its slope) can represent many things, or nothing of any real physical significance or interest at all, depending on what is plotted in the curve. To be polite, however, to the original poster and others of interest in this topic: Some areas in the past where I've done numerical integration: The first time I did numerical integration was when I was a student in a chemistry class and needed to do numerical integration to analyze some data regarding pH in a titration (forgive me that this was 15 yrs ago -- I don't have the book anymore and don't recall the details because I don't work in chemistry or chemical engineering). More recently: when I worked in the field of electro-optical engineering and wanted to design optical waveguides for certain systems, and I did some numerical analysis to estimate the power (intensity) profile of the exiting beam (this depends on Maxwell's equations, boundary conditions required within these equations, and the specific boundary profile in space created by the waveguide and the material it was made of -- I mention this because some other respondents mentioned Maxwell's equations). There was also some analysis of how the time profile of a "square pulse" would be changed as it propagated through the structure (to see if it spread too much to be above a certain signal to noise requirement -- with given types of noise being added). As an aside, numerical integration is also important in other fields (like biology -- say for population analysis or kinematic studies of cell processes, etc.) or economics, but again, those are not my particular fields.
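To make the "area under a curve" question concrete (my own illustrative example, not from the thread): if the curve is a velocity profile v(t), the area under it is the distance travelled, and a numerical method such as composite Simpson's rule estimates it directly:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*k*h) for k in range(1, n // 2))
    return s * h / 3

# Velocity of a decelerating object, v(t) = 10 e^(-t/2) m/s:
distance = simpson(lambda t: 10 * math.exp(-t / 2), 0.0, 6.0)
print(f"distance travelled over 6 s ≈ {distance:.4f} m")
# Exact antiderivative gives 20(1 - e^(-3)) ≈ 19.0043 m.
```

The same pattern -- integrand from a physical model, quadrature when no closed form exists -- is what the responders above mean by numerical integration in engineering practice.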
Black hole hair and the dark energy problem

Published October 14, 2013 | Astronomy, Physics, and Related Fields

We wrapped up the Introduction to Black Holes online class last week, finishing with some of the more bizarre results that have come out of trying to understand black hole behavior. In a course this short (four one-hour meetings), we're only able to scratch the metaphorical surface of black hole science, but I wanted my students to see both the astronomy and the theoretical physics sides. The physics of accretion and jets is the stuff we have more-or-less direct access to, thanks to powerful telescopes, but the theory side is where we can get close to the workings of gravity - including possibly a regime where general relativity breaks down and must be supplemented with a new set of ideas. One interesting result arising from the study of gravitational collapse is the "no-hair" theorem (named by John Archibald Wheeler, obviously not someone untroubled by receding hairlines). Hair isn't literal: it refers to anything that might poke out through the black hole's event horizon, the barrier past which nothing can return to the outside Universe. Lumpy stars result in smooth black holes; strong magnetic fields get radiated away during gravitational collapse; chemical composition is lost behind the event horizon. The no-hair theorem states that the only properties a black hole presents to the cosmos are its mass, spin, and (possibly) electric charge.[1] The theorem is fairly simple, but its implications are astounding. Two black holes with the same mass and rotation are identical, even if the stars that birthed them were different in chemistry, size, and magnetic properties. In that sense, they might seem more like elementary particles such as electrons, which are indistinguishable from each other.
The analogy doesn’t go too far: black holes have a continuous range of masses and spins that can change over the life of the object, while each electron has precisely the same mass and spin. However, compared with stars, pulsars, white dwarfs, and other astronomical bodies, black holes are remarkably simple. (For questions about what happens inside black holes, see my most recent article at Nautilus.) The no-hair theorem and gravitational collapse are both natural consequences of general relativity, but nearly nobody believes our current theory is the last word. If nothing else, the infinite density predicted at the heart of black holes and at the Big Bang itself have made many suspect that general relativity contains the sign of its own limitations. Others, motivated by inflation or dark energy, have introduced new gravitational theories to explain those phenomena. Modifications to gravity can lead to testable consequences, and perhaps the no-hair theorem fails as well. A theoretical hair apparent That happens in a class of modified gravity theories known (unfortunately) as “scalar-tensor” models. Obviously nobody was thinking of non-physicists when they proposed the name, so as a popularizer I’m already at a major disadvantage. “Scalar-tensor” refers to the mathematical objects used in the theories. Standard general relativity is a “tensor” theory, which expresses how gravity shapes the paths of particles and also how neighboring trajectories deviate from each other. (I’m not entirely sure, but I believe the name itself comes from the mathematical description of the internal mechanics of solids, including tension.) Tensors turn out to be a particularly elegant way to describe gravity, but the details of their working are something I’ll leave for another day; you can see a bit of how they work in the appendix of an earlier post. 
Scalar fields, on the other hand, are fairly simple things: they're like the density of air, which is just a number that fluctuates from place to place. The Higgs field, which gives masses to many particles as they interact with it, is a scalar field: the interaction doesn't depend on how fast a particle is moving or in what direction, unlike forces such as gravity or electromagnetism. Scalar-tensor theories of gravity start with general relativity and add one or more extra scalar fields. Typically, those fields don't connect directly to matter (they don't constitute a new force, in other words) but affect the strength of gravity behind the scenes. Inflation is often described as a scalar field, driving the rapid expansion of spacetime during the Universe's first instants, but scalar-tensor inflation theories couple this field to gravity in a way that alters gravity's behavior. Other scalar-tensor models are motivated by string theory, or by the need to explain dark energy.

A recent paper in Physical Review Letters [2] worked out the details of the no-hair theorem in a very general version of scalar-tensor gravitational models. The authors didn't pick one theory, but considered a whole class of them together, and they found that the extra scalar field changed the behavior of black holes. In a way, that's utterly unsurprising: a different theory predicts new results! However, the specific changes due to the scalar field lead to a breakdown of the no-hair theorem for realistic black holes. A solitary, isolated black hole in scalar-tensor theories is no different from what general relativity predicts. However, under some circumstances the presence of matter nearby (perhaps in the form of matter flowing from the accretion disk swirling around a black hole, which is how we detect them in nature) triggers an instability in the scalar field, making it behave like a massive particle. (Weirdly, in some cases that mass is described by an imaginary number, which could be problematic.
No consistent theory involving such imaginary masses exists to my knowledge.) That makes the black hole "hairy": the scalar field sticks out of the event horizon, providing an extra bit of physical detail beyond mass and spin.

The paper doesn't discuss specific observational consequences, but we do have some information about them nevertheless. Scalar-tensor theories have been around for decades, and thanks to a multitude of observations, we can constrain their predictions pretty well.[3] As the authors of the current paper point out, this doesn't rule out the possibility of scalar-tensor models, but systems like binary pulsars put the kibosh on many strong scalar field effects. Any black hole "hair" would have to abide by these constraints, and if we don't see the predicted effects, it could have consequences for modified gravity models for dark energy or inflation. In other words, black holes, with their strong gravity, could prove to be a lab for testing theories of cosmology.

1. It's likely that most black holes do have some electric charge simply by eating more of one type of particle than another, but it takes a lot of excess charge to make a measurable difference. I'll ignore charged black holes for the rest of this post.
2. Vitor Cardoso, Isabella P. Carucci, Paolo Pani, and Thomas P. Sotiriou, "Black Holes with Surrounding Matter in Scalar-Tensor Theories", Phys. Rev. Lett. 111, 111101 (2013). DOI: 10.1103/PhysRevLett.111.111101, arXiv: 1308.6587.
3. Those versed in gravitational physics should see Clifford M. Will's excellent book Theory and Experiment in Gravitational Physics (Cambridge University Press, 1993). I'm not sure if there's a book explaining the same concepts on a popular level.
JOT: Journal of Object Technology - Theory of Classification

The Theory of Classification, Part 1: Perspectives on Type Compatibility

Anthony J.H. Simons, Department of Computer Science, University of Sheffield

This is the first article in a regular series on object-oriented type theory, aimed specifically at non-theoreticians. The object-oriented notion of classification has long been a fascinating issue for type theory, chiefly because no other programming paradigm has sought to establish such systematic sets of relationships between all of its types. Over the series, we shall seek to find answers to questions such as: What is the difference between a type and a class? What do we mean by the notion of plug-in compatibility? What is the difference between subtyping and subclassing? Can we explain inheritance, method combination and template instantiation? Along the way, we shall survey a number of different historical approaches, such as subtyping, F-bounds, matching and state machines, and seek to show how these models explain the differences in the behaviour of popular object-oriented languages such as Java, C++, Smalltalk and Eiffel. The series is titled "The Theory of Classification", because we believe that all of these concepts can be united in a single theoretical model, demonstrating that the object-oriented notion of class is a first-class mathematical concept.

In this introductory article, we first look at some motivational issues, such as the need for plug-in compatible components and the different ways in which compatibility can be judged. Reasons for studying object-oriented type theory include the desire to explain the different features of object-oriented languages in a consistent way. This leads into a discussion of what we really mean by a type, ranging from the concrete to the abstract views.
The eventual economic success of the object-oriented and component-based software industry will depend on the ability to mix and match parts selected from different suppliers [1]. In this, the notion of component compatibility is a paramount concern:

• the client (component user) has to make certain assumptions about the way a component behaves, in order to use it;
• the supplier (component provider) will want to build something which at least satisfies these expectations.

But how can we ensure that the two viewpoints are compatible? Traditionally, the notion of type has been used to judge compatibility in software. We can characterise type in two fundamental ways:

• syntactic compatibility - the component provides all the expected operations (type names, function signatures, interfaces);
• semantic compatibility - the component's operations all behave in the expected way (state semantics, logical axioms, proofs);

and these are both important, although most work published as "type theory" has concentrated on the first aspect, whereas the latter aspect comes under the heading of "semantics" or "model checking". There are many spectacular examples of failure due to type-related software design faults, such as the Mars Climate Orbiter crash and the Ariane-5 launch disaster. These recent high-profile cases illustrate two different kinds of incompatibility. In the case of the Mars Climate Orbiter, the failure was due to inadequate characterisation of syntactic type, resulting in a confusion of metric and imperial units. Output from the spacecraft's guidance system was re-interpreted by the propulsion system in a different set of measurement units, resulting in an incorrect orbital insertion manoeuvre, leading to the crash [2]. In the case of the Ariane 5 disaster, the failure was due to inadequate characterisation of semantic type, in which the guidance system needlessly continued to perform its pre-launch self-calibration cycle.
During launch, the emission of larger than expected diagnostic codes caused arithmetic overflow in the data conversion intended for the propulsion system, which raised an exception terminating the guidance system, leading to the violent course-correction and break-up of the launcher [3]. This last example should be of particular interest to object-oriented programmers, since it involved the wholesale reuse of the previously successful guidance software from the earlier Ariane 4 launcher in a new context. How strictly must a component match the interface into which it is plugged? In Pascal, a strongly-typed language, a variable can only receive a value of exactly the same type, a property known as monomorphism (literally, the same form). Furthermore, types are checked on a name equivalence, rather than structural equivalence basis. This means that, even if a programmer declared Meter and Foot to be synonyms for Integer, the Pascal type system would still treat the two as non-equivalent, because of their different names (so avoiding the Martian disaster). In C++, typedef synonyms are all considered to be the same type and you would have to devise wrapper classes for Meter and Foot to get the same strict separation. Furthermore, all object-oriented languages are polymorphic (literally, having many forms), allowing variables to receive values of more than one type^1. From a practical point of view, polymorphism is regarded as an important means of increasing the generality of an interface, allowing for a wider choice of components to be substituted, which are said to satisfy the interface. Informally, this is understood to mean supplying at least those functions declared in the interface. However, the theoretical concept of polymorphism is widely misunderstood and the term mistakenly applied, by object-oriented programmers, variously to describe dynamic binding or subtyping. 
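Such wrapper classes are easy to sketch. Here is a minimal Python version of the idea (the Meter/Foot classes and their API are illustrative assumptions of this sketch, not drawn from Pascal or C++):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Meter:
    value: float
    def __add__(self, other):
        # Name equivalence: only another Meter may be added,
        # even though Foot has an identical structure.
        if not isinstance(other, Meter):
            raise TypeError("cannot add %r to Meter" % (other,))
        return Meter(self.value + other.value)

@dataclass(frozen=True)
class Foot:
    value: float

# Structurally both types wrap a single float, but the distinct
# wrapper names keep them apart: the metric/imperial confusion
# becomes a type error instead of a silent bug.
total = Meter(3.0) + Meter(4.0)       # fine: Meter(value=7.0)
try:
    Meter(3.0) + Foot(4.0)            # rejected
except TypeError as err:
    print("rejected:", err)
```

The design point is exactly the one made above for Pascal: by judging compatibility on names rather than structure, two representationally identical types are kept incompatible.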
The usage we shall adopt is consistent with established work in the functional programming community, in that it requires at least a second-order typed lambda calculus (with type parameters) to model formally [4]. However, we must lay more foundations before introducing such a calculus. A simple approach to interface satisfaction is subtyping. This is where an object of one type may safely be substituted where another type was expected [5]. This involves no more than coercing the supplied subtype object to a supertype and executing the supertype's functions. The coerced object then behaves in exactly the same way as expected. An example of this is where two SmallInt objects are passed to an Integer plus function and the result is returned as an Integer. The function originally expected Integers, but could handle subtypes of Integer and convert them. Note that no dynamic binding is implied or required. Also, a simply-typed first-order calculus (with subtyping) is adequate to explain this behaviour. We shall call the more complex, polymorphic approach subclassing. This is where one type is replaced by another, which also systematically replaces the original functions with new ones appropriate to the new type. An example of this is where a Numeric type, with abstract plus, minus, times and divide, is replaced by a Complex type, having appropriately-retyped versions of these (as in Eiffel [6]). Rather than coerce a Complex object to a Numeric, the call to plus through Numeric should execute the Complex plus function. Also, there is an obligation to propagate type information about the arguments and result-type of Complex's plus back to the call-site, which needs to supply suitable arguments and then know how to deal with the result. In a later article, we shall see why this formally requires a parametric explanation. 
To summarise so far, there are three different degrees of sophistication when judging the type compatibility of a component with respect to the expectations of an interface:

• correspondence: the component is identical in type and its behaviour exactly matches the expectations made of it when calls are made through the interface;
• subtyping: the component is a more specific type, but behaves exactly like the more general expectations when calls are made through the interface;
• subclassing: the component is a more specific type and behaves in ways that exceed the more general expectations when calls are made through the interface.

Certain object-oriented languages like Java and C++ practise a halfway-house approach, which is subtyping with dynamic binding. This is similar to subtyping, except that the subtype may provide a replacement function that is executed instead. Adapting the earlier example, this is like the SmallInt type providing its own version of the plus function which wraps the result back into the SmallInt range. Syntactically, the result is acceptable as an Integer, but semantically it may yield different results from the original Integer plus function (when wrap-around occurs). The selection mechanism of dynamic binding is formally equivalent to higher-order functional programming [7], in which functions are passed as arguments and then are dynamically invoked under program control. So, languages with apparently simple type systems are more complex than they may at first seem. How can we explain the behaviour of languages such as Smalltalk, C++, Eiffel and Java in a consistent framework? Our goal is to find a mathematical model that can describe the features of these languages; and a proof technique that will let us reason about the model. To do this, we need an adequate definition of type that will allow reasoning about syntactic and semantic type compatibility. This brings into question what we mean exactly by a type.
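The halfway-house behaviour is easy to demonstrate in any dynamically-bound language. The following Python sketch mirrors the SmallInt/Integer example (the 0..255 wrap-around range is an assumption made for illustration):

```python
class Integer:
    def __init__(self, v):
        self.v = v
    def plus(self, other):
        return Integer(self.v + other.v)

class SmallInt(Integer):
    """Subtype providing its own plus, wrapping into 0..255."""
    def plus(self, other):
        return SmallInt((self.v + other.v) % 256)

def add_through_interface(a, b):
    # The caller sees only the Integer interface, but dynamic
    # binding silently selects SmallInt.plus for SmallInt arguments.
    return a.plus(b)

print(add_through_interface(Integer(200), Integer(100)).v)    # 300
print(add_through_interface(SmallInt(200), SmallInt(100)).v)  # 44: syntactically
# an Integer result, semantically different once wrap-around occurs
```

The second call is syntactically well-typed against the Integer interface, yet its semantics diverge from the supertype's: precisely the distinction between syntactic and semantic compatibility drawn above.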
Bit-Interpretation Schemas

There are various definitions of type, with increasing formal usefulness. Some approaches are quite concrete; for example, a programmer sometimes thinks of a type as a schema for interpreting bit-strings in computer memory, e.g., the bit-string 01000001 is 'A' if interpreted as a Character, but 65 if interpreted as an Integer. This approach is concerned more with machine-level memory storage requirements than with the formal properties necessary to reason about types.

Model-Based and Constructive Types

An aficionado of formal methods (such as Z [8], or VDM) likes to think of types as equivalent to sets. This is called the model-based approach, in which the notion of type is grounded in a set-theoretic model; that is, having a type (x : T, "x is of type T") is equivalent to set membership (x ∈ T, "x is an element of the set T"). All program operations can be modelled as set manipulations. The constructive approach [9] also translates a program into a simpler concrete model, like set theory, whose formal mathematical properties are well understood. Concrete approaches have their limits [10]; for example, how would you specify an Ordinal type? You merely want to describe something that is countable, whose elements are ordered, but not assert that any particular set "is" the set of Ordinals. The set of Natural numbers: Natural = {0, 1, 2, ...} is too specific a model for Ordinal, since this excludes other ordered things, like Characters, and the Natural numbers are subject to further operations (such as arithmetic) which the Ordinals don't allow (although strictly the set-theoretic model only enumerates the membership of a type and does not describe how elements behave).

Syntactic and Existential Abstract Types

A type theorist typically thinks of a type as a set of function signatures, which describe the operations that a type allows. This characterises the type in a more abstract way, by enumerating the operations that it allows.
The Ordinal type is defined by the signatures:

first : → ord (a constant, taking no arguments)
succ : ord → ord

in which ord stands for the type being defined. Although syntactic types reach the desired degree of abstraction away from concrete models, they are not yet precise. Consider that the following faulty expressions are still possible:

succ('b') = first() = 'a' - an undesired possibility;
succ(1) = 1 - another undesired possibility;

This is because the signatures alone fail to capture the intended meaning of the functions.

Axioms and Algebraic Types

A mathematician considers a type as a set of signatures and constraining axioms. The type Ordinal is fully characterised by:

first : → ord
succ : ord → ord
(1) ∀ n : ord . succ(n) ≠ first()
(2) ∀ m, n : ord . succ(m) = succ(n) ⇒ m = n

This form of definition is known as an algebra. Formally, an algebra consists of: a sort (that is, an uninterpreted set, ord, acting as a placeholder for the type); and a set of functions defined on the sort (first, succ), whose meaning is given by axioms. The two axioms (1) and (2), plus the logical rule of induction, are sufficient to make Ordinal behave in exactly the desired way. But how do the axioms work? Let us arbitrarily label: x = first(), y = succ(x), z = succ(y).

• From (1), succ(x) ≠ first() = x, so succ(x) is a new element, distinct from first();
• From (2), succ(y) = succ(x) would imply y = x, which is false, so succ(y) is also distinct;
• From (2), supposing succ(z) = succ(x), by substitution of y and z, we get: succ(succ(succ(x))) = succ(x); by unwinding succ, we get: succ(succ(x)) = x, which is false by (1), so succ(z) is also distinct; and so on...

Once the algebra is defined, we can disregard the sort, which is no longer needed, since every element of the type can now be expressed in a purely syntactic way: first(); succ(first()); succ(succ(first())); ... The algebraic definition of Ordinal says exactly enough and no more [11]; it is both more abstract than a concrete type - it is not tied to any particular set representation - and more precise - it is inhabited exactly by a monotonically-ordered sequence of abstract objects.

We are motivated to study object-oriented type theory out of a concern to understand better the notion of syntactic and semantic type compatibility.
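The induction argument for the Ordinal algebra can be animated with a small sketch that generates elements purely syntactically (the integer encoding of terms below is an implementation convenience assumed for this sketch, not part of the algebra itself):

```python
# Elements of the Ordinal algebra are the syntactic terms
#   first(), succ(first()), succ(succ(first())), ...
# For convenience, we encode a term by its number of succ applications,
# so distinct integers stand for distinct terms.
def first():
    return 0

def succ(term):
    return term + 1

# Generate the first six terms of the algebra.
terms = [first()]
for _ in range(5):
    terms.append(succ(terms[-1]))

# Axiom (1): no successor is ever equal to first().
assert all(succ(t) != first() for t in terms)

# Axiom (2): succ is injective, so every generated term is distinct.
assert len(set(terms)) == len(terms)
print("first 6 ordinal terms:", terms)
```

Under this encoding the two axioms hold by construction, and the generated terms form exactly the monotonically-ordered sequence the algebra demands.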
Compatibility may be judged according to varying degrees of strictness, which each have different consequences. Likewise, different object-oriented languages seem to treat substitutability in different ways. As a preamble to developing a formal model in which languages like Smalltalk, C++, Eiffel and Java can be analysed and compared, increasingly abstract definitions of type were presented. The next article in this series builds on the foundation laid here and deals with models of objects, methods and message-passing.

^1 Beware object-oriented textbooks! Polymorphism does not refer to the dynamic behaviour of objects aliased by a common superclass variable, but to the fact that variables may hold values of more than one type in the first place. This fact is independent of static or dynamic binding.

[1] B. J. Cox, Object-Oriented Programming: an Evolutionary Approach, 1st edn., Addison-Wesley, 1986.
[2] Mars Climate Orbiter Official Website, http://mars.jpl.nasa.gov/msp98/orbiter/, September 1999.
[3] J. L. Lions, Ariane 5 Flight 501 Failure, Report of the Inquiry Board, http://sunnyday.mit.edu/accidents/Ariane5accidentreport.html, July 1996.
[4] J. C. Reynolds, Towards a theory of type structure, Proc. Coll. sur la Programmation, New York; pub. LNCS 19, Springer Verlag, 1974, 408-425.
[5] L. Cardelli and P. Wegner, On understanding types, data abstraction and polymorphism, ACM Computing Surveys, 17(4), 1985, 471-521.
[6] B. Meyer, Object-Oriented Software Construction, 2nd edn., Prentice Hall, 1995.
[7] W. Harris, Contravariance for the rest of us, J. of Obj.-Oriented Prog., Nov-Dec, 1991, 10-18.
[8] J. M. Spivey, Understanding Z: a Specification Language and its Formal Semantics, CUP, 1988.
[9] P. Martin-Löf, Intuitionistic type theory, lecture notes, Univ. Padova, 1980.
[10] J. H. Morris, Types are not sets, Proc. ACM Symp. on Principles of Prog. Langs., Boston, 1973, 120-124.
[11] K. Futatsugi, J. Goguen, J.-P. Jouannaud and J.
Messeguer, Principles of OBJ-2, Proc. 12th ACM Symp. Principles of Prog. Langs., 1985, 52-66.

About the author

Anthony Simons is a Senior Lecturer and Director of Teaching in the Department of Computer Science, University of Sheffield, where he leads object-oriented research in verification and testing, type theory and language design, development methods and precise notations. He can be reached at a.simons@dcs.shef.ac.uk.

Cite this column as follows: Anthony J. H. Simons: "The Theory of Classification, Part 1: Perspectives on Type Compatibility", in Journal of Object Technology, vol. 1, no. 1, May-June 2002, pages 55-61, http://www.jot.fm/issues/issue_2002_05/column5
A unified approach to sparse signal processing A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. 
Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels. There are many applications in signal processing and communication systems where the discrete signals are sparse in some domain such as time, frequency, or space; i.e., most of the samples are zero, or alternatively, their transforms in another domain (normally called “frequency coefficients”) are sparse (see Figures 1 and 2). There are trivial sparse transformations where the sparsity is preserved in both the “time” and “frequency” domains; the identity transform matrix and its permutations are extreme examples. Wavelet transformations that preserve the local characteristics of a sparse signal can be regarded as “almost” sparse in the “frequency” domain; in general, for sparse signals, the more similar the transformation matrix is to an identity matrix, the sparser the signal is in the transform domain. In any of these scenarios, sampling and processing can be optimized using sparse signal processing. In other words, the sampling rate and the processing manipulations can be significantly reduced; hence, a combination of data compression and processing time reduction can be achieved.^a Figure 1. Sparse discrete time signal with its DFT. Figure 2. Sparsity is manifested in the frequency domain. Each field has developed its own tools, algorithms, and reconstruction methods for sparse signal processing. Very few authors have noticed the similarities of these fields. 
It is the intention of this tutorial to describe these methods in each field succinctly and show that these methods can be used in other areas and applications, often with appreciable improvements. Among these fields are

1— Sampling: random sampling of bandlimited signals [1], compressed sensing (CS) [2], and sampling with finite rate of innovation [3];
2— Coding: Galois [4,5] and real-field error correction codes [6];
3— Spectral Estimation [7-10];
4— Array Processing: multi-source location (MSL) and direction of arrival (DOA) estimation [11,12], sparse array processing [13], and sensor networks [14];
5— Sparse Component Analysis (SCA): blind source separation [15-17] and dictionary representation [18-20];
6— Channel Estimation in Orthogonal Frequency Division Multiplexing (OFDM) [21-23].

The sparsity properties of these fields are summarized in Tables 1, 2, and 3.^b The details of most of the major applications will be discussed in the next sections, but the common traits will be discussed in this section.

Table 1. Various topics and applications with sparsity properties: the sparsity, which may be in the time/space or "frequency" domains, consists of unknown samples/coefficients that need to be determined.

The columns of Table 1 consist of 0—category, 1—topics, 2—sparsity domain, 3—type of sparsity, 4—information domain, 5—type of sampling in information domain, 6—minimum sampling rate, 7—conventional reconstruction methods, and 8—applications. The first rows (2–7) of column 1 are on sampling techniques. The 8–9th rows are related to channel coding, row 10 is on spectral estimation and rows 11–13 are related to array processing. Rows 14–15 correspond to SCA and finally, row 16 covers multicarrier channel estimation, which is a rather new topic. As shown in column 2 of the table, depending on the topics, sparsity is defined in the time, space, or "frequency" domains.
In some applications, the sparsity is defined as the number of polynomial coefficients (which in a way could be regarded as "frequency"), the number of sources (which may depend on location or time sparsity for the signal sources), or the number of "words" (signal bases) in a dictionary. The type of sparsity is shown in column 3; for sampling schemes, it is usually low-pass, band-pass, or multiband [24], while for compressed sensing, and most other applications, it is random. Column 4 represents the information domain, where the order of sparsity, locations, and amplitudes can be determined by proper sampling (column 5) in this domain. Column 7 is on traditional reconstruction methods; however, for each area, any of the reconstruction methods can be used. The other columns are self-explanatory and will be discussed in more detail in the following sections.

Table 3. Common notations used throughout the article.

The rows 2–4 of Table 1 are related to the sampling (uniform or random) of signals that are bandlimited in the Fourier domain. Band-limitedness is a special case of sparsity where the nonzero coefficients in the frequency domain are consecutive. A better assumption in the frequency domain is to have random sparsity [25-27] as shown in row 5 and column 3. A generalization of the sparsity in the frequency domain is sparsity in any transform domain such as Discrete Cosine and Wavelet Transforms (DCT and DWT); this concept is further generalized in CS (row 6), where samples are taken as linear combinations of time domain samples [2,28-30]. Sampling of signals with finite rate of innovation (row 7) is related to piecewise smooth (polynomial based) signals.
The positions of discontinuous points are determined by annihilating filters that are equivalent to error locator polynomials in error correction codes and Prony's method [10], as discussed in Sections 4 and 5. Random errors in a Galois field (row 8) and the additive impulsive noise in real-field error correction codes (row 9) are sparse disturbances that need to be detected and removed. For erasure channels, the impulsive noise can be regarded as the negative of the missing sample value [31]; thus the missing sampling problem, which can also be regarded as a special case of nonuniform sampling, is also a special case of the error correction problem. A subclass of impulsive noise for 2-D signals is salt and pepper noise [32]. The information domain, where the sampling process occurs, is called the syndrome, which is usually in a transform domain. Spectral estimation (row 10) is the dual of error correction codes, i.e., the sparsity is in the frequency domain. MSL (row 11) and multi-target detection in radars are similar to spectral estimation since targets act as sparse spatial monotones; each target is mapped to a specific spatial frequency regarding its line of sight direction relative to the receiver. The techniques developed for this branch of science are unique, with examples such as MUSIC [7], Prony [8], and Pisarenko [9]. We shall see that the techniques used in real-field error correction codes, such as the iterative method with adaptive thresholding (IMAT), can also be used in this area. The array processing category (rows 11–13) consists of three separate topics. The first one covers MSL in radars, sonars, and DOA. The techniques developed for this field are similar to the spectral estimation methods with emphasis on the minimum description length (MDL) [33]. The second topic in the array processing category is related to the design of sparse arrays where some of the array elements are missing; the remaining nodes form a nonuniform sparse grid.
In this case, one of the optimization problems is to find the sparsest array (number, locations, and weights of elements) for a given beampattern. This problem has some resemblance to the missing sampling problem but will not be discussed in this article. The third topic is on sensor networks (row 13). Distributed sampling and recovery of a physical field using an array of sparse sensors is a problem of increasing interest in environmental and seismic monitoring applications of sensor networks [34]. Sensor fields may be bandlimited or non-bandlimited. Since power consumption is the most restrictive issue in sensors, it is vital to use the lowest possible number of sensors (sparse sensor networks) with the minimum processing computation; this topic also will not be discussed in this article. In SCA, the number of observations is much less than the number of sources (signals). However, if the sources are sparse in the time domain, then the active sources and their amplitudes can be determined; this is equivalent to error correction codes. Sparse dictionary representation (SDR) is another new area where signals are represented by the sparsest number of words (signal bases) in a dictionary with a finite number of words; this sparsity may result in a tremendous amount of data compression. When the dictionary is overcomplete, there are many ways to represent the signal; however, we are interested in the sparsest representation. Normally, for extraction of statistically independent sources, independent component analysis (ICA) is used for a complete set of linear mixtures. In the case of a non-complete (underdetermined) set of linear mixtures, ICA can work if the sources are also sparse; for this special case, ICA is synonymous with SCA. Finally, channel estimation is shown in row 16. In mobile communication systems, multipath reflections create a channel that can be modeled by a sparse FIR filter.
For proper decoding of the incoming data, the channel characteristics should be estimated before they can be equalized. For this purpose, a training sequence is inserted within the main data, which enables the receiver to obtain the output of the channel by exploiting this training sequence. The channel estimation problem becomes a deconvolution problem under noisy environments. The sparsity criterion of the channel greatly improves the channel estimation; this is where the algorithms for extraction of a sparse signal could be employed [21,22,35]. When sparsity is random, further signal processing is needed. In this case, there are three items that need to be considered: 1—evaluating the number of sparse coefficients (or samples), 2—finding the positions of sparse coefficients, and 3—determining the values of these coefficients. In some applications, only the first two items are needed; e.g., in spectral estimation. However, in almost all the other cases mentioned in Table 1, all three items should be determined. Various types of linear programming (LP) and some iterative algorithms, such as the iterative method with adaptive thresholding (IMAT), determine the number, positions, and values of sparse samples at the same time. On the other hand, the minimum description length (MDL) method, used in DOA/MSL and spectral estimation, determines the number of sparse source locations or frequencies. In the subsequent sections, we shall describe, in more detail, each algorithm for various areas and applications based on Table 1. Finally, it should be mentioned that the signal model for each topic or application may be deterministic or stochastic. For example, in the sampling category for rows 2–4 and 7, the signal model is typically deterministic although stochastic models could also be envisioned [36]. On the other hand, for random sampling and CS (rows 5–6), the signal model is stochastic although deterministic models may also be envisioned [37].
In channel coding and estimation (rows 8–9 and 16), the signal model is normally deterministic. For spectral and DOA estimation (rows 10–11), stochastic models are assumed, whereas for array beamforming (row 12), deterministic models are used. In sensor networks (row 13), both deterministic and stochastic signal models are employed. Finally, in SCA (rows 14–15), statistical independence of sources may be necessary and thus stochastic models are applied.

Underdetermined system of linear equations

In most of the applications where the sparsity constraint plays a significant role, we are dealing with an underdetermined system of linear equations; i.e., a sparse vector s[n×1] is observed through a linear mixing system denoted by A[m×n], where m<n:

x[m×1] = A[m×n] · s[n×1]   (1)

Since m<n, the vector s[n×1] cannot be uniquely recovered by observing the measurement vector x[m×1]; however, among the infinite number of solutions to (1), the sparsest solution may be unique. For instance, if every set of 2k columns of A[m×n] is linearly independent, the null-space of A[m×n] does not include any 2k-sparse vector (a vector with at most 2k non-zero elements) and therefore, the measurement vectors x[m×1] of different k-sparse vectors are different. Thus, if s[n×1] is sparse enough (k-sparse), the sparsest solution of (1) is unique and coincides with s[n×1]; i.e., we have perfect recovery. Unfortunately, there are two obstacles here: (1) the vector x[m×1] often includes an additive noise term, and (2) finding the sparsest solution of a linear system is an NP-hard problem in general. Since in the rest of the article we frequently deal with the problem of reconstructing the sparsest solution of (1), we first review some of the important reconstruction methods in this section.

Greedy methods

Mallat and Zhang [38] have developed a general iterative method for approximating sparse decomposition. When the dictionary is orthogonal and the signal x is composed of k≪n atoms, the algorithm recovers the sparse decomposition exactly after k steps.
The introduced method, which is a greedy algorithm [39], is usually referred to as Matching Pursuit (MP). Since the algorithm is myopic, in certain cases wrong atoms are chosen in the first few iterations, and the remaining iterations are then spent on correcting these first few mistakes. The concepts of this method are the basis of other advanced greedy methods such as OMP [40] and CoSaMP [41]. The algorithms of these greedy methods (MP, OMP, and CoSaMP) are shown in Table 4.

Basis pursuit

The mathematical representation of counting the number of sparse components is denoted by the ℓ[0]-norm. However, ℓ[0] is not a proper norm and is not computationally tractable. The closest convex norm to ℓ[0] is ℓ[1]. The ℓ[1] optimization of an overcomplete dictionary is called Basis Pursuit. However, the ℓ[1]-norm is non-differentiable and we cannot use gradient methods to find optimal solutions [42]. On the other hand, the ℓ[1] solution is stable due to its convexity (the global optimum is the same as the local one) [20]. Formally, Basis Pursuit can be formulated as:

min ‖s‖[1] subject to x = A·s   (2)

We now explain how Basis Pursuit is related to LP. The standard form of LP is a constrained optimization problem defined in terms of a variable x by:

min C^T x subject to Ax = b, ∀i: x[i] ≥ 0

where C^T x is the objective function, Ax=b is a set of equality constraints, and ∀i: x[i]≥0 is a set of bounds. Table 5 shows this relationship. Thus, the solution of (2) can be obtained by solving the equivalent LP. Interior point methods are the main approaches to solving LP. Table 5.
Relation between LP and basis pursuit (the notation for LP is from [43])

Gradient projection sparse reconstruction (GPSR)

The GPSR technique [44] is considered one of the fast variations of the ℓ[1]-minimization method and consists of solving the following minimization problem:

min[s] J(s) = (1/2)‖x − A·s‖[2]^2 + τ‖s‖[1]   (4)

Note that J(s) is almost the Lagrange form of the constrained problem in (2), where the Lagrange multiplier is τ, with the difference that in (4) the minimization is performed exclusively on s and not on τ. Thus, the outcome of (4) coincides with that of (2) only when the proper τ is used. For a fast implementation of (4), the positive and negative elements of s are treated separately, i.e.,

s = u − v, with u, v ≥ 0

Now, by assuming that all the vectors and matrices are real, it is easy to check that the minimizer of the following cost function F corresponds to the minimizer of J(s):

F(z) = c^T z + (1/2) z^T B z, where z = [u; v], c = τ·1[2n×1] + [−A^T x; A^T x], B = [A^T A, −A^T A; −A^T A, A^T A]

In GPSR, this cost function is iteratively minimized by moving in the opposite direction of the gradient while respecting the condition z ≥ 0. A step-wise explanation of the basic GPSR method is given in Table 6. In this table, (a)[+] denotes the value max{a, 0}, and the same notation applied to a vector indicates the element-wise action of this function. There is another adaptation of this method, known as Barzilai–Borwein (BB) GPSR, which is not discussed here.

Iterative shrinkage-threshold algorithm (ISTA)

Instead of using the gradient method for solving (4), it is possible to approximate the cost function. To explain this idea, let s^(0) be an estimate of the minimizer of (4), and let J[aux](s) be an auxiliary cost function that satisfies

∀s: J[aux](s) ≥ J(s), with equality at s = s^(0)   (7)

Now if s^(1) is the minimizer of J[aux](s), we should have J(s^(1)) ≤ J(s^(0)); indeed, J(s^(1)) ≤ J[aux](s^(1)) ≤ J[aux](s^(0)) = J(s^(0)). Thus, s^(1) is a better estimate of the minimizer of J(.) than s^(0). This technique is useful only when finding the minimizer of J[aux] is easier than solving the original problem.
In ISTA [45], at the ith iteration and with the estimate s^(i) at hand, the following alternative cost function is used:

J[aux](s) = J(s) + (β/2)‖s − s^(i)‖[2]^2 − (1/2)‖A·s − A·s^(i)‖[2]^2

where β is a scalar larger than all the squared singular values of A, to ensure (7). By modifying the constant terms and rewriting the above cost function, one can check that its minimizer is essentially the same as that of

(β/2)‖s − z‖[2]^2 + τ‖s‖[1], where z = s^(i) + (1/β)·A^T·(x − A·s^(i))   (9)

Note that the minimization problem in (9) is separable with respect to the elements of s, and we just need to minimize a single-variable cost function for each element, which is given by the well-known shrinkage-threshold operator:

s[j]^(i+1) = sign(z[j]) · max{|z[j]| − τ/β, 0}

The steps of the ISTA algorithm are explained in Table 7.

FOCal Underdetermined System Solver (FOCUSS)

FOCal Underdetermined System Solver is a non-parametric algorithm that consists of two parts [46]. It starts by finding a low-resolution estimate of the sparse signal, and then prunes this solution to a sparser signal representation through several iterations. The solution at each iteration step is found by taking the pseudo-inverse of a modified weighted matrix, defined by (AW)^+ = (AW)^H (AW·(AW)^H)^−1. This iterative algorithm is the solution of the following ℓ[p]-norm (p ≤ 1) optimization problem:

min Σ[i] |s[i]|^p subject to x = A·s

A description of this algorithm is given in Table 8, and an extended version is discussed in [46].

Iterative detection and estimation (IDE)

The idea behind this method is based on a geometrical interpretation of sparsity. Consider the elements of the vector s to be i.i.d. random variables. By plotting a large number of samples of s in the s-space, it is observed that the points tend to concentrate first around the origin, then along the coordinate axes, and finally across the coordinate planes. The algorithm used in IDE is given in Table 9. In this table, the s[i]'s are the inactive sources, the s[a]'s are the active sources, A[i] is the column of A corresponding to the inactive s[i], and A[a] is the column of A corresponding to the active s[a].
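A minimal NumPy sketch of the ISTA steps summarized in Table 7 may help fix ideas; the matrix sizes, random seed, τ, and iteration count below are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np

def soft_threshold(z, t):
    # Shrinkage-threshold operator: sign(z) * max(|z| - t, 0), element-wise.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, x, tau, n_iter=500):
    # Minimize the cost J(s) = 0.5*||x - A s||_2^2 + tau*||s||_1 of (4);
    # beta must exceed the largest squared singular value of A so that the
    # surrogate cost upper-bounds J.
    beta = 1.01 * np.linalg.norm(A, 2) ** 2
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = s + A.T @ (x - A @ s) / beta      # gradient (Landweber) step
        s = soft_threshold(z, tau / beta)     # shrinkage step
    return s

# Toy problem: a 3-sparse vector observed through a random 30 x 60 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
s_true = np.zeros(60)
s_true[[5, 17, 42]] = [1.0, -1.0, 0.8]
x = A @ s_true
s_hat = ista(A, x, tau=0.01)
```

With a small τ the minimizer of (4) is close to the sparsest solution here, so the largest entries of `s_hat` land on the true support (with a small shrinkage bias).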
Notice that IDE bears some resemblance to the RDE method discussed in Section 4.1.2, the IMAT mentioned in Section 4.1.2, and the MIMAT explained in Section 8.1.2.

Smoothed ℓ[0]-norm (SL0) method

As discussed earlier, the criterion for sparsity is the ℓ[0]-norm; thus, our minimization is

min ‖s‖[0] subject to A·s = x

The ℓ[0]-norm has two major drawbacks: the need for a combinatorial search, and its sensitivity to noise. These problems arise from the fact that the ℓ[0]-norm is discontinuous. The idea of SL0 is to approximate the ℓ[0]-norm with functions of the type [47]:

f[σ](s) = exp(−s^2 / (2σ^2))

where σ is a parameter which determines the quality of the approximation. Note that in the limit σ→0, f[σ](s) tends to 1 for s = 0 and to 0 for s ≠ 0. For the vector s, we have ‖s‖[0] ≈ n − F[σ](s) for small σ, where F[σ](s) = Σ[i=1..n] f[σ](s[i]). Now minimizing ‖s‖[0] is equivalent to maximizing F[σ](s) for some appropriate values of σ. For small values of σ, F[σ](s) is highly non-smooth and contains many local maxima, and therefore its maximization over A·s=x may not be global. On the other hand, for larger values of σ, F[σ](s) is a smoother function and contains fewer local maxima, and its maximization may be possible (in fact there are no local maxima for large values of σ [47]). Hence, we use a decreasing sequence of σ in the steepest ascent algorithm; in this way, we may escape getting trapped in local maxima and reach the actual maximum for small values of σ, which gives the minimum ℓ[0]-norm solution. The algorithm is summarized in Table 10.

Comparison of different techniques

The above techniques have been simulated and the results are depicted in Figure 3. In order to compare the efficiency and computational complexity of these methods, we use a fixed synthetic mixing matrix and source vectors. The elements of the mixing matrix are obtained from zero-mean independent Gaussian random variables with variance σ^2=1. Sparse sources have been artificially generated using a Bernoulli–Gaussian model: s[i] = p·N(0, σ[on]) + (1−p)·N(0, σ[off]). We set σ[off]=0.01, σ[on]=1, and p=0.1. Then, we compute the noisy mixture vector x from x = A·s + ν, where ν is the noise vector.
The elements of the vector ν are generated according to independent zero-mean Gaussian random variables with variance σ[ν]^2. We use orthogonal matching pursuit (OMP), which is a variant of Matching Pursuit [38]; OMP has a better performance in estimating the source vector in comparison to Matching Pursuit. Figure 4 demonstrates the time needed for each algorithm to estimate the vector s with respect to the number of sources. This figure shows that IDE and SL0 have the lowest complexity.

Figure 3. Performance of various methods with respect to the standard deviation when n = 1,000, m = 400, and k = 100.

Figure 4. Computational time (complexity) versus the number of sources for m = 0.4n and k = 0.1n.

Figures 5 and 6 illustrate a comparison of several sparse reconstruction methods for sparse DFT signals and sparse random transformations, respectively. In all the simulations, the block size of the sparse signal is 512, while the number of sparse signal components in the frequency domain is 20. The compression rate is 25%, which leads to a selection of 128 time-domain observation samples.

Figure 5. Performance comparison of some reconstruction techniques for DFT sparse signals.

Figure 6. Performance comparison of some reconstruction techniques for sparse random transformations.

In Figure 5, the greedy algorithms, CoSaMP and OMP, demonstrate better performance than ISTA and GPSR, especially at lower input signal SNRs. IMAT shows a better performance than all the other algorithms; however, its performance at higher input signal SNRs is almost similar to that of OMP and CoSaMP. In Figure 6, OMP and CoSaMP have better performance than the other methods, while ISTA, SL0, and GPSR have more or less the same performance. For sparse DFT signals, the complexity of the IMAT algorithm is less than the others, while ISTA is the most complex algorithm. Similarly, in Figure 6, SL0 has the least complexity.
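As a scaled-down illustration of the greedy recovery used in these comparisons, the following sketch runs OMP on a small noiseless instance; the sizes, seed, and fixed amplitudes are illustrative assumptions, not the n = 1,000, m = 400 experiment above:

```python
import numpy as np

def omp(A, x, k):
    # Orthogonal Matching Pursuit: pick the column most correlated with
    # the residual, then re-fit all selected columns by least squares.
    residual = x.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
        residual = x - A[:, support] @ coef
    s_hat = np.zeros(A.shape[1])
    s_hat[support] = coef
    return s_hat

# Small noiseless instance with a random Gaussian mixing matrix.
rng = np.random.default_rng(1)
n, m, k = 100, 50, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
s = np.zeros(n)
s[rng.choice(n, size=k, replace=False)] = [1.2, -0.9, 2.0, 0.7]
x = A @ s
s_hat = omp(A, x, k)
```

The least-squares re-fit at each step is what distinguishes OMP from plain MP: previously selected coefficients are corrected as new atoms enter the support.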
Sampling: uniform, nonuniform, missing, random, compressed sensing, rate of innovation

Analog signals can be represented by finite-rate discrete samples (uniform, nonuniform, or random) if the signal has some sort of redundancy, such as band-limitedness, a finite polynomial representation (e.g., periodic signals that are represented by a finite number of trigonometric polynomials), or nonlinear functions of such redundant signals [48,49]. The minimum sampling rate is the Nyquist rate for uniform sampling and its generalizations for nonuniform [1] and multiband signals [50]. When a signal is discrete, the equivalent discrete representation in the “frequency” domain (DFT, DCT, DWT, Discrete Hartley Transform (DHT), Discrete Sine Transform (DST)) may be sparse, which is the discrete version of bandlimited or multiband analog signals where the locations of the bands are unknown. For discrete signals, if the nonzero coefficients (“frequency” sparsity) are consecutive, they are called lowpass, bandpass, or multiband discrete signals, depending on the location of the zeros; if the locations of the nonzero coefficients do not follow any of these patterns, the “frequency” sparsity is random. The number of discrete-time samples needed to represent a frequency-sparse signal with a known sparsity pattern follows the law of algebra, i.e., the number of time samples should be equal to the number of coefficients in the “frequency” domain; since the two domains are related by a full-rank transform matrix, recovery from the time samples is equivalent to solving an invertible k×k system of linear equations, where k is the number of sparse coefficients.
For band-limited real signals, the Fourier transform (sparsity domain) consists of similar nonzero patterns at both negative and positive frequencies, where only the positive part is counted as the bandwidth; thus, the law of algebra is equivalent to the Nyquist rate, i.e., twice the bandwidth (for discrete signals with DC components, it is twice the bandwidth minus one). The dual of frequency-sparsity is time-sparsity, which can occur in a burst or a random fashion. In this case, the number of “frequency” coefficients needed follows the Nyquist criterion. This will be further discussed in Section 4 for sparse additive impulsive noise channels.

Sampling of sparse signals

If the sparsity locations of a signal are known in a transform domain, then the number of samples needed in the time (space) domain should be at least equal to the number of sparse coefficients, i.e., the so-called Nyquist rate. However, depending on the type of sparsity (lowpass, bandpass, or random) and the type of sampling (uniform, periodic nonuniform, or random), the reconstruction may be unstable and the corresponding reconstruction matrix may be ill-conditioned [51,52]. Thus, in many applications discussed in Table 1, the sampling rate in column 6 is higher than the minimum (Nyquist) rate. When the locations of sparsity are not known, by the law of algebra, the number of samples needed to specify the sparsity is at least twice the number of sparse coefficients. Again, for stability reasons, the actual sampling rate is higher than this minimum figure [1,50]. To guarantee stability, instead of direct sampling of the signal, a combination of the samples can be used. Donoho has recently shown that if we take linear combinations of the samples, the minimum stable sampling rate is of the order of k log(n/k), where n and k are the frame size and the sparsity order, respectively [29].
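The "law of algebra" argument above can be made concrete: when the k nonzero DFT locations are known, k time samples determine the signal through an invertible k×k system. A minimal sketch follows; the signal, sample positions, and sizes are illustrative assumptions, and consecutive samples are chosen so the system is a Vandermonde matrix in distinct nodes, hence invertible:

```python
import numpy as np

# A length-n signal that is k-sparse in the DFT domain with *known*
# nonzero locations is determined by k time samples: samples and
# coefficients are related by an invertible k x k linear system.
n, k = 64, 3
freqs = np.array([3, 10, 25])                 # known sparsity locations (assumed)
coeffs = np.array([2.0, -1.0 + 0.5j, 0.7])    # unknown in practice

t = np.arange(n)
E = np.exp(2j * np.pi * np.outer(t, freqs) / n)   # n x k exponential basis
x = E @ coeffs                                    # full signal

samples = np.array([0, 1, 2])   # k consecutive samples -> Vandermonde system
B = np.exp(2j * np.pi * np.outer(samples, freqs) / n)  # k x k, invertible
c_hat = np.linalg.solve(B, x[samples])                 # recover coefficients
x_hat = E @ c_hat                                      # recover full signal
```

With unknown sparsity locations, the same counting argument doubles the requirement to 2k samples, as stated above.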
Reconstruction algorithms

There are many reconstruction algorithms that can be used depending on the sparsity pattern, uniform or random sampling, complexity issues, and sensitivity to quantization and additive noise [53,54]. Among these methods are LP, Lagrange interpolation [55], the time-varying method [56], spline interpolation [57], matrix inversion [58], the error locator polynomial (ELP) [59], iterative techniques [52,60-65], and the IMAT [25,31,66,67]. In the following, we concentrate only on the last three methods, as well as the first (LP), which have proven to be effective and practical.

Iterative methods when the location of sparsity is known

The reconstruction algorithms have to recover the original sparse signal from the samples in the information domain, given the type of sparsity in the transform domain. We know the samples in the information domain (both positions and amplitudes) and we know the locations of sparsity in the transform domain. An iteration between these two domains (Figure 7 and Table 11), or consecutive Projections Onto Convex Sets (POCS), should yield the original signal [51,61,62,65,68-71].

Figure 7. Block diagram of the iterative reconstruction method. The mask is an appropriate filter with coefficients of 1’s and 0’s depending on the type of sparsity in the original signal.

Table 11. The iterative algorithm based on the block diagram of Figure 7.

In the case of the usual assumption that the sparsity is in the “frequency” domain, and for the uniform sampling case of lowpass signals, one projection (bandlimiting in the frequency domain) suffices. However, if the frequency sparsity is random, the time samples are nonuniform, or the “frequency” domain is defined in a domain other than the DFT, then we need several iterations to obtain a good replica of the original signal. In general, this iterative method converges if the “Nyquist” rate is satisfied, i.e., the number of samples per block is greater than or equal to the number of coefficients.
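A minimal, Papoulis-Gerchberg-style sketch of the iterative method of Figure 7 for a real lowpass signal observed at random time positions; the sizes, frequency support, and iteration count are illustrative assumptions, and the mask is the ideal filter on the known frequency support:

```python
import numpy as np

# Alternate between re-imposing the known (random) time samples and
# masking to the known "frequency" support, as in Figure 7 / Table 11.
n = 128
support = np.zeros(n, dtype=bool)
support[[0, 1, 2, n - 2, n - 1]] = True        # known lowpass support

X = np.zeros(n, dtype=complex)                 # build a real test signal
X[0] = 4.0
X[1], X[-1] = 2.0 - 1.0j, 2.0 + 1.0j           # Hermitian symmetry -> real x
X[2], X[-2] = -1.0 + 0.5j, -1.0 - 0.5j
x_true = np.fft.ifft(X).real

rng = np.random.default_rng(2)
pos = rng.choice(n, size=32, replace=False)    # 32 random time samples

x = np.zeros(n)
for _ in range(500):
    x[pos] = x_true[pos]                       # project onto sample-consistent set
    Xk = np.fft.fft(x)
    Xk[~support] = 0.0                         # mask: project onto sparsity support
    x = np.fft.ifft(Xk).real
err = np.max(np.abs(x - x_true))
```

Since the number of samples (32) exceeds the number of nonzero coefficients, the two convex sets intersect only at the original signal and the alternating projections converge to it, consistent with the convergence condition stated above.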
Figure 8 shows the improvement in dB versus the number of iterations for a random sampling set for a bandpass signal. In this figure, besides the standard iterative method, accelerated iterations such as the Chebyshev and conjugate gradient methods are also used (please see [72] for the algorithms).

Figure 8. SNR improvement vs. the number of iterations for a random sampling set at the Nyquist rate (OSR = 1) for a bandpass signal.

Iterative methods are quite robust against quantization and additive noise. In fact, it can be proved that the iterative methods approach the pseudo-inverse (least squares) solution in a noisy environment, especially when the matrix is ill-conditioned [50].

Iterative method with adaptive threshold (IMAT) for unknown location of sparsity

As expected, when the sparsity pattern is random, further signal processing is needed. We need to evaluate the number of sparse coefficients (or samples), the positions of the sparse coefficients, and their values. The iterative method above cannot work here, since projection (the masking operation in Figure 7) onto the “frequency” domain is not possible without knowledge of the positions of the sparse coefficients. In this scenario, we need to use the knowledge of sparsity in some way. The introduction of an adaptive nonlinear threshold in the iterative method does the trick, hence the name IMAT; the block diagram and the pseudo-code are depicted in Figure 9 and Table 12, respectively. The algorithms in [23,25,31,73] are variations of this method. Figure 9 shows that by alternating projections between the information and sparsity domains (adaptively lowering or raising the threshold levels in the sparsity domain), the sparse coefficients are gradually picked up after several iterations. This method can be considered a modified version of Matching Pursuit, as described in Section 2.1; the results are shown in Figure 10. The sampling rate in the time domain is twice the number of unknown sparse coefficients.
This is called the full capacity rate; the figure shows that after approximately 15 iterations, the SNR reaches its peak value. In general, the higher the sampling rate relative to the full capacity, the faster the convergence rate and the better the SNR value.

Figure 9. The IMAT for detecting the number, locations, and values of the sparse coefficients.

Table 12. Generic IMAT of Figure 9 for any sparsity in the transform domain, which is typically the DFT.

Figure 10. SNR vs. the number of iterations for sparse signal recovery using the IMAT (Table 12).

Matrix solutions

When the sparse nonzero locations are known, matrix approaches can be utilized to determine the values of the sparse coefficients [58]. Although these methods are rather straightforward, they may not be robust against quantization or additive noise when the matrices are ill-conditioned. There are other approaches, such as spline interpolation [57], nonlinear/time-varying methods [58], Lagrange interpolation [55], and the error locator polynomial (ELP) [74], that will not be discussed here. However, the ELP approach will be discussed in Section 4.1; variations of this method are called the annihilating filter in sampling with finite rate of innovation (Section 3.3) and Prony’s method in spectral and DOA estimation (Section 5.1). These methods work quite well in the absence of additive noise, but they may not be robust in the presence of noise. For the case of additive noise, the extensions of the Prony method (ELP), such as Pisarenko harmonic decomposition (PHD), MUSIC, and estimation of signal parameters via rotational invariance techniques (ESPRIT), will be discussed in Sections 5.2, 5.3, and 6.

Compressed sensing (CS)

The relatively new topic of CS (compressive sampling) for sparse signals was originally introduced in [29,75] and further extended in [30,76,77].
The idea is to introduce sampling schemes with a low number of required samples which uniquely represent the original sparse signal; these methods have lower computational complexities than the traditional techniques that employ oversampling and then apply compression. In other words, compression is achieved exactly at the time of sampling. Unlike the classical sampling theorem [78] based on the Fourier transform, the signals are assumed to be sparse in an arbitrary transform domain. Furthermore, there is no restricting assumption on the locations of the nonzero coefficients in the sparsity domain; i.e., the locations need not follow a specific pattern such as a lowpass or multiband structure. Clearly, this assumption covers a more general class of signals than the ones previously studied. Since the concept of sparsity in a transform domain is more convenient to study for discrete signals, most of the research in this field is focused on discrete-type signals [79]; however, recent results [80] show that most of the work can be generalized to continuous signals in shift-invariant subspaces (a subclass of the signals which are represented by a Riesz basis).^c We first study discrete signals and then briefly discuss the extension to the continuous case.

CS mathematical modeling

Let the vector x[n×1] be a finite-length discrete signal which has to be under-sampled. We assume that x has a sparse representation in a transform domain denoted by a unitary matrix Ψ[n×n]; i.e., we have:

x = Ψ · s

where s is an n×1 vector which has at most k non-zero elements (a k-sparse vector). In practical cases, s has at most k significant elements and the insignificant elements are set to zero, which means s is an almost k-sparse vector. For example, x can be the pixels of an image and Ψ can be the corresponding IDCT matrix. In this case, most of the DCT coefficients are insignificant, and if they are set to zero, the quality of the image will not degrade significantly.
In fact, this is the main concept behind some of the lossy compression methods such as JPEG. Since the inverse transform on x yields s, the vector s can be used instead of x and can be succinctly represented by the locations and values of its nonzero elements. Although this method efficiently compresses x, it initially requires all the samples of x to produce s, which undermines the whole purpose of CS. Now let us assume that instead of the samples of x, we take m linear combinations of the samples (called generalized samples). If we represent these linear combinations by the matrix Φ[m×n] and the resultant vector of samples by y[m×1], we have

y[m×1] = Φ[m×n] · x[n×1] = Φ[m×n] · Ψ[n×n] · s[n×1]

The question is how the matrix Φ and the size m should be chosen to ensure that these samples uniquely represent the original signal x. Obviously, the case of Φ = I[n×n], where I[n×n] is the n×n identity matrix, yields a trivial solution (keeping all the samples of x) that does not exploit the sparsity condition. We look for Φ matrices with as few rows as possible which can guarantee the invertibility, stability, and robustness of the sampling process for the class of sparse inputs. To solve this problem, we introduce probabilistic measures; i.e., instead of exact recovery of signals, we focus on the probability that a random sparse signal (according to a given probability density function) fails to be reconstructed using its generalized samples. If the probability δ of failure can be made arbitrarily small, then the sampling scheme (the joint pair Ψ, Φ) is successful in recovering x with probability 1−δ, i.e., with high probability. Let us assume that Φ^(m) represents the submatrix formed by m random (uniformly chosen) rows of an orthonormal matrix Φ[n×n]. It is apparent that if we use Φ^(m) as the sampling matrix for a given sparsity domain, the failure probabilities for Φ^(0) and Φ^(n) are, respectively, one and zero, and as the index m increases, the failure probability decreases.
The important point shown in [81] is that the failure probability decays exponentially with respect to m/μ^2(Ψ,Φ). Therefore, we expect to reach an almost zero failure probability much earlier than m = n, despite the fact that the exact rate highly depends on the mutual behavior of the two matrices Ψ and Φ. More precisely, it is shown in [81] that

P[failure] ≤ δ whenever m ≥ C · μ^2(Ψ,Φ) · k · log(n/δ)

where P[failure] is the probability that the original signal cannot be recovered from the samples, C is a positive constant, and μ(Ψ,Φ) is the maximum coherence between the columns of Ψ and the rows of Φ, defined by [82]:

μ(Ψ,Φ) = √n · max[1≤a,b≤n] |⟨ψ[a], ϕ[b]⟩|

where ψ[a] and ϕ[b] are the a-th column and the b-th row of the matrices Ψ and Φ, respectively. The above result implies that the probability of reconstruction is close to one for

m ≥ C · μ^2(Ψ,Φ) · k · log n   (20)

The above derivation implies that the smaller the maximum coherence between the two matrices, the lower the number of required samples. Thus, to decrease the number of samples, we should look for Φ matrices with low coherence with Ψ. For this purpose, we use a random Φ. It is shown that the coherence of a random matrix with i.i.d. Gaussian entries with any unitary Ψ is considerably small [29], which makes it a proper candidate for the sampling matrix. Investigation of the probability distribution has shown that the Gaussian PDF is not the only solution (for example, the binary Bernoulli distribution and other types are considered in [83]) but may be the simplest to analyze. For the case of a random matrix with i.i.d. Gaussian entries (or more general distributions for which the concentration inequality holds [83]), a stronger inequality compared with (20) is valid; this implies that for reconstruction with a probability of almost one, the following condition on the number of samples m suffices [2,79]:

m ≥ C · k · log(n/k)   (21)

Notice that the required number of samples given in (20) is for random sampling of an orthonormal basis, while (21) represents the required number of samples with an i.i.d. Gaussian distributed sampling matrix.
Typically, the number in (21) is less than that of (20).

Reconstruction from compressed measurements

In this section, we consider reconstruction algorithms and the stability and robustness issues. We briefly discuss the following three methods: a—geometric, b—combinatorial, and c—information theoretic. The first two methods are standard, while the last one is more recent.

Geometric methods

The oldest methods for reconstruction from compressed sampling are geometric, i.e., ℓ[1]-minimization techniques for finding a k-sparse vector from a set of m = O(k log(n)) measurements (the y[i]'s); see, e.g., [29,81,84-86]. Let us assume that we have applied a suitable Φ which guarantees the invertibility of the sampling process. The reconstruction method should be a technique to recover a k-sparse vector s[n×1] from the observed samples y[m×1] = Φ[m×n]·Ψ[n×n]·s[n×1], or possibly y[m×1] = Φ[m×n]·Ψ[n×n]·s[n×1] + ν[m×1], where ν denotes the noise vector. Suitability of Φ implies that s[n×1] is the only k-sparse vector that produces the observed samples; therefore, s[n×1] is also the sparsest solution for y = Φ·Ψ·s. Consequently, s can be found using

min ‖s‖[0] subject to y = Φ·Ψ·s

Good methods for the minimization of the ℓ[0]-norm (sparsity) do not exist. The ones that are known are either computationally prohibitive or are not well behaved when the measurements are corrupted with noise. However, it is shown in [82] and later in [76,87] that minimization of the ℓ[1]-norm results in the same vector s in many cases:

min ‖s‖[1] subject to y = Φ·Ψ·s   (22)

The interesting part is that the number of samples required to replace ℓ[0]- with ℓ[1]-minimization has the same order of magnitude as the one for the invertibility of the sampling scheme. Hence, s can be derived from (22) using ℓ[1]-minimization. It is worthwhile to mention that replacing the ℓ[1]-norm with the ℓ[2]-norm, which is faster to implement, does not necessarily produce reasonable solutions.
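The ℓ[1] problem (22) can be solved with a generic LP solver via the equivalence of Table 5 (splitting s = u − v with u, v ≥ 0); the following sketch uses SciPy's linprog for illustration, with sizes and seed as assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, x):
    # Basis Pursuit as an LP (cf. Table 5): write s = u - v with u, v >= 0
    # and minimize 1^T (u + v) subject to [A, -A] [u; v] = x.
    m, n = A.shape
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=x,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# A 3-sparse vector measured through a random Gaussian matrix; this
# (k/m, m/n) pair lies well inside the BP success region.
rng = np.random.default_rng(3)
m, n = 30, 80
A = rng.standard_normal((m, n))
s = np.zeros(n)
s[[4, 20, 55]] = [1.0, -2.0, 0.5]
s_hat = basis_pursuit(A, A @ s)
```

At the LP optimum at most one of u[i], v[i] is nonzero for each i, so 1^T(u + v) equals ‖s‖[1], which is why the LP and (22) share the same minimizer.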
However, there are greedy methods (such as Matching Pursuit, as discussed in Section 7 on SCA [40,88]) which iteratively approach the best solution and compete with the ℓ[1]-norm optimization (equivalent to the Basis Pursuit methods, as discussed in Section 7 on SCA). To show the performance of the BP method, we have reproduced the famous phase transition diagram from [89] in Figure 11; this figure characterizes the perfect reconstruction region with respect to the parameters k/m and m/n. In fact, the curve represents the points for which the BP method recovers the sparse signal measured through a Gaussian random matrix with probability 50%. The interesting point is that the transition from the high-probability region (below the curve) to the low-probability one (above the curve) is very sharp, and when n→∞ the plotted curve separates the regions for probabilities 0 and 100%. Empirical results show that the curve does not change when the measurement matrix deviates from the Gaussian distribution, although this is yet to be proved [89].

Figure 11. The phase transition of the BP method for reconstruction of the sparse vector from Gaussian random measurement matrices; the probability of perfect reconstruction for the pairs (m/n, k/m) that stand above and below the curve is, respectively, 0 and 1 asymptotically.

A sufficient condition for these methods to work is that the matrix Φ·Ψ must satisfy the so-called restricted isometry property (RIP) [75,83,90], which will be discussed in the following section.

Restricted isometry property

It is important to note that the ℓ[1]-minimization algorithm produces almost optimal results for signals that are not exactly k-sparse. For example, almost-sparse signals (compressible signals) are more likely to occur in applications than exactly k-sparse vectors (e.g., the wavelet transform of an image consists mostly of small coefficients and a few large coefficients). Moreover, even exactly k-sparse signals may be corrupted by additive noise.
This characteristic of the ℓ[1]-minimization algorithms is called stability. Specifically, let β[k](s) denote the smallest possible error (in the ℓ[1]-norm) that can be achieved by approximating a signal s by a k-sparse vector z:

β[k](s) = min { ‖s − z‖[1] : z is k-sparse }

Then the vector ŝ produced by the ℓ[1]-reconstruction method is almost optimal in the sense that ‖s − ŝ‖[1] ≤ C·β[k](s) for some constant C independent of s. An implication of stability is that small perturbations in the signal caused by noise result in small distortions in the output solution. The previous result means that if s is not k-sparse, then ŝ is close to the k-sparse vector that has the k largest components of s. In particular, if s is k-sparse, then ŝ = s. This stability property is different from the so-called robustness, which is another important characteristic that we wish to have in any reconstruction algorithm. Specifically, an algorithm is robust if small perturbations in the measurements are reflected in small errors in the reconstruction. Both stability and robustness are achieved by the ℓ[1]-minimization algorithms (after a slight modification of (22)). Although the two concepts of robustness and stability can be related, they are not the same. In compressed sensing, the degree of stability and robustness of the reconstruction is determined by the characteristics of the sampling matrix Φ. We say that the matrix Φ has RIP of order k when, for all k-sparse vectors s, we have [30,76]:

(1 − δ[k]) ‖s‖[2]^2 ≤ ‖Φ·s‖[2]^2 ≤ (1 + δ[k]) ‖s‖[2]^2

where 0 ≤ δ[k] < 1 is the isometry constant. The RIP is a sufficient condition that provides us with the maximum and minimum power of the samples with respect to the input power and ensures that none of the k-sparse inputs falls in the null space of the sampling matrix. The RIP essentially states that every k columns of the matrix Φ[m×n] must be almost orthonormal (these submatrices preserve the norm within the constants 1 ± δ[k]). The explicit construction of a matrix with such a property is difficult for any given n, k, and m ≈ k log n; however, the problem has been studied in some cases [37,92].
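A Monte-Carlo sanity check of this near-isometry for a Gaussian sampling matrix can be sketched as follows; note that this only *estimates* the isometry constant over random supports (certifying δ[k] exactly requires a combinatorial search, as noted above), and the sizes and trial count are assumptions:

```python
import numpy as np

# For random k-sparse unit vectors s, ||Phi s||_2^2 should stay within
# [1 - delta_k, 1 + delta_k] when Phi satisfies RIP of order k.
rng = np.random.default_rng(4)
n, m, k = 400, 120, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # variance-1/m normalization

ratios = []
for _ in range(2000):
    s = np.zeros(n)
    s[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    s /= np.linalg.norm(s)                       # unit-norm k-sparse vector
    ratios.append(np.linalg.norm(Phi @ s) ** 2)

# Empirical deviation from perfect isometry over the sampled supports.
delta_est = max(max(ratios) - 1.0, 1.0 - min(ratios))
```

The 1/√m normalization makes each ‖Φ·s‖[2]^2 concentrate around ‖s‖[2]^2 = 1, which is the concentration inequality underlying the Gaussian RIP argument.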
Moreover, given such a matrix Φ, the evaluation of s (or alternatively x) via the minimization problem involves numerical methods (e.g., linear programming, GPSR, SPGL1, FPC [44,93]) for n variables and m constraints, which can be computationally expensive. However, probabilistic methods can be used to construct m×n matrices satisfying the RIP for a given n, k and m ≈ k log n. This can be achieved using Gaussian random matrices. If Φ is a sample of a Gaussian random matrix with the number of rows satisfying (20), Φ·Ψ is also a sample of a Gaussian random matrix with the same number of rows and thus it satisfies the RIP with high probability. Using matrices with the appropriate RIP property in the ℓ[1]-minimization, we guarantee exact recovery of k-sparse signals in a manner that is stable and robust against additive noise. Without loss of generality, assume that Ψ is equal to the identity matrix I, and that instead of Φ·s, we measure Φ·s + ν, where ν represents an additive noise vector. Since Φ·s + ν may not belong to the range space of Φ over k-sparse vectors, the ℓ[1]-minimization of (25) is modified as follows: minimize ||s||[1] subject to ||Φ·s − y||[2] ≤ ε, where ε^2 is the maximum noise power. Let us denote the result of the above minimization for y = Φ·s + ν by ŝ. With the above algorithm, it can be shown that ||ŝ − s||[2] ≤ C·ε for some constant C. This shows that small perturbations in the measurements cause small perturbations in the output of the ℓ[1]-minimization method (robustness). Another standard approach for reconstruction of compressed sampling is combinatorial. As before, without loss of generality Ψ = I. The sampling matrix Φ is constructed from a bipartite graph and consists of binary entries, i.e., entries that are either 1 or 0. Binary search methods are then used to find an unknown k-sparse vector x; see, e.g., [84,94-100] and the references therein. Typically, the binary matrix Φ has m = O(k log n) rows, and there exist fast algorithms for finding the solution x from the m measurements (typically a linear combination).
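To make the linear-programming route concrete, here is a minimal sketch (dimensions, sparsity, and seed are arbitrary example values) of noiseless basis pursuit: min ||s||[1] subject to Φ·s = y is recast as a linear program by splitting s into nonnegative parts s = u − v, and solved with scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 60, 30, 4

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
s_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
s_true[support] = rng.standard_normal(k)
y = Phi @ s_true

# l1 minimization as an LP: minimize 1^T (u + v) s.t. Phi (u - v) = y, u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
s_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(s_hat - s_true))
```

With these parameters the pair (k/m, m/n) lies well below the phase transition curve of Figure 11, so exact recovery is expected.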
However, the construction of Φ is also probabilistic. Information theoretic A more recent approach is adaptive and information theoretic [101]. In this method, the signal is assumed to be an instance of a vector random variable x = (x[1], …, x[n])^t, where (·)^t denotes the transpose operator, and the ith row of Φ is constructed using the value of the previous sample y[i−1]. Tools from the theory of Huffman coding are used to develop a deterministic construction of a sequence of binary sampling vectors (i.e., their components consist of 0 or 1) in such a way as to minimize the average number of samples (rows of Φ) needed to determine a signal. In this method, the construction of the sampling vectors can always be obtained. Moreover, it is proved that the expected total cost (number of measurements and reconstruction combined) needed to sample and reconstruct a k-sparse vector in R^n is no more than k log n + 2k. Sampling with finite rate of innovation The classical sampling theorem states that x(t) = Σ[i] x(i·T[s])·sinc(t/T[s] − i), where B is the bandwidth of x(t) and T[s] = 1/2B is the Nyquist interval. These uniform samples can be regarded as the degrees of freedom of the signal; i.e., a lowpass signal with bandwidth B has one degree of freedom in each Nyquist interval T[s]. Replacing the sinc function with other kernels in (27), we can generalize the sparsity (bandlimitedness) in the Fourier domain to a wider class of signals known as the shift invariant (SI) spaces: x(t) = Σ[i] c[i]·φ(t/T[s] − i). Similarly, the above signals have one degree of freedom in each T[s] period of time (the coefficients c[i]). A more general definition for the degree of freedom is introduced in [3] and is named the Rate of Innovation. For a given signal model, if we denote the degree of freedom in the time interval [t[1], t[2]] by C[x](t[1], t[2]), the local rate of innovation is defined by ρ(t[1], t[2]) = C[x](t[1], t[2]) / (t[2] − t[1]), and the global rate of innovation (ρ) is defined as the limit of ρ(t − τ/2, t + τ/2) as τ→∞, provided that the limit exists; in this case, we say that the signal has finite rate of innovation [3,27,102,103].
As an example, for lowpass signals with bandwidth B we have ρ = 2B, which is the same as the Nyquist rate. In fact, by proper choice of the sampling process, we are extracting the innovations of the signal. Now the question that arises is whether the uniform sampling theorems can be generalized to signals with finite rate of innovation. The answer is positive for a class of non-bandlimited signals including the SI spaces. Consider the following signals: x(t) = Σ[i] Σ[r] c[i,r]·φ[r](t − t[i]), where the φ[r](t) are arbitrary but known functions and {t[i]} is a realization of a point process with mean μ. The free parameters of this signal model are {c[i,r]} and {t[i]}. Therefore, for this class of signals the rate of innovation ρ is finite; however, the classical sampling methods cannot reconstruct these kinds of signals with the sampling rate predicted by ρ. There are many variations for the possible choices of the functions φ[r](t); nonetheless, we describe only the simplest version. Let the signal x(t) be a finite mixture of sparse Dirac functions: x(t) = Σ[i=1..k] c[i]·δ(t − t[i]), where {t[i]} is assumed to be an increasing sequence. For this case, since there are k unknown time instants and k unknown coefficients, we have C[x](t[1], t[k]) = 2k. We intend to show that the samples generated by proper sampling kernels φ(t) can be used to reconstruct the sparse Dirac functions. In fact, we choose the kernel φ(t) to satisfy the so-called Strang-Fix condition of order 2k: for suitable coefficients c[r,i], Σ[i] c[r,i]·φ(t − i) = t^r for r = 0, 1, …, 2k − 1. In the Fourier domain, this condition becomes Φ(0) ≠ 0 and Φ^(r)(2πl) = 0 for all nonzero integers l and r = 0, 1, …, 2k − 1, where Φ(Ω) denotes the Fourier transform of φ(t), and the superscript (r) represents the r^th derivative. It is also shown that such functions are of the form φ(t) = f(t)∗β[2k](t), where β[2k](t) is the B-spline of order 2k and f(t) is an arbitrary function with nonzero DC frequency [102]. Therefore, the function β[2k](t) is itself among the possible options for the choice of φ(t).
We can show that for sampling kernels which satisfy the Strang-Fix condition (32), the innovations of the signal x(t) in (31) can be extracted from the samples y[j] = ⟨x(t), φ(t − j)⟩ through the weighted sums τ[r] = Σ[j] c[r,j]·y[j] = Σ[i=1..k] c[i]·t[i]^r. In other words, we have filtered the discrete samples (y[j]) in order to obtain the values τ[r]; (35) shows that these values are only a function of the innovation parameters (amplitudes c[i] and time instants t[i]). However, the values τ[r] are nonlinearly related to the time instants and therefore, the innovations cannot be extracted from τ[r] using linear algebra.^d However, these nonlinear equations form a well-known system which was studied by Prony in the field of spectral estimation (see Section 5.1) and its discrete version is also employed in both real and Galois field versions of Reed-Solomon codes (see Section 4.1). This method, which is called the annihilating filter, is as follows: the sequence {τ[r]} can be viewed as the solution of a recursive equation. In fact, if we define H(z) = Π[i=1..k](1 − t[i]·z^−1) = Σ[i=0..k] h[i]·z^−i, we will have Σ[i=0..k] h[i]·τ[r−i] = 0 (see Section 4.1 and Appendices 1 and 2 for the proof of a similar recursion). In order to find the time instants t[i], we find the polynomial H(z) (or the coefficients h[i]) and we look for its roots. A recursive relation for the τ[r] becomes the linear system Σ[i=1..k] h[i]·τ[r−i] = −τ[r] for r = k, k + 1, …, 2k − 1 (with h[0] = 1). By solving this linear system of equations, we obtain the coefficients h[i] (for a discussion on the invertibility of the coefficient matrix see [102,104]) and consequently, by finding the roots of H(z), the time instants are revealed. It should be mentioned that the terms τ[0], …, τ[2k−1] used in (37) can be replaced with any 2k consecutive terms of {τ[i]}. After determining {t[i]}, (35) becomes a linear system of equations with respect to the values {c[i]}, which is easily solved. This reconstruction method can be used for other types of signals satisfying (30), such as signals represented by piecewise polynomials [102] (for large enough n, the n^th derivative of these signals becomes a sum of delta functions).
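The annihilating-filter steps above can be sketched numerically; the time instants and amplitudes below are arbitrary example values, and we start directly from the moments τ[r] rather than from actual kernel samples:

```python
import numpy as np

# hypothetical innovation parameters: k Diracs with instants t[i], amplitudes c[i]
t = np.array([0.2, 0.5, 0.9])
c = np.array([1.0, -2.0, 0.7])
k = len(t)

# the 2k moments tau[r] = sum_i c[i] * t[i]**r (the values extracted in (35))
tau = np.array([np.sum(c * t ** r) for r in range(2 * k)])

# annihilating filter: solve sum_{i=1..k} h[i] tau[r-i] = -tau[r], r = k..2k-1
A = np.array([[tau[r - i] for i in range(1, k + 1)] for r in range(k, 2 * k)])
h = np.linalg.solve(A, -tau[k:2 * k])

# roots of H(z) = 1 + h[1] z^-1 + ... + h[k] z^-k reveal the time instants
t_hat = np.sort(np.roots(np.concatenate(([1.0], h))).real)

# with the instants known, (35) is linear in the amplitudes (Vandermonde system)
V = np.vander(t_hat, k, increasing=True).T  # V[r, i] = t_hat[i]**r
c_hat = np.linalg.solve(V, tau[:k])

print("instants:", t_hat, "amplitudes:", c_hat)
```

The recovered instants and amplitudes match the true parameters, illustrating that 2k moments suffice for 2k unknowns.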
An important issue in nonlinear reconstruction is the noise analysis; for the purpose of denoising and performance under additive noise, the reader is encouraged to see [27]. A nice application of sampling theory and the concept of sparsity is error correction codes for real and complex numbers [105]. In the next section, we shall see that similar methods can be employed for decoding block and convolutional codes. Error correction codes: Galois and real/complex fields The relation between sampling and channel coding is the result of the fact that over-sampling creates redundancy [105]. This redundancy can be used to correct for “sparse” impulsive noise. Normally, channel encoding is performed in finite Galois fields as opposed to real/complex fields; the reason is the simplicity of logic circuit implementation and insensitivity to the pattern of errors. On the other hand, the real/complex field implementation of error correction codes has stability problems with respect to the pattern of impulsive, quantization and additive noise [52,59,74,106-109]. Nevertheless, such implementations have found applications in fault tolerant computer systems [110-114] and impulsive noise removal from 1-D and 2-D signals [31,32]. Similar to finite Galois fields, real/complex field codes can be implemented in both block and convolutional fashions. A discrete real-field block code is an oversampled signal with n samples such that, in the transform domain (e.g., DFT), a contiguous number of high-frequency components are zero. In general, the zeros do not have to be high-frequency components or contiguous. However, if they are contiguous, the resultant m equations (from the syndrome, i.e., the known zeros in the transform domain) in the m unknown erasures form a Vandermonde system, which ensures invertibility and consequently erasure recovery. The DFT block codes are thus a special case of Reed-Solomon (RS) codes in the field of real/complex numbers [105].
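A small numerical sketch of this Vandermonde-based erasure recovery for a DFT code follows; the block length, zero positions, and erasure pattern are arbitrary example choices, and for simplicity the codeword is kept complex rather than enforcing Hermitian symmetry:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 32, 8                        # block length and number of syndrome zeros
Lambda = np.arange(12, 12 + p)      # positions of the p consecutive spectral zeros

# build an oversampled codeword: random spectrum with zeros on Lambda
X = rng.standard_normal(n) + 1j * rng.standard_normal(n)
X[Lambda] = 0
x = np.fft.ifft(X)

# erase (zero out) p samples -> model as additive error e with e = -x on erasures
erasures = np.array([3, 7, 8, 20, 21, 22, 25, 30])
r = x.copy()
r[erasures] = 0
# syndrome: on Lambda the DFT of the received signal equals the DFT of the error
E_syn = np.fft.fft(r)[Lambda]

# E[j] = sum_m e[i_m] exp(-2j pi j i_m / n): a Vandermonde system in the e[i_m]
A = np.exp(-2j * np.pi * np.outer(Lambda, erasures) / n)
e_hat = np.linalg.solve(A, E_syn)

x_hat = r.copy()
x_hat[erasures] -= e_hat            # remove the estimated error
print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```

All p erased samples (including the bursty run at positions 20 to 22) are recovered to machine precision, since the number of erasures does not exceed the number of syndrome zeros.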
Figure 12 represents convolutional encoders of rate 1/2 of finite constraint length [105] and infinite precision per symbol. Figure 12a is a systematic convolutional encoder and resembles an oversampled signal discussed in Section 3 if the FIR filter acts as an ideal interpolating filter. Figure 12b is a non-systematic encoder used in the simulations to be discussed subsequently. In the case of additive impulsive noise, errors can be detected based on the side information that there are frequency gaps in the original oversampled signal (syndrome). In the following subsections, various algorithms for decoding along with simulation results are given for both block and convolutional codes. Some of these algorithms can be used in other applications such as spectral and channel estimation. Figure 12. Convolutional encoders. (a) A real-field systematic convolutional encoder of rate 1/2; the f[i]s are the taps of an FIR filter. (b) A non-systematic convolutional encoder of rate 1/2; the f[1][i]s and f[2][i]s are the taps of 2 FIR filters. Decoding of block codes—ELP method Iterative reconstruction for an erasure channel is identical to the missing sampling problem [115] discussed in Section 3.1.1 and therefore will not be discussed here. Let us assume that we have a finite discrete signal x[orig][i], where i = 0, 1, …, l − 1. The DFT of this sequence yields l complex coefficients in the frequency domain (X[orig][j]). If we insert p consecutive zeros^e to get n = l + p samples (X[j]) and take its inverse DFT, we end up with an oversampled version of the original signal with n complex samples (x[i]). This oversampled signal is real if Hermitian symmetry (complex conjugate symmetry) is preserved in the frequency domain, e.g., if the set Λ of p zeros is centered at j = n/2. For erasure channels, the sparse missing samples are denoted by e[i[m]] = x[i[m]], where the i[m]s denote the positions of the lost samples; consequently, for i ≠ i[m], e[i] = 0. The Fourier transform of e[i] (called E[j]) is known for the syndrome positions Λ.
The remaining values of E[j] can be found from the following recursion (see Appendix 1): E[r] = −Σ[t=1..k] h[t]·E[r−t], where the h[t]s are the ELP coefficients as defined in (36) and Appendix 1, r is a member of the complement of Λ, and the index additions are in mod(n). After finding the E[j] values, the spectrum of the recovered oversampled signal X[j] can be found by removing E[j] from the received signal (see (99) in Appendix 1). Hence the original signal can be recovered by removing the inserted zeros at the syndrome positions of X[j]. The above algorithm, called the ELP algorithm, is capable of correcting any combination of erasures. However, if the erasures are bursty, the above algorithm may become unstable. To combat bursty erasures, we can use the Sorted DFT (SDFT^f) [1,59,116,117] instead of the conventional DFT. The simulation results for block codes with erasure and impulsive noise channels are given in the following two subsections. Simulation results for erasure channels The simulation results of the ELP decoding implementation for n = 32, p = 16, and k = 16 erasures (a burst of 16 consecutive missing samples from position 1 to 16) are shown in Figure 13; this figure shows that we can have perfect reconstruction up to the capacity of the code (up to the finite computer precision, which is above 320 dB; this is also true for Figures 14 and 15). By capacity we mean the maximum number of erasures that a code is capable of correcting. Figure 13. Recovery of a burst of 16 sample losses. Figure 14. Simulation results of a convolutional decoder, using the iterative method with the generator matrix, after 30 CG iterations (see [72]); SNR versus the relative rate of erasures (w.r.t. full capacity) in an erasure channel. Figure 15. Simulation results of using the IMAT method for detecting the location and amplitude of the impulsive noise, λ=1.9. Since consecutive sample losses represent the worst case [59,116], the proposed method works better for randomly positioned sample losses.
In practice, the error recovery capability of this technique degrades with the increase of the block and/or burst size due to the accumulation of round-off errors. In order to reduce the round-off error, instead of the DFT, a transform based on the SDFT or the Sorted DCT (SDCT) can be used [1,59,116]. These types of transformations act as an interleaver to break down the bursty erasures. Simulation results for random impulsive noise channel There are several methods to determine the number, locations, and values of the impulsive noise samples, namely the Modified Berlekamp-Massey algorithm for real fields [118,119], ELP, IMAT, and constant false alarm rate with recursive detection estimation (CFAR-RDE). The Berlekamp-Massey method for real numbers is sensitive to noise and will not be discussed here [118]. The other methods are discussed below. ELP method [104] When the number and positions of the impulsive noise samples are not known, h[t] in (38) is not known for any t; therefore, we assume the maximum possible number of impulsive noise samples per block, i.e., as given in (96) in Appendix 1. To solve for h[t], we need to know only n − l samples of E in the positions where zeros are added in the encoding procedure. Once the values of h[t] are determined from the pseudo-inverse [104], the number and positions of the impulsive noise samples can be found from (98) in Appendix 1. The actual values of the impulsive noise can be determined from (38) as in the erasure channel case. For the actual algorithm, please refer to Appendix 2. As we are using the above method in the field of real numbers, exact zeros of {H[k]}, the DFT of {h[i]}, are rarely observed; consequently, the zeros can be found by thresholding the magnitudes of H[k]. Alternatively, the magnitudes of H[k] can be used as a mask for soft-decision; in this case, thresholding is not needed.
CFAR-RDE and IMAT methods [31] The CFAR-RDE method is similar to IMAT with the additional inclusion of the CFAR module to estimate the impulsive noise; CFAR is extensively used in radar to detect and remove clutter noise from data. In CFAR, we compare the noisy signal with its neighbors and determine whether an impulsive (sparse) noise is present or not (using soft decision [31]).^g After removing the impulsive noise in a “soft” fashion, we estimate the signal using the iterative method for an erasure channel as described in Section 3.1.1 for random sampling, or using the ELP method. The impulsive noise and signal detection and estimation go through several iterations in a recursive fashion as shown in Figure 16. As the number of recursions increases, the certainty about the detection of impulsive noise locations also increases; thus, the soft decision is designed to act more like a hard decision during the later iteration steps, which yields the error locations. Meanwhile, further iterations are performed to enhance the quality of the original signal, since suppression of the impulsive noise also suppresses the original signal samples at the locations of the impulsive noise. The improvement of CFAR-RDE over a simple soft decision RDE is shown in Figure 17. Figure 16. CFAR-RDE method with the use of adaptive soft thresholding and an iterative method for signal reconstruction. Figure 17. Comparison of CFAR-RDE and a simple soft decision RDE for DFT block codes. Decoding for convolutional codes The performance of convolutional decoders depends on the coding rate, the number and values of the FIR taps of the encoders, and the type of the decoder. Our simulation results are based on the structure given in Figure 12b, with the encoder taps given in (39). The input signal is taken from a uniform random distribution of size 50, and the simulations are run 1,000 times and then averaged.
The following subsections describe the simulation results for erasure and impulsive noise channels. Decoding for erasure channels For the erasure channels, we derive the generator matrix of a convolutional encoder (Figure 12b with the taps given in (39)) as shown in (40) [4]. An iterative decoding scheme for this matrix representation is similar to that of Figure 7, except that the operator G consists of the generator matrix, a mask (erasure operation), and the transpose of the generator matrix. If the rate of erasure does not exceed the encoder full capacity, the matrix form of the operator G can be shown to be a nonnegative definite square matrix and therefore its inverse exists [51,60]. Figure 14 shows that the SNR values gradually decrease as the rate of erasure reaches its maximum (capacity). Decoding for impulsive noise channels Let us consider x and y as the input and the output streams of the encoder, respectively, related to each other through the generator matrix G as y = G·x. Denoting the observation vector at the receiver by ŷ, we have ŷ = y + ν, where ν is the impulsive noise vector. Multiplying by the transpose of the parity check matrix H^T, we get H^T·ŷ = H^T·ν (since H^T·G = 0). Multiplying the resultant by the right pseudo-inverse of H^T, we derive H·(H^T·H)^−1·H^T·ŷ = H·(H^T·H)^−1·H^T·ν. Thus, by multiplying the received vector by H·(H^T·H)^−1·H^T (the projection matrix onto the range space of H), we obtain an approximation of the impulsive noise. In the IMAT method, we apply the operator H·(H^T·H)^−1·H^T in the iteration of Figure 9; the threshold level is reduced exponentially at each iteration step. The block diagram of IMAT in Figure 9 is modified as shown in Figure 18. Figure 18. The modified diagram of the IMAT method from Figure 9. For the simulation results, we use the generator matrix shown in (40), which can be calculated from [4].
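The key algebraic fact, that the projection H·(H^T·H)^−1·H^T annihilates the code component G·x and leaves a function of the impulsive noise alone, can be checked numerically. The sketch below uses a random tall matrix G as a stand-in for an actual convolutional generator matrix; all dimensions and the sparsity level are arbitrary example choices:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(7)
n, m, q = 128, 64, 8               # code length, message length, impulse count

G = rng.standard_normal((n, m))    # stand-in generator matrix (rate 1/2)
H = null_space(G.T)                # columns span the space orthogonal to range(G)
P = H @ H.T                        # = H (H^T H)^{-1} H^T, since H is orthonormal

x = rng.standard_normal(m)
nu = np.zeros(n)                   # sparse impulsive noise
nu[rng.choice(n, size=q, replace=False)] = 10 * rng.standard_normal(q)
y_hat = G @ x + nu                 # received vector

# the projection removes G x exactly: P y_hat equals P nu
print(np.linalg.norm(P @ y_hat - P @ nu))
```

In the IMAT iteration, P·ŷ then serves as the starting estimate of ν, and adaptive thresholding of its large components localizes the impulses.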
In our simulations, the locations of the impulsive noise samples are generated randomly and their amplitudes have Gaussian distributions with zero mean and variance equal to 1, 2, 5, and 10 times the variance of the encoder output. The results are shown in Figure 15 after 300 iterations. This figure shows that the performance is better for high-variance impulsive noise, since larger impulses are more easily detected. Spectral estimation In this section, we review some of the methods which are used to evaluate the frequency content of data [7-10]. In the field of signal spectrum estimation, there are several methods which are appropriate for different types of signals. Some methods are more suitable to estimate the spectrum of wideband signals, whereas others are better for the extraction of narrow-band components. Since our focus is on sparse signals, it would be reasonable to assume sparsity in the frequency domain, i.e., we assume the signal to be a combination of several sinusoids plus white noise. Conventional methods for spectrum analysis are non-parametric methods in the sense that they do not assume any model (statistical or deterministic) for the data, except that it is zero or periodic outside the observation interval. For example, the periodogram is a well-known nonparametric method that can be computed via the FFT algorithm: S(f) = (T[s]/m)·|Σ[r=0..m−1] x[r]·e^(−j2πf·r·T[s])|^2, where m is the number of observations, T[s] is the sampling interval (usually assumed to be unity), and x[r] is the signal. Although non-parametric methods are robust with low computational complexity, they suffer from fundamental limitations. The most important limitation is their resolution; too closely spaced harmonics cannot be distinguished if the spacing is smaller than the inverse of the observation period. To overcome this resolution problem, parametric methods are devised. Assuming a statistical model with some unknown parameters, we can increase resolution by estimating the parameters from the data at the cost of more computational complexity.
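A minimal periodogram sketch follows; the tone frequencies, amplitudes, and noise level are arbitrary example values, and the stronger tone is placed on an FFT bin so its peak location is exact:

```python
import numpy as np

rng = np.random.default_rng(3)
m, Ts = 256, 1.0
r = np.arange(m)

# two sinusoids plus white noise
x = np.sin(2 * np.pi * 0.125 * r) + 0.5 * np.sin(2 * np.pi * 0.30 * r) \
    + 0.1 * rng.standard_normal(m)

# periodogram via the FFT, as in (41)
P = (Ts / m) * np.abs(np.fft.fft(x)) ** 2
freqs = np.fft.fftfreq(m, d=Ts)

peak = freqs[np.argmax(P[:m // 2])]
print("strongest tone found at f =", peak)
```

The second tone, which falls between FFT bins, illustrates the resolution limitation: its energy leaks into neighboring bins, so its peak location is only accurate to about 1/m.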
Theoretically, in parametric methods, we can resolve closely spaced harmonics with limited data length if the SNR goes to infinity.^h In this section, we shall discuss three parametric approaches for spectral estimation: the Pisarenko, the Prony, and the MUSIC algorithms. The first two are mainly used in spectral estimation, while the MUSIC algorithm was first developed for array processing and later was extended to spectral estimation. It should be noted that the parametric methods, unlike the non-parametric approaches, require prior knowledge of the model order (the number of tones). This can be decided from the data using the minimum description length (MDL) method discussed in the next section. Prony method The Prony method was originally proposed for modeling the expansion of gases [120]; however, it is now known as a general spectral estimation method. In fact, Prony tried to fit a weighted mixture of k damped complex exponentials to 2k data measurements. The original approach is related to noiseless measurements; however, it has been extended to produce least squares solutions for noisy measurements. We focus only on the noiseless case here. The signal is modeled as a weighted mixture of k complex exponentials with complex amplitudes and frequencies: x[r] = Σ[i=1..k] b[i]·z[i]^r for r = 1, …, 2k, where x[r] is the noiseless discrete sparse signal consisting of k exponentials with parameters b[i] = a[i]·e^(jθ[i]) and z[i] = e^(j2πf[i]), where a[i], θ[i], f[i] represent the amplitude, phase, and frequency (f[i] is a complex number in general), respectively. Let us define the polynomial H(z) such that its roots represent the complex exponential functions related to the sparse tones (see Section 3.3 on FRI, (38) on ELP, and Appendix 1): H(z) = Π[i=1..k](1 − z[i]·z^−1) = Σ[i=0..k] h[i]·z^−i. By shifting the index of (44), multiplying by the parameter h[j], and summing over j, we get Σ[j=0..k] h[j]·x[r−j] = Σ[i=1..k] b[i]·z[i]^r·H(z[i]) = 0, where r is indexed in the range k + 1 ≤ r ≤ 2k. This formula implies a recursive equation to solve for the h[i]s [8]. After the evaluation of the h[i]s, the roots of (46) yield the frequency components.
Hence, the amplitudes of the exponentials can be evaluated from the set of linear equations given in (44). The basic Prony algorithm is given in Table 13. The Prony method is sensitive to noise, as was also observed in the ELP and the annihilating filter methods discussed in Sections 3.3 and 4.1. There are extended Prony methods that are better suited for noisy measurements [10]. Pisarenko harmonic decomposition (PHD) The PHD method is based on the polynomial of the Prony method and utilizes the eigen-decomposition of the data covariance matrix [10]. Assume that k complex tones are present in the spectrum of the signal. Then, decompose the covariance matrix of dimension k + 1 into a k-dimensional signal subspace and a 1-dimensional noise subspace that are orthogonal to each other. Including the additive noise, the observations are given by y[r] = x[r] + ν[r], where y[r] is the observation sample and ν[r] is a zero-mean noise term that satisfies E{ν[r]·ν[r + i]} = σ^2·δ[i]. By replacing x[r] = y[r] − ν[r] in the difference equation (47), we get Σ[j=0..k] h[j]·y[r−j] = Σ[j=0..k] h[j]·ν[r−j], which reveals the auto-regressive moving average (ARMA) structure (order (k, k)) of the observations y[r] as a random process. To benefit from the tools of linear algebra, let us define the vectors h = [h[0], …, h[k]]^T, y = [y[r], …, y[r−k]]^T, and ν = [ν[r], …, ν[r−k]]^T. Now (49) can be written as y^T·h = ν^T·h. Multiplying both sides of (51) by y and taking the expected value, we get E{y·y^H}·h = E{y·ν^H}·h. Note that E{y·ν^H} = σ^2·I. We thus have the eigen-equation R[yy]·h = σ^2·h, which is the key equation of the Pisarenko method. The eigen-equation of (54) states that the elements of the eigenvector of the covariance matrix corresponding to the smallest eigenvalue (σ^2) are the same as the coefficients in the recursive equation of x[r] (the coefficients of the ARMA model in (49)). Therefore, by evaluating the roots of the polynomial in (46) with coefficients that are the elements of this vector, we can find the tones in the spectrum.
Although we started by the eigen-decomposition of R[yy], we observed that only one of the eigenvectors is required: the one that corresponds to the smallest eigenvalue. This eigenvector can be found using simple approaches (in contrast to a full eigen-decomposition) such as the power method. The PHD method is briefly summarized in Table 14. A different formulation of the PHD method with a linear programming approach (refer to Section 2.2 for a description of linear programming) for array processing is studied in [121]. There, the PHD method is shown to be equivalent to a geometrical projection problem which can be solved using ℓ[1]-norm optimization. MUSIC MUltiple SIgnal Classification (MUSIC) is a method originally devised for high-resolution source direction estimation in the context of array processing, which will be discussed in the next section [122]. The inherent equivalence of array processing and time series analysis paves the way for the employment of this method in spectral estimation. MUSIC can be understood as a generalization and improvement of the Pisarenko method. It is known that, in the context of array processing, MUSIC attains statistical efficiency^i in the limit of an asymptotically large number of observations. In the PHD method, we construct an autocorrelation matrix of dimension k + 1 under the assumption that its smallest eigenvalue (σ^2) belongs to the noise subspace. Then we use the Hermitian property of the covariance matrix to conclude that the noise eigenvector should be orthogonal to the signal eigenvectors. In MUSIC, we extend this method using a noise subspace of dimension greater than one to improve the performance. We also use some kind of averaging over the noise eigenvectors to obtain a more reliable signal estimator. The data model for the sum of exponentials plus noise can be written in the matrix form y = A·b + ν, where the length of data is taken as m > k, the elements of A are A[r,i] = z[i]^r = e^(j2πf[i]·r), and ν represents the noise vector.
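A compact numerical sketch of the Pisarenko procedure (the tone frequencies, data length, and noise level are arbitrary example values): form the (k+1)-dimensional sample covariance, take the eigenvector of its smallest eigenvalue, and read the tones off the roots of the associated polynomial:

```python
import numpy as np

rng = np.random.default_rng(4)
f = np.array([0.1, 0.23])          # true normalized frequencies
k, m = len(f), 2048
r = np.arange(m)
y = sum(np.exp(2j * np.pi * fi * r) for fi in f)
y = y + 0.05 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# sample covariance matrix of dimension k + 1
d = k + 1
Y = np.array([y[i:i + d] for i in range(m - d)])
R = (Y.T @ Y.conj()) / (m - d)     # R[a, b] ~ E{ y[r+a] y*[r+b] }

w, V = np.linalg.eigh(R)           # eigenvalues in ascending order
h = V[:, 0]                        # eigenvector of the smallest eigenvalue

# roots of the polynomial with coefficients h lie near exp(j 2 pi f[i])
f_hat = np.sort(np.angle(np.roots(h)) / (2 * np.pi) % 1)
print("estimated tones:", f_hat)
```

The smallest eigenvalue itself estimates the noise power σ^2, consistent with the eigen-equation derived above.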
Since the frequencies are different, A is of rank k and the first term in (55) forms a k-dimensional signal subspace, while the second term is randomly distributed in both signal and noise subspaces; i.e., unlike the first term, it is not confined to a subspace of lower dimension. The correlation matrix of the observations is given by R = A·E{b·b^H}·A^H + σ^2·I, where the noise is assumed to be white with variance σ^2. If we decompose R into its eigenvectors, the k eigenvalues corresponding to the k-dimensional subspace of the first term of (57) are essentially greater than the remaining m − k eigenvalues, σ^2, corresponding to the noise subspace; thus, by sorting the eigenvalues, the noise and signal subspaces can be determined. Assume that ω is an arbitrary frequency and e(ω) = [1, e^jω, e^j2ω, …, e^j(m−1)ω]^T. The MUSIC method estimates the spectral content of the signal at frequency ω by projecting the vector e(ω) onto the noise subspace. When the projected vector is zero, the vector e(ω) falls in the signal subspace and, most likely, ω is among the spectral tones. In fact, the frequency content of the spectrum is inversely proportional to the squared ℓ[2]-norm of the projected vector: P[MU](ω) = 1 / Σ[i=k+1..m] |e^H(ω)·v[i]|^2, where the v[i]s are the eigenvectors of R corresponding to the noise subspace. The k peaks of P[MU](ω) are selected as the frequencies of the sparse signal. The determination of the number of frequencies (model order) in MUSIC is based on the MDL and Akaike information criterion (AIC) methods to be discussed in the next section. The MUSIC algorithm is briefly explained in Table 15. Figure 19 compares the results (in the order of improved performance) of various spectral line estimation methods. The first (upper) figure shows the original spectral lines, and the four other figures show the results for the Prony, PHD, MUSIC, and IMAT methods. We observe that the Prony method (which is similar to the ELP and annihilating filter methods of Section 3.3 and (38)) does not yield good results due to its sensitivity to noise, while the IMAT method performs best.
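A numerical sketch of the MUSIC pseudo-spectrum for time-series data (the covariance dimension, tone frequencies, search grid, and noise level are all arbitrary example choices):

```python
import numpy as np

rng = np.random.default_rng(5)
f = np.array([0.1, 0.25])          # true tones
k, d, m = len(f), 10, 4096         # d = covariance dimension > k
r = np.arange(m)
y = sum(np.exp(2j * np.pi * (fi * r + rng.uniform())) for fi in f)
y = y + 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

Y = np.array([y[i:i + d] for i in range(m - d)])
R = (Y.T @ Y.conj()) / (m - d)

w, V = np.linalg.eigh(R)           # ascending eigenvalues
En = V[:, :d - k]                  # noise-subspace eigenvectors v[i]

grid = np.linspace(0, 0.5, 2000, endpoint=False)
E = np.exp(2j * np.pi * np.outer(np.arange(d), grid))   # e(omega) on the grid
P_mu = 1.0 / np.sum(np.abs(En.conj().T @ E) ** 2, axis=0)

# keep the k largest local maxima of the pseudo-spectrum
peaks = [i for i in range(1, len(grid) - 1)
         if P_mu[i] > P_mu[i - 1] and P_mu[i] > P_mu[i + 1]]
top = sorted(sorted(peaks, key=lambda i: P_mu[i])[-k:])
f_hat = grid[top]
print("estimated tones:", f_hat)
```

Averaging over the d − k noise eigenvectors is what makes the estimate more reliable than Pisarenko's single-eigenvector variant.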
The application of IMAT to spectral estimation is a clear confirmation of our contention that we can apply tools developed in some areas to other areas for better performance. Figure 19. A comparison of various spectral estimation methods for a sparse mixture of sinusoids (the top figure) using Prony, Pisarenko, MUSIC, and IMAT methods (in the order of improved performance); input SNR is 5dB and 256 time samples are used. Sparse array processing There are three types of array processing: 1—estimation of multi-source location (MSL) and Direction of Arrival (DOA), 2—sparse array beam-forming and design, and 3—sparse sensor networks. The first topic is related to estimating the directions and/or the locations of multiple targets; this problem is very similar to the problem of spectral estimation dealt with in the previous section; the relations among sparsity, spectral estimation, and array processing were discussed in [123,124]. The second topic is related to the design of sparse arrays with some missing and/or random array sensors. The last topic, depending on the type of sparsity, is either similar to the second topic or related to CS of sparse signal fields in a network. In the following, we will only consider the first kind. Array processing for MSL and DOA estimation Among the important fields of active research in array processing are MSL and DOA estimation [122,125,126]. In such schemes, a passive or active array of sensors is used to locate the sources of narrow-band signals. Some applications may assume far-field sources (e.g., radar signal processing) where the array is only capable of DOA estimation, while other applications (e.g. biomedical imaging systems) assume near-field sources where the array is capable of locating the sources of radiation. A closely related field of study is spectral estimation due to similar linear statistical models. 
The stochastic sparse signals pass through a partially known linear transform (e.g., the array response or an inverse Fourier transform) and are observed in a noisy environment. In the array processing context, the common temporal frequency of the source signals is known. Spatial sampling of the signal is used to extract the direction of the signal (the spatial frequency). As a far-field approximation, the signal wavefronts are assumed to be planar. Consider a signal arriving with angle φ as in Figure 20. Simultaneous sampling of this wavefront on the array will exhibit a phase change of the signal from sensor to sensor. In this way, discrete samples of a complex exponential are obtained, where its frequency can be translated to the direction of the signal source. The response of a uniform linear array (ULA) to a wavefront impinging on the array from direction φ is a(φ) = [1, e^(−j2π(d/λ)sin φ), …, e^(−j2π(n−1)(d/λ)sin φ)]^T, where d is the inter-element spacing of the array, λ is the wavelength, and n is the number of sensors in the array. When multiple sources are present, the observed vector is the sum of the response (sweep) vectors and noise. This resembles the spectral estimation problem, with the difference that the sampling of the array elements is not limited in time. In fact, in array processing an additional degree of freedom (the number of elements) is present; thus, array processing is more general than spectral estimation. Figure 20. Uniform linear array with element distance d, element length l, and a wave arriving from direction φ. Two main fields in array processing are MSL and DOA for estimating the source locations and directions, respectively; for both purposes, the angle of arrival (azimuth and elevation) should be estimated, while for MSL an extra parameter of range is also needed. The simplest case is the 1-D ULA (azimuth-only) for DOA estimation. For the general case of k sources with angles φ[1], …, φ[k] with respect to the array, the ULA response is given by the matrix A(φ) = [a(φ[1]), …, a(φ[k])], where the vector φ of DOAs is defined as φ = [φ[1], …, φ[k]]^T.
In the above notation, A is a matrix of size n×k and the a(φ[i]) are column vectors. Now, the vector of observations at the array elements is given by

y[i] = A s[i] + ν[i]

where the vector s[i] represents the multi-source signals and ν[i] is the white Gaussian noise vector. Source signals and additive noise are assumed to be zero-mean and i.i.d. normal processes with covariance matrices P and σ^2 I, respectively. With these assumptions, the observation vector y[i] will also follow an n-dimensional zero-mean normal distribution with the covariance matrix

R = A P A^H + σ^2 I    (62)

In the field of DOA estimation, extensive research has been accomplished in (1) source enumeration and (2) DOA estimation methods. Both subjects correspond to the determination of the parameters k and φ. Although some methods are proposed for simultaneous detection and estimation of the model statistical characteristics [127], most of the literature is devoted to two-stage approaches; first, the number of active sources is detected and then their directions are estimated by techniques such as estimation of signal parameters via rotational invariance techniques (ESPRIT) [128-132]. Usually, the joint detection-estimation methods outperform the two-stage approaches at the cost of higher computational complexity. In the following, we will describe the Minimum Description Length (MDL) criterion as a powerful tool to detect the number of active sources.

Minimum description length

One of the most successful methods in array processing for source enumeration is the use of the MDL criterion [133]. This technique is very powerful and outperforms its older counterparts, including AIC [134-136]. Hence, we confine our discussion to MDL algorithms. Minimum description length is an optimum method of finding the model order and parameters for the most compressed representation of the observed data.
For the purpose of statistical modeling, the MAP probability or the suboptimal criterion of ML is used; more precisely, conditioned on the observed data, the maximum probability among the possible options is found (hypotheses testing) [137]. When the model parameters are not known, the MAP and ML criteria result in the most complex approach; consider fitting a finite sequence of m data samples to a polynomial of unknown degree [33]:

y(t) = P(t) + ν(t)

where P(t) is a polynomial of degree k, ν(t) is the observed Gaussian noise, and k is the unknown model order (degree of the polynomial P(t)) which determines the complexity. Clearly, m−1 is the maximum required order for a unique description of the data (m observed samples), and the ML criterion always selects this maximum value (k̂ = m−1); i.e., the ML method forces the polynomial P(t) to pass through all the points. MDL, on the other hand, yields a sparser solution (k̂ < m−1). Due to the existence of additive noise, it is quite rational to look for a polynomial with degree less than m which also takes the complexity order into account. In MDL, the idea of how to consider the complexity order is borrowed from information theory: given a specific statistical distribution, we can find an optimum source coding scheme (e.g., Huffman coding) which attains the lowest average code length for the symbols. Furthermore, if p[s] is the distribution of the source s and q[s] is another distribution, we have [138]:

−∑_s p[s] log q[s] ≥ H(s)

where H(s) is the entropy of the signal. This implies that the minimum average code length is obtained only for the correct source distribution (model parameters); in other words, the choice of wrong model parameters (distribution function) leads to larger code lengths. When a particular model with the set of parameters θ is assumed for the data a priori, each time a sequence y is received, the parameters should first be estimated. The optimum estimation method is usually the ML estimator, which results in θ̂[ML] = argmax_θ p(y|θ).
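The polynomial example above can be made concrete with a short simulation (our own sketch, using the usual Gaussian-noise two-part MDL score with κ = k+1 coefficients): the residual alone keeps shrinking as the degree grows, while the MDL score turns back up near the true degree.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 40
t = np.linspace(-1, 1, m)
y = 1 - 2 * t + 3 * t**2 + 0.1 * rng.standard_normal(m)   # true degree is 2

def mdl_score(k):
    # Gaussian-noise two-part MDL: (m/2) log(RSS/m) + (kappa/2) log m,
    # with kappa = k + 1 polynomial coefficients to encode.
    coeffs = np.polyfit(t, y, k)
    rss = np.sum((np.polyval(coeffs, t) - y) ** 2)
    return 0.5 * m * np.log(rss / m) + 0.5 * (k + 1) * np.log(m)

scores = [mdl_score(k) for k in range(8)]
print(int(np.argmin(scores)))   # the MDL minimum sits at (or very near) degree 2
```

Raising the degree past 2 reduces the residual only by roughly one noise variance per added coefficient, which the (1/2) log m penalty outweighs.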
Now, the probability distribution for a received sequence y becomes p(y|θ̂[ML](y)), which, according to information theory, requires an average code length of −log p(y|θ̂[ML](y)) bits. In addition to the data, the model parameters should also be encoded, which in turn requires (κ/2) log m bits, where κ is the number of independent parameters to be encoded in the model and m is the number of data points. Thus, the two-part MDL selects the model that minimizes the whole required code length, which is given by [139]:

MDL(k) = −log p(y|θ̂[ML]) + (κ/2) log m    (65)

The first term is the ML term for data encoding, and the second term is a penalty function that inhibits the number of free parameters of the model from becoming very large.

Example of using MDL in spectral estimation

An example from spectral estimation can help clarify how the MDL method works (for more information refer to the previous section on spectral estimation). The mathematical formulation of the problem is as follows: if there are k (unknown) sinusoids with various frequencies, amplitudes, and phases (3k unknown parameters) observed in a noisy data vector x (sampled at n distinct time slots), the maximum likelihood function for this observed data with additive Gaussian noise of variance σ^2 is

p(x|θ) = (2πσ^2)^{−n/2} exp( −(1/(2σ^2)) ∑_{t=1}^{n} ( x(t) − ∑_{i=1}^{k} a[i] sin(ω[i]t + φ[i]) )^2 )    (66)

Here θ = {a[i], ω[i], φ[i]}, i = 1, …, k, are the unknown sinusoidal parameters to be estimated to compute the likelihood term in (65), which in this case is computed from (66). The 3k unknown parameters are estimated by a grid search; i.e., all possible values of frequency and phase are tested (the amplitude can be estimated from the assumed frequency and phase [140]), and the combination maximizing the likelihood function (66) is selected as the best estimate. To find the number of embedded sinusoids in the noisy observed data, it is initially assumed that k=0 and (65) is calculated; then k is increased, and by using the grid search, the maximum value of the likelihood for the assumed k is calculated from (66), and this calculated value is then used to compute (65).
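The detection loop can be sketched numerically. To keep the example short, our own simplification replaces the full grid search by keeping the k strongest DFT bins (the test frequencies are placed on the DFT grid, so this is a reasonable stand-in); the MDL score then trades the residual against the 3k sinusoid parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
# Two sinusoids in white Gaussian noise (true k = 2), frequencies on the DFT grid.
x = (np.sin(2 * np.pi * (28 / n) * t)
     + 0.8 * np.sin(2 * np.pi * (70 / n) * t + 1.0)
     + 0.5 * rng.standard_normal(n))

X = np.fft.rfft(x)
order = np.argsort(np.abs(X))[::-1]          # DFT bins, strongest first

def mdl_score(k):
    # Residual after keeping the k strongest bins; 3 parameters per sinusoid.
    Xk = np.zeros_like(X)
    Xk[order[:k]] = X[order[:k]]
    rss = np.sum((x - np.fft.irfft(Xk, n)) ** 2)
    return 0.5 * n * np.log(rss / n) + 1.5 * k * np.log(n)

scores = [mdl_score(k) for k in range(9)]
k_hat = int(np.argmin(scores))
print(k_hat)   # typically 2
```

The score drops sharply while genuine sinusoids are being absorbed and flattens out once only noise bins remain, so its minimum marks the estimated model order.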
This procedure should be followed as long as (65) decreases and aborted when it starts to rise. The k minimizing (65) is the k selected by the MDL method and hopefully reveals the true number of sinusoids in the noisy observed data. It is obvious that the sparsity condition, i.e., k << n, is necessary for the efficient operation of MDL. In addition to the number of sinusoids, MDL has apparently estimated the frequency, amplitude, and phase of the embedded sinusoids. This should make it clear why such methods are called detection-estimation algorithms. The very same method can be used to find the number, positions, and amplitudes of an impulsive noise added to a low-pass signal in additive noise. If the samples of the added impulsive noise are statistically independent from each other, the high-pass samples of the discrete Fourier transform (DFT) of the noisy observed data with impulsive noise should be taken and the same method applied.

MDL source enumeration

In the source enumeration problem, our model is a multivariate Gaussian random process with zero mean and covariance of the type shown in (62), where the number of active sources is unknown. In some enumeration methods (other than MDL), the exact form of (62) is employed, which results in high computational complexity. In the conventional MDL method, it is assumed that the model is a covariance matrix with a spherical subspace of dimension n−k (a subspace in which all the covariance eigenvalues are equal). Suppose the sample covariance matrix is

R̂ = (1/m) ∑_{i=1}^{m} x[i] x[i]^H

and assume the ordered eigenvalues of R̂ are λ̂[1] ≥ λ̂[2] ≥ … ≥ λ̂[n], while the ordered eigenvalues of the exact covariance matrix R are λ[1] ≥ λ[2] ≥ … ≥ λ[n]. The normal distribution function of the received complex data x is [129]

p(x|R) = (1/(π^n |R|)) exp( −tr(R^{−1} x x^H) )

where tr(·) stands for the trace operator. The ML estimates of the signal eigenvalues in R are λ̂[1], …, λ̂[k], with the respective eigenvectors v̂[1], …, v̂[k]. Since λ[k+1] = … = λ[n] = σ^2, the ML estimate of the noise eigenvalue is σ̂^2 = (1/(n−k)) ∑_{i=k+1}^{n} λ̂[i], and v̂[k+1], …, v̂[n] are all noise eigenvectors.
Thus, the ML estimate of R given R̂ is

R[ML] = ∑_{i=1}^{k} λ̂[i] v̂[i] v̂[i]^H + σ̂^2 ∑_{i=k+1}^{n} v̂[i] v̂[i]^H

In fact, since we know that R has a spherical subspace of dimension n−k, we correct the observed R̂ to obtain R[ML]. Now, we calculate −log(p(x|R[ML])); it is easy to show that tr(R[ML]^{−1} R̂) = n, which is independent of k and can be omitted in the minimization of (65). Thus, for the first term of (65) we only need the determinant |R[ML]|, which is the product of the eigenvalues, and the MDL criterion becomes

MDL(k) = m ( ∑_{i=1}^{k} log λ̂[i] + (n−k) log σ̂^2 ) + (κ/2) log m

where κ is the number of free parameters in the distribution. This expression should be computed for different values of 0 ≤ k ≤ n−1 and its minimum point selected as k̂. Note that we can subtract the term m ∑_{i=1}^{n} log λ̂[i], which does not depend on k, from the expression to get the well-known MDL criterion [129]:

MDL(k) = −m(n−k) log[ ( ∏_{i=k+1}^{n} λ̂[i] )^{1/(n−k)} / ( (1/(n−k)) ∑_{i=k+1}^{n} λ̂[i] ) ] + (κ/2) log m

where the first term is the likelihood ratio for the sphericity test of the covariance matrix. This likelihood ratio is a function of the arithmetic and geometric means of the noise subspace eigenvalues [141]. Figure 21 is an example of MDL performance in determining the number of sources in array processing. It is evident that at low SNRs, the MDL has a strong tendency to underestimate the number of sources, while as the SNR increases, it gives a consistent estimate. Also, at high SNRs, underestimation is more probable than overestimation.

Figure 21. An MDL example; the vertical axis is the probability of order detection, and the other two axes are the number of sources and the SNR values. The MDL method estimates the number of active sources (which is 2) correctly when the SNR value is relatively high.

Now we compute the number of independent parameters (κ) in the model. Since the noise subspace is spherical, the choice of eigenvectors in this subspace can be any arbitrary orthonormal set; i.e., no information is revealed when these vectors are known. Thus, the set of parameters is θ = {λ[1], …, λ[k], σ^2, v[1], …, v[k]}. The eigenvalues of a Hermitian matrix (correlation matrix) are all real, while the eigenvectors are normalized complex vectors.
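The eigenvalue-based criterion above is straightforward to implement. The sketch below is our own simulation (a two-source ULA scenario at a reasonably high SNR, half-wavelength spacing): it computes the sphericity likelihood ratio from the sample-covariance eigenvalues and picks the minimizing k.

```python
import numpy as np

def mdl_enumerate(eigvals, m):
    """MDL source enumeration from the ordered sample-covariance eigenvalues
    (largest first); m is the number of snapshots."""
    n = len(eigvals)
    scores = []
    for k in range(n):
        noise = eigvals[k:]                        # presumed noise-subspace eigenvalues
        geo = np.exp(np.mean(np.log(noise)))       # geometric mean
        ari = np.mean(noise)                       # arithmetic mean
        loglik = -m * (n - k) * np.log(geo / ari)  # sphericity likelihood ratio term
        kappa = k * (2 * n - k)                    # free parameters, up to a constant
        scores.append(loglik + 0.5 * kappa * np.log(m))
    return int(np.argmin(scores)), scores

# Simulate 2 sources impinging on an 8-element ULA, roughly 10 dB SNR.
rng = np.random.default_rng(3)
n, k, m = 8, 2, 1000
A = np.exp(-1j * np.pi * np.arange(n)[:, None] * np.sin(np.deg2rad([15.0, -40.0])))
S = rng.standard_normal((k, m)) + 1j * rng.standard_normal((k, m))
X = A @ S + 0.3 * (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m)))
eig = np.sort(np.linalg.eigvalsh((X @ X.conj().T) / m))[::-1]
k_hat, _ = mdl_enumerate(eig, m)
print(k_hat)   # 2
```

Note that for k too small, the inflated signal eigenvalues ruin the sphericity of the presumed noise subspace and the likelihood term explodes, while for k too large the κ penalty dominates; the minimum sits at the true source count.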
Therefore, the eigenvalues (including σ^2) introduce k+1 degrees of freedom. The first eigenvector has 2n−2 degrees of freedom (since its first nonzero element can be adjusted to unity), while the second, due to its orthogonality to the first eigenvector, has 2n−4 degrees of freedom. With the same argument, it can be shown that there are 2(n−i) free parameters in the i^th eigenvector; hence

κ = (k+1) + ∑_{i=1}^{k} 2(n−i) = k(2n−k) + 1

where the last integer 1 can be omitted since it is independent of k. The two-part MDL, despite its very low computational complexity, is among the most successful methods for source enumeration in array processing. Nonetheless, this method does not reach the best attainable performance for a finite number of measurements [142]. The new version of MDL, called one-part or refined MDL, has improved the performance for the case of finite measurements but has not yet been applied to the array processing problem [33].

Sparse sensor networks

Wireless sensor networks typically consist of a large number of sensor nodes, spatially distributed over a region of interest, that observe some physical environment, such as acoustic, seismic, and thermal fields, with applications in a wide range of areas such as health care, geographical monitoring, homeland security, and hazard detection. The way sensor networks are used in practical applications can be divided into two general categories:

(1) There exists a central node known as the fusion center (FC) that retrieves relevant field information from the sensor nodes; communication from the sensor nodes to the FC generally takes place over a power- and bandwidth-constrained wireless channel.

(2) Such a central node does not exist and the nodes take specific decisions based on the information they obtain and exchange among themselves. Issues such as distributed computing and processing are of high importance in such scenarios.
In general, there are three main tasks that should be implemented efficiently in a wireless sensor network: sensing, communication, and processing. The main challenge in the design of practical sensor networks is to find an efficient way of jointly performing these tasks, while using the minimum amount of system resources (computation, power, bandwidth) and satisfying the required system design parameters (such as distortion levels). For example, one such metric is the so-called energy-distortion tradeoff, which determines how much energy the sensor network consumes in extracting and delivering relevant information up to a given distortion level. Although many theoretical results are already available in the case of point-to-point links, in which separation between source and channel coding can be assumed, the problem of efficiently transmitting or sharing information among a vast number of distributed nodes remains a great challenge. This is due to the fact that well-developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems are still under development. However, recent results on distributed estimation or detection indicate that joint optimization through some form of source-channel matching and local node cooperation can result in significant system performance improvement.

How sparsity can be exploited in a sensor network

Sparsity appears in many applications for which sensor networks are deployed, e.g., localization of targets in a large region or estimation of physical phenomena such as temperature fields that are sparse under a suitable transformation. For example, in radar applications, under a far-field assumption, the observation system is linear and can be expressed as a matrix of steering vectors [148,149].
In general, sparsity can arise in a sensor network from two main perspectives:

(1) Sparsity of the node distribution in spatial terms

(2) Sparsity of the field to be estimated

Although nodes in a sensor network can be assumed to be regularly deployed in a given environment, such an assumption is not valid in many practical scenarios. Therefore, the non-uniform distribution of nodes can lead to some type of sparsity in the spatial domain that can be exploited to reduce the amount of sensing, processing, and/or communication. This issue is related to extensions of non-uniform sampling techniques to two-dimensional domains through proper interpolation and data recovery when samples are spatially sparse [34,150]. The second scenario that provides a proper basis for exploiting sparsity concepts arises when the field to be estimated is a sparse multi-dimensional signal. From this point of view, ideas such as those presented earlier in the context of compressed sensing (Section 3.2) provide the proper framework to address the sparsity in such fields.

Spatial sparsity and interpolation in sensor networks

Although general 2-D interpolation techniques are well known in various branches of statistics and signal processing, the main issue in a sensor network is finding a proper spatio-temporal interpolation such that communication and processing are also efficiently accomplished. While there is a wide range of interpolation schemes (polynomial, Fourier, and least squares [151]), many of these schemes are not directly applicable for spatial interpolation in sensor networks due to their communication complexity. Another characteristic of many sensor networks is the non-uniformity of the node distribution in the measurement field.
Although non-uniformity has been dealt with extensively in contexts such as signal processing, geo-spatial data processing, and computational geometry [1], the combination of irregular sensor data sampling and intra-network processing is a main challenge in sensor networks. For example, reference [152] addresses the issue of spatio-temporal non-uniformity in sensor networks and how it impacts performance aspects of a sensor network such as compression efficiency and routing overhead. In order to reduce the impact of non-uniformity, the authors in [152] propose using a combination of spatial data interpolation and temporal signal segmentation. A simple interpolation wavelet transform for irregular sampling, which is an extension of the 2-D irregular grid transform to 3-D spatio-temporal transform grids, is also proposed in [153]. Such a multi-scale transform extends the approach in [154] and removes the dependence on building a distributed mesh within the network. It should be noted that although wavelet compression allows the network to trade reconstruction quality for communication energy and bandwidth usage, such energy savings are naturally offset by the overhead cost of computing the wavelet coefficients. Distributed wavelet processing within sensor networks is yet another approach to reduce communication energy and wireless bandwidth usage. Use of such distributed processing makes it possible to trade long-haul transmission of raw data to the FC for less costly local communication and processing among neighboring nodes [153]. In addition, local collaboration among nodes decorrelates measurements and results in a sparser data set.

Compressive sensing in sensor networks

Most natural phenomena in SNs are compressible through representation in a natural basis [86]. Some examples of such applications are imaging in a scattering medium [148], MIMO radar [149], and geo-exploration via underground seismic data.
In such cases, it is possible to construct a highly compressed version of a given field in a decentralized fashion. If the correlations between data at different nodes are known a priori, it is possible to use schemes that have very favorable power-distortion-latency tradeoffs [143,155,156]. In such cases, distributed source coding techniques, such as Slepian-Wolf coding, can be used to design compression schemes without collaboration between nodes (see [155] and the references therein). Since prior knowledge of such correlations is not available in many applications, collaborative intra-network processing and compression are used to determine unknown correlations and dependencies through information exchange between network nodes. In this regard, the concept of compressive wireless sensing has been introduced in [147] for energy-efficient estimation of sensor data at the FC, based on ideas from wireless communications [143,145,156-158] and compressive sampling theory [29,75,159]. The main objective in such an approach is to combine processing and communications in a single distributed operation [160-162].

Methods to obtain the required sparsity in a SN

While transform-based compression is well developed in traditional signal and image processing domains, the understanding of sparse transforms for networked data is not as mature [163]. One approach is to associate a graph with a given network, where the vertices of the graph represent the nodes of the network, and edges between vertices represent relationships among data at adjacent nodes. The structure of the connectivity is the key to obtaining effective sparse transformations for networked data [163]. For example, in the case of uniformly distributed nodes, tools such as the DFT or DCT can be adopted to exploit the sparsity in the frequency domain. In more general settings, wavelet techniques can be extended to handle the irregular distribution of sampling locations [153].
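As a toy illustration of the uniformly-distributed case (our own example; the field and node counts are arbitrary), a smooth field sampled on a uniform grid of nodes concentrates in very few DFT coefficients:

```python
import numpy as np

# A field made of two spatial tones, sampled at 128 uniformly spaced nodes;
# the node index doubles as the position on the uniform grid.
n_nodes = 128
pos = np.arange(n_nodes)
field = (2.0 * np.cos(2 * np.pi * 5 * pos / n_nodes)
         + 1.0 * np.sin(2 * np.pi * 12 * pos / n_nodes))

coeffs = np.fft.rfft(field) / n_nodes
significant = np.sum(np.abs(coeffs) > 1e-6)
print(significant)   # only the two tones survive: 2 nonzero coefficients
```

The 128 raw node readings compress to two frequency-domain coefficients, which is exactly the kind of sparsity a CS or transform-coding scheme in the network can exploit.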
There are also scenarios in which standard signal transforms may not be directly applicable. For example, network monitoring applications rely on the analysis of communication traffic levels at the network nodes, where the network topology affects the nature of node relationships in complex ways. Graph wavelets [164] and diffusion wavelets [165] are two classes of transforms that have been proposed to address such complexities. In the former case, the wavelet coefficients are obtained by computing digital differences of the data at different scales: the coefficients at the first scale are differences between neighboring data points, and those at subsequent spatial scales are computed by first aggregating data in neighborhoods and then computing differences between neighboring aggregations [164]. In the latter scheme, diffusion wavelets are based on the construction of an orthonormal basis for functions supported on a graph; a custom-designed basis is obtained by analyzing the eigenvectors of a diffusion matrix derived from the graph adjacency matrix. The resulting basis vectors are generally localized to neighborhoods of varying size and may also lead to sparse representations of data on a graph [165]. One example of such an approach is where the node data correspond to traffic rates of routers in a computer network.

Implementation of CS in a wireless SN

Two main approaches to implement random projections in a SN are discussed in the literature [163]. In the first approach, the CS projections are simultaneously calculated through superposition of radio waves and communicated using amplitude-modulated coherent transmissions of randomly weighted values directly from the nodes in the network to the FC (Figure 22).
This scheme, introduced in [147,157] and further refined in [166], is based on the notion of so-called matched source-channel communication [156,157]. Although the need for complex routing, intra-network communications, and processing is alleviated, local phase synchronization among nodes is an issue to be addressed properly in this approach.

Figure 22. Computation of CS projections through superposition of radio waves of randomly weighted values directly from the nodes in the network to the FC (from [163]).

In the second approach, the projections can be computed and delivered to every subset of nodes in the network using gossip/consensus techniques, or be delivered to a single point using clustering and aggregation. This approach is typically used for networked data storage and retrieval applications. In this method, computation and distribution of each CS sample is accomplished through two simple steps [163]. In the first step, each sensor multiplies its data with the corresponding element of the compressing matrix. Then, in the second step, the resulting local terms are simultaneously aggregated and distributed across the network using randomized gossip [167], which is a simple iterative decentralized algorithm for computing linear functions. Because each node only exchanges information with its immediate neighbors in the network, gossip algorithms are more robust to failures or changes in the network topology and cannot be easily compromised by eliminating a single server or fusion center [168]. Finally, it should be noted that in addition to the encoding process, the overall system performance is significantly affected by the decoding process [44,88,169]; this study and its extensions to sparse SNs remain challenging tasks.

Sensing capacity

Despite the widespread development of SN ideas in recent years, understanding of the fundamental performance limits of sensing and communication between sensors is still under development.
One of the issues that has recently attracted attention in the theoretical analysis of sensor networks is the concept of sensing capacity. The sensing capacity was initially introduced for discrete alphabets in applications such as target detection [170] and later extended in [14,171,172] to the continuous case. The questions in this area are related to the problem of sampling of sparse signals [29,76,159] and sampling with finite rate of innovation [3,103]. In the context of CS, sensing capacity provides bounds on the maximum signal dimension or complexity per sensor measurement that can be recovered to a pre-defined degree of accuracy. Alternatively, it can be interpreted as the minimum number of sensors necessary to monitor a given region to a desired degree of fidelity based on noisy sensor measurements. The inverse of sensing capacity is the compression rate, i.e., the ratio of the number of measurements to the number of signal dimensions, which characterizes the minimum rate to which the source can be compressed. As shown in [14], sensing capacity is a function of SNR, the inherent dimensionality of the information space, sensing diversity, and the desired distortion level. Another issue to be noted with respect to sensing capacity is the inherent difference between the sensor network and CS scenarios in the way in which the SNR is handled [14,172]. In sensor networks composed of many sensors, a fixed SNR can be imposed for each individual sensor. Thus, the sensed SNR per location is spread across the field of view, leading to a row-wise normalization of the observation matrix. On the other hand, in CS, the vector-valued observation corresponding to each signal component is normalized column-wise. This difference has led to different regimes of compression rate [172]. In a SN, in contrast to the CS setting, sensing capacity is generally small and, correspondingly, the number of sensors required does not scale linearly with the target sparsity.
Specifically, the number of measurements is generally proportional to the signal dimension and is only weakly dependent on the sparsity of the target density. This issue has raised questions about compressive gains in power-limited SN applications based on sparsity of the underlying source domain.

Sparse component analysis: BSS and SDR

Recovery of the original source signals from their mixtures, without having a priori information about the sources and the way they are mixed, is called blind source separation (BSS). This process is impossible if no assumption about the sources can be made. Such an assumption on the sources may be uncorrelatedness, statistical independence, lack of mutual information, or disjointness in some space [18,19,49]. The signal mixtures are often decomposed into their constituent principal components or independent components, or are separated based on their disjoint characteristics described in a suitable domain. In the latter case, the original sources should be sparse in that domain. Independent component analysis (ICA) is often used for separation of the sources in the former case, whereas SCA is employed for the latter case. These two mathematical tools are described in the following sections, followed by some results and illustrations of their applications.

Independent component analysis (ICA)

The main assumption in ICA is the statistical independence of the constituent sources. Based on this assumption, ICA can play a crucial role in the separation and denoising of signals (BSS). There has been recent research interest in the field of BSS due to its practicality in a wide range of problems. For example, BSS of acoustic signals measured in a room is often referred to as the Cocktail Party problem, which means separation of individual sounds from a number of recordings in an echoic and noisy environment.
Figure 23 illustrates the BSS concept, wherein the mixing block represents the multipath propagation model between the original sources and the microphone measurements.

Figure 23. The BSS concept; the unobservable sources s[1][i],…,s[n][i] are mixed and corrupted by additive zero-mean noise to generate the observations x[1][i],…,x[m][i]. The target of BSS is to estimate an unmixing system to recover the original sources.

Generally, BSS algorithms make assumptions about the environment in order to make the problem more tractable. There are typically three assumptions about the mixing medium. The simplest but most widely used case is the instantaneous case, where the source signals arrive at the sensors at the same time. This has been considered for separation of biological signals such as the EEG, where the signals have narrow bandwidths and the sampling frequency is normally low [173]. The generative model for BSS in this case can be formulated as

x[i] = H s[i] + ν[i]

where s[i], x[i], and ν[i] denote, respectively, the vector of source signals of size n×1, the observed signals of size m×1, and the noise signals of size m×1, and H is the mixing matrix of size m×n. Generally, the mixing process can be nonlinear (due to inhomogeneity of the environment and the fact that the medium can change with respect to the source signal variations; e.g., stronger vibration of a drum as a medium, with louder sound). However, in the instantaneous linear case, where the above problems can be avoided or ignored, the separation is performed by means of a separating matrix W of size n×m, which uses only the information contained in x[i] to reconstruct the original source signals (or the independent components) as

y[i] = W x[i]

where y[i] is the estimate for the source signal s[i]. The early approaches in instantaneous BSS started from the work by Herault and Jutten [174] in 1986. In their approach, they considered non-Gaussian sources with an equal number of independent sources and mixtures.
They proposed a solution based on a recurrent artificial neural network for separation of the sources. In the cases where the number of sources is known, any ambiguity caused by false estimation of the number of sources can be avoided. If the number of sources is unknown, a criterion may be established to estimate the number of sources beforehand. In the context of model identification, this is referred to as model order selection, and methods such as the final prediction error (FPE), AIC, residual variance (RV), MDL, and Hannan-Quinn (HNQ) methods [175] may be considered to solve this problem.

In acoustic applications, however, there are usually time lags between the arrival times of the signals at the sensors. The signals also may arrive through multiple paths. This type of mixing model is called a convolutive model [176]. The convolutive mixing model can be classified into two subcategories: anechoic and echoic. In both cases, the vector representations of the mixing and separating processes are modified as x[i] = H[i] ∗ s[i] + ν[i] and y[i] = W[i] ∗ x[i], respectively, where ∗ denotes the convolution operation. In an anechoic model, the expansion of the mixing process may be given as

x[r][i] = ∑_{j=1}^{n} h[r,j] s[j][i − δ[r,j]] + ν[r][i]

where the attenuation h[r,j] and delay δ[r,j] of source j to sensor r are determined by the physical position of the source relative to the sensors. The unmixing process to estimate the sources is then given as

y[j][i] = ∑_{r=1}^{m} w[j,r] x[r][i − δ[j,r]]

where the w[j,r] are the elements of W. In an echoic mixing environment, it is expected that the signals from the same sources reach the sensors through multiple paths. Therefore, the expansion of the mixing model changes to

x[r][i] = ∑_{l=1}^{L} ∑_{j=1}^{n} h^l[r,j] s[j][i − δ^l[r,j]] + ν[r][i]

where L denotes the maximum number of paths for the sources, ν[r][i] is the accumulated noise at sensor r, and (·)^l refers to the l^th path. The unmixing process is formulated similarly to the anechoic one.
For a known number of sources, an accurate result may be expected if the number of paths is known; otherwise, the overall number of observations in an echoic case is infinite. The aim of BSS using ICA is to estimate an unmixing matrix W such that Y = WX best approximates the independent sources, where Y and X are, respectively, the matrices with columns y[i] and x[i]. Thus, the ICA separation algorithms are subject to permutation and scaling ambiguities in the output components, i.e., W = PDH^−1, where P and D are the permutation and scaling (diagonal) matrices, respectively. Permutation of the outputs is troublesome in places where either the separated segments of the signals are to be joined together or when a frequency-domain BSS is performed. Mutual information is a measure of independence, and maximizing the non-Gaussianity of the source signals is equivalent to minimizing the mutual information between them [177].

In those cases where the number of sources is greater than the number of mixtures (underdetermined systems), the above BSS schemes cannot be applied simply because the mixing matrix is not invertible, and generally the original sources cannot be extracted. However, when the signals are sparse, methods based on the disjointness of the sources in some domain may be utilized. Separation of the mixtures of sparse signals is potentially possible in the situation where, at each sample instant, the number of nonzero sources is not more than a fraction of the number of sensors (see Table 1, row and column 6). The mixtures of sparse signals can also be instantaneous or convolutive.

Sparse component analysis (SCA)

While the independence assumption for the sources is widely exploited in the design of BSS algorithms, the possible disjointness of the sources in some domain has not been considered. In SCA, this property is directly employed.
Blind source separation by sparse decomposition has been addressed by Zibulevsky and Pearlmutter [178] for both over-determined/exactly-determined and underdetermined systems using the maximum a posteriori approach. One way of formulating SCA is by representing the sources using a proper signal dictionary:

s[r][i] = ∑_{l=1}^{n} c[r,l] ϕ[l][i]

where r = 1, …, m and n is the number of basis functions in the dictionary. The functions ϕ[l][i] are called atoms or elements of the dictionary. These atoms do not have to be linearly independent and may form an overcomplete dictionary. The sparsity property requires that only a small number of the coefficients c[r,l] differ significantly from zero. Based on this definition, the mixing system is modeled as x[i] = A s[i] + ν[i], where ν[i] is an m×1 vector. A and C can be determined by optimization of a cost function based on an exponential distribution for c[i,j] [178]. In places where the sources are sparse and, at each time instant, at most one of the sources has a significant nonzero value, the columns of the mixing matrix may be calculated individually, which makes the solution of the underdetermined case possible. The SCA problem can be stated as a clustering problem, since the lines in the scatter plot can be separated based on their directionalities by means of clustering. A number of works on this method have been reported [18,179,180]. In the work by Li et al. [180], the separation has been performed in two different stages. First, the unknown mixing matrix is estimated using the k-means clustering method. Then, the source matrix is estimated using a standard linear programming algorithm. The line orientation of a data set may be thought of as the direction of its greatest variance. One way to find it is to perform an eigenvector decomposition of the covariance matrix of the data; the resultant principal eigenvector, i.e., the eigenvector with the largest eigenvalue, indicates the direction of the data, since it has the maximum variance.
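The clustering view can be made concrete. The sketch below is our own illustration (the function name and the greedy initialization are assumptions, not from the cited works): sources with exactly one active source per instant are generated, so the scatter of x[i] lies along the columns of A, and those directions are recovered by a k-means-style loop whose centroid update is the principal eigenvector of each cluster's covariance, as described above.

```python
import numpy as np

def estimate_directions(X, k, iters=20):
    """Cluster data points by line orientation; each centroid is the
    principal eigenvector of its cluster's covariance matrix."""
    U = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    C = [U[:, 0]]                        # greedy init: pick points on distinct lines
    for _ in range(k - 1):
        score = np.max(np.abs(np.stack(C) @ U), axis=0)
        C.append(U[:, np.argmin(score)])
    C = np.stack(C, axis=1)
    for _ in range(iters):
        labels = np.argmax(np.abs(C.T @ U), axis=0)   # sign-invariant assignment
        for j in range(k):
            pts = U[:, labels == j]
            if pts.shape[1]:
                _, V = np.linalg.eigh(pts @ pts.T)    # eigenvalues ascending
                C[:, j] = V[:, -1]                    # principal eigenvector
    return C

rng = np.random.default_rng(2)
n_src, n_samp = 3, 600
S = np.zeros((n_src, n_samp))
# at each instant exactly one source is active (maximally sparse sources)
S[rng.integers(0, n_src, n_samp), np.arange(n_samp)] = rng.laplace(size=n_samp)
A = np.array([[1.0, 0.0, 0.6],
              [0.0, 1.0, 0.8]])          # unit-norm columns: 2 sensors, 3 sources
X = A @ S                                # scatter plot lies on 3 lines
A_hat = estimate_directions(X, 3)        # columns match those of A up to sign/order
```

Because every data point lies exactly on one of the three lines, each cluster covariance is rank one and its principal eigenvector reproduces the corresponding column of A exactly (up to sign).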
In [179], the GAP statistic, a metric that measures the distance between the total variance and the cluster variances, has been used to estimate the number of sources, followed by a method similar to Li's algorithm explained above. In line with this approach, Bofill and Zibulevsky [15] developed a potential-function method for estimating the mixing matrix, followed by ℓ[1]-norm decomposition for the source estimation. Local maxima of the potential function correspond to the estimated directions of the basis vectors. After the mixing matrix is identified, the sources have to be estimated. Even when A is known, the solution is not unique, so a solution is found for which the ℓ[1]-norm is minimized. Therefore, subject to x[i] = A s[i], the sum ∑_j |s[j][i]| is minimized using linear programming. Geometrically, for a given feasible solution, each source component is a segment of length |s[j]| in the direction of the corresponding a[j] and, by concatenation, their sum defines a path from the origin to x[i]. Minimizing the ℓ[1]-norm therefore amounts to finding the shortest path to x[i] over all feasible solutions, where n is the dimension of the space of the independent basis vectors [18]. Figure 24 shows the scatter plot and the shortest path from the origin to the data point x[i]. Figure 24. Measurement points for data structures consisting of multiple lower dimensional subspaces. (a) the scatter plot and (b) the shortest path from the origin to the data point, x[i], extracted from [15]. There are many cases in which the sources are disjoint in domains other than the time domain, or in which they can be represented as a sum of the members of a dictionary consisting, for example, of wavelets or wavelet packets. In these cases the SCA can be performed more efficiently in those domains. Such methods often include transformation to the time-frequency domain followed by binary masking [181], or a BSS followed by binary masking [176].
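The shortest-path/ℓ1 view can be reproduced with a small linear program. This sketch is our own illustration (the matrices are made up); it uses SciPy's `linprog` with the standard split s = u − v, u, v ≥ 0, and recovers a 1-sparse source from two mixtures of three sources, because using the matching column alone has ℓ1 cost 1 while the two-column detour costs about 1.414.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, x):
    """min ||s||_1  s.t.  A s = x, as an LP via the split s = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v) = ||s||_1
    A_eq = np.hstack([A, -A])             # A u - A v = x
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

A = np.array([[1.0, 0.0, np.sqrt(0.5)],
              [0.0, 1.0, np.sqrt(0.5)]])  # unit-norm columns, m = 2 < n = 3
s_true = np.array([0.0, 0.0, 1.0])        # sparse source vector
x = A @ s_true                            # observed mixture
s_hat = l1_recover(A, x)                  # shortest path: picks column 3 alone
```

The geometric picture above is literal here: the feasible decompositions of x are paths built from the columns a[j], and the LP returns the shortest one.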
One such approach, called the degenerate unmixing estimation technique (DUET) [181], transforms the anechoic convolutive observations into the time-frequency domain using a short-time Fourier transform, and the relative attenuation and delay values between the two observations are calculated from the ratio of corresponding time-frequency points. The regions of significant amplitudes (atoms) are then considered to be the source components in the time-frequency domain. In this method only two mixtures are considered and, as a major limitation of the method, only one source is assumed active at each time instant. For instantaneous separation of sparse sources, the common approach used by most researchers is to attempt to maximize the sparsity of the extracted signals at the output of the separator. The columns of the mixing matrix A assign each observed data point to only one source based on some measure of proximity to those columns [182], i.e., at each instant only one source is considered active, so in the ideal case a[j,r] = 0 for r ≠ j. Minimization of the ℓ[1]-norm is one of the most logical methods for estimation of the sources as long as the signals can be considered sparse. ℓ[1]-norm minimization is a piecewise linear operation that partially assigns the energy of x[i] to the m columns of A around x[i] in space. The remaining n−m columns are assigned zero coefficients; the ℓ[1]-norm minimization can therefore be stated as minimizing ∑_j |s[j][i]| subject to A s[i] = x[i]. A detailed discussion of signal recovery using ℓ[1]-norm minimization is presented by Takigawa et al. [183] and described below. As mentioned above, it is important to choose a domain that sparsely represents the signals. On the other hand, in the method developed by Pedersen et al. [176], as applied to stereo signals, the binary masks are estimated after BSS of the mixtures and then applied to the microphone signals.
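A drastically simplified, delay-free illustration of the attenuation-ratio idea behind DUET follows. This is our own toy example, not the algorithm of [181]: one FFT frame instead of an STFT, two tones that are exactly disjoint in frequency, and hand-picked mixing gains, so the ratio of the two observations clusters at the two attenuation values and a binary mask separates the sources perfectly.

```python
import numpy as np

# two sources, disjoint in frequency: integer-bin tones
N = 256
t = np.arange(N)
s1 = np.cos(2 * np.pi * 10 * t / N)
s2 = np.cos(2 * np.pi * 40 * t / N)

# instantaneous (delay-free) two-sensor mixtures with distinct attenuation ratios
x1 = s1 + s2
x2 = 2.0 * s1 + 0.5 * s2

X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
ratio = np.abs(X2) / (np.abs(X1) + 1e-12)   # clusters near 2.0 (s1) and 0.5 (s2)

# binary masks: assign each significant frequency bin to the nearer cluster
active = np.abs(X1) > 1e-6
mask1 = active & (ratio > 1.0)
mask2 = active & (ratio <= 1.0)
s1_hat = np.fft.irfft(X1 * mask1, n=N)
s2_hat = np.fft.irfft(X1 * mask2, n=N)
```

Real DUET additionally estimates relative delays from the phase of the ratio and works frame-by-frame on an STFT, but the masking step is the same in spirit.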
The same technique has been used for convolutive sparse mixtures after the signals are transformed to the frequency domain. In another approach [184], the effect of outlier noise has been reduced using median filtering; then hybrid fast ICA filtering and ℓ[1]-norm minimization have been used for separation of temporomandibular joint sounds. It has been shown that for such sources this method outperforms both DUET and Li's algorithms. The authors of [185] have recently extended the DUET algorithm to separation of more than two sources in an echoic mixing scenario in the time-frequency domain. In a very recent approach, it has been assumed that brain signal sources are disjoint in the space-time-frequency domain. Therefore, clustering the observation points in the space-time-frequency domain can be effectively used for separation of brain sources [186]. As can be seen, BSS generally exploits the independence of the source signals, whereas SCA benefits from the disjointness property of the source signals in some domain. While the BSS algorithms mostly rely on ICA with statistical properties of the signals, SCA uses their geometrical and behavioral properties. Therefore, in SCA, either a clustering approach or a masking procedure can result in estimation of the mixing matrix. Often, an ℓ[1]-norm is then used to recover the source signals. Generally, in places where the source signals are sparse, the SCA methods result in more accurate estimation of the signals, with fewer ambiguities in the estimation.

SCA algorithms

There are three main steps in the solution of an SCA problem, as shown in Table 16 [187]. The first step of Table 16 shows a linear model for the SCA problem, the second step consists of estimating the mixing matrix A using sparsity information, and the third step is to estimate the sparse source representation based on the estimate of A [17]. A brief review of the major approaches suggested for the third step was given in Section 2.
Sparse dictionary representation (SDR) and signal modeling

A signal may be sparse in a given basis but not sparse in a different basis. For example, an image may be sparse in a wavelet basis (i.e., most of the wavelet coefficients are small) even though the image itself may not be sparse (i.e., many of the gray values of the image are relatively large). Thus, given a class of signals F, an important problem is to find a basis or a frame D (if it exists) in which every signal in F can be represented sparsely; more specifically, such that every data vector f ∈ F can be represented by at most k ≪ n linear combinations of elements of D. The dictionary design problem has been addressed in [18-20,40,75,190]. A related problem is the signal modeling problem, in which the class F is to be modeled by a union of subspaces ∪_{i=1}^{l} V[i], where each V[i] is a subspace of ℝ^n with dimension of V[i] ≤ k, where k ≪ n [49]. If the subspaces V[i] are known, then it is possible to pick a basis for each V[i] and construct a dictionary in which every signal of F has sparsity k (or is almost k-sparse). The model can be found from an observed set of data F = {f[1], …, f[N]} by solving (if possible) the following non-linear least squares problem: find subspaces V[1], …, V[l] of ℝ^n that minimize the expression

e(F; V[1], …, V[l]) = ∑_{i} min_{1≤j≤l} d^2(f[i], V[j])     (83)

over all possible choices of l subspaces with dimension of V[i] ≤ k < n. Here d denotes the Euclidean distance in ℝ^n and k is an integer with 1 ≤ k < n. Note that e is calculated as follows: for each f[i] ∈ F and fixed V[1], …, V[l], the subspace closest to f[i] is found and the distance d^2(f[i], V[j]) is computed. This process is repeated for all f[i] ∈ F and the squares of the distances are added together to give e. The optimal model is then obtained as the union ∪_j V[j]^o, where the subspaces V[j]^o minimize the expression (83). When l = 1 this problem reduces to the classical least squares problem. However, when l > 1 the set ∪_j V[j] is a nonlinear set and the problem is fully non-linear (see Figure 25).
A more general nonlinear least squares problem has been studied for finite- and infinite-dimensional Hilbert spaces [49]. In that general setting, the existence of solutions is proved and a meta-algorithm for searching for the solution is described. Figure 25. Objective function. (a) e = d^2(f[1], V[2]) + d^2(f[2], V[1]) + d^2(f[3], V[1]) and (b) e = d^2(f[1], V[2]) + d^2(f[3], V[2]) + d^2(f[2], V[1]). The configuration of V[1], V[2] in (a) creates the partition P[1] = {f[1]} and P[2] = {f[2], f[3]}, while the configuration in (b) creates the partition P[1] = {f[1], f[3]} and P[2] = {f[2]}. For the special finite-dimensional case of ℝ^n in (83), the search algorithm is an iterative algorithm that alternates between data partitioning and the optimization of a simpler least squares problem. This algorithm, which is equivalent to the k-means algorithm, is summarized in Table 17. In some recent attempts, sparse representation and the compressive sensing concept have been extended to multichannel source separation [191-194]. In [191,192], separation of sparse sources with different morphologies has been presented by developing a multichannel morphological component analysis approach. In this scheme, the signals are considered to be combinations of features from different dictionaries. Therefore, different dictionaries are assumed for different sources. In [193], inversion of a random field from pointwise measurements collected by a sensor network is presented. In that work, it is assumed that the field has a sparse representation in a known basis. To illustrate the approach, the inversion of an acoustic field created by the superposition of a discrete number of propagating noisy acoustic sources is considered. The method combines compressed sensing (sparse reconstruction by ℓ[1]-constrained optimization) with distributed average consensus (mixing the pointwise sensor measurements by local communication among the sensors).
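The alternating partition/refit search summarized above can be sketched in a few lines. This is our own minimal implementation for points lying on two lines in ℝ^3 (the greedy seeding is an assumption added to make the toy example deterministic; it is not prescribed by [49]): the partition step assigns each point to the nearest subspace by projection residual, and the refit step replaces each subspace by the top singular vectors of its cluster, i.e., the least squares best-fit subspace.

```python
import numpy as np

def k_subspaces(F, l, k, iters=20):
    """Alternate point-to-subspace assignment with SVD refit (k-means-like)."""
    resid = lambda B: np.sum((F - B @ (B.T @ F)) ** 2, axis=0)
    # greedy seeding: start each new subspace from the worst-fit point so far
    B = [np.linalg.qr(F[:, [0]])[0]]
    for _ in range(l - 1):
        worst = np.argmax(np.min(np.stack([resid(Bj) for Bj in B]), axis=0))
        B.append(np.linalg.qr(F[:, [worst]])[0])
    for _ in range(iters):
        dist = np.stack([resid(Bj) for Bj in B])   # squared projection residuals
        labels = np.argmin(dist, axis=0)           # partition step
        for j in range(l):
            pts = F[:, labels == j]
            if pts.shape[1]:
                U = np.linalg.svd(pts, full_matrices=False)[0]
                B[j] = U[:, :k]                    # least squares refit step
    return B, labels

rng = np.random.default_rng(3)
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
F = np.hstack([np.outer(v1, rng.standard_normal(50)),
               np.outer(v2, rng.standard_normal(50))])  # two 1-D subspaces in R^3
B, labels = k_subspaces(F, l=2, k=1)
```

With l = 1 the loop degenerates to a single SVD, i.e., the classical least squares fit, matching the remark about (83) above.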
Reference [194] addresses source separation from a linear mixture under assumptions of source sparsity and orthogonality of the mixing matrix. A two-stage separation process is proposed. In the first stage, a sparsity pattern of the sources is recovered by exploiting the orthogonality prior. In the second stage, the support is used to reformulate the recovery task as an optimization problem. A solution based on alternating minimization is then suggested for solving the above problems.

Multipath channel estimation

In wireless systems, channel estimation is required for the compensation of channel distortions. The transmitted signal reflects off different objects and arrives at the receiver from multiple paths. This phenomenon causes the received signal to be a mixture of reflected and scattered versions of the transmitted signal. The mobility of the transmitter, receiver, and scattering objects results in rapid changes in the channel response, and thus the channel estimation process becomes more complicated. Due to the sparse distribution of scattering objects, a multipath channel is sparse in the time domain, as shown in Figure 26. By taking sparsity into consideration, channel estimation can be simplified and/or made more accurate. The sparse time-varying multipath channel is modeled as

h(t, τ) = ∑_{l=1}^{k} α[l] δ(τ − τ[l])     (84)

where k is the number of taps, α[l] is the l^th complex path gain, and τ[l] is the corresponding path delay. At time t, the transfer function is given by

H(t, f) = ∑_{l=1}^{k} α[l] e^{−j2πf τ[l]}     (85)

Figure 26. The impulse response of two typical multipath channels. (a) Brazil-D and (b) TU6 channel profiles. The estimation of the multipath channel impulse response is very similar to the determination of the analog epochs and amplitudes of discontinuities for finite rate of innovation, as shown in (31).
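The tap model is easy to verify numerically. In this sketch (the tap delays and gains are made up), the frequency response computed from the transfer-function formula at the sub-carrier frequencies coincides with the DFT of the sparse impulse response: the channel is sparse in time but dense in frequency.

```python
import numpy as np

n = 64                                     # number of frequency samples
taps = {3: 1.0 + 0.0j, 11: 0.4 - 0.2j}     # path delay (in samples) -> complex gain

# sparse impulse response: only k = 2 of the 64 entries are nonzero
h = np.zeros(n, dtype=complex)
for tau, alpha in taps.items():
    h[tau] = alpha

# frequency response from the tap formula, sampled at f_i = i / n
H = sum(alpha * np.exp(-2j * np.pi * np.arange(n) * tau / n)
        for tau, alpha in taps.items())
# H is dense: every sub-carrier sees a mixture of all the taps,
# while h concentrates all the channel energy in 2 coefficients
```

This time-domain sparsity is exactly what the estimation methods of the following subsections exploit.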
Essentially, if a known train of impulses is transmitted and the received signal from the multipath channel is filtered and sampled (the information domain, as discussed in Section 3.3), the channel impulse response can be estimated from these samples using an annihilating filter (the Prony or ELP method) [27] defined with the z-transform and a pseudo-inverse matrix inversion, in principle.^m Once the channel impulse response is estimated, its effect is compensated; this process can be repeated according to the dynamics of the time-varying channel. A special case of multipath channel is an OFDM channel, which is widely used in ADSL, DAB, DVB, WLAN, WMAN, and WIMAX.^n OFDM is a digital multi-carrier transmission technique where a single data stream is transmitted over several sub-carrier frequencies to achieve robustness against multipath channels as well as spectral efficiency [195]. Channel estimation for OFDM is relatively simple: the time instances of the channel impulse response are now quantized, and instead of an annihilating filter defined in the z-transform, we can use the DFT and ELP of Section 4.1. Also, instead of a known train of impulses, some of the available sub-carriers in each transmitted symbol are assigned to predetermined patterns, which are usually called comb-type pilots. These pilot tones help the receiver to extract some of the DFT samples of the discrete time-varying channel (84) at the respective frequencies in each transmitted symbol. These characteristics make OFDM channel estimation similar to the unknown sparse signal recovery of Section 3.1.1 and the impulsive noise removal of Section 4.1.2. Because of these advantages, our main example and simulations are related to OFDM channel estimation.

OFDM channel estimation

For OFDM, the discrete version of the time-varying channel of (85) in the frequency domain becomes

H[r,i] = ∑_{l=1}^{k} α[l](r T[f]) e^{−j2π iΔf τ[l]}     (86)

where T[f] and n are the symbol length (including cyclic prefix) and the number of sub-carriers in each OFDM symbol, respectively.
Δf is the sub-carrier spacing, and T[s] is the sample interval. The above equation shows that, for the r^th OFDM symbol, H[r,i] is the DFT of h[r,l]. Two major methods are used in the equalization process [196]: (1) zero forcing and (2) minimum mean squared error (MMSE). In the zero forcing method, regardless of the noise variance, equalization is obtained by dividing the received OFDM symbol by the estimated channel frequency response; in the MMSE method, the approximation is chosen such that the MSE of the transmitted data vector is minimized, which introduces the noise variance into the equations.

Statement of the problem

The goal of the channel estimation process is to obtain the channel impulse response from the noisy values of the channel transfer function at the pilot positions. This is equivalent to solving the following equation for H:

Ĥ[i[p]] = F[i[p]] H + ν[i[p]]     (88)

where i[p] is an index vector denoting the pilot positions in the frequency spectrum, Ĥ[i[p]] is a vector containing the noisy values of the channel frequency spectrum at these pilot positions, and F[i[p]] denotes the matrix obtained by taking the rows of the DFT matrix pertaining to the pilot positions. ν[i[p]] is the additive noise on the pilot points in the frequency domain. Thus, the channel estimation problem is equivalent to finding the sparse vector H from the above set of equations for a given set of pilots. Various channel estimation methods [197] have been used, with the usual tradeoffs of optimality and complexity. The least squares (LS) [197], ML [198], MMSE [199-201], and linear minimum mean squared error (LMMSE) [198,199,202] techniques are among these methods. However, none of these techniques uses the inherent sparsity of the multipath channel H, and thus they are not as accurate.

Sparse OFDM channel estimation

In the following, we present two methods that utilize this sparsity to enhance the channel estimation process.
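The pilot equation can be set up directly. The following sketch uses our own toy numbers (16 pilots, 64 sub-carriers, a 3-tap channel): it builds the pilot rows of the DFT matrix and shows that the conventional minimum ℓ2-norm (pseudo-inverse) solution fits the pilots exactly but leaks energy over all coefficients instead of concentrating it in the three true taps, which is the leakage effect discussed below.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_pilots = 64, 16
pilot_idx = np.sort(rng.choice(n, n_pilots, replace=False))

# sparse channel impulse response: 3 nonzero taps
h = np.zeros(n, dtype=complex)
h[[2, 9, 23]] = [1.0, 0.6 - 0.3j, 0.3j]

Fdft = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
Fp = Fdft[pilot_idx]                       # rows of the DFT matrix at the pilots
noise = 0.01 * (rng.standard_normal(n_pilots)
                + 1j * rng.standard_normal(n_pilots))
Hp = Fp @ h + noise                        # noisy pilot observations

# minimum l2-norm solution: fits the pilots but spreads (leaks) the energy
h_ls = np.linalg.pinv(Fp) @ Hp
```

The underdetermined system has infinitely many solutions; the pseudo-inverse picks the least-energy one, which is generically dense, so a sparsity constraint is needed to recover the true 3-tap profile.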
CS-based channel estimation

The idea of using time-domain sparsity in OFDM channel estimation has been proposed in [203-205]. There are two main advantages in including the sparsity constraint of the channel impulse response in the estimation process: (1) Decrease in the MSE: by applying the sparsity constraint, the energy of the estimated channel impulse response is concentrated in a few coefficients, while in the conventional methods we usually observe a leakage of the energy to the neighboring coefficients of the nonzero taps. Thus, if the sparsity-based methods succeed in estimating the support of the channel impulse response, the MSE will be improved by prevention of the leakage effect. (2) Reduction in the overhead: the number of pilot sub-carriers is, in fact, the number of (noisy) samples that we obtain from the channel frequency response. Since the pilot sub-carriers do not convey any data, they are considered overhead imposed to enhance the estimation process. The theoretical results in [203] indicate that by means of sparsity-based methods, perfect estimation can be achieved with an overhead proportional to the number of non-zero channel taps (which is considerably less than that of the current standards). In the sequel, we present two iterative methods which exploit the inherent sparsity of the channel impulse response to improve the channel estimation task in OFDM systems.

Iterative method with adaptive thresholding (IMAT) for OFDM channel estimation [206]

Here we apply an iterative method similar to that of Section 4.2 to the channel estimation problem in (88). The main goal is to estimate H from the noisy pilot observations, given that H has a few non-zero coefficients.
To obtain an initial estimate H^(0), we use the Moore-Penrose pseudo-inverse of F[i[p]], which yields a solution with minimum ℓ[2]-norm:

H^(0) = F[i[p]]^† Ĥ[i[p]]     (89)

The non-zero coefficients of H are found through a set of iterations followed by adaptively decreasing thresholds:

H^(i+1)[k] = T_i ( H^(i)[k] + λ ( F[i[p]]^† (Ĥ[i[p]] − F[i[p]] H^(i)) )[k] ),    T_i: threshold at level β e^{−αi}

where λ and i are the relaxation parameter and the iteration number, respectively, k is the index of the channel impulse response, and F[i[p]]^† is defined in (89). The block diagram of the proposed channel estimation method is shown in Figure 27.

Modified IMAT (MIMAT) for OFDM channel estimation [23]

In this method, the spectrum of the channel is initially estimated using a simple interpolation method, such as linear interpolation between pilot sub-carriers. This initial estimate is further improved in a series of iterations between the time (sparse) and frequency (information) domains to find the sparsest channel impulse response by using an adaptive thresholding scheme; in each iteration, after finding the locations of the taps (locations with previously estimated amplitudes higher than the threshold), their respective amplitudes are again found using the MMSE criterion. In each iteration, due to thresholding, some of the false taps that are noise samples with amplitudes above the threshold are discarded. Thus, the new iteration starts with a lower number of false taps. Moreover, because of the MMSE estimator, the valid taps approach their actual values in each new iteration. In the last iteration, the actual taps are detected and the MMSE estimator gives their respective values. This method is similar to the RDE and IDE methods discussed in Sections 2.6 and 4.1.2. The main advantage of this method is its robustness against side-band zero-padding.^o Table 18 summarizes the steps of the MIMAT algorithm. In the threshold of the MIMAT algorithm, α and β are constants which depend on the number of taps and the initial powers of the noise and channel impulses.
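A minimal sketch of an IMAT-style iteration follows. This is our own simplified implementation, not the exact algorithm of [206]: we fix λ = 1, use an exponentially decaying threshold, and add a final least squares refit on the detected support; the constants, sizes, and taps are made up, and the pilots are assumed noiseless.

```python
import numpy as np

def imat(Fp, Hp, n_iter=30, lam=1.0, alpha_dec=0.3):
    """Iterative estimation with adaptive thresholding (sketch).

    Repeats h <- threshold(h + lam * Fp_pinv @ (Hp - Fp @ h)) with an
    exponentially decaying threshold, then refits the detected taps by LS.
    """
    Fp_pinv = np.linalg.pinv(Fp)
    h = np.zeros(Fp.shape[1], dtype=complex)
    beta = np.abs(Fp_pinv @ Hp).max()             # initial threshold level
    for i in range(n_iter):
        h = h + lam * (Fp_pinv @ (Hp - Fp @ h))   # data-consistency step
        h[np.abs(h) < beta * np.exp(-alpha_dec * i)] = 0.0  # keep large taps
    support = np.flatnonzero(h)
    h_fit = np.zeros_like(h)
    if support.size:                              # LS refit on detected support
        h_fit[support] = np.linalg.lstsq(Fp[:, support], Hp, rcond=None)[0]
    return h_fit

rng = np.random.default_rng(5)
n, n_pilots = 256, 128
pilot_idx = np.sort(rng.choice(n, n_pilots, replace=False))
Fp = np.exp(-2j * np.pi * np.outer(pilot_idx, np.arange(n)) / n)  # pilot DFT rows
h_true = np.zeros(n, dtype=complex)
h_true[[5, 20, 71]] = [1.0, -0.9j, 0.8]          # 3-tap sparse channel
Hp = Fp @ h_true                                  # noiseless pilot observations
h_hat = imat(Fp, Hp)
```

The decaying threshold first admits only the strongest tap and then progressively smaller ones as the residual shrinks, which is the adaptive-thresholding behavior described above.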
In the first iteration, the threshold is a small number, and it is gradually increased with each iteration. Intuitively, this gradual increase of the threshold with the iteration number results in a gradual reduction of false taps (taps that are created due to noise). In each iteration, the tap values are obtained from

Ĥ[i[p]] = F_t H_t     (93)

where t denotes the index set of nonzero impulses obtained from the previous step and F_t is obtained from F[i[p]] by keeping the columns determined by t. The amplitudes of the nonzero impulses can be obtained from simple iterations, the pseudo-inverse, or the MMSE equation (94) of Table 18, which yields better results in additive noise environments. Table 18. MIMAT algorithm for OFDM channel estimation. The equation that has to be solved in (93) is usually over-determined, which helps the suppression of noise in each iteration step. Note that the solution presented in (94) represents a variant of the MMSE solution when the locations of the discrete impulses are known. If further statistical knowledge is available, this solution can be modified and a better estimate obtained; however, this makes the approximation process more complex. This algorithm does not need many iterations; the positions of the non-zero impulses are perfectly detected in three or four iterations for most types of channels.

Simulation results and discussions

For the OFDM simulations, the DVB-H standard was used with the 16-QAM constellation in the 2K mode (2^11 FFT size). The channel profile was the Brazil channel D. Figures 28, 29, 30, and 31 show the symbol error rate (SER) versus the carrier-to-noise ratio (CNR) after equalization using different sparse reconstruction methods such as orthogonal matching pursuit (OMP) [88], compressive sampling matching pursuit (CoSaMP) [41], gradient projection for sparse reconstruction (GPSR) [44], IMAT and MIMAT. The standard linear interpolation in the frequency domain using the noisy pilot samples is also simulated.
In these simulations, we have considered the effects of zero-padding and Doppler frequency on the SER of the estimation. As can be seen in Figures 28, 29, 30, and 31, the SER obtained from the sparsity-based algorithms reveals an almost perfect approximation of the hypothetical ideal channel (where the exact channel frequency response is used for equalization). Figure 28. SER vs. CNR for the ideal channel, linear interpolation, GPSR, OMP, and the IMAT for the Brazil channel at Fd=0 without the zero-padding effect. Figure 29. SER vs. CNR for the ideal channel, linear interpolation, GPSR, CoSaMP, and the IMAT for the Brazil channel at Fd=50 Hz without the zero-padding effect. Figure 30. SER vs. CNR for the ideal channel, linear interpolation, GPSR, CoSaMP and the MIMAT for the Brazil channel at Fd=0 including the zero-padding effect. Figure 31. SER vs. CNR for the ideal channel, linear interpolation, GPSR, OMP, and the MIMAT for the Brazil channel at Fd=50 Hz including the zero-padding effect.

Conclusion

A unified view of sparse signal processing has been presented in tutorial form. The sparsity in the key areas of sampling, coding, spectral estimation, array processing, component analysis, and channel estimation has been carefully exploited. Some form of uniform or random sampling has been shown to underpin the associated sparse processing methods used in each of these fields. The reconstruction methods used in each application domain have been introduced, and the interconnections among them have been highlighted. This development has revealed, for example, that the iterative methods developed for random sampling can be applied to real-field block and convolutional channel coding for impulsive noise (salt-and-pepper noise in the case of images) removal, SCA, and channel estimation for orthogonal frequency division multiplexing systems.
These iterative reconstruction methods have been shown to be naturally extendable to spectral estimation and sparse array processing, due to their similarity to channel coding in terms of mathematical models, with significant improvements. Conversely, the minimum description length method developed for spectral estimation and array processing has potential for application in other areas. The error locator polynomial method developed for channel coding has, moreover, been shown to be a discrete version of the annihilating filter used in sampling with a finite rate of innovation and of the Prony method in spectral estimation; the Pisarenko and MUSIC methods are further improvements of the Prony method when additive noise is also considered. Linkages with emergent areas such as compressive sensing and channel estimation have also been considered. In addition, it has been suggested that the linear programming methods developed for compressive sensing and SCA can be applied to other applications with a possible reduction of the sampling rate. As such, this tutorial has provided a route for new applications of sparse signal processing to emerge, which can potentially reduce computational complexity and improve performance quality. Other potential applications of sparsity are in the areas of sensor networks and sparse array design.

Endnotes

^a Sparse Signal Processing, Panel Session organized and chaired by F. Marvasti and lectured by Profs. E. Candes, R. G. Baraniuk, P. Marziliano, and Dr. A.
Cichocki, ICASSP 2008, Las Vegas, May 2008. ^b A list of acronyms is given in Table 2 at the end of this section. ^c The sequence of vectors {v[n]} is called a Riesz basis if there exist scalars 0 < A ≤ B < ∞ such that for every absolutely summable sequence of scalars {a[n]}, we have the following inequalities [207]: A ∑_n |a[n]|^2 ≤ ‖∑_n a[n] v[n]‖^2 ≤ B ∑_n |a[n]|^2. ^d Note that the Strang-Fix condition can also be used for an exponential polynomial, assuming the delta functions are non-uniformly periodic; in that case τ[r] in equation (35) is similar to E, the DFT of the impulses, as defined in Appendices 1 and 2. ^e We call the set of indices of consecutive zeros syndrome positions and denote it by Λ; this set includes the complex conjugate part of the Fourier domain. ^f The kernel of the SDFT is that of the DFT with the frequency index multiplied by q, where q is relatively prime w.r.t. n; this is equivalent to a sorted version of the DFT coefficients according to a mod rule, which is a kind of structured interleaving pattern. ^g This has some resemblance to soft decision iteration for turbo codes [109]. ^h Similar to array processing, to be discussed in the next section, we can resolve any closely spaced sources conditioned on (1) limited snapshots and infinite SNR, or (2) limited SNR and an infinite number of observations, while the spatial aperture of the array is kept finite. ^i Statistical efficiency of an estimator means that it is asymptotically unbiased and its variance goes to zero. ^j The array in ESPRIT is composed of sensor doublets with the same displacement. The parameters of the impinging signals can be estimated via a rotational invariance property of the signal subspace. The complexity and storage of ESPRIT are less than those of MUSIC; it is also less vulnerable to array imperfections.
ESPRIT, unlike MUSIC, results in an unbiased DOA estimate; nonetheless, MUSIC outperforms ESPRIT in general. ^k For a video introduction to these concepts, please refer to http://videolectures.net/icml08_grunwald_mdl. ^l A spherical subspace implies that the eigenvalues of the autocorrelation matrix are equal in that subspace. ^m Similar to the Pisarenko method for spectral estimation in Section 5.2. ^n These acronyms are defined in Table 2 at the end of Section 1. ^o In current OFDM standards, a number of subcarriers at both edges of the bandwidth are set to zero to ease the process of analog bandpass filtering.

ELP decoding for erasure channels [59]

For lost samples, the polynomial locator for the erasure samples is

H(z) = ∏_{m=1}^{k} (1 − z[i[m]] z^{−1}) = ∑_{t=0}^{k} h[t] z^{−t}     (95)

where z[i[m]] = e^{j(2π/n) i[m]}. The polynomial coefficients h[t] can be found from the product in (95); it is easier to find h[t] by obtaining the inverse FFT of H(z). Multiplying (96) by e[i[m]] z[i[m]]^r (where r is an integer) and summing over m, we get a double summation; since the inner summation is the DFT of the missing samples e[i[m]], we obtain

∑_{t=0}^{k} h[t] E[r + t] = 0     (98)

where E[.] is the DFT of e[i]. The received samples, d[i], can be thought of as the original over-sampled signal, x[i], minus the missing samples e[i[m]]. The error signal, e[i], is the difference between the corrupted and the original over-sampled signal, and hence is equal to the values of the missing samples for i = i[m] and is equal to zero otherwise. In the frequency domain, we have E[j] = X[j] − D[j]. Since X[j] = 0 for j ∈ Λ (see footnote ^e), then E[j] = −D[j] for j ∈ Λ. The remaining values of E[j] can be found from (98) by the following recursion:

E[r] = −(1/h[0]) ∑_{t=1}^{k} h[t] E[r + t]     (99)

where r ∉ Λ and the index additions are in mod(n).

ELP decoding for impulsive noise channels [31,104]

For all integer values of r such that r ∈ Λ and r + k ∈ Λ, we obtain a system of k equations with k + 1 unknowns (the h[t] coefficients). These equations yield a unique solution for the polynomial with the additional condition that the first nonzero h[t] is equal to one. After finding the coefficients, we need to determine the roots of the polynomial in (95).
Since the roots of H(z) are of the form e^{j(2π/n) i[m]}, the inverse DFT (IDFT) of the coefficients {h[t]} can be used to find them. Before performing the IDFT, we have to pad n − 1 − k zeros at the end of the sequence to obtain an n-point signal. We refer to the new signal (after the IDFT) as {H[i]}. Each zero in {H[i]} represents an error in r[i] at the same location.

Acknowledgments

We would like to sincerely thank our colleagues for their specific contributions to various sections of this article: especially Drs. S. Holm from the University of Oslo, who contributed a section on sparse array design; M. Nouri Moghadam, the director of the Newton Foundation; H. Saeedi from the University of Massachusetts; and K. Mehrany from the EE Department of Sharif University of Technology, who contributed to various sections of the original paper before revision. We are also thankful to M. Valiollahzadeh, who edited and contributed to the SCA section. We are especially indebted to Prof. B. Sankur from Bogazici University in Turkey for his careful review and comments. We are also thankful to the students of the Multimedia Lab and members of ACRI at Sharif University of Technology for their invaluable help and simulations. We are specifically indebted to A. Hosseini, A. Rashidinejad, R. Eghbali, A. Kazerouni, V. Montazerhodjat, S. Jafarzadeh, A. Salemi, M. Soltanalian, M. Sharif and H. Firouzi. The work of Akram Aldroubi was supported in part by grant NSF-DMS 0807464.

References

1. M Vetterli, P Marziliano, T Blu, Sampling signals with finite rate of innovation. IEEE Trans. Signal Process 50(6), 1417–1428 (2002) 2. F Marvasti, M Hung, MR Nakhai, The application of Walsh transform for forward error correction. in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP'99) (Phoenix, AZ, USA, 1999), pp. 2459–2462 3. SM Kay, SL Marple, Spectrum analysis—a modern perspective. Proc. IEEE 69(11), 1380–1419 (1981) 4. P Stoica, A Nehorai, MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Trans. ASSP 37(5), 720–741 (1989) 5.
P Stoica, A Nehorai, Performance study of conditional and unconditional direction-of-arrival estimation. IEEE Trans. ASSP 38(10), 1783–1795 (1990) 6. S Holm, A Austeng, K Iranpour, JF Hopperstad, Sparse sampling in array processing. in Nonuniform Sampling: Theory and Practice, ed. by F Marvasti (Springer, New York, 2001), pp. 787–833 7. S Aeron, M Zhao, V Saligrama, Fundamental tradeoffs between sparsity, sensing diversity, and sensing capacity. in Asilomar Conference on Signals, Systems and Computers (ACSSC'06) (Pacific Grove, CA, Oct–Nov 2006), pp. 295–299 8. P Bofill, M Zibulevsky, Underdetermined blind source separation using sparse representations. Signal Process. (Elsevier) 81(11), 2353–2362 (2001) 9. MA Girolami, JG Taylor, Self-Organising Neural Networks: Independent Component Analysis and Blind Source Separation (Springer, London, 1999) 10. P Georgiev, F Theis, A Cichocki, Sparse component analysis and blind source separation of underdetermined mixtures. IEEE Trans. Neural Netw 16(4), 992–996 (2005) 11. M Aharon, M Elad, AM Bruckstein, The K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process 54(11), 4311–4322 (2006) 12. M Aharon, M Elad, AM Bruckstein, On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them. Linear Algebra Appl 416(1), 48–67 (2006) 13. R Gribonval, M Nielsen, Sparse representations in unions of bases. IEEE Trans. Inf. Theory 49(12), 3320–3325 (2003) 14. P Fertl, G Matz, Efficient OFDM channel estimation in mobile environments based on irregular sampling. in Proc. Asilomar Conf. Sig., Sys. and Computers (Pacific Grove, CA, USA, May 2007) 15. O Ureten, N Serinken, Decision directed iterative equalization of OFDM symbols using non-uniform interpolation. in Proc. IEEE Vehicular Technol. Conf. (VTC) (Ottawa, Canada, Sep 2007), pp. 1–5 16.
M Soltanolkotabi, A Amini, F Marvasti, OFDM channel estimation based on adaptive thresholding for sparse signal detection. in Proc, ed. by . EUSIPCO’09 (Scotland: Glasgow, Aug 2009), pp. 17. JL Brown, Sampling extentions for multiband signals. IEEE Trans. Acoust. Speech Signal Process 33, 312–315 (1985) 18. OG Guleryuz, Nonlinear approximation based image recovery using adaptive sparse reconstructions and iterated denoising, parts I and II. IEEE Trans. Image Process 15(3), 539–571 (2006) 19. H Rauhut, On the impossibility of uniform sparse reconstruction using greedy methods. Sampl Theory Signal Image Process 7(2), 197–215 (2008) 20. T Blu, P Dragotti, M Vetterli, P Marziliano, P Coulot, Sparse sampling of signal innovations: Theory, algorithms, and performance bounds. IEEE Signal Process. Mag 25(2) (2008) 21. F Marvasti, Guest editor’s comments on special issue on nonuniform sampling. Sampl Theory Signal Image Process 7(2), 109–112 (2008) 22. E Candès, Compressive sampling. in Int, ed. by . Congress of Mathematics (Spain: Madrid, 200), pp. 1433–1452 23. S Zahedpour, S Feizi, A Amini, M Ferdosizadeh, F Marvasti, Impulsive noise cancellation based on soft decision and recursion. IEEE Trans. Instrum. Meas 58(8), 2780–2790 (2009) 24. S Feizi, S Zahedpour, M Soltanolkotabi, A Amini, F Marvasti, Salt and pepper noise removal for images (Russia: St. Petersburg, June 2008) 25. PD Grunwald, IJ Myung, MA Pitt, Advances in Minimum Description Length: Theory and Applications (Cambridge: MIT Press, 2005) 26. A Kumar, P Ishwar, K Ramchandran, On distributed sampling of bandlimited and non-bandlimited sensor fields. Acoustics, Speech, and Signal Processing (USA: Berkeley, CA, Aug 2004), pp. 925–928 27. P Fertl, G Matz, Multi-user channel estimation in OFDMA uplink systems based on irregular sampling and reduced pilot overhead. in Proc, ed. by . Acoustics, Speech and Sig. Proc., (ICASSP) (US: Honolulu, April 2007), pp. 297–300 28. 
F Marvasti, Spectral analysis of random sampling and error free recovery by an iterative method. Trans. IECE Jpn 69(2), 79–82 (1986) 29. RA DeVore, Deterministic constructions of compressed sensing matrices. J. Complex 23(4–6), 918–925 (2007) 30. SG Mallat, Z Zhang, Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process 41(12), 3397–3415 ((1993)) 31. R Gribonval, P Vandergheynst, On the exponential convergence of matching pursuit in quasi-incoherent dictionaries. IEEE Trans Inf. Theory 52(1), 255–261 (2006) 32. JA Tropp, Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004) 33. D Needell, JA Tropp, Cosamp: iterative signal recovery from incomplete and inaccurate samples. App. Comp. Harmon. Anal 26, 301–321 (2009) 34. DL Donoho, M Elad, V Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory 52(1), 6–18 ((2006)) 35. M Figueiredo, R Nowak, S Wright, Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process 1(4), 586–597 (2007) 36. I Daubechies, MD Friese, CD Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl Math 57, 1413–1457 (2004) 37. IF Gorodnitsky, BD Rao, Sparse signal reconstruction from limited data using FOCUSS, a re-weighted minimum norm algorithm. IEEE Trans. Signal Process 45(3), 600–616 (1997) 38. GH Mohimani, M Babaie-Zadeh, C Jutten, A fast approach for overcomplete sparse decomposition based on smoothed ℓ0-norm. IEEE Trans Signal Process 57(1), 289–301 (2009) 39. F Marvasti, AK Jain, Zero crossings,bandwidth compression and restoration of nonlinearly distorted band-limited signals. J. Opt Soc. Am 3(5), 651–654 (1986) 40. A Aldroubi, C Cabrelli, U Molter, Optimal non-linear models for sparsity and sampling. J Fourier Anal. Appl. 
(Special Issue on Compressed Sampling) 14(6), 48–67 (2008) 41. A Amini, F Marvasti, Convergence analysis of an iterative method for the reconstruction of multi-band signals from their uniform and periodic nonuniform samples. STSIP 7(2), 109–112 (2008) 42. PJSG Ferreira, Iterative and non-iterative recovery of missing samples for 1-D band-limited signals. in Nonuniform Sampling: Theory and Practice, ed. by Marvasti F (Boston: Springer, 2001), pp. 43. PJSG Ferreira, The stability of a procedure for the recovery of lost samples in band-limited signals. IEEE Trans. Signal Process 40(3), 195–205 (1994) 44. F Marvasti, A Unified Approach to Zero-crossings and Nonuniform Sampling of Single and Multi-dimensional Signals and Systems (Oak Park: Nonuniform Publication, 1987) 45. AI Zayed, PL Butzer, Lagrange interpolation and sampling theorems. in Nonuniform Sampling: Theory and Practice, ed. by Marvasti F (New York: Springer, formerly Kluwer Academic/Plenum Publishers, 2001), pp. 123–168 46. F Marvasti, P Clarkson, M Dokic, U Goenchanart, C Liu, Reconstruction of speech signals with lost samples. IEEE Trans. Signal Process 40(12), 2897–2903 (1992) 47. F Marvasti, Random topics in nonuniform sampling. in Nonuniform Sampling: Theory and Practice, ed. by Marvasti F (Springer, formerly Kluwer Academic/Plenum Publishers, 2001), pp. 169–234 48. F Marvasti, M Hasan, M Eckhart, S Talebi, Efficient algorithms for burst error recovery using FFT and other transform kernels. IEEE Trans. Signal Process 47(4), 1065–1075 (1999) 49. PJSG Ferreira, Mathematics for multimedia signal processing II: Discrete finite frames and signal reconstruction. Signal Process Multimed IOS press 174, 35–54 (1999) 50. F Marvasti, M Analoui, M Gamshadzahi, Recovery of signals from nonuniform samples using iterative methods. IEEE Trans. ASSP 39(4), 872–878 (1991) 51. H Feichtinger, K Grô̈chenig, Theory and practice of irregular sampling. in Wavelets- Mathematics and Applications, ed. 
by Benedetto JJ, Frazier M (Boca Raton: CRC Publications, 1994), pp. 305–363 52. PJSG Ferreira, Noniterative and fast iterative methods for interpolation and extrapolation. IEEE Trans. Signal Process 42(11), 3278–3282 (1994) 53. A Aldroubi, K Grôchenig, Non-uniform sampling and reconstruction in shift-invariant spaces. SIAM Rev 43(4), 585–620 (2001) 54. A Aldroubi, Non-uniform weighted average sampling exact reconstruction in shift-invariant and wavelet spaces. Appl. Comput. Harmon. Anal 13(2), 151–161 (2002) 55. A Papoulis, C Chamzas, Detection of hidden periodicities by adaptive extrapolation. IEEE Trans. Acoust. Speech Signal Process 27(5), 492–500 (1979) 56. WYXu C Chamzas, An improved version of Papoulis-Gerchberg algorithm on band-limited extrapolation. IEEE Trans. Acoust. Speech Signal Process 32(2), 437–440 (1984) 57. PJSG Ferreira, Interpolation and the discrete Papoulis-Gerchberg algorithm. IEEE Trans. Signal Process 42(10), 2596–2606 (1994) 58. K Gröchenig, T Strohmer, Numerical and theoretical aspects of no. in Nonuniform Sampling: Theory and Practice, ed. by Marvasti F (New York: Springer, formerly Kluwer Academic/Plenum Publishers, 2001), pp. 283–324 59. DC Youla, Generalized image restoration by the method of alternating orthogonal projections. IEEE Trans. Circuits Syst 25(9), 694–702 (1978) 60. DC Youla, H Webb, Image restoration by the method of convex projections: Part 1-theory. IEEE Trans. Med. Imag 1(2), 81–94 (1982) 61. K Gröchenig, Acceleration of the frame algorithm. IEEE Trans. Signal Process 41(12), 3331–3340 (1993) 62. A Ali-Amini, M Babaie-Zadeh, C Jutten, A New Approach for Sparse Decomposition and Sparse Source Separation (EUSIPCO2006,Florence) 63. F Marvasti, Applications to error correction codes. in Nonuniform Sampling: Theory and Practice, ed. by Marvasti F (New York: Springer, 2001), pp. 689–738 64. E Candes, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. 
IEEE Trans. Inf. Theory 52(2), 489–509 (2006) 65. E Candes, T Tao, Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006) 66. AJ Jerri, The Shannon sampling theorem-its various extension and applications: a tutorial review. Proc IEEE 65(11), 1565–1596 (1977) 67. E Candes, M Wakin, An introduction to compressive sampling. IEEE Signal Process. Mag 25(2), 21–30 (2008) 68. Y Eldar, Compressed sensing of analog signals in shift-invariant spaces. IEEE Trans. Signal Process 57(8), 2986–2997 (2009) 69. E Candes, J Romberg, Sparsity and incoherence in compressive sampling. Inverse Probl 23, 969–985 (2007) 70. D Donoho, X Hou, Uncertainty principle and ideal atomic decomposition. IEEE Trans.Inf. Theory 47(7), 2845–2862 (2001) 71. RG Baraniuk, M Davenport, R DeVore, M Wakin, A simple proof of the restricted isometry property for random matrices. Constr Approx 28(3), 253–263 (2008) 72. AC Gilbert, MJ Strauss, JA Tropp, R Vershynin, Algorithmic linear dimension reduction in the ℓ1 norm for sparse vectors. in Proc, ed. by . Allerton conf. Comm., Control and Comp (USA: IL, 2006) 73. DL Donoho, For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution 59, 797–829 74. JA Tropp, Recovery of short linear combinations via ℓ1 minimization. IEEE Trans. Inf. Theory 90(4), 1568–1570 (2005) 75. J Tropp, A Gilbert, Signal recovery from partial information via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007) 76. E Candes, J Romberg, Quantitative robust uncertainty principles and optimally sparse decompositions. Found. Comput. Math 6(2), 227–254 (2006) 77. E Candes, J Romberg, T Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl. Math 59(8), 1207–1223 (2006) 78. 
V Saligrama, Deterministic designs with deterministic gaurantees:Toepliz compressed sensing matrices, sequence design and system identification. arXiv 0806, 4958 (2008) 79. Ev Berg, MP Friedlander, G Hennenfent, F Herrmann, R Saab, Ö Yılmaz, Sparco: a testing framework for sparse reconstruction, Dept. Comp. Sci. Univ. Br. Columbia, Vancouver. Tech. Rep TR-2007-20 (October 2007) 80. R Berinde, AC Gilbert, P Indyk, H Karloff, MJ Strauss, Combining geometry and combinatorics:a unified approach to sparse signal recovery. in Proc, ed. by . Allerton conf. Comm., Control and Comp (US: IL, 2008) (pp, 2008), . 798–805 81. MF Duarte, MB Wakin, RG Baraniuk, Fast reconstruction of piecewise smooth signals from random projections. in Proc, ed. by . SPARS05 (France: Rennes, Nov 2005), pp. 1064–1070 82. AC Gilbert, Y Kotidis, S Muthukrishnan, MJ Strauss, One-pass wavelet decompositions of data streams. IEEE Trans. Knowl. Data Eng 15(3), 541–554 (2003) 83. AC Gilbert, MJ Strauss, JA Tropp, R Vershynin, One sketch for all: fast algorithms for compressed sensing. ACM STOC (San Diego: CA, June 2007), pp. 237–246 84. S Sarvotham, D Baron, RG Baraniuk, Compressed sensing reconstruction via belief propagation. Technical Report ECE-0601 (ECE Dept., Rice University, July 2006) (http://dsp, July 2006), . rice.edu/ sites/dsp.rice.edu/files/cs/csbpTR07.142006.pdf webcite 85. S Sarvotham, D Baron, RG Baraniuk, Sudocodes-fast measurement and reconstruction of sparse signals. IEEE ISIT (USA: Seattle, 2006), pp. 2804–2808 86. W Xu, B Hassibi, Efficient compressive sensing with deterministic guarantees using expander graphs. in IEEE Inf, ed. by . Theory Workshop, ITW’07 (Japan: Tokyo, Sep. 2007), pp. 414–419 87. PL Dragotti, M Vetterli, T Blu, Sampling moments and reconstructing signals of finite rate of innovation:Shannon meets strang-fix. IEEE Trans. Signal Process 55(5), 1741–1757 (2007) 88. 
I Maravic, M Vetterli, Sampling and reconstruction of signals with finite rate of innovation in the presence of noise. IEEE Trans. Signal Process 53(8), 2788–2805 (2005) 89. P Azmi, F Marvasti, Robust decoding of DFT-based error-control codes for impulsive and additive white gaussian noise channels. IEEE Proc. Commun 152(3), 265–271 (2005) 90. F Marvasti, M Nafie, Sampling theorem: a unified outlook on information theory, block and convolutional codes. Spec. Issue Info. Theory Appl., IEICE Trans. Fundam. Electron. Commun. Comput. Sci. Sec. E 76(9), 1383–1391 (1993) 91. J Wolf, Redundancy, the discrete Fourier transform, and impulse noise cancellation. IEEE Trans. Commun 31(3), 458–461 (1983) 92. T Jr Marshall, Coding of real-number sequences for error correction: a digital signal processing problem. IEEE J. Sel. Areas Commun 2(2), 381–392 (2002) 93. C Berrou, A Glavieux, P Thitimajshima, Near Shannon limit error- correcting coding and decoding:Turbo codes. in Proc, ed. by . Int. Conf. Comm. (ICC) (Switzerland: Geneva, 1993), pp. 1064–1070 94. CN Hadjicostis, GC Verghese, Coding approaches to fault tolerance in linear dynamic systems. IEEE Trans. Inf. Theory 51(1), 210–228 (2005) 95. CJ Anfinson, FT Luk, A linear algebraic model of algorithm-based fault tolerance. IEEE Trans. Comput 37(12), 1599–1604 (1988) 96. VSS Nair, JA Abraham, Real-number codes for fault-tolerant matrix operations on processor arrays. IEEE Trans. Comput 39(4), 426–435 (1990) 97. ALN Reddy, P Banerjee, Algorithm-based fault detection for signal processing applications. IEEE Trans. Comput 39(10), 1304–1308 (1990) 98. JMN Vieira, PJSG Ferreira, Interpolation, spectrum analysis, error-control coding, and fault tolerant computing. in Proc, ed. by . of ICASSP’97 (Germany: Munich, Apr 1997), pp. 1831–1834 99. M Nafie, F Marvasti, Implementation of recovery of speech with missing samples on a DSP chip. Electron Lett 30(1), 12–13 (1994) 100. 
A Momenai, S Talebi, Improving the stability of DFT error recovery codes by using sparse oversampling patterns. Elsevier 87(6), 1448–1461 (2007) 101. F Marvasti, Error concealment of speech, image and video signals. U.S Patents 6,601, 206 (July 2003) 102. C Wong, F Marvasti, W Chambers, Implementation of recovery of speech with impulsive noise on a DSP chip. Electron Lett 31(17), 1412–1413 (1995) 103. de B G R Prony, Essai éxperimental et analytique: sur les lois de la dilatabilité de fluides élastique et sur celles de la force expansive de la vapeur de l’alkool, á différentes températures. J. l’École. Polytech 1, 24–76 (1795) 104. JJ Fuchs, Extension of the Pisarenko method to sparse linear arrays. IEEE Trans. Signal Process 45(10), 2413–2421 (1997) 105. R Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag 34(3), 276–280 (1986) 106. JJ Fuchs, On the application of the global matched filter to DOA estimation with uniform circular array. IEEE Trans. Signal Process 49(4), 702–709 (2001) 107. JJ Fuchs, Linear programming in spectral estimation. Application to array processing. in Proc, ed. by . IEEE Int. Conf. Acoustics Speech Signal Proc., ICASSP’96, vol. 6, (USA, 7–10: Atlanta, GA, May 1996), pp. 3161–3164 108. H Krim, M Viberg, Two decades of array signal processing research: the parametric approach. IEEE Signal Process. Mag 13(4), 67–94 (1996) 109. BDV Veen, KM Buckley, Beamforming: a versatile approach to spatial filtering. IEEE ASSP Mag 5(2), 4–24 (1988) 110. S Valaee, P Kabal, An information theoretic approach to source enumeration in array signal processing. IEEE Trans. Signal Process 52(5), 1171–1178 (2004) 111. R Roy, T Kailath, Esprit-estimation of signal parameters via rotational invariance techniques. IEEE Trans. ASSP 37(7), 984–995 (1989) 112. M Wax, T Kailath, Detection of signals by information theoretic criteria. IEEE Trans. ASSP 33(2), 387–392 (1985) 113. 
I Ziskind, M Wax, Maximum likelihood localization of multiple sources by alternating projection. IEEE Trans. ASSP 36(10), 1553–1560 (1988) 114. M Viberg, B Ottersten, Sensor array processing based on subspace fitting. IEEE Trans. Signal Process 39(5), 1110–1121 (1991) 115. S Shahbazpanahi, S Valaee, AB Gershman, A covariance fitting approach to parametric localization of multiple incoherently distributed sources. IEEE Trans. Signal Process 52(3), 592–600 (2004) 116. H Akaike, A new look on the statistical model identification. IEEE Trans. Autom. Control 19(6), 716–723 (1974) 117. M Kaveh, H Wang, H Hung, On the theoretical performance of a class of estimators of the number of narrow-band sources. IEEE Trans. ASSP 35(9), 1350–1352 (1987) 118. QT Zhang, KM Wong, PC Yip, JP Reilly, Statistical analysis of the performance of information theoretic criteria in the detection of the number of signals in array processing. IEEE Trans. ASSP 37 (10), 1557–1567 (1989) 119. J Rissanen, A universal prior for integers and estimation by minimum description length. Annals Stat 11(2), 416–431 (1983) 120. B Nadler, A Kontorvich, Model selection for sinusoids in noise: statistical analysis and a new penalty term. IEEE Trans. Signal Process 59(4), 1333–1345 (2011) 121. F Haddadi, MRM Mohammadi, MM Nayebi, MR Aref, Statistical performance analysis of detection of signals by information theoretic criteria. IEEE Trans. Signal Process 58(1), 452–457 (2010) 122. M Gastpar, M Vetterli, Source-channel communication in sensor networks. Lecture Notes in Computer Science (New York: Springer, 2003), pp. 162–177 123. AM Sayeed, A statistical signal modeling framework for wireless sensor networks, in Proc. 2nd Int. Workshop on Info. Proc. in Sensor Networks, IPSN’03, UW Tech, ed. by . Rep. ECE-1-04 (WI: Univ. Wisconsin, Madison, Feb 2004), pp. 162–177 124. K Liu, AM Sayeed, Optimal distributed detection strategies for wireless sensor networks. in Proc, ed. by . 42nd Annual Allerton Conf. 
on Comm., Control and Comp. (IL: Monticello, Oct 2004), pp. 125. A D’Costa, V Ramachandran, A Sayeed, Distributed classification of gaussian space-time sources in wireless sensor networks. IEEE J. Sel. Areas Commun 22(6), 1026–1036 (2004) 126. WU Bajwa, J Haupt, AM Sayeed, R Nowak, Compressive wireless sensing. in Proc, ed. by . Int. Symposium on Info. Proc. in Sensor Networks, IPSN’06 (TN: Nashville, Apr 2006), pp. 134–142 127. R Rangarajan, R Raich, A Hero, Sequential design of experiments for a rayleigh inverse scattering problem (France: Bordeaux, July 2005) 128. Y Yang, RS Blum, Radar waveform design using minimum mean-square error and mutual information. in Fourth IEEE Workshop Sens, ed. by . Array Multichannel Proc. vol. 12 (USA: Waltham, MA, 2006), pp. 234–238 129. A Kumar, P Ishwar, K Ramchandran, On distributed sampling of smooth non-bandlimited fields. in Int, ed. by . Simp. On Info. Proc. In Sensor Netwroks ISPN2004 (CA: Berkeley, April 2004), pp. 130. E Meijering, A chronology of interpolation: from ancient astronomy to modern signal and image processing. Proc. IEE 90(3), 319–342 (March 2002) 131. D Ganesan, S Ratnasamy, H Wang, D Estrin, Coping with irregular spatio-temporal sampling in sensor networks. ACM SIGCOMM Comput. Commun Rev 34(1), 125–130 (2004) 132. R Wagner, R Baraniuk, S Du, D Johnson, A Cohen, An architecture for distributed wavelet analysis and processing in sensor networks. in Proc, ed. by . of Int. Workshop Info. Proc. in Sensor Networks, IPSN’06 (TN: Nashville, Apr 2006), pp. 243–250 133. R Wagner, H Choi, R Baraniuk, V Delouille, Distributed wavelet transform for irregular sensor network grids, in Proc, ed. by . IEEE Stat. Signal Proc. Workshop (SSP) (Bordeaux, July 2005), pp. 134. SS Pradhan, J Kusuma, K Ramchandran, Distributed compression in a dense microsensor network. IEEE Signal Process. Mag 19(2), 51–60 (2002) 135. M Gastpar, M Vetterli, Power, spatio-temporal bandwidth distortion in large sensor networks. IEEE J Sel. 
Areas Commun 23(4), 745–754 (2005) 136. WU Bajwa, AM Sayeed, R Nowak, Matched source-channel communication for field estimation in wireless sensor networks (Los Angeles: CA, April 2005) 137. R Mudumbai, J Hespanha, U Madhow, G Barriac, Scalable feedback control for distributed beamforming in sensor networks. in Proc, ed. by . of the Int. Symposium on Info. Theory, ISIT’05 (SA: Adelaide, Sept 2005), pp. 137–141 138. J Haupt, R Nowak, Signal reconstruction from noisy random projections. IEEE Trans. Inf. Theory 52(9), 4036–4068 (2006) 139. D Baron, MB Wakin, MF Duarte, S Sarvotham, RG Baraniuk, Distributed compressed sensing, submitted for publication, pre-print (http://www, November 2005), . ece.rice.edu/drorb/pdf/DCS112005.pdf 140. W Wang, M Garofalakis, K Ramchandran, Distributed sparse random projections for refinable approximation. in Proc, ed. by . IPSN’07 (MA: Cambridge, April 2007), pp. 331–339 141. MF Duarte, MB Wakin, D Baron, RG Baraniuk, Universal distributed sensing via random projections (TN: Nashville, April 2006) 142. J Haupt, WU Bajwa, M Rabbat, R Nowak, Compressed sensing for networked data. IEEE Signal Process. Mag 2, 92–101 (2008) 143. M Crovella, E Kolaczyk, Graph wavelets for spatial traffic analysis. in INFOCOM 2003, ed. by . Twenty-Second Annual Joint Conf. of the IEEE Computer and Communications Societies. IEEE (MA: Boston University, March 2003), pp. 1848–1857 144. WU Bajwa, J Haupt, AM Sayeed, R Nowak, Joint source-channel communication for distributed estimation in sensor networks. IEEE Trans. Inf. Theory 53(10), 3629–3653 (2007) 145. S Boyd, A Ghosh, B Prabhakar, D Shah, Randomized gossip algorithms. IEEE Trans. Inf. Theory IEEE/ACM Trans. Netw 52(6), 2508–2530 (2006) 146. M Rabbat, J Haupt, A Singh, R Nowak, Decentralized compression and predistribution via randomized gossiping. in Proc, ed. by . IPSN’06 (TN: Nashville, April 2006), pp. 51–59 147. 
SJ Kim, K Koh, M Lustig, S Boyd, D Gorinevsky, A method for large-scale ℓ1-regularized least squares problems with applications in signal processing and statistics. IEEE J. Sel. Top. Signal Process 1(4), 606–617 (2007) 148. Y Rachlin, R Negi, P Khosla, Sensing capacity for discrete sensor network applications. in Proc, ed. by . of the Fourth Int. Symposium on Info. Proc. in Sensor Networks (PA: Pittsburgh, April 2005), pp. 126–132 149. S Aeron, M Zhao, V Saligrama, Information theoretic bounds to sensing capacity of sensor networks under fixed SNR. in IEEE Info, ed. by . Theory Workshop, ITW’07 (CA: Lake Tahoe, 2007), pp. 150. S Aeron, M Zhao, V Saligrama, Information theoretic bounds for compressed sensing. IEEE Trans. Inf. Theory 52(10), 5111–5130 (2010) 151. J Herault, C Jutten, Space or time adaptive signal processing by neural network models. in Proc, ed. by . of American Institute of Physics (AIP) Conf.: Neural Networks for Computing (US: Snowbird,Utah, 1986), pp. 206–211 152. AM Aibinu, AR Najeeb, MJE Salami, AA Shafie, Optimal model order selection for transient error autoregressive moving average (TERA) MRI reconstruction method. Proc. World Acad. Sci. Eng. Technol 32, 191–195 (2008) 153. MS Pedersen, J Larsen, U Kjems, LC Parra, A Survey of Convolutive Blind Source Separation Methods, Springer Handbook of Speech Processing (Berlin: Springer, 2007) 154. M Zibulevsky, BA Pearlmutter, Blind source separation by sparse decomposition in a signal dictionary. Neural Comp 13(4), 863–882 (2001) 155. Y Luo, JA Chambers, S Lambotharan, I Proudler, Exploitation of source non-stationarity in underdetermined blind source separation with advanced clustering techniques. IEEE Trans. Signal Process 54(6), 2198–2212 (2006) 156. Y Li, S Amari, A Cichocki, DWC Ho, S Xie, Underdetermined blind source separation based on sparse representation. IEEE Trans. Signal Process 54(2), 423–437 (2006) 157. 
A Jourjine, S Rickard, O Yilmaz, Blind separation of disjoint orthogonal signals: demixing n sources from 2 mixtures. in Proc, ed. by . of IEEE Conf. on Acoustic, Speech, and Signal Proc. ICASSP’2000 (Turkey: Istanbul, 2000), pp. 2985–2988 158. L Vielva, D Erdogmus, C Pantaleon, I Santamaria, J Pereda, JC Principe, Underdetermined blind source separation in a time-varying environment. in Proc, ed. by . of IEEE Conf. On Acoustic, Speech, and Signal Proc. ICASSP’02 (USA: Orlando, FL, 13), pp. 3049–3052 159. I Takigawa, M Kudo, A Nakamura, J Toyama, On the minimum ℓ1-norm signal recovery in underdetermined source separation. in Proc, ed. by . of 5th Int. Conf. on Independent Component Analysis (Spain: Granada, 2004), pp. 22–24 160. CC Took, S Sanei, J Chambers, A filtering approach to underdetermined BSS with application to temporomandibular disorders. in Proc IEEE ICASSP’06 (France: IEEE, May 2006), pp. 1124–1127 161. T Melia, S Rickard, Underdetermined blind source separation in echoic environment using DESPRIT, EURASIP. J. Adv. Signal Process 2007(Article No 86484), 19 (2007) 162. K Nazarpour, S Sanei, L Shoker, JA Chambers, Parallel space-time-frequency decomposition of EEG signals for brain computer interfacing. in Proc, ed. by . of EUSIPCO 2006 (Italy: Florence), pp. 163. R Gribonval, S Lesage, A survey of sparse component analysis for blind source separation: principles, perspectives, and new challenges. Proc. of ESANN, 323–330 (April 2006) 164. SS Chen, DL Donoho, MA Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput 20, 33–61 (1998) 165. E Candes, J Romberg, ℓ1-magic: Recovery of sparse signals (http://www), . acm.caltech.edu/l1magic/.Availableonline webcite 166. JJ Fuchs, On sparse representations in arbitrary redundant bases. IEEE Trans. Inf. Theory 50(6), 1341–1344 (2004) 167. J Bobin, Y Moudden, JL Starck, M Elad, Morphological diversity and source separation. IEEE Signal Process. Lett 13(7), 409–502 (2006) 168. 
J Bobin, JL Starck, JM Fadili, Y Moudden, DL Donoho, Morphological component analysis: an adaptive thresholding strategy. IEEE Trans. Image Process 16(11), 2675–2685 (2007) 169. A Schmidt, JMF Moura, Field inversion by consensus and compressed sensing. in Proc, ed. by . of IEEE Int. Conf. on Acoustics, Speech and Signal Proc., ICASSP’09 (Taiwan: IEEE,Taipei, 2009), pp. 170. M Mishali, YC Eldar, Sparce source separation from orthogonal mixtures. in Proc, ed. by . of IEEE Int. Conf. on Acoustics, Speech and Signal Proc., ICASSP’09 IEEE (Taiwan: Taipei, 2009), pp. 171. H Schulze, C Lueders, Theory and Applications of OFDM and CDMA:Wideband Wireless Communications (NJ: Wiley, 2005) 172. M Ozdemir, H Arslan, Channel estimation for wireless ofdm systems. IEEE Commun. Surv. Tutor 9(2), 18–48 (2007) 173. JJV de Beek, O Edfors, M Sandell, SK Wilson, P Borjesson, On channel estimation in OFDM systems. in Proc, ed. by . 45th IEEE Vehicular Technology Conf., Chicago (IL: Chicago, July 1995), pp. 174. M Morelli, U Mengali, A comparison of pilot-aided channel estimation methods for ofdm systems. IEEE Trans. Signal Process 49(12), 3065–3073 (2001) 175. O Edfors, M Sandell, J-JV de Beek, SK Wilson, OFDM channel estimation by singular value decomposition. IEEE Trans. Commun 46(7), 931–939 (1998) 176. S Coleri, M Ergen, A Puri, A Bahai, Channel estimation techniques based on pilot arrangement in OFDM systems. IEEE Trans. Broadcast 48(3), 223–229 (2002) 177. SG Kang, YM Ha, EK Joo, A comparative investigation on channel estimation algorithms for OFDM in mobile communications. IEEE Trans. Broadcast 49(2), 142–149 (2003) 178. G Tauböck, F Hlawatsch, A compressed sensing technique for OFDM channel estimation in mobile environments. in exploiting channel sparsity for reducing pilots, in Proc, ed. by . ICASSP’08 (March: Las Vegas, 2008), pp. 2885–2888 179. MR Raghavendra, K Giridhar, Improving channel estimation in OFDM systems for sparse multipath channels. IEEE Signal Process. 
Lett 12(1), 52–55 (2005) 180. T Kang, RA Iltis, Matching pursuits channel estimation for an underwater acoustic OFDM modem. in in IEEE Int, ed. by . Conf. on Acoustics, Speech and Signal Proc., ICASSP’08 (March: Las Vegas, 2008), pp. 5296–5299 Sign up to receive new article alerts from EURASIP Journal on Advances in Signal Processing
π vs τ (Pi vs Tau)

August 17th 2013, 10:47 PM — #1, Junior Member (joined Apr 2011)

Hello! I am wondering if I could please have help with this topic of why Pi would be more beneficial than Tau. Are there any proofs that show why Pi would be better for a three-dimensional sphere or in four dimensions? Any harder equations/proofs would be appreciated. Also, why would Pi be more useful in Euler's formula? Thanks!

August 18th 2013, 11:30 AM — #2, MHF Contributor (joined Apr 2005)

Re: π vs τ (Pi vs Tau)

Before anyone could answer that you will have to tell what YOU mean by one number being more "beneficial" or "better" than another!

August 18th 2013, 07:35 PM — #3, Junior Member (joined Apr 2011)

Re: π vs τ (Pi vs Tau)

From what I understand, people who support Tau want it to replace the teaching of Pi. One of their examples is that it is easier to teach radians to students: Pi/2 is a quarter turn, compared with Tau/4 being a quarter turn. So I guess what I am looking for is what would be better in the educational system. If I were to support Pi, how could I find a good tertiary example for this? I am happy to continue using Pi, as it has been around for thousands of years, though Tau makes some valid points. Pi and Tau both have their ups and downs, so really it shouldn't matter which we use, since we know 2·Pi = Tau. However, I would still like some good arguments for Pi to stick around.

August 18th 2013, 10:26 PM — #4

Re: π vs τ (Pi vs Tau)

You may change your mind after watching the link below.
Pi may be wrong, but so is Tau! - YouTube

August 19th 2013, 06:21 AM — #5, MHF Contributor (joined Apr 2005)

Re: π vs τ (Pi vs Tau)

There are far more important things to argue over than numbers!
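The Euler's-formula point raised in the thread is easy to check numerically: with π, Euler's identity reads e^{iπ} = −1, while with τ = 2π it reads e^{iτ} = 1. A minimal Python sketch (my own illustration, not from the thread):

```python
import cmath

pi = cmath.pi
tau = 2 * pi  # tau is, by definition, just 2*pi

# Euler's identity in each convention (exact up to floating-point rounding):
z_pi = cmath.exp(1j * pi)    # e^{i*pi}  should be -1
z_tau = cmath.exp(1j * tau)  # e^{i*tau} should be  1

print(abs(z_pi - (-1)) < 1e-12)   # True
print(abs(z_tau - 1) < 1e-12)     # True
```

Numerically both identities hold to machine precision, so neither convention is "more correct"; the debate is purely about which form of the identity one finds more elegant.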
Number of pairings in a set of n objects

I like Serena:

Hi cepheid! Trying to keep track of unordered stuff usually gives me a headache. What I have learned to do is first count the ordered stuff, and then divide by the number of duplicate countings.

In your case you can order the objects in ##n!## ways, yielding ##m## ordered pairs. However, since the pairs are supposed to be unordered, we are counting each pair twice, so we need to divide by ##2^m##. Furthermore, the m pairs can be ordered in m! ways, so we need to divide by ##m!## to eliminate the duplicate countings. I believe the general formula is:
$$N_{\text{pairings}} = \frac{n!}{m! \, 2^m}$$
I didn't try to brute-force count and verify, though...

To illustrate with n = 6, the first ordering is (1,2),(3,4),(5,6). However, (2,1),(3,4),(5,6) is the same pairing, which we would be counting separately; we need to divide by 2 for each pair in the pairing, that is, by 2^3. Furthermore, (3,4),(1,2),(5,6) is again the same pairing, which we would also be counting separately; we need to divide by 3! to compensate.

cepheid:

Yeah, I see what I did wrong. You're right: I am over-counting due to the ordering of the pairs (not the ordering within the pairs, which is taken care of by my using n choose 2 instead of n permute 2). The problem first started with my stray factor of 1/2 in the n = 4 step. This wasn't a factor of 1/2, it was a factor of 1/m, because it was saying that if you have four objects, 1 2 3 4, and you choose to pair 12, then the other pair is 34. However, if you choose to pair 34, then the other one is 12, and these are not distinct outcomes. This correction of 1/m needs to occur at every recursion step. For example, with n = 6, I had 6 choose 2, and I said: if the two initially chosen are 12, then the three pairings of the remaining four objects that weren't initially chosen are (3,4),(5,6); (3,5),(4,6); and (3,6),(4,5).
The problem is, if you multiply these three sets by 6 choose 2, you're including, for example, the case where you choose 34 initially, which leads to (3,4),(1,2),(5,6), which is not distinct from one of the above sets. So it's clear that the number of times a duplicate set will appear is just going to be equal to m, the number of pairs (3 in this case). (That's because this set of pairs will appear again in the "56" series, when those are the two initially chosen.)

So, recasting my product, I end up with:
$$N_\textrm{pairings} = \prod_{i=2}^m \frac{1}{i}\binom{2i}{2}$$
And for m = 4 this is
$$\frac{1}{2}\binom{4}{2}\,\frac{1}{3}\binom{6}{2}\,\frac{1}{4}\binom{8}{2} = \frac{1}{4\cdot 3 \cdot 2}\,\frac{8 \cdot 7}{2}\,\frac{6\cdot 5}{2}\,\frac{4\cdot 3}{2} = \frac{1}{4!}\,\frac{1}{2^3}\,\frac{8!}{2!} = \frac{8!}{4! \cdot 2^4}$$
reproducing your formula. However, your method is WAY better. Mine is tricky and prone to error.
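The closed form ##n!/(m!\,2^m)## can in fact be verified by the brute-force count mentioned in the thread. A short Python sketch (my own, not posted in the thread) that enumerates all orderings, canonicalizes each into an unordered set of unordered pairs, and compares the distinct count against the formula:

```python
from itertools import permutations
from math import factorial

def count_pairings_brute(n):
    """Enumerate all n! orderings, group consecutive elements into pairs,
    and canonicalize each pairing as a set of unordered pairs (n even)."""
    pairings = set()
    for p in permutations(range(n)):
        pairing = frozenset(frozenset(p[i:i + 2]) for i in range(0, n, 2))
        pairings.add(pairing)
    return len(pairings)

def count_pairings_formula(n):
    """Closed form n! / (m! * 2^m), with m = n // 2 pairs."""
    m = n // 2
    return factorial(n) // (factorial(m) * 2 ** m)

# The two counts agree for every even n we try:
for n in (2, 4, 6, 8):
    assert count_pairings_brute(n) == count_pairings_formula(n)

print(count_pairings_formula(6))  # -> 15 pairings of 6 objects
```

For n = 6 both methods give 15, matching 6!/(3!·2³) = 720/48.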
Transformation groups and symmetry groups

Mathematicians classify the various patterns by their symmetries, the transformations that leave them invariant. For a given pattern, the collection of symmetries forms what mathematicians call a symmetry group, which is a kind of "transformation group."

Now, the word "group" as used in the English language just means a bunch of things considered together. Mathematicians need a word for something more, and, for better or for worse, they decided on "group." A group of things, for a mathematician, means a collection of things with a certain structure. The structure is one of "composition." Given two elements S and T of a group, you can "compose" them to get another element ST of the group. In our case we're composing transformations of the plane that leave a pattern invariant. That just means first perform one transformation S, then perform the other transformation T. (It's a matter of convention whether you read ST from left to right or from right to left. Although the right-to-left convention is more common, let's use the left-to-right convention here.) If each transformation is a symmetry of a pattern, then their composition is a symmetry of the pattern, too.

The structure, that is, the operation of composition, isn't enough by itself to give a group. There are certain axioms that the operation must obey in order to have a group. First, there has to be some element of the group that acts as an identity, that is, it doesn't do anything. We'll use the letter I to denote the identity of a group. A little more precisely, the axiom requires that the identity I, when composed with any element T, gives back T. Algebraically, we require I T = T, and T I = T. In a symmetry group for a pattern, the identity is the identity transformation. That's the transformation of the plane that doesn't move any point. It's trivial. It doesn't do anything!

The second requirement for a group is that there are "inverses."
An inverse of an element T is another element, usually written T^-1, whose composition with T gives the identity. That is, T^-1 T = I, and T T^-1 = I. So the inverse undoes whatever T does. For example, the inverse of a translation upwards is a translation downwards. The inverse of a rotation 90° clockwise is a rotation 90° counterclockwise. The inverse of a reflection, surprisingly enough, is itself.

The third axiom for groups is associativity. Algebraically, whenever S, T, and U are three elements, (S T) U = S (T U). It allows us to write the composition of three elements without using parentheses. Composition of transformations is associative.

The groups that we're dealing with aren't commutative; that is, the equation S T = T S usually doesn't hold in our transformation groups. If S and T both happen to be translations, then it does hold, but rarely otherwise. For example, if S and T are reflections with parallel axes, then ST and TS are both translations, but in opposite directions.

David E. Joyce
Department of Mathematics and Computer Science
Clark University
Worcester, MA 01610

The files are located at http://aleph0.clarku.edu/~djoyce/wallpaper/
CRC Implementation Code in C

Sun, 2007-12-02 16:54 - webmaster

CRCs are among the best checksums available to detect and/or correct errors in communications transmissions. Unfortunately, the modulo-2 arithmetic used to compute CRCs doesn't map easily into software. This article shows how to implement an efficient CRC in C.

I'm going to complete my discussion of checksums by showing you how to implement CRCs in software. I'll start with a naive implementation and gradually improve the efficiency of the code as I go along. However, I'm going to keep the discussion at the level of the C language, so further steps could be taken to improve the efficiency of the final code simply by moving into the assembly language of your particular processor.

For most software engineers, the overwhelmingly confusing thing about CRCs is their implementation. Knowing that all CRC algorithms are simply long division algorithms in disguise doesn't help. Modulo-2 binary division doesn't map particularly well to the instruction sets of off-the-shelf processors. For one thing, generally no registers are available to hold the very long bit sequence that is the numerator. For another, modulo-2 binary division is not the same as ordinary division. So even if your processor has a division instruction, you won't be able to use it.

Modulo-2 binary division

Before writing even one line of code, let's first examine the mechanics of modulo-2 binary division. We'll use the example in Figure 1 to guide us. The number to be divided is the message augmented with zeros at the end. The number of zero bits added to the message is the same as the width of the checksum (what I call c); in this case four bits were added. The divisor is a c+1-bit number known as the generator polynomial.
The modulo-2 division process is defined as follows:

• Call the uppermost c+1 bits of the message the remainder.
• Beginning with the most significant bit in the original message and for each bit position that follows, look at the c+1 bit remainder:
  □ If the most significant bit of the remainder is a one, the divisor is said to divide into it. If that happens (just as in any other long division) it is necessary to indicate a successful division in the appropriate bit position in the quotient and to compute the new remainder. In the case of modulo-2 binary division, we simply:
    ☆ Set the appropriate bit in the quotient to a one, and
    ☆ XOR the remainder with the divisor and store the result back into the remainder.
  □ Otherwise (if the first bit is not a one):
    ☆ Set the appropriate bit in the quotient to a zero, and
    ☆ XOR the remainder with zero (no effect).
  □ Left-shift the remainder, shifting in the next bit of the message. The bit that's shifted out will always be a zero, so no information is lost.

The final value of the remainder is the CRC of the given message.

What's most important to notice at this point is that we never use any of the information in the quotient, either during or after computing the CRC. So we won't actually need to track the quotient in our software implementation. Also note here that the result of each XOR with the generator polynomial is a remainder that has zero in its most significant bit. So we never lose any information when the next message bit is shifted into the remainder.

Bit by bit

Listing 1 contains a naive software implementation of the CRC computation just described. It simply attempts to implement that algorithm as it was described above for this one particular generator polynomial. Even though the unnecessary steps have been eliminated, it's extremely inefficient. Multiple C statements (at least the decrement and compare, binary AND, test for zero, and left shift operations) must be executed for each bit in the message.
Given that this particular message is only eight bits long, that might not seem too costly. But what if the message contains several hundred bytes, as is typically the case in a real-world application? You don't want to execute dozens of processor opcodes for each byte of input data.

#define POLYNOMIAL  0xD8    /* 11011 followed by 0's */

uint8_t crcNaive(uint8_t const message)
{
    uint8_t remainder;

    /* Initially, the dividend is the remainder. */
    remainder = message;

    /* For each bit position in the message... */
    for (uint8_t bit = 8; bit > 0; --bit)
    {
        /* If the uppermost bit is a 1... */
        if (remainder & 0x80)
        {
            /* XOR the previous remainder with the divisor. */
            remainder ^= POLYNOMIAL;
        }

        /* Shift the next bit of the message into the remainder. */
        remainder = (remainder << 1);
    }

    /* Return only the relevant bits of the remainder as CRC. */
    return (remainder >> 4);
}   /* crcNaive() */

Code clean up

Before we start making this more efficient, the first thing to do is to clean this naive routine up a bit. In particular, let's start making some assumptions about the applications in which it will most likely be used. First, let's assume that our CRCs are always going to be 8-, 16-, or 32-bit numbers. In other words, that the remainder can be manipulated easily in software. That means that the generator polynomials will be 9, 17, or 33 bits wide, respectively. At first it seems we may be stuck with unnatural sizes and will need special register combinations, but remember these two facts:

• The most significant bit of any generator polynomial is always a one
• The uppermost bit of the XOR result is always zero and promptly shifted out of the remainder

Since we already have the information in the uppermost bit and we don't need it for the XOR, the polynomial can also be stored in an 8-, 16-, or 32-bit register. We can simply discard the most significant bit. The register size that we use will always be equal to the width of the CRC we're calculating.
As long as we're cleaning up the code, we should also recognize that most CRCs are computed over fairly long messages. The entire message can usually be treated as an array of unsigned data bytes. The CRC algorithm should then be iterated over all of the data bytes, as well as the bits within those bytes. The result of making these two changes is the code shown in Listing 2. This implementation of the CRC calculation is still just as inefficient as the previous one. However, it is far more portable and can be used to compute a number of different CRCs of various widths.

/*
 * The width of the CRC calculation and result.
 * Modify the typedef for a 16 or 32-bit CRC standard.
 */
typedef uint8_t crc;

#define WIDTH   (8 * sizeof(crc))
#define TOPBIT  (1 << (WIDTH - 1))

crc crcSlow(uint8_t const message[], int nBytes)
{
    crc remainder = 0;

    /* Perform modulo-2 division, a byte at a time. */
    for (int byte = 0; byte < nBytes; ++byte)
    {
        /* Bring the next byte into the remainder. */
        remainder ^= (message[byte] << (WIDTH - 8));

        /* Perform modulo-2 division, a bit at a time. */
        for (uint8_t bit = 8; bit > 0; --bit)
        {
            /* Try to divide the current data bit. */
            if (remainder & TOPBIT)
            {
                remainder = (remainder << 1) ^ POLYNOMIAL;
            }
            else
            {
                remainder = (remainder << 1);
            }
        }
    }

    /* The final remainder is the CRC result. */
    return (remainder);
}   /* crcSlow() */

Byte by byte

The most common way to improve the efficiency of the CRC calculation is to throw memory at the problem. For a given input remainder and generator polynomial, the output remainder will always be the same. If you don't believe me, just reread that sentence as "for a given dividend and divisor, the remainder will always be the same." It's true. So it's possible to precompute the output remainder for each of the possible byte-wide input remainders and store the results in a lookup table. That lookup table can then be used to speed up the CRC calculations for a given message. The speedup is realized because the message can now be processed byte by byte, rather than bit by bit.
The code to precompute the output remainders for each possible input byte is shown in Listing 3. The computed remainder for each possible byte-wide dividend is stored in the array crcTable[]. In practice, the crcInit() function could either be called during the target's initialization sequence (thus placing crcTable[] in RAM) or it could be run ahead of time on your development workstation with the results stored in the target device's ROM.

crc crcTable[256];

void crcInit(void)
{
    crc remainder;

    /* Compute the remainder of each possible dividend. */
    for (int dividend = 0; dividend < 256; ++dividend)
    {
        /* Start with the dividend followed by zeros. */
        remainder = dividend << (WIDTH - 8);

        /* Perform modulo-2 division, a bit at a time. */
        for (uint8_t bit = 8; bit > 0; --bit)
        {
            /* Try to divide the current data bit. */
            if (remainder & TOPBIT)
            {
                remainder = (remainder << 1) ^ POLYNOMIAL;
            }
            else
            {
                remainder = (remainder << 1);
            }
        }

        /* Store the result into the table. */
        crcTable[dividend] = remainder;
    }
}   /* crcInit() */

Of course, whether it is stored in RAM or ROM, a lookup table by itself is not that useful. You'll also need a function to compute the CRC of a given message that is somehow able to make use of the values stored in that table. Without going into all of the mathematical details of why this works, suffice it to say that the previously complicated modulo-2 division can now be implemented as a series of lookups and XORs. (In modulo-2 arithmetic, XOR is both addition and subtraction.) A function that uses the lookup table contents to compute a CRC more efficiently is shown in Listing 4. The amount of processing to be done for each byte is substantially reduced.

crc crcFast(uint8_t const message[], int nBytes)
{
    uint8_t data;
    crc remainder = 0;

    /* Divide the message by the polynomial, a byte at a time. */
    for (int byte = 0; byte < nBytes; ++byte)
    {
        data = message[byte] ^ (remainder >> (WIDTH - 8));
        remainder = crcTable[data] ^ (remainder << 8);
    }

    /* The final remainder is the CRC. */
    return (remainder);
}   /* crcFast() */

As you can see from the code in Listing 4, a number of fundamental operations (left and right shifts, XORs, lookups, and so on) still must be performed for each byte even with this lookup table approach. So to see exactly what has been saved (if anything), I compiled both crcSlow() and crcFast() with IAR's C compiler for the PIC family of eight-bit RISC processors. [1] I figured that compiling for such a low-end processor would give us a good worst-case comparison of the number of instructions needed for these different types of CRC computations. The results of this experiment were as follows:

• crcSlow(): 185 instructions per byte of message data
• crcFast(): 36 instructions per byte of message data

So, at least on one processor family, switching to the lookup table approach results in a more than five-fold performance improvement. That's a pretty substantial gain considering that both implementations were written in C. A bit more could probably be done to improve the execution speed of this algorithm if an engineer with a good understanding of the target processor were assigned to hand-code or tune the assembly code. My somewhat-educated guess is that another two-fold performance improvement might be possible. Actually achieving that is, as they say in textbooks, left as an exercise for the curious reader.

CRC standards and parameters

Now that we've got our basic CRC implementation nailed down, I want to talk about the various types of CRCs that you can compute with it. As I mentioned last month, several mathematically well understood and internationally standardized CRC generator polynomials exist, and you should probably choose one of those rather than risk inventing something weaker. In addition to the generator polynomial, each of the accepted CRC standards also includes certain other parameters that describe how it should be computed. Table 1 contains the parameters for three of the most popular CRC standards.
Two of these parameters are the "initial remainder" and the "final XOR value". The purpose of these two c-bit constants is similar to the final bit inversion step added to the sum-of-bytes checksum algorithm. Each of these parameters helps eliminate one very special, though perhaps not uncommon, class of ordinarily undetectable difference. In effect, they bulletproof an already strong checksum algorithm.

                              CRC-CCITT    CRC-16     CRC-32
    Width                     16 bits      16 bits    32 bits
    (Truncated) Polynomial    0x1021       0x8005     0x04C11DB7
    Initial Remainder         0xFFFF       0x0000     0xFFFFFFFF
    Final XOR Value           0x0000       0x0000     0xFFFFFFFF
    Reflect Data?             No           Yes        Yes
    Reflect Remainder?        No           Yes        Yes
    Check Value               0x29B1       0xBB3D     0xCBF43926

    Table 1. Parameters of three popular CRC standards

To see what I mean, consider a message that begins with some number of zero bits. The remainder will never contain anything other than zero until the first one in the message is shifted into it. That's a dangerous situation, since packets beginning with one or more zeros may be completely legitimate and a dropped or added zero would not be noticed by the CRC. (In some applications, even a packet of all zeros may be legitimate!) The simple way to eliminate this weakness is to start with a nonzero remainder. The parameter called initial remainder tells you what value to use for a particular CRC standard. And only one small change is required to the crcSlow() and crcFast() functions:

    crc remainder = INITIAL_REMAINDER;

The final XOR value exists for a similar reason. To implement this capability, simply change the value that's returned by crcSlow() and crcFast() as follows:

    return (remainder ^ FINAL_XOR_VALUE);

If the final XOR value consists of all ones (as it does in the CRC-32 standard), this extra step will have the same effect as complementing the final remainder. However, implementing it this way allows any possible value to be used in your specific application.
In addition to these two simple parameters, two others exist that impact the actual computation. These are the binary values "reflect data" and "reflect remainder". The basic idea is to reverse the bit ordering of each byte within the message and/or the final remainder. The reason this is sometimes done is that a good number of hardware CRC implementations operate on the "reflected" bit ordering of bytes that is common with some UARTs. Two slight modifications of the code are required to prepare for these capabilities. What I've generally done is to implement one function and two macros. This code is shown in Listing 5. The function is responsible for reflecting a given bit pattern. The macros simply call that function in a certain way.

#define REFLECT_DATA(X)       ((uint8_t) reflect((X), 8))
#define REFLECT_REMAINDER(X)  ((crc) reflect((X), WIDTH))

uint32_t reflect(uint32_t data, uint8_t nBits)
{
    uint32_t reflection = 0;

    /* Reflect the data about the center bit. */
    for (uint8_t bit = 0; bit < nBits; ++bit)
    {
        /* If the LSB is set, set the reflection of it. */
        if (data & 0x01)
        {
            reflection |= (1 << ((nBits - 1) - bit));
        }

        data = (data >> 1);
    }

    return (reflection);
}   /* reflect() */

By inserting the macro calls at the two points where reflection may need to be done, it is easier to turn reflection on and off. To turn either kind of reflection off, simply redefine the appropriate macro as (X). That way, the unreflected data byte or remainder will be used in the computation, with no overhead cost. Also note that, for efficiency reasons, it may be desirable to compute the reflection of all 256 possible data bytes in advance and store them in a table, then redefine the REFLECT_DATA() macro to use that lookup table. Tested, full-featured implementations of both crcSlow() and crcFast() are available for download. These implementations include the reflection capabilities just described and can be used to implement any parameterized CRC formula.
Simply change the constants and macros as necessary.

The final parameter that I've included in Table 1 is a "check value" for each CRC standard. This is the CRC result that's expected for the simple ASCII test message "123456789". To test your implementation of a particular standard, simply invoke your CRC computation on that message and check the result:

    checksum = crcFast("123456789", 9);

If checksum has the correct value after this call, then you know your implementation is correct. This is a handy way to ensure compatibility between two communicating devices with different CRC implementations or implementors.

Through the years, each time I've had to learn or relearn something about the various CRC standards or their implementation, I've referred to the paper "A Painless Guide to CRC Error Detection Algorithms, 3rd Edition" by Ross Williams. There are a few holes that I've hoped for many years the author would fill with a fourth edition, but all in all it's the best coverage of a complex topic that I've seen. Many thanks to Ross for sharing his expertise with others and making several of my networking projects possible.

Free source code in C and C++

The source code for these CRC computations is placed into the public domain and is available in electronic form at http://www.netrino.com/code/crc.zip. Inspired by the treatment of CRC computations here and in Ross Williams' paper, a gentleman named Daryle Walker <[email protected]> has implemented some very nice C++ class and function templates for doing the same. The entire library, which is distributed in source code form and in the public domain, is available at http://www.boost.org/libs/crc/.

[1] I first modified both functions to use unsigned char instead of int for variables nBytes and byte. This effectively caps the message size at 256 bytes, but I thought that was probably a pretty typical compromise for use on an eight-bit processor.
I also had the compiler optimize the resulting code for speed, at its highest setting. I then looked at the actual assembly code produced by the compiler and counted the instructions inside the outer for loop in both cases. In this experiment, I was specifically targeting the PIC16C67 variant of the processor, using IAR Embedded Workbench 2.30D (PICmicro engine 1.21A).

This column was published in the January 2000 issue of Embedded Systems Programming. If you wish to cite the article in your own work, you may find the following MLA-style information helpful: Barr, Michael. "Slow and Steady Never Lost the Race," Embedded Systems Programming, January 2000, pp. 37-46.

Submitted by Sun, 2008-05-25 08:45

I have gone through this article and it is really helpful. But I am having a doubt regarding implementation. I will be grateful to you if you could answer my question. What's the actual need of reflection? Will it be a problem if I turn it off if I am not implementing this for UART?

Submitted by Thu, 2009-10-22 21:34

I have used your code and that of a few other sources and came up with differing results, which I'm trying to work through. My first question is what one does about changing the code depending on whether the processor is big endian or little endian. The second question is: if one reflects the data when creating the table rather than reflecting the message byte, what values does one actually reflect... the index? the final table value? the index after it has been left-shifted by (WIDTH - 8)?

Submitted by Fri, 2010-06-11 20:12

I am trying to understand the mechanics of CRC calculations. I have been reading internet documentation for days and I have not been able to locate an adequate demonstration of the jump between bitwise mod-2 long division and bytewise table-lookup-XOR CRC. Are these two procedures really the same function? If so, please provide a proof which doesn't involve superficial hand-waving. Thank you so much.
Submitted by Tue, 2010-06-29 09:12

I need to compute CRCs for the polynomials 0x31 and 0x39, but I need guidance on how to select the initial remainder and final XOR value for these.

Submitted by Thu, 2011-07-14 22:42

On x86-64 (64-bit) machines:

    typedef unsigned long crc;

is a 64-bit value instead of a 32-bit one. With crcFast() the lower 4 bytes are all 0xFF (e.g., 0x126fc44ffffffff). A work-around for this issue is to use uint32_t instead of unsigned long:

    #include <stdint.h>
    typedef uint32_t crc;

Submitted by Fri, 2012-12-07 09:24

Hi, and thanks for the best introduction to CRC mathematics I've seen so far. It appears that some of the code examples have been messed up at some point. For example, this is clearly incorrect:

    * Shift the next bit of the message into the remainder.
    remainder = (remainder > 4);

The old version at Netrino seems to be correct and is still available via the Wayback Machine. Registration to barrgroup.com gave me an empty password and no way to change it.
Spherical Trigonometry

I have posted a question at the homework forum, but nobody's answering my query, so I'm now going to try my luck at this forum. I'm doing a self review of spherical trigonometry and I'm now implementing Napier's rules in solving right spherical triangles. The first problem goes like this:

In a right spherical triangle (C = 90 degrees), A = 69 degrees 50.8 minutes and c = 72 degrees 15.4 minutes; find B, a, and b.

Using Napier's rules, I get a = 63 degrees 23.8 minutes, b = 47 degrees 7.0 minutes, and B = 50 degrees 17.7 minutes.

A note is given at the end of the question stating that "The supplementary value is not admissible since 'A' (angle A) and 'a' (side a) do not terminate at the same quadrant" - but by inspection, the two values terminate in the first quadrant. This completely confuses me; can anyone elaborate?

A million thanks.

PS: I'm using "PLANE AND SPHERICAL TRIGONOMETRY" by Paul Rider.
ETA-BASED MOS GUIDANCE - THE 0000/1200 UTC ALPHANUMERIC MESSAGES

J. Paul Dallavalle and Mary C. Erickson

This Technical Procedures Bulletin (TPB) describes the format and contents of the Eta MOS alphanumeric messages generated during the 0000 and 1200 UTC forecast cycles. These messages contain forecasts of the max/min temperature; time-specific surface temperature and dew point; total sky cover; surface wind direction and wind speed; probability of precipitation (PoP) for 6- and 12-h periods; categories of quantitative precipitation for 6- and 12-h periods; and probability of thunderstorms and conditional probability of severe thunderstorms for 6- and 12-h periods. Guidance is provided for projections of 6 to 60 hours for most weather elements. Note that a particular element line (see Sections 3 - 13) is not included in the message when all of the forecasts in that line are unavailable. The messages will become operational during spring of 2002.

2. MESSAGE HEADING

The message heading shown above (see Figs. 1 and 2 also) identifies the station for which the guidance is valid, the forecast cycle, and the day and hour for which the forecasts are valid. In this example, the message is valid for Albany, NY (KALB). All stations are identified by the ICAO four-character identifier. The "ETA MOS GUIDANCE" appearing on the same line as the station call letters identifies the message contents. The date of the forecast cycle during which the message is issued follows this information. The form mm/dd/yyyy is used, where mm is the month (1 through 12), dd is the day (1 through 31), and yyyy is the four-digit year. The forecast cycle is identified by the standard 0000 or 1200 UTC. In this example, the MOS guidance for KALB was issued from the 0000 UTC forecast cycle of the Eta model on October 24, 2000. The DT and HR lines denote the date and hour at which the forecasts are valid. The DT line indicates the day of the month.
Note that the month is denoted by the standard three or four letter abbreviation. For temperature, dew point, sky cover, wind direction, and wind speed, the date and hour denote the specific time that the forecasts are valid. These forecasts are valid every 3 hours from 6 to 60 hours after initial time. For PoP, quantitative precipitation, thunderstorms, and severe weather, the time indicates the end of the period over which the forecasts are valid. For the max/min temperature, the date group gives only the approximate ending time of the daytime and nighttime periods for which the max and min temperature guidance, respectively, are valid.

3. X/N - MAXIMUM/MINIMUM TEMPERATURE

The max/min surface temperature forecasts are displayed for projections of 24, 36, 48, and 60 hours after the initial data time (0000 or 1200 UTC). Although the forecasts are presented at consecutive 12-h intervals, each forecast is actually valid for a daytime or nighttime period. For the Eta-based MOS guidance, daytime is defined as 7 a.m. to 7 p.m. Local Standard Time (LST). Nighttime is defined as 7 p.m. to 8 a.m. LST. Thus, the valid date in the appropriate column of the DT and HR lines must be converted by the forecaster to his/her local date. This local date then denotes the appropriate daytime or nighttime period for the max or min temperature forecast, respectively. For the 0000 UTC forecast cycle, the temperatures are shown in max/min (X/N) order and are valid for today's max, tonight's min, tomorrow's max, and tomorrow night's min. For the 1200 UTC cycle, the temperatures are shown in min/max (N/X) order and are valid for tonight's min, tomorrow's max, tomorrow night's min, and the day after tomorrow's max. Each temperature forecast is presented to the nearest whole degree Fahrenheit, and three characters are allowed. A missing forecast is indicated by a 999.

4. TMP - SURFACE TEMPERATURE

Time-specific 2-m temperature forecasts are valid every 3 hours from 6 to 60 hours after 0000 and 1200 UTC.
These forecasts are valid at 0600, 0900, ..., 2100, 0000 UTC, and so forth. Each temperature forecast is presented to the nearest whole degree Fahrenheit; a missing forecast is indicated by a 999. Only three characters are available for the temperature forecasts. Thus, two consecutive forecasts of 100 degrees or more or of -10 degrees or less appear with no spaces between them.

5. DPT - SURFACE DEW POINT

Time-specific 2-m dew point forecasts are valid every 3 hours from 6 to 60 hours after 0000 and 1200 UTC. These forecasts are valid at 0600, 0900, ..., 2100, 0000 UTC, and so forth. Each dew point forecast is presented to the nearest whole degree Fahrenheit; a missing forecast is indicated by a 999. Three characters are available for the dew point forecasts, so two consecutive forecasts of -10 degrees or less appear with no spaces between them.

6. CLD - TOTAL SKY COVER CATEGORIES

Forecast categories of total sky cover (see the following table) are available in plain language for projections at 3-h intervals from 6 to 60 hours after the initial data times (0000 and 1200 UTC). All forecasts are valid for specific times (i.e., 0600, 0900, 1200, and so forth). Two characters identify the category (CL - clear; SC - scattered; BK - broken; OV - overcast); a missing forecast is denoted by XX.

Total Sky Cover Categories
• CL - clear;
• SC - > 0 to 4 octas of total sky cover;
• BK - > 4 to < 8 octas of total sky cover;
• OV - 8 octas of total sky cover or totally obscured.

7. WDR - SURFACE WIND DIRECTION / WSP - SURFACE WIND SPEED

Surface wind direction (WDR) and speed (WSP) forecasts are given at 3-h intervals for projections of 6 to 60 hours after the initial data times (0000 and 1200 UTC). These are forecasts of the 10-m winds (a 2-minute average) at specific times throughout each day (i.e., 0600, 0900, 1200 UTC, and so forth). The wind direction is given in tens of degrees and varies from 01 (10 degrees) to 36 (360 degrees).
The normal meteorological convention for specifying wind direction is followed. The wind speed is given in knots; the maximum speed allowed in the message is 98 knots. For both direction and speed, missing forecasts are denoted by 99. A calm wind is indicated by a wind direction and speed of 00.

8. P06 - PROBABILITY OF PRECIPITATION IN A 6-H PERIOD

The P06 forecasts are for the probability of 0.01 inches or more of liquid-equivalent precipitation (PoP) occurring during a 6-h period. The 6-h PoP's are valid for intervals of 6-12, 12-18, 18-24, 24-30, 30-36, 36-42, 42-48, 48-54, and 54-60 hours after the initial data times (0000 and 1200 UTC). In the message, the forecast values are displayed under the ending time of the 6-h period. The probability is given to the nearest percent. Values range from 0 to 100%. A missing forecast value is indicated by 999.

9. P12 - PROBABILITY OF PRECIPITATION IN A 12-H PERIOD

The P12 forecasts are for the probability of 0.01 inches or more of liquid-equivalent precipitation (PoP) occurring during a 12-h period. For nearly all stations, the 12-h PoP's are valid for intervals of 12-24, 24-36, 36-48, and 48-60 hours after the initial data times (0000 and 1200 UTC). In the message, the forecast values are displayed under the ending time of the 12-h period. The probability is given to the nearest percent. Values range from 0 to 100%. A missing forecast value is indicated by 999.

10. Q06 - QUANTITATIVE PRECIPITATION AMOUNT IN A 6-H PERIOD

Guidance for liquid-equivalent precipitation amount (QPF) accumulated during a 6-h period is presented in categorical form on the line designated Q06. These forecasts are available for projections of 6-12, 12-18, 18-24, 24-30, 30-36, 36-42, 42-48, 48-54, and 54-60 hours after the initial data time (0000 and 1200 UTC). The forecasts are displayed beneath the hour indicating the end of the 6-h period.
The Q06 guidance is a categorical forecast of liquid-equivalent precipitation equaling or exceeding certain specified amounts in the 6-h periods. The categories are as follows: Q06 Categories • 0 = no precipitation expected; • 1 = 0.01 - 0.09 inches; • 2 = 0.10 - 0.24 inches; • 3 = 0.25 - 0.49 inches; • 4 = 0.50 - 0.99 inches; • 5 = > 1.00 inches. Missing forecasts are denoted by 9. 11. Q12 - QUANTITATIVE PRECIPITATION AMOUNT IN A 12-H PERIOD Guidance for liquid-equivalent precipitation amount (QPF) accumulated during a 12-h period is presented in categorical form on the line designated Q12. These forecasts are available for projections of 12-24, 24-36, 36-48, and 48-60 hours after the initial data time (0000 and 1200 UTC). The forecasts are displayed beneath the hour indicating the end of the 12-h period. The Q12 guidance is a categorical forecast of liquid-equivalent precipitation equaling or exceeding certain specified amounts in the 12-h periods. The categories are as follows: Q12 Categories • 0 = no precipitation expected; • 1 = 0.01 - 0.09 inches; • 2 = 0.10 - 0.24 inches; • 3 = 0.25 - 0.49 inches; • 4 = 0.50 - 0.99 inches; • 5 = 1.00 - 1.99 inches; • 6 = > 2.00 inches. Missing forecasts are denoted by 9. 12. T06 - PROBABILITY OF THUNDERSTORMS/CONDITIONAL PROBABILITY OF SEVERE THUNDERSTORMS IN A 6-H PERIOD The T06 line represents forecasts for the probability of thunderstorms (to the left of the diagonal) and the conditional probability of severe thunderstorms (to the right of the diagonal) during a 6-h period. The 6-h probability forecasts are valid for intervals of 6-12, 12-18, 18-24, 24-30, 30-36, 36-42, 42-48, 48-54, and 54-60 hours after the initial data times (0000 and 1200 UTC). In the message, the pair of forecast values is displayed under the ending time of the 6-h period. The thunderstorm probability is given to the nearest whole percent. Values range from 0 to 100%. A missing forecast value is indicated by 999. 
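The Q06/Q12 category codes above can be turned back into amount ranges with a small lookup table (a sketch; the names and output strings are mine):

```python
# Lookup for the Q06 precipitation categories described above, as
# (low, high) liquid-equivalent amounts in inches; a high bound of None
# means "this amount or more".  Code 9 marks a missing forecast.
# (Q12 differs only in adding 1.00-1.99 as category 5 and 2.00+ as 6.)
Q06_BOUNDS = {
    0: (0.00, 0.00),   # no precipitation expected
    1: (0.01, 0.09),
    2: (0.10, 0.24),
    3: (0.25, 0.49),
    4: (0.50, 0.99),
    5: (1.00, None),
}

def describe_q06(code):
    if code == 9:
        return "missing"
    if code == 0:
        return "no precipitation expected"
    low, high = Q06_BOUNDS[code]
    if high is None:
        return f"{low:.2f} in or more"
    return f"{low:.2f}-{high:.2f} in"
```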
The conditional severe thunderstorm probability is given to the nearest whole percent. Values range from 0 to 98%. A missing forecast value is given by 99. Both the thunderstorm and conditional severe storm probabilities are available year-round for stations in the contiguous U.S. Note that these probabilities represent the likelihood of the event within a box approximately 47 km on a side and containing the station specified. 13. T12 - PROBABILITY OF THUNDERSTORMS/CONDITIONAL PROBABILITY OF SEVERE THUNDERSTORMS IN A 12-H PERIOD The T12 line represents forecasts for the probability of thunderstorms (to the left of the diagonal) and the conditional probability of severe thunderstorms (to the right of the diagonal) occurring during a 12-h period. The 12-h probability forecasts are valid for intervals of 6-18, 18-30, 30-42, and 42-54 hours after the initial data times (0000 and 1200 UTC). In the message, the pair of forecast values is displayed under the ending time of the 12-h period. The thunderstorm probability is given to the nearest whole percent. Values range from 0 to 100%. A missing forecast value is indicated by 999. The conditional severe thunderstorm probability is given to the nearest whole percent. Values range from 0 to 98%. A missing forecast value is given by 99. Both the thunderstorm and conditional severe storm probabilities are available year-round for stations in the contiguous U.S. These probabilities represent the likelihood of the event within a box approximately 47 km on a side and containing the station specified. 14. AVAILABILITY The 0000 and 1200 UTC Eta MOS guidance will be available at approximately 0230 and 1430 UTC, respectively, in 6 alphanumeric messages transmitted to NWS AWIPS and Family of Services (FOS) circuits and containing guidance for stations in the contiguous United States (CONUS). Guidance is not available for stations outside of the CONUS. 
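Each T06/T12 entry packs two probabilities around the diagonal, with different missing-value codes on each side. A hedged sketch of splitting such an entry (the field format "45/10" is my illustration of the layout described above):

```python
def parse_tstm_pair(field):
    """Split a T06/T12 entry such as '45/10' into a pair
    (thunderstorm probability, conditional severe probability).

    The value left of the diagonal is the thunderstorm probability
    (0-100, missing = 999); the value right of it is the conditional
    severe-thunderstorm probability (0-98, missing = 99).
    """
    left, _, right = field.partition("/")
    p_tstm = None if left.strip() == "999" else int(left)
    p_severe = None if right.strip() == "99" else int(right)
    return p_tstm, p_severe
```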
The following two-line WMO headers will be used (shown here as first line / second line, with the region each covers):

• FOUS41 KWNO / METNE1 - Northeast U.S.
• FOUS42 KWNO / METSE1 - Southeast U.S.
• FOUS43 KWNO / METNC1 - North Central U.S.
• FOUS44 KWNO / METSC1 - South Central U.S.
• FOUS45 KWNO / METRM1 - Rocky Mountain Region
• FOUS46 KWNO / METWC1 - West Coast Region

15. STATION LIST The Eta MOS guidance will be available for approximately 1258 stations in the CONUS. The guidance is transmitted in the six bulletins described in Section 14. The user may check the following home pages for the station lists and corresponding WMO headers:
Woodbury Heights Math Tutor

I have been a part time college instructor for over 10 years at a local university. While I have mostly taught all levels of calculus and statistics, I can also teach college algebra and pre-calculus as well as contemporary math. My background is in engineering and business, so I use an applied math approach to teaching.
13 Subjects: including algebra 1, algebra 2, calculus, geometry

...I tell all of my students each and every year: It is wonderful to know how to do the math in terms of knowing all of the formulas, rules, etc. The real value is having the capability to take the knowledge of these skills and to apply them to solving problems in everyday life. My inventory of resources is extensive.
10 Subjects: including algebra 1, linear algebra, geometry, prealgebra

...Dr. Peter is always willing to offer flexible scheduling to suit the client's needs. He is also prepared to be responsive to any budgetary concerns. My qualification for tutoring GMAT is based upon (1) my academic record and (2) my workplace experience.
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra

I started my life in computers in 8th grade when I came home from school and found my own computer in pieces all over the floor. The explanation: My father couldn't get something to print so he 'thought there must be something wrong inside this box' and took the whole thing apart. The problem was,...
26 Subjects: including algebra 2, algebra 1, geometry, reading

...I am very detail-oriented and always attempt to speak and write as eloquently as possible. As a master's student, proofreading the papers of my peers was an essential part of my job. In addition to scientific peer reviews, I have served as the editor for a monthly newsletter.
20 Subjects: including statistics, algebra 1, algebra 2, biology
Conformal geometry

From Encyclopedia of Mathematics

The branch of geometry in which properties of figures are studied that are invariant under conformal transformations (cf. Conformal transformation). The main invariant in conformal geometry is the angle between two directions.

Conformal geometry is the geometry defined in Euclidean space extended by a single (ideal) point at infinity, having as its fundamental group of transformations the group of point transformations taking spheres into spheres. This space is called the conformal space. This definition of conformal geometry is valid for Euclidean spaces of arbitrary dimension; in the two-dimensional case one speaks about circles instead of spheres.

Each transformation in the fundamental group of a conformal geometry decomposes into a finite number of Euclidean motions, similarity transformations and inversions. By a stereographic projection, points on and outside the absolute are taken into the conformal plane and its set of circles; the resulting tetracyclic coordinates of the points and of the circles on the plane are determined only up to a factor.

Under conformal transformations, the point at infinity can be taken to any other point; therefore a circle can be taken to a line and vice versa. If it is required that the point at infinity be taken to itself, i.e. that lines be taken to lines, then the group of such transformations is the group of similarity transformations (homothety and Euclidean motion). Another important class of conformal transformations consists of the inversions (cf. Inversion).

The transformations belonging to the fundamental group of the conformal geometry of the plane are given by the fractional-linear functions of a complex variable. In the conformal geometry of three-dimensional space, pentaspherical coordinates play the role that tetracyclic coordinates play in the plane. For each pair of circles one can choose from the components of their pencils two principal spheres; the principal angles defined by these spheres yield a necessary and sufficient condition for isogonality of a pair of circles.

The use of methods of mathematical analysis in conformal geometry leads to the creation of conformal-differential geometry. The geometry of a space with a conformal connection is constructed on the basis of conformal geometry; this geometry is related to conformal geometry in the same way as Riemannian geometry is related to Euclidean geometry. The following terminology is also customary for conformal geometry: the geometry of inverse radii, circular geometry, inversion geometry, as well as Möbius geometry (named after A. Möbius, who first studied the geometry of circular transformations).

References
[1] F. Klein, "Vorlesungen über höhere Geometrie", Springer (1926)
[2] W. Blaschke, "Vorlesungen über Differentialgeometrie und geometrische Grundlagen von Einsteins Relativitätstheorie", 3: Differentialgeometrie der Kreise und Kugeln, Springer (1929)
[3] G.V. Bushmanova, A.P. Norden, "Elements of conformal geometry", Kazan' (1972) (In Russian)

An exhaustive treatment of Möbius geometry in dimension 2 is given in [a1].

[a1] H. Schwerdtfeger, "Geometry of complex numbers", Dover, reprint (1979)

How to Cite This Entry:
Conformal geometry. G.V. Bushmanova (originator), Encyclopedia of Mathematics.
URL: http://www.encyclopediaofmath.org/index.php?title=Conformal_geometry&oldid=16231 This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
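For reference, the fractional-linear functions of a complex variable mentioned in the entry have the standard form (stated here from general theory, not quoted from this entry):

```latex
z \;\mapsto\; \frac{az + b}{cz + d},
\qquad a, b, c, d \in \mathbb{C}, \quad ad - bc \neq 0 .
```

Together with their compositions with conjugation, $z \mapsto (a\bar z + b)/(c\bar z + d)$, these exhaust the circle-preserving transformations of the extended plane.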
CCSS.Math.Content.HSF-TF.C.9 - Wolfram Demonstrations Project US Common Core State Standard Math HSF-TF.C.9 Demonstrations 1 - 9 of 9 Description of Standard: (+) Prove the addition and subtraction formulas for sine, cosine, and tangent and use them to solve problems.
Volume of a Sphere

I need to set this up as a double integral. The region of integration on the xy-plane should be $x^2+y^2\leq 1$ or, equivalently, $-1\leq x\leq 1$, $-\sqrt{1-x^2}\leq y\leq \sqrt{1-x^2}$. If I take the region in the first quadrant, I end up writing $V=\iint_R f(x,y)\,dA = 4\int_{0}^1\int_0^{\sqrt{1-x^2}}\sqrt{1-x^2-y^2}\,dy\,dx$. The answer I have in the book gives the constant as 8, instead of 4, and I'm not understanding why this is. I'm integrating over 1/4 of the region defined by $x^2+y^2\leq 1$, so I'm just multiplying it by 4 to account for the rest of the region...

I think I've figured out what's wrong. The definition of volume for two variables requires that $f(x,y)\geq 0$, so my integral only accounts for the top half of the sphere above the xy-plane; doubling the factor to 8 accounts for the bottom half as well.
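The factor of 8 is easy to check numerically: a midpoint Riemann sum of the integrand over the first quadrant of the unit disc, multiplied by 8, should approach the volume of the unit sphere, $4\pi/3 \approx 4.18879$. (A rough sketch of mine, not part of the original thread.)

```python
import math

def sphere_volume_estimate(n=400):
    """Midpoint Riemann sum of 8 * integral of sqrt(1 - x^2 - y^2)
    over the first quadrant of the unit disc, on an n-by-n grid."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            r2 = x * x + y * y
            if r2 <= 1.0:                   # stay inside the quarter disc
                total += math.sqrt(1.0 - r2)
    return 8.0 * total * h * h
```

The estimate converges to 4π/3 as the grid is refined.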
Trigonometry - Labeling Triangles

Image Source: http://tributehomes.com.au

Triangles are a key feature of the Architecture of Buildings. Here is a building which contains many triangles.

Image Source: http://www.infoteli.com

The building is in Eindhoven, Holland, and is called 'De Blob'. It forms the Admirant Building Entrance. The Admirant Building is a concrete structure of five storeys, surrounded by the skin of the bubble. The bubble is made almost entirely of thick glass, held in place by many triangles attached to a complex steel structure. The Bubble also contains two tunnel entrances, which lead to an underground parking area for about 1,700 bikes.

Triangles are used a lot in Architecture, but not usually in such spectacular form as in 'De Blob'. The usual use for triangles involves creating frames which make buildings rigid and strong.

Image Source: http://www.blogger.com

"Trigonometry" is a branch of mathematics which deals with measuring the sides and angles in Right Angled Triangles. Architects use Trigonometry to calculate structural load, roof slopes, ground surfaces and many other aspects, including sun shading and light. Trigonometry allows architects to figure out measurements and angles so that their blueprints can be turned into real world structures from raw materials such as steel, wood, and concrete. It is essential when designing a building to predetermine the geometrical patterns and how much material and labor will be required in order to erect the structure. Thanks to Trigonometry, when the building is erected, it will be strong, with accurate measurements and a budgeted dollar cost.

Definition of Trigonometry

The "Trigon" part of "Trigonometry" refers to a three-sided geometrical shape, e.g. a Triangle. Trigon = 3 sides, Hexagon = 6 sides, Octagon = 8 sides, etc. The "metry" part of "Trigonometry" refers to the activity of measuring.
So the word "Trigonometry" means measurements of the sides and angles in Triangles. The full story on Trigonometry actually extends beyond Triangles: Trigonometry is a branch of mathematics which deals with triangles, circles, waves and oscillations. However, we will only be looking at Triangles in this particular lesson.

Trigonometry and Right Angled Triangles

The Trigonometry we will be covering in this lesson only applies to 90 degree Right Angled Triangles.

Image Copyright 2013 by Passy's World of Mathematics

There is a standard way of labeling the angles and sides of a Right Triangle for Trigonometry.

Labeling Angles

The standard "reference angle" in a Right Triangle is shown below. The Right Triangle has been drawn in standard position, which means that the "Reference Angle" (or slope angle) is at the bottom left, and the right angle is at the bottom right of the triangle.

Image Copyright 2013 by Passy's World of Mathematics

Labeling Sides

The three sides of the Right Angled Triangle are labelled as "Hypotenuse", "Opposite", and "Adjacent". The "Hypotenuse" is the longest sloping side of the Triangle, just as it is in the "Pythagoras Theorem".

Image Copyright 2013 by Passy's World of Mathematics

The "Opposite" is the side that is directly across from the slope angle.

Image Copyright 2013 by Passy's World of Mathematics

The "Adjacent" is the side that is closest to the slope angle.

Image Copyright 2013 by Passy's World of Mathematics

When the Triangle is drawn in standard position, its full labeling for use in Trigonometry looks like this:

Image Copyright 2013 by Passy's World of Mathematics

The angle "theta" at the bottom of the Triangle is called the standard Reference Angle. We can also label the Triangle in the same way for the other angle at the top of the Triangle.
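The labeling rules above can be summed up in a few lines of code (a made-up helper, not from this lesson; it assumes the triangle is drawn in standard position with the two legs given relative to the reference angle):

```python
import math

def label_sides(adjacent, opposite):
    """Label the sides of a right triangle in standard position.

    The hypotenuse is the longest, sloping side; the opposite side is
    directly across from the reference angle; the adjacent side is the
    one closest to the reference angle (other than the hypotenuse).
    """
    return {
        "adjacent": adjacent,
        "opposite": opposite,
        "hypotenuse": math.hypot(adjacent, opposite),
    }
```

For the classic 3-4-5 triangle, label_sides(4, 3) reports a hypotenuse of 5.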
Image Copyright 2013 by Passy's World of Mathematics

Video About Labelling Sides

The following five-minute video goes through how to label the sides of a Triangle for use in Trigonometry. It covers triangles drawn in all positions, as well as the basic standard position.

Here is another shorter video which shows how to label for both angles in the Triangle:

Labelling Sides Worksheets

This first worksheet labels sides using letters on the Triangle, and has Answers on Page 2 of the sheet.

This second worksheet involves labelling sides using the numerical measurements which are on them, with Answers on Page 2 of the sheet.

Related Items

Trigonometric Ratios – Sin Cos and Tan
Classifying Triangles
Pythagoras and Right Triangles
Congruent Triangles
Tall Buildings and Large Dams
Similar Shapes and Similar Triangles
Geometry in the Animal Kingdom

If you enjoyed this lesson, why not get a free subscription to our website. You can then receive notifications of new pages directly to your email address. Go to the subscribe area on the right hand sidebar, fill in your email address and then click the "Subscribe" button.

If you would like to submit an idea for an article, or be a guest writer on our website, then please email us at the hotmail address shown in the right hand side bar of this page.

If you are a subscriber to Passy's World of Mathematics, and would like to receive a free PowerPoint version of this lesson, then email us. Please state in your email that you wish to obtain the free subscriber copy of the "Trigonometry – Labeling Triangles" PowerPoint.

Feel free to link to any of our Lessons, share them on social networking sites, or use them on Learning Management Systems in Schools.

Each day Passy's World provides hundreds of people with mathematics lessons free of charge.
Find a variable such that two vectors are parallel

Let p = (2, k) and q = (3, 5). Find k such that p and q are parallel. Should be easy to solve, and I have all the formulas here, but alas, no luck. A second exercise is to find k such that the angle between p and q is pi / 3, but I suspect I will be able to solve that one once some kind person helps me figure out what I am doing wrong in the first one.

Doh! So simple. The line through q has slope 5/3, so y = 5x / 3, which gives k = (5 * 2) / 3 = 10 / 3. I was caught up in thoughts using a zero angle between the vectors, since that was what the chapter in the book was about. Anyone care to help show how to solve this problem using the dot product?
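Both parts of the exercise can be checked numerically. A sketch (the quadratic below comes from squaring the dot-product condition; it is my derivation, not taken from the thread):

```python
import math

def angle_between(k):
    """Angle between p = (2, k) and q = (3, 5) via the dot product."""
    dot = 2 * 3 + k * 5
    return math.acos(dot / (math.hypot(2, k) * math.hypot(3, 5)))

# Parallel: the 2-D cross product 2*5 - 3*k must vanish, so k = 10/3.
k_parallel = 10 / 3
assert math.isclose(2 * 5 - 3 * k_parallel, 0, abs_tol=1e-12)

# Angle pi/3: squaring cos(pi/3) = p.q / (|p| |q|) gives the quadratic
# 33*k**2 + 120*k + 4 = 0; only the root with p.q > 0 is genuine,
# since squaring also admits the angle 2*pi/3.
a, b, c = 33, 120, 4
roots = [(-b + s * math.sqrt(b * b - 4 * a * c)) / (2 * a) for s in (1, -1)]
k_angle = next(k for k in roots if 2 * 3 + 5 * k > 0)
```

Here `k_angle` is a small negative number, and plugging it back into `angle_between` returns pi/3 up to floating-point error.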
The curve has the form of a snail^1), so that it is also called the snail curve. In polar coordinates the curve can be written as r = 1 + b sin φ (equivalent to the Cartesian equation in note 4). It is the epitrochoid for which the rolling circle and the rolled circle have the same radius. The curve can also be defined as a conchoid.

The curve is called the limaçon of Pascal (or snail of Pascal). It is not named after the famous mathematician Blaise Pascal, but after his father, Etienne Pascal. He was a correspondent of Mersenne, a mathematician who made a large effort to mediate new knowledge (in writing) between the great mathematicians of that era. The name of the curve was given by Roberval, who used the curve as an example for drawing tangents, i.e. for differentiation. But before Pascal, Dürer had already discovered the curve: he gave a method for drawing the limaçon in 'Underweysung der Messung' (1525).

Sometimes the limaçon is confined to values b < 1. We might call this curve an ordinary limaçon. It is a transitional form between the circle (b = 0) and the cardioid (b = 1). When we extend the curve to values b > 1, a noose appears. For b = 2, the curve is called the trisectrix or the limaçon trisectrix. This curve has as alternative equation: r = sin φ/3. The name of the trisectrix^2) is because angle BAP is 3 times angle APO. An alternative name for the limaçon is arachnid^3) or spider curve. These names follow from Dürer.

Some fine properties of the curve are:
• the limaçon is the catacaustic and also the pedal of the circle. The catacaustic quality was shown by Thomas de St Laurent in 1826
• the ordinary limaçon is the inverse of the ellipse
• the limaçon with a noose is the inverse of the hyperbola. In fact, the constant b is the same as the eccentricity of the conic section
• the limaçon is the orthoptic of the cardioid
• the curve is a special kind of botanic curve
• both the cardioid and the trisectrix are conchoids of the circle

The limaçon is an anallagmatic curve. For b unequal to zero, the curve is a quartic; in Cartesian coordinates it can be written as a fourth-degree equation^4).
1) limax (Lat.) = snail
2) tri = three; sectrix, from secare (Lat.) = to cut
3) from arakne (Gr.) = spider. In French: arachnée
4) equation: (x^2 + y^2 - by)^2 = x^2 + y^2
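The polar form behind the constant b can be recovered from the Cartesian equation in note 4 by substituting x^2 + y^2 = r^2 and y = r sin φ (a derivation of mine, shown as a consistency check, not text from the original page):

```latex
(x^2 + y^2 - by)^2 = x^2 + y^2
\;\Longrightarrow\;
r^2\,(r - b\sin\varphi)^2 = r^2
\;\Longrightarrow\;
r = 1 + b\sin\varphi ,
```

taking the branch r − b sin φ = 1; the other branch, r = b sin φ − 1, retraces the same curve with φ shifted by π.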
Mathematics and Society
WG 1a Mathematics education in the aboriginal community [Report]
WG 1b Early numeracy: Developing Mathematical Literacy in the Early Years
WG 1c Why is mathematics relevant in our society? [Report]
WG 1d Supporting Student Success – Helping students reach their potential [Report]

Mathematics in the Classroom
WG 2a Creating a curriculum that affords learners opportunity to develop powerful mathematics [Report]
WG 2b Learning in the presence of technology [Report]
WG 2c Mathematics through the eyes of a child [Report]
WG 2d Classroom practice and mathematics education research [Report]

The Mathematics Education Community in Canada
WG 3a Developing a national mathematics teaching community
WG 3c Supporting Teacher Success [Report]

WG 1a: Mathematics education in the aboriginal community

Working Group Leaders
Kanwal Neel, Simon Fraser University and Richmond School District, British Columbia
Louise Poirier, Université de Montréal, Quebec

One of the recommendations found in the final report of the National Working Group on Education of the Minister of Indian Affairs and Northern Development is that "post-secondary institutions and teacher education programs adopt multiple strategies to increase substantially the number of Aboriginal secondary school teachers…" (p. 43). The report also identifies the importance of developing culturally relevant curricula, pedagogy and resources that would address the identified weaknesses in mathematics and science. In what way or ways might the mathematics education community in Canada contribute to the development of these curricula, pedagogy, and resources? In this working group the following questions will form a framework for our discussion as to the way(s) in which the mathematics education community in Canada can respond to the identified need within the Aboriginal community.

• In what ways does the aboriginal sense of knowing affect the teaching of mathematics in aboriginal communities?
• Is it possible to separate the challenges of learning and teaching mathematics in the aboriginal community from those encountered for other disciplines?
• What type of education about aboriginal communities should teachers of mathematics receive?
• What types of programs might universities offer to help aboriginal students to make the transition into mathematics and science programs?
• How is mathematics viewed within the aboriginal community?
• How might the efforts of this working group be shared after the forum?

WG 1b: Early numeracy: Developing Mathematical Literacy in the Early Years

Working Group Leaders
Chris Suurtamm, University of Ottawa, Ontario
Rita Janes, Education Associates, Newfoundland

Several provinces have large-scale efforts in a direction that they call "early numeracy"; how is early numeracy defined? Are these efforts interventions for "students at risk" at an early age, or are the efforts focused on something else? What are the underlying values of these provincial efforts? In this working group the following questions will form a framework for considering early numeracy in Canada:

• Which provinces have "early numeracy" directions? What are these directions? Is early numeracy only counting?
• What is the philosophy underlying each of those directions?
• What types of "guidelines" exist for preschool or early numeracy programs?
• In what way(s) do these directions encourage programs to acknowledge the understandings that children come with and then build on that understanding?
• What types of initiatives can encourage the development of early numeracy through home connections?
This working group considers this question in the following way: what is relevant for the general citizen living in our society? There are different answers to this question, but it is often hard to make the link between these answers and what is done in our classrooms. Teachers say they need more than just reasons why mathematics is important; they need examples that can be worked on in classes. • Identify some good examples which illustrate both the relevance and the feasibility in a classroom setting. • Address some of the less conventional connections with respect to relevance, namely in other human creative endeavours like visual arts, dance and music. • Document resources which address this question. • In what ways do these examples fit in the school mathematics program? How would things have to change so they did? WG 1d: Supporting Student Success – Helping students reach their potential Report Working Group Leaders Stewart Craven, Toronto District School Board, Ontario Anna Spanik, West Halifax High School, Nova Scotia, and NCTM representative for the Nova Scotia Mathematics Teacher Association Virtually all jurisdictions across Canada are showing great concern for students who are “at risk” or in other words, are struggling to learn Mathematics. This concern is appropriate because the students that graduate from our schools must be able to cope in a highly technological and information-based world whether they pursue studies in higher education or enter the workforce Even if students are doing well they may not reach their potential in terms of mathematical understanding and may not be able to fully contribute to society. The definition of “at risk” Mathematics students should be broad enough to include those students who are in jeopardy of receiving their high school diploma because they cannot meet the Mathematics course requirements and those students who may be “passing” but are not learning the Mathematics they are capable of learning. 
This working group should address the following guiding questions:

1. How can we truly engage students in Mathematics throughout their schooling years?
2. For those students who are struggling, how can we address their needs so that they can reach their potential?
3. What initiatives are governments and boards of education from across Canada taking to meet the needs of struggling learners in Mathematics?
4. Should society (governments, boards of education) provide large amounts of resources so that all high school graduates meet an acceptable "numeracy" standard?

WG 2a: Creating a curriculum that affords learners opportunity to develop powerful mathematics

Working Group Leaders
Sophie René de Cotret, Université de Montréal, Quebec
Richard de Merchant, Alberta Education, Alberta
Shirley Dalrymple, York Region District School Board, Ontario

As evidenced by recent "back-to-basics" and "teaching-for-understanding" movements, much of the debate about grade school mathematics curricula is organized around the assumption that there is a tension between technical proficiency and conceptual understanding. Is this tension a necessary one? Or is it possible to create a curriculum in which proficiency and understanding are framed in terms of a complementary, indeed codependent, relationship? What sorts of resources and preparations would be needed to ensure the successful introduction of such a curriculum? We will explore these sorts of questions in this working group. Discussions will be informed by brief presentations of the origins of the proficiency/understanding debate and recent research into the interdependencies of technical competency and conceptual competence.
WG 2b: Learning in the presence of technology

Working Group Leaders
Tom Steinke, Ottawa-Carleton Catholic School Board, Ontario
France Caron, Université de Montréal, Quebec

This working group will bring together mathematics education researchers and mathematics educators from across Canada to explore how research, practice, hardware and software can combine to ensure that learning mathematics does in fact take place in the presence of technology.

• What might a technology-rich curriculum look like?
• Articulate the skills and concepts that can be enhanced through the use of the different technologies (e.g., critical thinking).
• Explore the issue of equity in access to and use of technologies to learn mathematics.
• Explore effective teacher professional learning models to support the effective use of technology for new and existing teachers of mathematics.
• Explore what role technologies might play in elementary mathematics education.

WG 2c: Mathematics through the eyes of a child

Working Group Leaders
Ann Anderson, University of British Columbia, British Columbia
Susan Pitre, Toronto District School Board, Ontario

The working group will analyse examples of practices which take Mathematics through the eyes of a child into consideration in a constructive way. The children referred to are meant to be in grades K-8. This analysis will touch upon the following questions.

• How do teachers (and possibly families, societies or communities) adjust their point of view so that they better understand their children's view of the world?
• If there is a specific context in a given example, then what makes it an interesting context for children? When and how is the transition made from the context to the introduction of the mathematical concepts? Must some of these concepts be taught before discussing the specific context?

1.
Increase our understanding of the characteristics of activities taking Mathematics through the eyes of a child into consideration in a constructive way, their advantages as well as the obstacles to undertaking such activities. 2. Come up with some specific recommendations to be shared with the wider mathematics teaching and education community. WG 2d: Classroom practice and mathematics education research Report Working Group Leaders Shannon Sookochoff, Jasper Place High School, Edmonton, Alberta Margaret Sinclair, York University, Ontario The relationship between educational research and educational practice has often been described in terms of separate worlds. The working group is concerned with models and means to bring practice and research into more complementary relationship. The group will consider issues around collaborative projects, teacher-research, communications between groups, and the development of resources. 1. Provide a space where teachers and researchers can discuss this issue together. 2. Analyse projects of research and teaching, with a view toward articulating a complementary relationship. 3. Discuss projects and models in which research clearly informs practice and practice informs research. 4. Develop some recommendations, including means to address the tendency to conceive of teacher-research relationships in terms of hierarchies and gaps. WG 3a: Developing a national mathematics teaching community Working Group Leaders Marc Garneau, Surrey School District, and President of BCAMT (British-Columbia Association of Mathematics Teachers), British Columbia André Ladouceur, Collège Catholique Samuel-Genest, Ottawa, Ontario Liliane Gauthier, Saskatchewan Learning, Saskatchewan This working group will address the question: in what ways might we develop and sustain a national mathematics teaching community? The interest in developing a national mathematics teaching community developed following the 2003 Forum. 
Since that time, two meetings have occurred with representatives from different provincial mathematics teachers' organizations to discuss the possibility of organizing a national organization for mathematics teachers. The deliberations at these meetings saw value in the development of such an organization, and the Canadian Association for the Teaching of Mathematics (CATM)/Association canadienne pour l'enseignement des mathématiques (ACEM) is in the process of being formed. The focus of this working group will be to identify projects and initiatives that will establish the sustainability of the ACEM/CATM.

WG 3c: Supporting Teacher Success
Report
Working Group Leaders
Eric Muller, Brock University, Ontario
Brent Davis, University of Alberta, Alberta
Lisa Lunney, We'koqma'q Secondary School, Nova Scotia

This working group will address the question: how do we support teacher success? The aim will be to identify what support the mathematics education community, at the elementary and secondary levels, needs, and to consider those issues that can be addressed at a national level. It is anticipated that members of the working group will represent two constituencies: individuals who are well placed to know the support which is desired (that is, teachers, mathematics consultants, mathematics education representatives from ministries of education, etc.), and individuals who can assist in the delivery of that support. It is hoped that participants will provide good examples of mathematics teacher support, both at the elementary and secondary level, which can be shared and considered for generalization to other parts of Canada. Such examples could include individual mathematics courses for teachers being offered across the country, and other in-service, pre-service and professional development activities, discussing how they can be designed to support teachers' success. In particular, the working group will be interested in the mathematical knowledge useful for teaching.
There are two specific objectives for this working group:
1. to engage in a national conversation about these issues;
2. to recommend a few concrete actions to support teacher success at a national level.
What is the spring constant of a spring with retractable tip pen that stores 0.008 J potential energy when compressed by 1 cm?

This is related to Hooke's law. The energy stored in a spring can be given as:
E = (1/2) k x²
where E = stored energy, which is 0.008 J; k = spring constant; x = amount of compression, which is 1 cm or 0.01 m. So:
0.008 = (1/2) × k × (0.01)²
k = 160
So the spring constant is 160 N/m.
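The arithmetic above is easy to check with a couple of lines of Python (a sketch; the 0.008 J and 0.01 m figures are the ones given in the question):

```python
# Spring energy: E = (1/2) * k * x**2, so k = 2 * E / x**2
E = 0.008   # stored energy in joules
x = 0.01    # compression in metres (1 cm)

k = 2 * E / x**2  # spring constant, ≈ 160 N/m
print(k)
```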
How To Use Quaternions

05-31-2012, 01:38 PM
I am having an issue figuring out how to implement quaternions. I was reading an article that suggested using Euler angles for the gameplay, but converting the Euler angles to a quaternion only when you need to interpolate between two rotations. The article was written back in 1998 though, so I am skeptical since this tech changes so often (http://www.gamasutra.com/view/ Is that the correct approach to using quaternions, or is there a better way? And if there is a better way, are there any articles you can point me to so I can start heading in the right direction? Lastly, and possibly the most helpful thing, could anyone post a link to some open-source OpenGL project that uses quaternions properly so I can just learn by reading the code?

06-01-2012, 06:23 AM
Dark Photon
It would help us to advise you if you'd describe the issue you're trying to solve. Otherwise this is just a technique in search of a problem. For instance, are you trying to interpolate rotations only? Or full rigid transformations? Do you want super-high quality/correctness, or are you more after speed? Is something that interpolates between adjacent pairs sufficient, or do you really need something that interpolates across 3 or more samples? ...and what's the problem domain? Camera transform interpolation? Skeletal animation? etc. For some possible issues, quaternions might be best. For others, they might be insufficient and would point you to something else like dual quats. Could provide you points to write-ups and code for both (and others). Among other places, there's Quat code in David Eberly's WildMagic engine: download link

06-11-2012, 04:19 PM
What I am looking to achieve is to code a rudimentary game engine. I am not looking to make a game with it. I just want to be familiar with the inner workings of the game engine I use (Unity) and perhaps some day use the engine to make a game.
Unity uses quaternions to express all of the rotations in the game; however, you can get and set rotations in Euler angles. This is an idea that I mused might work properly: store the rotations as quaternions all the time, but expose methods that will rotate them by another quaternion (or a set of Euler angles converted into a quaternion). And if I remember correctly, multiplying the rotation quaternion with the new quaternion will rotate it by the second quaternion. And if I need the Euler angles of a rotation I can just convert it back out into Euler angles. I hope that makes sense. Also, I took a quick skim through the WildMagic engine. That looks like an excellent source of reference code. Thanks for sharing.

06-11-2012, 05:45 PM
Dark Photon
Ok. I guess given you don't have a specific aim for quats yet, Eberly's code is probably a reasonable answer to your question.
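The "compose rotations by multiplying quaternions" idea discussed above can be sketched in plain Python (a minimal illustration, not Unity's API; quaternions here are (w, x, y, z) tuples and the Hamilton product is written out by hand):

```python
import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about a unit axis."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(q1, q2):
    """Hamilton product: the result rotates by q2, then by q1."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2,
        w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2,
    )

# Two successive 90-degree turns about the z-axis equal one 180-degree turn.
quarter = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
half = quat_mul(quarter, quarter)
```

Here `half` comes out as approximately (0, 0, 0, 1), the quaternion of a 180° rotation about z, which is exactly the "multiply to compose" behaviour the poster describes.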
Sum and Difference formulae (sin and cos)

December 16th 2010, 03:45 PM #1

December 16th 2010, 03:48 PM #2
No simple way to derive them I'm afraid - more time and effort than you can afford to waste in a test. Just write them down and/or memorise them.
$\displaystyle \sin{(\alpha \pm \beta)} = \sin{\alpha}\cos{\beta}\pm\cos{\alpha}\sin{\beta}$
$\displaystyle \cos{(\alpha \pm \beta)} = \cos{\alpha}\cos{\beta}\mp \sin{\alpha}\sin{\beta}$
If I can memorise them you can too :P
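The two identities are easy to sanity-check numerically; a quick Python sketch (the angle values are arbitrary):

```python
import math

a, b = 0.7, 1.3  # arbitrary test angles in radians

# sin(a + b) = sin(a)cos(b) + cos(a)sin(b)
lhs_sin = math.sin(a + b)
rhs_sin = math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b)

# cos(a + b) = cos(a)cos(b) - sin(a)sin(b)
lhs_cos = math.cos(a + b)
rhs_cos = math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b)

assert math.isclose(lhs_sin, rhs_sin)
assert math.isclose(lhs_cos, rhs_cos)
```

A check like this won't replace memorising the formulae for a test, but it makes sign mistakes (the ± versus ∓) easy to catch.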
Open Office Math is good for editing MathML. WYSIWYG. > -----Original Message----- > From: Justus H. Piater [mailto:Justus.Piater@ULg.ac.be] > Sent: Monday, 9 February 2004 22:56 > To: forrest-dev@xml.apache.org > Subject: Re: How do I proceed (WAS Re: Forrest and Mathematics) > Stephan Michels <stephan@apache.org> wrote on Sat, 07 Feb 2004 > 15:11:10 +0100: > > If you write more than just one equation, you will make a > difference, > > for sure ;-) If you have a document with dozens of equations, MathML > > makes it unmaintainable. > I find MathML tedious to code by hand, and hard to read too for > non-trivial formulas. I have a setup that allows me to code all my > math in LaTeX syntax, and a Makefile setup that uses an external Perl > script and Ian Hutchinson's TTM to convert inlined LaTeX code to > external MathML files that are then XIncluded. This allows me to > hand-code math conveniently in LaTeX (which is processed directly by > PassiveTeX until there is a FO processor that can render MathML), and > convert it to MathML for the Web. > This gets me the best of both worlds: Easy coding and reading in > LaTeX, and Web math in MathML without cluttering up my (validated) XML > source files.
Deleting "weak homeomorphism" in a Hilbert space

It is well-known that there exists a homeomorphism $h$ from an infinite-dimensional Hilbert space $H$ to $H\setminus\{0\}$. Does there exist a "weak homeomorphism" $g:H \to H\setminus\{0\}$, that is, $g$ is bijective, and $g$ and $g^{-1}$ are weakly continuous?

Tags: fa.functional-analysis, hilbert-spaces

I don't get it: doesn't strong continuity imply weak continuity? so your $g$ could be your previous $h$? – leo monsaingeon Jun 16 '13 at 7:17
Use the same topology on both ends. Then strong continuity need not imply weak continuity. – Gerald Edgar Jun 16 '13 at 12:12
same question on MSE math.stackexchange.com/questions/421450/… – Norbert Aug 19 '13 at 20:41
Finding the formula for a graph

April 14th 2010, 04:59 PM
I need help with finding the horizontal shift for the function P = f(t). Also, how would we find the formula for this? I already know that the Midline is 2100 and the Amplitude is 900, but I don't know how to complete this formula... 900 * sin (???) + 2100

April 14th 2010, 05:10 PM
The period of this function is 4. Use the fact that $n$ influences the period in $\sin(nx)$, and the period is given by $\frac{2\pi}{n}$, so solve $4=\frac{2\pi}{n}$.

April 14th 2010, 05:11 PM
$P(t) = -900\sin\left[B(t \pm C)\right] + 2100$
$B = \frac{2\pi}{4} = \frac{\pi}{2}$
C = phase shift ... looks close to a 0.5 shift to the left to me.

April 14th 2010, 05:35 PM
Hmm, I don't get where you got -900 from. The amplitude is 900. What made you put a negative in front of it? Also, where did you get the 0.5 from? What formula did you use?

April 14th 2010, 06:02 PM
look at the curve ... it's an inverted sine wave. amplitude = |A| as far as the 0.5 ... no formula, I just looked at the graph's horizontal shift.
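Putting the pieces from the thread together gives P(t) = -900 sin[(π/2)(t + 0.5)] + 2100. A short Python check (a sketch using the thread's numbers) confirms that this formula really has midline 2100 and amplitude 900:

```python
import math

def P(t):
    # Inverted sine wave: amplitude 900, midline 2100, period 4, shifted 0.5 left
    return -900 * math.sin((math.pi / 2) * (t + 0.5)) + 2100

# Sample one full period, t in [0, 4]
samples = [P(t / 100.0) for t in range(0, 401)]
hi, lo = max(samples), min(samples)

midline = (hi + lo) / 2     # ≈ 2100
amplitude = (hi - lo) / 2   # ≈ 900
```

The minus sign in front of 900 only flips the wave upside down; the amplitude |A| is still 900, matching the last reply in the thread.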
[SciPy-user] nonlinear optimisation with constraints Sebastian Walter sebastian.walter@gmail.... Tue Jun 23 03:10:53 CDT 2009 2009/6/22 Ernest Adrogué <eadrogue@gmx.net>: > 22/06/09 @ 13:54 (+0200), thus spake Sebastian Walter: >> 2009/6/22 Ernest Adrogué <eadrogue@gmx.net>: >> > Hi Sebastian, >> > >> > 22/06/09 @ 09:57 (+0200), thus spake Sebastian Walter: >> >> >> >> are you sure you can't reformulate the problem? >> > >> > Another approach would be to try to solve the system of >> > equations resulting from equating the gradient to zero. >> > Such equations are defined for all x. I have already tried >> > that with fsolve(), but it only seems to find the obvious, >> > useless solution of x=0. I was going to try with a >> > Newton-Raphson alorithm, but since this would require the >> > hessian matrix to be calculated, I'm leaving this option >> > as a last resort :) >> Ermmm, I don't quite get it. You have an NLP with linear equality >> constraints and box constraints. >> Of course you could write down the Lagrangian for that and define an >> algorithm that satisifies the first and second order optimality >> conditions. >> But that is not going to be easy, even if you have the exact hessian: >> you'll need some globalization strategy (linesearch, trust-region,...) >> to guarantee global convergence >> and implement something like projected gradients so you stay within >> the box-constraints. >> I guess it will be easier to use an existing algorithm... > Mmmm, yes, but the box constraints are merely to prevent the > algorithm from evaluating f(x) with values of x for which f(x) > is not defined. It's not a "real" constraint, because I know > beforehand that all elements of x are > 0 at the maximum. >> And I just had a look at fmin_l_bfgs_b: how did you set the equality >> constraints for this algorithm. It seems to me that this is an >> unconstrained optimization algorithm which is worthless if you have a >> constrained NLP. > You're right. 
> I included the equality constraint within the
> function itself, so that the function I optimised with fmin_l_bfgs_b
> had one parameter less and computed the "missing" parameter
> internally as a function of the others.
> The problem is that this dependent parameter was unaffected by
> the box constraint and eventually would take values < 0.
> Fortunately, Siddhardh Chandra has told me the solution, which
> is to maximise f(|x|) instead of f(x), with the linear
> constraint incorporated into the function, using a simple
> unconstrained optimisation algorithm. His message hasn't made it
> to the list though.

I'm curious: Could you elaborate how you incorporated the linear constraints into the objective function?

> I have just done this and it seems to work! After 10.410
> function evaluations and 8.904 iterations fmin has found the
> solution and it looks sound at first sight
> Thanks for your help.

>> remark:
>> To compute the Hessian you can always use an AD tool. There are
>> several available in Python.
>> My biased favourite one being pyadolc (
>> http://github.com/b45ch1/pyadolc ) which is slowly approaching version
>> 1.0.

> I will have a look.
> Thanks again :)
> Ernest

> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
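The trick described in the thread (eliminate one variable to absorb the linear equality constraint, and optimise f(|x|) so the remaining variables cannot go negative) can be sketched on a toy problem. This is an illustrative example, not the original poster's model: maximise x1·x2·x3 subject to x1 + x2 + x3 = 1 and xi > 0, whose optimum is xi = 1/3.

```python
from scipy.optimize import fmin

def neg_objective(y):
    # Reparametrise: the free variables enter as |y_i| (so they stay >= 0),
    # and the last coordinate is eliminated via the constraint x1 + x2 + x3 = 1.
    x1, x2 = abs(y[0]), abs(y[1])
    x3 = 1.0 - x1 - x2
    return -(x1 * x2 * x3)  # minimise the negative to maximise the product

# Plain unconstrained Nelder-Mead, as in the thread
y_opt = fmin(neg_objective, [0.2, 0.2], xtol=1e-8, ftol=1e-8, disp=0)
x_opt = [abs(y_opt[0]), abs(y_opt[1]), 1.0 - abs(y_opt[0]) - abs(y_opt[1])]
# x_opt ≈ [1/3, 1/3, 1/3]
```

Note the caveat the poster ran into: the eliminated variable (x3 here) is not protected by the |·| trick, so this only works when the optimum keeps it safely positive.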
Pitkänen, Matti (2010) About the Nature of Time. In: TGD Inspired Theory of Consciousness. Matti Pitkanen, tgd.wippiespace.com. PDF (Chapter of Internet book "TGD Inspired Theory of Consciousness") Official URL: http://tgd.wippiespace.com/public_html/tgdconsc/tg...

The identification of the experienced time t_e and geometric time t_g involves well-known problems. The physicist is troubled by the reversibility of t_g contra the irreversibility of t_e, by the conflict between the determinism of the Schrödinger equation and the non-determinism of state function reduction, and by the poorly understood origin of the arrow of t_g. In biology the second law of thermodynamics might be violated in its standard form for short time intervals. The neuroscientist knows that the moment of sensory experience has a finite duration, does not understand what memories really are, and is bothered by Libet's puzzling finding that neural activity seems to precede conscious decision. These problems are discussed in the framework of Topological Geometrodynamics (TGD) and the TGD inspired theory of consciousness constructed as a generalization of quantum measurement theory. In TGD space-times are regarded as 4-dimensional surfaces of the 8-dimensional space-time H = M^4 × CP_2 and obey classical field equations. The basic notions of the consciousness theory are quantum jump and self. Subjective time is identified as a sequence of quantum jumps. Self has as a geometric correlate a fixed volume of H, a "causal diamond", defining the perceptive field of self. Quantum states are regarded as quantum superpositions of space-time surfaces of H and by quantum classical correspondence assumed to shift towards the geometric past of H quantum jump by quantum jump. This creates the illusion that the perceiver moves in the direction of the geometric future. Self is curious about the geometric future and induces the shift bringing it to its perceptive field.
Macroscopic quantum coherence and the identification of space-times as surfaces in H play a crucial role in this picture, allowing one to understand other problematic aspects of the relationship between experienced and geometric time.
Chamblee, GA Prealgebra Tutor
Find a Chamblee, GA Prealgebra Tutor

...I enjoy working with both children and adults, and have experience with both. I look forward to hearing from you and supporting your efforts in achieving success in Economics, Math or ESL courses! I am a native Arabic speaker.
14 Subjects: including prealgebra, Spanish, geometry, statistics

...I have tutored middle and high school math for 20+ years. I enjoy working with the students and receive many rewards when I see their successes. My hours of availability are Monday - Sunday from 8am to 9pm. My Bachelors Degree is in Applied Math and I took one course in Differential Equations and received an A.
20 Subjects: including prealgebra, calculus, geometry, algebra 1

...I also took a philosophy course, "Mind, Matter and God," with a community college professor, who just semi-retired. He told me I could use him as a reference if I wanted one. His name is Stu Barr, at Pima Community College, Tucson, Arizona.
14 Subjects: including prealgebra, calculus, statistics, algebra 2

...I am very patient and very thorough in my teaching. I understand that subjects like math and physics can be very challenging at times. I also understand how hopeless it seems when you are behind in a class because you just couldn't understand a concept.
10 Subjects: including prealgebra, reading, physics, calculus

...Tutoring methods that I use include breaking the lessons into chunks, incorporating kinesthetic teaching techniques and helping students create organized plans of tutoring sessions. I have worked as a tutor off and on for 15 years. During this time, I have worked with several students diagnosed with Aspergers.
33 Subjects: including prealgebra, reading, English, algebra 1
Find number of combinations

April 4th 2007, 02:31 PM #1
Scarlett purchased five new mp3s. She has room to add only three more onto her mp3 player. How many possible combinations of songs can be added to her mp3 player? A. 10 B. 20 C. 30 D. 60

April 4th 2007, 02:43 PM #2
This will be P(5,3), that is, the number of ways to choose 3 songs from 5, in some order:
P(5,3) = 5!/(5-3)! = 60
see Probability for more info
EDIT: I think order counts here, so I changed C(5,3) to P(5,3)
Last edited by Jhevon; April 4th 2007 at 02:54 PM.
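The distinction the EDIT is pointing at (C(5,3) counts unordered selections, P(5,3) counts ordered ones) is easy to check in Python:

```python
import math

combinations = math.comb(5, 3)   # order ignored: 5!/(3! * 2!) -> answer A
permutations = math.perm(5, 3)   # order counts: 5!/(5-3)!     -> answer D

print(combinations, permutations)  # 10 60
```

Strictly speaking "combinations" in the question would mean C(5,3) = 10, so the multiple-choice answer D = 60 only fits if the order of songs matters, which is exactly why the answer was edited.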
Provider: ingentaconnect
Database: ingentaconnect
Content: application/x-research-info-systems

TY - ABST
AU - Gámez-Merino, José L.
AU - Muñoz-Fernández, Gustavo A.
AU - Seoane-Sepúlveda, Juan B.
TI - A Characterization of Continuity Revisited
JO - American Mathematical Monthly
PY - 2011-02-01T00:00:00///
VL - 118
IS - 2
SP - 167
EP - 170
N2 - It is well known that a function f : R → R is continuous if and only if the image of every compact set under f is compact and the image of every connected set is connected. We show that there exist two 2^c-dimensional linear spaces of nowhere continuous functions that (except for the zero function) transform compact sets into compact sets and connected sets into connected sets respectively.
UR - http://www.ingentaconnect.com/content/maa/amm/2011/00000118/00000002/art00009
M3 - doi:10.4169/amer.math.monthly.118.02.167
UR - http://dx.doi.org/10.4169/amer.math.monthly.118.02.167
ER -
Completing the circle of critical thinking

The primary reason we learn is to be able to use our learning for actions that are practical and pleasurable in our lives. When we have a problem to solve, we think critically. One does good critical thinking if one is facing challenges and solving problems. It is difficult to solve problems effectively unless one thinks critically about the nature of the problems and about how to go about solving them. So, critical thinking is working our way through a problem to a solution.
Got Homework? Connect with other students for help. It's a free community. Here's the question you clicked on: "Can someone tell me what button the professor is hitting..."
Stata or R
Recently I came across a complex model written in Access with complex SQL queries all over the place. The engineer who was maintaining it and I did some analysis and agreed that the model was using SQL in an unnatural way (things SQL isn't good at) - c...

In time series work you often run into difficulties in modeling processes where the overall level of one variable (an input, for example) changes over time but the levels of another variable (an output) do not change. For instance if … Continue reading →

Create a Web Crawler in R
Admittedly I am not the best R coder, and I certainly have a lot to learn, but the code at the link below should provide you with an example of how easy it is to create a very (repeat: very) basic web crawler in R. If you wanted to do this in SPSS, and I

Utilizing multiple cores in R
There are a couple of options in R, if you want to utilize multiple cores on your machine. These days my favorite is the doMC package, which depends on the foreach and multicore packages. In the section below, the square root of each number is calculated in parallel...

Revolution in the News
Between the recent launch of Revolution R Enterprise 4.2 and the announcement of the SAS to R Challenge, there's been a flurry of recent news about Revolution Analytics and R in the media. Here's a quick recap: The Register's Timothy Prickett Morgan comments on the SAS to R Challenge: 'Red Hat for stats' goes toe-to-toe with SAS.
"By supporting...

R courses from Statistics.com
If you're looking for some in-depth training in R, here are some upcoming courses presented by R gurus and hosted by statistics.com to consider:
Feb 11: Modeling in R (Sudha Purohit -- more details after the jump)
Mar 4: Introduction to R - Data Handling (Paul Murrell)
Apr 15: Programming in R (Hadley Wickham)
Apr 29: Graphics in R...

Simple example: How to use foreach and doSNOW packages for parallel computation
Update: I checked whether this example ran correctly in Windows XP (32bit) only!
In R language, the members at Revolution R provide the foreach and doSNOW packages for parallel computation. These packages allow us to compute things in parallel. So, we start by installing these packages: install.packages("foreach") install.packages("doSNOW")
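For comparison, the "apply a function to each element in parallel" pattern behind the foreach/doSNOW example looks like this in Python (a rough standard-library analogue, not the R code itself; a thread pool is used for brevity, though CPU-bound work would normally use a process pool):

```python
import math
from concurrent.futures import ThreadPoolExecutor

numbers = list(range(1, 11))

# Parallel counterpart of "square root for each number",
# akin to foreach(...) %dopar% sqrt(i) in R
with ThreadPoolExecutor(max_workers=4) as pool:
    roots = list(pool.map(math.sqrt, numbers))

# Same result as the plain sequential version
assert roots == [math.sqrt(n) for n in numbers]
```

As in the R example, the point is that the parallel map produces the same results as the sequential loop; only the scheduling of the work changes.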
Complex Analysis Textbooks
Browse New & Used Complex Analysis Textbooks
Some of the list prices for complex analysis textbooks required on math courses are extremely high. However, you can save huge amounts if you buy cheap complex analysis textbooks for your college course right here. Find out about complex variables and the fundamentals of this type of analysis. Learn how these variables can be applied in all walks of life, sometimes in very different ways. With book titles including Visual Complex Analysis and Complex Variables and Applications, you can find all kinds of affordable textbooks at pre-owned prices in our extensive marketplace. Browse hundreds of titles now and rent used complex analysis textbooks to suit your needs and your mathematics college courses today. We buy back complex analysis books too, so if you want to sell some books to us, we'd be happy to buy them. It's one of the reasons why students across America love our service, because it works both ways!
Make: Electronics book giveaway (and electronics discussion) Without further delay, here are the winners of the Make: Electronics book drawing: >> NikonErik >> EvilGenius121 >> driph >> xrazorwirex >> Idlewood Congrats. All of you: email me your mailing address. We’re thrilled by the response we got to the giveaway, 166 comments! And not only did people ask great questions, other readers pitched in and answered a bunch of them. We’ll be compiling the best of this content in the next day or so, but if you want to see it in its conversational form, it’s here. Just to give you a taste of the exchange, here’s Matt Silvia asking about Ohm’s Law: I get the relationship between voltage, current, and resistance, but I don’t know how to do anything more interesting with it than light up a bulb or make a motor spin. Even then, I’m not confident I won’t set fire to anything, and while I’m not afraid to tinker (and have actually fixed some electronics via basic troubleshooting), complex circuits make me feel like I’m missing out on the secret handshake. I guess I know how a capacitor works… I just don’t know why you would want one. And here’s reader HowTutorial’s response: There is a bit of a secret handshake. Electronics is a vast field that can appear quite overwhelming. But Ohm’s law is part of the secret, so you’re off to a great start. Furthermore, much circuit design is individual circuits chained together. Think of it this way, tweak your circuit empirically, and how it all works together becomes easier to see. So how do you know if you are going to start a fire or not? Ohm’s law says the voltage equals the current multiplied by the resistance. So with any two of these, you can calculate the third. Power equals current times voltage. By lowering resistance, for the same voltage, you have increased the current. So you have increased the power. Put a wire with very little resistance across a battery, then the wire and battery will get hot. The wire might melt. 
(try a 9 volt battery on steel wool) But if the battery gets too hot, it might pop, exposing you to the nasty chemicals inside or worse. So be careful when playing with fire. For your next question, there are many uses for a capacitor. A capacitor is made up of two conductive parts that are not touching. If you put a voltage across a capacitor, positive and negative charge will build up on its two sides, with charge on one side attracted to the opposite charge on the other side. If you remove the external voltage, the capacitor will still have the charge you gave it. That charge across a capacitor creates a voltage of its own. The voltage is the same as that external voltage that we just removed. So a capacitor stores voltage. If you put a resistor across the capacitor, the charge will flow through the resistor to the opposite charge on the other side of the capacitor. The larger the capacitor, the more charge you can bleed off this way before you run out of charge, and thus reduce the voltage to zero. Using Ohm’s law, we can calculate the initial current through the resistor. That current through the resistor creates a voltage across it equal to what the capacitor holds. That voltage holds back more charge from flowing. So the resistor limits the speed of the capacitor discharging. The smaller the resistor, the faster the charge discharges. Another law to know is the RC time constant. The resistance multiplied by the capacitance equals the time it takes for the voltage to drop 63%. It will take the same amount of time to drop the next 63%, and so on. This also works going the other way, when charging a capacitor through a resistor. It will take one RC time constant to charge 63%. So a capacitor tries to maintain its voltage. The larger the capacitance, the larger the time constant, and the slower the change in voltage. So one use of a capacitor is to reduce fluctuations in the voltage in part of a circuit. 
So you can use a capacitor to quiet the noise on a power supply. The noise is the fluctuations you don’t want. Similarly, if you put capacitors across the power pins of IC’s, you can create more stable power for all your chips. In the Maker Shed: Make: Electronics Want to learn the fundamentals of electronics in a fun and experiential way? Start working on some excellent projects as soon as you crack open this unique, hands-on book. Build the circuits first, then learn the theory behind them! With Make: Electronics, you’ll learn all of the basic components and important principles through a series of “learn by discovery” experiments. And you don’t need to know a thing about electricity to get started. 1. I have been on again off again thinking about writing a blog with answers like this. Featuring my answer is a nice boost when I need it and as good as winning the book! To teach is to learn twice. I like to understand how things work, and how to actually do something with your knowledge. It is one of the reasons I like Make and Makezine so much. So I hope it is okay to plug my new blog. I’ll start it off by copying my answers from here over to there. Then I’ll add some pictures to try and make things clearer. But if you have more questions for me that would be great. Visit my new blog at http://HowTutorial.com
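The Ohm's-law and RC time-constant arithmetic in the reply above can be sketched numerically. This is a minimal illustration; the component values (a 9 V battery, a 450 ohm resistor, a 1000 uF capacitor) are made up for the example:

```javascript
// Ohm's law: V = I * R, so I = V / R; power P = I * V.
const V = 9;         // volts (e.g. a 9 V battery)
const R = 450;       // ohms (assumed value)
const I = V / R;     // current in amps: 0.02 A
const P = I * V;     // power in watts: 0.18 W

// RC time constant: tau = R * C seconds. A discharging capacitor's
// voltage falls by ~63% every tau seconds: v(t) = V * exp(-t / tau).
const C = 0.001;                        // farads (1000 uF, assumed)
const tau = R * C;                      // 0.45 s
const vAfterOneTau = V * Math.exp(-1);  // ~37% of V remains after one tau
```

Doubling either the resistance or the capacitance doubles tau, which is the "larger capacitance, slower change in voltage" point made above.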
Calculus: Some Special Derivatives Videos | MindBites Series: Calculus: Some Special Derivatives Included: (Click to preview) About this Series • Lessons: 2 • Total Time: 0h 33m • Use: Watch Online & Download • Access Period: Unlimited • Created At: 06/23/2009 • Last Updated At: 07/20/2010 This two-part series covers finding the derivatives of the reciprocal and square root functions. In the first lesson, we will calculate the derivative of 1/x (the reciprocal function) using the definition of derivative. As part of this problem, we will review how to find the equation for a line tangent to this function at a particular point. To do this, we'll not only calculate the derivative, but we'll also review the use of the point-slope formula to arrive at the tangent line formula, given the point of tangency and the slope of the line (calculated by the derivative of the curve at the point of tangency). The second lesson will show how to calculate the derivative of the square root of x using the definition of the derivative. As part of this problem, we will go through how to calculate the instantaneous rate of change at a given point in time. Further, we'll examine how to use a position function to find the point in time when an object will be moving at a particular instantaneous rate of change (or velocity) by setting the derivative of the position function to that rate of speed and then solving for the variable. This series is a great review for a CLEP test, mid-term, final exam, or personal growth! Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, Calculus. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/calculus. 
The full course covers limits, derivatives, implicit differentiation, integration or antidifferentiation, L'Hôpital's Rule, functions and their inverses, improper integrals, integral calculus, differential calculus, sequences, series, differential equations, parametric equations, polar coordinates, vector calculus and a variety of other AP Calculus, College Calculus and Calculus II topics. About this Author 2174 lessons Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/. Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through... Lessons Included awesome instructional! ~ nachan This is great lesson if you need to understand using the square root of x in calculus. I love the props he uses to help explain each problem! He also explains how to find the instantaneous rate of change by finding the derivative. He also explains the definition of the derivative and explains a step by step process of finding the problem. Below are the descriptions for each of the lessons included in the series: • Calculus: Derivative of the Reciprocal Function In this lesson, we will calculate the derivative of 1/x (the reciprocal function) using the definition of derivative. As part of this problem, we will review how to find the equation for a line tangent to this function at a particular point. 
To do this, we'll not only calculate the derivative, but we'll also review the use of the point-slope formula to arrive at the tangent line formula, given the point of tangency and the slope of the line (calculated by the derivative of the curve at the point of tangency). Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, Calculus. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/calculus. Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College. He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of America. In 2006, Reader's Digest named him in the "100 Best of America". Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals, including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the theory of continued fractions. Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures. • Calculus: Derivative of the Square Root Function In this lesson, we will calculate the derivative of the square root of x using the definition of the derivative. As part of this problem, we will go through how to calculate the instantaneous rate of change at a given point in time. Further, we'll examine how to use a position function to find the point in time when an object will be moving at a particular instantaneous rate of change (or velocity) by setting the derivative of the position function to that rate of speed and then solving for the variable. This lesson, too, was selected by Professor Burger from the same Calculus course. Supplementary Files: • Once you purchase this series you will have access to these files: Buy Now and Start Learning Also from Thinkwell:
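The two lessons compute d/dx(1/x) = -1/x² and d/dx(√x) = 1/(2√x) straight from the limit definition of the derivative. As a quick illustration (not part of the lessons themselves), the difference quotient from that definition can be checked numerically against both closed-form answers:

```javascript
// Difference quotient (f(x+h) - f(x)) / h approximates f'(x) for small h;
// this is the "definition of derivative" the lessons work from.
function diffQuotient(f, x, h) {
  return (f(x + h) - f(x)) / h;
}

const h = 1e-6;

// d/dx (1/x) = -1/x^2; at x = 2 the exact derivative is -0.25.
const d1 = diffQuotient(x => 1 / x, 2, h);

// d/dx (sqrt(x)) = 1 / (2 * sqrt(x)); at x = 4 the exact derivative is 0.25.
const d2 = diffQuotient(Math.sqrt, 4, h);
```

Shrinking h moves both quotients closer to the exact values, which is exactly the limit the lessons take symbolically.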
Calculating the dead load (G) and live load (Q) of this tunnel - civil engineering que I was unable to open your pdf previously. I'm not sure what "calculate the live load" means when it is given in the figure, but oh well ... The dead load should simply be the total weight per length.
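The "weight per length" idea in the reply reduces to g = ρ · A · gravity. A minimal sketch, with a density and cross-sectional area invented purely for illustration (they are not from the original problem's figure):

```javascript
// Dead load per unit length of a lining: g = rho * A * gravity, in N/m.
const rho = 2400;   // kg/m^3, typical reinforced concrete (assumed)
const A = 0.5;      // m^2, cross-sectional area of the lining (assumed)
const g = 9.81;     // m/s^2
const deadLoadPerMetre = rho * A * g; // newtons per metre of tunnel
```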
Modal and Tense Predicate Logic: Models in Presheaves and Categorical Conceptualization, 1999 "... This paper introduces a temporal logic for coalgebras. Nexttime and lasttime operators are defined for a coalgebra, acting on predicates on the state space. They give rise to what is called a Galois algebra. Galois algebras form models of temporal logics like Linear Temporal Logic (LTL) and Computatio ..." Cited by 33 (7 self) Add to MetaCart This paper introduces a temporal logic for coalgebras. Nexttime and lasttime operators are defined for a coalgebra, acting on predicates on the state space. They give rise to what is called a Galois algebra. Galois algebras form models of temporal logics like Linear Temporal Logic (LTL) and Computation Tree Logic (CTL). The mapping from coalgebras to Galois algebras turns out to be functorial, yielding indexed categorical structures. This gives many examples, for coalgebras of polynomial functors on sets. Additionally, it will be shown how "fuzzy" predicates on metric spaces, and predicates on presheaves, yield indexed Galois algebras, in basically the same coalgebraic manner. Keywords: Temporal logic, coalgebra, Galois connection, fuzzy predicate, presheaf Classification: 68Q60, 03G05, 03G25, 03G30 (AMS'91); D.2.4, F.3.1, F.4.1 (CR'98). 1 Introduction This paper combines the areas of coalgebra and of temporal logic. Coalgebras are simple mathematical structures (similar, but dual, to... - PROCEEDINGS OF THE LOGIC AT WORK CONFERENCE, 1996 "... In this paper we consider an intuitionistic modal logic, which we call IS42 . Our approach is different to others in that we favour the natural deduction and sequent calculus proof systems rather than the axiomatic, or Hilbert-style, system. Our natural deduction formulation is simpler than other pr ..." Cited by 23 (7 self) Add to MetaCart In this paper we consider an intuitionistic modal logic, which we call IS42 . 
Our approach is different to others in that we favour the natural deduction and sequent calculus proof systems rather than the axiomatic, or Hilbert-style, system. Our natural deduction formulation is simpler than other proposals. The traditional means of devising a modal logic is with reference to a model, and almost always, in terms of a Kripke model. Again our approach is different in that we favour categorical models. This facilitates not only a more abstract definition of a whole class of models but also a means of modelling proofs as well as provability. , 2000 "... We investigate the development of theories of types and computability via realizability. ..." - Studia Logica , 2001 "... . In this paper we consider an intuitionistic variant of the modal logic S4 (which we call IS4). The novelty of this paper is that we place particular importance on the natural deduction formulation of IS4---our formulation has several important metatheoretic properties. In addition, we study models ..." Cited by 19 (4 self) Add to MetaCart . In this paper we consider an intuitionistic variant of the modal logic S4 (which we call IS4). The novelty of this paper is that we place particular importance on the natural deduction formulation of IS4---our formulation has several important metatheoretic properties. In addition, we study models of IS4, not in the framework of Kripke semantics, but in the more general framework of category theory. This allows not only a more abstract definition of a whole class of models but also a means of modelling proofs as well as provability. 1. Introduction Modal logics are traditionally extensions of classical logic with new operators, or modalities, whose operation is intensional. Modal logics are most commonly justified by the provision of an intuitive semantics based upon `possible worlds', an idea originally due to Kripke. 
Kripke also provided a possible worlds semantics for intuitionistic logic, and so it is natural to consider intuitionistic logic extended with intensional modalities... , 2002 "... Counterpart semantics is proposed as the appropriate semantical framework for a foundational investigation of quantified modal logics. It turns out to be a limit case of the categorical semantics of relational universes introduced by Ghilardi and Meloni in 1988. The main result is a deeper understan ..." Cited by 3 (0 self) Add to MetaCart Counterpart semantics is proposed as the appropriate semantical framework for a foundational investigation of quantified modal logics. It turns out to be a limit case of the categorical semantics of relational universes introduced by Ghilardi and Meloni in 1988. The main result is a deeper understanding of the interplay between substitution, quantification and identity wherever modalities are present. Languages with types and explicit substitutions are the tools used to clarify such an interplay and to disentangle classical problems related to modalities in first-order languages. It is shown that controversial modal principles are neither valid nor provable. Quine's worries are dispelled. , 1999 "... This paper generalises and adapts the theory of sheaves on a topological space to sheaves on a relational space: a topological space with a binary relation. The relational bundles on a relational space are defined as the continuous, relation-preserving functions into the space, and the relational se ..." Cited by 1 (0 self) Add to MetaCart This paper generalises and adapts the theory of sheaves on a topological space to sheaves on a relational space: a topological space with a binary relation. The relational bundles on a relational space are defined as the continuous, relation-preserving functions into the space, and the relational sections of a relational bundle are defined as the relation-preserving partial sections. 
This defines a functor to the category of presheaves on the space, which has a left adjoint. The presheaves which arise as the relational sections of a relational bundle are characterised by separation and patching conditions similar to those of a sheaf: we call them the relational sheaves. The relational bundles which arise from presheaves are characterised by local homeomorphism conditions: we call them the local relational homeomorphisms. The adjunction restricts to an equivalence between the categories of relational sheaves and local relational homeomorphisms. The paper goes on to investigate the structure of these equivalent categories. They are shown to be quasi-toposes (thus modelling first-order logic), and to have enough structure to model a certain first-order modal logic described in a companion paper. 1 "... Abstract: We present a new way of formulating first order modal logic which circumvents the usual difficulties associated with variables changing their reference on moving between states. This formulation allows a very general notion of model (sheaf models). The key idea is the introduction of syntax for describing relations between individuals in related states. This adds an extra degree of expressiveness to the logic, and also appears to provide a means of describing the dynamic behaviour of computational systems in a way somewhat different from traditional program logics. 1 "... This report summarises research undertaken by the author as part of his DPhil in computation. A draft table of contents of the thesis, a proposed timetable for submission, and a list of publications are also included. ..." 
Add to MetaCart This report summarises research undertaken by the author as part of his DPhil in computation. A draft table of contents of the thesis, a proposed timetable for submission, and a list of publications are also included. "... 5.1 Definition and Examples 5.1.1 Definition and Definability Results A tripos is a weak tripos with disjunction which has a (weak) generic object. Explicitly we define: ..." Add to MetaCart 5.1 Definition and Examples 5.1.1 Definition and Definability Results A tripos is a weak tripos with disjunction which has a (weak) generic object. Explicitly we define: "... As McKinsey and Tarski [20] showed, the Stone representation theorem for Boolean algebras extends to algebras with operators to give topological semantics for (classical) propositional modal logic, in which the “necessity ” operation is modeled by taking the interior of an arbitrary subset of a topo ..." Add to MetaCart As McKinsey and Tarski [20] showed, the Stone representation theorem for Boolean algebras extends to algebras with operators to give topological semantics for (classical) propositional modal logic, in which the “necessity ” operation is modeled by taking the interior of an arbitrary subset of a topological space. This topological interpretation was recently extended in a natural way to arbitrary theories of full first-order logic by Awodey and Kishida [3], using topological sheaves to interpret domains of quantification. This paper proves the system of full first-order S4 modal logic to be deductively complete with respect to such extended topological semantics. The techniques employed are related to recent work in topos theory, but are new to systems of modal logic. They are general enough to also apply to other modal systems. Keywords: First-order modal logic, topological semantics, completeness.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=223519","timestamp":"2014-04-21T16:23:04Z","content_type":null,"content_length":"34593","record_id":"<urn:uuid:0c6f25ee-390b-4ba8-b75a-82ca65740097>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Jackson Street Village volunteer report: April 14th April 14th (from 2:30 to 4:30 PM): On the 14th, very few children showed up once again. I spent the first half of the volunteer time talking to children and reading with them. Later, I helped a student learn how to use Venn diagrams with simple math. Each Venn diagram had two circles of numbers, and of course, some numbers overlapped. The assignment was to find the pattern in each individual circle. For example, there would be a group of odd numbers in the left circle, and multiples of three in the right. The answer would be ‘odd’ and ‘X3.’ It was really fun giving the student hints. He really seemed to be getting the hang of it by the end, even though the answers were getting more and more complex. I really enjoyed myself on this day.
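The worksheet pattern described above, odd numbers in one circle and multiples of three in the other with the overlap shared, is easy to sketch in code. This is just an illustration of the classification rule; the sample numbers in the test are invented:

```javascript
// Classify a number into the two Venn circles from the worksheet:
// "odd" (left circle), "x3" (right circle, multiples of three),
// "both" (the overlap), or "neither" (outside both circles).
function classify(n) {
  const odd = n % 2 !== 0;
  const timesThree = n % 3 === 0;
  if (odd && timesThree) return "both";
  if (odd) return "odd";
  if (timesThree) return "x3";
  return "neither";
}
```

For example, 9 lands in the overlap (odd and a multiple of three), while 6 belongs only in the "x3" circle.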
Math.random and fillRect with HTML5 Canvas

New to the CF scene Join Date Nov 2010 Thanked 0 Times in 0 Posts

Hello, I'm new here so.. hi! You'll probably be seeing me a lot. Anyways, here's my question. I am trying to get a function to draw 5 randomly sized and colored rectangles nested within each other, meaning each rectangle should not go outside the boundaries of the rectangle it is in. The color thing I've got down in a randomColor() function. It's the nesting of rectangles inside rectangles that is confusing me (hence me being up for the past 4 hours trying to understand it). I started out with very simple code just making 5 rectangles of reducing sizes nested in each other, then added Math.random to randomize all the sizes. Now I'm at this point and have lost my way. Please help, or maybe there is just an easier way. I added a bunch of comments in my code so maybe you'll understand what I'm trying to do.

function rect() { // rectangle generator
    autoctx.clearRect(0, 0, 400, 400); // clear canvas
    // declare variables
    var x;
    var y;
    var width;
    var height;
    var i = 0; // counter
    // create random x,y coordinates
    x = Math.round(Math.random() * 100);
    y = Math.round(Math.random() * 100);
    do { // create a random-size rectangle
        width = Math.round(Math.random() * 400);
        height = Math.round(Math.random() * 400);
    } while (width < 100 || height < 100); // make sure rectangle is big enough
    autoctx.fillStyle = randomColor(); // runs random color generator
    autoctx.fillRect(x, y, width, height); // fill first rectangle
    do {
        // choose new random coordinates within the previous rectangle
        x += Math.round(Math.random() * 15);
        y += Math.round(Math.random() * 15);
        // keep rectangle widths and heights from going negative
        do { // WIDTH TESTER: validates width to be within the previous rectangle's width
            var testW = width - Math.round(Math.random() * (2 * x)); // store test width
            if (testW > 0) {
                width = testW; // if test width > 0 store in width
            }
        } while (testW < 0); // if test width < 0 continue loop
        do { // HEIGHT TESTER: validates height to be within the previous rectangle's height
            var testH = height - Math.round(Math.random() * (2 * y)); // calculate new test height
            if (testH > 0) {
                height = testH; // store as height
            }
        } while (testH < 0); // if test height < 0 try again
        // fill the next rectangle
        autoctx.fillStyle = randomColor();
        autoctx.fillRect(x, y, width, height);
        i++; // add 1 to counter
    } while (i < 4); // kick out of loop after the 5th rectangle
}

Thanks guys, I hope I'm not too confusing or anything; any help would be greatly appreciated!

Red Devil Mod Join Date Apr 2003 Bucharest, ROMANIA Thanked 379 Times in 375 Posts

In my opinion this is really a problem of analyzing and ordering your operations. It has nothing to do with HTML5 as such; it's just a logical sequence. You have to focus on creating sets of (x, y, width, height) for all 5 rectangles, following 2 simple rules. After you create, randomly, the first set, try (in a loop) new random sets until: 1. the x in the new set is greater than the x from the first set (same for the y), and 2. the width in the new set is smaller than the width from the first set (same for the height). Now repeat the operation, except that the x, y, width and height values used for comparison are now the values from the previously created set. All this should be done within a separate function which, in the end, returns an array (or an object). Only then should you start the loop for fillStyle() and fillRect(). Need an example?

Last edited by Kor; 11-22-2010 at 03:18 PM.
Red Devil Mod Join Date Apr 2003 Bucharest, ROMANIA Thanked 379 Times in 375 Posts

Here's a quick example (untested):

function returnSets(X, Y, W, H) {
    var minX = X, minY = Y, maxW = W, maxH = H;
    var arr = [], x, y, w, h;
    for (var i = 0; i < 5; i++) {
    return arr

Where the arguments passed are the canvas properties:
X = 0 // extreme left
Y = 0 // extreme top
W = the canvas width
H = the canvas height

It might not work properly, but at least I hope it illustrates the idea behind it.

Last edited by Kor; 11-22-2010 at 04:17 PM.
{"url":"http://www.codingforums.com/javascript-programming/209923-math-random-fillrect-html5-canvas.html","timestamp":"2014-04-17T19:55:44Z","content_type":null,"content_length":"76332","record_id":"<urn:uuid:d5a1ce7f-9407-42ef-968e-97437ebd6e55>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Wealth Plus High Return Policy from LIC of India

LIC’s Wealth Plus is a unit linked plan that safeguards your investment from market fluctuations, so that your investments are protected in financially volatile times. This plan offers payment of the Fund Value at the end of the policy term, based on the highest Net Asset Value (NAV) over the first 7 years of the policy, or the NAV as applicable at the end of the policy term, whichever is higher. The NAV of the fund will be subject to a minimum of Rs. 10/-. The policy term is 8 years, with an extended life cover for 2 years after the completion of the policy term. This plan will be available for sale for a limited period.

You can pay the premium either in a single lump sum or for 3 years. You can choose the level of cover within the limits, which will depend on your age, on whether the policy is a Single premium or Limited premium contract, and on the level of premium you agree to pay. Premiums paid, after the allocation charge, will purchase units of the Fund. The Unit Fund is subject to various charges, and the value of units may increase or decrease, depending on the Net Asset Value (NAV).

LIC of India’s Wealth Plus Features

1. Payment of Premiums: You may pay premiums regularly at yearly, half-yearly, quarterly or monthly (through ECS mode only) intervals over the 3 years premium paying term. Alternatively, a Single premium can be paid.

2. Guaranteed NAV: In this product there is a guarantee of the highest NAV recorded on a daily basis in the first 7 years of the policy, subject to a minimum of Rs. 10. This means the payment at the end of the policy term will be based on the highest Net Asset Value (NAV) recorded over the first 7 years of the policy, or the NAV as applicable at the end of the policy term, whichever is higher. The guarantee will be applicable only for the payment made at the end of the policy term, irrespective of any partial withdrawals made during the policy term. The period of 7 years starts from the date of commencement of the policy.

3.
Minimum Age 10 - Maximum Age 65
- 3 years Premium Paying policies – Rs. [20,000] p.a. (other than monthly (ECS) mode)
- Monthly (ECS) mode – Rs. [2,000] p.m.
- Single premium policies – Rs. [40,000] p.a.
(f) Sum Assured under the Basic Plan – 3 years Premium Paying Term: 5 times the annualized premium

4. Other Features of Wealth Plus:
i) Partial Withdrawals:
ii) Increase / Decrease of risk covers: No increase or decrease of benefits will be allowed.
iii) Discontinuance of premiums: If premiums are payable yearly, half-yearly, quarterly or monthly (ECS) and have not been duly paid within the days of grace under the Policy, the Policy will lapse. A lapsed policy can be revived during the period of two years from the due date of the first unpaid premium.
iv) Revival: If a due premium is not paid within the days of grace, the policy lapses. A lapsed policy can be revived during the period of two years from the due date of the first unpaid premium. The period during which the policy can be revived is called the “period of revival” or “revival period”.
5. Reinstatement: A policy once surrendered cannot be reinstated.
6. Risks borne by the Policyholder:
7. LIC’s Wealth Plus is a Unit Linked Life Insurance product, which is different from traditional insurance products and is subject to the risk factors.
8. The premiums paid in Unit Linked Life Insurance policies are subject to investment risks associated with capital markets; the NAVs of the units may go up or down based on the performance of the fund and factors influencing the capital market, and the insured is responsible for his/her decisions.
9. Life Insurance Corporation of India is only the name of the Insurance Company and LIC’s Wealth Plus is only the name of the unit linked life insurance contract; the name does not in any way indicate the quality of the contract, its future prospects or returns.
10.
Please know the associated risks and the applicable charges, from your Insurance agent or the Intermediary or policy document of the insurer. 11. The various funds offered under this contract are the names of the funds and do not in any way indicate the quality of these plans, their future prospects and returns. 12. All benefits under the policy are also subject to the Tax Laws and other financial enactments as they exist from time to time. 50 Responses to “Wealth Plus,Guaranteed NAV,High Return Policy from LIC of India” 1. if we deposit a sum of 20,000 per year for three years, then how much ammount we will get after 7 years. tell me please in detail.. 2. reply m 3. if we deposit a sum of 50,000 per year for three years, then how much ammount we will get after 7 years 4. what is the current nav of wealth plus lic seheme 5. i was invested in wealth plus @ 1,50,000 in apr 2010. how much nav scored 6. I have invested 400000. What is the present NAV for Wealh Plus 2010? 7. What is the highest NAV value till today? 8. if we deposit a sum of Rs, 50,000 Single Payment then how much amount we will get after 7 years. 9. i have invest in 3years scheme, now What is the highest NAV value till today 10. latest nav is 11.10 11. Expect the NAV to increase now, judging from the SENSEX rally (crossing of 20k mark) today 12. I have invested Rs Ten Lakhs in Wealth Plus Policy. What is the current NAV. 13. if we deposit a sum of Rs.50000.per years for three years.then how much amount we will get after 7 years. plz repiy 14. I HAVE INVESTED RS. 2,00,000 IN THE LIC WEALTH PLUS. KINDLY LET ME THE NAV OF THIS PLAN DURING LAST ONE YEAR. 15. i have kept Rs50000/- per annum. How much will I get after 7 yrs 16. Really amazed to see, how could the people think Wealth Place scheme as a “magic stick”! Really I believe the gimic is to get and expect a maximum return on basis of highest NAV point for next 7 years. 
But I strongly believe, the money LIC invest in market is in form of liquid bond/ bond pension fund or saving like… – not more than 10% in equity. SO DO N”T EXPECT A LUMPSOME RETURN AFTER 7 YEARS! It will be 10% steady increase in NAV with a little “noise” on an average per year. Will be not affected much by sensex… 17. I have invested 400000. What is the present NAV for Wealh Plus 2010? 18. I WANT TO KNOW MY POLICE NO 525125440 NVA TODAY 19. I want know my policy nav to wealth plus 20. I WANT TO MY POLICY NO.286490188 NAV TODAY 21. Really amazed to see, how could the people think Wealth Place scheme as a “magic stick”! Really I believe the gimic is to get and expect a maximum return on basis of highest NAV point for next 7 years. But I strongly believe, the money LIC invest in market is in form of liquid bond/ bond pension fund or saving like… – not more than 10% in equity. SO DO N”T EXPECT A LUMPSOME RETURN AFTER 7 YEARS! It will be 10% steady increase in NAV with a little “noise” on an average per year. Will be not affected much by sensex… 22. Iwould like to know the highest nav till today 23. what is the nav of today….?? 24. kindly let me know highest nav of wealth plus till today 25. My Policy in Today Current High NAV Rate ? 26. I am LIC of India Agent. So Please sand me Net.Add. OF 1. Checking all Nav rate, 2. Check all Party Policy Detail, 27. I am depositing the amount of Rs 20,000 p/a.How do you expect I will get after 7 years. 28. I am depositing the amount of Rs 20,000 p/a.How much do you expect I will get after 7 years. 29. Rs.50,000/= Invest by me.what is NAV today, plz inform my e-mail(firoz1972@yahoo.com) 30. I am depositing the amount of Rs 20,000 p/a.How much do you expect I will get after 7 years 31. I am depositing the amount of Rs 30,000 p/a.How much do you expect I will get after 7 years 32. what is the current nav of wealth plus till today? urgent inform 33. what is the current nav of wealth plus till today? urgent inform 34. 
Please let me know the fund value of my wealth plus policies.I had invested total one lakh in this policy.50000 each in my name and my wife`s name. 35. Nice information for some one looking for new LIC product.. 36. I am depositing the amount of Rs 20,000 p/a at regular premium in aprail.How much do you expect that I will get after 7 years 37. what is the latest NAV 38. Present NAV = Rs. 22.65 39. Present NAV 40. Hi People, The return on this policy will depend on the days existing NAV value. There is no gaurantered return. However, if you depositing a equal amount of money for 3 years, after 7 years of term period, you are gaurantered to received the HIGHEST NAV value during the last 7 years. However, if you withdraw ur investment before 7 years, the return would be on that days NAV valuree. 41. plz tell me what is the current growth of wealth plus i deposit the amt in may 2010 42. i was invested Rs75000 as on 2010, please mail me till today gain NAv ammount 43. I was invested Rs75000 as on Feb 2010, please mail me till today gain NAV ammount 44. I have invested in WEALTH PLUS POLICY, 40,000 Every year. I will be investing Total 1,20,000 in 3 years, How much will i get if ,i have to withdraw after 3 years.AND ALSO If i wish to discontinue from this year onwards how much will i get out of the 40,000 which i have invested last year . Is withdrawl possible .Please suggest. 45. I am depositing the amount of Rs 20,000 p/a.How will i get after 7 years.? i was deposit 20000 in march 2010 i got 2000 unit because that time face value was 10 rupees and now i was deposit 20000 again i want to know how many units i got ? any body have a idea plz share with me … thanks in advance .. 46. this is good plan to invest… thanks for sharing… 47. TILL NOW THE HIGHEST NAV IS 11.05 APPROX & what we will be the nav after three years 48. thanx for giving me inforamtion.. 49. it’s really a nic post… thax for sharing me some useful info ..in every post you give some extra efforts.. 50. 
I like all Wealth Plus Features, great man
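Many of the comments above ask "how much will I get". The payout rule described in the plan section can be sketched roughly as follows. This is an illustrative simplification of my own: it ignores allocation and other policy charges, and the function name and figures are hypothetical, not LIC's actual calculation.

```javascript
// Rough sketch of the Guaranteed NAV payout (charges ignored):
// maturity value = units held * max(highest NAV in the first 7 years,
//                                   NAV at the end of the term, Rs. 10).
function maturityValue(units, navFirst7Years, navAtMaturity) {
    var highest = Math.max.apply(null, navFirst7Years);
    var guaranteedNav = Math.max(highest, navAtMaturity, 10);
    return units * guaranteedNav;
}
// e.g. 2,000 units (Rs. 20,000 invested at NAV 10), highest NAV 11.10,
// final NAV 9.50: 2000 * 11.10 = Rs. 22,200 before charges.
```

The point of the guarantee is visible in the sketch: even if the NAV at maturity is below the best NAV seen in the first 7 years (or below Rs. 10), the higher figure is used.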
Functions for finding lowest common ancestors in trees in O(1) time, with O(n) preprocessing.

lowestCommonAncestor :: Int -> (a -> Index) -> Tree a -> Index -> Index -> a

lowestCommonAncestor n ix tree takes a tree whose nodes are mapped by ix to a unique index in the range 0..n-1, and returns a function which takes two indices (corresponding to two nodes in the tree) and returns the label of their lowest common ancestor. This takes O(n) preprocessing and answers queries in O(1), as it is an application of Data.RangeMin. For binary trees, consider using Data.RangeMin.LCA.Binary.

quickLCA :: Tree a -> (Int, Tree (Index, a), Index -> Index -> (Index, a))

Takes a tree and indexes it in depth-first order, returning the number of nodes, the indexed tree, and the lowest common ancestor function.
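Outside Haskell, the behavior being specified can be illustrated with a naive sketch of my own (a linear-time path comparison, not the package's O(1) range-min machinery; the node shape { index, children } is an assumption for the example):

```javascript
// Naive reference sketch (illustration only): given a tree where each node
// carries a unique index, return the lowest common ancestor node of the
// nodes with indices i and j (a node counts as an ancestor of itself).
function lca(tree, i, j) {
    function pathTo(node, target, path) {
        path.push(node);
        if (node.index === target) return true;
        for (var k = 0; k < node.children.length; k++) {
            if (pathTo(node.children[k], target, path)) return true;
        }
        path.pop();
        return false;
    }
    var pi = [], pj = [];
    pathTo(tree, i, pi);
    pathTo(tree, j, pj);
    // the LCA is the deepest node shared by both root-to-node paths
    var ans = tree;
    for (var d = 0; d < Math.min(pi.length, pj.length) && pi[d] === pj[d]; d++) {
        ans = pi[d];
    }
    return ans;
}
```

The library's contribution is doing the same query in O(1) after O(n) preprocessing, rather than re-walking paths per query as this sketch does.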
Why is the answer a negative number?

Original question (New Member, joined Sep 2010, 12-22-2010, 02:33 AM):

(-5a^3b^5)^2 / a^4b^3

My working and answer:
(-5a^3b^5)(-5a^3b^5) / a^4b^3
25a^6b^10 / a^4b^3 = 25a^2b^7

The book says the answer = -25a^2b^7

Reply from Subhotosh Khan:

Following PEMDAS, both you and the book are incorrect (according to the problem as posted).
“... mathematics is only the art of saying the same thing in different words” - B. Russell

Reply from Denis:

The book is correct IF this is the expression: -(5a^3b^5)^2 / (a^4b^3)
In other words: the - sign must be outside the brackets, PLUS the denominator must be bracketed.
I'm just an imagination of your figment!

Reply from the original poster:

Can you please elaborate?

Reply from Subhotosh Khan:

Did you read Denis's reply above? If you did, read it again slowly and carefully...

Reply from the original poster:

Yes, I read his reply diligently. He said the book is correct if there is a negative sign before the bracket and brackets around the last two terms... but there is not. I copied the question from the book to the forum correctly, and always do so. Denis is saying that IF the book did it this way, then the book's answer is right.
But the book did not do it the way he wrote, so that means the book's answer is wrong. I got that! But you said BOTH that the book's answer was wrong (which, like I said, I get) AND that my answer was wrong. So I said "please elaborate" because I don't see how my answer is wrong. I believe I followed PEMDAS correctly. If you would kindly point out why MY answer was wrong, I'd much appreciate it.

Reply from Denis:

Your work: 25a^6b^10 / a^4 b^3 = 25a^2b^7
That is incorrect (sorry, man!). As written (no brackets), 25a^6b^10 / a^4 b^3 means (25a^6b^10 / a^4) times b^3. You PEMDASed badly.
I'm just an imagination of your figment!

Reply from the original poster:

So the answer to 25a^6b^10 / a^4 b^3 is 25a^2b^13?

Reply:

Quote: "i copied the question from the book to the forum correctly and always do"

If your book displays the given expression as shown below, then you did not correctly copy the exercise.

[tex]\frac{(-5 a^3 b^5)^2}{a^4 b^3}[/tex]

If your book displays the given expression the following way instead, then you could clarify your typing with square brackets, as shown:

[tex]\frac{(-5 a^3 b^5)^2}{a^4} \; b^3[/tex]

[(-5a^3b^5)^2/a^4] b^3

Either way, the factor of -1 is squared, so there is no negation sign in the answer.
"English is the most ambiguous language in the world."
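To pull the thread's arithmetic together in one place (this summary is mine, not a post from the thread), the three possible readings give:

```latex
\frac{(-5a^3b^5)^2}{a^4 b^3} = \frac{25a^6 b^{10}}{a^4 b^3} = 25a^2 b^7
\quad\text{(numerator squared, denominator fully bracketed)}
\\[4pt]
\frac{25a^6 b^{10}}{a^4}\cdot b^3 = 25a^2 b^{13}
\quad\text{(strict left-to-right reading of } 25a^6b^{10}/a^4b^3 \text{)}
\\[4pt]
-\frac{(5a^3 b^5)^2}{a^4 b^3} = -25a^2 b^7
\quad\text{(the reading under which the book's answer is correct)}
```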
~ Yours Truly, 1969
Your Age By Chocolate Math

This is pretty neat, and it takes less than a minute. Work it out as you read, and be sure you don't read the bottom until you've worked it out! This is not one of those waste-of-time things; it's fun.

1. First of all, pick the number of times a week that you would like to have chocolate (more than once but less than 10).
2. Multiply this number by 2 (just to be bold).
3. Add 5.
4. Multiply it by 50 -- I'll wait while you get the calculator.
5. If you have already had your birthday this year, add 1760. If you haven't, add 1759.
6. Now subtract the four-digit year that you were born.

You should have a three- or four-digit number. The first digit is your original number (i.e., how many times you want to have chocolate each week). The next two digits are YOUR AGE!

THIS IS THE ONLY YEAR (2010) IT WILL EVER WORK, SO SPREAD IT AROUND WHILE IT LASTS.

For more, check out http://coolemailforwards.com/tags-Math.php
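One can check the arithmetic behind the trick (this note is mine, not part of the forward). For the digits to split as claimed in 2010, the constants must be +5 at step 3 and 1760/1759 at step 5, because 50(2n + 5) + 1760 = 100n + 2010; subtracting the birth year then leaves 100 × (chocolate count) + age. A sketch, with a hypothetical function name:

```javascript
// Illustrative check of the 2010 version of the trick.
function chocolateMath(timesPerWeek, birthYear, hadBirthdayIn2010) {
    var n = timesPerWeek * 2;                  // step 2
    n = n + 5;                                 // step 3
    n = n * 50;                                // step 4: 100*timesPerWeek + 250
    n = n + (hadBirthdayIn2010 ? 1760 : 1759); // step 5: ... + 2010 (or 2009)
    return n - birthYear;                      // step 6
}
// e.g. chocolate 7 times a week, born 1980, birthday already passed:
// 14 -> 19 -> 950 -> 2710 -> 730, i.e. first digit "7", age 30.
```

The "(2010) only" claim follows from the same algebra: for any other year, the constant at step 5 has to be updated.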
Floor functions!

Original post (Junior Member, April 12th 2012, 08:05 AM):

Given any real number $x$ and positive integer $n$, prove that $\left\lfloor\frac{\left\lfloor x\right\rfloor}{n}\right\rfloor = \left\lfloor\frac{x}{n}\right\rfloor$.

Deduce that for any real number $y$ and positive integers $n$ and $m$, one has $\left\lfloor\frac{\left\lfloor y/m\right\rfloor}{n}\right\rfloor = \left\lfloor\frac{\left\lfloor y/n\right\rfloor}{m}\right\rfloor$.

I think the second part is very straightforward, i.e. $\left\lfloor\frac{\left\lfloor y/m\right\rfloor}{n}\right\rfloor = \left\lfloor\frac{y/m}{n}\right\rfloor = \left\lfloor\frac{y/n}{m}\right\rfloor = \left\lfloor\frac{\left\lfloor y/n\right\rfloor}{m}\right\rfloor$. I just can't prove the initial part. Any help appreciated!

Re: Floor functions! (Senior Member, April 12th 2012, 09:07 AM):

Let $x = qn + r + a$ where $0 \le r < n$ and $0 \le a < 1$, and $q$ and $r$ are integers. I think this goes somewhere.

Re: Floor functions! (Junior Member, April 13th 2012, 10:27 AM):

Cheers, I think I get it. So by putting $x = qn + r + a$ as you suggest, we have $\left\lfloor x\right\rfloor = qn + r$, and as $r < n$, $\left\lfloor \frac{r}{n}\right\rfloor = 0$, so $\left\lfloor\frac{\left\lfloor x\right\rfloor}{n}\right\rfloor = q$.

Now $\frac{x}{n} = q + \frac{r+a}{n}$; as $r \in \mathbb{Z}$, $r < n$, and $a < 1$, we have $r + a < n$, so $\left\lfloor q + \frac{r+a}{n}\right\rfloor = q$ also.

Therefore $\left\lfloor\frac{\left\lfloor x\right\rfloor}{n}\right\rfloor = \left\lfloor\frac{x}{n}\right\rfloor$.
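As a quick sanity check of the first identity (a numeric spot-check of my own, not a substitute for the proof in the thread):

```javascript
// Spot-check that floor(floor(x)/n) === floor(x/n) for sample values.
function floorIdentityHolds(x, n) {
    return Math.floor(Math.floor(x) / n) === Math.floor(x / n);
}
```

Note that the decomposition $x = qn + r + a$ in the proof places no sign restriction on $q$, so the identity also holds for negative $x$.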
1. gigabyte(s)
2. Great Britain

gb - Computer Definition

A billion bytes, often shortened to "gig" in conversation.

1. Hard disks and flash drives measure computer storage in SI units, which are based on the base-10, or decimal, system, so GB is one billion (10^9) bytes, or 1,000,000,000 bytes. See also byte, decimal, and SI.
2. Internal computer memory is based on a base-2, or binary, number system. A GB of internal memory, therefore, is 1,073,741,824 (2^30) bytes. The term GB comes from the fact that 1,073,741,824 is nominally 1,000,000,000. See also byte, G, and SI.

(1) (GB) (GigaByte) One billion bytes (technically 1,073,741,824 bytes). See giga and space/time.
(2) (Gb) (GigaBit) One billion bits (technically 1,073,741,824 bits). Lowercase "b" for bit and uppercase "B" for byte are not always followed and are often misprinted; thus, Gb may refer to gigabyte. See Gbps, giga and space/time.
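The practical consequence of the two definitions can be shown with a small sketch (mine; the 500 GB drive is just an example):

```javascript
// Decimal vs. binary "gigabyte" (illustration of the definitions above).
var GB_DECIMAL = Math.pow(10, 9); // storage vendors: 1,000,000,000 bytes
var GB_BINARY = Math.pow(2, 30);  // internal memory: 1,073,741,824 bytes

// A "500 GB" (decimal) drive, reported in binary gigabytes:
var reported = 500 * GB_DECIMAL / GB_BINARY; // roughly 465.66
```

This ~7% gap is why an operating system that counts in binary units reports a smaller number than the figure printed on the drive's box.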
Logic equations for MOSFET circuits - IEEE Transactions on Computer-Aided Design, 1987. Cited by 63 (14 self).
The switch-level model represents a digital metal-oxide semiconductor (MOS) circuit as a network of charge storage nodes connected by resistive transistor switches. The functionality of such a network can be expressed as a series of systems of Boolean equations. Solving these equations symbolically yields a set of Boolean formulas that describe the mapping from input and current state to the new network state. This analysis supports the same class of networks as the switch-level simulator MOSSIM II and provides the same functionality, including the handling of bidirectional effects and indeterminate (X) logic values. In the worst case, the analysis of an n-node network can yield a set of formulas containing a total of O(n^3) operations. However, all but a limited set of dense, pass-transistor networks give formulas with O(n) total operations. The analysis can serve as the basis of efficient programs for a variety of logic design tasks, including: logic simulation (on both conventional and special-purpose computers), fault simulation, test generation, and symbolic verification.

- Proceedings of the 24th Design Automation Conference, 1987. Cited by 52 (0 self).
The cosmos simulator provides fast and accurate switch-level modeling of mos digital circuits. It attains high performance by preprocessing the transistor network into a functionally equivalent Boolean representation. This description, produced by the symbolic analyzer anamos, captures all aspects of switch-level networks including bidirectional transistors, stored charge, different signal strengths, and indeterminate (X) logic values. The lgcc program translates the Boolean representation into a set of machine language evaluation procedures and initialized data structures. These procedures and data structures are compiled along with code implementing the simulation kernel and user interface to produce the simulation program. The simulation program runs an order of magnitude faster than our previous simulator mossim ii.

- Tech. Rep. RC 19219 (#83668), IBM Research Division, T. J. Watson Research Center, Yorktown Heights, NY, 1993. Cited by 27 (3 self).
This paper describes a diagnosis technique for locating design errors in circuit implementations which do not match their functional specification. The method efficiently propagates mismatched patterns from erroneous outputs backward into the network and calculates circuit regions which most likely contain the error(s). In contrast to previous approaches, the described technique does not depend on a fixed set of error models. Therefore, it is more general and especially suitable for transistor-level circuits, which have a broader variety of possible design errors than gate-level implementations. Furthermore, the proposed method is also applicable for incomplete sets of mismatched patterns and hence can be used not only as a debugging aid for formal verification techniques but also for simulation-based approaches. Experiments with industrial CMOS circuits show that for most design errors the identified problem region is less than 3% of the overall circuit.

- IBM Journal of Research and Development, 1994. Cited by 19 (5 self).
In an effort to fully exploit CMOS performance, custom design techniques are used extensively in commercial microprocessor design. However, given the complexity of current generation processors and the necessity for manual designer intervention throughout the design process, proving design correctness is a major concern. In this paper we discuss Verity, a formal verification program for symbolically proving the equivalence between a high-level design specification and a MOS transistor-level implementation.

- IEEE Trans. CAD/IC, 1987. Cited by 16 (5 self).
A network of switches controlled by Boolean variables can be represented as a system of Boolean equations. The solution of this system gives a symbolic description of the conducting paths in the network. Gaussian elimination provides an efficient technique for solving sparse systems of Boolean equations. For the class of networks that arise when analyzing digital metal-oxide semiconductor (MOS) circuits, a simple pivot selection rule guarantees that most s-switch networks encountered in practice can be solved with O(s) operations. When represented by a directed acyclic graph, the set of Boolean formulas generated by the analysis has total size bounded by the number of operations required by the Gaussian elimination. This paper presents the mathematical basis for systems of Boolean equations, their solution by Gaussian elimination, and data structures and algorithms for representing and manipulating Boolean formulas.

- 1985. Cited by 13 (6 self).
The program MOSSYM simulates the behavior of a MOS circuit represented as a switch-level network symbolically. That is, during simulator operation the user can set an input to either 0, 1, or a Boolean variable. The simulator then computes the behavior of the circuit as a function of the past and present input variables. By using heuristically efficient Boolean function manipulation algorithms, the verification of a circuit by symbolic simulation can proceed much more quickly than by exhaustive logic simulation. In this paper we present our concept of symbolic simulation, derive an algorithm for switch-level symbolic simulation, and present experimental measurements from MOSSYM.
This site is a part of the JavaScript E-labs learning objects for decision making. Other JavaScripts in this series are categorized under different areas of application in the MENU section on this page.

Often managers have to make decisions about inventory level over a very limited period. This is the case, for example, with seasonal goods such as Christmas cards, which should satisfy all demand in December, while any cards left in January have almost no value. These single-period decision models are phrased as the Newsboy Problem. For a newsboy who sells papers on a street corner, the demand is uncertain, and the newsboy must decide how many papers to buy from his supplier. If he buys too many papers, he is left with unsold papers that have no value at the end of the day; if he buys too few papers, he has lost the opportunity of making a higher profit.

This JavaScript computes the optimal inventory level over a single cycle. For other inventory management tools, visit the inventory control models site.

Enter up to 28 pairs of (number of possible items to sell, and their associated non-zero probabilities), together with the "not sold unit batch cost" and the "net profit of a batch sold", in the data-matrix, THEN click the Calculate button. Blank boxes are not included in the calculations.

When entering your data, use the Tab key (not the arrow or Enter keys) to move from cell to cell in the data-matrix. To edit your data, including add/change/delete, you do not have to click on the "Clear" button and re-enter your data all over again. You may simply add a pair of numbers to any blank cells, change a number to another in the same cell, or delete a number from a cell. After editing, click the "Calculate" button. For extensive edits, or to use the JavaScript for a new set of data, use the "Clear" button.
This JavaScript, like many others, is intended for decision problems with nontrivial parameters, such as non-zero probabilities; for degenerate parameters the solution is obvious. For example, if the numbers of batches are 0 and 1 with probabilities 1 and 0, the "not sold unit batch cost" is 1, and the net profit of a batch sold is also 1, then the optimal ordering quantity is 0, which is an obvious case.
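The single-period rule behind this kind of calculator is usually stated via the critical ratio: order the smallest quantity whose cumulative demand probability reaches cu/(cu + co), where cu is the net profit of a batch sold (underage cost) and co is the cost of an unsold batch (overage cost). A minimal sketch of my own, using the same cost convention as this page (names are illustrative):

```javascript
// Newsvendor (newsboy) critical-ratio sketch: `quantities` lists the possible
// demand levels in ascending order with probabilities `probs`;
// co = cost of an unsold batch, cu = net profit of a sold batch.
function optimalOrder(quantities, probs, co, cu) {
    var critical = cu / (cu + co);
    var cumulative = 0;
    for (var i = 0; i < quantities.length; i++) {
        cumulative += probs[i];
        if (cumulative >= critical) return quantities[i]; // smallest q with F(q) >= ratio
    }
    return quantities[quantities.length - 1];
}
```

With demand 0 or 1 at probabilities 1 and 0 and co = cu = 1, this returns 0, matching the "obvious case" mentioned above.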
Probability and Queuing Theory Important Questions (Problems): MA2262 PQT for CSE & IT 4th Sem

Tags: Anna University, CIVIL, IT, CSE 4th Sem, CSE 4th Sem Important Questions, Important Questions, IT 4th Sem, IT 4th Sem Important Questions, IT question bank, MA2262 PQT question bank, PQT important questions, question bank.

Anna University Probability and Queuing Theory important questions for CSE and IT 4th Semester are given here. Important questions for all five units (unit 1 through unit 5) of MA2262 Probability and Queuing Theory are provided for 2nd year, 4th semester students. The questions uploaded are repeatedly asked in the exams, so make use of them. Useful materials for MA2262 are uploaded here, covering both the 2-mark (Part A) and 16-mark (Part B) question banks. If you need materials, feel free to ask; we are here to help.
Probability and Queuing Theory ANNA UNIVERSITY IMPORTANT 2 marks and 16 mark QUESTIONS MA2262 PQT – CSE & IT 4th SEMESTER Subject : Probability and Queuing Theory (PQT) Subject Code : MA2262 Semester : 4th sem Department : CSE & IT University: Anna University Content in Probability and Queuing Theory important questions : 2 marks, 8 marks , 16 marks MA2262 Probability and Queuing Theory Important Questions Part A Questions download 2 marks questions with answers – MA2262 PQT important questions PQT-Imp-Problems - download here MA2262 Probability and Queuing Theory Important Questions Part B Questions download 16 Marks and 8 Marks Questions with Answers – CS2253 PQT Important Questions PQT-Imp-Problems - download here Probability and Queuing Theory Notes Anna University MA2262 PQT notes anna university expected questions for CSE and IT 2nd year – 4th Semester students for Probability and Queuing Theory important questions is uploaded above.click the above link for your download. we have uploaded the important questions MA2262 material in pdf format. question uploaded are repeatedly asked in the exams. make use of this questions. if you need materials feel free to ask. surely we are here to help u. important questions for MA2262 Probability and Queuing TheorySubject.useful materials for MA2262 are uploaded here. if you need materials feel free to ask. surely we are here to help you kindly comment us below.
Re: [Axiom-developer] RE: TeXmacs+Axiom

From: Martin Rubey
Subject: Re: [Axiom-developer] RE: TeXmacs+Axiom
Date: 22 May 2007 11:09:00 +0200
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.4

Dear Alasdair,

"Alasdair McAndrew" <address@hidden> writes:

> I will give this new tm_axiom a go. Although, with Martin, I prefer emacs -
> I always have an emacs window running (I use LaTeX with the emacs auctex
> mode for all my document preparation), and my ideal interface for axiom
> would be an emacs mode which produces output as a TeX'ed graphic - such as
> imaxima does for maxima. I find TeXmacs slow and a bit clumsy.

Oh, I thought you'd prefer TeXmacs for some reason...

Well, as far as emacs + axiom is concerned, I did not get round to trying the axiom mode by Jay Belanger, Cliff Yapp et al. Unfortunately, I guess that a good emacs mode for .input files would be the most important thing for axiom *users*. In particular, it would be good to have a way to send a function definition to a running axiom, via shift-return or something. But that's probably not so easy.

Cliff, I found the emacs mode documentation very hard to read. Is there a small tutorial, and a summary of the available emacs commands?

As far as programming is concerned, since I prefer Aldor to SPAD, I am quite happy with Ralf's (I think) aldor.el mode, which comes bundled with ALLPROSE. For spad.pamphlet (i.e., LaTeX) editing, mmm mode is a wonderful thing. If you install ALLPROSE, you'll have all these things automatically set up; I think that's the easiest way to do it.
A Student on a Piano Stool Rotates Freely With ... | Chegg.com

A student on a piano stool rotates freely with an angular speed of 3.09 rev/s. The student holds a 1.35-kg mass in each outstretched arm, 0.787 m from the axis of rotation. The combined moment of inertia of the student and the stool, ignoring the two masses, is 5.48 kg·m², a value that remains constant. As the student pulls his arms inward, his angular speed increases to 3.50 rev/s. How far are the masses from the axis of rotation at this time, considering the masses to be points? What are the initial and final kinetic energies of the system?

I honestly don't know how to even start this problem. Any help or walk-through on how to do this, or a problem like this, would be very appreciated.
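One standard way to attack a problem like this (offered as a sketch, not as a posted solution) is conservation of angular momentum: with no external torque, L = Iω is constant, where the total moment of inertia is the stool-plus-student value plus 2mr² for the two point masses at radius r. Solving L_i = L_f for the final radius, and computing KE = ½Iω² with ω converted to rad/s:

```python
import math

# Given values from the problem statement
omega_i = 3.09 * 2 * math.pi   # initial angular speed (rad/s)
omega_f = 3.50 * 2 * math.pi   # final angular speed (rad/s)
m = 1.35                       # each held mass (kg)
r_i = 0.787                    # initial distance from the axis (m)
I_body = 5.48                  # student + stool inertia (kg*m^2), constant

I_i = I_body + 2 * m * r_i**2              # total initial moment of inertia
L = I_i * omega_i                          # angular momentum, conserved
I_f = L / omega_f                          # total final moment of inertia
r_f = math.sqrt((I_f - I_body) / (2 * m))  # final radius of each mass

KE_i = 0.5 * I_i * omega_i**2              # initial kinetic energy (J)
KE_f = 0.5 * I_f * omega_f**2              # final kinetic energy (J)

print(round(r_f, 3))             # about 0.556 m
print(round(KE_i), round(KE_f))  # roughly 1348 J and 1527 J
```

The final kinetic energy comes out larger than the initial one because the student does work on the masses while pulling them inward.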
Invited Talks

Aalborg Universitet, Denmark
Tuesday 6 April, 16:00
Verifying LEGO: Validation and Synthesis of Embedded Software

Embedded systems are to be found almost everywhere in today's society. Intelligence in the form of software and hardware is introduced into all kinds of products and sectors with the objective of enhancing their functionality. Examples include consumer electronics (e.g. MP3 players, audio/video systems), transport systems (e.g. ABS in cars, control software in high-speed trains), medical devices (intelligent pens for treating diabetes), and many others. As indicated by the examples above, embedded software is often applied in highly safety-critical applications. The increased complexity of embedded systems challenges the methods used for designing and realising embedded systems so that they are bug-free, correct, predictable and fault-tolerant. In addition, embedded systems are often used in situations where timing guarantees (e.g. the release of an airbag in a car must happen within a few milliseconds after impact, regardless of circumstances) as well as bounds on usage of resources (memory, power, bandwidth) are crucial properties, making this effort even more difficult. In the talk we will introduce the real-time validation tool UPPAAL (www.uppaal.com) and demonstrate, on a number of embedded system examples realized in LEGO Mindstorms, how UPPAAL enables one to find and correct bugs as well as optimize resource consumption.

Hebrew University of Jerusalem, Israel (האוניברסיטה העברית בירושלים)
Wednesday 7 April, 09:00
Analysis and Probability of Boolean Functions

Boolean functions (under various names) are important in combinatorics, probability theory, computer science, game theory, and other areas. In the talk I will discuss a few results and some problems concerning analysis and probability of Boolean functions. In the (self-contained and student-friendly) lecture I will first describe a few notions: 1.
Influence: the definition and properties of the influence of a variable on a Boolean function. (Related to: the power of a voter under an election rule; the effect of malicious processors in collective coin flipping.) 2. Noise-sensitivity: how sensitive is a Boolean function to noise? (Related to how likely it is that errors in counting the votes will change the outcome of an election.) 3. The Fourier–Walsh expansion.

Next I will briefly describe a few results: the existence of an influential variable (KKL), the tradeoff between total influence and noise sensitivity (BKS), and "majority is stablest" (MOO). Then two conjectures will be presented: the first on the relation between the expectation threshold and the actual threshold (with Jeff Kahn), and the second on the noise stability conjecture for monotone threshold circuits and the reverse Boppana–Håstad conjecture (with Itai Benjamini and Oded Schramm). Finally two extensions will be mentioned: (a) maps from Σ^n to Σ, where Σ is a larger alphabet, and their thresholds (with Elchanan Mossel), and (b) maps from {0,1}^mn to {0,1}^m in the context of judgement aggregation (with Muli Safra and Moshe Tennenholtz).

INRIA-Saclay and École Polytechnique, France
Wednesday 7 April, 14:00
Information-Theoretic Approaches to Information Flow

In recent years, there has been a growing interest in considering the quantitative aspects of Information Flow, partly because often the a priori knowledge of the secret information can be represented by a probability distribution, and partly because the mechanisms to protect the information may use randomization to obfuscate the relation between the secrets and the observables.
Among the quantitative approaches to Information Flow, the most prominent is the one based on Information Theory, which interprets a system producing information leakage as a (noisy) channel between the secrets (input) and the observables (output), and the leakage itself as the difference between the Shannon entropies of the input before and after the output (Mutual Information). This approach, however, suffers from two shortcomings: 1. the Shannon entropy is not the most suitable measure for the typical attacks in security (in particular, one-try attacks), and 2. the analogy with the (simple) channel collapses in the case of an interactive system. In this talk we discuss these issues and propose some solutions.

University of Manchester, U.K.
Thursday 8 April, 09:00
Automated Reasoning for Ontology Engineering

In the last decade, "ontologies" have been developed as logical theories capturing domain knowledge, and are used in a variety of applications, most prominently in the clinical and life sciences. They are used to design and maintain terminologies, to base information systems on, and to provide flexible access to data. Description Logics, i.e., decidable fragments of first-order logic, are used as the basis for ontology languages such as OWL, and the standardisation of these ontology languages has led to an increasing number of applications and tool developments, and to an increased interest in automated reasoning services. I will briefly introduce OWL and describe its relationship with other logics, and then describe some of the recent developments in automated reasoning for ontology engineering. On the one hand, progress was made regarding standard reasoning tasks: e.g., we have seen the development of new reasoning techniques to cope with extremely large, modestly complex theories and to answer queries against databases w.r.t. ontologies.
On the other hand, the "serious" usage of ontologies requires ever more powerful non-standard reasoning services, such as the extraction of modules or the explanation of entailments from an ontology. I will assume a basic knowledge of first-order or modal logic, and hope to provide an interesting overview of this lively area of logic-based knowledge representation.

Massachusetts Institute of Technology, U.S.A.
Thursday 8 April, 14:00
Algorithms Meet Art, Puzzles, and Magic

When I was six years old, my father Martin Demaine and I designed and made puzzles as the Erik and Dad Puzzle Company, which distributed to toy stores across Canada. So began our journey into the interactions between algorithms and the arts (here, puzzle design). More and more, we find that our mathematical research and artistic projects converge, with the artistic side inspiring the mathematical side and vice versa. Mathematics itself is an art form, and through other media such as sculpture, puzzles, and magic, the beauty of mathematics can be brought to a wider audience. These artistic endeavors also provide us with deeper insights into the underlying mathematics, by providing physical realizations of objects under consideration, by pointing to interesting special cases and directions to explore, and by suggesting new problems to solve (such as the metapuzzle of how to solve a puzzle). This talk will give several examples in each category, from how our first font design led to a universality result in hinged dissections, to how studying curved creases in origami led to sculptures at MoMA. The audience will be expected to participate in some live magic demonstrations.

Kungliga Tekniska Högskolan, Sweden
Friday 9 April, 09:30
Approximation Resistance

Consider 3-Sat, the problem of satisfying a number of clauses, each being the disjunction of exactly three literals. It is one of the basic NP-complete problems.
It has a maximization companion, Max-3-Sat, which asks to satisfy as many clauses as possible. Max-3-Sat is easily seen to be NP-hard to solve exactly, but it turns out to remain NP-hard in much weaker, approximate, forms. Given a satisfiable instance with m clauses and ε > 0, it is NP-hard to find an assignment that satisfies (7/8 + ε)m clauses. As a random assignment satisfies 7m/8 clauses in expectation, this result can be interpreted as saying that efficient computation cannot do anything useful for this problem. We say that a predicate P is "approximation resistant" if, for the problem above with the predicate P taking the role of "disjunction of three literals", it is computationally hard to find an assignment that does better than a random assignment, even when (almost) all constraints can be satisfied simultaneously. We will survey many of the results relating to approximation resistance, giving both positive and negative results as well as some indication of the proof techniques involved.
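The 7/8 baseline in the abstract is easy to check empirically: a uniformly random assignment falsifies a clause with three distinct literals only when all three literals are false, which happens with probability (1/2)³, so each clause is satisfied with probability 7/8. A small simulation (entirely my own construction, illustrating only the random-assignment baseline, not the hardness result):

```python
import random

def random_3cnf(n_vars, n_clauses, rng):
    # clause = list of (variable index, negated?) over three distinct variables
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def satisfied_fraction(clauses, assignment):
    # a literal (v, neg) is true when the variable's value differs from neg
    sat = sum(any(assignment[v] != neg for v, neg in clause) for clause in clauses)
    return sat / len(clauses)

rng = random.Random(0)
formula = random_3cnf(50, 2000, rng)
fracs = [satisfied_fraction(formula, [rng.random() < 0.5 for _ in range(50)])
         for _ in range(200)]
avg = sum(fracs) / len(fracs)
print(avg)  # close to 7/8 = 0.875
```

Beating this baseline by any constant factor on satisfiable instances is exactly what the NP-hardness result above rules out.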
Find the equation of the line tangent to the graph of sin(x) at the following value of x - Homework Help - eNotes.com

Find the equation of the line tangent to the graph of sin(x) at x = 4π.

You should find the equation of the tangent line to the graph of the function `y = sin x` at the point `x = 4pi`, such that:

`y - sin(4pi) = (dy)/(dx)|_(x = 4pi) * (x - 4pi)`

You need to evaluate `sin 4pi` using the double-angle identity:

`sin 4pi = sin(2*(2pi)) = 2 sin 2pi * cos 2pi`

Since `sin 2pi = 0`, this yields `sin 4pi = 0`.

You need to evaluate the derivative of the function `y = sin x`:

`(dy)/(dx) = cos x => (dy)/(dx)|_(x = 4pi) = cos 4pi = 2cos^2 2pi - 1`

Since `cos 2pi = 1`, this yields `cos 4pi = 1`.

Replacing 0 for `sin 4pi` and 1 for `cos 4pi` yields:

`y - 0 = 1*(x - 4pi) => y = x - 4pi`

Hence, the equation of the tangent line to the graph of the function `y = sin x` at the point `x = 4pi` is `y = x - 4pi`.
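The answer can be sanity-checked numerically: the slope at x0 = 4π is cos(4π) = 1, the intercept is sin(4π) - 1·4π = -4π, and because sin''(4π) = 0 the tangent y = x - 4π matches sin x to third order near x0. A small check (variable names are mine):

```python
import math

x0 = 4 * math.pi
slope = math.cos(x0)                   # dy/dx = cos x, equal to 1 at x0
intercept = math.sin(x0) - slope * x0  # equal to -4*pi (up to rounding)

def tangent(x):
    return slope * x + intercept       # y = x - 4*pi

for h in (0.1, 0.01):
    err = abs(math.sin(x0 + h) - tangent(x0 + h))
    print(h, err)  # the error shrinks roughly like h**3, since sin''(4*pi) = 0
```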
Computational Mathematics and Mathematical Physics Computational Science and Discovery Computer Physics Communications Contemporary Concepts of Condensed Matter Science Contemporary Physics Continuum Mechanics and Thermodynamics Contributions to Plasma Physics COSPAR Colloquia Series Current Applied Physics Diamond and Related Materials Differential Equations and Nonlinear Mechanics Doklady Physics Dynamical Properties of Solids ECS Journal of Solid State Science and Technology Egyptian Journal of Remote Sensing and Space Science Embedded Systems Letters, IEEE Energy Procedia Engineering Failure Analysis Engineering Fracture Mechanics Environmental Fluid Mechanics EPJ Nonlinear Biomedical Physics EPJ Photovoltaics EPJ Web of Conferences European Journal of Mechanics - A/Solids European Journal of Mechanics - B/Fluids European Journal of Physics European Journal of Physics Education European Physical Journal - Applied Physics European Physical Journal C Europhysics News Experimental Heat Transfer Experimental Mechanics Experimental Methods in the Physical Sciences Experimental Techniques Exploration Geophysics Few-Body Systems Fire and Materials Flexible Services and Manufacturing Journal Fluctuation and Noise Letters Fluid Dynamics Fluid-Structure Interactions Fortschritte der Physik/Progress of Physics Frontiers in Physics Frontiers of Materials Science Frontiers of Physics Fusion Engineering and Design Geochemistry, Geophysics, Geosystems Geografiska Annaler, Series A: Physical Geography Geophysical Research Letters Geoscience and Remote Sensing, IEEE Transactions on Glass Physics and Chemistry Granular Matter Graphs and Combinatorics Handbook of Geophysical Exploration: Seismic Exploration Handbook of Metal Physics Handbook of Surface Science Handbook of Thermal Analysis and Calorimetry Handbook of Thermal Conductivity Haptics, IEEE Transactions on Heat and Mass Transfer Heat Transfer - Asian Research High Energy Density Physics High Pressure Research: An 
International Journal High Temperature Hyperfine Interactions IEEE Electromagnetic Compatibility Magazine IEEE Journal of Quantum Electronics IEEE Photonics Technology Letters IEEE Signal Processing Magazine IEEE Transactions on Electromagnetic Capability IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control IET Optoelectronics Il Colle di Galileo Imaging Science Journal, The Indian Journal of Biochemistry and Biophysics (IJBB) Indian Journal of Physics Indian Journal of Pure & Applied Physics (IJPAP) Indian Journal of Radio & Space Physics (IJRSP) Industrial Electronics, IEEE Transactions on Industry Applications, IEEE Transactions on Infinite Dimensional Analysis, Quantum Probability and Related Topics Infrared Physics & Technology Intelligent Transportation Systems Magazine, IEEE International Applied Mechanics International Geophysics International Heat Treatment & Surface Engineering International Journal for Computational Methods in Engineering Science and Mechanics International Journal for Ion Mobility Spectrometry International Journal for Simulation and Multidisciplinary Design Optimization International Journal of Abrasive Technology International Journal of Aeroacoustics International Journal of Applied Electronics in Physics & Robotics International Journal of Astronomy and Astrophysics International Journal of Computational Fluid Dynamics International Journal of Computational Materials Science and Surface Engineering International Journal of Damage Mechanics International Journal of Fatigue First | 1 2 3 4 5 6 7 | Last Continuum Mechanics and Thermodynamics [5 followers] Hybrid journal It can contain Open Access articles ISSN (Print) 1432-0959 - ISSN (Online) 0935-1175 Published by Springer-Verlag [2188 journals] [SJR: 0.749] [H-I: 26] • Erratum to: Thermodynamic approach to generalized continua • Foundation, analysis, and numerical investigation of a variational network-based model for rubber □ Abstract: Abstract Since the 
pioneering work by Treloar, many models based on polymer chain statistics have been proposed to describe rubber elasticity. Recently, Alicandro, Cicalese, and the first author rigorously derived a continuum theory of rubber elasticity from a discrete model by variational convergence. The aim of this paper is twofold. First, we further physically motivate this model and complete the analysis by numerical simulations. Second, in order to compare this model to the literature, we present in a common language two other representative types of models, specify their underlying assumptions, check their mathematical properties, and compare them to Treloar’s experiments. PubDate: 2014-01-01 • A simple mixture theory for class="a-plus-plus">ν Newtonian and generalized Newtonian constituents □ Abstract: Abstract This work presents the development of mathematical models based on conservation laws for a saturated mixture of ν homogeneous, isotropic, and incompressible constituents for isothermal flows. The constituents and the mixture are assumed to be Newtonian or generalized Newtonian fluids. Power law and Carreau–Yasuda models are considered for generalized Newtonian shear thinning fluids. The mathematical model is derived for a ν constituent mixture with volume fractions ${\phi_\alpha}$ using principles of continuum mechanics: conservation of mass, balance of momenta, first and second laws of thermodynamics, and principles of mixture theory yielding continuity equations, momentum equations, energy equation, and constitutive theories for mechanical pressures and deviatoric Cauchy stress tensors in terms of the dependent variables related to the constituents. It is shown that for Newtonian fluids with constant transport properties, the mathematical models for constituents are decoupled. In this case, one could use individual constituent models to obtain constituent deformation fields, and then use mixture theory to obtain the deformation field for the mixture. 
In the case of generalized Newtonian fluids, the dependence of viscosities on deformation field does not permit decoupling. Numerical studies are also presented to demonstrate this aspect. Using fully developed flow of Newtonian and generalized Newtonian fluids between parallel plates as a model problem, it is shown that partial pressures p α of the constituents must be expressed in terms of the mixture pressure p. In this work, we propose ${p_\alpha=\phi_\alpha p}$ and ${\sum_\alpha^\nu p_\alpha = p}$ which implies ${\sum_\alpha^\nu \phi_\alpha = 1}$ which obviously holds. This rule for partial pressure is shown to be valid for a mixture of Newtonian and generalized Newtonian constituents yielding Newtonian and generalized Newtonian mixture. Modifications of the currently used constitutive theories for deviatoric Cauchy stress tensor are proposed. These modifications are demonstrated to be essential in order for the mixture theory for ν constituents to yield a valid mathematical model when the constituents are the same. Dimensionless form of the mathematical models is derived and used to present numerical studies for boundary value problems using finite element processes based on a residual functional, that is, least squares finite element processes in which local approximations are considered in ${H^{k,p}\left(\bar{\Omega}^e\right)}$ scalar product spaces. Fully developed flow between parallel plates and 1:2 asymmetric backward facing step is used as model problems for a mixture of two constituents. PubDate: 2014-01-01 • A frame-indifferent model for a thermo-elastic material beyond the three-dimensional Eulerian and Lagrangian descriptions □ Abstract: Abstract The covariance principle of differential geometry within a four-dimensional (4D) space-time ensures the validity of any equations and physical relations through any changes of frame of reference, due to the definition of the 4D space-time and the use of 4D tensors, operations and operators. 
This enables to separate covariance (i.e. frame-indifference) and material objectivity (i.e. material-indifference). We propose here a method to build a constitutive relation for thermo-elastic materials using such a 4D formalism. A 4D generalization of the classical variational approach is assumed leading to a model for a general thermo-elastic material. The isotropy of the relation can be ensured by the use of the invariants of variables, which offers new possibilities for the construction of constitutive relations. It is then possible to build a general frame-indifferent but not necessarily material-indifferent constitutive relation. It encompasses both the 3D Eulerian and Lagrangian thermo-elastic isotropic relations for finite transformations. PubDate: 2014-01-01 • Homogenization of a graphene sheet □ Abstract: Abstract A continuum model for a graphene sheet undergoing infinitesimal in-plane deformations is derived by applying the arguments of homogenization theory. The model turns out to coincide with that found by various authors with different methods, but it avoids anticipations on the validity of any properly adjusted or generalized Cauchy–Born rule. The constitutive equation for stress and the effective Young’s modulus and Poisson ratio are explicitly given in terms of the bond constants. PubDate: 2014-01-01 • Mass flow rate prediction of pressure–temperature-driven gas flows through micro/nanoscale channels □ Abstract: Abstract In this paper, we study mass flow rate of rarefied gas flow through micro/nanoscale channels under simultaneous thermal and pressure gradients using the direct simulation Monte Carlo (DSMC) method. We first compare our DSMC solutions for mass flow rate of pure temperature-driven flow with those of Boltzmann-Krook-Walender equation and Bhatnagar-Gross-Krook solutions. Then, we focus on pressure–temperature-driven flows. 
The effects of different parameters such as flow rarefaction, channel pressure ratio, wall temperature gradient and flow bulk temperature on the thermal mass flow rate of the pressure–temperature-driven flow are examined. Based on our analysis, we propose a correlated relation that expresses normalized mass flow rate increment due to thermal creep as a function of flow rarefaction, normalized wall temperature gradient and pressure ratio over a wide range of Knudsen number. We examine our predictive relation by simulation of pressure-driven flows under uniform wall heat flux (UWH) boundary condition. Walls under UWH condition have non-uniform temperature distribution, that is, thermal creep effects exist. Our investigation shows that developed analytical relation could predict mass flow rate of rarefied pressure-driven gas flows under UWH condition at early transition regime, that is, up to Knudsen numbers of 0.5. PubDate: 2014-01-01 • Strain gradient plasticity modeling and finite element simulation of Lüders band formation and propagation □ Abstract: Abstract An analytical solution of the problem of the propagation of a Lüders band in an isotropic strain gradient plasticity medium is provided based on a softening–hardening constitutive law. A detailed description is given of the plastic strain distribution in the finite size band front. The solution is shown to be harmonic in the band front and exponential in the band tail. Particular attention is paid to the conditions to be applied at the interface between both regions. This solution is then used to validate finite element simulations of the Lüders band formation and propagation in a plate in tension. The approach is shown to suppress the spurious mesh dependence exhibited by conventional finite element simulations of the Lüders behavior and to provide a finite width band front in agreement with the experimental observations from strain field measurements. 
PubDate: 2013-12-29 • Micro-macro scale instability in 2D regular granular assemblies □ Abstract: Abstract Instability and stress–strain behavior were investigated for 2D regular assemblies of cylindrical particles. Biaxial shear experiments were performed on three sets of assemblies with regular, albeit increasingly defective structures. These experiments revealed unique instability behavior of these assemblies. Continuum models for the assemblies were then constructed using the granular micromechanics approach. In this approach, the constitutive equations governing the behavior of inter-particle contacts are written in local or microscopic level. The behavior of the RVE is then retrieved by using either kinematic constraint or least squares (static constraint) along with the principle of virtual work to equate the work done by microscopic force–displacement conjugates to that of the macroscopic stress and strain tensor conjugates. The ability of the two continuum approaches to describe the measured stress–strain behavior was evaluated. The continuum models and the local constitutive laws were used to perform instability analyses. The onset of instability and orientation of shear band was found to be well predicted by the instability analyses with the continuum models. Further, macro-scale instability was found to correlate with the instability of inter-particle contacts, although with some variations for the two modeling approaches. PubDate: 2013-12-28 • Wave propagation in relaxed micromorphic continua: modeling metamaterials with frequency band-gaps □ Abstract: Abstract In this paper, the relaxed micromorphic model proposed in Ghiba et al. (Math Mech Solids, 2013), Neff et al. (Contin Mech Thermodyn, 2013) has been used to study wave propagation in unbounded continua with microstructure. By studying dispersion relations for the considered relaxed medium, we are able to disclose precise frequency ranges (band-gaps) for which propagation of waves cannot occur. 
These dispersion relations are strongly nonlinear so giving rise to a macroscopic dispersive behavior of the considered medium. We prove that the presence of band-gaps is related to a unique elastic coefficient, the so-called Cosserat couple modulus μ c , which is also responsible for the loss of symmetry of the Cauchy force stress tensor. This parameter can be seen as the trigger of a bifurcation phenomenon since the fact of slightly changing its value around a given threshold drastically changes the observed response of the material with respect to wave propagation. We finally show that band-gaps cannot be accounted for by classical micromorphic models as well as by Cosserat and second gradient ones. The potential fields of application of the proposed relaxed model are manifold, above all for what concerns the conception of new engineering materials to be used for vibration control and stealth technology. PubDate: 2013-12-19 • Unsupervised identification of damage and load characteristics in time-varying systems □ Abstract: Abstract Parameters identification of nonstationary systems is a very challenging topic that has only recently received critical attention from the research community. Aim of the paper is the structural health monitoring of bridge-like structures excited by a massive moving load, whose characteristics, such as the mass and speed, are unknown, in the presence of a localized damage along the structure. A novel method for the simultaneous identification of both the load characteristics and damage parameters from vibration measurements is proposed: the data processing relies on the ensemble empirical mode decomposition and the normalized Hilbert transform. Neither a priori information about the response of the undamaged structure nor the free decay of the damaged system is required, only a single-point measurement is needed. 
The empirical instantaneous frequency is firstly employed to estimate the load characteristics; secondly, the effect of the moving mass is filtered from the instantaneous frequency, and then, the damage position is identified. The performance of the technique are studied varying the load characteristics, damage locations, and crack depths. The effect of ambient noise is also taken into account. Numerical experiments show the identification is rather accurate, results are not very sensitive to the crack location and depth, while they are sensibly affected by the speed of the moving load. PubDate: 2013-12-15 • On the lower-order theories of continua with application to incremental motions, stability and vibrations of rods □ Abstract: Abstract The relative merit of lower-order theories, which have been deduced from the three-dimensional theories of continua, is evaluated with respect to the quantified and un-quantified errors in mathematically modeling the physical response of structural elements. Then, the one-dimensional theories are derived with high accuracy, internal consistency and flexibility from the three-dimensional theory of elasticity in order to govern the nonlinear and incremental motions and stability of a functionally graded rod. First, a kinematic-based method of separation of variables is introduced as a method of reduction, which may lead to the lower-order theories with the same order of errors of the three-dimensional theories, and the nonlinear theories of the rod are derived under Leibnitz’s postulate of structural elements by use of Hamilton’s principle. A theorem of uniqueness is proved in solutions of the linear equations of the rod by means of the logarithmic convexity argument. Next, the kinematic basis is expressed by the power series expansion in the cross-sectional coordinates using Weierstrass’s theorem. 
Mindlin’s method is used to derive the equations in an invariant and fully variational form for the small motions superposed on a static finite deformation, the stability analysis and the high-frequency vibrations of the rod. Moreover, the free vibrations of the rod are considered, the basic properties of eigenvalues are examined, and Rayleigh’s quotient is obtained. The invariant equations of the rod, which are expressible in any system of orthogonal coordinates, may provide simultaneous approximations on all the field variables in a direct method of solutions. The equations are shown to contain some of the earlier equations of rods as special cases, and the numerical elasticity solution of a sample application is presented. PubDate: 2013-11-30 • A viscoplastic approach to the behaviour of fluidized geomaterials with application to fast landslides □ Abstract: This paper deals with the modelling of landslide propagation. Its purpose is to present a methodology of analysis based on mathematical, constitutive and numerical modelling, which combines well-established theories with some improvements proposed herein. The mathematical model is based on the Biot–Zienkiewicz equations, from which a depth-integrated model is developed. The main contribution here is to combine a depth-integrated description of the soil–pore fluid mixture with a set of 1D models dealing with pore pressure evolution within the soil mass. In this way, pore pressure changes caused by vertical consolidation, changes of total stresses resulting from height variations and changes of basal surface permeability can be taken into account with more precision. Most rheological models used in depth-integrated models are derived either heuristically (the Voellmy model, for instance) or from general 3D rheological models. Here, we propose an alternative way, based on Perzyna’s viscoplasticity.
The approach followed for numerical modelling is the SPH method, which we have enriched by adding a 1D finite-difference grid to each SPH node, in order to improve the description of pore water profiles in the avalanching soil. This paper is intended as a homage to Professor Felix Darve, who has contributed greatly to the field of modern geomechanics. PubDate: 2013-11-26 • A micromechanical study of drying and carbonation effects in cement-based materials □ Abstract: This paper is devoted to a micromechanical study of the mechanical properties of cement-based materials, taking into account the effects of water saturation degree and the carbonation process. To this end, the cement-based materials are considered as a composite material consisting of a cement matrix and aggregates (inclusions). Further, the cement matrix is seen as a porous medium with a solid phase (CSH) and pores. Using a two-step homogenization procedure, a closed-form micromechanical model is first formulated to describe the basic mechanical behavior of the materials. This model is then extended to partially saturated materials in order to account for the effects of water saturation degree on the mechanical properties. Finally, considering the solid phase change and porosity variation related to the carbonation process, the micromechanical model is coupled with the chemical reaction and is able to describe the consequences of carbonation on the macroscopic mechanical properties of the material. Some comparisons between numerical results and experimental data are presented. PubDate: 2013-11-23 • A mesoscopic thermomechanically coupled model for thin-film shape-memory alloys by dimension reduction and scale transition □ Abstract: We design a new mesoscopic thin-film model for shape-memory materials which takes into account thermomechanical effects.
Starting from a microscopic thermodynamical bulk model, we guide the reader through a suitable dimension reduction procedure, followed by a scale transition valid for specimens large in area, up to a limiting model which describes microstructure by means of parametrized measures. All our models obey the second law of thermodynamics and possess suitable weak solutions. This is shown for the resulting thin-film models by making the procedure described above mathematically rigorous. The main emphasis is thus put on the modeling and mathematical treatment of the joint interactions of mechanical and thermal effects accompanying phase transitions, and on the reduction in specimen dimensions and the transition of material scales. PubDate: 2013-11-22 • On the isotropic moduli of 2D strain-gradient elasticity □ Abstract: In the present paper, the simplest model of strain-gradient elasticity will be considered, that is, isotropy in a two-dimensional space. Paralleling the definition of the classic elastic moduli, our aim is to introduce second-order isotropic moduli having a mechanical interpretation. A general construction process for these moduli will be proposed. As a result, it appears that many sets can be defined, each of them consisting of 4 moduli: 3 associated with 2 distinct mechanisms and a fourth coupling these mechanisms. We hope that these moduli (and the construction process) might be useful for forthcoming investigations in generalized continuum mechanics. PubDate: 2013-11-20 • A unifying perspective: the relaxed linear micromorphic continuum □ Abstract: We formulate a relaxed linear elastic micromorphic continuum model with symmetric Cauchy force stresses and a curvature contribution depending only on the micro-dislocation tensor. Our relaxed model is still able to fully describe rotation of the microstructure and to predict nonpolar size effects.
It is intended for the homogenized description of highly heterogeneous, but nonpolar, materials with microstructure liable to slip and fracture. In contrast to classical linear micromorphic models, our free energy is not uniformly pointwise positive definite in the control of the independent constitutive variables. The new relaxed micromorphic model supports well-posedness results for the dynamic and static case. There, decisive use is made of new coercive inequalities recently proved by Neff, Pauly and Witsch and by Bauer, Neff, Pauly and Starke. The new relaxed micromorphic formulation can be related to dislocation dynamics, gradient plasticity and seismic processes of earthquakes. It unifies and simplifies the understanding of the linear micromorphic models. PubDate: 2013-11-07 • Phase equilibria in isotropic solids □ Abstract: The paper determines the forms of the equations of equilibrium for stable coherent phase interfaces in isotropic solids. If the first phase satisfies the Baker–Ericksen inequalities strictly and the principal stretches of the second phase differ from those of the first phase, one obtains the equality of three specific generalized scalar forces and of a generalized Gibbs function. The forms of these quantities depend on the signs of the increments of the principal stretches across the interface. The proof uses the rank-1 convexity condition for isotropic materials (Šilhavý in Proc. R. Soc. Edinb. 129A:1081–1105, 1999) and is available only if the two phases are not too far from each other or if one of the two phases is a fluid (liquid). The result does not follow from the representation theorems for isotropic solids. PubDate: 2013-11-01 • Constitutive modeling of multiphase flows with moving interfaces and contact line □ Abstract: A continuum description of multiphase flows, in which excess physical quantities associated with phase interfaces and the three-phase contact line are incorporated, is briefly presented.
A thermodynamic analysis, based on the Müller–Liu approach to the second law of thermodynamics, is performed to derive the expressions of the constitutive variables in thermodynamic equilibrium. Non-equilibrium responses are proposed by use of a quasi-linear theory. A set of constitutive equations for the surface and line constitutive quantities is postulated. Some restrictions on the emerging material parameters are derived by means of the minimum conditions of the surface and line entropy productions in thermodynamic equilibrium. Hence, a complete continuum mechanical model to describe excess surface and line physical quantities is formulated. PubDate: 2013-11-01 • Dispersion relation for sound in rarefied polyatomic gases based on extended thermodynamics □ Abstract: We study the dispersion relation for sound in rarefied polyatomic gases (hydrogen, deuterium and hydrogen deuteride gases) based on the recently developed theory of extended thermodynamics (ET) of dense gases. We compare the relation with those obtained in experiments and by the classical Navier–Stokes–Fourier (NSF) theory. The applicable frequency range of the ET theory is proved to be much wider than that of the NSF theory. We evaluate the values of the bulk viscosity and the relaxation times involved in nonequilibrium processes. The relaxation time related to the dynamic pressure can become much larger than the other relaxation times, related to the shear stress and the heat flux. PubDate: 2013-11-01 • Structural and transport properties of Sn–Mg alloys □ Abstract: In this work, the temperature-dependent structural and transport properties of Mg-doped Sn-based alloys were investigated for five different samples (pure Sn, Sn-1.0 wt% Mg, Sn-2.0 wt% Mg, Sn-6.0 wt% Mg and pure Mg).
Scanning electron microscopy (SEM), X-ray diffraction and energy-dispersive X-ray analysis measurements were carried out in order to clarify the structural properties of the samples. It was found that the samples had tetragonal crystal symmetry, except for pure Mg, which had hexagonal crystal symmetry. We also found that the cell parameters changed slightly with the addition of Mg. The SEM micrographs of the samples showed that they had smooth surfaces with clear grain boundaries. The electrical and thermal conductivities of the samples were measured by the four-point probe and radial heat flow methods, respectively. The electrical resistivity of the samples increased almost linearly with increasing temperature. The thermal conductivity values ranged between 0.60 and 1.00 W/Km; they decreased slightly with temperature and increased with Mg composition. The thermal conductivity values of the alloys were in between the values of pure Sn and Mg. The thermal conductivity results of the alloys were compared with other available results, and good agreement was found. In addition, the temperature coefficients of electrical resistivity and thermal conductivity were determined; these were independent of the composition of the alloying elements. PubDate: 2013-11-01
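The temperature coefficient of electrical resistivity mentioned in the last entry is just the normalized slope of ρ(T); a minimal Python sketch, with purely hypothetical numbers (not data from the paper):

```python
def temp_coefficient(rho0, rho1, t0, t1):
    """Temperature coefficient of resistivity between two measurements:
    alpha = (rho1 - rho0) / (rho0 * (t1 - t0)), in units of 1/K."""
    return (rho1 - rho0) / (rho0 * (t1 - t0))

# Hypothetical illustrative values, not measurements from the paper:
alpha = temp_coefficient(rho0=11.0e-8, rho1=15.4e-8, t0=300.0, t1=400.0)
print(round(alpha, 5))  # 0.004
```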
Conway’s Problem and the commutation of languages - In Proc. STACS’05, Springer LNCS 3404, 2005
We construct a finite language L such that the largest language commuting with L is not recursively enumerable. This gives a negative answer to the question raised by Conway in 1971 and also strongly disproves Conway’s conjecture on context-freeness of maximal solutions of systems of semi-linear inequalities.
- Mathematical Foundations of Computer Science (MFCS 2005), 2005
Abstract. Systems of language equations of the form {ϕ(X1,..., Xn) = ∅, ψ(X1,..., Xn) ≠ ∅} are studied, where ϕ, ψ may contain set-theoretic operations and concatenation; they can be equivalently represented as strict inequalities ξ(X1,..., Xn) ⊂ L0. It is proved that the problem whether such an inequality has a solution is Σ2-complete, the problem whether it has a unique solution is in (Σ3 ∩ Π3) \ (Σ2 ∪ Π2), the existence of a regular solution is a Σ1-complete problem, while testing whether there are finitely many solutions is Σ3-complete. The class of languages representable by their unique solutions is exactly the class of recursive sets, though a decision procedure cannot be algorithmically constructed out of an inequality, even if a proof of solution uniqueness is attached.
- , 2002
We study in this thesis several problems related to commutation on sets of words and on formal power series. We investigate the notion of semilinearity for formal power series in commuting variables, introducing two families of series - the semilinear and the bounded series - both natural generalizations of the semilinear languages, and we study their behaviour under rational operations, morphisms, Hadamard product, and difference. Turning to commutation on sets of words, we then study the notions of centralizer of a language - the largest set commuting with a language -, of root and of primitive root of a set of words. We answer a question raised by Conway more than thirty years ago - asking whether or not the centralizer of any rational language is rational - in the case of periodic, binary, and ternary sets of words, as well as for rational c-codes, the most general results on this problem. We also prove that any code has a unique primitive root and that two codes commute if and only if they have the same primitive root, thus solving two conjectures of Ratoandromanana, 1989. Moreover, we prove that the commutation with a c-code X can be characterized similarly as in free monoids: a language commutes with X if and only if it is a union of powers of the primitive root of X.
- Bull. Eur. Assoc. Theor. Comput. Sci. EATCS, 2005
Abstract. We survey results, both positive and negative, on regularity of maximal solutions of systems of implicit language equations and inequalities. These results concern inequalities with constant right-hand sides, one-sided linear inequalities, inequalities with restrictions on constants, and commutation equations and inequalities. In addition, we present some of these results in a generalized form in order to underline common principles.
- Theor. Comput. Sci., 2005
The centralizer of a set of words X is the largest set of words C(X) commuting with X: XC(X) = C(X)X. It has been a long-standing open question, due to Conway, 1971, whether the centralizer of any rational set is rational. While the answer turned out to be negative in general, see Kunc 2004, we prove here that the situation is different for codes: the centralizer of any rational code is rational, and if the code is finite, then the centralizer is finitely generated. This result has previously been proved only for binary and ternary sets of words in a series of papers by the authors and for prefix codes in an ingenious paper by Ratoandromanana 1989 – many of the techniques we use in this paper follow her ideas. We also give in this paper an elementary proof for the prefix case. Key words: Codes, Commutation, Centralizer, Conway’s problem, Prefix codes.
- , 2002
We prove several results on the commutation of languages. First, we prove that the largest set commuting with a given code X, i.e., its centralizer C(X), is always (X), where (X) is the primitive root of X. Using this result, we characterize the commutation with codes similarly as for words, polynomials, and formal power series: a language commutes with X if and only if it is a union of powers of (X). This solves a conjecture of Ratoandromanana, 1989, and also gives an affirmative answer to a special case of an intriguing problem raised by Conway in 1971. Second, we prove that for any nonperiodic ternary set of words F , and moreover, a language commutes with F if and only if it is a union of powers of F , results previously known only for ternary codes. A boundary point is thus established, as these results do not hold for languages with at least four words.
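The commutation XY = YX at the heart of Conway's problem is easy to experiment with for finite languages; a minimal Python sketch (helper names are illustrative, not from the cited papers):

```python
def concat(xs, ys):
    """Product of two finite languages: { xy : x in X, y in Y }."""
    return {x + y for x in xs for y in ys}

def commutes(xs, ys):
    """Do the finite languages commute, i.e. XY = YX as sets of words?"""
    return concat(xs, ys) == concat(ys, xs)

X = {"ab"}
# Any union of powers of X commutes with X (here X^0 ∪ X^1 ∪ X^2):
Y = {"", "ab", "abab"}
print(commutes(X, Y))        # True
# A generic unrelated language does not:
print(commutes(X, {"ba"}))   # False
```

The results surveyed above say that for codes, the languages passing this test against X are exactly the unions of powers of the primitive root of X.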
module Data.Pointed where

import Control.Arrow
import Control.Applicative
import Control.Concurrent.STM
import Data.Default
import Data.Monoid as Monoid
import Data.Semigroup as Semigroup
import Data.Functor.Identity
import Data.Sequence (Seq)
import qualified Data.Sequence as Seq
import Data.Tree (Tree(..))
import Data.Set (Set)
import qualified Data.Set as Set
import Data.Functor.Constant
import qualified Data.Functor.Product as Functor
import Data.Functor.Compose
import Control.Monad.Trans.Cont
import Control.Monad.Trans.Error
import Control.Monad.Trans.List
import Control.Monad.Trans.Maybe
import Control.Monad.Trans.Identity
import Data.List.NonEmpty
import qualified Control.Monad.Trans.RWS.Lazy as Lazy
import qualified Control.Monad.Trans.RWS.Strict as Strict
import qualified Control.Monad.Trans.Writer.Lazy as Lazy
import qualified Control.Monad.Trans.Writer.Strict as Strict
import qualified Control.Monad.Trans.State.Lazy as Lazy
import qualified Control.Monad.Trans.State.Strict as Strict
import Data.Semigroupoid.Static

-- | 'Pointed' does not require a 'Functor', as the only relationship
-- between 'point' and 'fmap' is given by a free theorem.
class Pointed p where
  point :: a -> p a

instance Pointed [] where point a = [a]
instance Pointed Maybe where point = Just
instance Pointed (Either a) where point = Right
instance Pointed IO where point = return
instance Pointed STM where point = return
instance Pointed Tree where point a = Node a []
instance Pointed NonEmpty where point a = a :| []
instance Pointed ZipList where point = pure
instance Pointed Identity where point = Identity
instance Pointed ((->)e) where point = const
instance Default e => Pointed ((,)e) where point = (,) def
instance Monad m => Pointed (WrappedMonad m) where point = WrapMonad . return
instance Default m => Pointed (Const m) where point _ = Const def
instance Arrow a => Pointed (WrappedArrow a b) where point = pure
instance Pointed Dual where point = Dual
instance Pointed Endo where point = Endo . const
instance Pointed Sum where point = Sum
instance Pointed Monoid.Product where point = Monoid.Product
instance Pointed Monoid.First where point = Monoid.First . Just
instance Pointed Monoid.Last where point = Monoid.Last . Just
instance Pointed Semigroup.First where point = Semigroup.First
instance Pointed Semigroup.Last where point = Semigroup.Last
instance Pointed Semigroup.Max where point = Semigroup.Max
instance Pointed Semigroup.Min where point = Semigroup.Min
instance Pointed Seq where point = Seq.singleton
instance Pointed Set where point = Set.singleton
instance (Pointed p, Pointed q) => Pointed (Compose p q) where point = Compose . point . point
instance (Pointed p, Pointed q) => Pointed (Functor.Product p q) where point a = Functor.Pair (point a) (point a)
instance Default m => Pointed (Constant m) where point _ = Constant def
instance Pointed (ContT r m) where point a = ContT ($ a)
instance Pointed m => Pointed (ErrorT e m) where point = ErrorT . point . Right
instance Pointed m => Pointed (IdentityT m) where point = IdentityT . point
instance Pointed m => Pointed (ListT m) where point = ListT . point . point
instance Pointed m => Pointed (MaybeT m) where point = MaybeT . point . point
instance (Default w, Pointed m) => Pointed (Lazy.RWST r w s m) where point a = Lazy.RWST $ \_ s -> point (a, s, def)
instance (Default w, Pointed m) => Pointed (Strict.RWST r w s m) where point a = Strict.RWST $ \_ s -> point (a, s, def)
instance (Default w, Pointed m) => Pointed (Lazy.WriterT w m) where point a = Lazy.WriterT $ point (a, def)
instance (Default w, Pointed m) => Pointed (Strict.WriterT w m) where point a = Strict.WriterT $ point (a, def)
instance Pointed m => Pointed (Lazy.StateT s m) where point a = Lazy.StateT $ \s -> point (a, s)
instance Pointed m => Pointed (Strict.StateT s m) where point a = Strict.StateT $ \s -> point (a, s)
instance Pointed m => Pointed (Static m a) where point = Static . point . const
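The free theorem mentioned in the module's doc-comment is the law fmap f . point == point . f; a minimal Python illustration for the list "container" (helper names are illustrative, not part of the package):

```python
def point_list(a):
    """Embed a value in a singleton list (the list analogue of `point`)."""
    return [a]

def fmap_list(f, xs):
    """Map a function over a list (the list analogue of `fmap`)."""
    return [f(x) for x in xs]

# The law holds for any f and a: fmap(f, point(a)) == point(f(a)).
lhs = fmap_list(str, point_list(42))   # fmap f (point a)
rhs = point_list(str(42))              # point (f a)
print(lhs == rhs)  # True
```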
Bell, CA Geometry Tutor Find a Bell, CA Geometry Tutor ...I took and passed the CA Bar Exam in February 2011, my first and only attempt. What's more, I passed the exam without taking a standard prep course (e.g., BarBri, Kaplan, etc.), but utilizing certain BarBri prep materials. I devised and fastidiously followed a rigorous study plan that canvassed all material and contoured my mastery of the subject matter to the precise dimensions of the test. 58 Subjects: including geometry, English, reading, writing ...I am a teacher and tutor, and I can help you improve your score on the MCAT. I have scored above the 95th percentile on all the standardized tests I have taken, and I can teach you the best of what I know about taking this sort of test. In addition to my academic and test-taking expertise, I ha... 63 Subjects: including geometry, reading, English, chemistry ...Have tutored students in Algebra 2. Received 5 on Calc BC exam. I have taken several advanced math classes at Caltech since then, and have used Calculus regularly over the course of my physics 15 Subjects: including geometry, reading, physics, calculus ...I've been teaching SAT reading for 8 years. The reading test is standardized-- meaning that there are particular trends and preferences of the test. I help teach these to students, and also help them tease out questions to eliminate wrong answer choices. 49 Subjects: including geometry, reading, writing, English ...You are taking a math or science class and the concepts are starting to pile up. Maybe you don't quite understand some of the techniques, and your teacher's explanations are confusing! You have tried a number of the practice problems, but they are so complex, it's hard to know where to start! 
18 Subjects: including geometry, chemistry, physics, calculus
The Fourth Dimension, by Alfred Russel Wallace
The Fourth Dimension (S502a: 1894)
Editor Charles H. Smith's Note: A transcendental letter to the Editor printed on page 467 of the 29 September 1894 issue of the British spiritualist journal Light. To link directly to this page connect with: http://people.wku.edu/charles.smith/wallace/S502A.htm
Sir,--The discussion on this subject seems to me to be wholly founded upon fallacy and verbal quibbles. I hold, not only that the alleged fourth dimension of space cannot be proved to exist, but that it cannot exist. The whole fallacy is based upon the assumption that we do know space of one, two, and three dimensions. This I deny. The alleged space of one dimension--lines--is not space at all, but merely directions in space. So the alleged space of two dimensions--surfaces--is not space, but only the limits between two portions of space, or the surfaces of bodies in space. There is thus only one Space--that which contains everything, both actual, possible, and conceivable. This Space has no definite number of dimensions, since it is necessarily infinite, and infinite in an infinite number of directions. Because mathematicians make use of what they term "three dimensions" in order to measure certain portions of space, or to define certain positions, lines, or surfaces in it, that does not in any way affect the nature of Space itself, still less can it limit space, which it must do if any other kind of space is possible which is yet not contained in infinite Space. The whole conception of space of different dimensions is thus a pure verbal fantasy, founded on the terms and symbols of mathematicians, who have no more power to limit or modify the conception of Space itself than has the most ignorant schoolboy.
The absolute unity and all-embracing character of Space may be indicated by that fine definition of it as being "a sphere whose centre is everywhere and circumference nowhere." To anyone who thus thinks of it--and it can be rationally thought of in no other way--all the mathematicians' quibbles, of space in which parallel lines will meet, in which two straight lines can enclose a definite portion of spaces, and in which knots can be tied upon an endless cord, will be but as empty words without rational cohesion or intelligible meaning.
Testset of Stiff ODEs
All the problems of this testset are described in our monograph "Solving Ordinary Differential Equations. Volume II". Fortran 77 subroutines of the equation (and Jacobian), and of the drivers for the stiff codes RADAU5, RODAS, SEULEX, are available. If you have problems with running these drivers on your computer, please contact Ernst.Hairer@math.unige.ch. It may be necessary to replace the CALL DTIME(TARRAY) with a call to your local clock.
Remark. Further interesting test problems can be obtained from the testset for IVP solvers of Bari (former CWI Amsterdam), and from the IVP Software page of Jeff Cash; see also the IVP software page of Francesca Mazzia and Felice Iavernaro.
Stiff Problems
Mechanical Problems
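The stiffness that motivates implicit codes like RADAU5 can already be seen with the two Euler methods; a minimal Python sketch on a standard stiff test equation (chosen for illustration, not one of the testset problems):

```python
import math

# Stiff test equation: y' = -50 * (y - cos(t)), y(0) = 0.
# The solution rapidly approaches a slowly varying profile near cos(t).

def implicit_euler(h, n):
    """Backward Euler; the linear implicit equation is solved in closed form."""
    y, t = 0.0, 0.0
    for _ in range(n):
        t += h
        # y_new = y + h * (-50) * (y_new - cos(t))  =>  solve for y_new:
        y = (y + 50.0 * h * math.cos(t)) / (1.0 + 50.0 * h)
    return y

def explicit_euler(h, n):
    """Forward Euler; unstable here unless h < 2/50."""
    y, t = 0.0, 0.0
    for _ in range(n):
        y = y + h * (-50.0) * (y - math.cos(t))
        t += h
    return y

# Implicit Euler stays stable and accurate with a modest step size:
print(abs(implicit_euler(0.01, 100) - math.cos(1.0)) < 0.1)   # True
# Explicit Euler with h = 0.05 (amplification factor |1 - 50h| = 1.5) blows up:
print(abs(explicit_euler(0.05, 20)) > 10.0)                   # True
```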
Math for Dummies 21-book eBook Pack Publisher: For Dummies | 21 PDF | 251MB 21 "for Dummies" eBooks on the topic of math, logic, statistics, etc., from basic math to Trigonometry to Calculus II. The only oddball in this collection is the Calculus for Dummies book because it is a 93MB scan as opposed to an official eBook. Algebra I Essentials for Dummies ISBN: 0470618349 With its use of multiple variables, functions, and formulas, algebra can be confusing and overwhelming to learn and easy to forget. Perfect for students who need to review or reference critical concepts, Algebra I Essentials For Dummies provides content focused on key topics only, with discrete explanations of critical concepts taught in a typical Algebra I course, from functions and FOILs to quadratic and linear equations. This guide is also a perfect reference for parents who need to review critical algebra concepts as they help students with homework assignments, as well as for adult learners headed back into the classroom who just need a refresher of the core concepts. Algebra I for Dummies - 2nd Edition ISBN: 0470559640 Factor fearlessly, conquer the quadratic formula, and solve linear equations. There's no doubt that algebra can be easy to some while extremely challenging to others. If you're vexed by variables, Algebra I For Dummies, 2nd Edition provides the plain-English, easy-to-follow guidance you need to get the right solution every time! Algebra II Essentials for Dummies ISBN: 0470618400 Passing grades in two years of algebra courses are required for high school graduation. Algebra II Essentials For Dummies covers key ideas from typical second-year Algebra coursework to help students get up to speed. Free of ramp-up material, Algebra II Essentials For Dummies sticks to the point, with content focused on key topics only.
It provides discrete explanations of critical concepts taught in a typical Algebra II course, from polynomials, conics, and systems of equations to rational, exponential, and logarithmic functions. This guide is also a perfect reference for parents who need to review critical algebra concepts as they help students with homework assignments, as well as for adult learners headed back into the classroom who just need a refresher of the core concepts. Algebra II for Dummies ISBN: 0471775812 Besides being an important area of math for everyday use, algebra is a passport to studying subjects like calculus, trigonometry, number theory, and geometry, just to name a few. To understand algebra is to possess the power to grow your skills and knowledge so you can ace your courses and possibly pursue further study in math. Algebra II For Dummies is the fun and easy way to get a handle on this subject and solve even the trickiest algebra problems. This friendly guide shows you how to get up to speed on exponential functions, laws of logarithms, conic sections, matrices, and other advanced algebra concepts. In no time you’ll have the tools you need to: * Interpret quadratic functions * Find the roots of a polynomial * Reason with rational functions * Expose exponential and logarithmic functions * Cut up conic sections * Solve linear and nonlinear systems of equations * Equate inequalities * Simplify complex numbers * Make moves with matrices * Sort out sequences and sets This straightforward guide offers plenty of multiplication tricks that only math teachers know. It also profiles special types of numbers, making it easy for you to categorize them and solve any problems without breaking a sweat. When it comes to understanding and working out algebraic equations, Algebra II For Dummies is all you need to succeed! Basic Math & Pre-Algebra for Dummies ISBN: 0470135372 Get the skills you need to solve problems and equations and be ready for algebra class.
Whether you're a student preparing to take algebra or a parent who wants to brush up on basic math, this fun, friendly guide has the tools you need to get in gear. From positive, negative, and whole numbers to fractions, decimals, and percents, you'll build the necessary skills to tackle more advanced topics, such as imaginary numbers, variables, and algebraic equations.

Basic Math & Pre-Algebra Workbook for Dummies
ISBN: 0470288177
When you have the right math teacher, learning math can be painless and even fun! Let Basic Math and Pre-Algebra Workbook For Dummies teach you how to overcome your fear of math and approach the subject correctly and directly. A lot of the topics that probably inspired fear before will seem simple when you realize that you can solve math problems, from basic addition to algebraic equations. Lots of students feel they got lost somewhere between learning to count to ten and their first day in an algebra class, but help is here! Begin with basic topics like interpreting patterns, navigating the number line, rounding numbers, and estimating answers. You will learn and review the basics of addition, subtraction, multiplication, and division. Do remainders make you nervous? You'll find an easy and painless way to understand long division. Discover how to apply the commutative, associative, and distributive properties, and finally understand basic geometry and algebra.
Find out how to:
* Properly use negative numbers, units, inequalities, exponents, square roots, and absolute value
* Round numbers and estimate answers
* Solve problems with fractions, decimals, and percentages
* Navigate basic geometry
* Complete algebraic expressions and equations
* Understand statistics and sets
* Uncover the mystery of FOILing
* Answer sample questions and check your answers
Complete with lists of ten alternative numeral and number systems, ten curious types of numbers, and ten geometric solids to cut and fold, Basic Math and Pre-Algebra Workbook For Dummies will demystify math and help you start solving problems in no time!

Calculus Essentials for Dummies
ISBN: 0470618353
Just the key concepts you need to score high in calculus. From limits and differentiation to related rates and integration, this practical, friendly guide provides clear explanations of the core concepts you need to take your calculus skills to the next level. It's perfect for cramming, homework help, or review.

Calculus for Dummies
ISBN: 0764524981
The mere thought of having to take a required calculus course is enough to make legions of students break out in a cold sweat. Others who have no intention of ever studying the subject have this notion that calculus is impossibly difficult unless you happen to be a direct descendant of Einstein. Well, the good news is that you can master calculus. It's not nearly as tough as its mystique would lead you to think. Much of calculus is really just very advanced algebra, geometry, and trig. It builds upon and is a logical extension of those subjects. If you can do algebra, geometry, and trig, you can do calculus. Calculus For Dummies is intended for three groups of readers:
* Students taking their first calculus course – If you're enrolled in a calculus course and you find your textbook less than crystal clear, this is the book for you.
It covers the most important topics in the first year of calculus: differentiation, integration, and infinite series.
* Students who need to brush up on their calculus to prepare for other studies – If you've had elementary calculus, but it's been a couple of years and you want to review the concepts to prepare for, say, some graduate program, Calculus For Dummies will give you a thorough, no-nonsense refresher course.
* Adults of all ages who'd like a good introduction to the subject – Non-student readers will find the book's exposition clear and accessible.
Calculus For Dummies takes calculus out of the ivory tower and brings it down to earth. This is a user-friendly math book. Whenever possible, the author explains the calculus concepts by showing you connections between the calculus ideas and easier ideas from algebra and geometry. Then, you'll see how the calculus concepts work in concrete examples. All explanations are in plain English, not math-speak. Calculus For Dummies covers the following topics and more:
* Real-world examples of calculus
* The two big ideas of calculus: differentiation and integration
* Why calculus works
* Pre-algebra and algebra review
* Common functions and their graphs
* Limits and continuity
* Integration and approximating area
* Sequences and series
Don't buy the misconception. Sure, calculus is difficult – but it's manageable, doable. You made it through algebra, geometry, and trigonometry. Well, calculus just picks up where they leave off – it's simply the next step in a logical progression.

Calculus II for Dummies
ISBN: 047022522X
Calculus II is a prerequisite for many popular college majors, including pre-med, engineering, and physics. Calculus II For Dummies offers expert instruction, advice, and tips to help second semester calculus students get a handle on the subject and ace their exams.
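As an illustrative aside (not taken from any of the books above), the two big ideas of calculus named here, differentiation and integration, can be approximated numerically in a few lines:

```python
# A minimal sketch of numerical differentiation and integration.

def derivative(f, x, h=1e-6):
    # Central difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=100000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    w = (b - a) / n
    return sum(f(a + (i + 0.5) * w) for i in range(n)) * w

# f(x) = x**2: f'(3) = 6 and the integral over [0, 1] is 1/3.
print(round(derivative(lambda x: x**2, 3), 4))   # 6.0
print(round(integral(lambda x: x**2, 0, 1), 4))  # 0.3333
```

Both approximations get better as `h` shrinks and `n` grows, which is exactly the limiting idea the books build up to.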
Calculus Workbook for Dummies
ISBN: 076458782X
From differentiation to integration - solve problems with ease. Got a grasp on the terms and concepts you need to know, but get lost halfway through a problem or, worse yet, not know where to begin? Have no fear! This hands-on guide focuses on helping you solve the many types of calculus problems you encounter in a focused, step-by-step manner. With just enough refresher explanations before each set of problems, you'll sharpen your skills and improve your performance. You'll see how to work with limits, continuity, curve-sketching, natural logarithms, derivatives, integrals, infinite series, and more! 100s of Problems!
* Step-by-step answer sets clearly identify where you went wrong (or right) with a problem
* The inside scoop on calculus shortcuts and strategies
* Know where to begin and how to solve the most common problems
* Use calculus in practical applications with confidence

Differential Equations Workbook for Dummies
ISBN: 0470472014
Need to know how to solve differential equations? This easy-to-follow, hands-on workbook helps you master the basic concepts and work through the types of problems you'll encounter in your coursework. You get valuable exercises, problem-solving shortcuts, plenty of workspace, and step-by-step solutions to every equation. You'll also memorize the most-common types of differential equations, see how to avoid common mistakes, get tips and tricks for advanced problems, improve your exam scores, and much more!

Intermediate Statistics for Dummies
ISBN: 0470045205
Need to know how to build and test models based on data? Intermediate Statistics For Dummies gives you the knowledge to estimate, investigate, correlate, and congregate certain variables based on the information at hand. The techniques you'll learn in this book are the same techniques used by professionals in medical and scientific fields.

Linear Algebra for Dummies
ISBN: 0470430907
Does linear algebra leave you feeling lost?
No worries — this easy-to-follow guide explains the how and the why of solving linear algebra problems in plain English. From matrices to vector spaces to linear transformations, you'll understand the key concepts and see how they relate to everything from genetics to nutrition to spotted owl extinction.

Logic for Dummies
ISBN: 0471799416
Logic concepts are more mainstream than you may realize. There's logic every place you look and in almost everything you do, from deciding which shirt to buy to asking your boss for a raise, and even to watching television, where themes of such shows as CSI and Numbers incorporate a variety of logistical studies. Logic For Dummies explains a vast array of logical concepts and processes in easy-to-understand language that makes everything clear to you, whether you're a college student or a student of life.

LSAT Logic Games for Dummies
ISBN: 0470525142
Improve your score on the Analytical Reasoning portion of the LSAT. If you're like most test-takers, you find the infamous Analytical Reasoning or "Logic Games" section of the LSAT to be the most elusive and troublesome. Now there's help! LSAT Logic Games For Dummies takes the puzzlement out of the Analytical Reasoning section of the exam and shows you that it's not so problematic after all!

Math Word Problems for Dummies
ISBN: 0470146605
Everyone remembers story problems, nowadays called "word problems," from elementary school and middle school math. Solving word problems is the latest way to help students who struggle learning basic math skills, as well as to introduce more complicated math concepts. Math Word Problems For Dummies shows students and adult learners how to solve word problems with a method that works for any word problem at any level. Math-wary readers will use basic math to work through problems, focusing on elementary-level skills before moving on to algebra and geometry.
Mary Jane Sterling (Peoria, IL), a teacher for more than 25 years, is the author of numerous For Dummies books, including Algebra For Dummies (0-7645-5325-9) and Trigonometry For Dummies (0-7645-6903-1).

Pre-Algebra Essentials For Dummies
ISBN: 0470618388
Just the critical concepts you need to score high in pre-algebra. This practical, friendly guide focuses on critical concepts taught in a typical pre-algebra course, from fractions, decimals, and percents to standard formulas and simple variable equations. Pre-Algebra Essentials For Dummies is perfect for cramming, homework help, or as a reference for parents helping kids study for exams.

Pre-Calculus Workbook for Dummies
ISBN: 0470421312
Get the confidence and the math skills you need to get started with calculus! Are you preparing for calculus? This easy-to-follow, hands-on workbook helps you master basic pre-calculus concepts and practice the types of problems you'll encounter in your coursework. You get valuable exercises, problem-solving shortcuts, plenty of workspace, and step-by-step solutions to every problem. You'll also memorize the most frequently used equations, see how to avoid common mistakes, understand tricky trig proofs, and much more. 100s of Problems!
* Detailed, fully worked-out solutions to problems
* The inside scoop on quadratic equations, graphing functions, polynomials, and more
* A wealth of tips and tricks for solving basic calculus problems

Probability for Dummies
ISBN: 0471751413
Packed with practical tips and techniques for solving probability problems. Increase your chances of acing that probability exam -- or winning at the casino! Whether you're hitting the books for a probability or statistics course or hitting the tables at a casino, working out probabilities can be problematic. This book helps you even the odds. Using easy-to-understand explanations and examples, it demystifies probability -- and even offers savvy tips to boost your chances of gambling success!
Discover how to:
* Conquer combinations and permutations
* Understand probability models from binomial to exponential
* Make good decisions using probability
* Play the odds in poker, roulette, and other games

Statistics Essentials for Dummies
ISBN: 0470618396
Statistics Essentials For Dummies not only provides students enrolled in Statistics I with an excellent high-level overview of key concepts, but it also serves as a reference or refresher for students in upper-level statistics courses. Free of review and ramp-up material, Statistics Essentials For Dummies sticks to the point, with content focused on key course topics only. It provides discrete explanations of essential concepts taught in a typical first semester college-level statistics course, from odds and error margins to confidence intervals and conclusions. This guide is also a perfect reference for parents who need to review critical statistics concepts as they help high school students with homework assignments, as well as for adult learners headed back into the classroom who just need a refresher of the core concepts.

Trigonometry Workbook for Dummies
ISBN: 0764587818
From angles to functions to identities - solve trig equations with ease. Got a grasp on the terms and concepts you need to know, but get lost halfway through a problem or, worse yet, not know where to begin? No fear - this hands-on guide focuses on helping you solve the many types of trigonometry equations you encounter in a focused, step-by-step manner. With just enough refresher explanations before each set of problems, you'll sharpen your skills and improve your performance. You'll see how to work with angles, circles, triangles, graphs, functions, the laws of sines and cosines, and more! 100s of Problems!
* Step-by-step answer sets clearly identify where you went wrong (or right) with a problem
* Get the inside scoop on graphing trig functions
* Know where to begin and how to solve the most common equations
* Use trig in practical applications with confidence
Inference and algorithms for sparse Bayesian factor analysis
Magnus Rattray, Oliver Stegle, Kevin Sharp and John Winn
Journal of Physics: Conference Series, Volume 197, 2009. ISSN 1742-6596

Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.
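As a hedged illustration of the model class the abstract discusses (the sizes and hyperparameters below are my own choices, not the paper's), here is how data might be generated from a sparse factor model with a slab and spike prior on the loadings:

```python
import random

random.seed(0)

# Illustrative sizes and hyperparameters (assumptions, not from the paper).
n_genes, n_factors, n_samples = 50, 3, 200
pi, slab_sd, noise_sd = 0.2, 1.0, 0.3

# Slab-and-spike prior on the loading matrix W: each entry is exactly zero
# with probability 1 - pi (the "spike") or Gaussian (the "slab"), so most
# gene-factor links vanish, i.e. the regulatory network is sparse.
W = [[random.gauss(0, slab_sd) if random.random() < pi else 0.0
      for _ in range(n_factors)] for _ in range(n_genes)]

# Latent factors X and observations Y = W X + Gaussian noise.
X = [[random.gauss(0, 1) for _ in range(n_samples)] for _ in range(n_factors)]
Y = [[sum(W[g][k] * X[k][s] for k in range(n_factors)) + random.gauss(0, noise_sd)
      for s in range(n_samples)] for g in range(n_genes)]

sparsity = sum(w != 0 for row in W for w in row) / (n_genes * n_factors)
print(len(Y), len(Y[0]), round(sparsity, 2))  # 50 200, sparsity roughly pi
```

The inference problem the paper addresses runs this generative story in reverse: given Y, recover which entries of W are nonzero.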
FOM: reference of mathematical discourse
Randall Holmes holmes at catseye.idbsu.edu
Fri Mar 20 14:39:10 EST 1998

This posting is from M. Randall Holmes

In my postings criticizing Hersh, I have stated that for the applicability of mathematics to make sense, mathematical theories must refer to some aspect of the real world. I'm going to try to outline a hypothesis (NOT original with me) as to how this works.

I think that the simplest hypothesis is the one which I suspect is the "working hypothesis" of most mathematicians (though they may not really believe it): mathematical discourse really does refer just as it appears to on the surface. There are natural numbers, algebraic varieties, and so forth, "out there". This is simple Platonism.

There are two objections. One is the visceral objection to non-physical entities; this has very little force for me, and I will allow others to address it. One aspect which should be noted is that the mathematical universe of simple Platonism, if one supposes it to be or at least include a model of ZFC, is far larger than the physical universe, even if one supposes the physical universe to be infinite! The cardinality of the set of all physically relevant objects certainly appears to be no more than 2^\omega.

The other, and for me rather more powerful objection is the one posed by Martin Davis (and even by me in earlier postings): when we talk about the number 7, it is not clear even in the universe of simple Platonism what object we are talking about.

My view is that it is easier to start with the reference of mathematical propositions to the real world; the status of mathematical objects (and their elusiveness noted by Davis) will come out in my account of propositions. I think that all mathematical propositions are "really" of the following form:

If M is a model of theory T, then P

where P is a statement in the language of theory T about objects in the model M.
Thus, with regard to one's favorite statement about the number 7 (say A(7)), the statement usually thought of as A(7) will be

In any model of arithmetic, A(7)

or, more precisely, "In any structure with operations including a unary operation S and constant 0 satisfying <Peano's axioms>, A(S(S(S(S(S(S(S(0))))))))".

Notice that this position handles the elusiveness of 7 just fine: the identity of 7 depends on the choice of a specific relation S and object 0; 7 is hidden behind an abstract data type interface, as it were. In the simple Platonist world including a model of ZFC, we can do arithmetic as soon as we prove the existence of a model of arithmetic. There are many models of arithmetic; each of them has a different number 7. But all questions about 7 _as a natural number_ (all theorems of arithmetic about 7) will get the same answer in each model. There is of course a von Neumann ordinal traditionally called 7 in models of ZFC which has non-arithmetic properties such as 6 \in 7 which do not necessarily hold in all models of arithmetic in ZFC.

There is a serious problem with this view of mathematical propositions as conditionals. If the universe is finite, the most obvious interpretation of the conditional "There is a model of arithmetic" => P, for any P, is that it is vacuously true because there are no models of arithmetic. To make this approach work, one would then have to view the conditional as a non-truth-functional operation. The logic of counterfactuals is a nasty philosophical issue, which one would like to avoid.

However, if one supposes enough set theory to be able to prove the completeness theorem for first-order logic (one does not need ZFC for this, but one does need infinitely many objects), life becomes much simpler. The existence of models of any first-order theory comes to coincide with the consistency of the theory, so semantics and logic become easier to relate to one another.
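Holmes's remark that 7 is "hidden behind an abstract data type interface" can be made concrete with a short sketch (an illustration of mine, not part of the original posting): two set-theoretic implementations of the Peano signature (0, S), where arithmetic facts proved through the interface agree in both models while properties of the underlying sets need not.

```python
# Two models of the Peano signature (0, S) built from hereditarily
# finite sets: von Neumann numerals and Zermelo numerals.

zero = frozenset()

def von_neumann_succ(n):      # S(n) = n ∪ {n}
    return n | frozenset([n])

def zermelo_succ(n):          # S(n) = {n}
    return frozenset([n])

def numeral(succ, k):
    # Apply S to 0 k times: this model's copy of the number k.
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

for succ in (von_neumann_succ, zermelo_succ):
    # An arithmetic fact, 4 + 3 = 7, expressed through the interface:
    # adding 3 by repeated successor lands on 7 in both models.
    four, seven = numeral(succ, 4), numeral(succ, 7)
    assert succ(succ(succ(four))) == seven

# But a non-arithmetic question -- is 5 an element of 7? -- depends on the model:
print(numeral(von_neumann_succ, 5) in numeral(von_neumann_succ, 7))  # True
print(numeral(zermelo_succ, 5) in numeral(zermelo_succ, 7))          # False
```

This mirrors the point in the text: each model has a different object called 7, theorems of arithmetic about 7 come out the same everywhere, and membership facts like 6 \in 7 are artifacts of a particular implementation.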
This enormous simplification of the logic of the conditional interpretations of mathematical statements can be taken to justify the adoption as a standing hypothesis that there are enough objects to form a model of (say) second-order arithmetic.

A further point about logic is that the conditionals which I suppose here to be the true referents of mathematical propositions are second-order assertions: they involve quantification over predicates and operations, as in the example above

"For any operation S and object 0 satisfying <Peano's axioms>, A(S(S(S(S(S(S(S(0))))))))"

It seems quite natural to admit second-order theories outright, since one has to do second-order quantification anyway; however, this needs to be approached warily, since second-order logic does not have a completeness theorem.

Any mathematical assertion would then be formalized as a conditional "In any model of theory T (this part is implicit in the context, not actually expressed in the statement of a theorem in ordinary mathematical practice), P holds" in the second-order theory of some domain of entirely uncharacterized objects. To get the logic to work nicely (for first-order theories) would require at least a countable infinity of objects. Second-order theories would require stronger assumptions about how many objects there were.

Existential assumptions are required here (there are at least \omega objects and at least 2^\omega collections of objects in this framework) but they are all of the form that there are "enough" objects to build models of a given theory. Notice that these weakest assumptions might, on some views, be satisfied in the physical world!
It doesn't make sense under this view to refer to particular mathematical objects except in conditionals of the form "for any model of T ...", where T is the theory in which the object is defined; every mathematical object of a particular theory is as it were hiding behind an ADT interface (which can be implemented by choosing an actual model of the theory).

Applicability of this to the real world is a further question. Any proposition (say, of a scientific theory) which refers to mathematical objects (natural numbers, real numbers, Hilbert spaces, or whatever) must on this interpretation include the hypothesis that there is a model of the mathematical theory in which the mathematical objects in question are defined; it doesn't make sense, on this interpretation of mathematical discourse, to refer to a mathematical object outside a conditional statement about models of the theory in which it is defined.

What is needed is the principle that a statement not referring to mathematical objects which can be deduced from the scientific theory is true. If models of the mathematical theory actually exist, this is obvious; if the actual existence of models of the mathematical theory is doubtful (the existence of models of second-order ZF might be doubted even by one who regards it as certainly consistent) then one needs some kind of principle asserting that supposing the "real world" to be enhanced by the addition of a model of (for example) second order ZF induces a conservative extension of the theory of the real world; nothing false of the real world can be proven using the hypothesized extra objects. Such a principle would have to be recognized as a working hypothesis not susceptible of formal proof.

All of this is pure speculation.
I find it easiest to be a simple Platonist about ZFC + large cardinals or my favorite strong extension of NFU and build models of whatever other theories I want to think about there :-)

I do see that there is something dangerous about this proposal; I seem to have presented a view under which something like second-order arithmetic (the minimal mathematics required for the proof of the completeness theorem) is the correct foundation for all mathematics expressible in first-order theories: work in first-order ZFC is apparently to be viewed as working out the PA_2 consequences of Con(ZFC). But perhaps ZF-istes to whom this view would appeal would prefer second-order ZF as their foundation.

And God posted an angel with a flaming sword at | Sincerely, M. Randall Holmes
the gates of Cantor's paradise, that the       | Boise State U. (disavows all)
slow-witted and the deliberately obtuse might  | holmes at math.idbsu.edu
not glimpse the wonders therein.               | http://math.idbsu.edu/~holmes
What is a term?

April 13th 2011, 07:49 AM, #1, Junior Member (Mar 2011):
A term is either a single number or variable, or the product of several numbers or variables, separated from another term by a + or - sign in an overall expression. For example, in 3 + 4x + 5yzw, the quantities 3, 4x, and 5yzw are all terms. (Wikipedia)
But it says either a single number... or the product of several numbers. So does that mean that 4 is a term, x is a term, and y, z, and w are also terms? In that case, how does it affect things like the associative property? Does it not allow for an anarchic rewriting of the original expression?

April 13th 2011, 08:15 AM, #2 (Apr 2011):
The 5yzw etc. are all terms, as is the 4x. The individual components of each term are either constants, such as the 5, or variables, like the x. If you have studied polynomials, then you are familiar with the terms in a polynomial. However, as can be seen here (Arithmetic and Geometric Sequences), a term can be a combination of elements...

April 30th 2012, 07:13 AM, #3, Junior Member (Mar 2011):
Re: What is a term? But what would make the elements of a term not terms too? Or are they terms?
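The distinction drawn in the thread, terms of an expression versus the factors inside a term, can be sketched mechanically (a toy of mine; it handles only simple forms like "3 + 4x - 5yzw"):

```python
import re

def terms(expr):
    # Split a simple expression into its terms, each carrying its sign.
    expr = expr.replace(" ", "")
    if expr[0] not in "+-":
        expr = "+" + expr          # give the first term an explicit sign
    return re.findall(r"[+-][0-9]*[a-z]*", expr)

def factors(term):
    # Break one term into its constant and variable factors.
    sign, coeff, variables = re.match(r"([+-])([0-9]*)([a-z]*)", term).groups()
    return [sign + (coeff or "1")] + list(variables)

print(terms("3 + 4x + 5yzw"))  # ['+3', '+4x', '+5yzw']
print(factors("+5yzw"))        # ['+5', 'y', 'z', 'w']
```

So 3, 4x, and 5yzw are terms, while 5, y, z, and w are factors of the term 5yzw, which is the answer the replies converge on.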
Spectral density of first order piecewise linear system excited by white noise
Atkinson, John David (1967). Spectral density of first order piecewise linear system excited by white noise. California Institute of Technology. (Unpublished)
Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechEERL:1967.DYNL.1967.002

[Contains mathematical notation that does not convert: see the report for the correct formulas.]

The Fokker-Planck (FP) equation is used to develop a general method for finding the spectral density for a class of randomly excited first order systems. This class consists of systems satisfying stochastic differential equations of form . . . where f and the . . . are piecewise linear functions (not necessarily continuous), and the . . . are stationary Gaussian white noise. For such systems, it is shown how the Laplace-transformed FP equation can be solved for the transformed transition probability density. By manipulation of the FP equation and its adjoint, a formula is derived for the transformed autocorrelation function in terms of the transformed transition density. From this, the spectral density is readily obtained. The method generalizes that of Caughey and Dienes, J. Appl. Phys., 32, 11. This method is applied to 4 subclasses: (1) . . . (forcing function excitation); (2) . . . (parametric excitation); (3) . . .; (4) the same, uncorrelated. Many special cases, especially in subclass (1), are worked through to obtain explicit formulas for the spectral density, most of which have not been obtained before. Some results are graphed. Dealing with parametrically excited first order systems leads to two complications. There is some controversy concerning the form of the FP equation involved (see Gray and Caughey, J. Math.
Phys., 44, 3); and the conditions which apply at irregular points, where the second order coefficient of the FP equation vanishes, are not obvious but require use of the mathematical theory of diffusion processes developed by Feller and others. These points are discussed in the first chapter, relevant results from various sources being summarized and applied. Also discussed is the steady-state density (the limit of the transition density as . . .).

Item Type: Report or Paper (Technical Report)
Additional Information: PhD, 1967
Group: Dynamics Laboratory
Record Number: CaltechEERL:1967.DYNL.1967.002
Persistent URL: http://resolver.caltech.edu/CaltechEERL:1967.DYNL.1967.002
Usage Policy: You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format.
A Rose By Another Name
Mathematics in Design

This is a sphereflake:

The sphereflake is created by orienting spheres symmetrically around a larger sphere. The process is continued with spheres of exponentially smaller sizes around these smaller spheres, repeating the process infinitely. The sphereflake can then be attached to others to create more complex patterns. This is just one example of how mathematics is used to design and create objects of beauty. Through math, architects can create spaces that are pleasing to the eye through symmetry and
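The recursive construction described above can be sketched in code. This is a simplified stand-in of mine (classic sphereflake renderings place nine children per sphere in a fixed symmetric pattern; here six axis-aligned children are used for brevity):

```python
# Recursively place child spheres of one-third the radius around each
# sphere, touching its surface, down to a depth limit.
def sphereflake(center, radius, depth, spheres):
    spheres.append((center, radius))
    if depth == 0:
        return
    x, y, z = center
    child_r = radius / 3.0
    # Simplified symmetry: six axis-aligned children tangent to the parent.
    for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        d = radius + child_r
        sphereflake((x + dx * d, y + dy * d, z + dz * d),
                    child_r, depth - 1, spheres)

spheres = []
sphereflake((0.0, 0.0, 0.0), 1.0, 2, spheres)
print(len(spheres))  # 1 + 6 + 36 = 43 spheres at depth 2
```

The radii shrink geometrically with each level, which is the "exponentially smaller sizes" the post describes; a true fractal is the limit as the depth goes to infinity.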
User: To convert kilometers to meters, move the decimal how many place(s), and in which direction?
Weegy: Three places to the right (1 km = 1,000 m).
User: To convert centimeters to meters, move the decimal two place(s) in which direction?
Weegy: Two places to the left (100 cm = 1 m).
User: How do you calculate the number of feet if you are given a distance measured in miles?
Weegy: Multiply the distance in miles by 5280 (1 mile = 5,280 feet).
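The conversions discussed above amount to multiplying or dividing by a fixed factor; a minimal sketch:

```python
# Unit conversions: moving the decimal is just multiplying or
# dividing by a power of ten.
def km_to_m(km):
    return km * 1000        # decimal moves three places to the right

def cm_to_m(cm):
    return cm / 100         # decimal moves two places to the left

def miles_to_feet(miles):
    return miles * 5280     # 1 mile = 5280 feet

print(km_to_m(2.5))      # 2500.0
print(cm_to_m(345))      # 3.45
print(miles_to_feet(3))  # 15840
```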