Re: RE: st: out-of-date commands

From: khigbee@stata.com
To: statalist@hsphsun2.harvard.edu
Subject: Re: RE: st: out-of-date commands
Date: Mon, 03 Mar 2003 12:45:18 -0600

Giovanni Vecchi <vecchi@economia.uniroma2.it> asks:

> If I may add a comment, I would like to note that the solution suggested
> by Ken does not allow one to spot the fact that -tutorial- does not exist
> anymore. If I understand it correctly, by typing <search out of date,
> entry> one gets a list of Stata 7 commands which have been
> replaced/renamed in Stata 8. My point in citing the -tutorial- command in
> my previous posting was that it is an example of a Stata command which has
> disappeared with no replacement. I am sorry if I was unclear.
> Thus, my new question could be phrased as follows: is anyone aware of
> how to get a list of all Stata 7 commands which have been dropped from
> Stata 8 and have not been replaced?

Friedrich Huebler <huebler@rocketmail.com> mentions that, unlike Stata 7, Stata 8 does not have tutorial files:

> Stata 8 has no tutorials. You can copy the tutorial files from Stata
> 7, but some of the instructions (the graphics, for example) are
> obsolete.

and Nick Cox <n.j.cox@durham.ac.uk> said:

> ... My impression
> is that the tutorials -- defined as what are run by -tutorial- --
> are not being maintained into Stata 8. Even if you copy
> the old files, they won't necessarily work.
> In the case of graphics, -whelp graph_intro- is a substitute.
> The style of the tutorials -- essentially emitting a
> windowful of stuff at a time -- has arguably become very out-of-date
> anyway. I've sensed students getting bored and bailing out when presented
> with this style. The contents had in many cases become very out-of-date
> as well. I've not used them for teaching for some years. It
> is a lot easier -- and in many ways just as effective -- to
> write documents to be looked at in the Viewer. Naturally, that's
> some work as compared with no work in using pre-prepared tutorials.

Nick's impression is correct. We realized that the tutorials in Stata 7 were not really the direction we wanted to continue going with Stata; they were too linear. The direction we believe we want to go is along the lines of graph_intro.hlp, where you can read, skip around, click to see it run, etc., and are not forced to go through the material in a straight line. Over time we want to introduce more hlp files like graph_intro.hlp, or possibly devise even better methods to help new users learn.

We forgot to create an appropriate -search- entry and -hlp- file for -tutorial- for version 8. In the next ado-file update we will provide files that explain that -tutorial- is out-of-date. -tutorial-, though out-of-date, can still be run on user-created tutorials, just as in the past. But the old official Stata tutorials are, we believe, no longer of much interest and are not included in Stata 8.

Ken Higbee    khigbee@stata.com
StataCorp     1-800-STATAPC

*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
Modifications of Kleinberg’s HITS algorithms Using Matrix Exponentiation and Web Log Records

Results 1 - 10 of 12

1. In SIGIR, 2004. Cited by 36 (5 self).
Link analysis has shown great potential in improving the performance of web search. PageRank and HITS are two of the most popular algorithms. Most existing link analysis algorithms treat a web page as a single node in the web graph. However, in most cases a web page contains multiple semantics, and hence the web page might not be considered an atomic node. In this paper, the web page is partitioned into blocks using the vision-based page segmentation algorithm. By extracting the page-to-block and block-to-page relationships from link structure and page layout analysis, we can construct a semantic graph over the WWW such that each node represents exactly one semantic topic. This graph can better describe the semantic structure of the web. Based on block-level link analysis, we propose two new algorithms, Block Level PageRank and Block Level HITS, whose performance we study extensively using web data.

2. 2007. Cited by 31 (0 self).
The community analysis algorithm proposed by Clauset, Newman, and Moore (the CNM algorithm) finds community structure in social networks. Unfortunately, the CNM algorithm does not scale well, and its use is practically limited to networks of up to about 500,000 nodes. The paper identifies that this inefficiency is caused by merging communities in an unbalanced manner. The paper introduces three kinds of metrics (consolidation ratio) to control the process of community analysis, trying to balance the sizes of the communities being merged. Three flavors of the CNM algorithm are built incorporating these metrics. The proposed techniques are tested using data sets obtained from an existing social networking service that hosts 5.5 million users. All the methods exhibit dramatic improvement in execution efficiency over the original CNM algorithm and show high scalability. The fastest method processes a network with 1 million nodes in 5 minutes and a network with 4 million nodes in 35 minutes. Another processes a network with 500,000 nodes in 50 minutes (7 times faster than the original algorithm), finds community structures with improved modularity, and scales to a network with 5.5 million nodes.

3. In Proceedings of the 16th International World Wide Web Conference (WWW-07), 2007. Cited by 13 (0 self).
Users searching for information in hypermedia environments often perform querying followed by manual navigation. Yet the conventional text/hypertext retrieval paradigm does not take post-query navigation into account. This paper proposes a new retrieval paradigm, called navigation-aided retrieval (NAR), which treats both querying and navigation as first-class activities. In the NAR paradigm, querying is seen as a means to identify starting points for navigation, and navigation is guided based on information supplied in the query. NAR is a generalization of the conventional probabilistic information retrieval paradigm, which implicitly assumes no navigation takes place.

4. In Proceedings of the 3rd International Conference on Web Information Systems Engineering, 2002. Cited by 9 (2 self).
Web link analysis has been proved to provide significant enhancement to the precision of web search in practice. Among existing approaches, Kleinberg's HITS and Google's PageRank are the two most representative algorithms that employ the explicit hyperlink structure among web pages to conduct link analysis, while DirectHit represents the other extreme, taking the user's access frequency as an implicit link to a web page when counting its importance. In this paper, we propose a novel link analysis algorithm that puts both explicit and implicit link structures under a unified framework, and show that HITS and DirectHit are essentially the two extreme instances of our proposed method. One important advantage of our method is its ability to analyze not only the hyperlinks between web pages but also the interactions between users and the Web at the same time. The importance of web pages and users can reinforce each other to improve web link analysis. Compared with the traditional HITS and DirectHit algorithms, our method further improves search precision by 11.8% and 25.3%, respectively.

5. IEEE Transactions on Knowledge and Data Engineering, 2004. Cited by 8 (0 self).
In this paper, we study the problem of mining the informative structure of a news Web site that consists of thousands of hyperlinked documents. We define the informative structure of a news Web site as a set of index pages (or referred to as TOC, i.e., ...

6. SIAM Journal on Scientific Computing, 2006. Cited by 7 (0 self).
Abstract. Algorithms such as Kleinberg's HITS algorithm, the PageRank algorithm of Brin and Page, and the SALSA algorithm of Lempel and Moran use the link structure of a network of webpages to assign weights to each page in the network. The weights can then be used to rank the pages as authoritative sources. These algorithms share a common underpinning: they find a dominant eigenvector of a non-negative matrix that describes the link structure of the given network and use the entries of this eigenvector as the page weights. We use this commonality to give a unified treatment, proving the existence of the required eigenvector for the PageRank, HITS, and SALSA algorithms, the uniqueness of the PageRank eigenvector, and the convergence of the algorithms to these eigenvectors. However, we show that the HITS and SALSA eigenvectors need not be unique. We examine how the initialization of the algorithms affects the final weightings produced. We give examples of networks that lead the HITS and SALSA algorithms to return non-unique or nonintuitive rankings. We characterize all such networks in terms of the connectivity of the related HITS authority graph. We propose a modification, Exponentiated Input to HITS, to the adjacency matrix input of the HITS algorithm. We prove that Exponentiated Input to HITS returns a unique ranking so long as the network is weakly connected. Our examples also show that SALSA can give inconsistent hub and authority weights, due to non-uniqueness. We also mention a small modification to the SALSA initialization which makes the hub and authority weights consistent.

7. In Proceedings of the 26th International Conference on Software Engineering (ICSE 2004), 2004. Cited by 5 (1 self).
The Hyperlink Induced Topic Search algorithm, a method of link analysis primarily developed for retrieving information from the Web, is extended in this paper in order to evaluate one aspect of quality in an object-oriented model. Considering the number of discrete messages exchanged between classes, it is possible to identify "God" classes in the system, elements which imply a poorly designed model. The principal eigenvectors of matrices derived from the adjacency matrix of a modified class diagram are used to identify and quantify heavily loaded portions of an object-oriented design that deviate from the principle of distributed responsibilities. The non-principal eigenvectors are also employed to identify possible reusable components in the system. The methodology can be easily automated, as illustrated by a Java program that has been developed for this purpose.

8. In OOPSLA '91: Proceedings of the 6th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, 2003. Cited by 3 (0 self).
A method of link analysis employed for retrieving information from the Web is extended in order to evaluate one aspect of quality in an object-oriented model. The principal eigenvectors of matrices derived from the adjacency matrix of a modified class diagram are used to identify and quantify heavily loaded portions of an object-oriented design that deviate from the principle of distributed ...

9. 2005. Cited by 1 (1 self).
Abstract: The paper describes RankFeed, a new adaptive method of recommendation that benefits from similarities between searching and recommendation. Concepts widely used in searching, such as the initial ranking and positive and negative feedback, are applied to recommendation in order to enhance its coverage while maintaining high accuracy. There are four principal factors that determine the method's behaviour: the quality document ranking, navigation patterns, textual similarity, and the list of recommended pages that have been ignored during navigation. In the evaluation part, the local site behaviour of the RankFeed ranking is contrasted with PageRank. Additionally, the recommendation behaviour of RankFeed versus other classical approaches is evaluated.

10. In Workshop on Link Analysis for Detecting Complex Behavior (LinkKDD), 2003. Cited by 1 (0 self).
Link-analysis-based techniques for ranking the vertices of a directed graph have been widely studied in the social networks and bibliometrics communities. More recently, they have been popularized in the context of web graphs by the PageRank [1] and HITS [2] algorithms, both of which under appropriate normalization correspond to different random walk (or "surfing") models. PageRank is maximally local in the sense that its equivalent surfer ignores the links of the surrounding vertices, whereas the corresponding surfing model for normalized HITS is a second-order model, as its behaviour is independent of the rest of the graph given the vertices a single hop away. In this paper we propose a way of generalizing these strategies by taking into account non-local effects of higher order, while remaining computationally efficient. The need for such an extension is motivated by the fact that PageRank and HITS have complementary biases: PageRank can only take advantage of direct endorsement, whereas HITS can only identify close-knit structures. The approach leads to a series of parameterized schemes, where the value of the parameter determines the weights assigned to the neighbouring vertices, connected either by forward or by backward links (edges). The parametric form allows us to select its value to optimize some desirable quality. Access to "correct" rankings would allow finding the optimal value of the parameter in a supervised learning setup. Typically, however, such data is difficult to come by, so we propose and provide solutions for two optimization criteria: (i) maximum entropy, motivated by the desire to make minimal extra assumptions, and (ii) maximum stability. The framework and techniques developed in this paper apply to a wide range of networks. We empiricall...
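Most of the entries above modify or extend Kleinberg's HITS. For reference, here is a minimal sketch of the unmodified HITS power iteration (the textbook version, not any of the variants discussed above); the tiny example graph is made up:

```python
# Minimal HITS power iteration on a directed graph given as adjacency lists.
# Textbook algorithm only -- none of the block-level or exponentiated variants.

def hits(graph, iterations=50):
    """graph: dict mapping node -> list of nodes it links to."""
    nodes = set(graph) | {v for targets in graph.values() for v in targets}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Authority score: sum of hub scores of pages linking to the node.
        auth = {n: sum(hub[u] for u in nodes if n in graph.get(u, []))
                for n in nodes}
        # Hub score: sum of authority scores of pages the node links to.
        hub = {n: sum(auth[v] for v in graph.get(n, [])) for n in nodes}
        # Normalize so the scores do not grow without bound.
        a_norm = sum(x * x for x in auth.values()) ** 0.5
        h_norm = sum(x * x for x in hub.values()) ** 0.5
        auth = {n: x / a_norm for n, x in auth.items()}
        hub = {n: x / h_norm for n, x in hub.items()}
    return hub, auth

# Tiny example: 'a' and 'b' both link to 'c', so 'c' is the top authority.
hub, auth = hits({'a': ['c'], 'b': ['c'], 'c': []})
print(max(auth, key=auth.get))  # 'c'
```

The iteration converges to the dominant eigenvectors of AᵀA and AAᵀ, which is exactly the non-uniqueness issue the SIAM entry above analyzes.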
Summary: Discrete Kakeya-type problems and small bases

Noga Alon, Boris Bukh, Benny Sudakov

A subset U of a group G is called k-universal if U contains a translate of every k-element subset of G. We give several nearly optimal constructions of small k-universal sets, and use them to resolve an old question of Erdős and Newman on bases for sets of integers, and to obtain several extensions for other groups.

1 Introduction

A subset U of R^d is a Besicovitch set if it contains a unit-length line segment in every direction. The Kakeya problem asks for the smallest possible Minkowski dimension of a Besicovitch set. It is widely conjectured that every Besicovitch set has Minkowski dimension d. For large d the best lower bounds come from the approach pioneered by Bourgain [4], which is based on combinatorial number theory. For example, in [5] it is shown that if every set X ⊆ Z/pZ containing a translate of every k-term arithmetic progression is of size at least Ω(N^(1−ε(k))) with ε(k) → 0 as k → ∞, then the Kakeya conjecture is true. In this paper we address a related problem where instead of seeking a set containing a translate of every k-term arithmetic progression, we demand that the set contain a translate of every k-element set. We do not restrict the problem to the cyclic groups, and consider general (possibly non-abelian)
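The k-universal definition in the summary can be checked by brute force on small cyclic groups; this sketch is purely illustrative and not from the paper:

```python
from itertools import combinations

def is_k_universal(U, k, n):
    """Check whether U, a subset of Z/nZ, contains a translate of every
    k-element subset of Z/nZ (the paper's definition, specialized to Z/nZ)."""
    U = set(U)
    for S in combinations(range(n), k):
        # Does some shift t move all of S into U (mod n)?
        if not any(all((s + t) % n in U for s in S) for t in range(n)):
            return False
    return True

# The whole group is trivially 2-universal; a 2-element set is not.
print(is_k_universal(range(5), 2, 5))  # True
print(is_k_universal({0, 1}, 2, 5))    # False: no translate of {0, 2} fits
```

The paper's point is that k-universal sets can be made far smaller than the whole group; the brute-force check above is only feasible for tiny n and k.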
Major requirements

The major in computer science consists of a minimum of 12 courses: seven core computer science courses, plus two mathematics courses, a senior seminar and two electives at or above the 200 level. For those students who place out of the introductory course(s), the additional course(s) needed to meet the minimum requirement will be determined in consultation with the department. Courses used to fulfill the major requirements may not be taken on a pass/fail basis. To major in computer science, the department strongly recommends that students achieve at least a C+ average in the first two computer science courses and that the first two math courses be completed by the second year.

Required courses
- COMP 111 Foundations of Computing Theory
- COMP 115 Robots, Games and Problem Solving
- COMP 116 Data Structures
(strongly recommend at least a combined 2.67 GPA in these courses to continue)

Four computer science core courses
- COMP 215 Algorithms
- COMP 220 Computer Organization and Assembly Language
- Select two of the following:
  - COMP 335 Principles of Programming Languages
  - COMP 345 Operating Systems
  - COMP 375 Theory of Computation

Two math courses
- MATH 101 Calculus I
- MATH 104 Calculus II
- MATH 151 Accelerated Statistics
- MATH 202 Cryptography
- MATH 211 Discrete Mathematics
- MATH 221 Linear Algebra
- MATH 236 Multivariable Calculus

Two additional computer science courses (or mathematics with permission) at or above the 200 level
- COMP 242 DNA
- COMP 255 Artificial Intelligence
- COMP 325 Database Systems
- COMP 365 Computer Graphics
- COMP 499 Independent Research

Senior seminar
- COMP 401 Senior Seminar
Re: mathml xsl stylesheets?

Subject: Re: mathml xsl stylesheets?
From: STENZEL@xxxxxxxxxx
Date: Mon, 23 Oct 2000 13:49:11 +0200

>STENZEL@xxxxxxxxxx writes:
> > Sebastian, fair question ... The final goal would be to transform
> > DocBook/MathML documents to HTML
>that's a fairly impossible request :-)

Ok, to be more precise: ... transform DocBook/MathML documents to something which can be viewed with an HTML browser ... Note, this might include:
- transforming the MathML to GIFs
- transforming the MathML to something else
- keeping the MathML and using a browser plug-in
- ...

> > and LaTeX
>that's _fairly_ easy. the xmltex package includes a partial
>implementation of MathML2, which could easily be turned into XSLT in a
>matter of minutes :-)

You would not have these minutes it takes, would you? :-) Maybe some more background information is needed: xmltex is an XML parser implemented in TeX ... I must admit I am (still) too much of a LaTeX (and therefore TeX) beginner to feel comfortable with the idea of fiddling too much with my TeX setup, adding this and that package, modifying this and that configuration file ... I spent some time with SGML DocBook, DSSSL, Norman's DSSSL stylesheets and JadeTeX; making it work was sort of painful, too complicated (for me) to set up to do more than once ... (sorry, this is going too far away from XSL and this list, so I stop here)

Then I switched to XML DocBook, XSL, Norman's XSL stylesheets for HTML, plus some stylesheet which I adapted from HTML to output TeXML. Once I realised that TeXML is not needed, I modified those to output 'basic' LaTeX directly (not a lot is supported so far, but it basically works). This setup is much simpler: a 'standard' TeX installation, a 'standard' XSLT engine (be it Xalan or Saxon or whatever) plus some stylesheets (NO additional TeX/LaTeX packages, NO additional programs to 'texify' special characters) and bingo: HTML and PDF from the same source documents. And now I would like to include some mathematical formulas... :-)

>well, I use one which does an identity transform of any <math>
>elements it finds, but that's probably not what you want!

probably not. thanks anyway for the offer ... and all the replies so far

XSL-List info and archive: http://www.mulberrytech.com/xsl/xsl-list
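For the "transform the MathML to something else" option discussed above, a minimal sketch (not from the thread) that swaps each <math> element for a plain-text placeholder — a stand-in for a GIF-generation or LaTeX-conversion step — using only Python's standard library. The namespace URI is the standard MathML one; the document and placeholder text are illustrative:

```python
import xml.etree.ElementTree as ET

MATHML_NS = "http://www.w3.org/1998/Math/MathML"

def replace_math(xhtml_text, placeholder="[formula]"):
    """Replace every MathML <math> element with a plain-text placeholder,
    standing in for rendering the formula to an image or LaTeX."""
    root = ET.fromstring(xhtml_text)
    # Walk parents so children can be swapped in place.
    for parent in root.iter():
        for i, child in enumerate(list(parent)):
            if child.tag == f"{{{MATHML_NS}}}math":
                span = ET.Element("span")
                span.text = placeholder
                span.tail = child.tail  # keep text following the formula
                parent[i] = span
    return ET.tostring(root, encoding="unicode")

doc = ('<p>Euler: <math xmlns="http://www.w3.org/1998/Math/MathML">'
       '<mi>e</mi></math> done.</p>')
print(replace_math(doc))  # <p>Euler: <span>[formula]</span> done.</p>
```

A real pipeline would hand each extracted <math> subtree to a renderer instead of a placeholder, but the tree surgery is the same.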
Discrete Uniform Random Variable

Post #1 (October 13th 2009, 10:18 AM, member since Mar 2009):
A discrete uniform random variable, X, has PMF of the form

p_X(k) = 1/(b − a + 1) for k = a, a+1, ..., b, and 0 otherwise,

where a and b are two integers with a < b. How do I sketch the PMF of X? How do I work out E[X]? Sorry about the lack of LaTeX.

Post #2 (October 13th 2009, 02:50 PM):
You're placing equal probability on the numbers a, a+1, ..., b. The mean is the 'midpoint'. Just add the integers from a to b and divide by the number of terms, b − a + 1.

Posts #3-#4 (October 14th 2009):
I'm sorry, I don't understand what you are saying. Am I not right in thinking that E[X] = $\frac{1}{b-a+1}$ x $\frac{b+a}{2}$? As $\frac{b+a}{2}$ would be the mean of all the values k will take and $\frac{1}{b-a+1}$ is the probability of all these values?

Last edited by sirellwood; October 14th 2009 at 04:06 PM.
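The answer in the thread can be verified numerically: for X uniform on {a, ..., b}, E[X] = (a+b)/2 — the extra 1/(b−a+1) factor in the follow-up post is already absorbed into the sum over the b−a+1 values. A quick check:

```python
from fractions import Fraction

def discrete_uniform_mean(a, b):
    """E[X] for X uniform on the integers a, a+1, ..., b:
    sum over k of k * P(X = k), with P(X = k) = 1/(b - a + 1)."""
    n = b - a + 1
    return sum(Fraction(k, n) for k in range(a, b + 1))

# Example: uniform on {3, ..., 7} has mean (3 + 7)/2 = 5.
print(discrete_uniform_mean(3, 7))  # 5
assert discrete_uniform_mean(3, 7) == Fraction(3 + 7, 2)
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt about the equality.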
Highly oscillatory PDEs, slow manifolds and regularized PDE formulations The main motivation of my talk is provided by geophysical fluid dynamics. The underlying Euler or Navier-Stokes equations display oscillatory wave dynamics on a wide range of temporal and spatial scales. Simplified models are often based on the idea of balance and the concept of a slow manifold. Examples are provided by hydrostatic and geostrophic balance. One would also like to exploit these concepts on a computational level. However, slow manifolds are idealized objects that do not fully characterize the complex fluid behavior. I will describe a novel regularization technique that makes use of balance and slow manifolds in an adaptive manner. The regularization approach is based on a reinterpretation of the (linearly) implicit midpoint rule as an explicit time-stepping method applied to a regularized set of Euler equations. Adaptivity can be achieved by means of a predictor-corrector interpretation of the regularization.
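The abstract's regularization hinges on a reinterpretation of the implicit midpoint rule. As a reference point, here is a minimal sketch of the plain (non-regularized) implicit midpoint rule on a linear oscillator; the regularized Euler formulation itself is not reproduced here, and ω and h are arbitrary illustrative values:

```python
# Implicit midpoint rule for the linear oscillator y' = A y with
# A = omega * J, J = [[0, 1], [-1, 0]]. For a linear system the implicit
# step (I - h/2 A) y_{n+1} = (I + h/2 A) y_n can be solved exactly with
# a 2x2 inverse, so no Newton iteration is needed. Illustrative only.

def midpoint_step(y, h, omega):
    a, b = y
    c = h * omega / 2.0
    # Right-hand side: (I + c J) y.
    rhs = (a + c * b, b - c * a)
    det = 1.0 + c * c  # det(I - c J)
    # Inverse of [[1, -c], [c, 1]] is [[1, c], [-c, 1]] / det.
    return ((rhs[0] + c * rhs[1]) / det,
            (-c * rhs[0] + rhs[1]) / det)

# The resulting map is a Cayley transform of the skew matrix J, hence
# orthogonal: it conserves the quadratic invariant a^2 + b^2 exactly,
# even for time steps that under-resolve the oscillation.
y, h, omega = (1.0, 0.0), 0.1, 2.0
for _ in range(1000):
    y = midpoint_step(y, h, omega)
print(round(y[0] ** 2 + y[1] ** 2, 9))  # 1.0
```

This exact conservation of quadratic invariants is what makes the implicit midpoint rule attractive for the oscillatory regimes the talk addresses.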
On 26 Apr, 2013, by Anonymous

The Department of Mathematics has used technology in its teaching of the subject since 1989, beginning with the use of Maple for calculus. Currently a number of mathematics courses offered in the department use the power of today's computer technology to simulate real-world situations in the classroom. We use software packages like Maple, Microsoft Excel, SPSS, MATLAB, C++ and Java programming environments, TeX/LaTeX typesetting packages, etc. A Math Lab containing twelve Pentium 4 PCs running Windows XP Pro and 2 classrooms with thirty PCs are used heavily throughout the year.
8.3.1. Paul-Baker and other three-mirror anastigmatic aplanats

The Paul-Baker three-mirror system combines the concept of the Mersenne telescope with the unique property of a sphere to be free from primary coma and astigmatism with the aperture stop at the radius of curvature. The basic, curved-field PB concept uses a pair of confocal mirrors - a concave paraboloidal primary and a convex spherical secondary - with the third, concave spherical mirror placed so that the secondary vertex coincides with its center of curvature. In effect, the secondary acts as an aperture stop for the tertiary (FIG. 129). As long as the radii of curvature of the tertiary and secondary are identical, their spherical aberration contributions are also identical and of opposite sign, and the system is free from primary spherical aberration, as well as from coma and astigmatism.

FIGURE 129: Paul-Baker three-mirror anastigmatic aplanat. The basic arrangement (left) consists of a concave paraboloidal primary (P), convex spherical secondary (S) and concave spherical tertiary (T). The only remaining third-order aberration is field curvature, which can be corrected by extending the secondary-to-tertiary separation and aspherizing the secondary (right). The secondary in either arrangement is at the center of curvature of the tertiary mirror.

The parameters are exceptionally simple for a three-mirror telescope: the system is fully specified by the above description for primaries of ~ƒ/3 and slower. Faster PB systems require higher-order aspherics added to one or more surfaces for optimum performance. The PB image curvature is Rp = R1/2, R1 being the radius of curvature of the primary, with the curvatures of the secondary and tertiary cancelling each other.
The PB flat-field variant also uses a paraboloidal primary and spherical tertiary. Primary-to-secondary separation is given by S1 = (1-k)R1/2, with the secondary conic K2 = -1 + (1-k)^3 and the radius of curvature R2 = kR1. The distance to the tertiary - and the tertiary radius of curvature - is S2 = R3 = kR1/(1-k). The k parameter is the height of the marginal ray at the secondary (and tertiary), in units of the aperture radius. It is also the secondary-to-primary-focus separation in units of the primary's focal length. In either case, it is determined by the secondary location, which is arbitrary (except for the small/large extremes).

The FAA1 and FAA2 are also relatively simple systems, rivaling the Paul-Baker in image quality. The FAA1 consists of a near-paraboloidal (ellipsoidal) primary and a pair of spherical mirrors, as shown on FIG. 130a. A pair of spherical mirrors in this arrangement can also work with a paraboloidal primary, but can't correct all four 3rd-order aberrations (leaving distortion out); very low residual coma remains present. Required system parameters are more complex than those for the PB. For that reason, only actual prescriptions are given for the main FAA1 arrangement, the anastigmatic aplanat, and the version with paraboloidal primary, with slight residual coma (the parameters scale with the aperture, or with the primary radius of curvature):

- FAA1, flat-field anastigmatic aplanat: σ1=0.54515, σ2=-0.11423, ρ2=0.14433, ρ3=0.16907, K1=-0.95, K2,3=0
- FA, flat-field anastigmat: σ1=0.5429, σ2=-0.1115, ρ2=0.1429, ρ3=0.1666, K1=-1, K2,3=0
- AP, aplanat: σ1=0.544, σ2=-0.113, ρ2=0.12, ρ3=0.167, K1=-1, K2,3=0

where σ1 and σ2 are the primary-to-secondary and secondary-to-tertiary separation, respectively, and ρ2 and ρ3 are the secondary and tertiary radius of curvature, respectively, all expressed in units of the primary radius of curvature. The effective system focal length is typically somewhat (up to ~10%) smaller than the primary's focal length. The parameters should give near-optimum performance; it can be maintained within a small degree of compensatory changes in parameters (for instance, the aplanat gives nearly identical performance with the secondary-to-tertiary distance changed to -0.11R, and the tertiary radius to 0.164R).

FIGURE 130: (a) FAA1, flat-field anastigmatic aplanat consisting of: (1) concave ellipsoidal primary, (2) small convex spherical secondary outside the primary focus, and (3) concave spherical tertiary. The field is somewhat more limited in size than in either the Paul-Baker or FAA2, due to vignetting caused by the restricted tertiary size. The configuration can't vary significantly, but the system ƒ/ratio is very flexible. Excellent performance extends to ~ƒ/3, without a need to add higher-order surface aspherics (in the range of amateur apertures). (b) FAA2, a flat-field anastigmatic aplanat consisting of: (1) concave ellipsoidal primary, (2) small convex spherical secondary placed inside the primary focus, and (3) concave ellipsoidal tertiary. The effective central obstruction achievable with this design is significantly smaller than with either the Paul-Baker or FAA1. This makes it viable for visual observing, after a diagonal flat is added to make the final image accessible. Not quite as well corrected as the FAA1 at very large relative apertures, but the difference is of no practical significance. First published by Korsch in a very similar form.

The FAA2 uses a concave ellipsoidal primary, a convex spherical secondary forming the focus between the two mirrors, and a concave ellipsoidal tertiary placed behind the primary (FIG. 130b). It is very similar to Korsch's three-mirror flat-field anastigmatic aplanat, in which the secondary is hyperboloidal (there is no need for aspherizing in my version). Mirror positioning is similar to the flat-field PB, but the mirrors are somewhat easier to make, baffling is better, and the central obstruction required for a given field is smaller.
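The flat-field Paul-Baker relations quoted above are easy to tabulate; the function below just evaluates them, with an illustrative R1 = 2400mm and k = 0.25 (sign conventions ignored, magnitudes only — not a prescription from the text):

```python
def flat_field_paul_baker(R1, k):
    """Flat-field Paul-Baker layout from the relations in the text:
    S1 = (1-k)R1/2, R2 = kR1, K2 = -1 + (1-k)^3, S2 = R3 = kR1/(1-k),
    where R1 is the primary's radius of curvature and k is the marginal
    ray height at the secondary in units of the aperture radius."""
    return {
        "S1 (primary-to-secondary)": (1 - k) * R1 / 2,
        "R2 (secondary radius)": k * R1,
        "K2 (secondary conic)": -1 + (1 - k) ** 3,
        "S2 = R3 (tertiary distance and radius)": k * R1 / (1 - k),
    }

# Illustrative values only: a 2400mm-radius primary, secondary at k = 0.25.
for name, value in flat_field_paul_baker(2400.0, 0.25).items():
    print(f"{name}: {value:.4f}")
```

With k = 0.25 this gives S1 = 900, R2 = 600, K2 = -0.578125 and S2 = R3 = 800, illustrating how a single choice of secondary location fixes the whole layout.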
Correction of primary aberrations is even slightly better than in the PB, but the difference has no practical consequences. An actual system prescription, with the above notation, is as follows: σ1=0.41896, σ2=-0.70833, ρ2=0.25, ρ3=1/3, K1=-0.7196, K2=0 and K3=-0.3496. The effective system focal length is typically ~50% longer than that of the primary. In setting up the design, a near flat-field condition requires near-zero Petzval, which is achieved by selecting the needed radii of curvature of the secondary and tertiary for a given primary. Spherical aberration is controlled with the primary conic, coma by the primary-to-secondary separation, and astigmatism by the secondary-to-tertiary separation (the former influences coma much more than astigmatism, the latter the other way around, so it takes a few steps to have them optimally balanced). FIG. 131 illustrates the performance of the above systems with ray spot plots over a 1° field diameter.

FIGURE 131: 1° field ray spot plots for (left to right) the curved and flat-field Paul-Baker (PB), the anastigmatic aplanats FAA1 and FAA2 (FA is the FAA1 version with parabolic primary), and the folded Cassegrain-Gregorians, 3A (reduced by a factor of 4 to fit in) and 3AA. Aperture diameter is 300mm for all except 3AA (D=400mm). At ƒ/3, higher-order spherical becomes noticeable, in particular in the flat-field PB variant (which requires higher-order aspheric term surface correction). Note that the angular field for 3AA is different than for the rest of the systems: 0°, 0.23° and 0.33° (for a 25mm field radius); also, spots for the PB (curved field), 3A and 3AA are on the best curved image surface, with radii -900mm, 333mm and -240mm, respectively.

As mentioned, visual field quality in the Cassegrain-Gregorian can be expected to be significantly better, due to its astigmatism partly cancelling that of the eyepiece.
For instance, the RMS error of the above CG at 0.5° off-axis is some 2.5 times the error of a comparable Newtonian in the image produced by the objective. With a typical 30mm Kellner, which still has a few times stronger astigmatism, opposite in sign to that in the CG, the resulting CG RMS error is about 25% smaller than that in the Newtonian. With modern eyepiece types better corrected for astigmatism, nearly full field correction at a certain eyepiece f.l. is possible. In principle, similar field correction effects should be expected in combination with a focal reducer lens.
Boston Prealgebra Tutor Find a Boston Prealgebra Tutor ...I love school: currently, I am majoring in Neurobiology at Boston University. Because I work as an EMT and teach EMT training classes, my background in Science is both hands on and in a teaching environment. In the past, I have found that courses that provided problems for me happened because I was not getting enough individual attention in class. 14 Subjects: including prealgebra, chemistry, calculus, geometry ...I have taken four semesters of Japanese so far and have received an "A" in each of them. I feel very passionate about my Japanese heritage and would be more than happy to assist anyone that shares that same passion. I am a second year student at Northeastern University. 11 Subjects: including prealgebra, reading, Japanese, English ...I currently have a BS in Industrial and Management Engineering, and I am working towards a master's degree in engineering as well. I am fully bilingual in Spanish and English, both written and oral. I can tutor in most basic undergraduate courses (Calculus I, physics, chem, writing, etc) and all of high, middle and elementary school classes. 29 Subjects: including prealgebra, English, reading, Spanish ...I received an A in my Advanced Algebra II high school course six years ago. I am well specialized in all topics that fall under a second course in algebra. Having received an A in my AP Biology course in high school, followed by a score of 5 on the AP test, I can comfortably claim that I am very well suited in tutoring a first course in either regular or advanced biology. 15 Subjects: including prealgebra, chemistry, calculus, biology ...Recently, I tutored algebra 2 students nationwide through email or telephone at Pearson. I have also taught a college algebra course at Bunker Hill Community College. Do you need assistance with your derivatives and integrals? 12 Subjects: including prealgebra, calculus, physics, algebra 1
DOCUMENTA MATHEMATICA, Vol. 4 (1999), 127-166

B. Kreußler

Twistor Spaces With a Pencil of Fundamental Divisors

In this paper simply connected twistor spaces $Z$ containing a pencil of fundamental divisors are studied. The Riemannian base for such spaces is diffeomorphic to the connected sum $n\PP$. We obtain for $n\ge 5$ a complete description of the set of curves intersecting the fundamental line bundle $\fb{1}$ negatively. For this purpose we introduce a combinatorial structure, called \emph{blow-up graph}. We show that for generic $S\in\mid\fund\mid$ the algebraic dimension can be computed by the formula $a(Z)=1+\kappa^{-1}(S)$. A detailed study of the anti-Kodaira dimension $\kappa^{-1}(S)$ of rational surfaces permits one to read off the algebraic dimension from the blow-up graphs. This gives a characterisation of Moishezon twistor spaces by the structure of the corresponding blow-up graphs. We study the behaviour of these graphs under small deformations. The results are applied to prove the main existence result, which states that every blow-up graph belongs to a fundamental divisor of a twistor space. We show, furthermore, that a twistor space with $\dim\mid\fund\mid=3$ is a LeBrun space \cite{LeB2}. We characterise such spaces also by the property of containing a smooth rational non-real curve $C$ with $C.(\fund)=2-n$.

1991 Mathematics Subject Classification: 32L25, 32J17, 32J20, 14M20

Keywords and Phrases: Moishezon manifold, algebraic dimension, self-dual, twistor space

Full text: dvi.gz 82 k, dvi 229 k, ps.gz 163 k.
UNSPECIFIED. (1995) PIk MASS-PRODUCTION AND AN OPTIMAL CIRCUIT FOR THE NECIPORUK SLICE. COMPUTATIONAL COMPLEXITY, 5 (2). pp. 132-154. ISSN 1016-3328

Let f : {0,1}^n --> {0,1}^m be an m-output Boolean function in n variables. f is called a k-slice if f(x) equals the all-zero vector for all x with Hamming weight less than k and f(x) equals the all-one vector for all x with Hamming weight more than k. Wegener showed that "PIk-set circuits" (set circuits over prime implicants of length k) are at the heart of any optimum Boolean circuit for a k-slice f. We prove that, in PIk-set circuits, savings are possible for the mass production of any F\X, i.e., any collection F of m output-sets given any collection X of n input-sets, if their PIk-set complexity satisfies SCm(F\X) >= 3n + 2m. This PIk mass production, which can be used in monotone circuits for slice functions, is then exploited in different ways to obtain a monotone circuit of complexity 3n + o(n) for the Neciporuk slice, thus disproving a conjecture by Wegener that this slice has monotone complexity Theta(n^(3/2)). Finally, the new circuit for the Neciporuk slice is proven to be asymptotically optimal, not only with respect to monotone complexity, but also with respect to combinational complexity.
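The k-slice definition in the abstract can be stated directly as a small program. This is our own illustrative sketch (the helper names are not from the paper):

```python
# Sketch illustrating the k-slice definition quoted above: f(x) is forced
# to all-zeros below Hamming weight k and all-ones above it; only
# weight-k inputs carry the actual value of f.
def make_k_slice(f, k, m):
    """Wrap an m-output Boolean function f into its k-slice."""
    def slice_f(x):                  # x is a tuple of 0/1 values
        w = sum(x)                   # Hamming weight
        if w < k:
            return (0,) * m
        if w > k:
            return (1,) * m
        return f(x)
    return slice_f

# Example: a 1-output function restricted to its 2-slice on 4 inputs.
g = make_k_slice(lambda x: (x[0] ^ x[1],), k=2, m=1)
print(g((0, 0, 0, 0)))  # (0,)  weight < k
print(g((1, 1, 1, 0)))  # (1,)  weight > k
print(g((1, 1, 0, 0)))  # (0,)  weight == k: value of f itself
```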
RPI Mathematical Sciences: Mathematical Sciences Problem Solving Math Club

The Mathematics Problem Solving Club at Rensselaer meets weekly to work on challenging mathematical problems. The club is focused on working on and discussing problems such as those found in math competitions, as well as other interesting problems suggested by students and faculty. We would like to invite students of all majors to join the Mathematical Sciences Problem Solving Math Club. The club meets Tuesdays from 4 p.m. - 6 p.m. in Amos Eaton 402. All students are invited to join us. Many students in the Mathematics Problem Solving Club also participate in the Putnam Mathematical Competition. The Putnam Competition is held on the first Saturday of December every year. If you have any questions, feel free to contact Bruce Piper or Joanne Kessler. External Links: Club participants may join the student chapter of the Mathematical Association of America.
The Non-Stationary Poisson Equation

We introduce in this section what I call the "non-stationary" Poisson equation, hereafter the NSP equation. This equation is very easy to implement, in a very general numerical context, and very easy to solve with simple but very robust numerical schemes. Let us report in the following the NSP equation.

Actually, it is trivial to develop and implement a numerical solver for the NSP equation. In fact, in the context of finite differences, such a numerical scheme can be obtained by applying finite-difference approximations of derivatives to the NSP equation. This is what we will see in the next paragraph.

Didier Link 2007-05-18
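As a concrete illustration of the finite-difference approach described above, here is a minimal sketch for the ordinary 1-D stationary Poisson equation d²φ/dx² = -ρ with zero boundary values, solved by Jacobi iteration (our own example problem; the non-stationary variant adds time dependence that this sketch omits):

```python
# Minimal sketch (our own, not from the lecture notes): Jacobi iteration
# for the 1-D Poisson equation  d^2(phi)/dx^2 = -rho, phi = 0 at both ends.
def poisson_1d(rho, h, iterations=5000):
    n = len(rho)
    phi = [0.0] * n
    for _ in range(iterations):
        new = phi[:]
        for i in range(1, n - 1):
            # Central difference: phi[i] = (phi[i-1] + phi[i+1] + h^2 rho[i]) / 2
            new[i] = 0.5 * (phi[i - 1] + phi[i + 1] + h * h * rho[i])
        phi = new
    return phi

# Uniform unit charge density on [0, 1]; exact solution is phi(x) = x(1-x)/2.
n, h = 21, 1.0 / 20
phi = poisson_1d([1.0] * n, h)
x = 10 * h  # midpoint
print(abs(phi[10] - x * (1 - x) / 2) < 1e-3)  # True
```

Because the exact solution here is quadratic, the central difference is exact at the nodes and the only remaining error is the (tiny) iteration residual.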
y^2+36x+8y-92 = 0

What are the intercepts of this parabola?
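A quick way to check the intercepts of y^2 + 36x + 8y - 92 = 0 is to substitute y = 0 (x-intercept) and x = 0 (y-intercepts); a short sketch:

```python
# Sketch of the intercept computation for y^2 + 36x + 8y - 92 = 0.
import math

# x-intercept: set y = 0  ->  36x - 92 = 0
x_int = 92 / 36
print(round(x_int, 4))  # 2.5556  (= 23/9)

# y-intercepts: set x = 0  ->  y^2 + 8y - 92 = 0
disc = 8 ** 2 + 4 * 92           # b^2 - 4ac with a = 1, c = -92
y1 = (-8 + math.sqrt(disc)) / 2  # -4 + 6*sqrt(3)
y2 = (-8 - math.sqrt(disc)) / 2  # -4 - 6*sqrt(3)
print(round(y1, 4), round(y2, 4))  # 6.3923 -14.3923
```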
Integrate functions perform the time evolution of a given ODE from some starting time t[0] to a given end time t[1], starting at state x[0], by subsequent calls of a given stepper's do_step function. Additionally, the user can provide an observer to analyze the state during time evolution. There are five different integrate functions which have different strategies on when to call the observer function during integration. All of the integrate functions except integrate_n_steps can be called with any stepper following one of the stepper concepts: Stepper, Error Stepper, Controlled Stepper, Dense Output Stepper. Depending on the abilities of the stepper, the integrate functions make use of step-size control or dense output. If observer calls at equidistant time intervals dt are needed, the integrate_const or integrate_n_steps function should be used. We start with explaining integrate_const:

integrate_const( stepper , system , x0 , t0 , t1 , dt )
integrate_const( stepper , system , x0 , t0 , t1 , dt , observer )

These integrate the ODE given by system with subsequent steps from stepper. Integration starts at t0 and x0 and ends at some t' = t[0] + n dt with n such that t[1] - dt < t' <= t[1]. x0 is changed to the approximate solution x(t') at the end of integration. If provided, the observer is invoked at times t[0], t[0] + dt, t[0] + 2dt, ..., t'. integrate_const returns the number of steps performed during the integration. Note that if you are using a simple Stepper or Error Stepper and want to make exactly n steps you should prefer the integrate_n_steps function below.

• If stepper is a Stepper or Error Stepper then dt is also the step size used for integration and the observer is called just after every step.
• If stepper is a Controlled Stepper then dt is the initial step size. The actual step size will change due to error control during time evolution. However, if an observer is provided the step size will be adjusted such that the algorithm always calculates x(t) at t = t[0] + n dt and calls the observer at that point. Note that the use of a Controlled Stepper is reasonable here only if dt is considerably larger than the typical step sizes used by the stepper.
• If stepper is a Dense Output Stepper then dt is the initial step size. The actual step size will be adjusted during integration due to error control. If an observer is provided, dense output is used to calculate x(t) at t = t[0] + n dt.

integrate_n_steps is very similar to integrate_const above. The only difference is that it does not take the end time as a parameter, but rather the number of steps. The integration is then performed until the time t0 + n*dt.

integrate_n_steps( stepper , system , x0 , t0 , dt , n )
integrate_n_steps( stepper , system , x0 , t0 , dt , n , observer )

Integrates the ODE given by system with subsequent steps from stepper starting at x[0] and t[0]. If provided, observer is called after every step and at the beginning with t0, similar as above. The approximate result for x( t[0] + n dt ) is stored in x0. This function returns the end time t0 + n*dt.

If the observer should be called at each time step then the integrate_adaptive function should be used. Note that in the case of a Controlled Stepper or Dense Output Stepper this leads to non-equidistant observer calls as the step size changes.

integrate_adaptive( stepper , system , x0 , t0 , t1 , dt )
integrate_adaptive( stepper , system , x0 , t0 , t1 , dt , observer )

Integrates the ODE given by system with subsequent steps from stepper. Integration starts at t0 and x0 and ends at t[1]. x0 is changed to the approximate solution x(t[1]) at the end of integration. If provided, the observer is called after each step (and before the first step at t0). integrate_adaptive returns the number of steps performed during the integration.

• If stepper is a Stepper or Error Stepper then dt is the step size used for integration and integrate_adaptive behaves like integrate_const except that for the last step the step size is reduced to ensure we end exactly at t1. If provided, the observer is called at each step.
• If stepper is a Controlled Stepper then dt is the initial step size. The actual step size is changed according to error control of the stepper. For the last step, the step size will be reduced to ensure we end exactly at t1. If provided, the observer is called after each time step (and before the first step at t0).
• If stepper is a Dense Output Stepper then dt is the initial step size and integrate_adaptive behaves just like for a Controlled Stepper above. No dense output is used.

If the observer should be called at some user-given time points the integrate_times function should be used. The times for observer calls are provided as a sequence of time values. The sequence is either defined via two iterators pointing to begin and end of the sequence or in terms of a Boost.Range object.

integrate_times( stepper , system , x0 , times_start , times_end , dt , observer )
integrate_times( stepper , system , x0 , time_range , dt , observer )

Integrates the ODE given by system with subsequent steps from stepper. Integration starts at *times_start and ends exactly at *(times_end-1). x0 contains the approximate solution at the end point of integration. This function requires an observer which is invoked at the subsequent times *times_start++ until times_start == times_end. If called with a Boost.Range time_range, the function behaves the same with times_start = boost::begin( time_range ) and times_end = boost::end( time_range ). integrate_times returns the number of steps performed during the integration.

• If stepper is a Stepper or Error Stepper, dt is the step size used for integration. However, whenever a time point from the sequence is approached the step size dt will be reduced to obtain the state x(t) exactly at the time point.
• If stepper is a Controlled Stepper then dt is the initial step size. The actual step size is adjusted during integration according to error control. However, if a time point from the sequence is approached the step size is reduced to obtain the state x(t) exactly at the time point.
• If stepper is a Dense Output Stepper then dt is the initial step size. The actual step size is adjusted during integration according to error control. Dense output is used to obtain the states x(t) at the time points from the sequence.

In addition to the sophisticated integrate functions above, odeint also provides a simple integrate routine which uses a dense output stepper based on runge_kutta_dopri5 with standard error bounds of 10^-6 for the steps.

integrate( system , x0 , t0 , t1 , dt )
integrate( system , x0 , t0 , t1 , dt , observer )

This function behaves exactly like integrate_adaptive above but no stepper has to be provided. It also returns the number of steps performed during the integration.
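The calling convention described above for integrate_const (observer at t0, t0+dt, ..., return value equal to the number of steps) can be mimicked with a toy fixed-step integrator. This is an illustrative Python stand-in, not Boost.odeint itself, and the Euler stepper is only the simplest possible choice:

```python
# Illustration only (Python stand-in, not Boost.odeint): the calling
# pattern of integrate_const with a simple fixed-step stepper -- the
# observer fires at t0, t0+dt, ..., and the number of steps is returned.
def integrate_const(step, system, x0, t0, t1, dt, observer=None):
    x, t, n = x0, t0, 0
    if observer:
        observer(x, t)                   # observer sees the initial state
    while t + dt <= t1 + 1e-12:          # stop at t' = t0 + n*dt <= t1
        x = step(system, x, t, dt)       # one do_step call
        t += dt
        n += 1
        if observer:
            observer(x, t)               # observer after every step
    return n

euler = lambda f, x, t, dt: x + dt * f(x, t)   # simplest possible stepper

times = []
steps = integrate_const(euler, lambda x, t: -x, 1.0, 0.0, 1.0, 0.25,
                        observer=lambda x, t: times.append(t))
print(steps, times)  # 4 [0.0, 0.25, 0.5, 0.75, 1.0]
```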
Speaker Biographies

Scientific Organizing Committee

G. Jogesh Babu <babu@psu.edu> http://www.stat.psu.edu/~babu/ Professor of Statistics, Director of the Center for Astrostatistics, Penn State University. Fellow AAAS, ASA, IMS, Elected member ISI. Founding editor Statistical Methodology. Co-author of monograph Astrostatistics, co-organizer of SCMA conferences and statistics Summer Schools. Research in asymptotic theory, bootstrap theory, number theory, astrostatistics, and other topics of mathematical statistics.

David Banks http://www.stat.duke.edu/~banks/ Professor of Statistical Science, Duke University. Fellow ASA, various government achievement awards. Former positions as statistician for U.S. Dept. of Transportation, FDA, and NIST. Co-editor of six books including Encyclopedia of Statistical Sciences. Research in data mining, Bayesian methodology, risk assessment, multivariate statistics and other topics of mathematical statistics.

Lawrence D. Brown http://www-stat.wharton.upenn.edu/~lbrown/ Miers Busch Professor of Statistics, Wharton School, University of Pennsylvania. Member NAS, Fellow IMS, ASA Wilks Award, former Pres. IMS. Former editor Annals of Statistics. Research in regression, Bayesian methodology, density estimation, wavelet analysis, Poisson processes, decision theory, and other topics of mathematical statistics.

David van Dyk http://www.ics.uci.edu/~dvd/ Professor of Statistics, University of California-Irvine. Fellow ASA and IMS. Former editor Journal of Computational and Graphical Statistics. Coordinator, California Harvard Astro-Statistics Collaboration. Research in astrostatistics, Bayesian methodology and computation, causal inference, and other topics of statistical methodology.

Chris Koen http://www.uwc.ac.za/ Professor of Statistics, University of Western Cape SA. Ph.D's in both astronomy and statistics. Research in time series analysis, variable stars, and astrostatistics.
Fionn Murtagh http://www.cs.rhul.ac.uk/home/fionn/ Member Royal Irish Academy, Fellow IAPR, BCS. Director, Science Foundation Ireland. Former Professor of Computer Science, University of London. Past-President Classification Society and British Classification Society. Past Editor The Computer Journal. Co-author of five monographs on astronomical image processing and signal processing, co-editor of ~15 volumes in informatics. Research on machine vision, image analysis, signal processing, classification, semantics, and other topics in informatics.

Chad Schafer http://www.stat.cmu.edu/~cschafer Asst. Professor of Statistics, Carnegie Mellon University. Member, McWilliams Center for Cosmology. Research in methodology for astronomical inference problems including construction of optimal constraints on cosmological parameters, bivariate luminosity functions, and low-dimensional characterizations of complex data.

Kirk Borne http://classweb.gmu.edu/kborne/ Assoc. Professor of Astrophysics and Computational Science, Dept. of Computational and Data Sciences, George Mason University. Chair, LSST Informatics and Statistics Science Collaboration, former scientist at Space Telescope Science Institute. Research in galaxy dynamics and evolution, Virtual Observatory and LSST informatics, data mining, citizen science and public outreach.

Eric Feigelson <edf@astro.psu.edu> http://astro.psu.edu/users/edf Professor of Astronomy and Astrophysics, Penn State University. Member VAO Science Council, NRAO Users Committee, Chandra ACIS Team. Sci. Editor Astrophysical Journal. Co-author of monograph Astrostatistics, co-organizer of SCMA conferences and statistics Summer Schools. Research in X-ray astronomy, star and planet formation, statistical education for astronomy.

Alan Heavens http://www.roe.ac.uk/~afh/ Professor of Theoretical Astrophysics, Institute for Astronomy, University of Edinburgh UK. Fellow RAS and RSE.
Research includes statistical cosmology (galaxy clustering, weak lensing, cosmic background radiation), likelihood theory and Bayesian methodology, medical imaging, statistics education.

Thomas Loredo http://www.astro.cornell.edu/staff/loredo/ Senior Research Associate, Department of Astronomy, Cornell University. Research in astrostatistics, Bayesian inference for astronomy, statistical computation, statistics education, extrasolar planets, Solar System minor bodies.

Pavlos Protopapas http://timemachine.iic.harvard.edu/ Senior scientist and lead investigator, Time Series Center, Institute for Innovative Computing, Harvard University. Research in time series methodology for astronomy, data mining and classification, grid computing, Solar System minor bodies.

Jean-Luc Starck http://jstarck.free.fr/ Senior scientist, Service d'Astrophysique, CEA-Saclay, France. Coauthor of four monographs on astronomical image analysis, coeditor of ten conferences including Astronomical Data Analysis conferences. Research in statistical cosmology (cosmic microwave background fluctuations, weak lensing, and large-scale structure), sparse representations (wavelet, curvelet, etc.) and their applications in astronomy.

Licia Verde http://icc.ub.edu/~liciaverde ICREA Professor, Institute of Cosmological Sciences, University of Barcelona ES. Former Chandra/Spitzer Fellow. Research in statistical cosmology (cosmic background fluctuations, large-scale structure, astronomical surveys), statistical computation, statistics education.

Invited Speakers and Commentators

Ethan Anderes http://www.stat.ucdavis.edu/~anderes Asst. Professor, Statistics Department, University of California - Davis

Nicholas Ball https://www.astrosci.ca/users/NickBall/ Assistant Research Officer, Herzberg Institute for Astrophysics, Canada

Richard Baraniuk http://www.ece.rice.edu/~richb/ Cameron Professor of Electrical & Computer Engineering, Rice University

Othman Benomar Postdoctoral researcher, School of Physics, University of Sydney AU

Alexander Blocker http://www.awblocker.com/ Graduate student, Dept. of Statistics, Harvard University

Joshua Bloom http://astro.berkeley.edu/~jbloom/ Associate Professor, Astronomy Dept., University of California - Berkeley

Tamas Budavari http://www.sdss.jhu.edu/ Research scientist, Dept. of Physics & Astronomy, Johns Hopkins University

Andrew Connolly http://www.astro.washington.edu/users/ajc/ Associate Professor, Department of Astronomy, University of Washington

David Donoho http://www-stat.stanford.edu/~donoho/ Bass Professor of Humanities and Sciences, Dept. of Statistics, Stanford University

Didier Fraix-Burnet http://www-laog.obs.ujf-grenoble.fr/~fraix/accueildfb.htm Docteur, Laboratoire d'Astrophysique de l'Observatoire de Grenoble

Peter Freeman http://www.stat.cmu.edu/~pfreeman/ Project Scientist, Department of Statistics, Carnegie Mellon University

Christopher Genovese http://www.stat.cmu.edu/~genovese/ Professor, Department of Statistics, Carnegie Mellon University

Alexander Gray http://www.cc.gatech.edu/~agray/ Assistant Professor, College of Computing, Georgia Institute of Technology

Carlo Graziani Senior Research Associate, Department of Astronomy and Astrophysics, University of Chicago

Philip Gregory http://www.physics.ubc.ca/~gregory/gregory.html Professor Emeritus, Department of Physics and Astronomy, University of British Columbia

Fabrizia Guglielmetti http://www.mpe.mpg.de/rumwuslerseite.php?user=fabrizia Postdoctoral researcher, High-Energy Astrophysics, Max-Planck Institute for Extraterrestrial Physics, Germany

Martin Hendry home page Faculty, School of Physics and Astronomy, University of Glasgow

David Higdon http://www.stat.lanl.gov/source/orgs/ccs/ccs6/staff/DHigdon Group Leader, Statistical Sciences, Los Alamos National Laboratory

Joseph Hilbe http://en.wikipedia.org/wiki/Joseph_Hilbe Adjunct Professor, Dept. of Statistics, Arizona State University

Raul Jimenez http://icc.ub.edu/~jimenez/ ICREA Chair in Physics, University of Barcelona ES

Vinay Kashyap http://hea-www.harvard.edu/AstroStat/ Research scientist, High Energy Astrophysics Division, Harvard-Smithsonian Center for Astrophysics

Brandon Kelly http://hea-www.harvard.edu/AstroStat/ Hubble Fellow, Harvard-Smithsonian Center for Astrophysics

Ann Lee http://www.stat.cmu.edu/~annlee/ Associate Professor, Dept. of Statistics, Carnegie Mellon University

Thomas Lee http://anson.ucdavis.edu/~tcmlee/ Professor of Statistics, University of California - Davis

Kaisey Mandel https://www.cfa.harvard.edu/~kmandel/ Graduate student, Dept. of Astronomy, Harvard University

Domenico Marinucci http://www.mat.uniroma2.it/~marinucc/ Director and Professor of Mathematics, University of Rome 'Tor Vergata'

Donald Percival http://staff.washington.edu/dbp/ Professor of Statistics, Principal Mathematician of Applied Physics Laboratory, University of Washington

Erik Rosolowsky https://people.ok.ubc.ca/erosolo Professor, School of Arts and Sciences, University of British Columbia - Okanagan

David Ruppert home page Andrew Schultz Jr. Professor of Engineering, Cornell University

Sanat Sarkar http://astro.temple.edu/~sanat/ Professor of Statistics, Fox School of Business & Management, Temple University

Jeffrey Scargle http://astrophysics.arc.nasa.gov/~jeffrey/ Senior Scientist, Planetary Systems Branch, Ames Research Center, NASA

Eric Switzer Institute Fellow, Kavli Institute for Cosmological Physics, University of Chicago

Luke Tierney http://www.stat.uiowa.edu/~luke/ Professor of Statistics & Actuarial Science, University of Iowa

Roberto Trotta http://astro.ic.ac.uk/rtrotta/home Lecturer in Astrophysics, Imperial College London

Ricardo Vilalta http://www2.cs.uh.edu/~vilalta/ Associate Professor, Department of Computer Science, University of Houston

Michael Way http://www.giss.nasa.gov/staff/mway/ Scientist, Goddard Institute for Space Studies, NASA

Benjamin Wandelt Professor and Chair of Theoretical Cosmology, Institut d'Astrophysique de Paris

Martin Weinberg http://www.astro.umass.edu/~weinberg/ Professor of Astronomy, University of Massachusetts
Bridgeport, PA Algebra 1 Tutor Find a Bridgeport, PA Algebra 1 Tutor ...I teach it right. Algebra 2 is a lot harder than it used to be. It's also more important than it used to be because algebra 2 concepts are included on the new SAT. 23 Subjects: including algebra 1, English, calculus, geometry I work with students at their home! My background is as an Educator and Mechanical Engineer who, after working 23 years in the computer industry as an Electro-Mechanical Packaging Engineer, returned to college (Night School) to become a professional teacher, earning 14 years of teaching experience ... 62 Subjects: including algebra 1, reading, English, calculus ...I have over ten years of experience teaching Sunday School to children ranging in age from preschool to upper middle school. As a K-6 Teacher, I teach students with ADHD every year. I also work with their behavioral therapists who have taught me how to better work with them. 41 Subjects: including algebra 1, English, reading, writing ...In addition, I trained students in the area of protein biochemistry and molecular biology (DNA, RNA, transcription, translation) for six years during graduate school. I have a proven track record of helping students understand science under my tutoring expertise and knowledge of chemistry.I have... 26 Subjects: including algebra 1, chemistry, physics, reading I have been tutoring science and math, particularly high school and college chemistry and algebra, since 1983. I have been teaching at the college level since then as well. My doctoral work has focused on the use of technology and group work to enhance learning, particularly in the area of science. 
19 Subjects: including algebra 1, chemistry, GRE, organic chemistry
{"url":"http://www.purplemath.com/bridgeport_pa_algebra_1_tutors.php","timestamp":"2014-04-21T07:42:47Z","content_type":null,"content_length":"24073","record_id":"<urn:uuid:92adaf60-044f-48f5-ae50-5599aa5dad38>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
A 600 G Mass On A Spring Is Mounted Horizontally ... | Chegg.com A 600 g mass on a spring is mounted horizontally and is set in motion so that it oscillates in simple harmonic motion. The mass is released from x = 12.0 cm and first reaches the equilibrium point 0.5 s later. Determine the spring constant of the spring. I know I use F = kx, and to get F you use F = ma, so it looks like ma = kx, but I can't seem to figure out how to get a. I know one of the formulas is
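One route that sidesteps the acceleration entirely: release from maximum displacement to first passing equilibrium takes a quarter period, so T = 2.0 s, and k follows from ω = √(k/m). A quick sketch using the numbers from the post (not from any posted solution):

```python
import math

# Spring problem from the post: a 0.600 kg mass released from rest at
# x = 12.0 cm first reaches equilibrium 0.5 s later.
m = 0.600             # mass in kg
quarter_period = 0.5  # s, release point to equilibrium is T/4

T = 4 * quarter_period       # full period, s
omega = 2 * math.pi / T      # angular frequency, rad/s
k = m * omega ** 2           # spring constant, from omega = sqrt(k/m)

print(round(k, 2))  # 5.92 (N/m)
```

Note the release displacement of 12.0 cm never enters: for simple harmonic motion the period is amplitude-independent.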
{"url":"http://www.chegg.com/homework-help/questions-and-answers/600-g-mass-spring-mounted-horizontally-set-inmotion-oscillates-simple-harmonic-motion-mass-q471006","timestamp":"2014-04-16T15:37:45Z","content_type":null,"content_length":"18399","record_id":"<urn:uuid:968250d5-49ef-4785-bb02-67f77f0520ea>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
Lecture 1 2002 Saturday, January 19 Cellular Automata Lecture Professor: Harold V. McIntosh.

CELLULAR AUTOMATA (1)

To have some discussion between meetings I am going to try ''e-mail teaching.'' It isn't really teaching, but is a way to suggest things to read preparing for the next meeting, and mention exercises that ought to be carried out. Besides, it gives a way to work on the notes piece by piece.

The most interesting news is that Andrew Wuensche, for whatever reason, intends to be in Puebla for a visit, and is willing to meet with the class on Saturday. He has worked on cellular automata, spends his time between England and the Santa Fe Institute, and has coauthored an Atlas of Cellular Automata which is one of our regular references. After dividing (2,1) automata into symmetry classes, they give sample evolutions and basin diagrams for all the classes. He has also written a simulation program, somewhat after the style of the LCAU series, and will demonstrate it on Saturday, if we can find a projector for his laptop.

As far as general preparations are concerned:
1) for Wuensche's visit, look over "Linear Cellular Automata" and the cellular automata articles, particularly the one in which his book was reviewed.
2) about differential equations, read the material on SERO, the Complex Variable Notes, and The Summer of 1999.
3) for wave packets, look around the internet for waves, phase and group velocity, visual quantum mechanics, and so on. There are more waves in quantum mechanics, but there are also mechanical waves, waves in electrical circuits, hydrodynamic waves, waves in optics and wave guides, and there are probably sites with articles and demonstrations.
4) the concrete example I mentioned. Add two sine waves, and note the beats. It is simple algebra or trigonometry to turn a sum of waves into a product, and also some simple symbolic manipulation to add a whole batch of waves with gaussian or poisson amplitudes, and compare the results.
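Exercise 4 can be carried out in a few lines. In the sketch below the wave numbers and the Gaussian width are illustrative choices of mine, not values from the notes: two nearby cosines give the product form 2 cos((k1-k2)x/2) cos((k1+k2)x/2), whose beats repeat forever, while a Gaussian-weighted batch of waves gives a packet localized near x = 0.

```python
import math

# Two nearby waves: the beat envelope 2 cos((k1 - k2) x / 2) never dies out.
k1, k2 = 1.0, 1.1
def two_waves(x):
    return math.cos(k1 * x) + math.cos(k2 * x)

# A whole batch of waves with Gaussian amplitudes centered at k0:
# the sum is concentrated near x = 0 and decays away from it.
k0, sigma = 1.05, 0.05
ks = [k0 - 4 * sigma + i * 8 * sigma / 400 for i in range(401)]
amps = [math.exp(-(k - k0) ** 2 / (2 * sigma ** 2)) for k in ks]

def packet(x):
    return sum(a * math.cos(k * x) for a, k in zip(amps, ks))

x_beat = 40 * math.pi  # a distant beat maximum of the two-wave sum
print(round(two_waves(x_beat), 2))                  # 2.0: beats persist
print(abs(packet(x_beat)) < 1e-3 * abs(packet(0.0)))  # True: packet has decayed
```

Comparing the two outputs makes the point of the exercise: superposing only two waves modulates but never localizes, while a continuum (here, a dense sample) of Gaussian-weighted waves does.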
--- The Physical Society has finally got a century of Physical Review on line, and I have finally gotten a subscription to most of it. So I have been going after old articles, or articles missing from the issues in the library. It always seems that the most interesting article is the one that is missing. The bibliography has grown to about 250 articles, but probably only 10% of that are things which we should copy and study. The rest are useful to know about and for having a complete bibliography, but many are redundant and not all are directly pertinent to the subject. - hvm

CELLULAR AUTOMATA (2)

Here is another try at making up a mailing list. Meanwhile I have been looking over making up wave packets, and there doesn't actually seem to be much information available. Of course, all the quantum mechanics books mention the Gaussian wave packet for a free particle with the Schroedinger equation. There is very little on a Dirac particle except for the assumption that you make up a packet the same way.

There are three cases (maybe four). The ordinary wave equation has second derivatives both in x and t, and so wave numbers are proportional to frequencies, and there is no dispersion. The Schroedinger equation has a second x-derivative and first time derivative, so (wave number) squared goes with energy; Schroedinger packets do disperse. The Dirac equation has first x derivative and first time derivative, but now the fact that there are two components makes (wave number) squared = mass squared - energy squared (or some permutation of this). The fourth case would be the Klein-Gordon equation, which is back to looking like the wave equation although there is a mass there besides.

The reason for a Gaussian wave packet is that it is such a nice algebraic trick --- completing the square --- to evaluate the sum (integral) over wave numbers to get the reciprocal gaussians for position and for wave number.
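The three (or four) cases just described can be summarized by their dispersion relations. This block is an editorial summary in units with the relevant constants set to 1, not a formula from the notes:

```latex
\begin{align*}
\text{wave equation } u_{tt} = u_{xx}:
  \quad & \omega = \pm k          && \text{(no dispersion)}\\
\text{Schroedinger } i\,u_t = -\tfrac{1}{2m}\,u_{xx}:
  \quad & \omega = \frac{k^2}{2m} && \text{(packets disperse)}\\
\text{Dirac / Klein--Gordon}:
  \quad & \omega^2 = k^2 + m^2    && \text{(both signs of }\omega\text{ for each }k\text{)}
\end{align*}
```

The last line is the "permutation" mentioned in the text: k squared = energy squared minus mass squared, rearranged.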
It would be interesting to try to track down who was the first to do this --- Schroedinger, maybe. But surely something similar must have been familiar to people who were already working with wave equations, in hydrodynamics, say. From then on, the result has probably just been copied from one treatment to another.

The problem here, as I mentioned in the last class, is that boundary conditions are being completely ignored. True, complex exponentials of either sign are solutions, and that there are two such solutions is a fundamental result of using 2x2 matrices to look at second order equations. But from the point of view of a basis, they are sort of redundant. That is, you could ask for left-propagating waves, right-propagating waves, standing waves, or some other mixture, but do you need all of them? That is why I suggested graphing some packets using only one sign of wave numbers. Otherwise you get these mixtures of phase velocity going in one direction and group velocity in the other, and so on. I hope everyone has looked at the internet demonstration of phase and group velocity.

When it comes to the Dirac equation, it is even harder to find explicit examples. One thing that happens is that both signs of the wave number (momentum, in quantum mechanics) are associated with both signs of the energy (frequency), so that it is possible to complete the Gaussian integral over wave numbers by combining left-moving wave functions of positive energy and right-moving functions of negative energy (except for the fact that they really also move left because of the way wave numbers combine with energy in the time-dependent solutions). If that isn't bad enough, the fact that you have to use both components of a two-component wave function means that there are many mixtures of the components having the same absolute value, which is what gives the probabilities.
Apparently trying to understand this is what is behind the Foldy-Wouthuysen transformation, and the Newton-Wigner analysis. And the Zitterbewegung. Bernd Thaller seems to be another of these authors whose book is forever forthcoming. He has this book ``Visual Quantum Mechanics'' with a lot of the usual illustrations, but his web page shows some Dirac packets and claims that they will appear in Volume 2. But it is now two years, and it doesn't seem to have been published yet (maybe not even finished being written?). He has wave packets squirming around in different ways as they undergo Zitterbewegung, but no text to describe just how and why. He also has a series showing the wave packet in different Lorentz frames, which makes nice pictures, but one wants to understand it better. It is kind of like those Star Trek pictures where they try to represent the appearance of the sky when making their hyperspace jumps. There is a whole set of physics demonstrations to be found in various places as to what familiar objects would look like when moving past them at relativistic velocities. It is remarkable that a mere plane wave creates all this complication. I'll have copies of some of the original articles on Saturday. - hvm
{"url":"http://delta.cs.cinvestav.mx/~mcintosh/comun/lectures/lecture2002-1.html","timestamp":"2014-04-17T15:26:28Z","content_type":null,"content_length":"8819","record_id":"<urn:uuid:f940ae1d-23e1-41a0-82a1-6dcfd839e69d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Flux Integrals August 1st 2010, 09:46 AM Flux Integrals Hi, I'm having trouble with some simple flux integrals. A surface is given with points (0,0,2), (0,1,2), (1,0,0), and (1,1,0), and the vector field is v = 2i + 3j + 5k, and I am trying to find the flux of the vector field through the surface. I have found the normal to this surface by taking the cross product, and my normal is <2,0,1>. I'm having problems setting up my integral. What is my dA? I don't know if it's in terms of dx, dy, or dz. The back of the book says the answer is 9, and the only way I can get this is by using dxdy for my dA, but I'm not sure why this is the case. Any help is appreciated. Thanks!

August 1st 2010, 10:21 AM You have a constant vector field over a flat surface, so an integral is not needed. Your area vector is given by a unit normal to your surface multiplied by its area. You found a normal vector to be <2,0,1>; if you normalize this you get $\frac{1}{\sqrt{5}}<2,0,1>$, and by the luck of the draw the area of the rectangle is also $\sqrt{5}$, so the area vector is $\vec{S}=<2,0,1>$. The flux of a constant vector field through a flat surface is the dot product of the area vector and the V.F., so we get $\vec{F}\cdot \vec{S}=(2)(2)+(3)(0)+(5)(1)=9$.

If you really want to use an integral, then (from your equation of the normal vector) the plane has equation 2x+z=2, or z=2-2x. Then use the formula Flux = $\int \vec{F}\cdot \left( -\frac{\partial z}{\partial x}\vec{i} -\frac{\partial z}{\partial y}\vec{j}+\vec{k} \right)dA$ where dA is the projection of the surface into its domain, so in this case the xy plane, as you wanted.

August 1st 2010, 12:14 PM Thanks for the help! I didn't even think to do it in a non-integral way. That makes it much easier.

August 2nd 2010, 01:15 AM By the way, in your first post you say only "a surface is given with points ..." with the points happening to lie on a plane, and then you assume the surface is a plane. There exist an infinite number of different surfaces passing through 4 given points.
Are you specifically told that the surface is a plane?

August 2nd 2010, 01:31 PM By the way, in your first post you say only "a surface is given with points ..." with the points happening to lie on a plane, and then you assume the surface is a plane. There exist an infinite number of different surfaces passing through 4 given points. Are you specifically told that the surface is a plane? Sorry, I didn't specify in the original post that there was a graph given, with those 4 points being the corners. If it didn't specify, how would I do the problem? Would I just assume it's a plane?
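Both computations in the thread can be checked in a few lines of plain Python; the numbers are the ones from the posts, and the two routes agree on 9:

```python
# Constant field F = <2, 3, 5> through the flat plate with normal <2, 0, 1>.
F = (2, 3, 5)

# (1) Dot with the unnormalized area vector S = <2, 0, 1>: this works
# because |<2, 0, 1>| = sqrt(5) happens to equal the plate's area.
S = (2, 0, 1)
flux_dot = sum(f * s for f, s in zip(F, S))

# (2) Surface integral projected onto the xy plane: z = 2 - 2x over the
# unit square 0 <= x <= 1, 0 <= y <= 1, so the integrand
# F . (-dz/dx, -dz/dy, 1) is constant and the integral is just its value
# times the square's area.
dz_dx, dz_dy = -2.0, 0.0
integrand = F[0] * (-dz_dx) + F[1] * (-dz_dy) + F[2] * 1.0
flux_int = integrand * 1.0 * 1.0

print(flux_dot, flux_int)  # 9 9.0
```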
{"url":"http://mathhelpforum.com/calculus/152512-flux-integrals-print.html","timestamp":"2014-04-25T01:08:19Z","content_type":null,"content_length":"8044","record_id":"<urn:uuid:e6f291fd-f0b2-4af5-82ed-fcbceffeecc9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
Choosing a base where a given digit of a given number appears the most times Is there an algorithm for choosing a base where a given digit of a given number appears the most times, that works better than trial and error? (see also this) nt.number-theory algorithms If you aren't picky, choosing base 1 or some irrational base will likely work. I don't think you should invent a term like "oneier" though. Gerhard "Inventing Words Is Sorta Fun" Paseman, 2013.02.02 – Gerhard Paseman Feb 2 '13 at 16:40
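For concreteness, here is the trial-and-error baseline the question hopes to beat, as a sketch (the function names and the cap on the base range are my own choices):

```python
# Count how often digit d appears when n is written in base b.
def digit_count(n, d, b):
    count = 0
    while n > 0:
        n, r = divmod(n, b)
        count += (r == d)
    return count

# Trial and error: scan bases 2..n+1 and keep the one maximizing the count.
# (Beyond base n the representation is the single digit n, so nothing new.)
def best_base(n, d):
    return max(range(2, n + 2), key=lambda b: digit_count(n, d, b))

# 255 is 11111111 in base 2, so for d = 1 base 2 wins with eight ones.
print(best_base(255, 1))  # 2
```

Ties go to the smallest base, since `max` keeps the first maximizer it sees.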
{"url":"http://mathoverflow.net/questions/120593/choosing-a-base-where-a-given-digit-of-a-given-number-appears-the-most-times","timestamp":"2014-04-16T19:58:53Z","content_type":null,"content_length":"47314","record_id":"<urn:uuid:13a2f468-c21a-4d76-89a9-0ef352907836>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Maths Tutors Glendale, AZ 85305 PhD Graduate for Physics, Chemistry, Mathematics Tutoring ...That is, when it comes to the homework and test on their own, they should ask themselves, "What questions did Dr. James ask me?" This will give them an increased analytical skill not only for the individual subject but with life itself.... Offering 10+ subjects including algebra 1, algebra 2 and calculus
{"url":"http://www.wyzant.com/Sun_City_AZ_Maths_tutors.aspx","timestamp":"2014-04-19T08:46:09Z","content_type":null,"content_length":"60652","record_id":"<urn:uuid:78e9478d-478f-40ee-b477-f0bc67e69b81>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
The MathTutorDVD.com Story My name is Jason Gibson, founder of MathTutorDVD.com and teacher in all of the videos. I have always been a good tutor, with friends frequently asking me to help them in various subjects in Math and Science. One afternoon in late 2004, after helping a friend with Algebra, a comment was made that I should record myself solving math problems and make the content available to anyone who needed some help. With this idea in mind I began filming some algebra content and made the first three hours available for sale on eBay as a CD-ROM that I burned with my home computer. The CDs began selling, and the email feedback from the students and teachers began flooding in. With this encouragement, I continued filming algebra, eventually reaching 10 hours of content. By this time it took 2 full CD-ROMs to fit all of the video material and at least 10 minutes to burn a complete copy for each customer. I had a steady stream of orders and finally decided to put this material on DVD and have them mass produced. This is how the "Math Video Tutor - Fractions Thru Algebra" was born! The "Math Video Tutor" DVD was a huge success on eBay, which encouraged me to continue filming content. I produced the "Algebra 2 Tutor" and the "Trigonometry & Pre-Calculus Tutor" in early 2005, which were both great successes. By this time I had a loyal group of customers with a steady stream of email traffic asking me "when is your next video due to release?" For my next title I had my eye on Calculus. Calculus is one of those subjects which is traditionally thought to be not only hard, but "very hard". My goal was to make it as easy as 3rd grade Math. I finished the calculus tutor in 2005 and the students and teachers loved it! My next projects involved a DVD in Physics and a DVD in Basic Math, which were released in 2006. These DVDs have both done very well with great user feedback.
I am particularly proud of the Physics Tutor because it is a difficult subject to teach, mainly because it is a combination of two subjects tough for lots of people - math and word problems. In 2007 I launched the redesigned MathTutorDVD.com website which contains sample videos of all of the DVDs. In 2010 we launched the Member's area which allows students to view all of our courses online for one low monthly rate. I pride myself on providing quality educational content at affordable prices, and I have a passion for making "complex" subjects easy to understand. So what does the future hold for MathTutorDVD.com? I have embarked on a comprehensive Chemistry series which is an ongoing project. To date I have 40+ hours of chemistry tutorials alone! In addition, I have begun to develop an Engineering Circuit Analysis course which is many dozens of hours in length and is also an ongoing project. Going forward, you'll see Math Tutor DVD focus on more advanced Chemistry, Engineering, and Statistics topics. I plan for Math Tutor DVD to be a comprehensive resource in all levels of Math, Science, and Engineering with quality step-by-step teaching. Sincerest thanks to all those who have spread the word regarding Math Tutor DVD! Jason Gibson
{"url":"http://www.mathtutordvd.com/public/department13.cfm","timestamp":"2014-04-19T06:51:57Z","content_type":null,"content_length":"45590","record_id":"<urn:uuid:2d613985-3f5b-4509-a8a7-4482c37e6f74>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Cudahy, CA Statistics Tutor Find a Cudahy, CA Statistics Tutor ...I teach effective reading techniques such as SQ3R and how to take class notes. I have been playing tournament chess and have a USCF rating of 1600. I have taught chess as an afterschool enrichment class for over 12 years and I've been the head of the chess club at several different high schools. 72 Subjects: including statistics, English, reading, writing ...I got a 1580 on my SAT (old style, max 1600) and a 1540 on my GRE (max of 1600). I took and passed 10 AP tests. I received 5s in Biology, Physics B, Calculus AB, Statistics, English Literature, English Language, US History, and Macroeconomics, and received a 4 in Microeconomics and Chemistry (In... 52 Subjects: including statistics, chemistry, English, finance ...My experiences in chemistry classes as well as teaching chemistry have left me with a fairly complete understanding of thermodynamics. Since much of my graduate work was with diesel compression engines, I would be able to help anyone understand the thermodynamics of the Carnot cycle and real-wor... 73 Subjects: including statistics, reading, English, chemistry ...I can handle the math preparation for the exam and will help work through a test prep booklet to help in the other areas. Since I have taught middle and high school math for the past 19 years, I would be able to help someone prepare for the GED, especially in the math area. I would also be able to help in the English and Science areas. 15 Subjects: including statistics, geometry, GRE, ASVAB ...Every day, I would give him practice exercise worksheets in mathematics. Here is what I have learned in my experience as a teacher and as a tutor. First, it is very important to guide my students through the fundamentals of mathematics.
3 Subjects: including statistics, algebra 1, algebra 2
{"url":"http://www.purplemath.com/cudahy_ca_statistics_tutors.php","timestamp":"2014-04-19T17:16:34Z","content_type":null,"content_length":"24034","record_id":"<urn:uuid:bd02df00-1a6c-4fc9-ae98-7fd116b62f2c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
New Laws for Quantum Dynamics Physicists don’t know much about how large quantum systems evolve over time. Until a few years ago, they didn’t think they needed to. Quantum systems tend to be small, not large. When you consider only a few atoms at a time, you might see quantum effects like superposition (where particles seem to be in two states at once) or entanglement (where particles seem to affect each other instantaneously, at a distance). When you get lots of atoms together, these strange effects vanish. This process – this averaging out of the quantum effects and other interesting features – is called thermalization. Recently, physicists have made progress in understanding why conventional many-body systems thermalize, and how statistical mechanics emerges from microscopic, quantum description. It’s complex, but the short version is that a system of many particles quickly comes to equilibrium. That equilibrium state looks classical, and can be completely described by the classical theory of statistical mechanics. It’s as if the interactions between particles wash away all the quantum-ness. As for evolving in time, thermalization usually happens so quickly that the evolution of quantum effects hasn’t really been a consideration. But this changed when researchers discovered a class of systems – systems with strong disorder – which do not thermalize. By finding new ways to isolate quantum systems from their environments, researchers have been able to create and study these systems in the lab. For instance, researchers have learned to catch hundreds of atoms in a magnetic trap, cool them, and study their quantum evolution. Inside specially prepared diamonds, other researchers have found tiny cavities, with one atom in each. The spin of the atom does not interact with the cavity, but does interact with the spin of the atom in the next cave over – making a fundamentally quantum system of spins that persists through time. 
These kinds of systems – technically called localized many-body systems – are a new and hot field. Fuelling the interest is not just a basic desire to understand how nature works, but the pressing need to build lasting quantum systems in order to use them for quantum information processors. If we are ever to capitalize on the promise of quantum computing, engineering many-body quantum systems is a vital early step. The emergence of systems where quantum effects persist through time throws a spotlight on our ignorance about how quantum systems evolve – and gives us a chance to remedy that ignorance by finally allowing us to study such systems. We've learned, for instance, that they do not conform to our usual understanding of statistical mechanics. New laws of dynamics must be found. Enter a team of Perimeter researchers. In new papers appearing in Physical Review Letters, Perimeter Faculty member Dmitry Abanin, Perimeter Postdoctoral Researcher Zlatko Papić, and Maksym Serbyn (a graduate student at MIT and a Perimeter visitor) describe the laws which govern dynamics in disordered many-body quantum systems. These new laws of quantum statistics can be used in place of the traditional laws of statistical mechanics. This is a general result, which can be applied to any strongly disordered experimental quantum many-body system. It's expected to be of widespread use as researchers create and study more such systems. One immediate result is a counterintuitive one: "Disorder can be a good thing," says Abanin. "We're accustomed to thinking that, in order to have long coherence times and be useful in quantum computing, a quantum system must be very cold and very pure, but our new laws show that that's not necessarily the case. Introducing disorder into the system can actually increase the coherence times." Coherence time is the length of time for which quantum effects persist before washing away.
“I think this is a very interesting, broad result, that will have applications to many fields,” says Abanin. “The laws are quite different than statistical mechanics, but they are also unexpectedly simple. There’s a certain beauty to them. They are deeply connected to questions in quantum information, statistical mechanics, and condensed matter.” These questions of quantum many-body dynamics are fundamental, but not well explored – mostly because for years we didn’t have to deal with systems like this. “We used to have quantum mechanics in one corner and statistical mechanics in the other corner – but it turns out that there is something amazing right in the middle,” explains Abanin. “I think this field of quantum many-body dynamics is very promising, and will only grow.”
{"url":"http://www.perimeterinstitute.ca/news/new-laws-quantum-dynamics","timestamp":"2014-04-20T07:14:13Z","content_type":null,"content_length":"33168","record_id":"<urn:uuid:7b30415a-e0df-400f-af99-9f7e43fb2a39>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Milwaukee, WI Trigonometry Tutor Find a Milwaukee, WI Trigonometry Tutor ...I finished college at Penn State in May 2011 with a statistics major for my Bachelor's and Master's degrees. During my last 5 semesters, I worked for the statistics department as a grader and lab intern for Intro to Statistics and during my last semester, I worked as a teaching assistant for a 4... 21 Subjects: including trigonometry, English, reading, writing Hi, I'm always described as a good explainer by friends and family. I know math; I know the math that you are studying; and I know the math behind that math. Teaching in public school, I am used to breaking down all subjects (including calculus) into their most basic parts. 14 Subjects: including trigonometry, calculus, statistics, geometry ...I also have minors in Psychology and Spanish and am fairly knowledgeable in many other subject areas. As far as my tutoring philosophy, I believe that there's more than one way to explain something. If a student isn't comprehending something, I will try to explain it in a different manner. 32 Subjects: including trigonometry, English, Spanish, reading ...However, the fact that I am part of a very large community of talented teachers keeps me humble and in constant pursuit of improvement. I welcome and use feedback from my students and like to make myself available outside our sessions to help. I check my email regularly and respond to questions via email and/or text. 22 Subjects: including trigonometry, Spanish, calculus, writing Hi, my name is Matt, and I'm a currently a junior studying computer science at UW-Milwaukee. I have a strong passion for technology and find its implementation very interesting. Over the past three years I've developed skills in writing, debugging, and analyzing computer code. 
13 Subjects: including trigonometry, geometry, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Milwaukee_WI_Trigonometry_tutors.php","timestamp":"2014-04-19T20:12:28Z","content_type":null,"content_length":"24425","record_id":"<urn:uuid:9bcd23a8-8e05-4377-b0d0-8eb3032044f7>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
More Magic Potting Sheds Copyright © University of Cambridge. All rights reserved. 'More Magic Potting Sheds' printed from http://nrich.maths.org/ This problem follows on from Magic Potting Sheds. After a year of successful gardening using his magic doubling shed (introduced in Magic Potting Sheds), Mr McGregor buys a new shed that trebles the number of plants in it each night. Use the interactivity to investigate how many plants he needs this time to get the same number in each garden. What is the smallest number of plants he could use? Can you predict how many plants he would need on the first day and how many he should plant each day if he bought a new shed that quadruples the number of plants in it each night? Use the interactivity to test your prediction. Mr McGregor is so successful that he decides to plant more gardens. He can still only plant one garden each day. Use the interactivity to change the number of gardens and investigate how many plants he should use for each of the different potting sheds. What do you find? Can you find a general rule? Can you explain why your rule works? Unfortunately, Mr McGregor suffers an attack from evil magic slugs that eat half of the plants in his (non-magic) potting shed each night. He still wishes to plant the same number of plants in each garden. How many plants does he need on the first day this time, and how many should he plant each day? (Remember that he can only plant whole numbers of plants!)
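A brute-force search can stand in for the interactivity. The sketch below assumes the usual Magic Potting Sheds setup, which the page above does not spell out in full: three gardens, the shed multiplies its contents overnight, the same number g is planted each day, and the shed is exactly empty after the last garden. Those assumptions are mine:

```python
# Find the smallest starting number of plants p (and the per-garden
# number g) for a shed that multiplies by `factor` each night, assuming
# `gardens` gardens and an empty shed at the end.
def smallest_start(factor, gardens):
    for p in range(1, 1000):
        for g in range(1, 1000):
            left = p
            ok = True
            for _ in range(gardens):
                left = left * factor - g  # overnight growth, then plant g
                if left < 0:
                    ok = False
                    break
            if ok and left == 0:
                return p, g
    return None

print(smallest_start(2, 3))  # (7, 8)   doubling shed
print(smallest_start(3, 3))  # (13, 27) trebling shed
print(smallest_start(4, 3))  # (21, 64) quadrupling shed
```

Under these assumptions the minima fit the pattern p = (m^3 - 1)/(m - 1) and g = m^3 for a multiplier m, which is one candidate for the "general rule" the page asks about.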
{"url":"http://nrich.maths.org/4927/index?nomenu=1","timestamp":"2014-04-21T07:16:39Z","content_type":null,"content_length":"6082","record_id":"<urn:uuid:1350e89e-1080-4e3e-9397-6d401254b8aa>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Galois group of a polynomial April 19th 2009, 03:26 AM #1 The problem is to identify the Galois group of the polynomial $f=x^4 + 2x^2 + x +3$. We have done a couple of examples in class, but I am unable to solve this. Thank you.

There are many ways to do it. Which method have you learned so far? You can use reduction modulo $p$, or you can use the method where you find the cubic resolvent and then determine the Galois group. Have you learned this? Anyways, it is easily seen that the polynomial is irreducible over $\mathbb Q$. You can try to show this as follows. It is easily seen by the Rational Root Theorem that the polynomial has no rational root; furthermore, assume that the polynomial can be written as $x^4+2x^2+x+3 = (x^2+bx+c)(x^2+dx+e)$ and then try to get a contradiction. Therefore you can conclude that the polynomial is irreducible over $\mathbb Q$. By reducing the polynomial modulo $3$ we have $(x^3+2x+1)(x)$ and modulo 5 we have $(x+2)(x+1)(x^2+2x+4)$. Since the Galois group contains a transposition and a 3-cycle we conclude that the Galois group is $S_4.$ Last edited by peteryellow; April 19th 2009 at 04:45 AM.

Yes, we've learned the method with the cubic resolvent, but I'm unable to solve it by myself. I would be really grateful for your help and time.

The cubic resolvent is $g(y) = y^3-2y^2-12y-25$ (if I have solved it correctly; you can recheck it). Then it is easily seen that the Galois group of g is $S_3$. Then by using some theorems you can conclude that $S_4$ is the Galois group.

Ask if something is not very clear. Thank you very very very much, I will read more about Galois groups and then ask if there are any questions left.
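The two modular reductions quoted in the answer can be verified by multiplying the claimed factors back together; a sketch in plain Python, with polynomials as coefficient lists, lowest degree first:

```python
# Check: f = x^4 + 2x^2 + x + 3 equals x * (x^3 + 2x + 1) mod 3
# and (x + 2)(x + 1)(x^2 + 2x + 4) mod 5.
def mul(a, b, p):
    # Multiply two coefficient-list polynomials modulo p.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

f = [3, 1, 2, 0, 1]  # 3 + x + 2x^2 + 0x^3 + x^4

mod3 = mul([0, 1], [1, 2, 0, 1], 3)             # x * (1 + 2x + x^3)
mod5 = mul(mul([2, 1], [1, 1], 5), [4, 2, 1], 5)  # (2+x)(1+x)(4+2x+x^2)

print(mod3 == [c % 3 for c in f])  # True
print(mod5 == [c % 5 for c in f])  # True
```

The degree pattern (1, 3) mod 3 supplies the 3-cycle and the pattern (1, 1, 2) mod 5 supplies the transposition that the answer uses (the quadratic is irreducible mod 5 since its discriminant, 3, is a non-residue mod 5).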
{"url":"http://mathhelpforum.com/advanced-algebra/84401-galois-group-polynomial.html","timestamp":"2014-04-24T09:49:56Z","content_type":null,"content_length":"41596","record_id":"<urn:uuid:479cc9e5-0d4a-4da8-bf46-99bc56b4deb1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
The Voltage Across The Terminals Of A 0.4 MuF Capacitor ... | Chegg.com Image text transcribed for accessibility: The voltage across the terminals of a 0.4 muF capacitor is: The initial current in the capacitor is 90 mA. Assume the passive sign convention. What is the initial energy stored in the capacitor? Evaluate the coefficients A1 and A2. What is the expression for the capacitor current? Electrical Engineering
Papers citing "Isomorphisms of types: from lambda-calculus to information retrieval and language design" (Birkhäuser):

In WS-FM, 3rd Int. Workshop on Web Services and Formal Methods, number 4184 in LNCS, 2006. Cited by 24 (4 self).
Abstract. We define a formal contract language along with subcontract and compliance relations. We then extrapolate contracts out of processes, which are a recursion-free fragment of CCS. We finally demonstrate that a client completes its interactions with a service provided the corresponding contracts comply. Our contract language may be used as a foundation of Web services technologies, such as WSDL and WSCL.

In ICFP '03: Proceedings of the Eighth ACM SIGPLAN International Conference on Functional Programming, 2003. Cited by 12 (0 self).
Abstract. We propose a type system ML F that generalizes ML with first-class polymorphism as in System F. We perform partial type reconstruction. As in ML, and in opposition to System F, each typable expression admits a principal type, which can be inferred. Furthermore, all expressions of ML are well-typed, with a possibly more general type than in ML, without any need for type annotation. Only arguments of functions that are used polymorphically must be annotated, which allows one to type all expressions of System F as well.

The quest for type inference with first-class polymorphic types.
Abstract. Programming languages considerably benefit from static type-checking. In practice, however, types may sometimes trammel programmers, for two opposite reasons. On the one hand, type annotations may quickly become a burden to write; while they usefully serve as documentation for top-level functions, they also obfuscate the code when every local function must be decorated. On the other hand, since types are only approximations, any type system will reject programs that are perfectly well-behaved and that could be accepted by another, more expressive one; hence, sharp programmers may be irritated in such situations.

Abstract. Intuitionistic type theory [43] is an expressive formalism that unifies mathematics and computation. A central concept is the propositions-as-types principle, according to which propositions are interpreted as types, and proofs of a proposition are interpreted as programs of the associated type. Mathematical propositions are thereby to be understood as specifications, or problem descriptions, that are solved by providing a program that meets the specification. Conversely, a program can, by the same token, be understood as a proof of its type viewed as a proposition. Over the last quarter-century, type theory has emerged as the central organizing principle of programming language research, through the identification of the informal concept of language features with type structure. Numerous benefits accrue from the identification of proofs and programs in type theory. First, it provides the foundation for integrating types and verification, the two most successful formal methods used to ensure the correctness of software. Second, it provides a language for the mechanization of mathematics in which proof checking is equivalent to type checking, and proof search is equivalent to writing a program to meet a specification.
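As a concrete instance of the propositions-as-types principle described above, a small Lean 4 sketch (illustrative, not drawn from the cited papers): a proposition is read as a type, and a lambda term of that type is its proof.

```lean
-- Propositions-as-types: a proof of A → (B → A) is a program of that type.
example (A B : Prop) : A → (B → A) :=
  fun a _ => a

-- Conjunction as a pair type: a proof of A ∧ B → A is the first projection.
example (A B : Prop) : A ∧ B → A :=
  fun h => h.left
```

Type checking these terms is exactly proof checking, which is the equivalence the last abstract points out.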
Student Support Forum: 'Maximizing solutions of NDSolve' topic

I've come up with a possible workaround, but for some reason it gives results that are clearly wrong. What I did is calculate points for the function L, fit an interpolation function to those points, and then maximize the interpolation function. This doesn't produce an error, but it gives clearly wrong results. With the example in the post above:

l[(k_)?NumericQ] := NDSolve[{y''[x] + (k + Sin[x]^2) y[x] == 0, y'[0] == 0, y[0] == 0.5}, y, {x, 0, 30}]
L[k_, t_] := (y[t] /. l[k])^2
data = Table[{k, First[L[k, 30]]}, {k, 0, 10, 0.001}];
f = Interpolation[data];
Maximize[{f[x], 0 < x < 10}, x]

However, this gives

{0.229964, {x -> 6.35663}}

which is obviously wrong. I can see quite clearly by plotting the function that the maximum is around x = 0.4 and is about 120. In fact, if I give this as an argument to the interpolation function, it returns a value that is clearly much larger than what Maximize found to be the maximum. If I give it a range that is around what I know is the maximum, it does find it correctly:

In[07]:= Maximize[{f[x], 0 < x < 0.5}, x]
Out[07]= {123.308, {x -> 0.398059}}

Why is it getting the wrong result for the wider range? Or is there a better way of doing this? Any help appreciated.
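For readers without Mathematica, a rough SciPy analogue of the same computation (solve the ODE for a given k and optimize the end value directly, rather than via a global interpolant). The solver tolerances and the search interval near the maximum the poster located by plotting are illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def L(k):
    # y'' + (k + sin(x)^2) y = 0, y(0) = 0.5, y'(0) = 0; return y(30)^2
    rhs = lambda x, s: [s[1], -(k + np.sin(x) ** 2) * s[0]]
    sol = solve_ivp(rhs, (0.0, 30.0), [0.5, 0.0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1] ** 2

# Maximizing L = minimizing its negation; a bounded local search is used here,
# since L is highly oscillatory in k and a global search would need restarts.
res = minimize_scalar(lambda k: -L(k), bounds=(0.0, 0.5), method="bounded")
print(res.x, -res.fun)
```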
Detecting and Counting Objects with Circular Features

This example shows how you can use imfindcircles together with the removeoverlap function (which can be found on the File Exchange) for counting fungal spores, which have various ellipse-like shapes. As you can see, the spores have different shapes and sizes, and some overlap with objects from other planes of the imaged sample. This is a common microscopy problem in biology.

First, try to find circles in the image using imfindcircles. For estimating the radius range of our objects we can use imdistline:

l = imdistline;

Using a radius range between 12 and 30 pixels and visualizing the results with viscircles:

[centers, radii] = imfindcircles(image,[12 30]);
close all; figure; imshow(image); viscircles(centers, radii,'EdgeColor','b');

Let's increase the Sensitivity factor (the default is 0.85) and use a low static Edge Gradient Threshold instead of the default:

[centers, radii] = imfindcircles(image,[12 30],'Sensitivity',0.92,'Edge',0.03);
close all; figure; imshow(image); viscircles(centers, radii,'EdgeColor','b');

Now it seems we detected more circles than spores, mostly because of overlapping circles. Using the removeoverlap function we can remove the overlapping circles, or allow each pair of circles to overlap up to some tolerance, e.g., 5 pixels:

close all; figure; imshow(image); viscircles(centersNew, radiiNew,'EdgeColor','b');

We got a relatively good detection of the spores; finally, we can count the number of circles:

ans = 94

So, we counted 94 spores!
Multiplication shift cipher with k-graphs

May 1st 2013, 12:52 AM  #1  Dec 2012

OK, I've been looking at a worked example of a question, and I can't understand what's going on at a certain point. There is a multiplier of 17, a shift of 41, and k = 2. PA needs to be enciphered. P has the value 16 and A has the value 1.

PA is converted to a pair with the sum: 16 x 62 + 1 = 993
Multiply: 993 x 17 = 16881
Shift: +41 = 16922
[working in modulo 62^2] 16922 (mod 62^2) = 1546

Now, here's where I'm confused: the next step just says "separate codes" and then gives the values 24 and 58. If someone could explain how those two numbers were calculated, that would be great. Thanks.
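"Separate codes" is just writing the result back as two base-62 digits: 1546 = 24 × 62 + 58, which gives the 24 and 58 quoted in the worked example. A sketch of the whole digraph step (the variable names are mine):

```python
MULT, SHIFT, BASE = 17, 41, 62

def encipher_pair(p1, p2):
    n = p1 * BASE + p2                   # pack the digraph: 16*62 + 1 = 993
    c = (n * MULT + SHIFT) % (BASE**2)   # multiply, shift, reduce mod 62^2
    return divmod(c, BASE)               # "separate codes": base-62 digits

print(encipher_pair(16, 1))  # (24, 58)
```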
You have to buy exactly 100 eggs.
You have exactly 100 coins.
There are 3 kinds of eggs:
A costs 7 coins for 1 egg
B costs 3 coins for 1 egg
C costs 1 coin for 3 eggs

Edit: there's one condition: you have to buy from all 3 kinds.

How many eggs of each type do you need to buy in order to have spent exactly 100 coins?

Edited by Maggi, 09 November 2008 - 08:50 PM.
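A brute-force check is straightforward, assuming type C must be bought in whole batches of 3 eggs per coin and at least one purchase of each kind:

```python
solutions = [
    (a, b, c)
    for a in range(1, 101)        # type A eggs
    for b in range(1, 101)        # type B eggs
    for c in range(3, 301, 3)     # type C eggs, in batches of 3
    if a + b + c == 100 and 7 * a + 3 * b + c // 3 == 100
]
print(solutions)  # [(2, 20, 78), (4, 15, 81), (6, 10, 84), (8, 5, 87)]
```

Under those assumptions there are four solutions, which is why the "buy from all 3 kinds" condition alone does not pin down a unique answer.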
Computation of FIRST set "Ulrich Frank" <franku@fmi.uni-passau.de> 19 Feb 2006 02:01:45 -0500 From comp.compilers | List of all articles for this month | From: "Ulrich Frank" <franku@fmi.uni-passau.de> Newsgroups: comp.compilers Date: 19 Feb 2006 02:01:45 -0500 Organization: http://groups.google.com Keywords: LL(1), question Posted-Date: 19 Feb 2006 02:01:45 EST Hello NG, I have a big problem with ANTLR and the computation of the first set of a rule. I've already read corresponding articles on the computation of the first set but additionally want to ask you. In the lexer I define a token ID : ('a'..'z'); and in the parser I have the following rules: rule1 = ID rule2 | rule2; rule2 = "where" rule3; rule3 ... So FIRST(rule1) = {ID, "where"} and FIRST(rule2) = {"where"} And if I define rule1 as rule1 = (ID)? rule2; which is equivalent to the first rule1 definition, FIRST(rule1) also is {ID, "where"} RIGHT?! Please say that I'm right. Thanks. Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
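For what it's worth, the poster's sets check out under the standard fixed-point computation of FIRST. A sketch follows; rule3's body isn't given in the post, so a placeholder terminal is assumed, and the epsilon machinery covers optional forms like (ID)?:

```python
EPS = "eps"
grammar = {
    "rule1": [["ID", "rule2"], ["rule2"]],
    "rule2": [['"where"', "rule3"]],
    "rule3": [['"x"']],          # placeholder body; not given in the post
}

def first_sets(g):
    first = {nt: set() for nt in g}
    changed = True
    while changed:
        changed = False
        for nt, prods in g.items():
            for prod in prods:
                for sym in prod:
                    f = first[sym] if sym in g else {sym}
                    new = f - {EPS}
                    if not new <= first[nt]:
                        first[nt] |= new
                        changed = True
                    if EPS not in f:      # symbol not nullable: stop here
                        break
                else:                     # every symbol nullable
                    if EPS not in first[nt]:
                        first[nt].add(EPS)
                        changed = True
    return first

print(first_sets(grammar)["rule1"])  # contains ID and "where"
```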
Constant Time BSR Solutions to Parenthesis Matching, Tree Decoding, and Tree Reconstruction From Its Traversals

Ivan Stojmenovic, IEEE Transactions on Parallel and Distributed Systems, vol. 7, no. 2, pp. 218-224, February 1996, doi:10.1109/71.485530

Abstract—Recently Akl et al. introduced a new model of parallel computation, called BSR (broadcasting with selective reduction), and showed that it is more powerful than any CRCW PRAM and yet requires no more resources for implementation than even an EREW PRAM. The model allows constant time solutions to sorting, parallel prefix, and other problems.
In this paper, we describe constant time solutions to the parenthesis matching, decoding binary trees in bitstring representation, generating the next tree shape in B-order, and the reconstruction of binary trees from their traversals, using the BSR model. They are the first constant time solutions to the mentioned problems on any model of computation. The number of processors used is equal to the input size, for each problem. A new algorithm for sorting integers is also presented.

Index Terms: Binary tree, broadcast, parallel algorithm, parallel prefix, parenthesis matching, reduction, selection, sorting, tree traversals.
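For contrast with the constant-time BSR result, the classical sequential solution to parenthesis matching is the familiar linear stack scan (a Python sketch, assuming well-formed input):

```python
def match_parens(s):
    # Pair each ')' with its matching '(' by index, using a stack.
    stack, match = [], {}
    for i, ch in enumerate(s):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            j = stack.pop()
            match[i], match[j] = j, i
    return match

print(match_parens("(()())"))  # {2: 1, 1: 2, 4: 3, 3: 4, 5: 0, 0: 5}
```

The BSR model replaces this inherently sequential scan with a single broadcast-and-reduce step per element, which is what makes the constant-time bound possible.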
Help with parametric equations?

November 13th 2009, 07:16 PM  #1  Jul 2007

Two particles move in the xy-plane. At time t, the position of particle A is given by x(t)=4t−4 and y(t)=2t−k, and the position of particle B is given by x(t)=3t and y(t)=t^2−2t−1. Find k so that the particles are sure to collide.

My initial thought was to take the derivatives of the equations, but then the k would just drop out, so that's not right. Any suggestions? Thanks!

November 13th 2009, 07:20 PM  #2

If the two particles collide, then surely they have the same x and y values... So

$4t - 4 = 3t$ and $2t - k = t^2 - 2t - 1$

From equation 1, $t = 4$. So, when subbed into equation 2, we have $8 - k = 7$. What is k?
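The responder's two equations can be checked symbolically (a SymPy sketch): equal x-coordinates fix the collision time, and equal y-coordinates at that time fix k.

```python
import sympy as sp

t, k = sp.symbols('t k')
t_meet = sp.solve(sp.Eq(4*t - 4, 3*t), t)[0]   # same x-coordinate: t = 4
k_val = sp.solve(sp.Eq(2*t_meet - k, t_meet**2 - 2*t_meet - 1), k)[0]
print(t_meet, k_val)  # 4 1
```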
Re: Parsing with infinite lookahead corbett@lupa.Eng.Sun.COM (Robert Corbett) Sat, 26 Feb 1994 01:31:45 GMT From comp.compilers | List of all articles for this month | Newsgroups: comp.compilers From: corbett@lupa.Eng.Sun.COM (Robert Corbett) Keywords: parse, theory Organization: Sun References: 94-02-174 94-02-188 Date: Sat, 26 Feb 1994 01:31:45 GMT dwohlfor@cs.uoregon.edu (Clai'omh Dorcha) writes: >The CYK algorithm (as seen in Hopcroft & Ullman) is capable of parsing >_any_ context free grammar. The bummer is that it is O(n^3) where n is >the length of the input string. Most folks aren't too keen on using it >when a linear parser exists for most CFG's. But it is the only general >purpose parsing algorithm. The ONLY general-purpose parsing algorithm? HARDLY! Earley's algorithm (CACM 13:2), the Graham, Harrison, Ruzzo algorithm (TOPLAS 2:3), and backtracking algorithms all handle general context-free grammars. BTW, the order complexity of context-free parsing has been shown to be less than or equal to the order complexity of matrix multiplication. Yours truly, Robert Corbett Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
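Since CYK is the O(n^3) algorithm under discussion, here is a minimal recognizer sketch. The grammar must be in Chomsky normal form, and the toy grammar below is purely illustrative:

```python
def cyk(tokens, lexical, binary, start="S"):
    # table[i][j] = set of nonterminals deriving tokens[i..j] inclusive
    n = len(tokens)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][i] = {A for A, t in lexical if t == tok}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                 # split point
                for A, B, C in binary:            # rules A -> B C
                    if B in table[i][k] and C in table[k + 1][j]:
                        table[i][j].add(A)
    return start in table[0][n - 1]

# Toy CNF grammar: S -> A B, A -> 'a', B -> 'b'
print(cyk(list("ab"), [("A", "a"), ("B", "b")], [("S", "A", "B")]))  # True
```

The three nested loops over span, start position, and split point are exactly where the cubic bound comes from; Earley's algorithm reaches the same worst case but runs faster on most practical grammars.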
Journal of the Optical Society of America B

Using a first-order multiple-scale expansion approach, we derive a set of coupled-mode equations that describe both forward and backward second-harmonic generation and amplification processes in nonlinear, one-dimensional, multilayered structures of finite length. The theory is valid for index modulation of arbitrary depth and profile. We derive analytical solutions in the undepleted pump regime under different pumping circumstances. The model shows excellent agreement with the numerical integration of Maxwell's equations. © 2002 Optical Society of America

OCIS Codes
(190.4410) Nonlinear optics : Nonlinear optics, parametric processes
(230.4170) Optical devices : Multilayers

Giuseppe D'Aguanno, Marco Centini, Michael Scalora, Concita Sibilia, Mario Bertolotti, Mark J. Bloemer, and Charles M. Bowden, "Generalized coupled-mode theory for χ^(2) interactions in finite multilayered structures," J. Opt. Soc. Am. B 19, 2111-2121 (2002)
DOCUMENTA MATHEMATICA, Extra Volume: Kazuya Kato's Fiftieth Birthday (2003), 99-129

Laurent Berger
Bloch and Kato's Exponential Map: Three Explicit Formulas

The purpose of this article is to give formulas for Bloch-Kato's exponential map and its dual for an absolutely crystalline $p$-adic representation $V$, in terms of the $(\varphi,\Gamma)$-module associated to $V$. As a corollary of these computations, we can give a very simple and slightly improved description of Perrin-Riou's exponential map, which interpolates Bloch-Kato's exponentials for the twists of $V$. This new description directly implies Perrin-Riou's reciprocity formula.

2000 Mathematics Subject Classification: 11F80, 11R23, 11S25, 12H25, 13K05, 14F30, 14G20

Keywords and Phrases: Bloch-Kato's exponential, Perrin-Riou's exponential, Iwasawa theory, $p$-adic representations, Galois cohomology.

Full text: dvi.gz 58 k, dvi 160 k, ps.gz 665 k, pdf 317 k.
Lotka-Volterra Visualization

This project was a follow-up to my post about the predator-prey model derived by Alfred Lotka and Vito Volterra. I chose to do this for two reasons. First, it's always easier to understand concepts when you have their meanings visualized and applied to real life. Second, when I took ODEs, phase planes were just lines and numbers on the board. After this project, I fully understood the significance of phase planes, and they're truly enlightening. I want to share that with everyone!

It's a very straightforward program. I chose arbitrary values for the constants; my only concern was that the center point was at (2,2). The constants are $a_1 = 2\ \ a_2 = 2\ \ b_1 = 1\ \ b_2 = 1$. All of the code and documentation is at my github repository.

Program Layout

I use OpenGL to plot solutions on the phase plane. The solutions are lines of particles that flow under the forces of the vector field from the Lotka-Volterra equations. I used a graph generated from gnuplot as the background. The graph is the vector field from (5,5) with center point at $(\frac{a_2}{b_2},\ \frac{a_1}{b_1})$.

Then I use OpenCV to plot both populations against time. The sinusoidal graphs are just a sequence of lines drawn from pixel to pixel along the graph. Each pixel's succeeding location is updated by the Lotka-Volterra equations as well. They're plotted on top of the graph below, which was also generated from gnuplot.

See it in Action

If you're curious about what it looks like in action, I made a video about it in my math modeling section.

To Do List

I learned OpenGL for this program, so it's a little buggy and certainly not very clean. There are a bunch of constants I had to experimentally derive to scale the mouse clicks to the appropriate dimensions for both windows. I'm sure there is a better way to go about doing it. This could have easily been written only in OpenCV, but then I wouldn't have had the excuse to learn OpenGL!
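The per-particle update the post describes (positions advanced by the Lotka-Volterra vector field) can be sketched with forward Euler. The constants are the post's; the classical form of the equations, the time step, and the starting point are my assumptions:

```python
import math

a1, a2, b1, b2 = 2.0, 2.0, 1.0, 1.0   # the post's constants; center at (2, 2)
dt = 1e-3                              # illustrative step size

def step(x, y):
    dx = x * (a1 - b1 * y)   # prey:     dx/dt = x(a1 - b1*y)
    dy = y * (b2 * x - a2)   # predator: dy/dt = y(b2*x - a2)
    return x + dt * dx, y + dt * dy

x, y = 1.0, 1.0              # illustrative starting particle
for _ in range(10000):       # ten time units of flow
    x, y = step(x, y)
print(x, y)
```

Forward Euler slowly spirals outward on the closed Lotka-Volterra orbits, which is presumably why the comments below caution about its convergence.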
=] Special Thanks Thanks to my friend Ian Johnson for all of his help. Also thanks to David Kopriva for teaching me about math models! 4 Responses to Lotka-Volterra Visualization 1. Hey great visualizer Nathan. Really useful. How difficult would it be for you to modify it for the Competitive Lotka-Volterra equations? http://en.wikipedia.org/wiki/ 2. Thanks Michael! Editing the equations is a piece of cake. Lines 325-327 in main.cpp take care of updating the particle's position. You could switch out the Lotka-Volterra equations for any vector field! It looks like the competitive LV equations are still first order. Just change alpha and beta to the per capita growth rate and the carrying capacity, and then edit the equations on lines 326-327. You may also have to change the dimensions of OpenGL's bounding box to see the solution, and I can't guarantee convergence using forward Euler as I did. Nonetheless, all the ingredients are there; let me know how it works out for ya! 3. With Respect, I have been a life-science student throughout my career. As expected, I am not good at mathematics. I will be highly thankful if you will send a step-by-step explanation of all the mathematical equations (or other relevant study material) involved in the Lotka-Volterra model to my e-mail ID (mislam.esst@gmail.com). Or just simply suggest some websites & books which will be helpful for me to fully understand this model and the mathematical equations in it. Thank You. 4. Thank you very much for your explanation. I am doing a project on the relation between the Jacobian matrix and the Lotka-Volterra predator-prey model, and I have a doubt: when I find the eigenvalues of the system, I get purely imaginary values. What does that mean, and how can I conclude the project? This entry was posted in Lotka-Volterra Visualizer. Bookmark the permalink.
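The particle update discussed in comment 2 above is a forward-Euler step of the Lotka-Volterra equations. Here is a minimal Python sketch of that step, using the post's constants $a_1 = a_2 = 2$, $b_1 = b_2 = 1$; the function name and step size are my own, not from the repository:

```python
def lv_step(x, y, dt, a1=2.0, a2=2.0, b1=1.0, b2=1.0):
    """One forward-Euler step of the Lotka-Volterra system
    dx/dt = x*(a1 - b1*y),  dy/dt = -y*(a2 - b2*x)."""
    dx = x * (a1 - b1 * y)
    dy = -y * (a2 - b2 * x)
    return x + dt * dx, y + dt * dy

# Trace one orbit on the phase plane from an arbitrary start point.
x, y = 1.0, 1.0
for _ in range(1000):
    x, y = lv_step(x, y, 0.001)
```

At the center point (2, 2) both derivatives vanish, so a particle placed there stays put; anywhere else it circulates around the center, which is exactly the closed orbits the visualizer draws. As comment 2 warns, forward Euler does not conserve those orbits exactly, so particles slowly drift outward over long runs.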
The Difference Between Log and Natural Log Date: 8 Feb 1995 20:05:32 -0500 From: Anonymous Subject: Logarithms What is the difference between log and natural log? I am having problems with this, so please help me. Date: 8 Feb 1995 22:29:17 -0500 From: Dr. Sydney Subject: Re: Logarithms Suppose we have y = ln x; z = log t where ln is log base e and log is log base 10. Then these equations are equivalent to the following e^y = x; 10^z = t Sometimes it is easier to think of logs in these terms instead! So, the difference is in the base -- ln has base e, log has base 10. Hope this helps! Write back if you have any more problems! Sydney, "dr. math" Date: 8 Feb 1995 23:41:01 -0500 From: Elizabeth Weber Subject: Re: Logarithms Now, what is e? Well, its real name is Euler's number, and it's equal to 2.71828182...... But why would we care enough about e to have a special kind of logarithm for it? Well, for some reason it's a number that pops up all over the place (especially when you learn calculus). For instance, if you draw the graph of 1/x, the area between this graph and the x-axis between x=1 and x=t is the natural log of t.....but that's calculus. But you don't have to be using calculus to run into e occasionally. e shows up in statistics and in growth problems. You've learned about interest, right? If you have a hundred dollars, and the interest rate is 10%, you soon have $110, and the next time interest is figured out you're adding another 10% of $110, so you'll get $121, and so on... What happens when the interest is being computed continuously (all the time)? You might think you'd soon have an infinite amount of money, but actually, you have your initial deposit times e to the power of the interest rate times the amount of time: (deposit) x e^(interest rate x time) And e just naturally shows up again in growth problems, and in some statistics problems too, which is why we bother giving the natural log a special name. Elizabeth, a math doctor
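Dr. Math's continuous-compounding claim, that the balance approaches deposit × e^(rate × time) rather than blowing up, is easy to check numerically. A quick sketch, using the $100 deposit and 10% rate from the answer:

```python
import math

deposit, rate = 100.0, 0.10  # $100 at 10% annual interest

# Compound n times per year for one year: deposit * (1 + rate/n)**n.
for n in (1, 12, 365, 1_000_000):
    print(n, deposit * (1 + rate / n) ** n)

# As n grows, the balance converges to deposit * e**rate, not infinity.
continuous = deposit * math.exp(rate)  # about 110.52
```

Compounding once a year gives $110; compounding a million times a year gives about $110.52, already indistinguishable from the continuous limit.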
The Basics of Statistics I: The Normal Distribution So here’s the first post on statistics. If you know the basics, and I suspect most of you do, then you can just ignore these posts (unless you want to check to make sure I’m getting it right). If you don’t know the basics, then hopefully you will when I’m done. Even for those of you who’ve never taken a stats class, much of this will probably be familiar, but I’m going to start from the assumption that I’m writing for someone who has no knowledge of statistics whatsoever, so bear with me. Alright, let’s begin. The Normal Distribution In cognitive psychology, two related types of statistics are used: descriptive and inferential. Descriptive statistics are just what the name says: descriptions of data. Inferential statistics are used to draw inferences about populations from samples. Since the two are related, I’m going to talk about them both pretty much at the same time. And to do that, we have to start with the normal distribution, and Central Limit Theorem (CLT). In essence, CLT says that if you have a bunch of independent, random variables that (additively) determine the value of another variable, then as long as you meet a few constraints (particularly, finite variance, but that won’t make sense until we get to variance), then the distribution of that variable will be approximately normal. Take, for example, height. A person’s height is determined by a bunch of independent random variables like genetics, nutrition, and the amount of solar radiation (maybe?), so at least within a population (say, the adult population in the United States), height will tend to be normally distributed. That is, if you calculate the number of people at each particular height, and then graph those frequencies (represented as probabilities) for one gender, you’ll get a graph that looks something like this (from here): That’s the classic “bell curve,” or the normal distribution. 
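The CLT claim is easy to see in a simulation: sum a handful of independent uniform random variables and the totals pile up in a bell shape, even though each input is flat. A small sketch; the choice of 12 uniforms and the sample size are arbitrary, just for illustration:

```python
import random
import statistics

random.seed(42)

# Each observation is a sum of 12 independent Uniform(0, 1) draws.
# Each uniform has mean 1/2 and variance 1/12, so the sum has
# mean 6 and variance 1 (standard deviation 1).
totals = [sum(random.random() for _ in range(12)) for _ in range(20_000)]

print(statistics.mean(totals))   # close to 6
print(statistics.stdev(totals))  # close to 1
```

A histogram of `totals` looks very much like the bell curve above, which is the CLT at work.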
Now the reason the CLT is important is because it lets us (by us, I mean psychologists) assume the normal distribution in most cases, and that's important because the normal distribution has certain well-known properties that make it excellent for computing both descriptive and inferential statistics. We'll start with measures of central tendency. There are three basic measures of central tendency:
• The mean, which is just the average. I'm sure you know how the mean is computed, but just in case, you compute it like this: μ = ΣX/N Where μ is the mean, ΣX is the sum of all of the instances of variable X, and N is the number of instances. Put more simply, the mean is just the sum of all the instances divided by the number of instances.
• The median, or the middle value. That is, the value for which half of the instances are greater and half are lower. So, if we had the following values for X: 10, 13, 17, 6, and 15, then the median would be 13. If you have an even number of instances, then there is no middle instance, so you compute the median by adding the two middle instances and dividing them by 2. For example, if you added 21 to the above instances of X, the median would now be (13 + 15)/2, or 14.
• The mode is the most frequent value. So if you have these values for X: 12, 21, 17, 14, 7, 8, 23, 8, 14, 20, 8, 13, then the mode is 8.
One of the great features of the normal distribution is that within it, the mean, median, and mode are the same thing. That is, the average instance is also the number with an equal number of instances above and below it and the most frequent instance. The next great feature of the normal distribution concerns variability. It's all well and good to know the central tendency of the distribution of a variable, but that doesn't tell you a whole lot unless you know the spread of that variable, or how much each instance of the variable tends to differ from the others and from the mean. The first measure of spread, or variability, is the variance.
The variance is computed like this (for a population): σ^2 = Σ(X – μ)^2 / N In the equation, σ^2 is the variance, Σ(X – μ)^2 is the sum (Σ) of the squared differences between each instance of X and the mean, and N is the number of instances. So the variance is computed by subtracting the mean from each value of X, squaring that, adding the results, and dividing by the number of instances. Put simply, the variance is the average squared distance from the mean. You may be wondering why X – μ is squared in the equation. Well, it's quite simple. If you added all the raw deviations above and below the mean, they'd cancel each other out: the deviations above the mean exactly balance the deviations below it. So if you don't square them you get 0, and that doesn't help you very much. But a squared value is difficult to work with, so in addition to the variance as a measure of spread, we also use the standard deviation (represented as σ, for populations), which is calculated simply by taking the square root of the variance. So now you have a number that basically gives you the average distance of the values of a variable from the mean of that variable. Perhaps the most important feature of the normal distribution, for our purposes, is the fact that it allows you to compute the probability of getting a value up to a particular value. How, you ask? Well, consider this normal distribution: The area under the curve represents probability. The line down the middle of the distribution (at 50) is the mean. The space to the left of the mean represents 50% of the area, and thus the probability of getting a value less than the mean is 50%. The same is true of the area above the mean, and so the probability of getting a value above it is also 50%. But you knew that from the discussion of central tendency above.
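The formulas above can be checked against Python's `statistics` module, which implements exactly these population versions as `pvariance` and `pstdev`. Using the five values from the median example earlier:

```python
import math
import statistics

data = [10, 13, 17, 6, 15]

mean = sum(data) / len(data)                               # μ = ΣX/N = 12.2
variance = sum((x - mean) ** 2 for x in data) / len(data)  # σ² = 14.96
stdev = math.sqrt(variance)                                # σ ≈ 3.87

# The stdlib agrees with the hand-rolled formulas.
assert math.isclose(variance, statistics.pvariance(data))
assert math.isclose(stdev, statistics.pstdev(data))
```

Note that the post's σ² divides by N (the population form); the module's `variance` and `stdev` functions without the `p` prefix divide by N − 1 instead, which is the form used when estimating from a sample.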
The wonderful thing about normal distributions is that you can also compute the probability associated with any value of a variable by computing the area under the curve to the left of that value. You do this with a nice little equation that I'm too lazy to write out, and that you'll never ever need to use anyway. Why this is all important will begin to become clear in the next post, but I think this is enough for now. 1. #1 js June 29, 2007 Good idea for a post, but everything you've said that's good about the normal distribution is true of any symmetric, unimodal distribution. One of the distinctive things about the normal distribution in this class is that all higher-order cumulants are zero. But, really, the normal distribution is important because of the CLT, which implies we can use the same procedures for a wide variety of problems, without having to think about what we're doing. This is also what's really awful about the normal distribution. 2. #2 Webs June 29, 2007 You hear that… it's the standing ovation you got for the quality work here! I thought my grad class on stats broke it down easily till I came across this post. Anyone with a basic understanding of math should understand this, which is very important. The more people understand basic stats, the more likely they are to understand the world and its interactions. For instance, I think a better understanding of stats would lead to fewer people believing that 66,000 children are abducted every year in the US (not trying to get political here, just an example I heard of crappy stats). And the more intelligent our society could become. 3. #3 Renee June 29, 2007 Thanks for this post! I'll be reading the rest. I'm taking statistics next semester, but we all know how useless classes can be for learning sometimes ^^. 4. #4 Chris June 29, 2007 js, you're right, it's true of any symmetrical, unimodal distribution.
But the rest of the posts are about starting from the normal distribution and going through its different properties to ultimately arrive at hypothesis testing for cases when we don't know the parameters of the population. Which is why I say that it will start to become clear in the next posts. And you're right, it's both a blessing and a curse. 5. #5 Torbjörn Larsson, OM June 29, 2007 the variance is the average squared distance from the mean Since this is a basics post I can mention my favorite heuristic for getting to, and remembering, that fact. Model data as springs connected to a point representing the mean, each spring pulled to a distance proportional to the data, in two directions. Spring forces are linear with distance, so the work done by each spring will be squared. We get that from work being the force applied over a distance. Work done is energy stored, so the variance given in the post represents the average energy in a spring. We get that from the formula, seeing that variance is summed-up energy (total energy) divided by the number of springs (data). Now we go back again. A 'variance' spring with average energy represents average distance from the mean. [A variant of the same model helps to understand linear regression, btw.] 6. #6 Ben M June 29, 2007 Human heights happen to be a normal-ish distribution, but they're not a good illustration of the Central Limit Theorem. First of all, there is no reason to suspect that the variables are additive, random, or independent. Secondly, the appropriate "large number" in the CLT would be the number of independent quantities per person. In practice, this might be just a few parameters—suppose that your height is proportional to the sum of your parents' heights, two or so nutritional degrees of freedom, and a random noise variable. It's just four or five parameters; in my mind that's usually not enough to invoke the CLT.
In reality, we have to look at the height distribution and say "Yeah, that looks pretty normal if we plot US males only."—we can't predict it from any useful principle. Why is the height distribution so Gaussian-looking? Well, probably because the main input variables happen to be Gaussian-looking, and convolving two Gaussians tends to give you another Gaussian. The CLT may, however, tell you why the inputs are so normal: you could consider the heights of your ensemble of great-great-grandparents to be sixteen independent quantities which sum in a CLT way. "No matter what the human height distribution was in 1880, if offspring heights are simple averages of parent heights AND if marriages don't sort by height, then today's height distribution will be normal"—that's an accurate illustration of the CLT. "Human height depends on some random variables, and is therefore normal" is not. Just a thought. 7. #7 Torbjörn Larsson, OM June 29, 2007 "Work done is energy stored," – Work done on a spring is energy stored in it. 8. #8 Chris June 29, 2007 Ben, you're right, of course. And I could have said all of that. And I'd have lost half the people in the first paragraph… heh. I use height as an example, as many stats teachers do, 'cause it's something people are familiar with. I'd use IQ, but since that's a standardized distribution, it would be getting ahead. 9. #9 Torbjörn Larsson, OM June 29, 2007 Though as a note, to come back to the blessing and the curse of the Gaussian, it must be pretty much the null hypothesis in most cases, especially for noise. The problem is when people don't check it thoroughly or model the causes properly. But you seem to take some care. 10. #10 chet snicker June 29, 2007 mr. ben, me perceives in your tone that you are a born hall monitor. NERD ALERT!!! NERD ALERT!!!! yours truly c.v. snicker
More Iranian election statistics It's looking more and more as if the official Iranian election returns were at least partially fictional. I wrote last week about one unconvincing statistical argument for fraud; now a short paper by Bernd Beber and Alexandra Scacco offers more numbers and makes a stronger case. Keeping in mind that I like their paper a lot, let me say something about a part of it where I thought a bit more justification was needed. Consider the following three scenarios for generating 116 digits that are supposed to be random:
1. Digits produced by 116 spins of a spinner labeled 0,1,…,9.
2. Final digits of vote totals from 116 Iranian provinces.
3. Final digits of vote totals from U.S. counties.
Now consider the following possible outcomes:
• A. Each digit appears either 11 or 12 times.
• B. 0 appears only 4% of the time, and the other digits appear roughly 10% of the time.
• C. 7 appears 17% of the time, 5 appears only 4% of the time, other digits appear roughly 10% of the time.
Which outcome should make you doubt that the digits are truly random? In scenario 1, I think B and C are suspicious; that level of deviation from the mean is more than you'd expect from random spins. Outcome B would make you suspect the spinner was biased against landing on 0, and C would make you think the spinner was biased towards 7 and against 5. But of course, outcome A is much more improbable (or so my mental calculation tells me) than either B or C. So why doesn't it arouse suspicion? Because there's no apparent mechanism by which a spinner could be biased to produce near-exactly uniformly distributed results like this. Your prior degree of belief that the spinner is "fixed" to produce this behavior is thus really low, and so even after observing A your belief in the spinner's fairness is left essentially unchanged. In scenario 3, I don't think any of the three outcomes should raise too much suspicion.
Yes, the probability of seeing deviations from uniformity as large as those in C in random digits is under 5%. But we have a strong prior belief that U.S. elections aren't crooked — in this case, I think it's fair to say that scenarios A, B, and C are all evidence that the digits are being faked, but not enough evidence to raise the very small prior to a substantial probability of fraud. Scenario 2, the one Beber and Scacco consider, is the most interesting. Outcome C is the one they found. In order to estimate the probability of fraud in a Bayesian way, given outcome C, you need three numbers:
• The probability of seeing outcome C from random digits;
• The probability of seeing outcome C from digits made up from whole cloth at the ministry;
• The probability — prior to any knowledge of the election results — that the Iranian government would release false numbers.
The third question isn't a mathematical one, but let's stipulate that the answer is substantial — much larger than the analogous probability in the United States. The first question is the one Beber and Scacco assess in their paper; they get an answer of less than 5%. That sounds pretty damning — deviations like the "extra 7s" seen in the returns would arise less than 1 in 20 times from authentic election numbers. In fact, outcomes A, B and C are all pretty unlikely to arise from random digits. But outcome C is evidence for fraud only if it's more likely to arise from fake numbers than real ones. And here we have an interesting question. Beber and Scacco observe that, in practice, people are bad at choosing random digits; when they try, they tend to pick some numbers more frequently than chance would dictate, and some less. (Their cites for this include the interesting paper by Philip J. Boland and Kevin Hutchinson, Student selection of random digits, Statistician, 49(4): 519-529, 2000.) So on these grounds it seems outcome C is indeed good evidence for faked data.
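The "less than 5%" figure can be sanity-checked with a quick Monte Carlo experiment: generate 116 uniform final digits many times and see how often some digit is as over-represented as the 7s (17% of 116 is about 20 occurrences) while another is as scarce as the 5s (4% is about 5). This is a rough stand-in for illustration, not Beber and Scacco's actual test statistic:

```python
import random
from collections import Counter

random.seed(0)

def suspicious(n=116, hi=20, lo=5):
    """One trial: do n uniform digits show a count >= hi and a count <= lo?"""
    counts = Counter(random.randrange(10) for _ in range(n))
    tallies = [counts[d] for d in range(10)]
    return max(tallies) >= hi and min(tallies) <= lo

trials = 20_000
p = sum(suspicious() for _ in range(trials)) / trials
print(p)  # a few percent: rare, but far from impossible
```

An estimate in the low single-digit percents is consistent with the paper's claim, and it also illustrates the caution above: a sub-5% event in genuine random digits is unusual, but hardly proof on its own.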
But note that the Boland-Hutchinson data doesn't just say people are bad at picking random digits — it says they are bad in predictable ways at picking random digits. Indeed, in each of their four trial groups, participants chose "0" — which just doesn't "feel random" — between 6.5% and 7.5% of the time, substantially less than the 10% you'd get from a random spinner. So outcome B, I think, would clearly be evidence for fraud. But outcome C is a little less cut-and-dried. Just as it's not clear what mechanism would make a fixed spinner prone to outcome A, it's not clear whether it's reasonable to expect a person trying to pick random numbers to choose lots of numbers ending in "7". In Boland and Hutchinson's study, that digit came up just about exactly 10% of the time. Here's one way to get a little more info; let's say we believe that people trying to imitate random numbers choose 0 less often than they should. If the Iranian election digits had an overpopulation of 0, you might take this to be evidence against the made-up number hypothesis. So I checked — and in fact, only 9 out of the 116 digits from the provincial returns, or 7.7%, are 0. Point, Beber and Scacco. In the end, it'll take people with better knowledge of Iranian domestic politics — that is, people with more reliable priors — to determine what portion of the election numbers are fake. But Beber and Scacco have convinced me, at least, that the provincial returns they studied are more consistent with made-up numbers than with real ones. Here's a post from Andrew Gelman's blog in which Beber and Scacco explain what their tests reveal about the county-level election data. Update: A more skeptical take on Beber and Scacco from Zach at Alchemy Today, who also makes the point that in order to get this question right it's a good idea to think about the way in which people's attempts to choose random numbers deviate from chance.
I think his description of Beber and Scacco’s reasoning as “bogus” is too strong, but his observation that the penultimate digits of the state totals for Obama and McCain are as badly distributed as the final digits of the Iran numbers is a good reminder to be cautious. Re-update: Beber remarks on Zach’s criticisms here. 6 thoughts on “More Iranian election statistics” 1. In the first sentence of paragraph -5, (“So outcome C, I think, would clearly be…”) you’ve switched outcomes B and C. Either that or I’m very confused… 2. Crap. Think I fixed this now. 3. I would be cautious in equivalencing biases among American university students and Iranian voters: although the former are biased against 0s, etc., that is only weak evidence that the latter do. Also note that Beber & Scacco present a second test, which combined with the first, greatly reduces the probability. Their calculations, though, contain a minor error: the probability of the two results occurring by chance is not the stated 0.5%, but rather 0.14%. I pointed that out to the authors and they concurred; see the report online at Discovery Magazine: 4. I agree that “bogus” is strong, but when the authors are arrogantly proclaiming that their analysis “leaves little room for reasonable doubt” and that they “systematically show” what likely happened, a strong response is required. Frankly, it’s dishonest. Go read their work on Nigerian elections and see what they’re omitting when it comes to expected numbers — they conveniently pick the evidence from cognitive psychology that fits their observations and selectively summarize that which doesn’t. Lastly, mentioning the 0.005 probability (ignoring that it’s wrong and uncorrected by the Post thus far) is specious when there is a much, much higher probability that an equivalent event would occur in a random sequence. 
I don't have the time or skill to prove this, but I suspect that it's more likely than not that an article could be written following precisely the same logic for any random sequence of 116 two-digit numbers. 5. More here btw – http://alchemytoday.com/2009/06/25/more-on-that-devil/ 6. [...] We've talked about attempts to prove election fraud by mathematical means before. This time the election in question is in Russia, where angry protesters marched in the streets with placards displaying the normal distribution. Why? Because the turnout figures look really weird. The higher the proportion of the vote Vladimir Putin's party received in a district, the higher the turnout; almost as if a more ordinary-looking distribution were being overlaid with a thick coating of Putin votes… Mikhail Simkin in (extremely worth reading pop-stats magazine) Significance argues there's no statistical reason to doubt that the election results are legit. Andrew Gelman is not reassured. [...]
Gravitonics - Managing Gravity SuperLight is magnetic light; it is magneto–electric radiation. Regular light is electric light, or electro–magnetic radiation. There is parity or symmetry in the Universe; everything has an equal and opposite mirror-image counterpart: the Yin and the Yang, right and left, matter and antimatter, the electron and the positron. Why not light? Both science and metaphysics have honored this parity law in all things except light. They are wrong. There is parity in light as well! I will now explain and give you more detail. SuperLight is the unseen force in nature that has been ignored by science but real to the mystics and metaphysicians for thousands of years. It has been given different names by different cultures for thousands of years: Nous, Chi, Biomagnetic Energy, Wilhelm Reich's Orgone Energy, Tesla's Free Earth Energy, Animal Magnetism, Space Energy, Vacuum Energy, Zero Point Energy, etc. Those who have subtle perception know it is real. SuperLight was identified scientifically over 100 years ago when James Clerk Maxwell solved his famous wave equation. This occurred shortly after radio was invented by Nikola Tesla, and theoretical physicists tried to find a mathematical model to explain radio waves. When using positive numbers, Maxwell's equations explain radio waves and also all forms of electro–magnetic radiation such as light, radio, TV, microwaves, x–rays, etc. What his equation also explained 100 years ago was SuperLight, but because it is the solution that comes from the use of negative numbers, "this second solution" was ignored for over 100 years. Remember when you were taught algebra and were told to ignore imaginary numbers (e.g. the square root of −1) because they have no meaning in this world. Well, times have changed and now we have a very valid second solution to Maxwell's equation, and it is SuperLight. In the mid 70's a scientist, Dr.
William Tiller, at Stanford University, took another look at Maxwell's equation and asked, "What does this second solution explain when interpreted in our world?"{1} To understand this second solution, we must first review what the first, or positive, solution explains. The first solution is as follows: radio waves leave the antenna and radiate out into space from a point source (the antenna) equally in all directions toward infinity, traveling at the speed of light. The wave is composed of a large electrical component and a small magnetic component at 90 degrees to the electrical component. Thus the name electro–magnetic radiation. The second solution describes a particle wave of just the opposite structure. It explains that from infinity, traveling toward the point source from all directions, radiates SuperLight. This new radiation is composed of a large magnetic component and a small electrical component, thus the name magneto–electric radiation. When the equations are looked at more closely, one finds that "SuperLight" travels at the speed of light squared: 10^20 meters per second, or 10 billion times faster than light. It has a frequency 10 billion times higher, and has a correspondingly shorter wavelength. It therefore has a higher energy density.
Elliptic curve with a degree 2 isogeny to itself?

I've come across the following question, which I think must be easy for experts: is there a complex elliptic curve $E$ with an isogeny of degree 2 to itself? Of course one can ask the same question for isogenies whose degree is not a square, or for higher-dimensional abelian varieties, etc.

Accepted answer: Expanding on Francois's answer, $E$ has an endomorphism of degree 2 if and only if its endomorphism ring $R=\operatorname{End}(E)$, which is an order in an imaginary quadratic field, has an element of norm 2. There are exactly three such orders, namely $\mathbb{Z}[i]$, $\mathbb{Z}[\sqrt{-2}]$, and $\mathbb{Z}[(1+\sqrt{-7})/2]$. So up to isomorphism over $\overline{\mathbb{Q}}$, there are exactly three elliptic curves with endomorphisms of degree 2. Equations for these curves and their degree 2 endomorphisms are given in Advanced Topics in the Arithmetic of Elliptic Curves, Proposition II.2.3.1. There are similarly only finitely many curves with a higher-degree cyclic isogeny of fixed degree $d$. Using Velu's formulas, one could probably write them all down for small values of $d$.

Comment: Thank you very much for the additional explanation (I actually needed it)! – rita May 3 '13 at 15:59

Earlier answer: Yes, but the elliptic curve needs to have complex multiplication, since the multiplication-by-$n$ map has degree $n^2$. For an explicit example, you can take $E=\mathbf{C}/(\mathbf{Z}+i\mathbf{Z})$ with the isogeny being multiplication by $1+i$.

Comment: Thank you for the answer. – rita May 3 '13 at 15:58
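The "element of norm 2" criterion in the accepted answer can be verified with a brute-force search over small imaginary quadratic orders. The norm of $x + y\sqrt{-d}$ is $x^2 + dy^2$; in the half-integer order $\mathbb{Z}[(1+\sqrt{-d})/2]$ (which exists when $d \equiv 3 \pmod 4$), elements are $(a + b\sqrt{-d})/2$ with $a \equiv b \pmod 2$ and norm $(a^2 + db^2)/4$. A quick sketch:

```python
def norm2_in_full_order(d):
    # Does Z[sqrt(-d)] contain an element of norm 2, i.e. x^2 + d*y^2 == 2?
    return any(x * x + d * y * y == 2
               for x in range(-2, 3) for y in range(-2, 3))

def norm2_in_half_order(d):
    # For d = 3 mod 4: does Z[(1+sqrt(-d))/2] contain an element of norm 2?
    # Elements (a + b*sqrt(-d))/2 with a = b mod 2 have norm (a^2 + d*b^2)/4,
    # so we need a^2 + d*b^2 == 8.
    return any(a * a + d * b * b == 8 and (a - b) % 2 == 0
               for a in range(-3, 4) for b in range(-3, 4))

full = [d for d in range(1, 30) if norm2_in_full_order(d)]
half = [d for d in range(3, 30, 4) if norm2_in_half_order(d)]
print(full, half)  # [1, 2] and [7]
```

This recovers exactly the three orders in the answer: $\mathbb{Z}[i]$ ($d=1$, element $1+i$), $\mathbb{Z}[\sqrt{-2}]$ ($d=2$, element $\sqrt{-2}$), and $\mathbb{Z}[(1+\sqrt{-7})/2]$ ($d=7$, element $(1+\sqrt{-7})/2$).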
{"url":"http://mathoverflow.net/questions/129509/elliptic-curve-with-a-degree-2-isogeny-to-itself/129513","timestamp":"2014-04-18T13:51:24Z","content_type":null,"content_length":"55466","record_id":"<urn:uuid:5523afc8-8be7-4da4-b092-d566d13fae96>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved.

A basic introduction to magic squares can be found here. Magic squares have intrigued people for thousands of years and in ancient times they were thought to be connected with the supernatural and hence, magical. Today, we might still think of them as being magical, for the sum of each row, column and diagonal is a constant, the magic constant. The squares intrigued me when I found that their construction was far from easy. For the simple 3x3, that is, the order 3 magic square, trial and improvement quickly does the job; but for magic squares higher than order 4, a method is necessary. The problem of construction is twofold. An algorithm which works for odd order squares will not work for even order squares without the further addition of another algorithm. At least, I know of no method which will work for both odd and even orders, other than trial and improvement computer programs. For the purposes of this article, I will be considering only magic squares that are constructed using consecutive integers from 1 to n^2, where n is the number of integers on one side of the square. Odd magic squares are fairly easily constructed using either the Siamese (sometimes called de la Loubere's, or the Staircase method), the Lozenge, or de Meziriac's methods. The first two methods are described in some detail on the web site "Eric's Treasure Trove of Mathematics"; either way, use "magic squares" in the search engine on that site. De Meziriac's method can be found on page 76 in the book "Mathematical Games and Puzzles", by Trevor Rice, published by B.T. Batsford Limited, London. Another way, which I prefer (but then that was the way I learned to construct odd order squares), is the extended Pyramid method, or diagonals. This method consists of three steps:

1. Draw a pyramid on each side of the magic square.
The pyramid should have two fewer squares on its base than the number of squares on the side of the magic square. This creates a square standing on a vertex.

2. Sequentially place the numbers 1 to n^2 of the n x n magic square in the diagonals, as shown in Figures 1 and 2.

3. Relocate any number not in the n x n square (that appears in the pyramids you added) to the opposite hole inside the square (shaded).

Figure 1

The same Pyramid method can be used for any odd order magic square, as shown below for the 5x5 square in Figure 2.

Figure 2

We can use some properties of magic squares to construct more squares from the manufactured squares above, e.g.:

1. A magic square will remain magic if any number is added to every number of a magic square.
2. A magic square will remain magic if any number multiplies every number of a magic square.
3. A magic square will remain magic if two rows, or columns, equidistant from the centre are interchanged.
4. An even order magic square (n x n where n is even) will remain magic if the quadrants are interchanged.
5. An odd order magic square will remain magic if the partial quadrants and the row are interchanged. This will be the subject of the next article, published in September.

Constructing the even order magic squares does present more of a challenge. There are many different ways, which can be studied through "Eric's Treasure Trove of Mathematics", at least as a starting point. All the methods I have seen in the literature are rather complicated, in that they require the use of two or more algorithms. There are claims for a simple method for the construction of even order magic squares, but I have yet to find such a method. However, the following method, which I have developed, uses but one algorithm and will work for any even sided square. I call it the "paired exchange method".

The theory behind the paired method is fairly straightforward. Consider the first and last columns of an n x n magic square where n is even.
Starting by placing the integers in order across the rows of the square (see Figure 3), the difference between the first and last number in any row will be n - 1. Since there are n rows in the square, there will be a total difference of n(n - 1) between the first and last columns of the square. To balance the totals for the first and last columns we must exchange pairs of numbers between the first and last columns, and each exchanged pair must be from the same row, so as not to change the sum total of the row. How many times must we exchange pairs to equalise the columns? When a pair is exchanged in a row, the difference between the columns changes by 2(n - 1). If t is the number of times pairs must be exchanged, then

t[2(n - 1)] = n(n - 1),
2t = n,
and t = n/2.

A similar argument can be made for the 2nd and the next-to-last columns, since the only change in the above formulae will be to substitute (n - 3) for (n - 1). The resulting t stays the same. In like manner, all columns paired from the centre line of the square can be made to be equal, and since the numbers in the original square are consecutive integers, all the columns will be equal to the magic constant for the n by n magic square. Of course, columns are just rows seen from a different viewpoint, hence in a like manner all rows can be made to equal the magic constant.

Now let's look at a few examples. Consider the 4 by 4 magic square. Here t is equal to 2. The choice of pairs to exchange is limited if we want the sum of the numbers on the diagonals to equal the magic constant: the pairs must be on the diagonals. By reflecting in the centre line x and then y we achieve the same as a single reflection in the lines y = x and y = -x. Or to state it another way, we exchange the numbers with their opposite numbers equidistant from the centre along the diagonals (see Figure 3). We have exchanged two pairs in each row and column of the square and the result is a magic square.
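The 4 by 4 construction just described, fill the square row by row and then swap each diagonal entry with its partner diametrically opposite, can be sketched in a few lines of Python (the helper names are my own):

```python
def magic_4x4():
    n = 4
    # Step 1: fill 1..16 row by row.
    sq = [[r * n + c + 1 for c in range(n)] for r in range(n)]
    # Step 2: swap every diagonal cell in the top half with the cell
    # diametrically opposite, so each pair is exchanged exactly once.
    for r in range(n // 2):
        for c in range(n):
            if c == r or c == n - 1 - r:  # cell lies on a diagonal
                sq[r][c], sq[n-1-r][n-1-c] = sq[n-1-r][n-1-c], sq[r][c]
    return sq

sq = magic_4x4()
rows = [sum(row) for row in sq]
cols = [sum(sq[r][c] for r in range(4)) for c in range(4)]
diags = [sum(sq[i][i] for i in range(4)), sum(sq[i][3 - i] for i in range(4))]
print(rows, cols, diags)  # every sum equals the magic constant 34
```

The two exchanges per row and column are exactly the t = n/2 = 2 pair exchanges derived above.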
Figure 3

The 6 by 6 magic square is constructed in the same manner, but here t = 3, hence we have much more freedom in our choice of pairs for each column and row. We exchange all the pairs on the diagonal, which is equivalent to exchanging two pairs from each pair of rows and columns, so we must now exchange one more pair from each pair of rows and columns to equalise them. One such choice in the construction of a 6 by 6 magic square is shown in Figures 4, 5 and 6 below.

Figure 4: Exchange the pairs in the diagonals. Here we equalise the rows before the columns, the opposite order to that discussed in the text. It can be done in any order.

Figure 5: Exchange the pairs in the columns.

Figure 6: Exchange the pairs in the rows.

How many other choices we have is possible to calculate, but I will leave that for another time. In the meantime try making a 7 by 7 magic square, or an 8 by 8, or a 10 by 10, or ....

The next article in the series is Magic Squares II.
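For comparison with the even-order construction above, the Siamese (Staircase) method mentioned earlier for odd orders is also short to code. This is a sketch of the standard algorithm: start in the middle of the top row, keep stepping up and to the right (wrapping around the edges), and drop down one cell whenever the target is occupied:

```python
def siamese(n):
    # Siamese method: builds an n x n magic square for any odd n.
    assert n % 2 == 1
    sq = [[0] * n for _ in range(n)]
    r, c = 0, n // 2                    # middle of the top row
    for k in range(1, n * n + 1):
        sq[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n   # up and to the right, wrapping
        if sq[nr][nc]:                       # occupied: move down instead
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return sq

sq = siamese(5)
magic = 5 * (5 * 5 + 1) // 2  # magic constant 65 for order 5
print(all(sum(row) == magic for row in sq))
```

Every row, column, and diagonal of the result sums to n(n^2 + 1)/2, the magic constant used throughout this article.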
{"url":"http://nrich.maths.org/1337/index?nomenu=1","timestamp":"2014-04-18T21:09:35Z","content_type":null,"content_length":"11447","record_id":"<urn:uuid:1bccb8c1-efe4-4464-bb1a-ec02ae9dab46>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Next: Arguments | Up: Linear Least Squares Problems | Previous: LA_GELSS / LA_GELSD | Contents | Index

LA_GELSS and LA_GELSD compute the minimum-norm least squares solution to one or more real or complex linear systems using the singular value decomposition of the coefficient matrix. The effective rank of the matrix is determined from its singular values. LA_GELSD combines the singular value decomposition with a divide and conquer technique. For large matrices it is often much faster than LA_GELSS but uses more workspace.

Susan Blackford 2001-08-19
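As an illustration of what "minimum-norm least squares via SVD" means in practice, NumPy's `numpy.linalg.lstsq` (documented to call a LAPACK `*gelsd` driver) returns the minimum-norm solution of an underdetermined system:

```python
import numpy as np

# Underdetermined system: x1 + x2 = 2 has infinitely many solutions;
# the gelsd-style solver picks the one with the smallest 2-norm, (1, 1).
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

x, residuals, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
print(x)     # minimum-norm solution, approximately [1. 1.]
print(rank)  # effective rank determined from the singular values: 1
```

Any other exact solution, such as (2, 0), has a larger 2-norm, which is why the SVD-based routine does not return it.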
{"url":"http://www.netlib.org/lapack95/lug95/node183.html","timestamp":"2014-04-19T04:29:23Z","content_type":null,"content_length":"4744","record_id":"<urn:uuid:86136ced-7217-4744-819d-24a56f3e7b70>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
2.6 – The Normal Distribution

Previous: 2.5 – Some Common Continuous Distributions | Next: 2.7 – A Geometric Problem

The most important probability distribution in all of science and mathematics is the normal distribution.

The Normal Distribution. The random variable X has a normal distribution with mean parameter μ and variance parameter σ^2 > 0 with PDF given by

$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}},\ -\infty < x < \infty.$

To express this distributional relationship on X, we commonly write X ~ Normal(μ, σ^2).

This PDF is the classic "bell curve" shape associated to so many experiments. The parameter μ gives the mean of the distribution (the centre of the bell curve) while the σ^2 parameter gives the variance (the horizontal spread of the bell curve). The first of these facts is a simple exercise in integration (see the exercises), while the second requires a bit more ingenuity. Recall that the standard deviation of a random variable is defined to be the positive square root of its variance. Thus, a normal random variable has standard deviation σ. This random variable enjoys many analytical properties that make it a desirable object to work with theoretically. For example, the normal density is symmetric about its mean μ. This means that, among other things, exactly half of the area under the PDF lies to the right of the mean, and the other half of the area lies to the left of the mean. More generally, we have the following important fact.

Symmetry of Probabilities for a Normal Distribution. If X has a normal distribution with mean μ and variance σ^2, and if x is any real number, then

$\text{Pr}(X\leq \mu - x) = \text{Pr}(X\geq \mu + x).$

However, the PDF of a normal distribution is not convenient for calculating probabilities directly. In fact, it can be shown that no closed form exists for the cumulative distribution function of a normal random variable.
Thus, we must rely on tables of values to calculate probabilities for events associated to a normal random variable. (The values in these tables are calculated using careful numerical techniques not covered in this course.) A particularly useful version of the normal distribution is the standard normal distribution, where the mean parameter is 0 and the variance parameter is 1.

The Standard Normal Distribution. The random variable Z has a standard normal distribution if its distribution is normal with mean 0 and variance 1. The PDF of Z is given by

$f(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}},\ -\infty < x < \infty.$

For a particular value x of X, the distance from x to the mean μ of X expressed in units of standard deviation σ is

$z = \frac{x-\mu}{\sigma}.$

Since we have subtracted off the mean (the centre of the distribution) and factored out the standard deviation (the horizontal spread), this new value z is not only a rescaled version of x, but is also a realization of a standard normal random variable Z. In this way, we can standardize any value from a generic normal distribution, transforming it into one from a standard normal distribution. Thus we reduce the problem of calculating probabilities for an event from a normal random variable to calculating probabilities for an event from a standard normal random variable.

Theorem: Standardizing a Normal Random Variable. Let X have a normal distribution with mean μ and variance σ^2. Then the new random variable

$Z = \frac{X - \mu}{\sigma}$

has a standard normal distribution.

Calculating Probabilities Using a Standard Normal Distribution

Suppose that the test scores for first-year integral calculus final exams are normally distributed with mean 70 and standard deviation 14. Given that Pr(Z ≤ 0.36) = 0.64 and Pr(Z ≤ 1.43) = 0.92 for a standard normal random variable Z, what percentage of final exam scores lie between 75 and 90?
If we let X denote the score of a randomly selected final exam, then we know that X has a normal distribution with parameters μ = 70 and σ = 14. To find the percentage of final exam scores that lie between 75 and 90, we need to use the information about the probabilities of a standard normal random variable. Thus we must standardize X using the theorem above. For our particular question, we wish to compute

$\text{Pr}(75 \leq X \leq 90).$

We proceed by standardizing the random variable X as well as the particular x values of interest. Thus, since X has mean 70 and standard deviation 14, we write

$\text{Pr}(75 \leq X \leq 90) = \text{Pr}\left(\frac{75 - 70}{14} \leq \frac{X - 70}{14} \leq \frac{90 - 70}{14}\right).$

Now we have standardized our normal random variable so that

$\frac{X - 70}{14} = Z,$

where Z ~ Normal(0,1). Simplifying the numerical expressions from above, we deduce that we must calculate

$\text{Pr}(0.36 \leq Z \leq 1.43).$

Now we can use the information we were given, namely that Pr(Z ≤ 0.36) = 0.64 and Pr(Z ≤ 1.43) = 0.92. Using these values, we find

\begin{align}
\text{Pr}(75 \leq X \leq 90) &= \text{Pr}(0.36 \leq Z \leq 1.43)\\
&= \text{Pr}(Z\leq 1.43) - \text{Pr}(Z\leq 0.36)\\
&= 0.92 - 0.64\\
&= 0.28.
\end{align}

Therefore the percentage of first-year integral calculus final exam scores between 75 and 90 is 28%. Now suppose we wish to find the percentage of final exam scores larger than 90, as well as the percentage of final exam scores less than 65. To find the percentage of final exam scores larger than 90, we use our knowledge about probabilities of complementary events:

\begin{align}
\text{Pr}(X > 90) &= 1 - \text{Pr}(X \leq 90)\\
&= 1 - \text{Pr}(Z\leq 1.43)\\
&= 1 - 0.92\\
&= 0.08.
\end{align}

Thus, we find that 8% of exam scores are larger than 90. To find the percentage of final exam scores less than 65, we must exploit the symmetry of the normal distribution.
Recall that our normal random variable X has mean 70. We are given information about the probability of a standard normal random variable assuming a value less than 0.36, which we have already seen corresponds to the probability of our normal random variable X assuming a value less than 75. Now notice that the x value 65 is the reflection of 75 through the mean. That is, both scores 65 and 75 are exactly 5 units from the mean of our random variable X. Thus we should take advantage of the symmetry property of X. Using the symmetry identity from the top of the page, we find that

\begin{align}
\text{Pr}(X < 65) &= \text{Pr}(X < 70 - 5)\\
&= \text{Pr}(X > 70 + 5)\\
&= 1 - \text{Pr}(X\leq 75)\\
&= 1 - \text{Pr}(Z\leq 0.36)\\
&= 1 - 0.64\\
&= 0.36.
\end{align}

Thus, we find that 36% of exam scores are smaller than 65.

source: http://wiki.ubc.ca/Science:Math105_Probability/Lesson_2_CRV/2.07b_The_Normal_Distribution
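The three table-based computations above can be reproduced with Python's standard-library `statistics.NormalDist`, which evaluates the normal CDF exactly rather than through the two-decimal table values used in the worked example:

```python
from statistics import NormalDist

X = NormalDist(mu=70, sigma=14)  # exam-score distribution from the example

p_between = X.cdf(90) - X.cdf(75)   # Pr(75 <= X <= 90)
p_above_90 = 1 - X.cdf(90)          # Pr(X > 90)
p_below_65 = X.cdf(65)              # Pr(X < 65)
print(round(p_between, 3), round(p_above_90, 3), round(p_below_65, 3))
```

The small differences from 0.28, 0.08, and 0.36 come from rounding the z-scores 5/14 and 20/14 to two decimals before consulting the table.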
{"url":"http://blogs.ubc.ca/math105/continuous-random-variables/the-normal-distribution/","timestamp":"2014-04-18T18:21:34Z","content_type":null,"content_length":"35660","record_id":"<urn:uuid:21fc54a4-26db-4e71-8307-a6ed3aeede84>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Bitcoin Research

Question: I have recently been assigned to advise a student on a senior thesis. She has taken linear algebra, introductory real analysis, and abstract algebra. Her interest is in cryptography. And she has a love of Bitcoin. The point of a senior thesis is to get a student to teach themselves a subject and learn to find and read mathematical papers. Original work that could be published would be nice, but is often untenable. My question is whether anyone knows of any research that is/has being/been done in cryptography related to Bitcoin. Thanks. [tags: teaching, cryptography]

Comments:
- Try to ask on crypto.stackexchange.com. – Zsbán Ambrus Sep 5 '12 at 16:57
- But first, look at all the existing questions in the bitcoin tag to ensure it's not a dupe: crypto.stackexchange.com/questions/tagged/bitcoin. – Zsbán Ambrus Sep 5 '12 at 16:58
- Hmm, turns out there's already a stackexchange site specifically about bitcoins, in beta stage: bitcoin.stackexchange.com – Zsbán Ambrus Sep 5 '12 at 17:05
- Bitcoin uses elliptic curves for digital signatures, you could start there. – Felipe Voloch Sep 5 '12 at 17:21
- If she had a love of Bitcoin in September of her senior year, I guess she got a nice graduation present because the value has gone up over 1000% since then. – Nate Eldredge Oct 29 '13 at 20:56

Accepted answer: Two papers with real-world monetary implications:

"Two Bitcoins at the Price of One? Double-Spending Attacks on Fast Payments in Bitcoin." Don't know if this is fixed in the current implementation.

"An Analysis of Anonymity in the Bitcoin System." From the paper: "At the time of theft, the stolen Bitcoins had a market value of approximately half a million U.S. dollars. We chose this case study to illustrate the potential risks to the anonymity of a user (the thief) who has good reason to remain anonymous."

Added Bitcoin-related news from popular media: MtGox declared bankruptcy last week, taking more than $US400m worth of Bitcoin with it. "Bitcoin bank Flexcoin pulls plug after cyber-robbers nick $610,000. Your money is gone. Kthxbye."

Second answer: There is a nice preprint server, widely used by cryptographic researchers: http://eprint.iacr.org. If you do a search, there are some papers talking more or less about Bitcoin. I guess "Decentralized anonymous credentials" is a good place for your student to begin. This is indeed a well-known area of research, and is the main point of bitcoins (with mining). You may also take a look at Google Scholar, http://scholar.google.com/scholar?hl=en&q=bitcoin&btnG=&lr= ; there are many references dealing with bitcoins, probably some mathematical ones of large interest.

Comment: I imagine the student has finished her senior thesis by now, but this answer may yet be helpful to others interested in bitcoins. – Gerry Myerson Oct 29 '13 at 22:36
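On the "elliptic curves for digital signatures" suggestion: Bitcoin's curve secp256k1 has the shape y^2 = x^3 + 7 over a 256-bit prime field. A toy sketch of the underlying group law (my own illustration, using a tiny modulus 17 instead of the real 256-bit prime) is a reasonable starting exercise for such a thesis:

```python
# Toy short-Weierstrass curve arithmetic over a small prime field.
# secp256k1 uses the same equation y^2 = x^3 + 7, but over a 256-bit prime;
# the modulus 17 here is purely illustrative.
P_MOD, A = 17, 0
O = None  # point at infinity (group identity)

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)  # Fermat inverse mod a prime

def add(p, q):
    if p is O: return q
    if q is O: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                      # P + (-P) = O
    if p == q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD   # tangent slope
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD          # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def mul(k, p):
    # Double-and-add scalar multiplication, the core of ECDSA key generation.
    r = O
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

G = (1, 5)  # on the curve: 5^2 = 25 ≡ 8 = 1^3 + 7 (mod 17)
print(mul(2, G), mul(3, G))
```

The group law is associative, so 2G + G must equal 3G, and every computed point must still satisfy the curve equation; both are easy sanity checks.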
{"url":"https://mathoverflow.net/questions/106452/bitcoin-research","timestamp":"2014-04-23T10:14:25Z","content_type":null,"content_length":"61902","record_id":"<urn:uuid:f99e7162-c326-424c-a896-277b22d9e5ad>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Predistorter Polishes PA Performance

This low-cost RF predistortion circuit helps overcome the nonlinear behavior of RF and microwave power amplifiers without sacrificing amplifier efficiency.

Ning Gaoli, Xie Yongjun, and Lei Zhenya

Power amplifiers in modern communications systems must deliver highly linear and efficient operation to properly handle multiple-carrier signals with complex modulation. Typically, a power amplifier (PA) should operate near saturation to achieve high efficiency, although this can also lead to nonlinearity. The nonlinearity generates spectral regrowth, which leads to adjacent-channel interference and violations of wireless standard out-of-band emission requirements. As a result, PAs for wireless communications must be designed with a careful tradeoff between linearity and efficiency.^1,2 Numerous linearization approaches have been applied to PAs^3-13 to help improve this classic tradeoff, including power backoff, feed-forward and feedback techniques, and predistortion methods, all differing in system architectures and benefits. Among the linearization techniques, analog predistortion is particularly popular for repeater systems, since they boost RF signals directly between mobile handsets and cellular base stations. Predistortion methods essentially introduce amplitude and phase distortion at the input of the PA that is equal but opposite to the distortion exhibited by the PA at its output port, effectively cancelling the distortion. For the purposes of this article, the terms predistorter, linearizer, and predistortion circuit all refer to the same thing. The single-diode-based linearizer presented here provides control through bias resistance, with improved performance compared to linearizers based on two anti-parallel diodes. These two-diode linearizers offer limited control since they are affected by only one variable, i.e., the bias voltage of the diodes.
The single-diode approach is also simpler and more effective than various other novel techniques for PA predistortion. For example, a linearizer developed with microprocessor control of various parameters features effective performance, but it is complex and large in size. A more flexible solution is generating a PA's required predistortion products with a mixer, applying those products to an attenuator and a phase shifter, then feeding the resulting predistortion products to the input of the nonlinear PA to neutralize its distortion products. This number of components, however, increases the costs and complexity of the predistortion circuitry and sacrifices efficiency.

The single-diode approach is extremely simple, connected with only a resistor and a capacitor. Compared to previous predistorters using a similar approach, neither additional modules (amplifiers, mixers, attenuators, phase shifters, couplers, etc.) nor direct-current (DC) bias are needed. The simple single-diode architecture results in a circuit that is extremely compact and cost effective, and provides excellent efficiency in addition to providing good PA linearity.

PAs generally operate under large-signal conditions, where they exhibit the nonlinear transfer characteristics that introduce distortion to their output signals. The distortion consists of amplitude-modulation-to-amplitude-modulation (AM-to-AM) and AM-to-phase-modulation (AM-to-PM) distortion. Figure 1 indicates that as the input signal increases, the magnitude of the gain is compressed, but the phase is also changed. A power (or Taylor) series expansion is often used to explain this phenomenon. However, such an expansion with real coefficients captures only AM-to-AM distortion. To consider AM-to-AM and AM-to-PM distortion simultaneously, the generalized power series is employed here. In the generalized power series, time delay is incorporated into the coefficient of each term, making the coefficients complex numbers.
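To make the complex-coefficient idea concrete, here is a small numerical sketch (not from the article; the coefficient values are invented for illustration) of a third-order generalized power series. For a single tone of amplitude A, the cubic term contributes (3/4)g3·A^2 to the fundamental, so a complex g3 produces both gain compression (AM-to-AM) and a phase shift (AM-to-PM):

```python
import cmath

# Illustrative complex coefficients (hypothetical values, not measured data):
g1 = 10.0 * cmath.exp(1j * 0.0)   # small-signal gain
g3 = -2.0 * cmath.exp(1j * 0.3)   # third-order term with its own phase

def fundamental_gain(A):
    # Complex gain seen by the fundamental for input A*cos(wt) through
    # v_out = g1*v_in + g3*v_in^3: the cubic folds (3/4)*g3*A^2 back in.
    return g1 + 0.75 * g3 * A ** 2

small, large = fundamental_gain(0.1), fundamental_gain(1.0)
print(abs(small), abs(large))                   # magnitude: gain compression
print(cmath.phase(small), cmath.phase(large))   # phase: AM-to-PM conversion
```

As the drive level A grows, the magnitude of the complex gain falls and its phase rotates, which is exactly the Figure 1 behavior the series is meant to model.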
The generalized power series applies to a PA with strongly nonlinear behavior. The generalized power series can be used to quantitatively analyze distortion in a nonlinear PA, showing how the distortion is produced and how to mitigate it. The transfer function of a PA operating in the nonlinear region can be expressed as

v_out = g_0 + g_1·v_in + g_2·v_in^2 + g_3·v_in^3 + …   (Eq. 1)

where v_out = the output signal of the PA; v_in = the input signal; and g_i (i = 0, 1, 2, …) are the complex coefficients that include magnitude and phase and are related to the specific circuit under analysis. In this case, the magnitude and phase are used to represent the AM-to-AM and AM-to-PM distortion effects, respectively. If the applied input is a single-tone signal, v_in = A·cos(Ωt), Eq. 1 can be rewritten as Eq. 2:

v_out = (g_0 + g_2A^2/2) + (g_1A + 3g_3A^3/4)cos(Ωt) + (g_2A^2/2)cos(2Ωt) + (g_3A^3/4)cos(3Ωt) + …   (Eq. 2)

This shows that, owing to the nonlinearity, the output signal contains a new DC offset and harmonic frequency products, in addition to the fundamental-frequency signal. The DC offset and harmonics take energy away from the desired signal, thus lowering the PA efficiency and causing output signal distortion. If the input is a two-tone signal, v_in = A·cos(Ω_1t) + A·cos(Ω_2t), and this signal representation is substituted into Eq. 1, it will yield a large number of output terms (Eq. 3), among them DC components, harmonics, and mixing products, ending with

… + (3g_3A^3/4)[cos((2Ω_1 - Ω_2)t) + cos((2Ω_2 - Ω_1)t)]   (Eq. 3, last terms)

In this case, there is not only a new DC offset and additional harmonics but also intermodulation distortion (IMD) products. Ordinarily, the coefficients of the high-order terms decrease rapidly with increasing order, so those terms can be ignored. The IMD products produced by the even-order terms are far enough removed from the original two-tone signal in frequency that they do not contribute to in-band signal distortion (as with the harmonics), so only the IMD products generated by the odd-order terms and coincident with the original two-tone signal need be considered. Among these products, the third-order IMD product, at the frequencies 2Ω_1 - Ω_2 and 2Ω_2 - Ω_1, is the most significant for two reasons: (1) it falls in the desired frequency band, making it very close in frequency to the carrier and rendering it unable to be removed by filtering; and (2) it is much larger in magnitude than the other in-band IMD products. The third-order IMD product is clearly the main nonlinear source for PAs and has the most deleterious effect on linearity. As a result, the next focus of this study will be on PA third-order IMD products.

As the last term of Eq. 3 indicates, the third-order IMD product stems mainly from the third-order term in the transfer function of the PA, since it is related only to the coefficient g_3. This means that to linearize a nonlinear PA, one should mainly eliminate the third-order term. In predistortion techniques, predistorters are cascaded with PAs. They are nonlinear modules that employ nonlinear devices such as diodes and field-effect transistors (FETs), with a transfer function that can be expressed by Eq. 4:

v_o1 = k_0 + k_1·v_in + k_2·v_in^2 + k_3·v_in^3 + …   (Eq. 4)

where v_o1 = the output signal of the predistorter; v_in = the input signal added to the predistorter; and k_i (i = 0, 1, 2, …) are the coefficients that include magnitude and phase to represent AM-to-AM and AM-to-PM effects, respectively, and are related to the specific predistorter circuit for a specific PA. The transmission characteristic of the PA can then be rewritten as Eq. 5:

v_out = g_0 + g_1·v_o1 + g_2·v_o1^2 + g_3·v_o1^3 + …   (Eq. 5)

where each term has a similar meaning to the terms in Eq. 4, with the predistorter output v_o1 now serving as the PA input. Substituting Eq. 4 into Eq. 5 yields the transfer characteristic of the system formed by the combination of the predistorter and the PA. By expanding and arranging the expression (taking k_0 = 0), the coefficient G_3 of the third-order term is shown in Eq. 6:

G_3 = g_1k_3 + 2g_2k_1k_2 + g_3k_1^3   (Eq. 6)

Clearly, G_3 should be minimized to produce a linear system. Setting Eq. 6 to zero results in Eq. 7:

g_1k_3 + 2g_2k_1k_2 + g_3k_1^3 = 0   (Eq. 7)

To suppress the third-order nonlinear distortion at the output, the coefficients of the designed predistortion circuit must meet this condition. The behavior of the predistorter may be adjusted by means of parameters k_1 and k_2. Noting that Eq. 7 is a complex equation (including magnitude and phase), to ensure that a solution always exists, the predistortion circuit must contain at least two independent variables for adjustment, enabling flexible and effective adjustment of the magnitude and phase of the nonlinearity.

The problem of achieving PA linearization has been studied directly, from the point of view of the nonlinear distortion products. It can also be visualized by means of Fig. 2, where it is shown indirectly from the reference point of the fundamental-frequency signal. Figure 3 shows the relationship between the "complex gain" (magnitude and phase) of the fundamental frequency and the input signal. Figure 3(a) depicts the relationship solely for the predistorter, while Fig. 3(b) shows the total performance of the system for the predistorter and the PA. As the input to the PA increases, the nonlinear distortion products rob more energy from the desired signal, and the magnitude of the "complex gain" diminishes. This condition is known as gain compression. At the same time, the phase of the signal is changed as well. Figure 3(a) shows the ideal performance of the predistorter, i.e., with gain expansion and phase lagging, the opposite of what is produced at the output of the PA. When combined, the whole system exhibits linear behavior, where the magnitude and phase of the "complex gain" remain constant, as shown in Fig. 3(c). Based on this analysis, a simple and effective predistortion circuit is proposed in Fig. 4. This simple circuit consists of a diode in series with a resistor and in parallel with a capacitor. The values of resistor R and capacitor C serve as the two independent variables needed for adjustment of magnitude and phase.
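The third-order cancellation condition can be checked numerically. The sketch below (illustrative complex coefficient values, not taken from the measured amplifier) composes the predistorter series with the PA series by straight polynomial multiplication and confirms that choosing k_3 = -(2g_2k_1k_2 + g_3k_1^3)/g_1 drives the composite third-order coefficient G_3 to zero:

```python
def polymul(a, b, deg=3):
    # Multiply two truncated power series given as lists of complex coefficients.
    out = [0j] * (deg + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= deg:
                out[i + j] += ai * bj
    return out

def compose(g, k, deg=3):
    # Coefficients of g(k(x)) truncated at x^deg, assuming k[0] == 0.
    result = [g[0], 0j, 0j, 0j]
    power = [1 + 0j, 0j, 0j, 0j]          # running k(x)^n, starting at n = 0
    for n in range(1, deg + 1):
        power = polymul(power, k, deg)     # k(x)^n
        for m in range(deg + 1):
            result[m] += g[n] * power[m]
    return result

# Hypothetical PA coefficients g and predistorter coefficients k1, k2:
g = [0j, 10 + 0j, 0.5 - 0.2j, -1.2 + 0.4j]
k1, k2 = 1 + 0j, -0.05 + 0.02j
# Solve the cancellation condition for k3:
k3 = -(2 * g[2] * k1 * k2 + g[3] * k1 ** 3) / g[1]
G = compose(g, [0j, k1, k2, k3])
print(abs(G[3]))  # ~0: the composite third-order term is cancelled
```

The independent polynomial composition reproduces the composite third-order coefficient g_1k_3 + 2g_2k_1k_2 + g_3k_1^3, so the cancellation is not an artifact of how the condition was derived.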
Based on the distortion characteristics of the PA, either of the parameters can therefore be altered to achieve the desired predistortion characteristics. The circulator connects the predistorter and the PA (Port 2) to be linearized, and is found to improve the return loss at the input (Port 1) effectively as well. The diode complies with the voltage-current characteristic shown in Eq. 8:

i_D = I_S[exp(αv_D) - 1]   (Eq. 8)

where i_D = the current through the diode; v_D = the voltage across the diode; I_S = the reverse saturation current; and α = a parameter related to the diode's semiconductor process. For the diode presented here, I_S and α have both been determined. Suppose that v_D is a single-tone signal, represented by Eq. 9:

v_D = V_L·cos(Ωt)   (Eq. 9)

Then i_D can be expanded in the Fourier series shown in Eq. 10, where I_n, the nth Fourier coefficient, can be expressed as shown by Eq. 11. Restricting attention to the fundamental frequency, the impedance that the diode presents to the fundamental-frequency signal is represented by Eq. 12, where Z_d = the diode impedance and J_1(αV_L) = a first-order Bessel function of the first kind. By applying the large-argument approximation when αV_L is large enough, Z_d can be simplified to the approximation of Eq. 13. The reflection coefficient Γ can then be obtained by means of an easy derivation, as shown in Eq. 14, where R and C are two independent variables that make it possible to control the AM-to-AM and AM-to-PM performance of the predistorter flexibly and effectively by varying their values to give the desired output.

To confirm the validity of the proposed predistorter, it can be connected to a PA for study, so as to determine the amount of improvement in linearity that it can yield. The PA can be a typical unit used in a wireless repeater or base-station application. The chosen amplifier operates at 1.9 GHz with output power at 1-dB compression of +43 dBm, and with linearity and efficiency performance levels as shown in Figs. 5 and 6, respectively. Two-tone testing is an almost universally accepted method of assessing amplifier linearity and can illustrate both amplitude and phase distortion present in an amplifier. For evaluating the experimental single-diode predistorter with this repeater amplifier, a two-tone signal centered at 1.9 GHz with 1-MHz tone spacing was applied to the input of the PA to verify linearity; test results are shown in Fig. 5. It is clear that at +43 dBm output power, the third-order IMD coefficients (IMD3) are about -30 and -33 dBc, corresponding to the lower and upper bands, respectively. Figure 6 shows the dependence of the power-added efficiency (PAE) on the output power. PAE is used in this evaluation since it takes into account both amplification capability and power consumption. The PAE is about 35% at +43 dBm output power. This is somewhat low, since the PA is operating under Class A bias conditions for optimum linearity performance.

Now that the performance of the PA alone is well understood, the single-diode predistorter can be added to the amplifier to gauge its improvement in the PA's linearity. The values of R and C were judiciously selected to provide suitable inverse predistortion characteristics so as to minimize the nonlinear distortion at the output of the PA. Similar tests were performed on the amplifier with the predistorter as previously conducted without it, and the results are depicted in Fig. 7 and Fig. 8. The linearized PA suppresses IMD3 products to about -39.3 and -43.5 dBc, corresponding to the lower and upper bands, respectively, at +43-dBm output power (Fig. 7). The analysis of the results clearly shows that an impressive improvement in linearity of nearly 10 dB was achieved owing to the predistorter. In addition, there is an increase in PAE to a certain extent, specifically 36.8%.
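As a quick sanity check on the reported numbers (a sketch, not part of the article's measurements), the quoted IMD3 levels translate to roughly a 9-to-10 dB improvement per band, and +43 dBm corresponds to about 20 W:

```python
def dbm_to_watts(p_dbm):
    # Convert power in dBm to watts: P[W] = 10 ** ((P[dBm] - 30) / 10).
    return 10 ** ((p_dbm - 30) / 10)

p_out_w = dbm_to_watts(43)              # about 19.95 W at the 1-dB compression point
lower_improvement = -30.0 - (-39.3)     # 9.3 dB improvement, lower band
upper_improvement = -33.0 - (-43.5)     # 10.5 dB improvement, upper band
print(round(p_out_w, 2), round(lower_improvement, 1), round(upper_improvement, 1))
```

Averaging the two bands gives just under 10 dB, consistent with the "nearly 10 dB" claim above.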
This may be explained by the suppression of undesired distortion products (such as intermodulation products and harmonics), which would otherwise rob power from the desired fundamental-frequency signals. In summary, the theory detailed here and the tests performed on the simple circuit show the validity of the proposed predistortion circuit. Its simple architecture allows it to be realized cost-effectively and compactly. In addition to providing improved PA linearity, it also provides a boost in amplifier PAE.

The authors would like to acknowledge the hardware and software support from the National Key Laboratory of Antennas and Microwave Technology. The authors are also thankful to their colleagues at the Microwave Research Institute for their sincere and generous help.
If f(6) = 3, what is the equation of the tangent line to the graph y = f(x) at x = 6? I forgot how to find the slope.

The notation f(6) = 3 indicates that the point of tangency on the curve y = f(x) is (6, 3), and the slope of the line at the point of tangency is m = y'. So, take the derivative of y and then plug in x = 6. Since the expression for f(x) is not given, there is no specific value for y'; hence, the slope of the tangent line is f'(6). Now that the slope and point of tangency are known, apply the point-slope form to get the equation of the tangent line. Therefore, the equation of the tangent line is `y=xf'(6)-6f'(6)+3`.
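The argument can be checked numerically for any concrete choice of f with f(6) = 3. The helper below is hypothetical (the problem gives no formula for f): it estimates f'(6) by a central difference and forms the point-slope line, reproducing y = f'(6)x - 6f'(6) + 3.

```python
def tangent_line(f, x0, y0, h=1e-6):
    """Return (m, b) for the tangent y = m*x + b at the point (x0, y0 = f(x0)).

    The slope m = f'(x0) is estimated by a central difference; the intercept
    follows from the point-slope form: b = y0 - m*x0.
    """
    m = (f(x0 + h) - f(x0 - h)) / (2 * h)
    b = y0 - m * x0
    return m, b

# Example: f(x) = x**2 / 12 satisfies f(6) = 3 and f'(6) = 1,
# so the tangent line there is y = x - 3.
```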
1.6. Rotating-Molecule Hamiltonian The Hamiltonian operator corresponding to the rotational part of the problem in diatomic molecules must now be examined in some detail [3] (pp. 6-16), [6] (pp. 273-284). Since there can be no rotational angular momentum of a diatomic molecule about its internuclear axis (the z axis), the third component of the rotational angular momentum is zero, and hence absent from the rotational Hamiltonian, which can be written as

H[rot] = B(R[x]^2 + R[y]^2)
       = B[(J[x] - L[x] - S[x])^2 + (J[y] - L[y] - S[y])^2],   (1.10)

where the rotational angular momentum is expressed in the second line of (1.10) as the total angular momentum (J) minus the electronic orbital and spin angular momenta (L and S). For later manipulations it is convenient to rewrite (1.10) in terms of raising and lowering operators, where J[±] = J[x] ± iJ[y], L[±] = L[x] ± iL[y], and S[±] = S[x] ± iS[y]. The extremely thorough student will find that the total angular momentum operator J for linear molecules is somewhat peculiar, since its molecule-fixed components do not obey angular-momentum-type commutation relations. Furthermore, similar remarks apply to the operators occurring in (1.10). Nevertheless, it can be shown that correct results are obtained by ignoring the peculiarities associated with J in linear molecules and by treating J like the angular momentum operator defined for nonlinear molecules in [6]. The complete Hamiltonian is that of eq (1.1). It is the matrix of the Hamiltonian (1.1) which will ultimately be diagonalized to obtain molecular energy levels and molecular wave functions. Loosely speaking, one can see in eq (1.10) the origin of the various entries in column 2 of table 2.
Forgetting for a moment the absence of R[z] in (1.10), and remembering that the eigenvalue associated with the sum of the squares of the components of an angular momentum operator has the form ℏ^2 X(X + 1), we note that: (i) if the operators L and S in the rotational Hamiltonian (1.10) can both be ignored, then one might expect rotational energies to be given by B J(J + 1), since J is the quantum number associated with J^2; (ii) if the operator L in (1.10) can be ignored, but the operator S cannot be, then one might expect rotational energies to be given by B N(N + 1), since N is the quantum number associated with (J - S)^2; (iii) if neither L nor S in (1.10) can be ignored, then one might expect rotational energies to be given by B R(R + 1), since R is the quantum number associated with (J - L - S)^2. Since the operators L and S in (1.10) affect the course of the rotational energy levels only through the four cross terms J[x]L[x], J[y]L[y], J[x]S[x], and J[y]S[y], and since the selection rules for nonvanishing matrix elements of L[x], L[y] and of S[x], S[y] are ΔΛ = ±1 and ΔΣ = ±1, respectively, we see that the effects of L and/or S in (1.10) can be ignored when the separation between states of the nonrotating molecule satisfying the selection rules ΔΛ = ±1 and/or ΔΣ = ±1 is large compared to B J.
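The B·X(X + 1) patterns noted above are easy to tabulate. The sketch below uses an arbitrary illustrative value of B; X stands for J, N, or R depending on which of L and S can be ignored. Note the characteristic spacing of 2B, 4B, 6B, ... between successive levels.

```python
def term_values(B, X_max):
    """Rotational term values B*X*(X+1) for X = 0..X_max.

    X plays the role of J, N, or R according to the coupling case
    discussed above; B is the rotational constant (illustrative units).
    """
    return [B * X * (X + 1) for X in range(X_max + 1)]
```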
What if Lyrics, Aaliyah
What if by Aaliyah

What if.. every guy I saw
What if.. sittin down at the bar
What if.. I told him to give me a call
What if.. what if, what if, what if
What if.. every guy in the place
What if.. was all up in my face
What if.. what would you say
What if.. what if, what if, what if

I hate a jealous man, one who doesn't understand
That I'm attractive so of course men wanna take my hand
And lead me off round the corner maybe to their place
But when they speak you get mad, jumpin all up in my face
But on the other hand when women come and speak to you
You kiss and hug them like it's something that you let me do
You're only jealous cause you know the ish you put me through
I might turn right around and do that ish right back to you

What if.. every guy I saw
What if.. sittin down at the bar
What if.. I told him to give me a call
What if.. what if, what if, what if
What if.. every guy in the place
What if.. was all up in my face
What if.. what would you say
What if.. what if, what if, what if

I hate a lying dude, one who doesn't know the rules
If you gon cheat burn the receipt from the hotel room
But instead you're up in my face saying you were at friends
But they all call askin me where the hell you been

Why they keep treatin us this way
I guess it's a little game we play
We'll burn you (oh), we'll cut you (oh)
We'll kill you (oh)
We don't have to take it no more

What if.. every guy I saw
What if.. sittin down at the bar
What if.. I told him to give me a call
What if.. what if, what if, what if
What if.. every guy in the place
What if.. was all up in my face
What if.. what would you say
What if.. what if, what if, what if

Why they keep treatin us this way
I guess it's a little game we play
We'll burn you (oh), we'll cut you (oh)
We'll kill you (oh)
We don't have to take it no more

What if.. every guy I saw
What if.. sittin down at the bar
What if.. I told him to give me a call
What if.. what if, what if, what if
What if.. every guy in the place
What if.. was all up in my face
What if.. what would you say
What if.. what if, what if, what if

What if.. every guy I saw
What if.. sittin down at the bar
What if.. I told him to give me a call
What if.. what if, what if, what if
What if.. every guy in the place
What if.. was all up in my face
What if.. what would you say
What if.. what if, what if, what if

What if.. every guy I saw
What if.. sittin down at the bar
What if.. I told him to give me a call
What if.. what if, what if, what if
What if.. every guy in the place
What if.. was all up in my face
What if.. what would you say
What if.. what if, what if, what if

What if.. every guy I saw
What if.. sittin down at the bar
What if.. I told him to give me a call
What if.. what if, what if, what if
What if.. every guy in the place
What if.. was all up in my face
What if.. what would you say
What if.. what if, what if, what if
Famous mathematicians with background in arts/humanities/law etc

I've been motivated by this question about starting to study mathematics at an unusually advanced age. It would be nice to know examples of people who successfully switched from a very different field into mathematics. Which well-known mathematicians, past or present, started out as law/art/humanities/business students, but later turned to mathematics? This excludes mathematicians who switched from the sciences or engineering to mathematics, such as Raoul Bott.

Tags: soft-question, ho.history-overview

- If the question stays open, it should also be CW. Some kind of motivation for the question other than commendable curiosity would also be good – Yemon Choi Sep 28 '11 at 0:17
- And don't say 'some1'. This is a site for professional people. – David Roberts Sep 28 '11 at 0:23
- Dear Francois, "At the time of Fermat and Cayley" encompasses more than two centuries, as well as two quite different countries. It's probably possible to make a more refined analysis. (Not that this would necessarily bear on the present question.) Best wishes, Matt – Emerton Sep 28 '11 at 2:07
- By the way, this question looks salvageable to me. Editing it to fix the problems (or simply voting it down) is better than permanently closing it. – Douglas Zare Sep 28 '11 at 2:19
- The question is better but I still think it is in danger of being "ahistorical", in that it presupposes a certain context of education and training which probably didn't apply to Famous People Who Will Be/Have Been Mentioned In The Answers (e.g. Fermat). Francois already made this point well – Yemon Choi Sep 28 '11 at 19:41

Closed as off topic by Andres Caicedo, Felipe Voloch, quid, Yemon Choi, and Will Jagy on Sep 30 '11 at 4:19.

21 Answers

Persi Diaconis left home at 14 to work with Dai Vernon as a magician. Trying to protect himself from being cheated in dishonest casinos, he was led to Feller's textbook on probability theory, which he couldn't understand. He started studying calculus at the City College of New York at the age of 24. Some more details may be found in this article.

I remember reading an interview with Vladimir Arnold where he tells the following anecdote about Hassler Whitney. I don't have the interview in front of me, so some of the details may not be quite correct, but if memory serves, the story goes like this. Whitney used to study music in America and at some point decided to spend a year in Germany. He arrived in Göttingen, where it turned out that he had to take a course outside his main subject of study, which was music. He asked which one of the courses was the most difficult one. It turned out the most difficult subject was quantum mechanics, so Whitney enrolled on that course. After the first lecture he came to see the professor and said:

-- I was one of the best students in Yale in my year, Herr Professor; how come I didn't understand a single word of the lecture?
-- Well, you see, there are some prerequisites for this course. You have to know calculus and linear algebra and ....
-- Are there any books where I can read all this up?

It took Whitney a couple of weeks to work through the books the professor told him to read. In a month Whitney was able to follow the course, and he decided to switch to mathematics at the end of the semester. Arnold tells this story to illustrate the dangers of early specialization.

- I heard about Whitney's starting out in music from the late Stanislaw Lojasiewicz (who also once contemplated a career in music), but could not find any confirmation in writing. – Margaret Friedland Sep 28 '11 at 21:18
- Margaret -- again if memory serves, Arnold heard the story from Whitney himself; he regretted he hadn't asked who the quantum mechanics lecturer was. – algori Sep 29 '11 at 21:58
- Apparently, Łojasiewicz also heard it from Whitney, but I have only a recollection of him relating the story once at a lunch table. It is good that you were able to point to a more reliable reference; the story deserves to be known. I wish I had asked Łojasiewicz about his own background in music. – Margaret Friedland Sep 30 '11 at 13:46

Edward Witten (Fields Medalist). Witten attended the Park School of Baltimore (class of '68), and went on to receive his Bachelor of Arts with a major in history and minor in linguistics from Brandeis University in 1971. He planned to become a political journalist, and published articles in The New Republic and The Nation. In 1968 Witten published an article in The Nation arguing that the New Left had no strategy. He worked briefly for George McGovern, the Democratic presidential nominee in 1972. McGovern lost the election in a landslide to Richard Nixon. Look at http://www.youtube.com/watch?v=f1BcyxQCnoE&feature=related

Hermann Grassmann studied theology, classical languages, philosophy, and literature in university, but not mathematics or physics.

- He had notable achievements in linguistics, too: he translated the Rigveda (from Sanskrit to German) and observed a pattern in phonology that became known as Grassmann's law. – Margaret Friedland Sep 28 '11 at 18:59
- Grassmann also compiled a dictionary, which people use up to this day to read the Rigveda (or at least were using 15 or so years ago). – algori Sep 28 '11 at 22:21

My colleague Tadashi Tokieda studied classics at university and switched to maths after being inspired by a book on the subject. I can't remember the exact details, but they are remarkable, as is he. I was fortunate enough to see him lecture this summer and I was stunned by his enthusiasm.

- Great example! – Dedalus Sep 29 '11 at 7:37

Paul Halmos was a graduate student in philosophy, decided that subject was too hard, and became a graduate student in mathematics.

- Halmos did not decide anything in this respect; rather, he failed his philosophy exams (and was properly mortified by the event). – Did Oct 26 '11 at 8:51

Serge Lang started out as a graduate student of philosophy at Princeton, but he switched to math because he had "finished it", it being philosophy. Here's the relevant part from his biography: "After returning to the United States, Lang went to Princeton University with the intention of studying for a doctorate in philosophy. After a year in the philosophy department, he changed to mathematics and Emil Artin became his thesis advisor."

Karl Marx. So you didn't know he was a mathematician? A book of his collected mathematical papers is in our math library, which is more than most mathematicians can claim. (They are mostly attempts to understand the definition of a derivative, if I recall correctly.) They were quite popular during the Cultural Revolution, Chinese mathematicians presumably figuring that the study of dialectical calculus was better than a one-way trip to one of Mao's holiday resorts.

- From the current draft of the original question: "Which well-known mathematicians, past or present, started out as law/art/humanities/business students, but later turned to mathematics?" – Yemon Choi Sep 28 '11 at 20:45
- Let us not forget the famous mathematician Napoleon, having a theorem named after him. – quid Sep 28 '11 at 21:04
- Another example in reverse: Edmund Husserl (he studied with Weierstrass and wrote about foundations of mathematics and logic). – Margaret Friedland Sep 28 '11 at 21:15
- Engels writing to Marx: "Yesterday I found the courage at last to study your mathematical manuscripts even without reference books, and was pleased to find that I did not need them. I compliment you on your work. The thing is as clear as daylight, so that we cannot wonder enough at the way the mathematicians insist on mystifying it. But this comes with the one-sided way these gentlemen think. To put dy/dx = 0/0, firmly and point-blank, does not enter their skulls. . . You need not fear that any mathematician has preceded you here." – Steven Landsburg Sep 28 '11 at 22:13
- Here's a good summary: pballew.blogspot.com/2011/01/mathematics-of-karl-marx.html – Alex R. Sep 28 '11 at 22:18

Per Enflo is (sometimes) a concert pianist; see this section of his Wikipedia page, also his web page.

- As re Eugenia Cheng above: does this count as someone with a background in "non-maths" turning to maths? – Yemon Choi Sep 29 '11 at 21:14
- @Yemon: I think so, but I'm no expert on music so I could be wrong. In the autobiography on his web site he discusses his teenage years, and it seems to me that he was considered a prodigy and accomplished at this time (e.g. had played as soloist with the Royal Opera Orchestra of Sweden and many other concerts, and studied with masters and so on). He also mentions that "I did little or no systematic study of mathematics in these years." He also suggests there that he's never really given piano away, but there is a clear point where mathematics (later) enters the picture. – Philip Brooker Sep 30 '11 at 4:32
- In any case he started to study math at the university right after high school at age 18 (like 'everybody else'). – quid Sep 30 '11 at 9:36

I love this one: Harald Bohr was such a good soccer player that he was a member of the Danish national team at the 1908 Olympiads. Two years later he got his PhD (apparently there was a large crowd at the event, a quite unusual occurrence for the math department) and went on to become a famous mathematician.

- Where is there any switch in fields? He was studying maths since 1904, starting aged 17/18. In addition, he had a hobby at which he was really good. – quid Sep 28 '11 at 21:09
- I agree this does not qualify as an answer, if not marginally. On the other hand, I do not understand your persistence in proving that. Several other answers are even worse! (what about Fermat or Leibniz? shall we talk about Plato?) The main reason I posted this is that I love this fact snippet, and in particular the scene of a huge hooligan crowd gathering at the discussion of a PhD thesis in math :) – Piero D'Ancona Sep 29 '11 at 9:00
- Yes, it is a fun fact; and in some sense more interesting than Leibniz, Fermat, Euler,... in my opinion also more than the music semi-examples, which are not all that surprising or unique. And both Yemon and I 'complained' in general about the too old examples (in my present comment this is only implicit but I made a more explicit one that I deleted so as not to clutter the general comment thread too much). So, I/we are not singularly 'complaining' about your answer. Perhaps we can agree that yours is off-topic in a fun/interesting/original way. – quid Sep 29 '11 at 16:24
- Just to echo quid: for all the good it does, which is little, I have downvoted Richard Borcherds's egregiously off-topic answer. But hey, Fields Medallist! Upvote! Fun story! Upvote! [sigh] – Yemon Choi Sep 29 '11 at 20:29
- Ok. Pointless discussion I guess; the original question was not too serious to begin with – Piero D'Ancona Sep 29 '11 at 22:08

Marcel-Paul "Marco" Schützenberger studied medicine before he obtained his second doctorate, in mathematics. He also worked in formal linguistics with Noam Chomsky and Stephen Cole.

Noam Elkies is a musician and composer.

- True (and I'd say that "composer" implies "musician"), but -- though I have no direct recollection -- I have it on good authority that this is not an example of "starting out" in arts and then "turning to science", because here fascination with numbers came first, albeit by only a few months. – Noam D. Elkies Sep 28 '11 at 0:53
- Sorry, when I answered, I only paid attention to the title of this thread, and you certainly count as a mathematician with "background in" the arts. – David Corwin Sep 5 '12 at 18:15
- @DavidCorwin: and a member of MO! – J. H. S. Feb 18 at 2:15

Christiaan Huygens (1629–1695). A musical prodigy who by age 10 read fluently in four clefs and played the organ, two years before his first study of mathematical sciences (though admittedly 12 is not an "unusually advanced age" for that either...). This according to a display case at Leiden University which I saw at the ANTS-IV conference in 2000, and which reproduced some of his harmony exercises! Huygens kept up his interest in music, later in life publishing a treatise on a tuning of 31 equal notes to the octave, an idea that apparently still has some currency in the Dutch music scene. According to Huygens' Wikipedia entry, the 20th volume of his 22-volume Collected works is titled Musique et mathématique. Musique. Mathématiques de 1666 à 1695.

Fermat was a lawyer.

Cayley was a lawyer by profession for 14 years.

Daniel Bernoulli. I am not sure if he counts, because he knew he wanted to study mathematics; yet his formal education was in business and medicine. According to Wikipedia, "Around schooling age, his father, Johann Bernoulli, encouraged him to study business, there being poor rewards awaiting a mathematician. However, Daniel refused, because he wanted to study mathematics. He later gave in to his father's wish and studied business. His father then asked him to study medicine, and Daniel agreed under the condition that his father would teach him mathematics privately, which they continued for some time."

Leibniz studied philosophy and law. He worked as a diplomat.

Henri Poincaré was a mining engineer. His first job was at the Corps des Mines as an inspector of mines. He participated in the rescue of miners trapped after an explosion, himself descending the shaft into the mine to investigate the cause of the explosion! Check this link for details.

- Engineering is excluded. – quid Sep 28 '11 at 21:05
- Sorry, didn't realize that. – user5706 Sep 28 '11 at 21:38
- I've never thought of mining engineering being related to maths. Possibly the world's most famous mining engineer, Major-General Sir Richard Hannay, KCB, OBE, DSO, Legion of Honour, never struck me as the maths-ey type... – user6503 Sep 30 '11 at 10:32
- @Alan, are you informed about the French higher education system (at Poincaré's time)? It is not overly surprising that he was in the Corps des Mines. And most high-school math teachers are not research math types either. Shall we include all French mathematicians that attended the/an ENS, because clearly they originally all wanted to become high-school teachers? And those at the École polytechnique are actually soldiers. – quid Sep 30 '11 at 11:18

I once read that the higher-category theorist Eugenia Cheng is also occasionally a concert pianist (she accompanies lieder singers, if I remember well).

- Well, the information is correct, but it didn't involve a change of field, I'm pretty certain. – Todd Trimble Sep 29 '11 at 18:55
- Nope, no change of field - she does both. She told me once that if she changed careers she'd become a pastry chef - she's darn good at that too. – John Baez Sep 30 '11 at 7:20

Frank Ryan isn't really a famous mathematician, but he was at one point famous and he did manage to get a Ph.D. in mathematics. See the Sports Illustrated article on Frank Ryan.

- That's a lovely story, but in the interests of consistency I guess I must downvote – Yemon Choi Sep 30 '11 at 3:28
- Perhaps back then (1965) attitudes in mainstream US culture towards science were different... – Yemon Choi Sep 30 '11 at 3:30

According to an interview of his, Kazuya Kato started off studying aerospace engineering (or something similar) before becoming interested in math.

- "This excludes mathematicians who switched from the sciences or engineering to mathematics." If not for this provision, one could name even more people, e.g. Solomon Lefschetz. – Margaret Friedland Sep 28 '11 at 20:14
- Could you give a link to the interview? An interview with K. Kato sounds quite interesting – sisn Oct 12 '11 at 7:45
0 eigenvalue for a symmetric tridiagonal matrix

Let $T\in \mathbb{R}^{n\times n}$ be a symmetric tridiagonal matrix whose off-diagonal entries are all equal to -1. The diagonal entries are all positive, $a_i>0$, $i=\overline{1,n}$, and there exist $j$ and $k$, $j\neq k$, such that $a_j=a_k\leq 1$; $a_j$ and $a_k$ are the smallest diagonal entries. I'm interested in what supplemental conditions such a matrix must satisfy in order to have its smallest eigenvalue equal to 0.

Tags: linear-algebra, sp.spectral-theory, matrices

- could you give some motivation/background? – Vladimir Dotsenko Feb 25 '12 at 13:26
- The matrix $T$ is the linearization of a tridiagonal cooperative dynamical system around an equilibrium point. – Andreea Feb 25 '12 at 14:07

Answer (accepted):

To simplify things a little, I describe conditions under which the smallest eigenvalue is strictly positive. These can be adjusted to get equality to zero. Necessary and sufficient conditions for positive definiteness of the tridiagonal matrix in question are described below.

Definition (Chain Sequence). A sequence $\lbrace x_k \rbrace_{k > 0}$ is a chain sequence if there exists another sequence $\lbrace y_k \rbrace_{k\ge 0}$ such that
\begin{equation*}
x_k = y_k(1-y_{k-1}),
\end{equation*}
where $y_0 \in [0,1)$ and $y_k \in (0,1)$ for $k > 0$.

By the Wall-Wetzel Theorem, your tridiagonal matrix is positive definite if and only if
\begin{equation*}
\left\lbrace \frac{1}{a_ka_{k+1}} \right\rbrace_{k=1}^{n-1}
\end{equation*}
is a chain sequence.

Example. In particular, if the entries of the matrix satisfy
\begin{equation*}
0 < \frac{1}{a_ka_{k+1}} < \frac{1}{4\cos^2\left(\frac{\pi}{n+1}\right)},\quad k=1,\ldots,n-1,
\end{equation*}
then it is positive definite.

For additional information and details about this material, please see:

1. M. Andelic and C. M. da Fonseca, Sufficient conditions for positive definiteness of tridiagonal matrices revisited (2010).

- Thank you very much for pointing out this approach and the paper. – Andreea Feb 25 '12 at 19:02
- You are welcome; also note that the Example gives a sufficient (not necessary) condition, while the theorem cited is an IFF. – Suvrit Feb 25 '12 at 19:40
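The sufficient condition in the Example is easy to exercise numerically (the function names and test values below are illustrative): build the tridiagonal matrix, test the bound, and compare against the actual eigenvalues.

```python
import numpy as np

def tridiag(a):
    """Symmetric tridiagonal matrix with diagonal a and off-diagonal entries -1."""
    n = len(a)
    off = -np.ones(n - 1)
    return np.diag(a) + np.diag(off, 1) + np.diag(off, -1)

def satisfies_sufficient_condition(a):
    """Check 0 < 1/(a_k a_{k+1}) < 1/(4 cos^2(pi/(n+1))) for k = 1..n-1."""
    n = len(a)
    bound = 1.0 / (4.0 * np.cos(np.pi / (n + 1)) ** 2)
    return all(0.0 < 1.0 / (a[k] * a[k + 1]) < bound for k in range(n - 1))
```

For n = 5 the bound is 1/3. With a constant diagonal a_k = 2.1 the condition holds and the matrix is indeed positive definite; with a_k = 1 the condition fails, and here the smallest eigenvalue 1 - 2cos(π/6) is in fact negative (though failure of a merely sufficient condition would not by itself imply this).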
An Evolutionary Video Assignment Optimization Technique for VOD System in Heterogeneous Environment International Journal of Digital Multimedia Broadcasting Volume 2010 (2010), Article ID 645049, 13 pages Research Article An Evolutionary Video Assignment Optimization Technique for VOD System in Heterogeneous Environment Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong Received 8 January 2010; Accepted 21 June 2010 Academic Editor: Markus Kampmann Copyright © 2010 King-Man Ho et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We investigate the video assignment problem of a hierarchical Video-on-Demand (VOD) system in heterogeneous environments where different quality levels of videos can be encoded using either replication or layering. In such systems, videos are delivered to clients either through a proxy server or video broadcast/unicast channels. The objective of our work is to determine the appropriate coding strategy as well as the suitable delivery mechanism for a specific quality level of a video such that the overall system blocking probability is minimized. In order to find a near-optimal solution for such a complex video assignment problem, an evolutionary approach based on genetic algorithm (GA) is proposed. From the results, it is shown that the system performance can be significantly enhanced by efficiently coupling the various techniques. 1. Introduction With the explosive growth of the Internet, the demand for various multimedia applications is rapidly increasing in recent years. Among different multimedia applications, Video-on-Demand (VOD) is playing a very important role. With VOD, customers can choose their desired video at arbitrary time they wish via public communication networks. 
Nevertheless, the VOD system is required to store several hundred videos as well as serve thousands of customers simultaneously. In order to build a cost-effective and scalable system, various designs have been proposed in terms of system architecture [1], bandwidth allocation [2], and transmission schemes [3]. Among these techniques, data broadcasting and proxy caching are two commonly used approaches. To improve the scalability of a VOD system using data broadcasting, the broadcast capability of a network is exploited so that video contents are distributed over a number of video channels shared among clients. Staggered broadcasting [4] was the simplest early way to support broadcast services. Subsequently, a number of efficient broadcasting protocols [5–8] were proposed. Apart from data broadcasting, hierarchical architectures [3] have also been explored to reduce the resource requirements. To relieve the workload of the central server and reduce service latencies, an intermediate device called a proxy sits between the central server and the clients. In such an architecture, a portion of each video is cached in the proxy. A request generated by a client is served by the proxy if it caches the requested portion of the video. Meanwhile, the central server delivers the remaining portion of the video to the client directly. Existing caching mechanisms can be mainly classified into four categories [9]: sliding-interval caching [10], prefix caching [11], segment caching [12], and rate-split caching [13]. A content distribution network (CDN) is an extension of proxy caching in which a number of CDN servers are deployed at the edge of the network core. Unlike a proxy, which only stores a portion of the video, a full copy of the video is replicated in each CDN server. The clients then request the video from their closest CDN servers directly.
This architecture significantly reduces the workload of the central server and provides a better quality of service (QoS) to the clients. Nevertheless, most previous work has mainly focused on providing VoD services in a homogeneous environment. In practice, clients can connect to the network, say the Internet, with different communication technologies such as modem, ADSL, and wireless links. Their downstream rates vary from 56 kbps to 100 Mbps or even higher. To meet different clients’ bandwidth requirements, the videos are encoded into different quality levels by the replication or layering approach. Replication [14] provides multiple versions of the video at different data rates, one of which is retrieved according to the video quality requested by the client. On the other hand, layering [15, 16] encodes the video into a number of layers, and the client needs to retrieve several video layers concurrently to meet his/her requirements. To exploit such coding schemes, Kangasharju et al. [16] considered delivering layered video through a proxy cache and developed a model for the layered video caching problem to determine which videos and which layers should be cached in order to maximize the revenue from the streaming services. The effectiveness of replication and layering for video transmission in a heterogeneous environment has been investigated in [17–19]. Kim and Ammar [17] compared the replication and layering approaches and their results showed that replication is better. However, they only focused on time-dependent streaming of a single video from the central server to the clients. Later, Hartanto et al. [18] studied the system performance with a proxy cache and compared replication with layering in a hierarchical framework. It was found that layering is more appropriate when a proxy server is used. In [19], the authors extended this work by coupling the proxy cache with video broadcast technology.
It was observed that layering brings a further improvement in such a framework. In addition, it was found that the proxy size, the efficiency of the broadcasting scheme, the bandwidth reserved for broadcasting, and the layering overhead all have significant impacts on the system performance. In general, the performance of layering is superior to that of replication. However, from the results in [19], replication performs better in some situations. For instance, replication should be used when the proxy size is zero. Thus, in this paper, we not only use both coding schemes to support different qualities of video streams but also explore a hierarchical VoD system using proxy caching coupled with video broadcasting to further improve the system performance in a heterogeneous environment. Unlike [19], in the proposed framework, the video streams with different quality levels can be encoded by replication or layering. Each video stream is then either cached in the proxy server or delivered over the broadcast/unicast channels. The objective of this work is to determine the appropriate coding strategy as well as the efficient transmission mechanism for a specific quality level of a video such that the overall system blocking probability is minimized. In order to find a near-optimal solution for such a complex video assignment problem, an evolutionary approach based on a genetic algorithm (GA) is proposed. GA has been successfully demonstrated as a powerful optimization tool for solving various real-world complex problems [20] and has been deployed in some VoD applications, such as those mentioned in [21, 22]. The main contribution of this paper is that we explore the benefits of complementary coding schemes for a hierarchical VoD system. To determine the appropriate encoding schemes and the efficient transmission strategies, a mathematical model is formally stated to represent this complex video assignment problem.
Then, we present an evolutionary approach based on GA to solve the proposed system model. This paper is organized as follows. The proposed system architecture and the system model are first described in Section 2. In Section 3, the formulation of the problem is derived and the conditions to minimize the system blocking probability are discussed. The optimal video assignment strategy using GA, including the fitness function and chromosome representation for the problem, is then outlined and explained in Section 4. In Section 5, the experimental results are presented. Finally, some concluding remarks are given in Section 6. 2. System Model In this section, we describe the system architecture for video streaming services. Before we go into the details, the notations used in this paper are defined and listed in Table 1. Figure 1 shows a two-tier VoD system which consists of one central server and several proxy servers. The central server, which has a large storage space to store M videos for clients, is connected to the proxy servers that are physically located closer to the clients. The clients can connect to the network with different communication technologies such as modem, ADSL, or wireless links, and their downstream rates vary from 56 kbps to 100 Mbps. To cater for the heterogeneous requirements, video m is encoded into different quality levels of video streams which are delivered to the clients according to their capacity constraints. If the clients have a low-bandwidth connection such as 56 kbps, they will receive the videos encoded at a low bit rate. On the other hand, the high-quality video will be streamed to the customers having broadband access capability. In the proposed architecture, the lth quality of video m can be encoded by the replication or layering approach.
Note that a layered-encoded video incurs around 20%–30% overhead compared with a replicated video for the same quality level [17, 18, 23] and thus it requires more transmission bandwidth. Let be the overhead of the layered-encoded video where . Then, the relationship of the streaming rate of between these two approaches is given by . It is assumed that the proxy servers are independent and a large group of heterogeneous clients is served by a single proxy server. The proxy server has a limited storage space of K bits to cache some of the popular videos for users’ repeating requests in order to minimize the transmission cost. Let denote a proxy cache map matrix, where is set to 1 if a copy of is stored in the proxy server. It is set to 0, otherwise. As mentioned, the videos can be layer-encoded or replicated with different quality levels and stored in the proxy server. For layering, the base layer can be decoded independently while the enhancement layers should be decoded cumulatively. That means, layer k should be decoded along with layer 1 to layer To find a feasible cache assignment solution, we define a coding approach instance as the vector e, where indicates the highest quality level of video m encoded by the layering approach reconstructed correctly. In addition, to satisfy the storage space constraint in the proxy server, we have where . The first term and the second term calculate the storage requirement in the proxy server for the layered video and the replicated video for video , Upon receiving the user’s request, the proxy server will acknowledge the request if the requested item has been already cached. Otherwise, it will bypass the request to the higher level. Because the storage capacity of the proxy server is limited, some videos cannot be cached and eventually should be delivered from the central server. It is clearly seen that the system is not scalable as the bandwidth requirement will linearly increase with the arrival rate. 
Because of recent deployment of IP multicast delivery [24], to further enhance the system performance, broadcasting capability in such a hierarchical architecture is also exploited. Apart from storing the popular videos in the proxy server, some videos will also be broadcast to the clients over the backbone network. Thus, it is assumed that a generic network infrastructure that supports broadcasting operations is used to implement the broadcasting protocols. Since our focus is on the performance of the whole architecture, the broadcasting techniques are not our major concern. In general, any efficient protocols, such as those mentioned in [4–8], can be applied to the system framework. Let be the number of channels required for the protocol x to broadcast a video such that the start-up delay is insensitive to the clients. Given the bandwidth reserved for broadcasting , we define as a broadcast map matrix to indicate which quality level of a video should be sent over the broadcast channels. is set to 1 if a copy of is broadcast over the broadcast channels. Otherwise, it is set to 0. Therefore, the bandwidth required for broadcasting is equal to and . We can then construct a cache-broadcast map matrix , where to indicate whether is cached in the proxy server or delivered over the broadcast channels. is equal to 0 if is simply transmitted over unicast channel. 3. Problem Formulation In this section, the optimization problem of the proposed system is formally defined. It is reported in [25] that the interarrival time of client requests in multimedia streaming applications are exponentially distributed. Thus, the client requests follows a Poisson process with a rate of . Let and be the popularity of video m and the probability of client requesting th quality of video, respectively, where and . As the request arrival processes for different videos with different quality levels are mutually independent, the request rate of is given by . 
It is assumed that the video popularity follows Zipf’s distribution [26] with the skew parameter . Then where . Without loss of generality, it is further assumed that the service time of each unicast channel handled by the central server is exponentially distributed with mean by considering the varying length of different videos. As mentioned in Section 2, some of the requests can be satisfied by the proxy server and the broadcast channels but the central server still opens the dedicated channels to serve the clients due to the small proxy storage capacity and the limited broadcasting bandwidth. Equation (1) calculates the requests that go up to the central server for the dedicated streams: Since multiple qualities of video streams are delivered at different data rates from the central server to the clients, the average streaming rate of the dedicated channels can thus be found by where is the complement of . The first term calculates the average bandwidth of the dedicated channels required for the layered-encoded videos while the second term computes that for the replicated videos. To evaluate the performance of the central server, denote B as the available bandwidth between the central and proxy servers. Therefore, on average, the central server can support N virtual channels concurrently for the clients, where . 
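The request model just described can be sketched numerically. In the snippet below, the Zipf parameterization $f_m \propto 1/m^{1-\theta}$ (so that $\theta = 1$ gives uniform popularity, consistent with the skew discussion in Section 5) is an assumption, since the paper's formula is not reproduced above; the Erlang B recursion is the standard one for an M/G/N/N loss system, and the example load is hypothetical:

```python
import numpy as np

def zipf_popularity(M, theta):
    """Video popularity f_m proportional to 1/m^(1-theta), normalized over
    the M videos. The exact parameterization of the skew parameter is an
    assumption: theta = 0 is a pure Zipf law and theta = 1 is uniform."""
    ranks = np.arange(1, M + 1)
    w = ranks ** (-(1.0 - theta))
    return w / w.sum()

def erlang_b(N, rho):
    """Erlang B blocking probability for N channels and offered load
    rho = lambda/mu, via the numerically stable recursion
    B(0) = 1, B(k) = rho*B(k-1) / (k + rho*B(k-1))."""
    b = 1.0
    for k in range(1, N + 1):
        b = rho * b / (k + rho * b)
    return b

f = zipf_popularity(50, 0.75)    # 50 videos, skew 0.75 (Section 5 values)
print(f[0] > f[-1])              # the most popular video draws the most requests
print(erlang_b(100, 80.0))       # hypothetical N = 100 channels, 80 Erlangs offered
```

The recursion avoids the factorials in the closed-form Erlang B expression, so it remains stable even for large N.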
According to Erlang’s loss formula [27], the system can thus be modeled as an M/G/N/N queueing system whose blocking probability is given by the Erlang B formula. If the bandwidth from the proxy server to the clients is large enough that no requests will be blocked there, the overall blocking probability of the system is that of the central server. Considering the coding approach (replication and layering) and the transmission strategy (caching and broadcasting), the optimization problem (OPT) can thus be formally stated as follows: Equation (5) indicates the constraint that the total size of the cached videos is less than or equal to the proxy size, and (6) shows that the broadcasting bandwidth is not larger than the bandwidth reserved for broadcasting. 4. Evolution Optimization In this section, we exploit a GA-based approach to obtain a near-optimal solution for the OPT problem in Section 3. We first briefly review the terminology and operations of GA. Then, to solve the problem, the chromosome representation, the population size, and the fitness function for the OPT problem are discussed. 4.1. Genetic Algorithm A genetic algorithm (GA) is a population-based generic search method inspired by the survival-of-the-fittest principle [28–30] derived from the mechanism of natural evolution, where the stronger individuals are more likely to prevail in a competing world. A potential solution to the problem, known as a chromosome, is constructed from a finite number of genes represented by a finite-length string over some finite alphabet (e.g., in binary form). A pool of chromosomes forms a population, which is randomly generated at the beginning of the process. In each iteration, GA performs a multidirectional stochastic search through a genetic evolution process by applying a number of genetic operators to the individuals of the current population in order to produce individuals for the next generation.
In general, a genetic operator known as crossover is used to combine two or more individuals from the pool to produce new individuals in the next generation. To introduce a genetic variation into the individual, mutation operator is applied to alter the value of each gene (i.e., allele) in an individual randomly with a small probability. Based on the fitness of the individuals in the current population, the individuals with a higher degree of fitness will be selected as a member of the population in the next generation through the selection process of GA. After a certain generation, it is expected that the best chromosome can be obtained which is reasonably close to the optimal solution. Figure 2 shows the general procedures of GA. The detailed working principle and implementation of GA can be found in [28–30]. GA has been successfully demonstrated as a powerful optimization tool for solving various real-world complex problems [20] and has been deployed in some applications, such as those mentioned in [21, 22]. 4.2. Chromosome Representation To represent the coding strategy and the caching mechanism of , 3 vectors are defined. Let vector and is set to 1 if is delivered over broadcast channel as mentioned. Then, vector , that is, , is defined (it is reminded that ). In addition, let be the binary form of e[i] for video i (note that is MSB while is LSB (MSB means most significant bit, LSB means least significant bit)). Since the highest value of e[i] is l, the number of bits required for representing e[j] is given by for all . Therefore, the chromosome can be represented in the form of binary string as depicted in Figure 3 and the allele space of each gene is . The total number of bits required for the chromosome can then be expressed by and thus the searching space includes possible solutions. 4.3. Population Size Population size is a critical factor affecting the performance of GA. 
Basically, a large population size requires a high computational cost while a small population size increases the chance of premature convergence. Other than randomly choosing initial populations, Reeves [31] proposed the principle of minimum population sizes for q-ary alphabets to decide an appropriate value. The author suggested a preferable property of an initial population such that “every possible point in the search space should be reachable from the initial population by crossover only.” This property can be satisfied only if there is at least one instance of every allele at each locus in the whole population of chromosomes [31]. Given the population size Z, the length of the chromosome G, and the cardinality of the gene at each locus, the probability that every allele is present at each locus in the initial population () can be computed by where is the Stirling number of the second kind. Equation (7) provides a guideline to choose a suitable Z that is large enough to ensure a high probability in the initial population. For example, to achieve , the minimum value of Z should be 21 given , and . 4.4. Fitness Function In GA, the fitness function is used to evaluate the goodness of a chromosome for the problem. The fitness function F of a chromosome is closely related to the output of the objective function (i.e., OPT) produced by this chromosome. Note that can be either cached in the proxy server or delivered over the broadcast channels if is set. However, the proxy capacity required for caching and the bandwidth required for broadcasting may exceed their limits, in which case the constraints in OPT are violated. A penalty scheme is thus applied to those chromosomes violating these constraints. Hence, we transform OPT into an unconstrained form to produce the fitness function: where is the penalty function. To penalize the worst offenders more heavily, we square the violation of the constraints.
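The sizing arguments of Sections 4.2–4.4 can be made concrete with a short sketch. The chromosome-length formula assumes one cache/broadcast indicator bit per (video, quality level) plus ⌈log2(l+1)⌉ bits per e[i] (the elided formula is assumed to be of this form), Reeves' per-locus probability uses the Stirling number of the second kind as cited, and the penalty weight in the fitness sketch is an assumption:

```python
from math import comb, factorial, ceil, log2

def stirling2(n, k):
    """Stirling number of the second kind, via inclusion-exclusion."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def p_all_alleles(Z, G, q=2):
    """Reeves' condition: probability that a random initial population of
    size Z contains every one of the q alleles at each of the G loci,
    P = (q! * S(Z, q) / q**Z) ** G."""
    return (factorial(q) * stirling2(Z, q) / q ** Z) ** G

def fitness(blocking, storage_excess, bandwidth_excess, w=1.0):
    """Penalized fitness (lower is better): the objective plus squared
    constraint violations; the weight w is an assumption."""
    return blocking + w * (max(0.0, storage_excess) ** 2
                           + max(0.0, bandwidth_excess) ** 2)

# Chromosome length for the assumed encoding: M*L cache/broadcast bits plus
# ceil(log2(L + 1)) bits for each e[i] (M = 50 videos, L = 7 levels, Section 5).
M, L = 50, 7
G = M * L + M * ceil(log2(L + 1))
print(G)                       # 500 genes
print(p_all_alleles(21, G))    # Z = 21 already gives a high probability
```

For binary genes (q = 2) the per-locus term reduces to 1 - 2**(1 - Z), so the required population size grows only logarithmically with the chromosome length.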
5. Experimental Results In our experiment, GAlib [32], which is a set of C++ genetic algorithm objects for performing optimization, is used to solve the OPT problem. It is assumed that there are 50 videos in the system, each of which is fixed at 90 minutes long and is encoded into seven quality levels. The client requests are modeled as a Poisson arrival process and the video popularity follows Zipf’s distribution with the skew parameter . Assume that the streaming rate of the base layer of all videos is Kbps and that all layers have the same rate [33], that is, . As the backbone bandwidth is fixed, a proportion of the bandwidth, , is reserved for video broadcasting, that is, . The results in [8] showed that fewer than 10 broadcast channels are sufficient to provide delay-insensitive VoD services. Hence, H^x is set to 10 for the following experiments. As reported in [23], the overhead incurred by layered-encoded videos varies from 0 to 30%. To emulate the heterogeneity of network environments, two request patterns, namely “SCENARIO(A)” and “SCENARIO(B)”, are defined in our experiment [19]. “SCENARIO(A)” models the less heterogeneous environment where the system only serves two types of clients (e.g., modem and Ethernet). “SCENARIO(B)” focuses on the highly heterogeneous environment in which all the qualities of a video are requested uniformly, that is, , for all . Table 2 summarizes the parameters used in the experiment. We first evaluate the impact of various arrival rates on the blocking probability and compare the proposed system with the system using either the layering (S_L) or replication (S_R) approach (i.e., the system only uses layering or replication [19]). In Figure 4, as expected, the blocking probability increases with the arrival rate under various configurations.
It can be seen that the system with both layering and replication (S_MIX) performs better than S_L and S_R in both scenarios. It can be found in Figure 4(a) that S_MIX achieves a significant improvement in the less heterogeneous environment. When the arrival rate is 0.1 req/s, the blocking probability of S_MIX is reduced to 0.018 (S_R is 0.048 and S_L is 0.143). Note that S_MIX can still obtain up to about a 20% reduction in blocking probability if the arrival rate is increased to 1 req/s. In SCENARIO(B), it can be observed that S_MIX achieves an improvement of up to 8%, as shown in Figure 4(b). To investigate how the system is improved by the S_MIX approach, we first look at how the coding and cache strategy for different quality levels of videos in S_MIX is organized by GA as compared with that in S_L and S_R. Table 3 depicts the coding scheme and proxy-broadcast map for different system configurations. In the table, the coding and cache strategy for a specific quality level of video is represented by “()”, where “” and “ = coding scheme (R = Replication, L = Layering)”. “—” indicates that the corresponding quality level is not required. We only show the configuration of the first 25 videos, as the configurations of the remaining videos are the same as that of the 25th. In Table 3(a), it can be observed that all quality levels of the videos should be encoded by layering in S_L, and only two quality levels are needed if replication is used in S_R. In S_MIX, it can be seen that the quality levels are encoded by the layering approach only if the upper quality levels of the corresponding video are cached in the proxy or delivered over the broadcast channels. On the other hand, replication is used when the video is not cached or broadcast. As layering is suitable for caching and replication favors end-to-end transmission, S_MIX takes the benefits of both approaches.
Unlike S_R and S_L, S_MIX takes both the coding strategy and the bandwidth usage into account when assigning videos to the cache. It is found that S_MIX allocates the cache space mostly to the 2nd quality level of the layered-encoded videos. Although the 1st quality level of the corresponding videos must still be transmitted over the dedicated channel when users request the 2nd quality level, the server bandwidth requirement of S_MIX is still less than that of S_R because part of the video data can be obtained from the proxy server or the broadcast channels directly. Similar observations can be found in “SCENARIO(B)”. Only cached or broadcast videos are layered-encoded and the others use replication, so that more videos can be served by the proxy server or the broadcast channels compared to S_R, and less server bandwidth is required compared to S_L. To take a closer look at the effectiveness of S_MIX, Figures 5 and 6 show the blocking probability of the systems when these parameters are varied. We first investigate the impact of the proxy size. Figure 5 illustrates the system blocking probability as the proxy size is changed. Increasing the proxy size results in fewer video requests reaching the central server and thus the blocking probabilities decrease. It can be seen that S_MIX performs better than S_L and S_R in both request patterns, especially at a low arrival rate (i.e., 0.3 req/s) and a large proxy size. In Figure 5(a), S_MIX shows a significant improvement, whereas S_L and S_R only improve linearly as the proxy size is changed. When K is set to 10%, S_MIX obtains up to about a 65% reduction in blocking probability. When the arrival rate is increased to 0.8 req/s, the system can still achieve a 50% improvement compared to S_L. When the proxy size is increased, more layered-encoded videos with lower quality levels are assigned to the proxy server in S_MIX.
Thus, more videos with less popularity can also be served by the proxy server directly. A similar trend can be observed in “SCENARIO(B)”, as shown in Figure 5(b). The results show that the blocking probability of S_MIX can be up to 10% less than that of S_L. Figure 6 shows the blocking probability when the proportion of bandwidth reserved for broadcasting is changed. It can be seen that the system performance is greatly improved in S_MIX compared to S_L and S_R when the reserved proportion is increased, especially in the less heterogeneous network environment. Although the system blocking probability can be further reduced when the reserved proportion is increased, the system will suffer from the problem that the remaining bandwidth is not sufficient for the less popular videos. The blocking probability is plotted against the skew parameter in Figure 7. As expected, the blocking probability increases with the skew parameter. The performance of S_MIX is superior to that of S_L and S_R even if the popularity of all quality levels of all the videos is uniformly distributed. In “SCENARIO(A)”, the blocking probability of S_MIX is reduced to 0.5 (S_R is 0.696 and S_L is 0.686). For a high arrival rate, S_MIX can still achieve up to about an 18% reduction in the blocking probability. 6. Conclusion In this paper, we investigate a feasible enhancement to a hierarchical VoD system using proxy caching coupled with video broadcasting and appropriate coding schemes in a heterogeneous environment. In the proposed framework, different quality levels of a video can be encoded by either the replication or the layering approach. Each of them is then either cached in the proxy server or delivered over broadcast or unicast channels. The objective of this work is to determine the appropriate coding strategy as well as the suitable delivery mechanism for a specific quality level of a video such that the overall system blocking probability is minimized.
To solve this complex problem, an evolutionary approach based on a genetic algorithm (GA) is used to find a near-optimal solution for this difficult video assignment problem. From the results, it can be seen that the system performance can be significantly enhanced by efficiently coupling the various techniques. In this paper, we focus on videos coded with MPEG-2 with different coding layers. Recently, the new scalable video coding (SVC) extension of the H.264/AVC standard [34], which provides network-friendly scalability at the bit-stream level, has been proposed. We will investigate the performance of the system with this coding technique in our framework in the future. 1. F. Thouin and M. Coates, “Video-on-demand networks: design approaches and future challenges,” IEEE Network, vol. 21, no. 2, pp. 42–48, 2007. 2. A. Dan, P. Shahabuddin, D. Sitaram, and D. Towsley, “Channel allocation under batching and VCR control in Video-on-Demand systems,” Journal of Parallel and Distributed Computing, vol. 30, no. 2, pp. 168–179, 1995. 3. K. A. Hua, M. A. Tantaoui, and W. Tavanapong, “Video delivery technologies for large-scale deployment of multimedia applications,” Proceedings of the IEEE, vol. 92, no. 9, pp. 1439–1451, 2004. 4. J. W. Wong, “Broadcast delivery,” Proceedings of the IEEE, vol. 76, no. 12, pp. 1566–1577, 1988. 5. K. A. Hua and S. Sheu, “Skyscraper broadcasting: a new broadcasting scheme for metropolitan video-on-demand systems,” in Proceedings of the Conference on Communications Architectures, Protocols and Applications (SIGCOMM '97), pp. 89–100, 1997. 6. L.-S. Juhn and L.-M. Tseng, “Harmonic broadcasting for video-on-demand service,” IEEE Transactions on Broadcasting, vol. 43, no. 3, pp. 268–271, 1997.
7. W. C. Liu and J. Y. B. Lee, “Constrained consonant broadcasting—a generalized periodic broadcasting scheme for large scale video streaming,” in Proceedings of the IEEE International Conference on Multimedia & Expo, Baltimore, Md, USA, July 2003. 8. E. M. Yan and T. Kameda, “An efficient VoD broadcasting scheme with user bandwidth limit,” in Proceedings of the SPIE/ACM Conference on Multimedia Computing and Networking, vol. 5019, pp. 200–208, Santa Clara, Calif, USA, 2003. 9. J. Liu and J. Xu, “Proxy caching for media streaming over the internet,” IEEE Communications Magazine, vol. 42, no. 8, pp. 88–94, 2004. 10. R. Tewari, H. M. Vin, A. Dan, and D. Sitaram, “Resource-based caching for web servers,” in Proceedings of the Multimedia Computing and Networking (MMCN '98), pp. 191–204, San Jose, Calif, USA, January 1998. 11. S. Sen, J. Rexford, and D. Towsley, “Proxy prefix caching for multimedia streams,” in Proceedings of the 18th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '99), pp. 1310–1319, New York, NY, USA, March 1999. 12. S. Chen, B. Shen, S. Wee, and X. Zhang, “Designs of high quality streaming proxy systems,” in Proceedings of the IEEE Conference on Computer Communications (INFOCOM '04), pp. 1512–1521, Hong Kong, March 2004. 13. Z.-L. Zhang, Y. Wang, D. H. C. Du, and D. Su, “Video staging: a proxy-server-based approach to end-to-end video delivery over wide-area networks,” IEEE/ACM Transactions on Networking, vol. 8, no. 4, pp. 429–442, 2000. 14. T. Jiang, M. H. Ammar, and E. W. Zegura, “Inter-receiver fairness: a novel performance measure for multicast ABR sessions,” in Proceedings of the ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, vol. 26, pp. 202–211, Madison, Wis, USA, June 1998.
View at Scopus 15. R. Rejaie and J. Kangasharju, “Mocha: a quality adaptive multimedia proxy cache for Internet streaming,” in Proceedings of the 11th IEEE International Workshop on Network and Operating System Support for Digital Audio and Video, pp. 3–10, January 2001. 16. J. Kangasharju, F. Hartanto, M. Reisslein, and K. W. Ross, “Distributing layered encoded video through caches,” IEEE Transactions on Computers, vol. 51, no. 6, pp. 622–636, 2002. View at Publisher · View at Google Scholar · View at Scopus 17. T. Kim and M. H. Ammar, “A comparison of layering and stream replication video multicast schemes,” in Proceedings of the IEEE International Workshop on Network and Operating System Support for Digital Audio and Video, pp. 63–72, New York, NY, USA, 2001. 18. F. Hartanto, J. Kangasharju, M. Reisslein, and K. W. Ross, “Caching video objects: layers vs. versions,” in Proceedings of IEEE International Conference on Multimedia and Expo (ICME '02), vol. 2, pp. 45–48, Lausanne, Switzerland, August 2002. 19. K.-M. Ho, W.-F. Poon, and K.-T. Lo, “Performance study of large-scale video streaming services in highly heterogeneous environment,” IEEE Transactions on Broadcasting, vol. 53, no. 4, pp. 763–773, 2007. View at Publisher · View at Google Scholar · View at Scopus 20. J. H. Holland, Adaptation in Natural and Artificial Systems, The MIT Press, Cambridge, Mass, USA, 1975. 21. K.-S. Tang, K.-T. Ko, S. Chan, and E. W. M. Wong, “Optimal file placement in VOD system using genetic algorithm,” IEEE Transactions on Industrial Electronics, vol. 48, no. 5, pp. 891–897, 2001. View at Publisher · View at Google Scholar 22. W. K. S. Tang, E. W. M. Wong, S. Chan, and K.-T. Ko, “Optimal video placement scheme for batching VOD services,” IEEE Transactions on Broadcasting, vol. 50, no. 1, pp. 16–25, 2004. View at Publisher · View at Google Scholar 23. J. I. Kimura, F. A. Tobagi, J. M. Pulido, and P. J. 
Emstad, “Perceived quality and bandwidth characterization of layered MPEG-2 video encoding,” in International Symposium Voice, Video and Data Communications, vol. 3845 of Proceedings of SPIE, pp. 308–319, Boston, Mass, USA, 1999. 24. A. Ganjam and H. Zhang, “Internet multicast video delivery,” Proceedings of the IEEE, vol. 93, no. 1, pp. 159–170, 2005. View at Publisher · View at Google Scholar 25. C. Costa, I. Cunha, A. Borges, C. Ramos, M. Rocha, J. Almeida, and B. Ribeiro-Neto, “Analyzing client interactivity in streaming media,” in Proceedings of the 13th International World Wide Web Conference Proceedings (WWW '04), pp. 534–543, May 2004. 26. G. Zipf, Human Behavior and the Principle of Least Effort, Addison-Wesley, Reading, Mass, USA, 1949. 27. J. Medhi, Stochastic Process, Wiley InterScience, New York, NY, USA, 2nd edition, 1994. 28. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Program, Springer, Berlin, Germany, 3rd edition, 1996. 29. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, London, UK, 1989. 30. K. F. Man, K. S. Tang, and S. Kwong, Genetic Algorithm: Concepts and Designs, Springer, London, UK, 1999. 31. C. R. Reeves, “Using genetic algorithms with small populations,” in Proceedings of the 5th International Conference on Genetic Algorithms (ICGA '93), pp. 92–99, 1993. 32. GAlib, http://lancet.mit.edu/ga/. 33. R. Rejaie, M. Handley, and D. Estrin, “Layered quality adaptation for Internet video streaming,” IEEE Journal on Selected Areas in Communications, vol. 18, no. 12, pp. 2530–2543, 2000. View at Publisher · View at Google Scholar · View at Scopus 34. H. Schwarz, D. Marpe, and T. Wiegand, “Overview of the scalable video coding extension of the H.264/AVC standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 9, pp. 1103–1120, 2007. View at Publisher · View at Google Scholar · View at Scopus
Dr. Math Presents More Geometry: Learning Geometry is Easy! Just Ask Dr. Math

ISBN: 978-0-471-69710-7
176 pages
January 2005, Jossey-Bass

You, too, can understand geometry. Just ask Dr. Math!

Are things starting to get tougher in geometry class? Don't panic. Dr. Math, the popular online math resource, is here to help you figure out even the trickiest of your geometry problems. Students just like you have been turning to Dr. Math for years asking questions about math problems, and the math doctors at The Math Forum have helped them find the answers with lots of clear explanations and helpful hints. Now, with Dr. Math Presents More Geometry, you'll learn just what it takes to succeed in this subject. You'll find the answers to dozens of real questions from students in a typical geometry class. You'll also find plenty of hints and shortcuts for using coordinate geometry, finding angle relationships, and working with circles. Pretty soon, everything from the Pythagorean theorem to logic and proofs will make more sense. Plus, you'll get plenty of tips for working with all kinds of real-life problems. You won't find a better explanation of high school geometry anywhere!

PART 1. POINTS, LINES, PLANES, ANGLES, AND THEIR RELATIONSHIPS.
1. Angle relationships and perpendicular and parallel lines.
2. Proving lines parallel.
3. The parallel postulate.
4. Coordinates and distance.
Resources on the Web.

PART 2. LOGIC AND PROOF.
1. Introduction to logic.
2. Direct proof.
3. Indirect proof.
Resources on the Web.

PART 3. TRIANGLES: PROPERTIES, CONGRUENCE, AND SIMILARITY.
1. Triangle Inequality Property.
2. Centers of triangles.
3. Isosceles and equilateral triangles.
4. Congruence in triangles: SSS, SAS, ASA, and SSA.
5. Similarity in triangles.
6. Congruence proofs.
Resources on the Web.

PART 4.
1. Properties of Polygons.
2. Properties of Quadrilaterals.
3. Area and Perimeter of Quadrilaterals.
Resources on the Web.

PART 5. CIRCLES AND THEIR PARTS.
1. Tangents.
2. Arcs and Angles.
3. Chords.
Resources on the Web.

THE MATH FORUM @ Drexel (www.mathforum.org) is an award-winning Web site and one of the most popular online math resources for students and teachers. The Math Forum @ Drexel offers answers to all kinds of math questions, prepared by a team of math experts. It also keeps archives of previous questions and answers, hosts online communities, and posts several "problems of the week."
North New Hyde Park, NY Math Tutor

Find a North New Hyde Park, NY Math Tutor

...As you can see by my ratings and reviews, I am a very experienced, patient and passionate tutor. I am very familiar with the new Common Core standards that students are expected to demonstrate. In one or two sessions, I am able to assess each student's needs and tailor a lesson plan to ensure th...
29 Subjects: including algebra 1, algebra 2, English, SAT math

...Of course, a big part of physics is math, and I am experienced and well qualified to tutor math from elementary school up through multivariable calculus and linear algebra, and I have on occasion tutored people in business math and even criminology, when requested. I studied linear algebra at Brow...
18 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel

...Real Experience - I have tutored over 300 students in all areas of math including ACT/SAT math, algebra, geometry, pre-calculus, Algebra Regents, and more. I specialize in SAT/ACT Math. I teach students how to look at problems, how to break them down, which methods, strategies, and techniques to apply, and how to derive the quickest solution.
30 Subjects: including algebra 1, algebra 2, calculus, ACT Math

...I have experience tutoring in a broad subject range, from Algebra through college-level Calculus. I recently passed and am proficient in the material on both Exams P/1 and FM/2. I am able to tutor for the Praxis for Mathematics Content Knowledge. I have passed this test myself, getting only one question incorrect.
21 Subjects: including prealgebra, differential equations, linear algebra, logic

...Students I have taught range in age from five years old to adult. As a computer science major I was required to take many math classes. Precalculus was one of them.
6 Subjects: including algebra 1, algebra 2, geometry, prealgebra
Dania Trigonometry Tutor

Find a Dania Trigonometry Tutor

...However, my first three years in College were spent studying the Humanities, before transferring to Engineering. These classes gave me extensive experience in all of the necessary skills for effective communication: Spelling, Grammar, Usage, Paper Writing and Reading. In my College Humanities s...
14 Subjects: including trigonometry, reading, physics, geometry

...Rolling a die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6. All students interested in being considered for the National Merit List should make the PSAT a must. The new configuration of this preliminary standardized test is also an excellent means of familiarizing students with the SAT, ...
24 Subjects: including trigonometry, calculus, algebra 1, algebra 2

...My University B.A. degree from the University of California is in Computational Mathematics, and I had to take upper-division courses in discrete math. Discrete math is the study of mathematical structures and functions that are "discrete" instead of continuous. My University B.A. degree from the Un...
48 Subjects: including trigonometry, chemistry, reading, calculus

...I have studied many of the great players' styles, but Dan Harrington's in particular. My strongest suit is cash game play, but I also have a grasp on deep stack tournament play. I'm by no means an expert, but I do know more than the average bear.
36 Subjects: including trigonometry, reading, chemistry, English

...I was an elementary school teacher for years. I have experience in all subjects, including math, reading, social studies and science, and I know how to make learning fun. I earned my Master's degree at one of the most prestigious schools of education.
27 Subjects: including trigonometry, chemistry, reading, GED
A maximal set which is not complete - Trans. Amer. Math. Soc

Abstract. Π⁰₁ classes are important to the logical analysis of many parts of mathematics. The Π⁰₁ classes form a lattice. As with the lattice of computably enumerable sets, it is natural to explore the relationship between this lattice and the Turing degrees. We focus on an analog of maximality, or more precisely, hyperhypersimplicity, namely the notion of a thin class. We prove a number of results relating automorphisms, invariance, and thin classes. Our main results are an analog of Martin's work on hyperhypersimple sets and high degrees, using thin classes and anc degrees, and an analog of Soare's work demonstrating that maximal sets form an orbit. In particular, we show that the collection of perfect thin classes (a notion which is definable in the lattice of Π⁰₁ classes) forms an orbit in the lattice of Π⁰₁ classes; and a degree is anc iff it contains a perfect thin class. Hence the class of anc degrees is an invariant class for the lattice of Π⁰₁ classes. We remark that the automorphism result is proven via a Δ⁰₃ automorphism, and demonstrate that this complexity is necessary.

Cited by 16 (5 self)
[NumPy-Tickets] [NumPy] #1433: numpy array captures 'in' statement when it shouldn't
NumPy Trac numpy-tickets@scipy....
Sun Sep 5 15:06:31 CDT 2010

#1433: numpy array captures 'in' statement when it shouldn't
Reporter: graik | Owner: somebody
Type: enhancement | Status: reopened
Priority: normal | Milestone:
Component: numpy.core | Version: 1.3.0
Resolution: | Keywords: __contains__

Changes (by graik):
* status: closed => reopened
* type: defect => enhancement
* resolution: wontfix =>

Thanks for having a look at this. Though I am afraid I disagree. Now I haven't checked every version of numpy or Numeric. But we have used this construction in a large python library for many years (http://biskit.sf.net). First with Numeric, then with numpy. It could well be that the later Numeric versions were already broken / modified -- we kept using some version 23.x because everything later became increasingly unstable. I also made a big leap in numpy versions from a very early to the latest one.

Anyway, this is non-python behavior. It seems then that numpy's equality operation should be fixed. Python data types have an expected behavior -- if compared to some other (incompatible) data type, they simply return False. This allows many common and elegant short cuts. For example, another very frequent pattern in our library was this (assigning a default value if None is given):

    a = None
    b = a or zeros( 10 )

instead of:

    a = None
    b = zeros( 10 )
    if a is not None:
        b = a

It works perfectly fine with all python data types. But because of numpy's new __equal__, we had to remove all these constructs. This new ValueError is quite annoying. It should only be raised if we are actually comparing

Ticket URL: <http://projects.scipy.org/numpy/ticket/1433#comment:2>
NumPy <http://projects.scipy.org/numpy>

More information about the NumPy-Tickets mailing list
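The breakage the poster describes can be reproduced without NumPy installed; the stand-in class below is my own construction (not NumPy code) that mimics how an elementwise `__eq__` plus an ambiguous truth value breaks truthiness-based idioms such as `a or default`:

```python
class ElementwiseArray:
    """Stand-in mimicking a NumPy-style array: == is elementwise, and the
    truth value of a multi-element result is ambiguous (raises ValueError)."""

    def __init__(self, values):
        self.values = list(values)

    def __eq__(self, other):
        # Elementwise comparison returns another "array", not a bool.
        if isinstance(other, ElementwiseArray):
            return ElementwiseArray(a == b for a, b in zip(self.values, other.values))
        return ElementwiseArray(v == other for v in self.values)

    def __bool__(self):
        raise ValueError("The truth value of an array with more than "
                         "one element is ambiguous.")


def default_if_none(a, default):
    # 'a or default' would call bool(a) and raise; test identity instead.
    return default if a is None else a
```

With a real NumPy array, `b = a or zeros(10)` fails the same way, because `or` must evaluate `bool(a)`. The usual fix is the explicit `None` test shown in `default_if_none`, or `a.any()` / `a.all()` when an elementwise reduction is actually intended.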
Goodyear SAT Math Tutor

...Additionally, I taught at a community college. I have been teaching study skills on an annual basis since 1976. I had specific training in teaching study skills while instructing at Prescott Mile High Middle School in 1986-1990.
56 Subjects: including SAT math, English, reading, geometry

...It is my job as a tutor to guide the students to sift through the problems and obtain the correct answer. This is done by providing them with thoughtful, sequential questions. This road map I take with the students will enable them to continue the thought process without me there.
20 Subjects: including SAT math, chemistry, physics, calculus

...For this reason I have always tutored my friends and loved ones in math whenever they need help. It gives me great pleasure when I know I have helped someone in their math education. My goal is to have everyone know math well, because math is part of our everyday lives and serves as logical reasoning to solve every problem we have.
21 Subjects: including SAT math, chemistry, calculus, physics

...I worked as a freelance graphic designer for 3 years and currently act as a design consultant. I enjoy working with students interested in realism (learning to "see" the subject, measuring and framing techniques) or conceptual (concept development, multimedia, execution) art. I continue to draw and sketch in my leisure time.
8 Subjects: including SAT math, French, SAT reading, drawing

...I will be an outstanding tutor for your student. Thank you for giving me the opportunity. I am an accounting major at ASU. I have been a T.A. for 2 College Algebra classes at ASU for the past 2 semesters, and will continue to be one for the next 2 years until I graduate.
10 Subjects: including SAT math, chemistry, algebra 1, GED
1.6: Classifying Polygons
Created by: CK-12

Learning Objectives
• Define triangle and polygon.
• Classify triangles by their sides and angles.
• Understand the difference between convex and concave polygons.
• Classify polygons by number of sides.

Review Queue
1. Draw a triangle.
2. Where have you seen 4, 5, 6 or 8-sided polygons in real life? List 3 examples.
3. Fill in the blank.
   1. Vertical angles are always _____________.
   2. Linear pairs are _____________.
   3. The parts of an angle are called _____________ and a _____________.

Know What? The Pentagon in Washington DC is a pentagon with congruent sides and angles. There is a smaller pentagon inside of the building that houses an outdoor courtyard. Looking at the picture, the building is divided up into 10 smaller sections. What are the shapes of these sections? Are any of these division lines diagonals? How do you know?

Triangle: Any closed figure made by three line segments intersecting at their endpoints. Every triangle has three vertices (the points where the segments meet), three sides (the segments), and three interior angles (formed at each vertex). All of the following shapes are triangles.

You might have also learned that the sum of the interior angles in a triangle is $180^\circ$.

Example 1: Which of the figures below are not triangles?
Solution: $B$ and $D$ are not triangles.

Example 2: How many triangles are in the diagram below?
Solution: Start by counting the smallest triangles, 16. Now count the triangles that are formed by 4 of the smaller triangles, 7. Next, count the triangles that are formed by 9 of the smaller triangles, 3. Finally, there is the one triangle formed by all 16 smaller triangles. Adding these numbers together, we get $16 + 7 + 3 + 1 = 27$.

Classifying by Angles
Triangles can be grouped by their angles: acute, obtuse, or right. In any triangle, two of the angles will always be acute. The third angle can be acute, obtuse, or right. We classify each triangle by this angle.

Right Triangle: A triangle with one right angle.
Obtuse Triangle: A triangle with one obtuse angle.
Acute Triangle: A triangle where all three angles are acute.
Equiangular Triangle: A triangle in which all the angles are congruent.

Example 3: Which term best describes $\triangle RST$?
Solution: This triangle has one labeled obtuse angle of $92^\circ$, so it is an obtuse triangle.

Classifying by Sides
You can also group triangles by their sides.

Scalene Triangle: A triangle where all three sides are different lengths.
Isosceles Triangle: A triangle with at least two congruent sides.
Equilateral Triangle: A triangle with three congruent sides.

From the definitions, an equilateral triangle is also an isosceles triangle.

Example 4: Classify the triangle by its sides and angles.
Solution: We see that there are two congruent sides, so it is isosceles. By the angles, they all look acute. We say this is an acute isosceles triangle.

Example 5: Classify the triangle by its sides and angles.
Solution: This triangle has a right angle and no sides are marked congruent. So, it is a right scalene triangle.

Polygon: Any closed, 2-dimensional figure that is made entirely of line segments that intersect at their endpoints. Polygons can have any number of sides and angles, but the sides can never be curved. The segments are called the sides of the polygons, and the points where the segments intersect are called vertices.

Example 6: Which of the figures below is a polygon?
Solution: The easiest way to identify the polygon is to identify which shapes are not polygons. $B$, $C$, and $D$ are not polygons, so $A$ is the polygon.

Example 7: Which of the figures below is not a polygon?
Solution: $C$ is not a polygon.

Convex and Concave Polygons
Polygons can be either convex or concave. The term concave refers to a cave; the polygon is "caving in." All stars are concave polygons. A convex polygon does not do this.

Diagonals: Line segments that connect the vertices of a convex polygon that are not sides. The red lines are all diagonals. This pentagon has 5 diagonals.

Example 8: Determine if the shapes below are convex or concave.
Solution: To see if a polygon is concave, look at the polygons and see if any angle "caves in" to the interior of the polygon. The first polygon does not do this, so it is convex. The other two do, so they are concave.

Example 9: How many diagonals does a 7-sided polygon have?
Solution: Draw a 7-sided polygon, also called a heptagon. Drawing in all the diagonals and counting them, we see there are 14.

Classifying Polygons
Whether a polygon is convex or concave, it is always named by the number of sides.

Polygon Name            | Number of Sides       | Number of Diagonals
Triangle                | 3                     | 0
Quadrilateral           | 4                     | 2
Pentagon                | 5                     | 5
Hexagon                 | 6                     | 9
Heptagon                | 7                     | 14
Octagon                 | 8                     | ?
Nonagon                 | 9                     | ?
Decagon                 | 10                    | ?
Undecagon or hendecagon | 11                    | ?
Dodecagon               | 12                    | ?
n-gon                   | $n$ (where $n > 12$)  | ?

Example 10: Name the three polygons below by their number of sides and whether each is convex or concave.
Solution: The pink polygon is a concave hexagon (6 sides). The green polygon is a convex pentagon (5 sides). The yellow polygon is a convex decagon (10 sides).

Know What? Revisited: The pentagon is divided up into 10 sections, all quadrilaterals. None of these dividing lines are diagonals because they are not drawn from vertices.

Review Questions
• Questions 1-8 are similar to Examples 3, 4 and 5.
• Questions 9-14 are similar to Examples 8 and 10.
• Question 15 is similar to Example 6.
• Questions 16-19 are similar to Example 9 and the table.
• Questions 20-25 use the definitions, postulates and theorems in this section.

For questions 1-6, classify each triangle by its sides and by its angles.
7. Can you draw a triangle with a right angle and an obtuse angle? Why or why not?
8. In an isosceles triangle, can the angles opposite the congruent sides be obtuse?
In problems 9-14, name each polygon in as much detail as possible.
15. Explain why the following figures are NOT polygons:
16. How many diagonals can you draw from one vertex of a pentagon? Draw a sketch of your answer.
17. How many diagonals can you draw from one vertex of an octagon? Draw a sketch of your answer.
18. How many diagonals can you draw from one vertex of a dodecagon?
19. Determine the number of total diagonals for an octagon, nonagon, decagon, undecagon, and dodecagon.
For 20-25, determine if the statement is true or false.
20. Obtuse triangles can be isosceles.
21. A polygon must be enclosed.
22. A star is a convex polygon.
23. A right triangle is acute.
24. An equilateral triangle is equiangular.
25. A quadrilateral is always a square.
26. A 5-point star is a decagon.

Review Queue Answers
2. Examples include: stop sign (8), table top (4), the Pentagon (5), snow crystals (6), bee hive combs (6), soccer ball pieces (5 and 6)
3.
   1. congruent or equal
   2. supplementary
   3. sides, vertex
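The diagonal counts in the chapter's table can be checked with a short script (my own note, not part of the chapter): each of the $n$ vertices connects to the $n - 3$ vertices that are neither itself nor its two neighbors, and each diagonal is counted once from each end.

```python
def num_diagonals(n):
    """Number of diagonals in a convex polygon with n sides (n >= 3).

    Each of the n vertices connects to n - 3 non-adjacent vertices,
    and every diagonal gets counted from both ends, so divide by 2.
    """
    return n * (n - 3) // 2


# Fills in the '?' entries of the table, octagon through dodecagon:
# 8 -> 20, 9 -> 27, 10 -> 35, 11 -> 44, 12 -> 54.
for sides in range(8, 13):
    print(sides, num_diagonals(sides))
```

The formula reproduces the entries already given in the table (pentagon 5, hexagon 9, heptagon 14), which is a quick sanity check.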
modular exponentiation for RSA: why is 2^16 + 1 commonly chosen?

I know that the number 2^16 + 1 is commonly used for RSA, since 0b 1 0000 0000 0000 0001 only contains two 1 bits. Many sites explain that this makes modular exponentiation faster, but I haven't come across an explanation of why it is faster. Why is it more efficient to use a number with a lot of zeros for modular exponentiation?

prime-numbers nt.number-theory

I don't think this is exactly a theoretical question, but I wouldn't mind seeing the answer myself! – stankewicz Mar 2 '10 at 19:42
I don't think "theoretical" per se is a requirement here. – Scott Morrison♦ Mar 2 '10 at 21:41
i think this is a fine question, but i do wish that more questions would use correct capitalization. i find it hard to read uncapitalized posts, and it comes across as unprofessional. – Theo Johnson-Freyd Mar 2 '10 at 23:46
@Theo, I would encourage you to start downvoting with this as a reason. I'll try to as well! "-1, inability to use <shift> key." – Scott Morrison♦ Mar 3 '10 at 0:51
@Theo, Scott: why not just edit the post? It's not worth editing for tiny grammatical things, but if it's enough for you to leave a comment or a downvote, I think it's better to just edit. – Anton Geraschenko Mar 3 '10 at 18:23

2 Answers

There are two minor advantages to choosing the exponent 2^16+1. The first advantage, as Johannes observed, is that for a fixed-size exponent, exponentiation to the power e using the basic repeated squaring method is moderately faster when e has lots of zero bits. It is not true that exponents with more one bits are necessarily slower, since there are plenty of such numbers with very short addition chains (though finding such short addition chains is an NP-complete problem in general). In any case, e = 3 would be a much better choice than e = 2^16+1 for the sole purpose of exponentiation.
The second advantage is that 2^16+1 is a prime number and it is not too small. A requirement of the RSA algorithm is that the exponent e must be relatively prime with φ(pq) = (p-1)(q-1). Since the large primes p and q are chosen randomly, there is always a chance that (p-1)(q-1) is not relatively prime with the (previously chosen) exponent e, and the primes p, q must therefore be discarded. So small exponents e are poor choices, since about every (e-1)th choice of p and q is a bad one, thus shrinking the overall key space. Choosing e to be a large prime would be best, but too large an e would make exponentiation slow. In the end, e = 2^16+1 is a nice compromise value.

I guess a third advantage of 65537 is that it is the smallest exponent allowed by NIST, though they don't give a specific reason for that choice. (Probably to avoid small exponent attacks, which are possible when proper padding is not used.) – François G. Dorais♦ Mar 3 '10 at 0:03
@François G. Dorais: If e=3, couldn't you just avoid generating primes that are 1 mod 3? I assume 0 mod 2 and 3 are being avoided already to save some time. It's not hard to take a random number k and consider 6k+5 as a potential prime. – aorq Mar 3 '10 at 6:41
Yes, that is fine for e=3. It is already difficult for e=5 without further decreasing the key space and/or increasing the key-generation cost. (Note that the real issue with e=3 is the risk of small exponent attacks.) – François G. Dorais♦ Mar 3 '10 at 14:47
Also, e=3 has the disadvantage that it might be vulnerable to the broadcast attack. If you send a message x using e=3 to 3 different users (with different n_1, n_2, n_3, WLOG mutually relatively prime, or we'd have a factorisation of 2 of them), then we could solve x^3 mod n_i (i=1,2,3) using the CRT for x. This is a problem if e=3 were a common exponent. Hence the choice for a relatively large e (also prime and few bits set).
– Henno Brandsma Dec 18 '10 at 10:32

The usually used fast exponentiation algorithm is the so-called square-and-multiply algorithm. It needs exactly n+m multiplications, where n is the total length of the binary-written exponent and m is the number of 1-bits in the exponent. Therefore exponentiation with 2^16+1 is almost twice as fast as exponentiation with, say, 2^17-1.

hi, thank you for your answer! can I ask, though, why does it need m+n operations? where is that coming from? thanks again! – sj steve Mar 2 '10 at 20:54
Consider how you might compute something like $5^9$ rapidly. First, compute the numbers $p_0 = 5^{2^0}$, $p_1 = 5^{2^1}$, $p_2 = 5^{2^2}$, and $p_3 = 5^{2^3}$. Each of these is computed from the previous one by squaring, so you have $n-1$ multiplications in total (or $n$ if you start by multiplying 1 by 5 to get $5^{2^0}$). Since 9 is 1001 in binary, you get $5^9$ by multiplying $p_0$ by $p_3$, for $m - 1$ more multiplications (again, $m$ if you start by multiplying 1). In total, we used four multiplications, where you might naively have used nine. – Matt Noonan Mar 2 '10 at 23:55
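A minimal sketch of the square-and-multiply count described in the answer above (the function name and test values are my own):

```python
def square_and_multiply(base, exp, mod):
    """Left-to-right square-and-multiply; returns (result, multiplication count).

    One squaring per exponent bit plus one extra multiply per 1-bit,
    so the count is n + m in the notation of the answer above.
    """
    result, mults = 1, 0
    for bit in bin(exp)[2:]:
        result = (result * result) % mod   # square for every bit
        mults += 1
        if bit == "1":
            result = (result * base) % mod  # multiply only on 1-bits
            mults += 1
    return result, mults


# 2^16 + 1 = 65537 has 17 bits, two of them 1:  17 + 2  = 19 multiplications.
# 2^17 - 1 = 131071 has 17 bits, all of them 1: 17 + 17 = 34 multiplications.
```

(A common refinement skips the initial squaring of 1, saving a multiplication or two, but the rough n + m count and the comparison between the two exponents are unchanged.)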
Sample Exam Problems The final exam will be given Tuesday, May 8 from 5:00 to 6:55 in Warren Weaver, room 101. It is closed book and closed notes. Topics covered • Search □ Blind search -- R+N chap 3, appendix B.2, handout on constrained AND/OR trees. □ Informed search --- R+N secs 4.1 except A* search, 4.2, 4.4 • Game playing -- R+N secs. 5.1-5.4 • Automated reasoning -- R+N chaps 6, 7, 9, handouts. • Machine Learning □ 1R algorithm --- handout □ Naive Bayes --- handout □ Decision trees --- R+N sec 18.3 □ Perceptrons/Back propagation networks --- R+N secs 19.1-19.4 □ Evaluation --- handout □ Minimum description length learning --- handout Problem 1 Explain the difference between ``pick" and ``choose'' in a non-deterministic algorithm. Illustrate with an example. (You may use one of the examples discussed in class.) Problem 2: What is the result of doing alpha-beta pruning in the game tree shown below? Problem 3: Name three conditions that must hold on a game for the technique of MIN-MAX game-tree evaluation to be applicable. Problem 4: Consider a domain where the individuals are people and languages. Let L be the first-order language with the following primitives: s(X,L) --- Person X speaks language L. c(X,Y) --- Persons X and Y can communicate. i(W,X,Y) --- Person W can serve as an interpreter between persons X and Y. j,p,e,f --- Constants: Joe, Pierre, English, and French respectively. A. Express the following statements in L: • i. Joe speaks English. • ii Pierre speaks French. • iii. If X and Y both speak L, then X and Y can communicate. • iv. If W can communicate both with X and with Y, then W can serve as an interpreter between X and Y. • v. For any two languages L and M, there is someone who speaks both L and M. • vi. There is someone who can interpret between Joe and Pierre. B. Show that (vi) can be proven from (i)---(v) using backward-chaining resolution. 
You must show the Skolemized form of each statement, and every resolution that is used in the final proof. You need not show the intermediate stages of Skolemization, or show resolutions that are not used in the final proof.

Problem 5:
A. Give an example of a decision tree with two internal nodes (including the root), and explain how it classifies an example.
B. Describe the ID3 algorithm to construct decision trees from training data.
C. What is the entropy of a classification C in table T? What is the expected entropy of classification C if table T is split on predictive attribute A?
D. What kinds of techniques can be used to counter the problem of over-fitting in decision trees?

Problem 6: Consider the following data set with three Boolean predictive attributes, W, X, Y and Boolean classification C.

W  X  Y  C
T  T  T  T
T  F  T  F
T  F  F  F
F  T  T  F
F  F  F  T

We now encounter a new example: W=F, X=T, Y=F. If we apply the Naive Bayes method, what probability is assigned to the two values of C?

Problem 7: "Local minima can cause difficulties for a feed-forward, back-propagation neural network." Explain. Local minima of what function of what arguments? Why do they create difficulties?

Problem 8: Which of the following describes the process of task execution (classifying input signal) in a feed-forward, back-propagation neural network? Which describes the process of learning? (One answer is correct for each.)
• a. Activation levels are propagated from the inputs through the hidden layers to the outputs.
• b. Activation levels are propagated from the outputs through the hidden layers to the inputs.
• c. Weights on the links are modified based on messages propagated from input to output.
• d. Weights on the links are modified based on messages propagated from output to input.
• e. Connections in the network are modified, gradually shortening the path from input to output.
• f.
Weights at the input level are compared to the weights at the output level, and modified to reduce the discrepancy.

Problem 9: Explain briefly (2 or 3 sentences) the use of a training set and a test set in evaluating learning programs.

Problem 10: Explain how the minimum description length (MDL) learning theory justifies the conjecture of
A. perfect classification hypotheses (i.e. classification hypotheses that always give the correct classification, given the values of the predictive attributes) for nominal classifications.
B. imperfect classification hypotheses (i.e. hypotheses that do better than chance) for nominal attributes.
C. approximate classification hypotheses for numeric classifications (i.e. hypotheses that give answers that are nearly correct).
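The Naive Bayes calculation asked for in Problem 6 can be checked with a short script. This is a sketch under the usual assumptions (plain frequency estimates, no Laplace smoothing); the table and query are taken from the problem statement:

```python
# Training data from Problem 6, rows are (W, X, Y, C):
data = [
    ("T", "T", "T", "T"),
    ("T", "F", "T", "F"),
    ("T", "F", "F", "F"),
    ("F", "T", "T", "F"),
    ("F", "F", "F", "T"),
]

def naive_bayes(query):
    """Return normalized P(C=c | query) for c in {T, F}, no smoothing."""
    scores = {}
    for c in ("T", "F"):
        rows = [r for r in data if r[3] == c]
        p = len(rows) / len(data)              # prior P(C=c)
        for i, v in enumerate(query):          # likelihoods P(attr_i=v | C=c)
            p *= sum(1 for r in rows if r[i] == v) / len(rows)
        scores[c] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

post = naive_bayes(("F", "T", "F"))  # the new example W=F, X=T, Y=F
print(post)  # P(C=T) = 9/13 ~ 0.69, P(C=F) = 4/13 ~ 0.31
```

The unnormalized scores are (2/5)(1/2)^3 = 1/20 for C=T and (3/5)(1/3)^3 = 1/45 for C=F, which normalize to 9/13 and 4/13.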
An ideal gas is trapped between a mercury column and the closed end of a narrow vertical tube of uniform bore. The upper end of the tube is open to the atmosphere (76 cm of Hg). The lengths of the Hg column and the trapped air are 20 cm and 43 cm respectively. What will be the length of the air column when the tube is tilted slowly in a vertical plane through an angle of 60 degrees? (T is constant.)
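A hedged sketch of the arithmetic for the question above, assuming the closed end is at the bottom (so the trapped gas supports the atmosphere plus the full mercury column) and that no mercury spills when the tube is tilted; Boyle's law applies since T is constant:

```python
import math

P_atm = 76.0   # atmospheric pressure, in cm of Hg
h_hg  = 20.0   # length of the mercury column, cm
L1    = 43.0   # initial length of the trapped air column, cm
theta = 60.0   # tilt from the vertical, degrees

# Vertical tube: pressure on the trapped gas is atmosphere plus full column.
P1 = P_atm + h_hg

# Tilted tube: only the vertical projection of the Hg column adds pressure.
P2 = P_atm + h_hg * math.cos(math.radians(theta))

# Boyle's law at constant temperature, uniform bore: P1 * L1 = P2 * L2.
L2 = P1 * L1 / P2
print(L2)  # ~ 48.0 cm, since 96 * 43 / 86 = 48
```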
Calculus, 1st Edition

Taking a fresh approach while retaining classic presentation, the Tan Calculus series utilizes a clear, concise writing style, and uses relevant, real world examples to introduce abstract mathematical concepts with an intuitive approach. In keeping with this emphasis on conceptual understanding, each exercise set in the three semester Calculus text begins with concept questions and each end-of-chapter review section includes fill-in-the-blank questions which are useful for mastering the definitions and theorems in each chapter. Additionally, many questions asking for the interpretation of graphical, numerical, and algebraic results are included among both the examples and the exercise sets. The Tan Calculus three semester text encourages a real world, application based, intuitive understanding of Calculus without compromising the mathematical rigor that is necessary in a Calculus text.

Product Details
• ISBN-13: 9780534465797
• Publisher: Cengage Learning
• Publication date: 11/9/2009
• Edition description: New Edition
• Edition number: 1
• Sales rank: 933,401
• Product dimensions: 8.60 (w) x 10.10 (h) x 2.00 (d)

Table of Contents
0. PRELIMINARIES. Lines. Functions and Their Graphs. The Trigonometric Functions. Combining Functions. Graphing Calculators and Computers. Mathematical Models. Chapter Review.
1. LIMITS. An Intuitive Introduction to Limits. Techniques for Finding Limits. A Precise Definition of a Limit. Continuous Functions. Tangent Lines and Rates of Change. Chapter Review. Problem-Solving Techniques. Challenge Problems.
2. THE DERIVATIVE. The Derivative. Basic Rules of Differentiation. The Product and Quotient Rules. The Role of the Derivative in the Real World. Derivatives of Trigonometric Functions. The Chain Rule. Implicit Differentiation. Related Rates. Differentials and Linear Approximations. Chapter Review. Problem-Solving Techniques. Challenge Problems.
3. APPLICATIONS OF THE DERIVATIVE. Extrema of Functions. The Mean Value Theorem. Increasing and Decreasing Functions and the First Derivative Test. Concavity and Inflection Points. Limits Involving Infinity; Asymptotes. Curve Sketching. Optimization Problems. Newton's Method. Chapter Review. Problem-Solving Techniques. Challenge Problems.
4. INTEGRATION. Indefinite Integrals. Integration by Substitution. Area. The Definite Integral. The Fundamental Theorem of Calculus. Numerical Integration. Chapter Review. Problem-Solving Techniques. Challenge Problems.
5. APPLICATIONS OF THE DEFINITE INTEGRAL. Areas Between Curves. Volumes: Disks, Washers, and Cross Sections. Volumes Using Cylindrical Shells. Arc Length and Areas of Surfaces of Revolution. Work. Fluid Pressure and Force. Moments and Centers of Mass. Chapter Review. Problem-Solving Techniques. Challenge Problems.
6. THE TRANSCENDENTAL FUNCTIONS. The Natural Logarithmic Function. Inverse Functions. Exponential Functions. General Exponential and Logarithmic Functions. Inverse Trigonometric Functions. Hyperbolic Functions. Indeterminate Forms and L'Hôpital's Rule. Chapter Review. Challenge Problems.
7. TECHNIQUES OF INTEGRATION. Integration by Parts. Trigonometric Integrals. Trigonometric Substitutions. The Method of Partial Fractions. Integration Using Tables of Integrals and CAS. Improper Integrals. Chapter Review. Problem-Solving Techniques. Challenge Problems.
8. DIFFERENTIAL EQUATIONS. Differential Equations: Separable Equations. Direction Fields and Euler's Method. The Logistic Equation. First-Order Linear Differential Equations. Chapter Review. Challenge Problems.
9. INFINITE SEQUENCES AND SERIES. Sequences. Series. The Integral Test. The Comparison Tests. Alternating Series. Absolute Convergence; The Ratio and Root Tests. Power Series. Taylor and Maclaurin Series. Approximation by Taylor Polynomials. Chapter Review. Problem-Solving Techniques. Challenge Problems.
10. CONIC SECTIONS, PARAMETRIC EQUATIONS, AND POLAR COORDINATES. Conic Sections. Plane Curves and Parametric Equations. The Calculus of Parametric Equations. Polar Coordinates. Areas and Arc Lengths in Polar Coordinates. Conic Sections in Polar Coordinates. Chapter Review. Challenge Problems.
11. VECTORS AND THE GEOMETRY OF SPACE. Vectors in the Plane. Coordinate Systems and Vectors in Three-Space. The Dot Product. The Cross Product. Lines and Planes in Space. Surfaces in Space. Cylindrical and Spherical Coordinates. Chapter Review. Challenge Problems.
12. VECTOR-VALUED FUNCTIONS. Vector-Valued Functions and Space Curves. Differentiation and Integration of Vector-Valued Functions. Arc Length and Curvature. Velocity and Acceleration. Tangential and Normal Components of Acceleration. Chapter Review. Challenge Problems.
13. FUNCTIONS OF SEVERAL VARIABLES. Functions of Two or More Variables. Limits and Continuity. Partial Derivatives. Differentials. The Chain Rule. Directional Derivatives and Gradient Vectors. Tangent Planes and Normal Lines. Extrema of Functions of Two Variables. Lagrange Multipliers. Chapter Review. Challenge Problems.
14. MULTIPLE INTEGRALS. Double Integrals. Iterated Integrals. Double Integrals in Polar Coordinates. Applications of Double Integrals. Surface Area. Triple Integrals. Triple Integrals in Cylindrical and Spherical Coordinates. Change of Variables in Multiple Integrals. Chapter Review. Challenge Problems.
15. VECTOR ANALYSIS. Vector Fields. Divergence and Curl. Line Integrals. Independence of Path and Conservative Vector Fields. Green's Theorem. Parametric Surfaces. Surface Integrals. The Divergence Theorem. Stokes' Theorem. Chapter Review. Challenge Problems.
APPENDICES. A. The Real Number Line, Inequalities, and Absolute Value. B. Proofs of Selected Theorems.
In case you missed it: October Roundup
November 10, 2011, by David Smith

In case you missed them, here are some articles from October of particular interest to R users. The creator of the ggplot2 package, Hadley Wickham, shares details on some forthcoming big-data graphics functions (based on research sponsored by Revolution Analytics). A list of several dozen free data sources that can easily be imported into R. Bob Muenchen gave a presentation "Introduction to R for SAS and SPSS Users"; the slides include many useful resources for new R programmers. Submissions have been posted for the "Applications of R in Business" contest, and your comments will be taken into consideration by the judges. How to make a Hallowe'en card with R graphics. An overview of the new features in R 2.14.0. The Systematic Investor blog shows how to implement an "average correlation" criterion for optimizing portfolios in R. The Quantum Forest blog includes several worked examples of random-effects modeling with R. The New York Times "Bits" blog published an article on Big Data that mentioned R, MapReduce and NoSQL. An article in Forbes includes the quote, "Anyone planning to work with Big Data ought to learn Hadoop and R". High-schoolers celebrate World Statistics Day with R. I posted slides from my presentation "100% R and More" on the features Revolution R Enterprise adds to open source R. A profile of quantitative developer and author of "R Cookbook", Paul Teetor. A report from the ACM Data Mining Camp includes several applications of R. A list of the "Top 50 Statistics Blogs" includes several that post content related to R. Antonio Piccolboni gave a presentation to the Bay Area R User Group demonstrating that it's much easier to do K-means clustering in Hadoop with help from R. R user Yanh Zhan offers seven good reasons to like R. Joseph Rickert reflects on a presentation by Brad Efron on Bayesian Inference.
Slides are available for the presentation "Backtesting FINRA's Limit Up/Down Rules", where R was used to investigate the 2010 "Flash Crash" of the stock market. Two NYC-based R users have organized "DataDive" events for data scientists to apply their skills to help non-profit and charity organizations. Oracle has announced a "Big Data Appliance" that incorporates open source R. Other non-R-related stories in the past month included: statisticians in Glamour magazine, reviews of the book "A Million Random Digits", the good/evil nature of Big Data, an even worse use of pie charts than usual, a Rubik's Cube solving robot, and a gravity-defying Slinky. There is a new R user group in Dublin. Meeting times for these groups can be found on the updated R Community Calendar. As always, thanks for the comments and please send any suggestions to me at [email protected]. Don't forget you can follow the blog using an RSS reader like Google Reader, or by following me on Twitter (I'm @revodavid). You can find roundups of previous months here.
Monte Carlo Approximation of Pi

Date: 11/24/2001 at 11:47:22
From: Sharmila V.
Subject: Monte Carlo approximation of pi

Using the circle-in-square model, find an algorithm to determine the value of pi. You can assume the existence of two functions -

uniform(a,b) - returns a uniformly distributed random variable of type real between a and b.
radius(x,y) - given the coordinates x and y, the function returns the distance from the centre of the circle/square at which the coordinates lie.

How many trials will it take to achieve a 90% confidence interval for pi? Or a 95% confidence interval?

Date: 11/25/2001 at 08:39:29
From: Doctor Jubal
Subject: Re: Monte Carlo approximation of pi

Hi Sharmila,

Algorithms like these, which use random numbers to approximate deterministic outcomes, are called Monte Carlo methods, after the famous casino city where random numbers produce a deterministic outcome (the house wins the gambler's money).

The unit circle has an area of pi * r^2 = pi * 1^2 = pi, while the square it is inscribed in has side length 2, so its area is 4. One way of approximating pi is to pick a point at random inside the square. With probability pi/4, that point also lies inside the unit circle, because the unit circle has pi/4 the area of the square. So this gives a way to approximate pi/4. Pick a large number of points at random, and then approximate

          number of points inside unit circle
  pi/4 ~ -------------------------------------
               total number of points

Of course, you can find pi itself by multiplying by four at the end. So to implement this scheme, you need a way of randomly picking a point inside the square and a way of testing whether that point is inside the unit circle.

To pick a random point inside the square, we need to know where the square is. Since the unit circle is centered at the origin and has radius one, the square it is inscribed in has edges at x = +-1 and y = +-1.
So if we call uniform(-1,1) twice, once for the x-coordinate and once for the y-coordinate, we'll have a random point in the square.

The unit circle is the set of points that are exactly 1 unit away from the origin. Anything less than 1 unit away is inside it. Anything more than 1 unit away is outside it. So once you have the x and y coordinates, that coordinate pair lies inside the unit circle if radius(x,y) < 1. Repeat a number of times, recording how many of the points lie inside the circle along with the total number of points, and you have an approximation.

How good an answer depends on the number of points. For this, we'll need some statistics. The usual measure of a random process's variability is the variance s^2, defined as

  s^2 = <r^2> - <r>^2

where <r^2> is the expectation of the random variable r squared, and <r>^2 is the square of the random variable expectation (in other words, the square of the mean).

The random variable r in this case is the true/false property of whether or not our randomly chosen point is inside the unit circle. Let's assign the property of being inside the circle a value of 1, and being outside the circle a value of 0. With probability pi/4, the point is inside the circle, so the mean (expectation) of r is

  <r> = (pi/4)(1) + (1-pi/4)(0) = pi/4

Technically, we already knew this, since it's the whole basis for the algorithm, but it's nice to know we're on the right track. The expectation of r^2 is also

  <r^2> = (pi/4)(1^2) + (1-pi/4)(0^2) = pi/4

so the variance s^2 is

  s^2 = <r^2> - <r>^2 = pi/4 - (pi/4)^2 = (pi/4)(1 - pi/4)

For our problem, the variance itself is only useful because we can use it to derive a couple of other more useful quantities, namely the standard deviation s, which is just the square root of the variance:

  s = sqrt[(pi/4)(1-pi/4)]

and the relative error, which is the standard deviation divided by the mean:

  s/m = sqrt[(pi/4)(1-pi/4)] / (pi/4) = sqrt(4/pi - 1) = .5227232...
The relative error measures how big the "average" error is compared to the correct answer. In this case, the "average" error is more than half the size of the correct answer, which is downright terrible! But this is for only a single trial, which will give us one of only two answers, pi = 4 or pi = 0, so it's understandable why it's so bad. The relative error decreases with the square root of the number of trials; that is (N is the number of trials)

  s/m = (s/m for one trial)/sqrt(N)

               4/pi - 1
  s/m = sqrt( ---------- )
                  N

So to find the 90% and 95% confidence intervals, you need to find the values of N that decrease the relative error to .1 and .05, respectively.

Does this help? Write back if you'd like to talk about this some more, or if you have any other questions.

- Doctor Jubal, The Math Forum

Date: 11/25/2001 at 10:28:04
From: Sharmila Varshney
Subject: Re: Monte Carlo approximation of pi

Thanks for your quick reply to my question. The only thing is that it is a unit circle in a unit square. I hope this does not change the approach. I will test it out with the area of square equal to 2 instead of 4. I am using your forum for the first time and am very impressed. I think you all are doing a wonderful job! I did not think that in this time and age people actually did this kind of thing. Thanks a million.

Date: 11/25/2001 at 10:43:04
From: Doctor Jubal
Subject: Re: Monte Carlo approximation of pi

Hi Sharmila,

That approach won't work, because the unit circle (which has area pi) won't fit inside a unit square (which has area 1, not 2). If you want to use a unit square, what you could do is take the arrangement of circle and square I described in my first response and consider only the part of it that lies in the first quadrant. That arrangement gives a unit square (area 1), with one fourth of the unit circle (area pi/4) inside it.
The ratio of area occupied by circle to area occupied by square is still pi/4, because each of the four quarters of the circle-square combination is completely identical except for being rotated 90 degrees with respect to its neighbors.

To do this, the only modification you'd need to make to the approach I suggested the first time is to call uniform(0,1) rather than uniform(-1,1). This gives you random points that are guaranteed to be in the first quadrant. Because of symmetry, the test to see if the point is inside the unit circle still works, and because the probability of being in the unit circle is still pi/4, the error analysis still holds.

Does this help? Write back if you'd like to talk about this some more, or if you have any other questions.

- Doctor Jubal, The Math Forum

Date: 11/25/2001 at 10:50:14
From: Sharmila Varshney
Subject: Re: Monte Carlo approximation of pi

I was just realizing the problem when I received your second email. Now I get the whole picture. Thanks so much! You have no idea how much of a help you have been. Would the approach be different if I had to consider a circle with diameter 1 in a unit square?

Date: 11/27/2001 at 09:59:27
From: Doctor Jubal
Subject: Re: Monte Carlo approximation of pi

Hi Sharmila,

Absolutely not. The shapes and the ratios of area are all the same. The only thing to change is that since the circle now has radius 1/2 and the square extends from -1/2 < x or y < 1/2, you want to generate random points with uniform(-1/2, 1/2) and then test if they're inside the circle by testing if radius(x,y) < 1/2. You can do this algorithm with any size square you want just by adjusting the constants. Most of the time, though, it's done using 1/4 of a circle with radius 1, simply because the library random number generating routines on many platforms generate random numbers with a uniform(0,1) distribution.

- Doctor Jubal, The Math Forum
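The quarter-circle scheme described in this exchange can be sketched in a few lines of Python; `random.uniform` plays the role of uniform(0,1) and `math.hypot` the role of radius(x,y). The trial counts follow from the relative-error formula in the reply, s/m = sqrt((4/pi - 1)/N). The fixed seed is an illustrative choice, not part of the method:

```python
import math
import random

def estimate_pi(n, rng=None):
    """Estimate pi by sampling n points uniformly in the unit square and
    counting how many fall inside the quarter of the unit circle."""
    rng = rng or random.Random(12345)  # fixed seed so the run is repeatable
    inside = sum(1 for _ in range(n)
                 if math.hypot(rng.uniform(0, 1), rng.uniform(0, 1)) < 1)
    return 4 * inside / n

def trials_needed(rel_err):
    """Trials needed to push the relative error sqrt((4/pi - 1)/N) below rel_err."""
    return math.ceil((4 / math.pi - 1) / rel_err**2)

print(estimate_pi(100_000))                     # close to 3.14159...
print(trials_needed(0.1), trials_needed(0.05))  # roughly 28 and 110
```

Note how slowly the error shrinks: each extra decimal digit of pi costs about a hundredfold more trials, which is why Monte Carlo is a demonstration method here rather than a practical way to compute pi.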
Earlville, PA Math Tutor

Find an Earlville, PA Math Tutor

...I am proficient in the subject areas included in the GED Math portion, including geometry, area of triangles, circles, and cylinders, and angles, among others. I can help study the types of answers generally given and the types of word problems to understand what the problem is asking. I can also help with general studies of the science, social studies, and language arts.
26 Subjects: including SAT math, logic, algebra 1, algebra 2

...I taught introductory and intermediate physics classes at New College, Duke University and RPI. Some years ago I started to tutor one-on-one and have found that, more than classroom instruction, it allows me to tailor my teaching to students' individual needs. Their success becomes my success.
21 Subjects: including algebra 1, algebra 2, calculus, SAT math

...I come from a strong background in mathematics and sciences. I am currently taking my third calculus course, and have completed many topics prior including geometry, algebra, pre-calculus, trigonometry, etc. I have also completed two chemistry courses, a science class which I am experienced enough in to tutor.
14 Subjects: including algebra 1, algebra 2, calculus, geometry

...I differentiated the program as necessary for individual student needs. I passed the certification exam (PECT) for elementary education. I am a senior at Muhlenberg College, and I am currently entering my third college season on the track team.
26 Subjects: including calculus, precalculus, study skills, ESL/ESOL

...I have also been involved in co-teaching Algebra II with a general education teacher for the past four years. I have a dual Bachelor's degree in Special Education/Elementary Education through Kutztown University. I am also certified in Middle School math.
26 Subjects: including prealgebra, biology, chemistry, trigonometry
Spring 2012

Feb 10: Organizational meeting

Feb 17: Xing Zhong (New Jersey Institute of Technology, New Jersey)
Threshold Phenomena for Symmetric Decreasing Solutions of Reaction-Diffusion Equations
We study the Cauchy problem for the nonlinear reaction-diffusion equation (u_t = u_xx + f(u), u(x,0) = \phi(x), x \in R, t > 0), with different nonlinearities. By using an energy functional and an exponentially weighted functional, for symmetric decreasing initial conditions, we prove a one-to-one relation between the long time behavior of the solution and the limit value of the energy. Then we study the threshold phenomena. This is a joint work with Cyrill Muratov.

Feb 24: No meeting today

Mar 02: TBA

Mar 09: Keith Promislow (Michigan State University, East Lansing, MI)
Network Formation and Ion Conduction in Ionomer Membranes
Many important processes in the physical world can be described as a gradient (overdamped) flow of a variational energy. We present a broad formalism for the generation of new classes of higher-order variational energies with a physically motivated structure. In particular we reformulate the Cahn-Hilliard energy, which is well known to describe the surface area of mixtures, into a higher-order model of interfacial energy for mixtures of charged polymers (ionomers) with solvent. These materials are important as selectively conductive membrane separators in a wide variety of energy conversion devices, including polymer electrolyte membrane fuel cells, Lithium ion batteries, and dye sensitized solar cells. Our reformulated energy, called the Functionalized Cahn-Hilliard (FCH) energy, captures electrostatic interactions between the charged groups and the complex entropic effects generated by solvent-ion interactions, and allows us to unfold the bilayer and pore networks formed by the solvent phase imbibed into the polymer matrix.
We discuss sharp interface reductions of the FCH energy, its gradient flows, and sharp interface reductions of the gradient flows that give rise to higher-order curvature driven flows. We also describe extensions to models that couple to ionic transport as well as to multiphase models suitable to describe a wide range of membrane casting processes.
[1] N. Gavish, J. Jones, Z. Xu, A. Christlieb, K. Promislow, submitted to Polymers Special issue on Thin Membranes (2012).
[2] H. Zhang and K. Promislow, Critical points of Functionalized Lagrangians, Discrete and Continuous Dynamical Systems A, to appear.
[3] N. Gavish, G. Hayrapetyan, Y. Li, K. Promislow, Physica D, 240: 675-693 (2011).
[4] K. Promislow and B. Wetton, PEM fuel cells: A Mathematical Overview, invited review paper, SIAM Applied Math. 70: 369-409 (2009).

Mar 16: No seminar today

Mar 23: Levent Kurt (The Graduate Center of CUNY)
The Higher-Order Short Pulse Equation
We derive an equation, the higher-order short pulse equation (HSPE), from the nonlinear wave equation to capture the dynamics of ultra-short solitons in cubic nonlinear media, using both the multiple scaling technique and the renormalization group. The multiple scaling derivation will be presented. The numerical solution of the HSPE with the exact one- and two-soliton solutions of the short pulse equation (SPE) as initial conditions, and its comparison to the numerical solutions of the SPE and the original equation, will also be demonstrated.

Mar 30: Christina Mouser (William Paterson University, New Jersey)
The Control of Frequency of a Conditional Oscillator Simultaneously Subjected to Multiple Oscillatory Inputs
The gastric mill network of the crab Cancer borealis is an oscillatory network with frequency ~ 0.1 Hz. Oscillations in this network require neuromodulatory synaptic inputs as well as rhythmic inputs from the faster (~ 1 Hz) pyloric neural oscillator. We study how the frequency of the gastric mill network is determined when it receives rhythmic input from two different sources but where the timing of these inputs may differ.
We study how the frequency of the gastric mill network is determined when it receives rhythmic input from two different sources but where the timing of these inputs may differ. We find that over a certain range of the time difference one of the two rhythmic inputs plays no role what so ever in determining the network frequency while in another range, both inputs work together to determine the frequency. The existence and stability of periodic solutions to model sets of equations are obtained analytically using geometric singular perturbation theory. The results are validated through numerical simulations. Comparisons to experiments are also presented. Apr 20: Peter Gordon (New Jersey Institute Technology, New Jersey) Local kinetics and self-similar dynamics of morphogen gradients Some aspects of pattern formation in developing embryos can be described by nonlinear reaction-diffusion equations. An important class of these models accounts for diffusion and degradation of a locally produced single chemical species and describe formation of morphogen gradients, the concentration fields of molecules acting as spatial regulators of cell differentiation in developing tissues. At long times, solutions of such models approach a steady state in which the concentration decays with distance from the source of production. I will present our recent results that characterize the dynamics of this process. These results provide an explicit connection between the parameters of the problem and the time needed to reach a steady state value at a given position. I will also show that the long time behavior of such models, in certain cases, can be described in terms of very singular self-similar solutions. These solutions are associated with a limit of infinitely large signal production strength. This is a joint work with: C. Muratov, S. Shvartsman, C. Sample and A.Berezhkovskii. 
Apr 27: Ionut Florescu (Stevens Institute of Technology, New Jersey)
Solving systems of PIDEs coming from regime switching jump diffusion models
In this talk we consider an underlying model where constant parameters switch according to a continuous time Markov process. The times of switch are modeled using a Cox process. In addition the model features jumps. We examine the option pricing problem when the stock process follows this model, and we find that a tightly coupled system of partial integro-differential equations needs to be solved. We exemplify the solution on several case studies. We also analyze two types of jump distributions: the log double exponential due to Kou, and a new distribution which we call a log normal mixture, which seems to be useful in precisely modeling the jumps and distinguishing them from sampled variability.

May 04: Kasia Pawelek (Oakland University, MI)
Mathematical Modeling of Virus Infections and Immune Responses
The first part of the talk is about mathematical models for the HIV infection. Such mathematical models have made considerable contributions to our understanding of HIV dynamics. Introducing time delays to HIV models usually brings challenges to both mathematical analysis of the models and comparison of model predictions with patient data. We incorporate two delays, one the time needed for infected cells to produce virions after viral entry and the other the time needed for the adaptive immune response to emerge to control viral replication, into an HIV-1 model. We begin model analysis with proving the local stability of the infection-free and infected steady states. By developing different Lyapunov functionals, we obtain conditions ensuring global stability of the steady states. We also fit the model including two delays to viral load data from 10 patients during primary HIV-1 infection and estimate parameter values. The second part of the talk deals with mathematical models for the Influenza infection.
The mechanisms underlying viral control during an uncomplicated influenza virus infection are not fully understood. We developed a mathematical model including both innate and adaptive immune responses to study the within-host dynamics of equine influenza virus infection in horses. By comparing modeling predictions with both interferon and viral kinetic data, we examined the relative roles of target cell availability, and innate and adaptive immune responses in controlling the virus. This study provides a quantitative understanding of the biological factors that can explain the viral and interferon kinetics during a typical influenza virus infection. May 11: Kia Dalili (Stevens Institute of Technology, New Jersey) Modeling network evolution Networks constructed out of real world data often exhibit a number of properties not normally seen in random graphs. Amongst them are a tendency to have a modular structure and a small average shortest path length. We will introduce a model of network evolution using benefit-maximizing independent agents as nodes, and use it to explain how modularity emerges in complex networks and how the environment within which the agents interact controls the degree of modularity. Fall 2011 Sept 02: Organizational meeting Sept 09: No meeting today Sept 16: No meeting today Sept 23: No meeting today Sept 30: No meeting today Oct 07: No meeting today Oct 14: Philippe G. LeFloch (Université Paris VI and CNRS) Undercompressible shocks and moving phase boundaries Regularization-sensitive wave patterns often arise in continuum physics, especially in complex fluid flows, which may contain undercompressive shock waves and moving phase boundaries. I will review here the theory of solutions to nonlinear hyperbolic systems of conservation laws, in the regime when small-scale effects like viscosity and capillarity drive the selection and dynamics of (nonclassical) shocks. 
The concept of a kinetic relation was introduced and provides the proper tool in order to characterize admissible shocks. The kinetic relation depends on higher-order terms that take additional physics into account. A general theory of the kinetic relation has been developed by the author and his collaborators, which covers various issues such as the Riemann problem, the Cauchy problem, the front tracking schemes, and several numerical strategies adapted to handle nonclassical shocks. Relevant papers are available at the link: philippelefloch.org. Oct 21: Robert Numrich (College of Staten Island-CUNY) Computer Performance Analysis and the PI Theorem of Dimensional Analysis This talk applies the Pi Theorem of dimensional analysis to a representative set of examples from computer performance analysis. It takes a different look at problems involving latency, bandwidth, cache-miss ratios, and the efficiency of parallel numerical algorithms. The Pi Theorem is the fundamental tool of dimensional analysis, and it applies to problems in computer performance analysis just as well as it does to problems in other sciences. Applying it requires the definition of a system of measurement appropriate for computer performance analysis with a consistent set of units and dimensions. Then a straightforward recipe for each specific problem reduces the number of independent variables to a smaller number of dimensionless parameters. Two machines with the same values of these parameters are self-similar and behave the same way. Self-similarity relationships emphasize how machines are the same rather than how they are different. The Pi Theorem is simple to state and simple to prove, using purely algebraic methods, but the results that follow from it are often surprising and not simple at all. The results are often unexpected but they almost always reveal something new about the problem at hand. 
Oct 28: No meeting today Nov 04: No meeting today Nov 11: Joab Winkler (The University of Sheffield, UK) The computation of multiple roots of polynomials whose coefficients are inexact This lecture will show by example some of the problems that occur when the roots of a polynomial are computed using a standard polynomial root solver. In particular, polynomials of high degree with a large number of multiple roots will be considered, and it will be shown that even roundoff error due to floating point arithmetic, in the absence of data errors, is sufficient to cause totally incorrect results. Since data errors are usually larger than roundoff errors (and fundamentally different in character), the errors encountered with real world data are significant and emphasise the need for a computationally robust polynomial root solver. The inability of commonly used polynomial root solvers to compute high degree multiple roots correctly requires investigation of the cause of this failure. This leads naturally to a discussion of a structured condition number of a root of a polynomial, where structure refers to the form of the perturbations that are applied to the coefficients. It will be shown that this structured condition number, where the perturbations are such that the multiplicities of the roots are preserved, differs significantly from the standard condition numbers, which refer to random (unstructured) perturbations of the coefficients. Several examples will be given and it will be shown that the condition number of a multiple root of a polynomial due to a random perturbation in the coefficients is large, but the structured condition number of the same root is small. This large difference is typically several orders of magnitude. A method developed by Gauss for computing the roots of a polynomial will be discussed. This method has an elegant geometric interpretation in terms of pejorative manifolds, which were introduced by William Kahan (Berkeley). 
The method is rarely used now, but it will be considered because it differs significantly from all other methods (Newton-Raphson, Bairstow, Laguerre, etc.) and is non-iterative. The computational implementation of this method raises, however, some non-trivial issues – the determination of the rank of a matrix in a floating point environment and the quotient of two inexact polynomials – and they will be discussed because they are ill-posed operations. They must be implemented with care because simple methods will necessarily lead to incorrect results. I will finish the talk by giving several non-trivial examples (polynomials of high degree, with several multiple roots of high degree, whose coefficients are corrupted by noise), and the results will be compared with other methods for the computation of multiple roots of polynomials whose coefficients are corrupted by noise. Nov 18: No meeting today Nov 25: No meeting today Dec 02: Pam Cook (University of Delaware) Complex (wormlike micellar) fluids: Shear banding and inertial effects (with Lin Zhou, New York City College of Technology, and Gareth McKinley, Massachusetts Institute of Technology) Concentrated surfactants in solution, depending on the concentration, salinity and temperature, self-assemble into highly entangled wormy cylindrical micelles. In solution these "worms" entangle, exhibiting visco-elastic properties like polymer solutions. In addition to reptative and Roussian relaxation/disentanglement the worms break and reform and are thus known as "living" polymers. When sheared, as the applied shear rate increases, experiments show that the steady state velocity profile across the gap of the shear cell transitions from a single shear-rate profile to a two-banded profile. The transition to the two-banded state is accompanied by strong viscous thinning.
The VCM (Vasquez, Cook, McKinley) model is a rheological equation of state capable of describing these fluids which specifically incorporates the rate-dependent breakage and reforming of the worms as well as non-local effects arising from coupling between the macroscopic stress in the deformed elastic network and the microstructure. The constitutive equations describe the evolution of the number density and stresses of two micellar species (a long species ‘A’ which breaks to form two worms of a shorter species ‘B’ which can then reform). The resulting system of coupled nonlinear partial differential equations includes conservation of mass, momentum, and the constitutive relations. Tracking of the spatio-temporal evolution of flow shows that this "simple" two-species description does exhibit many of the key features observed in the deformation-dependent nonlinear rheology of these wormlike micellar solutions. The model has been studied in detail under several flow conditions including elongational flow, pressure-driven channel flow, and in Large Amplitude Oscillatory Shear (LAOS). In those studies the flow was assumed to be inertialess, so that boundary information travels throughout the sample at infinite speed. In this talk, the predictions of the VCM model incorporating the effect of fluid inertia are presented as the flow evolves to steady state in a Couette cell following a controlled ramp in the shear rate. The presence of fluid inertia results in short time transient propagation of elastic shear waves (which damp and diffuse over longer time scales) between the boundaries and, as a result of the interaction of these shear waves with the spatio-temporal development of the shear-bands, the model predicts multiple shear banded states in steady shearing deformation over a wide range of parameter space. The dependence of the region of multiple banding on model parameters (elasticity, diffusivity, shear rate) and on initial conditions (ramp speed) is analyzed.
Both three-band and four-band solutions are observed. Dec 09: No meeting today Dec 16: No meeting today
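The fragility of multiple roots described in the Nov 11 abstract is easy to demonstrate without any special software: evaluating the expanded form of (x - 1)^20 in double-precision arithmetic near x = 1 returns essentially pure roundoff noise, because the true value (10^-60 here) is far below the rounding errors incurred in the coefficients' partial sums. A minimal standard-library sketch, illustrative only and not the structured methods of the talk:

```python
from fractions import Fraction
from math import comb

n = 20
# Binomial expansion of (x - 1)^20: coefficients 1, -20, 190, ...
coeffs = [comb(n, j) * (-1) ** j for j in range(n + 1)]

def horner(cs, x):
    """Evaluate the polynomial with coefficients cs at x by Horner's rule."""
    acc = cs[0]
    for c in cs[1:]:
        acc = acc * x + c
    return acc

exact = horner(coeffs, Fraction(1001, 1000))   # exact rational arithmetic
computed = horner(coeffs, 1.001)               # double precision

print(exact == Fraction(1, 1000 ** 20))        # True: the true value is 10^-60
print(computed)                                # roundoff noise, wildly wrong
```

The relative error of the floating-point value is astronomical even though every coefficient is exact, which is the conditioning phenomenon the talk attributes to multiple roots.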
{"url":"http://websupport1.citytech.cuny.edu/faculty/hyuce/Yuce/Seminar.html","timestamp":"2014-04-18T14:18:39Z","content_type":null,"content_length":"56193","record_id":"<urn:uuid:4a45a346-c1d8-47b1-8d0c-134b69efdacb>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Topological space
Topological spaces are structures which allow one to formalize concepts such as convergence, connectedness and continuity. They appear in all branches of modern mathematics and can be seen as a central unifying notion. The branch of mathematics which studies topological spaces in their own right is called topology. Formally, a topological space is a set X together with a set T of subsets of X (i.e., T is a subset of the power set of X) satisfying: 1. The union of any collection of sets in T is also in T. 2. The intersection of any pair of sets in T is also in T. 3. X and the empty set are in T. The set T is also called a topology on X. The sets in T are referred to as open sets, and their complements in X are called closed sets. Roughly speaking, open sets are thought of as neighborhoods of points; two points are "close together" if there are many open sets that contain both of them. A function between topological spaces is said to be continuous if the inverse image of every open set is open. This is an attempt to capture the intuition that points which are "close together" get mapped to points which are "close together". Examples of topological spaces include: • Any set with the discrete topology (i.e., every set is open, which has the effect that no two points are "close" to each other). • Any set with the trivial topology (i.e., only the empty set and the whole space are open, which has the effect of "lumping all points together"). • Any infinite set with the cofinite topology (i.e., the open sets are the empty set and the sets whose complement is finite). This is the smallest T1 topology on the set. • A subset of a topological space. The open sets are the intersections of the open sets of the larger space with the subset. This is also called a subspace. • Products of topological spaces. For finite products, the open sets are the sets that are unions of products of open sets. • Quotient spaces.
If f: X → Y is a function and X is a topological space, then Y gets a topology where a set is open if and only if its inverse image is open. A common example comes from an equivalence relation defined on the topological space X: the map f is then the natural projection on the set of equivalence classes. • The Vietoris topology on the set of all non-empty subsets of a topological space X is generated by the following basis: for every n-tuple U1, ..., Un of open sets in X we construct a basis set consisting of all subsets of the union of the Ui which have non-empty intersection with each Ui. Topological spaces can be broadly classified according to their degree of connectedness, their size, their degree of compactness and the degree of separation of their points. A great many terms are used in topology to achieve these distinctions. These terms and definitions are collected together in the Topology Glossary. There are many other equivalent ways to define a topological space. Instead of defining open sets, it is possible to define first the closed sets, with the properties that the intersection of arbitrarily many closed sets is closed, the union of a finite number of closed sets is closed, and X and the empty set are closed. Open sets are then defined as the complements of closed sets. Another method is to define the topology by means of the closure operator. The closure operator is a function from the power set of X to itself which satisfies the following axioms (called the Kuratowski closure axioms): the closure operator is idempotent, every set is a subset of its closure, the closure of the empty set is empty, and the closure of the union of two sets is the union of their closures. Closed sets are then the fixed points of this operator. A topology on X is also completely determined if for every net in X the set of its limits is specified.
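The open-set axioms above can be checked mechanically for a finite example; in the finite case it suffices to test the unions of all subfamilies and all pairwise intersections. A small sketch (the example topology on {1, 2, 3} is illustrative):

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})
T = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

def is_topology(X, T):
    """Check the three open-set axioms for a finite candidate topology T on X."""
    if frozenset() not in T or X not in T:
        return False                                   # axiom 3
    subsets = list(T)
    # axiom 1: union of every subfamily (T is finite, so we can check them all;
    # the empty subfamily gives the empty union, already required by axiom 3)
    for r in range(len(subsets) + 1):
        for fam in combinations(subsets, r):
            if frozenset(chain.from_iterable(fam)) not in T:
                return False
    # axiom 2: pairwise intersections
    return all(a & b in T for a in T for b in T)

print(is_topology(X, T))                               # True
# {1} and {2} open but their union {1, 2} missing: not a topology.
print(is_topology(X, {frozenset(), frozenset({1}), frozenset({2}), X}))  # False
```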
Metric spaces were defined and investigated by Fréchet in 1906, Hausdorff spaces by Felix Hausdorff in 1914 and the current concept of topological space was described by Kuratowski in 1922. It is almost universally true that all "large" algebraic objects carry a natural topology which is compatible with the algebraic operations. In order to study these objects, one typically has to take the topology into account. This leads to concepts such as topological groups, topological vector spaces and topological rings. All Wikipedia text is available under the terms of the GNU Free Documentation License
{"url":"http://encyclopedia.kids.net.au/page/to/Topological_space","timestamp":"2014-04-17T21:42:08Z","content_type":null,"content_length":"25423","record_id":"<urn:uuid:c74604de-c1a7-4500-9065-107dc0697414>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Easy Calculus
There are two main branches of Calculus: Differential Calculus and Integral Calculus. To a logically minded person, Calculus is not hard to understand. Beginners study easy Calculus. Initial calculus entails learning analytic Geometry (which involves analyzing graphs). Any person who has learned fundamental Algebra should be familiar with the concept of x and y coordinates. The horizontal axis or the Abscissa is the 'X' axis and the perpendicular axis to it is the 'Y' axis or the Ordinate. The main aim of Differential Calculus is in finding out the slopes of various equations. In simple terms, slope is the Tangent of the angle which a line makes with the x-axis. One can determine it in terms of coordinates, which is an application of easy calculus. If the x and y coordinates of two points of a line are given as (k1, l1) and (k2, l2) in that order, then its Slope is
Slope, s = (l2 - l1) / (k2 - k1) = Δl / Δk,
Here, 'Δl' is the difference in y-coordinates and 'Δk' is the difference in corresponding x-coordinates. Differentiation is defined as the method of computing the Slope. The outcome of these computations is termed the derivative. The branch of mathematics that incorporates this theory is known as differential calculus. We generally symbolize the derivative by dy/dx or f'(x). Integral calculus is a subdivision of calculus that is related to calculating areas under curves. Integration is defined as the method of calculating area. The resulting formula used to deduce the area is called the integral. The subdivision of mathematics that deals with these concepts is called integral calculus. It can be noted that Integration is the reverse of differentiation. The symbol of integration is '∫'.
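The slope formula and the two operations described above translate directly into a few lines of code; the step size h and the number of subintervals n below are illustrative choices:

```python
def slope(p1, p2):
    """Slope of the line through (k1, l1) and (k2, l2)."""
    (k1, l1), (k2, l2) = p1, p2
    return (l2 - l1) / (k2 - k1)

def derivative(f, x, h=1e-6):
    """Approximate f'(x) by a central difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=1000):
    """Approximate the area under f on [a, b] by a midpoint Riemann sum."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

print(slope((0, 0), (2, 6)))            # 3.0
print(derivative(lambda x: x * x, 3))   # close to 6, the slope of x^2 at x = 3
print(integral(lambda x: x * x, 0, 1))  # close to 1/3, the area under x^2 on [0, 1]
```

The last two results illustrate that integration reverses differentiation in the sense stated above: the derivative of x^3/3 is x^2, and the area under x^2 on [0, 1] is 1/3.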
{"url":"http://math.tutorcircle.com/calculus/easy-calculus.html","timestamp":"2014-04-16T16:03:31Z","content_type":null,"content_length":"19983","record_id":"<urn:uuid:f6469b71-15cb-4402-92fb-13cd3ed6bf08>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
January 13th 2009, 10:40 AM #1
I require some help? Calculate the area of a regular fifteen-sided figure inscribed in a circle of radius 10cm??? thanks in advance
January 13th 2009, 10:47 AM #2
A regular 15-gon consists of 15 isosceles triangles which can be split into 2 right triangles each. The central angle of one triangle is $\alpha = \dfrac{360^\circ}{15}=24^\circ$ The area is calculated by: $a=15\cdot \underbrace{r\cdot \sin\left(\frac{24^\circ}2\right)}_{half\ base} \cdot \underbrace{r \cdot \cos \left(\frac{24^\circ}2\right)}_{height\ of\ triangle} = 15 \cdot r^2 \cdot \sin(12^\circ) \cdot \cos(12^\circ)$ Plug in r = 10 and calculate the value.
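Plugging r = 10 into the formula from the reply gives the numerical answer; a quick check:

```python
import math

# Area of a regular 15-gon inscribed in a circle of radius r = 10 cm,
# using the formula from the reply: A = 15 * r^2 * sin(12 deg) * cos(12 deg).
r = 10.0
area = 15 * r ** 2 * math.sin(math.radians(12)) * math.cos(math.radians(12))
# By the double-angle identity this equals (15/2) * r^2 * sin(24 deg).
print(round(area, 2))  # 305.05 square cm
```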
{"url":"http://mathhelpforum.com/trigonometry/68027-area.html","timestamp":"2014-04-18T05:10:33Z","content_type":null,"content_length":"34165","record_id":"<urn:uuid:ac654f30-0bdc-422c-bc81-428371accd3d>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Dupont, CO Geometry Tutor Find a Dupont, CO Geometry Tutor ...I especially love working with students who have some fear of the subject or who have previously had an uncomfortable experience with it.I have taught Algebra 1 for many years to middle and high school students. We have worked on applications and how this relates to things in real life. I realize that Algebra is a big step up from the math many have worked at previously with great 7 Subjects: including geometry, GRE, algebra 2, algebra 1 ...I appreciate diverse ways of thinking between individuals, and no two lessons are ever alike. When we sit down for tutoring, I hope that you are able to share what you have learned recently and what you might be confused about. Sometimes, nothing makes sense though, and I can help you work thro... 8 Subjects: including geometry, accounting, algebra 2, economics ...A thorough understanding of chemistry is critical for other subjects such as biology, and I love helping students see the important applications of chemistry. I have tutored chemistry and biology for almost 10 years. During this time, I have worked with a variety of students of different skill levels and backgrounds. 7 Subjects: including geometry, chemistry, biology, algebra 1 ...I am a licensed Electrocardiogram Technician since 2007, and I can type 102 words per minute.As a high school student, I was part of a program that helped the children of broken families. We meet up with our children every Wednesday and either helped them with their homework, or if they finished... 31 Subjects: including geometry, reading, English, biology ...I also embrace technology to make learning more effective and efficient. Send me a note and we'll get started!I use the flipped model here, where I will assign a short youtube video on the topic we are working on. I expect the student to watch and try a few problems. 
41 Subjects: including geometry, reading, Spanish, English
{"url":"http://www.purplemath.com/Dupont_CO_Geometry_tutors.php","timestamp":"2014-04-18T22:03:38Z","content_type":null,"content_length":"24056","record_id":"<urn:uuid:2ef3a8c4-7af9-4265-ae6b-cc1fd5cd5d43>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Weak and strong beta normalisations in typed lambda-calculi - The Journal of Symbolic Logic, 1996
Cited by 7 (1 self)
We first present a new proof for the standardization theorem, a fundamental theorem in λ-calculus. Since our proof is largely built upon structural induction on lambda terms, we can extract some bounds for the number of β-reduction steps in the standard β-reduction sequences obtained from transforming any given β-reduction sequences. This result sharpens the standardization theorem and establishes a link between lazy and eager evaluation orders in the context of computational complexity. As an application, we establish a superexponential bound for the number of β-reduction steps in β-reduction sequences from any given simply typed λ-terms. 1 Introduction The standardization theorem of Curry and Feys [CF58] is a very useful result, stating that if u reduces to v for λ-terms u and v, then there is a standard β-reduction from u to v. Using this theorem, we can readily prove the normalization theorem, i.e., a λ-term has a normal form if and only if the leftmost β-reduction sequence f...
- INFORM. AND COMPUT., 2007
Cited by 3 (2 self)
We present a simple term calculus with an explicit control of erasure and duplication of substitutions, enjoying a sound and complete correspondence with the intuitionistic fragment of Linear Logic’s proof-nets. We show the operational behaviour of the calculus and some of its fundamental properties such as confluence, preservation of strong normalisation, strong normalisation of simply-typed terms, step by step simulation of β-reduction and full composition.
Cited by 2 (1 self) Add to MetaCart This paper is concerned with strong normalisation of cut-elimination for a standard intuitionistic sequent calculus. The cut-elimination procedure is based on a rewrite system for proof-terms with cut-permutation rules allowing the simulation of β-reduction. Strong normalisation of the typed terms is inferred from that of the simply-typed λ-calculus, using the notions of safe and minimal reductions as well as a simulation in Nederpelt-Klop’s λI-calculus. It is also shown that the type-free terms enjoy the preservation of strong normalisation (PSN) property with respect to β-reduction in an isomorphic image of the type-free λ-calculus.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3211868","timestamp":"2014-04-24T21:07:18Z","content_type":null,"content_length":"20590","record_id":"<urn:uuid:5d323650-3d82-4ef9-b0b7-553617f95da7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Aut(G) = $C_3$, G = ?
Is there a group G such that Aut(G) = $C_3$? What if we replace 3 with a prime number p?
Dear Greg, your question is a common homework problem in a 1st course on group theory, and not appropriate for MO. Ryan Budney Nov 5 '10 at 16:04
I agree with Ryan and voted to close. Most probably it is a homework question. Anybody who knows the definition of Aut and Inn should be able to answer it. Mark Sapir Nov 5 '10 at 16:08
I personally felt this was closed too abruptly. Seeing that the OP possesses a PhD in mathematics, it is probably not a homework question. I would have voted to re-open at meta, except for some reason I'm unable to sign in. As far as I can tell, while the problem is not hard, it's not a complete triviality either. I have in the meantime mailed the OP my own solution. Todd Trimble♦ Nov 6 '10 at 4:26
I've just cast the last reopen vote. I would urge the questioner to add some motivation: what led you to ask this question, why are you interested in replacing 3 by an arbitrary prime? Is that extra just a casual add-on or is it something you're really interested in in your research? Andrew Stacey Nov 6 '10 at 12:40
The question was more curiosity since I've had the chance to play with gap recently. It just occurred to me that although many of the small groups also appear as automorphism groups, C3 never seemed to. As others have clearly indicated, the idea that p be prime is not necessary, only that p be odd. Although my dissertation was in Lie algebras I've since retrained and switched to modeling biological systems, something a number of people here do. Having not had the chance to teach abstract algebra even at the undergraduate level, I've not seen group theory in 15 years.
Greg Gibson Nov 7 '10 at 19:14
There is no group $G$ (finite or infinite) for which $Aut(G) \cong C_p$ (the cyclic group of order $p$), if $p > 1$ is an odd number. Suppose otherwise. The inner automorphism group $Inn(G)$ is a subgroup, also cyclic, and a well-known exercise in group theory is that if $Inn(G) \cong G/Z(G)$ is cyclic, then $G$ is abelian. An abelian group $G$ has an involution given by inversion. Unless inversion is trivial, we get an element of order 2 in $Aut(G) = C_p$, contradiction. If inversion is trivial, then the abelian group $G$ becomes a vector space over $\mathbb{F}_2$. In that case it is easy to prove that either $Aut(G)$ is trivial or has an element of order 2; either way we get a contradiction.
Edit: After listening to some comments about this at meta, I amended my answer so that it gives less away or leaves a bit more to the imagination, or so I hope.
I can't see what's wrong with reading other people's proofs. I do that all the time. Franz Lemmermeyer Nov 6 '10 at 15:57
This proof isn't that elementary, because the axiom of choice is invoked! Is it consistent with ZF that there exists a vector space over F_2 admitting no nontrivial involutions? Worse yet, is it consistent with ZF that there exists a vector space V over F_2 with Aut(V) = C_3?? Jared Weinstein Nov 6 '10 at 18:47
Perhaps the MO rule should be, undergraduate homework no, graduate homework yes? Outside our own specialties, we are all at the level of graduate students. Gerry Myerson Nov 6 '10 at 21:50
@Gerry: that's pretty close to how I feel. There are some problems in Hartshorne that I never figured out. @Mark: sigh. Unless there's another way, one has to hit upon the idea of using inversion, there is the reformulation in terms of vector spaces, there is AC. Perhaps it all seems very trivial to you. It may not be trivial for everyone.
Todd Trimble♦ Nov 6 '10 at 22:12
I agree with Gerry Myerson that outside of our own specialties, our knowledge level is the same as that of graduate students: slightly familiar with the terminology, but unsure of its applications. I actually have considered this to be the raison d'etre for Math-Overflow $-$ what may appear to be a tall hard-to-climb-over step or fence from the point of view of one mathematical or scientific specialty may be similar or equivalent in another specialty to a very easy problem with a well-known canonical solution. Asking here allows us to let other experts help us when we're stalled. :) sleepless in beantown Nov 7 '10 at 5:45
The original question has already been answered, but Jared Weinstein asked in a comment about what happens if we don't assume the axiom of choice. I've convinced myself that it's consistent with ZF to have a vector space over $\mathbb{F}_2$ with automorphism group $C_3$. In case any set theorists (other than me) are looking at this question, here's the model I have in mind. (It's a permutation model, using atoms, but the Jech-Sochor theorem suffices to convert it into a ZF-model.) Start with the full universe $V$ built from a countable set $A$ of atoms (and satisfying AC). In $V$, give $A$ the structure of an $\mathbb{F}_4$-vector space, obviously of dimension $\aleph_0$. (The relevance of the 4-element field $\mathbb{F}_4$ is that the two elements that are not in the 2-element subfield are cube roots of 1, so multiplication by either of them gives an automorphism of order 3.) Let $G$ be the group of automorphisms of this vector space, and let $M$ be the Fraenkel-Mostowski-Specker permutation submodel of $V$ determined by the group $G$ with finite subsets of $A$ as supports. In $M$, $A$ is an $\mathbb{F}_4$-vector-space. Multiplication by the elements of $\mathbb{F}_4\setminus\mathbb{F}_2$ gives a $C_3$-action on the underlying abelian group.
Fairly easy calculations (admittedly not yet written down) convince me that this abelian group has no automorphisms in $M$ beyond this copy of $C_3$.
Thanks! This is a good example where I see an advantage to the strategy you suggest (a permutation model, and then the Jech-Sochor embedding), rather than directly using a symmetric model or something like that. Andres Caicedo Nov 8 '10 at 0:49
I'll second those thanks. So the trivial homework problem can't be settled within ZF, and involves something more than just playing with definitions of Aut and Inn. Learn something new every day. Todd Trimble♦ Nov 8 '10 at 14:13
@Todd: Indeed, there's something more involved, something that depends on the axiom of choice, but in this case the relevant "something" is only that vector spaces have bases, and that permutations of a basis induce automorphisms of the vector space. Andreas Blass Nov 8 '10 at 16:30
@Andreas: I know that; indeed I wrote that in my answer before I revised it. I'm really referring back to one of Mark's comments which I take issue with. Todd Trimble♦ Nov 8 '10 at 17:49
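A concrete check of the cyclic case in the argument above: Aut(C_n) is isomorphic to (Z/nZ)^*, whose order is Euler's φ(n), and φ(n) is even for every n ≥ 3, so in particular no cyclic group has automorphism group C_p for odd p > 1. A quick sketch:

```python
from math import gcd

def phi(n):
    """Euler's totient: the order of Aut(C_n), isomorphic to (Z/nZ)*."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(phi(1), phi(2))                               # 1 1 (the trivial cases)
print(all(phi(n) % 2 == 0 for n in range(3, 500)))  # True: always even from n = 3 on
```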
{"url":"http://mathoverflow.net/questions/44961/autg-c-3-g/45068","timestamp":"2014-04-20T08:23:48Z","content_type":null,"content_length":"74612","record_id":"<urn:uuid:f0d150a5-a544-4925-ab44-74ae42d3bfe5>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Newbie Question, Probability
Sven Schreiber svetosch at gmx.net
Thu Dec 21 09:10:17 CST 2006

A. M. Archibald schrieb:
> On 20/12/06, Alan G Isaac <aisaac at american.edu> wrote:
>> This is my "most missed" functionality in NumPy.
>> (For now I feel I cannot ask students to install SciPy.)
>> Although it is a slippery slope, and I definitely do not
>> want NumPy to slide down it, I would certainly not complain
>> if this basic functionality were moved to NumPy...
> If numpy were to satisfy everyone who says, "I like numpy, but I wish
> it included [their favourite feature from scipy] because I don't want
> to install scipy", numpy would grow to include everything in scipy.

Well, my package manager just reported something like 800K for numpy and 20M for scipy, so I think we're not quite at the point of numpy taking over everything yet (if those numbers are actually meaningful; probably I'm missing something?).

I would also welcome it if some functionality could be moved to numpy, provided the size requirements are reasonably small. Currently I try to avoid depending on the scipy package to make my programs more portable, and I'm mostly successful, but not always. The p-value stuff in numpy would be helpful here, as Alan already said. Now I don't know if that stuff passes the size criterion; some expert would know that. But if it does, it would be nice if you could consider moving it over eventually.

Of course you need to strike a balance, and the optimum is debatable. But again, if scipy is really more than 20 times the size of numpy, and some frequently used things are not in numpy, is there really an urgent need to freeze numpy's set of functionality?

just a user's thought,

More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-December/025144.html","timestamp":"2014-04-16T13:43:45Z","content_type":null,"content_length":"4434","record_id":"<urn:uuid:6e1029b2-d9f0-4ce7-9015-1081ec27ddee>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
2-D Correlation

Compute 2-D cross-correlation of two input matrices

The 2-D Correlation block computes the two-dimensional cross-correlation of two input matrices. Assume that matrix A has dimensions (Ma, Na) and matrix B has dimensions (Mb, Nb). When the block calculates the full output size, the equation for the two-dimensional discrete cross-correlation is

    C(i,j) = sum_{m=0}^{Ma-1} sum_{n=0}^{Na-1} A(m,n) * conj(B(m+i, n+j))

where -(Ma-1) <= i <= Mb-1 and -(Na-1) <= j <= Nb-1.

┃ Port   │ Input/Output                                                │ Supported Data Types               │ Complex Values Supported ┃
┃ I1     │ Vector or matrix of intensity values                        │ ● Double-precision floating point  │ Yes ┃
┃        │                                                             │ ● Single-precision floating point  │     ┃
┃        │                                                             │ ● Fixed point                      │     ┃
┃        │                                                             │ ● 8-, 16-, 32-bit signed integer   │     ┃
┃        │                                                             │ ● 8-, 16-, 32-bit unsigned integer │     ┃
┃ I2     │ Scalar, vector, or matrix of intensity values, or a scalar, │ Same as I1 port                    │ Yes ┃
┃        │ vector, or matrix that represents one plane of the RGB      │                                    │     ┃
┃        │ video stream                                                │                                    │     ┃
┃ Output │ Cross-correlation of the input matrices                     │ Same as I1 port                    │ Yes ┃

If the data type of the input is floating point, the output of the block is the same data type.

The dimensions of the output are dictated by the Output size parameter and the sizes of the inputs at ports I1 and I2. For example, assume that the input at port I1 has dimensions (Ma, Na) and the input at port I2 has dimensions (Mb, Nb). If, for the Output size parameter, you choose Full, the output is the full two-dimensional cross-correlation with dimensions (Ma+Mb-1, Na+Nb-1). If you choose Same as input port I1, the output is the central part of the cross-correlation with the same dimensions as the input at port I1. If you choose Valid, the output is only those parts of the cross-correlation that are computed without the zero-padded edges of any input. This output has dimensions (Ma-Mb+1, Na-Nb+1). However, if all(size(I1)<size(I2)), the block errors out.

If you select the Normalized output check box, the block's output is divided by sqrt(sum(dot(I1p,I1p))*sum(dot(I2,I2))), where I1p is the portion of the I1 matrix that aligns with the I2 matrix. See Example 2 for more information.

Fixed-Point Data Types

The following diagram shows the data types used in the 2-D Correlation block for fixed-point signals. You can set the product output, accumulator, and output data types in the block mask as discussed in Dialog Box. The output of the multiplier is in the product output data type if at least one of the inputs to the multiplier is real. If both of the inputs to the multiplier are complex, the result of the multiplication is in the accumulator data type. For details on the complex multiplication performed, refer to Multiplication Data Types.

Example 1

Suppose I1, the first input matrix, has dimensions (4,3), and I2, the second input matrix, has dimensions (2,2).

If, for the Output size parameter, you choose Full, the block uses the following equations to determine the number of rows and columns of the output matrix:

    rows = Ma + Mb - 1 = 4 + 2 - 1 = 5
    columns = Na + Nb - 1 = 3 + 2 - 1 = 4

so the resulting full matrix is 5-by-4.

If, for the Output size parameter, you choose Same as input port I1, the output is the central part of the full matrix with the same dimensions as the input at port I1, (4,3). However, since a 4-by-3 matrix cannot be extracted from the exact center of the 5-by-4 full matrix, the block leaves more rows and columns on the top and left side of the matrix and outputs the corresponding 4-by-3 portion.

If, for the Output size parameter, you choose Valid, the block uses the following equations to determine the number of rows and columns of the output matrix:

    rows = Ma - Mb + 1 = 4 - 2 + 1 = 3
    columns = Na - Nb + 1 = 3 - 2 + 1 = 2

In this case, it is always possible to extract the exact center of the full matrix, so the block outputs its central 3-by-2 portion.

Example 2

In cross-correlation, the value of an output element is computed as a weighted sum of neighboring elements. For example, suppose the first input matrix represents an image and is defined as

    I1 = [17 24  1  8 15
          23  5  7 14 16
           4  6 13 20 22
          10 12 19 21  3
          11 18 25  2  9]

The second input matrix also represents an image and is defined as

    I2 = [8 1 6
          3 5 7
          4 9 2]

The following figure shows how to compute the (2,4) output element (zero-based indexing): the portion I1p = [1 8 15; 7 14 16; 13 20 22] of I1 aligns with I2, and the weighted sum of the element-wise products gives the (2,4) output element of the cross-correlation, 585.

Computing the (2,4) Output of Cross-Correlation

The normalized cross-correlation of the (2,4) output element is 585/sqrt(sum(dot(I1p,I1p))*sum(dot(I2,I2))) = 0.8070, where I1p = [1 8 15; 7 14 16; 13 20 22].

Dialog Box

The Main pane of the 2-D Correlation dialog box appears as shown in the following figure.

Output size: This parameter controls the size of the output scalar, vector, or matrix produced as a result of the cross-correlation between the two inputs. If you choose Full, the output has dimensions (Ma+Mb-1, Na+Nb-1). If you choose Same as input port I1, the output has the same dimensions as the input at port I1. If you choose Valid, the output has dimensions (Ma-Mb+1, Na-Nb+1).

Normalized output: If you select this check box, the block's output is normalized.

The Data Types pane of the 2-D Correlation dialog box appears as shown in the following figure.

Rounding mode: Select the rounding mode for fixed-point operations.

Overflow mode: Select the overflow mode for fixed-point operations.

Product output: Specify the product output data type. See Fixed-Point Data Types and Multiplication Data Types for illustrations depicting the use of the product output data type in this block:

● When you select Same as first input, these characteristics match those of the first input to the block.
● When you select Binary point scaling, you can enter the word length and the fraction length of the product output, in bits.
● When you select Slope and bias scaling, you can enter the word length, in bits, and the slope of the product output. The bias of all signals in the Computer Vision System Toolbox™ software is 0.

The Product Output inherits its sign according to the inputs. If either or both of the inputs I1 and I2 are signed, the Product Output will be signed. Otherwise, the Product Output is unsigned. The table below shows all cases.

┃ Sign of Input I1 │ Sign of Input I2 │ Sign of Product Output ┃
┃ unsigned         │ unsigned         │ unsigned ┃
┃ unsigned         │ signed           │ signed ┃
┃ signed           │ unsigned         │ signed ┃
┃ signed           │ signed           │ signed ┃

Accumulator: Use this parameter to specify how to designate the accumulator word and fraction lengths. Refer to Fixed-Point Data Types and Multiplication Data Types for illustrations depicting the use of the accumulator data type in this block. The accumulator data type is only used when both inputs to the multiplier are complex:

● When you select Same as product output, these characteristics match those of the product output.
● When you select Same as first input, these characteristics match those of the first input to the block.
● When you select Binary point scaling, you can enter the word length and the fraction length of the accumulator, in bits.
● When you select Slope and bias scaling, you can enter the word length, in bits, and the slope of the accumulator. The bias of all signals in the Computer Vision System Toolbox software is 0.

Output: Choose how to specify the word length and fraction length of the output of the block:

● When you select Same as first input, these characteristics match those of the first input to the block.
● When you select Binary point scaling, you can enter the word length and the fraction length of the output, in bits.
● When you select Slope and bias scaling, you can enter the word length, in bits, and the slope of the output. The bias of all signals in the Computer Vision System Toolbox software is 0.

Lock data type settings against changes by the fixed-point tools: Select this parameter to prevent the fixed-point tools from overriding the data types you specify on the block mask. For more information, see fxptdlg, a reference page on the Fixed-Point Tool in the Simulink® documentation.

See Also

┃ 2-D Autocorrelation    │ Computer Vision System Toolbox ┃
┃ 2-D Histogram          │ Computer Vision System Toolbox ┃
┃ 2-D Mean               │ Computer Vision System Toolbox ┃
┃ 2-D Median             │ Computer Vision System Toolbox ┃
┃ 2-D Standard Deviation │ Computer Vision System Toolbox ┃
┃ 2-D Variance           │ Computer Vision System Toolbox ┃
┃ 2-D Maximum            │ Computer Vision System Toolbox ┃
┃ 2-D Minimum            │ Computer Vision System Toolbox ┃
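The Example 2 arithmetic can be reproduced outside Simulink. The sketch below (plain NumPy, not the block itself) recomputes the (2,4) output element and its normalized value from the two example images; the matrices are the ones given in Example 2.

```python
import numpy as np

# Example 2 matrices (I1 is MATLAB's magic(5), I2 is magic(3)).
I1 = np.array([[17, 24,  1,  8, 15],
               [23,  5,  7, 14, 16],
               [ 4,  6, 13, 20, 22],
               [10, 12, 19, 21,  3],
               [11, 18, 25,  2,  9]])
I2 = np.array([[8, 1, 6],
               [3, 5, 7],
               [4, 9, 2]])

# Portion of I1 that aligns with I2 for the (2,4) output element,
# as stated in the documentation: I1p = [1 8 15; 7 14 16; 13 20 22].
I1p = I1[0:3, 2:5]

# The output element is a weighted sum of neighboring elements.
c = int(np.sum(I1p * I2))                                    # 585

# Normalized cross-correlation of the same element.
c_norm = c / np.sqrt(np.sum(I1p * I1p) * np.sum(I2 * I2))    # ~0.8070
```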
{"url":"http://www.mathworks.com/help/vision/ref/2dcorrelation.html?nocookie=true","timestamp":"2014-04-25T02:08:13Z","content_type":null,"content_length":"55390","record_id":"<urn:uuid:0f8ca652-0619-41eb-b9ae-2cbdf6d0d90c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] numarray.where confusion
Alok Singhal as8ca at virginia.edu
Wed May 26 10:19:09 CDT 2004

On 26/05/04: 11:24, Perry Greenfield wrote:
> (due to confusions with "a" in text I'll use x in place of "a")
> I believe the problem you are seeing (I'm not 100% certain yet)
> is that although it is possible to assign to an array-indexed
> array, doing that twice over doesn't work, since Python is, in
> effect, treating x[m1] as an expression even though it is on the
> left side. That expression results in a new array that the second
> indexing updates, but which is then thrown away since it is not
> assigned to anything else.
> Your second try creates a temporary t which is also not a view into
> a, so when you update t, a is not updated.

Thanks for this info. It makes sense now. I suspected earlier that t was not a view but a copy, but didn't realise that the same thing was happening with x[m1][m2].

> try
> x[m1[0][m2]] = array([10,20])
> instead. The intent here is to provide x with the net index array
> by indexing m1 first rather than indexing x first.
> (note the odd use of m1[0]; this is necessary since where() will
> return a tuple of index arrays (to allow use in multidimensional
> cases as indices), so the m1[0] extracts the array from the tuple;
> since m1 is a tuple, indexing it with another index array (well, a
> tuple containing an index array) doesn't work).

This works, but for the fact that in my real code I *am* dealing with multidimensional arrays. But this is a nice trick to remember. (So, the following "does not work":

x = arange(9)
m1 = where(x > 4)
m2 = where(x[m1] < 7)

On 26/05/04: 11:41, Todd Miller wrote:
> Here's how I did it (there was an easier way I overlooked):
> a = arange(10)
> m1 = where(a > 5, 1, 0).astype('Bool')
> m2 = where(a < 8, 1, 0).astype('Bool')
> a[m1 & m2] = array([10, 20])

Ah. This works! Even for multidimensional arrays.

On 26/05/04: 18:06, Francesc Alted wrote:
> Perhaps the easier way looks like this?
> >>> a = arange(10)
> >>> a[(a>5) & (a<8)] = array([10, 20])
> >>> a
> array([ 0, 1, 2, 3, 4, 5, 10, 20, 8, 9])
> Indexing is a very powerful (and fun) thing, indeed :)

I like this too. Thank you all for the help!

Alok Singhal (as8ca at virginia.edu)
Graduate Student, Dept. of Astronomy, University of Virginia
http://www.astro.virginia.edu/~as8ca/

More information about the Numpy-discussion mailing list
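The copy-versus-view distinction behind this whole thread is easy to demonstrate with present-day NumPy (a sketch, not part of the original exchange): chained fancy indexing assigns into a temporary copy, while a combined boolean mask indexes the array directly.

```python
import numpy as np

a = np.arange(10)
m1 = a > 5
m2 = a[m1] < 8

# Chained indexing: a[m1] produces a *copy*, so this assignment
# updates a temporary array that is immediately discarded.
a[m1][m2] = np.array([10, 20])
assert (a == np.arange(10)).all()        # a is unchanged

# Combining the masks indexes a directly, so the assignment sticks,
# exactly as in Francesc's example.
a[(a > 5) & (a < 8)] = np.array([10, 20])
# a is now [ 0  1  2  3  4  5 10 20  8  9]
```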
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2004-May/003024.html","timestamp":"2014-04-18T10:38:07Z","content_type":null,"content_length":"5672","record_id":"<urn:uuid:6bd501a0-46f3-4071-92f3-2af9fc842244>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
Differentiation - Application

Hello! I would be incredibly thankful if somebody could show, step by step, how they would solve this question:

A new car hire company has just opened and wants to make the maximum amount of profit. The amount of profit, P, is dependent on two factors: x, the number of tens of cars rented, and y, the distance travelled by each car. The profit is given by the formula P = xy + 40, where y = sin x. Find the smallest number of cars that the company needs to rent to give the maximum amount of profit.

This is how I've done it so far:

P = xy + 40
P = x*sin(x) + 40
dP/dx = sin(x) + x*cos(x)   (product rule)
sin(x) + x*cos(x) = 0
x = 0

My answer is obviously wrong, so I'd appreciate some help!
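Not a reply from the original thread, but the poster's derivative can be checked numerically. dP/dx = sin x + x cos x does vanish at x = 0, yet that point is a minimum of x sin x, not the maximum; the first positive stationary point solves tan x = -x. The sketch below (plain Python, radians assumed) bisects for that root.

```python
import math

def dP(x):
    # derivative of P(x) = x*sin(x) + 40, by the product rule
    return math.sin(x) + x * math.cos(x)

# dP(2.0) > 0 and dP(2.1) < 0, so a root lies between; bisect it.
lo, hi = 2.0, 2.1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dP(lo) * dP(mid) <= 0:
        hi = mid
    else:
        lo = mid

x_star = 0.5 * (lo + hi)                 # ~2.0288 (in tens of cars)
P_star = x_star * math.sin(x_star) + 40  # profit at that maximum
```

Since x counts tens of cars, a stationary point near x ≈ 2.03 sits between 20 and 21 cars; deciding which whole number of cars the question intends is left to the thread.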
{"url":"http://mathhelpforum.com/calculus/166871-differentiation-application.html","timestamp":"2014-04-18T23:53:48Z","content_type":null,"content_length":"42375","record_id":"<urn:uuid:6a35635a-96b3-472e-a85a-7228d41a4613>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction · Theory · HOWTO · Examples · Applications in Engineering

Simulated annealing is one of the many stochastic optimization methods inspired by natural phenomena - the same inspiration that lies at the origin of genetic algorithms, ant colony optimization, bee colony optimization, and many other algorithms. Simulated annealing can be seen as a stochastic version of the gradient descent optimization method that we've studied previously; but instead of taking steps along the gradient at a given point, the algorithm takes steps stochastically.

Annealing is a metallurgical process used to temper metals through a heating and cooling treatment. The weaknesses in the metal that are eliminated by annealing are the result of atomic irregularities in the crystalline structure of the metal. These irregularities are due to atoms being stuck in the wrong place of the structure. In the process of annealing, the metal is heated up and then allowed to cool down slowly. Heating up gives the atoms the energy they need to get un-stuck, and the slow cool-down period allows them to move to their correct location in the structure.

Annealing can be seen as a multiple-optima optimization problem. A weakness in the metal is due to an atom having converged on a local optimum in the metal's crystalline structure. Heating the metal gives that atom the ability to escape the local optimum, and the slow cool-down period allows it to converge on its global optimum.

Simulated Annealing

Simulated annealing is a stochastic optimization algorithm based on the observation we have made about the annealing process. Like the open deterministic optimization algorithms we have studied, it will iteratively improve a value by moving it step by step through the function space. However, in order to escape local optima, the algorithm will have a probability of taking a step in a bad direction: in other words, of taking a step that increases the value for a minimization problem or that decreases the value for a maximization problem. To simulate the annealing process, this probability will depend in part on a "temperature" parameter in the algorithm, which is initialized at a high value and decreased at each iteration. Consequently, the algorithm will initially have a high probability of moving away from a nearby (likely local) optimum. Over the iterations that probability will decrease, and the algorithm will converge on the (hopefully global) optimum it did not have the chance to escape from.

The key to a successful simulated annealing algorithm is thus properly handling the "temperature" parameter. It should be high enough to allow the algorithm to escape local optima and decrease slowly enough to allow the algorithm to explore the search interval without getting stuck. However, it should not be set too high or remain at a high value for too long, to keep the algorithm from escaping the global optimum.

As we mentioned, the probability of accepting a step away from an optimum depends in part on the current value of the "temperature" parameter. The other factor it depends on is how much worse the new value would be after this step. The probability P of accepting a step that worsens the objective value by Δf at temperature T is computed as

    P = exp(-Δf / T)

Given a scalar-valued function of a vector variable, f(x), find a global optimum of that function while avoiding local optima. We will assume x is an N-dimensional vector. We assume that the function f(x) is continuous. We will use sampling and iteration.

Initial Requirements

We have an initial temperature T and a step size h.

Iteration Process

Given the approximation x_k, take a random step of size h to obtain a candidate point. If the candidate improves on f(x_k), accept it as x_{k+1}; otherwise accept it with the probability P defined above, keeping x_k when the step is rejected. Then decrease the temperature T.

Halting Conditions

Halt once the temperature value is below a threshold.

Example 1

Consider the function shown in Figure 1.

Figure 1. The function.

We begin the search at an initial point.

Figure 2. Stochastic exploration of the function by simulated annealing, with side and top views.

Applications to Engineering

Simulated annealing is often used in engineering to optimize systems where the output performance is a complex function of multiple parameters. An example application is given by Wilson (see reference) to optimize the performance of traveling-wave tubes. These tubes are essential components of modern satellites, where they are used to amplify the satellite's radio frequency signals. As can be seen from the tube's internal setup illustrated in Figure 3, the basic principle behind this system is to make the radio frequency signals circulate through a coil while firing an electron beam through the center of the coil to transfer the electrons' kinetic energy to the electromagnetic field of the radio wave. The challenge in using this setup is that the velocity of the electrons must be synchronized with the phase velocity of the radio frequency wave to maximize the energy transfer.

By using simulated annealing optimization software, Wilson has shown that he can improve the basic system's efficiency. But moreover, by simply changing some of the search parameters, the algorithm can maximize the transfer over specific bandwidths of the radio frequency signal, or solve the constrained problem of maximizing the energy transfer while minimizing the signal distortion. To quote Wilson's own conclusions: "A primary advantage of this algorithm is that the simulated annealing allows a global optimum solution to be obtained whereas most optimization algorithms converge on a local optimum. Another major advantage is that the algorithm can be readily adapted to optimize any calculable TWT output characteristic in terms of any combination of the model's cavity, beam, and focusing parameters."

Figure 3. A traveling-wave tube.
Fantasy Baseball May 11th 2012, 11:00 PM #1 May 2012 United States I thought this was a probability question since it would most likely be solved using a Combination or Permutation, but could not find a sub-forum for that, so here I am on this particular sub-forum. This isn't really help on homework or anything, just a question I came up with on my own and couldn't figure out the answer to. So in my fantasy baseball league, there are 12 coaches. Each coach drafts 23 players (9 pitchers, 5 Outfielders, 1 1B, 1 2B, 1 SS, 1 3B, 1 middle infielder (SS or 2B), 1 corner infielder (3B or 1B), 2 catchers, and a utility hitter (any position). There are 30 teams in the MLB to choose players from, and each team has 25 players to choose from, making a total of 750 players to choose from (I don't want to include minor league players in this problem). Each team has a different number of positional (some have more pitchers than hitters, or more outfielders than infielders etc) and some players are eligible for more than one position (if they played more than 20 games in that position the previous year, then they are eligible for that position). What I want to solve for is the number of possible drafts that can happen, assuming a snake draft (coaches pick players 1-12 then 12-1 then back to 1-12 etc.). I don't even really want a solution for any given year, but more an equation that would let me solve it for any given year. Is it even possible to write one equation that would solve it for any year?
{"url":"http://mathhelpforum.com/math-topics/198698-fantasy-baseball.html","timestamp":"2014-04-18T20:46:03Z","content_type":null,"content_length":"30160","record_id":"<urn:uuid:44cb93f6-b0ac-4109-a314-5b81b4fe6bea>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: If f(t)=8sec^2t-7t^2, find the antiderivative F(t). Then, if F(0)=0, find what F(1.2) would equal • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/509da945e4b013fc35a149eb","timestamp":"2014-04-18T21:12:05Z","content_type":null,"content_length":"51534","record_id":"<urn:uuid:d64314d8-46ec-4bf8-a69d-3fb4e5516df8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
st: possible bug in ml method lf [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: possible bug in ml method lf From Greg Colman <greg_colman1@yahoo.com> To statalist@hsphsun2.harvard.edu Subject st: possible bug in ml method lf Date Tue, 15 Dec 2009 11:39:44 -0800 (PST) Dear Listers, following code creates 5 observations on x, which has a density theta*x^(theta-1). I then estimate theta using Stata's ml routine, first with method lf and then with d0. In each case theta is estimated with and without a constraint, the constraint being that theta = 3. The point of this exercise is to get Stata to calculate the gradient and the variance using formulas for the unconstrained likelihood but at the constrained value of theta, as one would do in a lagrange multiplier test. The formula for the variance of theta is theta^2/n, which for the constrained model is 1.8. This is exactly what is produced by method d0 (and d1, though that's not shown), but quite far from what is produced by method lf. Does this not imply something is wrong with method lf? 
version 11.0 clear all capture log close set more off set seed 875411 local obs 5 set obs `obs' local theta = 2 gen u = runiform() gen x = exp(ln(u)/`theta') gen lnx = ln(x) sum lnx sca meanlnx = r(mean) sca theta1 = -1/meanlnx sca vartheta1 = theta1^2/`obs' sca list theta1 vartheta1 cap prog drop mlexamp1 prog define mlexamp1 args lnf theta qui replace `lnf' = ln(`theta') + (`theta'-1)*ln($ML_y1) cap prog drop mlexamp2 prog define mlexamp2 args todo b lnf g tempvar theta mleval `theta' = `b' quietly { mlsum `lnf' = ln(`theta') + (`theta'-1)*ln($ML_y1) local null = 3 /* lf version */ ml model lf mlexamp1 (x =) ml maximize ml model lf mlexamp1 (x =) ml init `null', copy ml maximize, iter(0) mat list e(V) /* d0 version */ ml model d0 mlexamp2 (x =) ml maximize ml model d0 mlexamp2 (x =) ml init `null', copy ml maximize, iter(0) mat list e(V) * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-12/msg00572.html","timestamp":"2014-04-20T21:03:21Z","content_type":null,"content_length":"6931","record_id":"<urn:uuid:d8789bae-5241-42f9-91ec-5603b748a578>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Maimi, OK ACT Tutor Find a Maimi, OK ACT Tutor ...After a few months in a 4th grade classroom, I'd found my niche. For the past 20 years I've been teaching 4th and 5th grades and tutoring/mentoring/coaching students through middle and high school in Providence, San Francisco, Miami and in an American School in Sao Paulo, Brazil (I speak fluent ... 61 Subjects: including ACT Math, English, reading, Spanish ...I have experience in tech support, virus removal, knowledge management, networking, and computer hardware repair. I have worked with Microsoft Windows at an Administrator level for over 8 years. I understand troubleshooting, OS install and removal, drive partitioning, multiple OS boot, Windows Services, Command Prompt, and Networking. 23 Subjects: including ACT Math, reading, English, biology I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and Programming. After college I moved to Spain where I gave private test prep lessons to high school students ... 11 Subjects: including ACT Math, physics, calculus, geometry ...In the past I have tutored students ranging from elementary school to college in a variety of topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping others and always do my best to make sure the information is enjoyable and being presented effectively... 30 Subjects: including ACT Math, reading, calculus, biology I have been teaching for 15 years. I have had experience with a wide range of ages from kindergarten to sixth grade. I am certified in Elementary Education, grades K-6 (all subjects) and am currently working on certification for Middle Grades Math Grades 5-9. 12 Subjects: including ACT Math, reading, algebra 1, SAT math
{"url":"http://www.purplemath.com/Maimi_OK_ACT_tutors.php","timestamp":"2014-04-18T13:42:20Z","content_type":null,"content_length":"23682","record_id":"<urn:uuid:964029ce-0f12-4d84-a068-023cb75da63d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimizing the Efficiency of Reverse Osmosis Seawater Desalination Abstract: Thermoelectric Effects Peltier Seebeck and Thomson Full page: Thermoelectric Effects Peltier Seebeck and Thomson Optimizing the Efficiency of Reverse Osmosis Seawater Desalination Uri Lachish, guma science Abstract: A way is considered to achieve efficient reverse osmosis seawater desalination without use of energy recovery or pressure exchange devices. 1. Introduction 2. Basic scheme of desalination by reverse osmosis 3. Improving desalination: I. Modules in series 4. Improving desalination: II. Energy recovery 5. Summary of the energy balance 6. A spiral membrane module 7. Cyclic flow operation 8. Desalination energy, salinity and cycle time in cyclic flow operation 9. Comparison of continuous flow to cyclic flow 10. Difficulties with cyclic flow operation 11. Utilization of the energy accumulated within concentrated salt water 12. Summary and conclusions 1. Introduction Seawater desalination requires minimal energy consumption equal to the osmotic pressure times the volume of desalinated water [1]. The osmotic pressure is nearly proportional to the salt concentration in the water. For a seawater osmotic pressure of 27 bar the minimal energy is about 0.75 kW hour / cubic meter and it varies according to the water salinity. This minimal energy, derived by thermodynamic considerations, is general and true to all desalination technologies and not only reverse osmosis. Advanced reverse osmosis systems apply energy recovery or pressure conversion devices and report higher energy consumption of above 2 kW hour / m^3 [2, 3]. Curiously, the energy may be easily reduced and approach the theoretical minimum. Why this is not done? Producing one volume of desalinated water with nearly minimal consumption of energy requires the use of several volumes of seawater that mostly go back to the sea. These volumes are prepared prior to desalination by chemical treatment and filtering operations. 
The cost of the pre osmosis water is then higher than the cost of the energy saved in the process, so there is no advantage doing that. The ratio of the desalinated water volume to the seawater volume used to produce it is called the recovery ratio. High recovery ratio saves on the cost of seawater preparation prior to the osmosis process, and low recovery ratio saves on the energy cost of desalination. The optimal recovery ratio depends on the relative costs of these operations and may vary under different conditions. The purpose of these pages is to consider a way to achieve efficient seawater desalination by reverse osmosis in a system that does not apply energy recovery or pressure conversion devices. 2. Basic scheme of desalination by reverse osmosis Figure-1 shows the basic scheme of desalination by reverse osmosis: Figure-1: Basic scheme of desalination by reverse osmosis. High-pressure pump pumps seawater into a module separated by a semi permeable membrane into two volumes. The membrane lets water flow through it but blocks the transport of salts, so the water in the volume beyond the membrane, called permeate, is desalinated, and the salt is left behind in the volume in front of the membrane. The concentrated salt water in this volume leaves the module via a pressure control valve. The osmotic pressure P[s] is given by van't Hoff equation: P[s] = c∙R∙T (1) Where c is the ionic molar concentration, R = 0.082 (liter bar / degree mole) is the gas constant, and T is the absolute temperature in Kelvin units. T is equal to the Celsius temperature + 273.17. Thus, T = 300 K for 27^o C. Typical ionic salt concentration of seawater is: c = 1.1 mole / liter, and the corresponding osmotic pressure is: P[sea] = 1.1 x 0.082 x 300 = 27 bar. The flow rate of water through the membrane F[rate] is given by: F[rate] = K[f]∙(P[pump] - P[s]) (2) The membrane properties and its area determine the flow rate factor K[f]. 
P[pump] is the pressure generated by the pump and controlled by the pressure control valve. P[s] is the osmotic pressure of the concentrated salt water in the module. The pump pressure must be higher than the osmotic pressure in order to force seawater flow through the membrane and permeate water out of the module. The flow rate is proportional to the difference between the two pressures. When they are equal water does not flow through the membrane, and if the pump pressure is lower than the osmotic pressure, permeate water will flow back towards the concentrated salt water.

Consider an example where the water recovery ratio is 0.5. That is, for every two volumes of seawater pumped into the module, one volume comes out as permeate water and one as doubly concentrated salt water. The high-pressure pump consumes energy equal to the pump pressure times the volume of water that it pumps. Since the pump has to pump two volumes V of seawater in order to produce one volume V of permeate water, the consumed work is:

W = P∙2∙V    (3)

Since the osmotic pressure of the concentrated salt water is twice that of seawater, P[s] = 2∙P[sea], the required pump pressure is:

P = 2∙P[sea] + ΔP    (4)

ΔP is the overpressure, above the osmotic pressure, that drives water flow through the membrane. The work then becomes:

W = (4∙P[sea] + 2∙ΔP)∙V    (5)

It is, therefore, more than four times the minimal theoretical desalination energy (P[sea]∙V). In summary, the practical desalination energy is higher than the theoretical minimum for two reasons:

a. The feed volume of seawater is higher than the volume of permeate water.
b. The osmotic pressure of the concentrated salt water within the desalination module is higher than that of seawater.

3. Improving desalination: I. Modules in series

Figure-2 shows a desalination system where a number of modules are connected in series. In practical systems there are six or seven modules in series.
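Equation 5 is easy to evaluate for concrete numbers. A sketch, where the overpressure ΔP = 3 bar is an illustrative assumption of mine, not a figure from the text:

```python
# Work per unit of permeate at 50% recovery without energy recovery (equation 5):
# W = (4·P_sea + 2·dP)·V, converted to kWh/m^3 with 1 bar·m^3 = 1/36 kWh.
P_SEA = 27.0  # bar, seawater osmotic pressure
DP = 3.0      # bar, assumed overpressure (illustrative)

def work_per_m3(p_sea, dp):
    """Desalination work in kWh per m^3 of permeate (equation 5 divided by 36·V)."""
    return (4.0 * p_sea + 2.0 * dp) / 36.0

print(round(work_per_m3(P_SEA, DP), 2))  # 3.17, versus the 0.75 kWh/m^3 theoretical minimum
```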
Figure-2: Connecting membrane modules in series.

Seawater flows into a first module where about 10% of it penetrates through the membrane and becomes permeate water. The remaining, more concentrated, water flows to a second module where again part of it penetrates through the membrane and part continues to the next module. The salt concentration, and therefore also the osmotic pressure, increases at each consecutive module, while the pump pressure is nearly the same in all of them. The flow rate through the membrane is proportional to the difference between the pump pressure and the osmotic pressure (equation 2). Therefore, the pressure difference and the flow rate through the membrane are highest at the first module. They decrease at each consecutive module, and are lowest at the last module. In this system there is no need of overpressure to drive water through the membranes if a sufficient number of modules is connected in series. Most of the permeate water comes from the first modules and little water comes from the membrane of the last module, where the osmotic pressure is slightly below the pump pressure. For 50% water recovery the work of desalination thus becomes (by equation 5 with ΔP = 0):

W = 4∙P[sea]∙V    (6)

The semipermeable membrane is not perfect and about 0.5% - 1% of the salt in the water penetrates through it. Series connection of modules is advantageous since most of the water comes from modules with lower salt concentration, resulting in lower salt concentration in the permeate water.

4. Improving desalination: II. Energy recovery

The mechanical energy consumed by the high-pressure pump is transformed into heat within the desalination system. Part of the heat is generated by dissipative water flow through the membrane and part by water flow through the pressure control valve. Part of the (free) energy is accumulated within the concentrated salt water that leaves the system.
This energy is not lost and in principle can be utilized, returned to the system, and improve its efficiency. Is it worth doing? The question will be discussed in a later section. The energy loss within the pressure control valve can be avoided by application of a variety of energy recovery devices. Figure-3 presents a system where the pressurized salt water that leaves the membrane modules drives a rotary turbine [4]. The turbine drives an auxiliary high-pressure pump that supplies seawater to the membrane modules and reduces the water supply and energy consumption of the first pump.

Figure-3: Energy recovery with a turbine and an auxiliary pump.

Water is practically incompressible and therefore cannot accumulate energy. This property is the basis of a class of devices that exchange pressurized concentrated salt water within the modules with outside seawater [5 - 12]. There are many specific designs but they all operate on the same "rotating door" principle presented in figure-4.

Figure-4: Exchange of pressurized concentrated salt water with seawater.

The "rotating door" has two compartments, one filled with pressurized concentrated salt water and one filled with seawater. The "door" rotates 180 degrees and exchanges the positions of the two compartments (as seen on the left of figure-4). By that it introduces seawater to the high-pressure line of the modules and relieves pressurized concentrated salt water to the seawater line. The seawater in the right compartment now flows towards the membranes and is replaced by another dose of concentrated salt water. The concentrated salt water in the left compartment flows away and is replaced by fresh seawater. The "door" then rotates 180 degrees again. The operation involves pressurizing seawater and depressurizing concentrated salt water. Since water is incompressible, these processes do not involve consumption or waste of energy.
Many practical systems do not look like figure-4 at all, but rather apply mechanisms of moving pistons and valves to achieve this "rotating door" mode of operation [5 - 11]. One company has developed a continuously rotating high-speed rotor for this purpose [3, 12]. In the limit of a 100% efficient energy recovery device, the externally powered pump supplies a volume V of seawater equal to the volume V of the delivered permeate water. The rest of the required seawater comes from the energy recovery device. The work of desalination, P∙V, is far lower than in systems that do not apply energy recovery, because now V is the volume of permeate water and not of the seawater supply. For the case of 50% water recovery the pressure is P = P[s] = 2∙P[sea] and the desalination work is:

W = P[s]∙V = 2∙P[sea]∙V    (7)

It is half the work required by a system without energy recovery, calculated in equation 6. In practical systems an energy recovery efficiency of 70 - 80% is reported for turbine type devices, and over 90% for rotating door type devices. The work of desalination is then higher than the ideal value of equation 7. The osmotic pressure at the system exit for a water recovery ratio α (α = output volume of permeate water / input volume of seawater) is P[s] = P[sea] / (1 - α), and the corresponding work of desalination is:

W = P[sea]∙V / (1 - α)    (8)

The calculation for energy recovery efficiency below 100% is given in the appendix. Figure-5 shows the minimal desalination energy as a function of the water recovery ratio for the energy recovery efficiencies 0, 0.85, 0.9, 0.95, and 1.

Figure-5: Dependence of the minimal desalination energy on the water recovery ratio for the energy recovery efficiencies 0 (1), 0.85 (2), 0.9 (3), 0.95 (4), and 1 (5).

One energy unit in the figure is the theoretical limit P[sea]∙V, equal to 0.75 kWh per cubic meter for an osmotic pressure of 27 bar. The work of desalination decreases and approaches the theoretical limit as α is reduced.
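The curves of figure-5 follow from equation 8 combined with the appendix formula for recovery efficiency below 100%. A minimal sketch of that computation (the function name is mine):

```python
# Minimal desalination energy versus recovery ratio alpha, in units of P_sea·V
# (equation 8 for Ef = 1; the appendix extends it to lower efficiencies).
def min_energy_units(alpha, ef):
    """W_min / (P_sea·V) = [1 + (1 - Ef)·(1/alpha - 1)] / (1 - alpha)."""
    return (1.0 + (1.0 - ef) * (1.0 / alpha - 1.0)) / (1.0 - alpha)

# The five figure-5 efficiencies, evaluated at 50% recovery:
for ef in (0.0, 0.85, 0.9, 0.95, 1.0):
    print(ef, round(min_energy_units(0.5, ef), 2))  # 4.0 down to 2.0 energy units
```

Note that for any efficiency below 1 the energy diverges as α approaches 0, which is why the practical optimum sits at an intermediate recovery ratio.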
The optimal α value is determined by the energy cost compared to the cost of pre-osmosis water, as discussed in the introduction.

5. Summary of the energy balance

The work consumed by the pump is equal to P∙V, where P is the pump pressure and V is the volume of seawater that it pumps. All this work is transformed into heat. P[s] is the osmotic pressure of the concentrated salt solution within a membrane module and ΔP is the overpressure that drives water flow through the membrane. The pump pressure P is equal to their sum, P = P[s] + ΔP. When modules are connected in series P[s] and ΔP change from module to module but their sum is nearly the same. P[s] is lowest at the first module and it increases with each successive module. V[permeate] is the volume of desalinated water produced by the process and V[concentrate] is the volume of concentrated salt solution that returns to the sea. In systems that do not apply energy recovery devices the overall pumped volume is equal to the sum V = V[permeate] + V[concentrate]. In systems that apply energy recovery devices of 100% efficiency the volume pumped by the pump is equal to the volume of desalinated water, V = V[permeate]. Table 1 summarizes the energy losses, how they can be reduced, and at what price.

│ Loss │ Type │ Reduce by │ The price │
│ P[s]∙V[permeate] │ Thermodynamic transformation of mechanical energy into heat. │ Decrease the osmotic pressure P[s] by reducing the water recovery ratio. │ Higher consumption of seawater. │
│ ΔP∙V[permeate] │ Dissipative heat of water flow through the membrane. │ Lower water throughput. │ Lower utilization of the desalination plant. │
│ P∙V[concentrate] │ Dissipative heat of water flow through the pressure control valve. │ Application of energy recovery or pressure exchange devices. │ More equipment. │

W = P∙V, where W is the pump work, P is the pump pressure, and V is the volume of pumped water.
W = P∙V ∙ 100 for W in Joules (Watt seconds), P in bars, and V in liters.
W = P∙V / 36 for W in kWh, P in bars, and V in cubic meters.

For example, the energy required to pump a volume of V = 1 cubic meter of salt water with an osmotic pressure of P[s] = 27 bar through a semipermeable membrane is: W = 27 ∙ 1 / 36 = 0.75 kWh.

6. A spiral membrane module

Figure-6 shows the water flow within a spiral membrane module.

Figure-6: Spiral membrane module.

The membrane is shaped into a spirally wound flat sleeve (green) contained in a high-pressure cylinder. The sleeve is closed at the spiral sides and outer end, and the inner end is connected to a central pipe. Pressurized seawater (red arrows) flows in the direction of the cylinder axis along the external surface of the membrane sleeve. Some water penetrates through the membrane and leaves the salt behind, thus turning into permeate water within the sleeve. Permeate water (blue arrows) flows within the spiral sleeve towards the central pipe that leads it out of the module. Water that penetrates through the membrane leaves behind it locally highly concentrated salt at the external surface of the membrane. This concentrated salt immediately stops any further water flow through the membrane unless it is removed fast enough by a lateral seawater flow along the surface. The membrane sleeve is supported from its inside by a porous spacer that prevents sleeve collapse under the osmotic pressure. Another porous spacer surrounds the sleeve and stabilizes the space of seawater flow. The module manufacturer supplies the testing conditions of the membrane module. For example, the following data is given for the "FILMTEC 8-inch Seawater RO Elements" (SW30HR-380) by DOW [13]:

Module size: length 1016 mm, diameter 201 mm, diameter of central pipe 29 mm.
Operating pressure: 55.2 bar (800 psi), max 70 bar (1015 psi).
Max feed flow: 14 m^3 / hour.
Product water flow rate: 23 m^3 / day at 25 °C.
Single element recovery (permeate flow to feed flow): 0.08 (max 0.15 at lower feed flow).
Salt (NaCl) concentration: 32000 ppm (32 gram / liter).

These numbers give:

Feed water flow of 200 (max 233) liter / minute.
Permeate water flow of 16 liter / minute.
Osmotic pressure of 27 bar (390 psi), calculated by van't Hoff's formula (equation 1).
Flow rate factor (equation 2):

K[f] = F[rate] / (P - P[s]) = 16 / (55.2 - 27) = 0.57 (liter / minute) / bar    (9)

7. Cyclic flow operation

Sections 2 - 6 describe the reverse osmosis technology of seawater desalination. The rest of these pages are theoretical considerations and calculations by the author. Semipermeable membranes favor operation with continuous water flow and permanent operating pressure. Flow disturbances and unstable pressure stress the membranes and increase their wear. However, the continuous flow mode requires application of energy recovery devices for efficient operation. An operation mode of cyclic flow may achieve, in principle, energy efficiency comparable to continuous flow without any need of energy recovery devices. Therefore, this possibility should not be ignored, even at the price of modifying the semipermeable membrane or the membrane module. The system described in figure-7 includes a low-pressure circulating pump and a two-state valve.

Figure-7: Seawater desalination in a cyclic water flow.

At one state of the valve the salt-water compartment of the module is closed. The high-pressure pump pumps seawater into the membrane module and all the water penetrates through the membrane and turns into permeate water, since there is no other water exit. The low-pressure pump circulates the water in the module at a flow rate required by the module manufacturer for proper operation. Since there is no exit for the salt, it accumulates within the module and steadily increases the osmotic pressure. At a predetermined osmotic pressure the valve revolves and relieves the pressure within the module.
At this state of the valve the two pumps drive the concentrated salt water out of the module and replace it with fresh seawater. The valve then revolves again and the operation is repeated. Pressure release of concentrated salt water by valve revolution does not waste energy, similarly to the case of the "rotating door" (section 4), since water is incompressible and does not accumulate energy. However, there are other energy losses that will be considered later. In cyclic operation the high-pressure pump pumps a volume of seawater equal to the volume of delivered permeate water. In this respect it is equivalent to continuous operation with an energy recovery device, only here there is no such device. Efficient continuous operation without energy recovery is achieved with deep sea desalination by reverse osmosis [14 - 15].

8. Desalination energy, salinity and cycle time in cyclic flow operation

Since the pressure increases with the salt concentration of the salt water within the module, the work of pumping water through the membrane is:

W = ∫P∙dV = ∫(P[s] + ΔP)∙dV    (10)

where P[s] is the increasing osmotic pressure. The overpressure ΔP is determined by the flow rate of the high-pressure pump, ΔP = F[rate] / K[f]. The salt concentration c[s] within the module is given by:

c[s] = c[sea]∙(V + V[0]) / V[0]    (11)

where c[sea] is the salt concentration of seawater, V[0] is the salt-water volume within the module, and V is the delivered permeate water. Since the osmotic pressure is proportional to the salt concentration it is given by a similar equation:

P[s] = P[sea]∙(V + V[0]) / V[0]    (12)

The work of desalinating a volume V of permeate water is then (by inserting equation 12 into equation 10 and integrating):

W = P[sea]∙(0.5∙V^2 / V[0] + V) + ΔP∙V    (13)
W = (P[sea]∙(0.5∙V / V[0] + 1) + ΔP)∙V    (14)
W = (P[sea]∙(1 - α / 2) / (1 - α) + ΔP)∙V    (15)

where α = V' / (V' + V[0]) is the recovery ratio and V' is the volume of permeate water delivered in one cycle.
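Equation 15 can be sketched directly; the example values below (α = 0.5, ΔP = 14.6 bar) are illustrative choices of mine:

```python
# Desalination work per m^3 of permeate in cyclic operation (equation 15):
# W/V = P_sea·(1 - alpha/2)/(1 - alpha) + dP, converted with 1 bar·m^3 = 1/36 kWh.
P_SEA = 27.0  # bar, seawater osmotic pressure

def cyclic_energy_kwh_m3(alpha, dp_bar):
    """kWh per m^3 of permeate (equation 15 divided by 36·V)."""
    return (P_SEA * (1.0 - alpha / 2.0) / (1.0 - alpha) + dp_bar) / 36.0

print(round(cyclic_energy_kwh_m3(0.5, 14.6), 2))  # 1.53 kWh/m^3
```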
In cyclic operation there is no need to connect modules in series. This is an advantage that leads to higher permeate water throughput. The salinity, i.e., the salt concentration in permeate water for 1% salt penetration through the semipermeable membrane, is:

Salinity = 0.01∙[∫c[s]∙dV] / V    (16)

where c[s] is the salt concentration of salt water within the module and V is the volume of permeate water. By equations (10) - (12), c[s] = (c[sea] / P[sea])∙P[s], therefore:

Salinity = 0.01∙(c[sea] / P[sea])∙[∫P[s]∙dV] / V = 0.01∙c[sea]∙(1 - α / 2) / (1 - α)    (17)

The cycle time in cyclic operation depends on the seawater volume within the module. Using the module dimensions in section 6, its internal volume is estimated to be 32 liter. Assuming that half of this volume is solid material, membrane and spacers, and the rest is divided into equal volumes of salt water and permeate water, the salt-water volume will be V[0] = 8 liter. This is a coarse estimate. The permeate-water recovery ratio is α = V' / (V' + V[0]), where V' is the permeate water delivered per cycle. V' = F[rate]∙t, where F[rate] is the permeate-water flow rate and t is the cycle time. Therefore, the cycle time in seconds is:

t = 60∙(V[0] / F[rate])∙α / (1 - α)    (18)

Calculated values of the desalination energy, salinity, cycle time and water throughput are given in the next section. The cycle time may be increased by connecting an auxiliary tank in series with the salt-water side of the membrane module. It is also possible to alternately connect two tanks, so that in one tank pressurized water circulates with increasing salt content while the other tank is flushed with seawater, and vice versa. In this case the membrane module may be loaded under permanent pressure. Such a system, however, requires the operation of more valves.

9. Comparison of continuous flow to cyclic flow

The comparison is done for the testing parameter values mentioned in section 6.
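Equations 17 and 18 can be sketched the same way, using the section-6 module figures (V[0] = 8 liter, permeate flow 16 liter / minute); the function names are mine:

```python
# Permeate salinity (equation 17) and pumping time per cycle (equation 18).
C_SEA = 32000.0  # mg per liter, seawater salt content

def permeate_salinity(alpha):
    """mg/liter in the permeate for 1% salt penetration (equation 17)."""
    return 0.01 * C_SEA * (1.0 - alpha / 2.0) / (1.0 - alpha)

def cycle_time_s(v0_liter, f_rate_l_min, alpha):
    """Pumping time per cycle in seconds (equation 18)."""
    return 60.0 * (v0_liter / f_rate_l_min) * alpha / (1.0 - alpha)

print(round(permeate_salinity(0.5)))        # 480 mg/liter at 50% recovery
print(round(cycle_time_s(8.0, 16.0, 0.5)))  # 30 seconds per pumping period
```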
Higher values may be applied in practical operation, though they should not exceed the operating limits.

a. Continuous flow system equipped with 6 modules connected in series and with a 100% efficient energy recovery device. Table 2 summarizes the operation parameters.

│ Pressure (bar) │ Pump feed (liter/min) │ Recovery feed (liter/min) │ 1st module: % │ 1st module: V[1] │ α (%) │ V (liter/min) │ Energy (kWh/m^3) │ Salt (mg/L) │
│ 55.2 │ 78 │ 155 │ 7 │ 16 │ 33.5 │ 78 │ 1.53 │ 377 │
│ 55.2 │ 74 │ 136 │ 8 │ 16 │ 37 │ 74 │ 1.53 │ 385 │
│ 55.2 │ 50 │ 50 │ 16 │ 16 │ 50 │ 50 │ 1.53 │ 414 │
│ 45.4 │ 50 │ 150 │ 5.2 │ 10.5 │ 25 │ 50 │ 1.26 │ 360 │

The table is calculated by the equations:

P[s](1) = P[sea]    (19)
Permeate(i) = K[f]∙(P[pump] - P[s](i))    (20)
Supply(i) = Σ(j = 1 to i) Permeate(j)    (21)
P[s](i + 1) = P[sea]∙Feed / (Feed - Supply(i))    (22)
W = P[pump]∙V    (23)
Salinity = 0.01∙c[sea]∙(Σ Permeate(i)∙P[s](i) / P[sea]) / Supply(6)    (24)

P[s](i) is the osmotic pressure in the i'th module. Permeate(i) is the permeate water flow of the i'th module. Supply(i) is the sum of permeate water flows of the first i modules. P[sea] = 27 bar is the osmotic pressure of seawater at 300 K (27 °C). c[sea] = 32 gram / liter is the salinity of seawater. P[pump] is the pump pressure in bars, given in the table. K[f] = 0.57 (liter / minute) / bar is the flow rate factor. α is the permeate-water recovery ratio. V is the volume of delivered permeate water. V[1] is the volume of permeate water delivered by the first module. Feed, the sum of the pump and recovery feed flows in the table, is the water feed flow through a module; it is the same for all modules since they are connected in series. The work of desalination per 1 m^3 of permeate water is:

W/V = P[pump]∙100 (Joule / liter = Watt second / liter) = P[pump]∙100 / 3600 (kWh / m^3)    (25)

Salinity, the amount of salt in permeate water, is calculated for 1% salt penetration through the semipermeable membrane.
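The recursion of equations 19 - 22 is straightforward to run. A sketch (my implementation, using the section-6 values P[sea] = 27 bar and K[f] = 0.57):

```python
# Per-module permeate flows for modules connected in series (equations 19 - 22).
P_SEA = 27.0  # bar, seawater osmotic pressure
K_F = 0.57    # (liter/minute)/bar, flow rate factor per module

def series_permeate(p_pump, feed, n_modules=6):
    """Permeate flow of each module in liter/minute."""
    flows, supplied, p_s = [], 0.0, P_SEA
    for _ in range(n_modules):
        f = K_F * (p_pump - p_s)                # equation 20
        flows.append(f)
        supplied += f                           # equation 21
        p_s = P_SEA * feed / (feed - supplied)  # equation 22
    return flows

flows = series_permeate(55.2, 100.0)             # line 3 of the table above
print(round(flows[0], 1), round(sum(flows), 1))  # 16.1 and 50.1: about 50% recovery
```

The decreasing per-module flow down the chain is exactly the under-utilization discussed later in the conclusion of this section.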
I. The calculation is somewhat inaccurate since it assumes a uniform salt concentration within each module, while the concentration does change along each one.

II. The desalination energy calculated in the table assumes 100% energy recovery. In practical systems, with lower energy recovery, the desalination energy will be higher than the table values, and the difference will increase as the water recovery ratio decreases.

III. Comparison of lines 1 - 3 demonstrates the effect of increasing the water recovery ratio by reducing the overall feed rate of seawater. A higher ratio saves pre-osmosis seawater but reduces the throughput of permeate water.

IV. Comparison of lines 2 - 4 demonstrates the effect of pump pressure on the system performance. Higher pressure saves pre-osmosis seawater and increases the throughput of permeate water, but also increases the energy of desalination.

b. Cyclic flow system equipped with 6 modules connected in parallel. In a cyclic system there is no need to connect modules in series. The modules are connected in parallel and the flow in each module is 1 / 6 of the overall flow. Table 3 summarizes the operation parameters for a permeate-water supply similar to that of table 2.

│ ΔP (bar) │ P[start] (bar) │ P[end] (bar) │ Pump feed (liter/min) │ Flush feed (liter/min) │ 1 module: % │ 1 module: V[1] │ α (%) │ V (liter/min) │ Energy (kWh/m^3) │ Salt (mg/L) │ Cycle (sec) │
│ 22.8 │ 49.8 │ 63.4 │ 78 │ 155 │ 5.6 │ 12.8 │ 33.5 │ 78 │ 1.57 │ 401 │ 20.5 │
│ 21.6 │ 48.6 │ 64.5 │ 74 │ 136 │ 6.2 │ 12.3 │ 37 │ 74 │ 1.57 │ 414 │ 25.1 │
│ 14.6 │ 41.6 │ 68.6 │ 50 │ 50 │ 8.3 │ 8.3 │ 50 │ 50 │ 1.53 │ 480 │ 63.4 │
│ 14.6 │ 41.6 │ 50.6 │ 50 │ 150 │ 4.2 │ 8.3 │ 25 │ 50 │ 1.28 │ 373 │ 21.1 │
│ 28.2 │ 55.2 │ 70 │ 96 │ 173 │ 8.3 │ 16 │ 35.7 │ 96 │ 1.74 │ 409 │ 16.6 │

The table is calculated for the pumping period only. The period required to flush the concentrated salt water out of the module and replace it with fresh seawater is about 10% of the pumping period.
Therefore, the overall cycle is about 10% longer than the table values, and the flow rates per overall cycle are about 10% lower than the table values. The Feed and Water Recovery columns are identical to table 2 (except the last line), so that the two processes are compared at the same permeate-water recovery ratio and throughput. The table is calculated by the equations:

ΔP = (V / 6) / K[f]    (26)
P[start] = P[sea] + ΔP    (27)
P[end] = P[s] + ΔP = P[sea] / (1 - α) + ΔP    (28)
W / V = (P[sea]∙(1 - α / 2) / (1 - α) + ΔP) / 36    (29)

ΔP is the overpressure that drives water flow through the membrane. V is the delivered volume of permeate water. V[1] is the volume of permeate water delivered by one module. K[f] = 0.57 (liter / minute) / bar is the flow rate factor. P[start] is the pressure at the start of the pumping cycle. P[sea] = 27 bar is the osmotic pressure of seawater. P[end] is the pressure at the end of the pumping cycle. α = V' / (V' + V[0]) is the permeate-water recovery ratio. V' is the volume of permeate water delivered in one cycle. V[0] = 8 liter is the volume of salt water within a module. W / V is the desalination energy per 1 m^3 of permeate water (equation 15, section 8). The permeate water salinity is calculated by equation 17, section 8. c[sea] = 32 gram / liter is the salt concentration of seawater.

c. Conclusion

Comparison of the two tables indicates that the energy of desalination in the two processes, operated at similar permeate-water recovery ratios and throughputs, is practically the same. However, the two processes have further energy losses not considered in the tables. In the continuous flow process there is full permeate-water flow only at the first module, and the flow drops at each successive module. Therefore the capacity of permeate-water flow is not fully utilized. Compared to that, in the equivalent cyclic process the modules are connected in parallel and the permeate-water flow per module is lower than the permitted limit value.
Alternatively (line 5 of table 3), the cyclic process can operate at the highest permitted permeate-water flow and achieve higher permeate-water throughput per module, though at the cost of a higher desalination energy.

10. Difficulties with cyclic flow operation

Apart from variable pressure operation that might wear or even damage the membrane, other factors should be considered as well. Any part of the system that accumulates energy will waste it in the cyclic process. Consider a possible expansion of the high-pressure cylinder that contains the membrane unit by the pressurized water in it. If the 201 mm diameter cylinder expands by one millimeter, its inner volume will increase by ΔV = 0.4 liter. The energy accumulated in the cylinder is equal to P∙ΔV / 2 and it is lost when the pressure is relieved. Inserting P = P[sea] = 27 bar and ΔV = 0.4 ∙ 10^-3 m^3, the energy will be E = (27 / 36) ∙ 0.4 ∙ 10^-3 / 2 = 0.15 ∙ 10^-3 kWh per cycle. If a cycle delivers about 8 liters of permeate water, the energy loss will be 0.15 ∙ 10^-3 ∙ 1000 / 8 = 0.02 kWh per one m^3 of permeate water. A similar loss might come from pressure squeezing of the permeate-water spacer within the membrane sleeve, and it can be calculated in a similar way. A more rigid spacer material, and possibly mechanical pre-squeezing of the membrane unit within the cylinder, may reduce the loss. When a number of modules are connected in parallel to one pump it is important to have similar water flow in each of them, to within a tight tolerance. Otherwise, in some modules the replacement of concentrated salt water with seawater will not be complete, while in other modules there will be excessive flow and loss of seawater. The concentrated salt water within the membrane module is replaced by fresh seawater when the pressure is relieved. During this time permeate water will start to flow back through the membrane towards the salt water.
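The cylinder-expansion loss above is a one-line calculation; a sketch with the article's numbers (27 bar, 0.4 liter of extra volume, 8 liter of permeate per cycle):

```python
# Elastic-expansion energy loss per cycle, E = P·dV/2 (1 bar·m^3 = 1/36 kWh).
def expansion_loss_kwh(p_bar, dv_m3):
    """Energy stored in the expanded cylinder and lost at pressure release."""
    return (p_bar * dv_m3 / 36.0) / 2.0

loss = expansion_loss_kwh(27.0, 0.4e-3)  # kWh per cycle
per_m3 = loss * 1000.0 / 8.0             # 8 liter of permeate per cycle
print(loss, per_m3)                      # about 1.5e-4 per cycle, about 0.02 per m^3
```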
According to the module specifications, the flow rate of salt water parallel to the membrane is at least ten times higher than the flow rate of permeate water through the membrane. Therefore, the time of seawater replacement will be about ten times shorter than the time of permeate-water pumping, and the permeate water loss will be less than 10%. The back flow of permeate water is not entirely negative, since it automatically flushes the membrane during each cycle.

11. Utilization of the energy accumulated within concentrated salt water

Figure-8 presents a scheme for utilizing energy from concentrated salt water.

Figure-8: Utilizing energy from concentrated salt water.

A low-pressure pump flushes one compartment of a membrane module with seawater, while a medium-pressure pump pumps concentrated salt water via the other compartment. The pressurized water drives a turbine that supplies mechanical energy. The pressure difference that drives water through the membrane is ΔP = P[pump] + P[sea] - P[s], where P[pump] is the pump pressure, P[sea] is the osmotic pressure of seawater and P[s] is the osmotic pressure of the concentrated salt water. If the pressure difference is negative, ΔP < 0, or P[pump] < P[s] - P[sea], water will flow from the seawater side of the membrane towards the concentrated water side. The volume of water that drives the turbine is then equal to the sum of a volume V delivered by the pump and a volume V[1] that flows through the membrane. The work consumed by the pump is P[pump]∙V and the work that drives the turbine is P[pump]∙(V + V[1]). Therefore there is a net energy profit of P[pump]∙(V + V[1]) - P[pump]∙V = P[pump]∙V[1] that comes from dilution of the concentrated salt water. The size of a membrane module for utilizing concentrated salt water is similar to that of a desalination module, and, as seen in figure-8, it has four different water outlets instead of three.
Therefore, adding salt-utilizing capability to a desalination plant practically means using two different types of membrane modules and doubling their number. In addition, the energy-utilizing process consumes more seawater. Apart from requiring investment in more membrane modules, of a type that does not yet exist, the consumption of extra seawater makes energy utilization of concentrated salt water a non-beneficial process. The same amount of extra seawater may alternatively be added to a standard desalination system and save more energy through the reduction of the water recovery ratio. The use of more seawater in a desalination system reduces the osmotic pressure within it, and the reduced pressure saves energy consumption in systems equipped with an energy recovery device, as discussed in section 4. In summary of this section, there is no benefit in utilizing the (free) energy accumulated within the concentrated salt water. The same amount of seawater, required to dilute the concentrated salt, will achieve a higher energy saving when added to a standard desalination system, without the need to invest in extra equipment.

12. Summary and conclusions

A cyclically operated system that does not apply energy recovery devices is suggested for seawater desalination by reverse osmosis. The desalination energy, product water salinity and system throughput are comparable to those of continuous water flow systems that do apply energy recovery devices.

Appendix: Energy recovery efficiency below 100%

Consider a system operating with a water recovery ratio α and with an energy recovery device of efficiency Ef. V = α∙V[sea] is the volume of permeate water and V[sea] is the overall volume of seawater used to produce it. Out of the volume V[sea], a volume V is delivered by the pump, and the rest of the seawater volume, V[sea] - V = V∙(1 / α - 1), is delivered by the energy recovery device. The work done by the pump is P∙V, where P is the pump pressure.
For the volume V∙(1 / α - 1) delivered by the energy recovery device there is a need to add an energy (1 - Ef)∙P∙V∙(1 / α - 1) to compensate for the incomplete efficiency. Adding together the work of the pump and the energy added to the recovery device yields:

W = P∙V∙[1 + (1 - Ef)∙(1 / α - 1)]    (31)

For example, the work for efficiency Ef = 0.95 and water recovery ratio α = 0.1 is W = P∙V∙[1 + 0.05∙9] = P∙V∙1.45, compared to P∙V for the efficiency Ef = 1. Therefore, for a recovery ratio of 0.1, a system with a 95% efficient energy recovery device consumes 45% more energy than a system without any recovery loss. The minimal desalination energy for recovery without loss is given by equation 8, P∙V = P[sea]∙V / (1 - α). Therefore the minimal desalination energy for a system including the energy recovery loss will be:

W[min] = P[sea]∙V∙[1 + (1 - Ef)∙(1 / α - 1)] / (1 - α)    (32)

References:

1. "Energy of Seawater Desalination", http://urila.tripod.com/desalination.htm, April (2000).
2. P. Geisler, W. Krumm, and T.A. Peters, Reduction of the energy demand for seawater RO with the pressure exchange system PES, Desalination 135 (2001) 205-210. http://www.desline.com/articoli/
3. J.P. MacHarg, The Real Net Energy Transfer Efficiency of an SWRO Energy Recovery Device. http://www.energy-recovery.com/tech/real.pdf
4. R.A. Oklejas, Apparatus for improving efficiency of a reverse osmosis system, US patent no. 6139740 (2000).
5. R. Verde, Equipment for desalination of water by reverse osmosis with energy recovery, US patent application no. 2001/0017278 (2001).
6. P. Elliot-Moore, Energy recovery device, US patent application no. 2001/0004442 (2001).
7. C. Pearson, Fluid driven pumps and apparatus employing such pumps, US patent no. 6203696 (2001).
8. W.D. Childs and A. Dabiri, Integrated pumping and/or energy recovery system, US patent no. 6017200 (2000).
9. R.J. Raether, Apparatus for desalinating salt water, US patent no. 5916441 (1999).
10. S. Shumway, Linear spool valve device for work exchanger system, US patent no. 5797429 (1998).
11. G.B. Andeen, Fluid motor-pumping apparatus and method for energy recovery, US patent no. 4637783 (1987).
12. L.J. Hauge, Pressure exchanger having a rotor with automatic axial alignment, US patent no. 5988993 (1999).
13. DOW, FILMTEC SW30HR-380 Membrane Elements; Membrane System Design Guidelines.
14. D.C. Bullock and W.T. Andrews, Deep Sea Reverse Osmosis: The Final Quantum Jump. http://www.desalco.ky/d-pdfs/deepsea.pdf
15. P. Paccenti, M. de Gerloni, M. Reali, D. Chiaramonti, S.O. Gartner, P. Helm, and M. Stohr, Submarine seawater reverse osmosis desalination system, Desalination 126 (1999) 213-218.

On the net: May 2002; revised September 2002; appendix added January 2003; references added March 2003.
Calculation of pitch angle and energy diffusion coefficients with the PADIE code

Glauert, Sarah A.; Horne, Richard B., 2005. Calculation of pitch angle and energy diffusion coefficients with the PADIE code. Journal of Geophysical Research, 110 (A4), A04206, 15 pp., 10.1029/

We present a new computer code (PADIE) that calculates fully relativistic quasi-linear pitch angle and energy diffusion coefficients for resonant wave-particle interactions in a magnetized plasma. Unlike previous codes, the full electromagnetic dispersion relation is used so that interactions involving any linear electromagnetic wave mode in a predominantly cold plasma can be addressed for any ratio of the plasma frequency to the cyclotron frequency ωpe/|Ωe|. The code can be applied to problems in astrophysical, magnetospheric, and laboratory plasmas. The code is applied here to the Earth's radiation belts to calculate electron diffusion by whistler mode chorus, electromagnetic ion cyclotron (EMIC), and Z mode waves. The high-density approximation is remarkably good for electron diffusion by whistler mode chorus for energies E ≥ 100 keV, even for ωpe/|Ωe| ≈ 2, but underestimates diffusion by orders of magnitude at low energies (∼10 keV). When a realistic angular spread of propagating waves is introduced for EMIC waves, electron diffusion at ∼0.5 MeV is only slightly reduced compared with the assumption of field-aligned propagation, but at ∼5 MeV, electron diffusion at pitch angles near 90° is reduced by a factor of 5 and increased by several orders of magnitude at pitch angles 30°–80°. Scattering by EMIC waves should contribute to flattening of the distribution function. The first results for electron diffusion by Z mode waves are presented. They show that unlike the whistler and EMIC waves, energy diffusion exceeds pitch angle diffusion over a broad range of pitch angles less than 45°.
The results suggest that Z mode waves could provide a significant contribution to electron acceleration in the radiation belts during storm times.
Disease Rate Statistics

Vegan For Life by Jack Norris, RD & Ginny Messina, MPH, RD

Explanation of Disease Rate Statistics

When comparing incidence or mortality rates of two or more groups, one group is assigned the rate of 1.00 and the other groups are compared to it. These are called disease rate ratios or odds ratios.

Group A: 1.00
Group B: 1.35
Group B has a 35% higher rate of the disease than Group A.

Group A: 1.00
Group B: 0.85
Group B has a 15% lower rate of the disease than Group A.

In addition to the rates, there has to be a test to determine whether the rates are different enough not to be due merely to random chance (also known as statistical significance). Statistical significance for a disease rate is usually expressed by way of a 95% confidence interval (CI). This is done by giving a lower and an upper limit for the interval. If 1.00 does not fall between the two numbers (i.e., within the interval), then the finding is significant and not due to random chance.

Example 1: .85 (.75, .95). The finding is statistically significant because 1.00 falls outside the 95% CI.

Example 2: .85 (.65, 1.05). The finding is not statistically significant because 1.00 falls inside the 95% CI.

Sometimes, p-values are given rather than confidence intervals. In these cases, a p-value of less than .05 means the finding is statistically significant.

When disease rates are adjusted, it means they are changed to account for variables that might affect them. For example, say a study finds that smoking is related to cancer and that drinking is also related to cancer. But many of the people in the study both smoke and drink, so you don't know whether it was the smoking or the drinking (or both) that is actually related to the cancer. By adjusting, you can look at the different levels of drinking taking into account how much the subjects smoked, and get a number for drinking that isn't influenced as much by smoking.
Most studies adjust for more than one variable at a time. They often adjust for all the variables that, in the non-adjusted analysis, had a significant relationship to the disease. What often happens is that a variable loses its significance once the results are adjusted. For example, a study of people aged 20 to 60 will likely correlate the likelihood of having a heart attack with having gray hair. But once you adjust for the age of the participants, the correlation with gray hair will fall away and we can then assume gray hair doesn't cause heart attacks. Well-designed studies allow researchers to consider adjusted results in their calculations. Frequently, these adjusted results are easier to draw conclusions from. The articles on VeganHealth.org use adjusted rates unless otherwise noted.
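The confidence-interval rule described above is easy to mechanize. A minimal Python sketch; the function name is illustrative, not from the article:

```python
def is_significant(ci_low: float, ci_high: float) -> bool:
    """A rate ratio is statistically significant (at the 95% level)
    when 1.00 falls outside its confidence interval."""
    return not (ci_low <= 1.00 <= ci_high)

# Example 1 from the text: .85 (.75, .95) is significant
print(is_significant(0.75, 0.95))   # True
# Example 2 from the text: .85 (.65, 1.05) is not significant
print(is_significant(0.65, 1.05))   # False
```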
MathFiction: Phantom (Terry Goodkind)

Richard Rahl, the protagonist of the best-selling Sword of Truth series, seeks to protect the world from an evil spell which (among other things) has removed his wife from existence. As Kati Voigt points out to me, in the fifth chapter of this tenth novel in the series, Richard makes use of a relationship between math and magic to explain his reasoning to some of his partners. For example:

(quoted from Phantom)

Ann's face had gone crimson. "It's a spell-form! It's inert! It can't be biological!"

"That's the problem," Richard said, answering her point rather than her anger. "You can't have these kind of variables tainting what's supposed to be a constant. It would be like a math equation in which any of the numbers could spontaneously change their value. Such a thing would render math invalid and unworkable. Algebraic symbols can vary -- but even then they are specific relational variables. The numbers, though, are constants. Same with this structure: emblems have to be constructed of inert constants -- you might say like simple addition or subtractions. An interval variable corrupts the constant of an emblematic form."

"I don't follow," Zedd admitted.

Richard gestured to the table. "You drew the Grace in blood. The Grace is a constant. The blood is biological. Why did you do it that way?"

"To make it work," Ann snapped. "We had to do it that way in order to initiate an interior perspective of the verification web. That's the way it's done. That's the method."

Richard held up a finger. "Exactly. You deliberately introduced a controlled biological variable -- blood -- into what is a constant -- a Grace. Keep in mind, though, that it remains outside the spell-form itself; it's merely an empowering agent, a catalyst. I think it must be that such a variable in the Grace allows the spell you initiated to run its course without being influenced by a constant -- the Grace. Do you see? ..."
I don't really see what he is getting at, even though I work with variables and constants for a living. And it appears that there is only this one chapter of the book (which is only one in a series) that discusses math so explicitly. So, you might say that this tiny amount of mathematical nonsense is not worth including in this database. However, I find it terribly interesting the way Richard seems to be able to use math to understand the magic, whereas his colleagues have just memorized some techniques that work. This seems very much like a role that mathematics plays in the real world as well. Moreover, this is only one of a large number of works of fiction which draw analogies between math and magic. (See the list of "similar" works below.) This, I think, is "emblematic" (to use one of Richard Rahl's favorite terms) of the fact that many people find mathematics to be incomprehensible and powerful, like magical spells. Thanks to Kati Voigt for pointing out this small but interesting bit of mathematical fiction!
Weakly bipartite graphs and the max-cut problem

Results 1 - 10 of 21

"... We survey how semidefinite programming can be used for finding good approximative solutions to hard combinatorial optimization problems. ..."

, 1992. Cited by 25 (2 self). We group in this paper, within a unified framework, many applications of the following polyhedra: cut, boolean quadric, hypermetric and metric polyhedra. We treat, in particular, the following applications: ℓ1- and L1-metrics in functional analysis; the max-cut problem, the Boole problem and multicommodity flow problems in combinatorial optimization; lattice holes in geometry of numbers; density matrices of many-fermion systems in quantum mechanics. We present some other applications, in probability theory, statistical data analysis and design theory.

, 2000. Cited by 23 (6 self). Many classes of valid and facet-inducing inequalities are known for the family of polytopes associated with the Symmetric Travelling Salesman Problem (STSP), including subtour elimination, 2-matching and comb inequalities. For a given class of inequalities, an exact separation algorithm is a procedure which, given an LP relaxation vector x*, finds one or more inequalities in the class which are violated by x*, or proves that none exist.
Such algorithms are at the core of the highly successful branch-and-cut algorithms for the STSP. However, whereas polynomial time exact separation algorithms are known for subtour elimination and 2-matching inequalities, the complexity of comb separation is unknown. A partial answer to the comb problem is provided in this paper. We define a generalization of comb inequalities and show that the associated separation problem can be solved efficiently when the subgraph induced by the edges with x*_e > 0 is planar. The separation algorithm runs in O(n³) time, where n is the number of vertices in the graph.

, 1995. Cited by 12 (3 self). Given the integer polyhedron P_I := conv{x ∈ Z^n : Ax ≤ b}, where A ∈ Z^(m×n) and b ∈ Z^m, a Chvátal-Gomory (CG) cut is a valid inequality for P_I of the type ...

- MATHEMATICAL SUPPORT FOR MOLECULAR BIOLOGY; DIMACS SERIES IN DISCRETE MATHEMATICS AND THEORETICAL COMPUTER SCIENCE 47, 1995. Cited by 9 (8 self). We consider the problem of sorting a permutation by reversals (SBR), calling for the minimum number of reversals transforming a given permutation of {1, ..., n} into the identity permutation. SBR was inspired by computational biology applications, in particular genome rearrangement. We propose an exact branch-and-bound algorithm for SBR. A lower bound is computed by solving a linear program with a possibly exponential (in n) number of variables, by using column generation techniques.
An effective branching scheme is described, which is combined with a greedy algorithm capable of producing near-optimal solutions. The algorithm presented can solve to optimality SBR instances of considerably larger size with respect to previously existing methods.

, 1996. Cited by 7 (0 self). We introduce new classes of valid inequalities, called wheel inequalities, for the stable set polytope P_G of a graph G. Each "wheel configuration" gives rise to two such inequalities. The simplest wheel configuration is an "odd" subdivision W of a wheel, and for these we give necessary and sufficient conditions for the wheel inequality to be facet-inducing for P_W. Generalizations arise by allowing subdivision paths to intersect, and by replacing the "hub" of the wheel by a clique. The separation problem for these inequalities can be solved in polynomial time. 1 Introduction. Let G = (V, E) be a simple connected graph with |V| = n ≥ 2 and |E| = m. A subset of V is called a stable set if it does not contain adjacent vertices of G. Let N be a stable set. The incidence vector of N is x ∈ {0, 1}^V such that x_v = 1 if and only if v ∈ N. The stable set polytope of G, denoted by P_G, is the convex hull of incidence vectors of stable sets of G. Some well-known valid inequalities for P_G ...

- J. Combin. Theory Ser. B, 2001. Cited by 5 (0 self). We give a proof of Guenin's theorem characterizing weakly bipartite graphs by not having an odd-K5 minor.
The proof curtails the technical and case-checking parts of Guenin's original proof.

- INTERNATIONAL JOURNAL ON COMPUTATIONAL SCIENCE AND ENGINEERING, 2007. Cited by 5 (0 self). Given a graph with non-negative edge weights, the MAX-CUT problem is to partition the set of vertices into two subsets so that the sum of the weights of edges with endpoints in different subsets is maximized. This classical NP-hard problem finds applications in VLSI design, statistical physics, and classification among other fields. This paper compares the performance of several greedy construction heuristics for the MAX-CUT problem. In particular, a new "worst-out" approach is studied and the proposed edge contraction heuristic is shown to have an approximation ratio of at least 1/3. The results of an experimental comparison of the worst-out approach, the well-known best-in algorithm, and modifications of both are also included.

- CWI QUARTERLY, 1993. Cited by 4 (0 self). Seymour's conjecture on binary clutters with the so-called weak (or Q+-) max-flow min-cut property implies -- if true -- a wide variety of results in combinatorial optimization about objects ranging from matchings to (multicommodity) flows and disjoint paths.
In this paper we review in particular the relation between classes of multicommodity flow problems for which the so-called cut-condition is sufficient and classes of polyhedra for which Seymour's conjecture is true.
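The "best-in" style of greedy construction heuristic mentioned in the 2007 abstract can be sketched as follows. This is a generic greedy placement for illustration, not the authors' exact algorithm:

```python
def greedy_max_cut(n, weights):
    """Greedily assign each vertex to the side of the partition that
    maximizes the weight of edges crossing to already-placed vertices.
    weights: dict mapping frozenset({u, v}) -> non-negative edge weight."""
    side = {}  # vertex -> 0 or 1
    for v in range(n):
        # Weight of edges from v to already-placed vertices on each side.
        w_to = [0.0, 0.0]
        for u, s in side.items():
            w_to[s] += weights.get(frozenset((u, v)), 0.0)
        # Placing v opposite the heavier side cuts more weight.
        side[v] = 0 if w_to[1] >= w_to[0] else 1
    cut = sum(w for e, w in weights.items()
              if len(e) == 2 and side[min(e)] != side[max(e)])
    return side, cut

# Triangle with unit weights: the maximum cut has value 2.
w = {frozenset((0, 1)): 1.0, frozenset((1, 2)): 1.0, frozenset((0, 2)): 1.0}
side, cut = greedy_max_cut(3, w)
print(cut)  # 2.0
```

On a triangle the greedy placement reaches the optimum; in general, simple greedy heuristics of this kind only guarantee a constant fraction of the optimal cut.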
Techny Statistics Tutor

Find a Techny Statistics Tutor

I have a PhD in microbial genetics and have worked in academic research as a university professor and for commercial companies in the biotechnology manufacturing sector. I have a broad background in science and math, a love of written and oral communication and a strong desire to share the knowledg...
35 Subjects: including statistics, English, chemistry, reading

...I have a Ph.D. in Physical Chemistry and run my own company while enjoying tutoring on the side. I have taught community college and tutored at the campus tutoring center for several years. WyzAnt has been a convenient way to get private students recently.
20 Subjects: including statistics, chemistry, calculus, physics

...If you want a tutor who can help you or your student unlock the secrets of the subject and help move students from where they are to enjoying the subject more, please contact me. I just completed the coursework for a secondary education license in Illinois. I'm waiting for the state to approve my license.
16 Subjects: including statistics, physics, calculus, geometry

...I'm too busy with other work. I've cut it out. - Please DO NOT contact me to install software (QuickBooks, Excel, etc.) on your computer. I am not a computer repair or software installation person, even if it’s accounting or stats software. - Projects related to work or final exam/coursework projects are OK.
6 Subjects: including statistics, accounting, finance, business

I will be teaching honors physics and chemistry this year. This summer, I worked for ComEd's "smart grid" education program. I also spent a year doing ACT tutoring at Huntington Learning Center.
I am available for tutoring chemistry, physics, earth science, math, and ACT on the weekends.
12 Subjects: including statistics, chemistry, physics, algebra 1
Tucker, GA SAT Math Tutor

Find a Tucker, GA SAT Math Tutor

...As well as tutoring, I have volunteered in my local elementary school to help students with their homework for their homework club. Also, I mentor students from middle school to high school on behavior, studies, and other topics. I have attended many lectures on best study skills, tutored other c...
14 Subjects: including SAT math, chemistry, geometry, biology

...My hours of availability are Monday - Sunday from 8am to 9pm. My Bachelors Degree is in Applied Math and I took one course in Differential Equations and received an A. I also took several other courses that included Differential Equations in the solution process. When I graduated from college I ...
20 Subjects: including SAT math, calculus, geometry, algebra 1

...I am a member of Mensa (which admits entrance based on success on an IQ exam). My GRE scores placed me in the 98th percentile for quantitative reasoning (math) and the 85th percentile for verbal reasoning; I scored a perfect 6 for writing. I can teach you my means of preparation and my test taki...
22 Subjects: including SAT math, reading, English, GED

Hi, my name is Alex. I graduated from Georgia Tech in May 2011, and am currently tutoring a variety of math topics. I have experience in the following at the high school and college level: pre algebra, algebra, trigonometry, geometry, pre calculus, calculus. In high school, I took and excelled at all of the listed classes and received a 5 on the AB/BC Advanced Placement Calculus exams.
16 Subjects: including SAT math, calculus, geometry, algebra 1

...As an engineering graduate from Georgia Tech I have had multiple courses in college level physics. My manufacturing engineering career has given me a broad understanding of the principles of physics. I have home-schooled two of my boys in math through high school and have tutored several in high school math.
15 Subjects: including SAT math, chemistry, physics, geometry
True/False about Vector Spaces

March 5th 2011, 08:16 PM

Hello, I have a true/false question about vector spaces that I cannot figure out, and it's part of our test review, so I need to know it.

A.) The columns of an invertible nxn matrix form a basis for R^n
B.) In some cases, the linear dependence relations among the columns of a matrix can be affected by certain elementary row operations on the matrix.
C.) A single vector by itself is linearly dependent
D.) If H = Span(b1, ..., bp) then (b1, ..., bp) is a basis for H
E.) A basis is a spanning set that is as large as possible.

I think that A, B, C are true. Am I right? And are the others true?

Thank You,

March 5th 2011, 11:48 PM

More important, show some work. Why do you think they are true? For example: $A$ invertible implies $\det A \neq 0$, which implies $\textrm{rank}\,A = n$, etc.

and are the others true?

A little help: for D) choose $H=\mathbb{R}^2$ and $b_1=(1,0), b_2=(2,0), b_3=(0,1)$.

March 7th 2011, 08:22 AM

Well, (C) is false, unless the vector is the 0 vector (why?). For (B), I want to say that that is also false (I guess it depends what "affect" means). D and E are false as well, both with simple explanations. What happens if you repeat a vector in a spanning set? What is the largest spanning set you could possibly take?
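For statement C, the usual textbook argument can be written out explicitly; a sketch of the standard reasoning, not taken from the thread:

```latex
% A single vector v is linearly dependent only when it is the zero vector:
% if v \neq \mathbf{0}, the only solution of c\,v = \mathbf{0} is c = 0.
c\,v = \mathbf{0},\; v \neq \mathbf{0} \;\Longrightarrow\; c = 0,
\quad\text{so } \{v\}\text{ is linearly independent.}
% The zero vector alone IS dependent: 1 \cdot \mathbf{0} = \mathbf{0}
% is a nontrivial dependence relation.
```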
Clipping/Test for quad visibility [Archive] - OpenGL Discussion and Help Forums

12-16-2003, 09:04 AM

I would like to test for the (partial) visibility of a rectangle in model space, when rendered into viewport space. I'm currently using OpenGL's GL_SELECT mode, but there are speed issues with this. The problem reduces to: an algorithm to test for overlap between a convex quadrilateral Q (the projection into viewport space) and a rectangle R (the viewport itself). If any vertex of Q is inside R, there's overlap (trivial). If any edge of Q crosses an edge of R, there's overlap (Cohen-Sutherland, perhaps?). Finally, if Q encloses R, there's overlap (can't see any easy way to do this). Can anyone help?
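The three tests described in the post can be combined into one routine; the enclosure case reduces to checking whether a corner of R lies inside the convex quad (all edge cross products have the same sign). A minimal Python sketch for illustration; the helper names are made up, and the segment test deliberately ignores degenerate touching cases:

```python
def quad_rect_overlap(quad, rect):
    """Overlap test between a convex quadrilateral Q and an axis-aligned
    rectangle R, following the three cases described above.
    quad: list of 4 (x, y) vertices in order; rect: (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = rect

    # 1. Any vertex of Q inside R (trivial case).
    if any(xmin <= x <= xmax and ymin <= y <= ymax for x, y in quad):
        return True

    rect_pts = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]

    def cross(o, a, b):
        # z-component of (a - o) x (b - o)
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def segs_intersect(p1, p2, p3, p4):
        # Proper crossing test: each segment straddles the other's line.
        d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
        d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
        return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

    # 2. Any edge of Q crossing an edge of R.
    for i in range(4):
        for j in range(4):
            if segs_intersect(quad[i], quad[(i+1) % 4],
                              rect_pts[j], rect_pts[(j+1) % 4]):
                return True

    # 3. Q encloses R: a corner of R is inside the convex quad, which holds
    # when all four cross products share the same sign.
    signs = [cross(quad[i], quad[(i+1) % 4], rect_pts[0]) for i in range(4)]
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

# A quad completely containing the rectangle: overlap detected via case 3.
print(quad_rect_overlap([(-10, -10), (10, -10), (10, 10), (-10, 10)],
                        (-1, -1, 1, 1)))  # True
```

Since both shapes are convex, a separating-axis test over the edge normals would also work and avoids the three-case split.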
Taylor Calculator Real 36 1.0.0.9 Taylor Calculator Real 36 1.0.0.9 download Taylor Calculator Real 36 for teachers and students. Calculates partial sums of Taylor series of standard functions (including hyperbolic). Related software downloads Calculator Prompter is a math expression calculator. You can evaluate expressions like sin(cos(tan(pi)))#2%10^3 Calculator Prompter has a built-in error recognition system that ... Missing ')'; - Unknown symbol ... etc. With Calculator Prompter you can enter the whole expression, including brackets, and operators.You can use Calculator to perform any of the standard operations for .... Free download of Calculator Prompter 2.7 Math tool for high school math, middle school math teaching and studying. Function graphing and analyzing: 2D, 2.5D function graphs and animations, extrema, root, tangent, limit,derivative, integral, inverse; sequence of number: arithmetic progression, geometric progression; analytic geometry: vector, line, circle, ellipse, hyperbola and parabola; solid geometry: spatial line, prism, pyramid, cylinder, .... Free download of Math Studio 2.8.1 Function Grapher is graph maker to create 2D, 2.5D, 3D and 4D function graphs, animations and table graphs. 2D functions can be in the form of explicit, parametric, piecewise, implicit and inequality. 3D functions can be in the form of explicit, parametric and implicit. .... Free download of Function Grapher 3.9.1 ScienCalc is a convenient and powerful scientific calculator. ScienCalc calculates mathematical expression. It supports the common ... Find values for your equations in seconds: Scientific calculator gives students, teachers, scientists and engineers the power to find values for even the most complex equation set. You can build equation set, which can .... Free download of Scientific Calculator - ScienCalc 1.3.9 EqPlot plots 2D graphs from complex equations. The application comprises algebraic, trigonometric, hyperbolic and transcendental functions. 
EqPlot can be used to verify the results of nonlinear regression analysis program. Graphically Review Equations: EqPlot gives engineers and researchers the power to graphically review equations, by putting a large number of equations at .... Free download of EqPlot 1.3.9 What is Yorick? Yorick is an interpreted programming language for scientific simulations or calculations, postprocessing or steering large simulation codes, interactive scientific graphics, and reading, writing, or translating large files of numbers. Yorick includes an interactive graphics package, and a binary file package capable of translating to and from the .... Free download of Yorick for Windows 2.1.05 This software utility can plot regular or parametric functions, in Cartesian or polar coordinate systems, and is capable to evaluate the roots, minimum and maximum points as well as the first derivative and the integral value of regular functions. Easy to use, ergonomic and intuitive interface, large graphs are only a .... Free download of WinDraw 1.0 Lite version converts several units of length. Plus version converts length, weight and capacity measures. By typing a number into box provided will instantly display the results without the user having to search through a confusing menu of choices. Great for mathematical problems, science or travel. Many different uses for this .... Free download of Breaktru Quick Conversion 10.1 Math calculator, also derivative calculator, integral calculator, calculus calculator, expression calculator, equation solver, can be used to calculate expression, derivative, root, extremum, integral.Math calculator, also a derivative calculator, integral calculator, calculus calculator, expression calculator, equation solver, can be used to calculate expression, derivative, root, extremum, integral.Math calculator, also a derivative calculator, .... Free download of Math Calculator 2.5.1 ... you need to perform complex mathematical calculations. 
Scientific Calculator Precision 36 is programmed in C#. All calculations are done in proprietary data type. The calculator handles mathematical formulas of any length and complexity. ... functions. Special numbers NaN, Uncertainty, and Infinity. The calculator follows classical approach when uncertainty of f(x) calculation. Free download of College Scientific Calculator 36 1.0.1.8
PhysicsLAB: Inertial vs Gravitational Mass

Resource Lesson: Inertial vs Gravitational Mass

Usually when we speak of an object's mass we do not distinguish whether we are referring to its inertial mass or its gravitational mass. This is because the quantity of matter present in an object, i.e., its mass, does not depend on the method by which it is measured.

Gravitational mass is measured with the use of a double-pan or triple-beam balance. It is a static measurement - that is, a measurement that can only be accurately recorded when the system is in a state of rest. This method involves placing an unknown mass on the pan and using countermasses to return the balance to equilibrium. This type of measurement only works in the presence of gravity and is actually based on the torque produced by the product of the weights and their lever arms' distance from the axis of rotation. Since torques produce rotation, when the clockwise torque caused by the countermasses equals the counterclockwise torque caused by the unknown mass, we say that the balance is in equilibrium. Since the balance will then be in a state of rest, we can read the correct value for the unknown's gravitational mass from the balance's scale.

Inertial mass is measured with the use of an inertial balance, or spring-loaded pan. It is a dynamic measurement - that is, a measurement that can only be accurately recorded while the system is in a state of motion. This method capitalizes on an object's inertia, or its tendency to continue in its current state of motion, as a means of quantifying the amount of matter present. The pan is first calibrated by counting the number of vibrations in a specified amount of time produced by two objects whose masses are known. From this information, the period (represented with the variable T) of each object's mass is calculated by dividing the total amount of time by the total number of vibrations. Period is usually measured in terms of seconds per vibration.
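The double-pan balance condition described above (clockwise torque equals counterclockwise torque) determines the unknown mass directly; note that g cancels. A minimal Python sketch with made-up lever arms:

```python
def unknown_mass(m_counter, d_counter, d_unknown):
    """Torque balance: m_unknown * g * d_unknown = m_counter * g * d_counter.
    The gravitational field strength g cancels, leaving
    m_unknown = m_counter * d_counter / d_unknown."""
    return m_counter * d_counter / d_unknown

# Hypothetical setup: 0.250 kg of countermasses at 0.20 m
# balancing an unknown mass at 0.10 m from the pivot.
print(unknown_mass(0.250, 0.20, 0.10))  # approximately 0.5 kg
```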
These two periods are then plotted on a graph of T^2 vs mass. Subsequent knowledge of the vibrational period of any unknown mass will allow its inertial mass to be interpolated from this calibration graph. This type of balance will measure an object's inertial mass even in the absence of gravity.

Experimentally, all freely falling bodies experience the same acceleration. When you use net F = ma for a projectile in freefall, net F equals the force of gravitational attraction between the object and the Earth; that is, the object's weight. Weight is calculated as the product of the object's gravitational mass and the Earth's gravitational field strength, g:

wt = mg

When we look at the other side of the equation, ma, we are talking about the object's inertial mass - its resistance to a change in its state of motion, that is, its resistance to being accelerated. This mass is a measure of how much inertia must be accelerated.

net F = ma
-m[gravitational]g = m[inertial]a

Since we can experimentally determine that all freely-falling bodies experience the same acceleration, that is, a = -g, we have proof that m[gravitational] = m[inertial], and there is no need to distinguish between the two definitions. The value of an object's mass is unique, independent of its method of measurement.
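For a spring-loaded pan, T = 2*pi*sqrt(m/k), so T^2 grows linearly with mass; that is why the calibration graph uses T^2 rather than T. A minimal sketch of the calibration-and-interpolation procedure, with all masses and periods hypothetical:

```python
# Calibrate with two known masses and their measured periods, then read an
# unknown's inertial mass off the T^2 vs mass calibration line.
# All values below are hypothetical, for illustration only.
m1, T1 = 0.100, 0.40   # known mass (kg) and its period (s)
m2, T2 = 0.300, 0.69   # second known mass (kg) and its period (s)

# Slope and intercept of the straight line through the two calibration points
slope = (T2**2 - T1**2) / (m2 - m1)
intercept = T1**2 - slope * m1

# An unknown object vibrates with period T_u; interpolate its inertial mass
T_u = 0.55
m_u = (T_u**2 - intercept) / slope
print(round(m_u, 3))   # inertial mass in kg, read from the calibration line
```

Note this works without gravity: the restoring force comes from the spring, not from weight, which is exactly why an inertial balance measures inertial rather than gravitational mass.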