content
stringlengths
86
994k
meta
stringlengths
288
619
Please Factor using Quadratic Expressions. State whether it haus a common factor and pattern. Use decomposition and... - Homework Help - eNotes.com Please Factor using Quadratic Expressions. State whether it haus a common factor and pattern. Use decomposition and show step by step. 5. B. 6m^2+13mn+2n^2 If you have more than one question, you need to make separate posts. To factor using decomposition, you need to find factors of the product of the first and last coefficient that sum to the middle coefficient. In this case, you are looking for factors of 6x2=12 that sum to 13. This gives factors of 12 and 1. Then split the middle term up using these coefficients. Finally, use factoring by grouping on the resulting expression twice. For this expression, we get: `6m^2+13mn+2n^2` split middle term using factors 12 and 1 `=6m^2+12mn+mn+2n^2` factor by grouping `=6m(m+2n)+n(m+2n)` factor again The expression factors to `(m+2n)(6m+n)` . Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/please-factor-using-quadratic-expressions-state-367755","timestamp":"2014-04-19T09:25:46Z","content_type":null,"content_length":"25520","record_id":"<urn:uuid:6e7a7bbc-8872-4f36-bd61-877d383343cf>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Paramount Statistics Tutor Find a Paramount Statistics Tutor Hello! My name is David, and I hope to be the tutor you are looking for. I have over 5 years of tutoring experience in all math subjects, including Algebra, Geometry, Trigonometry, Pre-Calculus, Calculus, Probability and Statistics. 14 Subjects: including statistics, calculus, physics, algebra 1 ...Physics is my passion. It describes how the world around us works, and it is the foundation of the other sciences (chemistry, biology, etc., have their roots in physics). I love talking about and teaching physics, as it can be applied to (and describe) many common real-life situations. For ex... 11 Subjects: including statistics, calculus, SAT math, physics ...I'm well versed in the subject of statistics whether its probabilities, descriptives, graphing tools, hypothesis testing, parametric & non-parametric tests, correlations (binary/multivariate), regressions (linear, multivariate, and logistic), ANOVA/ANCOVA/MANOVA, exploratory/confirmatory factor a... 3 Subjects: including statistics, SPSS, ecology ...I received my B.S in Psychology and am currently an MBA & M.S of Finance student at the University of Southern California (USC). I have more than 7 years of experience and am confident that I can help you achieve your goals! I am very patient and make sure that I personalize all my lessons based... 25 Subjects: including statistics, reading, English, SAT math I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always work with students to overcome obstacles that they might have. 37 Subjects: including statistics, chemistry, English, calculus
{"url":"http://www.purplemath.com/paramount_ca_statistics_tutors.php","timestamp":"2014-04-20T21:19:44Z","content_type":null,"content_length":"23976","record_id":"<urn:uuid:2c71c9e3-46c4-4cb2-9516-e469ae5bcbbb>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Plasticity: theory and engineering applications Sándor Kaliszky We haven't found any reviews in the usual places. BASIC CONCEPTS AND NOTATIONS 19 VARIATIONAL PRINCIPLES 88 8 other sections not shown References from web pages Stress Analysis and Tests of Seamless Gas Cylinders made of High ... Mean stress correction fm = 0.866. Allowable cycles N = 3439 cycles. Literature:. [1] Kaliszky S.: Plasticity, Theory and Engineering Applications. ... info.tuwien.ac.at/ iaa/ deutsch/ forschung/ Inst_ber19_hsd31.pdf Bibliographic information
{"url":"http://books.google.com/books?id=GtVRAAAAMAAJ&q=plastic+theory&dq=related:LCCN73130875&lr=&vq=%22Non-linear+structures%3B+matrix+methods+of+analysis+and+design+by+computers%22&source=gbs_citations_module_r&cad=7","timestamp":"2014-04-17T22:19:41Z","content_type":null,"content_length":"125196","record_id":"<urn:uuid:d934e8a4-7fe9-4ecc-b8ea-cd6c3a433169>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
On the group actions on Hurwitz surfaces up vote 2 down vote favorite Let $C$ be a Hurwitz surface, $G=\text{Aut}(C)$ and $N$ is a proper normal subgroup of $G$. Is there a simple argument (without using of classification theorems) for the fact that $N$ acts on $C$ I found this fact here see Section 3. algebraic-curves ag.algebraic-geometry group-actions I am a bit puzzled by the question; is it not the case that $G$ acts freely on $C$? Then of course, $N$ acts freely as well. – Aakumadula Feb 17 '13 at 4:20 No. The quotient map by the $G$-action have ramification points of indexes $2$, $3$ and $7$ (see [wiki][1]). They have non-trivial stabilizers. [1]: en.wikipedia.org/wiki/ Hurwitz%2527s_theorem_on_automorphisms – Klim Puhov Feb 17 '13 at 9:36 In that case, $N$ could contain elements corresponding to the inertia group of these ramifications, and therefore cannot act freely either. – Aakumadula Feb 17 '13 at 9:45 Could you provide an example? I found this fact here heldermann-verlag.de/gcc/gcc02/gcc028.pdf in Section 3. – Klim Puhov Feb 17 '13 at 10:07 add comment 1 Answer active oldest votes I think that Klim wants to talk about proper normal subgroups $N$ of $G$. In that case, $N$ cannot contain an inertia generator: $G$ is generated by $a,b,c$ with relatively prime orders up vote $2,3,7$ and $abc=1$. So if for instance $a\in N$, then modulo $N$ we have $bc=1$, and the order of $b$ divides $3$ and $7$. So $a,b,c\in N$, hence $G=N$. 1 down Could you explain, please, why $G$ is generated by a,b,c with relatively prime orders 2,3,7 and abc=1? I can't find the proof in literature. – Klim Puhov Feb 17 '13 at 10:26 Well, many people even take that as the definition, as in the paper you quoted in your comment. I believe that the Riemann-Hurwitz genus formula, together with the proof of the Hurwitz bound, gives this connection. – Peter Mueller Feb 17 '13 at 10:39 thanks. This clarifies the question a lot – Aakumadula Feb 17 '13 at 10:43 @Peter: Your answer is exactly what I called "using of classification theorems". So, unfortunately, it is not helpfull for me. Using Riemann-Hurwitz genus formula, together with the proof of the Hurwitz bound, I can show that the quotient map by the $G$-action have ramification points of indexes $2$, $3$ and $7$, but I can't figure out that $G$ have the description that you give in your answer. – Klim Puhov Feb 17 '13 at 11:18 @Klim: I do not agree that I use any classification theorems. If $G$ is the automorphism group of a Hurwitz surface $C$, then $C/G$ is a projective line $P^1$ (should be a by-product in the proof of the Hurwitz bound). Finite branched Galois covers with group $G$ of $P^1$ are described in terms of generating systems $g_1,\dots,g_r$ of $G$ with $g_1g_2\dots g_r=1$, where the $g_i$ are the generators of the local monodromies. Knowing that in the Hurwitz case there are three generators of orders $2$, $3$, and $7$ respectively is more or less rephrasing that the Hurwitz bound is sharp. – Peter Mueller Feb 17 '13 at 14:02 show 2 more comments Not the answer you're looking for? Browse other questions tagged algebraic-curves ag.algebraic-geometry group-actions or ask your own question.
{"url":"http://mathoverflow.net/questions/122015/on-the-group-actions-on-hurwitz-surfaces","timestamp":"2014-04-16T22:50:18Z","content_type":null,"content_length":"61152","record_id":"<urn:uuid:2dd3e86f-004b-4915-b319-b1c71f674bb5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
Continuity of a function up vote 0 down vote favorite Let $f\in L^2(\mathbb{R}^3)$ with compact suppport and $z\in\mathbb{C}$. Is the following function continuous for $z\in Q = \{ z : \Re z\in [a,b], \Im \sqrt{z} \in (0,1] \}$: $$ F(z)=\bigg(\alpha-i\ frac{\sqrt{z}}{4\pi}\bigg)^{-1}\int_{\mathbb{R}^3}dx\bigg( f(x)\frac{e^{i\sqrt{z}|x|}}{4\pi|x|}\bigg)$$ ? I have tried to evaluate $$F(z)-F(z')$$ using the theorem of dominated convergence. fa.functional-analysis real-analysis continuity If you want braces to display, you need two backslashes like \\{ or use \lbrace and \rbrace (I changed this). I left the 'lonely' bracket in the display as it was not clear what you intended. – quid Mar 6 '13 at 20:26 For the integral to be well-defined, what is the behavior of $f(x)$ near $x=0$? Or, is the differential operator $d$ applied to the whole thing? – i707107 Mar 6 '13 at 21:30 I've modified the question!Is it clear now? – Mario Mar 6 '13 at 22:18 The integral is well defined thanks to the Schwartz inequality – Mario Mar 6 '13 at 22:26 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged fa.functional-analysis real-analysis continuity or ask your own question.
{"url":"http://mathoverflow.net/questions/123802/continuity-of-a-function","timestamp":"2014-04-16T22:08:18Z","content_type":null,"content_length":"49636","record_id":"<urn:uuid:b31e3cc8-77d9-4408-bf65-a582010fc094>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Replies: 1 Last Post: Feb 15, 2013 4:27 PM Messages: [ Previous | Next ] Posted: Feb 15, 2013 4:08 PM Although f-fold XVAL is a very good and popular way for designing and testing NNs, there is no MATLAB NNTBX function for doing so. I have done it for classification and regression by brute force using do loops. Unfortunately, that code is not available since my old computer crashed. Nevertheless, the coding was straightforward because random weights are automatically assigned when the obsolete functions NEWPR, NEWFIT or NEWFF are created. With the current functions FITNET, PATTERNNET and FEEDFORWARDNET, you either have to use a separate step with CONFIGURE or let the TRAIN function initialize the weights. I did add a few modifications. With f >=3 partitions there are F = f*(f-1)/2 ( = 10 for f = 5) ways to choose a holdout nontraining pair for validation and testing. So, for each of F (=10) nets I trained untill BOTH holdout set errors were minimized. I then obtained 2 nontraining estimates for error: The error of holdout2 at the minimum error of holdout1 and vice versa. Therefore for f = 5, I get 2*F = 20 holdout error estimates which is a reasonable value for obtaining min/median/mean/std/max (or even histogram) summary statistics. However, I did get a query from one statistician who felt that somehow the multiplication factor of F/f = (f-1) ( = 4 for f=5) is biased. The factor of f-1 is from the use of f-1 different validation sets for each of f test sets. However, the f-1 different validation sets correspond to f-1 different training sets, so I don't worry about it. In addition, for each of the F nets you can run Ntrials different weight initializations. So, for f = 5, Ntrials = 5 you can get F*Ntrials = 100 error estimates. With timeseries, preserving order and uniform spacing is essential. I have only used f-fold XVAL with f = 3 and either DIVIDEBLOCK or DIVIDEINT types of data division. In the latter case the spacing is tripled and success will depend on the difference of the significant lags of the auto and/or cross correlation functions. The trick of two holdout error estimates per net still works. Therefore, you can get 2*Ntrials error estimates. I can't see any other way to preserve order and uniform spacing. The MATLAB commands lookfor crossvalidation lookfor 'cross validation' lookfor validation may yield functions from other toolboxes which may be of use. However I think the use of 2 holdout nontraining subsets is unique to neural network training. Hope this helps. Date Subject Author 2/15/13 CROSSVALIDATION TRAINING FOR NEURAL NETWORKS Greg Heath 2/15/13 Re: CROSSVALIDATION TRAINING FOR NEURAL NETWORKS Greg Heath
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2435652&messageID=8341048","timestamp":"2014-04-18T18:56:26Z","content_type":null,"content_length":"19554","record_id":"<urn:uuid:91e53fc9-daa1-4098-a4d8-65515abafa64>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
April 18th April 18th Special Products After studying this lesson, you will be able to: • Use Special Products Rules to multiply certain polynomials. We will consider three special products in this section. Square of a Sum (a + b)^ 2 = a^ 2 + 2ab + b^ 2 Example 1 (x + 3)^ 2 We are squaring a sum. We can just write the binomial down twice and multiply using the FOIL Method or we can use the Square of a Sum Rule. Using the Square of a Sum Rule, we: square the first term which is x...this will give us x^ 2 multiply the 2 terms together and double them x times 3 is 3x... double it to get 6x square the last term which is 3...this will give us 9 The answer is x^ 2 + 6x + 9 Example 2 (x + 2)^ 2 We are squaring a sum. We can just write the binomial down twice and multiply using the FOIL Method or we can use the Square of a Sum Rule. Using the Square of a Sum Rule, we: square the first term which is x...this will give us x^ 2 multiply the 2 terms together and double them x times 2 is 2x... double it to get 4x square the last term which is 2...this will give us 4 The answer is x^ 2 + 4x + 4 Square of a Difference (a - b)^ 2 = a^ 2 - 2ab + b^ 2 Example 3 (x - 2)^ 2 We are squaring a difference. We can just write the binomial down twice and multiply using the FOIL Method or we can use the Square of a Sum Rule. Using the Square of a Difference Rule, we: square the first term which is x...this will give us x^ 2 multiply the 2 terms together and double them x times -2 is 2x...double it to get -4x square the last term which is -2...this will give us 4 The answer is x^ 2 - 4x + 4 Product of a Sum and a Difference (a + b)(a - b) = a^ 2 - b^ 2 Example 4 ( x + 5 ) ( x - 5 ) We have the product of a sum and a difference. Here's what we do: multiply the first terms x times x will be x^ 2 multiply the last terms 5 times 5 will be -25 The answer is x^ 2 -25 Example 5 ( x + 7 ) ( x - 7 ) We have the product of a sum and a difference. Here's what we do: multiply the first terms x times x will be x^ 2 multiply the last terms 7 times 7 will be - 49 The answer is x^ 2 - 49
{"url":"http://www.algebra-online.com/tutorials-2/special-products-1.htm","timestamp":"2014-04-18T23:26:23Z","content_type":null,"content_length":"19357","record_id":"<urn:uuid:615f6e7f-37e2-4793-aad6-e34f681c8d18>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Strafford, PA Statistics Tutor Find a Strafford, PA Statistics Tutor ...My expertise allows me to quickly identify students' problem areas and most effectively address these in the shortest amount of time possible. For the SAT, each student receives a 95-page spiral-bound book of strategies, notes, and practice problems that I created from scratch after a rigorous a... 19 Subjects: including statistics, calculus, geometry, algebra 1 ...I have 14 years' experience as a practicing actuary. I am a Fellow of the Society of Actuaries, having completed the actuarial exam process. I took courses in linear algebra, linear programming, and linear optimization. 18 Subjects: including statistics, calculus, geometry, GRE ...I favor the Socratic Method of teaching, asking questions of the student to help him/her find her/his own way through the problem rather than telling what the next step is. This way the student not only learns how to solve a specific proof, but ways to approach proofs that will work on problems ... 58 Subjects: including statistics, reading, geometry, biology I have been working as a statistician at the University of Pennsylvania since 1991, providing assistance to researchers in various areas of health behavior. I am proficient in several statistical packages, including SPSS, STATA, and SAS. One of my particular strengths is the ability to explain sta... 1 Subject: statistics ...I have taught middle school math for 6 years. On a daily basis, I helped students with study skills such as time management, organization, and reading carefully. I have tutored and homeschooled many students as well. 21 Subjects: including statistics, reading, algebra 1, SAT math Related Strafford, PA Tutors Strafford, PA Accounting Tutors Strafford, PA ACT Tutors Strafford, PA Algebra Tutors Strafford, PA Algebra 2 Tutors Strafford, PA Calculus Tutors Strafford, PA Geometry Tutors Strafford, PA Math Tutors Strafford, PA Prealgebra Tutors Strafford, PA Precalculus Tutors Strafford, PA SAT Tutors Strafford, PA SAT Math Tutors Strafford, PA Science Tutors Strafford, PA Statistics Tutors Strafford, PA Trigonometry Tutors Nearby Cities With statistics Tutor Broad Axe, PA statistics Tutors Center Square, PA statistics Tutors Charlestown, PA statistics Tutors Cynwyd, PA statistics Tutors Gulph Mills, PA statistics Tutors Ithan, PA statistics Tutors Miquon, PA statistics Tutors Oakview, PA statistics Tutors Penllyn, PA statistics Tutors Radnor, PA statistics Tutors Rose Tree, PA statistics Tutors Saint Davids, PA statistics Tutors Southeastern statistics Tutors Valley Forge statistics Tutors Wayne, PA statistics Tutors
{"url":"http://www.purplemath.com/Strafford_PA_statistics_tutors.php","timestamp":"2014-04-20T08:55:27Z","content_type":null,"content_length":"23945","record_id":"<urn:uuid:53904ca4-4db5-44a8-9e6b-2a6b46a3ea4e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Normal form transformation up vote 0 down vote favorite Hi, my question is related to normal form transformations...One of the papers I would like to understand is Wu, S., "Well-posedness in Sobolev spaces of the full water wave problem in 2-D" where the author is getting ride of the quadratic nonlinearities using a normal form transformation and then a change of coordinates...I understood that the Shatah procedure, didn't fully remove the quadratic terms(in the full 2D water wave problem), but can anyone explain how that change of coordinates works?....how exactly are the quadratic terms removed?, and how she can control the residual terms (the ones that arise after the transformation....)? I have to admit that the paper is dense and I am not convinced I understood 30% of it.. Thanks! Didi sobolev-spaces ap.analysis-of-pdes fa.functional-analysis 1 Could you provide some mathematical detail? People here are not necessarily going to track down the paper and find the content you are talking about....and then try to answer your question. – David Roberts Jun 2 '11 at 22:41 Your question is something that i am very interested in! Mihaela – Mihaela Jun 3 '11 at 1:37 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged sobolev-spaces ap.analysis-of-pdes fa.functional-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/66775/normal-form-transformation?answertab=votes","timestamp":"2014-04-16T07:26:13Z","content_type":null,"content_length":"47851","record_id":"<urn:uuid:1b6d4b43-490f-4c06-b287-187aacbd23d4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
Best book to start with? 1) Conceptual Physics by Hewitt. From what I hear it is good for concepts but severly lacking in math, which is pointless. Not pointless. Great book. 2) Physics by Halliday and Resnik. Does it do a good job explaining the concepts, on top of the math? No, it doesn't do a good job on either math or concepts. 3) 3 volume Feynman lectures. I heard these are like the best books ever for physics. However, some say they are not for physics beginners. Should I read the Halliday book first, plus some other basic physics books, before reading Feynman lecture? They're hard. I would not read them as your first book. But I do not like to waste time, and don't want to read the same concepts in 10 different books if I dont have to. You've got it backwards. Reading more books takes less time than reading one book. When you read more books, you can start with an easy one and work up. Also, if you don't understand something in book A, you can look at book B. Read Hewitt. You'll have questions as you read it, so post them on PF. Once we get to the end of that process you can worry about what book to read next.
{"url":"http://www.physicsforums.com/showthread.php?p=4252321","timestamp":"2014-04-19T09:45:26Z","content_type":null,"content_length":"37403","record_id":"<urn:uuid:aed0a0cb-1002-4a1b-a2ec-9c7bc8bf5e87>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
In Defense of the 8-Point Algorithm Results 1 - 10 of 102 - International Journal of Computer Vision , 1998 "... Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, an ..." Cited by 320 (7 self) Add to MetaCart Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3&times;3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet. - In Computer Graphics (SIGGRAPH’96 , 1996 "... Image morphing techniques can generate compelling 2D transitions between images. However, differences in object pose or viewpoint often cause unnatural distortions in image morphs that are difficult to correct manually. Using basic principles of projective geometry, this paper introduces a simple ex ..." Cited by 233 (20 self) Add to MetaCart Image morphing techniques can generate compelling 2D transitions between images. However, differences in object pose or viewpoint often cause unnatural distortions in image morphs that are difficult to correct manually. Using basic principles of projective geometry, this paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations. The technique, called view morphing, works by prewarping two images prior to computing a morph and then postwarping the interpolated images. Because no knowledge of 3D shape is required, the technique may be applied to photographs and drawings, as well as rendered scenes. The ability to synthesize changes both in viewpoint and image structure affords a wide variety of interesting 3D effects via simple image transformations. - in Proc. IEEE Conf. Computer Vision, Pattern Recognition , 1997 "... We describe a new method for camera autocalibration and scaled Euclidean structure and motion, from three or more views taken by a moving camera with fixed but unknown intrinsic parameters. The motion constancy of these is used to rectify an initial projective reconstruction. Euclidean scene structu ..." Cited by 210 (7 self) Add to MetaCart We describe a new method for camera autocalibration and scaled Euclidean structure and motion, from three or more views taken by a moving camera with fixed but unknown intrinsic parameters. The motion constancy of these is used to rectify an initial projective reconstruction. Euclidean scene structure is formulated in terms of the absolute quadric — the singular dual 3D quadric ( rank 3 matrix) giving the Euclidean dot-product between plane normals. This is equivalent to the traditional absolute conic but simpler to use. It encodes both affine and Euclidean structure, and projects very simply to the dual absolute image conic which encodes camera calibration. Requiring the projection to be constant gives a bilinear constraint between the absolute quadric and image conic, from which both can be recovered nonlinearly from images, or quasi-linearly from. Calibration and Euclidean structure follow easily. The nonlinear method is stabler, faster, more accurate and more general than the quasi-linear one. It is based on a general constrained optimization technique — sequential quadratic programming — that may well be useful in other vision problems. , 1997 "... The conventional approach to three-dimensional computer graphics produces images from geometric scene descriptions by simulating the interaction of light with matter. My research explores an alternative approach that replaces the geometric scene description with perspective images and replaces the s ..." Cited by 167 (4 self) Add to MetaCart The conventional approach to three-dimensional computer graphics produces images from geometric scene descriptions by simulating the interaction of light with matter. My research explores an alternative approach that replaces the geometric scene description with perspective images and replaces the simulation process with data interpolation. I derive an image-warping equation that maps the visible points in a reference image to their correct positions in any desired view. This mapping from reference image to desired image is determined by the center-of-projection and pinhole-camera model of the two images and by a generalized disparity value associated with each point in the reference image. This generalized disparity value, which represents the structure of the scene, can be determined from point correspondences between multiple reference images. The image-warping equation alone is insufficient to synthesize desired images because multiple reference-image points may map to a single point. I derive a new visibility algorithm that determines a drawing order for the image warp. This algorithm results in correct visibility for the desired image independent of the reference image’s contents. The utility of the image-based approach can be enhanced with a more general pinholecamera - International Journal of Computer Vision , 1997 "... A structure from motion algorithm is described which recovers structure and camera position, modulo a projective ambiguity. Camera calibration is not required, and camera parameters such as focal length can be altered freely during motion. The structure is updated sequentially over an image sequenc ..." Cited by 141 (4 self) Add to MetaCart A structure from motion algorithm is described which recovers structure and camera position, modulo a projective ambiguity. Camera calibration is not required, and camera parameters such as focal length can be altered freely during motion. The structure is updated sequentially over an image sequence, in contrast to schemes which employ a batch process. A specialisation of the algorithm to recover structure and camera position modulo an affine transformation is described, together with a method to periodically update the affine coordinate frame to prevent drift over time. We describe the constraint used to obtain this specialisation. Structure is recovered from image corners detected and matched automatically and reliably in real image sequences. Results are shown for reference objects and indoor environments, and accuracy of recovered structure is fully evaluated and compared for a number of reconstruction schemes. A specific application of the work is demonstrated -- affine structure is used to compute free space maps enabling navigation through unstructured environments and avoidance of obstacles. The path planning involves only affine constructions. - IEEE Transactions on Pattern Analysis and Machine Intelligence , 1994 "... Modelling th# push broom sensors commonly used in satellite imagery is quite di#cult and computationally intensive due to th# complicated motion ofth# orbiting satellite with respect to th# rotating earth# In addition, th# math#46 tical model is quite complex, involving orbital dynamics, andh#(0k is ..." Cited by 140 (6 self) Add to MetaCart Modelling th# push broom sensors commonly used in satellite imagery is quite di#cult and computationally intensive due to th# complicated motion ofth# orbiting satellite with respect to th# rotating earth# In addition, th# math#46 tical model is quite complex, involving orbital dynamics, andh#(0k is di#cult to analyze. Inth#A paper, a simplified model of apush broom sensor(th# linear push broom model) is introduced. Ith as th e advantage of computational simplicity wh#A9 atth# same time giving very accurate results compared with th# full orbitingpush broom model. Meth# ds are given for solving th# major standardph# togrammetric problems for th e linear push broom sensor. Simple non-iterative solutions are given for th# following problems : computation of th# model parameters from groundcontrol points; determination of relative model parameters from image correspondences between two images; scene reconstruction given image correspondences and ground-control points. In addition, th# linearpush broom model leads toth#0 retical insigh ts th# t will be approximately valid for th# full model as well.Th# epipolar geometry of linear push broom cameras in investigated and sh own to be totally di#erent from th at of a perspective camera. Neverth eless, a matrix analogous to th e essential matrix of perspective cameras issh own to exist for linear push broom sensors. Fromth#0 it is sh# wn th# t a scene is determined up to an a#ne transformation from two viewswith linearpush broom cameras. Keywords :push broom sensor, satellite image, essential matrixph# togrammetry, camera model The research describ ed in this paper hasb een supportedb y DARPA Contract #MDA97291 -C-0053 1 Real Push broom sensors are commonly used in satellite cameras, notably th# SPOT satellite forth# generatio... - IEEE Transactions on Pattern Analysis and Machine Intelligence , 1997 "... Abstract—The fundamental matrix is a basic tool in the analysis of scenes taken with two uncalibrated cameras, and the eight-point algorithm is a frequently cited method for computing the fundamental matrix from a set of eight or more point matches. It has the advantage of simplicity of implementati ..." Cited by 132 (1 self) Add to MetaCart Abstract—The fundamental matrix is a basic tool in the analysis of scenes taken with two uncalibrated cameras, and the eight-point algorithm is a frequently cited method for computing the fundamental matrix from a set of eight or more point matches. It has the advantage of simplicity of implementation. The prevailing view is, however, that it is extremely susceptible to noise and hence virtually useless for most purposes. This paper challenges that view, by showing that by preceding the algorithm with a very simple normalization (translation and scaling) of the coordinates of the matched points, results are obtained comparable with the best iterative algorithms. This improved performance is justified by theory and verified by extensive experiments on real images. Index Terms—Fundamental matrix, eight-point algorithm, condition number, epipolar structure, stereo vision. - European Conference on Computer Vision , 1998 "... This paper describes a theory and a practical algorithm for the autocalibration of a moving projective camera, from views of a planar scene. The unknown camera calibration, and (up to scale) the unknown scene geometry and camera motion are recovered from the hypothesis that the camera’s internal par ..." Cited by 123 (2 self) Add to MetaCart This paper describes a theory and a practical algorithm for the autocalibration of a moving projective camera, from views of a planar scene. The unknown camera calibration, and (up to scale) the unknown scene geometry and camera motion are recovered from the hypothesis that the camera’s internal parameters remain constant during the motion. This work extends the various existing methods for non-planar autocalibration to a practically common situation in which it is not possible to bootstrap the calibration from an intermediate projective reconstruction. It also extends Hartley’s method for the internal calibration of a rotating camera, to allow camera translation and to provide 3D as well as calibration information. The basic constraint is that the projections of orthogonal direction vectors (points at infinity) in the plane must be orthogonal in the calibrated camera frame of each image. Abstractly, since the two circular points of the 3D plane (representing its Euclidean structure) lie on the 3D absolute conic, their projections into each image must lie on the absolute conic’s image (representing the camera calibration). The resulting numerical algorithm optimizes this constraint over all circular points and projective calibration parameters, using the inter-image homographies as a projective scene representation. - In IEEE Conf. Computer Vision & Pattern Recognition , 1996 "... This paper describes a family of factorization-based algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the Tomasi-Kanade algorithm from affine to fully perspective cameras, and fro ..." Cited by 106 (5 self) Add to MetaCart This paper describes a family of factorization-based algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the Tomasi-Kanade algorithm from affine to fully perspective cameras, and from points to lines. They make no restrictive assumptions about scene or camera geometry, and unlike most existing reconstruction methods they do not rely on ‘privileged’ points or images. All of the available image data is used, and each feature in each image is treated uniformly. The key to projective factorization is the recovery of a consistent set of projective depths (scale factors) for the image points: this is done using fundamental matrices and epipoles estimated from the image data. We compare the performance of the new techniques with several existing ones, and also describe an approximate factorization method that gives similar results to SVDbased factorization, but runs much more quickly for large problems.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=8204","timestamp":"2014-04-19T23:44:43Z","content_type":null,"content_length":"40221","record_id":"<urn:uuid:384b4fa8-35b4-4b83-9fea-c9978f652195>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
R Snippet for Sampling from a Dataframe July 27, 2009 By Will It took me a while to figure this out, so I thought I'd share. I have a dataframe with millions of observations in it, and I want to estimate a density distribution, which is a memory intensive process. Running my kde2d function on the full dataframe throws and error -- R tries to allocate a vector that is gigabytes in size. A reasonable alternative is to run the function on a smaller for the author, please follow the link and comment on his blog: Getting Genetics Done daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/r-snippet-for-sampling-from-a-dataframe/","timestamp":"2014-04-18T18:30:49Z","content_type":null,"content_length":"34112","record_id":"<urn:uuid:8ee6dd74-5333-4b64-af30-2775ac74befb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Newton, MA Geometry Tutor Find a Newton, MA Geometry Tutor I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment... 14 Subjects: including geometry, calculus, ACT Math, algebra 1 ...I like to help students understand the importance of trying to determine if answers make sense. I am a parent of two high school students so I understand the stress involved in trying to equip them for college. I obtained my Master's degree from MIT and my bachelor's degree from the University of Minnesota. 10 Subjects: including geometry, physics, calculus, algebra 2 ...I have worked with students with ADD & ADHD extensively in my private tutoring. Some have been on meds.; others not, some have been on school IEPs, some not, some have been high school students, others middle and elementary students. My tutoring work for the Lexington public school system, run... 34 Subjects: including geometry, reading, English, writing ...I am also an engineering and business professional with BS and MS degrees. I tutor Algebra, Geometry, Pre-calculus, Pre-algebra, Algebra 2, Analysis, Trigonometry, Calculus, and Physics. Seasonally I work with students on SAT preparation, which I love and excel at. 15 Subjects: including geometry, calculus, physics, statistics ...I hope to continue to teach and inspire others, as well as play a greater part in my local community along the way.Prior to teaching, I had been in the customer service and sales industries (in various front-line, training, administrative, and management roles) for over 20 years. I worked for Fo... 23 Subjects: including geometry, calculus, GRE, algebra 1
{"url":"http://www.purplemath.com/Newton_MA_Geometry_tutors.php","timestamp":"2014-04-19T17:48:17Z","content_type":null,"content_length":"24030","record_id":"<urn:uuid:3d063300-9356-4045-ade0-632b0238a910>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Possible Answer The meter (abbreviation, m) is the Standard International (SI) unit of displacement or length. One meter is the distance traveled by a ray of electromagnetic (EM) energy through a vacuum in 1/ The metre (International spelling as used by the International Bureau of Weights and Measures), or meter (American spelling), (SI unit symbol: m), is the fundamental unit of length (SI dimension symbol: L) in the International System of Units (SI). Originally intended to be one ten-millionth of ... - read more Please vote if the answer you were given helped you or not, thats the best way to improve our algorithm. You can also submit an answer or search documents about what is mitar Share your answer: what is mitar? Question Analizer what is mitar resources
{"url":"http://www.askives.com/what-is-mitar.html","timestamp":"2014-04-20T03:50:20Z","content_type":null,"content_length":"33843","record_id":"<urn:uuid:8f300972-a9cd-4246-a2cc-7c502ee2bc79>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
, 1993 "... The all nearest smaller values problem is defined as follows. Let A = (a 1 ; a 2 ; : : : ; an ) be n elements drawn from a totally ordered domain. For each a i , 1 i n, find the two nearest elements in A that are smaller than a i (if such exist): the left nearest smaller element a j (with j ! i) a ..." Cited by 37 (7 self) Add to MetaCart The all nearest smaller values problem is defined as follows. Let A = (a 1 ; a 2 ; : : : ; an ) be n elements drawn from a totally ordered domain. For each a i , 1 i n, find the two nearest elements in A that are smaller than a i (if such exist): the left nearest smaller element a j (with j ! i) and the right nearest smaller element a k (with k ? i). We give an O(log log n) time optimal parallel algorithm for the problem on a CRCW PRAM. We apply this algorithm to achieve optimal O(log log n) time parallel algorithms for four problems: (i) Triangulating a monotone polygon, (ii) Preprocessing for answering range minimum queries in constant time, (iii) Reconstructing a binary tree from its inorder and either preorder or postorder numberings, (vi) Matching a legal sequence of parentheses. We also show that any optimal CRCW PRAM algorithm for the triangulation problem requires \Omega\Gammauir log n) time. Dept. of Computing, King's College London, The Strand, London WC2R 2LS, England. , 1991 "... This paper introduces the Parallel Priority Queue (PPQ) abstract data type. A PPQ stores a set of integer-valued items and provides operations such as insertion of n new items or deletion of the n smallest ones. Algorithms for realizing PPQ operations on an n-processor CREW-PRAM are based on two new ..." Cited by 15 (1 self) Add to MetaCart This paper introduces the Parallel Priority Queue (PPQ) abstract data type. A PPQ stores a set of integer-valued items and provides operations such as insertion of n new items or deletion of the n smallest ones. Algorithms for realizing PPQ operations on an n-processor CREW-PRAM are based on two new data structures, the n-Bandwidth-Heap (n-H) and the n-Bandwidth- Leftist-Heap (n-L), that are obtained as extensions of the well known sequential binary-heap and leftist-heap, respectively. Using these structures, it is shown that insertion of n new items in a PPQ of m elements can be performed in parallel time O(h + log n), where h = log m n , while deletion of the n smallest items can be performed in time O(h + log log n). Keywords Data structures, parallel algorithms, analysis of algorithms, heaps, PRAM model. This work has been partly supported by the Ministero della Pubblica Istruzione of Italy and by the C.N.R. project "Sistemi Informatici e Calcolo Parallelo" y Istituto di Ela... , 1991 "... The first half of the paper is a general introduction which emphasizes the central role that the PRAM model of parallel computation plays in algorithmic studies for parallel computers. Some of the collective knowledge-base on non-numerical parallel algorithms can be characterized in a structural way ..." Cited by 11 (4 self) Add to MetaCart The first half of the paper is a general introduction which emphasizes the central role that the PRAM model of parallel computation plays in algorithmic studies for parallel computers. Some of the collective knowledge-base on non-numerical parallel algorithms can be characterized in a structural way. Each structure relates a few problems and technique to one another from the basic to the more involved. The second half of the paper provides a bird's-eye view of such structures for: (1) list, tree and graph parallel algorithms; (2) very fast deterministic parallel algorithms; and (3) very fast randomized parallel algorithms. 1 Introduction Parallelism is a concern that is missing from "traditional" algorithmic design. Unfortunately, it turns out that most efficient serial algorithms become rather inefficient parallel algorithms. The experience is that the design of parallel algorithms requires new paradigms and techniques, offering an exciting intellectual challenge. We note that it had... - In use as class notes since , 1993 "... Copyright 1992-2009, Uzi Vishkin. These class notes reflect the theorertical part in the Parallel ..." , 2001 "... We present a BSP (Bulk Synchronous Parallel) algorithm for solving the All Nearest Smaller Values Problem (ANSVP), a fundamental problem in both graph theory and computational geometry. Our algorithm achieves optimal sequential computation time and uses only three communication supersteps. In the wo ..." Cited by 3 (2 self) Add to MetaCart We present a BSP (Bulk Synchronous Parallel) algorithm for solving the All Nearest Smaller Values Problem (ANSVP), a fundamental problem in both graph theory and computational geometry. Our algorithm achieves optimal sequential computation time and uses only three communication supersteps. In the worst case, each communication phase takes no more than an (n/p + p)-relation, where p is the number of the processors. In addition, our average-case analysis shows that, on random inputs, the expected communication requirements for all three steps are bounded above by a p-relation, which is independent of the problem size n. Experiments have been carried out on an SGI Origin 2000 with 32 R10000 processors and a SUN Enterprise 4000 multiprocessing server supporting 8 UltraSPARC processors, using the MPI libraries. The results clearly demonstrate the communication eciency and load balancing for computation. - In Proc. 19th Intl. Coll. on Automata, Languages, and Programming , 1992 "... Abstract. We investigate the complexity of merging sequences of small integers on the EREW PRAM. Our most surprising result is that two sorted sequences of n bits each can be merged in O(log log n) time. More generally, we describe an algorithm to merge two sorted sequences of n integers drawn from ..." Cited by 1 (0 self) Add to MetaCart Abstract. We investigate the complexity of merging sequences of small integers on the EREW PRAM. Our most surprising result is that two sorted sequences of n bits each can be merged in O(log log n) time. More generally, we describe an algorithm to merge two sorted sequences of n integers drawn from the set {0,..., m − 1} in O(log log n + log m) time using an optimal number of processors. No sublogarithmic merging algorithm for this model of computation was previously known. The algorithm not only produces the merged sequence, but also computes the rank of each input element in the merged sequence. On the other hand, we show a lower bound of Ω(log min{n, m}) on the time needed to merge two sorted sequences of length n each with elements in the set {0,..., m − 1}, implying that our merging algorithm is as fast as possible for m = (log n) Ω(1). If we impose an additional stability condition requiring the ranks of each input sequence to form an increasing sequence, then the time complexity of the problem becomes Θ(log n), even for m = 2. Stable merging is thus harder than nonstable merging. 1 , 2001 "... Communication has been pointed out to be the major bottleneck for the performance of parallel algorithms. Theoretical parallel models such as PRAM have long been questioned due to the fact that the theoretical algorithmic efficiency does not provide a satisfactory performance prediction when algorit ..." Add to MetaCart Communication has been pointed out to be the major bottleneck for the performance of parallel algorithms. Theoretical parallel models such as PRAM have long been questioned due to the fact that the theoretical algorithmic efficiency does not provide a satisfactory performance prediction when algorithms are implemented on commercially available parallel machines. This is mainly because these models do not provide a reasonable scheme for measuring the communication overhead. Recently several practical parallel models aiming at achieving portability and scalability of parallel algorithms have been widely discussed. Among them, the Bulk Synchronous Parallel (BSP) model has received much attention as a bridging model for parallel computation, as it generally better addresses practical concerns like communication and synchronization. The BSP model has been used in a number of application areas, primarily in scientific computing. Yet, very little work has been done on problems generally considered to be irregularly structured, which usually result in highly data-dependent communication patterns and make it difficult to achieve communication efficiency. Typical examples are fundamental problems in graph theory and computational geometry, which are important as a vast number of interesting problems in many fields are defined in terms of v them. Thus practical and communication-efficient parallel algorithms for solving these problems are important. In this dissertation, we present scalable parallel algorithms for some fundamental problems in graph theory and computational geometry. In addition to the time complexity analysis, we also present some techniques for worst-case and average-case communication complexity analyses. Experimental studies have been performed on two differ... "... We provide the rst non-trivial lower bound, p , where p is the number of the processors and n is the data size, on the average-case communication volume, , required to solve the parenthesis matching problem and present a parallel algorithm that takes linear (optimal) computation time and o ..." Add to MetaCart We provide the rst non-trivial lower bound, p , where p is the number of the processors and n is the data size, on the average-case communication volume, , required to solve the parenthesis matching problem and present a parallel algorithm that takes linear (optimal) computation time and optimal expected message volume, + p. "... 1. lntrod uction The simulation of a discrete el·ent system is traditionally regarded as the process of generating an operation pat h that represents the system state as a function of time. This normally entails the use of a global clock and an event list. In the last few years, much effort has been ..." Add to MetaCart 1. lntrod uction The simulation of a discrete el·ent system is traditionally regarded as the process of generating an operation pat h that represents the system state as a function of time. This normally entails the use of a global clock and an event list. In the last few years, much effort has been devoted to the task of splittin g the simulation process into a number of sub-processes and executing the latter in parallel on different processors [1,2, 3,4,5). For example, when simulating a queueing network, the ide~,might be to allocate each processor to a node, or a group of nodes, and let it handle ' the corresponding events, taking care of possible interactions with other processors. At best, the degree of parallelism obtained by such an approach will be equal to the number of nodes, and in general may be much smaller [4, 5J. We propose new methods that do not limit the degree of parallelism in this way. The concepts of "time " and "event " are no longer present explicitly, and the necessity for the event list disappears. In section 3, we consider the problem of simulating a long run of a first in, first out (FIFO) G/G/I queue [6J, using P processors. A simple algorithm is presented for computing the arrival and departures times of the first 11 ' jobs in time proportional to 11' / P,
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=101080","timestamp":"2014-04-19T23:07:45Z","content_type":null,"content_length":"36700","record_id":"<urn:uuid:73d6a7df-7cfe-49a6-803f-3f519aeb1321>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 19 Java Programming Write a program that asks the users to enter five test scores into an array Douglass includes a lot of maxims in his Narrative. What is the effect of these? They make his autobiography less believable because some of them are not true. He reveals many of the values of the people during the time in which he lived. They demonstrate that slaves were hig... Thank you I'm sorry that was the wrong problem is actually (5^4 * 5^7)/5^8 actually it says by using the laws of exponents? simplify (5x^2-3x-8)?? Write a program that asks for and reads the price of a book (it may have a decimal part), multiplies it by 7% sales tax and displays both the sales tax and final price of the book on the screen. liberal arts math 1 Five hundred raffle tickets were sold for $2 each. A prize of $400 is to be awarded. If you buy one ticket, determine the expected value. What is the fair price for a ticket? discrete math "a club with 20 members must choose a three-person committee and a five-person committee. how many ways can the two committees be chosen if the committees can overlap? how many ways can the two committees be chosen if the committees cannot overlap?". 5/48=0.104 0.104*100=10.4% proponents of the peak oil theory claim that worldwide production of petroleum will soon reach a maximum and start to decline. (there is considerable debate about how soon this will occur) let P(t)= the world's rate of petroleum production t years after 2000. let T be the ... a cake is put into an oven to bake. the temp, H, of the cake (F) is a function of how long it has been in the over, t (min). thus, H=f(t) What does f'(t) represent? is f'(t) positive or negative? is f"(t) positive or negative? as i drive to the city, let f(t) be the total distance i have traveled t hours after starting my trip. i left my house and began my trip at exactly 12 noon. just a fraction of a second before 2 pm, i cam across a construction zone, and so i hit the brakes. What is the sign of ... Find the pattern 31,28,31,30,---,30,31,---,30,31,30,31 What is the freezing point of a solution of 5grams of CaCO3 and 10ml of H20? Would 3/8 as a percent be 37.5%? how do i write a net ionic equation for Lead (II) Nitrate and Sodium Carbonate react to form Lead Carbonate and Sodium Nitrate??
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=caren","timestamp":"2014-04-16T14:21:42Z","content_type":null,"content_length":"9313","record_id":"<urn:uuid:3b26f169-0338-4be5-a331-6c1442cd0df5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Class 5's Names Copyright © University of Cambridge. All rights reserved. Why do this problem? This problem uses simple and manageable information to illustrate various methods of recording data. The questions require learners to interpret the data presented, as well as to re-present the data in different ways themselves. You could also encourage children to look at ways to present data more critically by discussing which method they think is best and why. Possible approach The focus of this particular task is not on data collection itself, but you may wish to tackle this problem once the children have had some experience of creating their own tally charts, frequency tables and bar charts. Alternatively, the activity could be a vehicle for introducing some different ways of representing data which learners may not have come across before. Tell the 'story' of the problem and invite pupils to work in pairs. They might find it useful to have a of the names given at the beginning of the problem and this sheet which contains the tally chart, fequency table and bar chart. Allow pairs to choose the resources they need to create the different representations, although having squared paper easily available is likely to be helpful. In a plenary, initiate discussion about how they knew which member of the class was away that day and encourage them to offer opinions on which of their representations is best for this data. Listen out for those that give clear explanations for their choices. Key questions What does this chart/table tell you? Tell me about the way you're creating your chart/table. What can you tell from the tally/table/graph you have made? How do you know who was away from school that day? Possible extension The problem Real Statistics offers more opportunities for data analysis, and goes on to invite data collection and further analysis. Possible support The Pet Graph is a simpler challenge which focuses just on a block graph.
{"url":"http://nrich.maths.org/7522/note?nomenu=1","timestamp":"2014-04-17T09:44:53Z","content_type":null,"content_length":"7798","record_id":"<urn:uuid:9e786add-5cdc-4c40-80c1-ad71a8560237>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] Re: how would this be done? type classes? [Haskell-cafe] Re: how would this be done? type classes? existentialtypes? Matthias Fischmann fis at wiwi.hu-berlin.de Sat Mar 18 08:52:57 EST 2006 On Fri, Mar 17, 2006 at 04:53:42PM +0000, Ben Rudiak-Gould wrote: > Matthias Fischmann wrote: > >and now it gets interesting: i need instances for Rs on Show, Read, > >Eq, Ord. Show is very simple, but Read? > I think you're right: it's impossible to implement Read for Rs in an > extensible way, because there's no way to obtain the necessary Resource > dictionary at runtime. I've wished in the past for a family of functions, With all the suggestions on this list I figured something out that compiles, though. It requires extension of the Read instance of Rx, but that's ok because it is an issue local to the module. Here is the class (Show a, Read a) => Resource a where rsName :: a -> String rsAdvance :: a -> a rsStarved :: a -> Bool data Rs = forall a . (Resource a) => Rs a instance Resource Rs where rsName (Rs a) = rsName a rsAdvance (Rs a) = Rs (rsAdvance a) rsStarved (Rs a) = rsStarved a instance Show Rs where show (Rs r) = "Rs " ++ rsName r ++ " (" ++ show r ++ ")" instance Read Rs where readsPrec pred = readConstructor readConstructor ('R':'s':' ':'"':s) = readResourceType "" s readConstructor s = [] readResourceType acc ('"':' ':'(':s) = readResource (reverse acc) s readResourceType acc (x:s) | isAlpha x = readResourceType (x:acc) s readResourceType _ s = [] readResource "Rice" s = case readsPrec 0 s of [(r :: RsRice, s')] -> readClosingParen (Rs r) s'; _ -> [] readResource "CrudeOil" s = case readsPrec 0 s of [(r :: RsCrudeOil, s')] -> readClosingParen (Rs r) s'; _ -> [] readResource _ s = assert False (error "no instance.") readClosingParen r (')':s) = case readsPrec pred s of rs -> (r, s) : rs readClosingParen _ _ = [] (Is there a better way to match list prefixes? If I had read a paper on monadic parsing or two, this might look more elegant, but it seems to me to be sufficient for this simple application. Feel free to post the true thing. :-) I am more convinced yet that Eq and Ord are impossible: Which specific resource type is hidden in the Rs constructor is, well: hidden. But there is a dirty trick if you have enough time and memory to waste, and it doesn't even require extention for each new instance: instance Eq Rs where r == r' = show r == show r' instance Ord Rs where compare r r' = compare (show r) (show r') And here are the resource instances: data RsRice = RsRice rsRiceName :: String, -- an intuitive and descriptive name of the resource rsRiceProduction :: Int, rsRiceConsumption :: Int, rsRiceReserve :: Int -- available for consumption or trading deriving (Show, Read, Eq, Ord) instance Resource RsRice where rsName _ = "Rice" rsAdvance r = r { rsRiceReserve = rsRiceReserve r + rsRiceProduction r - rsRiceConsumption r } rsStarved = (== 0) . rsRiceReserve rsReserve (RsRice _ _ _ res) = res rsSpend = rsRiceTrade (-) rsEarn = rsRiceTrade (+) rsRiceTrade :: (Int -> Int -> Int) -> RsRice -> Int -> RsRice rsRiceTrade (+) r amount = r { rsRiceReserve = rsRiceReserve r + amount } data RsCrudeOil = RsCrudeOil rsCrudeOilName :: String, rsCrudeOilProduction :: Int, rsCrudeOilConsumption :: Int, rsCrudeOilReserve :: Int, rsCrudeOilReserveSize :: Int -- any water unit above this number is discarded immediately. deriving (Show, Read, Eq, Ord) instance Resource RsCrudeOil where -- ... Btw, I am tempted to implemente crude oil as an incremental extension to rice, by adding a record field 'rice'. Would this increase the number of indirections for basic operations on resources, or would ghc be capable of optimizing that away entirely? Thanks again to all, I am following the thread, even if I won't answer any more. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: Digital signature Url : http://www.haskell.org//pipermail/haskell-cafe/attachments/20060318/5e3c8f38/attachment.bin More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2006-March/014981.html","timestamp":"2014-04-21T16:06:33Z","content_type":null,"content_length":"7373","record_id":"<urn:uuid:6a3d05ec-9ffb-47b7-80ab-1afef3a062f8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Un poco logico y un poco loco By I.H. we mean by First year in maths students are baffled about mathematical induction(M.I.). It is counter intuitive and it is true, it is hard to wrap one’s head around this concept. So to prove some mathematical property holds we are given the following template: 1. First show that the property $P$ ( the property of being divisible, being prime or equal to a formula’s value, you name it etc.) is true for the base case. If the base case is $c$ then we prove that the property is true for it. Thus we work on showing $P(c)$ is true. 2. Second, we assume it is true for $k$, that is $P(k)$ is true – we take this as fact. This second step is called the induction hypothesis – I.H. 3. Lastly, we prove that $P(k+1)$ is true as well using I.H. Thus should we succeed in this final step, we have all the right to claim that we have proof that $P$ is true for all $n$. It is I.H. that is hard to take. Why is it that a.) we should assume it and b.) why is it that if step 3 succeeds provided we use the fact of step 2, we have the right to say – Q.E.D. or say “proven as required”! What right do we have in assuming I.H. There are a few comments that we can make about this: • Firstly, M.I. is about the property of numbers (in general). Numbers obey this I. H. property. As a classic example consider a number $x$ such that $x > c$, for some number like 8. So if we have $x > 8$, can we say that the next number after $x$ will also be greater than 8? We can guess that this should be true. So let us prove it…Consider $x > 8$, let us add $1$ to both sides still maintaining the inequality. $x > 8 \Rightarrow x + 1 > 8 + 1 = 9 > 8 \therefore x + 1 > 8$. • Secondly, we are allowed to assume I.H. for after all we can set our $n = c$, i.e., to our base case and check if the property $P(c+1)$ holds — thus by the same token we are allowed to move from the truth of $P(c)$ to the truth that $P(c+1)$. So we can assume I.H. because of this “domino effect”. The crucial bit is to show now that due to our utilisation of I.H. for an arbitrary $n = k$, we get the truth of $P(k+1)$. • Lastly and strongly, we can take M.I. to be an axiom! Meaning, a proposition which is self-evidently (if I can use that word) true! Indeed Wikipaedia has it like this – $\forall P[P(0) \wedge \ forall k \in \mathbb{N}[P(k) \Rightarrow P(k+1)]] \bold{\Rightarrow} \forall n \in \mathbb{N}[P(n)]$. • Take a good look at this and consider the statement before the last $\Rightarrow$. If you look we have this form $A \Rightarrow B$. Remember modus ponens? It says if we have $A \Rightarrow B$ and we have $A$, deduce $B$. Well when we are doing M.I. what we are actually doing is that we are establishing the truth of $A$ and when we succeed – voila, use this with the axiom and so conclude the property $P$ holds for all Dr. Benjamin Levitt alerted me to the article I first heard from a colleague at the university where I work. Apparently a group of neuroscientists examined the brains of 15 mathematicians using Magnetic Resonance Imaging (MRI). They showed these scientists pictures of mathematical formulas and the activity of the brains of these subjects reacted in the same way one’s brain reacts when it experiences viewing a beautiful piece of art or listening to beautiful piece of music. Read the University College London article here. Thinking of this now, it is this beauty that attracted me to studying mathematics, aside from of course, being inspired by great teachers who taught me and exposed me to its beauty. Though my father was an engineer (an my uncle a PE and has a PhD in hydrology), I do not think it was the one which moved me to be a student of maths. I think I was more influenced by enthusiastic teachers who were passionate with the subject – I mean, these teachers loved the subject and their tamed enthusiasm nevertheless showed when they wrote proofs on the blackboard. I like to study maths because of its beauty yet I am so poor at it. I am so addicted to this TV series, I hope it stays for a long long time. It is beautifully done and the interaction of complex character personalities is often heartwarming and fun. Whenever I watch an episode, I cannot help but get reminded of what C. S. Peirce said… From P.J. Davis & R. Hersh, The Mathematical Experience, 1981, Penguin Books C. S. Pierce in the middle of the nineteenth century, announced that “mathematics is the science of making necessary conclusions.” Conclusions about what? About quantity? About space?The content of mathematics is not defined by this definition; mathematics could be “about” anything as long as it is a subject that exhibits the pattern of assumption-deduction- conclusion. Sherlock Holmes remarks to Watson in The Sign of Four that “Detection is, or ought to be, an exact science and should be treated in the same cold and unemotional manner. You have attempted to tinge it with romanticism, which produces much the same effect as if you worked a love-story or an elopement into the fifth proposition of Euclid.” Here Conan Doyle, with tongue in cheek, is asserting that criminal detection might very well be considered a branch of mathematics. Peirce would agree. Kurt Gödel is considered the greatest logician since Aristotle. Prior to his death, Gödel wrote a proof for the existence of God. Some theorise that the reason he did not publish nor share this proof earlier was for fear of being ostracised by the academic community where he belonged. He was afraid it wouldn’t be cool. Using a MacBook and proof assistants (Coq and Isabelle), Christoph Benzmüller of Free University of Berlin and Bruno Woltzenlogel Paleo of Free University of Vienna, confirmed that Gödel’s proof was correct, at least as far as higher order model logic is concerned. We might note that KG’s proof indeed, involved modal operators. It only took a few minutes (even seconds) for the computer to validate that the steps KG made in his proof were valid and correct. Christop and Bruno’s paper can be found here. While the report from Spiegel Online can be found here. This is an ode (although it is not a poem) to the Myhill-Nerode Theorem, thank God for it. I saw a question that required one to show that the language of strings of EVEN length is regular. Formally, it means to show that $EVEN = \{ w | |w| = 2n, n \in \mathbb{N} \}$ is regular. The way people have approached this is to specify an alphabet like $\{0,1 \}$ and then construct a NFA or a DFA (finite automaton) that recognises only a even strings from these alphabets. A sample is found below Hmmm, I am uncomfortable with this. Why? We have not been given any alphabet information on the problem. Why use or limit it to $\{0,1 \}$? What if the alphabet is composed of $\{a,b,c \}$, are we not capable of producing strings of even length from this alphabet? Sure we can but what happens to our DFA proof? It will fall short. I feel it is not a good approach, and I do not think it is rigorous, even though the proof is concisely crisp. The proof should handle the generic case with no appeal as to how the alphabet looks like. I believe that is the best approach. We can of course handle the general case by stating a general assumption on $\Sigma$ and say $\Sigma = \{ a_1, a_2,a_3,...,a_n \}$. Then we replace the 0/1 notation in the diagram with $a_1/a_2/a_3/.../a_n$., then we are done. That covers every possibility and will only accept a string of even length. Anyway back to the problem what if we have not been given the composition of $\Sigma$ ? The Myhill-Nerode Theorem ( briefly treated here) if we know it is there to rescue you. Briefly it effectively states that the following statements are equivalent: • $L \subseteq \Sigma^*$ is regular • The relation $R_L$, such that $x,y \in \Sigma^*, (x,y) \in R_L$ iff $\forall z \in \Sigma^*, xz \in L$ exactly when $yz \in L$, is finite. We prove now that $EVEN$ is regular: Form the membership $R_L$ iff the right side is satisfied. Hence, $(x,y) \in R_L$ then we have two cases for $z$. 1. $z$ may be of even length $m$. Then $xz \in EVEN$, because $|xz| = |x|+|z| = 2n + m$ is even because an even number ($2n$) + even number($m$) results an even number, thus $xz \in EVEN$. Similarly for $yz \in EVEN$, the same argument applies. 2. $z$ may be of odd length $m$. Then $xz ot \in EVEN$, because $|xz| = |x|+|z| = 2n + m$ is odd because an even number ($2n$) + odd number($m$) results an odd number, thus $xz ot \in EVEN$. Similarly for $yz ot \in EVEN$, the same argument applies. 3. There is only one equivalence class we can make of $R_L$, so finite. We have shown that the second part of the above theorem is satisfied by our $EVEN$ language, and since by the theorem, this is equivalent to the first part, we then conclude that $EVEN$ is regular. This is for young people doing CS and are wondering how to use the Pumping Lemma to verify if a formal language is regular or not :-). So let us have a theorem about the balanced parenthesis language $L_{()} = \{ (^n \varphi )^n | n \geq 1 \wedge \varphi eq \Lambda \}$ is not a regular language. Some text books use $\Lambda$ to denote the empty string so here $\varphi$ is none empty. The method of proof is by contraction (Reduction ad Absurdium – RAA). So we assume the language is regular and hence can be pumped but show it to be otherwise, or not true by arriving at a contradiction. By the Pumping Lemma (PLem), $\exists m$ so that $m$ can be used to split $w \in L_{()}$ into parts. We actually use this $m$ to form $w$. We can do this because the Lemma says this $m$ is valid for any $w, |w| \geq m$. Assume $L_{()}$ to be regular. Then by PLem, there exists an $m$ such that for $w \in L_{()}$, $|w| \geq m$. Consider now the $w$ formed by setting $n = m$. Then we have $w = (^m \varphi )^m$ . Further we know that $|w| = 2m+1 > m$ satisfying PLem premise, so it can be applied. As per PLem, we can break $w$ into $x, y, z$ components. As per PLem also $|xy| \leq m$. Looking at the form of our $w$ this implies that $xy$ must be composed of all left parenthesis, i.e $xy = (^m$, then $y=(^p$ for some $p < m$. As per PLem, we can pump $y$ for any $k \geq 0$. So let $k =0$ then the new $w = (^{m-p} \varphi )^m$. But this implies that $|(^{m - p}| = m - p eq m =|)^m|$, i.e. parentheses are not balanced. But this means $w ot \in L_{()}$. Contradiction. At this point we have found a $k$ where the decomposition results in the string outside of $L_{()}$and we can stop. Anyway consider now also $y$ pumped upwards to $k$. Then we have the new $w = (^{m-p}(^{pk} \varphi )^m$. Looking at the left of $\varphi$ we have $|(^{m-p}(^{pk}|= m - p + pk eq m= |)^m|$. Again the parentheses do not balance out. Thus $w ot \in L_{()}$. Contradiction. Feedback is appreciated – let me know if it has helped/not helped. Thanks. A study in the USA reports that students learn more from non-tenured track teachers. The article is found here at InsideHigherEd.com This is heart warming. It is nice to know that casual lecturers have their place. Sometimes it is even better because you do not have to always look for funding to justify your existence. I feel for my tenured friends, they also work long hours drafting their grant proposals.
{"url":"http://unpocologico.wordpress.com/","timestamp":"2014-04-18T18:14:12Z","content_type":null,"content_length":"62822","record_id":"<urn:uuid:762f4a11-8122-4cb5-ba0d-b6e38fc61819>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
small probability question May 13th 2012, 08:42 PM #1 May 2012 small probability question It goes something like this: There is a lottery that has a pot that is drawn at 1900 dollars. Tickets can be bought for $5 each, and each ticket entered into the lottery will raise the pot by $5. Three prizes will be given: 1st place: $1300 2nd place: $650 3rd place: $300 You can enter as many times as you want, and you can also recruit an unlimited number of friends to enter for you (within reason). Each person can only win one prize. I am trying to calculate the optimal number of tickets to enter to gain the most profit from this, based on the odds. I want to calculate two different points: one is where you have a 50% chance of winning a prize, and the other is where you spend the same amount of money as you expect to earn back (for example, if i expected to win back on average (1300+650+300)/3= 750, I would want to calculate the probability of me winning money if I spent 750 dollars on the lottery. First I thought that 1 ticket is equivalent to a 3/380 probability of winning (1900/5=380 total entries) but that doesn't seem right, so i am stuck... any light you could shed on the situation would be greatly appreciated no i dont want my homework done for me, just any pointers on how to calculate the probability/ equations to use would be much appreciated Re: small probability question bump please May 14th 2012, 07:07 PM #2 May 2012
{"url":"http://mathhelpforum.com/statistics/198787-small-probability-question.html","timestamp":"2014-04-16T20:50:55Z","content_type":null,"content_length":"32376","record_id":"<urn:uuid:42bce5b4-77f7-4b0f-bc46-efcb922130d2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Phylogenetics: HyPhy Lab From eebedia EEB 349: Phylogenetics The goal of this lab exercise is to show you how to use the HyPhy program for data exploration and hypothesis testing within a maximum likelihood framework. Obtaining the sequences A Nexus data file containing sequences and a tree is located here: wickett.nex. This dataset was assembled by our own Norm Wickett and contains several sequences of bryophytes, including two from a parasitic bryophyte that is non-green and does not photosynthesize. Norm's sequences have now been published, so there is no need for secrecy, but the names of the taxa have been nevertheless been obscured (pending permission to reveal them). The parasitic ones are clearly labeled, however. The sequences are of a gene important for photosynthesis. The basic idea behind today's lab is to see if we can detect shifts in the evolution of these sequences at the point where these organisms became non-photosynthetic (thus presumably no longer needing genes like this). Using a codon model Although HyPhy can utilize the same standard models found in PAUP*, it also lets you to do some interesting and useful things that PAUP* cannot, such as (1) use codon and secondary structure models, and (2) allow the model of evolution to change across a tree. Downloading and installing HyPhy HyPhy is available for Mac and Windows from the HyPhy home page. Loading data into HyPhy Start HyPhy and dismiss the "Welcome to HyPhy" dialog box (if it appears) by pressing the Ok button. Choose File > Open > Open Data File, then navigate to and select the wickett.nex data file that you saved previously. You should now see the sequences appear in a window entitled "DataSet wickett". I will refer to this as the Data window from this point on. Creating a partition HyPhy thinks of your data as being composed of one or more partitions. Partitioning data means assigning characters (sites) into mutually-exclusive groups. For example, suppose your data set comprises two genes: you might want to assign a separate model for each gene, so in this case you would create two partitions (one for each gene). The word partition is used in two ways The word partition is ambiguous: it formerly meant "wall" or "divider" but, with the advent of computer hard drives, it has also come to mean the space between the walls or dividers. When someone says they partitioned their data, they mean that they erected dividers, for example between the rbcL and 18S genes. When someone says they applied a GTR+I+G model to the rbcL partition, they have now switched to using the word partition to mean the sites on the rbcL side of the divider. No partitioning implies one partition! Even if you choose to not partition (old meaning) your data in HyPhy, you must go through the motions of creating a single partition (new meaning) because HyPhy only allows you to apply a model to a partition. To create a single partition containing all of your sites, choose Edit > Select All from the Data window menu, then choose Data > Selection->Partition to assign all the selected sites to a new partition. You should see a line appear below your sequences with a partition name "wickett_part". Assign a data type to your partition Now that you have a partition, you can create a model for it. Under the column name Partition Type, choose codon (just press the Ok button in the dialog box that appears). You have now chosen to view your data as codons (i.e. three nucleotides at a time) rather than as single nucleotides. The third possible choice for Partition Type is Di-nucl., which you would use if you were planning to use a secondary structure (i.e. stem) model, which treats each sequential pair of nucleotides as a state. Assign a tree topology to your partition Under Tree Topology, you have several options. Because a tree topology was defined in the wickett.nex data file, this tree topology shows up in the drop-down list as wickett_tree. Choose wickett_tree as the tree topology for your partition. Assign a substitution model to your partition The only substitution models that show up in the drop-down list are codon models because earlier you chose to treat your data as codon sequences rather than nucleotide sequences. The substitution model you should use is MG94xHKY85_3x4 (second from the bottom). This model is like the Muse and Gaut (1994) codon model, which is the only codon model I discussed in lecture. You will remember (I'm sure) that the MG94 model allows substitutions to be either synonymous or non-synonymous, but does not make a distinction between transitions and transversions. The HKY85 model distinguishes between transitions and transversions (remember kappa?), but does not distinguish between synonymous and non-synonymous substitutions. Thus, MG94xHKY85 is a hybrid model that allows all four possibilities: synonymous transitions, synonymous transversions, nonsynonymous transitions and nonsynonymous transversions. The name is nevertheless a bit puzzling because (as you will find out in a few minutes) it actually behaves more like the GTR model than the HKY model in that it allows all 6 possible types of substitutions (A<->C, A<->G, A<->T, C<->G, C<->T and G<->T) to have their own rates. The 3x4 part on the end of the name means that the 61 codon frequencies are obtained by multiplying together the four nucleotide frequencies that are estimated separately for the three codon positions. Thus, the frequency for the AGT codon is obtained by multiplying together these three quantities: • the frequency of A nucleotides at first positions • the frequency of G nucleotides at second positions • the frequency of T nucleotides at third positions (Note: HyPhy corrects these for the fact that the three stop codons are not included.) This involves estimating the 4 nucleotides frequencies at each of the 3 codon positions, hence the 3x4 in the Local vs. global You have only a couple more decisions to make before calculating the likelihood. You must choose Local or Global from the Parameters drop-down list. Local means that HyPhy will estimate some substitution model parameters for every branch in the tree. Global means that all substitution model parameters will apply to the entire tree. In all the models discussed thus far in the course, we were effectively using the global option except for the branch lengths themselves, which are always local parameters (it doesn't usually make any sense to think of every branch having the same Tell HyPhy to use the Local option (this should already be set correctly). Equilibrium frequencies You should also leave the equilibrium frequencies set to "Partition". This sets the equilibrium base frequencies to the empirical values (i.e. the frequency of A is the number of As observed in the entire partition divided by the total number of nucleotides in the partition). Other options include: • Dataset, which would not be different than "Partition" in this case where there is only one partition defined, • Equal, which sets all base frequencies equal to 0.25, and • Estimate, which estimates the base frequencies Computing the likelihood under a local codon model You are now ready to compute the maximum likelihood estimates of the parameters in your model. Choose Likelihood > Build Function to build a likelihood function, then Likelihood > Optimize to optimize the likelihood function (i.e. search for the highest point on the likelihood surface, thus obtaining maximum likelihood estimates of all parameters). Saving the results When HyPhy has finished optimizing (this will take several seconds to several minutes, depending on the speed of the computer you are using), it will pop up a "Likelihood parameters for wickett" window (hereafter I will just refer to this as the Parameters window) showing you values for all the quantities it estimated. Click on the HYPHY Console window to bring it to the foreground, then, using the scroll bar to move up if needed, answer the following questions: What is the maximum log-likelihood under this model? How many shared (i.e. global) parameters does HyPhy say it estimated? What are these global parameters? How many local parameters does HyPhy say it estimated? What are these local parameters? (Hint: for n taxa, there are 2n-3 branches) Switch back to the Parameters window now and look at the very bottom of the window to answer these questions: What is the total number of parameters estimated? What is the value of AIC reported by HyPhy? Calculate the AIC yourself using this formula: AIC = -2*lnL + 2*nparams Before moving on, save a snapshot of the likelihood function with the current parameter values by choosing "Save LF state" from the drop-down list box at the top of the Parameters window. Choose the name "unconstrained" when asked. After saving the state of the likelihood function, choose "Select as alternative" from the same drop-down list. This will allow us to easily perform likelihood ratio tests using another, simpler model as the null model. Viewing the tree and obtaining information about branches The first item in the Parameters window should be "wickett_tree". Double-click this line to bring up a Tree window showing the tree. You may need to expand the Tree window to see the entire tree. This shows the tree with branch lengths scaled to be proportional to the expected number of substitutions (the normal way to scale branch lengths). The next step is to compare the unconstrained model (in which there are the same number of omega parameters as there are branches) with simpler models involving fewer omega parameters. For example, one model you will use in a few minutes allows the three branches in the parasite clade to evolve under one omega, while all other branches evolve under an omega value that is potentially different. For future reference, you should determine now what name HyPhy is using for the branch leading to the two parasite taxa. Click on the branch leading to the two parasites. It should turn into a dotted line. Now double-click this branch and you should get a dialog box popping up with every bit of information known about this branch: What is the branch id for this branch that leads to the two parasite sequences? You can now close the "Branch Info" dialog box. Computing the likelihood under the most-constrained model Under the current (unconstrained) model, two parameters were estimated for each branch: the synonymous substitution rate and the nonsynonymous substitution rate. Now let's constrain each branch so that the ratio (omega) between the nonsynonymous rate and the synonymous rate is identical for all branches. To do this, first notice that each branch is represented by two parameters in the Parameter window. For example, the branch leading to Parasite_A is associated with these two parameters: The goal is to constrain these two parameters so that the nonsynonymous rate is always omega times the synonymous rate, where omega is a new parameter shared by all branches. Select the two parameters listed above for the branch leading to PARASITE_A. (You can do this by single-clicking both parameters while simultaneously holding down the Shift key.) Once you have both parameters selected, click on the third button from the left at the top of the Parameters window. This is the button decorated with the symbol for proportionality. Clicking this button will produce a long list of possiblities: here is the one you should choose: wickett_tree.PARASITE_A.nonSynRate:={New Ratio}*wickett_tree.PARASITE_A.synRate Once you select this option, HyPhy will ask for a name: type as the name of the new ratio. Now select the two parameters for a different pair of branches, say Click the proportionality constraint button again, but this time choose Note that you can choose to use a constraint for other branches once you have defined it for one branch. Continue to apply this constraint to all 19 remaining branches. When you are finished, choose Likelihood > Optimize from the menu at the top of the Parameters window. Performing a model comparison After HyPhy is finished optimizing the likelihood function, answer the following questions using the numbers at the bottom of the Parameters window: What is the estimated value of the omega parameter? Does this value of omega imply stabilizing selection, neutral evolution or positive selection? What is the maximized log-likelihood of this (most-constrained) model? How many parameters are being estimated now? What is the AIC value reported by HyPhy? Does this most-constrained model fit the data better than the unconstrained model? What is the difference between the log-likelihood of this (most-constrained) model and the log-likelihood of the previous (unconstrained) model? What is the likelihood ratio test statistic for this comparison? How many degrees of freedom does this likelihood ratio test have? Is the likelihood ratio test significant? (click for an online chi-square calculator) Is a model in which one value of omega applies to every branch satisfactory, or is there enough variation in omega across the tree that it is necessary for each branch to have its own specific omega parameter in order to fit the data well? Does AIC concur with the likelihood ratio test? (Hint: models with smaller values of AIC are preferred over models with larger AIC values.) Although you should do the calculation yourself first, you can now have HyPhy perform the likelihood ratio test for you to check your calculations. In the drop-down list box at the top of the Parameters window, choose "Save LF state" and name it "most-constrained". Now, using the same list box, choose "Select as null". Now perform the test by choosing LRT from the same drop-down list box. The results should appear in the HYPHY Console window. Computing the likelihood under a partially-constrained model Let's try one more model that is intermediate between the unconstrained and most-constrained models you just analyzed. This model will allow for omega to be different in the non-green, parasitic clade compared to the remaining green, non-parasite part of the tree. For one of the three branches in the parasite clade (say, the branch leading to PARASITE_A), select the two parameters associated with the branch and click the rightmost button at the top of the Parameters window (this button releases the constraint previously placed on these two parameters). With the two parameters still selected, click the proportionality constraint button again (third from left) and choose the option wickett_tree.PARASITE_A.nonSynRate:={New Ratio}*wickett_tree.PARASITE_A.synRate and specify as the name of the New Ratio. Now apply this new ratio to the other two branches in the clade by first releasing the existing constraint and then applying the omega2 constraint. Once you are finished, choose Likelihood > Optimize again to search for the maximum likelihood point. Now choose "Save LF state", naming this one "partially-constrained". Answer the following questions using the values shown in the Parameter window: What is the maximized log-likelihood under this model? How many parameters were estimated? What is the value of omega now? What is the value of omega2? Which is higher: omega or omega2? Does this make sense in light of what you know about the organisms involved and the function of this gene? What is the AIC value reported by HyPhy for this model? Based on AIC, which of the three models tested thus far would you prefer? You can now perform a likelihood ratio test. Using the drop-down list box at the top of the Parameters window, specify the most-constrained model to be the null model and the partially-constrained model to be the alternative. Choose LRT from the drop-down list to perform the test. Perform one more likelihood ratio test, this time using the partially-constrained model as the null and the unconstrained model as the alternative. Do AIC and LRT agree on which model of the three models is best? Why or why not?
{"url":"http://hydrodictyon.eeb.uconn.edu/eebedia/index.php/Phylogenetics:_HyPhy_Lab","timestamp":"2014-04-16T04:22:40Z","content_type":null,"content_length":"34724","record_id":"<urn:uuid:18e41b04-2370-4f60-b9c9-4e6625d26c22>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Items where Department is "Faculty of Science and Technology > Mathematics and Statistics" and Year is 1995 Number of items: 41. Aitkin, M. and Francis, Brian J. (1995) Fitting overdispersed generalized linear models by non-parametric maximum likelihood. GLIM Newsletter, 25. pp. 37-45. Baldwin, D. and Johnson, F. N. (1995) Tolerability and safety of citalopram. Reviews in Contemporary Pharmacotherapy, 6. pp. 315-325. Barry, J. and Diggle, Peter J. (1995) Choosing the smoothing parameter in a Fourier approach to non-parametric deconvolution of a density estimate. Journal of Non-parametric Statistics, 4 (3). pp. 223-232. ISSN 1029-0311 Berridge, Damon M. (1995) Modelling ordinal recurrent events. Journal of Statistical Planning and Inference, 47 (1-2). pp. 71-78. ISSN 0378-3758 Blair, Lynne and Blair, Gordon S. and Bowman, Howard and Chetwynd, Amanda G. (1995) Formal specification and verification of multimedia systems in open distributed processing. Computer Standards and Interfaces, 17 (5-6). pp. 413-436. ISSN 0920-5489 Blower, Gordon (1995) Abel means of operator-valued processes. Studia Mathematica, 115 (3). pp. 261-276. ISSN 0039-3223 Blower, Gordon (1995) On the stochastic spectral radius formula. The Quarterly Journal of Mathematics, 46 (1). pp. 1-10. ISSN 0033-5606 Bound, John P. and Francis, Brian J. and Harvey, Peter W. (1995) Downs-Syndrome - prevalence and ionizing-radiation in an area of north-west England, 1957-1991. Journal of Epidemiology and Community Health, 49 (2). pp. 164-170. Bowman, Howard and Blair, Lynne and Blair, Gordon S. and Chetwynd, Amanda G. (1995) Formal description of multimedia systems : an assessment of potential techniques. Computer Communications, 18 (12). pp. 964-977. Brown, Gavin M. and Nieduszynski, Ian A. and Morris, Haydn G. and Abram, Beverley L. and Huckerby, Thomas N. and Block, Joel A. (1995) Skeletal keratan sulphate structural analysis using keratanase II digestion followed by high-performance anion-exchange chromatography. Glycobiology, 5 (3). pp. 311-317. Chetwynd, Amanda G. and Rhodes, S. J. (1995) Chessboard squares. Discrete Mathematics, 141 (1-3). pp. 47-59. ISSN 0012-365X Crouchley, Rob (1995) A random-effects model for ordered categorical-data. Journal of the American Statistical Association, 90 (430). pp. 489-498. Crouchley, Rob and Pickles, Andrew R. (1995) Multivariate survival models for repeated and correlated events. Journal of Statistical Planning and Inference, 47 (1-2). pp. 95-110. ISSN 0378-3758 Diggle, Peter J. and Chetwynd, Amanda G. and Häggkvist, R. and Morris, S. E. (1995) Second-order analysis of space-time disease clustering. Statistical Methods in Medical Research, 4 (2). pp. Dixon, M. J. and Tawn, J. A. (1995) A semi-parametric model for multivariate extreme values. Statistics and Computing, 5 (3). pp. 215-252. Donnelly, P. K. and Oman, J. P. and Henderson, Robin and Opelz, G. (1995) Predialysis living donor renal transplantation - is it still the gold standard for costs, convenience and graft survival? Transplantation Proceedings, 27 (1). pp. 1444-1446. Flowerdew, R. T. N. and Davies, R. B. and Finch, J. and Mason, J. and Al-Hamad, A. and Hayes, L. and Geddes, A. (1995) Migration, kinship and household change. Changing Britain: Newsletter of the ESRC Population and Household Change Research Programme, 2. Gore, M. E. and Preston, Nancy and A'Herm, R. P. and Hill, C. and Mitchell, P. and Chang, J. and Nicolson, M. (1995) Platinum-Taxol non-cross resistance in epithelial ovarian cancer. British Journal of Cancer, 71 (6). pp. 1308-1310. ISSN 1532-1827 Henderson, Robin (1995) Problems and prediction in survival data analysis. Statistics in Medicine, 14 (2). pp. 161-184. Henderson, Robin and McKnespiey, P. and Temple, A. (1995) The volumetric calibration of tanks: design of trials. ESARDA Bulletin, 17. pp. 365-369. Inglis, Nicholas F. J. and Wiseman, Julian D. A. (1995) Very odd sequences. Journal of Combinatorial Theory, Series A, 71 (1). pp. 89-96. ISSN 0097-3165 Jameson, G. J. O. (1995) The number of elements required to determine (p,1)-summing norms. Illinois Journal of Mathematics, 39 (2). pp. 251-257. Johnson, F. N. (1995) Emerging clinical applications of divalproex. Reviews in Contemporary Pharmacotherapy, 6. pp. 573-585. Kingman, S. P. and Gatrell, A. C. and Rowlingson, B. (1995) Testing for clustering of health events within a geographical information system framework. Environment and Planning A, 27 (5). pp. Ledford, A. W. and Tawn, J. A. (1995) Contribution to discussion of the paper by Cheng and Traylor. Journal of the Royal Statistical Society - Series B: Statistical Methodology, 57. pp. 27-28. ISSN Mitchell, J. D. and Davies, Richard B. and Al-Hamad, A. and Gatrell, Anthony C. and Batterby, G. (1995) MND risk factors: an epidemiological study in the north west of England. Journal of the Neurological Sciences, 129 (Supple). pp. 61-64. ISSN 0022-510X Mitchell, R. and Hollis, S. and Crowley, V. and McLoughlin, J. and Peers, N. and Robertson, W. R. (1995) Immunometric assays of luteinizing-hormone (LH) - differences in recognition of plasma-LH by anti-intact and beta-subunit-specific antibodies in various physiological and pathophysiological situations. Clinical Chemistry, 41 (8). pp. 1139-1145. ISSN 1530-8561 Mitchell, Robert and Hollis, Sally and Rothwell, Claire and Robertson, William R. (1995) Age related changes in the pituitary-testicular axis in normal men: lower serum testosterone results from decreased bioactive LH drive. Clinical Endocrinology, 42 (5). pp. 501-507. ISSN 1365-2265 Montgomery, S. A. and Johnson, F. N. (1995) Citalopram in the treatment of depression. Reviews in Contemporary Pharmacotherapy, 6. pp. 297-306. Moyeed, R. A. (1995) Spline smoother as a dynamic linear model. Australian Journal of Statistics, 37 (2). pp. 193-204. Nicholson, M. and Barry, J. (1995) Inferences from surveys about the presence of an unobserved species: estimating the probability that a species is present at a random sampling point. Oikos, 72 (1). pp. 74-78. Oskrochi, Gholam (1995) Analysis of censored correlated observations. Journal of Statistical Planning and Inference, 47 (1-2). pp. 165-180. ISSN 0378-3758 Power, S. C. (1995) Homology for operator algebras I: spectral homology for reflexive algebras. Journal of Functional Analysis, 131 (1). pp. 29-53. Power, S. C. (1995) Infinite lexicographic products of triangular algebras. Bulletin of the London Mathematical Society, 27 (3). pp. 273-277. ISSN 1469-2120 Robinson, M. E. and Tawn, J. A. (1995) Statistics for exceptional athletics records. Journal of the Royal Statistical Society - Series C: Applied Statistics, 44 (4). pp. 499-511. ISSN 1467-9876 Towers, David (1995) Lie algebras whose maximal subalgebras are modular*. Algebras, Groups and Geometries, 12. pp. 89-98. de Falguerolles, A. and Francis, Brian J. (1995) Fitting bilinear models in GLIM. GLIM Newsletter, 25. pp. 9-20. dos Santos, Dirley M. and Davies, Richard B. and Francis, Brian J. (1995) Non-parametric hazard versus non-parametric frailty distribution in modelling recurrence of breast-cancer. Journal of Statistical Planning and Inference, 47 (1-2). pp. 111-127. ISSN 0378-3758 Seeber, Gilg and Francis, Brian and Hatzinger, Reinhold and Steckel-Berger, Gabriel, eds. (1995) Statistical modelling:Proceedings of the 10th International Workshop on Statistical Modelling. Innsbruck, Austria, 10-14 July, 1995. Lecture Notes in Statistics . Springer Verlag, Berlin. ISBN 0387945652 9780387945651 Chetwynd, Amanda G. and Burn, B. (1995) A cascade of numbers. Edward Arnold, London. Chetwynd, Amanda G. and Diggle, Peter J. (1995) Discrete mathematics. Modular mathematics . Arnold, London. ISBN 0340610476
{"url":"http://eprints.lancs.ac.uk/view/divisions/mas/1995.type.html","timestamp":"2014-04-24T11:09:23Z","content_type":null,"content_length":"21458","record_id":"<urn:uuid:ac298e00-c714-47c4-80f1-809747729dd3>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Stat 351 Nonparametric Statistics Instructor: Moo K. Chung E-mail: mchung@stat.wisc.edu Lectures: TR 2:30-3:45 1289 CS&S Office Hour: TR 1:30-2:20 or by appointment Office: 4382 CS&S Tel: (608) 262-1287 Requirements: Stat 201,301 or 224. Assignment problems will require computer programming in matlab, S+/R or any other programming language. Textbook: J.D. Gibbons and S. Chakraborti, Nonparametric Statistical Inference, 3rd Edition, Marcel Dekker, Inc., 1992. One copy has been asked to be reserved in the engineering library. ☆ Basics: hypothesis testing, p-values (lecture 1,2). ☆ Order statistics: probability integral transform, random number generator, empirical distributions, histograms (lecture 3,4). ☆ Run test: the total number of runs, the length of the longest run (lecture 5,6). ☆ Goodness-of-fit test: Chi-squre, Kolmogorov-Smirnov test (lecture 7-9) ☆ Testing Normality: normal probability plot, qq-plots (lecture 10) ☆ Rank test: ranks, correlations (lecture 11) ☆ Sign test: ordinary sign test, Wilcoxon signed-rank test (lecture 12,13) ☆ Two sample test:Kolmogorv-Smirnov test, Mann-Whitney U test (lecture 14, 15) ☆ Kruskal-Wallis H test (lecture 16) ☆ Kernel methods: kernel density estimation, kernel smoothing, numerical implementation in one dimension (lecture 17-18) ☆ Splines: Bezier curves, cubic splines, cublic spline smoothing (lecture 19-20) ☆ Basis function methods (lecture 21-23) Course Evaluation • Assignments (30%) No late homework will be graded. • Midterm Exam (40%) There will be one in class midterms on October 15 or 17 and a takehome exam afterward. Calculators are permitted and one page cheating sheet will be permitted. The last day to drop the course is November 1. • Final Exam or Project (30%) Covers everything. Calculators and one page cheating sheet will be permitted. Computer projects in matlab or S+/R can substitute the final exam. Project topics will be announced later. Students are required to submit at least 7 page double spaced and typed report by the last class day ( December 12). Final Grade (100%) = Assignments (30%) + Midterm (20%) + Takehome Exam (20%) + Final Exam or Project (30%). Computer Access and Softwares 1. R http://cran.r-project.org/ 2. Matlab http://www.stat.wisc.edu/computing/ 3. Creating an account in stat computers http://www.stat.wisc.edu/computing/instruct.html Matlab code handouts Homeworks & Exams Homework 1 Homework 2 Takehome Exam 1 Homework 3 Takehome Exam 2 Homework 4
{"url":"http://www.stat.wisc.edu/~mchung/teaching/stat351/stat351.html","timestamp":"2014-04-18T00:12:57Z","content_type":null,"content_length":"7388","record_id":"<urn:uuid:096687fe-2878-4e32-8cd2-3aef40fe894b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability- pick 10 numbers from 1-100 January 26th 2013, 02:22 PM #1 Probability- pick 10 numbers from 1-100 Hi all, You choose a number at random from the numbers 1-100 ten times. What is the probability of choosing 10 distinct numbers? What is the probability that the first number chosen is larger than each of the other nine numbers chosen? So, for the first question, I compute the following- (100*99*98*97*96*95*94*93*92*91)/(100^10) = .628 For the second question, I can either first draw a one, OR a two, OR a three. If I draw a 1, nothing is lower so the probability is 0. If I draw a 100, 99 numbers are lower. So, I perform the following calculation- SUM (1/100) * ((n/100)^9) for n = 1 to 99. This is equivalent to saying "probability I first draw number n AND I subsequently draw 9 numbers lower than it" I get .095. Is this correct? Is there a better way? Re: Probability- pick 10 numbers from 1-100 Hi all, You choose a number at random from the numbers 1-100 ten times. What is the probability of choosing 10 distinct numbers? What is the probability that the first number chosen is larger than each of the other nine numbers chosen? So, for the first question, I compute the following- (100*99*98*97*96*95*94*93*92*91)/(100^10) = .628 For the second question, I can either first draw a one, OR a two, OR a three. If I draw a 1, nothing is lower so the probability is 0. If I draw a 100, 99 numbers are lower. So, I perform the following calculation- SUM (1/100) * ((n/100)^9) for n = 1 to 99. This is equivalent to saying "probability I first draw number n AND I subsequently draw 9 numbers lower than it" I get .095. Is this correct? Is there a better way? The reason I deleted my reply is that it is not clear to me from the wording of the question if the numbers are distinct as in part #1 or not. I do agree with you on that part. Are they distinct or not? Re: Probability- pick 10 numbers from 1-100 The numbers do not have to be distinct in the second part of the question January 26th 2013, 03:28 PM #2 January 26th 2013, 08:39 PM #3
{"url":"http://mathhelpforum.com/advanced-applied-math/212082-probability-pick-10-numbers-1-100-a.html","timestamp":"2014-04-18T07:11:57Z","content_type":null,"content_length":"38280","record_id":"<urn:uuid:38fb6063-5d8e-4581-aeb4-c6b0945121c1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Diffraction of electromagnetic waves by periodic arrays of rectangular cylinders Reflection, transmission, and absorption of electromagnetic waves by periodic arrays of conducting or dielectric rectangular cylinders are studied by a finite-difference time-domain technique. Truncated gratings made of lossless and lossy conducting and dielectric elements are considered. Results for surface current density, transmission, and reflection coefficients are calculated and compared with corresponding results in the literature, which are obtained by approximate or rigorous methods applicable only to idealized infinite models. An excellent agreement is observed in all cases, which demonstrates the accuracy and efficacy of our proposed analysis technique. Additionally, this numerical method easily analyzes practical gratings that contain a finite number of elements made of lossless, lossy, or even inhomogeneous materials. The results rapidly approach those for the idealized infinite arrays as the number of elements is increased. The method can also solve nested gratings, stacked gratings, and holographic gratings with little analytical or computational effort. © 2006 Optical Society of America OCIS Codes (050.0050) Diffraction and gratings : Diffraction and gratings (050.1940) Diffraction and gratings : Diffraction (050.1950) Diffraction and gratings : Diffraction gratings (050.2770) Diffraction and gratings : Gratings ToC Category: Diffraction and Gratings Original Manuscript: March 29, 2005 Revised Manuscript: May 27, 2005 Manuscript Accepted: June 2, 2005 Mohammad R. Zunoubi and Hassan A. Kalhor, "Diffraction of electromagnetic waves by periodic arrays of rectangular cylinders," J. Opt. Soc. Am. A 23, 306-313 (2006) Sort: Year | Journal | Reset 1. W. P. Pinello, R. Lee, and A. C. Cangellaris, "Finite element modeling of electromagnetic wave interactions with periodic dielectric structures," IEEE Trans. Microwave Theory Tech. 42, 2294-2301 (1994). [CrossRef] 2. H. A. Kalhor, "Diffraction of electromagnetic waves by plane metallic gratings," J. Opt. Soc. Am. 68, 1202-1205 (1978). [CrossRef] 3. M. Neviere, M. Cadilhac, and R. Petit, "Applications of conformal mappings to the diffraction of electromagnetic waves by a grating," IEEE Trans. Antennas Propag. AP-21, 37-46 (1973). [CrossRef] 4. R. C. Hall and R. Mittra, "Scattering from a periodic array of resistive arrays," IEEE Trans. Antennas Propag. AP-33, 1009-1011 (1985). [CrossRef] 5. J. A. Kong, "Second-order coupled-mode equations for spatially periodic media," J. Opt. Soc. Am. 67, 825-829 (1977). [CrossRef] 6. D. E. Tremain and K. K. Mei, "Application of the unimoment method to scattering from periodic dielectric structures," J. Opt. Soc. Am. 68, 775-783 (1978). [CrossRef] 7. N. A. Khizhnyak, N. V. Ryazantseva, and V. V. Yachin, "The scattering of electromagnetic waves by a periodic magnetodielectric layer," J. Electromagn. Waves Appl. 10, 731-739 (1996). [CrossRef] 8. T. L. Zienko, A. I. Nosieh, and Y. Okuno, "Plane wave scattering and absorption by resistive strip and dielectric-strip periodic gratings," IEEE Trans. Antennas Propag. 46, 498-505 (1998). 9. C. Zuffada, T. Cwik, and C. Ditchman, "Synthesis of novel all-dielectric grating filters using genetic algorithms," IEEE Trans. Antennas Propag. 46, 657-663 (1998). [CrossRef] 10. S. Cui and D. Weile, "Analysis of electromagnetic scattering from periodic structures by FEM truncated by anisotropic PML boundary condition," Microwave Opt. Technol. Lett. 35, 106-110 (2002). 11. W. Ya, S. Dey, and R. Mittra, "Modeling of periodic structures using the finite-difference time-domain (FDTD)," in Proceedings of 1999 IEEE Antennas and Propagation Society International Symposium (IEEE Press, 1999), Vol. 1, pp. 594-597. 12. H. A. Kalhor, "Plane metallic gratings of finite number of strip," IEEE Trans. Antennas Propag. 37, 406-407 (1989). [CrossRef] 13. H. Liu and R. Paknys, "Scattering from periodic array of grounded parallel strips, TE incidence," IEEE Trans. Antennas Propag. 50, 798-806 (2002). [CrossRef] 14. K. S. Yee, "Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media," IEEE Trans. Antennas Propag. 14, 302-307 (1966). [CrossRef] 15. J. Fang and K. K. Mei, "A super-absorbing boundary algorithm for solving electromagnetic problems by time-domain finite-difference method," in Proceedings of 1988 IEEE Antennas and Propagation Society International Symposium (IEEE Press, 1988), pp. 472-475. 16. A. Peterson, "Integral equation computer program for periodic planar conducting arrays," Electrical and Computer Engineering Department, Georgia Institute of Technology, Atlanta, Ga. (personal communication, 2004). 17. H. L. Bertoni, Li-H. S. Cheo, and T. Tamir, "Frequency-selective reflection and transmission by a periodic dielectric layer," IEEE Trans. Antennas Propag. 37, 78-83 (1989). [CrossRef] OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-23-2-306","timestamp":"2014-04-19T08:05:06Z","content_type":null,"content_length":"176879","record_id":"<urn:uuid:3fe050f0-d936-4b5f-832f-46674c77f066>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with a few quick questions (Only 14 year olds work so easy) November 1st 2008, 08:07 AM Help with a few quick questions (Only 14 year olds work so easy) 1) For the two numbers "-2 and 8", it is impossibl to find the geometric mean. Is this true or false? And explain your answer 2)Write the missing numbers in this fraction sum: "1 OVER 3" PLUS "8 OVER X" X=? 3) The area of face of a cube is 9xSQUARED. Write and expression for the total surface rea of the cube. Write your answer as simply as possible Also, write an expression for the volume of the cube. Also as simply as possible. 4) The fraction "1 OVER 9" is half of the fraction ________ THNK YOU! November 1st 2008, 08:30 AM 1) For the two numbers "-2 and 8", it is impossibl to find the geometric mean. Is this true or false? And explain your answer The geometric mean only applies to positive numbers in order to avoid taking the root of a negative product, thus yielding an imaginary number. In your example, you would have $\sqrt{-16}$. Not 2)Write the missing numbers in this fraction sum: "1 OVER 3" PLUS "8 OVER X" X=? $\frac{1}{3}+\frac{8}{x}= ??$ Something missing here, I think. 3) The area of face of a cube is 9xSQUARED. Write and expression for the total surface rea of the cube. Write your answer as simply as possible Total surface would be the sum of the areas of all 6 faces. Therefore, $SA=6(9x^2)=54x^2$ Also, write an expression for the volume of the cube. Also as simply as possible. Volume of a cube is expressed as $V=s^3$ Each edge would have to be $3x$, so 4) The fraction "1 OVER 9" is half of the fraction ________ November 1st 2008, 08:37 AM Wow ultimate thanks man (THANKED) In number two, the whole question is as follows: Write the missing numbers in these fraction sums A) "1 OVER 4" PLUS "x OVER 8" = 1 I got x as 6 B) "1 OVER 3" PLUS "8 OVER X" = 1 Does that help? November 1st 2008, 08:49 AM Multiply everything by LCD of 8 You were correct!!(Clapping) Multiply everything by LCD of 3x November 1st 2008, 08:55 AM
{"url":"http://mathhelpforum.com/algebra/56883-help-few-quick-questions-only-14-year-olds-work-so-easy-print.html","timestamp":"2014-04-19T03:20:28Z","content_type":null,"content_length":"11128","record_id":"<urn:uuid:4f786d8a-88f9-4c87-83be-3feec7872468>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Test for trend in surveys [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Test for trend in surveys From Steven Samuels <sjhsamuels@earthlink.net> To statalist@hsphsun2.harvard.edu Subject Re: st: Test for trend in surveys Date Wed, 1 Oct 2008 18:05:33 -0400 There is, to my knowledge, no such thing as test for trend of type Pearson chi-squared. I suspect that Ángel is referring to the Cochran-Armitage test one degree-of-freedom chi square test for trend (A. Agresti, 2002, Categorical Data Analysis, 2nd Ed. Wiley Books, Section 5.3.5). Let Y be the 0-1 binary outcome variable and X be the variable which contains category scores. One survey-enabled approach is Phil's suggestion: use -svy: logit-. However -svy: reg- will produce a result closer to that of the Cochran-Armitage test. Why? The Cochran-Armitage test statistic is formally equivalent to an O.L.S. regression of Y on X, with a standard error for beta which substitutes the total variance for the residual variance. The statistic is (beta/se)^2. The total variance is equal to P(1-P), where P is the overall sample proportion. In other words, the standard error is computed under the null hypothesis of equal proportions. The -svy: reg- command will estimate the same regression coefficient, but with a standard error that is robust to heterogeneity in proportions. In both survey-enabled commands, t = (b/se) has a t distribution with degrees of freedom (d.f.) based on the survey design; t^2 has an F(1, d.f.) distribution. On Sep 30, 2008, at 6:39 AM, Philip Ryan wrote: Well, the z statistic testing the coefficient on the exposure variable is as valid and as useful a summary (test) statistic as the chi-square statistic produced by a test of trend in tables. If you prefer chi-squares, you could just square the z statistic to get the chi-square on 1 df. And if you prefer likelihood ratio chi-squares to the Wald z (or Wald chi-square) then the modelling approach can deliver that also. Quoting Ángel Rodríguez Laso <angelrlaso@gmail.com>: Thanks to Philip and Neil for their advice. Philip's proposal is absolutely compatible with survey data, but I was interested in a summary statistic of the type of Pearson chi-squared. To this respect, Neil puts forward a test (nptrend) that would be perfect if it allowed complex survey specifications. I believe strata and clusters are not important because the formula for the standard error of this nonparametric test (see Stata Reference Manual K-Q page 338) should not be affected by these specifications. But nptrend does not accept weights as an option, what I think makes it unsuitable for complex survey analyses. Angel Rodriguez Laso 2008/9/29 Philip Ryan <philip.ryan@adelaide.edu.au>: For a 2 x k table [with a k-category "exposure" variable] just set up a dose-response model: svyset <whatever> svy: logistic <binary outcome var> <exposure var> and check the coefficient of <exposure var>, along with its confidence and P-value. If you prefer a risk metric rather than odds, then use svy: glm..... with appropriate link and error specifications. Quoting Ángel Rodríguez Laso <angelrlaso@gmail.com>: Dear Statalisters, Is there a way to carry out a test for trend in a two-way table in survey analysis in Stata? Many thanks. Angel Rodriguez Laso * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-10/msg00053.html","timestamp":"2014-04-19T19:39:46Z","content_type":null,"content_length":"9315","record_id":"<urn:uuid:58d7eb57-58fe-4762-95c8-03b71a2f1b66>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Complete sets and eigenvalues question Let's say I'm looking at the infinite square well. Typically, given some arbitrary initial (normalized) wavefunction, we can decompose it into a linear combination of components of the complete set (on the interval [-a,a] or whatever) of sin's and cos's. Then, if you measure something like the energy, you get one of the eigenstates (one of the sin's or cos's), and you measure the energy associated with that eigenstate. But there are many complete sets, sin and cos are just one of them. So, let's say we chose some other one. Obviously, because it's complete, you could decompose the initial wavefunction into a linear combo of this set with the same average energy. But this set might have a different spectrum of energy eigenvalues. But this seems like a contradiction, because nature doesn't care what math you're Could this happen? If not with the infinite square well, with an unbound particle?
{"url":"http://www.physicsforums.com/showthread.php?p=4152097","timestamp":"2014-04-19T22:52:55Z","content_type":null,"content_length":"32571","record_id":"<urn:uuid:53509472-930c-4c58-bbc1-7014a9624b8b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Dec 12, 2006 8:33:32 PM (7 years ago) • v14 v15 479 479 === Normalisation of equalities === 481 Normalisation of an equality `s = t` of arbitrary type terms `s` and `t` [DEL::DEL]leads to a (possibly empty) set of normal equations, or to a type error. We proceed as follows: 481 Normalisation of an equality `s = t` of arbitrary type terms `s` and `t` leads to a (possibly empty) set of normal equations, or to a type error. We proceed as follows: 483 483 1. Reduce `s` and `t` to HNF, giving us `s'` and `t'`. … … 494 494 * Otherwise, fail. (Reason: a wobbly type variable, lack of left linearity, or non-decreasingness prevents us from obtaining a normal equation. If it is a wobbly type variable, the user can help by adding a type annotation; otherwise, we cannot handle the program without (maybe) losing decidability.) 496 Rejection of local assumptions that after normalisation are either not left linear or not decreasing may lead to incompleteness. However, this should only happen for programs that [DEL: combine GADTs and type functions in ellaborate ways. (We still lack an example that produces such a situation, though.):DEL] 496 Rejection of local assumptions that after normalisation are either not left linear or not decreasing may lead to incompleteness. However, this should only happen for programs that 498 498 '''TODO:''' I am wondering whether we can do that pulling out type family applications from left-hand sides and turning them into extra type equations lazily.
{"url":"https://ghc.haskell.org/trac/ghc/wiki/TypeFunctionsSynTC?action=diff&version=15","timestamp":"2014-04-21T13:11:26Z","content_type":null,"content_length":"16771","record_id":"<urn:uuid:c03bf80a-8bc9-44ff-ae68-3af60a24ab82>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: July 2008 [00195] [Date Index] [Thread Index] [Author Index] Re: Relational Operators and Random Integers • To: mathgroup at smc.vnet.net • Subject: [mg90330] Re: Relational Operators and Random Integers • From: Jean-Marc Gulliet <jeanmarc.gulliet at gmail.com> • Date: Sun, 6 Jul 2008 07:18:30 -0400 (EDT) • Organization: The Open University, Milton Keynes, UK • References: <g4ncm0$gd3$1@smc.vnet.net> Peter Evans wrote: > Hi all, > I'm a new user of Mathematica 6 and am struggling with some basics. I wish to write a set of rules which are dependent upon a random variable. I've been using RandomChoice to choose my variable and then large If and Which statements to produce my desired dynamics. > The problem is that the number that these statements end up spitting out aren't recognised as what they are in further If and Which statements. Here's a simple example that demonstrates my problem: > In[1]:= x := RandomChoice[{1, 2, 3}] > x > Which[x == 1, 1, x == 2, 2, x == 3, 3] > Out[2]= 1 > Out[3]= 2 > Mathematica clearly thinks x to be 1 but the If statement indicates its 2. What am I doing wrong here? The issue is about SetDelayed vs Set ( := or = ), that is between delayed assignment vs immediate assignment. SetDelayed ( := ) tells Mathematica to evaluate the RHS of the expression only when the LHS is called and *every time* the LHS is called. So in your case, x is evaluated a for the first time on the "second" line and it is evaluated again when Mathematica evaluates the Which statement. On the other hand, Set ( = ) evaluates immediately the RHS and assigns the result to x. After that, RandomChoice is not evaluated again and the value of x stays constant. In the example below, notice that there are three output lines (with SetDelayed there are only two since the first expression is not evaluated immediately). In[1]:= x = RandomChoice[{1, 2, 3}] Which[x == 1, 1, x == 2, 2, x == 3, 3] Out[1]= 2 Out[2]= 2 Out[3]= 2 See "Immediate and Delayed Definitions" -- Jean-Marc
{"url":"http://forums.wolfram.com/mathgroup/archive/2008/Jul/msg00195.html","timestamp":"2014-04-16T07:23:43Z","content_type":null,"content_length":"27086","record_id":"<urn:uuid:9f2a26c6-4e99-4369-807d-e8880daaaecc>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Replacement Bill Taylor W.Taylor at math.canterbury.ac.nz Thu Aug 23 01:14:20 EDT 2007 A Mani wrote: ->If we want to formalize vagueness, then it makes sense to drop replacement. Yes; but why would one want to formalize vagueness, in math, of all places? Earlier, Roger Jones said: > The obvious semantics for first order set theory is > "true in the cumulative heirarchy" i.e. true in that interpretation > of set theory which is described by the iterative conception of set. > Unfortunately it seems to me that the supposition that the iterative > conception can be completed and then yields a definite collection of sets > is incoherent. It is easy to derive a contradiction from this supposition. I can see how one might regard the "completed cumulative hierarchy" as incoherent; but I cannot see how one might derive a contradiction from the supposition thereof. You say it is easy. Can you elaborate please? Bill Taylor More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-August/011875.html","timestamp":"2014-04-16T19:29:09Z","content_type":null,"content_length":"3121","record_id":"<urn:uuid:ba5f96cb-b160-4c41-bbab-0e9967881d3c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
[59.02] One-point Statistics of the Cosmic Density Field in Real and Redshift Spaces with A Multiresolutional Decomposition AAS 201st Meeting, January, 2003 Session 59. Structure in and at Clusters of Galaxies Oral, Tuesday, January 7, 2003, 10:00-11:30am, 606-607 [Previous] | [Session 59] | [Next] [59.02] One-point Statistics of the Cosmic Density Field in Real and Redshift Spaces with A Multiresolutional Decomposition H. Zhan, L.Z. Fang (Dept of Physics, Univ of Arizona) A method of performing the one-point statistics of the cosmic density field is developed with a multiresolutional decomposition based on the discrete wavelet transform (DWT). The algorithm for recovering the DWT one-point statistics from redshift distortion is also derived. Tests on N-body simulations show that this algorithm works well on scales from a few hundred to a few h^-1 Mpc for popular cold dark matter models. The same recovery can be applied to the rms density fluctuation within a given radius. One can design model independent estimators of the redshift distortion parameter (\beta) from combinations of DWT modes in redshift space. When the non-linear redshift distortion is not negligible, the traditional estimator from quadrupole-to-monopole ratio needs additional information about the scale-dependence, such as the power-spectrum index or the real-space correlation function of the random field. The DWT \beta estimators, however, do not need such extra information. Numerical tests show that the proposed DWT estimators are able to determine \beta with less than 15% uncertainty in the redshift range 0 \leq z \leq 3. The author(s) of this abstract have provided an email address for comments about the abstract: zhanhu@physics.arizona.edu [Previous] | [Session 59] | [Next] Bulletin of the American Astronomical Society, 34, #4 © 2002. The American Astronomical Soceity.
{"url":"http://aas.org/archives/BAAS/v34n4/aas201/216.htm","timestamp":"2014-04-18T20:59:52Z","content_type":null,"content_length":"2981","record_id":"<urn:uuid:44b2da1a-4162-4d31-b6b3-7c277247a622>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Three Point Charges Are Fixed At Location On The ... | Chegg.com three point charges are fixed at location on the x AXIS: x1=0, q2 is at x2=3m and q3 is at x3=6m. find the electric potential at the point on the y=3m if q1=q2=2, q3=-2mC(Assume the potential is zero very far from all charges)
{"url":"http://www.chegg.com/homework-help/questions-and-answers/three-point-charges-fixed-location-x-axis-x1-0-q2-x2-3m-q3-x3-6m-find-electric-potential-p-q2451539","timestamp":"2014-04-17T17:14:45Z","content_type":null,"content_length":"20397","record_id":"<urn:uuid:1f5555ab-d07a-4f91-9bba-b4612fee1689>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Kondo lattice Introduction to the kondo lattice The Kondo lattice model is a quantum mechanical toy model of solid state physics to model interactions between conduction electrons and localized spin degrees of freedom in one (1D), two (2D) and three (3D) dimensions. It has been studied for more than three decades up to now and its applications are found for many physical systems, e.g., GaAs quantum dots (2D), quantum wires (1D), carbo nanotubes (1D), manganites (3D) and many other systems. The interaction between the local spin degrees of freedom is dynamically generated by means of the conduction electrons, which can hop to their nearest neighbor lattice sites and exchange spin with the onsite local spin degree of freedom. Typically, all spins are taken to be one half. The model can be extended, e.g., by Coulomb interaction between the conduction electrons and by dipole interaction between the localized spins.
{"url":"http://www.kondo-lattice.com/","timestamp":"2014-04-19T09:41:31Z","content_type":null,"content_length":"3373","record_id":"<urn:uuid:45f0f94e-7d44-43de-92c2-b01d2a905d9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Semantical analysis of perpetual strategies - In: Proc. of the 3 rd International Conference on Logical Foundations of Computer Science, LFCS'94, A. Nerode and Yu.V. Matiyasevich, eds., Springer LNCS , 1994 "... We consider reductions in Orthogonal Expression Reduction Systems (OERS), that is, Orthogonal Term Rewriting Systems with bound variables and substitutions, as in the -calculus. We design a strategy that for any given term t constructs a longest reduction starting from t if t is strongly normaliza ..." Cited by 18 (8 self) Add to MetaCart We consider reductions in Orthogonal Expression Reduction Systems (OERS), that is, Orthogonal Term Rewriting Systems with bound variables and substitutions, as in the -calculus. We design a strategy that for any given term t constructs a longest reduction starting from t if t is strongly normalizable, and constructs an infinite reduction otherwise. The Conservation Theorem for OERSs follows easily from the properties of the strategy. We develop a method for computing the length of a longest reduction starting from a strongly normalizable term. We study properties of pure substitutions and several kinds of similarity of redexes. We apply these results to construct an algorithm for computing lengths of longest reductions in strongly persistent OERSs that does not require actual transformation of the input term. As a corollary, we have an algorithm for computing lengths of longest developments in OERSs. 1 Introduction A strategy is perpetual if, given a term t, it constructs an infinit... - MATH. STRUCTURES IN COMP. SCI. 9(4):403–435 , 1998 "... We discuss new ways of characterizing, as maximal fixed points of monotone operators, observational congruences on -terms and, more in general, equivalences on applicative structures. These characterizations naturally induce new forms of coinduction principles, for reasoning on program equivalences, ..." Cited by 3 (0 self) Add to MetaCart We discuss new ways of characterizing, as maximal fixed points of monotone operators, observational congruences on -terms and, more in general, equivalences on applicative structures. These characterizations naturally induce new forms of coinduction principles, for reasoning on program equivalences, which are not based on Abramsky's applicative bisimulation. We discuss in particular, what we call, the cartesian coinduction principle, which arises when we exploit the elementary observation that functional behaviours can be expressed as cartesian graphs. Using the paradigm of final semantics, the soundness of this principle over an applicative structure can be expressed easily by saying that the applicative structure can be construed as a strongly extensional coalgebra for the functor (P( \Theta )) \Phi (P( \Theta )). In this paper, we present two general methods for showing the soundenss of this principle. The first applies to approximable applicative structures. Many c.p.o. -models in...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2815399","timestamp":"2014-04-19T02:43:04Z","content_type":null,"content_length":"16116","record_id":"<urn:uuid:9995520d-fa26-440f-92c1-4fff82f76acd>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [SI-LIST] : Q: Plane-jumping return currents D. C. Sessions (dc.sessions@vlsi.com) Wed, 22 Sep 1999 17:35:16 -0700 Eric Goodill wrote: > Mike Jenkins wrote: > > > > Eric, > > > > One line of your question, "My system is running pretty fast > > (> 1 Gbps)", caught my eye. At that speed, which I assume might > > be Fibre Channel or Gigabit Ethernet, you may well be running > > differential. (If not, good luck to you.) But if your lines > > are dif'l, they carry their own return current. Depending on > > geometry, there is some discontinuity, but MUCH less than > > single-ended. If your lines are, in fact, differential, and > > if you wish me to elaborate, I will. > Mike, > Yes, differential. However, we're using edge-coupled pairs, and it's my > understanding, though I've done no analysis, that about 10% - 15% is about > as much coupling as you can get between edge-coupled lines. Thus, there is > still a strong coupling between the trace and it's reference place. > Therefore, I suspect that there's non-ignorable amount of return current in > the reference planes. I'd be interested to see a > return-current-distribution plot for a diff pair both in the reference > planes and the coupled traces. I don't think so. Sure, there's a fair bit of capacitive current between each trace and the adjacent plane, but since they're equal and opposite the loop is very small and entirely lateral. Cross a plane boundary and there's no need for any current across the break. D. C. Sessions **** To unsubscribe from si-list: send e-mail to majordomo@silab.eng.sun.com. In the BODY of message put: UNSUBSCRIBE si-list, for more help, put HELP. si-list archives are accessible at http://www.qsl.net/wb6tpu/si-list ****
{"url":"http://www.qsl.net/wb6tpu/si-list2/1757.html","timestamp":"2014-04-16T16:42:27Z","content_type":null,"content_length":"3850","record_id":"<urn:uuid:5144cd87-b74d-4ed1-9cfc-a795d5704fcf>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
- ``Data/Random/Distribution/Categorical'' {-# LANGUAGE FlexibleInstances, FlexibleContexts module Data.Random.Distribution.Categorical where import Data.Random.RVar import Data.Random.Distribution import Data.Random.Distribution.Uniform import Control.Arrow import Control.Monad import Control.Applicative import Data.Foldable (Foldable(foldMap)) import Data.Traversable (Traversable(traverse, sequenceA)) import Data.List import Data.Function -- |Construct a 'Categorical' random variable from a list of probabilities -- and categories, where the probabilities all sum to 1. categorical :: Distribution (Categorical p) a => [(p,a)] -> RVar a categorical ps = rvar (Categorical ps) -- |Construct a 'Categorical' random process from a list of probabilities -- and categories, where the probabilities all sum to 1. categoricalT :: Distribution (Categorical p) a => [(p,a)] -> RVarT m a categoricalT ps = rvarT (Categorical ps) -- | Construct a 'Categorical' distribution from a list of weighted categories, -- where the weights do not necessarily sum to 1. {-# INLINE weightedCategorical #-} weightedCategorical :: (Fractional p) => [(p,a)] -> Categorical p a weightedCategorical = normalizeCategoricalPs . Categorical -- |Construct a 'Categorical' distribution from a list of observed outcomes. -- Equivalent events will be grouped and counted, and the probabilities of each -- event in the returned distribution will be proportional to the number of -- occurrences of that event. empirical :: (Fractional p, Ord a) => [a] -> Categorical p a empirical xs = normalizeCategoricalPs (Categorical bins) where bins = [ (genericLength bin, x) | bin@(x:_) <- group (sort xs) -- |Categorical distribution; a list of events with corresponding probabilities. -- The sum of the probabilities must be 1, and no event should have a zero -- or negative probability (at least, at time of sampling; very clever users -- can do what they want with the numbers before sampling, just make sure -- that if you're one of those clever ones, you normalize before sampling). newtype Categorical p a = Categorical [(p, a)] deriving (Eq, Show) instance (Fractional p, Ord p, Distribution StdUniform p) => Distribution (Categorical p) a where rvarT (Categorical []) = fail "categorical distribution over empty set cannot be sampled" rvarT (Categorical ds) = do let (ps, xs) = unzip ds cs = scanl1 (+) ps u <- stdUniformT getEvent u cs xs -- In the (hopefully) extremely rare event that, due to numerical -- instability, the last 'c' is less than 1 _and_ a number greater than -- it is drawn, simply retry the sampling. If it comes to that, also -- do one last sanity check that lastC > 0, to make sure that there -- is some nonzero chance of termination. getEvent u cs0 xs0 = go 0 cs0 xs0 go lastC [] _ | lastC > 0 = do {newU <- stdUniformT; getEvent newU cs0 xs0} | otherwise = fail "categorical distribution sampling error: total probablility not greater than zero" go lastC (c:cs) (x:xs) | c < lastC = fail "categorical distribution sampling error: negative probability for an event!" | u > c = go c cs xs | c == c = return x | otherwise = fail "categorical distribution sampling error: NaN probability" go _ _ _ = error "rvar/Categorical: programming error! this case should be impossible!" instance Functor (Categorical p) where fmap f (Categorical ds) = Categorical [(p, f x) | ~(p, x) <- ds] instance Foldable (Categorical p) where foldMap f (Categorical ds) = foldMap (f . snd) ds instance Traversable (Categorical p) where traverse f (Categorical ds) = Categorical <$> traverse (\(p,e) -> (\e' -> (p,e')) <$> f e) ds sequenceA (Categorical ds) = Categorical <$> traverse (\(p,e) -> (\e' -> (p,e')) <$> e) ds instance Fractional p => Monad (Categorical p) where return x = Categorical [(1, x)] -- I'm not entirely sure whether this is a valid form of failure; see next -- set of comments. fail _ = Categorical [] -- Should the normalize step be included here, or should normalization -- be assumed? It seems like there is (at least) 1 valid situation where -- non-normal results would arise: the distribution being modeled is -- "conditional" and some event arose that contradicted the assumed -- condition and thus was eliminated ('f' returned an empty or -- zero-probability consequent, possibly by 'fail'ing). -- It seems reasonable to continue in such circumstances, but should there -- be any renormalization? If so, does it make a difference when that -- renormalization is done? I'm pretty sure it does, actually. So, the -- normalization will be omitted here for now, as it's easier for the -- user (who really better know what they mean if they're returning -- non-normalized probability anyway) to normalize explicitly than to -- undo any normalization that was done automatically. (Categorical xs) >>= f = {- normalizeCategoricalPs . -} Categorical $ do (p, x) <- xs let Categorical fx = f x (q, y) <- fx return (p * q, y) instance Fractional p => Applicative (Categorical p) where pure = return (<*>) = ap -- |Like 'fmap', but for the probabilities of a categorical distribution. mapCategoricalPs :: (p -> q) -> Categorical p e -> Categorical q e mapCategoricalPs f (Categorical ds) = Categorical [(f p, x) | (p, x) <- ds] -- |Adjust all the weights of a categorical distribution so that they -- sum to unity. normalizeCategoricalPs :: (Fractional p) => Categorical p e -> Categorical p e normalizeCategoricalPs orig@(Categorical ds) = -- For practical purposes the scale factor is strict anyway, -- so check if the total probability is 1 and, if so, skip -- the actual scaling part. -- Along the way, discard any zero-probability events. if null ds || ps =~ 1 then orig else Categorical [ (p * scale, e) | (p, e) <- ds , p /= 0 ps = foldl1' (+) (map fst ds) scale = recip ps -- Using same implicit-epsilon trick as in Distribution instance -- (see comments there) x =~ y = (100 + (x-y) == 100) -- |Simplify a categorical distribution by combining equivalent categories (the new -- category will have a probability equal to the sum of all the originals). collectEvents :: (Ord e, Num p, Ord p) => Categorical p e -> Categorical p e collectEvents = collectEventsBy compare ((sum *** head) . unzip) -- |Simplify a categorical distribution by combining equivalent events (the new -- event will have a weight equal to the sum of all the originals). -- The comparator function is used to identify events to combine. Once chosen, -- the events and their weights are combined by the provided probability and -- event aggregation function. collectEventsBy :: (e -> e -> Ordering) -> ([(p,e)] -> (p,e))-> Categorical p e -> Categorical p e collectEventsBy compareE combine (Categorical ds) = Categorical . map combine . groupEvents . sortEvents $ ds groupEvents = groupBy (\x y -> snd x `compareE` snd y == EQ) sortEvents = sortBy (compareE `on` snd)
{"url":"http://hackage.haskell.org/package/random-fu-0.1.4/docs/src/Data-Random-Distribution-Categorical.html","timestamp":"2014-04-24T04:09:38Z","content_type":null,"content_length":"39218","record_id":"<urn:uuid:d26920f4-5b7b-48aa-a799-ebb8c632badf>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
In Electric Circuits, It Is Common To See Current ... | Chegg.com Image text transcribed for accessibility: In electric circuits, it is common to see current behavior in the form of a square wave as shown in Figure 1. Figure 1 . Square wave signal. Solving for the Fourier series from: Determine: If T = 0.25 see., plot (lie first six terms of the Fourier series. Show, in the resulting graph, number of harmonics in a Gibbs phenomenon sense. Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/electric-circuits-common-see-current-behavior-form-square-wave-shown-figure-1-figure-1--sq-q2016491","timestamp":"2014-04-18T03:28:14Z","content_type":null,"content_length":"20681","record_id":"<urn:uuid:54f69695-8e9d-4fcc-b989-cde26d1173dd>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Properties of a chaotic invariant set of a dynamical system Can someone please explain to me, why is it that having a dense set of periodic points is so important for an invariant set of a chaotic dynamical system? It has something to do with exhibiting a regular behavior intermingled with the chaotic (the regular element according to Devaney), but I can't seem to find a way to conclude this line of thinking (I don't have Devaney's book right now). Thanks in advance
{"url":"http://www.physicsforums.com/showthread.php?t=580962","timestamp":"2014-04-17T04:04:22Z","content_type":null,"content_length":"19755","record_id":"<urn:uuid:4173e23a-7f20-418c-b53b-e5c84fd1a99c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
One small problem with cyclotomic polynomials April 24th 2010, 07:21 AM One small problem with cyclotomic polynomials Can someone help me solve this problem with cyclotomic polynomials? $\Phi_n$ is the n-th cyclotomic polynomial. 1. Let $a$ be a non-zero integer, $p$ a prime, $n$ a positive integer and $pmid n$. Prove that $p\mid \Phi_n(a)$ if and only if $a$ has period $n$ in $(\mathbb{Z}/p\mathbb{Z})^*$. 2. Again assume $pmid n$. Prove that $p\mid \Phi_n(a)$ for some $a\in\mathbb{Z}$ if and only if $p\equiv 1 \pmod n$ Here's the source of the problem: Algebra - Google Livres Page 324, problem 21. Thanks a lot. April 24th 2010, 03:05 PM Can someone help me solve this problem with cyclotomic polynomials? $\Phi_n$ is the n-th cyclotomic polynomial. 1. Let $a$ be a non-zero integer, $p$ a prime, $n$ a positive integer and $pmid n$. Prove that $p\mid \Phi_n(a)$ if and only if $a$ has period $n$ in $(\mathbb{Z}/p\mathbb{Z})^*$. 2. Again assume $pmid n$. Prove that $p\mid \Phi_n(a)$ for some $a\in\mathbb{Z}$ if and only if $p\equiv 1 \pmod n$ Here's the source of the problem: Algebra - Google Livres Page 324, problem 21. Thanks a lot. let $f(x)=x^n-1=\prod_{d \mid n} \Phi_d(x).$ see that if $t$ is the order of $a$ modulo $p,$ then $p \mid \Phi_t(a).$ now suppose that $p \mid \Phi_n(a).$ then $p \mid a^n - 1$ and thus $t \mid n.$ that means, in $(\mathbb{Z}/p\mathbb{Z})[x],$ both $\Phi_n(x)$ and $\Phi_t(x)$ are divisible by $x-a$ and so if $t eq n,$ then $(x-a)^2 \mid f(x).$ hence $x - a \mid f'(x)=nx^{n-1},$ which gives us $na^{n-1} \equiv 0 \mod p$ and so $p \mid do the rest of the problem yourself. April 26th 2010, 06:07 AM Hello again, Thanks for your help, NonCommAlg! I just can't find a way to proof the $\Longleftarrow$ of 2, namely: if $p\equiv 1 \pmod n$ then $p\mid \Phi_n(a)$ for some $a\in\mathbb{Z}$. April 27th 2010, 07:27 PM
{"url":"http://mathhelpforum.com/advanced-algebra/141076-one-small-problem-cyclotomic-polynomials-print.html","timestamp":"2014-04-20T16:16:34Z","content_type":null,"content_length":"16536","record_id":"<urn:uuid:3603fd27-ecbd-4c0d-9ee8-809945bb885a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
IBM Research Ponder This: July 2012 Puzzle I like this month’s IBM Research puzzle for several reasons. First, it is really three puzzles in one, where each component is relatively self-contained. Second, it has some very “nice” elegant mathematics in it, but will also yield to computer simulation… up to a point. And third… I think it is hard. Problem: Alice and Bob are playing two games: they start simultaneously and the first one to win his game is the winner. Alice is given an urn with N balls, colored in N different colors and in every second she randomly picks two balls, colors the first ball in the color of the second ball and returns both to the urn. Her game is over once all the balls in the urn have the same color. Bob is given an urn with M balls, all colorless, and in every second he picks a random ball, color it, and puts it back to the urn. Bob’s game is over once all his balls are colored. Our question is: what are the possible values of M for which (on average) Bob is expected to lose for N=80 and win for N=81? The three pieces of the puzzle are to: (1) analyze Alice’s game, (2) analyze Bob’s game, and then (3) compare the two. I think the original intent of the problem was that Alice’s game should be the really “interesting” part. Consider computing the expected number of seconds for each player to finish. Bob’s game is a rather standard problem in thin disguise. Alice’s game, on the other hand, requires a bit more work. However, it is that part (3) of the puzzle that motivated this post. I think a key question is, what is meant by “Bob losing on average“? More precisely, once Alice and Bob have completed their games, what does the winner win? There are two reasonable possibilities: 1. The loser pays the winner a dollar for every additional second needed to finish the (loser’s) game. 2. The loser pays the winner a dollar. An update to the problem in response to this question confirms, as I suspected, that (1) is actually the intended interpretation. Good thing, too, because in that case the problem essentially reduces to simply finding the expected number of seconds for Alice to finish. But it seems to me that the original problem, as written, implies (2), which I think makes the problem much harder… or at least, much harder for me. This entry was posted in Uncategorized. Bookmark the permalink.
{"url":"http://possiblywrong.wordpress.com/2012/07/15/ibm-research-ponder-this-july-2012-puzzle/","timestamp":"2014-04-20T20:56:27Z","content_type":null,"content_length":"51244","record_id":"<urn:uuid:71a06f88-42c4-4a09-85f1-1a444cdf2d50>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Class Sorting ☆ cern.colt.matrix.doublealgo.Sorting All Implemented Interfaces: Serializable, Cloneable public class Sortingextends PersistentObject Matrix quicksorts and mergesorts.Use idioms like This is another case demonstrating one primary goal of this library: Delivering easy to use, yet very efficient APIs.The sorts return convenient sort views.This enables the usage of algorithms which scale well with the problem size:For example, sorting a 1000000 x 10000 or a 1000000 x 100 x 100 matrix performs just as fast as sorting a 1000000 x 1 matrix.This is so, because internally the algorithms only move around integer indexes, they do not physically move around entire rows or slices.The original matrix is left unaffected. The quicksort is a derivative of the JDK 1.2 V1.26 algorithms (which are, in turn, based on Bentley's and McIlroy's fine work).The mergesort is a derivative of the JAL algorithms, with optimisations taken from the JDK algorithms.Mergesort is stable (by definition), while quicksort is not.A stable sort is, for example, helpful, if matrices are sorted successively by multiple columns. It preserves the relative position of equal elements. See Also: GenericSorting, Sorting, Arrays, Serialized Form □ Method Summary Modifier and Method and Description sort(DoubleMatrix1D vector) Sorts the vector into ascending order, according to the natural ordering. sort(DoubleMatrix1D vector, DoubleComparator c) Sorts the vector into ascending order, according to the order induced by the specified comparator. sort(DoubleMatrix2D matrix, BinFunction1D aggregate) Sorts the matrix rows into ascending order, according to the natural ordering of the values computed by applying the given aggregation function to each row;Particularly efficient when comparing expensive aggregates, because aggregates need not be recomputed time and again, as is the case for comparator based sorts. sort(DoubleMatrix2D matrix, double[] aggregates) Sorts the matrix rows into ascending order, according to the natural ordering of the matrix values in the virtual column aggregates;Particularly efficient when comparing expensive aggregates, because aggregates need not be recomputed time and again, as is the case for comparator based sorts. sort(DoubleMatrix2D matrix, DoubleMatrix1DComparator c) Sorts the matrix rows according to the order induced by the specified comparator. sort(DoubleMatrix2D matrix, int column) Sorts the matrix rows into ascending order, according to the natural ordering of the matrix values in the given column. sort(DoubleMatrix3D matrix, DoubleMatrix2DComparator c) Sorts the matrix slices according to the order induced by the specified comparator. sort(DoubleMatrix3D matrix, int row, int column) Sorts the matrix slices into ascending order, according to the natural ordering of the matrix values in the given [row,column] position. static void Demonstrates advanced sorting. static void Demonstrates advanced sorting. static void Demonstrates advanced sorting. zdemo5(int rows, int columns, boolean print) static void Demonstrates sorting with precomputation of aggregates (median and sum of logarithms). static void Demonstrates advanced sorting. zdemo7(int rows, int columns, boolean print) static void Demonstrates sorting with precomputation of aggregates, comparing mergesort with quicksort. □ Field Detail ☆ quickSort public static final Sorting quickSort A prefabricated quicksort. ☆ mergeSort public static final Sorting mergeSort A prefabricated mergesort. □ Method Detail ☆ sort public DoubleMatrix1D sort(DoubleMatrix1D vector) Sorts the vector into ascending order, according to the natural ordering .The returned view is backed by this matrix, so changes in the returned view are reflected in this matrix, and vice-versa.To sort ranges use sub-ranging views. To sort descending, use flip views ... │7, 1, 3, 1 │==> 1, 1, 3, 7 │ │ │The vector IS NOT SORTED. │ │ │The new VIEW IS SORTED. │ vector - the vector to be sorted. a new sorted vector (matrix) view. Note that the original matrix is left unaffected. ☆ sort public DoubleMatrix1D sort(DoubleMatrix1D vector, DoubleComparator c) Sorts the vector into ascending order, according to the order induced by the specified comparator.The returned view is backed by this matrix, so changes in the returned view are reflected in this matrix, and vice-versa.The algorithm compares two cells at a time, determinining whether one is smaller, equal or larger than the other.To sort ranges use sub-ranging views. To sort descending, use flip views ... // sort by sinus of cellsDoubleComparator comp = new DoubleComparator() { public int compare(double a, double b) { double as = Math.sin(a); double bs = Math.sin(b); return as < bs ? -1 : as == bs ? 0 : 1; }};sorted = quickSort(vector,comp); vector - the vector to be sorted. c - the comparator to determine the order. a new matrix view sorted as specified. Note that the original vector (matrix) is left unaffected. ☆ sort public DoubleMatrix2D sort(DoubleMatrix2D matrix, double[] aggregates) Sorts the matrix rows into ascending order, according to the natural ordering of the matrix values in the virtual column ;Particularly efficient when comparing expensive aggregates, because aggregates need not be recomputed time and again, as is the case for comparator based sorts.Essentially, this algorithm makes expensive comparisons cheap.Normally each element of is a summary measure of a row.Speedup over comparator based sorting = , on average.For this operation, quicksort is usually faster. The returned view is backed by this matrix, so changes in the returned view are reflected in this matrix, and vice-versa.To sort ranges use sub-ranging views. To sort columns by rows, use dice views. To sort descending, use flip views ... Example:Each aggregate is the sum of a row │4 x 2 matrix: │aggregates=│4 x 2 matrix: │ │1, 1 │2 │1, 1 │ │5, 4 │9 │3, 0 │ │3, 0 │3 │4, 4 │ │4, 4 │8 │5, 4 │ │ │==> │The matrix IS NOT SORTED. │ │ │ │The new VIEW IS SORTED. │ // sort 10000 x 1000 matrix by sum of logarithms in a row (i.e. by geometric mean)DoubleMatrix2D matrix = new DenseDoubleMatrix2D(10000,1000);matrix.assign(new cern.jet.random.engine.MersenneTwister()); // initialized randomlycern.jet.math.Functions F = cern.jet.math.Functions.functions; // alias for convenience// THE QUICK VERSION (takes some 3 secs)// aggregates[i] = Sum(log(row));double[] aggregates = new double[matrix.rows()];for (int i = matrix.rows(); --i >= 0; ) aggregates[i] = matrix.viewRow(i).aggregate(F.plus, F.log);DoubleMatrix2D sorted = quickSort(matrix,aggregates);// THE SLOW VERSION (takes some 90 secs)DoubleMatrix1DComparator comparator = new DoubleMatrix1DComparator() { public int compare(DoubleMatrix1D x, DoubleMatrix1D y) { double a = x.aggregate(F.plus,F.log); double b = y.aggregate(F.plus,F.log); return a < b ? -1 : a==b ? 0 : 1; }};DoubleMatrix2D sorted = quickSort(matrix,comparator); matrix - the matrix to be sorted. aggregates - the values to sort on. (As a side effect, this array will also get sorted). a new matrix view having rows sorted. Note that the original matrix is left unaffected. IndexOutOfBoundsException - if aggregates.length != matrix.rows(). ☆ sort public DoubleMatrix2D sort(DoubleMatrix2D matrix, int column) Sorts the matrix rows into ascending order, according to the natural ordering of the matrix values in the given column.The returned view is backed by this matrix, so changes in the returned view are reflected in this matrix, and vice-versa.To sort ranges use sub-ranging views. To sort columns by rows, use dice views. To sort descending, use flip views ... │4 x 2 matrix:│column = 0; │4 x 2 matrix: │ │7, 6 │view = quickSort(matrix,column); │1, 0 │ │5, 4 │System.out.println(view); │3, 2 │ │3, 2 │==> │5, 4 │ │1, 0 │ │7, 6 │ │ │ │The matrix IS NOT SORTED.│ │ │ │The new VIEW IS SORTED. │ matrix - the matrix to be sorted. column - the index of the column inducing the order. a new matrix view having rows sorted by the given column. Note that the original matrix is left unaffected. IndexOutOfBoundsException - if column < 0 || column >= matrix.columns(). ☆ sort public DoubleMatrix2D sort(DoubleMatrix2D matrix, DoubleMatrix1DComparator c) Sorts the matrix rows according to the order induced by the specified comparator.The returned view is backed by this matrix, so changes in the returned view are reflected in this matrix, and vice-versa.The algorithm compares two rows (1-d matrices) at a time, determinining whether one is smaller, equal or larger than the other.To sort ranges use sub-ranging views. To sort columns by rows, use dice views. To sort descending, use flip views ... // sort by sum of values in a rowDoubleMatrix1DComparator comp = new DoubleMatrix1DComparator() { public int compare(DoubleMatrix1D a, DoubleMatrix1D b) { double as = a.zSum(); double bs = b.zSum(); return as < bs ? -1 : as == bs ? 0 : 1; }};sorted = quickSort(matrix,comp); matrix - the matrix to be sorted. c - the comparator to determine the order. a new matrix view having rows sorted as specified. Note that the original matrix is left unaffected. ☆ sort public DoubleMatrix2D sort(DoubleMatrix2D matrix, BinFunction1D aggregate) Sorts the matrix rows into ascending order, according to the natural ordering of the values computed by applying the given aggregation function to each row;Particularly efficient when comparing expensive aggregates, because aggregates need not be recomputed time and again, as is the case for comparator based sorts.Essentially, this algorithm makes expensive comparisons cheap.Normally defines a summary measure of a row.Speedup over comparator based sorting = , on average. The returned view is backed by this matrix, so changes in the returned view are reflected in this matrix, and vice-versa.To sort ranges use sub-ranging views. To sort columns by rows, use dice views. To sort descending, use flip views ... Example:Each aggregate is the sum of a row │4 x 2 matrix:│aggregates= │4 x 2 matrix: │ │1, 1 │hep.aida.bin.BinFunctions1D.sum │1, 1 │ │5, 4 │==> │3, 0 │ │3, 0 │ │4, 4 │ │4, 4 │ │5, 4 │ │ │ │The matrix IS NOT SORTED.│ │ │ │The new VIEW IS SORTED. │ // sort 10000 x 1000 matrix by median or by sum of logarithms in a row (i.e. by geometric mean)DoubleMatrix2D matrix = new DenseDoubleMatrix2D(10000,1000);matrix.assign(new cern.jet.random.engine.MersenneTwister()); // initialized randomlycern.jet.math.Functions F = cern.jet.math.Functions.functions; // alias for convenience// THE QUICK VERSION (takes some 10 secs)DoubleMatrix2D sorted = quickSort(matrix,hep.aida.bin.BinFunctions1D.median);//DoubleMatrix2D sorted = quickSort(matrix,hep.aida.bin.BinFunctions1D.sumOfLogarithms);// THE SLOW VERSION (takes some 300 secs)DoubleMatrix1DComparator comparator = new DoubleMatrix1DComparator() { public int compare(DoubleMatrix1D x, DoubleMatrix1D y) { double a = cern.colt.matrix.doublealgo.Statistic.bin(x).median(); double b = cern.colt.matrix.doublealgo.Statistic.bin(y).median(); // double a = x.aggregate(F.plus,F.log); // double b = y.aggregate(F.plus,F.log); return a < b ? -1 : a==b ? 0 : 1; }};DoubleMatrix2D sorted = quickSort(matrix,comparator); matrix - the matrix to be sorted. aggregate - the function to sort on; aggregates values in a row. a new matrix view having rows sorted. Note that the original matrix is left unaffected. ☆ sort public DoubleMatrix3D sort(DoubleMatrix3D matrix, int row, int column) Sorts the matrix slices into ascending order, according to the natural ordering of the matrix values in the given position.The returned view is backed by this matrix, so changes in the returned view are reflected in this matrix, and vice-versa.To sort ranges use sub-ranging views. To sort by other dimensions, use dice views. To sort descending, use flip views ... The algorithm compares two 2-d slices at a time, determinining whether one is smaller, equal or larger than the other.Comparison is based on the cell [row,column] within a slice.Let A and B be two 2-d slices. Then we have the following rules ○ A < B iff A.get(row,column) < B.get(row,column) ○ A == B iff A.get(row,column) == B.get(row,column) ○ A > B iff A.get(row,column) > B.get(row,column) matrix - the matrix to be sorted. row - the index of the row inducing the order. column - the index of the column inducing the order. a new matrix view having slices sorted by the values of the slice view matrix.viewRow(row).viewColumn(column). Note that the original matrix is left unaffected. IndexOutOfBoundsException - if row < 0 || row >= matrix.rows() || column < 0 || column >= matrix.columns(). ☆ sort public DoubleMatrix3D sort(DoubleMatrix3D matrix, DoubleMatrix2DComparator c) Sorts the matrix slices according to the order induced by the specified comparator.The returned view is backed by this matrix, so changes in the returned view are reflected in this matrix, and vice-versa.The algorithm compares two slices (2-d matrices) at a time, determinining whether one is smaller, equal or larger than the other.To sort ranges use sub-ranging views. To sort by other dimensions, use dice views. To sort descending, use flip views ... // sort by sum of values in a sliceDoubleMatrix2DComparator comp = new DoubleMatrix2DComparator() { public int compare(DoubleMatrix2D a, DoubleMatrix2D b) { double as = a.zSum(); double bs = b.zSum(); return as < bs ? -1 : as == bs ? 0 : 1; }};sorted = quickSort(matrix,comp); matrix - the matrix to be sorted. c - the comparator to determine the order. a new matrix view having slices sorted as specified. Note that the original matrix is left unaffected. ☆ zdemo1 public static void zdemo1() Demonstrates advanced sorting. Sorts by sum of row. ☆ zdemo2 public static void zdemo2() Demonstrates advanced sorting. Sorts by sum of slice. ☆ zdemo3 public static void zdemo3() Demonstrates advanced sorting. Sorts by sinus of cell values. ☆ zdemo5 public static void zdemo5(int rows, int columns, boolean print) Demonstrates sorting with precomputation of aggregates (median and sum of logarithms). ☆ zdemo6 public static void zdemo6() Demonstrates advanced sorting. Sorts by sum of row. ☆ zdemo7 public static void zdemo7(int rows, int columns, boolean print) Demonstrates sorting with precomputation of aggregates, comparing mergesort with quicksort. SCaVis 1.8 © jWork.org
{"url":"http://jwork.org/scavis/api/doc.php/cern/colt/matrix/doublealgo/Sorting.html","timestamp":"2014-04-21T13:13:42Z","content_type":null,"content_length":"54750","record_id":"<urn:uuid:c49d7ff2-7647-480a-af28-a2ab334d618c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Voting Game Theory Problem January 17th 2013, 05:35 AM Voting Game Theory Problem □ Three voters vote over two candidates (A and B), and each voter has two pure strategies: vote for A and vote for B. □ When A wins, voter 1 gets a payoff of 1, and 2 and 3 get payoffs of 0; when B wins, 1 gets 0 and 2 and 3 get 1. Thus, 1 prefers A, and 2 and 3 prefer B. □ The candidate getting 2 or more votes is the winner (majority rule). Find all very weakly dominant strategies (there may be more than one, or none). Find all pure strategy Nash equilibria (there may be more than one, or none)?
{"url":"http://mathhelpforum.com/new-users/211470-voting-game-theory-problem-print.html","timestamp":"2014-04-18T10:19:45Z","content_type":null,"content_length":"3763","record_id":"<urn:uuid:15d656e0-3fe0-4f71-a8c2-2f9b3f008755>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Medina, WA Geometry Tutor Find a Medina, WA Geometry Tutor I have over nine years of experience teaching both one-on-one and in classroom settings and love meeting and helping people. In particular, I love working with teenagers and helping them develop the skills they need to succeed in middle school and high school and to prepare for college. I think I ... 28 Subjects: including geometry, chemistry, English, algebra 1 ...I am open to teach any language that also has value in the real world. I have worked as an IT industry professional for over 30 years. I have programmed in various languages, installed and configured hardware and software, and recently have focused on computer network security issues. 43 Subjects: including geometry, chemistry, calculus, physics ...With respect to my educational background and work experience, I'm a Physiology major, and I just graduated from the University of Washington. I'm currently a Math and Science tutor at a school. In college, I completed math through Calculus 3 and am proficient in Advanced Trigonometry, Calculus 1 and 2. 26 Subjects: including geometry, chemistry, calculus, physics ...As I would not need to cover transportation costs, my hourly rate will be lower and more consistent.*** I have a been tutoring for most of my life, from kindergarten-aged children up to college and university-level classes. I mainly tutored accounting classes for five years at Green River Commu... 12 Subjects: including geometry, reading, accounting, ASVAB ...You may read some of their responses. Not only did I help them to understand and apply concepts, I also guided them through their various projects and assisted them in writing statistical research papers. Teaching SAT is a big chunk of of my 15 years for tutoring and teaching experience. 20 Subjects: including geometry, reading, calculus, statistics Related Medina, WA Tutors Medina, WA Accounting Tutors Medina, WA ACT Tutors Medina, WA Algebra Tutors Medina, WA Algebra 2 Tutors Medina, WA Calculus Tutors Medina, WA Geometry Tutors Medina, WA Math Tutors Medina, WA Prealgebra Tutors Medina, WA Precalculus Tutors Medina, WA SAT Tutors Medina, WA SAT Math Tutors Medina, WA Science Tutors Medina, WA Statistics Tutors Medina, WA Trigonometry Tutors
{"url":"http://www.purplemath.com/Medina_WA_geometry_tutors.php","timestamp":"2014-04-21T02:47:57Z","content_type":null,"content_length":"24012","record_id":"<urn:uuid:27570016-051f-4513-8c8e-74e6f37e499e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Advanced Mathematics for Applications -- from Wolfram Library Archive This course deals with the application of mathematics to problems in engineering and science. Emphasis is placed on the three phases of such an application and on the development of skills necessary to carry out each step: translation of the physical information to a mathematical model; treatment of the model by mathematical methods; and interpretation of the result in physical terms. I cannot imagine teaching this course without Mathematica. As you can see from the topics covered, much of the material involves intense computation, and Mathematica allows us to concentrate on ideas and techniques, not algorithms. Also with Mathematica one can visualize solutions to determine their "reasonableness." I feel that Mathematica's numeric, symbolic, and graphical capabilities are "tailor made" for this course. Students at the junior level have little difficulty with Mathematica in this course, as many have used it before, and the notebooks contain all of the necessary commands and syntax. Also, doing a few calculations by hand convinces the students very quickly of the need to learn to use technology • Vectors - Dot product, cross product, abstract vector spaces • Vector Differential Calculus - vector functions, curves, velocity, acceleration, curvature, vector fields, streamlines, gradient, divergence, curl • Vector Integral Calculus - line integrals, flows, Green's Theorem, potential theory, surface integrals, flux, Divergence Theorem, Stokes' Theorem • Laplace transforms - delta and Heaviside functions, DE's • Sturm-Liouville Theory - eigenfunction expansions, orthogonal polynomials • Fourier Series, Partial Differential Equations • Fourier Integral, Fourier Transform, Partial Differential Equations
{"url":"http://library.wolfram.com/infocenter/Courseware/240/","timestamp":"2014-04-17T21:32:31Z","content_type":null,"content_length":"35675","record_id":"<urn:uuid:19abe8a5-7d24-4c37-9ec0-4e1a559009d9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
New Method for Teaching Division - Teachnology Teacher Forum You raise some good points. Double division is a new method, but that doesn't mean it is not better. We have to be open to new ideas. I would agree that the method is longer, but definitely not convoluted. I think it is much more obvious exactly what is happening. Long division teaches how to follow a long procedure and it gives some practice multiplying and subtracting. I'm not sure how many people understand what is happening when you bring down the next digit, or understand why you have to add a zero to the answer when the "number after subtracting" is less than the divisor. About polynomial division, on the one hand long division is more similar to polynomial division because in both cases you are choosing a multiple of the divisor from out of the blue. But on the other hand, double division does a better job of reinforcing the idea that division is really subtracting off multiples of the divisor. This may prepare students for polynomial division better. I would sum it up like this: Reasons To Teach Division: - to teach mathematics - a method to actually use in rare instances Double Division Advantages (compared to long division): - simpler procedure - teaches how division works - no trial and error - gives practice doubling numbers Double Division Disadvantage (compared to long division): - more subtraction - more steps (see note below) - requires more space on the paper - may not lead as directly to polynomial division - arguable Your thoughts? If we assume there is an equal chance of all ten digits being in the answer then on average there will be 1.5X as many "multiple and subtract" steps. For example a "7" in the answer requires 3 steps, and a zero in the answer requires no steps. Also remember that the multiply part of the "multiply and subtract" steps is already done for you. So this part will be faster. Of coarse you have to pre-multiple the divisor three times in the In the end I think it is longer, but not as much as you might think initially.
{"url":"http://www.teach-nology.com/forum/showthread.php?t=1163&amp;page=2","timestamp":"2014-04-20T03:11:08Z","content_type":null,"content_length":"78847","record_id":"<urn:uuid:b53518db-0ef4-4c40-a5da-7dff4bf75943>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
matematicasVisuales | Kepler: The best proportions for a wine barrel One important problem related with the origin of Calculus is finding "maxima and minima" of a function. For example, Euclid proved that among all rectangles of equal perimeter the square has the largest area. Fermat (born in 1601) has a different approach and was interested in the tangent of a curve and the relation of this tangent and the maximum (or minimun) of a function. Before Fermat, Kepler (1571-1630) was aware of this relation, though not in the sense of functions and derivatives. He wrote "Nova Stereometria doliorum vinariorum" (1615) a book about the volume of wine barrels and other bodies. In this page I only try to illustrate Otto Toeplitz's explanation of this episode of the History of Calculus ("The Calculus: A genetic Approach" by Otto Toeplitz): "Aside from his astronomical works, an instance of this is to be found in his so-called 'Doliometry', the 'barrel calculation'. When he, the imperial court astrologer at Linz, married the second time, he bought for the wedding wine from a barrel. To compute the bill, the wine merchant measured the barrel by inserting a foot rule into the taphole S until it reached the lid at D; then he read off the length SD = d and set the price accordingly. This method outraged Kepler, who saw that a narrow, high barrel might have the same SD as a wide one and would indicate the same wine price, thought its volume would be ever so much smaller. Giving further thought to this method of using d to determine the volume, Kepler approximated the barrel somewhat roughly by a cylinder, with v the radius of the base and h the height. Then Hence, the general formula for the volume of a cylinder: In our case, replacing r for its value: Notice that the volume is a cubic function of the height: Then he asked: If d is fixed, what value of h gives the largest volume V? V is a polynomial in h; hence the derivative (though of course, Kepler did not use derivatives) For V to be a maximum, V' must equal zero; hence That defined a barrel of definite proportions. Kepler noticed that in his Rhenish homeland barrels were narrowed and higher than in Austria, where their shape was peculiarly close to that having a maximum volumen for a fixed d -so close, indeed, that Kepler could not believe this to be a accidental. So he imagined that centuries ago somebody had calculated barrel shapes, as he himself was doing, and had taught the Austrians tho construct their barrels in this particular fashion- a very practical one, indeed. Kepler showed that if a barrel did not satisfy the exact mathematical but deviated somewhat from it, this would have but little effect on the volume, because near its maximum a function changes only slowly. Thus, while the Austrian method of price determination, if applied to Rhenish barrels, would be a clear fraud, it was quite legitimate for Austrian barrels. The Austrian shape had the advantage of permitting such a quick and simple method. So Kepler relaxed in this instance. Working out finer approximations of various barrel shapes, he consulted Archimedes and discovered that his own method of indivisibles had enabled him to obtain results in a far simpler and more general way than Archimedes, who had been struggling with cumbersome and difficult proofs. What he did not suspect was that Archimedes, too, had found his results by the same method of indivisibles (for the Method was lost until 1906!). Kepler devoted to these problems a whole book containing computations of many volumes." [Otto Toeplitz] You can see the whole book "Nova Stereometria doliorum vinariorum" in The Posner Library. You can see a page from Nova Stereometria doliorum vinaniorum, german version, from MathDL. In the article Kepler: The Volume of a Wine Barrel published by MathDl (Loci, June 2010) I presented a more complete version of this page around the anecdote that lead to Kepler to study volumes of barrels and other bodies and to write his book (published in 1615) "Nova Stereometria doliorum vinariorum" (New solid geometry of wine barrels) where he used some infinitesimal techniques previous to the discovering of differential and integral calculus by Newton and Leibnitz. (In this article some mathlets don't work perfectly well but you can see a full version here).
{"url":"http://matematicasvisuales.com/english/html/history/kepler/doliometry.html","timestamp":"2014-04-19T01:47:25Z","content_type":null,"content_length":"23474","record_id":"<urn:uuid:570d54c1-9431-44d2-a339-4f41e806e3dd>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
Complexity of winning strategies for open games (for open player) up vote 3 down vote favorite If $G\subseteq\omega^{<\omega}$ is a computable clopen game, then $G$ has a winning strategy which is hyperarithmetic $(\Delta^1_1)$, by an inductive ranking process. The key observation here is that the length of this induction is bounded above by the length of the Kleene-Brouwer ordering $G_{KB}$, which is a computable ordinal and hence $<\omega_1^{CK}$, and that each successive stage of the induction can be achieved by one application of the jump operator, so there is a winning strategy with complexity at most $0^{(\vert G_{KB}\vert)}$. (An annoying subtlety here is that the theory $\Delta^1_1-CA_0$, which amounts to closure under hyperarithmeticity, does not prove determinacy of clopen games, since there are games which are not actually clopen but have no hyperarithmetic witnesses to their ill-foundedness.) My question is whether a version of this result is also true for open games. Specifically, let $T\subseteq\omega^{<\omega}$ be an open game in which the "Open" player (i.e., the player trying to fall off the tree) has a winning strategy; do they necessarily have a winning strategy hyperarithmetic in $T$? I'm asking this question because I was looking through my notes from a previous class, and I ran across the assertion that "a similar ranking argument" shows that the answer is 'yes;' however, I can't reconstruct this argument, and I'm wondering whether I (or the lecturer) was simply incorrect; or whether there's a basic argument I'm not seeing. lo.logic computability-theory descriptive-set-theory This was proved by Andreas Blass in ``A. Blass, Complexity of winning strategies, Discrete Math. 3 (1972), 295–300. " – Liang Yu Feb 12 '13 at 5:23 The rough idea is if the open player has a winning strategy, then for any node $\sigma$ with an even length in the tree $T$, we may define a partial function $f(\sigma)=\inf_n\sup_m f(\sigma ^{\ smallfrown} n ^{\smallfrown}m)$ and ensure $f(\emptyset)$ always exists. – Liang Yu Feb 12 '13 at 5:36 Liang, yes, but you are missing a +1 in your expression---it should be $f(\sigma\frown n\frown m)+1$---without it you don't get non-zero values. This $f$ is exactly the game value. – Joel David Hamkins Feb 12 '13 at 13:52 Joel, you are right. There should be +1 there. – Liang Yu Apr 3 '13 at 15:00 add comment 1 Answer active oldest votes The answer is yes. The point is that if there is any winning strategy for a designated player from a given position, then there is in a sense a canonical winning strategy, which is to make the first move that minimizes the game value of the resulting position, and for a given winning position in a fixed game, I claim that this strategy will have at worst hyperarithmetic complexity. To explain, consider how the ordinal game values arise. We fix the tree of all possible finite plays. We assign value $0$ to any position in which the designated open player has already won. We assign value $\alpha+1$ to a position with the open player to move, if $\alpha$ is least among the values of the positions to which he or she can legally play. If it is the opponent's move and every move by the opponent has a value, then the value of the position is the supremum of these values. Thus, the open player seeks to reduce value, and wins when the value hits zero. The opposing player seeks to maintain the value as undefined or as high as possible. Since playing according to the value-reducing strategy reduces value at every move for the open player, it follows that the tree $T_p$ of all positions arising from the value-reducing strategy is well-founded, and the value of $p$ is precisely the rank of the well-founded tree $T_p$, if one should consider only the positions where it is the opponent's turn to play. up vote 4 down vote Note that the assertion, "position $p$ in tree $T$ has value $\alpha$" is $\Sigma_1$ expressible in any admissible structure containing $T$ and $\alpha$, since this is equivalent to the accepted assertion that there is a ordinal assignment fulfilling the recursive definition of game value, which gives $p$ value $\alpha$ in that tree. It follows that there can be no position in $T$ with value $\omega_1^{T}$, since otherwise we would get a $\Sigma_1$-definable map unbounded in $\omega_1^{T}$. So the value of a position in $T$ is a $T$-computable ordinal or If a player has a winning strategy from a position $p$, then because the ordinal game value assignment is unique and all relevant values in the game proceeding from $p$ will be bounded by the fixed value $\beta_p$ of $p$, it follows that the value-reducing strategy from $p$ is $\Delta^1_1(T)$ definable and hence hyperarithmetic in $T$. Basically, the way I think about it is that once you know a code for the ordinal value of the initial position, then the strategy only cares about positions with value less than that, and you can bound the ordinals that arise in the recursive definition of game value. Since the ordinal game value assignment is unique, this allows the strategy to become $\Delta^1_1$ in a code for the initial value, which is bounded by the ordinal value of the well-founded tree. I apologize for my long-winded and redundant answer. – Joel David Hamkins Feb 12 '13 at 3:13 See also Andreas Blass's monotone fixed point argument in the comments of mathoverflow.net/questions/63423/checkmate-in-omega-moves/…. Basically, one can view my argument above as an unraveling of that way of thinking. – Joel David Hamkins Feb 12 '13 at 3:16 It's not long-winded or redundant - I'm really happy to have an answer that's so explicit. Thanks! – Noah S Feb 12 '13 at 7:40 add comment Not the answer you're looking for? Browse other questions tagged lo.logic computability-theory descriptive-set-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/121549/complexity-of-winning-strategies-for-open-games-for-open-player/121551","timestamp":"2014-04-18T20:51:48Z","content_type":null,"content_length":"63410","record_id":"<urn:uuid:043123ec-42bc-41ff-b882-bac21bac6df2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Boolean Algebra Truth Table for fundamental conjunction and dnf July 29th 2013, 03:55 AM #1 Junior Member Jul 2013 Boolean Algebra Truth Table for fundamental conjunction and dnf Hello again! Can anyone help me understand how f, g and h got their column of answers from xyz columns? Is it thru multiplication or addition? i would appreciate it if the explanation is in simple english, i am very poor with mathematical symbols. Need to understand this before i could move on with my assignment. I would understand the truth table if the examplr gave an expression like xy+z, then I'd solve for that because i'll know what to multiply and what to add. But the example above is just looking for fundamental conjunctions with a value of 1 and i don't know how f g h got their 1's and 0's. Thanks again! Re: Boolean Algebra Truth Table for fundamental conjunction and dnf The functions f, g and h are just given. The fundamental idea of a function is that it is some way of converting an input into an output. Writing an expression with "and", "or" and "not" is just one way to define a Boolean function. We can use any means as long as it guarantees that every possible input is connected to a single output. In this case, f, g and h are given by explicitly listing their values for each input. The text says, "Let's consider these particular functions". Now, it turns out that every Boolean function can be specified by a Boolean expression, as in this example. This is not the case for functions on real numbers or even natural numbers. For example, the function that, given the coefficients a[4], ..., a[0] of a polynomial x^5 + a[4]x^4 + ... + a [0] of degree 5 returns the smallest root of this polynomial cannot be expressed using the four arithmetical operations and roots. Re: Boolean Algebra Truth Table for fundamental conjunction and dnf So.... xyz was not calculated to get f and g all along? the textbook just put jumbled 0's and 1's on the f and g column with no calculation? and f(xyz)= ~x~yz =1 is also a given and I have to look into the table which is negatable in that way? hehe thanks Re: Boolean Algebra Truth Table for fundamental conjunction and dnf I have to understand the example for me to be able to answer my assignment problems 4 and 5. Your explanation is waaaaay better than my instructor's: Looking at one line... to explain.... the second line has the values of x=0, y=0 and z=1 in the explanation ... the function f is = x'y'z = x'*y'*z >> the terms are multiplied to gether the value of x' is 1 the value of y' is 1 the value of z is 1 so the result of f = x'y'z = 1*1*1 = 1 It would seem that you are not seeing how the functions f and g and h are described in the write up. Look clearly into the values of the functions... He didn't say that f values are given already. He just "rephrased" the example. Which I still don't get it, and yes he explains like that too, like a robot. It drives me crazy haha This is my assignment by the way: This is what I did (don't mind the note I have on the bottom part, that's what I asked my professor again and again): Last edited by jpab29; July 29th 2013 at 02:37 PM. Re: Boolean Algebra Truth Table for fundamental conjunction and dnf So with my work uploaded, with w=x=0, y=z=1 given, I will automatically make put 1 on the f column where I see w=0, x=0, y=1, z=1, but then all zeros on other rows of f, right? Am i on the right Re: Boolean Algebra Truth Table for fundamental conjunction and dnf When you said f is given, it gave me light! So what this guy is saying makes sense to me now: Re: Boolean Algebra Truth Table for fundamental conjunction and dnf i have to understand the example for me to be able to answer my assignment problems 4 and 5. Your explanation is waaaaay better than my instructor's: Looking at one line... To explain.... The second line has the values of x=0, y=0 and z=1 in the explanation ... The function f is = x'y'z = x'*y'*z >> the terms are multiplied to gether the value of x' is 1 the value of y' is 1 the value of z is 1 so the result of f = x'y'z = 1*1*1 = 1 it would seem that you are not seeing how the functions f and g and h are described in the write up. Look clearly into the values of the functions... he didn't say that f values are given already. He just "rephrased" the example. Which i still don't get it, and yes he explains like that too, like a robot. It drives me crazy haha this is my assignment by the way: this is what i did (don't mind the note i have on the bottom part, that's what i asked my professor again and again): wait!!! My truth table is wrong! Lol hahaha i'll redo it. w 1-8 should be 0 and 9-16 should be 1... And so the other columns i will redo. So sorry. Gosh i'm math drunk. Re: Boolean Algebra Truth Table for fundamental conjunction and dnf on second thought, I thought I got it wrong. hehehe It was my other assignment. emakarov, thank you so much for your help. My grades in math are actually dangerously failing, and week 5 (this last week) was my last chance. When I get the results back (hopefully I pass), I will post the answers I had here as help for future students who need. Thanks again! July 29th 2013, 09:47 AM #2 MHF Contributor Oct 2009 July 29th 2013, 02:28 PM #3 Junior Member Jul 2013 July 29th 2013, 02:34 PM #4 Junior Member Jul 2013 July 29th 2013, 03:27 PM #5 Junior Member Jul 2013 July 29th 2013, 03:36 PM #6 Junior Member Jul 2013 July 29th 2013, 09:39 PM #7 Junior Member Jul 2013 July 30th 2013, 12:09 AM #8 Junior Member Jul 2013
{"url":"http://mathhelpforum.com/discrete-math/220894-boolean-algebra-truth-table-fundamental-conjunction-dnf.html","timestamp":"2014-04-20T07:37:33Z","content_type":null,"content_length":"56747","record_id":"<urn:uuid:1f521ca1-c35c-4b78-a08f-41da490ef0e6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Implicitly' printed from http://nrich.maths.org/ $X(r)$ is defined implicitly by the quadratic relationship Part 1: Which of the choices $r=1,-1,100$ give real values for $X(r)$? Part 2: What is the range of values of $r$ for which $X(r)$ takes real values? What happens when $r=0$? Part 3: Sketch the overall shape of $X(r)$ against $r$ and find the maximum and minimum values of $X(r)$. Note: You could numerically find a sensible conjecture for the minimum and maximum values of $X(r)$, but to prove this you will need to use calculus.
{"url":"http://nrich.maths.org/6406/index?nomenu=1","timestamp":"2014-04-21T12:18:32Z","content_type":null,"content_length":"3778","record_id":"<urn:uuid:27c7fae5-6107-4786-8f2b-17112a60ab61>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic topology of finite topological spaces Algebraic topology of finite topological spaces August 11, 2007 Posted by Noah Snyder in Algebraic Topology, fun problems. Here’s a fun question that was floating around Mathcamp last week: find a finite topological space which has a nontrivial fundamental group. One answer to this question after the jump. One example is a space S with 4 points, two of which are open and two of which are closed. First, consider the line with the origin doubled. Now quotient out by setting all positive points equal to each other, and all negative points equal to each other. This gives a four point space S. There’s a map from the circle to S given by sending your favorite two points on the circle to the closed points, and the two open intervals between them to the open points. It is not difficult to see that this cannot be extended to the disc. A better proof is to exhibit S’ the universal cover of S. The space S’ looks like: The points in the middle column are closed. The points in the other two columns are open, and the closure of any such point contains the two nearest points in the middle column. S’ is not contractable, but any compact (i.e. finite) subset of it is contractable, so it is simply connected. Hence $\pi_1(S) \cong \mathbb{Z}$ since the deck transformations of S’ just come from shifting up and down. Here are two more fun problems: find all the homology and homotopy groups of this 4 point space. For general topological spaces, you wouldn’t expect the usual fundamental group defined in terms of paths to still classify covering spaces. But I think I remember hearing that there is still a Grothendieck-style definition of a fundamental group that classifies finite-degree covering spaces. In the special case of a CW complex, this would be the profinite completion of the usual fundamental group. I don’t know if possible to do it without the finite-degree restriction. Maybe it’s just what comes out of Grothendieck’s formalism, which was creating with algebraic fundamental groups in mind. Acording to Wikipedia (and to Eric, a camper and our resident expert on point-set topology) a space has a universal cover if and only if it is path-connected, locally path-connected, and semi-locally simply connected. All of these conditions are easy to check for S. There’s a more categorical way of thinking about the fundamental group: if you have any reasonable notion of “covering space,” then one can take the category of covering spaces of a given one. Call this $\mathcal C$. For any reasonable notion of covering space (in particular, the usual topological one), one has a functor $\mathcal{C}\to \mathsf{Set}$ sending a cover to the fiber over a generic point. This functor is even monoidal for the “tensor product” on coverings given by fiber product. By analogy with the Tannakian formalism, one can define the fundamental group of $X$ for a given notion of covering to be the automorphism group of this forgetful functor. If $X$ has a universal cover, then you can check that you’ll get back the usual fundamental group. If you restrict to finite covers, you’ll get the profinite completion of the fundamental group. If you switch to the algebraic category, you should get Grothendieck’s algebraic fundamental group. The reason that you get a profinite group here is that the algebraic restriction forces you to only consider finite covers. Right. What I meant was that I remember hearing that you always have a Galois category (i.e. the finite discrete version of a Tannakian category) for any topological space whatsoever. And so even though you can’t always define pi_1, you can always define its completion, or rather what would be its completion if pi_1 existed. It looks like any sufficiently subdivided CW complex can be rendered as a locally finite topological space in the same way as you did with the circle. In particular, you should be able to get any finitely presented group as $\pi_1$ of a finite topological space. Are there interesting questions about finite “homotopy types”? It’s not clear that this adds anything new to algebraic topology. Incidentally, there are multiple algebraic categories (e.g., tame, etale, Nisnevich), coming from different notions of cover, and they yield very different fundamental groups. Cool “postcards” from mathcamp, Noah! Your entry here got me thinking: There is an equivalence of categories O: FinTop –> FinPreOrd between finite topological spaces and finite preorders, where the order –> in O(X) is defined by x –> y iff x is contained in the closure of y. For Noah’s 4-point example S, the associated preorder O(S) looks like a d a d with b and c both pointing to a and to d (no other relations). On the other hand, one can take the classifying space of a finite preorder B: FinPreOrd –> Top as usual, by taking geometric realization of the nerve of the preorder (considered as a category). On Noah’s example S, the classifying space of the associated preorder, BO(S), is a circle S^1. The map S^1 –> S that Noah defined generalizes: for finite topological spaces X, I believe I can define a continuous map BO(X) –> X, almost as a piece of pure category theory. In the end, it comes down to defining a continuous map Aff(n) –> D(n) from the n-dimensional affine simplex to the finite topological space with n+1 points represented by the preorder Delta_n = (0 –> 1 –> … –> n). I’ll leave this to the imagination for now (details available on request). Then, does anyone know what can be said of this map BO(X) –> X in terms of homotopy? For example, does pi_1 induce an isomorphism? What happens with higher homotopy groups? If X is T_0 (I haven’t checked whether it still works for non-T_0 spaces, the map BO(X) -> X (which is a quotient map) turns out to have a nice universal property: any map Y -> X lifts to BO(X), as long as Y is sufficiently nice (metrizable or a CW complex, say; the actual condition is hereditary perfect normality). Furthermore, the lift is unique up to a homotopy such that every stage of the homotopy is a lift. It’s easy to see that this implies that the map induces isomorphisms on all homotopy groups. You can either use this to show it also induces isomorphisms on homology, or you can prove that directly by induction on the number of points and Mayer-Vietoris. Any barycentric subdivision of a simplicial complex C is BO(X), where X is the poset of faces of C ordered by inclusion. Thus every finite simplicial complex has a finite “model”. Thanks, Eric — very useful reply. I think the weak homotopy equivalence for finite T_0 spaces implies the same holds for all finite spaces: A finite space X is T_0 iff its associated preorder is a poset, and every preorder P is equivalent as a category to a (unique up to isomorphism) poset P’, with P’ a retract of P. It’s well known that the categorical equivalence implies BP and BP’ are homotopy equivalent. On the other hand, the equivalence P ~ P’ means there is a preorder map (0 –> 1) = 2 –> hom(P, P) sending 1 to the identity and 0 to a factoring through P’. Now switch to the topological picture, and pull back along the evident continuous map I = [0, 1] –> 2 to conclude that P and P’ are homotopy equivalent as spaces. It now follows from naturality of BO(X) –> X that this map is a weak homotopy equivalence for all finite X. This is way late, but McCord showed any finite simplicial complex is weakly equivalent to its poset of faces with the Alexandrov topology so you can get any finitely presented group. In particular the nerve of a poset is weakly equivalent to the poset. Sorry comments are closed for this entry Recent Comments Erka on Course on categorical act… Qiaochu Yuan on The many principles of conserv… David Roberts on Australian Research Council jo… David Roberts on Australian Research Council jo… Elsevier maths journ… on Mathematics Literature Project…
{"url":"http://sbseminar.wordpress.com/2007/08/11/algebraic-topology-of-finite-topological-spaces/","timestamp":"2014-04-19T17:02:54Z","content_type":null,"content_length":"77169","record_id":"<urn:uuid:623d1674-5bf5-4e20-8715-a8f9baab3a13>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
The Old Laplace Transform The Laplace Transform is typically credited with taking dynamical problems into static problems. Recall that the Laplace Transform of the function hh is ℒ⁢h⁢s≡∫0∞e−(s⁢t)⁢h⁢tdt . ℒ h s t 0 s t h t . MATLAB is very adept at such things. For example: >> syms t >> laplace(exp(t)) ans = 1/(s-1) >> laplace(t*(exp(-t)) ans = 1/(s+1)^2 The Laplace Transform of a matrix of functions is simply the matrix of Laplace transforms of the individual elements. ℒ⁢ett⁢e−t=1s−11s+12 ℒ t t t 1 s 1 1 s 1 2 Now, in preparing to apply the Laplace transform to our equation from the dynamic strang quartet module: x′=B⁢x+g , x B x g , we write it as ℒ⁢dxdt=ℒ⁢B⁢x+g ℒ t x ℒ B x g and so must determine how acts on derivatives and sums. With respect to the latter it follows directly from the definition that ℒ⁢B⁢x+g=ℒ⁢B⁢x+ℒ⁢g=B⁢ℒ⁢x+ℒ⁢g . ℒ B x g ℒ B x ℒ g B ℒ x ℒ g . Regarding its effect on the derivative we find, on integrating by parts, that ℒ⁢dxdt=∫0∞e−(s⁢t)⁢dx⁢tdtdt=x⁢t⁢e−(s⁢t)|0∞+s⁢∫0∞e−(s⁢t)⁢x⁢tdt . ℒ t x t 0 s t t x t 0 x t s t s t 0 s t x t . Supposing that are such that x⁢t⁢e−(s⁢t)→0 x t s t 0 t→∞ t we arrive at ℒ⁢dxdt=s⁢ℒ⁢x−x⁢0 . ℒ t x s ℒ x x 0 . Now, upon substituting Equation 2 Equation 3 Equation 1 we find s⁢ℒ⁢x−x⁢0=B⁢ℒ⁢x+ℒ⁢g , s ℒ x x 0 B ℒ x ℒ g , which is easily recognized to be a linear system for ℒ⁢x ℒ x , namely (s⁢I−B)⁢ℒ⁢x=ℒ⁢g+x⁢0 . s I B ℒ x ℒ g x 0 . The only thing that distinguishes this system from those encountered since our first brush with these systems is the presence of the complex variable . This complicates the mechanical steps of Gaussian Elimination or the Gauss-Jordan Method but the methods indeed apply without change. Taking up the latter method, we write ℒ⁢x=s⁢I−B-1⁢(ℒ⁢g+x⁢0) . ℒ x s I B ℒ g x 0 . The matrix s⁢I−B-1 s I B is typically called the transfer function or resolvent, associated with , at . We turn to MATLAB for its symbolic calculation. (for more information, see the tutorial on MATLAB's symbolic toolbox). For example, >> B = [2 -1; -1 2] >> R = inv(s*eye(2)-B) R = [ (s-2)/(s*s-4*s+3), -1/(s*s-4*s+3)] [ -1/(s*s-4*s+3), (s-2)/(s*s-4*s+3)] We note that s⁢I−B-1 s I B is well defined except at the roots of the quadratic, s2−4⁢s+3 s 2 4 s 3 . This quadratic is the determinant of s⁢I−B s I B and is often referred to as the characteristic polynomial of BB. Its roots are called the eigenvalues of BB. As a second example let us take the BB matrix of the dynamic Strang quartet module with the parameter choices specified in fib3.m, namely B=( -0.1350.1250 0.5-1.010.5 00.5-0.51 ) B -0.1350.1250 0.5-1.010.5 00.5-0.51 The associated s⁢I−B-1 s I B is a bit bulky (please run ) so we display here only the denominator of each term, s3+1.655⁢s2+0.4078⁢s+0.0039 . s 3 1.655 s 2 0.4078 s 0.0039 . Assuming a current stimulus of the form i 0 ⁢t=t3⁢e−t610000 i 0 t t 3 t 6 10000 E m =0 E m 0 ℒ⁢g⁢s=0.191s+16400 ℒ g s 0.191 s 1 6 4 0 0 and so Equation 6 persists in ℒ⁢x=s⁢I−B-1⁢ℒ⁢g=0.191s+164⁢(s3+1.655⁢s2+0.4078⁢s+0.0039)⁢s2+1.5⁢s+0.270.5⁢s+0.260.2497 ℒ x s I B ℒ g 0.191 s 1 6 4 s 3 1.655 s 2 0.4078 s 0.0039 s 2 1.5 s 0.27 0.5 s 0.26 0.2497 Now comes the rub. A simple linear solve (or inversion) has left us with the Laplace transform of xx. The accursed We shall have to do some work in order to recover xx from ℒ⁢x ℒ x . confronts us. We shall face it down in the Inverse Laplace module
{"url":"http://cnx.org/content/m10169/latest/?collection=col10048/latest","timestamp":"2014-04-17T10:09:42Z","content_type":null,"content_length":"110602","record_id":"<urn:uuid:25c631d1-5014-4a85-9c68-8fc6c2eb8775>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Accuracy, Convergence and Mesh Quality [This article was first published in the May/June 2012 issue of The Connector. It was so popular there it's reposted here to reach a broader audience.] “We know embarrassingly little about how the mesh affects the CFD solution,” said Prof. Carl Ollivier-Gooch of the University of British Columbia. That statement is counter to what we all know to be true in practice, that a good mesh helps the computational fluid dynamics (CFD) solver converge to the correct answer while minimizing the computer resources expended. Stated differently, most every decent solver will yield an accurate answer with a good mesh, but it takes the most robust of solvers to get an answer on a bad mesh. The crux of the issue is what precisely is meant by “a good mesh.” Syracuse University’s Prof. John Dannenhoffer points out that we are much better at identifying a bad mesh than we are at judging a good one. Distinguishing good from bad is clouded by the fact that badness is a black-white determination of whether the mesh will run or not. (Badness often only means whether there are any negative volume cells.) On the other hand, goodness is all shades of gray – there are good meshes and there are better meshes. Neither is goodness all about the mesh. Gone are the days when one could eyeball the mesh and make a good/bad judgment. Adaptive meshes that are justified by visual inspection of how much thinner shock waves are in a contour plot of density just do not make the grade. What matters is how accurately the CFD solution reflects reality. Therefore, the solver’s numerical algorithm and the physics of the flow to be computed also have to be accounted for in the evaluation of a mesh. Implicit in the paragraphs above is the idea of judging mesh quality in advance of computing the CFD solution. There are those who think that a priori mesh quality assessment is of limited value and that changing the mesh in response to the developing flow solution (via mesh adaption or adjoint methods or other technology) is the better way to generate a good mesh and an accurate solution. Mesh Quality Workshop Given this state of affairs, it was important to assemble mesh generation researchers and practitioners to assess the topic of mesh quality. Pointwise participated in the “Mesh Quality/Resolution, Practice, Current Research, and Future Directions Workshop” last summer in Dayton and hosted by the DoD High Performance Computing Modernization Program (HPCMO) and organized by the PETTT Program (User Productivity, Enhancement, Technology Transfer and Training) and AIAA’s MVCE Technical Committee (Meshing, Visualization, and Computational Environments). The workshop brought together all the stakeholders of mesh quality: CFD practitioners, CFD researchers, CFD solver code developers (both commercial and government) and mesh generation software developers. A list of the workshop presentations is included at the end of this article (References 1a-1i). Hugh Thornburg from High Performance Technologies wrote an overview of the workshop (Reference 2) that nicely sums up the current state of affairs: • “A mesh as an intermediate product has no inherent requirements and only needs to be sufficient to facilitate the prediction of the desired result.” I interpret this as the double-negative quality judgment that the grid is “not bad.” • “The mesh must capture the system/problem of interest in a discrete manner with sufficient detail to enable the desired simulation to be performed.” As long as “desired simulation” implicitly includes “to a desired level of accuracy,” this is a good definition. • Thornburg also acknowledges many practical constraints on mesh generation such as time allotted for meshing, topology issues for parametric studies, limits on mesh size due to computational resources, and solver-specific requirements. Thornburg also offers Simpson’s Verdict library (Reference 3) as a de facto reference that covers “most if not all commonly used techniques” for computing element properties. User’s Perspective The importance of a priori indicators of mesh quality is exemplified by NASA’s Stephen Alter, who defined and demonstrated the utility of his GQ (grid quality) metric that combines both orthogonality and stretching into a single number. Driven by the desire to ensure the accuracy of supersonic flow solutions over blunt bodies computed using a thin layer Navier-Stokes solver, he has established criteria for the GQ metric that give him confidence prior to starting a CFD solution. Two aspects of GQ are notable. First, this metric’s reliance on orthogonality is closely coupled to the numerics of the solver – TLNS assumptions break down when the grid lacks orthogonality. Second, use of a global metric aids decision making, or as Thornburg wrote, “A local error estimate is of little use.” GQ represents domain expertise – the use of specific criteria within a specific application domain. Researcher’s Perspective Dannenhoffer reported on an extensive benchmark study that involved parametric variation of a structured grid’s quality for a 5 degree double-wedge airfoil in Mach 2 inviscid flow at 3 degrees angle of attack. Variations of the mesh included resolution, aspect ratio, clustering, skew, taper, and wiggle (using the Verdict definitions). Dannenhoffer’s main conclusion was very interesting: there was little (if any) correlation between the grid metrics and solution accuracy. This may have been exacerbated by the fact that he found it difficult to change one metric without influencing another (e.g. adding wiggle to the mesh also affected skew) or it may have been due to the specific flow conditions. Dannenhoffer also introduced the concept of grid validity (as opposed to grid quality), which is intended to measure whether the grid conforms to the configuration being modeled (which in practice it sometimes does not). He proposed three types of validity checks: 1. Type 1 checks whether cells have positive volumes and faces that do not intersect each other. Here again is an instance of the “Is this grid bad?” question. 2. Type 2 checks whether interior cell faces match uniquely with one other interior face and whether boundary cell faces lie on the geometry model of the object being meshed. 3. Type 3 checks whether each surface of the geometry model is completely covered by boundary cell faces, whether each hard edge of the geometry is covered by edges of boundary cell faces, and whether the sum of the boundary faces areas matches the actual geometry surface area. Prof. Christopher Roy from Virginia Tech showed a counter-intuitive example (at least from the standpoint of a priori metrics) that the solution of 2D Burger’s equation on an adapted mesh (with cells of widely varying skew, aspect ratio, and other metrics) has much less discretization error than the solution on a mesh of perfect squares. From this example alone, it is clear that metrics based solely on cell geometry are not good indicators of mesh quality as it pertains to solution accuracy. Solver’s Perspective The workshop was fortunate to have the participation of several flow solver developers, who shared details about how their solver is affected by mesh quality. The common thread among all was that convergence and stability are more directly affected by mesh quality than solution accuracy. Metacomp Technologies’ Vinit Gupta cited cell skewness and cell size variation as two quality issues to be aware of for structured grids. In particular, grid refinement across block boundaries in the far field where gradients are low has a strong, negative impact on convergence. For unstructured and hybrid meshes, anisotropic tets in the boundary layer and the transition from prisms to tets outside the boundary layer also can be problematic. Gupta also pointed out two problems associated with metric computations. Cell volume computations that rely on a decomposition of a cell into tets are not unique and depend on the manner of decomposition. Therefore, volume (or any measure that relies on volume) reported by one program may differ from that reported by another. Similarly, face normal computations for anything but a triangle are not unique and also may differ from program to program. (This is a scenario we have often encountered at Pointwise when there is a disagreement with a solver vendor over a cell’s volume that turns out to be the result of different computation methods.) Fluent and CFX ANSYS’ Konstantine Kourbatski showed how cell shapes that differ from perfect (dot product of face normal vector with vector connecting adjacent cell centers) make the system of equations stiffer slowing convergence. He then introduced metrics, Orthogonal Quality and two skewness definitions, with rules of thumb for the Fluent solver. It was interesting to note that the orthogonality measure ranges from 0 (bad) to 1 (good) whereas the skewness metric is directly opposite: 0 is good and 1 is bad. Another example of a metric criterion was that aspect ratios should be kept to less than 5 in the bulk flow. Kourbatski also provided guidelines for the CFX solver. He also pointed out that resolution of critical flow features (e.g. shear layers, shock waves) is vital to an accurate solution and that bad cells in benign flow regions usually do not have a significant effect on the solution. Kestrel, the CFD solver from the CREATE-AV program, was represented by David McDaniel from the University of Alabama at Birmingham. At the start, he made two important statements. First, their goal is to “do well with the mesh given to us.” (This is similar to Pointwise’s approach to dealing with CAD geometry – do the absolute best with the geometry provided.) Second, he notes that mixed-element unstructured meshes (their primary type) are terrible according to traditional mesh metrics, despite being known to yield accurate results. This same observation is true for adaptive meshes and meshes distorted by the relative motion of bodies within a mesh (e.g. flaps deflecting, stores dropping). More significantly, McDaniel notes a “scary” interdependence between solver discretization and mesh geometry by recalling Mavriplis’ paper on the drag prediction workshop (Reference 4) in which two extremely similar meshes yielded vastly different results with multiple solvers. To address mesh quality, Kestrel’s developers have implemented non-dimensional quality metrics that are both local and global and that are consistent in the sense that 0 always means bad and 1 always means good. The metrics important to Kestrel are an area-weighted measure of quad face planarity, an interesting measure of flow alignment with the nearest solid boundary, a least squares gradient that accounts for the orientation and proximity of neighbor cell centroids, smoothness, spacing and isotropy. Differing from Dannenhoffer’s result, McDaniel showed a correlation of mesh quality with solution accuracy with the caveat that a well resolved mesh can have poor quality and still produce a good answer. (In other words, more points always is better.) Alan Mueller’s presentation on CD-adapco’s STAR-CCM+ solver began by pointing out that mesh quality begins with CAD geometry quality and manifests as either a low quality surface mesh or an inaccurate representation of the true shape. This echoes Dannenhoffer’s grid validity idea. After introducing a list of their quality metrics, Mueller makes the following statement, “Results on less than perfect meshes are essentially the same (drag and lift) as on meshes where considerable resources were spent to eliminate the poor cells in the mesh.” Here we note that the objective functions are integrated quantities (drag and lift,) instead of distributed data like pressure profiles. After all, integrated quantities are the type of engineering data we want to get from CFD. This insensitivity of accuracy to mesh quality supports Mueller’s position that poor cell quality is a stability issue. Accordingly, the approach with STAR-CCM+ is to be conservative – opt for robustness over accuracy. Specifically, they are looking for metrics that will result in division by zero in the solver. Skewness as it effects diffusion flux and linearization is one such example. Mesher’s Perspective Dr. John Steinbrenner and Nick Wyman shared Pointwise’s perspective on solution-independent quality metrics by taking a counter-intuitive approach. You would think that a mesh generation developer would promote the efficacy of a priori metrics. But the error in a CFD solution consists of geometric errors, discretization errors, and modeling errors. Geometric errors are similar to points made by Dannenhoffer and Mueller about properly representing the shape. Modeling errors come from turbulence, chemical, and thermophysical properties. Discretization involves degradation of the solver’s numerics. The discretization error is driven by coupling between the mesh and the solver’s numerical algorithm. Therefore, although Pointwise can compute and display many metrics, it is important to note that many of them lack a direct relationship to the solver’s numerics and accordingly they are only loose indicators of solution accuracy. On the other hand, these metrics are convenient to compute, can address Dannenhoffer’s grid validity issue, and provide a mechanism for launching mesh improvement techniques. They also form the basis of a user’s ability to develop domain expertise – metrics that correlate to their specific application domain. 1. CFD solver developers believe mesh quality affects convergence much more than accuracy. Therefore, the solution error due to poor or incomplete convergence cannot be ignored. 2. One researcher was able to show a complete lack of correlation between mesh quality and solution accuracy. It would be valuable to reproduce this result for other solvers and flow conditions. 3. Use as many grid points as possible (Dannenhoffer, McDaniel). In many cases, resolution trumps quality. However, the practical matter of minimizing compute time by using the minimum number of points (what Thornburg called an optimum mesh) means that quality still will be important. 4. A priori metrics are valuable to users as an effective confidence check prior to running the solver. It is important that these metrics account for cell geometry but also the solver’s numerical algorithm. The implication is that metrics are solver-dependent. A further implication is that Dannehoffer’s grid validity checks be implemented. 5. There are numerous quality metrics that can be computed, but they are often computed inconsistently from program to program. Development of a common vocabulary for metrics would aid portability. 6. Interpreting metrics can be difficult because their actual numerical values are non-intuitive and stymie development of domain expertise. A metric vocabulary should account for desired range of result numerical values and the meaning of “bad” and “good.” 1. Workshop presentations 1. Stephen Alter, NASA Langley, “A Structured-Grid Quality Measure” 2. John Dannehoffer, Syracuse University, “On Grid Quality and Validity” 3. Christopher Roy, Virginia Tech, “Discretization Error” 4. Vinit Gupta, Metacomp Technologies, “CFD++ Perspective on Mesh Quality” 5. Konstantine Kourbataski, ANSYS, “Assessment of Mesh Quality in ANSYS CFD” 6. David McDaniel, University of Alabama at Birmingham, “Kestrel/CREATE-AV Perspective on Mesh Quality” 7. Alan Mueller, CD-adapco, “A CD-adapco Perspective on Mesh Quality” 8. John Steinbenner and Nick Wyman, Pointwise, “Solution Independent Metrics” 9. Presentations from the Mesh Quality Workshop are available by email request to pettt-requests@drc.com. 2. Thornburg, Hugh J., “Overview of the PETTT Workshop on Mesh Quality/Resolution, Practice, Current Research, and Future Directions”, AIAA paper no. 2012-0606, Jan. 2012. 3. Stimpson, C.J. et al, “The Verdict Geometric Quality Library”, Sandia Report 2007-1751, 2007. 4. Mavriplis, Dimitri J., “Grid Quality and Resolution Issues from the Drag Prediction Workshop Series”, AIAA paper 2008-930, Jan. 2008. 5. Roache, P.J., “Quantification of Uncertainty in Computational Fluid Dynamics” Annual Review of Fluid Mechanics Vol. 29, 1997, pp. 123-160. 6. Knupp, Patrick M., “Remarks on Mesh Quality”, AIAA, Jan. 2007. Subscribe to The Connector Subscribe to The Connector for access to more articles like this one, application stories, and tips on generating meshes using Pointwise. 20 Responses to Accuracy, Convergence and Mesh Quality 1. Well, this is a tough topic. Here is another example. Not sure if anyone mentioned this. One unfortunate aspect about non linear aerodyanmics is that multiple solutions can exists and how one gets to a solution is important. For example we use methods such as multi-grid and local time stepping to increase the rate of convergence for steady state runs, however, that does not mean that how one gets to the solution is actually physically possible. Of course the multi-grid or local time stepping behavior is dependent on the grid, but it is dependent in the sense that if the grid makes the method behave like a time accurate solution, then the grid is “good”. But that means the convergence is slow. Therefore a trade-off exists. 2. If memory serves me correctly, it was at either the most recent High Lift or Drag workshop that the issue of starting point came up – if you start a simulation from scratch you get one answer but if you piggyback on a previous solution you get a different answer. So you’re right, a lot of the strategies we use to keep run time and convergence down my also be effecting accuracy. 3. You may be referring to this? http://hiliftpw.larc.nasa.gov/Workshop1/ParticipantTalks/pulliam-nasa.pdf 4. Martin: Yes, perhaps that’s what I was recalling. However, Adobe Acrobat is so freakin’ slow that I may never find out for certain. 5. I stand by my statement. We can’t say anything beyond “Refinement helps accuracy” and “Good meshes converge better than bad meshes”. Not even a definition of what’s “good” and what’s “bad” beyond what the solver tells us at run time. This is especially true for unstructured meshes. After 30+ years of unstructured mesh finite volume methods in computational aero, I think that qualifies as knowing embarrassingly little. 6. Carl: Does this at least qualify as knowing what we don’t know? Or are we still in the not knowing what we don’t know stage? In other words, admitting ignorance should put us on the path to □ John, I agree with Martin: we don’t yet know what we don’t know. About all we know for sure is that more resolution helps, and that solution-based adaptivity makes that cheaper. We have (at best) only anecdotal evidence about what matters in terms of mesh quality above some (pretty awful) threshold. Don’t worry, if I wake up in the middle of the night knowing the answer, I’ll write it down so I don’t forget, and call you first thing in the morning. 7. Maybe we are still be in the “not knowing what we don’t know stage” And, IMO, someone needs to define “accuracy” in regards to a CFD solution before we can apply the term to grids. For example the “Post-Workshop Grid Studies” section of people.nas.nasa.gov/~pulliam/mypapers/AIAA-2010-4219.pdf. The ultra-fine grid had 2.4 billion points. I would guess very few entities can create solutions of this size. Other than grid density, what metrics apply to such a grid? Sure a grid cell could be skewed, but, with that density, would a good grid generator actually need to create a skewed (or bad) grid? And, how may solvers, (or unstructured grid generators?), can handle 2.4 billion points? And it is depressing (in terms of analyzing accuracy at the high fidelity level) to think that a flow feature, such as the wing body junction separation, disappears at these high grid densities (189 million pts) and that the trailing edge separation is changing enough to affect the solution on a global scale, even up to 2.4 billion (alpha (CL=0.5) changes from (about) 2.308 to 2.285 (1% diff) from grid densities of 213 million to 2.4 billion). However, it is not surprising, just depressing. And this is a simple geometry. Maybe two categories of accuracy need to be defined. Engineering accuracy and High Fidelity accuracy. 8. Martin: I would rename your second accuracy as Scientific Accuracy. I’m mostly worried about Engineering Accuracy in the sense of being able to provide tools that deliver results with a known error band. If someone else wants to work on techniques for driving CFD accuracy to ultra-high precision that’s great. Stated another way, maybe engineering accuracy means getting the trends right while scientific accuracy means nailing one single solution. The older I get the happier I am simply to be able to ask the right questions. It seems to me that the mesh quality workshop has us doing that. 9. OK, Scientific Accuracy then since CFD as a whole is categorized as high fidelity. I am definitely on board with your last statement. It will also be interesting to see how grid metrics evolve to handle these massive cases. Personally, I don’t think I can adequately visualize what a 2.4 billion cell grid looks like in terms of grid quality. And, I guess, it is not so much about the individual cell, but about trends in cells (as was shown in the work shop) 10. Martin: Not to go off on a tangent but your comment about CFD being categorized as high fidelity may be part of our collective problem. (I am not pinning this problem on you, just saying that your comment made me think about it.) Government subsidies aside, maybe CFD needs to be like General Motors – sure there’s a Cadillac Escalade but most people could benefit from a Chevy Cavalier to get from place to place. CFD needs to be positioned as an engineering tool not some esoteric science for experts. Handling billion cell problems reminds me of something I read recently about how automated systems (the only way you’ll practically make a billion cell mesh) require more operator oversight than manual system. In other words, even though automatic means “without human intervention” the reverse is true in practice. Another random thought: if number of points trumps cell quality issues, why not just use the brute force approach and throw hundreds of millions of cells at problems? After all, that’s just a computer hardware issue (storage, RAM, processing). Stop wasting engineer’s time finessing things cuz the engineer’s time is worth way more than hardware. 11. Following your tangent, it’s interesting that over the years (panel, Euler, and earlier NS/RANS codes) engineering and scientific accuracy were one and the same. In other words, in a way, a Cadillac Escalade didn’t exist. Now they are diverging. The Escalade model has been introduced. At least for this (airplane) type of problem. (I’m neglecting laminar to turbulent transition) However, other areas, such as base separation, engineering and scientific are still hand-in-hand. From what I understand, from others and personal experience, base drag using RANS can be 20-40% off. (Ford Model-T) However, DES/LES brings it down to a couple of percent, which is well within engineering accuracy. So, over the coming years, we’ll get there too. Yup, the brute force method has a prominent place in the process. And, over time, we’ll learn the ins and outs of it. After all, at a minimum, how we throw the points at a problem does affect the convergence (time stepping) rate (accuracy). 12. What part did roundoff errors play in the large simulations? Back in my spectral methods days, roundoff errors got involved as the pseudospectral derivative increased in size (became more 13. Patrick: Sorry, I don’t know the answer to that question. 14. I thought I would add more substance to my base drag and eddy viscosity comments by showing an analysis of a decelerator on my web site. http://www.hegedusaero.com/examples/Decelerator/
{"url":"http://blog.pointwise.com/2012/07/05/accuracy-convergence-and-mesh-quality/","timestamp":"2014-04-17T18:24:03Z","content_type":null,"content_length":"111623","record_id":"<urn:uuid:92e2b44a-514c-4f03-9229-685e2cc392ac>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Summer 2006 :: Algorithmic Combinatorics on Words REU The tutorial Preliminaries on Partial Words by Dr. Francine Blanchet-Sadri is available. The powerpoint presentation entitled Basic XHTML and CSS by Margaret Moorefield is available. The LaTeX Tutorial by Stephanie Rednour and Robert Misior is available. Photo Album │ week 1 │ week 2 │ week 3 │ week 4 │ │ week 5 │ week 6 │ week 7 │ week 8 │ For a detailed account of a specific week, click on the week number. │ Week 1 │ orientation, preliminaries on partial words, description of problems, creation of teams, team meetings, welcome breakfast │ │ Week 2 │ LaTeX tutorial, basic XHTML and CSS, team meetings, technical writing, dinner │ │ Week 3 │ team meetings, technical writing │ │ Week 4 │ guest speaker Brian Shirey, initial student presentations, team meetings, technical writing │ │ Week 5 │ team meetings, technical writing │ │ Week 6 │ team meetings, technical writing │ │ Week 7 │ A day with Professor Jeffrey Shallit │ │ Week 8 │ team meetings, technical writing, talk on NSF Graduate Research Fellowship Program, final student presentations, farewell picnic │ Crystal Davis Mihai Cucuringu Deepak Bal Ohio State University-Columbus Naomi Brownstein University of Central Florida Ajay Chriscoe University of North Carolina-Greensboro Joshua Gafni University of Pennsylvania Taktin Mizutani Oey Harvard University Justin Palumbo Rutgers, The State University of New Jersey-New Brunswick Timothy Rankin Davidson College Gautam Sisodia The University of Texas at Arlington Kevin Wilson University of Michigan-Ann Arbor F. Blanchet-Sadri, “Algorithmic Combinatorics on Partial Words,” Chapman & Hall/CRC Press, 2008. Book Chapter F. Blanchet-Sadri, “Open Problems on Partial Words,” In G. Bel-Enguix, M.D. Jimenez-Lopez and C. Martin-Vide (Eds.), New Developments in Formal Languages and Applications, Ch. 2, Vol. 3, Springer-Verlag, Berlin, Heidelberg, 2008, pp 11-58. Papers and Websites 1. F. Blanchet-Sadri, Deepak Bal and Gautam Sisodia, “Graph connectivity, partial words, and a theorem of Fine and Wilf,” Information and Computation, 206 (2008) 676-693. 2. F. Blanchet-Sadri, N.C. Brownstein and Justin Palumbo, “Two Element Unavoidable Sets of Partial Words.” In T. Harju, J Karhumäki, and A. Lepistö (Eds.): DLT 2007, 11th International Conference on Developments in Language Theory, July 3-6, 2007, Turku, Finland, Lectures Notes in Computer Science, Vol. 4588, Springer-Verlag, Berlin, Heidelberg, 2007, pp 96-107. 3. F. Blanchet-Sadri and Mihai Cucuringu, “Counting primitive partial words.” Journal of Automata, Languages and Combinatorics, to appear. 4. F. Blanchet-Sadri, Joshua Gafni and Kevin Wilson, “Correlations of partial words.” In W. Thomas and P. Weil (Eds.), STACS 2007, 24th International Symposium on Theoretical Aspects of Computer Science, February 22-24, 2007, Aachen, Germany, Lecture Notes in Computer Science, Vol. 4393, Springer-Verlag, Berlin, Heidelberg, 2007, pp 97-108. 5. F. Blanchet-Sadri, Taktin Oey and Timothy Rankin, “Computing Weak Periods of Partial Words,” In E. Csuhaj-Varju and Z. Esik (Eds.), AFL 2008, 12th International Conference on Automata and Formal Languages, May 27-30, 2008, Balatonfüred, Hungary, Proceedings, pp 134-145. 6. F. Blanchet-Sadri, Taktin Oey and Timothy Rankin, “Fine and Wilf's Theorem for Partial Words with Arbitrarily Many Weak Periods,” International Journal of Foundations of Computer Science, Vol. 21, No. 5, 2010, pp 705-722. 7. F. Blanchet-Sadri and Ajay Chriscoe, “Periods and binary partial words: An algorithm revisited.” 8. F. Blanchet-Sadri, Raphael Jungers, and Justin Palumbo, “Testing avoidability of sets of partial words is hard,” Theoretical Computer Science, Vol. 410, 2009, pp 968-972. • Faculdade de Ciencias e Tecnologia, Universidade Nova de Lisboa, Portugal, and Forum for Interdisciplinary Mathematics gratefully acknowledged and honored Professor Francine Blanchet-Sadri for Outstanding Contributions in Mathematical Sciences (the award was presented at the SCRA 2006-FIM XIII, 13th International Conference of the Forum for Interdisciplinary Mathematics on Interdisciplinary Mathematical and Statistical Techniques, New University of Lisbon-Tomar Polytechnic Institute, Tomar, Portugal, September 3, 2006). • Kevin Wilson won a Goldwater Scholarship in 2007. • Kevin Wilson won the Cornwell Prize from the department of Mathematics at the University of Michigan-Ann Arbor (this prize is given to “a student (graduate or undergraduate) at the University of Michigan who shall have demonstrated the greatest intellectual curiosity and given the most promise of original study and creative work in Mathematics”). • Mihai Cucuringu was selected for Honorable Mention in the Computing Research Association's Outstanding Undergraduate Award for 2007. • Naomi Brownstein was selected for Honorable Mention for a Goldwater Scholarship in 2007. • Ajay Chriscoe's paper entitled "Periods and binary partial words: An algorithm.", Theoretical Computer Science, Vol. 314 (2004) 189-216, was nominated for the 2006 Frank and Brennie Morgan AMS-MAA-SIAM Prize for outstanding research in mathematics by an undergraduate student. • Naomi Brownstein received the “Order of Pegasus” which is the most prestigious and significant award a student can receive while at UCF. It recognizes students for outstanding academic achievement, leadership, service, and research. • Naomi Brownstein was selected for Honorable Mention for the Schafer Prize for 2007. • Kevin Wilson received a graduate research fellowship from the National Science Foundation in 2008. • Naomi Brownstein received a graduate research fellowship from the National Science Foundation in 2008. F. Blanchet-Sadri, "Partial Words," 5th International Ph.D. School in Formal Languages and Applications, Tarragona, Spain, May 12-13, 2006 (10 hours). F. Blanchet-Sadri, "Partial Words," SCRA 2006-FIM XIII, 13th International Conference on Interdisciplinary Mathematical & Statistical Techniques, New University of Lisbon-Tomar Polytechnic Institute, Tomar, Portugal, September 2006 (Plenary Talk). F. Blanchet-Sadri, "Algorithmic Combinatorics on Words," SCRA 2006-FIM XIII, 13th International Conference on Interdisciplinary Mathematical & Statistical Techniques, New University of Lisbon-Tomar Polytechnic Institute, Tomar, Portugal, September 2006 (Invited Talk for Session on Undergraduate Research in Interdisciplinary Mathematics). Kevin Wilson, "Correlations of Partial Words," STACS 2007, 24th International Symposium on Theoretical Aspects of Computer Science, February 22, 2007, Aachen, Germany (joint work with F. Blanchet-Sadri and Joshua D. Gafni). Kevin Wilson presenting the paper entitled “Correlations of Partial Words” at STACS 2007, 24th International Symposium on Theoretical Aspects of Computer Science, Aachen, Germany, February 22, Justin Palumbo, “Two Element Unavoidable Sets of Partial Words.” DLT 2007, 11th International Conference on Developments in Language Theory, July 3, 2007, Turku, Finland (joint work with F. Blanchet-Sadri and N.C. Brownstein). Justin Palumbo presenting the paper entitled “Two Element Unavoidable Sets of Partial Words.” at DLT 2007, 11th International Conference on Developments in Language Theory, Turku, Finland, July 3, 2007. Naomi Brownstein, “Two Element Unavoidable Sets of Partial Words.” International Conference on Advances in Interdisciplinary Statistics and Combinatorics, October 12, 2007, Greensboro, North Carolina (joint work with F. Blanchet-Sadri and Justin Palumbo). Naomi Brownstein presenting the paper entitled “Two Element Unavoidable Sets of Partial Words,” at the International Conference on Advances in Interdisciplinary Statistics and Combinatorics, Greensboro, NC, October 12, 2007. Naomi Brownstein, “Two Element Unavoidable Sets of Partial Words”, 16th International Conference on Interdisciplinary Mathematical & Statistical Techniques IMST 2008/FIM XVI, May 15-18, 2008, Memphis, Tennessee (joint work with F. Blanchet-Sadri and Justin Palumbo). Naomi Brownstein presenting the paper entitled “Two Element Unavoidable Sets of Partial Words” at 16th International Conference on Interdisciplinary Mathematical & Statistical Techniques IMST 2008/FIM XVI, Memphis, Tennessee, May 2008. F. Blanchet-Sadri, “Computing Weak Periods of Partial Words,” AFL 2008, 12th International Conference on Automata and Formal Languages, May 28, 2008, Balatonfured, Hungary (joint work with Taktin Oey and Timothy Rankin). F. Blanchet-Sadri organized a Session on Semigroups and Languages for the SCRA 2006-FIM XIII, 13th International Conference on Interdisciplinary Mathematical & Statistical Techniques, New University of Lisbon-Tomar Polytechnic Institute, Tomar, Portugal, September 2006. We attended the FOCS 2006, 47th Annual IEEE Symposium on Foundations of Computer Science from October 22 to October 24, 2006 in Berkeley, California. F. Blanchet-Sadri was invited to attend the conference "Promoting Undergraduate Research in Mathematics" from September 28 to September 30, 2006 in Rosemont, Illinois. F. Blanchet-Sadri served on the programme committee of LATA 2007, 1st International Conference on Language and Automata Theory and Applications that was held in Tarragona, Spain, March 29-April 4, 2007. F. Blanchet-Sadri chaired a session for LATA 2007, 1st International Conference on Language and Automata Theory and Applications that was held in Tarragona, Spain, March 29, 2007. T-shirt designed by the participants of Summer 2006
{"url":"http://www.uncg.edu/cmp/reu/summer2006/","timestamp":"2014-04-19T02:04:36Z","content_type":null,"content_length":"24645","record_id":"<urn:uuid:0d079b02-f282-497f-bfcb-d233d51c88f1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from November 2008 on Conceptual Integrity Does Santa Exist? Posted on November 27, 2008 by Thomas Drakengren This post is a favourite of mine, so I’ll try to keep posting it every year it when Christmas is getting closer. Just a few months before Christmas! But be prepared when your children start asking you whether Santa really exists or not. It’s not as easy to convince them as it once were. The solution to convincing today’s enlightened children is of course to be very rigorous. We need to prove to them that Santa really exists. So, let’s be pretty formal, and assume that S is the sentence “If S is true, then Santa exists”. That’s just a definition; nothing unusual going on. Seems that if we prove that S is true, then we’ll be done. But we’ll see. Now, the actual logical proof starts. Suppose S is true. This is just an assumption. By the definition of S, we can just replace S by its definition, and we get “If S is true, then Santa exists” is true. Well, not much gained yet. Probably we’re just warming up. But we can in fact use the assumption, “S is true” once more, together with that. Then we get “Santa exists”. Not bad! But this is of course only because we assumed that S is true. So we’re not there yet. Let’s summarize what we got from the assumption: “If S is true, then Santa exists”. OK, well, this is the same as what S itself says. Finally something; we’ve proved S itself to be true! But wait, if S is true, and “If S is true, then Santa exists” is also true, then obviously Santa exists. Done! So, just sit down together, the whole family, a few days before Christmas, and carefully go through this proof, and you have removed one uncertainty from the celebrations. Also you need to know that there are also grownups who haven’t understood this fact yet. This is my contribution for the people out there who still want to celebrate that old-fashioned Christmas! (The proof freely from Boolos and Jeffrey, “Computability and Logic”.) Filed under: Fun, Logic, Mathematics, Philosophy | Tagged: christmas, existence, Logic, Mathematics, Philosophy, proof, santa | 1 Comment » Probably a Working PIM Syncing Solution Posted on November 20, 2008 by Thomas Drakengren After using OggSync for ten days, as reported in my previous post, I believe I can say that my complete PIM syncing solution works pretty well. I haven’t had any problems with OggSync, actually, even though I’m using the beta version. It’s installed on home computer, work computer, and on my Windows Mobile 6 phone, and all of them sync to Google Calendar, using two different calendars; one for private and one for work. Both are synced to the home computer and the phone. Have a look at my previous post for the complete syncing solution, using LapLink PDAsync and Windows Mobile Device Center (the ActiveSync of Vista). As a bonus, I can sync my contact list with Gmail, too! Filed under: Productivity, Technology, Tools, Web | Tagged: calendar, gmail, google calendar, laplink, pdasync, pim, sync, synchronize, tasks, windows mobile | 1 Comment » Next attempt at PIM syncing Posted on November 10, 2008 by Thomas Drakengren Now I’m trying OggSync for syncing my calendar. The professional subscription wasn’t that expensive, and a colleague of mine was using it without problems, so I’m giving it a try. Works well after two days’ of use! So, now I’m syncing my two Outlook calendars with Google Calendar using OggSync (different calendars for private and work), my mobile phone directly with Google Calendar using OggSync, my tasks and contacts for my work computer using LapLink PDAsync (contact sync in OggSync doesn’t support categories), and tasks, contacts and notes for my home computer using Windows Mobile Device Center (Vista’s ActiveSync). What a mess! I haven’t found a better (that is, working) combination, though. I’ll be back with a review later of whether this works over a longer period of time or not. My feeling is that OggSync is very stable indeed. Filed under: Productivity, Tools, Web | Tagged: calendar, contacts, google, google calendar, laplink, notes, oggsync, outlook, pdasync, pim, sync, synchronize, tasks, vista, windows mobile device center, wmdc | 1 Comment »
{"url":"http://blog.drakengren.com/2008/11/","timestamp":"2014-04-20T10:47:39Z","content_type":null,"content_length":"32345","record_id":"<urn:uuid:8abb6cc3-e686-4f7f-961d-c40f8be0ae48>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
XY Graph for Inverse Function? May 8th 2009, 05:39 PM #1 Junior Member Apr 2009 XY Graph for Inverse Function? I need to graph $y=x^2+4$. I understand subbing in values for X to get Y. The graph is a positive (opens upwards) parabola. The inverse would be. $y=\pm \sqrt {x-4}$ As for graphing, I usually use x values of -3,-2,-1,0,1,2,3. Do I substitute those values into the inverse equation the same I would the regular function? If I sub the value of 0 in for X, then X-4 = -4. Then nothing is the square of -4. How do I calculate to find Y properly, when working with an inverse equation? Last edited by NotSoBasic; May 8th 2009 at 05:54 PM. I need to graph $y=x^2+4$. I understand subbing in values for X to get Y. The graph is a positive (opens upwards) parabola. The inverse would be. $y=\pm \sqrt {x-4}$ As for graphing, I usually use x values of -3,-2,-1,0,1,2,3. Do I substitute those values into the inverse equation the same I would the regular function? If I sub the value of 0 in for X, then X-4 = -4. Then nothing is the square of -4. How do I calculate to find Y properly, when working with an inverse equation? The domain of inverse function is $x \ge 4$ so, you cannot put the values which are less than 4. The inverse is not defined for those values. I've used an applet to graph and see how the domain is $x \ge 4$ Is there someway to find out what values of x to start with when I don't have access to an applet, or do I just use trial and error until I find a number that can be squared? So now I can substitute values into x, and I would assume that I should only use the y values which end up as a whole number, ie. if x = 8, then $y=\pm\sqrt {8-4}$ which then equals $y=\pm2$ Is making an xy graph possible for inverse, how do I properly show the values on paper before I attempt to graph the points? Or to get my points for the inverse I just switch the x<=>y from my original xy table? That would seem to make sense, since it is an inverse. =P From there $x \ge 4$ because y=0 on the inverse? I need to graph $y=x^2+4$. I understand subbing in values for X to get Y. The graph is a positive (opens upwards) parabola. The inverse would be. $y=\pm \sqrt {x-4}$ As for graphing, I usually use x values of -3,-2,-1,0,1,2,3. Do I substitute those values into the inverse equation the same I would the regular function? If I sub the value of 0 in for X, then X-4 = -4. Then nothing is the square of -4. How do I calculate to find Y properly, when working with an inverse equation? Since $x^2$ is never negative, $y= x^2+ 4$ is never negative. The function $y= x^2+ 4$ has domain "all real numbers" and range "all real numbers larger than or equal to 4". The inverse function reverses domain and range. The inverse function would have domain "all real numbers larger than or equal to 4" and range "all real numbers". The fact that the domain is "all real numbers large than or equal to 4" is why you cannot put, say, x= 3 or x= 2, in for x. Of course, this function, $x^2+ 4$ is not "one-to-one" and so does not have a true inverse. You have to break it into two functions, $y= \sqrt{x- 4}$ and $y= -\sqrt{x- 4}$ in order to get the entire range of "all real numbers". Great, thanks guys! I'm doing this stuff from home on my own, and having your assistance to explain things not in my given documents is very much appreciated! Her is another way to plot the graph of an inverse function: STEP 1:Plot the graph of the given function. STEP 2:Plot the reflection of the graph drawn with respect to the line $y=x$ acting as mirror. And you have the graph of the inverse function. May 8th 2009, 06:05 PM #2 Aug 2008 May 8th 2009, 06:48 PM #3 Junior Member Apr 2009 May 8th 2009, 06:59 PM #4 Junior Member Apr 2009 May 9th 2009, 04:17 AM #5 MHF Contributor Apr 2005 May 9th 2009, 05:07 AM #6 Junior Member Apr 2009 May 9th 2009, 07:55 AM #7
{"url":"http://mathhelpforum.com/pre-calculus/88165-xy-graph-inverse-function.html","timestamp":"2014-04-18T01:09:09Z","content_type":null,"content_length":"51547","record_id":"<urn:uuid:4360bb1b-4840-4206-b65a-d6aa54fce15e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] wired error message in scipy.sparse.eigen function: Segmentation fault [Numpy-discussion] wired error message in scipy.sparse.eigen function: Segmentation fault David Cournapeau david@silveregg.co... Thu Jan 28 00:11:25 CST 2010 Jankins wrote: > Yes. I am using scipy.sparse.linalg.eigen.arpack. > The exact output is: > /usr/local/lib/python2.6/dist-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so I need the output of ldd on this file, actually, i.e the output of "ldd It should output the libraries actually loaded by the OS. > In fact, the matrix is from a directed graph with about 18,000 nodes and > 41,000 edges. Actually, this matrix is the smallest one I used. Is it available somewhere ? 41000 edges should make the matrix very sparse. I first thought that your problem may be some buggy ATLAS, but the current arpack interface (the one used by sparse.linalg.eigen) is also quite buggy in my experience, though I could not reproduce it. Having a matrix which consistently reproduce the bug would be very useful. In the short term, you may want to do without arpack support in scipy. In the longer term, I intend to improve support for sparse matrices linear algebra, as it is needed for my new job. > Now I switch to use numpy.linalg.eigvals, but it is slower than > scipy.sparse.linalg.eigen.arpack module. If you have a reasonable ATLAS install, scipy.linalg.eigvals should actually be quite fast. Sparse eigenvalues solver are much slower than full ones in general as long as: - your matrices are tiny (with tiny defined here as the plain matrix requiring one order of magnitude less memory than the total available memory, so something like matrices with ~ 1e7/1e8 entries on current desktop computers) - you need more than a few eigenvalues, or not just the biggest/smallest ones More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-January/048154.html","timestamp":"2014-04-18T07:15:05Z","content_type":null,"content_length":"4747","record_id":"<urn:uuid:60de1658-5d9c-4ed3-9e2e-93a0cf6b3657>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
help with kinds of angle pairs August 25th 2012, 06:35 AM help with kinds of angle pairs i was absent for days due to sickness and we have a homework. i can't understand the book itself. please help me by explaining on how to get the answer to these questions.. thank you. this is the figure: Attachment 24593 these are the questions Line r intersects lines x and y 1. Give as many vertical angles as you can see in the figure if <1=30deg, give the measure of 2.) <2 3.) <3 4.) <4 August 25th 2012, 07:04 AM Prove It Re: help with kinds of angle pairs When you say "vertical angles" do you mean "pairs of vertically opposite angles"? August 25th 2012, 07:08 AM Re: help with kinds of angle pairs August 25th 2012, 07:09 AM Prove It Re: help with kinds of angle pairs Surely you can count how many pairs of vertically opposite angles there are in this diagram... August 26th 2012, 09:40 PM Re: help with kinds of angle pairs You made it already? I think this is so simple.
{"url":"http://mathhelpforum.com/geometry/202535-help-kinds-angle-pairs-print.html","timestamp":"2014-04-18T15:54:46Z","content_type":null,"content_length":"5089","record_id":"<urn:uuid:c24fe7a1-d05b-40fa-a0e7-5156b79ab4fc>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Topology of integrable non-Hamiltonian systems This is the research project of a PhD student of mine. Actually the general project is sufficiently large for a number of PhD theses. There are lots of open questions in the non-Hamiltonian case. What my student is doing is to study simplest (mostly low-dimensional) cases, and with only nondegenerate singularities: - Systems of type (1,1), i.e. 1 vector field and 1 function, dimension 2. Already in this case, the picture is non-trivial. - Any dimension: a real classification of nondegenerate singular points of type (n,0), i.e. zero function and n vector fields - Dimension 3: local strcture of type (1,2), type (2,1), and also some questions about the global structure. - Dimension 4: Monodromy phenomenon around certain singularities in dimension 4. That will be enough for his thesis. He must write up a research article before summer.
{"url":"http://zung.zetamu.net/2012/01/topology-of-integrable-non-hamiltonian-systems-with-nondegenerate-singularities/","timestamp":"2014-04-16T19:51:38Z","content_type":null,"content_length":"110704","record_id":"<urn:uuid:bc4a4723-ba4e-4605-9a41-edd07b18450b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Lionville Science Tutor Find a Lionville Science Tutor ...Beyond academics, I spend my time backpacking, kayaking, weightlifting, jogging, bicycling, metalworking, woodworking, and building a wilderness home of my own design. In between formal tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems... 14 Subjects: including ACT Science, physics, calculus, geometry My love of science and teaching began in college where I was given the opportunity to do research in the biology department and become a teaching assistant. I then proceeded to graduate school, where I received a PhD in biomedical sciences. In graduate school, I became involved with a tutoring pro... 5 Subjects: including genetics, ACT Science, biology, physical science ...I worked in the biopharmaceutical industry for two large biologics companies for a total of 8 years. Currently I am working for an environmental monitoring company focusing in detection of environmental contaminants in water and soil. I have 13 years of experience as a trainer, and am a certified instructional designer through Langevin Learning Services. 2 Subjects: including biology, microbiology ...My test-taking program for all entrance exams involves review of the following academic skills: vocabulary, verbal reasoning, and the ability to relate ideas logically, including synonyms and analogies; reading comprehension of short passages and answering questions about that passage. My writ... 51 Subjects: including nursing, geometry, ESL/ESOL, algebra 1 ...I took 5 years of undergraduate and graduate level math and I have used math professionally every day since then. The core of successfully solving calculus problems is just patient, careful, hard work. Every student has his unique learning style. 10 Subjects: including physics, astronomy, calculus, geometry
{"url":"http://www.purplemath.com/lionville_pa_science_tutors.php","timestamp":"2014-04-19T12:05:58Z","content_type":null,"content_length":"23821","record_id":"<urn:uuid:7e9d523e-117d-416f-9465-f4d33aa672b5>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
Raidz in FreeNAS eating more space than expected up vote 5 down vote favorite I just got 6 new 2TB drives, and added them to my FreeNAS box. I have only dealt with RAID1 previously, and each setup has given what I was expecting. However, with the 6*2TB drives, I wanted to maximize the space available, so I went with raidz. But I seem to be missing space. I have 8.6TB available after the raidz was built. Maybe I did my math horribly wrong, but (N-1) x S(min) (where N=6 and S(min)=2TB) should result in 10TB. (I understand it would be more like 9.something) Does raidz actually consume more then 1 drive worth of space? Or could their possibly be another problem? (All drives have been independently verified that 2TB of space is available) storage zfs freenas raidz add comment 1 Answer active oldest votes Freenas/Zfs reserves a small fragment of drive space. So besides having only ~1.82TB of actual space. ZFS reserves 1/64th of drive space for its own means, thus 'stealing' another ~28gb from you on every drive. Also freenas makes a 2gb swap file on every drive, Then losing the 1 drive to Raidz, 8.6TB seems pretty close. up vote 7 down Source: http://cuddletech.com/blog/?p=261 vote accepted edit Freenas swap file on every drive added. add comment Not the answer you're looking for? Browse other questions tagged storage zfs freenas raidz or ask your own question.
{"url":"http://serverfault.com/questions/399137/raidz-in-freenas-eating-more-space-than-expected","timestamp":"2014-04-18T01:10:24Z","content_type":null,"content_length":"63810","record_id":"<urn:uuid:5f298c22-d536-4482-8d0e-6fc413cac2bc>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
statistics (science) :: Sample survey methods Update or expand this article! In Edit mode, you will be able to click anywhere in the article to modify text, insert images, or add new information. Once you are finished, your modifications will be sent to our editors for review. You will be notified if your changes are approved and become part of the published article! Update or expand this article! In Edit mode, you will be able to click anywhere in the article to modify text, insert images, or add new information. Once you are finished, your modifications will be sent to our editors for review. You will be notified if your changes are approved and become part of the published article! Article Free Pass As noted above in the section Estimation, statistical inference is the process of using data from a sample to make estimates or test hypotheses about a population. The field of sample survey methods is concerned with effective ways of obtaining sample data. The three most common types of sample surveys are mail surveys, telephone surveys, and personal interview surveys. All of these involve the use of a questionnaire, for which a large body of knowledge exists concerning the phrasing, sequencing, and grouping of questions. There are other types of sample surveys that do not involve a questionnaire. For example, the sampling of accounting records for audits and the use of a computer to sample a large database are sample surveys that use direct observation of the sampled units to collect the data. A goal in the design of sample surveys is to obtain a sample that is representative of the population so that precise inferences can be made. Sampling error is the difference between a population parameter and a sample statistic used to estimate it. For example, the difference between a population mean and a sample mean is sampling error. Sampling error occurs because a portion, and not the entire population, is surveyed. Probability sampling methods, where the probability of each unit appearing in the sample is known, enable statisticians to make probability statements about the size of the sampling error. Nonprobability sampling methods, which are based on convenience or judgment rather than on probability, are frequently used for cost and time advantages. However, one should be extremely careful in making inferences from a nonprobability sample; whether or not the sample is representative is dependent on the judgment of the individuals designing and conducting the survey and not on sound statistical principles. In addition, there is no objective basis for establishing bounds on the sampling error when a nonprobability sample has been used. Most governmental and professional polling surveys employ probability sampling. It can generally be assumed that any survey that reports a plus or minus margin of error has been conducted using probability sampling. Statisticians prefer probability sampling methods and recommend that they be used whenever possible. A variety of probability sampling methods are available. A few of the more common ones are reviewed here. Simple random sampling provides the basis for many probability sampling methods. With simple random sampling, every possible sample of size n has the same probability of being selected. This method was discussed above in the section Estimation. Stratified simple random sampling is a variation of simple random sampling in which the population is partitioned into relatively homogeneous groups called strata and a simple random sample is selected from each stratum. The results from the strata are then aggregated to make inferences about the population. A side benefit of this method is that inferences about the subpopulation represented by each stratum can also be made. Cluster sampling involves partitioning the population into separate groups called clusters. Unlike in the case of stratified simple random sampling, it is desirable for the clusters to be composed of heterogeneous units. In single-stage cluster sampling, a simple random sample of clusters is selected, and data are collected from every unit in the sampled clusters. In two-stage cluster sampling, a simple random sample of clusters is selected and then a simple random sample is selected from the units in each sampled cluster. One of the primary applications of cluster sampling is called area sampling, where the clusters are counties, townships, city blocks, or other well-defined geographic sections of the population. Decision analysis, also called statistical decision theory, involves procedures for choosing optimal decisions in the face of uncertainty. In the simplest situation, a decision maker must choose the best decision from a finite set of alternatives when there are two or more possible future events, called states of nature, that might occur. The list of possible states of nature includes everything that can happen, and the states of nature are defined so that only one of the states will occur. The outcome resulting from the combination of a decision alternative and a particular state of nature is referred to as the payoff. When probabilities for the states of nature are available, probabilistic criteria may be used to choose the best decision alternative. The most common approach is to use the probabilities to compute the expected value of each decision alternative. The expected value of a decision alternative is the sum of weighted payoffs for the decision. The weight for a payoff is the probability of the associated state of nature and therefore the probability that the payoff occurs. For a maximization problem, the decision alternative with the largest expected value will be chosen; for a minimization problem, the decision alternative with the smallest expected value will be chosen. Decision analysis can be extremely helpful in sequential decision-making situations—that is, situations in which a decision is made, an event occurs, another decision is made, another event occurs, and so on. For instance, a company trying to decide whether or not to market a new product might first decide to test the acceptance of the product using a consumer panel. Based on the results of the consumer panel, the company will then decide whether or not to proceed with further test marketing; after analyzing the results of the test marketing, company executives will decide whether or not to produce the new product. A decision tree is a graphical device that is helpful in structuring and analyzing such problems. With the aid of decision trees, an optimal decision strategy can be developed. A decision strategy is a contingency plan that recommends the best decision alternative depending on what has happened earlier in the sequential process. Do you know anything more about this topic that you’d like to share?
{"url":"http://www.britannica.com/EBchecked/topic/564172/statistics/60726/Sample-survey-methods","timestamp":"2014-04-16T17:41:37Z","content_type":null,"content_length":"96898","record_id":"<urn:uuid:d1c61dc5-bcc4-40d5-9197-2eb75e941842>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] help! type 'float64scalar' is not type 'float' [Numpy-discussion] help! type 'float64scalar' is not type 'float' Travis Oliphant oliphant at ee.byu.edu Thu Aug 3 15:37:33 CDT 2006 Sebastian Haase wrote: >On Wednesday 02 August 2006 22:43, Travis Oliphant wrote: >>Sebastian Haase wrote: >>>I just found >>>numpy.isscalar() and numpy.issctype() ? >>>These sound like they would do what I need - what is the difference >>>between the two ? >>Oh, yeah. >>numpy.issctype works with type objects >>numpy.isscalar works with instances >>Neither of them distinguish between scalars and "numbers." >>If you get errors with isscalar it would be nice to know what they are. >I'm still trying to reproduce the exception, but here is a first comparison >that - honestly - does not make much sense to me: >(type vs. instance seems to get mostly the same results and why is there a >difference with a string ('12') ) These routines are a little buggy. I've cleaned them up in SVN to reflect what they should do. When the dtype object came into existence a lot of what the scalar types where being used for was no longer needed. Some of these functions weren't updated to deal with the dtype objects correctly either. This is what you get now: >>> import numpy as N >>> N.isscalar(12) >>> N.issctype(12) >>> N.isscalar('12') >>> N.issctype('12') >>> N.isscalar(N.array([1])) >>> N.issctype(N.array([1])) >>> N.isscalar(N.array([1]).dtype) >>> N.issctype(N.array([1]).dtype) >>> N.isscalar(N.array([1])[0].dtype) >>> N.issctype(N.array([1])[0].dtype) >>> N.isscalar(N.array([1])[0]) >>> N.issctype(N.array([1])[0]) > # apparently new 'scalars' have a dtype attribute ! More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-August/009891.html","timestamp":"2014-04-20T10:23:05Z","content_type":null,"content_length":"5719","record_id":"<urn:uuid:fd4f58c2-6bfa-4b1a-a9ab-61a24bc6a3c8>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Items where Subject is "Algae" Number of items at this level: 155. Achari, G P Kumaraswamy (1994) Role of reproductive bodies of algae as ultra microscopic food of larval and planktonic animals: a new finding in environmental science. Seafood Export Journal, 25 (14). pp. 13-16. Anoop, A K and Krishnakumar, P K and Rajagopalan, M (2007) Trichodesmium erythraeum (Ehrenberg) bloom along the southwest coast of India (Arabian Sea) and its impact on trace metal concentrations in seawater. Estuarine, Coastal and Shelf Science, 71 (3-4). pp. 641-646. Asha, P S and Diwakar, K and Sivanesh, H (2013) Long line farming of Kappaphycus alvarezi in Tuticorin coastal areas and its implication on environment. Marine Fisheries Information Service; Technical and Extension Series (217). pp. 4-5. Asha, P S and Rajagopalan, M and Diwakar, K (2004) Effect of sea weed, sea grass and powdered algae in rearing the hatchery produced juveniles of Holothuria (metriatyla) scabra, jaeger. Proceedings of the National Symposium on Recent Trends in Fisheries . pp. 79-85. Balakrishnan, S and Ravichandran, M and Kaliaperumal, N (1992) Studies on the distribution and standing crop of algae at Muthupet estuary, Tamilnadu. Seaweed Research and Utilisation, 15 (1 & 2). pp. Balasubramanian, T and Wafar, M V M (1975) Primary productivity of some sea grass beds in the Gulf of Mannar. Mahasagar, 8 (1 & 2). pp. 87-92. Bensam, P and Kaliaperumal, N and Gandhi, V and Raju, A and Rangasamy, V S and Kalimuthu, S and Ramalingam, J R and Muniyandi, K (1990) Occurrence and growth of the commercially important red algae in fish culture pond at Mandapam. Seaweed Research and Utilisation, 13 (2). pp. 101-108. Boban, Subhadra and George, Grinson (2010) Algal biorefinery-based industry: an approach to address fuel and food insecurity for a carbon-smart world. Journal of the Science of Food and Agriculture . pp. 131-12. Chennubhotla, V S Krishnamurthy and Kaladharan, P and Kaliaperumal, N and Rajagopalan, M S (1992) Seasonal variations in production of cultured seaweed Gracilaria edulis (Gmelin) Silva in Minicoy Lagoon (Lakshadweep). Seaweed Research and Utilisation, 14 (2). pp. 109-113. Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S (1978) Seasonal Changes in Growth, Fruiting Cycle and Oospore Output in Turbinaria conoides (J. Agardh) Kiitzing. Botanica Marina, 21 . pp. 67-69. Chennubhotla, V S Krishnamurthy and Nasser, A K V and Kunhikoya, K K and Anasu Koya, A and Rajagopalan, M S (1994) Observations on the grazing phenomenon of the cultured seaweed, Gracilaria edulis by fish in Minicoy lagoon (Lakshadweep). Marine Fisheries Information Service, Technical and Extension Series, 127 . pp. 11-12. Chennubhotla, V S Krishnamurthy and Rao, M Umamaheswara and Rao, K S (2013) Commercial importance of marine macro algae. Seaweed Research and Utilization, 35 (1 & 2). pp. 118-128. Chennubhotla, V S Krishnamurthy and Rao, M Umamaheswara and Rao, K S (2013) Exploitation of marine algae in Indo-Pacific region. Seaweed Research and Utilization, 35 (1 & 2). pp. 1-7. Gireesh, R (2011) Influence of algal cell size on filtration and ingestion rates during different larval stages of the yellow neck clam, Paphia malabarica Chemnitz. Aquaculture Nutrition, 17 (3). pp. Gireesh, R and Gopinathan, C P (2008) Effects of microalgal diets on larval growth and survival of Paphia malabarica chemnitz. Aquaculture Research, 39 . pp. 552-556. Gopinathan, C P (1986) Differential Growth Rates Of Micro-Algae In Various Culture Media. Indian Journal of Fisheries, 33 (4). pp. 450-456. Gopinathan, C P and Gireesh, R (2006) Micro algae culture as live feed. Recent Advances on Applied Aspects of Indian Marine Algae with Reference to Global Scenario, 1 (A). pp. 341-350. Gopinathan, C P and Jayasurya, P K and Kaliamoorthy, M and Giri, Sunirmal (2005) Micro algae. CMFRI Special Publication Mangrove ecosystems: A manual for the assessment of biodiversity, 83 . pp. Humphrey, G F (1973) Photosynthetic and respiratory rates, and phosphorus content of algae grown at different phosphate levels. MBAI Special Publication dedicated to Dr.N K Panikkar (1). pp. 74-79. Imelda, Joseph and Panigrahi, A and Kishore Chandra, P (2000) Tolerance of three marine microalgae to cryoprotectants dimethyl sulfoxide, methanol and glycerol. Indian Journal of Marine Sciences . pp. 243-247. James, P S B R (1972) On a bloom of Trichodesmium thiebautii Gomont in the Gulf of Mannar at Mandapam. Indian Journal of Fisheries, 19 (1&2). pp. 205-207. Jayasankar, Reeta (2005) Effect of salinity on physiology of Gracilaria spp. (Gigartinales, Rhodophyta). Seaweed Research and Utilisation, 27 (1 & 2). pp. 19-24. Jayasankar, Reeta (1992) On the successful culture of Gracilaria edulis from spores. Marine Fisheries Information Service, Technical and Extension Series, 117 . pp. 15-17. Jayasankar, Reeta (2004) Photosynthetic efficiency of marine algae from Mandapam coast. Seaweed Research and Utilisation, 26 Special Issue . pp. 185-190. Jayasankar, Reeta and Paliwal, Kailash (2002) Seasonal variation in the elemental composition of Gracilaria species of the Gulf of Mannar, Tamil Nadu coast. Seaweed Research and Utilisation, 24 (1). pp. 55-59. Jayasankar, Reeta and Ramakrishnan, Remya and Nirmala, K (2002) Changes in the pigment constituents of Gracilaria edulis (Gmelin) Silva cultured in open sea off Narakkal by reproductive method. Seaweed Research and Utilisation, 24 (1). pp. 47-54. Jayasankar, Reeta and Ramakrishnan, Remya and Nirmala, K and Seema, C (2005) Biochemical constituents of Gracilaria edulis cultured from spores. Seaweed Research and Utilisation, 17 (1 & 2). pp. Jayasankar, Reeta and Ramalingam, J R and Kaliaperumal, N (1990) Biochemical composition of some green algae from Mandapam coast. Seaweed Research and Utilisation, 12 (1 & 2). pp. 37-40. Jayasankar, Reeta and Valsala, K K (2008) Influence of different concentrations of sodium bicarbonate on growth rate and chlorophyll content of Chlorella salina. Journal of the Marine Biological Association of India, 50 (1). pp. 74-78. Jayasankar, Reeta and Varghese, Sally (2002) Cultivation of marine red alga Gracilaria edulis (Gigartinales, Rhodophyta) from spores. Indian Journal of Marine Sciences, 31 (1). pp. 75-77. Jayasankar, Reeta (1993) Seasonal variation in biochemical constituents of Sargassum wightii (grevillie) with reference to yield in alginic acid content. Seaweed Research and Utilisation, 16 (1 & 2). pp. 13-16. Jayasankar, Reeta and Kaliaperumal, N (1991) Experimental culture of Gracilaria edulis by spore shedding method. Seaweed Research and Utilisation, 14 (1). pp. 21-23. Jayasankar, Reeta and Kulandaivelu, G (1999) Fatty acid profiles of marine red alga Gracilaria spp (Rhodophyta, Gigartinales). Indian Journal of Marine Sciences, 28 . pp. 74-76. Jayasankar, Reeta and Ramamoorthy, N (1993) Some observations on the growth of Chlorella salina. Seaweed Research and Utilisation, 16 (1 & 2). pp. 139-144. Jayasankar, Vidya and Kizhakudan, Joe K and Margaret Muthu Rathinam, A and Santhosi, I and Rajendran, P and Thiagu, R (2012) Manipulation of fatty acids in the estuarine clam Meretrix casta (Gmelin, 1791) by supplementation with the microalgal diet, Isochrysis galbana. Indian Journal of Fisheries, 59 (3). pp. 99-102. Jean Jose, J and Lipton, A P and Subhash, S K (2008) Impact of Marine Secondary Metabolites (MSM) from Hypnea musciformis as an Immunostimulant on Hemogram Count and Vibrio alginolyticus Infection in the Shrimp, Penaeus monodon, at Different Salinities. The Israeli Journal of Aquaculture – Bamidgeh, 60 (1). pp. 65-69. Jones, W Eifion and Moorjani, A Shakuntala (1973) Attachment and early development of the tetraspores of some coralline red algae. MBAI Special Publication dedicated to Dr. N K Panikkar (1). pp. Kaladharan, P (2000) भारत में समुद्री शैवाल का पैदावार - अतीत, वर्त्तमान और भविष्य. मत्स्यगंधा 2000 राजभाषा स्वर्ण जयंती विशेषांक . pp. 116-120. Kaladharan, P (2005) Gracilariopsis lemaneiformis (Bory) Dawson - a red alga reported from certain backwaters of Kerala. Journal of the Bombay Natural History Society , 102 (3). pp. 378-379. Kaladharan, P (2006) Occurrence of Halophila beccarii Asch. from Kumbala estuary, Kerala. Journal of the Bombay Natural History Society , 103 (1). pp. 137-138. Kaladharan, P (1998) Photosynthesis of seagrass, Thalassia hemprichii in oxygen enriched and depleted enclosures. Journal of the Marine Biological Association of India, 40 (1 & 2). pp. 179-180. Kaladharan, P and Asokan, P K (2012) Dense bed of the seagrass Halophila beccarii in Kadalundi Estauary, Kerala. Marine Fisheries Information Service; Technical and Extension Series (212). p. 18. Kaladharan, P and Asokan, P K (2012) Green tide and fish mortality along Calicut coast. Marine Fisheries Information Service; Technical and Extension Series (211). pp. 11-12. Kaladharan, P and Asokan, P K (2012) Harvesting in situ microalgal feed by enriching seawater. Marine Fisheries Information Service; Technical and Extension Series (212). pp. 8-9. Kaladharan, P and Chennubhotla, V S Krishnamurthy (1993) Introduction and growth of Gracilaria edulis, in Minicoy lagoon (Lakshadweep). Fishing Chimes, 13 (7). p. 55. Kaladharan, P and Gireesh, R (2003) Laboratory culture of Gracilaria spp. and Ulva lactuca in seawater enriched media. Seaweed Research and Utilisation, 25 (1 & 2). pp. 139-142. Kaladharan, P and Gireesh, R and Smitha, K S (2002) Cost effective medium for the laboratory culture of live feed micro algae. Seaweed Research and Utilisation, 24 (1). pp. 35-40. Kaladharan, P and Kandan, S (1997) Primary productivity of seaweeds in the lagoon of Minicoy atoll of Laccadive archipelago. Seaweed Research and Utilisation, 19 (1 & 2). pp. 25-28. Kaladharan, P and Leelabhai, K S (2007) Effect of humic acids on mercury toxicity to marine algae. Fishery Technology, 44 (1). pp. 93-98. Kaladharan, P and Said Koya, K P and Sulochanan, Bindu (2012) Seagrass Meadows and Conservation. Geography and You, 12 (75). pp. 24-27. Kaladharan, P and Seetha, K (2000) Agarolytic activity in the enzyme extracts of Oscillatoria sp. Journal of the Marine Biological Association of India, 42 (1 & 2). pp. 151-152. Kaladharan, P and Sridhar, N (1999) Cytokinins from Marine green alga, Caulerpa racemosa (Kuetz) Taylor. Fishery Technology, 36 (2). pp. 87-89. Kaladharan, P and Veena, S and Vivekanandan, E (2009) Carbon sequestration by a few marine algae: observation and projection. Journal of the Marine Biological Association of India, 51 (1). pp. Kaladharan, P and Velayudhan, T S (2005) GABA from Hypnea valentiae (Turn.) Mont. and its effect on larval settlement of Perna viridis Linnaeus. Seaweed Research and Utilisation, 27 (1 & 2). pp. Kaladharan, P and Vivekanandan, M (1990) Photosynthetic potential and accumulation of assimilates in the developing chloroembryos of Cyamopsis tetragonoloba (L.) Taub. Plant Physiology, 92 . pp. Kaliaperumal, N (1990) Influence of low and high temperature on diurnal periodicity of tetraspore shedding in some red algae. Journal of Marine Biological Association of India, 32 (1 & 2). pp. Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Ramalingam, J R (1986) Growth, Phenology and Spore Shedding in Gracilaria arcuata var. arcuata (Zanardini) Umamaheswara Rao & G. corticata var. cylindrica ( J . Agardh) Umamaheswara Rao (Rhodophyta). Indian Journal of Marine Sciences, 15 . pp. 107-110. Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Ramalingam, J R and Muniyandi, K (1993) Growth of Gracilaria edulis in relation to environmental factors in field cultivation. Seaweed Research and Utilisation, 16 (1 & 2). pp. 167-176. Kaliaperumal, N and Jayasankar, Reeta and Ramalingam, J R (2003) Outdoor culture of agar yielding red alga Gracilaria edulis (Gmelin) Silva. Seaweed Research and Utilisation, 25 (1 & 2). pp. 159-162. Kaliaperumal, N and Kalimuthu, S and Muniyandi, K and Ramalingam, J R and Pillai, S Krishna and Chennubhotla, V S Krishnamurthy and Rajagopalan, M S and Rao, P V Subba and Rao, K Rama and Thomas, P C and Zaidi, S H and Subbaramaiah, K (1996) Distribution of marine algae and seagrass off Valinokkam-Kilakkarai, Tamil Nadu coast. Seaweed Research and Utilisation, 18 (1 & 2). Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (1996) Effect of repeated harvesting on the growth of Sargasum spp and Turbiniria conoides occurring in Mandapam area. Seaweed Research and Utilisation, 18 (1 & 2). Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (2003) Pilot scale field cultivation of the agarophyte Gracilaria edulis (Gmelin) Silva at Vadakadu (Rameswaram). Seaweed Research and utilisation, 25 (1 & 2). pp. 213-219. Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (1992) Studies on the agar content in Gracilaria arcuata Var. Arcuata and G. Corticata Var. Cylindrica. Seaweed Research and Utilisation, 15 (1 & 2). pp. 191-195. Kaliaperumal, N and Pandian, G (1984) Marine algal flora from some localities of South Tamil Nadu Coast. Journal of the Marine Biological Association of India, 26 (1&2). pp. 159-164. Kaliaperumal, N and Rajagopalan, M S and Chennubhotla, V S Krishnamurthy (1992) Field cultivation of gracilarla edulis (Gmelin) Silva in the lagoon of Minicoy (Lakshadweep). Seaweed Research and Utilisation, 14 (2). pp. 103-107. Kaliaperumal, N and Ramalingam, J R (2005) Effect of different fertilizers on the growth of Gracilaria edulis (Gmelin) silva in onshore cultivation. Indian Hydrobiology, 7 Supplement . pp. 63-67. Kaliaperumal, N and Ramalingam, J R and Kalimuthu, S and Ezhilvalavan, R (2002) Seasonal changes in growth, biochemical constituents and phycocolloid of some marine algae of Mandapam coast. Seaweed Research and Utilisation, 24 (1). pp. 73-77. Kaliaperumal, N and Rao, M Umamaheswara (1987) Effect of thermal stress on spore shedding in some red algae of Visakhapatnam coast. Indian Journal of Marine Sciences, 16 . pp. 201-202. Kaliaperumal, N and Rao, M Umamaheswara (1975) Growth, fruiting cycle and oospore output in Turbinaria decurrensbory. Indian Journal of Fisheries, 22 (1&2). pp. 225-230. Kaliaperumal, N and Rao, M Umamaheswara (1986) Growth, reproduction and sporulation of marine alga Gelidium pusillum(Stackhouse) Le Jolis. Indian Journal of Marine Sciences, 15 . pp. 29-32. Kaliaperumal, N and Rao, M Umamaheswara (1982) Seasonal growth and reproduction of Gelidiopsis variabilis (Greville) Schmitz. Journal of Experimental Marine Biology and Ecology, 61 . pp. 265-270. Kalidas , C and Edward, Loveson (2005) Role of micro algae pigments in aquaculture. Aqua International . pp. 34-37. Kalimuthu, S (1980) Variations in growth and mannitol and alginic acid Contents of Sargassum myriocystum J. Agardh. Indian Journal of Fisheries, 27 (1&2). pp. 265-266. Kalimuthu, S and Chennubhotla, V S Krishnamurthy and Selvaraj, M and Najmuddin, M and Panigrahy, R (1980) Alginic acid and mannitol contents in relation to Growth in Stoechospermum marginatum (C. Agardh) Kuetzing. Indian Journal of Fisheries, 27 (1&2). pp. 267-268. Kalimuthu, S and Kaliaperumal, N and Ramalingam, J R (1992) Distribution and seasonal changes of marine algal flora from seven localities around Mandapam. Seaweed Research and Utilisation, 15 (1 & 2). pp. 119-126. Kalimuthu, S and Kaliaperumal, N and Ramalingam, J R (1995) Distribution of algae and seagrasses in the estuaries and backwaters of Tamil Nadu and Pondichery. Seaweed Research and Utilisation, 17 . pp. 79-96. Kalimuthu, S and Kaliaperumal, N and Ramalingam, J R (1991) Standing crop, algin and mannitol of some alginophytes of Mandapam coast. Journal of the Marine Biological Association of India, 33 (1&2). pp. 170-174. Kannan, P and Rajagopalan, M (2004) Role of marine macrophytes as feed for green turtle Chelonia mydas. Seaweed Research and Utilisation, 26 (1 & 2). pp. 187-192. Koya, C N Haneefa and Nasser, A K V and Mohamed, Gulshad (1999) Productivity of the coral reef alga Halimeda gracilis Harv. ex. J.Ag. Minicoy island, Lakshadweep. Seaweed Research and Utilisation, 21 (1 & 2). pp. 79-84. Kunda , Sumanth Kumar and Kaladharan, P (2003) Agar factory discharge as fuel and manure. Seaweed Research and Utilisation, 25 (1 & 2). pp. 165-168. Lambade, S B and Mohamed, K S (2002) Laboratory - scale high density culture of the marine diatom Chaetoceros sp. Indian Journal of Fisheries, 49 (1). pp. 13-22. Lewis, E J (1965) On a Gonyaulax bloom off Mt Dalley, in the Arabian sea. Proceedings of the Seminar on Sea, Salt and Plants . pp. 224-226. Lipton, A P and Pramitha, V S and Jean Jose, J (2009) Marine Secondary Metabolites (MSM) from Macro Algae Enhance Bacterial Clearance in Hemolymph of Penaeus monodon. The Israeli Journal of Aquaculture – Bamidgeh, 61 (1). pp. 42-47. Manilal, Aseer and Selvin, J and Seghal Kiran, G and Sugathan, Sujith and Feby, F A S and Lipton, A P (2013) Micro-algal lethality potentials of marine organisms collected from the Indian littoral. Thalassus, 29 (2). pp. 59-65. Manilal, Aseer and Sugathan, Sujith and Sabarathnam, Balu and George, Kiran S and Selvin, J and Shakir, Chippu and Lipton, A P (2011) Biological activity of the red alga Laurencia brandenii. Acta Botanica Croatica, 70 (1). pp. 81-90. Manilal, Aseer and Sugathan, Sujith and Sabarathnam, Balu and Seghal Kiran, G and Selvin, J and Shakir, Chippu and Lipton, A P (2010) Bioactivity of the red algae Asparagopsis taxiformis collected from the Southwestern coast of India. Brazilian journal of oceanography, 58 (2). pp. 93-100. Merina, Medo and Lipton, A P and Wesley, S Godwin (2011) Isolation, characterization and growth response of biofilm forming bacteria Bacillus pumilus from the sea grass, Halodule pinifolia off Kanyakumari coast. Indian Journal of Marine Sciences, 40 (3). pp. 443-448. Mishra, Pathik Chandra and Jayasankar, Reeta and Seema, C (2006) Yield and quality of carrageenan from Kappaphycus alvarezii subjected to different physical and chemical treatments. Seaweed Research and Utilisation, 28 (1). pp. 113-117. Mohamed, Gulshad (2005) Farming of Hypnea valentiae (Turner) Montagne at Minicoy Lagoon (Lakshadweep). Seaweed Research and Utilisation, 27 (1 & 2). pp. 93-98. Mohamed, Gulshad (2000) High yield of Acanthophora spicifera from culture at Minicoy lagoon, Lakshadweep. Marine Fisheries Information Service, Technical and Extension Series, 163 . pp. 3-4. Mohamed, Gulshad and Nasser, A K V (2005) Role of the coralline alga Halimeda gracilis Harvey ex. J. Agardh in sediment development at Minicoy Island (Lakshadweep) during monsoon months. Seaweed Research and Utilisation, 27 (1 & 2). pp. 11-18. Nammalwar, P and Narayanan, K (1979) Mass mortality of fishes due to the bloom of Trichodesmium thiebauttii Gomont on the Gulf of Mannar Coast. Science and Culture, 45 (4). pp. 170-171. Natarajan, P and Palanichamy, S and Mohan, S and Thiagarajan, R (1997) Development of novel techniques to maintain Chlorella spp. stock culture in artificial seawater. Marine Fisheries Information Service, Technical and Extension Series, 149 . pp. 12-13. Noble, A (1960) Occurrence of the blue green alga Aphanocapsa littoralis Hansg. var. macrococca Hansg. causing colouration of the sand and its relation with the tides. Journal of the Marine Biological Association of India, 3 (1 & 2). pp. 262-263. Palanichamy, S and Rani, V (2004) Observations on the long term preservation and culture of the marine microalga, Nannochloropsis oculata. Journal of the Marine Biological Association of India, 46 (1). pp. 98-103. Palaniswami, Rani and Rajapandian, M E (1998) Microalgal species as feed for conditioning adult oyster Crassostrea madrasensis (Preston). Journal of the Marine Biological Association of India, 39 (1 & 2). pp. 159-162. Panikkar, N K (1955) Observations on the ionic composition of blue green algae growing in saline lagoons. Proceedings of the National Institute of Sciences of India, 21 (B). pp. 90-102. Pillai, V K (1954) Growth requirements of a halophilic blue-green alga, Phormidium Tenue (menegh). Indian Journal of Fisheries, 1 (1&2). pp. 130-144. Pillai, V K (1954) Utilization of natural byproducts for the cultivation of blue-green algae. Current Science, 24 (1). pp. 21-23. Prabhakaran, M P and Nandan, S Bijoy and Jayachandran, P R and Pillai, N G K (2013) Species diversity and community structure of ichthyofauna in the seagrass ecosystem of Minicoy Atoll, Lakshadweep, India. Indian Journal of Geo-Marine Sciences, 42 (3). pp. 349-359. Prabhu, M S and Ramamurthy, S and Dhulkhed, M H and Radhakrishnan, N S (1971) Trichodesmium bloom and the failure of oil sardine fishery. Mahasagar, 4 (2). pp. 62-64. Prabhu, M S and Ramamurthy, S and Kuthalingam, M D K and Dhulkhed, M H (1965) On an unusual swarming of the planktonic blue-green algae Trichodesmium Spp., off Mangalore. Current Science, 34 (3). p. Pramitha, V S and Lipton, A P (2013) Antibiotic potentials of red macroalgae Hypnea musciformis (Wulfen) Lamouroux and Hypnea valentiae (Turner) Montagne. Seaweed Research and Utilization, 35 (1 & 2). pp. 95-107. Pramitha, V S and Lipton, A P (2011) Growth responses of microalgae, Chlorella salina and Isochrysis galbana exposed to extracts of the macroalga, Hypnea musciformis. Indian Journal of Fisheries, 58 (4). pp. 95-99. Qasim, S Z and Joseph, K J (1975) Utilization of Nitrate and Phosphate by the Green Alga Tetraselmis gracilis Kylin. Indian Journal of Marine Sciences, 4 (2). pp. 161-164. Radhakrishnan, E V and Gore, P S and Raveendran, O and Unnithan, R V (1979) Microbial decomposition of the floating weed Salvinia molesta Aublet in Cochin backwaters. Indian Journal of Marine Sciences, 8 (3). pp. 170-174. Raghu Prasad, R (1958) A note on the occurrence and feeding habits of noctiluca and their effects on the plankton community and fisheries. Proceedings of the Indian Academy of Sciences, 47 (6). pp. Raghu Prasad, R and Tampi, P R S and Durve, V S (1961) Note on the occurrence of the Anthomedusa cladonema in the Indian region. Journal of the Marine Biological Association of India, 3 (1 & 2). pp. Rajendran, I and Chakraborty, Kajal and Vijayan, K K and Vijayagopal, P (2013) Bioactive sterols from the brown alga Anthophycus longifolius (Turner) Kützing, 1849 (= Sargassum longifolium). Indian Journal of Fisheries, 60 (1). pp. 83-86. Ramalingam, J R and Kaliaperumal, N and Kalimuthu, S (2003) Commercial scale production of carrageenan from red algae. Seaweed Research and Utilisation, 25 (1 & 2). pp. 37-46. Ramalingam , J R and Kaliaperumal, N and Kalimuthu, S (2002) Agar production from Gracilaria with improved qualities. Seaweed Research and Utilisation, 24 (1). pp. 25-34. Rao, A Chandrasekhara and Kaladharan, P (2003) Improvement of yield and quality of agar from Gracilaria edulis (Gmelin) Silva. Seaweed Research and Utilisation, 25 (1 & 2). pp. 131-138. Rao, M Umamaheswara (1968) On two new records of Codiaceae from India. Journal of the Marine Biological Association of India, 10 (2). pp. 407-409. Rao, M Umamaheswara (1969) Seasonal variations in growth, alginic acid and mannitol contents of Sargassum wightii and Turbinaria conoides from the Gulf of Mannar, India. Proceedings of International Seaweed Symposium, 6 . pp. 579-584. Rao, M Umamaheswara (2001) Status and future of marine algal research on the East Coast of India. Souvenir issued on the Occasion of the inauguration of Visakhapatnam R C of CMFRI 17 October 2001 . pp. 80-81. Rao, M Umamaheswara (1975) Studies on the growth and reproduction of Gracilaria corticata near Mandapam in the Gulf of Mannar. Journal of the Marine Biological Association of India, 17 (3). pp. Rao, M Umamaheswara and Kaliaperumal, N (1983) Effects of environmental factors on the liberation of spores from some red algae of Visakhapatnam coast. Journal of Experimental Marine Biology and Ecology, 70 . pp. 45-53. Rao, M Umamaheswara and Kalimuthu, S (1972) Changes in Mannitol and Alginic Acid Contents of Turbinaria ornata (TURNER) J. AGARDH in Relation to Growth and Fruiting. Botanica Marina, 15 (1). pp. Sarangi, R K and Mohamed, Gulshad (2011) Seasonal algal bloom and water quality around the coastal Kerala during southwest monsoon using in situ and satellite data. Indian Journal of Geo-Marine Sciences, 40 (3). pp. 356-369. Sethi, S N and Naik, G B (2007) Wonder Gift of Nature: Spirulina. Fishing Chimes, 27 (3). pp. 16-29. Subrahmanyan, R (1962) On Ruttnera pringsheirnii sp. nov. (Chrysophyceae) from the Coastal Waters of India. Archiv fur Mikrobiologie , 42 . pp. 219-225. Sukumaran, Soniya and Kaliaperumal, N (2001) Sporulation in Gracilaria crassa Harvey ex. J.Agasdh at different environmental factors. Seaweed Research and Utilisation, 23 (1 & 2). pp. 81-87. Sumitra, Vijayaraghavan and Joseph, K J and Balachandran, V K and Chandrika, V (1975) Production of dissolved carbohydrate (DCHO) in three Unialgal cultures. Journal of the Marine Biological Association of India, 17 (1). pp. 206-212. Thomas, M M (1969) On a new distributional record of Parapenaeopsis tenella (Bate) from the south eastern coast of India. Journal of the Marine Biological Association of India , 10 (1). pp. 166-167. Thomas, M M and Pillai, V K and Pillai, N N (1973) Caridina pseudogracilirostris sp.nov. (Atyidae: Caridina) from the Cochin Backwater. Journal of the Marine Biological Association of India, 15 (2). pp. 871-872. Varma, R Prasanna (1959) Studies on the succession of marine algae on a fresh substratum in Palk Bay. Proceedings of the Indian Academy of Sciences, 49 (4 B). pp. 245-263. Varma, R Prasanna and Rao, K Krishna (1962) Algal resources of Pamban area. Indian Journal of Fisheries , 9A (1). pp. 205-211. Vijayakumaran, K and Chittibabu, K and Girijavallabhan, K G (2001) Comparative growth of seven species of micro-algae in artificial and natural media. Journal of the Marine Biological Association of India, 43 (1 & 2). pp. 161-167. Manisseri, Mary K and Antony, Geetha and Rao, G Syda (2012) Common Seaweeds and Seagrasses of India Herbarium Vol.1. Central Marine Fisheries Research Institute, Kochi. ISBN 978-81-923271-4-3 Manisseri, Mary K and Antony, Geetha and Rao, G Syda (2012) Common Seaweeds and Seagrasses of India Herbarium Vol.2. Central Marine Fisheries Research Institute, Kochi. ISBN 978-81-923271-4-3 Book Section Dash, Biswajit and Kaladharan, P and Rao, G Syda (2008) आँध्रप्रदेश में विशाखपट्टणम तट पर समुद्री शैवाल काप्पाफाईककस जातियों का समुद्री संवर्धन - संभावनाएं और प्रत्याशाएं. In: आँध्रप्रदेश के समुद्री मात्स्यिकी. Kaladharan, P,(ed.) Central Marine Fisheries Research Institute, Visakhapatnam, pp. 22-23. Gopinathan, C P (1998) Microalgae culture. In: Proceedings of the Workshop National Aquaculture Week. Sakthivel, M and Vivekanandan, E and Rajagopalan, M and Meiyappan, M M and Paulraj, R and Ramamurthy, S and Alagaraja, K,(eds.) The Aquaculture Foundation of India, Chennai, pp. 76-81. Kaladharan, P and Jayasankar, Reeta (2003) Seaweeds. In: Status of Exploited Marine Fishery Resources of India. Mohan Joseph, M and Jayaprakash, A A,(eds.) CMFRI, Cochin, pp. 228-239. ISBN Ramalingam, J R (2000) Production of export quality agar. In: Souvenir 2000. UNSPECIFIED,(ed.) Central Marine Fisheries Research Institute, Mandapam, pp. 81-83. Rao, D S and Girijavallabhan, K G and Muthusamy, S and Chandrika, V and Gopinathan, C P and Kalimuthu, S and Najmuddin, M (1991) Bioactivity in marine algae. In: Bioactive compounds from marine organisms With Emphasis on the Indian Ocean: An Indo-United States Symposium. Thompson, Mary Frances and Sarojini, R and Nagabhushanam, R,(eds.) Oxford and IBH Publishing Company, New Delhi, pp. Sulochanan, Bindu and Kumaraguru, A K and Mathew, Grace (2010) Seagrass Diversity and Influence of Beach Erosion in Palk Bay and Gulf of Mannar Seagrass Beds. In: Coastal Fishery Resources of India - Conservation and sustainable utilisation. Meenakumari, B and Boopendranath, M R and Edwin, Leela and Sankar, T V and Gopal, Nikita and Ninan, George,(eds.) Society of Fisheries Technologists, pp. Conference or Workshop Item Bensam, P and Udhayashankar, T R (1993) Colonisation and Growth of the Sea Grasses, Halodule uninervis (Forskal) Ascherson and Halophila ovalis (R.Brown) Hooker f. in marine culture ponds at Mandapam. In: Second Indian Fisheries Forum , 27-31 May, 1990, Mangalore; India. Christabell, Jonsy and Lipton, A P (2010) Influence of aqueous extract of macroalga Hypnea musciformis Lamour and Codium tomentosum (Huds) stackhouse at different growth phases of microalgae Chlorella marina (Chlorophyta). In: Proceedings of the National seminar on Role of Environmental Changes in the lower group biodiversity with special reference to algal diversity, February 9-10, 2010, WCC Nagercoil. Gopinathan, C P (2004) Marine microalgae. In: Proceedings of Ocean Life Food and Medicine Expo, 27-29 February 2004 , Chennai. Jayasankar, Reeta (1999) पुंनरुत्पादन एककों से ऐगारोद्भभिद ग्रासिलेरिया जातियों की संवर्धन शक्यता. In: द्वितीय राष्ट्रीय वैज्ञानिक राजभाषा संगोष्ठी Proceedings of the 2nd National Scientific Seminar in Official Language Hindi - लघु पैमाने का समुद, 17 August 1999, सी एम एफ आर आइ कोचि CMFRI Kochi. Rao, M Umamaheswara (1972) Coral reef flora of the Gulf of Manar and Palk Bay. In: Proceedings of the First International Symposium on Corals and Coral Reefs, Marine Biological Association of India, Mandapam Camp, India, 12-16 January 1969, Mandapam. Rathnakala, R and Chandrika, V (1999) Growth inhibition of fish pathogens by antagonistic actinomycetes isolated from mangrove environment. In: The Fourth Indian Fisheries Forum Proceedings, 24-28 November 1996, Kochi. Manisseri, Mary K and Naomi, T S and Antony, Geetha and Vinod, K and George, Rani Mary and Rao, M Umamaheswara and Ramalingam, J R and Gopakumar, G and Jasmine, S and Reshmi, E G and Venkatesan, V and Thomas, V J and Geetha, P M and Sreekumar, K M (2014) Seaweeds and Seagrasses. [Image] CSMCRI, Mandapam and CMFRI, Kochi (1976) Report on Survey of Marine Algal Resources of Tamilnadu 1971 - 1976. Technical Report. CMFRI; Kochi, Kochi. CMFRI, Kochi (2012) Green Algal extract (GAe): Natural in every sense. CMFRI; Kochi. Teaching Resource Joseph, Shoji and Ajith Kumar, P B (2013) Microalgal culture and maintenance in marine hatcheries. [Teaching Resource] Kaladharan, P and Thara, K (2011) Live algal feed and their culture. [Teaching Resource] Said Koya, K P and Kunhikoya, V A and Kaladharan, P (2012) Sea grass Ecosystem - A Lakshdweep perspective. [Teaching Resource] Sulochanan, Bindu (2012) Seagrass distribution and its vulnerability in India. [Teaching Resource] Vipinkumar, V P and Chakraborty, Kajal (2013) Green Algal extract (GAe). [Teaching Resource] Asma, V M (1993) Impact assessment of biocides on microalgae - a study -in vitro. PhD thesis, Central Marine Fisheries Research Institute. Submitted to Cochin University of Science and Technology. Jayasankar, Reeta (1996) Physiological studies on the productivity of Gracilaria. PhD thesis, Madurai Kamaraj University. Kalimuthu, S (2000) Studies on some Indian members of the Rhodymeniales. PhD thesis, Bharathidasan University, Tiruchirappalli. Koya, C N Haneefa (2000) Studies on ecology, chemical constituents and culture of marine macroalgae of Minicoy Island, Lakshadweep. PhD thesis, Central Institute of Fisheries Education, Versova. Sukumaran, Soniya (2000) Studies on sporulation in some commercially important marine algae of Mandapam coast. PhD thesis, Central Institute of Fisheries Education, Versova.
{"url":"http://eprints.cmfri.org.in/view/subjects/sub21.type.html","timestamp":"2014-04-20T10:56:32Z","content_type":null,"content_length":"63719","record_id":"<urn:uuid:f8ce8602-da93-418b-b6ae-be1d0553a438>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
- - Please install Math Player to see the Math Symbols properly Click on a 'View Solution' below for other questions: DD What is the 6^th term of the binomial expansion of [x4 - y5]^10 ? DD View Solution DD Approximate (0.98)^30 to the nearest tenth using the first two terms of a binomial expansion. View Solution DD What is the number of terms in the binomial expansion of (x + y)^15 ? View Solution DD Find the eleventh term in the binomial expansion of (x - 4)^10. View Solution DD Approximate the value of (2.02)^7 by using the first two terms of a binomial expansion. View Solution DD Find the 4^th term of the binomial expansion (1 - x)^4. DD View Solution DD What is the fifth term of (7x - y)^12 ? View Solution DD Find the sixth term in the binomial expansion (x - y^4)^10. DD View Solution DD Find the ninth term of the binomial expansion [x3 + y]^18. View Solution DD Find the twelfth term in the binomial expansion of (3 + xy)^14. View Solution DD Each edge of a cube is x cm in length. If the length of each edge is increased by 8 in, then find the binomial expansion that represents the volume of the new cube. View Solution DD Find the eighth term of the binomial expansion (x^6 + y^5)^11. View Solution DD The first three terms of the binomial expansion of (x - y^6)^7 are View Solution DD Find the fourth term in the binomial expansion of (x^13 + 1x)^5. View Solution DD Find the value of middle term in the expansion of (xy + yx)^8. View Solution DD Find the 14^th term in the binomial expansion of (36 + x^2 + 12x)^7. View Solution DD Evaluate: (7! × 4!)5! DD View Solution DD What is the number of terms in the binomial expansion of (x + y)^16 ? View Solution DD Evaluate: (6 x 5 x 4 x 3 x 2 x 1) (4 x 3 x 2 x 1)(2 x 1) DD View Solution DD Evaluate: (8 x 7 x 6 x 5 x 4 x 3 x 2 x 1)(5 x 4 x 3 x 2 x 1)( 3 x 2 x 1) ) DD View Solution DD Evaluate: 2! × 5! DD View Solution DD Evaluate: 5!3! View Solution DD Evaluate: 10!5! × 5! DD View Solution DD Use the binomial theorem to find the expansion. View Solution (x + y)^3 DD DD Use binomial theorem to find the expansion of (2x - y)^4. View Solution DD Use binomial theorem to find the expansion of (1 - x)^5 View Solution DD Use binomial theorem to find the expansion of (x^2 + 2)^4. View Solution DD Use binomial theorem to find the expansion of (2x + 3y) ^3. DD View Solution DD Find the third term in the binomial expansion of (2x + y)^8. DD View Solution DD Find the fifth term of the binomial expansion (3 - x)^6. DD View Solution DD Find the 12^th term of the binomial expansion (1 - x)^12. DD View Solution DD Find the tenth term in the binomial expansion of (x - 2)^9. View Solution DD What is the eleventh term of (6x - y)^20 ? View Solution DD Find the sixth term in the binomial expansion (x - y^2)^8. DD View Solution DD Find the ninth term in the binomial expansion of (9 + xy)^12. View Solution DD Approximate the value of (3.03)^10 by using the first two terms of a binomial expansion. View Solution DD Approximate (0.97)^20 to the nearest tenth using the first two terms of a binomial expansion. View Solution DD If a! = 5! × 3!, then the value of a is: View Solution DD Find the ninth term of the binomial expansion [x9 + y]^15. View Solution DD Find the twelfth term of the binomial expansion (x^4 + y^3)^15. View Solution DD The first three terms of the binomial expansion of (x - y^5)^6 are View Solution DD Each edge of a cube is x cm in length. If the length of each edge is increased by 10 ft, then find the binomial expansion that represents the volume of the new cube. View Solution DD Find the sixth term in the binomial expansion of (x^13 + 1x)^7. View Solution DD Find the value of middle term in the expansion of (xy + yx)^16. View Solution DD Find the 8^th term in the binomial expansion of (9 + x^2 + 6x)^4. View Solution
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgedmxkhgja&.html","timestamp":"2014-04-19T22:05:31Z","content_type":null,"content_length":"91213","record_id":"<urn:uuid:91997969-6c85-4036-af2a-5e334d8350d3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
Number Theory 1111 Submissions [4] viXra:1111.0059 [pdf] submitted on 17 Nov 2011 The New Prime Theorems (1291)-(1340) Authors: Chun-Xuan Jiang Comments: 90 pages Using Jiang function we are able to prove almost all prime problems in prime distribution. This is the Book proof. No great mathematicians study prime problems and prove Riemann hypothesis in AIM, CLAYMA, IAS, THES, MPIM, MSRI. In this paper using Jiang function J[2] (ω) we prove that the new prime theorems (1291)-(1340) contain infinitely many prime solutions and no prime solutions. From (6) we are able to find the smallest solution π[k](N[0],2) ≥ 1. This is the Book theorem. Category: Number Theory [3] viXra:1111.0040 [pdf] submitted on 10 Nov 2011 The New Prime Theorems (1241)-(1290) Authors: Chun-Xuan Jiang Comments: 90 pages Using Jiang function we are able to prove almost all prime problems in prime distribution. This is the Book proof. No great mathematicians study prime problems and prove Riemann hypothesis in AIM, CLAYMA, IAS, THES, MPIM, MSRI. In this paper using Jiang function J[2] (ω) we prove that the new prime theorems (1241)-(1290) contain infinitely many prime solutions and no prime solutions. From (6) we are able to find the smallest solution π[k](N[0],2) ≥ 1. This is the Book theorem. Category: Number Theory [2] viXra:1111.0038 [pdf] submitted on 10 Nov 2011 On a Strengthened Hardy-Hilbert's Type Inequality Authors: Guangsheng Chen Comments: 8 pages In this paper, by using the Euler-Maclaurin expansion for the zeta function and estimating the weight function effectively, we derive a strengthenment of a Hardy-Hilbert's type inequality proved by W.Y. Zhong. As applications, some particular results are considered. work. Category: Number Theory [1] viXra:1111.0002 [pdf] submitted on 1 Nov 2011 The New Prime Theorems (1191)-(1240) Authors: Chun-Xuan Jiang Comments: 90 Pages. Using Jiang function we are able to prove almost all prime problems in prime distribution. This is the Book proof. No great mathematicians study prime problems and prove Riemann hypothesis in AIM, CLAYMA, IAS, THES, MPIM, MSRI. In this paper using Jiang function J[2] (ω) we prove that the new prime theorems (1191)-(1240) contain infinitely many prime solutions and no prime solutions. From (6) we are able to find the smallest solution π[k](N[0],2) ≥ 1. This is the Book theorem. Category: Number Theory
{"url":"http://vixra.org/numth/1111","timestamp":"2014-04-17T10:33:58Z","content_type":null,"content_length":"6610","record_id":"<urn:uuid:687bfbf3-dafc-460a-9362-e3c37abf1763>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 51 analytical geometry make a full analysis of algebraic equation of y=(x-2)^2(x^2-9) A ball is dropped from the roof of a building. How fast is it moving after 4.5 seconds? (Absolute value) Physics. Please please help! Nevermind, that'd be my error. Read it wrong Physics. Please please help! Elena, Im a little stuck on the first equation you have given. In the denominator, we are supposed to multiply r1 by r2. How did you get 20 x 10? Shouldn't it be 0.1m x 0.02m? Math Question 2 i need to noe the factors of 63. im doing math hw healthcare reimbursement What is the difference between hospital claims processing and professional (provider and/or CMS 1500) claims processing? When someone buys a ticket for an airline flight there is a 0.0995 probability that the person will not show up for the flight (based on data from the IBM research paper by Lawrence, Hong, and Cherrier). An agent for Air America want to book 24 persons on an airplane that can ... Zinc reacts with HCl to produce ZnCl2 and Hydrogen gas, H2. What volume, in ml of a 1.25 M solution would be required to completely react with 3.27 g of Zn? Zn(s) + 2 HCl (aq) -------> ZnCl2 + H2 physics help! Objects of equal mass are oscillating up and down in simple harmonic motion on two different vertical springs. The spring constant of spring 1 is 155 N/m. The motion of the object on spring 1 has twice the amplitude as the motion of the object on spring 2. The magnitude of the... A thin uniform rod is rotating at an angular velocity of 7.70 rad/s about an axis that is perpendicular to the rod at its center. As the figure indicates, the rod is hinged at two places, one-quarter of the length from each end. Without the aid of external torques, the rod sud... A uniform plank of length 5.0 m and weight 225 N rests horizontally on two supports, with 1.1 m of the plank hanging over the right support (see the drawing). To what distance x can a person who weighs 408 N walk on the overhanging part of the plank before it just begins to tip? Review Conceptual Example 7 before starting this problem. A uniform plank of length 5.0 m and weight 225 N rests horizontally on two supports, with 1.1 m of the plank hanging over the right support (see the drawing). To what distance x can a person who weighs 408 N walk on the... One end of a meter stick is pinned to a table, so the stick can rotate freely in a plane parallel to the tabletop. Two forces, both parallel to the tabletop, are applied to the stick in such a way that the net torque is zero. The first force has a magnitude of 2.00 N and is ap... A top is a toy that is made to spin on its pointed end by pulling on a string wrapped around the body of the top. The string has a length of 59 cm and is wrapped around the top at a place where its radius is 1.9 cm. The thickness of the string is negligible. The top is initial... A 4.15-g bullet is moving horizontally with a velocity of +369 m/s, where the sign + indicates that it is moving to the right (see part a of the drawing). The bullet is approaching two blocks resting on a horizontal frictionless surface. Air resistance is negligible. The bulle... A ball is attached to one end of a wire, the other end being fastened to the ceiling. The wire is held horizontal, and the ball is released from rest (see the drawing). It swings downward and strikes a block initially at rest on a horizontal frictionless surface. Air resistanc... got it already thanks A wagon is rolling forward on level ground. Friction is negligible. The person sitting in the wagon is holding a rock. The total mass of the wagon, rider, and rock is 95 kg. The mass of the rock is 0.34 kg. Initially the wagon is rolling forward at a speed of 0.50 m/s. Then th... A 4.15-g bullet is moving horizontally with a velocity of +369 m/s, where the sign + indicates that it is moving to the right (see part a of the drawing). The bullet is approaching two blocks resting on a horizontal frictionless surface. Air resistance is negligible. The bulle... physics/ 2D motion A projectile of mass 0.539 kg is shot from a cannon. The end of the cannon s barrel is at height 6.4 m, as shown in the figure. The initial velocity of the projectile is 9 m/s . How long does it take the projectile to hit the ground? Answer in units of s physics (please help!) A projectile of mass 0.539 kg is shot from a cannon. The end of the cannon s barrel is at height 6.4 m, as shown in the figure. The initial velocity of the projectile is 9 m/s . How long does it take the projectile to hit the ground? Answer in units of s if a ball is thrown down at an inital speed of 22 m/s, what will be its position after 2.8 seconds? acceleration is 9.8 m/s squared Thank you so much! if a ball is thrown into the air at a speed of 22 m/s what is its position after 2.8 seconds? calculate the heat released for 5.00 grams if H2O cooling from 99 degrees celcius to 22 degrees celcius in joules and calories Based on the steps occurring in an acid-base reaction, which definition of acids and bases is most practical? Give three reasons for your choice. Find the volume of a cube whose side measures four x to the seventh power. what am I suppose to answer here. The main point of this argument or what? I have never answer a question about analyzing the flaw? What does that mean, what should I be looking at? is is for mistakes in the arguments or what? Thank you 10th grade well first you find how many coins there are in all. In this problem the entire amount is 6 coins, 3 of the 6 coins are dimes, if you pull out one dime there are now 2 dimes out of 5 coins ( because you took one away) so the answer would be two out of five chances. computer science Given an equilateral triangle with different charges namely charge on A is negative 4nC; B is positive 2nC and C is 1nC. The sides are all equal with 20cm. How can you calculate the resultant field on A; the mutual potential energy of the system? Solve for: cos (2 theta) = 2 - 2 sin^2 (theta) 2m-1=3m what is the first step needed to solve this what is the simplest form of 6/15, 15/60,and 78??????? 6th grade Because particles or molecules are always moving!!! Alegra II He Has Snow Brains 3/5 of the members of a hiking club went on the last hiking trip. if 39 people went on the last hiking trip how many are in the club? Use direct object pronouns in this sentence. Mi consejero académico me hizo unas sugerencias. Physical Science Can be used to represent an idea, object, or event that is too big, too small, too complex, or too dangerous to observe or test directly. Physical Science I am sorry, this is homework but this is all that is on each statement. Physical Science I am sorry, this was all I had on the puzzle and it is homework that I had in Physical Science. Physical Science Non of those work because I'm working on this crossword puzzle, and the word is 8 letters long and ends with a D. Physical Science Exact, agreed-upon quantity used for comparison. Physical Science I accidently typed in my next question in the wrong thing. Physical Science Can be used to represent an idea, object, or event that is too big, too small, too complex, or too dangerous to observe or test directly. Physical Science Application of science to help people It costs you $10 to draw a sample of size n=1 and measure the attribute of interest. You have a budget of $1,200. a. Do you have sufficient funds to estimate the population mean for the attribute of interest with a 95% confidence interval 4 units in width? Assume stdeviation(s... social studies what was used to stuff the nose of a mummy 5th Grade Math how do you do metric measurement social studies There is a sign that is 10ft on one side and 15ft on the other side (from the ground to the base of the sign), the billboard itself is 20ft. The painter brings a 25ft ladder in length. If the patch of ground 10ft from the base of the sign is used, at what point will the ladder...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=meagan","timestamp":"2014-04-19T07:34:50Z","content_type":null,"content_length":"16819","record_id":"<urn:uuid:544d7862-8710-46c3-94aa-ca09b08df7dd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Marcus Hook Algebra 1 Tutor Find a Marcus Hook Algebra 1 Tutor I am an experienced tutor in advanced math subjects such as Calculus, Differential Equations, and all Algebra courses. I am also experienced in tutoring Biology, Chemistry, and Statistics. I have capabilities in advanced college level science courses (especially in Mechanical Engineering). I can also tutor in business courses such as accounting and finance. 24 Subjects: including algebra 1, chemistry, calculus, physics ...Determining parameters, such as perimeter, area, surface area, and volume for circles, triangles, and quadrilaterals will become a simple task as I work with the student through extra examples and simplified explanations. While the basic understanding of shapes and size is fairly straight forwar... 9 Subjects: including algebra 1, chemistry, geometry, algebra 2 ...I have used Macintosh computers since 1985 and have been a master of using them. My teaching duties at my chemistry teaching position included duties for managing the classroom for the future program for the entire school, and this meant training staff to use Macbooks and training them to use MS... 26 Subjects: including algebra 1, chemistry, geometry, biology I am a certified teacher who has worked with children ranging in ages from 4 to 19. I enjoy working with kids of all ages. I believe that children have their own learning styles and it is my job to teach them the way they learn best. 10 Subjects: including algebra 1, reading, grammar, special needs ...I am well-versed in IB (as well as AP) Biology, Theory of Knowledge, English Literature and Composition, Writing craft, and 20th Century History. I have also obtained 'A' grades in Spanish language studies at the 300 level in college and have studied various epic literature, moral philosophy, an... 18 Subjects: including algebra 1, Spanish, reading, biology
{"url":"http://www.purplemath.com/Marcus_Hook_algebra_1_tutors.php","timestamp":"2014-04-19T15:26:35Z","content_type":null,"content_length":"24195","record_id":"<urn:uuid:a8d57aae-fcb9-45d1-84c8-50c76e27ffe3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
EQUATION SHEETS - Math, Engineering, Equations, Derivatives, 2013 © Equationsheets.com. All rights reserved. Covers such subjects as: Math, Math 1050, Equation, Equations, Equation Sheet, Equation Sheets, Engineering, Engineer, Trigonometry, Geometry, Pre-Algebra, Algebra, Calculus, Pre-Calculus, Angles, Angle, Function, Functions Physics, Fraction, Fractions, Exponent, Exponents, Logarithm, Logarithms, Units Circle, Pythagorean, Theorem, Identities, Sine, Sines, Cosine, Cosines, Tangent, Tangents, Cosecant, Cosecants, Secant, Secants, Cotangent, Cotangents, Hyperbolic, Derivative, Derivatives, Integral, Integrals, Triangle, Triangles, Square, Squares, Circle, Circles, Sphere, Spheres, Degree, Degrees, Theorem, Theorems, Constant, Constants, Conversion, Conversions, Temperature, Temperatures, Distance, Point, Points, Formula, Formulas, Sum, Product, Quadradic, Interpolation, Binomial, Series, Difference, Integer, Integers, Equilateral, Parallelogram, Parallelograms, Trapezoid, Trapezoids, Sector, Sectors, Ellipse, Ellipses, Cone, Cones, Pyramid, Pyramids, Diameter, Diameters, Radius, Radii, Radians, Arc, Powers of Ten, Scientific Notation, Cylinder, Cylinders, Wedge, Wedges, Frustrum, Prism, Prisms, Avagadro, Avagadros, Atomic, Mass, Average, Bohr, Boltzmann, Boltzman, Boltzmanns, Boltzmans, Boltzmann’s, Boltzman’s, Compton, Deuteron, Dirac, Diracs, Dirac’s, Electron, Electron Volt, Faraday, Faradays, Faraday’s, Gravitational, Gravity, Josephson, Flux, Quantum, Neutron, Nuclear, Permeability, Permittivity, Photon, Wavelength, Planck, Plancks, Planck’s, Plank, Planks, Plank’s, Proton, Quantized Hall, Rydberg, Rydbergs, Rydberg’s, Speed of Light, Vacuum, Atmospheric, Stefan-Boltzmann, Stefan Boltzmann, Stefan-Boltzman, Wein, Weins, Wein’s, Length, Area, Volume, Time, Velocity, Speed, Acceleration, Mass, Force, Weight, Energy, Work, Heat, Power, Pressure, Electricity, Light, Radiation, Viscosity, Frequency, Frequencies, Cofunction, Quadradic Equation, Unit Conversion, SI Units, English Units, Scientific and Mathematical Constants, Quadradic, Formula, Formulae, Constant, Study Sheet, Metric System
{"url":"http://equationsheets.com/","timestamp":"2014-04-18T13:47:52Z","content_type":null,"content_length":"73451","record_id":"<urn:uuid:52d9c861-2e07-4fa1-9aa0-407979c90531>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
In Section 8.2 we computed the magnitude and phase spectra of 1D and 2D sequences. The power spectrum is a measure of the distribution of signal energy as a function of energy and graphs of the spectrum are commonly used to visually analyze the frequency content of a signal or the frequency response of a system. The power spectrum is defined as the squared magnitude of the discrete Fourier transform of a signal. Therefore, for the case of 2D signals, we get Function PowerSpectrum , in addition to the direct realization of Equations (8.2.1) and (8.2.4) , implements three more methods for calculating the power spectrum of a signal or image. These are known as the Bartlett, Welch, and Blackman and Tukey methods [
{"url":"http://reference.wolfram.com/legacy/applications/digitalimage/UsersGuide/ImageTransforms/8.3.html","timestamp":"2014-04-16T10:15:09Z","content_type":null,"content_length":"35686","record_id":"<urn:uuid:3fe93c42-2cf5-4496-bd09-470ad5ccff33>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
RV Park Reviews Campground Discussion Forum > Latitide And Longitude - OK, I have a problem and since I am not an expert, I am asking this forum for help. I have a Gaming GPS which quotes as an example 28.52.201N and 082.46.982W HOWEVER, I found around co-ordinates quoted as 28.54.99W and 82.52.29W. The co-ordinates are irrelevant but I wonder how to change one system into the other or vice versa. One set of quotation is all the way with TWO(2) digits and the other with 2 and 3 digits. I wonder if there is a way to change one system into the other and vice versa.
{"url":"http://www.rvparkreviews.com/invboard/lofiversion/index.php?t3013.html","timestamp":"2014-04-16T07:36:42Z","content_type":null,"content_length":"5515","record_id":"<urn:uuid:f83e9af8-4251-4c50-85c1-2114971422fe>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
Glen Burnie ACT Tutor Find a Glen Burnie ACT Tutor My goal is to utilize my knowledge in Chemistry (General and Organic) and Mathematics (Elementary to Calculus) to help individuals who have difficulty with the subjects. I have been known to teach anyone and that is something which students at my previous job told me. I have the ability to teach anyone in the subjects mentioned above. 19 Subjects: including ACT Math, chemistry, physics, calculus Hey guys, I have a lot of experience working with high school students in math. During my junior and senior years of high school I was part of an award-winning math team at my high school and consistently won awards at the regional and state levels. My strongest subjects are easily in math and alt... 33 Subjects: including ACT Math, reading, Spanish, trigonometry ...This contribution has brought me a great deal of personal satisfaction and solidified my understanding of the importance of science in our society. During this work, I have had the privilege of being a mentor to several undergraduate biology students. Two of my former mentees are enrolled in biology PhD programs and a third is working towards her M.D./PhD. 17 Subjects: including ACT Math, biology, ASVAB, algebra 1 ...Most of my recent tutoring classes are: Geometry, Analytic Geometry, Algebra I, II, Linear Algebra, Pr-Calculus, AP Calculus, Trigonometry, Integrated Math, Discrete Math, Digital Electronics, Physics. Also,I tutor Arabic language,Egyptian dialect. I'm available morning and evening hours for your convenience. 12 Subjects: including ACT Math, calculus, geometry, Arabic I am a current graduate student studying biochemistry & structural biology. I obtained my undergraduate degree in biochemistry from Georgia Tech where I became well versed in everything from organic chemistry to genetics. At GT I was involved heavily in organic chemistry research, synthesizing antibiotics for use against resistant bacteria. 40 Subjects: including ACT Math, chemistry, writing, physics
{"url":"http://www.purplemath.com/Glen_Burnie_ACT_tutors.php","timestamp":"2014-04-17T15:53:59Z","content_type":null,"content_length":"23871","record_id":"<urn:uuid:2c860748-36a6-44cb-b2f4-8d5ed39943f0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Writing the equation of the line... February 22nd 2010, 09:56 AM #1 Feb 2010 Writing the equation of the line... Hi there can you help me find the equation of the line that has a slope of -4/3 and passes through (-2,5) ? Thank you! The formula for the equation of a line knowing 2 points and a slope is: $y-y_1 = m(x-x_1)$ where m is the slope and $x1$ and $y1$ are the coordinates of the given point. Simply substitute: $y-5 = -\frac{4}{3} (x- (-2))$ and clean up if you want. Usually, on AP Calc tests, they recommend you don't waste your time simplifying because they accept this answer. $5=\frac{-4}{3}(-2)+b<br />$ Just as a different method February 22nd 2010, 10:12 AM #2 February 22nd 2010, 03:04 PM #3
{"url":"http://mathhelpforum.com/algebra/130135-writing-equation-line.html","timestamp":"2014-04-19T20:25:58Z","content_type":null,"content_length":"36587","record_id":"<urn:uuid:ac1871c7-31a3-43bf-8d08-44119248c7cb>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Intersection of two ellipses October 18th 2008, 08:09 AM #1 Oct 2008 Intersection of two ellipses Hello all, I have two ellipses, both share the same origin, one is rotated relative to the other which is aligned with the coordinate system. For the unrotated ellipse, it is in the form Ax^2+By^2+F=0. The rotated ellipse has a similar form, but includes a rotational component of course: Ax^2+By^2+Exy+F=0. I am looking for the intersection points (there should be four) of these two ellipses. It is where the two equations are equal to one another. Setting these two equations equal to one another and simplifying yields something like: (B1-B0)y^2=-E1xy-(A1-A0)x^2. My question is how can I solve for both x and y to find these intersections? Is there a simpler manner? Thank you for any input. Hello all, I have two ellipses, both share the same origin, one is rotated relative to the other which is aligned with the coordinate system. For the unrotated ellipse, it is in the form Ax^2+By^2+F=0. The rotated ellipse has a similar form, but includes a rotational component of course: Ax^2+By^2+Exy+F=0. I am looking for the intersection points (there should be four) of these two ellipses. It is where the two equations are equal to one another. Setting these two equations equal to one another and simplifying yields something like: (B1-B0)y^2=-E1xy-(A1-A0)x^2. My question is how can I solve for both x and y to find these intersections? Is there a simpler manner? Thank you for any input. Solve both as quadratics in x (so the solutions are in terms of y and the coefficients), and equate the solutions and solve for y. I think you will end up having to solve a 4th degree equation (a.k.a. a biquadratic). This can be done analytically but is not very pleasant. October 19th 2008, 01:46 AM #2 Grand Panjandrum Nov 2005 October 19th 2008, 07:26 AM #3
{"url":"http://mathhelpforum.com/pre-calculus/54331-intersection-two-ellipses.html","timestamp":"2014-04-18T04:41:30Z","content_type":null,"content_length":"36549","record_id":"<urn:uuid:cb6a2bb7-2829-4055-917c-b71422b65e0b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
(i) The incident ray, the refracted ray and the normal to the refracting surface at the point of incidence all lie in the same plane. (ii) The ratio of the sines of the angle of incidence (i) and of the angle of refraction (r) is a constant quantity μ for two given media, which is called the refractive index of the second medium with respect to the first. (sin i / sin r) = constant = μ When light propagates through a series of layers of different medium as shown in the figure, then the Snell’s law may be written as μ[1] sin 1 = μ[2] sin 2 = μ[3] sin 3 = μ[4] sin 4 = constant In general, μ sinθ = constant Fig a (series of transparent layers of different refractive indices), Fig b (A light ray passing from air to water bends toward the normal) When light passes from rarer to denser medium it bends toward the normal as shown in the fig. According to Snell’s law μ[1] sin θ1 = μ[2] sin θ[2] When a light ray passes from denser to rarer medium it bends away from the normal as shown in the fig. b above For a given point object, the image formed by refraction at plane surface is illustrated by the following diagrams. The same result is obtained for the other case also. The image distance from the refracting surface is also known as Apparent depth or height. Apparent Shift Apparent shift = Object distance from refracting surface – image distance from refracting surface. Δy (apparent shift) t = 1 - μ^-1 where t is the object distance and μ = μ[1]/ μ[2] • If there are a number of slabs with different refractive indices placed between the observer and the object. Total apparent shift = ΣΔy[i] Illustration 3: A person looking through a telescope T just sees the point A on the rim at the bottom of a cylindrical vessel when the vessel is empty. When the vessel is completely filled with a liquid (? = 1.5), he observes a mark at the centre, B, of the vessel. What is the height of the vessel if the diameter of its cross-section is 10cm? Solution: It is mentioned in the problem that on filling the vessel with the liquid, point B is observed for the same setting; this means that the images of point B, is observed at A, because of refraction of the ray at C. For refraction at C : sin r / sin i = μ[1] = 1.5 sin r = AD / AC = 10/ √( 10^2 + h^2), where h is height of vessel
{"url":"http://www.askiitians.com/iit-jee-ray-optics/refraction-at-plane-surface/","timestamp":"2014-04-20T01:25:38Z","content_type":null,"content_length":"56875","record_id":"<urn:uuid:82a38deb-47a5-4bbe-aef9-3aaca383cfc2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Bayesian Regularization of the Video Matting Problem Nicholas Apostoloff and Andrew W. Fitzgibbon Department of Engineering Science, University of Oxford To regularize the inverse problem of Video Matting in a Bayesian framework using priors on the distribution of alpha values and the spatio-temporal consistency of image sequences. Video Matting Video matting is a classic image processing problem involving the extraction of a foreground object from an arbitrary background in a sequence of images. It is most prevalent in the film industry for special effects shots that require the superposition of an actor onto a new background. The compositing equation linearly combines a background image B with a fore- ground image F to form the composite image C using the alpha matte : C = F + (1 - )B = × + × C F (1 - ) B However, the video matting problem is the inverse of this: Given an sequence of images C, solve for , F and B.
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/733/2585380.html","timestamp":"2014-04-18T15:50:49Z","content_type":null,"content_length":"8247","record_id":"<urn:uuid:92acf114-d4b1-4162-b803-4dacbc22c0e8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Our users: Hi, I wanted to comment on your software, not only is it really easy to follow, but I can hide the step by step procedures, or choose to show them. Also, the fact that it explains how to use the Gina Simpson, DE I got 95% on my college Algebra midterm which boosted my grade back up to an A. I was down to a C and worried when I found your software. I credit your program for most of what I learned. Thanks for the quick reply. Judy McDonald, TX I recently came across on the internet and ordered the algebra software for my child. I am happy to report that the visual and hands on approach is just what my child needed to grasp fundamental algebra concepts. Margaret, CA Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among Search phrases used on 2011-01-14: • cramer's rule ti-84 plus silver • biology power notes • "subtracting rules" alegbra • algebra tile worksheets • What are the steps used to solve an equation with rational expressions? • online inequality calculator • beach gradient • plotting ordered pairs puzzle free • holt algebra 1b textbook • creative publications algebra with pizzazz • holt algebra1 california answers • fifth grade worksheets and algebraic expressions • a first course in abstract algebra solutions • simplifying a sum of radical expressions calculator • Decimal to Mixed Number Calculator • grade 7 exponents question paper • Solutions to A first course in abstract algebra • free math simplifier • adding/subtracting three integers • math140 finite math problems and solutions • 8th grade calculator • adding radical expressions calculator • dilation geometry worksheet • 9th grade problems with answers for evaluate each expression • maths p.p.t • math Transpose a formula worksheet • pre algebra worksheets for 7th graders • free formula sheets for geometry • algebra with pizzazz what you shouldnt • factoring program on ti 84 • online taks test practice english and math FOR 6TH GREAD • McDougal Littell Algebra 1 answer key free • systems of equations ti84 • program to help with complex fractions • famous question pre-algebra with pizzazz answer • foil online • dividing two polynomials in java • free algebra with pizzazz worksheet answer • mcdougal littell math 9 taks objectives • a first course in abstract algebra solutions manual • calculator online cu radical • online inequalities calculator • online polynomial solver • ninth and tenth grade math fractions • algebra prayers • math trivia's • derivative implicit calculator • 10th grade math online games • simplifying rational expressions worksheet • multivariable equation solver online • PRENTICE HALL ALGEBRA 2 WORKBOOK ANSWERS • multiplication of radicals with different index • pizzazz math • logarithmic equation solver • 3rd order polynomial solver • UCSMP Advanced Algebra • 7th grade math square root • McDougal Littell Algebra 1 (2007) Answer sheet • exponential expression calculator • Excel Solver solve 3 equations in 3 unknowns • inverse operations worksheets with only positive numbers • free rational expression solver • polynomials subtract, multiply,, divide 9th grade • solving arithmetic progressions • step by step online integrator • what is the difference between evaluation and simplification of an algebraic expression • free math problem solver online algebrator • solving real life problems with formulas worksheet • Simplifying rational expressions with ac method • pre algebra with pizzazz creative publication • discovered radicals math • rational expression calculator online • solve my math probl • Algebra calculator for rational expressions and equations • 9th grade algebra • algebra 9th grade • real life situation multiply positive and negative rational number • solving formulas for specified variables worksheet • algebraic expression in relation to real life situation • coordinate plane worksheet picture • Math's Helper Plus free code • inverse operations worksheet • graphing linear equations test questions • rational expressions and equations calculator • how to convert radicals into decimals
{"url":"http://www.mhsmath.com/math-problem-solving/function-domain/radicals-in-math.html","timestamp":"2014-04-19T01:57:34Z","content_type":null,"content_length":"19964","record_id":"<urn:uuid:9c90a78c-55e9-49ff-a58b-53a53da26ed2>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
The Higgs Boson for not-so-dummies 3 Comments A few months ago I agreed to do a physics talk for the Hamilton Junior Naturalists (Junats). When pushed for a title I decided on the Large Hadron Collider and the Higgs Boson. Hmm. How does one go about explaining the Higgs Boson to an 11 year old? (I’ve got to Friday evening to come up with a decent answer). In fact, how does one explain the Higgs Boson to a physicist (i.e. me)? Particle Physicists, especially the ultra-theoretical ones, speak a language that is almost incomprehensible to everyone else, including other physicists.Think of it as being a being a kiwi trying to listen to a broad Glasgow accent. If I listen carefully to a particle physicist, I can hear words that I recognize, but just what they are trying to get across I can only get an inkling of. Here’s an amusing but not terribly helpful video on minutephysics on why the Higgs is just so important. Did you get that? (If you did, please explain it to me.) What irritates me about this video is the way that maths is used as an excuse for something being the way that it is. "Toss in the ingredients (in this case the Higgs field), let the math machine ferment, and out comes the answer (in this case mass)" Particle physicists, you have got to do better than that. You can’t say that something is the way that it is because the maths says so. No, you’ve created the maths to describe the situation you have. Sure, there can be unexpected solutions that pop out that in fact represent reality, and that gives you confidence that you are on the right track with your mathematical description, but, fundamentally, you have to be describing something PHYSICAL for it to be at all meaningful. Maths would exist quite happily in a universe of complete nothingness – physics, on the other hand, wouldn’t. If you are like me and need a bit more help here, there’s a few more videos to choose from. http://youtu.be/RIg1Vh7uPyw (Fermilab) youtu.be/KPoxewA-URo (Brian Cox’s extended effort). Enjoy. I’ll let you know how Friday night goes.
{"url":"http://sciblogs.co.nz/physics-stop/2012/09/11/the-higgs-boson-for-not-so-dummies/","timestamp":"2014-04-21T02:07:55Z","content_type":null,"content_length":"119432","record_id":"<urn:uuid:7b88c36f-ee9e-44b1-aebe-5f068552e22b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Image Entropy Replies: 0 Image Entropy Posted: Jul 13, 2012 3:57 PM how can I calculate the entropy of individual pixels? the Shannon entropy calculate the entropy of an image using thie E = - sum(p*log(p)); where p is the historgram so if i want to calculate the entropy to every pixel using window of [3 3] to calculate the pixel intropy how this can logicallly and functionally done?
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2390581&messageID=7849435","timestamp":"2014-04-21T00:33:53Z","content_type":null,"content_length":"13748","record_id":"<urn:uuid:d5425cd6-e00c-4681-b6c5-07dec1001718>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Can you tell whether a space is Banach from the unit ball? up vote 14 down vote favorite Let $V$ be a real vector space. It is well known that a subset $B\subset V$ is the unit ball for some norm on $V$ if and only if $B$ satisfies the following conditions: 1. $B$ is convex, i.e. if $v,w\in B$ and $\lambda\in[0,1]$ then $\lambda v+(1-\lambda)w \in B$. 2. $B$ is balanced, i.e. $\lambda B \subset B$ for all $\lambda \in [-1,1]$. 3. $\displaystyle\bigcup_{\lambda > 0} \lambda B = V$ and $\displaystyle\bigcap_{\lambda>0} \lambda B = \{0\}$. My question is: is there some simple way to determine from $B$ whether the resulting norm on $V$ will be complete? Keep in mind that $V$ does not yet have a topology. Edit: I guess the word "simple" is a bit misleading. What I'm looking for is some geometric insight into how the shape of $B$ affects whether the result is a Banach space. When $V$ is finite dimensional, all sets $B$ satisfying conditions (1) - (3) give equivalent norms, so all $B$'s are somehow roughly the same shape. In what way do the shapes vary when $V$ is infinite-dimensional, and how does this affect the completeness of the resulting norm? fa.functional-analysis banach-spaces 1 If the dimension is finite all norms induce the same topology, so that's something. – Adam Hughes Feb 28 '11 at 17:46 1 More directly to the point, if the dimension is finite all norms give $V$ a complete metric structure. An easy necessary condition for completeness is that $B$ satisfies a version of the nested interval property: any nested sequence of translates of dilates of $B$ has a nonempty intersection. – Mark Meckes Feb 28 '11 at 18:46 7 Translating the notion of 'converging sequence' and of 'Cauchy sequence' in terms of $B$ in place of the norm is quite immediate. The resulting formulation of completeness in terms of $B$ is not that different form the usual one. Do you expect a simpler way than that? – Pietro Majer Feb 28 '11 at 19:48 3 Another silly condition is that B must be an algebra for the monad "$\ell^1$" (that sends a set to the unit ball of the $\ell^1$-space on it). Aka, $B$ must be totally convex. (silly for the same reason) – Andrew Stacey Mar 1 '11 at 14:14 4 Your ambient space, Jim, with the product topology is a complete locally convex space. If $B$ is closed in the product topology, the norm induced by $B$ is complete. – Bill Johnson Mar 2 '11 at show 11 more comments 3 Answers active oldest votes A sufficient (additional) condition is that $B$ be compact for some Hausdorff vector topology for $V$. The proof goes as follows. Letting $\langle x_i\rangle$ be a Cauchy sequence, it is contained in some $nB$, and so it has some cluster point $y$ there. Given $\varepsilon>0$, there is $i_0$ such that $x_i-x_j\in\ frac 12\varepsilon B$ for $i,j\ge i_0$ , and it remains to show that $x_{i_0}-y\in\frac 12\varepsilon B$ . Indeed, if this does not hold, we have $y\not\in x_{i_0}-\frac 12\varepsilon B$ . Since $B$ is compact, it is closed in the Hausdorff case, and so is $x_{i_0}-\frac 12\varepsilon B$ as we have a vector topology. By the cluster point property, there must be some $i\ge i_0$ with $x_i\not\in x_{i_0}-\frac 12\varepsilon B$ , which is impossible. Edit. Actually, I originally had in mind a more general condition, but I couldn't correctly recall it when writing the answer. Namely, a sufficient condition is that there be a Hausdorff topological vector space $E$ and there an absolutely convex compact set $C$ and a linear map $\ell:V\to E$ with $B=\ell^{-1}[C]$ , and such that we have $y\in{\rm rng\ }\ell$ whenever $y\ in E$ is such that $(y+\varepsilon C)\cap({\rm rng\ }\ell)\not=\emptyset$ for all $\varepsilon>0$ . The proof is essentially the same as the one given above with $C$ in place of $B$ . up vote 4 This more general condition applies for example in the case where $B$ is the closed unit ball of $C^k([0,1])$ since we can take $E=({\mathbb R}^{[0,1]})^{k+1}$ and $C=([-1,1]^{[0,1]})^ down vote {k+1}$ and $\ell$ given by $y\mapsto\langle y,y',y'',\ldots y^{(k)}\rangle$ . II Edit. The sufficient condition I gave above is of "extrinsic nature", and as such probably not in the spirit requested in the original question. An "intrinsic" condition, which is (probably) "simple", and in the line already suggested above in the first answer and in the comments, is that for any sequence $\langle x_i:i\in\mathbb N_0\rangle$ in $V$ satisfying $\ lbrace 2^{i+2}(x_i-x_{i+1}):i\in\mathbb N_0\rbrace\subseteq B$ , there be $x\in V$ with $\lbrace 2^i(x_i-x):i\in\mathbb N_0\rbrace\subseteq B$ . However, obviously this is not very practical to be verified in concrete situations. Basing on my experience and intuition, I would generally say that "extrinsic" conditions probably are more convenient than "intrinsic" ones. So, I think the question is good, but the restriction put there on the direction for searching for the answer is wrong. In practice, when one constructs (prospective new) Banach spaces, there is often some surrounding "larger" topological vector space where the new spaces will be continuously injected. In view of this, it is natural to look for extrinsic conditions. 1 Actually this condition is necessary and sufficient for $B$ to be the unit ball of a space that is isometrically isomorphic to a dual Banach space. This is another exercise in books, IIRC. – Bill Johnson Mar 1 '11 at 17:20 It is very reasonable to argue that the restrictive nature of the question is wrong. Indeed, the point of view that I'm taking is very different from the usual perspective on such things. This is intentional -- I'm curious what can be said about completeness in this context, without any of the usual tools available. It's certainly possible that nothing very interesting can be said, in which case the question is not successful. – Jim Belk Mar 2 '11 at 21:14 Then I have no idea what you are looking for, Jim. You should give some motivation for your question and explain what type of condition you want. From my perspective, from what you have written, what you have asked has already been over answered. – Bill Johnson Mar 3 '11 at 0:53 That's fair. I'll accept this answer and move on. Thanks everybody! – Jim Belk Mar 3 '11 at 4:44 add comment I don't know if this counts as "simple". But $V$ is Banach if and only if, whenever $(x_n)$ is a sequence in $B$, and $\sum_n \|x_n\|<1$, then $\sum_n x_n$ converges in $V$ (and necessarily to something in $B$). Now, you can phrase this convergence purely in terms of $B$. You need that there is $x\in B$ such that, for all $\epsilon>0$, there exists $N$ such that $\epsilon^{-1}(x - \sum_{n=1}^N x_n) \in B$. up vote 11 down vote That doesn't seem super-simple to me. 1 Edit: While I typed this, Pietro make a comment. Of course, all I've done is actually carry out Pietro's comment more explicitly... – Matthew Daws Feb 28 '11 at 19:50 Thank you, I was wondering if I had to expand the comment, with the same remark ;-) Also note that for the completeness of a normed space it it sufficient the convergence of all geometric series, that is with $|x_n|\leq 2^{-n}$ – Pietro Majer Feb 28 '11 at 20:00 3 Take $B$ to be the closed unit ball of the Minkowski functional for $B$. The usual way of checking that the normed space that has $B$ as the unit ball is complete is to verify that $B$ is closed in some Banach space that contains $B$ s.t the unit ball of the Banach space contains $B$. You can find this as an exercise in some books (not that I recall which ones). – Bill Johnson Feb 28 '11 at 21:00 Bill, I did wonder a bit whether the question was just a homework problem. – Deane Yang Mar 1 '11 at 4:19 add comment Unit balls with precisely the property that you are looking for have been studied under the rather awkward name of completant (presumably directly from the French) in the book on applications of bornologies to functional analysis by Hogbe-Nlend. I think that the only result of any substance that you will find is a variant of Grothendieck's completeness theorem which can be found there. One assumes that the ball is a closed bounded set in an ambient topological vector space which is complete. This, amongst others, provides what is probably the simplest and most transparent proof of the completeness of the $ \ell^p$ and $L^p $-spaces. up vote 2 down By the way the class of spaces of Bill Johnson's answer has also been investigated. They were introduced by Waelbroeck and called Waelbroeck spaces by Buchwalter. They form a concrete vote representation of the category opposite to that of Banach spaces---see Cigler-Losert-Michor on functors on categories of Banach spaces (available onine). A good example of their use is in the characterisation of von Neumann algebas as $ C^*$ algebras which, as Banach spaces, are Waelbroeck. This givea a useful pointer on how to form limits in the category of von Neumann Call me grumpy, but I can't help feeling Sakai's name should be mentioned in the last para, even if the words "dual Banach space" are for some reason being avoided. (And yes, I have read that section and others of Cigler-Losert-Michor) – Yemon Choi Oct 23 '12 at 13:39 tried to remove this answer since it seems to have caused offence but apparently don't have the power. perhaps somebody who does could do me the favour. – jbc Oct 23 '12 at 18:14 I wouldn't say offence; just a certain irritation with the style of the last para. BTW, I am not sure it makes to say that some object is "Waelbroeck, as a Banach space". It is a rather subtle consequence of Sakai's theorem/proof that in the isometric sense, there is only one way for a von Neumann algebra to be Waelbroeck: that is, the underlying Banach space of a $W^\ ast$-algebra has a unique isometric predual. – Yemon Choi Oct 29 '12 at 21:50 add comment Not the answer you're looking for? Browse other questions tagged fa.functional-analysis banach-spaces or ask your own question.
{"url":"http://mathoverflow.net/questions/56912/can-you-tell-whether-a-space-is-banach-from-the-unit-ball?sort=newest","timestamp":"2014-04-20T18:54:12Z","content_type":null,"content_length":"82259","record_id":"<urn:uuid:3f46a51d-100f-4535-9301-7836555034ac>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Almonesson Math Tutor Find an Almonesson Math Tutor ...Please note that I only tutor college students, advanced high school students, returning adult students, and those studying for standardized tests such as SAT, GRE, and professional licensure exams.I have nearly completed a PhD in math with a heavy emphasis in computer algebra. I am a world-reno... 11 Subjects: including differential equations, logic, calculus, precalculus ...I am a Board Certified Behavior Analyst, and a certified Special Education Teacher, proficient in remediating both Reading and Math delays. For the past 14 years I have worked as a Special Education teacher in the School District of Philadelphia. I am a Board Certified Behavior Analyst and for ... 31 Subjects: including prealgebra, algebra 1, English, reading ...I helped my brother as well as my mother through their college math courses while I was still in high school. I am currently enrolled in Calculus 2 at Rowan University, which allows me to use Algebra every day, so I am very current in the subject. I have also volunteered to tutor children at my local Library. 4 Subjects: including linear algebra, algebra 1, algebra 2, prealgebra ...I consider one of the most important elements of science to be researching the correct answer. I have a strong background in research, as is necessary for any advanced science degree. I was a competitive swimmer for five years, with experience in backstroke, breaststroke, butterfly and freestyle. 20 Subjects: including statistics, algebra 1, algebra 2, biology ...Before tutoring for WyzAnt, I tutored about 150 hours of math, science, and engineering before, but it was all volunteer and unpaid. I also currently work as an academic and SAT tutor for StudyPoint. When I was in college, I was the editor of a biweekly magazine for two years: because I went to an engineering school, many of my writers didn't have some of the basics down. 25 Subjects: including algebra 2, geometry, algebra 1, differential equations
{"url":"http://www.purplemath.com/Almonesson_Math_tutors.php","timestamp":"2014-04-18T05:57:57Z","content_type":null,"content_length":"24049","record_id":"<urn:uuid:e6e58a49-39ca-4631-b626-e0df2cdcd3b6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
How does interacting Lagrangian have form of product of fields? [...] I dont understand how to know the form of interacting Lagrangian has form of product of fields(example Lagrangian of Fermi field interacting with electromagnetic field). It just comes from quantizing the classical theory. It seem that following Haag's theorem there not exist quantized equation of motion for interacting fields. That's not quite what Haag's theorem says. The free representation and the interacting representation are both constructed to be Poincare representations, but Haag's theorem basically means you can't (rigorously, nonperturbatively) express the latter in terms of the former. Getting around the manifestations of this is one of the reasons for
{"url":"http://www.physicsforums.com/showthread.php?t=451697","timestamp":"2014-04-18T15:42:22Z","content_type":null,"content_length":"29674","record_id":"<urn:uuid:032cfec1-cc13-4fbb-9962-e318e612a53f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Solve this linear inequality (with steps please): (x-3)(x+2) < 0 Best Response You've already chosen the best response. ans x=3 and x=-2 Best Response You've already chosen the best response. @krypton, this is inequality , your answer was just the value that will make the equation become zero. then you must evaluate it (above 3, between -2 and 3, below -2) and find which one cause negative number.like this |dw:1322052738890:dw| so the answer is x<3 Best Response You've already chosen the best response. Best Response You've already chosen the best response. I think the correct answer would be -2<x<+3. Are you agree kevinfrankly? Best Response You've already chosen the best response. oops sorry you're right paulom .. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4ecc7f12e4b04e045aecd311","timestamp":"2014-04-19T02:02:34Z","content_type":null,"content_length":"107818","record_id":"<urn:uuid:cd9725c7-6e81-4147-90d6-50d7865038c4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Oblique Asymptote July 13th 2007, 11:02 AM Finding Oblique Asymptote Hi all, I'm currently having the hardest time understanding how to find the Oblique Asymptote. An example is x^2 -1 / x + 3. I know I have to use long division method to figure this out but I don't even know how to start on such a simple one as this. Can someone give me a detailed method of doing this? Thanks alot~ July 13th 2007, 11:18 AM If you long divide you get: Therefore, f(x) approaches the line y=ax+b as x increases or decreases without bound. Hence, your asymptote is $y=x-3$ July 13th 2007, 11:20 AM If you long divide you get: Therefore, f(x) approaches the line y=ax+b as x increases or decreases without bound. Hence, your asymptote is $y=x-3$ Thanks but I was wondering how to do the division part step by step :confused: July 13th 2007, 11:27 AM Oh. I would suggest getting an algebra book and practicing. You really should be able to do that at this level. But here goes this one: x -3 x+3|x^2 -1 So, you have: $\frac{8}{x+3}+x-3$ It works just like other division.
{"url":"http://mathhelpforum.com/pre-calculus/16825-finding-oblique-asymptote-print.html","timestamp":"2014-04-21T12:23:58Z","content_type":null,"content_length":"7546","record_id":"<urn:uuid:4b155939-3c83-4ae1-877f-a52444fcf2e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
[a]=[b] <--> a~b August 16th 2012, 02:20 PM #1 Jul 2010 [a]=[b] <--> a~b Thm: [a]=[b] <--> a~b I have proved: [a]=[b] --> a~b but cannot figure out how to prove: [a]=[b] <-- a~b I don't know if I can just write the steps backwards since I don't see how I would know: (4) --> (5) This fact was stated without proof in a youtube playlist on equivalence relations: Last edited by lamp23; August 16th 2012 at 02:22 PM. Re: [a]=[b] <--> a~b Re: [a]=[b] <--> a~b I think (3) and (4) need an existencial quantifier for the variable x. Otherwise, seems fine. Re: [a]=[b] <--> a~b Re: [a]=[b] <--> a~b Re: [a]=[b] <--> a~b Yes. But isn't that a different proof? I mean if we want to make the proof in the OP correct, isn't there a need of a quantifier in order for a (an element, ultimately, a set) to be in relation with another element (which x is not, because it's a free variable)? I'm unsure. Re: [a]=[b] <--> a~b yes, something along the lines of: let x be in [a] = [b]. we know such an x exists because both [a] and [b] are non-empty (because ~ is an equivalence, so ~ is reflexive. so at the very least, a is in [a], and b is in [b] (a and b may actually be the same, but perhaps not)). but it's not necessary to introduce x at all. as Plato points out, we have: a in [a] = [b], so a in [b], so b~a, and by symmetry, a~b. doing so eliminates the need to use transitivity to show [a] = [b] → a~b (we we *do* need transitivity to show the reverse implication, August 16th 2012, 02:48 PM #2 August 16th 2012, 02:51 PM #3 August 16th 2012, 03:00 PM #4 August 16th 2012, 03:53 PM #5 Jul 2010 August 16th 2012, 06:01 PM #6 August 16th 2012, 07:11 PM #7 MHF Contributor Mar 2011
{"url":"http://mathhelpforum.com/discrete-math/202246-b-b.html","timestamp":"2014-04-19T05:06:59Z","content_type":null,"content_length":"50125","record_id":"<urn:uuid:be94d714-c6b9-43db-93aa-af309ed950da>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Kuranishi deformation theory Kuranishi deformation theory deformation theory of compact complex manifolds. • K.Kodaira, Complex manifolds and deformation of complex structures Translated from the 1981 Japanese original by Kazuo Akao. Reprint of the 1986 English edition. Classics in Mathematics. Springer-Verlag, Berlin, 2005. ISBN 3-540-22614-1 (very detailed, exposition unfortunately rather disorientating) • Masatake Kuranishi, Deformations of compact complex manifolds Séminaire de Mathématiques Supérieures, No. 39 (Été 1969). Les Presses de l’Université de Montral, Montreal, Que., 1971. Created on June 15, 2012 12:39:30 by Urs Schreiber
{"url":"http://ncatlab.org/nlab/show/Kuranishi+deformation+theory","timestamp":"2014-04-19T04:57:47Z","content_type":null,"content_length":"11823","record_id":"<urn:uuid:553fda94-7f5a-4b9c-bb0f-76857e488f6d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] determining a rectangle's aspect ratio March 14th 2009, 04:34 PM #1 Mar 2009 [SOLVED] determining a rectangle's aspect ratio NOTE: I reposted this in Advanced Geometry before realizing there was a "Go Advanced" button here for doing that. I'll leave this open here for now but I think it's a more difficult problem than I first thought... It's been too long since I've had to figure something like this out. I'm hoping someone will be kind enough to help. I'm attaching a picture of what I'm working on as it will make it easier to describe. (I'll leave the image up until someone's given me an answer). I'm looking for the width (line dc) of this rectangle when the height (line de) is 5 inches (and for the formula needed to figure it out). Note that the blue line (da) and red line (ac) are equal in length and that angle abc is 90 degrees. Also note that line ec is a diagonal of the rectangle from the lower left corner to the upper right corner and line af runs from the middle of line dc to the lower right corner (f). Just from experimenting a bit I know that line dc is about 7.1 inches. I'd like the formula for getting the exact length. Thank you in advance. I didn't think this would be that hard to figure out but I don't really know how to even approach solving it. Last edited by paul6; March 14th 2009 at 06:40 PM. Reason: to explain that it's been reposted to the advanced forum March 14th 2009, 09:34 PM #2 Mar 2009
{"url":"http://mathhelpforum.com/geometry/78697-solved-determining-rectangle-s-aspect-ratio.html","timestamp":"2014-04-20T20:12:11Z","content_type":null,"content_length":"32823","record_id":"<urn:uuid:94cf85f0-0b83-4e83-ace4-234efdd4d3b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
Angle Pairs 6.1: Angle Pairs Created by: CK-12 House building and Tipis In Mrs. Patterson’s World Cultures class the students have just started studying house building. Mrs. Patterson explained that house building doesn’t just pertain to the kinds of houses that we live in, but to houses around the world both past and present. The students are going to work on a project on a specific type of house. Jaime is very excited. She has always been interested in Native Americans, so she has chosen to work on a tipi. Jaime selected one of the books that Mrs. Patterson brought in and began to leaf through the pages looking at all of the different types of tipis constructed. “Tipis?” Mrs. Patterson asked looking over Jaime’s shoulder. “Yes, I want to design and build one for part of my project,” Jaime explained. “That’s wonderful. You will need to use a lot of math to accomplish that too,” Mrs. Patterson stated. Jaime hadn’t thought about the math involved in building a tipi. But as she looked through the pages on designing and building tipis, she noticed that there were a lot of notes on different angles. One of the types of angles mentioned was a complementary pair of angles. Another was a supplementary pair of angles. These angles were important to figure out when stitching the liner of the tipi Jaime is puzzled. She can’t remember how to identify a complementary or a supplementary pair of angles. This lesson is all about angle pairs and relationships. Pay close attention to the information in the lesson and at the end you will be able to help Jaime to identify these angle pairs. What You Will Learn In this lesson you will learn how to understand the following skills. • Identify angle pairs as complementary, supplementary or neither. • Identify adjacent and vertical angles formed by intersecting lines. • Identify intersecting, parallel or perpendicular lines in a plane. • Find measures of angle pairs using known relationships and sufficient given information. Teaching Time I. Identify Angle Pairs as Complementary, Supplementary or Neither In this lesson, we will begin to examine angles that are formed by different types of lines. An angle is measurement of the space created when lines intersect. Here is an example of the angles formed when two lines intersect. You can see that there are four angles created in this drawing and that they are labeled 1 – 4. We have reviewed some lines and that angles are created when lines intersect. Sometimes, the way that the lines intersect can create an angle pair. This is when two special angles are formed and these angles have a special relationship. Let’s look at some angle pairs. The two basic forms of angle pairs are called complementary and supplementary angles. Complementary angles are two angles whose measurements add up to exactly $90^{\circ}$ In other words, when we put them together they make a right angle. Below are some pairs of complementary angles. Supplementary angles are two angles whose measurements add up to exactly $180^{\circ}$ When we put them together, they form a straight angle. A straight angle is a line. Take a look at the pairs of supplementary angles below. Once you know how to identify the angle pairs, you will be able to classify angle pairs as supplementary, complementary or neither. Let’s look at an example. Classify the following pairs of angles as either complementary or supplementary. Now let’s look at how we can identify the angle pairs. First, look at the first pair of angles labeled $a$ We can see that the measure of the angles in this pair is 30 and 60 degrees. We know that the sum of complementary angles is $90^{\circ}$ Now look at the second angle pair labeled $b$ We can see that the measure of the angles in this pair is 110 and 70. The sum of these two angles is $180^{\circ}$ Note: The word “supplementary” or “complementary” refers to the relationship between the two angles. Sometimes, a pair of angles will be neither complementary nor supplementary. Let’s look at an example. The sum of these angles is $70^{\circ}$ Go back and write all of the vocabulary words from this section in your notebooks. Draw a small example of each word next to its definition. II. Identify Adjacent and Vertical Angles Formed by Intersecting Lines When lines intersect, they create special relationships between the angles that they form. Once we understand these relationships, we can use them to find the measure of angles formed by the intersecting lines. Adjacent angles are angles that share the same vertex and one common side. If they combine to make a straight line, adjacent angles must add up to $180^{\circ}$ The word “adjacent” means “next to” that can help you to remember adjacent angles. Below, angles 1 and 2 are adjacent. Angles 3 and 4 are also adjacent. Can you see that angles 1 and 2, whatever their measurements are, will add up to $180^{\circ}$Because the adjacent angle pairs form lines, we can also say that they are supplementary. They must add up to $180^{\circ}$ We can find the sums in this way. $\angle{1} + \angle{2} = 180^{\circ}\\\angle{3} + \angle{4} = 180^{\circ}\\$ As you work through this lesson, you will find that some information leads you to other information. Here is the first example of that. Because adjacent angles form a straight line, they are also supplementary. The sum of the angles is $180^{\circ}$ Notice that when there are two angles next to each other, there are also two angles diagonally across from each other. These are called vertical angles. Vertical angles are angles that are diagonally across from each other and have the same measure. These relationships always exist whenever any two lines intersect. Look carefully at the figures below. Understanding the four angles formed by intersecting lines is a very important concept in Identify the vertical angles and the adjacent angles in the diagram below. First, think back to the definition of adjacent and vertical angles. Adjacent angles form a straight line. They are supplementary angles. We can see from the diagram that angles 1 and 3 are adjacent. Angles 2 and 4 are also adjacent. Vertical angles are diagonal from each other and have the same measure. In this case, angles 1 and 4 are vertical. Angles 2 and 3 are also vertical angles. You will use this information again when problem solving, but now, let’s look at types of lines. III. Identify Intersecting, Parallel and Perpendicular Lines in a Plane In other math classes, you learned about different types of lines. Lines exist in space. Two lines intersect when they cross each other. Because all lines are straight, intersecting lines can only cross each other once. There are parallel lines, intersecting lines and perpendicular lines. Let’s start by briefly reviewing these terms and then we can look at the angles formed when these lines intersect. Types of Lines Parallel Lines are lines that are an equal distance apart. This means that these lines will never intersect. Intersecting lines are lines that cross at some point. Perpendicular lines are lines that intersect at a $90^{\circ}$. Keep all of this information in mind as we now apply what we have learned to problem solving. IV. Find Measures of Angle Pairs using Known Relationships and Sufficient Given Information Now that you have learned some information about angle pairs and their relationships, you can use what you have learned to find the measures of missing angles. Let’s review what we have just learned. • Supplementary angles are two angles that form a straight line, and their sum is always $180^{\circ}$$90^{\circ}$ • Adjacent angles are next to each other. When they form a line, their sum is $180^{\circ}$ • Vertical angles are directly opposite each other. They are equal. Here is our first example. Fill in the figure below with the angle measures for all of the angles shown. First, notice that we only have one angle to go on. This angle measures 70 degrees. However, that is enough information to figure out all of the other angles in this diagram. We can use the information that we know about angles to figure the measures of these angles out. Let’s begin with adjacent angles. Angle $b$$180^{\circ}$ We can write this equation. $180 = 70 + b$ We know that $b$$110^{\circ}$ Next, we can work on the vertical angles. Angle $c$$b$$c$$110^{\circ}$ Angle $a$$70^{\circ}$$70^{\circ}$ Using our known information, we have figured out the measures of all of the missing angles. Now let’s take what we have learned and apply it to the problem from the introduction. Real-Life Example Completed House building and Tipis Here is the original problem once again. Reread it and then write a definition to describe both complementary and supplementary angles. There are two parts to your answer. In Mrs. Patterson’s World Cultures class the students have just started studying house building. Mrs. Patterson explained that house building doesn’t just pertain to the kinds of houses that we live in, but to houses around the world both past and present. The students are going to work on a project on a specific type of house. Jaime is very excited. She has always been interested in Native Americans, so she has chosen to work on a tipi. Jaime selected one of the books that Mrs. Patterson brought in and began to leaf through the pages looking at all of the different types of tipis constructed. “Tipis?” Mrs. Patterson asked looking over Jaime’s shoulder. “Yes, I want to design and build one for part of my project,” Jaime explained. “That’s wonderful. You will need to use a lot of math to accomplish that too,” Mrs. Patterson stated. Jaime hadn’t thought about the math involved in building a tipi. But as she looked through the pages on designing and building tipis, she noticed that there were a lot of notes on different angles. One of the types of angles mentioned was a complementary pair of angles. Another was a supplementary pair of angles. These angles were important to figure out when stitching the liner of the tipi Jaime is puzzled. She can’t remember how to identify a complementary or a supplementary pair of angles. Remember there are two parts to your answer. Solution to Real – Life Example Jaime needs to understand the difference between complementary and supplementary angle pairs. First, notice that the word “pair” refers to two, so we are talking about two angles. Here are the definitions. Complementary Angles – are two angles whose sum is $90^{\circ}$ Supplementary angles – are two angles whose sum is $180^{\circ}$ Complementary angles form a right angle and supplementary angles form a straight line. Here are the vocabulary words that are found in this lesson. Parallel lines lines that are an equal distance apart and will never intersect. Intersecting lines lines that cross at one point. Perpendicular lines lines that intersect at a $90^{\circ}$$90^{\circ}$ the measure of the space formed by two intersecting lines. Straight angle is a straight line equal to $180^{\circ}$ Angle Pairs the relationship formed by two angles. Complementary Angles two angles whose sum is $90^{\circ}$ Supplementary Angles two angles whose sum is $180^{\circ}$ Adjacent Angles angles that are next to each other and whose sum is $180^{\circ}$ Vertical Angles angles that are diagonally across from each other and whose sum is $90^{\circ}$ Time to Practice Directions: Write the definitions for the following types of lines. 1. Parallel lines 2. Intersecting lines 3. Perpendicular lines Directions: Answer the following questions about different types of lines. 4. What is the symbol for parallel lines? 5. What is the symbol for perpendicular lines? 6. An intersection on a highway is an example of what type of lines? 7. A four way stop is an example of what type of lines? 8. Is it possible for intersecting lines to also be considered parallel or perpendicular? Directions: If the following angle pairs are complementary, then what is the measure of the missing angle? 9. $\angle{A}&=55^{\circ}\\\angle{B}&= ?$ 10. $\angle{C}&=33^{\circ}\\\angle{D}&= ?$ 11. $\angle{E}&=83^{\circ}\\\angle{F}&= ?$ 12. $\angle{G}&=73^{\circ}\\\angle{H}&= ?$ Directions: If the following angle pairs are supplementary, then what is the measure of the missing angle? 13. $\angle{A}&=10^{\circ}\\\angle{B}&= ?$ 14. $\angle{A}&=80^{\circ}\\\angle{B}&= ?$ 15. $\angle{C}&=30^{\circ}\\\angle{F}&= ?$ 16. $\angle{D}&=15^{\circ}\\\angle{E}&= ?$ 17. $\angle{M}&=112^{\circ}\\\angle{N}&= ?$ 18. $\angle{O}&=2^{\circ}\\\angle{P}&= ?$ Directions: Define the following types of angle pairs. 19. Vertical angles 20. Adjacent angles 21. Corresponding angles 22. Interior angles 23. Exterior angles Files can only be attached to the latest version of None
{"url":"http://www.ck12.org/book/CK-12-Middle-School-Math-Grade-8/r1/section/6.1/","timestamp":"2014-04-17T05:12:22Z","content_type":null,"content_length":"138154","record_id":"<urn:uuid:b459a178-0c1a-438f-a5f4-cad43f5d2855>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
3. Computing the Time Value of Money. Using time value of money tables, calculate the following. a. The future value of $450 six years from now at 7 percent. b. The future value of $800 saved each year for 10 years at 8 percent. c. The amount a person would have to deposit today (present value) at a 6 percent interest rate to have $1,000 five years from now. d. The amount a person would have to deposit today to be able to take out $500 a year for 10 years from an account earning 8 percent. 3. Computing the Time Value of Money. Using time value of money tables, calculate the following. a. The future value of $450 six years from now at 7 percent. b. The future value of $800 saved each year for 10 years at 8 percent. c. The amount a person would have to deposit today (present value) at a 6 percent interest rate to have $1,000 five years from now. d. The amount a person would have deposit today to be able to take out $500 a year for 10 years from an account earning 8 percent. The answer is a. The future value of $450 six years from now at 7 percent. Expert answered| |Points 1674| Asked 4/22/2012 12:52:27 AM Updated 4/17/2013 4:55:36 PM 1 Answer/Comment Not a good answer? Get an answer now. (FREE)
{"url":"http://www.weegy.com/?ConversationId=E1453124","timestamp":"2014-04-18T16:30:30Z","content_type":null,"content_length":"33525","record_id":"<urn:uuid:73341fb1-7595-4856-9948-03a59f81d4c1>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Ingleside, IL Algebra 2 Tutor Find a Ingleside, IL Algebra 2 Tutor ...I have had numerous students tell me that I should teach math, as they really enjoy my step-by-step, simple breakdown method. I have helped a lot of people conquer their fear of math. Before I knew I was going to teach French, I was originally going to become a math teacher. 16 Subjects: including algebra 2, English, chemistry, French ...In fact, one of my ACT students raised her Math score from 25 to a 31 in two and a half months!! I thoroughly enjoy teaching math and working with young people of all ages!I taught freshman and sophomore calculus and differential equations at a major engineering college. It was a comprehensive two year curriculum. My approach is to try different examples to explain the material. 18 Subjects: including algebra 2, physics, calculus, geometry ...As a tutor, I hope to share my love of Latin and Greek with others! --------------------- In addition to tutoring Latin and Greek, I also help students brainstorm, edit, and proofread essays and term papers. I am well-versed in MLA, APA, and Chicago style and citation conventions. Additionall... 20 Subjects: including algebra 2, reading, writing, English ...The study of randomness is at the heart of probability. While individual occurrences for the outcome of a random event (e.g. the toss of a fair coin) are impossible to predict in advance, if repeated many times the sequence of outcomes will exhibit patterns. In probability, formulas are devised... 67 Subjects: including algebra 2, chemistry, Spanish, English ...I have taken up to calculus 3, and looking forward to learn linear algebra upcoming semester. My highest ACT Math score was 35 and the lowest was 30. I am changing my major to secondary education in mathematics because I love helping others. 6 Subjects: including algebra 2, geometry, algebra 1, precalculus Related Ingleside, IL Tutors Ingleside, IL Accounting Tutors Ingleside, IL ACT Tutors Ingleside, IL Algebra Tutors Ingleside, IL Algebra 2 Tutors Ingleside, IL Calculus Tutors Ingleside, IL Geometry Tutors Ingleside, IL Math Tutors Ingleside, IL Prealgebra Tutors Ingleside, IL Precalculus Tutors Ingleside, IL SAT Tutors Ingleside, IL SAT Math Tutors Ingleside, IL Science Tutors Ingleside, IL Statistics Tutors Ingleside, IL Trigonometry Tutors Nearby Cities With algebra 2 Tutor Bassett, WI algebra 2 Tutors Benet Lake algebra 2 Tutors Camp Lake algebra 2 Tutors Fox Lake Hills, IL algebra 2 Tutors Fox Lake, IL algebra 2 Tutors Indian Creek, IL algebra 2 Tutors Long Lake, IL algebra 2 Tutors Mccullom Lake, IL algebra 2 Tutors Pell Lake algebra 2 Tutors Powers Lake, WI algebra 2 Tutors Ringwood, IL algebra 2 Tutors Round Lake Heights, IL algebra 2 Tutors Round Lake, IL algebra 2 Tutors Stanton Point, IL algebra 2 Tutors Wilmot, WI algebra 2 Tutors
{"url":"http://www.purplemath.com/Ingleside_IL_Algebra_2_tutors.php","timestamp":"2014-04-18T06:17:26Z","content_type":null,"content_length":"24340","record_id":"<urn:uuid:f433ecd6-6948-445d-8d30-022d4786b847>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Equations with parentheses: Word problem skills An equation has only one equal (=) sign. I don't know what the = $365 means. Are you sure you copied the equation correctly? I'll attempt the solution by ignoring the $365. $x+(x-25)=2(x-25)$ Firse, since a + precedes the group on the left side of the equation, simply remove the parentheses. Nothing changes. $x+x-25=2(x-25)$ Distribute (multiply) the 2 across the difference (x-25) on the right side. $x+x-25=2x-50$ Combine terms on the left side. $2x-25=2x-50$ Subtract 2x from both sides. $2x-25-2x=2x-50-2x$ $-25=-50$ This is a false statement. Therefore the equation has no solution. I feel like there's an error in the way you presented your problem. Check it and re-post if necessary.
{"url":"http://mathhelpforum.com/math-topics/52415-equations-parentheses-word-problem-skills.html","timestamp":"2014-04-16T05:13:07Z","content_type":null,"content_length":"35429","record_id":"<urn:uuid:d58fdc4e-5bc1-417f-8926-0b00c6231a3e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Revere, MA Statistics Tutor Find a Revere, MA Statistics Tutor ...My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English. 16 Subjects: including statistics, French, elementary math, algebra 1 ...The characteristics, distributions, and complexity of Earth's cultural mosaics. 11. The patterns and networks of economic interdependence on Earth's surface. 12. The process, patterns, and functions of human settlement. 13. 44 Subjects: including statistics, chemistry, physics, writing ...Seasonally I work with students on SAT preparation, which I love and excel at. I have worked successfully with students of all abilities, from Honors to Summer School. I work in Acton and Concord and surrounding towns, (Stow, Boxborough, Harvard, Sudbury, Maynard, Littleton) and along the Route 2 corridor, including Harvard, Lancaster, Ayer, Leominster, Fitchburg, Gardner. 15 Subjects: including statistics, physics, calculus, geometry ...Elementary education tutoring in math and English is also available (reading comprehension, writing, vocabulary, grammar, prealgebra, etc.). Please note that I require a cancellation notice of 24 hours for all lessons. If you fail to cancel at least 24 hours prior to the lesson, you will be cha... 67 Subjects: including statistics, reading, English, calculus ...I have helped students from 6th grade through college grow in confidence as well as achieve greater success in the classroom. Further, I am a graduate of Boston College and licensed teacher. I have experience working with students in numerous subjects as well as in standardized test preparation. 64 Subjects: including statistics, English, reading, biology
{"url":"http://www.purplemath.com/Revere_MA_Statistics_tutors.php","timestamp":"2014-04-17T15:50:58Z","content_type":null,"content_length":"24059","record_id":"<urn:uuid:97212a76-f467-4e3f-b10a-8b4d45ea94d9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Whitespace in Idris As a weekend hack, I’ve been implementing the Whitespace programming language, in Idris. You can find it on github. Whitespace, behind the syntax, is a stack based language with unbounded integers and a single global heap, and control flow implemented using labels, conditional jumps, and subroutine calls. There’s no types (just integers for everything) So what do we gain by implementing it in a language with dependent types? It turns out there’s some interesting things we can do, and problems we have to solve: • Various operations are only valid if the stack is in the right form (e.g. arithmetic requires at least two items at the top of the stack, and leaves at least one item). We can often statically check that the stack is in the right form before executing such instructions, and at least statically check that the necessary dynamic checks are carried out. • We need to consider how to represent the stack – a Stack n is the type of stacks with at least n elements. • There’s a lot of bounds checking of stack references, and as much of this as possible is made static. • A jump is only valid if the given label is defined. We can statically check this, too, and guarantee that a program we execute has well-defined labels. • Although whitespace is Turing complete, we can still use the totality checker to verify that executing individual instructions terminates. If nothing else, we get to see how to use dependent types to make assumptions and invariants explicit in the code. Even if those invariants aren’t used to guarantee total correctness, we at least get the type checker to tell us where we’ve violated them. In fact, I avoided a few errors this way and found a couple of missing or badly specified parts of the whitespace tutorial… Perhaps I’ll write more on this at some point…
{"url":"http://edwinb.wordpress.com/2012/11/25/whitespace-in-idris/","timestamp":"2014-04-18T20:42:59Z","content_type":null,"content_length":"28141","record_id":"<urn:uuid:c4975005-1ccd-417e-8a6d-10559ef0f2b2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Frozen noise generation ("Susan E. Hall" ) Subject: Re: Frozen noise generation From: "Susan E. Hall" <susanhal(at)DAL.CA> Date: Fri, 13 Dec 2002 14:50:27 -0400 Enrique wrote: >We wish to generate two ".wav" files, each containing a burst of frozen >noise (N1 and N2) of the same duration (say 200 ms) but with different >frequency content. Let's say that N1 has frequency components from 10 Hz >to 3000 Hz, and N2 from 3001 Hz to 4000 Hz. Most importanly, we wish that >the SPL level for all frequencies be equal within each noise and equal for >the two noises. >So far, we have attempted to generate the noise by adding a large number of >equal-amplitude, random phase tones (phases are uniformly distributed >between 0 and 2pi). We then normalize each of the noises to their maximum >amplitudes to produce a wav file. The result is that the spectrum of each >noise is flat over their respective frequencies. However, the levels are >different for the two noises (remember we wish for them to be equal). It >is lower for the noise with the largest number of frequency >components. This is probably the result of the normalization prior to >producing the wav file. >Is there a clever (analytical) solution to our problem, or do we have to >adjust the level of noises manually to make them equal? I'm taking a stab at answering this question as much for the self-tutorial value it provides me as anything else. I hope anyone would let me know if I have made any errors. Also, Enrique, if I have oversimplified, it doesn't imply that I think you are unaware of the basics, it is there just to help make my argument coherent to myself. Let me make sure I understand your procedure first. You add equal-amplitude random-phase sine components, 200ms in length, over the two ranges 10-3000 and 3001-4000 Hz. Am I right in assuming that you are using equal linear frequency spacing of the sine components in the two noises, and therefore summing more components for your low freq noise? If so, then at this point, by *definition*, the spectrum levels of the two noises are the same, because that's how you made them. The wider bandwidth noise would at this point have a greater *waveform amplitude*, since it is the sum of more spectral components. You then proceed to normalize each waveform to max amplitude. The noise waveform with the fewer spectral components had the lower waveform amplitude, so it increased more in the normalization process. So as you said, the noise with the *more* components now *must* have a lower spectrum level than the other one. You *can't* have both equal spectrum level and equal amplitude waveforms, *unless* the linear bandwidth of the two noises is equal. If you want unequal bandwidth noises at equivalent spectrum level, their waveforms amplitudes will have to be unequal. Note that you won't be able to get around this in the construction phase, by simply using the *same* number of components in the construction of both noises (eg, by by using a wider frequency spacing for the components summed to create the wider bandwidth noise). The result of this procedure *would* be two waveforms of equal amplitude, but if you were to calculate their spectrum levels, you would find that they would differ. This is because they don't *really* contain the same number of components. Yes, you made them with the same number of components, but actually, in calculating the spectrum of a noise, the components for the purpose of the fft calculation become once again equally-spaced at fs/N (sampling frequency divided by N points in the fft). Since you'd be using the same fs and N in getting the spectrum of the two noises to compare them, you'd once again find that the spectrum level of the narrower bandwidth noise would be higher: That is, for two waveforms of equal amplitude, the one having a narrower bandwidth would have a higher spectrum level. This message came from the mail archive
{"url":"http://www.auditory.org/postings/2002/570.html","timestamp":"2014-04-18T01:51:53Z","content_type":null,"content_length":"4793","record_id":"<urn:uuid:f7d4978e-11d1-4eab-bc8c-2bb189fe37cd>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
PCQuest | Latest technology news, product reviews, and buying advice ┃Snapshot ┃ ┃ ┃ ┃Applies to: Data Clustering, and Machine Learning researchers and developers ┃ ┃USP: Performs a fast graph-based clustering without any eigen vector computation ┃ ┃Primary Links:1. http://ld2.in/43g 2. http://ld2.in/43h ┃ ┃Search engine keywords: Graclus, graph clustering software, multilevel clustering┃ ┃ ┃ Dr R Rajendra Prasath and Sumit Goswami, IIT Kharagpur Clustering is an unsupervised way of putting the objects (data, text, features etc) into coherent groups. It is useful in many application areas in which the identification of associated features is of primary objective. Different types of clustering algorithms include k-nearest neighbor, hierarchical, partitional, mixture-solving and mode seeking, fuzzy, search based, semantic, spectral, artificial neural network (ANN) based, and evolutionary based approaches. Clustering large data is a computationally expensive task. So, the demand of fast clustering approaches is growing. Here we present a fast and efficient clustering tool that uses graph based representation of the objects as nodes and their associations as edges. How clustering works Consider a set of unlabelled (raw) objects as shown in Figure. 1(a). Now the task is to apply the clustering algorithm so as to group the objects as shown in the Figure. 1(b) A basic clustering algorithm includes the following steps: 1) Feature extraction from the given data. 2) Defining the Proximity Measure among the objects; may be a distance measure between objects. 3) Formation of Coherent Groups. 4) Evaluating the purity of the coherent groups/clusters. 5) Presenting the clusters as output. What does Graclus do Graclus is a fast clustering tool that computes the clusters from unlabelled data using graph representations. The input data to be clustered is first encoded into a graph with its information in the adjacency list format in either of two ways: simple graph or weighted graph. Simple graph assumes unit weight to the edges between any two nodes. The weighted graph considers different weights to the edges connecting any two nodes. These weights can be computed in user defined ways. One simple approach to compute the weight of an edge between any two nodes is finding the co-occurrence statistics of those two selected objects. Then, Graclus tool applies normalized cut and ratio association between nodes and edges of a graph without eigenvector computation and clusters the objects using three steps: coarsening, clustering, and refining. Coarsening transforms the given graph into smaller and smaller sub-graphs of desired size. Now, we use these smaller sub-graphs with breadth-first traversal of vertices to form the clusters. Each smaller sub-graph is clustered and the clusters are incrementally added and refined to get the final clusters. The final output contains cluster IDs to which each object belongs to. Building the Tool There are three versions of Graclus Open source tool in Gunzipped tar files – ver 1.0, ver 1.1, and ver 1.2 (latest). This tool was originally implemented in C language. The latest version - graclus1.2.tar.gz - can be downloaded from the link (1) mentioned in the Primary Links. The source C files can be extracted from the tar using: $ tar –xzvf graclus1.2.tar.gz This will extract the files in the folder namely graclus1.2. The latest version GCC/G++ compiler and MAKE / GMAKE are the prerequisite for compiling this distribution. The following commands are used to build the distribution and get the executable graclus. $ cd graclus1.2 $ make clean $ make On successful compilation, this will create “graclus” - the executable file. There is a Matlab interface implemented to perform this fast graph clustering algorithm and the bug free code can be found at: http://ld2.in/43i Building the Input Graph Circled objects are nodes and the lines connecting them are edges. We transform the given data into an undirected graph G = (V, E) where V is the set of objects (data, feature, etc.) and E is the set of edges connecting a pair of objects with certain weight. This weight signifies the strength of association between these two objects with respect to the specific feature selected by the user. Since the graph is undirected, we do not take the order of the objects into account i.e. the relation between the objects (1) and (5) is same as the relation between (5) and (1). Consider the text data: Doc 1 : Human machine interface for Lab computer Doc 2 : opinion of computer system response time Doc 3 : User interface management system Doc 4 : Human system engineering testing of EPS Doc 5 : User perceived response time From the above documents, we extract the unique term list having 17 entries as below: Each term is considered as a node and the number of documents having the term co-occurrences defines the edge weight. The Serial Number inside ( ) denotes the label of the node associated with the corresponding term. There are two ways to compute the edge weights of the graph: (1) All edges have the same weight and (2) Edges weights are different. (1) All edges have the same weight: Consider the graph in Figure 2. Here all edges are assumed to have the same weight. The corresponding adjacency list with graph details of Figure 2 is as follows: Here the first line contains 3 entries:number of nodes, number of edges, and type of the edge weighting (0 = un-weighted; 1 = weighted). Then each line from second onwards shows link details of the objects 1 to n. Thus in this plain text file, we will have 1+n lines in total. The entries present inside the text box is saved in the plain text file with a file name (Here we save the data in the file name: input.graph) (2) Edges weights are different: Consider the graph in Figure 3. Here the edges may have distinct edge weights. The corresponding adjacency list with graph details of Figure 3 is as follows: Clustering using Graclus Now using the graph clustering tool – graclus, we cluster the objects (1) to (17). To do so, we execute the command: $ graclus where graclus is the executable tool, graph input file is the plain text graph file having lines equal to the number of objects+1, and m denotes the number of clusters preferred by the user. Here we use input.graph as the input graph file having entries in 18 lines (one for each object from (1) to (17) and one additional line at the beginning having the statistics of the graph: number of nodes, number of edges and type of the input graph. We choose m=4, that is number is clusters preferred is 4. Thus we execute the following command to get the cluster IDs for the objects (1) – (17). $ graclus input.graph This command performs the fast graph clustering with optimal parameters suggested for normalized cuts and ratio association on the input.graph. Once the clustering is finished, graclus generates an output file having m cluster IDs, one per line in the specific output file with the file name: input.graph.part.4. This means that the file contain cluster IDs for the objects: (1) – (17). The Clusters IDs will be from 0 to m-1, where 0 refers to the cluster 1 and 1 refers to cluster 2, and so on. Processing the output file Now let us take the output file: input.graph.part.4. Here we perform clustering separately for graph with unit / distinct edge weights respectively. The number of lines in this output file is same as the number of objects considered for clustering. In the output file, the first entry belongs to the cluster to which the first term is belonging to. For e.g. the first entry is 1 implies that the first term 'Human' belongs to cluster 2 and the 12th entry contains 3 which implies that the term 'user' belongs to cluster 4. Thus number of entries in the output file is equal to the number of unique terms. We can group the terms by referring its corresponding cluster IDs from the output file. The following clusters show the terms grouped with respect to the graclus output for graph with unit edge weight. 1: [machine, interface, for, Lab, computer] 2: [Human, of, system, engineering, testing, EPS] 3: [opinion, response, time, management] 4: [User, perceived] When we use distinct edge weights, as it can be seen from the above clusters, the clusters formed with the associated terms is more coherent and meaningful. This indirectly refers to the purity of the clusters. The clustering with unit edge weights are also equally useful keeping in view of the reduced time and computational efforts. However, the effective clustering is done by the suitable choice of the objective function with the selected normalized cut and ratio association factors. Graclus tool can be used in effective feature clustering to improve text classification, to improve the efficiency of Image segmentation, to find associated terms for understanding user contexts in Information retrieval, data mining, object/pattern recognition and many other interdisciplinary fields associated with machine learning techniques. • Red Hat releases Enterprise Linux 7 Beta version April 17, 2014 · 1 COMMENTS • Hitachi Systems Acquires Micro Clinic India April 16, 2014 · NO COMMENTS Survey Box
{"url":"http://www.pcquest.com/pcquest/news/178414/graclus-a-fast-graph-clustering-tool","timestamp":"2014-04-17T06:41:20Z","content_type":null,"content_length":"67526","record_id":"<urn:uuid:878d924d-5fe6-4df5-bebe-5a989f91cbf8>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample Size Determination Sample Size Determination PPT Sponsored High Speed Downloads 5.3 Determining Sample Size to Estimate p To Estimate a Population Proportion p If you desire a C% confidence interval for a population proportion p with an accuracy specified by you, how large does the sample size need to be? Statistical planning and Sample size determination Planning Which variables will I collect and how will they be measured? What types of data do they represent? 5.6 Determining Sample Size to Estimate Required Sample Size To Estimate a Population Mean If you desire a C% confidence interval for a population mean with an accuracy specified by you, how large does the sample size need to be? Sample size determination Nick Barrowman, PhD Senior Statistician Clinical Research Unit, CHEO Research Institute March 29, 2010 Outline Example: lowering blood pressure Introduction to some statistical issues in sample size determination Two simple approximate formulas Descriptions of sample ... Determination of Sample Size DR R.P. NERURKAR Associate Professor Dept. of Pharmacology T.N. MEDICAL COLLEGE B.Y.L NAIR HOSPITAL, MUMBAI AYUSH Workshop Sample Size Determination Donna McClish Issues in sample size determination Sample size formulas depend on Study design Outcome measure Dichotomous Ordered Continuous Time to event (survival) Issues in sample size determination Sample size is a function of Type II error (beta) Effect size Type I ... Sample Size Determination Everything You Ever Wanted to Know About Sampling Distributions--And More! Sampling Distribution A frequency distribution of all the means obtained from all the samples of a given size Example: $$ spent on CD’s at Best Buy Daffy $34.00 Donald $72.00 Sylvester $36.00 ... Statistical Methods in Clinical Trials Sample Size Determination Ziad Taib Biostatistics AstraZeneca March 12, 2012 * * * * 88 134 256 844 2152 0.9 65 100 191 631 1607 0.8 51 79 150 496 1264 0.7 32 49 94 309 787 0.5 2.00 1.75 1.50 1.25 1.15 power The number of events needed for a certain power ... Experimental design, basic statistics, and sample size determination Karl W Broman Department of Biostatistics Johns Hopkins Bloomberg School of Public Health Module 28 Sample Size Determination This module explores the process of estimating the sample size required for detecting differences of a specified magnitude for three common circumstances. Experimental design and sample size determination Karl W Broman Department of Biostatistics Johns Hopkins University http://www.biostat.jhsph.edu/~kbroman Note This is a shortened version of a lecture which is part of a web-based course on “Enhancing Humane Science/Improving Animal Research ... Sample Size Determination Janice Weinberg, ScD Professor of Biostatistics ... SAMPLE SIZE: How many subjects are needed to assure a given probability of detecting a statistically significant effect of a given magnitude if one truly exists? Title: Sampling and Sample Size Determination Author: Sandy Cahoon Last modified by: Sandy Cahoon Created Date: 3/15/2004 9:20:52 PM Document presentation format Punch-Lines The intuition of sample size determination Determine sample size for estimating means and proportions Three ways of estimating s and p Chapter 17 Sample Size Determination *Adapted from Andrei Strijnev Basic Considerations in Determining Sample Size Precision – the ... Sample Size and Power Laura Lee Johnson, Ph.D. Statistician National Center for Complementary and Alternative Medicine [email protected] Tuesday, November 15, 2005 Determination of Sample Size: A Review of Statistical Theory Chapter 13 Sample v. Population Inferential Descriptive Sample statistics Population parameters Tables Frequency distribution Percentage distribution Probability Proportion Measures of Central Tendency Mean Median Mode Measures of ... Title: Sampling and Sample Size Determination Author: Sandy Cahoon Last modified by: Sandy Cahoon Created Date: 3/15/2004 9:20:52 PM Document presentation format Sample size determination for estimation of the accuracy of two conditionally independent diagnostic tests Marios Georgiadis, Faculty of Veterinary Medicine, SAMPLE SIZE DETERMINATION Dr. M. H. Rahbar Professor of Biostatistics Department of Epidemiology Director, Data Coordinating Center College of Human Medicine Special Sample Size Determination Situations Sample Size Using Nonprobability Sampling When using nonprobability sampling, sample size is unrelated to accuracy, so cost-benefit considerations must be used Sample Accuracy Sample accuracy: ... Determining the Size of a Sample Sample Accuracy Sample accuracy: refers to how close a random sample’s statistic is to the true population’s value it represents Important points: Sample size is not related to representativeness Sample size is related to accuracy Sample Size and Accuracy ... Title: Sample Size Determination in Studies Where Health State Utility Assessments Are The Measures of Interest. How many subjects do I nneed to caddess Sample Size Determination Inappropriate Wording or Reporting “A previous study in this area recruited 150 subjects & found highly sign. Results” Previous study may have been lucky “Sample sizes are not provided because there is no prior information on which to base them” Do a pilot study ... Chapter XII Sampling: Final and Initial Sample Size Determination Chapter Outline 1) Overview 2) Definitions and Symbols 3) The Sampling Distribution 4) Statistical approaches to Determining Sample Size 5) Confidence Intervals i. One Sample Mean Inference (Chapter 5) In Unit 2 we will discuss: How to estimate the mean from a normal distribution and compute a 95% confidence interval for the true unknown population mean. Experimental design, basic statistics, and sample size determination Karl W Broman Department of Biostatistics Johns Hopkins Bloomberg School of Public Health Sample Size Determination Copyright © 2010 Pearson Education, Inc. Publishing as Prentice Hall (continued) Now solve for n to get Ch. 8-* For the Mean Large Populations Sample Size Determination To determine the required sample size for the mean, ... Title: No Slide Title Subject: Ch12 Sample Size Determination Author: Charles Pflanz revised by B. R. Oates Last modified by: Barbara R. Oates Created Date SAMPLING: Process of Selecting your Observations (Masoud Hemmasi, Ph.D.) Distribution of Sample Means (Xs) for Different Population Distributions With every increase in the sample size… Determining Sample Size for Proportion in PHStat PHStat | Sample Size | Determination for the Proportion … Example in Excel Spreadsheet Confidence Interval for Population Total Amount Point Estimate Confidence Interval Estimate Confidence Interval for Population Total: ... Sampling Fundamentals 2 Sampling Process Identify Target Population Select Sampling Procedure Determine Sampling Frame Determine Sample Size Determining Sample Size – Ad Hoc Methods Rule of thumb Each group should have at least 100 respondents Each sub-group should have 20 – 50 respondents ... ... σ Unknown 8.3 Sample Size Determination 8.4 Confidence Intervals for a Population Proportion 8.5 A Comparison of Confidence Intervals and Tolerance Intervals (Optional) 8.1 z-Based Confidence Intervals for a Mean: ... Sampling Fundamentals 2 Sampling Process Identify Target Population Select Sampling Procedure Determine Sampling Frame Determine Sample Size Determining Sample Size – Ad Hoc Methods Rule of thumb Each group should have at least 100 respondents Each sub-group should have 20 – 50 respondents ... Title: No Slide Title Subject: Ch12 Sample Size Determination Author: Charles Pflanz Last modified by: TL User Created Date: 1/20/2000 5:25:32 PM Document presentation format This concept for determination of sample size is usually referred to as power analysis for sample size determination. Power Analysis For determination of sample size based on power analysis, the investigator is required to specify the following information. Practical issues in Determining Sample Sizes. Importance of the Research Issue: If the results of the survey research are very critical, then the sample size should be increased. The Normal Probability Distribution Sample size determination – adjusting for population size Make an adjustment in the sample size if the sample size is more than 5 percent of the size of the total SAMPLE SIZE DETERMINATION Reasons for Sampling Samples can be studied more quickly than populations. A study of a sample is less expensive than studying an entire population, because smaller number of items or subjects are examined. Topics for Today Proportion Estimation Sample Size Determination Intro to Hypothesis Testing Reference: Burt and Barber, Chapters 8-9, Pages 274-296 In the name of God Determining the Size of a Sample Dr Mohammad Hossein Fallahzade Sample Accuracy Sample accuracy: refers to how close a random sample’s statistic is to the true population’s value it represents Important points: Sample size is not related to representativeness Sample size ... Survey Sample Size MKTG 3342 Fall 2008 Professor Edward Fox Survey Sample Size MKTG 3342 Fall 2008 Professor Edward Fox Sample Size Determination Convenience – Say … about 100. Sample Size Determination Text, Section 3-7, pg. 101 FAQ in designed experiments Answer depends on lots of things; including what type of experiment is being contemplated, how it will be conducted, resources, ... T-test for dependent Samples (ak.a., Paired samples t-test, Correlated Groups Design, Within-Subjects Design, Repeated Measures, ……..) Next week: Read Russ Lenth’s paper on effective sample-size Business Research Methods William G. Zikmund Chapter 17: Determination of Sample Size What does Statistics Mean? Descriptive statistics Number of people Trends in employment Data Inferential statistics Make an inference about a population from a sample Population Parameter Versus Sample ... Sample Size Determination To learn the financial and statistical issues in the determination of sample size To discover methods for determining sample size To gain an appreciation of a normal distribution To understand population, sample, and sampling distributions. PPS Sampling Determination of Sample Size PBV = population book value RF = reliability factor (Table 9-14) TM = tolerable misstatement EM = expected misstatement EF = expansion factor (Table 9-15) Introduction to sample size and power calculations How much chance do we have to reject the null hypothesis when the alternative is in fact true? using Microsoft Excel 3rd Edition Chapter 6 Confidence Interval Estimation Chapter Topics Estimation process Point estimates Interval estimates Confidence interval estimation for the mean ( known) Determining sample size Confidence interval estimation for the mean ( unknown) Confidence interval ... Confidence Intervals Chapter Outline 8.1 z-Based Confidence Intervals for a Population Mean: σ Known 8.2 t-Based Confidence Intervals for a Population Mean: σ Unknown 8.3 Sample Size Determination 8.4 Confidence Intervals for a Population Proportion 8.5 A Comparison of Confidence Intervals and ... Other Methods of Sample Size Determination…cont. Statistical analysis requirements of sample size specification Sometimes the researcher’s desire to use particular statistical technique influences sample size.
{"url":"http://ebookily.org/ppt/sample-size-determination","timestamp":"2014-04-25T02:29:08Z","content_type":null,"content_length":"43077","record_id":"<urn:uuid:f52421a6-3810-4c1c-bf79-13856045897e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] Moving Average stephen sefick ssefick at gmail.com Thu Feb 26 16:05:23 CET 2009 I wrote a little code using Fourier filtering if you would like to take a look at this: x <- read.production(file.choose()) #contiguous.zoo(data.frame(x[,"RM202DO.Conc"], coredata(x[,"RM202DO.Conc"]))) #contiguous.zoo(data.frame(x[,"RM61DO.Conc"], coredata(x[,"RM61DO.Conc"]))) short <- x[42685:48535,"RM202DO.Conc"] #short <- x[53909:59957,"RM61DO.Conc"] short.ts <- ts(coredata(short), frequency=96) #fourier filtering short.fft <- fft(short.ts) plot(Re(short.fft), xlim=c(0,10), ylim=c(-1000, 1000)) short.fft[789:5563] = 0+0i short.ifft = fft(short.fft, inverse = TRUE)/length(short.fft) #zoo series filt <- zoo(coredata(Re(short.ifft)) , index(short)) window.plot <- function(x, y, a, b, s, d){ plot(window.chron(x, a, b, s, d)) plot(window.chron(y, a, b, s, d)) window.plot(short, filt, "04/17/2007", "00:01:00", "04/17/2007", "23:46:00") plot.e <- function(b, w, x, y, z){ a <- window.chron(b, w, x, y, z) plot(a, ylim=range(a)+0.06*c(-1, 1)) lines(a*0.98, col="blue") lines(a*1.02, col="red") it may not be exactly what you want, but you will have a handle on what spectral properties that you have removed. On Thu, Feb 26, 2009 at 9:54 AM, Ted Harding <Ted.Harding at manchester.ac.uk> wrote: > On 26-Feb-09 13:54:51, David Winsemius wrote: >> I saw Gabor's reply but have a clarification to request. You say you >> want to remove low frequency components but then you request smoothing >> functions. The term "smoothing" implies removal of high-frequency >> components of a series. > If you produce a smoothed series, your result of course contains > the low-frequency comsponents, with the high-frequency components > removed. > But if you then subtract that from the original series, your result > contains the high-frequency components, with the low-frequency > compinents removed. > Moving-average is one way of smoothing (but can introduce periodic > components which were not there to start with). > Filtering a time-series is a very open-ended activity! In many > cases a useful start is exploration of the spectral properties > of the series, for which R has several functions. 'spectrum()' > in the stats package (loaded bvy default) is one basic function. > help.search("time series") will throw up a lot of functions. > You might want to look at package 'ltsa' (linear time series > analysis). > Alternatively, if yuou already have good information about the > frequency-structure of the series, or (for instance) know that > it has a will-defined seasonal component, then you could embark > on designing a transfer function specifically tuned to the job. > Have a look at RSiteSearch("{transfer function}") > Hoping this helps, > Ted. >> If smoothing really is your goal then additional R resource would be >> smooth.spline, loess (or lowess), ksmooth, or using smoothing terms in >> regressions. Venables and Ripley have quite a few worked examples of >> such in MASS. >> -- >> David Winsemius >> On Feb 26, 2009, at 7:07 AM, <mauede at alice.it> wrote: >>> I am looking for some help at removing low-frequency components from >>> a signal, through Moving Average on a sliding window. >>> I understand thiis is a smoothing procedure that I never done in my >>> life before .. sigh. >>> I searched R archives and found "rollmean", "MovingAverages {TTR}", >>> "SymmetricMA". >>> None of the above mantioned functions seems to accept the smoothing >>> polynomial order and the sliding window with as input parameters. >>> Maybe I am missing something. >>> I wonder whether there is some building blocks in R if not even a >>> function which does it all (I do not expect that much,though). >>> Even some literature references and/or tutorials are very welcome. >>> Thank you so much, >>> Maura >>> tutti i telefonini TIM! >>> [[alternative HTML version deleted]] >>> ______________________________________________ >>> R-help at r-project.org mailing list >>> https://stat.ethz.ch/mailman/listinfo/r-help >>> PLEASE do read the posting guide >>> http://www.R-project.org/posting-guide.html >>> and provide commented, minimal, self-contained, reproducible code. >> ______________________________________________ >> R-help at r-project.org mailing list >> https://stat.ethz.ch/mailman/listinfo/r-help >> PLEASE do read the posting guide >> http://www.R-project.org/posting-guide.html >> and provide commented, minimal, self-contained, reproducible code. > -------------------------------------------------------------------- > E-Mail: (Ted Harding) <Ted.Harding at manchester.ac.uk> > Fax-to-email: +44 (0)870 094 0861 > Date: 26-Feb-09 Time: 14:54:43 > ------------------------------ XFMail ------------------------------ > ______________________________________________ > R-help at r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. Stephen Sefick Let's not spend our time and resources thinking about things that are so little or so large that all they really do for us is puff us up and make us feel like gods. We are mammals, and have not exhausted the annoying little problems of being mammals. -K. Mullis More information about the R-help mailing list
{"url":"https://stat.ethz.ch/pipermail/r-help/2009-February/189774.html","timestamp":"2014-04-19T01:53:39Z","content_type":null,"content_length":"10111","record_id":"<urn:uuid:b466b867-2975-4cdb-a866-72cf50956094>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Using content-MathML for computation and analysis in Science and Engineering Re: Using content-MathML for computation and analysis in Science and Engineering From: David Carlisle <davidc@nag.co.uk> Date: Tue, 20 Mar 2012 22:46:34 +0000 Message-ID: <4F6908CA.7040809@nag.co.uk> CC: "www-math@w3.org" <www-math@w3.org>, Peter Murray-Rust <pm286@cam.ac.uk> On 20/03/2012 22:10, Paul Libbrecht wrote: > Le 20 mars 2012 à 08:12, Andreas Strotmann a écrit : >> More to the point, a sequence of assignments would therefore >> 'naturally' be expressed as nested lambda expressions in MathML to >> preserve semantics. > My personal opinion, as a mathematician, is that this way of writing > might be well-founded in terms of expressivity or logic, it remains > fully opaque to most mathematicians except logicians. > The concept of binding is understandable, and even that of mapping, > but having to enter everything within lambda terms tends to be a real > readability problem. > paul I'd agree with Paul here. Also, expressing the assignments via lambda binding limits (in most natural encodings) the scope of the assignment to a single expression, which may be too limiting. While Andreas is right that a term encoding like MathML's (or equivalently) OpenMath's has its roots in functional encodings and lambda expressions, I don't see anything wrong with having symbols denoting imperative assignments and, if necessary other imperative can be given a perfectly well defined meaning as x := 1 and can be defined to have scope a containing element, or the current math expression, or the entire document, depending on the needs of the I came across this old article of Gaston Gonnet on a programming CD for (The OpenMath examples are written in an old lispish linear syntax pre-dating xml and no longer supported in OpenMath, but the basic ideas are independent of syntax) I was actually looking for this one That encodes assignment as well as for and while loops etc, and is part of the current xml cd collection (although classed as experimental) Received on Tuesday, 20 March 2012 22:46:59 GMT This archive was generated by hypermail 2.2.0+W3C-0.50 : Tuesday, 20 March 2012 22:47:00 GMT
{"url":"http://lists.w3.org/Archives/Public/www-math/2012Mar/0033.html","timestamp":"2014-04-18T05:44:11Z","content_type":null,"content_length":"11110","record_id":"<urn:uuid:0f0c4218-214b-4d4c-b844-394f392fd077>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximal nilpotent subgroups of SO(n,1) up vote 3 down vote favorite For the Lie group $SO(n,1)$ I believe the maximal nilpotent subgroups are conjugate to either a diagonal group times a compact group or a unipotent group times a compact group. In either case the compact group will commute with the other group. Is this true and if so how do I prove it? lie-groups nilpotent-groups do you mean "maximal connected nilpotent" (="maximal unipotent" here)? If you really mean "maximal nilpotent", you probably also have more subgroups, including finite subgroups. – Yves Cornulier Feb 9 '13 at 10:00 @Yves I do mean maximal connected nilpotent. Thank you. – Davis Feb 10 '13 at 3:14 add comment 1 Answer active oldest votes Let's look first at maximal solvable subgroups, i.e. Borel subgroups. If $G=KAN$ is an Iwasawa decomposition of $G$, Borel subgroups are conjugate to $MAN$, where $M$ is the centralizer of $A$ in $K$. In the case of $SO(n,1)$, we have $K\simeq SO(n),A\simeq\mathbb{R}$(this is the maximal diagonalizable subgroup), $N\simeq\mathbb{R}^{n-1}$and $M\simeq SO up vote 2 down (n-1)$. Your conjecture follows from this (observe that nilpotent subgroups of compact groups are abelian-by-finite, by Lie-Kolchin). vote accepted add comment Not the answer you're looking for? Browse other questions tagged lie-groups nilpotent-groups or ask your own question.
{"url":"http://mathoverflow.net/questions/121273/maximal-nilpotent-subgroups-of-son-1","timestamp":"2014-04-20T05:57:16Z","content_type":null,"content_length":"51874","record_id":"<urn:uuid:cc51f18c-ac1d-4f2c-8751-55189805c3f1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Free Coursera Calculus course with hand-drawn animated materials Robert Ghrist from University of Pennsylvania wrote in to tell us about his new, free Coursera course in single-variable Calculus, which starts on Jan 7. Calculus is one of those amazing, chewy, challenging branches of math, and Ghrist's hand-drawn teaching materials look really engaging. Calculus is one of the grandest achievements of human thought, explaining everything from planetary orbits to the optimal size of a city to the periodicity of a heartbeat. This brisk course covers the core ideas of single-variable Calculus with emphases on conceptual understanding and applications. The course is ideal for students beginning in the engineering, physical, and social sciences. Distinguishing features of the course include: the introduction and use of Taylor series and approximations from the beginning; * a novel synthesis of discrete and continuous forms of Calculus; * an emphasis on the conceptual over the computational; and * a clear, entertaining, unified approach. Calculus: Single Variable Thanks, Robert! 10 Responses to “Free Coursera Calculus course with hand-drawn animated materials” 1. Phssthpok says: Great post, and I am likely being pedantic, but I think it’s important to distinguish between *describing* and *explaining*. 2. DevinC says: I signed up for this course, rather than another introductory calculus course offered through Coursera, because Ghrist’s approach seems radically different than the industry standard. His Funny Little Calculus Textbook starts off with functions, but immediately jumps to Taylor series, assuming the reader knows how to take the derivatives of simple polynomials. In other words, it seems more like a course about understanding calculus than doing calculus. 3. s2redux says: Trying to figure out why George Takei agreed to read this script while wearing pants that are 2 sizes too small. 4. SamSam says: Yay, I already signed up for this course a few months ago! Even as a programmer I’ve always felt my basic calc was a bit rusty, and while I could probably just take a two-session refresher course and jump straight to Calc 2, this course looked fun. Now… hopefully I can stick with the schedule better than I could with “The History of the World Since 1300.” Who would have guessed that 700 years of history would require lots of reading and lectures? (The course was very good, and I made it through four weeks on-schedule, but in the end I didn’t have nearly enough time.) 5. Jeff Erickson says: Needs more Dante. 6. lafave says: I have all the prereqs; I just wish I remembered half of them. This class looks fun. Oh well. 7. sburns54 says: Holy moley! It’s still as densely unapproachable to me as I remember it being when I flunked it in high school! Even with cartoons, which always grab my attention! I was lost by 1:03 seconds in! Thank goodness there are other people that can do this stuff and put it to practical use. I’ll just stay in the kitchen, if anyone needs a sandwich. □ SamSam says: There’s also a more basic Calculus One course: https://www.coursera.org/course/calc1 The only prereqs are highschool algebra and trig. 8. Alissa Mower Clough says: I’m getting somewhat aroused. In which sense, I really don’t know. 9. penguinchris says: I did a computer science course on Coursera in the spring that I thought was very good. I then tried another and didn’t like the way it was done and gave up on it. I definitely need a calculus refresher and this looks good so I’m sold on this one, I hope it turns out well. I write books. My latest is a YA science fiction novel called Homeland (it's the sequel to Little Brother). More books: Rapture of the Nerds (a novel, with Charlie Stross); With a Little Help (short stories); and The Great Big Beautiful Tomorrow (novella and nonfic). I speak all over the place and I tweet and tumble, too. TAGS: calculus Copyfight education math videos youtube Watch these next
{"url":"http://boingboing.net/2012/12/04/free-coursera-calculus-course.html","timestamp":"2014-04-16T10:28:10Z","content_type":null,"content_length":"44084","record_id":"<urn:uuid:e82d8888-2429-491f-a549-8357999e99c5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00555-ip-10-147-4-33.ec2.internal.warc.gz"}