content
stringlengths
86
994k
meta
stringlengths
288
619
• Great Posted by Legacy on 02/21/2004 12:00am Originally posted by: Satish Great Emad.. ... :) Keep it up.... Helped me lot in my project. • Some value addition... Posted by Legacy on 01/03/2004 12:00am Originally posted by: K.G. How to add support for operators like ==, >=, <= etc? • Function Evaluator Posted by Legacy on 09/27/2003 12:00am Originally posted by: PDJ Great work. Helps me a lot ! • Types Posted by Legacy on 07/04/2003 12:00am Originally posted by: Marco Guimar�es Does this support only Integers? I tried to put a double in the expression (5.3 or 3,6) and I can't get the correct result. Thanks ! • state variable Posted by Legacy on 04/18/2003 12:00am Originally posted by: internationale what do the states represented by the "state" variable 1, 2, 3 in the parse method signify? If you were to give them descriptive names what would you call them? • About function with no args Posted by Legacy on 04/10/2003 12:00am Originally posted by: Liu 2 + a() will not work, if a() is a function. • Logn() in Function Evaluator Posted by Legacy on 02/26/2003 12:00am Originally posted by: David Keen The description of the logn() function is incorrect. The first parameter is the base, the second the number whose log is being taken. E.g. logn(10,2) gives 0.3010, logn(10,100) gives 2 exactly. So this is not a problem with the function, but with its description in the table. • sin(pi).... using pi Posted by Legacy on 02/14/2003 12:00am Originally posted by: omoshima Math.Sin(Math.PI) gives 1.22460635382238E-16 which is obviously not correct because we all know that sin(pi) is really 0... I know this is a precision error, but, how could it be handled? anyone have any discussion about it? answers? I figured it out... I just had to handle the case when the person wanted to do sin(pi)... hehe. eaaaaaaasy. but, what about multiples of pi? like 2*pi!? blah... wait, wait... i think if you just mod by pi it'll be fine... YESS!! It worked!!! • -12+10 doesn't work Posted by Legacy on 12/12/2002 12:00am Originally posted by: Chepel 1) unary minus operator has not implemented... 2) no string support... 3) why not just (new CSharpCodeProvider()).CreateCompiler() ? • Why not ScriptControl (or VsaEngine)? Posted by Legacy on 12/04/2002 12:00am Originally posted by: Gene Stolarov In the good old times I had to implements scripting (function evaluation) myself not once or twice. Did manual parsers, yacc/lex. But today why won't you use interfaces provided by microsoft? You are locking yourself into the microsoft world by switching to .NET/C#, so why not to use other stuff they give you for free?
{"url":"http://www.codeguru.com/comment/get/48279310/","timestamp":"2014-04-19T12:06:04Z","content_type":null,"content_length":"7918","record_id":"<urn:uuid:f8e0862d-75e3-42db-8742-7b4a99eff32f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Santa Cruz, CA SAT Math Tutor Find a Santa Cruz, CA SAT Math Tutor ...I recently graduated from Mills College with my degree in Mathematics. In my years at school, I studied abroad in both Germany and Hungary and through that have learned many different teaching styles. At college, I worked as a peer tutor for more than seven different mathematics courses and also was a teaching assistant and taught my own workshops of 8-12 students in Calculus twice a 28 Subjects: including SAT math, reading, English, calculus I am currently studying at the University of California Santa Cruz with a declared major in Biochemistry and Molecular Biology. I graduated high school with a 4.4 weighted GPA and academic distinction. I'm proficient with math up to Calculus 11A and can tutor in PSAT/SAT and ACT prep as well as AP Chemistry, Biology, U.S History and English. 18 Subjects: including SAT math, chemistry, English, biology ...My passion is in chemistry but the quantitative nature of the natural sciences means that I am fluent in algebra through calculus. By nature of my coursework and extracurricular research, I also have extensive experience in lab work and spent a fair amount of my tutoring time assisting with lab ... 24 Subjects: including SAT math, reading, chemistry, calculus I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra, trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years. 11 Subjects: including SAT math, calculus, statistics, geometry ...Before earning my secondary teaching credential, I worked as a math tutor for 3rd through 5th graders in a public elementary school. Due to this experience, I am able to explain math concepts at a level that is developmentally appropriate. I enjoy engaging students this age with fun puzzles and activities that at the same time deepen their conceptual understanding. 10 Subjects: including SAT math, calculus, geometry, algebra 1 Related Santa Cruz, CA Tutors Santa Cruz, CA Accounting Tutors Santa Cruz, CA ACT Tutors Santa Cruz, CA Algebra Tutors Santa Cruz, CA Algebra 2 Tutors Santa Cruz, CA Calculus Tutors Santa Cruz, CA Geometry Tutors Santa Cruz, CA Math Tutors Santa Cruz, CA Prealgebra Tutors Santa Cruz, CA Precalculus Tutors Santa Cruz, CA SAT Tutors Santa Cruz, CA SAT Math Tutors Santa Cruz, CA Science Tutors Santa Cruz, CA Statistics Tutors Santa Cruz, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/santa_cruz_ca_sat_math_tutors.php","timestamp":"2014-04-19T14:58:18Z","content_type":null,"content_length":"24398","record_id":"<urn:uuid:c3f05765-1530-4b93-bae1-4d3814c6294f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Walk Like a Sabermetrician We have seen previously that R/PA does not properly take into account the effect of avoiding outs or creating more PA, and that R/O overstates the importance of avoiding outs for an individual by treating an individual as a complete lineup. With the two most intuitive candidates for a proper individual rate stat disqualified, where do we turn next? It seems only natural that some sabermetricians decided to look back at where they started, with R/PA, and try to make adjustments to it to correct the problem that it has. The first published work that I personally saw that took this approach was done by the poster “Sibelius” in 2000 on FanHome. Sibelius saw the problems in both R/PA and R/O, but was frustrated that others did not share his concerns with R/O. So he published his own method which was based on a modification of R/PA to include the effect of extra PA. His approach began with the truism that each additional out a player avoided would save the team average runs/out from being lost. So by simply comparing the out rate of the player to the out rate of the team, and multiplying by the number of PA for the player, you have a measure of how many outs he has avoided. Then each of these is valued at the team runs/out figure: Runs Saved = (NOA - TmNOA)*PA*TmR/O In Part 3 of the series, I used a hypothetical team with a .330 NOA, .12 R/PA, and .179 R/O. Suppose we had a player on this team with a .400 NOA in 550 PA. He would make (.4-.33)*550 = 38.5 less outs then an average hitter, and these would be worth 38.5*.179 = 6.89 runs. To incorporate these into a rate stat, Sibelius simply added them to the basic Runs Created figure, and divided by Plate Appearances. So this stat is just R/PA PLUS the effect of avoiding outs. And so I will call it R+/PA. Incidentally, I independently developed this approach shortly after Sibelius posted it. Independently is probably a bit of a stretch because I had read his work and agreed with his ideas--I just did not realize that the specific approach I developed was mathematically equivalent to his. My approach was to calculate the number of extra PA the player had generated (through a technique like that described in Part 2 of this series) rather then the number of outs he was avoided, and then to value each extra PA at the team R/PA. But as Sibelius pointed out to me, this produced identical results to his more simple approach. So how does this do with the hypothetical players we have looked at before? In Part 1, we found that R/O rated a player who, when added to an otherwise average team, would score 5.046 R/G ahead of a player whose team would score 5.523 R/G. That first player draws 200 walks and makes 100 outs, while the second player hits 150 homers and makes 350 outs. They are added to a team with a .330 OBA, .12 R/PA, and .179 R/O as above. In this case, Player A has a .667 OBA, and will save (.667-.33)*300*.179 = 18.08 runs, while Player B with his .300 OBA will save (.3-.33)*500*.179 = -2.69 runs. Player A had 54 RC to begin with, so he has 72.08 R+, or .240 R+/PA. Player B had 184 RC to being with, for 181.31 R+, or .363 R+/PA. This is the “right” decision, as Player B’s team scored more runs. R/O comes to the opposite conclusion, that Player A was more valuable.. I do not want to give the impression that because R+/PA meshes with our logic in this case, it will do so in all cases. Take the case of a batter who draws 499 walks in 500 PA. His team will have an OBA of around .404 and score 6.075 R/G. This player, who I’ll call Player C, has a “+” figure of 59.79 runs, plus 499*(1/3) = 166.33 RC, for a R/PA of .333, R/O of 166.3, and a R+/PA of .452. Suppose that we have another player, D, who hits 170 home runs and makes 330 outs in 500 PA. At 1.4 runs, we’ll credit him with 238 RC, but his generation of PA is worth just .895 runs. He winds up with .476 R/PA, .721 R/O, and .478 R+/PA. But his team will have an OBA of “only” .331 and we expect them to score about 6.011 R/G. So Player C is more valuable in this case, but has a lower R+/PA, although admittedly both the R/G and R+/PA differences are fairly small. His R/O, though, is wildly ahead of Player D’s, to an extent that does not at all reflect the impact they have on their team’s scoring. R/PA comes to the “right” decision here, but again, the difference between the two players is way out of proportion with the impact they have on their team’s offense. From these results, perhaps you will agree with me if I state that R+/PA is a sort of third way between R/PA and R/O, that combines strengths and weaknesses. But I would not claim that it is the “correct” rate stat. We would expect a correct stat to always agree with the result of adding a player to a team, because that is how I defined the term “correct” in part 5. But then again, the rate stat is just one component of our evaluation of a batter. The other is our value stat, which we have assumed is Runs Above Average for the sake of this discussion. So how do the RAA figures based on R+/PA differ from those based on R/O? RAA based on R/O is, in this case looking at the team as the base entity, (R/O - TmR/O)*O. RAA based on R+/PA is (R+/PA - TmR/PA)*PA. So, based on R/O: Player A has RAA = (54/100 - .179)*100 = +36.1 Player B has RAA = (184/350 - .179)*350 = +121.35 Based on R+/PA, we have: Player A: RAA = (.240 - .12)*300 = +36 Player B: RAA = (.363 - .12)*500 = +121.5 As you can see, the figures are nearly identical, for two pretty extreme players! They would be even closer, if not identical, had I not rounded the figures off in the process. So the only difference between rating players on R/O and R+/PA, at least against average, is the form and value that the rate stat takes--the value portions are equivalent. But if two procedures yield identical values, shouldn’t they yield identical rates as well? The player has been to the plate the same number of times and made the same number of outs whether we calculate his value based on R/O or R+/PA. So why should his rate stat be different? If you agree with this line of thinking, then you are forced to reach the conclusion that we are using the wrong rate stat. Of course, you could argue that neither R/O or R+/PA forms the proper framework for assessing value. But even if we accept that these frameworks are flawed, we can still accept that within that faulty framework, there is a better way to express the rate stat. This is the road that we will go down in the next installment. 2 comments: 1. Looking forward to the next installment :-) 2. Heh. Someday, I am going to rewrite this whole series. Someday. Comments are moderated, so there will be a lag between your post and it actually appearing. I reserve the right to reject any comment for any reason.
{"url":"http://walksaber.blogspot.com/2006/03/rate-stat-series-pt-6.html?showComment=1227018480000","timestamp":"2014-04-19T01:57:52Z","content_type":null,"content_length":"100369","record_id":"<urn:uuid:c9d1eb62-cc16-409b-8232-f7d613b66bd8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: ASCII Text x Carlos Correa, Peter Lindstrom, "Towards Robust Topology of Sparsely Sampled Data," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 1852-1861, Dec., 2011. BibTex x @article{ 10.1109/TVCG.2011.245, author = {Carlos Correa and Peter Lindstrom}, title = {Towards Robust Topology of Sparsely Sampled Data}, journal ={IEEE Transactions on Visualization and Computer Graphics}, volume = {17}, number = {12}, issn = {1077-2626}, year = {2011}, pages = {1852-1861}, doi = {http://doi.ieeecomputersociety.org/10.1109/TVCG.2011.245}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Visualization and Computer Graphics TI - Towards Robust Topology of Sparsely Sampled Data IS - 12 SN - 1077-2626 EPD - 1852-1861 A1 - Carlos Correa, A1 - Peter Lindstrom, PY - 2011 KW - Neighborhood graphs KW - topology KW - sparsely sampled data. VL - 17 JA - IEEE Transactions on Visualization and Computer Graphics ER - Sparse, irregular sampling is becoming a necessity for reconstructing large and high-dimensional signals. However, the analysis of this type of data remains a challenge. One issue is the robust selection of neighborhoods − a crucial part of analytic tools such as topological decomposition, clustering and gradient estimation. When extracting the topology of sparsely sampled data, common neighborhood strategies such as k-nearest neighbors may lead to inaccurate results, either due to missing neighborhood connections, which introduce false extrema, or due to spurious connections, which conceal true extrema. Other neighborhoods, such as the Delaunay triangulation, are costly to compute and store even in relatively low dimensions. In this paper, we address these issues. We present two new types of neighborhood graphs: a variation on and a generalization of empty region graphs, which considerably improve the robustness of neighborhood-based analysis tools, such as topological decomposition. Our findings suggest that these neighborhood graphs lead to more accurate topological representations of low- and high- dimensional data sets at relatively low cost, both in terms of storage and computation time. We describe the implications of our work in the analysis and visualization of scalar functions, and provide general strategies for computing and applying our neighborhood graphs towards robust data analysis. [1] D. F. Andrews, Plots of high-dimensional data. Biometrics, 28 (1): 125– 136, 1972. [2] D. Asimov, The grand tour: a tool for viewing multidimensional data. SIAM J. Sci. Stat. Comput., 6: 128–143, January 1985. [3] P. Berkhin, Survey of clustering data mining techniques. Technical report, Accrue Software, San Jose, CA, 2002. [4] P. Bose, S. Collette, F. Hurtado, M. Korman, S. Langerman, V. Sacristan, and M. Saumell, Some Properties of Higher Order Delaunay and Gabriel Graphs. In Proceedings of the Canadian Conference on Computational Geometry (CCCG10), 2010. 4 pages. [5] P. Bose, S. Collette, S. Langerman, A. Maheshwari, P. Morin, and M. Smid, Sigma-local graphs. J. of Discrete Algorithms, 8: 15–23, March 2010. [6] G. Box and N. Draper, Empirical Model-Building and Response Surfaces. John Wiley & Sons, 1987. [7] P.-T. T, V. Pascucci, and B. Hamann, Maximizing adaptivity in hierarchical topological models. In Shape Modeling and Applications, 2005 International Conference, pages 298 – 307, 2005. [8] J. Cardinal, S. Collette, and S. Langerman, Empty region graphs. Comput. Geom. Theory Appl., 42: 183–195, April 2009. [9] H. Carr and J. Snoeyink, Representing interpolant topology for contour tree computation. In Topology-Based Methods in Visualization II, Lecture Notes in Computer Science. Springer-Verlag, 2008. [10] H. Carr, J. Snoeyink, and U. Axen, Computing contour trees in all dimensions. Comput. Geom. Theory Appl., 24: 75–94, February 2003. [11] J. Carroll and P. Arabie, Multidimensional scaling. Annual Review of Psychology, 31: 607–649, 1980. [12] F. Chazal, L. J. Guibas, S. Y. Oudot, and P. Skraba, Analysis of scalar fields over point cloud data. In Proceedings of the twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '09, pages 1021–1030, Philadelphia, PA, USA, 2009. Society for Industrial and Applied Mathematics. [13] H. Chernoff, The Use of Faces to Represent Points in K-Dimensional Space Graphically. Journal of the American Statistical Association, 68 (342): 361–368, 1973. [14] R. J. Cimikowski, Properties of some euclidean proximity graphs. Pattern Recogn. Lett., 13: 417–423, June 1992. [15] W. C. Cleveland and M. E. McGill, Dynamic Graphics for Statistics. CRC Press, Inc., Boca Raton, FL, USA, 1st edition, 1988. [16] C. D. Correa, P. Lindstrom, and P.-T. Bremer, Topological spines: A structure-preserving visual representation of scalar fields. IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2011), 17 (12), 2011. [17] N. R. Draper and H. Smith, Applied Regression Analysis (Wiley Series in Probability and Statistics). John Wiley & Sons Inc, 2 sub edition, 1998. [18] Q. Du, V. Faber, and M. Gunzburger, Centroidal Voronoi Tessellations: Applications and Algorithms. SIAM Review, 41 (4):637–676, 1999. [19] H. Edelsbrunner, J. Harer, V. Natarajan, and V. Pascucci, Morse-smale complexes for piecewise linear 3-manifolds. In Proceedings of the nineteenth annual symposium on Computational geometry, SCG '03, pages 361–370, New York, NY, USA, 2003. ACM. [20] H. Edelsbrunner, D. Letscher, and A. Zomorodian, Topological persistence and simplification. Discrete & Computational Geometry, pages 511–533, 2002. [21] I. Fodor, A Survey of Dimension Reduction Techniques, 2002. [22] S. Fortune, Voronoi diagrams and delaunay triangulations. In J. E. Goodman, and J. O'Rourke editors, , Handbook of discrete and computational geometry, pages 377–388. CRC Press, Inc., Boca Raton, FL, USA, 1997. [23] I. Fujishiro, T. Azuma, and Y. Takeshima, Automating transfer function design for comprehensible volume rendering based on 3d field topology analysis (case study). In Proceedings of the conference on Visualization '99: celebrating ten years, VIS '99, pages 467–470, Los Alamitos, CA, USA, 1999. IEEE Computer Society Press. [24] R. K. Gabriel and R. R. Sokal, A new statistical approach to geographic variation analysis. Systematic Zoology, 18 (3): 259–278, Sept. 1969. [25] S. Gerber, P.-T. T, V. Pascucci, and R. Whitaker, Visual exploration of high dimensional scalar functions. IEEE Transactions on Visualization and Computer Graphics, 16: 1271–1280, 2010. [26] A. Gyulassy, V. Natarajan, V. Pascucci, P.-T. T, and B. Hamann, Topology-based simplification for feature extraction from 3d scalar fields. Visualization Conference, IEEE, 0: 68, 2005. [27] W. Harvey and Y. Wang, Topological landscape ensembles for visualization of scalar-valued functions. Computer Graphics Forum, 29 (3): 993– 1002, 2010. [28] T. Hastie and R. Tibshirani, Generalized Additive Models. Chapman and Hall, 1990. [29] A. Inselberg, The plane with parallel coordinates. The Visual Computer, 1: 69–91, 1985. 10.1007/BF01898350. [30] J. Jaromczyk and G. Toussaint, Relative neighborhood graphs and their relatives. Proceedings of the IEEE, 80 (9): 1502 –1517, Sept. 1992. [31] E. Kandogan, Star coordinates: A multi-dimensional visualization technique with uniform treatment of dimensions. In In Proceedings of the IEEE Information Visualization Symposium, Late Breaking Hot Topics, pages 9–12, 2000. [32] J. M. Keil and C. A. Gutwin, Classes of graphs which approximate the complete euclidean graph. Discrete Comput. Geom., 7: 13–28, January 1992. [33] D. Kirkpatrick and J. Radke, A framework for computational morphology. CG, 85: 217–248, 1985. [34] D. W. Matula and R. R. Sokal, Properties of gabriel graphs relevant to geographic variation research and the clustering of points in the plane. Geographical Analysis, 12 (3): 205–222, 1980. [35] M. D. McKay, R. J. Beckman, and W. J. Conover, A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21 (2): 239–245, 1979. [36] M. Meila, Comparing clusterings—an information based distance. Journal of Multivariate Analysis, 98 (5): 873–895, 2007. [37] M. Morse, Relations between the critical points of a real function on n independent variables. Trans. Am. Math. Soc., 27 (3): 345–396, 1925. [38] P. Niyogi, S. Smale, and S. Weinberger, A topological view of unsuper-vised learning from noisy data. Preprint, 2008. [39] P. Oesterling, C. Heine, H. Janicke, G. Scheuermann, and G. Heyer, Visualization of high dimensional point clouds using their density distribution's topology. IEEE Transactions on Visualization and Computer Graphics, 99(PrePrints), 2011. [40] P. Oesterling, G. Scheuermann, S. Teresniak, G. Heyer, S. Koch, T. Ertl, and G. Weber, Two-stage framework for a topology-based projection and visualization of classified document collections. In Visual Analytics Science and Technology (VAST), 2010 IEEE Symposium on, pages 91 – 98, oct. 2010. [41] J. C. Park, H. Shin, and B. K. Choi, Elliptic gabriel graph for finding neighbors in a point set and its application to normal vector estimation. Comput. Aided Des., 38:619–626, June 2006. [42] V. Pascucci, G. Scorzelli, P.-T. T, and A. Mascarenhas, Robust on-line computation of reeb graphs: simplicity and speed. ACM Trans. Graph., 26, July 2007. [43] H. Pohlheim, Examples of Objective Functions, 2006. [44] G. Reeb, Sur les points singuliers d'une forme de Pfaff completement intégrable ou d'une fonction numérique. Comptes Rendus de L'Académie des Séances de Paris, 222: 847–849, 1946. [45] R. Seidel, The upper bound theorem for polytopes: an easy proof of its asymptotic version. Computational Geometry, 5 (2): 115 – 116, 1995. [46] R. Srinivasan, Importance sampling: Applications in communications and detection. Springer, 2002. [47] B. Tang, Orthogonal array-based latin hypercubes. Journal of the American Statistical Association, 88 (424): 1392–1397, 1993. [48] S. Thompson, Sampling. John Wiley & Sons, Inc., 1992. [49] G. Toussaint, Proximity graphs for nearest neighbor decision rules: Recent progress. In Progress, Proceedings of the 34 th Symposium on the INTERFACE, pages 17–20, 2002. [50] G. T. Toussaint, Pattern recognition and geometric complexity. In Proc. 5th ICPR, pages 1324–1347, 1980. [51] F. Tsai, Comparative study of dimensionality reduction techniques for data visualization. Journal of Artificial Intelligence, 3: 119–134, 2010. [52] R. Urquhart, Graph theoretical clustering based on limited neighbourhood sets. Pattern Recognition, 15 (3): 173 – 187, 1982. [53] M. van Kreveld, R. van Oostrum, C. Bajaj, V. Pascucci, and D. Schikore, Contour trees and small seed sets for isosurface traversal. In ACM Symposium on Computational geometry, pages 212–220, [54] R. C. Veltkamp, The γ-neighborhood graph. Computational Geometry, 1 (4): 227–246, 1992. Index Terms: Neighborhood graphs, topology, sparsely sampled data. Carlos Correa, Peter Lindstrom, "Towards Robust Topology of Sparsely Sampled Data," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 1852-1861, Dec. 2011, doi:10.1109/ Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tg/2011/12/ttg2011121852-abs.html","timestamp":"2014-04-18T04:24:58Z","content_type":null,"content_length":"60522","record_id":"<urn:uuid:3483197c-10ae-44d1-a848-9d343b8d1875>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
[racket] Mutable state vs RAM on fire From: joshua at anwu.org (joshua at anwu.org) Date: Wed May 2 18:50:17 EDT 2012 You're right; your version is easier on the eyes. Starting from 1 instead of 0 was deliberate, however. DZ's method involves multiplying N polynomials of degree X, where X is the number of sides on a die and N is the number of dice. So, 2d6 would be like this: (x^1 + x^2 + x^3 + x^4 + x^5 + x^6) (x^1 + x^2 + x^3 + x^4 + x^5 + x^6) With the final exponents as the number rolled, and the final coefficients as the chance (out of 6^2) to roll it. Which is why I used alists, at first; I don't really need the 0 part of the vector. I think I will give hashes a try. On Wed, May 02, 2012 at 05:08:03PM -0400, Matthias Felleisen wrote: > > I rewrote the thing to use vectors instead, and altered the polynomial multiplication function to use (begin) and (vector-set!): > > > > https://github.com/TurtleKitty/Dice/blob/67c2b49707132395f73b43afe111e3904b3898f2/dice.rkt > > > > It too now calculates three hundred dice without breaking a sweat, but... I feel dirty. > It's also wrong and stylistically bad: > (define (poly-mul p1 p2) > (define deg1 (poly-deg p1)) > (define deg2 (poly-deg p2)) > (define noob (make-vector (- (+ deg1 deg2) 1))) > ;; MF: bug, these were 1s: > (for* ([i (in-range 0 deg1)] [j (in-range 0 deg2)]) > (define k (+ i j)) > (define a (* (vector-ref p1 i) (vector-ref p2 j))) > (vector-set! noob k (+ (vector-ref noob k) a))) > noob) > > Can anyone recommend a functional approach that won't melt my motherboard? > > I'm considering hashes, since they have the immutable version of hash-set that vectors seem to lack, but I thought I'd ask the experts. > Do try hash and for/hash. I think you will be pleased with the performance. -- Matthias > p.s. Do report back. Posted on the users mailing list.
{"url":"http://lists.racket-lang.org/users/archive/2012-May/051792.html","timestamp":"2014-04-19T03:00:31Z","content_type":null,"content_length":"7350","record_id":"<urn:uuid:b3f5ef80-2194-4a41-8ced-2d346fd7fbb3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
GNU Aris Manual: eq 6.2.7 Equivalence • P <-> Q <=> (P → Q) ^ (Q → R) Equivalence uses the definition of the biconditional. Claiming that ‘P if and only if Q’ is exactly the same as claiming ‘if P then Q’ and ‘if Q then P’. Equivalence is the only rule that works with biconditionals explicitly, and is thus used any time a biconditional is seen.
{"url":"http://www.gnu.org/software/aris/manual/html_node/eq.html","timestamp":"2014-04-17T10:49:25Z","content_type":null,"content_length":"3703","record_id":"<urn:uuid:83f043d0-59c5-4f33-8c90-7fa10616f12e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Dockweiler, CA Prealgebra Tutor Find a Dockweiler, CA Prealgebra Tutor My background includes a Bachelor of Science in Civil Engineering from Missouri S&T and hundreds of hours training technicians as a corporate trainer with at American Electric Power. I have volunteered to mentor students grades K-12 for over a decade and have been tutoring math privately for a few ... 5 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...English Literature is designed to engage students in the careful reading and critical analysis of imaginative writing. Through closely reading texts, students can deepen their understanding of the ways writers use language to provide both meaning and pleasure for their readers. As they read, st... 27 Subjects: including prealgebra, English, reading, algebra 2 ...I bring years of in-classroom experience to every tutoring session. I received my Bachelors Degree at Georgetown University in English Literature. Additionally, I have practical experience with every elementary-level subject. 28 Subjects: including prealgebra, reading, writing, English ...Where Linear Algebra enables one to start seeing the deep connections that exist among mathematical structures themselves, Differential Equations enables one to start seeing the deep connection between mathematics and the operation of the real world. In other words, one begins to finally underst... 20 Subjects: including prealgebra, chemistry, calculus, reading ...However some concepts in algebra 1 can be challenging as well. But challenging doesn't mean impossible. With hard work and some patience from the student and tutor, algebra 1 can be mastered. 19 Subjects: including prealgebra, Spanish, reading, chemistry Related Dockweiler, CA Tutors Dockweiler, CA Accounting Tutors Dockweiler, CA ACT Tutors Dockweiler, CA Algebra Tutors Dockweiler, CA Algebra 2 Tutors Dockweiler, CA Calculus Tutors Dockweiler, CA Geometry Tutors Dockweiler, CA Math Tutors Dockweiler, CA Prealgebra Tutors Dockweiler, CA Precalculus Tutors Dockweiler, CA SAT Tutors Dockweiler, CA SAT Math Tutors Dockweiler, CA Science Tutors Dockweiler, CA Statistics Tutors Dockweiler, CA Trigonometry Tutors Nearby Cities With prealgebra Tutor Cimarron, CA prealgebra Tutors Dowtown Carrier Annex, CA prealgebra Tutors Farmer Market, CA prealgebra Tutors Foy, CA prealgebra Tutors Green, CA prealgebra Tutors Lafayette Square, LA prealgebra Tutors Miracle Mile, CA prealgebra Tutors Oakwood, CA prealgebra Tutors Pico Heights, CA prealgebra Tutors Rimpau, CA prealgebra Tutors Sanford, CA prealgebra Tutors Vermont, CA prealgebra Tutors Westvern, CA prealgebra Tutors Wilcox, CA prealgebra Tutors Wilshire Park, LA prealgebra Tutors
{"url":"http://www.purplemath.com/Dockweiler_CA_prealgebra_tutors.php","timestamp":"2014-04-21T02:11:53Z","content_type":null,"content_length":"24249","record_id":"<urn:uuid:3a22e719-0270-4dc1-8aad-13b6e13740b2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
Help! Rounding without using preset MATLAB functio Dear Steven, > I'm supposed to write a code that will round any values to a specified numeric placement, such as the tens, hundredths, etc... without using any built in functions such as 'round','ceil',etc.. I'm at a complete loss. > Say, for example I'm testing on a single value 10.23, and I wanted to round it to 10.2. This is my job according to: At first: From a scientifical point of view, the question cannot be solved. You cannot do *anything* in Matlab "without any built-in functions"! Even this calls built-in conversions: a = 10.23; And even less: Starting Matlab calls built-in functions in matlabrc.m! Therefore I'd reject the question completely, or aks the teacher to specify the list of forbidden functions explicitely --- any kind of "etc..." disables the possibility to create a valid answer. But if you cannot convince your teacher from this scientific point of view, you *must* pretend a solution due to the absence of a valid solution. Idea 1: Print the number to the screen. Hold a sheet of paper before the screen such, that you cannot see the not wanted figures. Idea 2: Print the number to the command window and decrease the window width until the wnated number of figures is not visible anymore. Idea 3: Print the number to a string (SPRINTF), search for the dot (FINDSTR) and delete the trailing characters. These 3 ideas are equivalent to CEIL, not ROUND. At least the idea 3 could consider the value of the first cropped figure, but this means a lot of work e.g. for rounding '9.999999999999999999999'. Idea 4: Let e.g. SPRINTF('%.4f') do the work for you. Idea 5: x = 10.23 y = ((x * 10) - rem(x * 10, 1)) / 10 If REM is not allowed, use MOD. If MOD is not allowed, use: x - fix(x) If FIX is not allowed, use: x - double(uint64(x)) If the TIMES operator ("*") is not allowed, simulate it by a SUM: y * 10 => sum(x(ones(1, 10)) if SUM is not allowed, create a FOR loop: s = 0; for i = 1:10, s = s + x; end Idea 6: Write a C-mex file. Unfortunately there is no ROUND in the standard C-libs. But there is a modulo operator, which can be used with idea 5. Idea 7: EVAL does not call a function, it asks Matlab to do it: y = eval('round(x*10)/10') Idea 8: Forget Matlab, do it with a pencil on paper. If your teacher complains, point to the fact, that this produces less CO2 and that your pencil needs less time to boot. But finally consider that "rounding" is not well defined at all for floating point numbers with a limited precision, see: (one of your collegues??) Please feel free to invite your teacher to this discussion. I'd looking forward to argue if it is possible to find a serious answer to a silly question. And please do not be impressed by such homework questions! Spend more time in learning how to use Matlab efficiently. Kind regards, Jan
{"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/296957","timestamp":"2014-04-24T08:37:19Z","content_type":null,"content_length":"59142","record_id":"<urn:uuid:5df8d426-7af2-428b-be71-8de76595d96a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix multiplication algorithms over time The asymptotically fastest algorithm for matrix multiplication takes time O(n^ω) for some value of ω. Here are the best known upper bounds on ω over time. The latest improvements, the first in over 20 years, are due to Andrew Stothers and Virginia Vassilevska Williams. The latter gave an O(n^2.3727)-time algorithm for multiplying matrices. When will the sometimes-conjectured ω = 2 be reached? Certainly nothing wrong with taking a linear fit of this data, right? So that would be around the year 2043. Unfortunately, the pessimist's exponential fit asymptotes to ω = 2.30041...
{"url":"http://youinfinitesnake.blogspot.com/2011/12/matrix-multiplication-algorithms-over.html","timestamp":"2014-04-21T07:04:09Z","content_type":null,"content_length":"39868","record_id":"<urn:uuid:4c85f0e5-3196-4031-921f-f33457097c1d>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
Mastering a Challenge Einstein was said to have had a learning problem called dyscalculia. Dyscalculia is sometimes called "math dyslexia". What it means is, modern mathematical numbers and equations don't translate into meaningful concepts. How could Einstein, a world-renowned physicist, possibly have dyscalculia and solve the equations that he did? Dyscalculia is something that more often than not co-occurs with extremely high IQ, even an immeasurable IQ. It is not a "learning disability" per se; it's more of a specific atypical learning style that can cause some serious problems in standard educational environments and testing environments. No matter what modern diagnosticians try to publish, dyscalculia does not affect logic and reasoning or articulation through the written word. What is most often affected is the ability to verbally articulate concepts that require numbers: an inability to "show your work" (i.e., explain your thought process), which begins with an inability to "count out loud" from an early age. People with dyscalculia are more likely to be abstract/analytical thinkers and learners, the most difficult learning style to comprehend and almost impossible to teach to people with other learning styles. When I was young (in the late-60's/early-70's), I was identified as being in MENSA's highest range of aptitude, technically immeasurable, and also given a disparaging label, like Einstein ("idiot savant"). It was determined that I had dyscalculia, something I can share with Einstein, Mary Tyler Moore and Cher, three of my favorite people. Although I was already an accomplished musician and had no difficulty reading music and understanding time signatures, yet, the simplest mathematically equations and concepts (division, "prime" numbers, multiplication) baffled me. Word problems were absolutely out of the question, yet, when showed a quadratic equation for the first time, I translated and solved it on my own. I kept getting placed into advanced mathematics and within one week, kept having to withdraw. teachers and school psychologists could not understand it until finally, the diagnosis "dyscalculia" came along and at that point, I was allowed to continue schooling without math classes because there simply was no point in asking me to take them. I would never "do math", I was told, never be a scientist. I lasted to ninth grade, like Einstein. I had to take my high school degree through correspondence, and graduated with an A average. I went into college, finally, for linguistics. My very last year in undergraduate courses, I had to pass college algebra with a minimum grade of 'B' or I would not be allowed to graduate. I had to take two years of math prior to college algebra, including pre-algebra, introduction to college math, an audited course for college algebra, and two preparation courses for algebra I and II. These classes I passed with A's, because I had a boyfriend who took the problem so seriously that he sat with me every night for one hour for over a year, drilling the steps into my head. I got 'A's in all my preparatory classes but, despite that, when I took the actual course, within a week it became obvious that I would not be able to understand the concepts whatsoever, as if I had never seen a math equation before and had never taken math before. I went into shock and I panicked. My college algebra professor was a retired football coach and he stepped in to intervene after I brought the problem to his attention. He said he'd seen this kind of thing before in audio-kinesthetic learners, but I was an abstract/analytical learner. He called a specialist, a retired colleague who had done research in dyscalculia. His colleague gave me a test to determine whether I could generate numbers that represented mathematical concepts and algebraic concepts. Sure enough, I aced that. I could create algebraic equations that expressed concepts of time, distance, dimension and rate without trouble but I could not internalize and process the terms or the steps required for college algebra. This colleague called it "the savant syndrome" (which, at any rate, was kinder than calling it the "idiot syndrome"). And so, my math professor worked with me three times a week outside of class and gave me specialized tests, proving to the math department that I could work with mathematical concepts but in my own atypical way, not the standard way which, to me, was backwards. By allowing me to explore concepts mathematically, he enabled me to be able to generate my own theorems and equations and eventually, to go on to receive a doctorate in linguistics and develop a super-matrix theorem for consciousness. He told me then that anybody else would have quit a long time ago. He said he didn't understand the syndrome, himself, but he knew one thing: as long as you don't quit, you will succeed. The only way a person fails is if they give up. I applied that wisdom from "Coach" Brown throughout the rest of my career and life, and his equation proves true: quit = fail; not quitting = success over time. The end product of my perseverance was the formula described below, in my best "math=ese": Definition of terms quit (Q), fail (F), success (S), time (t) Identity of variables 1 = yes, 0 = no With the given identities, it can be stated that if Q = 1, then Q = F;therefore, if Q equals 0, then Q = S/t This is the only formula that matters in life, or in math.
{"url":"http://www.wyzant.com/resources/blogs/240453/mastering_a_challenge","timestamp":"2014-04-20T14:59:49Z","content_type":null,"content_length":"35663","record_id":"<urn:uuid:6a43ab06-d224-46d5-b6d4-de12828d0c17>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
A002658 - OEIS A002658 a(0) = a(1) = 1; for n > 0, a(n+1) = a(n)*(a(0)+...+a(n-1)) + a(n)*(a(n)+1)/2. 11 (Formerly M1814 N0718) 1, 1, 2, 7, 56, 2212, 2595782, 3374959180831, 5695183504489239067484387, 16217557574922386301420531277071365103168734284282 (list; graph; refs; listen; history; text; internal format) OFFSET 0,3 COMMENTS Number of planted trees in which every node has degree <=3 and of height n; or products of height n when multiplication is commutative but non-associative. Also called planted 3-trees or planted unary-binary trees. The next term (which was incorrectly given) is in fact too large to include. See the b-file. Comment from Marc LeBrun: Maximum possible number of distinct new values after applying a commuting operation N times to a single initial value. Divide the natural numbers in sets of consecutive numbers, starting with {1}, each set with number of elements equal to the sum of elements of the preceding set. The number of elements in the n-th (n>0) set gives a(n). The sets begin {1}, {2}, {3,4}, {5,6,7,8,9,10,11}, ... - Floor van Lamoen (fvlamoen(AT)hotmail.com), Jan 16 2002 Consider the free algebraic system with one binary commutative (x+y) operator and one generator A. The number of elements of height n is a(n) where the height of A is zero and the height of (x+y) is one more than the maximum height of x and y. - Michael Somos, Mar 06 2012 Sergey Zimnitskiy, May 08 2013, provided an illustration for A006894 and A002658 in terms of packing circles inside circles. The following description of the figure was supplied by Allan Wilks. Label a blank page "1" and draw a black circle labeled "2". Subsequent circles are labeled "3", "4", ... . In the black circle put two red circles (numbered "3" and "4"); two because the label of the black circle is "2". Then in each of the red circles put blue circles in number equal to the labels of the red circles. So these get labeled "5", ..., "11". Then in each of the blue circles, starting with circle "5", place a set of green (say) circles, equal in number to the label of the enclosing blue circle. When all of the green circles have been drawn, they will be labeled "12", ..., "67". If you take the maximum circle label at each colored level, you get 1,2,4,11,67,2279,..., which is A006894, which itself is the partial sums of A002658. The picture is a visualization of Floor van Lamoen's comment above. REFERENCES I. M. H. Etherington, On non-associative combinations, Proc. Royal Soc. Edinburgh, 59 (Part 2, 1938-39), 153-162. F. Harary et al., Counting free binary trees..., J. Combin. Inform. System Sciences, 17 (1992), 175-181. Z. A. Melzak, A note on homogeneous dendrites, Canad. Math. Bull., 11 (1968), 85-93; http://cms.math.ca/10.4153/CMB-1968-012-1 N. J. A. Sloane, A Handbook of Integer Sequences, Academic Press, 1973 (includes this sequence). N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence). LINKS David Wasserman, Table of n, a(n) for n = 0..13 Sergey Zimnitskiy, Illustration of initial terms of A006894 and A002658 Index entries for sequences related to rooted trees Index entries for sequences related to trees Index entries for "core" sequences FORMULA a(n + 1) = a(n) * (a(n) / a(n-1) + (a(n) + a(n-1)) / 2) [equation (5) on page 87 of Melzak 1968 with a() instead of his f()] MAPLE s := proc(n) local i, j, ans; ans := [ 1 ]; for i to n do ans := [ op(ans), ans[ i ]*(add(j, j=ans)-ans[ i ])+ans[ i ]*(ans[ i ]+1)/2 ] od; RETURN(ans); end; t1 := s(10); A002658 := n-> MATHEMATICA Clear[a, b]; a[0] = a[1] = 1; b[0] = b[1] = 1; b[n_] := b[n] = b[n-1] + a[n-1]; a[n_] := a[n] = (a[n-1]+1)*a[n-1]/2 + a[n-1]*b[n-1]; Table[a[n], {n, 0, 9}] (* Jean-François Alcover, Jan 31 2013, after Frank Harary *) PROG (PARI) {a(n) = local(a1, a2); if( n<2, n>=0, a2 = a(n-1); a1 = a(n-2); a2 * (a2 / a1 + (a1 + a2) / 2))} /* Michael Somos, Mar 06 2012 */ a002658 n = a002658_list !! n a002658_list = 1 : 1 : f [1, 1] where f (x:xs) = y : f (y:x:xs') where y = x * sum xs + x * (x + 1) `div` 2 -- Reinhard Zumkeller, Apr 10 2012 CROSSREFS Cf. A006894, A005588. First differences of A072638. Sequence in context: A227381 A182055 A211209 * A175818 A034939 A048898 Adjacent sequences: A002655 A002656 A002657 * A002659 A002660 A002661 KEYWORD nonn,easy,core,nice AUTHOR N. J. A. Sloane. EXTENSIONS Corrected by David Wasserman, Nov 20 2006 STATUS approved
{"url":"http://oeis.org/A002658","timestamp":"2014-04-19T12:32:48Z","content_type":null,"content_length":"23248","record_id":"<urn:uuid:413ca339-ab55-4438-b982-5f89dac3ec48>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Millbrae Trigonometry Tutor Find a Millbrae Trigonometry Tutor Hello! I have been a professional tutor since 2003, specializing in math (pre-algebra through AP calculus), AP statistics, and standardized test preparation. I am very effective in helping students to not just get a better grade, but to really understand the subject matter and the reasons why things work the way they do. 14 Subjects: including trigonometry, calculus, statistics, geometry ...I wrote "Handbook for Centrifuge Designers" while at Beckman Instruments in Palo Alto, California. It is critical, before going into pre-algebra, that students have a firm command of arithmetic. My experience is that many such students have troubles adding and subtracting fractions, and my first task with such students is to make sure that they have mastery of that kind of 17 Subjects: including trigonometry, calculus, physics, geometry ...ACT is similar to the SAT but has a different format and you should pay attention to the schools that you apply to. They might want The ACT instead. The scoring is different but again when you practice enough you can become familiar with the questions and succeed. 13 Subjects: including trigonometry, calculus, ASVAB, geometry ...I have several years of experience tutoring in a wide variety of subjects and all ages, from small kids to junior high to high school, and kids with learning disabilities. I am also available to tutor adults who are preparing for the GRE, LSAT, or wish to learn a second language. I'm fluent in ... 48 Subjects: including trigonometry, Spanish, English, reading ...My undergraduate degree is in mathematics, and I have worked as a computer professional, as well as a math tutor. My doctoral degree is in psychology. I think this is a wonderful combination: I can relate to students, understand their frustrations and fears, and at the same time I deeply unders... 20 Subjects: including trigonometry, calculus, statistics, geometry
{"url":"http://www.purplemath.com/millbrae_trigonometry_tutors.php","timestamp":"2014-04-19T09:39:32Z","content_type":null,"content_length":"24303","record_id":"<urn:uuid:4d33f340-2653-4b60-ac37-19b12a491567>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/515b58ebe4b07077e0c1823d","timestamp":"2014-04-18T23:21:30Z","content_type":null,"content_length":"67905","record_id":"<urn:uuid:f6264f80-c39a-43cf-bde7-0e36c17541c3>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the Burnside ring a lambda-ring? + conjecture in Knutson p. 113 up vote 3 down vote favorite Warning: I'll be using the "pre-$\lambda$-ring" and "$\lambda$-ring" nomenclature, as opposed to the "$\lambda$-ring" and "special $\lambda$-ring" one (although I just used the latter a few days ago on MO). It's mainly because both sources use it, and I am (by reading them) slowly getting used to it. Let $G$ be a finite group. The Burnside ring $B\left(G\right)$ is defined as the Grothendieck ring of the category of finite $G$-sets, with multiplication defined by cartesian product (with diagonal structure, or at least I have difficulties imagining any other $G$-set structure on it; please correct me if I am wrong). For every $n\in\mathbb{N}$, we can define a map $\sigma^n:B\left(G\right)\to B\left(G\right)$ as follows: Whenever $U$ is a $G$-set, we let $\sigma^n U$ be the set of all multisets of size $n$ consisting of elements from $U$. The $G$-set structure on $\sigma^n U$ is what programmers call "map": an element $g\in G$ is applied by applying it to each element of the multiset. This way we have defined $\sigma^n U$ for every $G$-set $U$; we extend the map $\sigma^n$ to all of $B\left(G\right)$ (including "virtual" $G$-sets) by forcing the rule $\displaystyle \sigma^i\left(u+v\right)=\sum_{k=0}^i\sigma^k\left(u\right)\sigma^{i-k}\left(v\right)$ for all $u,v\in B\left(G\right)$. Ah, and $\sigma^0$ should be identically $1$, and $\sigma^1=\mathrm{id}$. Anyway, this works, and gives a "pre-$\sigma$-ring structure", which is basically the same as a pre-$\lambda$-ring structure, with $\lambda^i$ denoted by $\sigma^i$. Now, we turn this pre-$\sigma$-ring into a pre-$\lambda$-ring by defining maps $\lambda^i:B\left(G\right)\to B\left(G\right)$ by $\displaystyle \sum_{n=0}^{\infty}\sigma^n\left(u\right)T^n\cdot\sum_{n=0}^{\infty}\left(-1\right)^n\lambda^n\left(u\right)T^n=1$ in $B\left(G\right)\left[\left[T\right]\right]$ for every $u\in B\ Now, let me quote two sources: Donald Knutson, $\lambda$-Rings and the Representation Theory of the Symmetric Group, 1973, p. 107: "The fact that $B\left(G\right)$ is a $\lambda$-ring and not just a pre-$\lambda$-ring - i. e., the truth of all the identities - follows from [...]" Michiel Hazewinkel, Witt vectors, part 1, 19.46: "It seems clear from [370] that there is no good way to define a $\lambda$-ring structure on Burnside rings, see also [158]. There are (at least) two different choices giving pre-$\lambda$-rings but neither is guaranteed to yield a $\lambda$-ring. Of the two the symmetric power construction seems to work best." (No, I don't have access to any of these references.) For a long time I found Knutson's assertion self-evident (even without having read that far in Knutson). Now I tend to believe Hazewinkel's position more, particularly as I am unable to verify one of the relations required for a pre-$\lambda$-ring to be a $\lambda$-ring: for $B\left(G\right)$. What also bothers me is Knutson's "conjecture" on p. 113, which states that the canonical (Burnside) map $B\left(G\right)\to SCF\left(G\right)$ is a $\lambda$-homomorphism, where $SCF\left(G\right)$ denotes the $\lambda$-ring of super characters on $G$, with the $\lambda$-structure defined via the Adams operations $\Psi^n\left(\varphi\left(H\right)\right)=\varphi\left(H^n\right)$ (I think he wanted to say $\left(\Psi^n\left(\varphi\right)\right)\left(H\right)=\varphi\left(H^n\right)$ instead) for every subgroup $H$ of $G$, where $H^n$ means the subgroup of $G$ generated by the $n$-th powers of elements of $H$. This seems wrong to me for $n=2$ and $H=\left(\mathbb Z / 2\mathbb Z\right)^2$ already. And if the ring $B\left(G\right)$ is not a $\lambda$-ring, then this conjecture is wrong anyway (since the map $B\left(G\right)\to SCF\left(G\right)$ is injective). Can anyone clear up this mess? I am really confused... Thanks a lot. lambda-rings adams-operations Your LaTeX in the last paragraph is sticking out into the "Related" section, or at least it appears this way to me (I'm using Chrome). You may want to put that long bit on a separate line. – Zev Chonoles Feb 5 '10 at 23:43 Hmm, I don't see much latex in my last paragraph. Maybe you mean the $\lambda^2\left(uv\right)$ formula? Okay, will put it in a new line. – darij grinberg Feb 6 '10 at 0:20 Your equation defining lambda^n in terms of sigma^n doesn't look right. I'm sure there needs to be a minus sign with the t in the series for the lambda's. – Charles Rezk Feb 6 '10 at 1:21 You're right, thanks. – darij grinberg Feb 6 '10 at 9:44 add comment 1 Answer active oldest votes I've just gone and looked up [158] (Gay, C. D.; Morris, G. C.; Morris, I. Computing Adams operations on the Burnside ring of a finite group. J. Reine Angew. Math. 341 (1983), 87--97. On p. 90, at the end of section 2, they say: "Knutson conjectured that the Adams operations on SCF(G) inherited from A(G) [=Burnside ring of G] are given by [the formula you mentioned, involving the subgroup generated by nth powers of a subgroup $K$]. We will show that this is correct if $K$ is cyclic, but not true in general." up vote 3 down vote accepted I haven't looked at it carefully, but they appear to give some more complicated looking formulas for the action of Adams operations on super-characters, valid in some cases. They don't seem to mention Knutson's claim that the Burnside ring is a lambda-ring (not merely pre-lambda). Thanks a lot. This kills the conjecture at least. I assume that for cyclic $K$, it is not particularly hard (one can wlog assume that $G=K$, and $B\left(G\right)$ for cyclic $G$ should be some kind of stunted Witt rings). – darij grinberg Feb 6 '10 at 9:39 Okay, I now found the first page of the reference: reference-global.com/doi/abs/10.1515/crll.1983.341.87 and indeed it claims that the Burnside ring is just pre-$\lambda$ rather than $\lambda$. And a reference to Siebeneicher claiming that it is $\lambda$ if $G$ is cyclic (which should be Witt vector theory again). My question is settled. – darij grinberg Feb 6 '10 at 10:49 Oh, it's actually online: gdz.sub.uni-goettingen.de/dms/load/toc/?PPN=PPN243919689_0341 – darij grinberg Feb 6 '10 at 10:54 add comment Not the answer you're looking for? Browse other questions tagged lambda-rings adams-operations or ask your own question.
{"url":"http://mathoverflow.net/questions/14324/is-the-burnside-ring-a-lambda-ring-conjecture-in-knutson-p-113","timestamp":"2014-04-17T19:08:57Z","content_type":null,"content_length":"62864","record_id":"<urn:uuid:b12d2410-2837-46be-b22e-fbb0703243ac>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Continued Fractions A simple continued fraction is an expression of the form \[ a_1 + \frac{1}{ a_2 + \frac{1}{ a_3 + ... } } \] where the $a_i$ are a possibly infinite sequence of integers such that $a_1$ is nonnegative and the rest of the seqence is positive. We often write $[a_1 ; a_2 , a_3, ...]$ in lieu of the above fraction. We may also call them regular continued fractions. Truncating the sequence at $a_k$ and computing the resulting expression gives the $k$th convergent $p_k / q_k$ for some positive coprime integers $p_k, q_k$. The first three convergents are \[ a_1, \frac{ a_1 a_2 + 1 }{a_2}, \frac{ a_1 (a_2 a_3 + 1) + a_3 }{a_2 a_3 + 1} \] Induction proves the recurrence relations: \[ \begin{aligned} p_k &=& a_k p_{k-1} + p_{k-2} \\ q_k &=& a_k q_{k-1} + q_{k-2} \end{aligned} \] for $k \ge 3$. We can make these relations hold for all $k \ge 1$ by defining $p_{-1} = 0, q_{-1} = 1$ and $p_0 = 1, q_0 = 0$. These correspond to the convergents 0 and $\infty$, the most extreme convergents possible for a nonegative integer. They also allows us to show \[ \frac{p_k}{q_k} - \frac{p_{k-1}}{q_{k-1}} = \frac{(-1)^k}{q_k q_{k-1}} \] Thus the difference between successive convergents approaches zero and alternates in sign, so a continued fraction always converges to a real number. This equation also shows that $p_k$ and $q_k$ are indeed coprime, a small detail glossed over earlier. A similar induction shows \[ p_k q_{k-2} - q_k p_{k-2} = a_k (-1)^{k-1} \] and thus $p_k / q_k$ decreases for $k$ even, and increases for $k$ odd. We demonstrate how to compute convergents of $a = [1;2,2,2,...]$ in practice. Terry Gagen introduced this to me as the “magic table”. I’ll refer to this method by this name, as I don’t know its official title. We write the sequence $a_i$ left to right, and two more rows are started, one for the $p_i$ and one for the $q_i$, which we bootstrap with the zero and infinity convergents: For each row, we carry out the recurrence relation from left to right. In other words, for each row entry, write in (number to the left) $\times$ (column heading) $+$ (number two to the left): 1 2 2 2 2 … \[ a = 1 + \frac{1}{1 + [1;2,2,2,...]} = 1 + \frac{1}{a+1} \] Rearranging, we see $a$ must be a solution to $x^2 = 2$, but since $a$ is positive (indeed, $a \gt 1$), we have $a = \sqrt{2}$. We obtain empirical evidence of some of our earlier statements: the convergents $1/1, 3/2, 7/5, 17/12, ...$ approach $\sqrt{2}$, alternatively overshooting and undershooting the target, but getting closer each time.
{"url":"http://crypto.stanford.edu/pbc/notes/contfrac/definition.html","timestamp":"2014-04-21T12:09:39Z","content_type":null,"content_length":"10576","record_id":"<urn:uuid:0d1a0031-8328-4d8b-a90c-a3c7a9cc7939>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about recursion on The Math Less Traveled Category Archives: recursion My post on factorization diagrams from a month ago turned out to be (unexpectedly) quite popular! I got ten times as many hits as usual the day it was published, and since then quite a few other people have created … Continue reading In an idle moment a while ago I wrote a program to generate "factorization diagrams". Here’s 700: It’s easy to see (I hope), just by looking at the arrangement of dots, that there are in total. Here’s how I did … Continue reading (This is my 200th post! =) And now, the amazing conclusion to this series of posts on Neil Calkin and Herbert Wilf’s paper, Recounting the Rationals, and the answers to all the questions about the hyperbinary sequence. Hold on to your hats! The Calkin-Wilf Tree First, … Continue reading Posted in arithmetic, computation, induction, iteration, number theory, pattern, proof, recursion, sequences, solutions Tagged algorithm, binary, Calkin-Wilf, Euclidean, Haskell, hyperbinary, tree 6 When I originally posed Challenge #12, a certain Dave posted a series of comments with some explorations and partial solutions to part II (the hyperbinary sequence). Although I gave the “solution” in my last post, no solution to any problem … Continue reading And now for the solution to problem #3 from Challenge #12, which asked: how many ways are there to write a positive integer n as a sum of powers of two, with no restrictions on how many powers of two … Continue reading First, a quick recap: continuing an exposition of the paper Recounting the Rationals, we’re investigating the tree of fractions shown below (known as the Calkin-Wilf tree), which is constructed by placing 1/1 at the root node, and giving each node … Continue reading Today I’d like to continue my exposition of the paper “Recounting the Rationals”, which I introduced in a previous post. Recall that our goal is to come up with a “nice” list of the positive rational numbers — where by … Continue reading
{"url":"http://mathlesstraveled.com/category/recursion/","timestamp":"2014-04-19T07:05:29Z","content_type":null,"content_length":"67657","record_id":"<urn:uuid:0227278f-594f-4497-88b8-53be0484fd8a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Sage Mathematical Software System - Sage Why Sage? Dan Drake First, I'll say a bit about how I use Sage: in the first place, I use it for my own research, and also use it for my teaching -- I've used it in calculus and differential equations courses, and next semester will use it in a discrete math course. Mostly I use it in my lectures to do demonstrations, but someday I hope to incorporate Sage into the homework. Free-as-in-beer means: I can install it anywhere I want -- my office computer, my laptop, the computer in the lecture hall, anywhere. I spend exactly zero time wondering if what I'm doing is allowed by the site license, wondering when we'll get the next version, and so on. Free means my students can use it. At the university where I went to graduate school, we used Matlab and Mathematica for our engineering calculus courses. During the first week, I would walk around the computer lab pretending not to hear everyone say 'if you want to use it at home, just go to [whatever site/service is popular for pirated software] and download it.' With Sage, *I* say, "if you want to use it at home, go to sagemath.org and download it." So the students who would otherwise pirate the software benefit by not doing something illegal (and risking installation of malware) -- but the honest students benefit too. Everyone can download and use Sage, so I can take a demo from class, publish it, and *everyone* they can look at it after class and interact with it themselves, instead of just watching me fiddle with it. Sure, Mathematica has their "Player" application, but with Sage, my students can easily alter the demo for themselves, which isn't possible with the Mathematica Player. Free also means that students who graduate and go into industry can continue to use Sage. Maplesoft may not be interested in suing a student who put Maple on his laptop and whose university has a site license, but if your company is developing a product and uses a pirated copy of Maple to do it, they will not be happy. Python means: You have some familiarity with one of the best and most popular programming languages out there, that is available for every platform, and is ubiquitous throughout industry. Students in my math courses who aren't math majors and will work in industry gain something by learning a bit of Python. Free-as-in-speech + Python means: When students run up against a bug, they might be able to fix it. Instead of just telling them to sit quietly and be patient until someone else fixes the problem for them, there's a reasonable chance that a good student can be involved in the solution. In education nowadays we encourage students to be "active learners" and so on; there is talk of "discovery-based learning". It is very much in the spirit of these educational philosophies for a student to discover a bug, realize it is a bug, and go about trying to fix it. For higher-level students: Upper division students in mathematics or similar field can participate in fixing bugs on more meaningful level, since they might understand the algorithms being used or know enough about programming to fix broken code. Sage is intended for professional-level real-world use, so for such students, working on improving Sage is real-world experience, useful to the students after graduation. (Who would you rather hire? The guy who did all his assignments, or the one who says, "I fixed bugs and added features to a large software project used by tens of thousands of people"?) Also, undergrad research is super hot these days, and Sage allows students to be up and running fast. Students who are not already highly proficient programmers have to spend tons of time learning, say, C, and then waste a lot of time messing around with pointer arithmetic. Sage allows those students who are more interested in math than malloc() to spend more time thinking about math and less time figuring out why their code segfaults. The Sage notebook server means: The notebook allows network transparency, so I only need to get Sage working well *once*, then use a web browser in the classroom. With shared computers in lecture halls, getting things installed is a pain -- but you can definitely rely on a web browser being installed. (And if the computer only has IE6, as many around here do, it's easy to get Firefox.) It also means that if I have Sage running on a fast computer, I get the benefits of that computer when accessing it from anywhere else. Ted Kosan William: "If it isn't too much trouble, if you have the time, could you consider posting a rough list of some of the factors that went into your decision, why you chose SAGE, how other competitors fared, and what isn't perfect yet about SAGE for your desired goals?" The reasoning behind why I chose SAGE was heavily influenced by the unusual nature of the degree program I teach in. Our Computer Engineering Technology degree is a hybrid degree which is half computer science and half computer engineering with an overall emphasis on application. The program's faculty consist of 2 computer scientists, 2 engineers and 1 technologist and the type of student that the degree is designed to produce is a deep generalist. I am the technologist and this has placed me in a good position to observe both the algorithm-oriented computer science approach and the mathematics-oriented engineering approach to problem solving. I observed that the engineering classes were using software like MathCad and MatLab to great advantage but, after seeing how the CS classes were solving problems using programming languages, tools like MathCad and MatLab did not appear to have a general enough design to me. I eventually decided to try Mathematica because of its more general design and in spite of the fact the engineers didn't quite understand why I would choose it over a traditional engineering-oriented software application :-) I worked with Mathematica for over 2 years and I liked its mathematics capabilities and notebook user interface but I found its programming capabilities to be somewhat awkward to use, especially when compared to the Python we had started to use in some of our CS classes. Beyond this, I am a Linux user and many of our students are too. I found that Mathematica's support for Linux was fairly poor and I was constantly running into issues that needed to be worked around or fixed. When we began our distance learning initiative, we chose to base it on open source software as much as possible and this is when I decided to find an open source alternative to Mathmatica. I think that most people who are searching for mathematics software quickly find this Comparison of Computer Algebra systems page and so did I: After eliminating all of the proprietary applications, the short-list of applications I selected to evaluate consisted of Axiom, Mathomatic, Maxima, SAGE, and Yacas. At that time I was heavily influenced by Mathamatica's GUI notebook front-end along with the GUI front-end of applications like MathCad. Therefore, I rated having a nice GUI front-end high on my list of requirements when I evaluated each of the applications on this list. I eventually decided to move forward using Maxima and Python running inside of TeXmacs and for a while I thought I had found what I was looking for. Maxima seemed like it was able to handle most of my mathematics needs and Python was able to handle most of the computing needs I had, even though I was only a newbie Python developer at the time. TeXmacs was also where I received experience with the concept of wrapping a wide range of software tools in one user interface and I liked the flexibility that this provided. The more I worked with TeXmacs, however, the more 'quirky' it began to seem to me. Beyond this, I began to want maxima and Python to be able to work together more intimately than they were able to do within TeXmacs. I reluctantly decided that I needed to continue my search. Fortunately for me, it was at this point that I experienced a kind of revelation with respect to Python. The language I had learned just before Python was Java and I came to Java from C. For me, Java opened a whole new world of programming that I did not know existed before, especially when I observed the way that computer scientists used it. When I decided to learn Python, however, my experience with Java put limits on what I expected Python to be capable of. As I dug deeper into Python, I started to see that Python was even more advanced than Java than Java had been with C. When I moved from C to Java, it felt like I had moved from manually pounding nails with a hammer to using a pneumatic nail gun. As I started to grasp the amazing power that a dynamic language like Python contains, however, it began to feel like moving from nailing boards with a nail gun to pointing a magic wand and having them appear in a board with no more effort than a flick of the wrist. As I began to study Python deeper and program in it more, it felt like my mind was starting to light up and I began to think about programming-based problem solving in a whole new way. This was the feeling that Python gave me when I started to see how to properly use it. I found myself wanting to enter this frame of mind more frequently and to hold it for longer periods of time. I also started to become convinced that this was the kind of thinking that we should be encouraging our students to embrace. It was with this new perspective that I reevaluated the list of mathematics applications I had compiled earlier and when I looked at SAGE again, it was with new eyes. Instead of Python being just a tool among equals like it was in TeXmacs, in SAGE it was elevated to the position of being the means of managing the enormous complexity inherent in these other tools and enabling the power in them to be made available in a way that seemed more natural and effective than the other approaches that I had looked at. I am also changing my thinking on the worth of entering mathematics using a rich graphical front-end vs. entering it using typed source code. When I was using Mathematica, I use to enter almost all of my input though the graphical notebook front-end because I thought it was somehow superior to entering input as ASCII text. I continued this thinking while I was using TeXmacs but as I have studied SAGE's documentation further, and started to work with it more, I am beginning to form the opinion that it is much more efficient to work at the Python source code level because staying at the Python source code level tends to keeps one's mind in the 'light up' state that I referred to earlier. Therefore, I went from thinking that the best approach for teaching newbies mathematics software was to hide the source code as much as possible behind a GUI front-end to coming up with a way to teach newbies how to program as easily as possible so that they would be able to effectively use a source code interface. Anyway, I know this answer is somewhat abstract, but that is how I made my decision :-) As for what isn't perfect yet about SAGE itself for my desired goals, I am still learning how to use SAGE properly ( and I am also still learning how to use Python properly ) so I am not quite ready to provide suggestions yet, but I will probably be coming up with some in the future. What I currently see a need for is a SAGE tutorial that is targeted at the programmer/mathematics newbie. I am going to try to develop a tutorial like this but it would be helpful if I could periodically ask some dumb questions on this email list about SAGE and mathematics in general.
{"url":"http://www.sagemath.org/library-why.html","timestamp":"2014-04-17T09:45:36Z","content_type":null,"content_length":"29016","record_id":"<urn:uuid:04b0b242-e7d8-44eb-85d2-1cb5629cefd2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Language Learning Statistical Language Learning Prof. Jason Eisner Course # 600.665 - Spring 2002 "When the going gets tough, the tough get empirical" -- Jon Carroll Course Description Catalog description: This course focuses on past and present research that has attempted, with mixed success, to induce the structure of language from raw data such as text. Lectures will be intermixed with reading and discussion of the primary literature. Students will critique the readings, answer open-ended homework questions, and undertake a final project. [Applications] Prereq: 600.465 or perm req'd. The main goals of the seminar are (a) to cover some techniques people have tried for inducing hidden structure from text, (b) to get you thinking about how to do it better. Since most of the techniques in (a) don't perform that well, (b) is more important. The course should also help to increase your comfort with the building blocks of statistical NLP - weighted transducers, probabilistic grammars, graphical models, etc., and the supervised training procedures for these building blocks. Vital Statistics │ Lectures: │ MTW 2-3 pm, Shaffer 304 (but we'll move to the NEB 325a conference room if we're not too big) │ │ Prof: │ Jason Eisner - jason@cs.jhu.edu │ │ Office hrs: │ MW 3-4 pm, or by appt, in NEB 326 │ │ Web page: │ http://cs.jhu.edu/~jason/665 │ │ Mailing list: │ cs665@cs.jhu.edu (cs665 also works on NLP lab machines) │ │ Textbook: │ none, but the textbooks for 465 may come in handy │ │ │ Grading: 30% written responses (graded as check/check-plus, etc.), 30% class participation, 40% project. │ │ Policies: │ Announcements: New readings announced by email and posted below. │ │ │ Submission: Email me written responses to the whole week's readings by 11 am each Monday. │ │ │ Academic honesty: dept. policy (but you can work in pairs on reading responses) │ Readings and Responses Generally we will discuss about 3 related papers each week. Since we may flit from paper to paper, comparing and contrasting, you should read all the papers by the start of the week. A centerpiece of the course is the requirement to respond thoughtfully to each paper in writing. You should email me your responses to the upcoming week's papers, in separate plaintext or postscript messages, by noon each Monday. (Include "665 response" and the paper's authors in the subject line.) I will print the responses out for everyone, and they will anchor our class discussion. They will also be a useful source of ideas for your final projects. A typical response is 1-3 paragraphs; in a given week you might respond at greater length to some papers than others. It's okay to work with another person. What should you write about? Some • Idea for a new experiment, model or other research opportunity inspired by the reading • A clearer explanation of some point that everyone probably had to struggle with • Unremarked consequences of the experimental design or results • Additional experiments you really wish the author had done • Other ways the research could be improved (e.g., flaws you spotted) • Non-obvious connections to other work you know about from class or elsewhere Please be as concrete as possible - and write clearly, since your classmates will be reading your words of wisdom! The Readings Suggestions for readings are welcome, especially well in advance. • Week of Jan. 28: Bootstrapping We will read one or two of these for Wednesday (to be chosen in class on Monday). □ David Yarowsky (1995). Unsupervised word sense disambiguation rivaling supervised methods. Proceedings of ACL '95, 189-196. http://www.cs.jhu.edu/~yarowsky/acl95.ps.gz □ Yael Karov and Shimon Edelman (1996). Learning similarity-based word sense disambiguation from sparse data. Proc. of the 4th Workshop on Very Large Corpora, Copenhagen. http://www.ai.mit.edu/ □ I. Dan Melamed (1997). A word-to-word model of translational equivalence. Proceedings of ACL/EACL '97, 490-497. http://xxx.lanl.gov/abs/cmp-lg/9706026 • Week of Feb. 4: Classes of "interchangeable" words □ Chapter 3 of: Lillian Lee (1997). Similarity-based approaches to natural language processing. Ph.D. thesis. Harvard University Technical Report TR-11-97. http://xxx.lanl.gov/ps/cmp-lg/9708011 □ Chapter 4 of: The same thing. □ Deerwester, S., Dumais, S. T., Landauer, T. K., Furnas, G. W. and Harshman, R. A. (1990). Indexing by latent semantic analysis. Journal of the Society for Information Science, 41(6), 391-407. http://lsi.research.telcordia.com/lsi/papers/JASIS90.pdf; scanned version with figures • Week of Feb. 11: Word meanings, word boundaries □ Carl de Marcken (1996). Linguistic structure as composition and perturbation. Proceedings of ACL-96. http://xxx.lanl.gov/ps/cmp-lg/9606027 □ Chengxiang Zhai (1997). Exploiting context to identify lexical atoms: A statistical view of linguistic context. Proceedings of the International and Interdisciplinary Conference on Modelling and Using Context (CONTEXT-97), Rio de Janeiro, Brzil, Feb. 4-6, 1997. 119-129. http://arXiv.org/ps/cmp-lg/9701001 □ Jeffrey Mark Siskind: ☆ (1995) `Robust Lexical Acquisition Despite Extremely Noisy Input,' Proceedings of the 19th Boston University Conference on Language Development (edited by D. MacLaughlin and S. McEwen), Cascadilla Press, March. ftp://ftp.nj.nec.com/pub/qobi/bucld95.ps.Z ☆ Section 6 of: (1996) A Computational Study of Cross-Situational Techniques for Learning Word-to-Meaning Mappings. Cognition 61(1-2): 39-91, October/November. ftp://ftp.nj.nec.com/pub/qobi • Week of Feb. 18: HMMs and Part-of-Speech Tagging □ Bernard Merialdo (1994). Tagging English text with a probabilistic model. Computational Linguistics 20(2):155-172. scanned PDF version □ David Elworthy (1994). Does Baum-Welch re-estimation help taggers? Proceedings of ANLP, Stuttgart, 53-58. http://xxx.lanl.gov/abs/cmp-lg/9410012 □ Emmanuel Roche and Yves Schabes (1995). Deterministic Part-of-Speech Tagging with Finite State Transducers. Computational Linguistics, March. http://www.merl.com/reports/TR94-07/ • Week of Feb. 25: Unsupervised Finite-State Topology □ Eric Brill (1995). Unsupervised Learning of Disambiguation Rules for Part of Speech Tagging. Proc. of 3rd Workshop on Very Large Corpora, MIT, June. Also appears in Natural Language Processing Using Very Large Corpora, 1997. http://www.cs.jhu.edu/~brill/acl-wkshp.ps. □ Sections 2.4-2.5 and Chapter 3 of: Andreas Stolcke (1994). Bayesian Learning of Probabilistic Language Models. Ph.D., thesis, University of California at Berkeley. ftp://ftp.icsi.berkeley.edu □ Jose Oncina (1998). The data driven approach applied to the OSTIA algorithm. In Proceedings of the Fourth International Colloquium on Grammatical Inference Lecture Notes on Artificial Intelligence Vol. 1433, pp. 50-56 Springer-Verlag, Berlin 1998. ftp://altea.dlsi.ua.es/people/oncina/articulos/icgi98.ps.gz (draft) Please also glance at the following papers so that you roughly understand a couple of the variants that Oncina and his colleagues have proposed: section 1 of this paper on learning stochastic DFAs, and section 3 of this paper dealing with OSTIA-D and OSTIA-R. • Week of Mar. 4: Learning Tied Finite-State Parameters □ Kevin Knight and Jonathan Graehl (1998). Machine Transliteration. Computational Linguistics 24(4):599-612, December. [Hardcopy available and preferred; in a pinch, read the slightly less detailed ACL-97 version.] □ Richard Sproat and Michael Riley (1996). Compilation of Weighted Finite-State Transducers from Decision Trees. Proceedings of ACL. http://arXiv.org/ps/cmp-lg/9606018 □ Jason Eisner (2002). Parameter Estimation for Probabilistic Finite-State Transducers. Submitted to ACL. http://cs.jhu.edu/~jason/papers/#acl02-fst • Week of Mar. 11: Inside-Outside Algorithm If you need to review the inside-outside algorithm, check my course slides before reading the following papers. The slide fonts are unfortunately a bit screwy unless you view under Windows. □ K. Lari and S. Young (1990). The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language 4:35-56. scanned PDF version □ Fernando Pereira and Yves Schabes (1992). Inside-outside reestimation from partially bracketed corpora. Proceedings of the 20th Meeting of the Association for Computational Linguistics. scanned PDF version □ Carl de Marcken (1995). On the unsupervised induction of phrase-structure grammars. Proc. of the 3rd Workshop on Very Large Corpora. http://bobo.link.cs.cmu.edu/grammar/demarcken.ps • Week of Mar. 18: Spring break! • Week of Mar. 25: More CFG Learning □ Chapter 4 of: Andreas Stolcke (1994). Bayesian Learning of Probabilistic Language Models. Ph.D., thesis, University of California at Berkeley. ftp://ftp.icsi.berkeley.edu/pub/ai/stolcke/ thesis.ps.Z [Same thesis as before. This week, read only chapter 4.] □ Stanley Chen (1995). Bayesian grammar induction for language modeling. In Proceedings of the 33rd ACL, pp. 228-235. http://www-2.cs.cmu.edu/~sfc/papers/acl95.ps.gz □ Glenn Carroll and Mats Rooth (1998). Valence induction with a head-lexicalized PCFG. Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing (EMNLP). http:// • Week of Apr. 2: Maximum Entropy Parsing Models □ Adwait Ratnaparkhi (1997). A linear observed time statistical parser based on maximum entropy models. Proceedings of EMNLP. http://xxx.lanl.gov/ps/cmp-lg/9706014 □ Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler (1999). Estimators for stochastic "unification-based" grammars. Proceedings of ACL. http://www.cog.brown.edu/~mj/ □ Eugene Charniak (2000). A maximum-entropy-inspired parser. Proceedings of NAACL. http://www.cs.brown.edu/~ec/papers/shortMeP.ps.gz • Week of Apr. 9: Bootstrapping Syntax □ David Magerman and Mitchell Marcus (1990). Parsing a natural language using mutual information statistics. Proceedings of AAAI. http://www-cs-students.stanford.edu/~magerman/papers/aaai90.ps □ Eric Brill and Mitchell Marcus (1992). Automatically acquiring phrase structure using distributional analysis. DARPA Workshop on Speech and Natural Language. http://nlp.cs.jhu.edu/~nasmith/ □ Menno van Zaanen (2000). Automatically acquiring phrase structure using distributional analysis. Proceedings of ICML. http://turing.wins.uva.nl/~mvzaanen/docs/p_icml00.ps • Week of Apr. 16: Neural nets □ Chalmers, D. (1990). Syntactic transformations on distributed representations. Connection Science 2, 53--62. http://www.u.arizona.edu/~chalmers/papers/transformations.ps □ The following two 6-page papers overlap considerably, so read one and flip through the other to find the differences. ☆ Simon Levy and Jordan Pollack (2001). Infinite RAAM: A Principled Connectionist Substrate for Cognitive Modeling. ICCM. http://citeseer.nj.nec.com/440008.html ☆ Simon Levy, Ofer Melnik, and Jordan Pollack (2000). Infinite RAAM: A Principled Connectionist Basis for Grammatical Competence. COGSCII. http://citeseer.nj.nec.com/305778.html □ Terry Regier (1995). A Model of the Human Capacity for Categorizing Spatial Relations. Cognitive Linguistics 6(1). http://www.psych.uchicago.edu/faculty/Terry_Regier/ftp/cogling94.ps • Week of Apr. 23 □ John M. Zelle and Raymond J. Mooney (1996). Comparative Results on Using Inductive Logic Programming for Corpus-based Parser Construction. In S. Wermter, E. Riloff and G. Scheler (Eds.), Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Processing. Springer Verlag. http://www.cs.utexas.edu/users/ml/papers/chill-bkchapter-95.ps.gz □ Robert C. Berwick and Sam Pilato (1987). Learning Syntax by Automata Induction. Machine Learning 2: 9-38. scanned individual pages Note: No class on Wednesday April 24. • Week of Apr. 30 □ Makoto Kanazawa (1996). Identification in the Limit of Categorial Grammars. Journal of Logic, Language and Information 5(2), 115-155. scanned PDF version □ Jason Eisner (2002). Discovering Syntactic Deep Structure via Bayesian Statistics. Cognitive Science 26(3), May. http://cs.jhu.edu/~jason/papers/#cogsci02 □ Jason Eisner (2002). Transformational Priors Over Grammars. Submitted to EMNLP. http://cs.jhu.edu/~jason/papers/#emnlp02 • Monday, May 13: Due date for final project • Wednesday, May 15, 9am-12pm: Project presentation party (in lieu of final exam) with 20-minute talks
{"url":"http://cs.jhu.edu/~jason/665/","timestamp":"2014-04-20T23:27:20Z","content_type":null,"content_length":"35257","record_id":"<urn:uuid:e295790b-24fa-402c-a754-0e550d388ae5>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Oswego, IL Calculus Tutor Find an Oswego, IL Calculus Tutor ...I took AP Calculus BC in high school, and passed the AP exam with a 5 out of 5. I have also taken Calculus III in college. Since this time, I've helped many students improve their grades in 27 Subjects: including calculus, Spanish, chemistry, English ...I currently tutor all of the foundations for calculus as well (precalculus and trigonometry for example); with this background, I am able to help those in calculus who might need to circle back or refresh any of the supporting topics. I was an advanced math student, completing the equivalent of ... 13 Subjects: including calculus, statistics, algebra 2, geometry ...I also train math teams for competition. I believe that all students can learn, and that it is the responsibility of the teacher to present the material in a manner which will optimize student learning. I analyze the student's learning style, ability, interests, and talents in order to individualize instruction to best fit each student's needs. 24 Subjects: including calculus, geometry, algebra 1, GRE ...I have been exposed to people with special needs throughout most of my life. They are some of the most inspiring people I know and they deserve to be treated with respect and deserve the same type of treatment as everyone else. Currently, I am a manager/head coach for a 14 & Under Travel Softball Team. 19 Subjects: including calculus, reading, geometry, algebra 1 ...As far as tutoring, I have never personally been paid to tutor anyone, but have spent numerous hours of my time helping students/colleagues to learn productive ways that are easy to remember and work for them. I believe in sharing knowledge and always look for opportunities to help others learn.... 9 Subjects: including calculus, statistics, algebra 1, algebra 2 Related Oswego, IL Tutors Oswego, IL Accounting Tutors Oswego, IL ACT Tutors Oswego, IL Algebra Tutors Oswego, IL Algebra 2 Tutors Oswego, IL Calculus Tutors Oswego, IL Geometry Tutors Oswego, IL Math Tutors Oswego, IL Prealgebra Tutors Oswego, IL Precalculus Tutors Oswego, IL SAT Tutors Oswego, IL SAT Math Tutors Oswego, IL Science Tutors Oswego, IL Statistics Tutors Oswego, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/Oswego_IL_calculus_tutors.php","timestamp":"2014-04-19T23:27:05Z","content_type":null,"content_length":"23967","record_id":"<urn:uuid:fdfd8ab4-6b94-4e23-8536-4032c9ceedb2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Does anyone know how to explain the concept of elasticity, particularly the fact that the elasticity changes as one moves along the Demand curve? Best Response You've already chosen the best response. elasticity of demand related with price represents the responsiveness of quantity demanded to the price change...(sorry for my broken english but idk these in Eng even though i know in Albanian) |dw:1319466313725:dw| Hope this helps :) ^_^ Best Response You've already chosen the best response. Elasticity is low for low-demand items (luxury). Elasticity is also low for necessity vital products (that are usually right most on the demand curve). The best way to explain it is to use the term elastic. Elastic will stretch when a force is exercised at either end. For example, when a kid try to pull a flexible elastic, she will be a able to make this elastic twice as longer. This represents the high elasticity when a small move of the force (price) will lead to a big move in the length of the elastic (demand). This is a nice example that I give to my students when I discuss micro with them. Best Response You've already chosen the best response. I agree with Angela and Bilalak and try to extend the concept of elasticity. when v consider it as a ratio of % change in demand to % change in price, as v move along the demand curve.......@ the middle point of the demand curve the price elasticity = 1 and in the upper part of the demand curve it will be > 1 and lower part of the demand curve it will be <1. this is because as v move from left upper to right lower more and more change v see in demand due to small changes in price........ Best Response You've already chosen the best response. \[\xi =DeltaQ \div DeltaP\] Elasticity means that if the price changes as 1%, the quantity demanded or supplied will be changed as \[\Delta\] percent. In math, we can see that elasticity will affect the slope of our curve. In economic term, elasticity measures the effect from in every times price changes to its quantity. Why this is so important? Because it will affect the burden in market if there are intervention or nature of consumers and producers. Best Response You've already chosen the best response. ACTUALLY I ASKED THIS QUESTION AS AN EARLY TEST OF HOW "OPEN STUDY" WOULD WORK, BUT IT IS A PLEASURE TO FIND THAT A NUMBER OF STUDENTS UNDERSTAND THE CONCEPT. I HAVE YET TO PRESENT THIS TO MY CLASS, BUT WE WILL TALK ABOUT IT SHORTLY. WMW Best Response You've already chosen the best response. YES, FOR THOSE OF YOU IN INDIA AND ELSEWHERE WHO HAVE TAKEN CALCULUS, THEN THIS IS AN EASY CONCEPT. FOR THOSE WHO HAVE NOT, IT CAN TAKE A "TON" OF EXPLAINING TO GET THIS CONCEPT ACROSS TO THE Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4ea5124de4b09b578437b343","timestamp":"2014-04-20T18:30:37Z","content_type":null,"content_length":"76376","record_id":"<urn:uuid:33c700e1-f3d9-4f0e-bfef-8586f423c591>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
[FRIAM] Is mathematical pattern the theory of everything? Carl Tollander carl at plektyx.com Sun Nov 25 01:12:03 EST 2007 Some are sympathetic but have reservations. Sabine Hossenfelder: Christine Dantas: Peter Woit: http://www.math.columbia.edu/~woit/wordpress/?p=617 John Baez: http://math.ucr.edu/home/baez/week253.html Steinn Sigurðsson: Some of the sharp-elbow folks have stronger reservations. Lubos Motl: Jacques Distler: One of the more sympathetic people is actually Garrett Lisi: You can see a talk here: http://relativity.phys.lsu.edu/ilqgs/ (look at Tues, Nov 13th). I currently find Sabine's oft-referenced discussion the most accessible, which is not to say that I necessarily understand it all. Next try: Steinn Sigurðsson's post which purports to give a simple description of Garrett's argument and some problems with it. Caveat: I have not read Lisi's paper and have not formed my own opinion of it yet. These links are just pointers to discussions. Time to dig out Georgi's book on Lie Algebras, like I didn't have anything else to think about.... Richard Lowenberg wrote: > Of interest to some. rl > From the New Scientist (there are important diagrams at the site-- <http://www.newscientist.com/article/mg19626303.900;jsessionid=OEGLIBGOIACB > > > Is mathematical pattern the theory of everything? > by Zeeya Merali > GARRETT LISI is an unlikely individual to be staking a claim for a > theory of everything. He has no university affiliation and spends most > of the year surfing in Hawaii. In winter, he heads to the mountains > near Lake Tahoe, California, to teach snowboarding. Until recently, > physics was not much more than a hobby. > That hasn't stopped some leading physicists sitting up and taking > notice after Lisi made his theory public on the physics pre-print > archive this week (www.arxiv.org/abs/0711.0770). By analysing the most > elegant and intricate pattern known to mathematics, Lisi has uncovered > a relationship underlying all the universe's particles and forces, > including gravity - or so he hopes. Lee Smolin at the Perimeter > Institute for Theoretical Physics (PI) in Waterloo, Ontario, Canada, > describes Lisi's work as "fabulous". "It is one of the most compelling > unification models I've seen in many, many years," he says. > That's some achievement, as physicists have been trying to find a > uniform framework for the fundamental forces and particles ever since > they developed the standard model more than 30 years ago. The standard > model successfully weaves together three of the four fundamental > forces of nature: the electromagnetic force; the strong force, which > binds quarks together in atomic nuclei; and the weak force, which > controls radioactive decay. The problem has been that gravity has so > far refused to join the party. > Most attempts to bring gravity into the picture have been based on > string theory, which proposes that particles are ultimately composed > of minuscule strings. Lisi has never been a fan of string theory and > says that it's because of pressure to step into line that he abandoned > academia after his PhD. "I've never been much of a follower, so I > walked off to search for my own theory," he says. Last year, he won a > research grant from the charitably funded Foundational Questions > Institute to pursue his ideas. > He had been tinkering with "weird" equations for years and getting > nowhere, but six months ago he stumbled on a research paper analysing > E8 - a complex, eight-dimensional mathematical pattern with 248 > points. He noticed that some of the equations describing its structure > matched his own. "The moment this happened my brain exploded with the > implications and the beauty of the thing," says Lisi. "I thought: > 'Holy crap, that's it!'" > What Lisi had realised was that if he could find a way to place the > various elementary particles and forces on E8's 248 points, it might > explain, for example, how the forces make particles decay, as seen in > particle accelerators. > Lisi is not the first person to associate particles with the points of > symmetric patterns. In the 1950s, Murray Gell-Mann and colleagues > correctly predicted the existence of the "omega-minus" particle after > mapping known particles onto the points of a symmetrical mathematical > structure called SU(3). This exposed a blank slot, where the new > particle fitted. > Before tackling the daunting E8, Lisi examined a smaller cousin, a > hexagonal pattern called G2, to see if it would explain how the strong > nuclear force works. According to the standard model, forces are > carried by particles: for example, the strong force is carried by > gluons. Every quark has a quantum property called its "colour charge" > - red, green or blue - which denotes how the quarks are affected by > gluons. Lisi labelled points on G2 with quarks and anti-quarks of each > colour, and with various gluons, and found that he could reproduce the > way that quarks are known to change colour when they interact with > gluons, using nothing more than high-school geometry (see Graphic). > Turning to the geometry of the next simplest pattern in the family, > Lisi found he was able to explain the interactions between neutrinos > and electrons by using the star-like F4. The standard model already > successfully describes the electroweak force, uniting the > electromagnetic and the weak forces. Lisi added gravity into the mix > by including two force-carrying particles called "e-phi" and "omega", > to the F4 diagram - creating a "gravi-electroweak" force. > [snip] > ============================================================ > FRIAM Applied Complexity Group listserv > Meets Fridays 9a-11:30 at cafe at St. John's College > lectures, archives, unsubscribe, maps at http://www.friam.org More information about the Friam mailing list
{"url":"http://redfish.com/pipermail/friam_redfish.com/2007-November/006845.html","timestamp":"2014-04-18T15:42:26Z","content_type":null,"content_length":"10698","record_id":"<urn:uuid:c5c5191a-7718-43f7-92c5-95152db987f9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about recursion on The Math Less Traveled Category Archives: recursion My post on factorization diagrams from a month ago turned out to be (unexpectedly) quite popular! I got ten times as many hits as usual the day it was published, and since then quite a few other people have created … Continue reading In an idle moment a while ago I wrote a program to generate "factorization diagrams". Here’s 700: It’s easy to see (I hope), just by looking at the arrangement of dots, that there are in total. Here’s how I did … Continue reading (This is my 200th post! =) And now, the amazing conclusion to this series of posts on Neil Calkin and Herbert Wilf’s paper, Recounting the Rationals, and the answers to all the questions about the hyperbinary sequence. Hold on to your hats! The Calkin-Wilf Tree First, … Continue reading Posted in arithmetic, computation, induction, iteration, number theory, pattern, proof, recursion, sequences, solutions Tagged algorithm, binary, Calkin-Wilf, Euclidean, Haskell, hyperbinary, tree 6 When I originally posed Challenge #12, a certain Dave posted a series of comments with some explorations and partial solutions to part II (the hyperbinary sequence). Although I gave the “solution” in my last post, no solution to any problem … Continue reading And now for the solution to problem #3 from Challenge #12, which asked: how many ways are there to write a positive integer n as a sum of powers of two, with no restrictions on how many powers of two … Continue reading First, a quick recap: continuing an exposition of the paper Recounting the Rationals, we’re investigating the tree of fractions shown below (known as the Calkin-Wilf tree), which is constructed by placing 1/1 at the root node, and giving each node … Continue reading Today I’d like to continue my exposition of the paper “Recounting the Rationals”, which I introduced in a previous post. Recall that our goal is to come up with a “nice” list of the positive rational numbers — where by … Continue reading
{"url":"http://mathlesstraveled.com/category/recursion/","timestamp":"2014-04-19T07:05:29Z","content_type":null,"content_length":"67657","record_id":"<urn:uuid:0227278f-594f-4497-88b8-53be0484fd8a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplest Interpolating Polynomial April 11th 2010, 02:25 AM Simplest Interpolating Polynomial I am very confused, i am studying ''general least squares,polynomial regression, linear regression, newton's interpolating polynomials, lagrange interpolating polynomials'' now. I have a question but i don't know which method i should use, can you help me quickly? Thanks. This is my question: Consider the data in the following table for constant-pressure specific heat, C p (kJ/kg.K) at various temperatures T (K). Determine the simplest interpolating polynomial that is likely to predict Cp within 1% error over the specified range of temperature. T : 1000 1100 1200 1300 1400 1500 Cp 1.410 1.1573 1.1722 1.1858 1.1982 1.2095
{"url":"http://mathhelpforum.com/advanced-applied-math/138450-simplest-interpolating-polynomial-print.html","timestamp":"2014-04-20T10:07:51Z","content_type":null,"content_length":"5156","record_id":"<urn:uuid:dddb147c-bb69-4628-a90a-e9b7a358c8ac>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
solve() bug Igor Schein on Mon, 14 Feb 2000 11:51:16 -0500 [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] results in SEGV for real-valued function f and any real values a and b I can see the confusion between formal variable x and numeric variable x. An obvious work-around is to do I was wondering if I can achieve the same without defining an intermediate function g.
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-0002/msg00001.html","timestamp":"2014-04-19T19:35:52Z","content_type":null,"content_length":"3292","record_id":"<urn:uuid:b03d52bf-35a0-4703-bf0e-057977863e64>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this cool with you? Re: Is this cool with you? I will take and put it into my notes and work with it. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Ok, thank you, "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; How did you label your rows and columns? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Hi bobbym, Here is the state diagram with probabilities. B6 means second player with 6 bullets, A3 means first player with 3 bullets, DA means A is dead, BA means both are alive etc. Last edited by gAr (2011-04-07 23:56:57) "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Check the above image. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; I am labeling it like that too! That is sort of like a check. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Yes, it would be easier to check for mistakes, if any. And the software which I used to draw is JFLAP. It's a mighty software for dealing with automata theory! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; I need a little break to rest, I will see you later. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Ok, see you later. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; I have not been able to find anything wrong with your Markov chain. Could be because it is correct. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Hi bobbym, Thanks for checking. What confuses me is this: Take E.g. the state A5. A gets B with probability 5/12, or transitions to B5 with probability 7/12, which includes the chance of not firing also. If A didn't fire, then there should be a possible transition to A5. Don't know how the chain takes care of that! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; I am not sure I am understanding what you are asking. In your problem there is no option of not firing. If you wanted to have that then you could put the probability into the intersection of row a5 and column a5. That would mean there is a chance of A5 -> A5. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? I mean A pulls the trigger, but the cylinder slot is empty. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; I see what you are saying. Right now it looks like that is being combined to he aims fires and misses. Maybe you could create more choices? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? One way may be to name the states as A6B6, A5B6,A5B5, ... , A3B4, ... A1B1. That would be too many states. You get what I mean? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? I think it is easier to reword the problem. Or, increase the choices, right now A5 has two choices. Make a third choice of when the cylinder is empty. At least the matrix does not get bigger. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? I tried that too. If choices are increased, then from A5 there would be a possibility to go to B5,B4,B3,B2 or B1 and similarly for the others. Then calculating those probabilities would be a bigger task. I'll take a little break, see you later. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Okay, gAr. See you later. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Hi bobbym, I couldn't get any further with this. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; Well look, let's not tell anybody about it. That way we can keep my series solution and your markov chain up there. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Hi bobbym, I did not get what you implied! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; Just joking about hiding it. I was thinking that if I understand what you are saying those solutions are flawed. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? "We will pretend to be right"? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Ya, got it reading again... "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Absolutely! Since I put the wrong answer up there first I will expect my name to go first when we name it the bobbym gAr solution. Kidding aside, I was thinking that the problem could me modelled like this 1) A5 fires and kills B, game over. 2) A5 fires and misses so control passes to B5. 3) A5 has an empty chamber so he stays in A5. He spins the cylinder and repeats 3) until 1 or 2 occurs. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=12832&p=19","timestamp":"2014-04-20T13:33:59Z","content_type":null,"content_length":"39777","record_id":"<urn:uuid:4c5fbb12-5ffa-4c23-8d9b-fd74bc6e0579>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
26-XX Real functions [See also 54C30] 26Bxx Functions of several variables 26B05 Continuity and differentiation questions 26B10 Implicit function theorems, Jacobians, transformations with several variables 26B12 Calculus of vector functions 26B15 Integration: length, area, volume [See also 28A75, 51M25] 26B20 Integral formulas (Stokes, Gauss, Green, etc.) 26B25 Convexity, generalizations 26B30 Absolutely continuous functions, functions of bounded variation 26B35 Special properties of functions of several variables, Hölder conditions, etc. 26B40 Representation and superposition of functions 26B99 None of the above, but in this section
{"url":"http://ams.org/mathscinet/msc/msc.html?t=26B15","timestamp":"2014-04-19T07:12:46Z","content_type":null,"content_length":"12758","record_id":"<urn:uuid:e5e1604e-987c-4cc3-9f67-8241d642643b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [TenTec] Voltages on 50 ohm Dummy Load If you've got 274 volts sinewave peak-to-peak, that's .707 * (274 / 2) = 96.8 volts RMS. Across 50 ohms, my number is P = 96.8**2/50 = 187 Watts. RMS volts are what you want, and it's the voltage swing from 0, not peak to peak. (RMS volts means "take the square root of the _average_ of the square of the voltage" over one or more cycles.) You're right that 1500 watts into 50 ohms is apx 274 volts (at any moment), but power averaged over a cycle is what we and the FCC usually talk about. To get the average right, we need to use RMS volts = .707 x peak volts (from zero). For Vrms = 274 volts, Vpeak is 387, and Vpeak-to-peak is 774. If you want to use Vpeak-to-peak (it can be easier to measure), the formula would be Power (watts rms) = 0.125 * (Vpp)**2 / R for 50 ohms, P(watts rms) = .0025 * (Vpp)**2 (This is only right if you don't have much harmonic distortion.) Your RF ammeter measures RMS amps if it's the thermocouple (heating) type. Hope this helps. This is an interesting thread, but I forget how it's related to TenTec. 73 Martin AA6E On 6/8/05, Robert & Linda McGraw K4TAX <RMcGraw@blomand.net> wrote: > I measure the peak to peak value using my scope. Then power equals E > squared divided by R. This is good enough for "government work". > Looking at it another way: > 100 watts across 50.0 ohms produces 70.710 volts and a current of 1.4142 > amps. > 1500 watts across 50.0 ohm produces 273.8613 volts and a current of 5.477 > amps. > Another approach, I have a known good RF amp meter that I use with my dummy > loads. Comes in handy. > Again, This is good enough for "government work". > 73 > Bob, K4TAX TenTec mailing list
{"url":"http://lists.contesting.com/_tentec/2005-06/msg00299.html?contestingsid=4opaqpb7i38302hdlfnad5kff3","timestamp":"2014-04-18T11:06:03Z","content_type":null,"content_length":"10613","record_id":"<urn:uuid:0a03f574-d392-4089-bba1-d5cff3ecf9be>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Next Article Contents of this Issue Other Issues ELibM Journals ELibM Home EMIS Home Seymour Lipschutz Department of Mathematics, Temple University, Philadelphia, PA 19122, USA Abstract: Let $P$ be a pree which satisfies the first four axioms of Stallings' pregroup. Then the following three axioms are equivalent: \item{[K]} If $ab, bc$ and $cd$ are defined, and $(ab)(cd)$ is defined, then $(ab)c$ or $(bc)d$ is defined. \item{[L]} Suppose $V=[x, y]$ is reduced and suppose $y=ab=cd$ where $xa$ and $xc$ are defined. Then $a^{-1}c$ is defined. \item{[M]} Suppose $W=[x, y, z]$ is reduced. Then $W$ is not reducible to a word of length one. Classification (MSC2000): 20E06 Full text of the article: Electronic fulltext finalized on: 2 Nov 2001. This page was last modified: 16 Nov 2001. © 2001 Mathematical Institute of the Serbian Academy of Science and Arts © 2001 ELibM for the EMIS Electronic Edition
{"url":"http://www.emis.de/journals/PIMB/059/11.html","timestamp":"2014-04-21T07:45:36Z","content_type":null,"content_length":"3405","record_id":"<urn:uuid:909045c6-3f98-48d7-bf3c-f23115042242>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Crofton, MD Calculus Tutor Find a Crofton, MD Calculus Tutor ...Between 2006 and 2011 I was a research assistant at the University of Wyoming and I used to cover my advisor’s graduate level classes from time to time. And, since August 2012 I have tutored math (from prealgebra to calculus II), chemistry and physics for mid- and high-school students here in th... 14 Subjects: including calculus, chemistry, physics, geometry ...I have 5 years of MATLAB experience. I often used it during college and graduate school. I have experience using it for simpler math problems, as well as using it to run more complicated 27 Subjects: including calculus, physics, geometry, algebra 1 ...Continuity as a Property of Functions. II. Derivatives A. 21 Subjects: including calculus, statistics, geometry, algebra 1 ...My current job requires use of these in finite element analysis, free body diagram of forces, and decomposing forces in a given direction. I have a BS in mechanical engineering and took Algebra 1 & 2 in high school and differential equations and statistics in college. My current job requires use of algebra to manipulate equations for force calculation. 10 Subjects: including calculus, physics, geometry, algebra 1 ...I am a graduate of the University of Maryland, where I completed a Bachelor of Arts in 2011 with performance on trombone as the major focus. Piano proficiency was a part of the degree requirement, a requirement which I met by demonstrating proficiency in an informal audition. I performed as a trombone instrumentalist in the Navy Band based at Pearl Harbor, Hawaii from 2004 to 15 Subjects: including calculus, statistics, piano, geometry Related Crofton, MD Tutors Crofton, MD Accounting Tutors Crofton, MD ACT Tutors Crofton, MD Algebra Tutors Crofton, MD Algebra 2 Tutors Crofton, MD Calculus Tutors Crofton, MD Geometry Tutors Crofton, MD Math Tutors Crofton, MD Prealgebra Tutors Crofton, MD Precalculus Tutors Crofton, MD SAT Tutors Crofton, MD SAT Math Tutors Crofton, MD Science Tutors Crofton, MD Statistics Tutors Crofton, MD Trigonometry Tutors
{"url":"http://www.purplemath.com/Crofton_MD_Calculus_tutors.php","timestamp":"2014-04-21T02:11:36Z","content_type":null,"content_length":"23871","record_id":"<urn:uuid:0f462ee3-d055-49bb-91c3-516d104179b8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Pascal's Muggle: Infinitesimal Priors and Strong Evidence - Less Wrong Comments (376) Sort By: Best I don't like to be a bearer of bad news here, but it ought to be stated. This whole leverage ratio idea is very obviously an intelligent kludge / patch / work around because you have two base level theories that either don't work together or don't work individually. You already know that something doesn't work. That's what the original post was about and that's what this post tries to address. But this is a clunky inelegant patch, that's fine for a project or a website, but given belief in the rest of your writings on AI, this is high stakes. At those stakes saying "we know it doesn't work, but we patched the bugs we found" is not acceptable. The combination of your best guess at picking the rigtht decision theory and your best guess at epistemology produces absurd conclusions. Note that you allready know this. This knowledge which you already have motivated this post. The next step is to identify which is wrong, the decision theory or the epistemology. After that you need to find something that's not wrong to replace it. That sucks, it's probably extreamly hard, and it probably sets you back to square one on multiple points. But you can't know that one of your foundations is wrong and just keep going. Once you know you are wrong you need to act consistently with that. This whole leverage ratio idea is very obviously an intelligent kludge / patch / work around I'm not sure that the kludge works anyway, since there are still some "high impact" scenarios which don't get kludged out. Let's imagine the mugger's pitch is as follows. "I am the Lord of the Matrix, and guess what - you're in it! I'm in the process of running a huge number of simulations of human civilization, in series, and in each run of the simulation I am making a very special offer to some carefully selected people within it. If you are prepared to hand over $5 to me, I will kindly prevent one dust speck from entering the eye of one person in each of the next googleplex simulations that I run! Doesn't that sound like a great offer?" Now, rather naturally, you're going to tell him to get lost. And in the worlds where there really is a Matrix Lord, and he's telling the truth, the approached subjects almost always tell him to get lost as well (the Lord is careful in whom he approaches), which means that googleplexes of preventable dust specks hit googleplexes of eyes. Each rejection of the offer causes a lower total utility than would be obtained from accepting it. And if those worlds have a measure > 1/googleplex, there is on the face of it a net loss in expected utility. More likely, we're just going to get non-convergent expected utilities again. The general issue is that the causal structure of the hypothetical world is highly linear. A reasonable proportion of nodes (perhaps 1 in a billion) do indeed have the ability to affect a colossal number of other nodes in such a world. So the high utility outcome doesn't get suppressed by a locational penalty. This whole leverage ratio idea is very obviously an intelligent kludge / patch / work around because you have two base level theories that either don't work together or don't work individually. I'd be more worried about that if I couldn't (apparently) visualize what a corresponding Tegmark Level IV universe looks like. If the union of two theories has a model, they can't be mutually inconsistent. Whether this corresponding multiverse is plausible is a different problem. Why is decision/probability theory allowed to constrain the space of "physical" models? It seems that the proper theory should not depend on metaphysical assumptions. If they are starting to require uncertain metaphysical assumptions, I think that counts as "not working together". Metaphysical assumptions are one thing: this one involves normative assumptions. There is zero reason to think we evolved values that can make any sense at all of saving 3^^^3 people. The software we shipped with cannot take numbers like that in it's domain. That we can think up thought experiments that confuse our ethical intuitions is already incredibly likely. Coming up with kludgey methods to make decisions that give intuitively correct answers to the thought experiments while preserving normal normative reasoning and then--- from there--- concluding something about what the universe must be like is a really odd epistemic position to take. This post has not at all misunderstood my suggestion from long ago, though I don't think I thought about it very much at the time. I agree with the thrust of the post that a leverage factor seems to deal with the basic problem, though of course I'm also somewhat expecting more scenarios to be proposed to upset the apparent resolution soon. Hm, a linear "leverage penalty" sounds an awful lot like adding the complexity of locating you of the pool of possibilities to the total complexity. Thing 2: consider the case of the other people on that street when the Pascal's Muggle-ing happens. Suppose they could overhear what is being said. Since they have no leverage of their own, are they free to assign a high probability to the muggle helping 3^^^3 people? Do a few of them start forward to interfere, only to be held back by the cooler heads who realize that all who interfere will suddenly have the probability of success reduced by a factor of 3^^^3? This is indeed a good argument for viewing the leverage penalty as a special case of a locational penalty (which I think is more or less what Hanson proposed to begin with). Suppose we had a planet of 3^^^3 people (their universe has novel physical laws). There is a planet-wide lottery. Catherine wins. There was a 1/3^^^3 chance of this happening. The lotto representative comes up to her and asks her to hand over her ID card for verification. All over the planet, as a fun prank, a small proportion of people have been dressing up as lotto representatives and running away with peoples' ID cards. This is very rare - only one person in 3^^3 does this today. If the lottery prize is 3^^3 times better than getting your ID card stolen, should Catherine trust the lotto official? No, because there are 3^^^3/3^^3 pranksters, and only 1 real official, and 3^^^3 /3^^3 is 3^^(3^^3 - 3), which is a whole lot of pranksters. She hangs on to her card, and doesn't get the prize. Maybe if the reward were 3^^^3 times greater than the penalty, we could finally get some lottery winners to actually collect their winnings. All of which is to say, I don't think there's any locational penalty - the crowd near the muggle should have exactly the same probability assignments as her, just as the crowd near Catherine has the same probability assignments as her about whether this is a prankster or the real official. I think the penalty is the ratio of lotto officials to pranksters (conditional on a hypothesis like "the lottery has taken place"). If the hypothesis is clever, though, it could probably evade this penalty (hypothesize a smaller population with a reward of 3^^^3 years of utility-satisfaction, maybe, or 3^^^3 new people created), and so what intuitively seems like a defense against pascal's mugging may not be. How does this style of reasoning work on something more like the original Pascal's Wager problem? Suppose a (to all appearances) perfectly ordinary person goes on TV and says "I am an avatar of the Dark Lords of the Matrix. Please send me $5. When I shut down the simulation in a few months, I will subject those who send me the money to [LARGE NUMBER] years of happiness, and those who do not to [LARGE NUMBER] years of pain". Here you can't solve the problem by pointing out the very large numbers of people involved, because there aren't very high numbers of people involved. Your probability should depend only on your probability that this is a simulation, your probability that the simulators would make a weird request like this, and your probability that this person's specific weird request is likely to be it. None of these numbers help you get down to a 1/[LARGE NUMBER] level. I've avoided saying 3^^^3, because maybe there's some fundamental constraint on computing power that makes it impossible for simulators to simulate 3^^^3 years of happiness in any amount of time they might conceivably be willing to dedicate to the problem. But they might be able to simulate some number of years large enough to outweigh our prior against any given weird request coming from the Dark Lords of the Matrix. (also, it seems less than 3^^^3-level certain that there's no clever trick to get effectively infinite computing power or effectively infinite computing time, like the substrateless computation in Permutation City) When we jump to the version involving causal nodes having Large leverage over other nodes in a graph, there aren't Large numbers of distinct people involved, but there's Large numbers of life-centuries involved and those moments of thought and life have to be instantiated by causal nodes. (also, it seems less than 3^^^3-level certain that there's no clever trick to get effectively infinite computing power or effectively infinite computing time, like the substrateless computation in Permutation City) Infinity makes my calculations break down and cry, at least at the moment. Imagine someone makes the following claims: • I've invented an immortality drug • I've invented a near-light-speed spaceship • The spaceship has really good life support/recycling • The spaceship is self-repairing and draws power from interstellar hydrogen • I've discovered the Universe will last at least another 3^^^3 years Then they threaten, unless you give them $5, to kidnap you, give you the immortality drug, stick you in the spaceship, launch it at near-light speed, and have you stuck (presumably bound in an uncomfortable position) in the spaceship for the 3^^^3 years the universe will last. (okay, there are lots of contingent features of the universe that will make this not work, but imagine something better. Pocket dimension, maybe?) If their claims are true, then their threat seems credible even though it involves a large amount of suffering. Can you explain what you mean by life-centuries being instantiated by causal nodes, and how that makes the madman's threat less credible? If what he says is true, then there will be 3^^^3 years of life in the universe. Then, assuming this anthropic framework is correct, it's very unlikely to find yourself at the beginning rather than at any other point in time, so this provides 3^^^3-sized evidence against this scenario. I'm not entirely sure that the doomsday argument also applies to different time slices of the same person, given that Eliezer in 2013 remembers being Eliezer in 2012 but not vice versa. The spaceship has really good life support/recycling The spaceship is self-repairing and draws power from interstellar hydrogen That requires a MTTF of 3^^^3 years, or a per-year probability of failure of roughly 1/3^^^3. I've discovered the Universe will last at least another 3^^^3 years This implies that physical properties like the cosmological constant and the half-life of protons can be measured to a precision of roughly 1/3^^^3 relative error. To me it seems like both of those claims have prior probability ~ 1/3^^^3. (How many spaceships would you have to build and how long would you have to test them to get an MTTF estimate as large as 3^ ^^3? How many measurements do you have to make to get the standard deviation below 1/3^^^3?) Are you sure it wouldn't be rational to pay up? I mean, if the guy looks like he could do that for $5, I'd rather not take chances. If you pay, and it turns out he didn't have all that equipment for torture, you could just sue him and get that $5 back, since he defrauded you. If he starts making up rules about how you can never ever tell anyone else about this, or later check validity of his claim or he'll kidnap you, you should, for game-theoretical reasons not abide, since being the kinda agent that accepts those terms makes you valid target for such frauds. Reasons for not abiding being the same as for single-boxing. Say the being that suffers for 3^^^3 seconds is morally relevant but not in the same observer moment reference class as humans for some reason. (IIRC putting all possible observers in the same reference class leads to bizarre conclusions...? I can't immediately re-derive why that would be.) But anyway it really seems that the magical causal juice is the important thing here, not the anthropic/experiential nature or lack thereof of the highly-causal nodes, in which case the anthropic solution isn't quite hugging the real query. IIRC putting all possible observers in the same reference class leads to bizarre conclusions...? I can't immediately re-derive why that would be. The only reason that I have ever thought of is that our reference class should intuitively consist of only sentient beings, but that nonsentient beings should still be able to reason. Is this what you were thinking of? Whether it applies in a given context may depend on what exactly you mean by a reference class in that context. If it can reason but isn't sentient then it maybe doesn't have "observer" moments, and maybe isn't itself morally relevant—Eliezer seems to think that way anyway. I've been trying something like, maybe messing with the non-sentient observer has a 3^^^3 utilon effect on human utility somehow, but that seems psychologically-architecturally impossible for humans in a way that might end up being fundamental. (Like, you either have to make 3^^^3 humans, which defeats the purpose of the argument, or make a single human have a 3^^^3 times better life without lengthening it, which seems impossible.) Overall I'm having a really surprising amount of difficulty thinking up an example where you have a lot of causal importance but no anthropic counter-evidence. Anyway, does "anthropic" even really have anything to do with qualia? The way people talk about it it clearly does, but I'm not sure it even shows up in the definition—a non-sentient optimizer could totally make anthropic updates. (That said I guess Hofstadter and other strange loop functionalists would disagree.) Have I just been wrongly assuming that everyone else was including "qualia" as fundamental to anthropics? Yeah, this whole line of reasoning fails if you can get to 3^^^3 utilons without creating ~3^^^3 sentients to distribute them among. Overall I'm having a really surprising amount of difficulty thinking up an example where you have a lot of causal importance but no anthropic counter-evidence. I'm not sure what you mean. If you use an anthropic theory like what Eliezer is using here (e.g. SSA, UDASSA) then an amount of causal importance that is large compared to the rest of your reference class implies few similar members of the reference class, which is anthropic counter-evidence, so of course it would be impossible to think of an example. Even if nonsentients can contribute to utility, if I can create 3^^^3 utilons using nonsentients, than some other people probably can to, so I don't have a lot of causal importance compared to them. Anyway, does "anthropic" even really have anything to do with qualia? The way people talk about it it clearly does, but I'm not sure it even shows up in the definition—a non-sentient optimizer could totally make anthropic updates. This is the contrapositive of the grandparent. I was saying that if we assume that the reference class is sentients, then nonsentients need to reason using different rules i.e. a different reference class. You are saying that if nonsentients should reason using the same rules, then the reference class cannot comprise only sentients. I actually agree with the latter much more strongly, and I only brought up the former because it seemed similar to the argument you were trying to remember. There are really two separate questions here, that of how to reason anthropically and that of how magic reality-fluid is distributed. Confusing these is common, since the same sort of considerations affect both of them and since they are both badly understood, though I would say that due to UDT/ADT, we now understand the former much better, while acknowledging the possibility of unknown unknowns. (Our current state of knowledge where we confuse these actually feels a lot like people who have never learnt to separate the descriptive and the normative.) The way Eliezer presented things in the post, it is not entirely clear which of the two he meant to be responsible for the leverage penalty. It seems like he meant for it to be an epistemic consideration due to anthropic reasoning, but this seems obviously wrong given UDT. In the Tegmark IV model that he describes, the leverage penalty is caused by reality-fluid, but it seems like he only intended that as an analogy. It seems a lot more probable to me though, and it is possible that Eliezer would express uncertainty as to whether the leverage penalty is actually caused by reality-fluid, so that it is a bit more than an analogy. There is also a third mathematically equivalent possibility where the leverage penalty is about values, and we just care less about individual people when there are more of them, but Eliezer obviously does not hold that view. I have a problem with calling this a "semi-open FAI problem", because even if Eliezer's proposed solution turns out to be correct, it's still a wide open problem to develop arguments that can allow us to be confident enough in it to incorporate it into an FAI design. This would be true even if nobody can see any holes in it or have any better ideas, and doubly true given that some FAI researchers consider a different approach (which assumes that there is no such thing as "reality-fluid", that everything in the multiverse just exists and as a matter of preference we do not / can not care about all parts of it in equal measure, #4 in this post) to be at least as plausible as Eliezer's current approach. You're right. Edited. (As always, the term "magical reality fluid" reflects an attempt to demarcate a philosophical area where I feel quite confused, and try to use correspondingly blatantly wrong terminology so that I do not mistake my reasoning about my confusion for a solution.) This seems like a really useful strategy! Agreed - placeholders and kludges should look like placeholders and kludges. I became a happier programmer when I realised this, because up until then I was always conflicted about how much time I should spend making some unsatisfying piece of code look beautiful. I don't at all think that this is central to the problem, but I do think you're equating "bits" of sensory data with "bits" of evidence far too easily. There is no law of probability theory that forbids you from assigning probability 1/3^^^3 to the next bit in your input stream being a zero -- so as far as probability theory is concerned, there is nothing wrong with receiving only one input bit and as a result ending up believing a hypothesis that you assigned probability 1/3^^^3 before. Similarly, probability theory allows you to assign prior probability 1/3^^^3 to seeing the blue hole in the sky, and therefore believing the mugger after seeing it happen anyway. This may not be a good thing to do on other principles, but probability theory does not forbid it. ETA: In particular, if you feel between a rock and a bad place in terms of possible solutions to Pascal's Muggle, then you can at least consider assigning probabilities this way even if it doesn't normally seem like a good idea. There is no law of probability theory that forbids you from assigning probability 1/3^^^3 to the next bit in your input stream being a zero True, but it seems crazy to be that certain about what you'll see. It doesn't seem that unlikely to hallucinate that happening. It doesn't seem that unlikely for all the photons and phonons to just happen to converge in some pattern that makes it look and sound exactly like a Matrix Lord. You're basically assuming that your sensory equipment is vastly more reliable than you have evidence to believe, just because you want to make sure that if you get a positive, you won't just assume it's a false positive. Actually, there is such a law. You cannot reasonably start, when you are born into this world, naked, without any sensory experiences, expecting that the next bit you experience is much more likely to be 1 rather than 0. If you encounter one hundred zillion bits and they all are 1, you still wouldn't assign 1/3^^^3 probability to next bit you see being 0, if you're rational enough. Of course, this is mudded by the fact that you're not born into this world without priors and all kinds of stuff that weights on your shoulders. Evolution has done billions of years worth of R&D on your priors, to get them straight. However, the gap these evolution-set priors would have to cross to get even close to that absurd 1/3^^^3... It's a theoretical possibility that's by no stretch a realistic one. Mugger: Give me five dollars, and I'll save 3↑↑↑3 lives using my Matrix Powers. Me: I'm not sure about that. Mugger: So then, you think the probability I'm telling the truth is on the order of 1/3↑↑↑3? Me: Actually no. I'm just not sure I care as much about your 3↑↑↑3 simulated people as much as you think I do. Mugger: "This should be good." Me: There's only something like n=10^10 neurons in a human brain, and the number of possible states of a human brain exponential in n. This is stupidly tiny compared to 3↑↑↑3, so most of the lives you're saving will be heavily duplicated. I'm not really sure that I care about duplicates that much. Mugger: Well I didn't say they would all be humans. Haven't you read enough Sci-Fi to know that you should care about all possible sentient life? Me: Of course. But the same sort of reasoning implies that, either there are a lot of duplicates, or else most of the people you are talking about are incomprehensibly large, since there aren't that many small Turing machines to go around. And it's not at all obvious to me that you can describe arbitrarily large minds whose existence I should care about without using up a lot of complexity. More generally, I can't see any way to describe worlds which I care about to a degree that vastly outgrows their complexity. My values are complicated. I'm not really sure that I care about duplicates that much. Bostrom would probably try to argue that you do. See Bostrom (2006). Am I crazy, or does Bostrom's argument in that paper fall flat almost immediately, based on a bad moral argument? His first, and seemingly most compelling, argument for Duplication over Unification is that, assuming an infinite universe, it's certain (with probability 1) that there is already an identical portion of the universe where you're torturing the person in front of you. Given Unification, it's meaningless to distinguish between that portion and this portion, given their physical identicalness, so torturing the person is morally blameless, as you're not increasing the number of unique observers being tortured. Duplication makes the two instances of the person distinct due to their differing spatial locations, even if every other physical and mental aspect is identical, so torturing is still adding to the suffering in the universe. However, you can flip this over trivially and come to a terrible conclusion. If Duplication is true, you merely have to simulate a person until they experience a moment of pure hedonic bliss, in some ethically correct manner that everyone agrees is morally good to experience and enjoy. Then, copy the fragment of the simulation covering the experiencing of that emotion, and duplicate it endlessly. Each duplicate is distinct, and so you're increasing the amount of joy in the universe every time you make a copy. It would be a net win, in fact, if you killed every human and replaced the earth with a computer doing nothing but running copies of that one person experiencing a moment of bliss. Unification takes care of this, by noting that duplicating someone adds, at most, a single bit of information to the universe, so spamming the universe with copies of the happy moment counts either the same as the single experience, or at most a trivial amount more. Am I thinking wrong here? However, you can flip this over trivially and come to a terrible conclusion. If Duplication is true, you merely have to simulate a person until they experience a moment of pure hedonic bliss, in some ethically correct manner that everyone agrees is morally good to experience and enjoy. Then, copy the fragment of the simulation covering the experiencing of that emotion, and duplicate it True just if your summum bonum is exactly an aggregate of moments of happiness experienced. I take the position that it is not. I don't think one even has to resort to a position like "only one copy counts". True, but that's then striking more at the heart of Bostrom's argument, rather than my counter-argument, which was just flipping Bostrom around. (Unless your summum malum is significantly different, such that duplicate tortures and duplicate good-things-equivalent-to-torture-in-emotional-effect still sum differently?) His first, and seemingly most compelling, argument for Duplication over Unification is that, assuming an infinite universe, it's certain (with probability 1) that there is already an identical portion of the universe where you're torturing the person in front of you. Given Unification, it's meaningless to distinguish between that portion and this portion, given their physical identicalness, so torturing the person is morally blameless, as you're not increasing the number of unique observers being tortured. I'd argue that the torture portion is not identical to the not-torture portion and that the difference is caused by at least one event in the common prior history of both portions of the universe where they diverged. Unification only makes counterfactual worlds real; it does not cause every agent to experience every counterfactual world. Agents are differentiated by the choices they make and agents who perform torture are not the same agents as those who abstain from torture. The difference can be made arbitrarily small, for instance by choosing an agent with a 50% probability of committing torture based on the outcome of a quantum coin flip, but the moral question in that case is why an agent would choose to become 50% likely to commit torture in the first place. Some counterfactual agents will choose to become 50% likely to commit torture, but they will be very different than the agents who are 1% likely to commit torture. I think you're interpreting Bostrom slightly wrong. You seem to be reading his argument (or perhaps just my short distillation of it) as arguing that you're not currently torturing someone, but there's an identical section of the universe elsewhere where you are torturing someone, so you might as well start torturing now. As you note, that's contradictory - if you're not currently torturing, then your section of the universe must not be identical to the section where the you-copy is torturing. Instead, assume that you are currently torturing someone. Bostrom's argument is that you're not making the universe worse, because there's a you-copy which is torturing an identical person elsewhere in the universe. At most one of your copies is capable of taking blame for this; the rest are just running the same calculations "a second time", so to say. (Or at least, that's what he's arguing that Unification would say, and using this as a reason to reject it and turn to Duplication, so each copy is morally culpable for causing new suffering.) I think it not unlikely that if we have a successful intelligence explosion and subsequently discover a way to build something 4^^^^4-sized, then we will figure out a way to grow into it, one step at a time. This 4^^^^4-sized supertranshuman mind then should be able to discriminate "interesting" from "boring" 3^^^3-sized things. If you could convince the 4^^^^4-sized thing to write down a list of all nonboring 3^^^3-sized things in its spare time, then you would have a formal way to say what an "interesting 3^^^3-sized thing" is, with description length (the description length of humanity = the description length of our actual universe) + (the additional description length to give humanity access to a 4^^^^4-sized computer -- which isn't much because access to a universal Turing machine would do the job and more). Thus, I don't think that it needs a 3^^^3-sized description length to pick out interesting 3^^^3-sized minds. Me: Actually no. I'm just not sure I care as much about your 3↑↑↑3 simulated people as much as you think I do. Mugger: So then, you think the probability that you should care as much about my 3↑↑↑3 simulated people as I thought you did is on the order of 1/3↑↑↑3? After thinking about it a bit more I decided that I actually do care about simulated people almost exactly as the mugger thought I did. I'm not really sure that I care about duplicates that much. Didn't you feel sad when Yoona-939 was terminated, or wish all happiness for Sonmi-451? All the other Yoona-939s were fine, right? And that Yoona-939 was terminated quickly enough to prevent divergence, wasn't she? (my point is, you're making it seem like you're breaking the degeneracy by labeling them. But their being identical is deep) But now she's... you know... now she's... (wipes away tears) slightly less real. You hit pretty strong diminishing returns on existence once you've hit the 'at least one copy' point. Related: Would an AI conclude it's likely to be a Boltzmann brain? ;) Or even if the AI experienced an intelligence explosion the danger is that it would not believe it had really become so important because the prior odds of you being the most important thing that will probably ever exist is so low. Edit: The AI could note that it uses a lot more computing power than any other sentient and so give itself an anothropic weight much greater than 1. With respect to this being a "danger," don't Boltzmann brains have a decision-theoretic weight of zero? Why zero? If you came to believe there was a 99.99999% chance you are currently dreaming wouldn't it effect your choices? Everyone's a Boltzmann brain to some degree. Just thought of something: How sure are we that P(there are N people) is not at least as small as 1/N for sufficiently large N, even without a leverage penalty? The OP seems to be arguing that the complexity penalty on the prior is insufficient to generate this low probability, since it doesn't take much additional complexity to generate scenarios with arbitrarily more people. Yet it seems to me that after some sufficiently large number, P(there are N people) must drop faster than 1/N. This is because our prior must be normalized. That is: Sum(all non-negative integers N) of P(there are N people) = 1. If there was some integer M such that for all n > M, P(there are n people) >= 1/n, the above sum would not converge. If we are to have a normalized prior, there must be a faster-than-1/N falloff to the function P(there are N people). In fact, if one demands that my priors indicate that my expected average number of people in the universe/multiverse is finite, then my priors must diminish faster than 1/N^2. (So that that the sum of N*P(there are N people) converges). TL:DR If your priors are such that the probability of there being 3^^^3 people is not smaller than 1/(3^^^3), then you don't have a normalized distribution of priors. If your priors are such that the probability of there being 3^^^3 people is not smaller than 1/((3^^^3)^2) then your expected number of people in the multiverse is divergent/infinite. Hm. Technically for EU differentials to converge we only need that the number of people we expectedly affect sums to something finite, but having a finite expected number of people existing in the multiverse would certainly accomplish that. The problem is that the Solomonoff prior picks out 3^^^3 as much more likely than most of the numbers of the same magnitude because it has much lower Kolmogorov complexity. I'm not familiar with Kolmogorov complexity, but isn't the aparent simplicity of 3^^^3 just an artifact of what notation we happen to have invented? I mean, "^^^" is not really a basic operation in arithmetic. We have a nice compact way of describing what steps are needed to get from a number we intuitively grok, 3, to 3^^^3, but I'm not sure it's safe to say that makes it simple in any significant way. For one thing, what would make 3 a simple number in the first place? I'm not familiar with Kolmogorov complexity, but In the nicest possible way, shouldn't you have stopped right there? Shouldn't the appearance of this unfamiliar and formidable-looking word have told you that I wasn't appealing to some intuitive notion of complexity, but to a particular formalisation that you would need to be familiar with to challenge? If instead of commenting you'd Googled that term, you would have found the Wikipedia article that answered this and your next question. You can as a rough estimate of the complexity of a number take the amount of lines of the shortest program that would compute the number from basic operations. More formally, substitute lines of a program with states of a Turing Machine. But what numbers are you allowed to start with on the computation? Why can't I say that, for example, 12,345,346,437,682,315,436 is one of the numbers I can do computation from (as a starting point), and thus it has extremely small complexity? You could say this -- doing so would be like describing your own language in which things involving 12,345,346,437,682,315,436 can be expressed concisely. So Kolmogorov complexity is somewhat language-dependent. However, given two languages in which you can describe numbers, you can compute a constant such that the complexity of any number is off by at most that constant between the two languages. (The constant is more or less the complexity of describing one language in the other). So things aren't actually too bad. But if we're just talking about Turing machines, we presumably express numbers in binary, in which case writing "3" can be done very easily, and all you need to do to specify 3^^^3 is to make a Turing machine computing ^^^. However, given two languages in which you can describe numbers, you can compute a constant such that the complexity of any number is off by at most that constant between the two languages. But can't this constant itself be arbitrarily large when talking about arbitrary numbers? (Of course, for any specific number, it is limited in size.) Well... Given any number N, you can in principle invent a programming language where the program do_it outputs N. The constant depends on the two languages, but not on the number. As army1987 points out, if you pick the number first, and then make up languages, then the difference can be arbitrarily large. (You could go in the other direction as well: if your language specifies that no number less than 3^^^3 can be entered as a constant, then it would probably take approximately log(3^^^3) bits to specify even small numbers like 1 or 2.) But if you pick the languages first, then you can compute a constant based on the languages, such that for all numbers, the optimal description lengths in the two languages differ by at most a The context this in which this comes up here generally requires something like "there's a way to compare the complexity of numbers which always produces the same results independent of language, except in a finite set of cases. Since that set is finite and my argument doesn't depend on any specific number, I can always base my argument on a case that's not in that set." If that's how you're using it, then you don't get to pick the languages first. You do get to pick the languages first because there is a large but finite (say no more than 10^6) set of reasonable languages-modulo-trivial-details that could form the basis for such a measurement. Two quick thoughts: • Any two theories can be made compatible if allowing for some additional correction factor (e.g. a "leverage penalty") designed to make them compatible. As such, all the work rests with "is the leverage penalty justified?" • For said justification, there has to some sort of justifiable territory-level reasoning, including "does it carve reality at its joints?" and such, "is this the world we live in?". The problem I see with the leverage penalty is that there is no Bayesian updating way that will get you to such a low prior. It's the mirror from "can never process enough bits to get away from such a low prior", namely "can never process enough bits to get to assigning such low priors" (the blade cuts both ways). The reason for that is in part that your entire level of confidence you have in the governing laws of physics, and the causal structure and dependency graphs and such is predicated on the sensory bitstream of your previous life - no more, it's a strictly upper bound. You can gain confidence that a prior to affect a googleplex people is that low only by using that lifetime bitstream you have accumulated - but then the trap shuts, just as you can't get out of such a low prior, you cannot use any confidence you gained in the current system by ways of your lifetime sensory input to get to such a low prior. You can be very sure you can't affect that many, based on your understanding of how causal nodes are interconnected, but you can't be that sure (since you base your understanding on a comparatively much smaller number of bits of evidence): It's a prior ex machina, with little more justification than just saying "I don't deal with numbers that large/small in my decision making". Is it just me, or is everyone here overly concerned with coming up with patches for this specific case and not the more general problem? If utilities can grow vastly larger than the prior probability of the situation that contains them, then an expected utility system will become almost useless. Acting on situations with probabilities as tiny as can possibly be represented in that system, since the math would vastly outweigh the expected utility from acting on anything else. I've heard people come up with apparent resolutions to this problem. Like counter balancing every possible situation with an equally low probability situation that has vast negative utility. There are a lot of problems with this though. What if the utilities don't exactly counterbalance? An extra bit to represent a negative utility for example, might add to the complexity and therefore the prior probability. Or even a tiny amount of evidence for one scenario over the other would completely upset it. And even if that isn't the case, your utility might not have negative. Maybe you only value the number of paperclips in the universe. The worst that can happen is you end up in a universe with no paperclips. You can't have negative paperclips, so the lowest utility you can have is 0. Or maybe your positive and negative values don't exactly match up. Fear is a better motivator than reward, for example. The fear of having people suffer may have more negative utility than the opposite scenario of just as many people living happy lives or something (and since they are both different scenarios with more differences than a single number, they would have different prior probabilities to begin with.) Resolutions that involve tweaking the probability of different events is just cheating since the probability shouldn't change if the universe hasn't. It's how you act on those probabilities that we should be concerned about. And changing the utility function is pretty much cheating too. You can make all sorts of arbitrary tweaks that would solve the problem, like having a maximum utility or something. But if you really found out you lived in a universe where 3^^^3 lives existed (perhaps aliens have been breeding extensively, or we really do live in a simulation, etc), are you just supposed to stop caring about all life since it exceeds your maximum amount of caring? I apologize if I'm only reiterating arguments that have already been gone over. But it's concerning to me that people are focusing on extremely sketchy patches to a specific case of this problem, and not the more general problem, that any expected utility function becomes apparently worthless in a probabilistic universe like ours. EDIT: I think I might have a solution to the problem and posted it here. What if the utilities don't exactly counterbalance? The idea is that it'd be great to have a formalism where they do by construction. Also, when there's no third party, it's not distinct enough from Pascal's Wager as to demand extra terminology that focusses on the third party, such as "Pascal's Mugging". If it is just agent doing contemplations by itself, that's the agent making a wager on it's hypotheses, not getting mugged by someone. I'll just go ahead and use "Pascal Scam" to describe a situation where an in-distinguished agent promises unusually huge pay off, and the mark erroneously gives in due to some combination of bad priors and bad utility evaluation. The common errors seem to be 1: omit the consequence of keeping the money for a more distinguished agent, 2: assign too high prior, 3: and, when picking between approaches, ignore the huge cost of acting in a manner which encourages disinformation. All those errors act in favour of the scammer (and some are optional), while non-erroneous processing would assign huge negative utility to paying up even given high priors. The idea is that it'd be great to have a formalism where they do by construction. There is no real way of doing that without changing your probability function or your utility function. However you can't change those. The real problem is with the expected utility function and I don't see any way of fixing it, though perhaps I missed something. Also, when there's no third party, it's not distinct enough from Pascal's Wager as to demand extra terminology that focusses on the third party, such as "Pascal's Mugging". If it is just agent doing contemplations by itself, that's the agent making a wager on it's hypotheses, not getting mugged by someone. Any agent subject to Pascal's Mugging would fall pray to this problem first, and it would be far worse. While the mugger is giving his scenario, the agent could imagine an even more unlikely scenario. Say one where the mugger actually gives him 3^^^^^^3 units of utility if he does some arbitrary task, instead of 3^^^3. This possibility immediately gets so much utility that it far outweighs anything the mugger has to say after that. Then the agent may imagine an even more unlikely scenario where it gets 3^^^^^^^^^^3 units of utility, and so on. I don't really know what an agent would do if the expected utility of any action approached infinity. Perhaps it would generally work out as some things would approach infinity faster than others. I admit I didn't consider that. But I don't know if that would necessarily be the case. Even if it is it seems "wrong" for expected utilities of everything to be infinite and only tiny probabilities to matter for anything. And if so then it would work out for the pascal's mugging scenario too I think. There is no real way of doing that without changing your probability function or your utility function. However you can't change those. Last time I checked, priors were fairly subjective even here. We don't know what is the best way to assign priors. Things like "Solomonoff induction" depend to arbitrary choice of machine. Any agent subject to Pascal's Mugging would fall pray to this problem first, and it would be far worse. Nope, people who end up 419-scammed or waste a lot of money investing into someone like Randel L Mills or Andrea Rossi live through their life ok until they read a harmful string in a harmful set of circumstances (bunch of other believers around for example). Last time I checked, priors were fairly subjective even here. We don't know what is the best way to assign priors. Things like "Solomonoff induction" depend to arbitrary choice of machine. Priors are indeed up for grabs, but a set of priors about the universe ought be consistent with itself, no? A set of priors based only on complexity may indeed not be the best set of priors -- that's what all the discussions about "leverage penalties" and the like are about, enhancing Solomonoff induction with something extra. But what you seem to suggest is a set of priors about the universe that are designed for the express purposes of making human utility calculations balance out? Wouldn't such a set of priors require the anthroporphization of the universe, and effectively mean sacrificing all sense of epistemic rationality? The best "priors" about the universe are 1 for what that universe right around you is, and 0 for everything else. Other priors are a compromise, an engineering decision. What I am thinking is that • there is a considerably better way to assign priors which we do not know of yet - the way which will assign equal probabilities to each side of a die if it has no reason to prefer one over the other - the way that does correspond to symmetries in the evidence. • We don't know that there will still be same problem when we have a non-stupid way to assign priors (especially as the non-stupid way ought to be considerably more symmetric). And it may be that some value systems are intrinsically incoherent. Suppose you wanted to maximize blerg without knowing what blerg even really is. That wouldn't be possible, you can't maximize something without having a measure of it. But I still can tell you i'd give you 3^^^^3 blergs for a dollar, without either of us knowing what blerg is supposed to be or whenever 3^^^^3 blergs even make sense (if blerg is an unique good book of up to 1000 page length, it doesn't because duplicates aren't blerg). Last time I checked, priors were fairly subjective even here. We don't know what is the best way to assign priors. Things like "Solomonoff induction" depend to arbitrary choice of machine. True, but the goal of a probability function is to represent the actual probability of an event happening as closely as possible. The map should correspond to the territory. If your map is good, you shouldn't change it unless you observe actual changes in the territory. Nope, people who end up 419-scammed or waste a lot of money investing into someone like Randel L Mills or Andrea Rossi live through their life ok until they read a harmful string in a harmful set of circumstances (bunch of other believers around for example). I don't know if those things have such extremes in low probability vs high utility to be called pascal's mugging. But even so, the human brain doesn't operate on anything like Solomonoff induction, Bayesian probability theory, or expected utility maximization. The actual probability is either 0 or 1 (either happens or doesn't happen). Values in-between quantify ignorance and partial knowledge (e.g. when you have no reason to prefer one side of the die to the other), or, at times, are chosen very arbitrarily (what is the probability that a physics theory is "correct"). I don't know if those things have such extremes in low probability vs high utility to be called pascal's mugging. New names for same things are kind of annoying, to be honest, especially ill chosen... if it happens by your own contemplation, I'd call it Pascal's Wager. Mugging implies someone making threats, scam is more general and can involve promises of reward. Either way the key is the high payoff proposition wrecking some havoc, either through it's prior probability being too high, other propositions having been omitted, or the like. But even so, the human brain doesn't operate on anything like Solomonoff induction, Bayesian probability theory, or expected utility maximization. People are still agents, though. The actual probability is either 0 or 1 (either happens or doesn't happen). Yes but the goal is to assign whatever outcome that will actually happen with the highest probability as possible, using whatever information we have. The fact that some outcomes result in ridiculously huge utility gains does not imply anything about how likely they are to happen, so there is no reason that should be taken into account (unless it actually does, in which case it New names for same things are kind of annoying, to be honest, especially ill chosen... if it happens by your own contemplation, I'd call it Pascal's Wager. Mugging implies someone making threats, scam is more general and can involve promises of reward. Either way the key is the high payoff proposition wrecking some havoc, either through it's prior probability being too high, other propositions having been omitted, or the like. Pascal's mugging was an absurd scenario with absurd rewards that approach infinity. What you are talking about is just normal everyday scams. Most scams do not promise such huge rewards or have such low probabilities (if you didn't know any better it is feasible that someone could have an awesome invention or need your help with transaction fees.) And the problem with scams is that people overestimate their probability. If they were to consider how many emails in the world are actually from Nigerian Princes vs scammers, or how many people promise awesome inventions without any proof they will actually work, they would reconsider. In pascal's mugging, you fall for it even after having considered the probability of it happening in Your probability estimation could be absolutely correct. Maybe 1 out of a trillion times a person meets someone claiming to be a matrix lord, they are actually telling the truth. And they still end up getting scammed, so that the 1 in a trillionth counter-factual of themselves gets infinite reward. But even so, the human brain doesn't operate on anything like Solomonoff induction, Bayesian probability theory, or expected utility maximization. People are still agents, though. They are agents, but they aren't subject to this specific problem because we don't really use expected utility maximization. At best maybe some kind of poor approximation of it. But it is a problem for building AIs or any kind of computer system that makes decisions based on probabilities. Maybe 1 out of a trillion times a person meets someone claiming to be a matrix lord, they are actually telling the truth I think you're considering a different problem than Pascal's Mugging, if you're taking it as a given that the probabilities are indeed 1 in a trillion (or for that matter 1 in 10). The original problem doesn't make such an assumption. What you have in mind, the case of definitely known probabilities, seems to me more like The LifeSpan dilemma where e.g. "an unbounded utility on lifespan implies willingness to trade an 80% probability of living some large number of years for a 1/(3^^^3) probability of living some sufficiently longer lifespan" The wiki page on it seems to suggest that this is the problem. If an agent's utilities over outcomes can potentially grow much faster than the probability of those outcomes diminishes, then it will be dominated by tiny probabilities of hugely important outcomes; speculations about low-probability-high-stakes scenarios will come to dominate his moral decision making... The agent would always have to take those kinds of actions with far-fetched results, that have low but non-negligible probabilities but extremely high returns. This is seen as an unreasonable result. Intuitively, one is not inclined to acquiesce to the mugger's demands - or even pay all that much attention one way or another - but what kind of prior does this imply? Also this Peter de Blanc has proven[1] that if an agent assigns a finite probability to all computable hypotheses and assigns unboundedly large finite utilities over certain environment inputs, then the expected utility of any outcome is undefined. which is pretty concerning. I'm curious what you think the problem with Pascal's Mugging is though. That you can't easily estimate the probability of such a situation? Well that is true of anything and isn't really unique to Pascal's Mugging. But we can still approximate probabilities. A necessary evil to live in a probabilistic world without the ability to do perfect Bayesian updates on all available information, or unbiased priors. There is no evidence for the actual existence of neatly walled-of and unupdateable utility functions or probability functions, any more than there is for a luz'. Utility and probability functions are not perfect or neatly walled off. But that doesn't mean you should change them to fix a problem with your expected utility function. The goal of a probability function is to represent the actual probability of an event happening as closely as possible. And the goal of a utility function is to represent what you states you would prefer the universe to be in. This also shouldn't change unless you've actually changed your preferences. And the goal of a utility function is to represent what you states you would prefer the universe to be in. This also shouldn't change unless you've actually changed your preferences. There's plenty of evidence of people changing their preferences over significant periods of time: it would be weird not to. And I am well aware that the theory of stable utility functions is standardly patched up with a further theory of terminal values, for which there is also no direct evidence. There's plenty of evidence of people changing their preferences over significant periods of time: it would be weird not to. Of course people can change their preferences. But if your preferences are not consistent you will likely end up in situations that are less preferable than if you had the same preferences the entire time. It also makes you a potential money pump. And I am well aware that the theory of stable utility functions is standardly patched up with a further theory of terminal values, for which there is also no direct evidence. What? Terminal values are not a patch for utility functions. It's basically another word that means the same thing, what state you would prefer the world to end up in. And how can there be evidence for a decision theory? Terminal values are not a patch for utility functions. Well, I've certainly seen discussions here in which the observed inconsistency among our professed values is treated as a non-problem on the grounds that those are mere instrumental values, and our terminal values are presumed to be more consistent than that. Insofar as stable utility functions depend on consistent values, it's not unreasonable to describe such discussions as positing consistent terminal values in order to support a belief in stable utility functions. Nick Beckstead's finished but as-yet unpublished dissertation has much to say on this topic. Here is Beckstead's summary of chapters 6 and 7 of his dissertation: [My argument for the overwhelming importance of shaping the far future] asks us to be happy with having a very small probability of averting an existential catastrophe [or bringing about some other large, positive "trajectory change"], on the grounds that the expected value of doing so is extremely enormous, even though there are more conventional ways of doing good which have a high probability of producing very good, but much less impressive, outcomes. Essentially, we're asked to choose a long shot over a high probability of something very good. In extreme cases, this can seem irrational on the grounds that it's in the same ballpark as accepting a version of Pascal's Wager. In chapter 6, I make this worry more precise and consider the costs and benefits of trying to avoid the problem. When making decisions under risk, we make trade-offs between how good outcomes might be and how likely it is that we get good outcomes. There are three general kinds of ways to make these tradeoffs. On two of these approaches, we try to maximize expected value. On one of the two approaches, we hold that there are limits to how good (or bad) outcomes can be. On this view, no matter how bad an outcome is, it could always get substantially worse, and no matter how good an outcome is, it could always get substantially better. On the other approach, there are no such limits, at least in one of these directions. Either outcomes could get arbitrarily good, or they could get arbitrarily bad. On the third approach, we give up on ranking outcomes in terms of their expected value. The main conclusion of chapter 6 is that all of these approaches have extremely unpalatable implications. On the approach where there are upper and lower limits, we have to be timid — unwilling to accept extremely small risks in order to enormously increase potential positive payoffs. Implausibly, this requires extreme risk aversion when certain extremely good outcomes are possible, and extreme risk seeking when certain extremely bad outcomes are possible, and it requires making one's ranking of prospects dependent on how well things go in remote regions of space and time. In the second case, we have to be reckless — preferring very low probabilities of extremely good outcomes to very high probabilities of less good, but still excellent, outcomes — or rank prospects non-transitively. I then show that, if a theory is reckless, what it would be best to do, according to that theory, depends almost entirely upon what would be best in terms of considerations involving infinite value, no matter how implausible it is that we can bring about any infinitely good or bad outcomes, provided it is not certain. In this sense, there really is something deeply Pascalian about the reckless approach. Some might view this as a reductio of expected utility theory. However, I show that the only way to avoid being both reckless and timid is to rank outcomes in a circle, claiming that A is better than B, which is better than C,. . . , which is better than Z, which is better than A. Thus, if we want to avoid these two other problems, we have to give up not only on expected utility theory, but we also have to give up on some very basic assumptions about how we should rank alternatives. This makes it much less clear that we can simply treat these problems as a failure of expected utility theory. What does that have to do with the rough future-shaping argument? The problem is that my formalization of the rough future-shaping argument commits us to being reckless. Why? By Period Independence [the assumption that "By and large, how well history goes as a whole is a function of how well things go during each period of history"], additional good periods of history are always good, how good it is to have additional periods does not depend on how many you've already had, and there is no upper limit (in principle) to how many good periods of history there could be. Therefore, there is no upper limit to how good outcomes can be. And that leaves us with recklessness, and all the attendant theoretical difficulties. At this point, we are left with a challenging situation. On one hand, my formalization of the rough future-shaping argument seemed plausible. However, we have an argument that if its assumptions are true, then what it is best to do depends almost entirely on infinite considerations. That's a very implausible conclusion. At the same time, the conclusion does not appear to be easy to avoid, since the alternatives are the so-called timid approach and ranking alternatives non-transitively. In chapter 7, I discuss how important it would be to shape the far future given these three different possibilities (recklessness, timidity, and non-transitive rankings of alternatives). As we have already said, in the case of recklessness, the best decision will be the decision that is best in terms of infinite considerations. In the first part of the chapter, I highlight some difficulties for saying what would be best with respect to infinite considerations, and explain how what is best with respect to infinite considerations may depend on whether our universe is infinitely large, and whether it makes sense to say that one of two infinitely good outcomes is better than the other. In the second part of the chapter, I examine how a timid approach to assessing the value of prospects bears on the value of shaping the far future. The answer to this question depends on many complicated issues, such as whether we want to accept something similar to Period Independence in general even if Period Independence must fail in extreme cases, whether the universe is infinitely large, whether we should include events far outside of our causal control when aggregating value across space and time, and what the upper limit for the value of outcomes is. In the third part of the chapter, I consider the possibility of using the reckless approach in contexts where it seems plausible and using the timid approach in the contexts where it seems plausible. This approach, I argue, is more plausible in practice than the alternatives. I do not argue that this mixed strategy is ultimately correct, but instead argue that it is the best available option in light of our cognitive limitations in effectively formalizing and improving our processes for thinking about infinite ethics and long shots. If an AI's overall architecture is such as to enable it to carry out the "You turned into a cat" effect - where if the AI actually ends up with strong evidence for a scenario it assigned super-exponential improbability, the AI reconsiders its priors and the apparent strength of evidence rather than executing a blind Bayesian update, though this part is formally a tad underspecified - then at the moment I can't think of anything else to add in. Ex ante, when the AI assigns infinitesimal probability to the real thing, and meaningful probability to "hallucination/my sensors are being fed false information," why doesn't it self-modify/ self-bind to treat future apparent cat transformations as hallucinations? "Now, in this scenario we've just imagined, you were taking my case seriously, right? But the evidence there couldn't have had a likelihood ratio of more than 10^10^26 to 1, and probably much less. So by the method of imaginary updates, you must assign probability at least 10^-10^26 to my scenario, which when multiplied by a benefit on the order of 3↑↑↑3, yields an unimaginable bonanza in exchange for just five dollars -" Me: "Nope." I don't buy this. Consider the following combination of features of the world and account of anthropic reasoning (brought up by various commenters in previous discussions), which is at least very improbable in light of its specific features and what we know about physics and cosmology, but not cosmically so. • A world small enough not to contain ludicrous numbers of Boltzmann brains (or Boltzmann machinery) • Where it is possible to create hypercomputers through complex artificial means • Where hypercomputers are used to compute arbitrarily many happy life-years of animals, or humanlike beings with epistemic environments clearly distinct from our own (YOU ARE IN A HYPERCOMPUTER SIMULATION tags floating in front of their eyes) • And the hypercomputed beings are not less real or valuable because of their numbers and long addresses Treating this as infinitesimally likely, and then jumping to measurable probability on receipt of (what?) evidence about hypercomputers being possible, etc, seems pretty unreasonable to me. The behavior you want could be approximated with a bounded utility function that assigned some weight to achieving big payoffs/achieving a significant portion (on one of several scales) of possible big payoffs/etc. In the absence of evidence that the big payoffs are possible, the bounded utility gain is multiplied by low probability and you won't make big sacrifices for it, but in the face of lots of evidence, and if you have satisfied other terms in your utility function pretty well, big payoffs could become a larger focus. Basically, I think such a bounded utility function could better track the emotional responses driving your intuitions about what an AI should do in various situations than jury-rigging the prior. And if you don't want to track those responses then be careful of those intuitions and look to empirical stabilizing assumptions. Treating this as infinitesimally likely, and then jumping to measurable probability on receipt of (what?) evidence about hypercomputers being possible, etc, seems pretty unreasonable to me. It seems reasonable to me because on the stated assumptions - the floating tags seen by vast numbers of other beings but not yourself - you've managed to generate sensory data with a vast likelihood ratio. The vast update is as reasonable as this vast ratio, no more, no less. The problem is that you seem to be introducing one dubious piece to deal with another. Why is the hypothesis that those bullet points hold infinitesimally unlikely rather than very unlikely in the first place? A simplified version of the argument here: • Therefore, we need unbounded utility. • Oops! If we allow unbounded utility, we can get non-convergence in our expectation. • Since we've already established that the utility function is not up for grabs, let's try and modify the probability to fix this! My response to this is that the probability distribution is even less up for grabs. The utility, at least, is explicitly there to reflect our preferences. If we see that a utility function is causing our agent to take the wrong actions, then it makes sense to change it to better reflect the actions we wish our agent to take. The probability distribution, on the other hand, is a map that should reflect the territory as well as possible! It should not be modified on account of badly-behaved utility computations. This may be taken as an argument in favor of modifying the utility function; Sniffnoy makes a case for bounded utility in another comment. It could alternatively be taken as a case for modifying the decision procedure. Perhaps neither the probability nor the utility are "up for grabs", but how we use them should be modified. One (somewhat crazy) option is to take the median expectation rather than the mean expectation: we judge actions by computing the lowest utility score that we have 50% chance of making or beating, rather than by computing the average. This makes the computation insensitive to extreme (high or low) outcomes with small probabilities. Unfortunately, it also makes the computation insensitive to extreme (high or low) options with 49% probabilities: it would prefer a gamble with a 49% probability of utility -3^^^3 and 51% probability of utility +1, to a gamble with 51% probability of utility 0, and 49% probability of +3^^^3. But perhaps there are more well-motivated alternatives. If we see that a utility function is causing our agent to take the wrong actions, then it makes sense to change it to better reflect the actions we wish our agent to take. If the agent defines its utility indirectly in terms of designer's preference, a disagreement in evaluation of a decision by agent's utility function and designer's preference doesn't easily indicate that designer's evaluation is more accurate, and if it's not, then the designer should defer to the agent's judgment instead of adjusting its utility. The probability distribution, on the other hand, is a map that should reflect the territory as well as possible! It should not be modified on account of badly-behaved utility computations. Similarly, if the agent is good at building its map, it might have a better map than the designer, so a disagreement is not easily resolved in favor of the designer. On the other hand, there can be a bug in agent's world modeling code in which case it should be fixed! And similarly, if there is a bug in agent's indirect utility definition, it too should be fixed. The arguments seem analogous to me, so why would preference be more easily debugged than world model? You probably shouldn't let super-exponentials into your probability assignments, but you also shouldn't let super-exponentials into the range of your utility function. I'm really not a fan of having a discontinuous bound anywhere, but I think it's important to acknowledge that when you throw a trip-up (^^^) into the mix, important assumptions start breaking down all over the place. The VNM independence assumption no longer looks convincing, or straightforward. Normally my preferences in a Tegmark-style multiverse would reflect a linear combination of my preferences for its subcomponents; but throw a 3^^^3 in the mix, and this is no longer the case, so suddenly you have to introduce new distinctions between logical uncertainty and at least one type of reality fluid. My short-term hack for Pascal's Muggle is to recognize that my consequentialism module is just throwing exceptions, and fall back on math-free pattern matching, including low-weighted deontological and virtue-ethical values that I've kept around for just such an occasion. I am very unhappy with this answer, but the long-term solution seems to require fully figuring out how I value different kinds of reality fluid. It seems to me like the whistler is saying that the probability of saving knuth people for $5 is exactly 1/knuth after updating for the Matrix Lord's claim, not before the claim, which seems surprising. Also, it's not clear that we need to make an FAI resistant to very very unlikely scenarios. I'm a lot more worried about making an FAI behave correctly if it encounters a scenario which we thought was very very unlikely. Also, if the AI spreads widely and is around for a long time, it will eventually run into very unlikely scenarios. Not 1/3^^^3 unlikely, but pretty unlikely. If the AI actually ends up with strong evidence for a scenario it assigned super-exponential improbability, the AI reconsiders its priors and the apparent strength of evidence rather than executing a blind Bayesian update, though this part is formally a tad underspecified. I would love to have a conversation about this. Is the "tad" here hyperbole or do you actually have something mostly worked out that you just don't want to post? On a first reading (and admittedly without much serious thought -- it's been a long day), it seems to me that this is where the real heavy lifting has to be done. I'm always worried that I'm missing something, but I don't see how to evaluate the proposal without knowing how the super-updates are carried out. Really interesting, though. That hyperbole one. I wasn't intending the primary focus of this post to be on the notion of a super-update - I'm not sure if that part needs to make it into AIs, though it seems to me to be partially responsible for my humanlike foibles in the Horrible LHC Inconsistency. I agree that this notion is actually very underspecified but so is almost all of bounded logical uncertainty. That hyperbole one. I agree that this notion is actually very underspecified Using "a tad" to mean "very" is understatement, not hyperbole. Using "a tad" to mean "very" is understatement, not hyperbole. One could call it hypobole. Specifically, litotes. If someone suggests to me that they have the ability to save 3^^^3 lives, and I assign this a 1/3^^^3 probability, and then they open a gap in the sky at billions to one odds, I would conclude that it is still extremely unlikely that they can save 3^^^3 lives. However, it is possible that their original statement is false and yet it would be worth giving them five dollars because they would save a billion lives. Of course, this would require further assumptions on whether people are likely to do things that they have not said they would do, but are weaker versions of things they did say they would do but are not capable of. Also, I would assign lower probabilities when they claim they could save more people, for reasons that have nothing to do with complexity. For instance, "the more powerful a being is, the less likely he would be interested in five dollars" or :"a fraudster would wish to specify a large number to increase the chance that his fraud succeeds when used on ordinary utility maximizers, so the larger the number, the greater the comparative likelihood that the person is fraudulent". the phrase "Pascal's Mugging" has been completely bastardized to refer to an emotional feeling of being mugged that some people apparently get when a high-stakes charitable proposition is presented to them, regardless of whether it's supposed to have a low probability. 1) Sometimes what you may actually be seeing is disagreement on whether the hypothesis has a low probability. 2) Some of the arguments against Pascal's Wager and Pascal's Mugging don't depend on the probability. For instance, Pascal's Wager has the "worshipping the wrong god" problem--what if there's a god who prefers that he not be worshipped and damns worshippers to Hell? Even if there's a 99% chance of a god existing, this is still a legitimate objection (unless you want to say there's a 99% chance specifically of one type of god). 3) In some cases, it may be technically true that there is no low probability involved but there may be some other small number that the size of the benefit is multiplied by. For instance, most people discount events that happen far in the future. A highly beneficial event that happens far in the future would have the benefit multiplied by a very small number when considering discounting. Of course in cases 2 and 3 that is not technically Pascal's mugging by the original definition, but I would suggest the definition should be extended to include such cases. Even if not, they should at least be called something that acknowledges the similarity, like "Pascal-like muggings". 1) It's been applied to cryonic preservation, fer crying out loud. It's reasonable to suspect that the probability of that working is low, but anyone who says with current evidence that the probability is beyond astronomically low is being too silly to take seriously. The benefit of cryonic preservation isn't astronomically high, though, so you don't need a probability that is beyond astronomically low. First of all,even an infinitely long life after being revived only has a finite present value, and possibly a very low one, because of discounting. Second, the benefit from cryonics is the benefit you'd gain from being revived after being cryonically preserved, minus the benefit that you'd gain from being revived after not cryonically preserved. (A really advanced society might be able to simulate us. If simulations count as us, simulating us counts as reviving us without the need for cryonic preservation.) I do not think that you have gotten Luke's point. He was addressing your point #1, not trying to make a substantive argument in favor of cryonics. 2) Some of the arguments against Pascal's Wager and Pascal's Mugging don't depend on the probability. For instance, Pascal's Wager has the "worshipping the wrong god" problem--what if there's a god who prefers that he not be worshipped and damns worshippers to Hell? Even if there's a 99% chance of a god existing, this is still a legitimate objection (unless you want to say there's a 99% chance specifically of one type of god). That argument is isomorphic to the one discussed in the post here: "Hmm..." she says. "I hadn't thought of that. But what if these equations are right, and yet somehow, everything I do is exactly balanced, down to the googolth decimal point or so, with respect to how it impacts the chance of modern-day Earth participating in a chain of events that leads to creating an intergalactic civilization?" "How would that work?" you say. "There's only seven billion people on today's Earth - there's probably been only a hundred billion people who ever existed total, or will exist before we go through the intelligence explosion or whatever - so even before analyzing your exact position, it seems like your leverage on future affairs couldn't reasonably be less than a one in ten trillion part of the future or so." Essentially, it's hard to argue that the probabilities you assign should be balanced so exactly, and thus (if you're an altruist) Pascal's Wager exhorts you either to devote your entire existence to proselytizing for some god, or proselytizing for atheism, depending on which type of deity seems to you to have the slightest edge in probability (maybe with some weighting for the awesomeness of their heavens and awfulness of their hells). So that's why you still need a mathematical/epistemic/decision-theoretic reason to reject Pascal's Wager and Mugger. What you have is a divergent sum whose sign will depend to the order of summation, so maybe some sort of re-normalization can be applied to make it balance itself out in absence of evidence. Actually, there is no order of summation in which the sum will converge, since the terms get arbitrary large. The theorem you are thinking of applies to conditionally convergent series, not all divergent series. Strictly speaking, you don't always need the sums to converge. To choose between two actions you merely need the sign of difference between utilities of two actions, which you can represent with divergent sum. The issue is that it is not clear how to order such sum or if it's sign is even meaningful in any way. Even if not, they should at least be called something that acknowledges the similarity, like "Pascal-like muggings". Any similarities are arguments for giving them a maximally different name to avoid confusion, not a similar one. Would the English language really be better if rubies were called diyermands? Chemistry would not be improved by providing completely different names to chlorate and perchlorate (e.g. chlorate and sneblobs). Also, I think English might be better if rubies were called diyermands. If all of the gemstones were named something that followed a scheme similar to diamonds, that might be an improvement. I disagree. Communication can be noisy, and if a bit of noise replaces a word with a word in a totally different semantic class the error can be recovered, whereas if it replaces it with a word in the similar class it can't. See the last paragraph in myl's comment to this comment. Humans have the luxury of neither perfect learning nor perfect recall. In general, I find that my ability to learn and ability to recall words are much more limiting generally speaking than noisy communication channels. I think that there are other sources of redundancy in human communication that make noise less of an issue. For example, if I'm not sure if someone said "chlorate" or "perchlorate" often the ambiguity would be obvious, such as if it is clear that they had mumbled so I wasn't quite sure what they said. In the case of the written word, Chemistry and context provide a model for things which adds as a layer of redundancy, similar to the language model described in the post you linked to. It would take me at least twice as long to memorize random/unique alternatives to hypochlorite, chlorite, chlorate, perchlorate, multiplied by all the other oxyanion series. It would take me many times as long to memorize unique names for every acetyl compound, although I obviously acknowledge that Chemistry is the best case scenario for my argument and worst case scenario for yours. In the case of philosophy, I still think there are advantages to learning and recall for similar things to be named similarly. Even in the case of "Pascal's mugging" vs. "Pascal's wager", I believe that it is easier to recall and thus easier to have cognition about in part because of the naming connection between the two, despite the fact that these are two different things. Note that I am not saying I am in favor of calling any particular thing "Pascal-like muggings," which draws an explicit similarity between the two, all I'm saying is that choosing a "maximally different name to avoid confusion" strikes me as being less ideal, and that if you called it a Jiro's mugging or something, that would more than enough semantic distance between the ideas. This is an awful lot of words to expend to notice that (1) Social interactions need to be modeled in a game-theoretic setting, not straightforward expected payoff (2) Distributions of expected values matter. (Hint: p(N) = 1/N is a really bad model as it doesn't converge). (3) Utility functions are neither linear nor symmetric. (Hint: extinction is not symmetric with doubling the population.) (4) We don't actually have an agreed-upon utility function anyway; big numbers plus a not-well-agreed-on fuzzy notion is a great way to produce counterintuitive results. The details don't really matter; as fuzzy approaches infinity, you get nonintuitiveness. It's much more valuable to address some of these imperfections in the setup of the problem than continuing to wade through the logic with bad assumptions in hand. Just gonna jot down some thoughts here. First a layout of the problem. 1. Expected utility is a product of two numbers, probability of the event times utility generated by the event. 2. Traditionally speaking, when the event is claimed to affect 3^^^3 people, the utility generated is on the order of 3^^^3 3. Traditionally speaking, there's nothing about the 3^^^3 people that requires a super-exponentially large extension to the complexity of the system (the univers/multivers/etc). So the probability of the event does not scale like 1/(3^^^3) 4. Thus Expected Payoff becomes enormous, and you should pay the dude $5. 5. If you actually follow this, you'll be mugged by random strangers offerring to save 3^^^3 people or whatever super-exponential numbers they can come up with. In order to avoid being mugged, your suggestion is to apply a scale penalty (leverage penalty) to the probability. You then notice that this has some very strange effects on your epistemology - you become incapable of ever believing the 5$ will actually help no matter how much evidence you're given, even though evidence can make the expected payoff large. You then respond to this problem with what appears to be an excuse to be illogical and/or non-bayesian at times (due to finite computing power). It seems to me that an alternative would be to rescale the untility value, instead of the probability. This way, you wouldn't run into any epistemic issues anywhere because you aren't messing with the epistemics. I'm not proposing we rescale Utility(save X people) by a factor 1/X, as that would make Utility(save X people) = Utility(save 1 person) all the time, which is obviously problematic. Rather, my idea is to make Utility a per capita quantity. That way, when the random hobo tells you he'll save 3^^^3 people, he's making a claim that requires there to be at least 3^^^3 people to save. If this does turn out to be true, keeping your Utility as a per capita quantity will require a rescaling on the order of 1/(3^^^3) to account for the now-much-larger population. This gives you a small expected payoff without requiring problematically small prior probabilities. It seems we humans may already do a rescaling of this kind anyway. We tend to value rare things more than we would if they were common, tend to protect an endangered species more than we would if it weren't endangered, and so on. But I'll be honest and say that I haven't really thought the consequences of this utility re-scaling through very much. It just seems that if you need to rescale a product of two numbers and rescaling one of the numbers causes problems, we may as well try rescaling the other and see where it leads. Any thoughts? As near as I can figure, the corresponding state of affairs to a complexity+leverage prior improbability would be a Tegmark Level IV multiverse in which each reality got an amount of magical-reality-fluid corresponding to the complexity of its program (1/2 to the power of its Kolmogorov complexity) and then this magical-reality-fluid had to be divided among all the causal elements within that universe - if you contain 3↑↑↑3 causal nodes, then each node can only get 1/3↑↑↑3 of the total realness of that universe. This reminds me a lot of Levin's universal search algorithm, and the associated Levin complexity. To formalize, I think you will want to assign each program p, of length #p, a prior weight 2^-#p (as in usual Solomonoff induction), and then divide that weight among the execution steps of the program (each execution step corresponding to some sort of causal node). So if program p executes for t steps before stopping, then each individual step gets a prior weight 2^-#p/t. The connection to universal search is as follows: Imagine dovetailing all possible programs on one big computer, giving each program p a share 2^-#'p of all the execution steps. (If the program stops, then start it again, so that the computer doesn't have idle steps). In the limit, the computer will spend a proportion 2^-#p/t of its resources executing each particular step of p, so this is an intuitive sense of the step's prior "weight". You'll then want to condition on your evidence to get a posterior distribution. Most steps of most programs won't in any sense correspond to an intelligent observer (or AI program) having your evidence, E, but some of them will. Let nE(p) be the number of steps in a program p which so-correspond (for a lot of programs nE(p) will be zero) and then program p will get posterior weight proportional to 2^-#p x (nE(p) / t). Normalize, and that gives you the posterior probability you are in a universe executed by a program p. You asked if there are any anthropic problems with this measure. I can think of a few: 1. Should "giant" observers (corresponding to lots of execution steps) count for more weight than "midget" observers (corresponding to fewer steps)? They do in this measure, which seems a bit 2. The posterior will tend to focus weight on programs which have a high proportion (nE(p) / t) of their execution steps corresponding to observers like you. If you take your observations at face value (i.e. you are not in a simulation), then this leads to the same sort of "Great Filter" issues that Katja Grace noticed with the SIA. There is a shift towards universes which have a high density of habitable planets, occupied by observers like us, but where very few or none of those observers ever expand off their home worlds to become super-advanced civilizations, since if they did they would take the executions steps away from observers like us. 3. There also seems to be a good reason in this measure NOT to take your observations at face value. The term nE(p) / t will tend to be maximized in universes very unlike ours: ones which are built of dense "computronium" running lots of different observer simulations, and you're one of them. Our own universe is very "sparse" in comparison (very few execution steps corresponding to 4. Even if you deal with simulations, there appears to be a "cyclic history" problem. The density nE(p)/t will tend to be is maximized if civilizations last for a long time (large number of observers), but go through periodic "resets", wiping out all traces of the prior cycles (so leading to lots of observers in a state like us). Maybe there is some sort of AI guardian in the universe which interrupts civilizations before they create their own (rival) AIs, but is not so unfriendly as to wipe them out altogether. So it just knocks them back to the stone age from time to time. That seems highly unlikely a priori, but it does get magnified a lot in posterior probability. On the plus side, note that there is no particular reason in this measure to expect you are in a very big universe or multiverse, so this defuses the "presumptuous philosopher" objection (as well as some technical problems if the weight is dominated by infinite universes). Large universes will tend to correspond to many copies of you (high nE(p)) but also to a large number of execution steps t. What matters is the density of observers (hence the computronium problem) rather than the total size. There's something very counterintuitive about the notion that Pascal's Muggle is perfectly rational. But I think we need to do a lot more intuition-pump research before we'll have finished picking apart where that counterintuitiveness comes from. I take it your suggestion is that Pascal's Muggle seems unreasonable because he's overly confident in his own logical consistency and ability to construct priors that accurately reflect his credence levels. But he also seems unreasonable because he doesn't take into account that the likeliest explanations for the Hole In The Sky datum either trivialize the loss from forking over $5 (e.g., 'It's All A Dream') or provide much more credible generalized reasons to fork over the $5 (e.g., 'He Really Is A Matrix Lord, So You Should Do What He Seems To Want You To Do Even If Not For The Reasons He Suggests'). Your response to the Holy In The Sky seems more safe and pragmatic because it leaves open that the decision might be made for those reasons, whereas the other two muggees were explicitly concerned only with whether the Lord's claims were generically right or generically wrong. Noting these complications doesn't help solve the underlying problem, but it does suggest that the intuitively right answer may be overdetermined, complicating the task of isolating our relevant intuitions from our irrelevant ones. I think the simpler solution is just to use a bounded utility function. There are several things suggesting we do this, and I really don't see any reason to not do so, instead of going through contortions to make unbounded utility work. Consider the paper of Peter de Blanc that you link -- it doesn't say a computable utility function won't have convergent utilities, but rather that it will iff said function is bounded. (At least, in the restricted context defined there, though it seems fairly general.) You could try to escape the conditions of the theorem, or you could just conclude that utility functions should be bounded. Let's go back and ask the question of why we're using probabilities and utilities in the first place. Is it because of Savage's Theorem? But the utility function output by Savage's Theorem is always OK, maybe we don't accept Savage's axiom 7, which is what forces utility functions to be bounded. But then we can only be sure that comparing expected utilities is the right thing to do for finite gambles, not for infinite ones, so talking about sums converging or not -- well, it's something that shouldn't even come up. Or alternatively, if we do encounter a situation with infinitely many choices, each of differing utility, we simply don't know what to do. Maybe we're not basing this on Savage's theorem at all -- maybe we simply take probability for granted (or just take for granted that it should be a real number and ground it in something like Cox's theorem -- after all, like Savage's theorem, Cox's theorem only requires that probability be finitely additive) and are then deriving utility from the VNM theorem. The VNM theorem doesn't prohibit unbounded utilities. But the VNM theorem once again only tells us how to handle finite gambles -- it doesn't tell us that infinite gambles should also be handled via expected utility. OK, well, maybe we don't care about the particular grounding -- we're just going to use probability and utility because it's the best framework we know, and we'll make the probability countably additive and use expected utility in all cases hey, why not, seems natural, right? (In that case, the AI may want to eventually reconsider whether probability and utility really is the best framework to use, if it is capable of doing so.) But even if we throw all that out, we still have the problem de Blanc raises. And, um, all the other problems that have been raised with unbounded utility. (And if we're just using probability and utility to make things nice, well, we should probably use bounded utility to make things nicer.) I really don't see any particular reason utility has to be unbounded either. Eliezer Yudkowsky seems to keep using this assumption that utility should be unbounded, or just not necessarily bounded, but I've yet to see any justification for this. I can find one discussion where, when the question of bounded utility functions came up, Eliezer responded, "[To avert a certain problem] the bound would also have to be substantially less than 3^^^^3." -- but this indicates a misunderstanding of the idea of utility, because utility functions can be arbitrarily (positively) rescaled or recentered. Individual utility "numbers" are not meaningful; only ratios of utility differences. If a utility function is bounded, you can assume the bounds are 0 and 1. Talk about the value of the bound is as meaningless as anything else using absolute utility numbers; they're not amounts of fun or something. Sure, if you're taking a total-utilitarian viewpoint, then your (decision-theoretic) utility function has to be unbounded, because you're summing a quantity over an arbitrarily large set. (I mean, I guess physical limitations impose a bound, but they're not logical limitations, so we want to be able to assign values to situations where they don't hold.) (As opposed to the individual "utility" functions that your'e summing, which is a different sort of "utility" that isn't actually well-defined at present.) But total utilitarianism -- or utilitarianism in general -- is on much shakier ground than decision-theoretic utility functions and what we can do with them or prove about them. To insist that utility be unbounded based on total utilitarianism (or any form of utilitarianism) while ignoring the solid things we can say seems backwards. Not everything has to scale linearly, after all. There seems to be this idea out there that utility must be unbounded because there are constants C_1 and C_2 such that adding to the world of person of "utility" (in the utilitarian sense) C_1 must increase your utility (in the decision-theoretic sense) by C_2, but this doesn't need to be so. This to me seems a lot like insisting "Well, no matter how fast I'm going, I can always toss a baseball forward in my direction at 1 foot per second relative to me; so it will be going 1 foot per second faster than me, so the set of possible speeds is unbounded." As it turns out, the set of possible speeds is bounded, velocities don't add linearly, and if you toss a baseball forward in your direction at 1 foot per second relative to you, it will not be going 1 foot per second faster. My own intuition is more in line with earthwormchuck163's comment -- I doubt I would be that joyous about making that many more people when so many are going to be duplicates or near-duplicates of one another. But even if you don't agree with this, things don't have to add linearly, and utilities don't have to be unbounded. I can find one discussion where, when the question of bounded utility functions came up, Eliezer responded, "[To avert a certain problem] the bound would also have to be substantially less than 3 ^^^^3." -- but this indicates a misunderstanding of the idea of utility, because utility functions can be arbitrarily (positively) rescaled or recentered. Individual utility "numbers" are not meaningful; only ratios of utility differences. I think he was assuming a natural scale. After all, you can just pick some everyday-sized utility difference to use as your unit, and measure everytihng on that scale. It wouldn't really matter what utility difference you pick as long as it is a natural size, since multiplying by 3^^^3 is easily enough for the argument to go through. I get the sense you're starting from the position that rejecting the Mugging is correct, and then looking for reasons to support that predetermined conclusion. Doesn't this attitude seem dangerous? I mean, in the hypothetical world where accepting the Mugging is actually the right thing to do, wouldn't this sort of analysis reject it anyway? (This is a feature of debates about Pascal's Mugging in general, not just this post in particular.) That's just how it is when you reason about reason; Neurath's boat must be repaired while on the open sea. In this case, our instincts strongly suggest that what the decision theory seems to say we should do must be wrong, and we have to turn to the rest of our abilities and beliefs to adjudicate between them. Well, besides that thing about wanting expected utilities to converge, from a rationalist-virtue perspective it seems relatively less dangerous to start from a position of someone rejecting something with no priors or evidence in favor of it, and relatively more dangerous to start from a position of rejecting something that has strong priors or evidence. Has the following reply to Pascal's Mugging been discussed on LessWrong? 1. Almost any ordinary good thing you could do has some positive expected downstream effects. 2. These positive expected downstream effects include lots of things like, "Humanity has slightly higher probability of doing awesome thing X in the far future." Possible values of X include: create 3^^^^3 great lives or create infinite value through some presently unknown method, and stuff like, in a scenario where the future would have been really awesome, it's one part in 10^30 better. 3. Given all the possible values of X whose probability is raised by doing ordinary good things, the expected value of doing any ordinary good thing is higher than the expected value of paying the 4. Therefore, almost any ordinary good thing you could do is better than paying the mugger. [I take it this is the conclusion we want.] The most obvious complaint I can think of for this response is that it doesn't solve selfish versions of Pascal's Mugging very well, and may need to be combined with other tools in that case. But I don't remember people talking about this and I don't currently see what's wrong with this as a response to the altruistic version of Pascal's Mugging. (I don't mean to suggest I would be very surprised if someone quickly and convincingly shoots this down.) The obvious problem with this is that your utility is not defined if you are willing to accept muggings, so you can't use the framework of expected utility maximization at all. The point of the mugger is just to illustrate this, I don't think anyone thinks you should actually pay them (after all, you might encounter a more generous mugger tomorrow, or any number of more realistic opportunities to do astronomical amounts of good...) Part of the issue is that I am coming at this problem from a different perspective than maybe you or Eliezer is. I believe that paying the mugger is basically worthless in the sense that doing almost any old good thing is better than paying the mugger. I would like to have a satisfying explanation of this. In contrast, Eliezer is interested in reconciling a view about complexity priors with a view about utility functions, and the mugger is an illustration of the conflict. I do not have a proposed reconciliation of complexity priors and unbounded utility functions. Instead, the above comment is a recommended as an explanation of why paying the mugger is basically worthless in comparison with ordinary things you could do. So this hypothesis would say that if you set up your priors and your utility function in a reasonable way, the expected utility of downstream effects of ordinary good actions would greatly exceed the expected utility of paying the mugger. Even if you decided that the expected utility framework somehow breaks down in cases like this, I think various related claims would still be plausible. E.g., rather than saying that doing ordinary good things has higher expected utility, it would be plausible that doing ordinary good things is "better relative to your uncertainty" than paying the mugger. On a different note, another thing I find unsatisfying about the downstream effects reply is that it doesn't seem to match up with why ordinary people think it is dumb to pay the mugger. The ultimate reason I think it is dumb to pay the mugger is strongly related to why ordinary people think it is dumb to pay the mugger, and I would like to be able to thoroughly understand the most plausible common-sense explanation of why paying the mugger is dumb. The proposed relationship between ordinary actions and their distant effects seems too far off from why common sense would say that paying the mugger is dumb. I guess this is ultimately pretty close to one of Nick Bostrom's complaints about empirical stabilizing assumptions. I believe that paying the mugger is basically worthless in the sense that doing almost any old good thing is better than paying the mugger. I think we are all in agreement with this (modulo the fact that all of the expected values end up being infinite and so we can't compare in the normal way; if you e.g. proposed a cap of 3^^^^^^^3 on utilities, then you certainly wouldn't pay the mugger). On a different note, another thing I find unsatisfying about the downstream effects reply is that it doesn't seem to match up with why ordinary people think it is dumb to pay the mugger. It seems very likely to me that ordinary people are best modeled as having bounded utility functions, which would explain the puzzle. So it seems like there are two issues: 1. You would never pay the mugger in any case, because other actions are better. 2. If you object to the fact that the only thing you care about is a very small probability of an incredibly good outcome, then that's basically the definition of having a bounded utility function. And then there is the third issue Eliezer is dealing with, where he wants to be able to have an unbounded utility function even if that doesn't describe anyone's preferences (since it seems like boundedness is an unfortunate restriction to randomly impose on your preferences for technical reasons), and formally it's not clear how to do that. At the end of the post he seems to suggest giving up on that though. Obviously to really put the idea of people having bounded utility functions to the test, you have to forget about it solving problems of small probabilities and incredibly good outcomes and focus on the most unintuitive consequences of it. For one, having a bounded utility function means caring arbitrarily little about differences between the goodness of different sufficiently good outcomes. And all the outcomes could be certain too. You could come up with all kinds of thought experiments involving purchasing huge numbers of years happy life or some other good for a few cents. You know all of this so I wonder why you don't talk about it. Also I believe that Eliezer thinks that an unbounded utility function describes at least his preferences. I remember he made a comment about caring about new happy years of life no matter how many he'd already been granted. (I haven't read most of the discussion in this thread or might just be missing something so this might be irrelevant.) As far as I know the strongest version of this argument is Benja's, here (which incidentally seems to deserve many more upvotes than it got). Benja's scenario isn't a problem for normal people though, who are not reflectively consistent and whose preferences manifestly change over time. Beyond that, it seems like people's preferences regarding the lifespan dilemma are somewhat confusing and probably inconsistent, much like their preferences regarding the repugnant conclusion. But that seems mostly orthogonal to pascal's mugging, and the basic point---having unbounded utility by definition means you are willing to accept negligible chances of sufficiently good outcomes against probability nearly 1 of any fixed bad outcome, so if you object to the latter you are just objecting to unbounded utility. I agree I was being uncharitable towards Eliezer. But it is true that at the end of this post he was suggesting giving up on unbounded utility, and that everyone in this crowd seems to ultimately take that route. I think we are all in agreement with this (modulo the fact that all of the expected values end up being infinite and so we can't compare in the normal way; if you e.g. proposed a cap of 3^^^^^^^3 on utilities, then you certainly wouldn't pay the mugger). Sorry, I didn't mean to suggest otherwise. The "different perspective" part was supposed to be about the "in contrast" part. It seems very likely to me that ordinary people are best modeled as having bounded utility functions, which would explain the puzzle. I agree with yli that this has other unfortunate consequences. And, like Holden, I find it unfortunate to have to say that saving N lives with probability 1/N is worse than saving 1 life with probability 1. I also recognize that the things I would like to say about this collection of cases are inconsistent with each other. It's a puzzle. I have written about this puzzle at reasonable length in my dissertation. I tend to think that bounded utility functions are the best consistent solution I know of, but that continuing to operate with inconsistent preferences (in a tasteful way) may be better in practice. It's in Nick Bostrom's Infinite Ethics paper, which has been discussed repeatedly here, and has been floating around in various versions since 2003. He uses the term "empirical stabilizing I bring this up routinely in such discussions because of the misleading intuitions you elicit by using an example like a mugging that sets off many "no-go heuristics" that track chances of payoffs, large or small. But just because ordinary things may have a higher chance of producing huge payoffs than paying off a Pascal's Mugger (who doesn't do demonstrations), doesn't mean your activities will be completely unchanged by taking huge payoffs into account. Maybe the answer to this reply is that if there is a downstream multiplier for ordinary good accomplished, there is also a downstream multiplier for good accomplished by the mugger in the scenario where he is telling the truth. And multiplying each by a constant doesn't change the bottom line. There is likely a broader-scoped discussion on this topic that I haven't read, so please point me to such a thread if my comment is addressed -- but it seems to me that there is a simpler resolution to this issue (as well as an obvious limitation to this way of thinking), namely that there's an almost immediate stage (in the context of highly-abstract hypotheticals) where probability assessment breaks down completely. For example, there are an uncountably-infinite number of different parent universes we could have. There are even an uncountably-infinite number of possible laws of physics that could govern our universe. And it's literally impossible to have all these scenarios "possible" in the sense of a well-defined measure, simply because if you want an uncountable sum of real numbers to add up to 1, only countably many terms can be nonzero. This is highly related to the axiomatic problem of cause and effect, a famous example being the question "why is there something rather than nothing" -- you have to have an axiomatic foundation before you can make calculations, but the sheer act of adopting that foundation excludes a lot of very interesting material. In this case, if you want to make probabilistic expectations, you need a solid axiomatic framework to stipulate how calculations are made. Just like with the laws of physics, this framework should agree with empirically-derived probabilities, but just like physics there will be seemingly-well-formulated questions that the current laws cannot address. In cases like hobos who make claims to special powers, the framework may be ill-equipped to make a definitive prediction. More generally, it will have a scope that is limited of mathematical necessity, and many hypotheses about spirituality, religion, and other universes, where we would want to assign positive but marginal probabilities, will likely be completely outside its light cone. This is probably obvious, but if this problem persisted, a Pascal-Mugging-vulnerable AI would immediately get mugged even without external offers or influence. The possibility alone, however remote, of a certain sequence of characters unlocking a hypothetical control console which could potentially access an above Turing computing model which could influence (insert sufficiently high number) amounts of matter/energy, would suffice. If an AI had to decide "until what length do I utter strange tentative passcodes in the hope of unlocking some higher level of physics", it would get mugged by the shadow of a matrix lord every time. It sounds like what you're describing is something that Iain Banks calls an "Out of Context Problem" - it doesn't seem like a 'leverage penalty' is the proper way to conceptualize what you're applying, as much as a 'privilege penalty'. In other words, when the sky suddenly opens up and blue fire pours out, the entire context for your previous set of priors needs to be re-evaluated - and the very question of "should I give this man $5" exists on a foundation of those now-devaluated priors. Is there a formalized tree or mesh model for Bayesian probabilities? Because I think that might be fruitful. This system does seem to lead to the odd effect that you would probably be more willing to pay Pascal's Mugger to save 10^10^100 people than you would be willing to pay to save 10^10^101 people, since the leverage penalties make them about equal, but the latter has a higher complexity cost. In fact the leverage penalty effectively means that you cannot distinguish between events providing more utility than you can provide an appropriate amount of evidence to match. It's not that odd. If someone asked to borrow ten dollars, and said he'd pay you back tomorrow, would you believe him? What if he said he'd pay back $20? $100? $1000000? All the money in the world? At some point, the probability goes down faster than the price goes up. That's why you can't just get a loan and keep raising the interest to make up for the fact that you probably won't ever pay it One scheme with the properties you want is Wei Dai's UDASSA, e.g. see here. I think UDASSA is by far the best formal theory we have to date, although I'm under no delusions about how well it captures all of our intuitions (I'm also under no delusions about how consistent our intuitions are, so I'm resigned to accepting a scheme that doesn't capture them). I think it would be more fair to call this allocation of measure part of my preferences, instead of "magical reality fluid." Thinking that your preferences are objective facts about the world seems like one of the oldest errors in the book, which is only possibly justified in this case because we are still confused about the hard problem of consciousness. As other commenters have observed, it seems clear that you should never actually believe that the mugger can influence the lives of 3^^^^3 other folks and will do so at your suggestion, whether or not you've made any special "leverage adjustment." Nevertheless, even though you never believe that you have such influence, you would still need to pass to some bounded utility function if you want to use the normal framework of expected utility maximization, since you need to compare the goodness of whole worlds. Either that, or you would need to make quite significant modifications to your decision theory. A note - it looks like what Eliezer is suggesting here is not the same as UDASSA. See my analysis here - and endoself's reply - and here. The big difference is that UDASSA won't impose the same locational penalty on nodes in extreme situations, since the measure is shared unequally between nodes. There are programs q with relatively short length that can select out such extreme nodes (parties getting genuine offers from Matrix Lords with the power of 3^^^3) and so give them much higher relative weight than 1/3^^^3. Combine this with an unbounded utility, and the mugger problem is still there (as is the divergence in expected utility). I agree that what Eliezer described is not exactly UDASSA. At first I thought it was just like UDASSA but with a speed prior, but now I see that that's wrong. I suspect it ends up being within a constant factor of UDASSA, just by considering universes with tiny little demons that go around duplicating all of the observers a bunch of times. If you are using UDT, the role of UDASSA (or any anthropic theory) is in the definition of the utility function. We define a measure over observers, so that we can say how good a state of affairs is (by looking at the total goodness under that measure). In the case of UDASSA the utility is guaranteed to be bounded, because our measure is a probability measure. Similarly, there doesn't seem to be a mugging issue. As lukeprog says here, this really needs to be written up. It's not clear to me that just because the measure over observers (or observer moments) sums to one then the expected utility is bounded. Here's a stab. Let's use s to denote a sub-program of a universe program p, following the notation of my other comment. Each s gets a weight w(s) under UDASSA, and we normalize to ensure Sum{s} w(s) = 1. Then, presumably, an expected utility looks like E(U) = Sum{s} U(s) w(s), and this is clearly bounded provided the utility U(s) for each observer moment s is bounded (and U(s) = 0 for any sub-program which isn't an "observer moment"). But why is U(s) bounded? It doesn't seem obvious to me (perhaps observer moments can be arbitrarily blissful, rather than saturating at some state of pure bliss). Also, what happens if U bears no relationship to experiences/observer moments, but just counts the number of paperclips in the universe p? That's not going to be bounded, is it? I agree it would be nice if things were better written up; right now there is the description I linked and Hal Finney's. If individual moments can be arbitrarily good, then I agree you have unbounded utilities again. If you count the number of paperclips you would again get into trouble; the analogous thing to do would be to count the mesure of paperclips. Yeah, I like this solution too. It doesn't have to be based on the universal distribution, any distribution will work. You must have some way of distributing your single unit of care across all creatures in the multiverse. What matters is not the large number of creatures affected by the mugger, but their total weight according to your care function, which is less than 1 no matter what outlandish numbers the mugger comes up with. The "leverage penalty" is just the measure of your care for not losing $5, which is probably more than 1/3^^^^3. Who might have the time, desire, and ability to write up UDASSA clearly, if MIRI provides them with resources? Is there any particular reason an AI wouldn't be able to self-modify with regards to its prior/algorithm for deciding prior probabilities? A basic Solomonoff prior should include a non-negligible chance that it itself isn't perfect for finding priors, if I'm not mistaken. That doesn't answer the question as such, but it isn't obvious to me that it's necessary to answer this one to develop a Friendly AI. A basic Solomonoff prior should include a non-negligible chance that it itself isn't perfect for finding priors, if I'm not mistaken. You are mistaken. A prior isn't something that can be mistaken per se. The closest it can get is assigning a low probability to something that is true. However, any prior system will say that the probability it gives of something being true is exactly equal to the probability of it being true, therefore it is well-calibrated. It will occasionally give low probabilities for things that are true, but only to the extent that unlikely things sometimes happen. As near as I can figure, the corresponding state of affairs to a complexity+leverage prior improbability would be a Tegmark Level IV multiverse in which each reality got an amount of magical-reality-fluid corresponding to the complexity of its program (1/2 to the power of its Kolmogorov complexity) and then this magical-reality-fluid had to be divided among all the causal elements within that universe - if you contain 3↑↑↑3 causal nodes, then each node can only get 1/3↑↑↑3 of the total realness of that universe. The difference between this and average utilitarianism is that we divide the probability by the hypothesis size, rather than dividing the utility by that size. The closeness of the two seems a bit Robin Hanson has suggested that the logic of a leverage penalty should stem from the general improbability of individuals being in a unique position to affect many others (which is why I called it a leverage penalty). At most 10 out of 3↑↑↑3 people can ever be in a position to be "solely responsible" for the fate of 3↑↑↑3 people if "solely responsible" is taken to imply a causal chain that goes through no more than 10 people's decisions; i.e. at most 10 people can ever be solely10 responsible for any given event. This bothers me because it seems like frequentist anthropic reasoning similar to the Doomsday argument. I'm not saying I know what the correct version should be, but assuming that we can use a uniform distribution and get nice results feels like the same mistake as the principle of indifference (and more sophisticated variations that often worked surprisingly well as an epistemic theory for finite cases). Things like Solomonoff distributions are more flexible... (As for infinite causal graphs, well, if problems arise only when introducing infinity, maybe it's infinity that has the problem.) The problem goes away of we try to employ a universal distribution for the reality fluid, rather than a uniform one. (This does not make that a good idea, necessarily.) This setup is not entirely implausible because the Born probabilities in our own universe look like they might behave like this sort of magical-reality-fluid - quantum amplitude flowing between configurations in a way that preserves the total amount of realness while dividing it between worlds - and perhaps every other part of the multiverse must necessarily work the same way for some If we try to use universal-distribution reality-fluid instead, we would expect to continue to see the same sort of distribution we had seen in the past: we would believe that we went down a path where the reality fluid concentrated into the Born probabilities, but other quantum paths which would be very improbable according to the Born probabilities may get high probability from some other similar to the Doomsday argument. Just to jump in here - the solution to the doomsday argument is that it is a low-information argument in a high-information situation. Basically, once you know you're the 10 billionth zorblax, your prior should indeed put you in the middle of the group of zorblaxes, for 20 billion total, no matter what a zorblax is. This is correct and makes sense. The trouble comes if you open your eyes, collect additional data, like population growth patterns, and then never use any of that to update the prior. When people put population growth patterns and the doomsday prior together in the same calculation for the "doomsday date," that's just blatantly having data but not updating on it. How confident are you of "Probability penalties are epistemic features - they affect what we believe, not just what we do. Maps, ideally, correspond to territories."? That seems to me to be a strong heuristic, even a very very strong heuristic, but I don't think it's strong enough to carry the weight you're placing on it here. I mean, more technically, the map corresponds to some relationship between the territory and the map-maker's utility function, and nodes on a causal graph, which are, after all, probabilistic, and thus are features of maps, not of territories, are features of the map-maker's utility function, not just summaries of evidence about the territory. I suspect that this formalism mixes elements of division of magical reality fluid between maps with elements of division of magical reality fluid between territories. A few thoughts: I haven't strongly considered my prior on being able to save 3^^^3 people (more on this to follow). But regardless of what that prior is, if approached by somebody claiming to be a Matrix Lord who claims he can save 3^^^3 people, I'm not only faced with the problem of whether I ought to pay him the $5 - I'm also faced with the question of whether I ought to walk over to the next beggar on the street, and pay him $0.01 to save 3^^^3 people. Is this person 500 times more likely to be able to save 3^^^3 people? From the outset, not really. And giving money to random people has no prior probability of being more likely to save lives than anything else. Now suppose that the said "Matrix Lord" opens the sky, splits the Red Sea, demonstrates his duplicator box on some fish and, sure, creates a humanoid Patronus. Now do I have more reason to believe that he is a Time Lord? Perhaps. Do I have reason to think that he will save 3^^^3 lives if I give him $5? I don't see convincing reason to believe so, but I don't see either view as problematic. Obviously, once you're not taking Hanson's approach, there's no problem with believing you've made a major discovery that can save an arbitrarily large number of lives. But here's where I noticed a bit of a problem in your analogy: In the dark matter case you say ""if these equations are actually true, then our descendants will be able to exploit dark energy to do computations, and according to my back-of-the-envelope calculations here, we'd be able to create around a googolplex people that way." Well, obviously the odds here of creating exactly a googolplex people is no greater than one in a googolplex. Why? Because those back of the hand calculations are going to get us (at best say) an interval from 0.5 x 10^(10^100) to 2 x 10^(10^100) - an interval containing more than a googolplex distinct integers. Hence, the odds of any specific one will be very low, but the sum might be very high. (This is simply worth contrasting with your single integer saved of the above case, where presumably your probabilities of saving 3^^^3 + 1 people are no higher than they were before.) Here's the main problem I have with your solution: "But if I actually see strong evidence for something I previously thought was super-improbable, I don't just do a Bayesian update, I should also question whether I was right to assign such a tiny probability in the first place - whether it was really as complex, or unnatural, as I thought. In real life, you are not ever supposed to have a prior improbability of 10^-100 for some fact distinguished enough to be written down, and yet encounter strong evidence, say 10^10 to 1, that the thing has actually happened." Sure you do. As you pointed out, dice rolls. The sequence of rolls in a game of Risk will do this for you, and you have strong reason to believe that you played a game of Risk and the dice landed as they did. We do probability estimates because we lack information. Your example of a mathematical theorem is a good one: The Theorem X is true or false from the get-go. But whenever you give me new information, even if that information is framed in the form of a question, it makes sense for me to do a Bayesian update. That's why a lot of so-called knowledge paradoxes are silly: If you ask me if I know who the president is, I can answer with 99%+ probability that it's Obama, if you ask me whether Obama is still breathing, I have to do an update based on my consideration of what prompted the question. I'm not committing a fallacy by saying 95%, I'm doing a Bayesian update, as I should. You'll often find yourself updating your probabilities based on the knowledge that you were completely incorrect about something (even something mathematical) to begin with. That doesn't mean you were wrong to assign the initial probabilities: You were assigning them based on your knowledge at the time. That's how you assign probabilities. In your case, you're not even updating on an "unknown unknown" - that is, something you failed to consider even as a possibility - though that's the reason you put all probabilities at less than 100%, because your knowledge is limited. You're updating on something you considered before. And I see absolutely no reason to label this a special non-Bayesian type of update that somehow dodges the problem. I could be missing something, but I don't see a coherent argument there. As an aside, the repeated references to how people misunderstood previous posts are distracting to say the least. Couldn't you just include a single link to Aaronson's Large Numbers paper (or anything on up-arrow notation, I mention Aaronson's paper because it's fun)? After all, if you can't understand tetration (and up), you're not going to understand the article to begin with. Now suppose that the said "Matrix Lord" opens the sky, splits the Red Sea, demonstrates his duplicator box on some fish and, sure, creates a humanoid Patronus. Now do I have more reason to believe that he is a Time Lord? Perhaps. Do I have reason to think that he will save 3^^^3 lives if I give him $5? I don't see convincing reason to believe so, but I don't see either view as Honestly, at this point, I would strongly update in the direction that I am being deceived in some manner. Possibly I am dreaming, or drugged, of the person in front of me has some sort of perception-control device. I do not see any reason why someone who could open the sky, split the Red Sea, and so on, would need $5; and if he did, why not make it himself? Or sell the fish? The only reasons I can imagine for a genuine Matrix Lord pulling this on me are very bad for me. Either he's a sadist who likes people to suffer - in which case I'm doomed no matter what I do - or there's something that he's not telling me (perhaps doing what he says once surrenders my free will, allowing him to control me forever?), which implies that he believes that I would reject his demand if I knew the truth behind it, which strongly prompts me to reject his demand. Or he's insane, following no discernable rules, in which case the only thing to do is to try to evade notice (something I've clearly already failed at). Either he's a sadist who likes people to suffer - in which case I'm doomed no matter what I do - or there's something that he's not telling me (perhaps doing what he says once surrenders my free will, allowing him to control me forever?), which implies that he believes that I would reject his demand if I knew the truth behind it, which strongly prompts me to reject his demand. That your universe is controlled by a sadist doesn't suggest that every possible action you could do is equivalent. Maybe all your possible fates are miserable, but some are far more miserable than others. More importantly, a being might be sadistic in some respects/situations but not in others. I also have to assign a very, very low prior to anyone's being able to figure out in 5 minutes what the Matrix Lord's exact motivations are. Your options are too simplistic even to describe minds of human-level complexity, much less ones of the complexity required to design or oversee physics-breakingly large simulations. I think indifference to our preferences (except as incidental to some other goal, e.g., paperclipping) is more likely than either sadism or beneficence. Only very small portions of the space of values focus on human-style suffering or joy. Even in hypotheticals that seem designed to play with human moral intuitions. Eliezer's decision theory conference explanation makes as much sense as Well, if I'm going to free-form speculate about the scenario, rather than use it to explore the question it was introduced to explore, the most likely explanation that occurs to me is that the entity is doing the Matrix Lord equivalent of free-form speculating... that is, it's wondering "what would humans do, given this choice and that information?" And, it being a Matrix Lord, its act of wondering creates a human mind (in this case, mine) and gives it that choice and information. Which makes it likely that I haven't actually lived through most of the life I remember, and that I won't continue to exist much longer than this interaction, and that most of what I think is in the world around me doesn't actually exist. That said, I'm not sure what use free-form speculating about such bizarre and underspecified scenarios really is, though I'll admit it's kind of fun. That said, I'm not sure what use free-form speculating about such bizarre and underspecified scenarios really is, though I'll admit it's kind of fun. It's kind of fun. Isn't that reason enough? Looking at the original question - i.e. how to handle very large utilities with very small probability - I find that I have a mental safety net there. The safety net says that the situation is a lie. It does not matter how much utility is claimed, because anyone can state any arbitrarily large number, and a number has been chosen (in this case, by the Matrix Lord) in a specific attempt to overwhelm my utility function. The small probability is chosen (a) because I would not believe a larger probability and (b) so that I have no recourse when it fails to happen. I am reluctant to fiddle with my mental safety nets because, well, they're safety nets - they're there for a reason. And in this case, the reason is that such a fantastically unlikely event is unlikely enough that it's not likely to happen ever, to anyone. Not even once in the whole history of the universe. If I (out of all the hundreds of billions of people in all of history) do ever run across such a situation, then it's so incredibly overwhelmingly more likely that I am being deceived that I'm far more likely to gain by immediately jumping to the conclusion of 'deceit' than by assuming that there's any chance of this being true. (nods) Sure. My reply here applies here as well. Friendly neighborhood Matrix Lord checking in! I'd like to apologize for the behavior of my friend in the hypothetical. He likes to make illusory promises. You should realize that regardless of what he may tell you, his choice of whether to hit the green button is independent of your choice of what to do with your $5. He may hit the green button and save 3↑↑↑3 lives, or he may not, at his whim. Your $5 can not be reliably expected to influence his decision in any way you can predict. You are no doubt accustomed to thinking about enforceable contracts between parties, since those are a staple of your game theoretic literature as well as your storytelling traditions. Often, your literature omits the requisite preconditions for a binding contract since they are implicit or taken for granted in typical cases. Matrix Lords are highly atypical counterparties, however, and it would be a mistake to carry over those assumptions merely because his statements resemble the syntactic form of an offer between humans. Did my Matrix Lord friend (who you just met a few minutes ago!) volunteer to have his green save-the-multitudes button and your $5 placed under the control of a mutually trustworthy third party escrow agent who will reliably uphold the stated bargain? Alternately, if my Matrix Lord friend breaches his contract with you, is someone Even More Powerful standing by to forcibly remedy the non-performance? Absent either of the above conditions, is my Matrix Lord friend participating in an iterated trading game wherein cheating on today's deal will subject him to less attractive terms on future deals, such that the net present value of his future earnings would be diminished by more than the amount he can steal from you today? Since none of these three criteria seem to apply, there is no deal to be made here. The power asymmetry enables him to do whatever he feels like regardless of your actions, and he is just toying with you! Do you really think your $5 means anything to him? He'll spend it making 3↑↑↑3 paperclips for all you know. Your $5 will not exert any predictable causal influence on the fate of the hypothetical 3↑↑↑3 Matrix Lord hostages. Decision theory doesn't even begin to apply. You should stick to taking boxes from Omega; at least she has an established reputation for paying out as promised. You should stick to taking boxes from Omega; at least she has an established reputation for paying out as promised. Caveat emptor, the boxes she gave me always were empty! I enjoyed this really a lot, and while I don't have anything insightful to add, I gave five bucks to MIRI to encourage more of this sort of thing. (By "this sort of thing" I mean detailed descriptions of the actual problems you are working on as regards FAI research. I gather that you consider a lot of it too dangerous to describe in public, but then I don't get to enjoy reading about it. So I would like to encourage you sharing some of the fun problems sometimes. This one was fun.) Not 'a lot' and present-day non-sharing imperatives are driven by an (obvious) strategy to accumulate a long-term advantage for FAI projects over AGI projects which is impossible if all lines of research are shared at all points when they are not yet imminently dangerous. No present-day knowledge is imminently dangerous AFAIK. I can't help but remember HPJEV talk about plausible deniability and how that relates to you telling people whether there is dangerous knowledge out there. present-day non-sharing imperatives are driven by an (obvious) strategy to accumulate a long-term advantage for FAI projects over AGI projects Do you believe this to be possible? In modern times with high mobility of information and people I have strong doubts a gnostic approach would work. You can hide small, specific, contained "trade secrets", you can't hide a large body of knowledge that needs to be actively developed. Edit: formatting fixed. Thanks, wedrifid. My response to the mugger: • You claim to be able to simulate 3^^^^3 unique minds. • It takes log(3^^^^3) bits just to count that many things, so my absolute upper bound on the prior for an agent capable of doing this is 1/3^^^^3. • My brain is unable to process enough evidence to overcome this, so unless you can use your matrix powers to give me access to sufficient computing power to change my mind, get lost. My response to the scientist: • Why yes, you do have sufficient evidence to overturn our current model of the universe, and if your model is sufficiently accurate, the computational capacity of the universe is vastly larger than we thought. • Let's try building a computer based on your model and see if it works. Edit: I seem to have misunderstood the formatting examples under the help button. Try an additional linebreak before the first bullet point. It takes log(3^^^^3) bits just to count that many things, so my absolute upper bound on the prior for an agent capable of doing this is 1/3^^^^3 Why does that prior follow from the counting difficulty? I was thinking that using (length of program) + (memory required to run program) as a penalty makes more sense to me than (length of program) + (size of impact). I am assuming that any program that can simulate X minds must be able to handle numbers the size of X, so it would need more than log(X) bits of memory, which makes the prior less than 2^-log(X). I wouldn't be overly surprised if there were some other situation that breaks this idea too, but I was just posting the first thing that came to mind when I read this. You're trying to italicize those long statements? It's possible that you need to get rid of the spaces around the asterisks. But you're probably better off just using quote boxes with ">" instead. Many of the conspiracy theories generated have some significant overlap (i.e. are not mutually exclusive), so one shouldn't expect the sum of their probabilities to be less than 1. It's permitted for P(Cube A is red) + P(Sphere X is blue) to be greater than 1. Okay, that makes sense. In that case, though, where's the problem? Claims in the form of "not only is X a true event, with details A, B, C, ..., but also it's the greatest event by metric M that has ever happened" should have low enough probability that a human writing it down specifically in advance as a hypothesis to consider, without being prompted by some specific evidence, is doing really badly epistemologically. Also, I'm confused about the relationship to MWI. One point I don't see mentioned here that may be important is that someone is saying this to you. I encounter lots of people. Each of them has lots of thoughts. Most of those thoughts, they do not express to me (for which I am grateful). How do they decide which thoughts to express? To a first approximation, they express thoughts which are likely, important and/or amusing. Therefore, when I hear a thought that is highly important or amusing, I expect it had less of a likelihood barrier to being expressed, and assign it a proportionally lower probability. Note that this doesn't apply to arguments in general -- only to ones that other people say to me. The prior probability of us being in a position to impact a googolplex people is on the order of one over googolplex, so your equations must be wrong That's not at all how validity of physical theories is evaluated. Not even a little bit. By that logic, you would have to reject most current theories. For example, Relativity restricted the maximum speed of travel, thus revealing that countless future generations will not be able to reach the stars. Archimedes's discovery of the buoyancy laws enabled future naval battles and ocean faring, impacting billions so far (which is not a googolplex, but the day is still young). The discovery of fission and fusion still has the potential to destroy all those potential future lives. Same with computer research. The only thing that matters in physics is the old mundane "fits current data, makes valid predictions". Or at least has the potential to make testable predictions some time down the road. The only time you might want to bleed (mis)anthropic considerations into physics is when you have no way of evaluating the predictive power of various models and need to decide which one is worth pursuing. But that is not physics, it's decision theory. Once you have a testable working theory, your anthropic considerations are irrelevant for evaluating its validity. Relativity restricted the maximum speed of travel, thus revealing that countless future generations will not be able to reach the stars That's perfectly credible since it implies a lack of leverage. Archimedes's discovery of the buoyancy laws enabled future naval battles and ocean faring, impacting billions so far 10^10 is not a significant factor compared to the sensory experience of seeing something float in a bathtub. The only thing that matters in physics is the old mundane "fits current data, makes valid predictions". To build an AI one must be a tad more formal than this, and once you start trying to be formal, you will soon find that you need a prior. That's perfectly credible since it implies a lack of leverage. Oh, I assumed that negative leverage is still leverage. Given that it might amount to an equivalent of killing a googolplex of people, assuming you equate never being born with killing. To build an AI one must be a tad more formal than this, and once you start trying to be formal, you will soon find that you need a prior. I see. I cannot comment on anything AI-related with any confidence. I thought we were talking about evaluating the likelihood of a certain model in physics to be accurate. In that latter case anthropic considerations seem irrelevant. It's likely that anything around today has a huge impact on the state of the future universe. As I understood the article, the leverage penalty requires considering how unique your opportunity to have the impact would be too, so Archimedes had a massive impact, but there have also been a massive number of people through history who would have had the chance to come up with the same theories had they not already been discovered, so you have to offset Archimedes leverage penalty by the fact that he wasn't uniquely capable of having that leverage. so you have to offset Archimedes leverage penalty by the fact that he wasn't uniquely capable of having that leverage. Neither was any other scientist in history ever, including the the one in the Eliezer's dark energy example. Personally, I take a very dim view of applying anthropics to calculating probabilities of future events, and this is what Eliezer is doing. "Robin Hanson has suggested that the logic of a leverage penalty should stem from the general improbability of individuals being in a unique position to affect many others (which is why I called it a leverage penalty)." As I mentioned in a recent discussion post, I have difficulty accepting Robin's solution as valid -- for starters it has the semblance of possibly working in the case of people who care about people, because that's a case that seems as it should be symmetrical, but how would it e.g. work for a Clippy who is tempted with the creation of paperclips? There's no symmetry here because paperclips don't think and Clippy knows paperclips don't think. And how would it work if the AI in question in asked to evaluate whether such a hypothetical offer should be accepted by a random individual or not? Robin's anthropic solution says that the AI should judge that someone else ought hypothetically take the offer, but it would judge the probabilities differently if it had to judge things in actual life. That sounds as if it ought violate basic principles of rationality? My effort to steelman Robin's argument attempted to effectively replace "lives" with "structures of type X that the observer cares about and will be impacted", and "unique position to affect" with "unique position of not directly observing" -- hence Law of Visible Impact. Someone who reacts to gap in the sky with "its most likely a hallucination" may, with incredibly low probability, encounter the described hypothetical where it is not a hallucination, and lose out. Yet this person would perform much more optimally when their drink got spiced with LSD or if they naturally developed an equivalent fault. And of course the issue is that maximum or even typical impact of faulty belief processing which is described here could be far larger than $5 - the hypothesis could have required you to give away everything, to work harder than you normally would and give away income, or worse, to kill someone. And if it is processed with disregard for probability of a fault, such dangerous failure modes are rendered more likely. This is true, but the real question here is how to fix a non-convergent utility calculation. One of the points in the post was a dramatically non Bayesian dismissal of updates on the possibility of hallucination. An agent of finite reliability faces a tradeoff between it's behaviour under failure and it's behaviour in unlikely circumstances. With regards to fixing up probabilities, there is an issue that early in it's life, an agent is uniquely positioned to influence it's future. Every elderly agent goes through early life; while the probability of finding your atheist variation on the theme of immaterial soul in the early age agent is low, the probability that an agent will be making decisions at an early age is 1, and its not quite clear that we could use this low probability. (It may be more reasonable to assign low probability to an incredibly long lifespan though, in the manner similar to the speed prior). the vast majority of the improbable-position-of-leverage in any x-risk reduction effort comes from being an Earthling in a position to affect the future of a hundred billion galaxies, Why does "Earthling" imply sufficient evidence for the rest of this (given a leverage adjustment)? Don't we have independent reason to think otherwise, eg the Great Filter argument? Mind you, the recent MIRI math paper and follow-up seem (on their face) to disprove some clever reasons for calling seed AGI actually impossible and thereby rejecting a scenario in which Earth will "affect the future of a hundred billion galaxies". There may be a lesson there. Is it reasonable to take this as evidence that we shouldn't use expected utility computations, or not only expected utility computations, to guide our decisions? If I understand the context, the reason we believed an entity, either a human or an AI, ought to use expected utility as a practical decision making strategy, is because it would yield good results (a simple, general architecture for decision making). If there are fully general attacks (muggings) on all entities that use expected utility as a practical decision making strategy, then perhaps we should revise the original hypothesis. Utility as a theoretical construct is charming, but it does have to pay its way, just like anything else. P.S. I think the reasoning from "bounded rationality exists" to "non-Bayesian mind changes exist" is good stuff. Perhaps we could call this "on seeing this, I become willing to revise my model" phenomenon something like "surprise", and distinguish it from merely new information. Indeed, you can't ever present a mortal like me with evidence that has a likelihood ratio of a googolplex to one - evidence I'm a googolplex times more likely to encounter if the hypothesis is true, than if it's false - because the chance of all my neurons spontaneously rearranging themselves to fake the same evidence would always be higher than one over googolplex. You know the old saying about how once you assign something probability one, or probability zero, you can never change your mind regardless of what evidence you see? Well, odds of a googolplex to one, or one to a googolplex, work pretty much the same way." On the other hand, if I am dreaming, or drugged, or crazy, then it DOESN'T MATTER what I decide to do in this situation. I will still be trapped in my dream or delusion, and I won't actually be five dollars poorer because you and I aren't really here. So I may as well discount all probability lines in which the evidence I'm seeing isn't a valid representation of an underlying reality. Here's your $5. I will still be trapped in my dream or delusion Are you sure? I would expect that it's possible to recover from that, and some actions would make you more likely to recover than others. If all of my experiences are dreaming/drugged/crazy/etc. experiences then what decision I make only matters if I value having one set of dreaming/drugged/crazy experiences over a different set of such experiences. The thing is, I sure do seem to value having one set of experiences over another. So if all of my experiences are dreaming/drugged/crazy/etc. experiences then it seems I do value having one set of such experiences over a different set of such experiences. So, given that, do I choose the dreaming/drugged/crazy/etc. experience of giving you $5 (and whatever consequences that has?). Or of refusing to give you $5 (and whatever consequences that has)? Or something else? So I may as well discount all probability lines in which the evidence I'm seeing isn't a valid representation of an underlying reality. But that would destroy your ability to deal with optical illusions and misdirection. Perhaps I should say ...in which I can't reasonably expect to GET evidence entangled with an underlying reality. This would mean that all our decisions were dominated by tiny-seeming probabilities (on the order of 2-100 and less) of scenarios where our lightest action affected 3↑↑4 people... which would in turn be dominated by even more remote probabilities of affecting 3↑↑5 people... I'm pretty ignorant of quantum mechanics, but I gather there was a similar problem, in that the probability function for some path appeared to be dominated by an infinite number of infinitessimally-unlikely paths, and Feynman solved the problem by showing that those paths cancelled each other out. Relevant math, similar features in classical optics and Quantum Mechanics. Random thoughts here, not highly confident in their correctness. Why is the leverage penalty seen as something that needs to be added, isn't it just the obviously correct way to do probability. Suppose I want to calculate the probability that a race of aliens will descend from the skies and randomly declare me Overlord of Earth some time in the next year. To do this, I naturally go to Delphi to talk to the Oracle of Perfect Priors, and she tells me that the chance of aliens descending from the skies and declaring an Overlord of Earth in the next year is 0.0000007%. If I then declare this to be my probability of become Overlord of Earth in an alien-backed coup, this is obviously wrong. Clearly I should multiply it by the probability that the aliens pick me, given that the aliens are doing this. There are about 7-billion people on earth, and updating on the existence of Overlord Declaring aliens doesn't have much effect on that estimate, so my probability of being picked is about 1 in 7 billion, meaning my probability of being overlorded is about 0.0000000000000001%. Taking the former estimate rather than the latter is simply wrong. Pascal's mugging is a similar situation, only this time when we update on the mugger telling the truth, we radically change our estimate of the number of people who were 'in the lottery', all the way up to 3^^^^3. We then multiply 1/3^^^^3 by the probability that we live in a universe where Pascal's muggings occur (which should be very small but not super-exponentially small). This gives you the leverage penalty straight away, no need to think about Tegmark multiverses. We were simply mistaken to not include it in the first place. only this time when we update on the mugger telling the truth, we radically change our estimate of the number of people who were 'in the lottery', all the way up to 3^^^^3. We then multiply 1/3^^ ^^3 by the probability that we live in a universe where Pascal's muggings occur How does this work with Clippy (the only paperclipper in known existence) being tempted with 3^^^^3 paperclips? That's part of why I dislike Robin Hanson's original solution. That the tempting/blackmailing offer involves 3^^^^3 other people, and that you are also a person should be merely incidental to one particular illustration of the problem of Pascal's Mugging -- and as such it can't be part of a solution to the core problem. To replace this with something like "causal nodes", as Eliezer mentions, might perhaps solve the problem. But I wish that we started talking about Clippy and his paperclips instead, so that the original illustration of the problem which involves incidental symmetries doesn't mislead us into a "solution" overreliant on symmetries. How does this work with Clippy (the only paperclipper in known existence) being tempted with 3^^^^3 paperclips? First thought, I'm not at all sure that it does. Pascal's mugging may still be a problem. This doesn't seem to contradict what I said about the leverage penalty being the only correct approach, rather than a 'fix' of some kind, in the first case. Worryingly, if you are correct it may also not be a 'fix' in the sense of not actually fixing anything. I notice I'm currently confused about whether the 'causal nodes' patch is justified by the same argument. I will think about it and hopefully find an answer. I don't know of any set of axioms that imply that you should take expected utilities when considering infinite sets of possible outcomes that do not also imply that the utility function is bounded. If we think that our utility functions are unbounded and we want to use the Solomonoff prior, why are we still taking expectations? (I suppose because we don't know how else to aggregate the utilities over possible worlds. Last week, I tried to see how far I could get if I weakened a few of the usual assumptions. I couldn't really get anywhere interesting because my axioms weren't strong enough to tell you how to decide in many cases, even when the generalized probabilities and generalized utilities are known.) Suppose you could conceive of what the future will be like if it were explained to you. Are there more or less than a googleplex differentiable futures which are conceivable to you? If there are more, then selecting a specific one of those conceivable futures is more bits than posited as possible. If fewer, then...? The usual analyses of Pascal's Wager, like many lab experiments, privileges the hypothesis and doesn't look for alternative hypotheses. Why would anyone assume that the Mugger will do as he says? What do we know about the character of all powerful beings? Why should they be truthful to us? If he knows he could save that many people, but refrains from doing so because you won't give him five dollars, he is by human standards a psycho. If he's a psycho, maybe he'll kill all those people if I give him 5 dollars. That actually seems more likely behavior from such a dick. The situation you are in isn't the experimental hypothetical of knowing what the mugger will do depending on what your actions are. It's a situation where you observer X,Y, and Z, and are free to make inferences from them. If he has the power, I infer the mugger is a sadistic dick who likes toying with creatures. I expect him to renege on the bet, and likely invert it. "Ha Ha! Yes, I saved those beings, knowing that each would go on to torture a zillion zillion others." This is a mistake theists make all the time. They think hypothesizing an all powerful being allows them to account for all mysteries, and assume that once the power is there, the privileged hypothesis will be fulfilled. But you get no increased probability of any event from hypothesizing power unless you also establish a prior on behavior. From the little I've seen of the mugger, if he has the power to do what he claims, he is malevolent. If he doesn't have the power, he is impotent to deliver and deluded or dishonest besides. Either way, I have no expectation of gain by appealing to such a person. The usual analyses of Pascal's Wager, like many lab experiments, privileges the hypothesis and doesn't look for alternative hypotheses. Yes, privileging a hypothesis isn't discussed in great detail, but the alternatives you mention in your post don't resolve the dilemma. Even if you think that that the probabilities of a "good" and "bad" alternatives balance each other out to the quadrillionth decimal point, the utilities you get in your calculation are astronomical. If you think there's a 0.0000quadrillion zeros1 greater chance that the beggar will do good than harm, the expected utility of your $5 donation is inconceivably greater than than a trillion years of happiness. If you think there's at least a 0.0000 quadrillion zeros1 chance that $5 will cause the mugger to act malevolently, your $5 donation is inconceivably worse than a trillion years of torture. Both of theses expectations seem off. You can't just say "the probabilities balance out". You have to explain why the probabilities balance out to a bignum number of decimal points. You have to explain why the probabilities balance out to a bignum number of decimal points. Actually, I don't. I say the probabilities are within my margin of error, which is a lot larger than "0.0000quadrillion zeros1". I can't discern differences of "0.0000quadrillion zeros1". OK, but now decreasing your margin of error until you can make a determination is the most important ethical mission in history. Governments should spend billions of dollars to assemble to brightest teams to calculate which of your two options is better -- more lives hang in the balance (on expectation) than would ever live if we colonized the universe with people the size of atoms. Suppose a trustworthy Omega tells you "This is a once in a lifetime opportunity. I'm going to cure all residence of country from all diseases in benevolent way (no ironic or evil catches). I'll leave the country up to you. Give me $5 and the country will be Zimbabwe, or give me nothing and the country will be Tanzania. I'll give you a couple of minutes to come up with a decision." You would not think to yourself "Well, I'm not sure which is bigger. My estimates don't differ by more than my margin of error, so I might as well save the $5 and go with Tanzania". At least I hope that's not how you'd make the decision. <blockquote> Then you present me with a brilliant lemma Y, which clearly seems like a likely consequence of my mathematical axioms, and which also seems to imply X - once I see Y, the connection from my axioms to X, via Y, becomes obvious. </blockquote> Seems a lot like learning a proof of X. It shouldn't surprise us that learning a proof of X increases your confidence in X. The mugger genie has little ground to accuse you of inconsistency for believing X more after learning a proof of it. Granted the analogy isn't exact; what is learned may fall well short of rigorous proof. You may have only learned a good argument for X. Since you assign only 90% posterior likelihood I presume that's intended in your narrative. Nevertheless, analogous reasoning seems to apply. The mugger genie has little ground to accuse you of inconsistency for believing X more after learning a good argument for it. Continuing from what I said in my last comment about the more general problem with Expected Utility Maximizing, I think I might have a solution. I may be entirely wrong, so any criticism is welcome. Instead of calculating Expected Utility, calculate the probability that an action will result in a higher utility than another action. Choose the one that is more likely to end up with a higher utility. For example, if giving Pascal's mugger the money only has a one out of a trillionth chance of ending up with a higher utility than not giving him your money, you wouldn't give it. Now there is an apparent inconsistency with this system. If there is a lottery, and you have a 1/100 chance of winning, you would never buy a ticket. Even if the reward is $200 and the cost of a ticket only $1. Or even regardless how big the reward is. However if you are offered the chance to buy a lot of tickets all at once, you would do so, since the chance of winning becomes large enough to outgrow the chance of not winning. However I don't think that this is a problem. If you expect to play the lottery a bunch of times in a row, then you will choose to buy the ticket, because making that choice in this one instance also means that you will make the same choice in every other instance. Then the probability of ending up with more money at the end of the day is higher. So if you expect to play the lottery a lot, or do other things that have low chances of ending up with high utilities, you might participate in them. Then when all is done, you are more likely to end up with a higher utility than if you had not done so. However if you get in a situation with an absurdly low chance of winning, it doesn't matter how large the reward is. You wouldn't participate, unless you expect to end up in the same situation an absurdly large number of times. This method is consistent, it seems to "work" in that most agents that follow it will end up with higher utilities than agents that don't follow it, and Expected Utility is just a special case of it that only happens when you expect to end up in similar situations a lot. It also seems closer to how humans actually make decisions. So can anyone find something wrong with this? So if I'm getting what you're saying correctly, it would not sacrifice a single cent for a 49% chance to save a human life? And on the other hand it could be tempted to a game where it'd have 51% chance of winning a cent, and 49% chance of being destroyed? If the solution for the problem of infinitesmal probabilities, is to effectively ignore every probability under 50%, that's a solution that's worse than the problem... I stupidly didn't consider that kind of situation for some reason... Back to the drawing board I guess. Though to be fair it would still come out ahead 51% of the time, and in a real world application it would probably choose to spend the penny, since it would expect to make choices similarly in the future, and that would help it come out ahead an even higher percent of the time. But yes, a 51% chance of losing a penny for nothing probably shouldn't be worth more than a 49% chance at saving a life for a penny. However allowing a large enough reward to outweigh a small enough probability means the system will get stuck in situations where it is pretty much guaranteed to lose, on the slim, slim chance that it could get a huge reward. Caring only about the percent of the time you "win" seemed like a more rational solution but I guess not. Though another benefit of this system could be that you could have weird utility functions. Like a rule that says any outcome where one life is saved is worth more than any amount of money lost. Or Asimov's three laws of robotics, which wouldn't work under an Expected Utility function since it would only care about the first law. This is allowed because in the end all that matters is which outcomes you prefer to which other outcomes. You don't have to turn utilities into numbers and do math on them. Here's a question, if we had the ability to input a sensory event with a likelyhoodratio of 3^^^^3:1 this whole problem would be solved? Here's a question, if we had the ability to input a sensory event with a likelyhoodratio of 3^^^^3:1 this whole problem would be solved? Assuming the rest of our cognitive capacity is improved commensurably then yes, problem solved. Mind you we would then be left with the problem if a Matrix Lord appears and starts talking about 3^^^^ This seems like an exercise in scaling laws. The odds of being a hero who save 100 lives are less 1% of the odds of being a hero who saves 1 life. So in the absence of good data about being a hero who saves 10^100 lives, we should assume that the odds are much, much less than 1/(10^100). In other words, for certain claims, the size of the claim itself lowers the probability. More pedestrian example: ISTR your odds of becoming a musician earning over $1 million a year are much, much less than 1% of your odds of becoming a musician who earns over $10,000 a year. Isn't this more of social recognition of a scam? While there are decision-theoretic issues with the Original Pascal's Wager, one of the main problems is that it is a scam ("You can't afford not to do it! It's an offer you can't refuse!"). It seems to me that you can construct plenty of arguments like you just did, and many people wouldn't take you up on the offer because they'd recognize it as a scam. Once something has a high chance of being a scam (like taking the form of Pascal's Wager), it won't get much more of your attention until you lower the likelihood that it's a scam. Is that a weird form of Confirmation Bias? But nonetheless, couldn't the AI just function in the same way as that? I would think it would need to learn how to identify what is a trick and what isn't a trick. I would just try to think of it as a Bad Guy AI who is trying to manipulate the decision making algorithms of the Good Guy AI. The concern here is that if I reject all offers that superficially pattern-match to this sort of scam, I run the risk of turning down valuable offers as well. (I'm reminded of a TV show decades ago where they had some guy dress like a bum and wander down the street offering people $20, and everyone ignored him.) Of course, if I'm not smart enough to actually evaluate the situation, or don't feel like spending the energy, then superficial pattern-matching and rejection is my safest strategy, as you suggest. But the question of what analysis a sufficiently smart and attentive agent could do, in principle, to take advantage of rare valuable opportunities without being suckered by scam artists is often worth asking anyway. But wouldn't you just be suckered by sufficiently smart and attentive scam artists? It depends on the nature of the analysis I'm doing. I mean, sure, if the scam artist is smart enough to, for example, completely encapsulate my sensorium and provide me with an entirely simulated world that it updates in real time and perfect detail, then all bets are off... it can make me believe anything by manipulating the evidence I observe. (Similarly, if the scam artist is smart enough to directly manipulate my brain/mind.) But if my reasoning is reliable and I actually have access to evidence about the real world, then the better I am at evaluating that evidence, the harder I am to scam about things relating to that evidence, even by a scam artist far smarter than me. I also think that the variant of the problem featuring an actual mugger is about scam recognition. Suppose you get an unsolicited email claiming that a Nigerian prince wants to send you a Very Large Reward worth $Y. All you have to do is send him a cash advance of $5 first ... I analyze this as a straightforward two-player game tree via the usual minimax procedure. Player one goes first, and can either pay $5 or not. If player one chooses to pay, then player two goes second, and can either pay Very Large Reward $Y to player one, or he can run away with the cash in hand. Under the usual minimax assumptions, player 2 is obviously not going to pay out! Crucially, this analysis does not depend on the value for Y. The analysis for Pascal's mugger is equivalent. A decision procedure that needs to introduce ad hoc corrective factors based on the value of Y seems flawed to me. This type of situation should not require an unusual degree of mathematical sophistication to analyze. When I list out the most relevant facts about this scenario, they include the following: (1) we received an unsolicited offer (2) from an unknown party from whom we won't be able to seek redress if anything goes wrong (3) who can take our money and run without giving us anything verifiable in return. That's all we need to know. The value of Y doesn't matter. If the mugger performs a cool and impressive magic trick we may want to tip him for his skillful street performance. We still shouldn't expect him to payout Y. I generally learn a lot from the posts here, but in this case I think the reasoning in the post confuses rather than enlightens. When I look back on my own life experiences, there are certainly times when I got scammed. I understand that some in the Less Wrong community may also have fallen victim to scams or fraud in the past. I expect that many of us will likely be subject to disingenuous offers by unFriendly parties in the future. I respectfully suggest that knowing about common scams is a helpful part of a rationalist's training. It may offer a large benefit relative to other If my analysis is flawed and/or I've missed the point of the exercise, I would appreciate learning why. Thanks! When you say that player 2 "is obviously not going to pay out" that's an approximation. You don't know that he's not going to pay off. You know that he's very, very, very, unlikely to pay off. (For instance, there's a very slim chance that he subscribes to a kind of honesty which leads him to do things he says he'll do, and therefore doesn't follow minimax.) But in Pascal's Mugging, "very, very, very, unlikely" works differently from "no chance at all".
{"url":"http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/","timestamp":"2014-04-19T05:23:47Z","content_type":null,"content_length":"858012","record_id":"<urn:uuid:e362a888-7739-4211-8e7a-e48f885e0a9a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Pleasanton, CA Math Tutor Find a Pleasanton, CA Math Tutor ...I specialize in the areas of computer technology, math, English, and physical science. This reflects by educational background with degrees in physics, English, and computer science. My experience in working with people from many backgrounds allows me to respect different cultures and their related learning styles. 44 Subjects: including calculus, discrete math, photography, C ...Pre-calculus is often somewhat of a survey course without a unified curriculum, and I have been able to help students tie together the various topics and explain how they will be important in their future math courses. Whether it's explaining the relevance of polar graphs or giving examples of t... 10 Subjects: including algebra 1, algebra 2, calculus, geometry ...When I was in high school and college, I found that most math and science textbooks are very abstract and nearly impossible to understand for someone who is learning the subject for the first time, so I had to find other means to really learn the concepts. Now that I am past this hurdle, I want ... 12 Subjects: including geometry, algebra 1, algebra 2, calculus ...My teaching credentials are in multiple subject elementary school and biological sciences. I have attended a variety of trainings throughout my professional career. In writing, I have trained and applied the Step-Up to Writing program and Lucy Caulkins in my classroom. 26 Subjects: including algebra 2, probability, reading, statistics ...I have worked with ADD/ADHD, Autistic, & Aspergers students as well as various learning disabled/challenged students for the past 15 years. I know your student can succeed. Please don't let math or sciences scare them or you. 27 Subjects: including algebra 1, ACT Math, biology, ASVAB Related Pleasanton, CA Tutors Pleasanton, CA Accounting Tutors Pleasanton, CA ACT Tutors Pleasanton, CA Algebra Tutors Pleasanton, CA Algebra 2 Tutors Pleasanton, CA Calculus Tutors Pleasanton, CA Geometry Tutors Pleasanton, CA Math Tutors Pleasanton, CA Prealgebra Tutors Pleasanton, CA Precalculus Tutors Pleasanton, CA SAT Tutors Pleasanton, CA SAT Math Tutors Pleasanton, CA Science Tutors Pleasanton, CA Statistics Tutors Pleasanton, CA Trigonometry Tutors Nearby Cities With Math Tutor Berkeley, CA Math Tutors Concord, CA Math Tutors Danville, CA Math Tutors Dublin, CA Math Tutors Fremont, CA Math Tutors Hayward, CA Math Tutors Livermore, CA Math Tutors Oakland, CA Math Tutors Palo Alto Math Tutors San Jose, CA Math Tutors San Leandro Math Tutors San Ramon Math Tutors Santa Clara, CA Math Tutors Sunnyvale, CA Math Tutors Union City, CA Math Tutors
{"url":"http://www.purplemath.com/Pleasanton_CA_Math_tutors.php","timestamp":"2014-04-16T10:10:43Z","content_type":null,"content_length":"23907","record_id":"<urn:uuid:787fea0c-4991-4c29-bd39-9e2d71e8e2c4>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Crofton, MD Calculus Tutor Find a Crofton, MD Calculus Tutor ...Between 2006 and 2011 I was a research assistant at the University of Wyoming and I used to cover my advisor’s graduate level classes from time to time. And, since August 2012 I have tutored math (from prealgebra to calculus II), chemistry and physics for mid- and high-school students here in th... 14 Subjects: including calculus, chemistry, physics, geometry ...I have 5 years of MATLAB experience. I often used it during college and graduate school. I have experience using it for simpler math problems, as well as using it to run more complicated 27 Subjects: including calculus, physics, geometry, algebra 1 ...Continuity as a Property of Functions. II. Derivatives A. 21 Subjects: including calculus, statistics, geometry, algebra 1 ...My current job requires use of these in finite element analysis, free body diagram of forces, and decomposing forces in a given direction. I have a BS in mechanical engineering and took Algebra 1 & 2 in high school and differential equations and statistics in college. My current job requires use of algebra to manipulate equations for force calculation. 10 Subjects: including calculus, physics, geometry, algebra 1 ...I am a graduate of the University of Maryland, where I completed a Bachelor of Arts in 2011 with performance on trombone as the major focus. Piano proficiency was a part of the degree requirement, a requirement which I met by demonstrating proficiency in an informal audition. I performed as a trombone instrumentalist in the Navy Band based at Pearl Harbor, Hawaii from 2004 to 15 Subjects: including calculus, statistics, piano, geometry Related Crofton, MD Tutors Crofton, MD Accounting Tutors Crofton, MD ACT Tutors Crofton, MD Algebra Tutors Crofton, MD Algebra 2 Tutors Crofton, MD Calculus Tutors Crofton, MD Geometry Tutors Crofton, MD Math Tutors Crofton, MD Prealgebra Tutors Crofton, MD Precalculus Tutors Crofton, MD SAT Tutors Crofton, MD SAT Math Tutors Crofton, MD Science Tutors Crofton, MD Statistics Tutors Crofton, MD Trigonometry Tutors
{"url":"http://www.purplemath.com/Crofton_MD_Calculus_tutors.php","timestamp":"2014-04-21T02:11:36Z","content_type":null,"content_length":"23871","record_id":"<urn:uuid:0f462ee3-d055-49bb-91c3-516d104179b8>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
R versus Matlab in Mathematical Psychology February 20, 2011 By Jeromy Anglim I recently attended the 2011 Australasian Mathematical Psychology Conference. This post summarises a few thoughts I had on the use of R, Matlab and other tools in mathematical psychology flowing from discussions with researchers at the conference. I wanted to get a sense of the software used by researchers in mathematical psychology. What was popular? Why was it popular? From the small-n, non-random sample of conference attendees that I spoke to over coffee and cake, I concluded: • Many experienced math psych researchers know a bit of both R and Matlab, but most specialised in one. • Matlab seemed to be substantially more popular than R in math psych. • The general attitude seemed to be that both tools offered similar functionality. • Reasons given for using Matlab: □ Consistency: several researchers commented that functions are highly consistent in Matlab, making it easier to return to coding in Matlab after a break. □ Superior built-in documentation: There was a sense that Matlab documentation was more user-friendly. □ Historical precedent: researchers grew up on Matlab and then taught it to their graduate students. □ Existing packages and models: it seems like Matlab is well established in cognitive psychology where substantial existing code to guide subsequent researchers. □ University pays: Thus, while R is free, Matlab is effectively free to the academic if the academic's university has a site licence. □ User friendly IDE: In R it seems that most users pretty quickly start playing around with alternative editors, whether it be ESS, Vim and R, Eclipse, Tinn-R or something else. In Matlab, the built-in IDE seemed popular. While these external editors can be configured to create a really powerful data analytic environment, Matlab users appreciated having something that was productive out-of-the-box. □ Matlab is user friendly for implementing matrix algebra based calculations. • Reasons given for using R: □ Free (as in beer) □ Open source: A few people talked about this. However, I got the sense that the ideology of open source technology could be encouraged further. □ Sweave: Even amongst Matlab users, there was a respect and interest in the idea of Sweave in R □ R's packages: The sheer number of packages particularly for statistics is one of R's great strengths. □ Superior graphics • A few people also spoke positively of Python (see this summary of useful Python packages for statisticsby Christophe Lalanne. • All the above links into general discussion of the relative merits of R, Matlab, and Python on SO. From my discussions, I saw no need for me to personally switch from R to Matlab. Sweave, graphics, and all the R packages are fantastic. The community around R is also one of its great strengths. Finally, open source just aligns better with science. • Open and freely modifiable source code • Freely available psychological measurement tools • Freely available data • Reproducible research documents using technologies such as Sweave • Open-access journals It all combines to support scientific disciplines in sharing and building knowledge through accountability and trust. This applies both to sharing between researchers as well as communicating with the broader community. I get a bad feeling when I think of researchers and interested community members who can't afford Matlab being excluded from research. However, it was interesting to consider how issues like user-friendly documentation, development environments, and consistency could be facilitated in a massive and distributed open source project such as R. ...END RANT... Related Posts daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/r-versus-matlab-in-mathematical-psychology-2/","timestamp":"2014-04-19T22:35:09Z","content_type":null,"content_length":"38890","record_id":"<urn:uuid:836897ea-1cab-41f5-a388-61116d376d7f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Mind games and alternative solutions Last night I browsed a bit through a Scientific American "Mind" edition (volume 16, number 1) I bought some time ago. On page 96 it got some "Head Games" by Abbie F. Salny. I was able to solve one very fast, a pattern completion puzzle. Today I asked Esme, and later Andreja, a very good online friend, to solve the problem. And no, 64 is not the answer, since it only completes the sequence at the bottom and ignores the other two numbers in each circle. After some thinking and trying, Andreja came up with a very nice alternative solution. But before explaining that one, lets see what I came up with, and which matches the answer given at the bottom of the page in the Scientific American. So the number in the bottom of the circle is four times the difference between the left and right number. So the answer could be: 4(12 - 9) = 12. However, Andreja came up with 77. First she added the left and bottom number, see image below: And then by dropping the left most digit on the right number, the same right top sequence is generated, and hence the missing number is 77, since 12 + 77 = 89, and we drop the 8. So the function f in the image below keeps the right most digit of a number: I like this solution, since the digits that are dropped form another sequence: 1, 2, 4, 8. Very good job, and I wonder if more solutions are possible. If you find one, please post a comment. Andreja is very fond of puzzling games. She beat me big time with the online Alchemy game. And now plays a lot Sudoku online, which I still have to give a serious try. Mind games related • Sudoku - About the number place game Also today
{"url":"http://johnbokma.com/mexit/2005/11/09/mind-games.html","timestamp":"2014-04-19T17:13:01Z","content_type":null,"content_length":"5573","record_id":"<urn:uuid:027845c1-7e72-4d17-81ec-7f8fcaef13a0>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Section 5: Problem Solving and Mathematical Models Shodor > Interactivate > Textbooks > Math Thematics 2nd Ed. 8th > Section 5: Problem Solving and Mathematical Models Math Thematics 2nd Ed. 8th Module 1 - Amazing Feats and Facts and Fiction Section 5: Problem Solving and Mathematical Models Lesson • Activity • Discussion • Worksheet • Show All Lesson (...) Activity (...) A Better Fire!! Activity: Students run a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Forest density, wind direction, size of forest. Rabbits and Wolves Activity: Experiment with a simple ecosystem consisting of grass, rabbits, and wolves, learning about probabilities, chaos, and simulation. Spread of Disease Activity: Models how a population of susceptible, infected, and recovered people is affected by a disease. Discussion (...) Worksheet (...) No Results Found ©1994-2014 Shodor Website Feedback Math Thematics 2nd Ed. 8th Module 1 - Amazing Feats and Facts and Fiction Section 5: Problem Solving and Mathematical Models Lesson • Activity • Discussion • Worksheet • Show All Lesson (...) Activity (...) A Better Fire!! Activity: Students run a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Forest density, wind direction, size of forest. Rabbits and Wolves Activity: Experiment with a simple ecosystem consisting of grass, rabbits, and wolves, learning about probabilities, chaos, and simulation. Spread of Disease Activity: Models how a population of susceptible, infected, and recovered people is affected by a disease. Discussion (...) Worksheet (...) No Results Found
{"url":"http://www.shodor.org/interactivate/textbooks/section/874/","timestamp":"2014-04-20T05:43:51Z","content_type":null,"content_length":"13842","record_id":"<urn:uuid:c56ed195-f096-4519-b58e-31311647253f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: Elementary Multiplication Ask Dr. Math Elementary Archive Dr. Math Home || Elementary || Middle School || High School || College || Dr. Math FAQ TOPICS Browse Elementary Multiplication Stars indicate particularly interesting answers or good places to begin browsing. This page: multiplication Selected answers to common questions: Flashcards/worksheets on the Web. Search Learning to multiply. Dr. Math Least common multiple, Greatest common factor. Number sentences. See also the Order of operations. Dr. Math FAQ: multiplication facts and Take the first digit, multiply it by the next consecutive number, and place it in front of 25. Can you prove this shortcut? order of operations When a word problem refers to the sum of the products, what exactly does it mean? Internet Library: multiplication One of my students asked how we came to use the term power to express the number of times we multiply a number by itself. T2T FAQ: A number is five times greater than x. Will this number be 6x or 5x? learning to multiply In a story problem, how do you know whether to multiply or divide? ELEMENTARY Can you think of a way to use doubling to multiply 6 x 7? Arithmetic My math teacher told us to work problems like this: a(b-c)= ab-ac. subtraction I have to add 2398 and 5752 and then divide by 37. I can do it, but it takes a long time. Is there any faster way to do those calculations? division What is 65 times ten million? Also, we've had pluses, multiples, and times. Are there going to be any new things after that? What is 75 percent of $5,000? What is the definition of a multiple? 2-dimensional What does multiplication have to do with addition, geometry, and real life? triangles/polygons Why do we have to cancel numbers? 3D and higher polyhedra Can you explain in simple terms why the Casting Out Nines method works? Golden Ratio/ I wanted to know why the word "of" means multiplication. Fibonacci Sequence How can I convince a 14 year old girl who is in 8th grade the importance of mental math? I think that skills like mentally adding and subtracting 2 digit numbers and being History/Biography able to estimate multiplying 2 digit large numbers are critical. My daughter's teacher says that such skills aren't needed because of calculators and computers. Measurement My 8-year old is having trouble understanding why multiplying by 10 just adds a 0 to the end of the number. Do you have any thoughts on how to explain it to her? dates/time Why do we need to have rules for order of operations? terms/units Why is 2 to the 0 power equal to 1? I don't understand how a number can be multiplied by itself zero times. Number Sense/ About Numbers Page: [<prev] 1 2 3 4 5 large numbers place value prime numbers square roots Word Problems Search the Dr. Math Library: [Privacy Policy] [Terms of Use] Home || The Math Library || Quick Reference || Search || Help © 1994-2014 Drexel University. All rights reserved. The Math Forum is a research and educational enterprise of the Drexel University School of Education.
{"url":"http://mathforum.org/library/drmath/sets/elem_multiplication.html?s_keyid=38676324&f_keyid=38676326&start_at=161&num_to_see=40","timestamp":"2014-04-16T13:42:24Z","content_type":null,"content_length":"16234","record_id":"<urn:uuid:afb809df-5e1e-4b0f-9712-521ac14c96a8>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: 3. The loads taken from an a.c. supply consist of: (a) a heating load of 15 kW; (b) a motor load of 40 kVA at 0.6 power factor; and (c) a load of 20 kW at 0.8 lagging power factor. Calculate: (i) the total load from the supply in kW and kVA and its power factor, (ii) the kVA rating of the capacitor to bring the power factor to unity. Draw the power triangle and show how the capacitor would be connected to the supply and the loads. Answers [59 kW, 75.4 kVA, 0.782; 47 kVAr] • one year ago • one year ago Best Response You've already chosen the best response. ..I got the total load from supply in KW..But i can't find the it in KVA...!! Using the formula.. Active power in watts / Apparent power in voltamperes = power factor I onverted it from KW to KVA and KVA to KW..But how to convert "a heating load of 15 kW" into KVA. That's where i'm stuck..:/.. Best Response You've already chosen the best response. can you draw the power triangle ? Best Response You've already chosen the best response. ...The power triangle is to be done later in the question.. ..Also To be able to do so..Don't i need apparent power and reactive power..?? So i first need to get the totel load power in KVA.. Can you help..?? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50f6d610e4b027eb5d996861","timestamp":"2014-04-19T07:16:44Z","content_type":null,"content_length":"33176","record_id":"<urn:uuid:853a1f21-0d08-4245-b2af-019ac0f2ba5a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Unknown mathematician makes historical breakthrough in prime theory 332 days ago Yitang Zhang is a largely unknown mathematician who has struggled to find an academic job after he got his PhD, working at a Subway sandwich shop before getting a gig as a lecturer at the University of New Hampshire. He's just had a paper accepted for publication in Annals of Mathematics, which appears to make a breakthrough towards proving one of mathematics' oldest, most difficult, and most significant conjectures, concerning "twin" prime numbers. According to the Simons Science News article, Zhang is shy, but is a very good, clear writer and lecturer. Since that time, the intrinsic appeal of these conjectures has given them the status of a mathematical holy grail, even though they have no known applications. But despite many efforts at proving them, mathematicians weren t able to rule out the possibility that the gaps between primes grow and grow, eventually exceeding any particular bound. Now Zhang has broken through this barrier. His paper shows that there is some number N smaller than 70 million such that there are infinitely many pairs of primes that differ by N. No matter how far you go into the deserts of the truly gargantuan prime numbers no matter how sparse the primes become you will keep finding prime pairs that differ by less than 70 million. The result is astounding, said Daniel Goldston, a number theorist at San Jose State University. It s one of those problems you weren t sure people would ever be able to solve. Unknown Mathematician Proves Elusive Property of Prime Numbers [Erica Klarreich/Wired/Simons Science News] (Photo: University of New Hampshire) Original: http://boingboing.net/2013/05/21/unknown-mathematician-makes-hi.html This blog is free & open source, however embeds may not be.
{"url":"http://www.minds.com/blog/view/83211/unknown-mathematician-makes-historical-breakthrough-in-prime%C2%A0theory","timestamp":"2014-04-18T13:06:26Z","content_type":null,"content_length":"33310","record_id":"<urn:uuid:ab02e826-5557-4fca-8d46-2a5929a6b248>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Solutions for Pell's Equation Date: 12/11/2000 at 03:20:29 From: Paul Weber Subject: Pell's equation I've been trying to find answers to Pell's equation (x^2 - Dy^2 = 1) for different D's, but I'm really having a problem. I've been looking in books and I simply don't understand some things such as the sqrt(D) convergents, or continued fraction expansion. If someone could explain this in layman's terms, I'd really appreciate it. Date: 12/11/2000 at 14:48:47 From: Doctor Rob Subject: Re: Pell's equation Thanks for writing to Ask Dr. Math, Paul. Your request is difficult to comply with, because solutions are intimately tied up with fractions close to the square root of D. That is because: x^2 - D*y^2 = 1 x^2 - 1 = D*y^2 (x^2-1)/y^2 = D sqrt(D) = sqrt(x^2-1)/y and for moderate sizes of x, sqrt(x^2-1) is approximately equal to sqrt(x^2) = x, so sqrt(D) is approximately x/y. Here is an algorithm for computing solutions, without an explanation of why it works, which you seem not to want. 0. Given: positive integer D, not a perfect square. 1. Initialize a(0) as the largest integer such that a(0)^2 < D; P(0) = 0 Q(0) = 1 A(-1) = 1 A(0) = a(0) B(-1) = 0 B(0) = 1 i = 0 2. Compute P(i+1) = a(i)*Q(i) - P(i) Q(i+1) = (D-P(i+1)^2)/Q(i) = Q(i-1) + a(i)*(P(i)-P(i-1)) 3. If Q(i+1) = 1 and i is odd, output the pair (A(i),B(i)), and stop. 4. Divide P(i+1)+a(0) by Q(i+1) getting integer quotient a(i+1). 5. Compute A(i+1) = a(i+1)*A(i) + A(i-1) B(i+1) = a(i+1)*B(i) + B(i-1) 6. Increase the value of i by 1 and and go back to Step 2. When you stop, the output of this algorithm is the smallest positive solution (x,y). - Doctor Rob, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/51581.html","timestamp":"2014-04-24T03:04:43Z","content_type":null,"content_length":"6776","record_id":"<urn:uuid:2dd23b87-6d97-4e5c-ba03-2a8cf2a92888>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Modified Nodal Analysis This document is comprised of a brief introduction to modified nodal analysis (MNA), 3 examples and some observations about the MNA method that will be useful for developing a computer algorithm. Though the node voltage method and loop current method are the most widely taught, another powerful method is modified nodal analysis (MNA). MNA often results in larger systems of equations than the other methods, but is easier to implement algorithmically on a computer which is a substantial advantage for automated solution. To use modified nodal analysis you write one equation for each node not attached to a voltage source (as in standard nodal analysis), and you augment these equations with an equation for each voltage source. To be more specific, the rules for standard nodal analysis are shown below: Node Voltage Method To apply the node voltage method to a circuit with n nodes (with m voltage sources), perform the following steps (after Rizzoni). 1. Selective a reference node (usually ground). 2. Name the remaining n-1 nodes and label a current through each passive element and each current source. 3. Apply Kirchoff's current law to each node not connected to a voltage source. 4. Solve the system of n-1-m unknown voltages. The difficulty with this method comes from having to consider the effect of voltage sources. Either a separate equation is written for each source, or the supernode method must be used. The rules for modified nodal analysis are given by: Modified Nodal Analysis To apply the node voltage method to a circuit with n nodes (with m voltage sources), perform the following steps (after DeCarlo/Lin). 1. Selective a reference node (usually ground) and name the remaining n-1 nodes. Also label currents through each current source. 2. Assign a name to the current through each voltage source. We will use the convention that the current flows from the positive node to the negative node of the source. 3. Apply Kirchoff's current law to each node. We will take currents out of the node to be positive. 4. Write an equation for the voltage each voltage source. 5. Solve the system of n-1 unknowns. Note: I will only discuss independent current and voltage sources. Dependent sources are a simple extension. See Litovski or DeCarlo/Lin for reference. As an example consider the circuit below (from the previous document) Consider the circuit shown below (Step 1 has already been applied) Apply step 2 (currents through the voltage sources with current from positive node to negative node): Apply step 3 (with positive currents out of the node): Apply step 4: Apply step 5: Now all that is left is to solve the 5x5 set of equations (recall that the nodal analysis method resulted in just 1 equation, though we did some substitutions along the way). Solving the 5x5 equation is difficult by hand, but not so with a computer. If you'll recall, the nodal analysis method became a bit more difficult when one or more of the voltage sources was not connect to ground. Let's repeat Example 2 of the previous page with MNA. Here the circuit is repeated with steps 1 and 2 completed: Steps 3 and 4 Step 5 The fact that V1 is not grounded presented no difficulty at all. Let's consider one more example, this time with a current source (this example is from Litovski). Steps 1 and 2 have been completed. Now complete steps 3 and 4: And finally bring all the know variables to the right hand side and complete step 5: If you examine the matrix equations that resulted from the application of the MNA method, several patterns become apparent that we can use to develop an algorithm. All of the circuits resulted in an equation of the form. Let us examine example 2. This circuit had 3 nodes and 2 voltage sources (n=3, m=2). The resulting matrix is shown below. Note that the pink highlighted portion of the A matrix is 3x3 (in general nxn), and includes only known quantities, specifically the values of the passive elements (the resistors). In addition the highlighted portion of the A matrix is symmetric with positive values along the main diagonal, and only negative (or zero) values for the off-diagonal terms. If an element is connected to ground, it only appears along the diagonal; a non-grounded (e.g. R2) appears both on and off the diagonal). The rest of the terms in the A matrix (the non-highlighted portion) contains only ones, negative ones and zeros. Note also that the matrix size is 5x5 (in general (m+n)x(m+n)). For all of the circuits we will analyze (i.e., only passive elements and independent sources), these general observations about the A matrix will always hold. Now consider the x matrix, the matrix of unknown quantities. It is a 1x5 matrix (in general 1x(n+m)). The topmost 3 (in general n) elements are simply the node voltages. The bottom 2 (in general m) elements are the currents associated with the voltage sources. This brings us to the z matrix that contains only known quantities. It is also a 5x1 matrix (in general (n+m)x1). The topmost 3 (in general n) elements are either zero, or the sum of independent current sources (see example 3 for an case in point). The bottom 2 (in general m) elements are the independent voltage sources. To summarize: MNA applied to a circuit with only passive elements (resistors) and independent current and voltage sources results in a matrix equation of the form: For a circuit with n nodes and m independent voltage sources: □ The A matrix: ☆ is (n+m)x(n+m) in size, and consists only of known quantities. ☆ the nxn part of the matrix in the upper left: ○ has only passive elements ○ elements connected to ground appear only on the diagonal ○ elements not connected to ground are both on the diagonal and off-diagonal terms. ☆ the rest of the A matrix (not included in the nxn upper left part) contains only 1, -1 and 0 (other values are possible if there are dependent current and voltage sources; I have not considered these cases. Consult Litovski if interested.) □ The x matrix: ☆ is an (n+m)x1 vector that holds the unknown quantities (node voltages and the currents through the independent voltage sources). ☆ the top n elements are the n node voltages. ☆ the bottom m elements represent the currents through the m independent voltage sources in the circuit. □ The z matrix: ☆ is an (n+m)x1 vector that holds only known quantities ☆ the top n elements are either zero or the sum and difference of independent current sources in the circuit. ☆ the bottom m elements represent the m independent voltage sources in the circuit. The circuit is solved by a simple matrix manipulation: Though this may be difficult by hand, it is straightforward and so is easily done by computer. In the next page we will use these observations to describe an algorithm for generating the matrices automatically. Back Erik Cheever's Home Page Please email me with any comments or suggestions
{"url":"http://www.swarthmore.edu/NatSci/echeeve1/Ref/mna/MNA2.html","timestamp":"2014-04-20T18:43:49Z","content_type":null,"content_length":"13612","record_id":"<urn:uuid:50cbc7de-7f7e-4a24-a065-e8a9df1a319e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Anders W Sandvik , Room 316. Office Hours: Mondays, 3 PM - 4 PM; Wednesdays, 11 AM - 12 PM Research Interests: Profssor Sandvik is a condensed-matter theorist specializing in computational research on interacting quantum many-body systems, in particular quantum spin systems. His research has two interrelated themes; (i) developing algorithms for simulations of complex model systems and (ii) using those methods to study collective phenomena such as quantum phase transitions. "Home Page":http:// physics.bu.edu/~sandvik "Computational Physics Course page":http://physics.bu.edu/~py502 Selected papers: M.Sc., 1989, "Åbo Akademi University":http://www.abo.fi/public/?setlanguage=en Ph.D., 1993, "University of California, Santa Barbara":http://www.ucsb.edu In the news: Research Descriptions: Computational studies of quantum phase transitions A continuous ground state phase transition occurring in a quantum-mechanical many-particle system as a function of some system parameter is referred to as a quantum phase transition. At the quantum-critical point separating two different types of ground states, the quantum fluctuations play a role analogous to thermal fluctuations in a phase transition occurring at nonzero temperature. An important aspect of these transitions is that the critical fluctuations and the associated scaling behavior of the quantum-critical point influences the system not only in the close vicinity of the ground-state critical point itself, but also in a wide finite-temperature region surrounding it. While many quantum phase transitions can be understood in terms of a mapping of the quantum mechanical problem onto a classical statistical-mechanics problem with an additional dimension (corresponding to time), recent attention has been focused on exotic transitions which fall outside the classical framework and may be important in strongly-correlated electronic systems such as the high-Tc cuprate superconductors. Prof. Sandvik’s group uses quantum Monte Carlo techniques to explore such transitions in model systems, primarily quantum spin systems. The purpose of this research is to find and characterize various quantum phase transitions in an un-biased (non-approximate) way, in order to provide benchmarks and guidance to developing theories. The influence of disorder (randomness) on the nature of quantum phase transitions is also studied. Quantum Monte Carlo algorithms Monte Carlo methods are powerful computational tools for studies of equilibrium properties of classical many-particle systems. Using a stochastic process for generating random configurations of the system degrees of freedom, such methods simulate thermal fluctuations, so that expectation values of physical observables of interest are directly obtained by averaging “measurements” on the configurations. In quantum mechanical systems (e.g., electrons in a metal or superconductor, localized electronic spins (magnetic moments) of a certain insulators, or atoms in a magnetic or optical trap), quantum fluctuations have to be taken into account as well, especially at low temperatures, and these pose a much greater challenge than thermal fluctuations alone. Several different quantum Monte Carlo techniques have been devised during the past three decades, but many challenges remain in developing efficient algorithms for reaching large system sizes and low temperatures, and extending the applicability to models that are currently intractable. Prof. Sandvik is the principal developer of a scheme known as Stochastic Series Expansion, which during the last few years has emerged as the quantum Monte Carlo method of choice for studies of several classes of spin and boson systems. Recently, the group has initiated a research program to develop algorithms for studying ground states of quantum spin systems in the so-called valence bond (singlet pair) basis. This approach shows great promise for studies of quantum phase transitions and may also be applicable to fermion and boson models.
{"url":"http://buphy.bu.edu/people/show/64","timestamp":"2014-04-19T11:57:35Z","content_type":null,"content_length":"18119","record_id":"<urn:uuid:9a4e5835-916c-421b-8bc5-b72efa91aa3c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
★MINI MONI MANIA★ You are currently browsing the tag archive for the ‘Complexity Theory’ tag. Happy birthday Koharu! In celebration of Koha’s birthday (07.15), here are a few double dactyls I’ve composed: Pancakey pancakey, Master chef Kirari Proves she can pancake-sort Faster than SHIPS. Quite inexplicably, This method takes but a Number of flips. Flavor Flav, baklava, Koha-chan’s talk of a “Genuine flavor” is Just a disguise— Hard-to-find particle Actually is quantum Largest in size. Hana wo Pu~n Kirari pikari, Koha and Mai, though Experts at tangent and Cosine and sine, Find themselves thwarted by Transforms affine. Konnichi pa Konnichi pa-pa-pa! Kirari’s tra-la-la Seizes the heartstrings and Moves one to tears. Poignantly touched by her Passersby nonetheless Cover their ears. More double dactyls are in the works, so stay tuned! Going over the last few installments of Pocket Morning Weekly Q&A, posted in translation at Hello!Online, one might notice a developing interest in mathematics by none other than Michishige Sayumi: Question: Is there something about which you’ve thought, “Certainly this year, I want to challenge myself with this!”? Michishige: Math Problems ☆ Question: Fill in the blank to the right with one word. “I’m surprisingly ___” Michishige: I’m surprisingly intellectual. Please try to understand that somehow. m(・-・)m Question: Among your fellow members, what about you makes you think, “At this, I definitely can’t lose!” Michishige: Simultaneous equations!! The evidence is indisputable. Sayumi is a math geek. XD While her fellow MoMusu are busy with more mundane interests, our Sayumi is off challenging herself with math problems (here, Sayu, try Project Euler) and has apparently discovered the wonders of linear algebra (I’m assuming at least some of those simultaneous equations are linear). No doubt Sayumi has mastered the techniques of Gauss-Jordan elimination, Cramer’s rule, and LU decomposition and is well on her way to achieving world domination. In addition to this, Sayumi has listed Tetris as a hobby and as a “special skill”. This is by far the geekiest interest I’ve seen in any H!P member. Because Tetris is not your average video game. It is a mind-stretching mathematical puzzle, and several of its subproblems are NP-complete. NP-complete, I tell you! This places it in the class of difficult problems that includes Boolean k -satisfiability, determining the existence of a Hamiltonian path, and Minesweeper. Sayumi is hardcore. For this, she gets an Excellence in Unabashed Geekitude Award. And I still need to give Koharu one, don’t I? Countdown! The Top 100 Hello! Project PVs My last post has apparently sparked a “laugh riot” of a debate that’s now more than three times as long as my original post. If you haven’t seen it yet, you may find it worth reading. Or maybe not. As always, I appreciate your feedback, positive or negative. It’s always good to know how effective my communication is. And now, on to the next batch: Recent Comments Matt ディトマソ on Koharu Kusumi: A Career Change… Kirarin☆Snow ☃ on Koharu Kusumi: A Career Change… Matt ディトマソ on Koharu Kusumi: A Career Change… stephen on Suugaku♥Joshi Gakuen, Episode… Stephen on [Puzzle Contest] Hello! Projec…
{"url":"https://minimonimania.wordpress.com/tag/complexity-theory/","timestamp":"2014-04-16T22:07:42Z","content_type":null,"content_length":"65692","record_id":"<urn:uuid:f1c57425-1fb6-4840-9175-1645d0b30c9f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Beiträge zur Algebra und Geometrie / Contributions to Algebra and Geometry, Vol. 42, No. 2, pp. 509-516 (2001) Inflection Points on Real Plane Curves Having Many Pseudo-Lines Johannes Huisman Institut Mathématique de Rennes, Université de Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex, France, e-mail: huisman@univ-rennes1.fr Abstract: \font\msbm=msbm10 \def\P{\hbox{\msbm P}} \def\R{\hbox{\msbm R}} A pseudo-line of a real plane curve $C$ is a global real branch of $C(\R)$ that is not homologically trivial in $\P^2(\R)$. A geometrically integral real plane curve $C$ of degree $d$ has at most $d-2$ pseudo-lines, provided that $C$ is not a real projective line. Let $C$ be a real plane curve of degree $d$ having exactly $d-2$ pseudo-lines. Suppose that the genus of the normalization of $C$ is equal to $d-2$. We show that each pseudo-line of $C$ contains exactly $3$ inflection points. This generalizes the fact that a nonsingular real cubic has exactly $3$ real inflection points. Keywords: real plane curve, pseudo-line, inflection point Classification (MSC2000): 14H45, 14P99 Full text of the article: [Previous Article] [Next Article] [Contents of this Number] © 2001 ELibM for the EMIS Electronic Edition
{"url":"http://www.emis.de/journals/BAG/vol.42/no.2/18.html","timestamp":"2014-04-20T21:46:06Z","content_type":null,"content_length":"2383","record_id":"<urn:uuid:63892a50-a09a-4f6b-a317-ac6fb6f4a1ce>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
πioneers (PI-oneers) Sob. This blog will be down by the end of july but instead there is a new blog at Sorry for the long URL, but oh well. Funny Video Zone will have alot of funny videos such as The Ultimate Showdown for Ultimate Destiny Sorry I haven't posted for a little while. I just finished finals and school for that fact but I still got a few things left in my brain. I will soon be introducing a new video series that you can find on YouTube and Google Video but also on the blog. The name is undecided but I'm thinking of "Minute Mathematician." Don't expect it for a little while. In other news I am thinking of a new blog theme for a new blog (no DUH!). I have had many (actually ALL) comments say this blog is too nerdy. I agree, and I don't care. But a new blog will come up as soon as plans are established. Have you ever been to the crest of a rollercoaster and have seen the whole amusment park and beyond. Well some guy with WAY too much time, created a formula to determine how far out you can see in The formula is pretty easy: (A=altitude in FEET) Distance in Miles= 1.22 X (square root of A) For example if a rollercoaster's crest is 400 feet above sea level than you would see for 24.4 miles out (1.22 X square root of 400[20]) If you ever looked at the seeds of a sunflower you will notice that they will follow a pattern that spirals out. If you count it, the seeds would spiral out this way: This is the Fibonacci Sequence. To find next value just add the two preceding values, e.g. This pattern is all over in nature: in sunflowers and and in pinapples to name a few. Become a Pi-oneer! All posts will be e-mailed to you immediately so you are always in the know! Pi Day is thought of being on March 14 every year, but do remember 22/7 is the closest fraction approximation of pi. In European style dates are written day/month as opposed to American style which is month/day soooo..... 22/7 would translate to (in European style)..... July 22, 2007 Pi day Returns! Many of you have been wondering if I was going to post anymore (hopefully). Recently, I have had restricted acess to the computer due to many physical changes to my house but its now over so I hope to continue posting regularrly. Here's a formula I found out by MYSELF (no books, websites, etc.) If n is any positive number then: 2n+1= (n+1)^2 - n When my parents think of the Simpsons, they think of yellow people who tell dirty jokes. But beneath all the political jokes and grossness, is mathematics. Yes you heard right, mathematics. For more information click here. Try to do these math problems quick. Those last few got pretty hard unless if you know the trick of the 11s. Example: 12 (I will only show the answer in the example.) x11 (You can only do this with 2 digit numbers) 1. Drop down the 1s digit in the non-11 number, this is the ones digit in the answer. 2 2. Add the two digits together in the non-11 number to get the tens digit. 32 3. Drop down the tens digit to get the digit in the hunderds place. 132 Easy as that! A few exceptions: If in step two, the numbers add up to be more then ten, then only use the ones digit and add one to the tens digit in step three. ~Sorry for the bad tutorial. Just type "Multiplying by 11" into google. Interesting fact: There are 519,024,039,293,878,272,000 different combinations on a rubik's cube. Only ONE combination is correct!
{"url":"http://pi-oneers.blogspot.com/","timestamp":"2014-04-21T04:31:41Z","content_type":null,"content_length":"66539","record_id":"<urn:uuid:89f22d58-f29c-4757-9c53-dba8ff6ec056>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming a Truth Table Programming a Truth Table I am trying to create a program that prompts the user to input 2 Equations, do a truth table, then compare them. Most of it is within my ability, but what I am struggling with is the parsing of the inputted equations. For example: F1 = !A * !B + A * B + !A * B F2 = !A + B For something like this, I just made a char pointer that scans through the string looking for multiplication, doing that first, then the addition. My problem is parsing and getting the order of operations down for something like this: F1 = !A * B (!C * !A * C) + B (A + !A * C) F2 = !((A + !B*!C + C) * ( B * C + !A + A * !B * !C)) So how would you parse something with parenthensis. Not necessarily asking for help with code, more theory in breaking this down into proper operational order. Thanks guys Simple. Everytime you encounter an open parantheses '(' simply keep track of where you are parsing. Then when you encounter a closed parantheses ')', calculate your logic up to the previous '(' and treat the problem as you would in your 1st example. Recursion. Any time you see an opening parenthesis, make a recursive call. Any time you see a closing parenthesis, make a return call. Otherwise, loop using standard mathmatics. [edit]Damnit! Stop beating me to the reply! :D[/edit]
{"url":"http://cboard.cprogramming.com/c-programming/25268-programming-truth-table-printable-thread.html","timestamp":"2014-04-19T13:04:50Z","content_type":null,"content_length":"7620","record_id":"<urn:uuid:53398053-32aa-4bcc-89f6-b5810f74517c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
221 Discrete Math for Information Technology Week 4 Mth/221 Discrete Math for Information Technology Week 4 Mth/221 Discrete Math for Information Technology Week 4 * MTH/221 Week Four Individual problems: Ch. 11 of Discrete and Combinatorial Mathematics Exercise 11.1, problems 8, 11 , text-pg:519 Exercise 11.2, problems 1, 6, text-pg:528 Exercise 11.3, problems 5, 20 , text-pg:537 Exercise 11.4, problems 14 , text-pg:553 Exercise 11.5, problems 7 , text-pg:563 Ch. 12 of Discrete and Combinatorial Mathematics Exercise 12.1, problems 11 , text-pg:585 Exercise 12.2, problems 6 , text-pg:604 Exercise 12.3, problems 2 , text-pg:609 Exercise 12.5, problems 3 , text-pg:621 Chapter 11 Exercise 11.1 Problem 8: Figure 11.10 shows an undirected graph representing a section of a department store. The vertices indicate where cashiers are located; the edges denote unblocked aisles between cashiers. The department store wants to set up a security system where (plainclothes) guards are placed at certain cashier locations so that each cashier either has a guard at his or her location or is only one aisle away from a cashier who has a guard. What is the smallest number of guards needed? Figure 11.10 Problem 11: Let G be a graph that satisfies the condition in Exercise 10. (a) Must G be loop-free? (b) Could G be a multigraph? (c) If G has n vertices, can we determine how many edges it has? Exercise 11.2 Problem 1: Let G be the undirected graph in Fig. 11.27(a). a) How many connected subgraphs ofGhave four vertices and include a cycle? b) Describe the subgraph G1 (of G) in part (b) of the figure first, as an induced subgraph and second, in terms of deleting a vertex of G. c) Describe the subgraphG2 (ofG) in part (c) of the figure first, as an induced subgraph and second, in terms of the deletion of vertices of G. d) Draw the subgraph of G induced by the set of vertices U _ {b, c, d, f, i, j}. e) For the graph G, let the edge e _ {c, f }. Draw the subgraph G − e. Figure 11.27 Problem 6: Find all (loop-free)...
{"url":"http://www.termpaperwarehouse.com/essay-on/Mth-221-Discrete-Math-For-Information-Technology/128209","timestamp":"2014-04-18T10:41:45Z","content_type":null,"content_length":"20810","record_id":"<urn:uuid:2fd038dc-25c7-4f92-b645-ab30837965a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
New one-way quantum computer design offers possibility of efficient optical information processing One of the most exciting and diverse fields of science today involves quantum information processing. There are many designs for quantum computers suggested, and a few that have been demonstrated. Among the demonstrated suggestions for a quantum computer is a one-way quantum computation process that makes use of a two-photon four-qubit cluster state. Kai Chen, a scientist at the Physikalisches Institut in Heidelberg, Germany and the University of Science and Technology of China (USTC) in Hefei, China, tells PhysOrg.com, “One-way quantum computing model was proposed years ago, but our experiment is a brand new demonstration of the computing model.” Chen and his team, lead by Prof. Jian-Wei Pan, which consists of colleagues from the Physikalisches Institut as well as from USTC and the National Chiao Tung University in Hsinchu, Taiwan, present their results in a Physical Review Letters piece titled, “Experimental Realization of One-Way Quantum Computing with Two-Photon Four-Qubit Cluster States.” “Our new model of quantum computing is different from the quantum circuit model, which has an input and an output.” Chen says. “We use two-photon cluster states, and information is written onto the cluster, processed, and read out from the cluster by one-particle measurements only.” He does point out that work is needed to produce this method of obtaining output: “We have designed a specific order and choices of measurements to get desired output.” Cluster states in quantum computing are highly entangled states deemed necessary in one-way quantum computing. In the quantum world, entanglement among quantum objects, such as qubits, is described with reference to the others, even though they may be spatially separated. Indeed, Chen and his colleagues performed their experiment showing a two-photon four-qubit cluster state entangling photons in both spatial and polarization modes. Chen says that this demonstration of quantum computing is more efficient than other photonic schemes. “Developing and using two-photon cluster states allows us to be four magnitudes more efficient than the previous sources. We are increasing the efficiency of quantum computing.” He also points out that the new design for photonic quantum computing developed by Pan’s team allows for high fidelity. “With the previous source, there is a lot of intrinsic noise due to multi-photon generation,” Chen says. “Using two-photon, our system offers much lower noise with a very high fidelity quantum gate.” This means that more of the information is passed on, and less of it is lost in background noise. Chen explains that this type of quantum computing is an optical quantum computer, using light. “We have designed a new scheme for producing the four-qubit cluster states, which are based on techniques that we have developed before for generating hyper-entangled states. With our new designs, the scheme is expected to motivate further progress in quantum computing.” He continues: “We think this quantum computing technique with optics has a very bright future.” What kind of a future? Chen and his colleagues are already working on ideas for the future of quantum information processing. “We are working on extending qubit numbers to perform more complicated tasks,” he says. In their experiment Chen and his peers implemented a Grover’s search algorithm. They hope that being able to increase their cluster states to eight qubits or more will “exponentially increase the ability to do quantum computing.” Chen continues: “If we combine our technique of optics with quantum memory using atoms, we can extend our abilities of performing quantum computation and quantum communication. One can think that in the future, we can get a true quantum computer, and have a global quantum network.” Copyright 2007 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.
{"url":"http://phys.org/news110454259.html","timestamp":"2014-04-18T11:00:10Z","content_type":null,"content_length":"68345","record_id":"<urn:uuid:770de770-a11d-4c6d-801b-c9caca656c5d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
GMRES: A generalizedminimal residual algorithm for solving nonsymmetric linear systems , 1995 "... In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applicati ..." Cited by 11 (7 self) Add to MetaCart In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly yielding nonzero residuals. This paper analyzes the effect of the inexactness onto the convergence of TRIP SQP and gives practical rules to control the size of the residuals of these inexact calculations. It is shown that if the size of the residuals is of the order of both the size of the constraints and the trust-region radius, t... - Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign , 1992 "... . We propose a method for the solution of sparse linear, nonsymmetric systems AX = B where A is a sparse and nonsymmetric matrix of order n while B is an arbitrary rectangular matrix of order n \Theta s with s of moderate size. The method uses a single Krylov subspace per step as a generator of app ..." Cited by 4 (1 self) Add to MetaCart . We propose a method for the solution of sparse linear, nonsymmetric systems AX = B where A is a sparse and nonsymmetric matrix of order n while B is an arbitrary rectangular matrix of order n \ Theta s with s of moderate size. The method uses a single Krylov subspace per step as a generator of approximations, a projection process, and a Richardson acceleration technique. It thus combines the advantages of recent hybrid methods with those for solving symmetric systems with multiple right hand sides. Numerical experiments indicate that provided hybrid techniques are applicable, the method has significantly lower memory requirements and better practical performance than block versions of nonsymmetric solvers such as GMRES. Unlike block BCG it does not require the use of the transpose, it is not sensitive to the right hand sides and it can be used even when not all the elements of B are simultaneously available. AMS(MOS) subject classifications. 65F10,65Y20 1. Introduction. We consider ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3068210","timestamp":"2014-04-18T07:10:46Z","content_type":null,"content_length":"16058","record_id":"<urn:uuid:ffd60dc1-a624-4baa-97d6-3e37d42e2fb8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Patente US5319308 - Real time magnetic resonance analysis with non-linear regression means This application is closely related to the U.S. Pat. No. 5,015,954 issued on 14 May 1991 to Dechene et al., and to U.S. Pat. No. 5,049,819 issued Sep. 17, 1991 to Dechene et al., both entitled "Magnetic Resonance Analysis in Real Time, Industrial Usage Mode"; and to U.S. patent application Ser. No. 07/794,931 filed Nov. 20, 1991 entitled "Improved Magnetic Resonance Analysis in Real Time, Industrial Usage Mode"; and to U.S. patent application Ser. No. 07/885,633 filed May 19, 1992 entitled "NMR Analysis of Polypropylene in Real Time". All of these patents and patent applications are of common assignment with this application, and the disclosures of all are hereby incorporated herein by reference, as though set out at length herein. The present invention relates to the analytic means relating a Nuclear Magnetic Resonance (NMR) free induction decay (FID) curve to the physical quantities of the target nuclei in the samples under test. More particularly, the present invention relates to regression means that provides correlation between the function equations derived from the FID and the types, properties and quantities of target nuclei in the sample under test. Pulsed NMR techniques are used in instruments for the measurement of the type, property and quantity of lattice bound and free, magnetically active nuclei within a sample. Some of the substances and properties that have been measured by NMR techniques are: moisture, polymers and copolymers, oils, fats and crystalline materials. Pulsed NMR uses a burst or pulse of energy that is designed to excite the nuclei of a particular nuclear species of a sample being measured (the protons, or the like, of such sample having first been precessed in an essentially static magnetic field); in other words the precession is modified by the pulse. After the application of the pulse there occurs a free induction decay (FID) of the magnetization associated with the excited nuclei. That is, the transverse magnetization associated with the excited nuclei relaxes back to its equilibrium value of zero. This relaxation produces a changing magnetic field which is measured in adjacent pickup coils. A representation of this relaxation is the FID curve. The analysis method described herein and in the above related patents and applications is to decompose the FID waveform into a group of separate time function equations. The coefficients of these equations are derived from the FID by use of a Marquardt-Levenberg (M-L) iterative approximation that minimizes the Chi-squared function--a technique well known in the art. Some of the time function equations found useful are: Gaussians, exponentials, Abragams, and trigonometric. From these time functions a set of parameters is calculated. Some of these parameters are ratios of the y-axis intercepts, squares and cross products of these ratios, and decay times for each of the time curves. In addition the sample temperature may form the basis for another parameter. But, relating these parameters, quantitatively and qualitatively, back to the species of target nuclei is required. In the above referenced patent applications, the system is calibrated with known samples, and a `regression line` is generated which relates the parameters to the types, properties and quantities of the target nuclei. An unknown sample is introduced and the time functions are derived via the M-L iteration, and the parameters are calculated. The parameters are "regressed" via the "regression line" to yield the types, properties and quantities of target nuclei in the unknown sample. That is, the measured parameters from the unknown FID are used with the "regression line", and the types, properties and quantities in the unknown sample are determined. It is to be understood that the multidimensional "regression line" may not be graphically represented. As a simple regression technique example, consider that the grade point average of each of the students at a college were related to that student's SAT score and high school standing (forming a three dimensional space). The line formed is a "regression line" (which may be graphed). A new student's grade point average may be predicted by placing the student's SAT and high school standing on the "regression line,, and "reading" the grade point average. It is a principal object of the present invention to relate the type, property and quantity of target nuclei of interest accurately and precisely. The above object is met in an NMR system that effects a reliable extraction of free induction decay data in a way that is practical in a wide variety of applications, including industrial and medical. The NMR system is calibrated by measuring known samples of target nuclei and, from the FIDs generated, forming a multi-dimensional, non-linear regression relationship to the types, properties and quantities of target nuclei. The FIDs are decomposed or transformed into a set of equations for the calibration samples from which a set of parameters is generated. From these parameters a non-linear regression function is calculated relating the type, property and quantity of target nuclei to the parameters. An unknown sample FID is decomposed or transformed as were the known samples, the parameters are calculated and these parameters are used with the non-linear regression function to determine the type, property and quantity of target nuclei in the unknown sample. In a preferred embodiment, the FID is decomposed into multiple time equations via M-L processes and parameters are calculated for each of these time equations. In another preferred embodiment the parameters are non-dimensional in order to eliminate concentrations and the like from the measurements. The present invention may be used to advantage with any number or type of time or frequency functions derived from an FID waveform, including Fourier transform functions. Other objects, features, and advantages will be apparent from the following detailed description of preferred embodiments taken in conjunction with the accompanying drawing(s) in which: FIGS. 1-2 is a block/schematic drawing of a pulsed NMR system suitable for measuring a range of industrial materials, FIG. 3 is a graphical representation of an FID and its component curves, FIG. 4 is a flow chart of a preferred embodiment of the present invention, and FIG. 5 is a flow chart of the steps to establish an effective industrial measurement. FIG. 1 shows transverse and cross sections with block diagram inserts of an NMR apparatus and method where the present invention may be used to advantage. An industrial process line IPL has material flowing as indicated by arrow A. Some of the material is captured by a probe P and fed through an inlet line LI to a sample region S1. The region is defined by a tube 98 typically about 30 cm long made of an essentially non-magnetic, nonconducting material which does not itself generate substantially interfering FID signals (glass, certain ceramics, certain plastics or hybrids may be used). The sample region is defined between inlet and outlet valves V1 and V2. Gas jets J are also provided. These are pulsed on/off repeatedly to agitate fluent sample materials during sample admission and expulsion. The region S2 is the critical portion of the sample. It is surrounded by a sample coil 100 tuned to resonance and driven by a tuning circuit 102 and related transmitter/receiver controller 104. Grounded loops 101 are Lenz Law shields which are provided above and below coil 100 to help shape the field of coil 100--i.e., contain the field established by an excitation pulse. The controller 104 includes an on-board microprocessor and required power supply elements, memory, program and I/O decoding suitable to interconnect to the hardware shown and to an external microcomputer 106 with keyboard 108, monitor (or other display) 110, recorder 112 and/or process controller 114 (to control the process at IPL). The operator initiates and controls operation from the display keyboard 108 and the resulting data and signals are subsequently shown on the display 110 and utilized in 112 and/or 114. The computer 106 also controls instrument operation conditions. The region S2 of tube 98 and coil 100 are in a static, but adjustable, crossing magnetic field defined by a magnetic assembly 116 which comprises a yoke 118, pole pieces 120, surrounding Helmholtz coils 124, and a coil current generator 117. The critical sample region S2 of the tube 98 and magnet are contained in a metallic (but non-ferromagnetic) box 126 with highly thermally conductive face-plates 128 and internal partitions 130 and over-all mass related to each other to minimize harmonics and other interferences with a signal emitted from coil 100 to a sample and/or returned from the sample for pick-up by coil 100 and its tuned circuit 102 and transmit/receive controller 104. The magnetic assembly 116 including yoke 118, and other parts therein as shown on FIGS. 1-2, is in turn contained in an environmental control chamber 132 with optional inert gas fill and purge controls (not shown), an internal gas heater 134, a motor M driving fan 136, and a temperature sensor 138 which can be applied to the yoke or other detection region whose temperature is reflective of the temperature at pole pieces 120 and in the sample region therebetween. A thermal controller 140 processes temperature signals from 138 to adjust heating/circulation at 134/136 as a coarse control and to adjust current through the Helmholtz coils 124 at magnet pole pieces 120 as a sensitive and fast fine control, as well as implementing general control instructions of computer 106. Further thermal stabilization may be provided by a closed loop heat exchanger 142 having pump 144 and coils 146 attached to yoke 118 and coils 148 attached to the plates 128 of box 126. The strength, consistency and constancy of the magnetic field between poles 120 in the region S2 of the sample is thus controlled by a uniform base magnetic field in the entire region S2. The Helmholtz coils 124 are energized by the coil current controller 117 to accurately trim the final magnitude of the field in which the sample is placed. This field is the vector addition of the fields due to the magnet poles 120 and the Helmholtz coils 124. The controller 117 sets the current through the Helmholtz coils 124 using current generators. The coils 124 are wound around the magnet pole pieces such that the magnetic field created by the current in the coils 124 can add to or subtract from the field created by the magnet pole pieces. The magnitude of the current through the coils 124 determines the strength of the field added to or subtracted from the field due to the magnet pole pieces (and related yoke structure) alone. The actual determination of the current through the Helmholtz coils is accomplished by carrying out the magnetic energy and resonance techniques hereinafter described in preliminary runs and adjusting Helmholtz current until the maximum sensitive resonance is achieved, and then setting the Helmholtz current off resonance by a given offset, of about 0.1-3 KHz. The major elements of electrical controls are tuner 102, including coils 100 and 101 and variable capacitors 102-1 and 102-2, resistor 102-3 and diodes 102-4 and constructed for tuning to Q of twenty to sixty to achieve coil 100 resonance, and control 104 including a transmit/receive switch 104-1, a transmitter 104-2 and receiver 104-3, a crystal oscillator 104-4, gated pulse generator (PPG) 104-5, and phase shifter 104-6. The crystal provides a nominal twenty Megahertz carrier which is phase modulated or demodulated by the MOD, DEMOD elements of transmitter 104-2 and receiver 104-3. The receiver includes variable gain amplifier elements 104-31 and 104-32 for operation. The analog signals received are fed to a high speed at least 12 bit flash A/D converter 105-1 and internal (to the instrument) CPU element 105-2, which provides data to an external computer 106 which has a keyboard 108, monitor 110, modem 109, recording elements 112 and process controller elements 114, e.g., for control of valves V1, V2 via valve controls 115 and/or to coil current controls 117, all via digital-analog converters (not shown). The analog signal FID curve is conditioned by a Bessel filter which acts as a prefilter and an anti-aliasing filter as the subsequent sampling is done at 10 MHz. After digitization the signal may be time smoothed by a fast Fourier transform filter program. The combination of these filters produces a relative improvement in signal to noise ratios which enhances the accuracy of the system. The excitation of coil 100 and excitation-precession of the sample's proton content and subsequent relaxation/decay produces a received FM signal that, after demodulation, controlled gain amplification, and A/D conversion produces the free induction decay (FID) curve. Referring to FIG. 3, the digitized FID curve data are stored in the external computer 106 where a program finds the best component curves to fit each stored FID curve. In this preferred embodiment there are three component curves, a fast Gaussian, a slow modified Gaussian and an exponential. Other preferred embodiments have more or less than three component curves and other curve types. The determination of the types of curves which make up the FID curve is important because, once the curves are known, they can be extended back to a time origin (shown as A.sub.O, B.sub.O and E.sub.O at tO, i.e., excitation of a Cycle 1), which is close to the center of the transmitted burst signal. This is important since there are saturation effects of the instrument's electronic gear which occur during and immediately after the excitation burst signal. During this time, measurements cannot be accurately taken, yet the area of interest under the curve, which is a measure of the number of nuclei in the sample, extends from the immediate end of the excitation burst to where the curve is too small to be digitized or it is in the noise. The entire curve is decomposed into component curves and these curves are fitted to the data by an iterative process based upon the Marquardt-Levenberg (M-L) approximation technique applied automatically through a structured realization in software. This technique is used to determine the magnitude of all the parameters, constants, frequencies, etc. which best fit the FID curve. This is an iterative technique where the entire curve is determined at once. The M-L iteration process performs the curve fitting by attempting to minimize the Chi-Squared error function (the sum of the squared differences between the measured data points and the data points from the derived equation). The results of the M-L approximation are accepted if the Chi Squared error is small enough, if not, the M-L fitting procedure may be reapplied with a different set of starting guesses. If this process also fails, the sample is discarded and a new sample obtained. The M-L technique is documented in the following references: Ind. Appl. Math., vol. 11, pp. 431-441 by D. W. Marquardt, 1963; Data Reduction and Error Analysis for the Physical Sciences (New York, McGraw Hill), Chapter 11 by Philip R. Bevington, 1969; and The State of the Art in Numerical Analysis (London: Academic Press, David A. H. Jacobs, ed 1977), chapter III.2 by J. E. Dennis. As applied to the measurement regime of interest herein, in a preferred embodiment of the present invention, the selected parameters taken from the derived curves are the y-axis intercept ratios, time constants, frequency terms and other parameters described below. Other known in the art iterative techniques which may be applied instead of or with the Marquardt-Levenberg, include: Gauss-Newton and "steepest descent" (found in the above J. E. Dennis reference), Newton-Raphson (known in the art), or like techniques, including combinations of these techniques. One of the major difficulties in making use of iterative curve fitting techniques (such as Marquardt-Levenberg) is their tendency to reach incorrect solutions. Such solutions frequently (but not always) contain parameters which would imply a negative quantity of protons or an exponential "decay" which grows with time. These incorrect solutions lead to serious errors in the result found for a physical sample, for example, the density in polyethylene or the quantity of xylene solubles in polypropylene. The usual methods of handling these difficulties have been: (1) have a human evaluate the result and eliminate those solutions that are ridiculous, and/or (2) put a series of upper and lower bounds on each parameter beyond which the fitting procedure is forbidden to go. In an on-line situation where readings are generated every few minutes, the first approach obviously cannot be used, and in the case of polyolefins the second approach fails because the bounds for each parameter depend on the actual values of the other parameters (recall that for polypropylene and polyethylene the model equations involve ten or more parameters). We have evolved a Marquardt Reference Ratio (MRR) to handle this difficulty. MRR is a ratio although other techniques (differences, for example) could be used. As discussed herein, the techniques to find a property of an unknown sample include calibration by applying the M-L technique to reach solutions for a group of FIDs of samples with known properties. The various amplitudes and time constants in the solutions are combined to produce a number of ratios, cross products and higher order parameters. These parameters may undergo various non-linear transformations and are finally regressed multi-dimensionally to obtain the coefficients of the regression equation to use in predicting a property of an unknown sample, say, for example, density. Each of the parameters contributes to the overall prediction of density. Moreover, in the nature of things, these parameters tend to be relatively highly correlated among themselves, e.g., a large crystalline content must necessarily correspond to a small amorphous content (comparing the modified Gaussian to the exponential in the polyethylene FID solution). This means that overlapping density information is contained in many of the parameters used in the regression equation. Similar arguments apply to other properties, such as xylene solubles in polypropylene. To make use of this high correlation (continuing the density example), the parameters are divided into subgroups (two roughly equal groups in a preferred embodiment) and each of these groups is regressed on density to obtain two further predictions of density based on each subgroup, as follows: D1 (density)=F(subgroup 1) D2 (density)=G(subgroup 2) Because of the correlation, discussed above, among the parameters, the functions F and G (above) result in predictions D1 and D2 which are only slightly less accurate than the density prediction based on the entire set of variables. The ratio (MRR) or the difference (MDR) are formed as follows: MRR has a nominal value of one and MDR zero. MRR and MDR are sensitive measures of whether or not a particular proposed M-L solution for an unknown sample belongs to the set of (calibrated) data from which the functions F and G were derived. If the calculated ratio or difference of D1 and D2 for a proposed M-L solution for fitting the FID of an unknown sample lies outside reasonably well-defined limits (usually +/-3 sigma), the proposed M-L solution may be assumed to be bad and is discarded. Once the equation of the FID curve is known, each component curve can be extrapolated back to the mid point of the excitation signal to establish the intercept of each said component curve. The resulting data utilized in the computer 106 (FIGS. 1-2) is the equation for the FID curve as composed of a number of component curves. Each of these curves (and their intercepts) has been experimentally and theoretically related to particular nuclei of interest. In particular, when the FID curve equation is determined, the ratios of the y-axis intercepts, the cross product and squares of these ratios and the decay times for each of the curve components, the product temperature and a cosine term form a multidimensional model. Calibration of the system is accomplished by measuring a number of known samples and using the M-L technique to derive the model equation constants associated with each known sample. Various non-linear transforms may then be applied to these constants, usually with the goal of linearizing their relationship to the dependent (i.e., predicted) parameter. Useful non-linear functions include exponential, logarithmic, powers and cross products of the independent (i.e., measured) parameters. FIG. 4 is a flow chart of the steps used in a preferred embodiment of the present invention. The first step is to measure samples with known types and quantities of target nuclei. The FID curve is digitized via a flash converter of at least 12 ands preferably 14 bits accuracy and stored in computer memory. The next step is to apply the M-L iterative process to derive curve coefficients from the stored FIDs to a given Chi Squared error. In step three the ratios of Y-axis intercepts, squares and cross products of these ratios, decay times and temperatures are calculated. In the next step, the various non-linear transformations to be used are determined, and the types, properties and quantities of target nuclei in the known samples are related to the constants by a regression against these transformed parameters--the "regression function." Step five is to record, digitize and store the FID for an unknown sample and derive the curve coefficients. The parameters are calculated for the unknown, and these parameters with desired non-linear transforms are used in the regression equation to determine the actual type, property and quantity of target nuclei in the unknown sample. Ratios are used since constants with dimensions of weight would require the samples to be carefully weighed before measuring. FIG. 5 is a flow chart showing the steps of measurement to establish effective industrial measurement. A single FID curve is established to see if the sample area is clear (Quick FID) in an abbreviated cycle of attempting to establish an FID curve. If the sample region is not clear (N), measurement is interrupted to allow valve V2 to open and jets J and gravity to clear the region. A new Quick FID step establishes clearance. Then another sample is admitted by closing valve V2, opening valve V1 and making such adjustments of probe P and line L1 as may be necessary to assure sample acquisition. Jets J adjust and stabilize the new sample. Temperature controls 134-138 and 142-146, described above, may be used to establish very coarse and less coarse thermal controls countering sample and ambient temperature variations. An electronic signal processing apparatus baseline is established in 3-4 cycles (each having (+) and (-) subcycles with addition of (C+) and (C-) to detect a baseline offset and compensate for it). It would be feasible to avoid this baseline step determination and simply deal with it as an additional parameter (i.e. eleventh dimension in the M-L analysis, but this would increase iteration Further adjustment is established by coils 124 to adjust HO (i.e., resonance) and this is enabled by ten to twenty field check cycles of FID curve generation. The (C-) FID is subtracted from the (C+) FID, (this process eliminates small baseline offsets) to obtain a workable digitized FID signal--which has a maximum value at resonance. HO is adjusted via coil current generator 117 and coils 124 until such maximum is achieved, and then HO is changed to offset the system a known amount from resonance. Adequate field adjustment is usually made in less than seven cycles. Then five to one hundred cycles are conducted to obtain a useable measurement. Each of those five to one hundred cycles involves a modulated transmission/reception/flash A-D conversion, and storage of data. The curves are then averaged for M-L curve fitting, and the above listed intercepts and ratios are established. Similar cycles, but somewhat abbreviated can be applied for Quick FID, field check and baseline correction purposes. Each of the sub-cycles [(+) and (-)] of each such cycle involves a capture and utilization of thousands of FID points in data reduction. Each sub-cycle occurs on the order of a second and the number of such sub-cycles employed depends on the desired smoothing and signal to noise ratio (S/N); generally S/N improves in a square root relationship to the number of cycles. As noted in above cited Dechene et al. references, in requiring greater accuracy and reliability, sample tube composition can distort readings. If glass is not used (and it is preferred to avoid glass in industrial usage), then the replacement should not be a hydrocarbon plastic. But fluorocarbons can be effective in several applications since signals from fluorine appear far from resonance. These signals can be distinguished from hydrogen at the levels of sensitivity required and if desired can be filtered (or distinguished). In other cases of higher sensitivity measurements, e.g., for gauging relative proportions of amorphous and crystalline species in mixtures thereof, the sample container should be glass or non-protonic ceramic. In some instances, however, fluorocarbon or reinforced fluorocarbon can be used acceptably for polymer measurements. In all such cases the point is to avoid sample containers with species that can couple with transmitted energy and generate a FID decay curve mimicking the samples. It will now be apparent to those skilled in the art that other embodiments, improvements, details, and uses can be made consistent with the letter and spirit of the foregoing disclosure and within the scope of this patent, which is limited only by the following claims, construed in accordance with the patent law, including the doctrine of equivalents.
{"url":"http://www.google.es/patents/US5319308?hl=es&ie=ISO-8859-1&dq=flatulence","timestamp":"2014-04-18T23:38:21Z","content_type":null,"content_length":"97683","record_id":"<urn:uuid:aae4ed56-1a99-4996-9613-0fac7caf08ca>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Fully-compressed suffix trees Results 1 - 10 of 20 , 2009 "... Suffix trees are among the most important data structures in stringology, with a number of applications in flourishing areas like bioinformatics. Their main problem is space usage, which has triggered much research striving for compressed representations that are still functional. A smaller suffix t ..." Cited by 16 (9 self) Add to MetaCart Suffix trees are among the most important data structures in stringology, with a number of applications in flourishing areas like bioinformatics. Their main problem is space usage, which has triggered much research striving for compressed representations that are still functional. A smaller suffix tree representation could fit in a faster memory, outweighing by far the theoretical slowdown brought by the space reduction. We present a novel compressed suffix tree, which is the first achieving at the same time sublogarithmic complexity for the operations, and space usage that asymptotically goes to zero as the entropy of the text does. The main ideas in our development are compressing the longest common prefix information, totally getting rid of the suffix tree topology, and expressing all the suffix tree operations using range minimum queries and a novel primitive called next/previous smaller value in a sequence. Our solutions to those operations are of independent - In ESA , 2011 "... Abstract. Self-indexes can represent a text in asymptotically optimal space under the k-th order entropy model, give access to text substrings, and support indexed pattern searches. Their time complexities are not optimal, however: they always depend on the alphabet size. In this paper we achieve, f ..." Cited by 14 (10 self) Add to MetaCart Abstract. Self-indexes can represent a text in asymptotically optimal space under the k-th order entropy model, give access to text substrings, and support indexed pattern searches. Their time complexities are not optimal, however: they always depend on the alphabet size. In this paper we achieve, for the first time, full alphabet-independence in the time complexities of self-indexes, while retaining space optimality. We obtain also some relevant byproducts on compressed suffix trees. 1 - In Proceedings of the 19th Annual Symposium on Combinatorial Pattern Matching, volume 5029 of LNCS , 2008 "... Abstract. Suffix trees are among the most important data structures in stringology, with myriads of applications. Their main problem is space usage, which has triggered much research striving for compressed representations that are still functional. We present a novel compressed suffix tree. Compare ..." Cited by 13 (10 self) Add to MetaCart Abstract. Suffix trees are among the most important data structures in stringology, with myriads of applications. Their main problem is space usage, which has triggered much research striving for compressed representations that are still functional. We present a novel compressed suffix tree. Compared to the existing ones, ours is the first achieving at the same time sublogarithmic complexity for the operations, and space usage which goes to zero as the entropy of the text does. Our development contains several novel ideas, such as compressing the longest common prefix information, and totally getting rid of the suffix tree topology, expressing all the suffix tree operations using range minimum queries and a new primitive called next/previous smaller value in a sequence. 1 - In Proc. 15th SPIRE, LNCS 5280 , 2008 "... Abstract. A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can ..." Cited by 12 (8 self) Add to MetaCart Abstract. A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. This paper is devoted to studying ways to store massive sets of highly repetitive sequence collections in space-efficient manner so that retrieval of the content as well as queries on the content of the sequences can be provided time-efficiently. We show that the state-of-the-art entropy-bound full-text self-indexes do not yet provide satisfactory space bounds for this specific task. We engineer some new structures that use run-length encoding and give empirical evidence that these structures are superior to the current structures. 1 "... A repetitive sequence collection is a set of sequences which are small variations of each other. A prominent example are genome sequences of individuals of the same or close species, where the differences can be expressed by short lists of basic edit operations. Flexible and efficient data analysis ..." Cited by 11 (9 self) Add to MetaCart A repetitive sequence collection is a set of sequences which are small variations of each other. A prominent example are genome sequences of individuals of the same or close species, where the differences can be expressed by short lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection is plausible using suffix trees. However, the suffix tree occupies much space, which very soon inhibits in-memory analyses. Recent advances in full-text indexing reduce the space of the suffix tree to, essentially, that of the compressed sequences, while retaining its functionality with only a polylogarithmic slowdown. However, the underlying compression model considers only the predictability of the next sequence symbol given the k previous ones, where k is a small integer. This is unable to capture longer-term repetitiveness. For example, r identical copies of an incompressible sequence will be incompressible under this model. We develop new static and dynamic full-text indexes that are able of capturing the fact that a collection is highly repetitive, and require space basically proportional to the length of one typical sequence plus the total number of edit operations. The new indexes can be plugged into a recent dynamic fully-compressed suffix tree, achieving full functionality for sequence analysis, while retaining the reduced space and the polylogarithmic slowdown. Our experimental results confirm the practicality of our proposal. "... The suffix tree is an extremely important data structure for stringology, with a wealth of applications in bioinformatics. Classical implementations require much space, which renders them useless for large problems. Recent research has yielded two implementations offering widely different space-time ..." Cited by 8 (2 self) Add to MetaCart The suffix tree is an extremely important data structure for stringology, with a wealth of applications in bioinformatics. Classical implementations require much space, which renders them useless for large problems. Recent research has yielded two implementations offering widely different space-time tradeoffs. However, each of them has practicality problems regarding either space or time requirements. In this paper we implement a recent theoretical proposal and show it yields an extremely interesting structure that lies in between, offering both practical times and affordable space. The implementation of the theoretical proposal is by no means trivial and involves significant algorithm engineering. - COMBINATORIAL PATTERN MATCHING. LNCS , 2010 "... The field of compressed data structures seeks to achieve fast search time, but using a compressed representation, ideally requiring less space than that occupied by the original input data. The challenge is to construct a compressed representation that provides the same functionality and speed as t ..." Cited by 7 (1 self) Add to MetaCart The field of compressed data structures seeks to achieve fast search time, but using a compressed representation, ideally requiring less space than that occupied by the original input data. The challenge is to construct a compressed representation that provides the same functionality and speed as traditional data structures. In this invited presentation, we discuss some breakthroughs in compressed data structures over the course of the last decade that have significantly reduced the space requirements for fast text and document indexing. One interesting consequence is that, for the first time, we can construct data structures for text indexing that are competitive in time and space with the well-known technique of inverted indexes, but that provide more general search capabilities. Several challenges remain, and we focus in this presentation on two in particular: building I/O-efficient search structures when the input data are so massive that external memory must be used, and incorporating notions of relevance in the reporting of query answers. , 2011 "... We introduce LZ-End, a new member of the Lempel-Ziv family of text compressors, which achieves compression ratios close to those of LZ77 but performs much faster at extracting arbitrary text substrings. We then build the first self-index based on LZ77 (or LZ-End) compression, which in addition to te ..." Cited by 4 (1 self) Add to MetaCart We introduce LZ-End, a new member of the Lempel-Ziv family of text compressors, which achieves compression ratios close to those of LZ77 but performs much faster at extracting arbitrary text substrings. We then build the first self-index based on LZ77 (or LZ-End) compression, which in addition to text extraction offers fast indexed searches on the compressed text. This self-index is particularly effective to represent highly repetitive sequence collections, which arise for example when storing versioned documents, software repositories, periodic publications, and biological sequence databases. , 2009 "... algorithms ..." "... Abstract. A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can ..." Cited by 3 (1 self) Add to MetaCart Abstract. A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on a such typically huge collection is plausible using suffix trees. However, suffix tree occupies O(N log N) bits, which very soon inhibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example on suffix tree of Human Genome. However, this reduction factor remains constant when more sequences are added to the collection. We develop a new family of self-indexes suited for the repetitive sequence collection setting. Their expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction factor is no longer constant, but depends on N/n. We believe the structures developed in this work will provide a fundamental basis for storage and retrieval of individual genomes as they become available due to rapid progress in the sequencing
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=503554","timestamp":"2014-04-17T16:04:16Z","content_type":null,"content_length":"38365","record_id":"<urn:uuid:f40e60d9-8193-416a-88e8-1d85fa40ce5d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] 2 Easy questions I guess...how do I do these problems? March 19th 2007, 04:52 PM [SOLVED] 2 Easy questions I guess...how do I do these problems? #1) Suppose that there are 10 candidates for a prospective job, and only 3 of them have taken STA 2023 in the past. If you select two candidates at random from these 10, what is the probability that both candidates have taken STA 2023 in the past? Please type your answer in as a percentage, rounded to the nearest whole percent. (NOTE: Do NOT round any decimals or percentages in the middle of your calculations. Only round your final answer.) #2) Suppose you are playing a game that involves a spinner with 4 possible results. The spinner lands in the red zone 20% of the time, the yellow zone 30% of the time, the green zone 35% of the time, and the purple zone 15% of the time. Assume each spin is independent of all other spins. In the game, you spin twice in a row on a single turn. Let A = {1st spin of a turn lands in the red zone} and B = {2nd spin of a turn lands in the yellow zone}. In a single turn, what is P(A or B)? March 20th 2007, 12:34 PM Hello, YogiBear21! 1) Suppose that there are 10 candidates for a prospective job, and only 3 of them have taken STA 2023 in the past. If you select two candidates at random from these 10, what is the probability that both candidates have taken STA 2023 in the past? P(1st took STA2023) .= .3/10 P(2nd took STA2023) .= .2/9 P(both took STA2023) .= .(3/10)·(2/9) .= .1/15 .≈ .7% #2) Suppose you are playing a game that involves a spinner with 4 possible results. The spinner lands in the red zone 20% of the time, the yellow zone 30% of the time, the green zone 35% of the time, and the purple zone 15% of the time. Assume each spin is independent of all other spins. In the game, you spin twice in a row on a single turn. Let A = {1st spin of a turn lands in the red zone} and B = {2nd spin of a turn lands in the yellow zone}. In a single turn, what is P(A or B)? We are given: .P(A) = 0.2, .P(B) = 0.3 . . Since the events are independent: .P(A ∩ B) = (0.2)(0.3) = 0.06 Formula: .P(A U B) .= .P(A) + P(B) - P(A ∩ B) . . Therefore: .P(A U B) .= .0.2 + 0.3 - 0.06 .= .0.44
{"url":"http://mathhelpforum.com/statistics/12754-solved-2-easy-questions-i-guess-how-do-i-do-these-problems-print.html","timestamp":"2014-04-18T13:51:23Z","content_type":null,"content_length":"6668","record_id":"<urn:uuid:7e21be03-b01b-4a66-bb0f-35eba47b424c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
A most artistic package of a jumble of ideas "... Abstract. In the last fifteen years, the traditional proof interpretations of modified realizability and functional (dialectica) interpretation in finite-type arithmetic have been adapted by taking into account majorizability considerations. One of such adaptations, the monotone functional interpret ..." Cited by 1 (0 self) Add to MetaCart Abstract. In the last fifteen years, the traditional proof interpretations of modified realizability and functional (dialectica) interpretation in finite-type arithmetic have been adapted by taking into account majorizability considerations. One of such adaptations, the monotone functional interpretation of Ulrich Kohlenbach, has been at the center of a vigorous program in applied proof theory dubbed proof mining. We discuss some of the traditional and majorizability interpretations, including the recent bounded interpretations, and focus on the main theoretical techniques behind proof mining. Contents , 2012 "... In 1962, Clifford Spector gave a consistency proof of analysis using so-called bar recursors. His paper extends an interpretation of arithmetic given by Kurt Gödel in 1958. Spector’s proof relies crucially on the interpretation of the so-called (numerical) double negation shift principle. The argume ..." Add to MetaCart In 1962, Clifford Spector gave a consistency proof of analysis using so-called bar recursors. His paper extends an interpretation of arithmetic given by Kurt Gödel in 1958. Spector’s proof relies crucially on the interpretation of the so-called (numerical) double negation shift principle. The argument for the interpretation is ad hoc. On the other hand, William Howard gave in 1968 a very natural interpretation of bar induction by bar recursion. We show directly that, within the framework of Gödel’s interpretation, (numerical) double negation shift is a consequence of bar induction. The 1958 paper [4] of Kurt Gödel presented an interpretation (now known as the dialectica interpretation) of Heyting arithmetic HA into a quantifier-free calculus T of finite-type functionals. The terms of T denote certain computable functionals of finite type (a primitive notion in Gödel’s paper, as it were): the so-called primitive recursive functionals in the sense of Gödel. These terms can be rigorously defined and they include as primitives the combinators (a burocracy of terms for dealing with the “logical ” part of the calculus) and the arithmetical constants: 0, the successor constant and, importantly, the recursors. 1 The dialectica interpretation assigns to each formula A of the language of first-order arithmetic a (quantifier-free) formula AD(x, y) of the language of T, and Gödel showed that if HA ⊢ A, then there is a term t (in which y does not occur free) of the language of T such that T ⊢ AD(t, y). 2 The combinators play a central role in showing the preservation of the interpretation under (intuitionistic) logic and, unsurprisingly, the recursors play an essential role in interpreting the induction axioms. It is convenient to extend the dialectica interpretation to Heyting arithmetic in all finite types HA ω. 3 Within the language of this theory, one can formulate the characteristic principles of the interpretation: 1 The reader can consult [11], [1] or [7] for a precise description of the calculus T and of its terms in particular. These are good references for details concerning the dialectica interpretation. 2 We are taking some liberties here (and will take in the sequel). Rigorously, either one should speak of tuple of variables x: = x1,..., xn and y: = y1,..., ym or allow convenient product types.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=13634304","timestamp":"2014-04-19T17:58:38Z","content_type":null,"content_length":"16107","record_id":"<urn:uuid:55f0fb57-1649-4364-bc30-5f16deb52029>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
William Parry Born: 3 July 1934 in Coventry, England Died: 20 August 2006 in Coventry, England William Parry's parents were Richard and Violet Irene Parry. William was always known as Bill. He was born into a large family, being the sixth of his parents' seven children. His father Richard Parry, a sheet-metal worker, and brothers were active trade unionists with fairly left-wing views. Some of his family were members of the Communist Party, so Bill grew up in a family strongly committed to left-wing politics. Bill's three brothers followed their father in becoming sheet-metal workers while only one of his three sisters went on to higher education (and that was as a mature At the end of primary education an examination had been introduced in England in order to determine the type of secondary education that a young person should continue with. Called the 11-plus examination (since it was taken after the age of 11), it began to be used in this way only a year or so before Parry sat it. He did not achieve the standard set for those pupils to go on to grammar school, the route to a university education, and was directed instead into a school teaching technical subjects. Parry attended a school which specialised in woodwork and metalwork but his mathematical abilities were spotted by a teacher who persuaded him not to leave school early. Most pupils attending such technical schools would leave at a young age and take on an apprenticeship with a firm. Continuing with school education in such a technical school was a problem, however, for there were no classes for Parry to attend in mathematics and arrangements had to be made for him to attend mathematics classes at Birmingham Technical College. After passing the examinations at Birmingham Technical College, Parry was allowed to enter University College, London, to study mathematics. He had certainly not been prepared at school for a university education but he did well and was encouraged by his lecturers, including Hyman Kestelman. Parry was active in student politics during his undergraduate years and at this time he joined the Communist Party. He sold the Daily Worker and lost money at poker, missing lectures which did not interest him. He graduated in 1955 and went to Liverpool University to study for a Master's Degree in mathematics. There he came into contact with the Socialist Labour League and he joined them, a move which he later regretted. During his one year course at Liverpool, Parry applied to Imperial College, London, to study there for his doctorate in mathematics. Entering in 1956 his research was supervised by Yael Dowker. During his time as a research student there were protests across the country against atomic weapons and in 1958 the Aldermaston March took place to protest against atomic-weapons research and development at Aldermaston, Berkshire. Parry, and many fellow students, took part on the march and it was there he met Benita Teper who had recently arrived from South Africa, They married in 1958, the year they met, and had a daughter Rachel (born 1967). After being awarded his doctorate by Imperial College, London, Parry was appointed as a lecturer at Birmingham University in 1960. Parry's first paper On the b-expansions of real numbers was published in 1960. There followed a number of papers on ergodic theory. In Ergodic properties of some permutation processes (1962) Parry considered two modifications of a process of Maurice Kendall made by H E Daniels, showing that the first modification and the one-dimensional second modification are ergodic. In 1963 he published An ergodic theorem of information theory without invariant measure generalising the individual version of McMillan's ergodic theorem of information theory without the hypothesis of an invariant probability function. Particularly important in Parry's development as a research mathematician was the year 1962-63 during which he worked at Yale University in the United States with a group of other young mathematicians interested in ergodic theory. Back in Birmingham after his year abroad, his research output moved up to an even higher standard and level of output. He published 4 papers in 1964, 2 papers in 1965 and 5 papers in 1966. Intrinsic Markov chains (1964) came out of his visit to Yale and was published in the Transactions of the American Mathematical Society. H P Edmundson begins his review of the paper writing:- The author investigates the structure of finite-state stochastic processes that are called intrinsically Markovian since they behave like Markov chains because "possible" sequences of the processes are determined by a chain rule. Necessary and sufficient conditions are established for a stochastic process to be intrinsically Markovian. In On Rohlin's formula for entropy (1964), Parry gives a formula for computing the entropy of an ergodic stationary non-atomic stochastic process with a finite number of states. In 1965 Parry left Birmingham and moved to the new Sussex University where he was appointed as a Senior Lecturer in Mathematics [2]:- There he worked on entropy theory showing, amongst other things, that each aperiodic measure-preserving transformation could be viewed as the shift on the realisation space of a stationary, countable state, stochastic process indexed by the integers or the natural numbers. Pete Walters, the author of the obituary [2], was a doctoral student of Parry at Sussex University graduating in 1967. The University of Warwick, in the city of Coventry, opened to undergraduates in 1965. E C Zeeman created a vigorous mathematics department from the day it opened, and by 1968 it had become one of the leading mathematics research schools in Britain. Parry was back working in his home city of Coventry when he was appointed to the University of Warwick as a Reader in 1968. He chose to live in the village of Marton, half-way between Leamington Spa and Rugby. Two years later he was promoted to Professor. The greatest honour given to him was in 1984 when he was elected a fellow of the Royal Society of London. Parry published over 80 papers and a number of fine books. These books included Entropy and generators in ergodic theory (1969), Topics in ergodic theory (1981), and (with Selim Tuncel) Classification problems in ergodic theory (1982). W L Reddy writes about the first of these:- In the preface the author states that his main purpose is to develop the abstract aspects of the subject. This purpose is admirably realized. This book is an attractive and clear introduction to entropy and generators in ergodic theory which allows the reader who is not an expert in ergodic theory to gain an appreciation of the flavour of the subject and an understanding of the important Parry retired in 1999 and was appointed Professor Emeritus. He continued to teach an advanced course for the next three years and continued to attend seminars. His death was as a result of cancer, exacerbated by MRSA. The University of Warwick paid him the following tribute:- Bill was the first appointment in analysis at Warwick. He played a key role in the department, and was Chair of the Department for 2 years. The rapid rise of the Warwick Mathematics Department's international reputation was due to many, among whom Bill featured prominently. His great mathematical achievements were recognized by his early election to the Royal Society. He attracted a number of outstanding Ph.D. students. Bill Parry listed his hobbies as theatre, concerts, and walking. Article by: J J O'Connor and E F Robertson November 2006 MacTutor History of Mathematics
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/Printonly/Parry.html","timestamp":"2014-04-19T07:01:20Z","content_type":null,"content_length":"8698","record_id":"<urn:uuid:3e925763-8a88-48aa-8456-2fa6b0b838c5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
The Normal Distribution: Exercise #2 Objective: Compute normal probabilities and quantiles and visualize these values in normal probability and cumulative distributions. Problem Description: The annual rainfall in a certain area is normally distributed with a mean of 25 inches and a standard deviation of 5 inches. The following questions ask you to compute probabilities and quantiles from a normal distribution. The following normal Java applet displays the probabilities of a normal random variable, X, with a mean of 25 in and a standard deviation of 5 in. Use this applet to answer the questions below.
{"url":"http://www.stat.wvu.edu/SRS/Modules/Normal/rainfall.html","timestamp":"2014-04-17T21:23:37Z","content_type":null,"content_length":"2862","record_id":"<urn:uuid:20ce4a98-d97c-40b0-a255-058ea53f257a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Does anyone know the Wh/Mile for Ideal miles? DouglasR | 20. Juni 2013 It should be around 270 for a 85 kWh car. Rod and Barbara did a computation, but I don't have time to look for it now. Try www.volkerize.com. Rod and Barbara | 20. Juni 2013 @ chicagoniner - My estimate for the Wh/M for an Ideal Mile in the 85 kWh vehicle is between 265 Wh/M and 271 Wh/M. There are a number of ways to calculate this value and all of them contain a fair amount of data scatter. Therefore, it takes a lot of data points to hone in on the correct value. I normally use Rated Miles so I don't have a lot of data on Ideal Miles. One of the keys to the puzzle is determining the Max Range charge Ideal Miles for a nominal battery. To my knowledge, Tesla has not officially published a precise value for this. FYI, my calculation for the Wh/M for Rated Miles in the 85 kWh vehicle is between 306 Wh/M and 308 Wh/M. chicagoniner | 20. Juni 2013 I've been getting 281 Wh/Mi average since I got it and I'm consistently beating the rated miles and wondered how much more cautiously I'd have to drive to achieve "Ideal" miles. Bob W | 20. Juni 2013 85,000 Wh / 300 "max ideal miles" = 283.3 Wh/mile ideal range 85,000 Wh / 265 "max EPA test miles" = 320.75 Wh/mile rated range But, you don't get to use all those Wh; the car will read 0 miles left after you've used up 81,620 Wh. 81,620 Wh / 300 = 272 Wh/mile ideal 81,620 Wh / 265 = 308 Wh/mile rated To add even more confusion, the official Dept. of Energy "mileage" sticker for the Tesla Model S displays "38 kW-hrs/100 miles", which is 380 Wh/mile (!). I think this must represents total energy consumed out of the wall socket to charge the car full, including energy lost as heat when charging. So, ignore that for now. As mentioned, the Energy app. currently plots about 308 Wh/mile as a solid line indicating rated range. But from my calculations, the "rated range" displayed on the other display, in the Instrument panel, uses 300 Wh/mile, which is inconsistent, optimistic, and confusing. Example anyone can verify: Energy app. may report that you've averaged 340 wH/mile (my usual), and estimates based on state of charge and that you can go another 170 miles "projected range" (be sure to tap the "Average" button, not the useless "Instant" button). 340 wH / mile * 170 miles = 57,800 wH (57.8 kWh) = your current state of charge remaining 57,800 wH / 300 = ~193 miles rated range (and this will be almost exactly the number displayed on the instrument panel, which just seems wrong). It should be 57,800/308 = ~188 miles. Then again, you know that the Model S will keep going at least until that instrument panel number reaches 0, and maybe 5-15 miles (?) beyond that point, depending on conditions. Projected range of course will always just be an approximation. Got head winds? Climbing a mountain? Is it raining or snowing? Do you have lots of people in the car, and heavy luggage? Is it really cold or really hot? All of these impact the real range far beyond a simple calculation based on state of charge and some estimated or measured average. gasnomo | 20. Juni 2013 Thanks for the info, I had always done the calcs using 85000, where did you find that the car will read 0 miles after using 81.62KWH? DouglasR | 20. Juni 2013 @Bob W - I had never thought of calculating the remaining charge in the battery by using Projected range from the energy app, and then using that to check Rated range on the speedometer. It's pretty interesting that they differ. I usually use the Trip Meter data ("Since Last Charge"). The problem there is that the Trip Meter does not measure energy use when the car is stationary, so any vampire load is not taken into account, whereas the Rated range on the speedometer does reflect vampire load. I suspect, but have not verified, that the trip meter also does not account for accessory load (vampire load) even when the car is moving. BTW, a watt-hour is abbreviated Wh, not wH. The "W" is capitalized because it represents someone's name. So kWh, not Kwh. "A" is also capitalized for Ampere. chicagoniner | 20. Juni 2013 Thanks for the info. I've actually achieved ideal miles for good stretches then. CnJsSigP | 20. Juni 2013 I'm with you, chicagoniner. When I take it easy, I get somewhere in the 280's. I can drive to work, have the car sit overnight for 8 hours keeping the battery happy. In the morning I warm the interior up remotely, then dive home and still get 'rated' mileage. Basically, my miles click off at the same rate as rated at the end of the trip, but my extra efficiency has compensated for all the other losses. My average over the last 2000 miles has been 294Wh/mi. Bob W | 20. Juni 2013 @cfriedberg - The 308 Wh/mi number seems to be the general consensus of what the EPA rated range is, based on this TMC thread and this EV range calculator. Others have discovered that the car has a bit of a mileage reserve below 0 miles displayed, and it will shut down completely well before the battery gets to a zero state of charge (since that would permanently harm or possibly even "brick" the battery). @DouglasR - correct. I had it right at the top ("Wh/mi"), then messed up at the bottom. :-) Oh how I wish this web site supported "Preview Post", and "Edit" features. Brian H | 20. Juni 2013 Also, Ampere, Volt, and Farad. And even Ohm. All names. Rod and Barbara | 21. Juni 2013 @ Bob W – The data you present using the Energy app to calculate the Wh/M for a rated mile is very interesting. I have used a similar method for some time in an effort to calculate the Wh/M of a rated mile. When the trip meter turns over to 30.0 miles, I note the Wh/M in the trip meter, the projected miles for the last 30 miles based on the average Wh/M, and the rated miles. I never thought to check to be sure the Wh/M at that point in time is the same on the trip meter and the Energy app – certainly it should be. These data points have quite a bit of scatter, with the rated mile efficiency ranging from 301 Wh/M to 317 Wh/M. I just went out to my car and got the current data from the Energy app – 369 Wh/M over the last 30 miles, with a projected miles based on the average Wh/ M of 163 miles, and rated miles remaining of 193 miles. This results in a 312 Wh/M for a rated mile (369 x 163 / 193). So I am wondering why you are seeing different results. Do you have a 60 kWh vehicle? The Wh/M for a rated mile is different for the two vehicles – about 307 Wh/M for the 85 kWh vehicle and about 301 Wh/M for the 60 kWh vehicle according to my calculations. Bob W | 27. Juni 2013 Interesting that you see different results. Do you have a P85? I have a standard 85 (not Performance, so 19" tires). I'm running 4.5 (1.33.48). 21" performance tires have higher rolling resistance, so therefore slightly less projected range? 163 projected miles * 369 Wh/mi = 60147 wH (curr. usable state of charge) 60147 wH / 300 wH/mi = 200 mi rated range 60147 wH / 308 wH/mi = 195 mi rated range So I am surprised that you see 193 displayed instead of either of these two numbers. exPGAhacker | 27. Juni 2013 I love you guys! Why anyone would want to drive this car for ideal miles is beyond me!! Drive this baby the way it deserves to be driven. Save all your calculations for when you drive your Leaf or Active E or Focus or whatever else. This car is a performance car... as in speed. Not performance as in "ideal miles Wh/mi" stuff. Help market the car for what it is. This is starting to sound like a tree hugger's EV site! pablodds | 27. Juni 2013 Let her rip guys... I am with exPGAhacker, I drive my P85 like I stole it. Love to see fast cars behind me trying to catch up to see what car it was that just passed them. Mathew98 | 28. Juni 2013 I've been feeling kinda an outcast since my average Wh/mi usage has been in the 600's. Granted I've only driven the car for 200 miles... Hopefully my usage will come down gradually. Who knew driving a MS can cure mid life crisis... DouglasR | 28. Juni 2013 @exPGAhacker - I don't drive fast on road trips because that would take me longer to get where I'm going. Kimscar | 28. Juni 2013 Actually the W/mi is a important number to know exPGAhacker. When you fly a plane you know what speed to glide at to cover the longest distance if you lose your engine. You don't expect to use this but if the situation comes up you have that info. Most of the time the drivers won't need that information in a Model S. But the situation can come up where you don't want to stop and find an outlet or maybe there isn't one in the area where you are or the headwinds are reducing your range etc... Tesla should publish the best W/mi number. NKYTA | 28. Juni 2013 @ exPGAhacker Around town, it's whatever bolting and speeds I feel are safe, and won't get me a ticket. @ DouglasR Yep. Cruise control on, keep an eye on miles left, rated, projected - adjust as necessary. exPGAhacker | 28. Juni 2013 I always get a kick out of needing to explain myself because I didn't do a decent job the first time. It usually happens when I think I'm being somewhat whimsical and funny. I totally get why it's important to maximize the Wh/mi in certain circumstances. But most of us drive the car locally with a long trip or two on occasion. In a single day around town I have yet to even worry about range. The most miles I've driven in a single day of "normal" use is about 170. I rode her hard, too!! When I do a longer trip, I drive with A/C and around the speed limit, plus or minus 10 (15!) MPH. The important thing is the pre-planning. I don't test the limits. I drive for fun and pleasure and make it work by planning and not testing the limits. I didn't buy this beast to worry about ideal watt hour per mile calculations and GLIDING to the nth quadrant degree of percentile expectation of the rated range on the EPA rated amperage thingy on the data scatter of a normal rated battery blah blah blah. Head winds? F**k the head winds! Just drive the car as it's begging to be driven and the only important measurement will be how wide that smile is on your face! Bob W | 28. Juni 2013 Anyone with a P85 (Performance 85) or a 60, please try this: 1. Open the energy app., and tap Average 2. Using only numbers displayed by the app., multiply the Avg. Wh/mi (on the left) by the projected range (on the right). This gives you your current state of charge (in Wh), not counting the ~4% reserve. You should get about the same number no matter what distance you select (5, 15, or 30 miles). 3. Divide the calculated state of charge by the EPA rated range displayed on the instrument panel to get the fixed Wh/mile number used by your car. Please post your car type, and the final number. For my standard 85, it seems to use 300 Wh/mile. I speculate that for a P85, you'll see about 312 Wh/mile. I have no idea what a 60 (or 40) will display, but I'm very curious to know. As for the other comments, the beauty of the Model S is that you can drive it for range and comfort, or you can drive it for "sport" (speed). You can tell a lot about a person's driving habits just by looking at their average Wh/mi in the trip meter. If the auto insurance companies were smart, they would use that number as a factor when calculating your premium. :-) Bob W | 28. Juni 2013 Oh, and please include your software version too, unless it is 4.5 (1.33.48) which is what most of us have at this point I think. Rod and Barbara | 29. Juni 2013 @ Bob W – Thanks for the info on your car configuration. The data difference between our cars is all the more perplexing since we have the same configuration – standard 85, 19” wheels, software v4.5. On June 27 you wrote: “60147 wH / 300 wH/mi = 200 mi rated range 60147 wH / 308 wH/mi = 195 mi rated range So I am surprised that you see 193 displayed instead of either of these two numbers.” The reason the data don’t result in an exact match is that I find a lot of scatter in the data collected as I described. After collecting 60 data points on many different days and in many different situations I found the calculated Wh/M for a rated mile varied from 301 to 317. To make the situation more confusing, an alternate method for calculating the Wh/M from charging data varied from 299 to 310. The best way to handle such significant data scatter is to take lots of data points and assume the data errors will form a normal distribution. Then you can calculate the mean, select a confidence level and calculate the confidence interval for the data. For example, my data show that with a 99% confidence level the actual value of the Wh/M for a rated mile is 307.3 plus or minus 1.0 (i.e. between 306.3 and 308.3). The data that you get from your June 28 request may also show significant scatter. I’ll collect a few data points using your technique over the next week and post the results. On a side note, since the title of this thread is about Ideal Miles, I wonder if you might get more responses to your request for data if you posted in the Calculation of Rated Range thread.
{"url":"http://www.teslamotors.com/de_DE/forum/forums/does-anyone-know-whmile-ideal-miles","timestamp":"2014-04-20T21:21:35Z","content_type":null,"content_length":"51984","record_id":"<urn:uuid:3ff6e741-c9eb-43f9-896b-3f9288d7c9ad>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
What's next Re: What's next Here's another one: The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=2658","timestamp":"2014-04-19T19:44:59Z","content_type":null,"content_length":"18498","record_id":"<urn:uuid:ff13d64f-40ab-49f3-b028-5ed82d837295>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomtials with multiple roots May 28th 2010, 08:46 AM #1 Apr 2009 Polynomtials with multiple roots Hello All, My book states the following: [PART ONE] If 'b' is any integer and the polynomial f(x)=(x^2)+bx+1 factors (poly mod 9), there exists 3 distinct non-negative integers 'q' less than than 9 so that f(q) = 0(mod 9). How can this be proven? [PART TWO] If 'b' is any integer and the polynomial f(x)=(x^2)+bx+1 factors (poly mod 8), then f(x) is a square. E.g. f(x) = ((x+c)^2)(poly mod 8) where 0<c<8. Can anyone think of values of b that makes this possible? Is there something preventing you from doing either part by force? Just put in every single value of $b$ and see what happens. For example, in part one, try $b=0$. Then, we have the polynomial $f(x)=x^2+1$. It does not factor, so we move on. Again, try $b=1$. It does not factor, so move on. Continue this until you reach $b=8$. If your polynomial factors for some $b$, check that it does have three roots. Sorry, I'm not following how that will help me with either my 1st or 2nd question. Could someone be more direct ? What part of my explanation are you not following? The question asks you to show that for all $b$, if the polynomial $x^2+bx+1$ factors, then it has a certain number of roots. Since we are working modulo 8 or 9, we can just check every single value of $b$ and see if the result is true. (There are only finitely many values.) Are you having a problem with factoring or finding roots of polynomials? The question asks you to show that for all $b$, if the polynomial $x^2+bx+1$ factors, then it has a certain number of roots. Since we are working modulo 8 or 9, we can just check every single value of $b$ and see if the result is true. (There are only finitely many values.) Are you having a problem with factoring or finding roots of polynomials? I brute force it for the FIRST question and all i get is for b=2 i get (x+1)(x+1), but that's it. I still don't understand the SECOND question at all. It's safe to assume $b\in\{0,1,2,3,4,5,6,7,8\}$ since we're working modulo $9$. roninpro says to just plug each possible value of $b$ into $f(x)$ and see what you get. I did but the only one that factored was when b=2 which factored into (x+1)^2 but I don't see how that will help me considering that I know only 1 distinct root. Is this even right or am I doing something wrong? What other ones exist? I'll do one for you. Take $b=7$: $f(x)=x^2+7x+1\equiv x^2-2x+1 \bmod{9}$ So $f(x)\equiv(x-1)^2\bmod{9}$ Now solve $(x-1)^2\equiv0\bmod{9}$ We see $x=\{1,4,7\}$ are roots. So are 1,4,7 the 3 distinct roots? How did you get them? I just need to see one of those conversions like what you did for b = 7. How did you change that over to a mod 9? Since you don't get what we're saying, let me try a different approach. We're given $x^2+bx+1$ factors modulo $9$ i.e. $x^2+bx+1\equiv (x-a)(x-c)\bmod{9}$ But $(x-a)(x-c)=x^2-(a+c)x+ac\equiv x^2+bx+1\bmod{9}$ So $ac\equiv1\bmod{9}\implies c\equiv a^{-1}\bmod{9}$ We then get $x^2+bx+1\equiv x^2-(a+a^{-1})x+1\bmod{9}$ for some $a$ that has an inverse modulo $9$. Here's all $a$ with an inverse: $\begin{tabular}{c | c}\hline<br /> a & a inverse\\\hline<br /> 1 & 1 \\<br /> 2 & 5 \\<br /> 4 & 7 \\<br /> 5 & 2 \\<br /> 7 & 4 \\<br /> 8 & 8 \\\hline<br /> \end{tabular}$ Looking at the table we see there are four cases to consider: $a=1$: $f(x)\equiv x^2-(1+1)x+1\equiv(x-1)^2 \bmod{9}$ $a=2$: $f(x)\equiv x^2-(2+5)x+1\equiv(x+1)^2 \bmod{9}$ $a=4$: $f(x)\equiv x^2-(4+7)x+1\equiv(x-1)^2 \bmod{9}$ $a=8$: $f(x)\equiv x^2-(8+8)x+1\equiv(x+1)^2 \bmod{9}$ Thus if $x^2+bx+1$ factors modulo $9$ then $f(x)\equiv (x\pm1)^2\bmod{9}$. Now check to see there are three roots for both cases (this should be very easy). May 29th 2010, 07:21 PM #2 May 29th 2010, 07:23 PM #3 Apr 2009 May 29th 2010, 07:30 PM #4 May 29th 2010, 07:32 PM #5 Apr 2009 May 29th 2010, 07:34 PM #6 May 29th 2010, 07:39 PM #7 Apr 2009 May 29th 2010, 07:45 PM #8 May 29th 2010, 07:46 PM #9 Apr 2009 May 30th 2010, 12:59 PM #10 May 30th 2010, 01:26 PM #11 Apr 2009 May 30th 2010, 01:36 PM #12 May 30th 2010, 01:56 PM #13 Apr 2009 May 30th 2010, 02:01 PM #14 May 30th 2010, 02:37 PM #15
{"url":"http://mathhelpforum.com/number-theory/146755-polynomtials-multiple-roots.html","timestamp":"2014-04-17T04:09:33Z","content_type":null,"content_length":"83579","record_id":"<urn:uuid:c0ef93c1-60e8-46ed-902e-123c5fb73f1a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Isaac Newton Institute for Mathematical Sciences, Cambridge, UK 26 and 27 August, 1997 Organisers: Wolfgang Maass and Chris Bishop FINAL PROGRAMME Tuesday, August 26 9:00 - 10:15 Tutorial by Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) Motivation and Models for Spiking Neurons 10:15 - 10:45 Coffee-Break 10:45 - 12:00 Tutorial by Wolfgang Maass (Technische Universitaet Graz, Austria) Computation and Coding in Networks of Spiking Neurons 12:00 - 14:00 Lunch 14:00 - 14:40 David Horn (Tel Aviv University, Israel) Fast Temporal Encoding and Decoding with Spiking Neurons 14:40 - 15:20 John Shawe-Taylor (Royal Holloway, University of London) Neural Modelling and Implementation via Stochastic Computing 15:20 - 16:00 Tea Break 16:00 - 16:40 Wolfgang Maass (Technische Universitaet Graz, Austria) A Simple Model for Neural Computation with Pulse Rates and Pulse Correlations 16:40 - 17:20 Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) Hebbian Tuning of Delay Lines for Coincidence Detection in the Barn Owl Auditory System 17:20 - 18:00 Poster-Spotlights (5 minutes each) 18:00 - 19:00 Poster-Session (with wine reception) 19:00 Barbecue dinner at the Isaac Newton Institute Wednesday, August 27 9:00 - 10:15 Tutorial by Alan F. Murray (University of Edinburgh) Pulse-Based Computation in VLSI Neural Networks : Fundamentals 10:15 - 10:40 Coffee-Break 10:40 - 11:20 Alessandro Mortara (Centre Suisse d'Electronique et de Microtechnique, Neuchatel, Switzerland) Communication and Computation using Spikes in Silicon Perceptive Systems 11:20 - 12:00 David P.M. Northmore (University of Delaware, USA) Interpreting Spike Trains with Networks of Dendritic-Tree Neuromorphs 12:00 - 14:00 Lunch (During lunch we will discuss plans for an edited book on pulsed neural nets) 14:00 - 14:40 Alister Hamilton (University of Edinburgh) Pulse Based Signal Processing for Programmable Analogue VLSI 14:40 - 15:20 Rodney Douglas (ETH Zurich, Switzerland) A Communications Infrastructure for Neuromorphic Analog VLSI Systems 15:20 - 15:40 Coffee-Break 15:40 - 17:00 Plenary Discussion: Artifical Pulsed Neural Nets: Prospects and Problems ABSTRACTS (in the order of the talks) Tutorial by Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) Motivation and Models for Spiking Neurons In this introductory tutorial I will try to explain some basic ideas of and provide a common language for pulsed neural nets. To do so I will 0) motivate the idea of pulse coding as opposed to rate coding 1) discuss the relation between various simplified models of spiking neurons (integrate-and-fire, Hodgkin-Huxley) and argue that the Spike Response Model (=linear response kernels + threshold) is a suitable framework to think about such models. 2) discuss typical phenoma of the dynamics in populations of spiking neurons (oscillations, asynchronous states), provide stability arguments and introduce an integral equation for the population dynamics. 3) review the idea of feature binding and pattern segmentation by a 'synchronicity code'. Tutorial by Wolfgang Maass (Technische Universitaet Graz, Austria) Computation and Coding in Networks of Spiking Neurons This tutorial will provide an introduction to • methods for encoding information in trains of pulses • simplified computational models for networks of spiking neurons • the computational power of networks of spiking neurons for concrete coding schemes • computational consequences of synapses that are not static, but but give different "weights" to different pulses in a pulse train • relationships between models for networks of spiking neurons and classical neural network models. David Horn (Tel Aviv University, Israel) Fast Temporal Encoding and Decoding with Spiking Neurons We propose a simple theoretical structure of interacting integrate and fire neurons that can handle fast information processing, and may account for the fact that only a few neuronal spikes suffice to transmit information in the brain. Using integrate and fire neurons that are subjected to individual noise and to a common external input, we calculate their first passage time (FPT), or inter-spike interval. We suggest using a population average for evaluating the FPT that represents the desired information. Instantaneous lateral excitation among these neurons helps the analysis. By employing a second layer of neurons with variable connections to the first layer, we represent the strength of the input by the number of output neurons that fire, thus decoding the temporal information. Such a model can easily lead to a logarithmic relation as in Weber's law. The latter follows naturally from information maximization, if the input strength is statistically distributed according to an approximate inverse law. John Shawe-Taylor (Royal Holloway, University of London) Neural Modelling and Implementation via Stochastic Computing 'Stochastic computing' studies computation performed by manipulating streams of random bits which represent real values via a frequency encoding. The paper will review results obtained in applying this approach to neural computation. The following topics will be covered: • Basic neural modelling • Implementation of feedforward networks and learning strategies • Generalization analysis in the statistical learning framework • Recurrent networks for combinatorial optimization, simulated and mean field annealing • Applications to graph colouring • Hardware implementation in FPGAs Wolfgang Maass (Technische Universitaet Graz, Austria) A Simple Model for Neural Computation with Pulse Rates and Pulse Correlations A simple extension of standard neural network models is introduced, that provides a model for computations with pulses where both the pulse frequencies and correlations in pulse times between different pulse trains are computationally relevant. Such extension appears to be useful since it has been shown that firing correlations play a significant computational role in many biological neural systems, and there exist attempts tp transport this coding mechanism to artifical pulsed neural networks. Standard neural network models are only suitable for describing computations in terms of pulse rates. The resulting extended neural network models are still relatively simple, so that their computational power can be analyzed theoretically. We prove rigorous separation results, which show that the use of pulse correlations in addition to pulse rates can increase the computational power of a neural network by a significant amount. Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland) Hebbian Tuning of Delay Lines for Coincidence Detection in the Barn Owl Auditory System Owls can locate sound sources in the complete darkness with a remarkable precision. This capability requires auditory information processing with a temporal precision of less than 5 microseconds. How is this possible, given that typical neurons are at least one order of magnitude slower? In this talk, an integrate-and-fire model is presented of a neuron in the auditory system of the barn owl. Given a coherent input the model neuron is capable to generate precisely timed output spikes. In order to make the input coherent, delay lines are tuned during an early period of the owls development by an unsupervised learning procedure. This results in an adaptive system which develops a sensitivity to the exact timing of pulses arriving from the left and the right ear, a necessary step for the localization of external sound sourcec and hence prey. (Abstracts of Posters: see the end of this listing) Tutorial by Alan F. Murray (University of Edinburgh) Pulse-Based Computation in VLSI Neural Networks : Fundamentals This tutorial will present the techniques that underly pulse generation, distribution and arithmetic in VLSI devices. The talk will concentrate on work performed in Edinburgh, but will include references to alternative approaches. Ancillary issues surrounding "neural" computation in analogue VLSI will be drawn out and the tutorial will include a brief introduction to MOSFET circuits and Alessandro Mortara (Centre Suisse d'Electronique et de Microtechnique, Neuchatel, Switzerland) Communication and Computation using Spikes in Silicon Perceptive Systems This presentation deals with the principles, the main properties and some applications of a pulsed communication system adapted to the needs of the analog implementation of perceptive and sensory-motor systems. The interface takes advantage of the fact that activity in perception tasks is often sparsely distributed over a large number of elementary processing units (cells) and facilitates the access to the communication channel to the more active cells. The resulting "open loop" communication architecture can be advantageously be used to set up connections between distant cells on the same chip or point to point connections between cells on different chips. The system also lends itself to the simple circuit implementation of typically biological connectivity patterns such as projection of the activity of one cell on a region (its "projective field") of the next neural processing layer, which can be on a different chip in an actual implementation. Examples of possible applications will be drawn from the fields of vision and sensory-motor loops. David P.M. Northmore (University of Delaware, USA) Interpreting Spike Trains with Networks of Dendritic-Tree Neuromorphs The dendrites of neurons probably play very important signal processing roles in the CNS, allowing large numbers of afferent spike trains to be differentially weighted and delayed, with linear and non-linear summation. Our VLSI neuromorphs capture these essential properties and demonstrate the kinds of computations involved in sensory processing. As recent neurobiology shows, dendrites also play a critical role in learning by back-propagating output spikes to recently active synapses, leading to changes in their efficacy. Using a spike distribution system we are exploring Hebbian learning in networks of neuromorphs. Alister Hamilton (University of Edinburgh) Pulse Based Signal Processing for Programmable Analogue VLSI VLSI implementations of Pulsed Neural Systems often require the use of standard signal processing functions and neural networks in order to process sensory data. This talk will introduce a new pulse based technique for implementing standard signal processing functions - the Palmo technique. The technique we have developed is fully programmable, and may be used to implement Field Programmable Mixed Signal Arrays - making it of great interest to the wider electronics community. Rodney Douglas (ETH Zurich, Switzerland) A Communications Infrastructure for Neuromorphic Analog VLSI Systems Analogs of peripheral sensory structures such as retinas and cochleas, and populations of neurons have been successfully implemented on single neuromorphic analog Very Large Scale Integration (aVLSI) chips. However, the amount of computation that can be performed on a single chip is limited. The construction of large neuromorphic systems requires a multi-chip communication framework optimized for neuromorphic aVLSI designs. We have developed one such framework. It is an asynchronous multiplexing communication network based on address event data representation (AER). In AER, analog signals from the neurons are encoded by pulse frequency modulation. These pulses are abstractly represented on a communication bus by the address of the neuron that generated it, and the timing of these address-event communicate analog information. The multiplexing used by the communication framework attempts to take advantage of the greater speed of silicon technology over biological neurons to compensate for more limited direct physical connectivity of aVLSI. The AER provides a large degree of flexibility for routing digital signals to arbitrary physical locations. Irit Opher and David Horn (Tel Aviv University, Israel) Arrays of Pulse Coupled Neurons: Spontaneous Activity Patterns and Image Analysis Arrays of interacting identical pulse coupled neurons can develop coherent firing patterns, such as moving stripes, rotating spirals and expanding concentric rings. We obtain all of them using a novel two variable description of integrate and fire neurons that allows for a continuum formulation of neural fields. One of these variables distinguishes between the two different states of refractoriness and depolarization and acquires topological meaning when it is turned into a field. Hence it leads to a topologic characterization of the ensuing solitary waves. These are limited to point-like excitations on a line and linear excitations, including all the examples quoted above, on a two-dimensional surface. A moving patch of firing activity is not an allowed solitary wave on our neural surface. Only the presence of strong inhomogeneity that destroys the neural field continuity, allows for the appearance of patchy incoherent firing patterns driven by excitatory Such a neural manifold can be used for image analysis, performing edge detection and scene segmentation, under different connectivities. Using either DOG or short range synaptic connections we obtain edge detection at times when the total activity of the system runs through a minimum. With generalized Hebbian connections the system develops temporal segmentation. Its separation power is limited to a small number of segments. Berthold Ruf und Michael Schmitt (Technische Universitaet Graz, Austria) Self-Organizing Maps of Spiking Neurons Using Temporal Coding The basic idea of self-organizing maps (SOM) introduced by Kohonen, namely to map similar input patterns to contiguous locations in the output space, is not only of importance to artificial but also to biological systems, e.g. in the visual cortex. However, the standard formulation of the SOM and the corresponding learning rule are not suitable for biological systems. Here we show how networks of spiking neurons can be used to implement a variation of the SOM in temporal coding, which has the same characteristic behavior. In contrast to the standard formulation of the SOM our construction has the additional advantage that the winner among the competing neurons can be determined fast and locally. Wolfgang Maass and Michael Schmitt (Technische Universitaet Graz, Austria) On the Complexity of Learning for Networks of Spiking Neurons In a network of spiking neurons a new set of parameters becomes relevant which has no counterpart in traditional neural network models: the time that a pulse needs to travel through a connection between two neurons (also known as ``delay'' of a connection). It is known that these delays are tuned in biological neural systems through a variety of mechanisms. We investigate the VC-dimension of networks of spiking neurons where the delays are viewed as ``programmable parameters'' and we prove tight bounds for this VC-dimension. Thus we get quantitative estimates for the diversity of functions that a network with fixed architecture can compute with different settings of its delays. It turns out that a network of spiking neurons with k adjustable delays is able to compute a much richer class of Boolean functions than a threshold circuit with k adjustable weights. The results also yield bounds for the number of training examples that an algorithm needs for tuning the delays of a network of spiking neurons. Results about the computational complexity of such algorithms are also given. Wolfgang Maass and Thomas Natschlaeger (Technische Universitaet Graz, Austria) Networks of Spiking Neurons Can Emulate Arbitrary Hopfield Nets in Temporal Coding A theoretical model for analog computation in networks of spiking neurons with temporal coding is introduced and tested through simulations in GENESIS. It turns out that the use of multiple synapses yields very noise robust mechanisms for analog computations via the timing of single spikes. One arrives in this way at a method for emulating arbitrary Hopfield nets with spiking neurons in temporal coding, yielding new models for associative recall of spatio-temporal firing patterns. We also show that it suffices to store these patterns in the efficacies of excitatory synapses. A corresponding layered architecture yields a refinement of the synfire-chain model that can assume a fairly large set of different stable firing patterns for different inputs. Wolfgang Maass and Berthold Ruf (Technische Universitaet Graz, Austria) It was previously shown that the computational power of formal models for computation with pulses is quite high if the pulses arriving at a spiking neuron have an approximately linearly rising or linearly decreasing initial segment. This property is satisfied by common models for biological neurons. On the other hand several implementations of pulsed neural nets in VLSI employ pulses that have the shape of step functions. We analyse the relevance of the shape of pulses for the computational power of formal models for pulsed neural nets. It turns out that the computational power is significantly higher if one employs pulses with a linearly increasing or decreasing segment. Ulrich Roth and Tim Schoenauer (Technische Universitaet Berlin, Germany) For image processing or to model brain areas with complex integrate-and-fire neurons, the simulation of networks consisting of several millions of spiking neurons is desirable. Existing hardware platforms are unable to perform the simulation of such complex networks in reasonable time. Therefore, a neurocomputer for spiking neural networks (NESPINN) has been designed and is about to be realized. The entire system comprises 16 similar boards, which communicate via a VME-bus. A network of up to 128K neurons is computed per board in real-time. Each board consists of two connection units, two weight units and an ASIC with four parallel processing units in a SIMD/dataflow architecture . Also, a simulation tool "SimSpinn" written in Java as an interface to existing platforms as well as to the NESPINN-system has been developed. Features of this simulation engine as well as an outlook for a second generation of a neuroaccelerator will be given. A second generation is currently planned at the Technical University of Berlin and this outlook shall encourage comments and suggestions for the new architecture.
{"url":"http://www.newton.ac.uk/programmes/NNM/nnm_pulsed2.html","timestamp":"2014-04-20T16:20:41Z","content_type":null,"content_length":"23221","record_id":"<urn:uuid:0faddd06-c191-47ce-abbe-892daec2b267>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Be Mortgage-Free Faster Some mortgage borrowers have only two things in mind: "How much can I afford?" and "What will my monthly payments be?" They max out their finances on mortgage debt and use an interest-only or negative-amortization mortgage to minimize their monthly payments. Then, they rely upon home price appreciation to eclipse the risks associated with a constant or increasing mortgage balance. In many cases, if these homeowners are fortunate enough to accumulate some equity in their homes, they max out their finances again through home-equity loan or cash-out refinances and then use the proceeds to make additional purchases, pay down consumer debt, or even make additional investments. Sound risky? It is. In this article we'll show you how to make sure you have a mortgage you can afford and to build equity by paying it off quickly. Making Mortgage Math Add Up Every mortgage has an amortization schedule. An amortization schedule is a table that lays out each scheduled mortgage payment in a chronological order beginning with the first payment and ending with the final payment. (To read more on amortizations, see Understanding the Mortgage Payment Structure and Make A Risk-Based Mortgage Decision.) In the amortization schedule, each payment is broken into an interest payment and a principal payment. Early in the amortization schedule, a large percentage of the total payment is interest, and a small percentage of the total payment is principal. As you pay for your mortgage, the amount that is allotted to interest decreases and the amount allotted to principal increases. The amortization calculation is most easily understood by breaking it into three parts: Part 1 - Column 5: Total Monthly Payments The calculation of the total monthly payment is shown by the formula below. A = periodic payment amount P = the mortgage's remaining principal balance i = periodic interest rate n = total number of remaining scheduled payments Part 2 - Column 6: Periodic Interest Copyright © 2007 Investopedia.com Figure 1 The calculation of the periodic interest charged is calculated as shown below: The periodic interest rate (Column 3) x the remaining principal balance (Column 4) Note: The interest rate shown in Column 3 is an annual interest rate. It must be divided by 12 (months) to arrive at the periodic interest rate. Part 3 - Column 7: Principal Payments The calculation of the periodic principal payment is shown by the formula below. The total payment (Column 5) – the periodic interest payment (Column 6) Copyright © 2007 Investopedia.com Figure 2 Figure 2 shows an amortization schedule for a 30-year 8% fixed-rate mortgage. For the sake of space, only the first five and the last five months are shown. The amortization schedule demonstrates how paying an additional $300 each month toward the principal balance of the same mortgage shown in Figure 1 will shorten the life of the mortgage to about 21 years and 10 months (262 total months versus 360), and reduce the total amount of interest paid over the life of the mortgage by $209,948. As you can see, the principal balance of the mortgage decreases by more than the extra $300 you throw at it each month. It saves you more money by cutting down the months of interest charged on the remaining term. For example, if an extra $300 were paid each month for 24 months at the start of a 30-year mortgage, the extra amount by which the principal balance is reduced is greater than $7,200 (or $300 x 24). The actual amount saved by paying the additional $300 per month by the end of the second year is $7,430.42. You've saved yourself $200 in the first two years of your mortgage - and the benefits only increase as they compound through the life of the mortgage! This is because when the extra $300 is applied toward the principal balance of the mortgage each month, a greater percentage of the scheduled mortgage payment is applied to the principal balance of the mortgage in subsequent months. (Find out more about mortgage payments in our Mortgage Basics tutorial.) The True Benefits of Making Accelerated Mortgage Payments The true benefits of making the accelerated payments are measured by calculating what is saved versus what is given up. For example, instead of making an extra $300 per month payment toward the mortgage shown above, the $300 could be used to do something else. This is called a cost-benefit analysis. Let's say that the consumer with the mortgage shown in the amortization schedules above is trying to decide whether to make the $300 per month accelerated mortgage payments. The consumer is considering three alternative choices as shown below. For each option, we'll calculate the costs versus the benefits, or what can be saved versus what is given up. (For the sake of this example, we're going to assume that leveraging any equity in the home through a home equity loan is not an option. We're also going to ignore the tax deductibility of mortgage interest, which could change the numbers slightly.) The homeowner's three options include: 1. Getting a $14,000 five-year consumer loan at an interest rate of 10% to buy a boat. 2. Paying off a $12,000 credit-card debt that carries a 15% annual rate (compounded daily). 3. Investing in the stock market. Option 1: Buying a boat The decision to buy a boat is both a matter of pleasure and economics. A boat - much like many other consumer "toys" - is a depreciating asset. Adding household debt to purchase an illiquid, depreciating asset adds risk to the household balance sheet. This consumer has to weight the utility (pleasure) gained from owning a boat verses the true economics of the decision. We can calculate that a $14,000 loan for the boat at an interest rate of 10% and a five-year term will have monthly payments of $297.46. Cost-Benefit Breakdown If the homeowner had made $300 accelerated payment for the first five years of the mortgage rather than buying a boat, this would have shortened the life of the mortgage by 47 months, saving $2,935.06 for 47 months, 313 months in the future. Using a 3% discount rate this has a present value of $59,501. Additionally, if the accelerated mortgage payments are made, the principal balance of the mortgage will be reduced by an additional $21,599 by the end of the five-year period. This early retirement of debt reduces risk on the household balance sheet. (To learn more about compounding's effects on your loans, see Understanding The Time Value Of Money.) By deciding to purchase the boat, the consumer spends $297.46 per month for five years to own a $14,000 boat. The $297 per month for 60 months equals out to a present value of $16,554. By putting the $300 dollars on the mortgage, this consumer would save $59,501 over the course of the mortgage. Buying the boat would mean spending $16,554 to pay for a $14,000 boat that is likely to have a depreciating resale value. Therefore, the consumer must ask himself if the pleasure of owning the boat is worth the large divide in the economics. Option 2: Paying off a $12,000 credit card debt The daily compounding of credit card interest makes this calculation complex. Credit card interest is compounded daily, but the consumer is not likely to make daily payments. However, the calculation of an amortization schedule says that if the consumer pays about $300 per month for five years, that person can eliminate the credit card debt. As in the first example, making accelerated payments on the mortgage of $300 each moth for the first five years will leave the homeowner with a present value of future payment savings of $59,501. By paying $300 per month for five years to eliminate the credit card debt, the consumer can eliminate $12,000 in credit card debt with a 15% annual interest rate. We know that if the consumer makes accelerated mortgage payments, the credit card debt will continue to accrue interest and the outstanding balance will increase at an increasing rate. If we compound $12,000 daily at an annual rate of 15% for 60 months we get $25,400. If we assume that after making five years of accelerated mortgage payments, the consumer could then start to pay down the credit card debt by $300 per month, it would take more than 50 years at $300 per month to pay off the credit card debt at that point. In this case, paying down the credit card debt first is the most economical choice. Option 3: Invest in the stock market We've already shown that the consumer will save a present value of $59,501 by making accelerated mortgage payments of $300 for the first five years of the mortgage. Before we compare the accelerated mortgage payment savings to the returns that might be made in the stock market over the same time period, we must point out that making any assumptions about stock market returns is extremely risky. Stock market returns are volatile. The historical average annual returns of the S&P 500 Index is about 11%, but some years it is up, and some years it is down. Putting the $300 toward the mortgage means a present value of $59,501 of future mortgage payments, and a reduction of $21,599 in the principal balance of the mortgage over the first five years of the mortgage. This reduces the risks associated with debt. If the consumer decides to invest the $300 monthly over a five-year period in the stock market - assuming an average annual return of 11% - this will yield a total portfolio value of $23,855 which has a present value of $20,536 (discounted at 3%), which is far less than the present value $59,501 realized by making accelerated mortgage payments. However, if we assume the $23,855 will continue to earn an annual return of 11% beyond month 60 - until month 313, the point at which the mortgage payment would be eliminated - the total value of the portfolio at that point would be $239,989. This is greater than the present value of future mortgage payment savings at that future time, which would be $129,998. We could conclude then that investing in the stock market over the long term might make more economical sense - but this would only be a given in a perfect world. Homeowners need to understand that bigger mortgage is compared to the value of the home, the larger the risk they have taken on. They must be also aware that home price appreciation should not be relied on to eclipse the risks of mortgage debt. Furthermore, they need to understand that paying down mortgage debt reduces risk and can be to their economic advantage. One of the key aspects of making accelerated mortgage payments is that each dollar reduction in the outstanding principal balance of a mortgage reduces the amount of interest paid as part of future scheduled payments, and increases the amount of principal paid as part of those same payments. Therefore, a simple calculation that sums up the amount of interest saved over a time period that ends before the loan is paid off does not accurately capture the entire benefit of making accelerated mortgage payments. A present value calculation of the future payment savings is a more accurate analysis. Additionally, every dollar of principal that is paid down early reduces risk on the household balance sheet. Still interested in refinancing your mortgage? Then check out The True Economics Of Refinancing A Mortgage and American Dream Or Mortgage Nightmare? comments powered by Disqus
{"url":"http://www.investopedia.com/articles/pf/07/acceleratedpayments.asp","timestamp":"2014-04-17T00:59:03Z","content_type":null,"content_length":"91034","record_id":"<urn:uuid:27812686-555e-482b-9d66-94bbd2d8f92d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple question about comparing PV's February 10th 2010, 10:51 AM #1 Feb 2010 Simple question about comparing PV's I understand that using the C x [1/r-1/r(1+r)^t] gives the PV value of an annuity one period from now. So if I was asked to decide between paying $10,000 now, or an annuity with a calculated PV of $10,001, would I choose the $10,000 now? Or, do I need to calculate the PV of $10,000 one period from now, such that after one period of interest the PV of $10,000 now becomes greater than $10,001? I understand that using the C x [1/r-1/r(1+r)^t] gives the PV value of an annuity one period from now. So if I was asked to decide between paying $10,000 now, or an annuity with a calculated PV of $10,001, would I choose the $10,000 now? Or, do I need to calculate the PV of $10,000 one period from now, such that after one period of interest the PV of $10,000 now becomes greater than $10,001? The formula for the present value of an annuity is: c * {[1 - 1 / (1+r)^t] / r} I can't follow yours: can you rewrite it with PROPER BRACKETING. And could you also clarify your question; thank you. I understand that using the C x [1/r-1/r(1+r)^t] gives the PV value of an annuity one period from now. So if I was asked to decide between paying $10,000 now, or an annuity with a calculated PV of $10,001, would I choose the $10,000 now? Or, do I need to calculate the PV of $10,000 one period from now, such that after one period of interest the PV of $10,000 now becomes greater than $10,001? I'm not sure I understand your question, but if you have the option between $10,000 and $10,001, you would prefer $10,001. You shouldn't have to do anything more than that if they are both present values already because they are already in the same time period. February 10th 2010, 06:21 PM #2 MHF Contributor Dec 2007 Ottawa, Canada February 14th 2010, 11:23 AM #3 Junior Member Jan 2010
{"url":"http://mathhelpforum.com/business-math/128199-simple-question-about-comparing-pv-s.html","timestamp":"2014-04-20T06:17:22Z","content_type":null,"content_length":"36512","record_id":"<urn:uuid:d363964e-20c1-461e-8c49-d77f7614ddd6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS Procedures for Latent Class Analysis & Latent Transition Analysis Select a Version 1. Are you are using a 32-bit or 64-bit machine? 2. Are you using 32-bit or 64-bit SAS? • In SAS, go to HELP > ABOUT SAS 9. • Under Software Information "W64" = 64-bit SAS. • "W64" not displayed = 32-bit SAS. (Available free to members. Register.) PROC LCA & PROC LTA: Users' Guide: 4-minute video: Install PROC LCA & PROC LTA Need Help? Consult our FAQ. Still have questions? Please email MChelpdesk@psu.edu.
{"url":"https://methodology.psu.edu/downloads/proclcalta","timestamp":"2014-04-21T09:38:42Z","content_type":null,"content_length":"47005","record_id":"<urn:uuid:947b2048-ae42-4cee-9f35-12892b186b6e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
The Purplemath Forums I've been staring at this problem for a couple of hours now. It seems to me that a piece of information was left out of the book. But I guess I could be wrong. Allan can drive his car over a route in 5 hours, and Carla can drive her car over the same route in 4 hours. How long would it take to meet if they started at opposite ends at the same time? Can this problem be solved without a distance or rate? Re: Distance rate time brian12 wrote:Allan can drive his car over a route in 5 hours, and Carla can drive her car over the same route in 4 hours. How long would it take to meet if they started at opposite ends at the same time? Can this problem be solved without a distance or rate? Yeh: use what they show in this "distance" lesson: distance: d time: 5 rate: d/5 distance: d time: 4 rate: ?? Finish the Carla talbe above. They come at each other so you add their rates. What's the total rate? How long to go d at that rate? etc. Re: Distance rate time I appreciate the help. I see I was making the mistake of setting the distance equal to 2d last night. That's what I get for trying to study when I'm tired. Quick question, that page you linked to, I found it a day or two ago and noticed it said "you cannot add rates". I solved an earlier problem in the same book by adding rates, but I figured it was just a coincidence I found the correct answer. Is there some kind of special circumstance where you are allowed to add rates? Re: Distance rate time brian12 wrote:that page you linked to...said "you cannot add rates"....Is there some kind of special circumstance where you are allowed to add rates? You can't add rates for two different parts, like you can add distances and times. Like "1st leg is 100 mi in 2 hrs & 2nd leg is 300 mi in 5 hrs", but you can't add "20 mph on one street & 40 mph on another street" to get "60 mph for the two streets". That's what they mean when they said "you can't add rates". But if you're doing combined speeds, like where they're going toward each other so the "net" speed is the combined/added speed, then it's okay. But when it's like that, you can't count both distances or times so "2d" was wrong. It's kind of a context thing I guess.
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=7&t=3545","timestamp":"2014-04-19T22:08:11Z","content_type":null,"content_length":"22535","record_id":"<urn:uuid:634acfd3-cd22-445b-ac9d-5c4251f191a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
Leicester, MA Algebra 2 Tutor Find a Leicester, MA Algebra 2 Tutor ...In addition, I have over 10 years of experience in proofreading numerous high school and college essays and research papers. I received my TEFL Certification from TEFL Worldwide Prague in 2005. I continued to teach English in Czech Republic until August 2006. 28 Subjects: including algebra 2, English, reading, writing ...The course also sees the introduction of concepts such as exponential and logarithmic functions. Algebra 2 helps students who want jobs that involve chemistry, medicine and physics. It is also helpful in business and economics because algebra teaches how one variable affects another. 27 Subjects: including algebra 2, reading, English, writing I am a professional scientist (geophysicist) who has taught math and science at the high school and college level. Whether the objective is to swim a length of the pool or master calculus-based physics, I have always enjoyed helping students achieve their potential. The one-on-one tutoring experie... 8 Subjects: including algebra 2, calculus, physics, geometry ...Thus I have an informed perspective regarding both teaching and application of these disciplines. Recently I have been accepting some on-line tutoring requests in order to evaluate the Wyzant on-line tutoring facility, which is in beta development, and assess its feasibility for my content. It ... 7 Subjects: including algebra 2, calculus, physics, algebra 1 I am a licensed mathematics teacher, high school athletic coach and small business owner. I tutor for MCAS and special needs students at two area high schools. I teach remedial math courses at a state college for students who have struggled with math courses during their high school years. 21 Subjects: including algebra 2, statistics, GRE, geometry Related Leicester, MA Tutors Leicester, MA Accounting Tutors Leicester, MA ACT Tutors Leicester, MA Algebra Tutors Leicester, MA Algebra 2 Tutors Leicester, MA Calculus Tutors Leicester, MA Geometry Tutors Leicester, MA Math Tutors Leicester, MA Prealgebra Tutors Leicester, MA Precalculus Tutors Leicester, MA SAT Tutors Leicester, MA SAT Math Tutors Leicester, MA Science Tutors Leicester, MA Statistics Tutors Leicester, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/Leicester_MA_algebra_2_tutors.php","timestamp":"2014-04-16T19:11:27Z","content_type":null,"content_length":"24148","record_id":"<urn:uuid:3e847f18-c8df-462e-b3f4-132ea38c53c4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Grove Hall, MA Algebra 1 Tutor Find a Grove Hall, MA Algebra 1 Tutor ...I have being tutoring undergraduate and graduate students in research labs on MATLAB programming. In addition, I took Algebra, Calculus, Geometry, Probability and Trigonometry courses in high school, and this knowledge helped me to achieve my goals in research projects involving 4-dimentional ma... 16 Subjects: including algebra 1, calculus, geometry, Chinese I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment... 14 Subjects: including algebra 1, calculus, statistics, geometry ...I can help with that. I pride myself on having a high sense of empathy which allows me to understand and relate to the student’s perspective, identify any barriers to learning, and find ways to work around them. I look at not only the current lesson, but also help students to relate prior learn... 23 Subjects: including algebra 1, calculus, geometry, GRE ...As a result, I enrolled in school programs that took me to the University of South Carolina and to South Africa. In May 2010 I graduated with a Bachelor of Science and moved briefly to California, then to the Big Island of Hawaii. From there I arrived in Massachusetts in early December. 49 Subjects: including algebra 1, reading, English, writing ...I have taught Prealgebra for more than 25 years. I have also worked as a tutor of this subject for the same time period. When this is combined with my many years of also teaching Algebra, I believe that you will find that I am well qualified to teach this subject. 6 Subjects: including algebra 1, geometry, algebra 2, prealgebra Related Grove Hall, MA Tutors Grove Hall, MA Accounting Tutors Grove Hall, MA ACT Tutors Grove Hall, MA Algebra Tutors Grove Hall, MA Algebra 2 Tutors Grove Hall, MA Calculus Tutors Grove Hall, MA Geometry Tutors Grove Hall, MA Math Tutors Grove Hall, MA Prealgebra Tutors Grove Hall, MA Precalculus Tutors Grove Hall, MA SAT Tutors Grove Hall, MA SAT Math Tutors Grove Hall, MA Science Tutors Grove Hall, MA Statistics Tutors Grove Hall, MA Trigonometry Tutors Nearby Cities With algebra 1 Tutor Cambridgeport, MA algebra 1 Tutors Dorchester, MA algebra 1 Tutors East Braintree, MA algebra 1 Tutors East Milton, MA algebra 1 Tutors East Watertown, MA algebra 1 Tutors Kenmore, MA algebra 1 Tutors North Quincy, MA algebra 1 Tutors Quincy Center, MA algebra 1 Tutors Readville algebra 1 Tutors South Boston, MA algebra 1 Tutors South Quincy, MA algebra 1 Tutors Squantum, MA algebra 1 Tutors West Quincy, MA algebra 1 Tutors Weymouth Lndg, MA algebra 1 Tutors Wollaston, MA algebra 1 Tutors
{"url":"http://www.purplemath.com/grove_hall_ma_algebra_1_tutors.php","timestamp":"2014-04-19T10:10:33Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:437e8771-da5a-415d-abff-8d2832838c33>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Seven Corners, VA Algebra 2 Tutor Find a Seven Corners, VA Algebra 2 Tutor ...In tutoring, I always make it a point to figure out the student's style of learning and I plan my tutoring sessions accordingly, spending extra time to prepare for the session prior to meeting with the student. My broad background in math, science, and engineering combined with my extensive rese... 16 Subjects: including algebra 2, calculus, physics, statistics Hello, I am currently teaching at a high school. I teach in general education and special education classrooms. I have had great success in helping my students maximize their math learning and success. My SOL pass rate is always near the top in FCPS. Students enjoy working with me. I make math fun! 7 Subjects: including algebra 2, geometry, algebra 1, special needs ...Early success in math is essential for so many fields of study. I want my students to feel like math is their strength and not something holding them back. I love teaching test preparation because it helps my students achieve their dreams. 12 Subjects: including algebra 2, geometry, GRE, ASVAB ...My educational background and volunteer experiences qualify me to tutor middle and high school subject material in multiple ways. I graduated with a 4.1 GPA and an International Baccalaureate diploma from J.E.B. Stuart High School. 10 Subjects: including algebra 2, chemistry, physics, geometry My name is Younes. Mathematics was always my strongest suit. I come from a different background. 9 Subjects: including algebra 2, calculus, geometry, algebra 1 Related Seven Corners, VA Tutors Seven Corners, VA Accounting Tutors Seven Corners, VA ACT Tutors Seven Corners, VA Algebra Tutors Seven Corners, VA Algebra 2 Tutors Seven Corners, VA Calculus Tutors Seven Corners, VA Geometry Tutors Seven Corners, VA Math Tutors Seven Corners, VA Prealgebra Tutors Seven Corners, VA Precalculus Tutors Seven Corners, VA SAT Tutors Seven Corners, VA SAT Math Tutors Seven Corners, VA Science Tutors Seven Corners, VA Statistics Tutors Seven Corners, VA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Baileys Crossroads, VA algebra 2 Tutors Belleview, VA algebra 2 Tutors Bon Air, VA algebra 2 Tutors Cameron Station, VA algebra 2 Tutors Crystal City, VA algebra 2 Tutors Falls Church algebra 2 Tutors Greenway, VA algebra 2 Tutors Jefferson Manor, VA algebra 2 Tutors Langley Park, MD algebra 2 Tutors Lincolnia, VA algebra 2 Tutors N Chevy Chase, MD algebra 2 Tutors North Springfield, VA algebra 2 Tutors Pimmit, VA algebra 2 Tutors Rosslyn, VA algebra 2 Tutors Tysons Corner, VA algebra 2 Tutors
{"url":"http://www.purplemath.com/Seven_Corners_VA_algebra_2_tutors.php","timestamp":"2014-04-20T16:20:43Z","content_type":null,"content_length":"24218","record_id":"<urn:uuid:1f60671d-432a-4962-bbaf-101e55041751>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: re: Puzzling Question Replies: 0 re: Puzzling Question Posted: May 25, 1995 10:31 PM Two "reasons" 0! is one: a) When something is defined by a multiplication formula, the "default value" of the formula is generally one, just like a sum starts with zero (good intro or reinforcement for the concept of additive and multiplicative identity). An "empty product" is generally assumed to have a value of one. b) The gamma function gives a value of one for 0!. This is a much more valid "reason" than the last one, since the gamma function can define the factorial for things other than positive integers. Of course, the gamma function involves an improper integral, so the situations in which you could use it would be somewhat limited. (I just joined this list, so I don't know what level people here are Now a question of my own--is there a place that I can download an electronic version of the NCTM standards? (Sorry if this is a frequently asked question, but there were no archives to search to determine this).
{"url":"http://mathforum.org/kb/thread.jspa?threadID=481941","timestamp":"2014-04-16T16:05:15Z","content_type":null,"content_length":"14423","record_id":"<urn:uuid:10f293f8-01e8-4536-a358-d566bb8edfd7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Immutability in C# Part 10: A double-ended queue Comments 28 Based on the comments, the implementation of a single-ended queue as two stacks was somewhat mind-blowing for a number of readers. People, you ain't seen nothing yet. Before we get into the actual bits and bytes of the solution, think for a bit about how you might implement an immutable queue which could act like both a stack or a queue at any time. You can think of a stack as "it goes on the left end, it comes off the left end", and a queue as "it goes on the left end, it comes off the right end". Now we want "it goes on and comes off either end". For short, we'll call a double ended queue a "deque" (pronounced "deck"), and give our immutable deque this interface: public interface IDeque<T> T PeekLeft(); T PeekRight(); IDeque<T> EnqueueLeft(T value); IDeque<T> EnqueueRight(T value); IDeque<T> DequeueLeft(); IDeque<T> DequeueRight(); bool IsEmpty { get; } Attempt #1 We built a single-ended queue out of two stacks. Can we pull a similar trick here? How about we have the "left stack" and the "right stack". Enqueuing on the left pushes on the left stack, enqueuing on the right pushes on the right stack, and so on. Unfortunately, this has some problems. What if you are dequeuing on the right and you run out of items on the right-hand stack? Well, no problem, we'll pull the same trick as before -- reverse the left stack and swap it with the right stack. The trouble with that is, suppose the left stack is { 1000, 999, ..., 3, 2, 1 } and the right stack is empty. Someone dequeues the deque on the right. We reverse the stack, swap them and pop the new right stack. Now we have an empty left-hand stack and { 2, 3, 4, .... 1000 } on the right hand stack. It took 1000 steps to do this. Now someone tries to dequeue on the left. We reverse the right queue, swap, and pop, and now we have { 999, 998, ... 3, 2 }. That took 999 steps. If we keep on dequeuing alternating on the right and left we end up doing on average five hundred pushes per step. That's terrible performance. Clearly this is an O(n^2) algorithm. Attempt #2 Our attempt to model this as a pair of stacks seems to be failing. Let's take a step back and see if we can come up with a recursively defined data structure which makes it more apparent that there is cheap access to each end. The standard recursive definition of a stack is "a stack is either empty, or an item (the head) followed by a stack (the tail)". It seems like we ought to be able to say "a deque is either empty, or an item (the left) followed by a deque (the middle) followed by an item (the right)". Perhaps you have already seen the problem with this definition; a deque by this definition always has an even number of elements! But we can fix that easily enough. A deque is: 1) empty, or 2) a single item, or 3) a left item followed by a middle deque followed by a right item. Awesome. Let's implement it. // WARNING: THIS IMPLEMENTATION IS AWFUL. DO NOT USE THIS CODE. public sealed class Deque<T> : IDeque<T> private sealed class EmptyDeque : IDeque<T> public bool IsEmpty { get { return true; } } public IDeque<T> EnqueueLeft(T value) { return new SingleDeque(value); } public IDeque<T> EnqueueRight(T value) { return new SingleDeque(value); } public IDeque<T> DequeueLeft() { throw new Exception("empty deque"); } public IDeque<T> DequeueRight() { throw new Exception("empty deque"); } public T PeekLeft () { throw new Exception("empty deque"); } public T PeekRight () { throw new Exception("empty deque"); } private sealed class SingleDeque : IDeque<T> public SingleDeque(T t) { item = t; } private readonly T item; public bool IsEmpty { get { return false; } } public IDeque<T> EnqueueLeft(T value) { return new Deque<T>(value, Empty, item); } public IDeque<T> EnqueueRight(T value) { return new Deque<T>(item, Empty, value); } public IDeque<T> DequeueLeft() { return Empty; } public IDeque<T> DequeueRight() { return Empty; } public T PeekLeft () { return item; } public T PeekRight () { return item; } private static readonly IDeque<T> empty = new EmptyDeque(); public static IDeque<T> Empty { get { return empty; } } public bool IsEmpty { get { return false; } } private Deque(T left, IDeque<T> middle, T right) this.left = left; this.middle = middle; this.right = right; private readonly T left; private readonly IDeque<T> middle; private readonly T right; public IDeque<T> EnqueueLeft(T value) return new Deque<T>(value, middle.EnqueueLeft(left), right); public IDeque<T> EnqueueRight(T value) return new Deque<T>(left, middle.EnqueueRight(right), value); public IDeque<T> DequeueLeft() if (middle.IsEmpty) return new SingleDeque(right); return new Deque<T>(middle.PeekLeft(), middle.DequeueLeft(), right); public IDeque<T> DequeueRight() if (middle.IsEmpty) return new SingleDeque(left); return new Deque<T>(left, middle.DequeueRight(), middle.PeekRight()); public T PeekLeft () { return left; } public T PeekRight () { return right; } I seem to have somewhat anticipated my denouement, but this is coding, not mystery novel writing. What is so awful about this implementation? It seems like a perfectly straightforward implementation of the abstract data type. But it turns out to be actually worse than the two-stack implementation we first considered. What are your thoughts on the matter? Next time, what's wrong with this code and some groundwork for fixing it. It does leave a little to be desired. Now instead of an O(n²) algorithm, you have a recursive O(n²) algorithm, so we can hammer both the CPU and the stack with it. Nice. ;) The problem here is the recursive chain that is produced when you call EnqueueXXX and DequeXXX over a non trivial deque. I'm working in a solution that has, for every generarl Deque, a cachedDequeLeft and cachedDequeRight, but I'm not sure if it´s going to work. Cross your fingers :) Aaron: Yep! It's truly awful. Each enqueue is O(n) in stack consumption, time and number of new nodes allocated, so enqueuing n items in a row is O(n²) in time. Olmo: Sounds interesting! For some reason, there's been a lot of buzz lately around immutability in C#. If you're interested in I've been working on it for a while and there is no end. In my implementation Deque is FAST so eneque have to be defined in terms of Deque. The end of the history is that, while you can reuse a lot of this trees (should we call them firs? hehe), for a deque with n elements you need about n^2 that represent every single instance of all the possible subintervals. So having a structure that needs so much memory is stupid. Also, inserting an element stills O(n)... So a way to nowhere ... :S I'm thinking that you could just throw together an immutable version of a doubly-linked list. You did say that the solution was tricky, and this one is downright banal, so it probably isn't what you're looking for, but it *would* work and give O(1) enqueue and dequeue performance. All you'd need is an Element<T> with (read only) Left, Right and Value properties, and you could give the Deque<T> "beginning" and "end" fields of type Element<T>. Throw in a constructor that takes two Element<T> parameters and you've got yourself an immutable Deque. I guess the problem is that it would have to allocate a new Deque<T> for each and every Dequeue operation. The memory performance isn't stellar. Then again, that's exactly what we did for the Queue <T>, so does it make a difference? Actually... never mind, I can see now why that wouldn't work - the Element<T> would have to be mutable. The Deque could look immutable on the outside, but I imagine the point of this exercise is to build the whole thing using completely immutable data structures. So, scratch that, back to square one. :-) Yep, you got it. Doubly-linked lists are always mutable. Going back to attempt #1, if when one stack is empty, you reverse and transfer just *half* the elements from the other stack, you get amortised O(1) performance, and you're done ;) This is mentioned in Chris Okasaki's book "Purely functional data structures" which describes a lot of interesting immutable data structures. Looking forward to seeing your alternative as well! I hope that the half-reverse idea isn't the solution; that was the first thing that popped into my head, but it's only reducing that worst-case O(n²) performance to O(n log n), and it also raises the performance cost of enqueuing everything from one end and dequeuing everything from the other end from O(n) to O(n log n). It's better than what we've got so far, but it ain't great. You could get similar performance using an AVL or red-black tree as the internal data structure; the "beginning" is the outer-left leaf and the "end" is the outer-right. But it's still a high price to pay. Knowing Eric, I'm sure he's got some extremely clever and totally nonintuitive hack that gives O(1) performance almost all the time. Aaron: no, Luke is right. If you are clever about when you rebalance the deque, you can get amortized O(1) performance. You end up having to keep around extra information about how big each queue is, but that's not hard. However, that is in fact not the solution I had in mind. Rather than fixing Attempt #1, I'm going to fix Attempt #2 by being more clever about what exactly goes in the left, middle and right I don't think that's exactly what you meant, and I'm sure I'm missing something, but would return new Deque<T>(left, middle.DequeueRight(), middle.PeekRight()); even work as planned? Wouldn't we end up with the wrong right element after we do this? Did I just say 'wrong right element'? How is it you manage to do this to me almost every time? I was able to make an efficient Deque using 2 "Trimmable Stacks" and keeping Deque Count information, but I am not sure about approach #2. I am thinking it might be possible by making left and right into Stacks and keeping middle a Deque. EnqueueLeft would be: return new Deque<T>(left.Push(value), this, right); I am just not sure if this works for all other operations. Great series by the way. Keep it going! Chris: Why would we end up with the wrong right element? Remember, middle is IMMUTABLE. Dequeuing it does not change the value of middle, it returns a different deque. We can dequeue that thing all we want and its rightmost element is the same as it ever Dr. Blaise: Your intuition is good. I'm going to stick with something-on-the-left, something-in-the-middle and something-on-the-right, and those somethings will be more complex than Attempt #2. It's not going to be exactly as you've sketched out though. I'm not sure if I'm on the right track, but the thing I'm noticing is that since a deque implicitly has 3 pieces, it's easier to deal with 2 elements at a time than it is to deal with just 1. What if you made the left and right sides a kind of "buffer" (stacks with a count, I guess), and did the recursive enqueuing only when you had 2 elements to move? For example... if you're enqueuing from the left, and the left buffer has one or two elements, just tack it on. If it has three, then pop all 3 elements, enqueue the last two onto the inner deque, and put the 1st back on the stack, then enqueue the new element normally. There is some recursion, but 50% of the time the depth is zero, 25% of the time it's one level deep, 12.5% of the time it only goes two levels, etc. Once you hit the end, which would be a SingleDeque, then enqueuing two elements at a time is trivial; just move the left to the right, the first element into a new SingleDeque in the middle, and the last element onto the right. I think this is also "amortized" O(1) performance, right? To dequeue off of either side you just reverse this process, same performance. It can end up being lopsided, but I don't think it matters; if you run out of elements on the "requested" side, just start taking them off the other side - since you've got a maximum of 3 elements to pop at any level before you find the "tail", it's still pretty Is this making any sense or have I officially gone off the deep end?
{"url":"http://blogs.msdn.com/b/ericlippert/archive/2008/01/22/immutability-in-c-part-10-a-double-ended-queue.aspx","timestamp":"2014-04-17T12:43:43Z","content_type":null,"content_length":"123888","record_id":"<urn:uuid:d20befc6-e846-4285-a720-02e9e6263095>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by bex Total # Posts: 81 What are two conditions that must be met before the Extreme Value Theorem may be applied? it approaches positive and negative infinity? What is the end-behavior asymptote for f(x)=(x^2+4x-5)/(x-2) ? There is an oblique asymptote at y=x+6 and a vertical asymptote at x=2. Is one of those the end-behavior asymptote? is it y=2x^2 -4x+14 ? Find the polynomial end-behavior asymptote in the graph of f(x)=(2x^3 +6x-1)/(x+2) If y = y(x), write y in Leibniz notation. If g(x) = a f(x); find g (x). Is it g'(x)=af? Sarah has an 86% average on her 1st 5 tests. If she gets a 92 on her next test, what will her average be then? i wld keep it as " meteorite" (sorry for the confusion) which definition? a meteorite is a stony or metallic object that is the remains of a meteoroid that has reached the earth's surface how has the geography and history of the South helped create a diverse population that continues to grow? i think its C...but i cld be wrong its called weathering - we just did it in science. you get 3 types: biological, physical and chemical weathering. How does Arthur Miller manage to create tension in Act Two of ''A View From The Bridge''? d how does act two reflect all the main themes in the play? I'm terrible at writing essays and I really need help! i'm not asking you to write it for me, just a su... oh, okay, hank you so much! Physics - urgent Please, please, I really need gelp for this. a bowling ball is thrown down from the top of a building at 20 m/s. Find the distance the ball travels in A) 1 s B) 3 s should I make the 20 m/s negative? I'm assuming the ball is just being thrown stright down... I'm using the formula d=V(initial)*t+1/2a*t^2 I'm g... why must stock control be closely monitered ? Algebra 2 honors sorry.im not very good at algebra =[ Algebra 1 honors .09x+.08x(30)= $1380 + $6400 = ? ......i think/hope that this is right.... please help me! How are the flow of matter and the flow of energy through ecosystems different? suppose x S f(t) dt= x^2 - 2x + 1. Find f(x). 1 S = integral and 1 = lower level, x = upper i don't understand what i'm supposed to find/what to do and i didn't make any typos. suppose x S f(t) dt= x^2 - 2x + 1. Find f(x). 1 S = integral and 1 = lower level, x = upper i don't understand what i'm supposed to find/what to do a 70 kg rope climber starts from rest, moves with constant acceleration, and climbs 9 meters up a rope having a maximum strength of 840 N. What is the greatest possible speed she can have afte climbing 9m? i don't get this quote. can someone please explain? who is they? "there is also the difference between merchants barely tolerated by a centralized empire and those whose rulers and governments used them for their imperial cause... they could, in terms of entrepeneursh... earth clay could you tell me how clay plays a cruial part in the origin of life This site should help you: http://en.wikipedia.org/wiki/Origin_of_life#Clay_theory could someone please help me i have to write about the start of life on a extrasolar planet and talk about chemical evolution,, could someone please give me some pointers Is this S103 ECA? yes it is im just need some pointers so that i can do the work in my own words but i don... maths equations could you please help me on this equation r = sx2/2t im really stuck Your non-use of parenthesis makes in difficult to know what the equation is. What are you trying to do with the equation? sorry im trying to make x the subject divide each side by s r/s= x^2 /2t multiply each... solar science could you give me any ideas on the stages of chemical evolution that would need to occur towards the development of life and the timescales many thanks http://en.wikipedia.org/wiki/Origin_of_life chem equations could someone tell me how i could solve this chem equation CH4 + NH3 +O2...> HCN +H20 AND USE THE LOWEST WHOLE NUMBER COEFFICIENT i WOULD BE GRATEFUL FOR ANY POINTERS Try these coefficients. 2,2,3==> 2,6 Hi bex heres a little more help for you. Answer is 2CH4+2NH3+3O2--&g... could someone please crique this algebra x must be the subject 6y = 9 - 3x 3x = 6y + 9 x = 2y + 3 is this correct please Add 3x to each side.Giving 6y+3x=9. Subtract 6y from each side.Giving 3x=9-6y. Divide each side by 3.Giving x=(9-6y)/3. Divide numerator by 3. Giving x=3-2y. science geo I have to write a brief geological history of a region starting with the oldest events schist being the earlisest and then i have sandstone and gabbro, could you see if what i have wrote is correct The schist would have probably originally been deposited first as a layer of mu... I have to write a brief geological history of a region starting with the oldest events with schist being the oldest it says i should not speculate but concentrate on the evidence have what i wrote on the right lines if not can you give me some pointers The schist would have p... is sandstone a sedimentary rock that consists of minerals of quartz and feldspar, does it have any other minerals. I believe it is formed by layers of sand accumulated resulting in sedimentation because the sandstone is formed when compacted under pressure. it is foliated and ... could you please tell me if schist is a sedimentary rock or metamorphic. Is it formed by high temperature and high pressure. I have to draw a sketch of schist would you be able to suggest a web page that shows a sketch Yes it is metamorphic. could you please tell me if this is the correct formula for radiometric date t=1/constant 1n (1+d/p) many thanks ps this is a fantastic website I think that you are after t=1/ë[ln(1+D/P)] where: t = age of the sample D = number of atoms of the daughter isotope in the samp... geology / science would you be able to suggest a good website for me to answer this question granite is intrusive , felsic rock, which may be produced by partial melting or fractional crystallization, Explain how these processes produce felsic magmas from more mafic parent magmas (200 words) wo... geology / science the age of granite can be determind using radiometric dating, explain the basis for the determination of a 238 U- Pb radiometric date of 1120 Ma for a granite. I have as the answer 238 U decays to Pb with an experimental law, that is the amount of U and Pb in granite are funct... Biology help please can someone help i need to know the difference at telophase 2 of meiosis if their had been a crossing over at a postision halfway between the huntingtons disease genes please im finding this last question hard Biology help can anybody please tell me if the diagram to show anaphase 1 is a cell with the 4 genes at the top and 4 genes at the bottom, im struggling with this as there are many diagrams to choose but is this one correct mant thanks can anybody tell me what is meant by the term regulating mortality factor and how it can lead to successful biological control of pest in agriculture HI, I am doing tma 7 aswell. if you send ur email i'll send my answer to you. do you know the answer to question 3? Thanks ... how does the structure of green cell organelles relate to the metabolic process of photosynthesis and aerocic respiration many thanks http://en.wikipedia.org/wiki/Chloroplast http://en.wikipedia.org/ wiki/Light-dependent_reaction http://en.wikipedia.org/wiki/Calvin_cycle I am given the structual formula for lactose howere i have to write out the equation for the hydrolosis of lactose is the following what is required C12H22O11 +H2O>>> C6H12O6 +C6H12O6 all the numbers are subscript, would appreciate if someone would guide me Since this... is anyone able to help i have to write 700 words on the subject of the metabolism of green plant cells in relation to energy exchange processes any guidance would be very much appreciated http:// en.wikipedia.org/wiki/Photosynthesis http://en.wikipedia.org/wiki/Calvin-Benson_cy... biology holly leaf miner can anyone please explain the term regulating mortality factor (regarding the holly leaf miner )and how understanding of this can lead to successful biological control of pests in agriculture and A mutation of c to t in position 3 and another of G to A in position 7 results in the production of a different sequence of ammino acids work out the new sequence and suggest why it might change the function of the new protein I get the sequence of ATAAGCTTT in Mrna AUAAGCUUU ... The human gene for the production of lactose is on chromosome 2. The dna nucleotide sequence of a small part from the middle is 123456789 TACTCGGAA I haVeTO WRITe DOWN THE EQVIVALENT SEQUENCE FOR MRNA AND HENCE WORK out the sequence of amino acids coded in this part of the pol... Could anyone suggest a web site where i could get the structual formulae and the equation for the hydrolosis of lactose thanks Could somebody check this formula for the structure of hyroloysis of lactose C12H22O11+H20 ...> C6H12O6H12O6 OBVIOUSLY THE FIGURES ARE ALL SUBSCRIPT Many thanks Is that not the molecular formulae ??? could you explain the meaning of the phrase glycoside c-1 to c-4 linkage this is in regard to the structural formula of lactose many thanks http://en.wikipedia.org/wiki/Lactose thanks that was a real great help now i understand Is this the stuctual formula for hydolosis of tocainide CH3 NHCOCH^2N(C^2H^3)^2 CH3 WITH THE HEXAGON LINKING THE 2 X CH3 how would i draw a structual formula of the products of the complete hyrolosis of tocainide O CH3-N-C-CH-NH2 CH3 H CH IM told this is tocainide please can anybody help a distressed student the O should be over the C atom hello how would i draw a monomer CH2-CHCH=CH2 I have to draw three monomer units can anyone help can anybody explain why and how a copper based catalyst helps in methonal production and the impact it would have on mass production many thanks A catalyst lowers the activitation energy needed for a reaction and allows reactants to produce more products at a lower temperature... re s103 please could you help me on that question about methonal please please help im running out of time What question? Do you mean methanol? Please post your questions with the subject appropriately labeled, and not addressed to one person. Could anybody help me with this question "In practice the methanol production process is operated at temperatures of 250 to 300 degrees and at a pressure of 50 to 100 atmosphere (50 to 100 times normal atmospheric pressure) in the presence of a copper based catalyst comme... I have seen on the chemistry answers that a symbol is used please can you tell me what it means they are the upside v eg Na^ and CN^ It means that the number or symbol that follows is placed in the top corner of Symbol before it E.g cm^3 or CN^- you just cant write them on her... is NH2 an amine and classed as ammonia and is c=0 or N-H an amide any help would be appreciated http://www.onelook.com/?w=amide&ls=a RNH2 is basic and acts much like ammonia. The -CONH2 group is an please could you tell me how i would draw an hydrogen atom which has only one electron and describe why it has so many spectral lines, I have studied my text books but cant seem to figure it out, many thanks in advance http://images.google.com/images?&hl=en&num=10&btnG=Google+... Dr Bob222 Can you help me with this please - Temp - 650 Pressure - 690mm Hg mass - 0.927g volume - 194cm3 How do I calculate the volume of the gas at a different temp? & at a different pressure? You can do this two ways. The simplest is P1V2/T1 = P2V2/T2 I have a chemical composition of As2O3. How do I calculate the mass of oxygen that combines with 1 mol of arsenic to form the oxide of arsenic? How do I use this to obtain the empirical formula of the oxide? That IS the empirical formula of the compound. You calculated the mas... If the arsenic is 76% and oxygen is 24% how would I calculate the chemical composition of of the oc=xide of arsenic that has been formed? Divide 76 by the atomic weight of As (which is 74.92) and 24 by the atomic weight of O (which is 16.00). The two new numbers that you get w... Temp - 650 Pressure - 690mm Hg mass - 0.927g volume - 194cm3 How do I calculate the volume of the gas? You have the volume, given. How do I calculate the volume at a different temperature? What evidence supports classifying H2SeO3 as an acid? In a water solution, it turns blue litmus red. I have a compound ch3 = ch- ch - oh ch3 the ch3 is under the first ch from the left, i have to identify the monomer unit and draw a structure showing three monomer unit any help i have a compound CH3 = CH - CH - OH and another CH3 under the first CH from the left , I have to id... What is polymerization and what type is involved in forming a polymer when I have CH3 N CH O C and NH3 Can any one help please Polymerization is the chaining of monometers. Look for the double bond, it breaks, and forms a chain. the diagram below shows a structural formulae of 3 cabon compounds labelled 1 2 and 3 and the eaction between compounds 1 and 2. The equation for this reaction is incomplete only the reactants are shown, the missing products being indicatedby a? :o ch2 =ch-ch -oh + ch3 - ch - ... are the noble gases of NCCN cyanogen = Argon HCN hydrogen cyanide = Neon thanks I don't know how to answer this. There are 10 electrons, all bonding electrons in HCN. In the case of F, for example, we say its ion is isoelectronic with Ne; i.e., there are 10 total electrons... is this the strutual formula for hydrogen cyanide H:C:::N and is this correct for cyanogen :N:::C:C:::N any advice woulds be appreciated This is more chemistry than physics. It is important to put the subject line carefully in order for the right people to look at the problem.... if i am trying to find the diameter of an oil molecule and express my answer in scientific notation. the info is drop of oil 0.05 cm3 from a dropper spreads out to 40 cm2 is it vol = l x w x h 0.05cm3 = 20 x 20 xh 0.05 cm3 = 40cm2 after this im stuck as i dont know what the si... which element results if 2 protons and 2 neutrons are ejected from a radium nucleus with atomic num 88 and mass num 226 how do i go about this please can you help If you can add you can do these. 88Ra226 ==>2X4 + zzYww. Now make the numbers add. Protons must be conserved. M... bock 7 s103 is there anybody out there who have looked at tma 06 yet if you have could we pick each others brains lol thanks I looked up tma 06 on google and found many site regarding religion which certainly is not my field. Please clarify your question, if you still have one. If it is a... If i was to draw a diagram of a hydrogenatom with one electron and its many spectral lines would i need to draw the BOhr model with the nucleus in the middle and 3 outer rings or have i lost the plot That is one way. Actually, there are an infinite number of stable orbits that... a drop of oil vol 0.05 cm3 is released from a medicicne dropper on to a calm surface pond where it spreads out to cover an area of 40 cm 3. Assume oil film is uniform thickness equal to diameter of oil molecule. Calculate the diameter of oil molecule and express in scientific ... can anybody give me the density of arsenic oxide at STP is this the correct formula volume = mass/ density thanks Yes, but if you want the density, it is density = mass/volume. The number I have seen floating around on this board is 17.8 g/L but the density of a gas and the de... could anyone tell me why the molecular formula of agas is not always the same as the empirical formula tnx It may be a dimer or a trimer. For example, CH is the empirical formula for acetylene but the molecular formula is HC(triple bond)CH or C2H2. hi can anyone help how do i determine the molecular formular of the oxide of arsenic in its in its gaseous state the answer i got was 17.8 gl -1 for the density, which law do i use is it boyles law or charles law thanks A mol of a gas occupies 22.4 L at STP. If it is 17.8 g/L ... the big idea of energy dont get it :S acids and alkalis - lemons contain citric acid which make them tast sour -Soft drinks contain Phosphoric acid to give thm a shrper tast and to prevent bacteria multiplying at a quicker rate -fruit jucies contain Malic acid ,,, which is found in apples and cherries dunno if this is any help but ...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=bex","timestamp":"2014-04-17T17:11:41Z","content_type":null,"content_length":"27306","record_id":"<urn:uuid:d0b4ca5c-7197-477f-8164-fa1d8979f3b4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Crowdsourcing ConGen - Populations in Hardy-Weinberg Equilibrium 1239 words • 6~9 min read Crowdsourcing ConGen – Populations in Hardy-Weinberg Equilibrium This post is part of the Crowdsourcing ConGen project. Crowdsourcing is the process of opening up a resource to a community for input and contributions. Throughout the coming year I’ll be posting manageable pieces of this document for the audience of Southern Fried Science to read and review. Please visit the main post for an overview. “I have never done anything useful. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world.” ~ Godfrey Harold Hardy The simplest model for a population is one in which the frequencies of alleles and genotypes remains constant from generation to generation. Under this model, there are no outside forces influencing selection, there is no tendency for any allele or genotype to be favored over any other, diploid alleles will recombine randomly in accordance with Mendelian inheritance. A population that behaves this way is said to be in Hardy-Weinberg Equilibrium. This almost never happens. In order for a population to be in Hardy-Weinberg Equilibrium, several assumptions about that population must be true: 1. Individuals within the population must be mating randomly with respect to the specified loci. 2. No mutation can be occurring at the specified loci. 3. There is no selective pressure with respect to the specified loci 4. No migration into or emigration out of the population can be occurring. 5. The population is functionally infinite. Notice that assumptions one through three are with respect to the loci being examined. Population genetics doesn’t look at entire genomes, only a very small subset of particular loci, often chosen because they meet these criteria. These phenomena can be occurring within the population, but as long as the specified loci are independent of these effects, Hardy-Weinberg Equilibrium can still be achieved. The forth assumption is incredibly rare, and if you truly have a population with 100% isolation and absolutely no gene flow, your work as a population geneticist is done. The final assumption is, of course, impossible. Assuming a population under Hardy-Weinberg Equilibrium, a diploid locus with two possible alleles will have the genotypes AA, Aa, and aa. If p is the frequency of allele A and q is the frequency of allele a, then the product of the frequency of the three possible genotypes can be written as: So from this equation we can approximate a population in Hardy-Weinberg Equilibrium. More importantly, we can test whether a set of alleles at a locus is in Hard-Weinberg Equilibrium, and detecting Hardy-Weinberg Equilibrium and deviations from Hardy-Weinberg Equilibrium is the first step in defining populations. Real samples will never behave like the model, but we can ask the question “How close to Hard-Weinberg Equilibrium are these samples?” We can calculate the expected allele frequencies of a population in equilibrium with the following equations derived from the equation above: For these equations, p(hat) is the expected frequency of allele p, q(hat) is the expected frequency of allele q, N[AA] is the number of individuals sampled with genotype AA, N[aa] is the number of individuals sampled with genotype, N[Aa ]is the number of individuals sampled with genotype Aa, and N is the total number of individuals*. So let’s examine a microsatellite locus with two alleles, 171 and 173. Assume a total of 23 individuals. Five individuals have genotype 171/171. Twelve have genotype 171/173. Six have genotype 173/ 173. From this sample set, we can calculate and and compare them to our know observed frequencies. To calculate the expected occurrence of each genotype, you can use the equations: So we find that equals 0.478 and equals 0.522 while the expected occurrence for each genotype is: 171/171 equals 5.3, 171/173 equals 11.5, and 173/173 equals 6.3. A quick summary: Genotype 171/171 □ Observed Frequency – 5 □ Expected Frequency – 5.3 Genotype 171/173 □ Observed Frequency – 12 □ Expected Frequency – 11.5 Genotype 173/173 □ Observed Frequency – 6 □ Expected Frequency – 6.3 Clearly these are not in perfect equilibrium, but have the observed values deviated significantly from Hardy-Weinberg Equilibrium? The simplest way to test this is to use a chi-squared test. In order to perform a chi-squared test, you must calculate the X^2 value for the sum of all genotypes and determine the degrees of freedom. With those two values, you can compare them on a table of chi-squared values to determine if there is significant deviation from Hardy-Weinberg Equilibrium. To calculate the X^2 value, use the equation: For this example the X^2 value equals 0.05. To determine the degrees of freedom, simply subtract the number of alleles from the number of possible genotypes. With two alleles and three genotypes, we have 1 degree of freedom. Take a look at this chi-squared table and determine where our X^2 value falls. By convention, we define significant deviation as any value that has a probability of less than 0.05 on the chi-squared table. This can be interpreted as the probability that the observed values would deviate from the expected values merely by chance is less than 5%. For our observed values to deviate significantly from Hardy-Weinberg Equilibrium, the X^2 value would have to be greater than 3.84, so this example falls well within the limits for Hardy-Weinberg Equilibrium. This means that we can make a few inferences about this locus and this population. This locus is not being selected for either in general or via sexual selection. This locus is also not mutating. Within the sampled population, there is little to no migration or emigration and this population is likely very large. Some of these inferences may be wrong. This could be a population that has undergone a recent bottleneck event, so the genetic diversity may not reveal a new, smaller population size. Sampling may have missed rarer alleles or simply been too small to fully capture the total diversity of the population. But in general, we can assume that this marker is reasonably good for estimating parameters of population genetics. Here is another example to try: 4 individual with genotype AA, 65 individuals with genotype Aa, 8 individuals with genotype aa. Does this sampled population deviate from Hardy-Weinberg Equilibrium? Discovering the reasons for deviation from hardy-Weinberg Equilibrium and defining how the allele frequencies change among populations that may or may not be in Equilibrium is the foundation for the rest of population and conservation genetics. ~Southern Fried Scientist *This example was borrowed from the textbook “Conservation and the Genetics of Populations”
{"url":"http://www.southernfriedscience.com/?p=4496","timestamp":"2014-04-21T15:15:59Z","content_type":null,"content_length":"31491","record_id":"<urn:uuid:61e85540-781b-4726-8647-882f6106779f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Avoiding Mistakes in Population Avoiding Mistakes in Population Modeling: Variability (Stochasticity) List of common mistakes in population modeling Although deterministic measures such as predicted population size and population growth rate may be useful under certain circumstances, variability is such a pervasive feature of natural populations that ignoring it almost always leads to an incomplete picture of the state of the population and an incorrect prediction of its future. Deterministic models make point (single-value) estimates of variables such as future population size, based on point estimates (e.g., average values) of parameters such as survival rates, fecundities, etc. In natural populations, these parameters show variation in time and space, and it is not the average values of these parameters that put the populations at risk; it is the lower values they take (or the tails of their distributions). Therefore, ignoring variability often gives a misleading, optimistic picture of a population's viability. In RAMAS Metapop, natural variability in parameter values (environmental stochasticity) is modeled by random fluctuations in age or stage-specific fecundities and survivor rates, carrying capacities, and dispersal rates, as well as two types of catastrophes. In addition, demographic stochasticity and observation error can be simulated (see the Stochasticity dialog under the Model menu). Demographic stochasticity is the sampling variation in the number of survivors and the number of offspring that occurs (even if survival rates and fecundities were constant) because a population is made up of a finite, integer number of individuals. In RAMAS Metapop, demographic stochasticity is modeled by sampling the number of survivors from a binomial distribution and the number of offspring from a Poisson distribution (Akçakaya 1991). Relative to other factors, demographic stochasticity becomes more important at small population sizes. Demographic stochasticity option should be used for all models, unless you are modeling densities (such as number of animals per km^2) instead of absolute numbers of individuals (however, in this case, note the fact that the program always models abundance as an integer). It is especially important to model demographic stochasticity when modeling impacts of habitat fragmentation. A model that ignores demographic stochasticity will underestimate the increased extinction risk when a population is fragmented into several smaller populations. If the standard deviation estimates you are using incorporate the effects of demographic variability, it is more appropriate to estimate standard deviation without the contribution by demographic stochasticity (see below), than to exclude demographic stochasticity from your model. This is because if the population size decreases in the future, the component of observed variance due to demographic stochasticity in your data will underestimate the variance due to demographic stochasticity in these lower population sizes, thus your model will underestimate the risk of decline or extinction of the population. If the standard deviations are high, the results may be biased because of truncation. In this case, selecting a lognormal distribution instead of a normal distribution may be helpful. Lognormal distribution is recommended if (i) any survival rate or fecundity has a small average value with a large standard deviation, (ii) any survival rate has a high average value with a large standard deviation. For a more detailed discussion, see the help file and the manual. Correlations among vital rates (elements of the stage matrix) increases the variability of the population size, and hence increases the risks of decline or increase. Thus, when correlations are not known, assuming full correlation rather than independence gives results that are more conservative (precautionary). Note that the correlation discussed here is the correlation (across years) among the underlying vital rates, for example among the survival rates (or fecundities) of different age classes; it is not the correlation in the observed number or proportion of survivors (or offspring, in the case of fecundities). The distinction relates to demographic stochasticity: Even if the underlying survival rates have high correlation across years (e.g., between one-year old and two-year old individuals), the observed proportion of survivors may have a low correlation, because the observed proportion will include sampling variation (and demographic stochasticity), which by definition is independent between age classes. Thus, the observed correlation may underestimate the actual correlation if the sample size (or abundance in the observed age classes) is low. (RAMAS Metapop assumes full correlation between survival rates of different stages, and between fecundities of different stages. By default, the program sets full correlation between survival rates, fecundities and carrying capacities, which can be changed by the user). A related issue is the correlation among vital rates of different populations in a metapopulation model, which may have a large effect on simulation results. Models with environmental stochasticity should have a sufficient number of replications to adequately simulate the tails of the distributions of random variables, and to estimate risks with sufficient precision. Use the maximum number of replications (10000) unless you are running a test simulation or making a demonstration. With 1000 replications, the risk curves have 95% confidence interval of about ±0.03 (see the figure and discussion in Chapter 11 of the manual on confidence intervals for risk curves). However, one should also be careful not to be too confident because there are a large number of replications. The confidence limits are purely statistical; they do not tell anything about the reliability of your model. If the model has large uncertainties, the risk results will not be reliable, regardless of the number replications. In some cases, the observed variability in a population is part of a regular oscillation, or other fluctuation that has periods longer than the time step of the model. Examples include multi-annual predator-prey cycles, and long-term oscillations in demographic parameters caused by global cycles such as ENSO. In many cases, annual fluctuations would be superimposed on such cycles in demographic rates (see the figure on the right for an example). When observed variability in a demographic rate includes a long-term (multi-annual) cycle, using the total observed variance as the variance of a short-term (annual) fluctuations in a model could give biased results. Note that the cycles and fluctuations we are interested in here are in vital rates (survival and fecundity); they are not cycles or fluctuations in population size or density. This is an important distinction (see below). The correct way of incorporating an observed pattern of random fluctuations superimposed on longer-term cycles is to model the long-term cycle as a temporal trend in vital rate (using .SCH and .FCH files in the Populations dialog), and to use the remaining (residual) variance to model short-term (annual) fluctuations (environmental stochasticity as modeled in the "Standard deviations" dialog). For an example of this approach used to model predator-induced cyclicity, see Akçakaya et al. (2004b). See the program manual and help files for details about temporal trends files. This issue is related to the temporal autocorrelation in survival and fecundity, i.e., whether deviations in a vital rate are correlated in time (e.g., "bad" years more likely to be followed by "bad" years). Population sizes are often temporally autocorrelated, because population size at any time step is (at least in part) a function of the population size in the previous time step(s). However, this does not necessarily mean that vital rates are temporally autocorrelated. In an exponential (density-independent) model, temporally uncorrelated vital rates ("white noise") will result in a "random walk" pattern of temporally autocorrelated population sizes, associated with spectral reddening. Most natural populations show dynamics "halfway" between white noise and random walk (in population size, not vital rates), a pattern that can be explained by white-noise (uncorrelated) environmental fluctuations and a combination of weak to no density dependence, age structure, and/or measurement (observation) error (see Akçakaya et al. 2003a). For these reasons, in RAMAS Metapop, temporal autocorrelation is not explicitly modeled. However, autocorrelated environmental fluctuations can be added as discussed above, by using temporal trend files in addition to environmental stochasticity in the model. Catastrophes are infrequent events that cause large reductions (or increases) in population parameters. (Events that increase vital rates are called "bonanzas", but both catastrophes and bonanzas can be modeled as Catastrophes in RAMAS Metapop, using different "multipliers".) Some models include catastrophes that would be better modeled as part of normal environmental variability. Many factors that are modeled as catastrophes (such as El Nino, fire, drought) form a continuum, and can be modeled either as catastrophes or as part of normal environmental variability (perhaps overlaid on top of a temporal trend), depending on temporal and spatial scales. The best (perhaps the only) way to determine whether to model such a factor as a catastrophe or as part of annual fluctuations is to check the distribution of vital rates (or other population parameter affected by the factor). If the distribution is bimodal (with some years having distinctly different values), then adding the factor as a catastrophe is justified (for example, see Figure 3 in Akçakaya et al. 2003b). If a model includes a catastrophe (e.g., a hurricane that lowers annual survival rate), then the estimates of the mean and standard deviation of the affected demographic rates (e.g., survival rate) should exclude the rates observed during catastrophe years (e.g., see Akçakaya & Atwood 1997). Otherwise, the model would include the effects of the catastrophe twice, overestimating its impact. Some catastrophes have delayed effects. For example, the effects of a fire that reduces the carrying capacity by destroying part of the available habitat will last for several years until the habitat is fully restored. During this time the carrying capacity may be gradually increasing to its pre-fire level. Similarly, the effects of a toxic spill may last for several years, during which average survival rates may be gradually increasing to their normal level, even as they fluctuate from year to year. In RAMAS Metapop, such effects can be modeled by a combination of catastrophes and temporal trends in average population parameters (such as carrying capacities or survival rate). In modeling such effects, it is important to correctly specify "Time since last catastrophe" and "Reset vital rates" parameters. See the help file or the manual for more details. The standard deviation parameters to model environmental stochasticity should be based on observed temporal variance. Variance components due to sampling and demographic stochasticity must be subtracted from total observed variance. Otherwise, standard deviations may be overestimated, which may cause overestimated risks, as well as truncation (and, consequently, bias) in vital rates. The standard deviation parameters to model environmental stochasticity should be based on the temporal variation in these parameters. When such data are lacking, some modelers use estimates of spatial variation (or even measurement error or sampling variation) from a single time step. There is no reason to expect that spatial variance (or sampling variation) should be similar to temporal Another mistake is to base the estimates on the standard error of mean, rather than on standard deviation of the series of estimates of the vital rate. When "Pool variance for survivals" option is selected in RAMAS Metapop (in the "Advanced stochasticity settings"), the standard deviations must be estimated in a specific way (see the help file). If you are estimating fecundities as a product (e.g., maternity multiplied by zero-year-old survival rate), remember that variance of the product of two random numbers is a function of their means, variances and covariance (see Akçakaya & Raphael 1998 for an example). Truncation of sampled values may introduce a bias, so that the realized means may be different from the average value used as the model parameter. (Note any error messages that the program displays while running a simulation.) The most common causes of truncation (and suggestions for correcting them) are listed below. 1. The standard deviations and means you entered are not based on data. If you guessed the values of stage matrix and standard deviation matrix, you may have overestimated them. The means and standard deviations should be based on observed survival rates. 2. The standard deviations are large because the distribution of the survival rates is bimodal. This may happen, for example, if males and females in the same age class or stage have different average survival rates. In this case, it may be better to include separate stages for males and females, or to model only females. 3. Bimodality may also result from catastrophes, if the survival rates in years with catastrophes are very different than those without catastrophes, and they are combined in a single distribution. In this case, it may be better to add catastrophes to the model explicitly, and separate their effects from normal year-to-year variability. 4. The standard deviations are large because they include variability due to sampling and measurement error and demographic stochasticity, and spatial variability. In this case, the standard deviations estimates should exclude these components (see above). Also, the "Pooled variance for survivals" option (in "Advanced Stochasticity Settings") will reduce truncations. 5. The distribution for environmental variation is Normal when there are survival rates close to 0 or 1. Use lognormal distribution instead. 6. The population you are modeling does not fit the assumption of the program that all survival rates within a population are perfectly correlated. The "Negative correlation for largest survival" option (in "Advanced Stochasticity Settings") provides an alternative assumption. As discussed elsewhere, constraints must be imposed when sampling vital rates, especially survival rates. (RAMAS Metapop does this automatically, as long as the "Constraints Matrix" is properly Even when constraints are imposed, demographic stochasticity may make the number of individuals surviving from a given stage larger than the number in the stage in the previous time step, thus creating "phantom" individuals. This happens if there are two or more survival rates (to different stages) from a given stage, the total survival rate is close to 1.0, and the number of individuals is small. This is automatically corrected by RAMAS Metapop, but if you are using another program, make sure a similar correction is made. The effect of uncertainties in the model structure and parameters on model results get compounded in time. In other words, the range of predicted outcomes expands with time, so that for long time horizons (simulation durations), the results may become too uncertain to be of any use. Long time horizons also make it difficult to justify model assumptions (e.g., that the vital rates will fluctuate around the same average values with the same variability as was observed). In contrast a simulation time horizon that is too short may yield results that are not relevant or interesting. For example, when simulation time horizon is less than the generation time of the species, the risk of extinction or substantial decline may be very low, but this low risk may not be relevant to the management question at hand. Also, as discussed elsewhere, impacts may be assessed as very small when simulation time horizon is too long or too short (see also Akçakaya & Sjögren-Gulve 2000) The appropriate duration depends on the biology of the species (especially its generation time), the amount of data (especially the length of the time period from which data are available), and the question addressed (especially whether an absolute or relative prediction is required). Confusion about the terminology related to uncertainty and variability contributes to some modeling mistakes. As modeled in RAMAS Metapop, natural variability results from temporal and spatial environmental fluctuations, catastrophes, and demographic stochasticity. Natural variability can be modeled and translated into risk (probability) estimates using a stochastic model. Model or parameter uncertainty results from measurement error, lack of knowledge, and model misspecification and biases. It determines model accuracy, and its effects on the uncertainty of model results increases with time. In principle, this kind of uncertainty can be reduced by collecting additional data. This type of uncertainty should be incorporated in the form of ranges (lower and upper bounds) of each model parameter. For more information, see the "Measurement error and Uncertainty" topic in the RAMAS Metapop help file or manual. List of common mistakes in population modeling
{"url":"http://www.ramas.com/CMvar.htm","timestamp":"2014-04-18T18:11:27Z","content_type":null,"content_length":"24828","record_id":"<urn:uuid:7acb5576-500d-475d-84d2-cbd6ba626e82>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Historically first uses of mathematical induction up vote 9 down vote favorite I'm interested in find out what were some of the first uses of mathematical induction in the literature. I am aware that in order to define addition and multiplication axiomatically, mathematical induction in required. However, I am certain that the ancients did their arithmetic happily without a tad of concern about induction. When did induction get mentioned explicitly in the mathematical literature? Definitely this places before about 1800 when the early logicians started formulating axioms for arithmetic. ho.history-overview arithmetic peano-arithmetic soft-question add comment 4 Answers active oldest votes There are several questions here, so my answer overlaps with some of the others. 1. First use of induction in some form. I would nominate the "infinite descent" proof that $\sqrt{2}$ is irrational -- suppose that $\sqrt{2}=m/n$, then show that $\sqrt{2}=m'/n'$ for smaller numbers $m',n'$ -- which probably goes back to around 500 BC. 2. First published use of induction in some form. Euclid's infinite descent proof that every natural number has a prime divisor, in the Elements. up vote 18 down 3. First use of induction in the "base step, induction step" form. I suggest Levi ben Gershon and (more definitely) Pascal, as mentioned in danseetea's answer. vote accepted 4. First mention of "induction". The one suggested by Gerald Edgar is the earliest I know of. 5. First realization that induction is fundamental to arithmetic: Grassmann's Lehrbuch der Arithmetik of 1861, where he defines addition and multiplication by induction, and proves their ring properties by induction. This idea was rediscovered, and built into an axiom system by Dedekind, in his Was sind und was sollen die Zahlen? of 1888. It became better known as the Peano axiom system, when Peano redeveloped it a couple of years later. add comment Induction http://jeff560.tripod.com/i.html up vote 4 Mathematical Induction http://jeff560.tripod.com/m.html down vote Quote: "The term MATHEMATICAL INDUCTION was introduced by Augustus de Morgan (1806-1871) in 1838 in the article Induction (Mathematics) which he wrote for the Penny Cyclopedia. De Morgan 1 had suggested the name successive induction in the same article and only used the term mathematical induction incidentally. The expression complete induction attained popularity in Germany after Dedekind used it in a paper of 1887 (Burton, page 440; Boyer, page 404)." – Qiaochu Yuan May 10 '10 at 18:02 add comment This is not an answer to your question because you ask for explicit mentioning. However I think this is still relevant to the discussion: some claim Levi ben Gershon (early 14th century) used induction in some sense. I read this in John Stillwell's "Mathematics and its history", p193: "Levi ben Gershon comes very close to using mathematical induction, if not actually inventing it .... Rabinovitch (1970) offered an exposition of some of Levi ben Gershon's proofs up vote 4 that certainly seems to show a division into a base step and induction step, but the induction step needs some notational help to become a proof for truly arbitrary n." down vote Rabinovitch (1970) above is "Rabinovitch, N.L. (1970). Rabbi Levi ben Gershon and the origins of mathematical induction. Arch. Hist. Exact Sci., 6, 237-248" add comment EDIT: I should have read the other posted answer before writing the answer below. Obviously, my suggestion that the name "induction" was coined by Poincare is wrong. I am curious, then, as to when the name "induction" gained popular currency. Was Poincare simply taking an established term and turning it to his own purposes in the philosophy of mathematics? In one of his essays (I forget which one, and don't have the reference at hand) Poincare discusses mathematical induction in the formal way that we think of it, and explains that it is this principle that allows mathematical argument to escape the rigid confines of formal tautologies and take flight on mathematical intuition. In fact, he uses the name induction in deliberate analogy with inductive reasoning in science (to be contrasted with the deductive reasoning that underlies logical manipulations of definitions). I don't know how much of his contribution to the formalization of induction is original, and how much he is building on earlier work. The wikipedia article on induction has a small amount of history and mentions Boole, Peano, and Dedekind (all working in the 19th century) and does not mention Poincare, while the wikipedia article mentions Grassmann as well. This suggests that up vote Poincare is indeed building on their earlier formulations. (Aside: I didn't see in either article a statement as to where the precise statement induction originated (in a footnote quoting 2 down from Boole in the induction article, the term induction does not seem to be used), so it seems conceivable that the actual name "mathematical induction" comes from Poincare.) The wikipedia article on induction mentions Bernoulli as an earlier employer of the "inductive hypothesis". It also mentions the well-known "infinite descent" arguments of Fermat, which are a variation on induction (in fact, they are a direct appeal to the well-ordering of the natural numbers), and mentions several earlier examples, going back to ancient times. None of these earlier examples are explicitly applying induction in our modern sense, though; rather, they are making arguments or calculation which are implicitly of an inductive nature. Summary: I hope that someone who knows more history and has more sources than wikipedia at hand will give a more definitive answer, but my guess is that, while inductive style arguments date back to the beginning of mathematics, the precise logical formulation of inductive arguments dates back to the 19th century (and represents part of the concern for logical foundations that developed in that century), and that the actual name "induction" may originate with Poincare (in the early 20th century). @Emerton, I edited out a couple of typos. I hope you don't mind! – Mariano Suárez-Alvarez♦ May 10 '10 at 16:41 Not at all; thank you. – Emerton May 10 '10 at 17:01 add comment Not the answer you're looking for? Browse other questions tagged ho.history-overview arithmetic peano-arithmetic soft-question or ask your own question.
{"url":"http://mathoverflow.net/questions/24102/historically-first-uses-of-mathematical-induction/24126","timestamp":"2014-04-17T18:34:25Z","content_type":null,"content_length":"70063","record_id":"<urn:uuid:cac5607a-ad47-4b0b-9e39-60b66122fee3>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Tietze transformation Tietze transformations Tietze transformations are a formalisation of the informal substitution methods that are natural when working with group presentations. The four transformations Let $G= \langle X: R\rangle$ be a group presentation, where the ‘specified isomorphism to $G$’ is unspecified! The following transformations do not change the group $G$: T1: Adding a superfluous relation $\langle X: R\rangle$ becomes $\langle X: R^'\rangle$, where $R^' = R \cup \{r\}$ and $r\in N(R)$ the normal closure of the relations in the free group on $X$, i.e., $r$ is a consequence of $R$; T2: Removing a superfluous relation $\langle X: R\rangle$ becomes $\langle X: R^'\rangle$ where $R^' = R - \{r\}$, and $r$ is a consequence of $R^'$; T3: Adding a superfluous generator $\langle X: R\rangle$ becomes $\langle X^': R^'\rangle$, where $X^' = X\cup \{ g\}$, $g$ being a new symbol not in $X$, and $R^' = R\cup\{wg^{-1}\}$, where $w$ is a word in the other generators, that is $w$ is in the image of the inclusion of $F(X)$ into $F(X^')$; T4: Removing a superfluous generator $\langle X: R\rangle$ becomes $\langle X^': R^'\rangle$, where $X^' = X - \{ g\}$, and $R^' = R-\{wg^{-1}\}$ with $w\in F(X^')$ and $wg^{-1}\in R$ and no other members of $R\prime$ involve $g$. Tietze’s theorem Given two finite presentations of the same group, one can be obtained from the other by a finite sequence of Tietze transformations. Tietze’s original paper is • H. Tietze, Über die topologischen Invarianten mehrdimensionaler Mannigfaltigkeiten, Monatsschr. Math. Phys., 19 (1908) 1 –118. See also • W. Magnus and B. Chandler, The history of combinatorial group theory, Springer (1982).
{"url":"http://ncatlab.org/nlab/show/Tietze+transformation","timestamp":"2014-04-18T15:54:43Z","content_type":null,"content_length":"22623","record_id":"<urn:uuid:356b8654-cead-4351-be1c-ab62e0c85ff6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Relativizing Chaitin's Halting Probability Status: published in the Journal of Mathematical Logic, vol. 5 (2005), pp. 167 - 192. Availability: PostScript, DVI, and PDF Abstract. As a natural example of a 1-random real, Chaitin proposed the halting probability Omega of a universal prefix-free machine. We can relativize this example by considering a universal prefix-free oracle machine U. Let Omega^A[U] be the halting probability of U^A; this gives a natural uniform way of producing an A-random real for every real A. It is this operator which is our primary object of study. We can draw an analogy between the jump operator from computability theory and this Omega operator. But unlike the jump, which is invariant (up to computable permutation) under the choice of an effective enumeration of the partial computable functions, Omega^A[U] can be vastly different for different choices of U. Even for a fixed U, there are oracles A =^* B such that Omega^A[U] and Omega^B[U] are 1-random relative to each other. We prove this and many other interesting properties of Omega operators. We investigate these operators from the perspective of analysis, computability theory, and of course, algorithmic randomness.
{"url":"http://www.math.uchicago.edu/~drh/Papers/omega.html","timestamp":"2014-04-19T14:35:37Z","content_type":null,"content_length":"2348","record_id":"<urn:uuid:dbe5fda6-3d91-40bc-852d-3ae6597359ff>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
More metric spaces October 14th 2008, 03:08 PM More metric spaces So this has 3 parts. I've gotten somewhere on it, but I'm stuck. Let (X,d) be a metric space. We say that X is d bounded if there is a point a in X and r>0 such that the ball of radius r centered at a, B(a,r)={x in X|d(x,a)<r} contains X. i.e. the ball contains X. 1)Show that if X is d bounded then for every b in X there is an r_b>0 such that X=B(b,r_b) - On this part, I understand that the Ball B(a,r) contains X implies that B(a,r) is a neighborhood of X. Does that mean that X is a neighborhood of the ball inside of it B(b,r_b) would imply that X contains this ball, but I can't get how X=B(b,r_b). 2) Show that X is NOT d bounded if and only if for every r>0, there exists a sequence {a_n} n=1,2,... such that the collection of sets {B(a_i,r)} is pairwise disjoint. - I know that pairwise disjoint means that for every i,j i not equal to j, B(a_i, r) intersected with B(a_j, r) is the empty set. This all means that X is made up of the sets of balls that are not joined in any way. Does this mean that all of these balls make up one big ball that is the neighborhood of the set of neighborhoods? If so, how will I know that X contains this ball, which by definition given X must contain the ball B(a,r) to not be d bounded. 3) Give an example of a d bounded metric space (X,d) for which there is an r>0 and a sequence {a_i}, n=1,2,... such that the collection of sets {B(a_n,r)} is pairwise disjoint. -Basically just give an example of part B. I'm not really sure what exactly is wanted here? Again, anything will help here! thanks! October 15th 2008, 11:44 AM Let (X,d) be a metric space. We say that X is d bounded if there is a point a in X and r>0 such that the ball of radius r centered at a, B(a,r)={x in X|d(x,a)<r} contains X. i.e. the ball contains X. 1)Show that if X is d bounded then for every b in X there is an r_b>0 such that X=B(b,r_b) The condition of being d bounded means that every point in the space X is within distance r of a. It follows (from the triangle inequality) that any two points in the space are within distance 2r of each other. In particular, every point is within distance 2r of b. So B(b,2r) contains the whole space. Take any infinite set with the discrete metric (where d(x,y)=1 whenever x≠y). Then the balls B(x,r) are all disjoint provided that r<1/2. But the space is d bounded because any ball of radius greater than 1 contains the whole space.
{"url":"http://mathhelpforum.com/advanced-algebra/53715-more-metric-spaces-print.html","timestamp":"2014-04-19T06:11:27Z","content_type":null,"content_length":"6807","record_id":"<urn:uuid:fd81052e-9ff2-4ad4-bdd1-2c98f42ba719>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
SailNet Community - View Single Post - Understanding and Using the Magnetic Compass An ocean navigator must have a fundamental background in navigation to ensure an accurate positioning of the vessel. Dead reckoning procedures, aided by basic instruments, will give you the foundation that can help solve the three basic problems of navigation: position, direction to destination, and time of arrival. It is possible, using only a compass and knot meter, to navigate directly to any place in the world. The compass is perhaps the most important instrument on board your vessel. Understanding its operating principles, and correctly using it for navigation, is the hallmark of a successful navigator. Present day magnetic compasses use the same forces that guided ancient mariners. A magnetized needle, in conjunction with a compass card, rotates horizontally. Present day compasses are superior to the ancient ones through a heightened knowledge of magnetic laws and greater precision in construction. The Earth's magnetic lines of force provide the directional information needed to navigate. A compass detects and converts the energy from these magnetic lines of force into a directional display. In order to understand the operation of a ship's compass, it is first necessary to understand some basic information about the Earth's magnetic field. The Earth has some of the magnetic properties of a bar magnet; however, its magnetic poles are not located at its geographic poles, nor are the two poles located exactly opposite each other as on a straight bar magnet. The Earth's magnetic poles can be considered to be connected by a number of lines of force emanating from the south magnetic pole and terminating at the north magnetic pole. These irregular, curved lines are called magnetic meridians. The angle formed at any point between a magnetic meridian and the geographic median (a straight line between the geographic north and south poles) is called variation. Lines connecting points having the same magnetic variation are called isogonic lines. The local variation and its small annual change are noted on the compass rose of all navigational charts. Variation is listed on the chart as east or west. When variation is east, magnetic north is east of true north. Similarly, when variation is west, magnetic north is west of true north. Correction for magnetic variation must be calculated if a compass direction is to be converted to a true direction, or vice versa. The force of the Earth's magnetic field can be divided into two components: the vertical and the horizontal. The relative intensity of these two components varies over the Earth. At the magnetic poles, the vertical component is at maximum strength while the horizontal component is at its minimum strength. At approximately the midpoint between the two poles, the horizontal component is at maximum strength and the vertical is minimal. The magnetic compass indicates direction in the horizontal plane with reference to the horizontal component of the Earth's magnetic field. Therefore, a compass loses its usefulness in areas of weak horizontal forces such as the area around the magnetic poles. Secondary magnetic fields in the vicinity of the compass can also affect the compass readings. These secondary magnetic fields are caused by the presence of ferromagnetic objects, electronics, and electrical wires in the boat. This error can be reduced by changing the position of the small compensating magnets in the compass case. However, it is not possible to remove all of these errors on all headings. The end result will be a compass card showing the number of degrees of error in the compass when you are on various compass headings. This error is called deviation, and like variation must also be considered when determining heading relationships. The correction for variation and deviation is usually expressed as east or west and is computed as a correction to true heading. In order to make this computation easier we usually convert the east or west values to a plus or minus value and add them algebraically. If variation or deviation is east, the sign of the correction is minus and if west, the sign is plus. A good mnemonic for remembering this is "east is least and west is best." Ship's headings are expressed in various ways, according to the basic reference. If the heading is measured in relation to geographic north, it is a true heading. If the heading is in reference to magnetic north, it is a magnetic heading and if it is in reference to the compass lubber line, it is a compass heading. The reason we need to know all of this is that directions on a chart are in relation to true north, while we must use a magnetic compass to steer. Therefore, if you draw a line on the chart from point A to point B and measure the true course between these points, you must then convert this true direction to a compass heading so you can steer the boat from point A to point B. The other side of this coin is that all bearings and courses taken from the ship's compass will be compass bearings or compass courses and must be converted to true bearings or true courses in order to plot them on the chart. While it is true that you can use the magnetic portion of the compass rose to measure and plot magnetic directions, the preferred and professional method is to use always true courses and bearings on the chart and mathematically convert them either to or from compass courses and bearings. Let me summarize these heading relationships: Deviation is the difference between the compass heading and the magnetic heading. Variation is the difference between the magnetic heading and the true heading. The algebraic sum of deviation and variation is the compass error. In order to go from compass to true, use the mnemonic "Can Dead Men Vote Twice At Elections" to remember the conversion process (Compass, Deviation, Magnetic, Variation, True, Add East.) When converting compass heading to true heading, add east deviation and variation and subtract west deviation and variation. To convert from true to compass, use the mnemonic, "T. V. Makes Dull Children All Ways" (True, Variation, Magnetic, Deviation, Compass, Add West.) When converting true heading to compass heading, add west deviation and variation and subtract east deviation and variation. Here are a few examples of the conversion process: │Compass│Deviation │Magnetic│Variation │True│ │358 │5E │003 │6E │009 │ │120 │1W │119 │3E │122 │ │180 │6E │186 │8W │178 │ │240 │5W │235 │7W │228 │ │True│Variation │Magnetic│Deviation │Compass│ │009 │6E │003 │5E │358 │ │122 │3E │119 │1W │120 │ │178 │8W │186 │6E │180 │ │228 │7W │235 │5W │240 │ I know that a GPS unit not only will display true course, but the ones that calculate variation can also display magnetic heading. I hear all the time from beginners that they don't need anything but their GPS. However, the importance of being able to understand and use your compass is undeniable, since it is totally independent of your electronics and ship's power. When all else fails, the compass will still be there to help get you home safely.... if you know how to use it. See more information on this product in the SailNet Store.
{"url":"http://www.sailnet.com/forums/70556-post1.html","timestamp":"2014-04-21T05:51:12Z","content_type":null,"content_length":"44063","record_id":"<urn:uuid:9839aa1d-3464-4ed5-8b15-f9b8538bab0a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
[IPython-User] map/reduce examples? Brian Granger ellisonbg@gmail.... Thu Apr 26 00:59:36 CDT 2012 On Wed, Apr 25, 2012 at 10:45 PM, Fernando Perez <fperez.net@gmail.com> wrote: > Hi Darren, > On Wed, Apr 25, 2012 at 7:41 AM, Darren Govoni <darren@ontrenet.com> wrote: >> Hi, >> Is there an example out there of doing map/reduce with ipython? >> I know its probably not that complicated to develop, but wanted to see >> what others had done first. > No, a full-blown example with all the mapreduce semantics hasn't been > implemented, but the key binary tree reduction step was recently > written up by Min, it's in a PR that's almost ready to go in: > https://github.com/ipython/ipython/pull/1295 > If you want to work on providing the remainder of the MapReduce api > along with the canonical wordcount implementation (that's the "hello > world" of the MapReduce universe), it would be great! Let us know and > we can try to finish up that PR so you have that layer ready to work > from. One important point about the Google style map/reduce approach and the parallelization of the reduce step. The binary tree reduction algorithm used in the above PR and my MPI is useful for parallelizing the reduction of a single key/value pair. In those cases you often haven't even explicitly identified a "key" for the set of values. Things like summing up the values in an array fits this model of In the typically applications of map/reduce though, you want to perform the reduction on many keys. The binary tree reduction then looses out because you have to do the full binary tree reduction for each key. The situation is even worse if the data set is sparse - it is possible that not each node has data for a given key. Then you end up walking a binary tree where many of the nodes are empty. In the large key limit, there is a simple and efficient method of reduction that will be even easier to implement. Each key is hashed to an integer mod the number of nodes - that is the node that will perform the final reduction for that key. After performing a local reduction step, each node sends its reduction data for each key to that keys reduction node, which then performs the final reduction step. This should not be difficult to implement in IPython. > Cheers, > f > _______________________________________________ > IPython-User mailing list > IPython-User@scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-user Brian E. Granger Cal Poly State University, San Luis Obispo bgranger@calpoly.edu and ellisonbg@gmail.com More information about the IPython-User mailing list
{"url":"http://mail.scipy.org/pipermail/ipython-user/2012-April/009994.html","timestamp":"2014-04-19T01:59:11Z","content_type":null,"content_length":"5784","record_id":"<urn:uuid:9784a19a-ce85-4bf1-90a9-b5f47d006f4e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
Fortran Wiki real(x [, kind]) converts its argument x to a real type. FORTRAN 77 and later Elemental function • result = real(x [, kind]) • x - Shall be integer, real, or complex. • kind - (Optional) An integer initialization expression indicating the kind parameter of the result. Return value These functions return a real variable or array under the following rules: 1. real(x) is converted to a default real type if x is an integer or real variable. 2. real(x) is converted to a real type with the kind type parameter of x if x is a complex variable. 3. real(x, kind) is converted to a real type with kind type parameter kind if x is a complex, integer, or real variable. program test_real complex :: x = (1.0, 2.0) print *, real(x), real(x,8) end program test_real See also dble, float
{"url":"http://fortranwiki.org/fortran/show/real","timestamp":"2014-04-18T13:08:10Z","content_type":null,"content_length":"10261","record_id":"<urn:uuid:0c1081ae-a8bd-4244-90c3-f4bfb6a65c8b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
AP Chemistry Posted by anonymous on Tuesday, April 24, 2007 at 7:29pm. This questions has 5 parts but I already have the answers for a and b. I need help on the last three parts. In an electrolytic cell, a current of 0.250 ampere is passed through a solution of a chloride of iron producing Fe(s) and Cl2(g). a) write the equation fot he half-reaction that occurs at the anode b) when the cell operates for 2.00 hrs, 0.521 g iron is deposited at one electrode. Determine the formula of the chloride of iron in the original solution. c) write the balanced equation for the overall reaction that occurs in the cell. d) how many liters of Cl2 (g), measured at 25 degrees celsius and 750 mm Hg are produced when the cell operates as described in part (b) ? e. Calculate the current that would produce chlorine gas from the solution at a rate of 3.00 grams per hour. a) 2 Cl- - 2e- --> Cl2 b) 0.0187 mol e- 0.00933 mol Fe- about a 1:2 ratio therefore = Fe3+ I need help on c,d, and e please a)Your equation is not balanced by charge. The charge is -4 on the left and zero on the right. The equation should be 2Cl^- ==> Cl2(g) + 2e b)0.521=55.85= 0.00933 mol Fe(s) 0.0187 Faradays. divide 0.0187/0.0187 = 1.00 0.00933/0.0187 = 0.499 = 0.500 A Faraday will deposit 1 mol of a univalent metal, 0.5 mol of a divalent metal, 0.333 mol of a trivalent metal, etc. SO, the change in electrons must have been 2 and not 3 as your answer suggests, and the formula is FeCl2. Another way is ox state = mol e/mol Fe = 0.01866/0.00933 = 2.000. c). So anode is 2Cl^- ==> Cl2(g) + 2e cathode is Fe^+2 + 2e ==> Fe(s) Add the two to obtain the cell reaction. d). 0.01866 C x 70.91/2 = ??grams Convert to L at STP, then use PV = nRT to convert to the non-standard conditions. e. I would put all of this into a formula and solve for the unknown. A x hrs x 3600 s/hr x molar mass = 96,485 x grams x delta e. molar mass = 70.906 g Cl2/mol Cl2. grams = 3.00 delta e = 2 Solve for A. Check my thinking. Check my arithmetic. b)0.521=55.85= 0.00933 mol Fe(s) I made a typo on (b). It should be 0.521 g/55.85 = 0.00933 mol Fe(s). Related Questions chemistry - The urine of horses are mixed with an excess of hydrochloric acid, ... 3rd grade - alice folded a piece of paper into 12 equal squares and colored them... 8th grade math. - He ripped a piece of paper into three parts, and tore each of ... Discrete Math - A factory makes automobile parts. Each part has a code ... Science 7R - contains all the same kind of cell???? what part of a cell is that... chem - Explain, in terms of electrical energy, how the operation of a voltaic ... Mathematics - A machine takes 4.2 hours to make 7 parts. At that rate, how many ... Algebra 1 - You are making up your own mix of concrete to patch a set of stairs... I need answer fast - A machine will add 128 parts of water to l part of the kool... math - a bathroom cleaner contains 1 part of bleach with 4 parts of water. if ...
{"url":"http://www.jiskha.com/display.cgi?id=1177457357","timestamp":"2014-04-17T16:15:58Z","content_type":null,"content_length":"10276","record_id":"<urn:uuid:bf1031a5-71bd-4a4d-8d9e-d0b50911332d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Journal of Algebraic Combinatorics 10 (1999), 207­225 c 1999 Kluwer Academic Publishers. Manufactured in The Netherlands. Extended Linial Hyperplane Arrangements for Root Systems and a Conjecture of Postnikov and Stanley CHRISTOS A. ATHANASIADIS athana@math.upenn.edu Department of Mathematics, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104-6395 Received September 2, 1997 Abstract. A hyperplane arrangement is said to satisfy the "Riemann hypothesis" if all roots of its characteristic polynomial have the same real part. This property was conjectured by Postnikov and Stanley for certain families of arrangements which are defined for any irreducible root system and was proved for the root system An-1. The proof is based on an explicit formula [1, 2, 11] for the characteristic polynomial, which is of independent combinatorial significance. Here our previous derivation of this formula is simplified and extended to similar formulae for all but the exceptional root systems. The conjecture follows in these cases. Keywords: hyperplane arrangement, characteristic polynomial, root system 1. Introduction Let A be a hyperplane arrangement in Rn , i.e. a finite collection of affine subspaces of Rn of codimension one. The characteristic polynomial [9, §2.3] of A is defined as (A, q) =
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/529/1622235.html","timestamp":"2014-04-18T23:53:12Z","content_type":null,"content_length":"8438","record_id":"<urn:uuid:b49a9380-3b9b-454e-bdc6-4c1123a3f787>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Image Denoising 1.1 Find the rough position of each grid line; 1.2 Remove the noise using median filtering locally. 2. Cell Extraction Individually 2.1 Guided boundary contour tracing; 2.2 Cell Extraction through 4-neighborhood region growing. 3. Shape from Shading Using Linear Approximation 3.1 Image Irradiance Equation Deductio; 3.2 3-D Shape Reconstructing. 4. Computing RBC shape surface feature 4.1 Compute mean curvature and Gaussian curvature using image convolution; 4.2 Set a threshold for this two curvature. 5: Segmentation through multiscale surface fitting More details is in Section 6: Experiment.
{"url":"http://www.hindawi.com/journals/mpe/2012/194953/alg1/","timestamp":"2014-04-19T08:35:42Z","content_type":null,"content_length":"2140","record_id":"<urn:uuid:1a53236e-9c85-49e3-9f31-f70f8ea6ef7d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Syllabus Entrance MA 120 Basic Concepts of Statistics Pogge, James Todd Mission Statement: The mission of Park University, an entrepreneurial institution of learning, is to provide access to academic excellence, which will prepare learners to think critically, communicate effectively and engage in lifelong learning while serving a global community. Vision Statement: Park University will be a renowned international leader in providing innovative educational opportunities for learners within the global society. Course MA 120 Basic Concepts of Statistics Semester SP 2009 HOD Faculty Pogge, James Todd Title Assistant Professor of Mathematics Degrees/Certificates Ph.D., Mathematics (Number Theory) M.S., Mathematical Sciences (Pure Mathematics) B.S., Mathematics/Computer Science (double major) Office Location SC 105C (Must enter through SC 105) Office Hours Tuesday, 9:30-12:30; Thursday, 9:30-12:30 Daytime Phone 816-584-6575 E-Mail todd.pogge@park.edu Semester Dates 12 January 2009 - 8 May 2009 Class Days -M---F- Class Time 1:50 - 3:05 PM Prerequisites None Credit Hours 3 Triola, Mario F. Elementary Statistics. Tenth Edition. Pearson Addison-Wesley. 2006. (ISBN: 0-321-33183-4) (ISBN 13: 9 780321 331830) Textbooks can be purchased through the Parkville Bookstore Additional Resources: Although, I am clearly biased against calculators, a calculator will be a very useful tool. You should not need an expensive one, but it should have at least a square root key. Other useful features would be an x!, nPr and nCr keys. McAfee Memorial Library - Online information, links, electronic databases and the Online catalog. Contact the library for further assistance via email or at 800-270-4347. Career Counseling - The Career Development Center (CDC) provides services for all stages of career development. The mission of the CDC is to provide the career planning tools to ensure a lifetime of career success. Park Helpdesk - If you have forgotten your OPEN ID or Password, or need assistance with your PirateMail account, please email helpdesk@park.edu or call 800-927-3024 Resources for Current Students - A great place to look for all kinds of information http://www.park.edu/Current/. Course Description: MA120 Basic Concepts of Statistics (GE): A development of certain basic concepts in probability and statistics that is pertinent to most disciplines. Topics include: probability models, parameters, statistics and sampling procedures, hypothesis testing, correlation and regression. 3:0:3 Educational Philosophy: I believe that most things are easy once shown, but tend to be difficult until shown. So a student shall be encouraged to address mathematical concepts several times primarily through doing problems. One needs a vocabulary. So there will be quizzes as we work on a mathematical language for statistics. Class time is primarily presentation with slight time for refinement, but time is made for general questions. However, teaching often takes place during an office hour where individual questions and problems can be addressed more adequately. Learning Outcomes: Core Learning Outcomes 1. Compute descriptive statistics for raw data as well as grouped data. 2. Determine appropriate features of a frequency distribution. 3. Apply Chebyshev's Theorem. 4. Distinguish between and provide relevant descriptions of a sample and a population. 5. Apply the rules of combinatorics. 6. Differentiate between classical and frequency approaches to probability. 7. Apply set-theoretic ideas to events. 8. Apply basic rules of probability. 9. Apply the concepts of specific discrete random variables and probability distributions. 10. Compute probabilities of a normal distribution. 11. Compute confidence intervals of means and percentages. 12. Perform hypothesis tests involving one population. 13. Compute regression and correlation of Bi-variate data. Core Assessment: Description of MA 120 Core Assessment One problem with multiple parts for each numbered item, except for item #3, which contains four separate problems. 1. Compute the mean, median, mode, and standard deviation for a sample of 8 to 12 data. 2. Compute the mean and standard deviation of a grouped frequency distribution with 4 classes. 3. Compute the probability of four problems from among these kinds or combinations there of: a. the probability of an event based upon a two-dimensional table; b. the probability of an event that involves using the addition rule; c. the probability of an event that involves conditional probability; d. the probability of an event that involves the use of independence of events; e. the probability of an event based upon permutations and/or combinations; f. the probability of an event using the multiplication rule; or g. the probability of an event found by finding the probability of the complementary event. 4. Compute probabilities associated with a binomial random variable associated with a practical situation. 5. Compute probabilities associated with either a standard normal probability distribution or with a non-standard normal probability distribution. 6. Compute and interpret a confidence interval for a mean and/ or for a proportion. Link to Class Rubric Class Assessment: Grades are determined from quizzes, tests, and the final examination. Grades of A(90%), B(80%), C(70%), D(60%), and F(less than 60%) will be given. A final is mandatory. The final examination is worth 100 points. The other three tests are all worth 100 points each.There will be quizzes. I will transform the quiz total into 100 points. The instructor reserves the right to change the syllabus. See Above Late Submission of Course Materials: Assignments not submitted on the due date will receive a grade of "zero". Classroom Rules of Conduct: All students are expected to be present and on time with homework assigned completed. Course Topic/Dates/Assignments: Chapters 1 through 12 Academic Honesty: Academic integrity is the foundation of the academic community. Because each student has the primary responsibility for being academically honest, students are advised to read and understand all sections of this policy relating to standards of conduct and academic life. Park University 2008-2009 Undergraduate Catalog Page 87 Plagiarism involves the use of quotations without quotation marks, the use of quotations without indication of the source, the use of another's idea without acknowledging the source, the submission of a paper, laboratory report, project, or class assignment (any portion of such) prepared by another person, or incorrect paraphrasing. Park University 2008-2009 Undergraduate Catalog Page 87 Attendance Policy: Instructors are required to maintain attendance records and to report absences via the online attendance reporting system. 1. The instructor may excuse absences for valid reasons, but missed work must be made up within the semester/term of enrollment. 2. Work missed through unexcused absences must also be made up within the semester/term of enrollment, but unexcused absences may carry further penalties. 3. In the event of two consecutive weeks of unexcused absences in a semester/term of enrollment, the student will be administratively withdrawn, resulting in a grade of "F". 4. A "Contract for Incomplete" will not be issued to a student who has unexcused or excessive absences recorded for a course. 5. Students receiving Military Tuition Assistance or Veterans Administration educational benefits must not exceed three unexcused absences in the semester/term of enrollment. Excessive absences will be reported to the appropriate agency and may result in a monetary penalty to the student. 6. Report of a "F" grade (attendance or academic) resulting from excessive absence for those students who are receiving financial assistance from agencies not mentioned in item 5 above will be reported to the appropriate agency. Park University 2008-2009 Undergraduate Catalog Page 89-90 Disability Guidelines: Park University is committed to meeting the needs of all students that meet the criteria for special assistance. These guidelines are designed to supply directions to students concerning the information necessary to accomplish this goal. It is Park University's policy to comply fully with federal and state law, including Section 504 of the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990, regarding students with disabilities. In the case of any inconsistency between these guidelines and federal and/or state law, the provisions of the law will apply. Additional information concerning Park University's policies and procedures related to disability can be found on the Park University web page: http://www.park.edu/disability . ┃Competency │ Exceeds Expectation (3) │ Meets Expectation (2) │ Does Not Meet Expectation (1) │ No Evidence (0) ┃ ┃Evaluation │Can perform and interpret a hypothesis test with 100% │Can perform and interpret a hypothesis test with at │Can perform and interpret a hypothesis test with less │Makes no attempt to┃ ┃Outcomes │accuracy. │least 80% accuracy. │than 80% accuracy. │perform a test of ┃ ┃10 │ │ │ │hypothesis. ┃ ┃ │ │ │ │Makes no attempt to┃ ┃Synthesis │Can compute and interpret a confidence interval for a │Can compute and interpret a confidence interval for a │Can compute and interpret a confidence interval for a │compute or ┃ ┃Outcomes │sample mean for small and large samples, and for a │sample mean for small and large samples, and for a │sample mean for small and large samples, and for a │interpret a ┃ ┃10 │proportion with 100% accuracy. │proportion with at least 80% accuracy. │proportion with less than 80% accuracy. │confidence ┃ ┃ │ │ │ │interval. ┃ ┃ │ │ │ │Makes no attempt to┃ ┃ │ │ │ │apply the normal ┃ ┃Analysis │Can apply the normal distribution, Central limit │Can apply the normal distribution, Central limit │Can apply the normal distribution, Central limit │distribution, ┃ ┃Outcomes │theorem, and binomial distribution to practical │theorem, and binomial distribution to practical │theorem, and binomial distribution to practical │Central Limit ┃ ┃10 │problems with 100% accuracy. │problems with at least 80% accuracy. │problems with less than 80% accuracy. │Theorem, or ┃ ┃ │ │ │ │binomial ┃ ┃ │ │ │ │distribution. ┃ ┃Terminology│Can explain event, simple event, mutually exclusive │Can explain event, simple event, mutually exclusive │Can explain event, simple event, mutually exclusive │Makes no attempt to┃ ┃Outcomes │events, independent events, discrete random variable, │events, independent events, discrete random variable, │events, independent events, discrete random variable, │explain any of the ┃ ┃4,5,7 │continuous random variable, sample, and population │continuous random variable, sample, and population with│continuous random variable, sample, and population │terms listed. ┃ ┃ │with 100% accuracy. │at least 80% accuracy. │with less than 80% accuracy. │ ┃ ┃Concepts │Can explain mean, median, mode, standard deviation, │Can explain mean, median, mode, standard deviation, │Can explain mean, median, mode, standard deviation, │Makes no attempt to┃ ┃Outcomes │simple probability, and measures of location with 100%│simple probability, and measures of location with at │simple probability, and measures of location with less│define any concept.┃ ┃1,6 │accuracy. │least 80% accuracy. │than 80% accuracy. │ ┃ ┃ │Compute probabilities using addition multiplication, │Compute probabilities using addition multiplication, │Compute probabilities using addition multiplication, │ ┃ ┃Application│and complement rules and conditional probabilities. │and complement rules and conditional probabilities. │and complement rules and conditional probabilities. │Makes no attempt to┃ ┃Outcomes │Compute statistical quantities for raw and grouped │Compute statistical quantities for raw and grouped │Compute statistical quantities for raw and grouped │compute any of the ┃ ┃1,2,3,8,9 │data. Compute probabilities using combinatorics, │data. Compute probabilities using combinatorics, │data. Compute probabilities using combinatorics, │probabilities or ┃ ┃ │discrete random variables, and continuous random │discrete random variables, and continuous random │discrete random variables, and continuous random │statistics listed. ┃ ┃ │variables. All must be done with 100% accuracy. │variables. All must be done with at least 80% accuracy.│variables. All are done with less than 80% accuracy. │ ┃ ┃Whole │Can apply the concepts of probability and statistics │Can apply the concepts of probability and statistics to│Can apply the concepts of probability and statistics │Makes no attempt to┃ ┃Artifact │to real-world problems in other disciplines with 100 %│real-world problems in other disciplines with at least │to real-world problems in other disciplines with less │apply the concepts ┃ ┃Outcomes │accuracy. │80 % accuracy. │than 80% accuracy. │to real-world ┃ ┃7,8 │ │ │ │problems. ┃ ┃Components │ │ │ │Makes no attempt to┃ ┃Outcomes │Can use a calculator or other computing device to │Can use a calculator or other computing device to │Can use a calculator or other computing device to │use any computing ┃ ┃1 │compute statistics with 100% accuracy. │compute statistics with at least 80% accuracy. │compute statistics with less 80% accuracy. │device to compute ┃ ┃ │ │ │ │statistics. ┃ This material is protected by copyright and can not be reused without author permission. Last Updated:1/8/2009 9:19:04 PM
{"url":"https://app.park.edu/syllabus/syllabus.aspx?ID=624122","timestamp":"2014-04-16T22:00:43Z","content_type":null,"content_length":"110196","record_id":"<urn:uuid:6de44d76-bc8b-48b9-ac04-da1c40ed75a9>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
Realizing groups as commutator subgroups up vote 23 down vote favorite What are the groups $X$ for which there exists a group $G$ such that $G' \cong X$? My considerations: $\bullet$ If $X$ is perfect we are happy with $G=X$. $\bullet$ If $X$ is abelian then $G := X \wr C_2$ verifies $G'=\{(x,x^{-1}): x \in X\} \cong X$. $\bullet$ If $X$ satisfies the following properties: (1) $X \neq X'$, (2) The conjugation action $X \to \text{Aut}(X)$ is an isomorphism, then there is no $G$ such that $G' = X$ (consider the composition $G \to \text{Aut}(X) \cong X \to X/X'$, it is surjective so its kernel contains $X$, contradiction). For instance, the symmetric group $S_n$ verifies (1) and (2) if $n \neq 2,6$. I have been looking for this problem on the web but I didn't find anything. Do you have any reference and/or suggestion on how to solve this problem? gr.group-theory finite-groups add comment 2 Answers active oldest votes A complete answer seems not to be known. Let me give you the following two nearly-contemporaneous references from the mid-70s: Robert Guralnick, On groups with decomposable commutator subgroups Michael Miller, Existence of Finite Groups with Classical Commutator Subgroup Both Guralnick and Miller call groups which are commutator subgroups $C$-groups (though I don't know who, if either, originated the term) and give partial answers to your general question. For example, Theorem 4 from Miller gives the following: Let $G$ be a subgroup of $\operatorname{GL}_n(K)$ containing $\operatorname{SL}_n(K)$ for $K$ a finite field of characteristic not equal to 2. Then $G$ is the commutator subgroup up vote 23 of some group unless it is of odd index and $n$ is even. down vote accepted The groupprops-wiki calls such groups commutator-realizable, and give a basic result on such groups, but mention that this terminology is not standard (though is probably safer than the overloaded term $C$-group.) Edit: Some googling around led to the following slick argument of Schoof (from his Semistable abelian varieties with good reduction outside 15), which is closely related to your observation in bullet (3), and also serves to eliminate the symmetric groups. I'll quote verbatim except for change of variable names: Let $G$ be a group and let $G'$ be its commutator subgroup. Conjugation gives rise to a homomorphism $G \to \operatorname{Aut}(G')$. On the one hand it maps $G'$ to the commutator subgroup of $\operatorname{Aut}(G')$. On the other hand the image of $G'$ is the group $\operatorname{Inn}(G')$ of inner automorphisms of $G'$. Therefore, if a group $X$ is the commutator subgroup of some group, we must have $\operatorname{Inn}(X)\subset \operatorname{Aut}(X)'$. add comment Let me quote some well-known results and perhaps related problems which may be illuminating! Let $G$ be a non-abelian finite $p$-group having cyclic center. Then, there is no finite $p$-group $H$ such that $G$ is isomorphic to a normal subgroup of the derived subgroup $[H,H]$ of $H$. In particular, $G$ cannot be isomorphic to the derived subgoup of some $p$-group $H$. The latter is a famous result due to Burnside. The former is a slight generalization of problem. See H. Heineken, On normal embedding of subgroups. Geom. Dedicata 83, No.1-3, 211-216 (2000). Related problem: Let $V$ be a non-empty set of words in the free group on the countable set $\{x_1,x_2,\dots\}$. We call a group $G$ is {\bf integrable with respect to $V$}, whenever there is a group $H$ such that $G\cong V(H)$, where $V(H)$ is the verbal subgroup of $H$ generated by $V$, i.e., $$V(H)=\langle v(h_1,\dots,h_n) | v\in V, h_i \in H \rangle,$$ (the subgroup of $H$ generated by the values of words of $V$ on the elements of $H$) For example, if one takes $V=\{[x_1,x_2]=x_1^{-1}x_2^{-1}x_1 x_2\}$, then $V(H)$ is the derived subgroup of $H$ for any group $H$ and the problem is the same as it proposed. One may write (maybe for some propaganda) $$\int G \; dV=H \Longleftrightarrow G=V(H).$$ In the case, $V=\{ [x_1,x_2]\}$, $$\int G=H \ Longleftrightarrow G=H'$$ and so $$\int G=H \Longleftrightarrow \int G=H \times A$$ for any abelian group $A$. (This may remind the constant term in the integral of a function!) I have used up vote the latter notation which has no benefit but only perhaps inspiring in [A. Abdollahi, Integral of a group, 29th Iranian International Conference on Mathematics, Amirkabir University of 7 down Technology (Tehran Polytechnic), Iran, March 28-31, 1998.] I would like to say that the problem given a group $G$, find groups $H$ such that $G=[H,H]$" have been studied in a more general contex which may be found with key wordsnormal embedding of subgroups" and the above-mentioned paper of Heineken is a good start. Also it may be worth-mentioning that by a result of Allenby R.B.J.T. Allenby, Normal subgroups contained in Frattini subgroups are Frattini subgroups, Proc. Amer. Math. Soc, Vol. 78, No. 3, 1980, 318- if $N$ is a normal subgroup of a finite group $G$ which is contained in the Frattini subgroup of $G$, then $N=\Phi(U)$ for some finite group $U$. Of course, for the class of finite $p$-groups the Frattini subgroup is the verbal subgroup generated by the words $x_1^p, [x_1,x_2]$. add comment Not the answer you're looking for? Browse other questions tagged gr.group-theory finite-groups or ask your own question.
{"url":"http://mathoverflow.net/questions/85540/realizing-groups-as-commutator-subgroups?sort=votes","timestamp":"2014-04-16T07:37:09Z","content_type":null,"content_length":"57480","record_id":"<urn:uuid:f05138f3-1544-4fcf-a9b6-86e3f5c3ce67>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Tarryall, CO Algebra Tutor Find a Tarryall, CO Algebra Tutor ...For math it helps for it to be fun. I have picked up many math games over the years from attending numerous learning workshops and then using them with my students. It's wonderful to hear students say "now I get it!" I have given science/environmental education programs for children through the state parks, area nature centers & after-school science clubs for many (!!) years. 42 Subjects: including algebra 1, reading, Spanish, writing ...My previous experience with tutoring started when I was asked to tutor a few special needs children. Then during the later few years of my college career, I was employed by the college as a professor's aide. During this time I graded entry and mid-level engineering math and engineering courses. 15 Subjects: including algebra 2, algebra 1, calculus, trumpet ...My graduate work is in architecture and design. I especially love working with students who have some fear of the subject or who have previously had an uncomfortable experience with it.I have taught Algebra 1 for many years to middle and high school students. We have worked on applications and how this relates to things in real life. 7 Subjects: including algebra 1, algebra 2, geometry, GRE ...I sure am ready to teach you. For me Spanish was a gateway to starting to travel and experience other cultures. After taking a few years of French, a couple years of Latin, and four years of Spanish in High School: off to college I went to get my degree in Spanish and Communication Studies from Bloomsburg University. 14 Subjects: including algebra 1, Spanish, Microsoft Excel, Microsoft Word ...I am available to tutor math, physics, and test prep. My goal is to work with students to develop understanding and familiarity with the concepts and the mechanics of math and physics. I look forward to working with you!I took Linear Algebra and passed it as an undergraduate student and again as a graduate student at the Air Force Institute of Technology. 16 Subjects: including algebra 1, algebra 2, calculus, physics Related Tarryall, CO Tutors Tarryall, CO Accounting Tutors Tarryall, CO ACT Tutors Tarryall, CO Algebra Tutors Tarryall, CO Algebra 2 Tutors Tarryall, CO Calculus Tutors Tarryall, CO Geometry Tutors Tarryall, CO Math Tutors Tarryall, CO Prealgebra Tutors Tarryall, CO Precalculus Tutors Tarryall, CO SAT Tutors Tarryall, CO SAT Math Tutors Tarryall, CO Science Tutors Tarryall, CO Statistics Tutors Tarryall, CO Trigonometry Tutors Nearby Cities With algebra Tutor Aspen Park, CO algebra Tutors Buckskin Joe, CO algebra Tutors Cadet Sta, CO algebra Tutors Cleora, CO algebra Tutors Crystal Hills, CO algebra Tutors Falcon, CO algebra Tutors Iron City, CO algebra Tutors Keystone, CO algebra Tutors Maysville, CO algebra Tutors Montclair, CO algebra Tutors Parkdale, CO algebra Tutors Rockrimmon, CO algebra Tutors Swissvale, CO algebra Tutors Wellsville, CO algebra Tutors Western Area, CO algebra Tutors
{"url":"http://www.purplemath.com/Tarryall_CO_Algebra_tutors.php","timestamp":"2014-04-21T10:44:46Z","content_type":null,"content_length":"24334","record_id":"<urn:uuid:3d22a8f8-6671-4346-a8f8-9246ff33fdbd>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
I have attempted this problem, can you check if I have the right answer? May 30th 2012, 09:53 AM #1 May 2012 I have attempted this problem, can you check if I have the right answer? I have determined that the following integral: <integral over C> (2x+(1/y))dx + (2y-(x/y^2)dy is path independent. I need to find the path integral along a curve, starting at A(1,1) and ending at B(2,2). I get the answer of 7, can anyone verify or correct me in this? Re: I have attempted this problem, can you check if I have the right answer? Is this the integral? $\oint\limits_C (2x+ \frac{1}{y})dx + (2y - \frac{x}{y^2})dy$ If it is, i get $\int_1^2 \int_1^2 \bigg(\frac{(d(2y-\frac{x}{y^2})}{dx} -\frac{d(2x+\frac{1}{y})}{dy}\bigg) dxdy$ $= \int_1^2 \int_1^2 (\frac{-1}{y^2} + \frac{1}{y^2})dxdy$ $= \int_1^2 c dy$ $= \left.cy\right|_1^2 = constant$ Did this in a rush so may have made a mistake. Could you show your steps how you got 7? Re: I have attempted this problem, can you check if I have the right answer? No, that's not what I get. I can't say where you might have made a mistake because you don't say how you got that answer. Re: I have attempted this problem, can you check if I have the right answer? Well the integral is path independent, so it doesn't matter how we get from (1,1) to (2,2). So I took it as 2 separate integrals: (1,1) to (2,1) with the integral between 1 and 2 of (2x+1)dx (here the 1 in 2x+1 comes from substituting our y-value of 1 into it) and then: (2,1) to (2,2) with the integral between 1 and 2 of (2y-(2/y^2))dy (here the 2 in 2/y^2 comes from subsituting our x-value of 2 into it) Then as it is path independent, we can simply add the integrals together, generating an answer of 7? Re: I have attempted this problem, can you check if I have the right answer? Yes, that is good. and then: (2,1) to (2,2) with the integral between 1 and 2 of (2y-(2/y^2))dy (here the 2 in 2/y^2 comes from subsituting our x-value of 2 into it) Then as it is path independent, we can simply add the integrals together, generating an answer of 7? That is also correct. Now, what value do you get for each of those integrals? When I integrate this way I get the same thing as I did before doing it a different way- and it is NOT 7. ( A little simpler is to integrate on the line directly from (1, 1) to (2, 2), y= x. Yet another way is to find an "anti-derivative", a function F(x,y) such that $dF= (2x+ y)dx+ (2y- (x/y^2)) dy$ Last edited by HallsofIvy; May 30th 2012 at 11:53 AM. Re: I have attempted this problem, can you check if I have the right answer? Well, unless I'm doing a classic rookie mistake we have [x^2+x] between 1 and 2 + [y^2+2/y] between 1 and 2, giving (4+2)-(1+1)+(4+1)-(1+3) = 6!!! I knew I'd gone wrong as I was typing, is 6 Re: I have attempted this problem, can you check if I have the right answer? Correction: (4+2)-(1+1)+(4+1)-(1+2)=6 Re: I have attempted this problem, can you check if I have the right answer? Yes, now that is correct. May 30th 2012, 10:56 AM #2 Sep 2010 May 30th 2012, 10:59 AM #3 MHF Contributor Apr 2005 May 30th 2012, 11:15 AM #4 May 2012 May 30th 2012, 11:40 AM #5 MHF Contributor Apr 2005 May 30th 2012, 11:48 AM #6 May 2012 May 30th 2012, 11:53 AM #7 May 2012 May 30th 2012, 11:54 AM #8 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/calculus/199453-i-have-attempted-problem-can-you-check-if-i-have-right-answer.html","timestamp":"2014-04-20T16:27:02Z","content_type":null,"content_length":"54488","record_id":"<urn:uuid:386fb07b-9a23-4c71-8da2-3355c8fcde98>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
bius Deltahedra Möbius Deltahedra There are five acoptic deltahedra for which every edge line is on a symmetry plane. See the Möbius Deltahedra paper by Peter Messer. Möbius Triangles are those that occur on the surface of a sphere has been divided its symmetry planes. See George Hart's Symmetry Planes Page. For Tetrahedral symmetry there are three different cases depending on how many planes are present. There can be 0, 3 or 6 planes of symmetry. If 3 tetrahedral planes of symmetry alone would be considered, then the octahedron would be the one and only Möbius Deltahedron for that case. For the purposes of this study only the case of 6 symmetry planes is considered. A sphere divided by the 6 symmetry planes results in 24 spherical triangles of 54-54-90 degrees. Note that the angles of Spherical Triangles can add up to more the 180 degrees. While the following models are not spherical, they help depict the planes of symmetry. The Tetrahedral Symmetry Planes model shows the 6 planes. The tetrahedron, which has been divided by those planes has on its surface, has triangles which are 30-60-90 degrees. Also shown is the same tetrahedron but the vertices have been projected onto a sphere, and the triangles are still flat. Interestingly, whenever we move the central vertices off the plane of the tetrahedron by the same vector, then octahedral symmetry results. This can be seen in the symmetry plane and sphere projection models. Each figure in the following tables lists the symmetry (S) Dn - Dihedral, T - Tetrahedral, O - Octahedral, I - Icosahedral. The total Face, Edge and Vertex counts are given. │ Tetrahedral Symmetry and Mobius Triangles ││ │││┌────────────────────────────────────┐ ││ ││││ Mobius Triangle Tetrahedron SP │ ││ │││├────────────────────────┬───────────┤ ││ ││││ S=O │F=24 E=36 V│ ││ ││││ │ =14 │ ││ │││├────────────────────────┴───────────┤ ││ ││││ │ ││ │││├────────────────────────────────────┤ ││ ││││ off wrl switch │ ││ │││└────────────────────────────────────┘ ││ For Octahedral symmetry, a sphere divided by the 9 symmetry planes results in 48 spherical triangles of 45-60-90 degrees. The Octahedral Symmetry Planes model shows the 9 planes. The cube, which has been divided by those planes has on its surface, has triangles which are 45-45-90 degrees. Also shown is the same cube but the vertices have been projected onto a sphere. │Octahedral Symmetry and Mobius Triangles ││ For Icosahedral symmetry, a sphere divided by the 15 symmetry planes results in 120 spherical triangles of 36-60-90 degrees. The Octahedral Symmetry Planes model shows the 15 planes. The dodecahedron which has been divided by those planes has on its surface, triangles which are 36-54-90 degrees. Also shown is the same dodecahedron but the vertices have been projected onto a │ Icosahedral Symmetry and Mobius Triangles ││ │││ Mobius Triangle Dodecahedron │││ Mobius Triangle Dodecahedron SP │││ │││ S=I │F=120 E=180 V=62 │││ S=I │F=120 E=180 V=62 │││ │││ │││ │││ │││ off wrl switch │││ off wrl switch │││ It turns out that Möbius Deltahedra are simply isomers of the Möbius Triangle versions of the tetrahedron, cube and dodecahedron above. Each one has two isomers, denoted by A and B. This is in keeping with Messer's notation. Notice there is no 24-Deltahedron B displayed, because, as noted by Messer, this isomer has faces which would be split by a symmetry plane. Also known as the Hexaugmented Cube, it is a biform deltahedra and can seen on the The Cundy Deltahedra page. Also, as noted above, once the vertices are raised off the tetrahedron's planes, the symmetry of the 24-Deltahedron isomers becomes octahedral.
{"url":"http://www.interocitors.com/polyhedra/Deltahedra/Mobius/index.html","timestamp":"2014-04-19T20:16:36Z","content_type":null,"content_length":"15666","record_id":"<urn:uuid:38c32c1f-6674-42a2-897d-5a5d8ac2ea78>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
The Differential Equation Model For A Certain... | Chegg.com The differential equation model for a certain position control system for a metal cutting tool is where the actual tool position is x; the desired position is xd (t); and Kp, KI, and KD are constants called the control gains. Use the Laplace transform method to find the unit-step response (that is, xd(t) is a unit-step function). Use zero initial conditions. Compare the response for three cases: a. Kp = 30, KI = KD = 0 b. Kp = 27, KI = 17.18, KD = 0 c. Kp = 36, KI = 38.1, KD = 8.52
{"url":"http://www.chegg.com/homework-help/differential-equation-model-certain-position-control-system-chapter-8-problem-43-solution-9780073385839-exc","timestamp":"2014-04-19T15:16:34Z","content_type":null,"content_length":"30073","record_id":"<urn:uuid:b7d4e464-6dae-4f4b-8cd7-572f200b8adf>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Creative ways to teach young children about symmetry? General Question Creative ways to teach young children about symmetry? Trying to conjure up lesson plans, and geometry isn’t my strong suit. In need of creative ideas, please and thanks. Observing members: 0 Composing members: 0 7 Answers Mirrors come to mind, as do scales. You could always do the snowflake cut-out technique where you fold a piece of paper several times, cut out shapes and then unfold. This could show radial symmetry (which may be more advanced than what you’re going for, but it helps illustrate the underlying concept of symmetry). Have the child stand in front of a mirror. Ask him to use both hands, at the same time, to touch his ears, eyebrows, and the corners of his mouth. Then, have him wiggle and wave both arms . Finally, tell him to trace an invisible line from between his eyes, down his nose, and to his chin, and ask him what’s on either side. By then, he’ll likely have a good grasp of symmetry. Cutting out paper snowflakes, you can fold the paper in half to get two sides the same, a common time-saving maneuver. Then the crease is the line of symmetry. Bring in symmetrical leaves. Have the kids fold a piece of 8½×11 paper in half, from the top to bottom. Then have them lay the leaf on the line from the crease, and trace the edge of one side. Then fold the sides back, so the child can see the tracing, and cut it out, cutting both sides out. It will unfold to look like a leaf. Then find something similar that is symmetrical on both axes. Do the same thing with one/fourth of the object on a piece of paper that has been folded into quarters. And lastly, try it with something from nature that has no symmetry. Contrast and compare. How about the good old finger paint butterfly? And, you could take a picture of their face, cut half of it out down the middle, print it out, and have them draw in the missing half based on the one that still there . @Mariah Already took my answer. Snowflakes, hearts, stars, really any shape. Actually, snowflakes are often cut with the paper folded twice now that I think of it, so maybe a heart is the easiest with young children. Most people are not aware of the mathematical definition of symmetry, but it is actually rather simple and easy to teach. Anything (concrete or abstract) is symmetric with respect to an action if the thing is the same after performing the action. In addition to the other ideas suggested, you can use letters of the alphabet. M and H are symmetric with respect to reflection in a vertical mirror. C is symmetric with respect to reflection in a horizontal mirror. N and S are symmetric with respect to rotation by 180 degrees. Answer this question This question is in the General Section. Responses must be helpful and on-topic.
{"url":"http://www.fluther.com/159083/creative-ways-to-teach-young-children-about-symmetry/","timestamp":"2014-04-18T20:47:14Z","content_type":null,"content_length":"41190","record_id":"<urn:uuid:0fbf27a8-bd77-49cd-8f73-b3a1457f6d21>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph of a function 34,117pages on this wiki Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social | Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Statistics: Scientific method · Research methods · Experimental design · Undergraduate statistics courses · Statistical tests · Game theory · Decision theory In mathematics, the graph of a function f is the collection of all ordered pairs (x,f(x)). In particular, graph means the graphical representation of this collection, in the form of a curve or surface, together with axes, etc. Graphing on a Cartesian plane is sometimes referred to as curve sketching. The graph of a function on real numbers is identical to the graphic representation of the function. For general functions, the graphic representation cannot be applied and the formal definition of the graph of a function suits the need of mathematical statements, e.g., the closed graph theorem in functional analysis. The concept of the graph of a function is generalised to the graph of a relation. Note that although a function is always identified with its graph, they are not the same because it will happen that two functions with different codomain could have the same graph. For example, the cubic polynomial mentioned above is a surjection if its codomain is the real numbers but it is not if its codomain is the complex field. The graph of the function $f(x)=\left\{\begin{matrix} a, & \mbox{if }x=1 \\ d, & \mbox{if }x=2 \\ c, & \mbox{if }x=3. \end{matrix}\right.$ is {(1,a), (2,d), (3,c)}. The graph of the cubic polynomial on the real line $f(x)={{x^3}-9x} \!\$ is {(x,x^3-9x) : x is a real number}. If the set is plotted on a Cartesian plane, the result is Tools for plotting function graphs See also External links • Weisstein, Eric W. "Function Graph." From MathWorld--A Wolfram Web Resource.
{"url":"http://psychology.wikia.com/wiki/Graph_of_a_function","timestamp":"2014-04-20T04:01:02Z","content_type":null,"content_length":"64840","record_id":"<urn:uuid:ad708c28-4bc1-4de9-842a-a3f85d3bc4f3>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
ARCHIMEDES' PRINCIPLE An object is buoyed up with a force equal to the weight of the liquid it displaces. This is known as Archimedes' Principle. As an example, suppose you pushed an empty (and sealed) one-liter milk carton under the water in your bath tub. Since the carton must displace (push aside) one liter of water and one liter of water weighs one kilogram, you would feel the carton pushed upward with a force of one kilogram (minus the weight of the carton). If the weight of the object is less than the weight of the fluid displaced, it floats. If the weight of the object is more than the weight of the fluid displaced, it sinks. We'll be adding interesting info and links here. If you have a good one, we need your feedback
{"url":"http://www.exploratorium.edu/xref/phenomena/archimedes'_principle.html","timestamp":"2014-04-18T03:00:14Z","content_type":null,"content_length":"2736","record_id":"<urn:uuid:56e24058-7e17-48b3-80a3-5a42dfe0f3e1>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Interest Simple Interest on P Dollars at interest rate r (expressed as a decimal) over time period t is Example:- The Simple Interest on $400 for 3 months at 5% per annum Simpe interest would be Hence Simple Interest on $400 for 3 months at 5% pa Simple Interest is 5 Dollars. Compound Interest The amount A, or future value, of principal P invested at interest rate r (expressed as decimal) compunded m times per year for t years is where , the interest rate per compounding period, and n=mt, the number of compunding periods. For example, Compound interest for $400 at the rate of 10% per annum compounded half yearly for 1 year would be calculated as follows:- n=2 (1year = 2 half years). i=10% pa = 5% per half year Therefore, Compound Interest = $441-$400 = $41 The future value of an Ordinary Annuity is where R is the periodic payment, i the interest rate per period, and n the number of periods. In an ordinary annuity, the payment is made at the end of each period. The present value of an ordinary annuity is Profit and Loss SP=Selling Price, CP=Cost Price Character is who you are when no one is looking.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=34453","timestamp":"2014-04-18T10:40:34Z","content_type":null,"content_length":"20191","record_id":"<urn:uuid:ae912a94-25be-4f6f-bb85-f37606617df9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Here is something to think about that was only casually mentioned in passing in the recent video that was posted. The sunlight you may or may not have experienced today finally managed to reach you after a ~100,000 year long journey since it was originally created at the Sun’s core! Since the speed of light is finite, about 300,000,000 meters/second (or about 671,000,000 miles/hour), it takes time for it to travel from one point in space to another. Given that the distance from Earth to the Sun is about 150,000,000,000 meters (about 93,000,000 miles) it takes about 8 minutes for light to reach us! But this is just the time it takes light to reach us from the surface of the sun. The light coming from the surface of the Sun is itself created as a by-product of nuclear fusion occurring deep in the Sun’s core. Once light is created at at the Sun’s core it begins its journey to the surface of the Sun some 700,000,000 meters (430,000 miles) away from the core. One might assume that this light takes the shortest path and heads straight to the surface, which would only take a couple seconds of travel time. However, this is not the case because there is all kinds of star stuff that gets in the way. An actual photon may only travel a mere fraction of a centimeter (anywhere between .01 and .3 centimeters depending on how close it is to the surface) before it makes a collision with other matter thereby diverting its path to some other random direction. Photons continue moving in these seemingly random trajectories, bumping into other particles along the way, and don’t actually reach the surface until about 100,000 years later (give or take an order of magnitude)! This kind of behavior characterizing the photons motion is modeled by something called a random walk, and is illustrated in a few different instances in the animations above. Random walks have widespread applications through out the sciences and mathematics. The idea of random walks are even used in some computer algorithms to allow for more efficient solutions to some One particular application of personal interest, and a rather abstract generalization of the idea, is the quantum random walk, in which the superposition principle of quantum mechanics is used to put the trajectory into a combination of multiple possible trajectories to assist quantum computers in solving problems. The workings of Grover’s search algorithm can be thought of in this way. This isn’t the only instance that relates quantum mechanics to the workings of the Sun (see here). Anyway, next time you are out in the relentless light of the Sun you may wonder what was going on some 100,000 years ago when that light first originated in the Sun, or maybe even where you’ll be 100,000 years from now when the light being created in the Sun at this moment finally reaches Earth. (GIFs created from this Java app)
{"url":"http://intothecontinuum.tumblr.com/tagged/quantum-computation","timestamp":"2014-04-18T10:34:10Z","content_type":null,"content_length":"106819","record_id":"<urn:uuid:d5c31e5f-7dbe-4363-b2ff-e18d770bec56>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] interleaved indexing Amir amirnntp@gmail.... Fri Jul 18 03:32:17 CDT 2008 A very beginner question about indexing: let x be an array where n = len(x). I would like to create a view y of x such that: y[i] = x[i:i+m,...] for each i and a fixed m << n so I can do things like numpy.cov(y). With n large, allocating y is a problem for me. Currently, I either do for loops in cython or translate operations into correlate() but am hoping there is an easier way, maybe using fancy indexing or broadcasting. Memory usage is secondary to speed, though. More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-July/035828.html","timestamp":"2014-04-18T13:43:42Z","content_type":null,"content_length":"3005","record_id":"<urn:uuid:fceb9e68-39c1-4b4e-80c1-04227ec8e51c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Gauss: matrix algebra and manipulation On this page: algebra set functions special operations missing values other functions Algebra involving matrices translates almost directly from the page into GAUSS. At bottom, most mathematical statements can be directly transcribed, with some small changes. 1.1 The basic operators GAUSS has eight mathematical operators and six relational ones. The mathematical ones are + - * / Addition Subtraction Multiplication Division ' % ! ^ Transposition Modulo division Factorial Exponentiation and the six relational operators are: == /= > < >= <= EQ NE GT LT GE LE equals does not equal greater than less than greater than/equals less than/equals Either the symbols or the two-letter acronyms may be used. With respect to logical results, GAUSS standard procedures use the convention "false" = 0 "true" /= 0 and there are four logical operators for these which all return true or false NOT var1 var1 AND var2 var1 OR var2 var1 XOR var2 var1 EQV var2 true if var1 false, and true if var1 true and var2 true, true if var1 true or var2 true, true if var1 true or var2 true but not true if var1 is equivalent to var2 i.e. both vice-versa else false else false both, else false true or both false GAUSS is a "strict" language: if a logical expression has several elements, all the elements of the expression will be checked even if the program has enough information to return true or false. Thus using these logical statements may be less efficient then, for example, using nested IF statements. This is also different from the way some other programs operate. Operators work in the usual way. Thus these operations on matrices a to e are, subject to conformability requirements, all valid operations: a = b+c-d; a = b'*c'; a = (b+c)*(d-e); a = ((b+c)*(d+e))/((b-c)*(d-e)); a = (b*c)'; Notice from this that matrix algebra translates almost directly into GAUSS commands. This is one of GAUSS's strong points. GAUSS will check the conformability of the above operations and reject those it finds impossible to carry out; however, see section 1.2 below. The order of operation is complex; see the section on operators in the manual for details. But essentially the order is left to right with the following rough precedence: multiplication and division addition and subtraction dot relational operators dot logical operators relational operators logical operators row and column indices See the next section for an explanation of dot operators. There are two concatenation operators: ~ horizontal concatenation | vertical concatenation These add one matrix to the right or bottom of another. Obviously, the relevant rows and columns must match. Consider the following operations on two matrices, a and b, with ra and rb rows and ca and cb columns, and the result placed in the matrix c: dimensions of a dimensions of b operation dimensions of c condition ra x ca rb x cb c = a ~ b ra x (ca + cb) ra = rb ra x ca rb x cb c = a | b (ra + rb) x ca ca = cb Parts of matrices may be used, and results may be assigned to matrices or to parts: a = b*c; a = b[r1:r2,c1]*c[r3, c2:c3]; a[r1, c1:c2] = b[r1,.]*c; subject to, in the last case, the recipient area being of the correct size. These operations are available on all variables, but obviously "a=b*c" is nonsensical when b and c are strings or character matrices. However, the relational operators may be used; and there is one useful numerical operator - addition: a = b $+ c; This appends c to b. Note that the operator needs the string signifier "$" to inform GAUSS to do a string concatenation rather than a numerical addition. If you omit the $ GAUSS will carry out a normal addition. For example, b = "hello"; c = "mum"; a = b $+ " " $+ c; PRINT $a; will lead to "hello mum" being printed. With character matrices, the rules for the conformability of matrices and the application of the operator are the same as for mathematical operators (see the next section). Note that, in contrast to the matrix concatenation operators, the overall matrix remains the same size (strings grow) but each of the elements in the matrix will be changed. Thus if a is an r by c matrix of file names, a = a $+ ".RES"; will add the extension ".RES" to all the names in the matrix (subject to the eight-character limit) but a will still be an r by c matrix. If any of the cells then have more than eight characters, the extra ones are cut off. String concatenation applied to strings and string arrays will cause these to grow. Strings and character matrices may be compared using the relational operators. The string signifier $ is not always necessary, but it makes the program more readable and may avoid unexpected results. 1.2 Conformability and the "dot" operators GAUSS generally operates in an expected way. If a scalar operand is applied to a matrix, then the operation will be applied to every element of the matrix. If two matrices are involved, the usual conformability rules apply: Operation Dimensions of b Dimensions of c Dimensions of a a = b * c; scalar 4x2 4x2 a = b * c; 3x2 4x2 illegal a = b * c'; 3x2 4x2 3x4 a = b + c; scalar 4x2 4x2 a = b - c; 3x2 4x2 illegal a = b - c; 3x2 3x2 3x2 and so on. However, GAUSS allows most of the mathematical and logical operators to be prefixed by a dot: a = b.>c; a = (b+c).*d'; a = b.==c; This tells the machine that operations are to be carried out on an "element by element" basis (or ExE, as the oracular manual so succinctly puts it). This means that the operands are essentially broken down into the smallest conformable elements and then the scalar operators are applied. How this works in practice depends on the matrices. To give an example, suppose that mat1 is a 5x4 matrix. Then the following results occur for multiplication: Operation mat2 r x c Result mat1 * mat2 scalar 5x4; mat2 times each element of mat1 mat1 .* mat2 5x4 5x4; mat1[i,j] * mat2[i,j] for all i, j (Hadamard product) mat1 .* mat2 5x1 5x4; the ith element in mat2 is multiplied by each element in the ith row of mat1 mat1 .* mat2 1x4 5x4; the jth element in mat2 is multiplied by each element in the jth column of mat1 mat1 .* mat2 anything else illegal Similarly for the other numerical operators: Operation mat2 r x c Result mat1 ./ mat2 5x4 5x4; mat1[i,j] / mat2[i,j] for all i, j mat1 .% mat2 1x4 5x4; modulus mat1[i,j] / mat2[j] for all i, j mat1 .*. mat2 5x4 25x16; mat1[i, j] * mat2 for all i,j (Kronecker product) 1.3 Relational operators and dot operators For the relational operators, the results are slightly different. These operators return a scalar 0 or 1 in normal circumstances; for example, compare two conformable matrices: mat1 /= mat2 mat1 GT mat2 The first returns "true" if every element of mat1 is not equal to every corresponding element of mat2; the second returns "true" if every element of mat1 is greater than every corresponding element of mat2. If either variable is a scalar than the result will reflect whether every element of the matrix variable is not equal to, or greater than, the scalar. These are all scalar results. Prefixing the operator by a dot means that the element-by-element result is returned. If mat1 and mat2 are both r by c matrices, then the results of mat1 ./= mat2 mat1 .GT mat2 will be a r by c matrix reflecting the element-by-element result of the comparison: each cell in the result will be set to "true" or "false". If either variable is a scalar than the result will still be a r by c matrix, except that each cell will reflect whether the corresponding element of the matrix variable is not equal to, or greater than, the scalar. 1.4 Fuzzy operators In complex calculations, there will always be some element of rounding. This can lead to erroneous results from the relational operators. To avoid this, fuzzy operators are available. These are procedures which carry out comparisons within tolerance limits, rather than the exact results used by the non-fuzzy operators. The commands are with corresponding dot operators and are used, for example FEQ, by result = FEQ (mat1, mat2); This will compare mat1 and mat2 to see whether they are equal within the tolerance limit, returning "true" or "false". Apart from this, the fuzzy operators (and their dot equivalents) operate as the exact relational operators. The tolerance limit is held in a variable called _fcmptol which can be changed at any time. The default tolerance limit is 1.0x10-15. To change the limit simply involves giving this variable a new value: _fcmptol = newValue; Column vectors can be treated like sets for some purposes. GAUSS provides three standard procedures for set operation: to unVec = UNION (vec1, vec2, flag); intVec = INTRSECT (vec1, vec2, flag); difVec = SETDIF (vec1, vec2, flag); where unVec, intVec, and difVec are the results of union, intersection, and difference operations on the two column vectors vec1 and vec2. The scalar flag is used to indicate whether the data is character or numeric: 1 for numeric data, 0 for character. The difference operator returns the elements of vec1 not in vec2, but not the elements of vec2 not in vec1. These commands will only work on column vectors (and obviously scalars). The two vectors can be of different sizes. A related command to the set operators is unVec = UNIQUE (vec, flag); which returns the column vector vec with all its duplicate elements removed and the remaining elements sorted into ascending order. GAUSS provides methods to create and manipulate a number of useful matrix forms. The commonest are covered in this section. A fuller description is to be found in the GAUSS Command to Reference. top 3.1 Some useful matrix types Firstly, three useful matrix creating operations: identMat = EYE (iSize); onesMat = ONES (onesRows, onesCols); zerosMat = ZEROS (zeroRows, zeroCols); These create, respectively: an identity matrix of size iSize; a matrix of ones of size onesRows by onesCols; and a matrix of zeroes of size zeroRows by zeroCols. Note the US spelling. 3.2 Special operations A number of common mathematical operations have been coded in GAUSS. These are simple to use to use and more efficient then building them up from scratch. They are invMat = INV (mat); invPDMat = INVPD (mat); momMat = MOMENT (mat, missFlag); determ = DET (mat); determ = DETL; matRank = RANK (mat); The first two of these invert matrices. The matrices must be square and non-singular. INVPD and INV are almost identical except that the input matrix for INVPD must be symmetric and positive definite, such as a moment matrix. INV will work on any square invertible matrix; however, if the matrix is symmetric, then INVPD will work almost twice as fast because it uses the symmetry to avoid calculation. Of course, if a non-symmetric matrix is given to INVPD, then it will produce the wrong result because it will not check for symmetry. GAUSS determines whether a matrix is non-singular or not using another tolerance variable. However, even if it decides that a matrix is invertible, the INV procedure may fail due to near-singularity. This is most likely to be a problem on large matrices with a high degree of multicollinearity. The GAUSS manual suggests a simple way to test for singularity to machine precision, although I have found it necessary to augment their solution with fuzzy comparisons to ensure a workable result (for an example, see the file SingColl.GL on the code The MOMENT function calculates the cross-product matrix from mat; that is, mat'*mat. For anything other than small matrices, MOMENT(x, flag) is much quicker than using x'x explicitly as GAUSS uses the symmetry of the result to avoid unnecessary operations. The missFlag instructs GAUSS what to do about missing values (see below) - whether to ignore them (missFlag=0) or excise them (missFlag=1 or 2). DET and DETL compute the determinants of matrices. DET will return the determinant of mat. DETL, however, uses the last determinant created by one of the standard functions; for example, INV, DET itself, decomposition functions all create determinants along the way. DETL simply reads this value. Thus DETL can avoid repeating calculations. The obvious drawback is that it is easy to lose track of the last matrix passed to the decomposition routines, and so determinants should be read as soon as possible after the relevant decomposition function has been called. See the Command Reference for details of which procedures create the DETL variable. RANK calculates the rank of mat. 3.3 Manipulating matrices There are a number of functions which perform useful little operations on matrices. Commonly-used ones are: vec = DIAG (mat); mat = DIAGRV (vec); newMat = DELIF (oldMat, flagVec); newMat = SELIF (oldMat, flagVec); newMat = RESHAPE (oldMat, newRows, newCols); nRows = ROWS (mat); nCols = COLS (mat); maxVec = MAXC (mat); minVec = MINC (mat); sumVec = SUMC (mat); DIAG and DIAGRV abstract and insert, respectively, a column vector from or into the diagonal of a matrix. DELIF and SELIF allow certain rows and columns to be deleted from the matrix oldMat. The column vector flagVec has the same number of rows as oldMat and contains a series of ones and zeros. DELIF will delete all the rows from the matrix for which there is a corresponding one in flagVec, while SELIF will select all those rows and throw away the rest. Therefore DELIF and SELIF will, between themselves, cover the whole matrix. DELIF and SELIF must have only ones and zeros in flagVec for the function to work properly. This is something to consider as the vector flagVec is often created as a result of some logical operation. For example, to delete all the rows from matrix mat1 whose first two columns are negative would involve flags = (mat1[1,.] .< 0) .AND (mat1[2,.] .< 0); mat2 = DELIF (mat1, flags); This particular example should work on most systems, as the logical operator AND only returns 1 or 0. But because true is really non-zero (not 1) some operations could lead to unexpected results. DELIF and SELIF also use a lot of memory to run. A program calling these procedures often would be improved by rewriting them (versions can be downloaded from the Web; see the ROWS and COLS return the number of rows and columns in the matrix of interest. MAXC, MINC, and SUMC produce information on the columns in a matrix. MAXC creates a vector with the number of elements equal to the number of columns in the matrix. The elements in the vector are the maximum numbers in the corresponding columns of the matrix. MINC does the same for minimum values, while SUMC sums all the elements in the column. However, note that all these functions return column vectors. So, to concatenate onto the bottom of a matrix the sum of elements in each column would require an additional transposition: sums = SUMC(mat1); mat1 = mat1 | sums'; On the other hand, because these functions work on columns, then calling the functions again on the column vectors produced by the first call allows for matrix-wide numbers to be will return the largest value in mat1, the smallest value, and the total sum of the elements. GAUSS has a number of "non-numbers" which can be used to signify missing values, faulty operations, maths overflow, and so on. These NANs (in GAUSS's terms) are not values or numbers to in the usual sense; although all the usual operations could be carried out with them, the results make no sense. These are just identifiers which GAUSS recognises and acts upon. top Generally GAUSS will not accept these values in numerical calculations, and will stop the program. However, the string operators can be used on these values to test for equalities. To see if the variable var is one of these odd values or not, the code var $== TestValue orvar $/= TestValue would work. The other relational operators would work as well, but the result is meaningless. The TestValues are scattered around the GAUSS manual in excitingly unpredictable places. With empirical datasets, the largest problem is likely to be with missing values. These missing values will invalidate any calculation involving them. If one number in a sequence is a missing value, then the sum of the whole sequence will be a missing value; similarly for the other operators. Thus checking for missing values is an important part of most programs. Missing values can have their uses. They can indicate that a program must stop rather than go any further; they can also be used as flags to identify cells. To this end we have three newMat = MISS (oldMat, badValue); newMat = MISSRV (oldMat, newValue); newMat = MISSEX (oldMat, mask); The first of these converts all the cells in oldMat with badValue into the missing value code. MISSRV does the opposite, replacing missing values in oldMat with newValue. The second can be used to remove missing values from a matrix; however, in conjunction with the first, it can be used to convert one value into another. For example, to convert all the ones in mat1 into twos could be done by: tempMat = MISS (mat1, 1); mat1 = MISSRV (tempMat, 2); This of course assumes that mat1 had no prior missing values to be erroneously convered into twos. MISSEX is similar to MISS, except that instead of checking to see which elements of the matrix mat1 match badValue, GAUSS takes instructions from mask, a matrix of ones and zeros of the same size as mat1. Any ones in mask will lead to the corresponding values in mat1 being changed into missing values. MISS and MISSEX are thus very similar in that MISS (mat1, 2); is virtually equivalent to MISSEX (mat1, mat1.==2); To test for missing values, use missing = ISMISS (mat); missing = SCALMISS (mat); The first of these tests to see whether mat contains any missing values, returning one if it finds any and zero otherwise; the second returns one only if mat is a scalar and a missing 4.1 Non-fatal use of missing values - DOS versions of GAUSS Generally, whenever GAUSS it comes across missing values, the program fails. This is so that missing values will not cascade through the program and cause erroneous results. However, in that case, none of the above code will work. The way to get round this is to use These two commands enable and disable checking for missing values. If GAUSS is ENABLEd, then any missing values will cause the program to crash. When GAUSS is DISABLEd, the checking is switched off and all the above operations with GAUSS can be carried out - along with the inclusion of missing values in calculations and the havoc that could wreak. Whether to switch off missing value checking depends on the situation. If a missing value is not expected but would have a devastating effect on the program, then clearly GAUSS should be ENABLEd. Alternatively, if the program encounters lots of missing data which play no significant part in the results, then GAUSS should probably be DISABLEd. Intermediate cases require more thought. However, ENABLE and DISABLE can be used at any point, and so a program could DISABLE GAUSS while it checks for missing values and then ENABLE GAUSS again when it has dealt with them. There are no firm rules. GAUSS has a large repertoire of functions to perform operations on matrices. For most mathematical operations on or manipulations of a matrix (as opposed to altering the data) there to will be a GAUSS function. Generally, these functions will be much faster than the equivalent user-written code. top To find a function, the GAUSS manuals have commands and operations organised into groups, as does the GAUSS Help system. In addition, each GAUSS function in the Command Reference will indicate what related functions are available. [ previous page ] [ next page ]
{"url":"http://www.trigconsulting.co.uk/gauss/man_algebra.html","timestamp":"2014-04-18T18:27:19Z","content_type":null,"content_length":"44942","record_id":"<urn:uuid:243a6f2f-2c76-4987-b90c-1080b9ab0111>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
perfectly elastic collision so my mass 1 is the ball not moving right?? is this the equation i use to get the answer or do i have to plug it into another one Sorry my mistake. The m1 I wrote on the left should be m2 - the stationary ball. We are trying to calculate the velocity of m2 AFTER impact. The m's should cancel there. Sorry for my typo which I then proceeded to run with. With that final velocity for the stationary ball after impact and the usual equations for conservation of energy and momentum, you should now have 2 equations and 2 unknowns, one of which is the Vo that they ask for.
{"url":"http://www.physicsforums.com/showthread.php?t=269069","timestamp":"2014-04-21T14:44:41Z","content_type":null,"content_length":"31931","record_id":"<urn:uuid:c96262c2-a0b7-48fb-a25b-25932b61241e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the middle group in a SES. I'm in the situation that I have a short exact sequence of groups 1->A->B->C->1. I know that A and C are Z and Z[2] respectively and I know that B is torsion-free. Is this enough information to find B? From what I can find on wikipedia, it seems that, in general, you need the SES to split in order to have a nice presentation of B in terms of A and C, however this would give B being equal to Z+Z [2] which has torsion, so I guess my sequence doesn't split. I have some heuristic evidence which suggests that B is isomorphic to Z, which obviously fits in to the SES. For those interested, I'm trying to show that the braid group on two strings (B) is isomorphic to the integers, given that I've calculated the pure 2-braid group (A). If this isn't enough information, I guess I'll have to appeal to the homomorphisms in the sequence and chase some elements. Last edited by Talith on Sun Feb 26, 2012 2:50 am UTC, edited 1 time in total. Re: Finding the middle group in a SES. Lets make f:A->B be the first homomorphism. If b is in B\f(A), and f(1) = a then B = {a^k * b, a^k | k in Z}. If you specify that b * a = a^k * b and b^2 = a^l for some k and l, then that defines the group operation. Torsion free implies (a^n * b)^2 = a^(n(k+1) + l), so k and l must be such that n(k+1) + l is never 0 for any integer n, so k+1 does not divide l. You also need inverses. So, for all n, there exists an m such that e = (a^n * b)(a^m * b) = a^(n+mk + l) ie for all n there is an m such that n+mk + l = 0. In particular, if b^(-1) = a^n * b, then n + l = 0 and nk + l = 0, so k = 1, and that makes B commutative. From before, l= 2i + 1, and b' = a^{-i}*b is such that (b')^2 = a, and <b'> = B. B is the integers. Re: Finding the middle group in a SES. Need to apologise for a mistake. I know that A and C(not B) are Z and Z[2] respectively and I know that B is torsion-free (that part isn't a mistake). I've corrected the OP now. It looks like you overlooked that though, because your proof seems to work very well jr. I'll read it again in the morning just to make sure I understand the argument and that there are no holes. I might be able to make the proof a bit cleaner with the extra information that I have about the homomorphisms, but it's nice to know that it's not necessary. Thanks a lot for the help. Re: Finding the middle group in a SES. Talith wrote:I might be able to make the proof a bit cleaner with the extra information that I have about the homomorphisms I should hope so, its pretty hacky stuff. I was really just banging stuff together until something shook out. But anyway, np.
{"url":"http://forums.xkcd.com/viewtopic.php?f=17&t=81074","timestamp":"2014-04-20T16:16:35Z","content_type":null,"content_length":"24520","record_id":"<urn:uuid:fc6c6ada-9be9-4d85-9f11-a940d3401056>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Tyrone, GA Trigonometry Tutor Find a Tyrone, GA Trigonometry Tutor ...Combined with my specific efforts to combat these thoughts, the improved grades my students quickly see is enough to change their self image and make them think more highly of themselves. This positive mentality (and knowing how to enact it when needed) will help them throughout their education,... 11 Subjects: including trigonometry, calculus, physics, algebra 1 ...Completed coursework through Multivariable Calculus. I love helping students understand Precalculus! Tutored trigonometry topics during high school and college. 28 Subjects: including trigonometry, calculus, physics, linear algebra ...MonicaI was an A/B student throughout my elementary career. From 2000-2003, I was a substitute teacher for grades K-7 and a Teacher's Assistant for Kindergarten. I taught lessons in reading (phonics/syllables), writing (grammar), and math (all mathematics, sorting and reasoning), as well as, spelling, vocabulary, science, language arts, and social studies. 11 Subjects: including trigonometry, biology, algebra 1, algebra 2 ...My favorite subject to tutor is Calculus 1 because it is fun and I continue to learn each time I explain a concept concerning it. I have had great students who I have tutored compliment me in helping them understand the material. I have also helped them sustain their confidence and improve drastically in their academic performance. 13 Subjects: including trigonometry, calculus, geometry, algebra 1 ...With my concentration in mathematics, I am certain that my hand in helping you with pre-algebra will be crucial to your understanding. I earned As in pre-algebra, algebra I and II, basic math, calculus, etc. Math has different aspects, topics, and areas of study. 42 Subjects: including trigonometry, English, reading, calculus
{"url":"http://www.purplemath.com/Tyrone_GA_Trigonometry_tutors.php","timestamp":"2014-04-18T14:02:18Z","content_type":null,"content_length":"24221","record_id":"<urn:uuid:069a71f4-1917-4cff-9971-6e0caae7242a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
If f(x) is a function and there's some number L with we draw a dashed horizontal line on the graph at height y = L. This line is called a horizontal asymptote. As x approaches ∞, the function f(x) gets closer to L, in the graph the function gets closer to the dashed horizontal line. We also draw a horizontal asymptote at y = L if Then as x approaches -∞, or as we move left on the graph, the function f(x) will approach the dashed horizontal line. It is fine for a graph to cross over its horizontal asymptote(s). We can have something like this, for example: The important thing is that as x gets bigger (or more negative), the function is getting closer to the horizontal asymptote. Finding the horizontal asymptote(s) of a function is the same task as finding the limits of a function f(x) as x approaches ∞ or -∞. The difference is that horizontal asymptotes are drawn as dashed horizontal lines in a graph, while limits (when they exist) are numbers.
{"url":"http://www.shmoop.com/functions-graphs-limits/horizontal-asymptotes-help.html","timestamp":"2014-04-20T05:53:15Z","content_type":null,"content_length":"33504","record_id":"<urn:uuid:86c0cc4e-de1e-4948-9b71-d270a1f5165c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Verifying a trig identity October 22nd 2009, 04:50 PM Verifying a trig identity I am having some trouble verifying this identity ive tried several ways to solve it but for some reason i cant get the right answer it is: the last part reads "negative sine cubed x" ive tried factoring and the double angle formula but nothing seems to work if anyone has any idea to help id appreciate thanks October 22nd 2009, 05:02 PM I am having some trouble verifying this identity ive tried several ways to solve it but for some reason i cant get the right answer it is: the last part reads "negative sine cubed x" ive tried factoring and the double angle formula but nothing seems to work if anyone has any idea to help id appreciate thanks $\sin(2x)\cos{x} - 2\sin{x}$ $2\sin{x}\cos^2{x} - 2\sin{x}$ $2\sin{x}(\cos^2{x} - 1)$ $-2\sin{x}(1 - \cos^2{x})$
{"url":"http://mathhelpforum.com/pre-calculus/109765-verifying-trig-identity-print.html","timestamp":"2014-04-18T12:29:28Z","content_type":null,"content_length":"5715","record_id":"<urn:uuid:fa81db73-40c9-4c4c-b6d6-67d4421ea141>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
Euclidean Distance Definition A selection of articles related to euclidean distance definition. Original articles from our library related to the Euclidean Distance Definition. See Table of Contents for further available material (downloadable resources) on Euclidean Distance Definition. Euclidean Distance Definition is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Euclidean Distance Definition books and related Suggested Pdf Resources Suggested Web Resources Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We appreciate your suggestions and comments on further improvements of the site.
{"url":"http://www.realmagick.com/euclidean-distance-definition/","timestamp":"2014-04-20T19:11:11Z","content_type":null,"content_length":"28727","record_id":"<urn:uuid:66757404-a30a-4044-afa9-a0793dcf21a3>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
The long-run average cost curve is the envelope of an infinite number of short-run average total cost curves, with each short-run average total cost curve tangent to, or just touching, the long-run average cost curve at a single point corresponding to a single output quantity. The key to the derivation of the long-run average cost curve is that each short-run average total cost curve is constructed based on a given amount of the fixed input, usually capital. As such, when the quantity of the fixed input changes, the short-run average total cost curve shifts to a new The long-run average cost curve can be derived by identifying the factory size (or quantity of capital) that can produce each quantity of output at the lowest short-run average total cost. For example, The Wacky Willy Company has one short-run average total cost curve corresponding to a 10,000 square foot factory, another short-run average total cost curve corresponding to a 10,001 square foot factory, another for a 10,002 square foot factory, etc. Each of these short-run average total cost curves incurs the lowest average total cost for the production of a given quantity of output. The long-run average cost curve is then the combination of all minimum short-run average total cost values. Starting with Five Factories The derivation of a long-run average cost curve can be had using The Wacky Willy Company and the production of Wacky Willy Stuffed Amigos (those cute and cuddly armadillos, tarantulas, and scorpions). The diagram below presents five short-run average total cost curves corresponding to five alternative factory sizes that could be used to produce Stuffed Amigos--10,000 square feet, 20,000 square feet, 30,000 square feet, 40,000 square feet, and 50,000 square feet. These five factors reach minimum short-run average total cost at production levels of 100, 200, 300, 400, and 500 Stuffed Amigos, respectively. In the long run, The Wacky Willy Company can choose either one of these five factory sizes. However, once a factory size is selected, Stuffed Amigos production is confined to the corresponding short-run average cost curve (as well as corresponding short-run average variable cost and short-run marginal cost curves which are not shown) until the quantity of capital is changed in the long The prime question facing The Wacky Willy Company is: Which factory size should it select? The answer directly depends on the quantity of Stuffed Amigos it intends to produce. If it plans to produce somewhere around a 100 Stuffed Amigos, then the smallest factory is appropriate. For the production of 100 Stuffed Amigos, the 10,000 square foot factory has lower short-run average total cost than any of the larger factories. Should The Wacky Willy Company try to produce a mere 100 Stuffed Amigos using the second smallest, 20,000 square foot factory, average total cost is substantially higher. To see how high, extend the 100 Stuffed Amigos quantity until intersecting the second smallest short-run average total cost curve. The reason the larger factory has higher average total cost than the smaller one is largely due to fixed cost. The larger factory has more capital and thus higher total fixed cost. As such, average fixed cost is also higher for the production of the relatively small quantity of 100 Stuffed Amigos. If, for example, the total fixed cost of the 10,000 square foot factory is $1,000 per day, the average fixed cost of producing 100 Stuffed Amigos is $10 per Stuffed Amigo. However, the 20,000 square foot factory has higher total fixed cost, say $2,000 per day. This makes average fixed cost $20 per Stuffed Amigo. With higher average fixed cost, average total cost is also higher. Production Ranges Each of these five factories can produce Stuffed Amigos at a lower cost than the others over a range of production. The specific ranges are given in the exhibit below. The smallest factory has lower average total cost up to 135 Stuffed Amigos, the quantity at which the short-run average total cost curves for the smallest and second smallest intersect. In the range of 200 Stuffed Amigos (precisely from 135 to 240 Stuffed Amigos) the second smallest factory has lower average total cost than the others. The production ranges for the remaining three factors are 240 to 360, 360 to 465, and anything over 465.If The Wacky Willy Company is faced ONLY with the choice of these three factory sizes, it selects the first if it plans to produce up to 135 Stuffed Amigos, the second if it plans to produce between 135 and 240 Stuffed Amigos, the third if it plans to produce between 240 and 360 Stuffed Amigos, the fourth if it plans to produce between 360 and 465 Stuffed Amigos, and the fifth if it plans to produce more than 465 Stuffed Amigos. The long-run average cost curve for The Wacky Willy Company is therefore the lower portions of each of the short-run average total cost curves that lie below the others. Up to 135 Stuffed Amigos, the long-run average cost curve is the short-run average total cost curve for the 10,000 square foot factory. However, between 135 and 240 Stuffed Amigos, the short-run average total cost curve for the 20,000 square foot factory is the long-run average total cost curve. For production between 240 and 360 Stuffed Amigos, the short-run average total cost curve for the middle factory is the long-run average total cost curve. Click the [Lowest Cost Envelope] button to highlight those segments of the five short-run average total cost curves that make up the long-run average cost curve. Adding Another Four If The Wacky Willy Company faces only five factor sizes, this analysis ends right here. However, in reality it is likely to have other options. To add four more factor sizes to the original set of five presented in the following exhibit, click the [More Factories] button.The four additional factories reach their minimum values at 150, 250, 350, and 450 Stuffed Amigos. The inclusion of these additional factories also reduces the production ranges in which the original five factories have lower short-run average total cost than the others. The inclusion of other factories reduces those production ranges even more, until eventually the production range for each factory is a single quantity of output. The combination of the short-run average total cost values corresponding to each "one quantity" production range is then the long run average cost curve. To demonstrate the long-run average cost curve for The Wacky Willy Company click the [Long Run Average Cost] button. <= LONG-RUN AVERAGE COST LONG-RUN INDUSTRY SUPPLY CURVE => Recommended Citation: LONG-RUN AVERAGE COST CURVE, DERIVATION, AmosWEB Encyclonomic WEB*pedia, http://www.AmosWEB.com, AmosWEB LLC, 2000-2014. [Accessed: April 19, 2014]. Check Out These Related Terms... | | | | | | | | Or For A Little Background... | | | | | | | | | | | And For Further Study... | | | | | | Search Again? Back to the WEB*pedia
{"url":"http://www.amosweb.com/cgi-bin/awb_nav.pl?s=wpd&c=dsp&k=long-run+average+cost+curve,+derivation","timestamp":"2014-04-20T00:38:09Z","content_type":null,"content_length":"45286","record_id":"<urn:uuid:82668889-4621-40ee-ae3c-e557c24168d5>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00339-ip-10-147-4-33.ec2.internal.warc.gz"}