Physics Forums - View Single Post - Twin paradox and the nature of time?

The acceleration isn't constant in the scenario you're describing in #13. It can be directed "towards the floor" the whole time, but only if the rocket is rotating, and if it is, the rotation is …

The equivalence principle says that there are no local experiments you can perform to distinguish between constant proper acceleration and a constant gravitational field, but the word "local" refers to the fact that the equivalence is only exact in the limit where the "diameter" of the region of spacetime where the experiment is performed goes to zero. The final ages of the two twins depend on the curves in spacetime that represent their motion. There's nothing "local" about that.
Eqlog: An Equational Logic Programming Language

Eqlog is a programming and specification language which combines constraint logic programming with equational programming. Its default operational semantics is order-sorted narrowing, but particular cases can be computed by efficient built-in algorithms over suitable data structures, with their functions and relations, including equality, disequality, and the usual orderings for numbers and lists. Initiality in Horn clause logic with equality provides a rigorous semantics for functional programming, logic programming, and their combination, as well as for the full power of constraint programming, allowing queries with logical variables over combinations of user-defined and built-in data types.

Eqlog has a powerful type system that allows subtypes, based on order-sorted algebra. The method of retracts, a mathematically rigorous form of runtime type checking and error handling, gives Eqlog a syntactic flexibility comparable to that of untyped languages, while preserving all the advantages of strong typing. The order-sortedness of Eqlog not only greatly increases expressivity and the efficiency of unification, but also provides a rigorous framework for multiple data representations and automatic coercions among them. Uniform methods of conversion among multiple data representations are essential for reusing already-programmed constraint solvers, because they will represent data in various ways. Order-sorted algebra provides a precise and systematic equational theory for this, based on initial semantics.

Eqlog also supports loose specifications through its so-called theories, and provides views for asserting the satisfaction of theories by programs, as well as relationships of refinement among specifications and/or programs. This relates directly to Eqlog's powerful form of modularity, with generic (i.e., parameterised) modules and views, based on the same principles as the OBJ language.
Theories specify both the syntactic structure and the semantic properties of modules and module interfaces. Modules can be parameterised, where actual parameters are modules. Modules can also import other modules, thus supporting multiple inheritance at the module level. For parameter instantiation, a view binds the formal entities in an interface theory to actual entities in a module. Module expressions allow complex combinations of already-defined modules, including sums, instantiations and transformations; moreover, evaluating a module expression actually builds a software system from the given components. Thus parameterised programming in Eqlog gives significant support for large programs through module composition. The semantics of module importation is given by conservative extensions of theories in Horn clause logic with equality. The stronger notion of persistent extension underlies generic modules.

Maintained by Joseph Goguen. Last modified 23 February 1999.
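To make the module machinery described above concrete, here is a small sketch of what a generic Eqlog-style module might look like. The syntax is hypothetical, modeled on OBJ (which the page names as the basis for Eqlog's module system); the theory, module, sort, and operator names are all invented for illustration.

```
*** Interface theory: any monoid may be supplied as the parameter.
th MONOID is
  sort Elt .
  op e : -> Elt .
  op _*_ : Elt Elt -> Elt [assoc id: e] .
endth

*** Generic module: lists over an arbitrary monoid, with a
*** subsort NeList of non-empty lists (order-sorted typing).
mod LIST[M :: MONOID] is
  sorts List NeList .
  subsorts Elt < NeList < List .
  op nil : -> List .
  op __ : List List -> List [assoc id: nil] .
  op fold : List -> Elt .
  var X : Elt .  var L : List .
  eq fold(nil) = e .
  eq fold(X L) = X * fold(L) .
endm

*** A view binds the formal entities of MONOID to actual ones (say,
*** natural numbers with + and 0); evaluating the module expression
*** LIST[NAT-AS-MONOID] then builds the instantiated system.
```

The view is what asserts that the actual parameter really satisfies the interface theory, which is exactly the role of views described in the text.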
Covering Relation

Post #1 — July 10th 2007, 08:30 AM (Junior Member, joined Jun 2007):

Let (S, $\preceq$) be a poset. We say that an element y $\in$ S covers an element x $\in$ S if $x \prec y$ and there is no element z $\in$ S such that $x \prec z \prec y$. The set of pairs (x, y) such that y covers x is called the covering relation of (S, $\preceq$).

What is the covering relation of the partial ordering {(a,b) $\vert$ a divides b} on {1,2,3,4,6,12}? My answer so far is {(1,2), (1,3), (2,4), (2,6), (3,6), (4,12), (6,12)} — is this right?

Last edited by Discrete; July 10th 2007 at 02:21 PM.

Post #2 — July 10th 2007, 11:21 AM (quoting #1, which originally mistyped the definition as "no element z $\in$ S such that $x \prec y \prec z$"):

NO indeed! Look very carefully at the definition of covering elements. Is this true, $2 \prec 4 \prec 12$? If it is true, then can 4 cover 2? Try again!

Post #3 — July 10th 2007, 02:15 PM (Junior Member, joined Jun 2007):

Sorry, I revised the question! I mistyped it. Is it correct now?

Post #4 — July 10th 2007, 02:41 PM:

Yes, now it is correct using the corrected definition. The other definition gives maximal elements in chains.
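The answer under the corrected definition can also be checked mechanically. Below is a short Python sketch (not from the thread; the helper name `covers` is mine) that computes the covering relation of a finite poset directly from the definition:

```python
def covers(s, leq):
    """Covering relation of the poset (s, leq): pairs (x, y) with
    x strictly below y and no z strictly between them."""
    lt = lambda a, b: a != b and leq(a, b)
    return {(x, y) for x in s for y in s
            if lt(x, y) and not any(lt(x, z) and lt(z, y) for z in s)}

divides = lambda a, b: b % a == 0
print(sorted(covers({1, 2, 3, 4, 6, 12}, divides)))
# → [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

Note that (2, 12), for instance, is excluded precisely because of the chain $2 \prec 4 \prec 12$ that the replier pointed to.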
Re: st: Correlation coefficient between discrete and continuous variables

From: Steven Samuels <sjhsamuels@earthlink.net>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Correlation coefficient between discrete and continuous variables
Date: Thu, 20 Nov 2008 14:04:25 -0500

Stas Kolenikov's -polychoric- (-findit polychoric-) will compute correlations when one (or more) of the discrete variables is derived from an underlying continuous normal distribution. The correlation between a continuous and a discrete variable is known as the "polyserial" correlation.

On Nov 20, 2008, at 1:08 PM, Sergiy Radyakin wrote:

Dear All, a colleague of mine has just hinted to me that it may not be straightforward to compute a correlation coefficient when one of the variables is discrete. Until now I never cared, and neither does the Stata manual: in particular, it does not require anywhere that the variables be continuous, and the example shows the use of the -correlate- command to find a correlation between such discrete variables as -state- and -region- and such continuous variables as -marriage rate- and -divorce rate- (which is also strange, since there is no logical ordering of -state- and -region-, but that is a different issue).

After looking into the literature, the following paper seems to be most relevant: N. R. Cox, "Estimation of the Correlation between a Continuous and a Discrete Variable", Biometrics, Vol. 30, No. 1 (Mar., 1974), pp. 171-178. In particular, my case satisfies the assumption made in the paper that the discrete value is derived from an underlying continuous variable (so there is an ordering: low, medium, or high). The method recommended in the paper seems very far from what Stata appears to compute according to the manual; in particular, it calls for iterative maximum likelihood estimation.
Before I start writing any code myself, I would like to ask:

Q1: Does Stata do any adjustment to the way it computes the correlation coefficient based on the nature of the variable (discrete or continuous)?
Q2: Is the difference between (the correlation coefficient as estimated by Stata in this case) and (the one computed in the recommended way) practically important?
Q3: Is there any standard or user-written command to compute the correlation coefficient according to the method described in the paper?
Q4: I am ultimately interested in the correlation between my observed continuous variable and the unobserved continuous variable, which is represented in the discrete levels. Unfortunately the thresholds are not available to me, so I may not be sure about the size of the intervals. Furthermore, a significant measurement error may be involved, since many interviewers may have eyeballed the continuous variable into different groups differently. Should I instead focus on different measures of correlation? Could you please suggest any that better fit the context?

Thank you, Sergiy Radyakin

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
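The attenuation that Cox's ML method corrects for is easy to demonstrate by simulation. The sketch below is in Python/NumPy rather than Stata, and the correlation and cutpoints are arbitrary illustration values: discretizing one variable of a correlated normal pair drags the naive Pearson coefficient below the latent correlation.

```python
import numpy as np

rng = np.random.default_rng(12345)
n, rho = 100_000, 0.8

# Latent bivariate normal pair with correlation rho
x = rng.standard_normal(n)
latent = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# The discrete variable actually observed: the latent value
# coarsened into three ordered groups (low / medium / high)
y = np.digitize(latent, [-0.5, 0.5])

r_latent = np.corrcoef(x, latent)[0, 1]   # close to 0.8
r_naive = np.corrcoef(x, y)[0, 1]         # noticeably attenuated below 0.8
```

A polyserial estimator recovers the latent correlation by modeling the thresholds explicitly, which is the kind of correction -polychoric- applies for mixed continuous/ordinal pairs.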
Two questions about cycles (algebra)...

I have two questions:

1) For the example on the second page, I don't understand why they say [tex]\alpha\gamma\alpha^{-1} = (\alpha1\ \alpha3)(\alpha2\ \alpha4\ \alpha7)(\alpha5)(\alpha6)[/tex] instead of [tex]\alpha\gamma\alpha^{-1} = (\alpha1\alpha^{-1}\ \alpha3\alpha^{-1})(\alpha2\alpha^{-1}\ \alpha4\alpha^{-1}\ \alpha7\alpha^{-1})(\alpha5\alpha^{-1})(\alpha6\alpha^{-1})[/tex].

Reply: What they say is consistent with the theorem. The theorem says to "apply [itex]\alpha[/itex]" to the symbols in the cycles. If [itex]\alpha,\ p,\ q[/itex] are cycles, it is true that [itex]\alpha (\ p \ q) \ \alpha^{-1} =( \alpha \ p \ \alpha^{-1})(\alpha \ q \ \alpha^{-1})[/itex], but this is not the content of the theorem. A cycle is not the same as the product of the individual symbols in the cycle. The cycle (1,2,3) is not equal to (1)(2)(3).

2) For the tables at the top of the 2nd page, I don't know how they computed those numbers...

Reply: For example, in the permutation group [itex]S_4[/itex], there are 8 different elements of the group that are cycles of length 3. The example (1,2,3) in the table illustrates one of them. There are 24 = (4)(3)(2) ordered triples that can be formed by taking 3 distinct numbers from the set {1,2,3,4}. However, each triple such as (1,2,3) is one of 3 representations of the same cycle: (1,2,3) = (2,3,1) = (3,1,2). So there are 8 = 24/3 distinct cycles of length 3.
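Both answers can be checked by brute force. The Python sketch below (the helper functions are mine, not from the textbook) represents permutations as dicts, verifies that conjugation by α simply relabels the symbols in the cycles of γ, and counts the 3-cycles of S_4:

```python
from itertools import permutations

def compose(p, q):                      # (p∘q)(x) = p(q(x))
    return {x: p[q[x]] for x in q}

def inverse(p):
    return {v: k for k, v in p.items()}

def from_cycles(cycles, n):
    """Build a permutation of {1..n} from its disjoint cycles."""
    perm = {i: i for i in range(1, n + 1)}
    for c in cycles:
        for a, b in zip(c, c[1:] + c[:1]):
            perm[a] = b
    return perm

# gamma = (1 3)(2 4 7)(5)(6) in S_7; alpha is an arbitrary permutation.
gamma = from_cycles([(1, 3), (2, 4, 7)], 7)
alpha = from_cycles([(1, 2, 3, 4, 5, 6, 7)], 7)

# alpha gamma alpha^{-1} equals gamma with every symbol x in its
# cycles replaced by alpha(x) -- the content of the theorem.
conj = compose(compose(alpha, gamma), inverse(alpha))
relabeled = {alpha[x]: alpha[gamma[x]] for x in gamma}
assert conj == relabeled

# Counting 3-cycles in S_4: 24 ordered triples / 3 rotations each = 8.
three_cycles = {min(tuple(t[i:] + t[:i]) for i in range(3))
                for t in permutations(range(1, 5), 3)}
assert len(three_cycles) == 8
```

The canonical form `min(...)` just picks one fixed rotation of each cycle, so the set ends up with one element per distinct cycle.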
Robotics Academy Blog

Teacher Appreciation Week is May 6th – 10th and we are celebrating! We LOVE all teachers and appreciate everything they do for their students! Here at the Robotics Academy, we have a special place in our hearts for robotics teachers, mentors and coaches, so this year we want to make sure they get the attention they deserve.

Do you know an amazing robotics teacher, mentor, or coach? Let us know who they are and why they are AWESOME! Send us your best story, pictures, and/or video about this person to socialmedia@cs2n.org. We will share several of these stories on the Robotics Academy blog during Teacher Appreciation Week. And the Top Three Stories, voted by us, will each WIN one Classroom Annual License for Robot Virtual Worlds for their teacher/mentor/coach!

Stories must be submitted by Wednesday, May 8th at 5pm Eastern Standard Time. We will announce the winners on Friday, May 10, 2013. Please include contact information (name and email/school phone number) for the teacher, mentor, or coach that you're writing about so we can make sure to get their permission to publish their name on our site. You can send any questions to socialmedia@cs2n.org.

Part 4 of 4: Success in Different Forms

Table of Contents
Part 1: Introduction and Background
Part 2: The Range of Strategies
Part 3: A Winning Strategy
Part 4: Success in Different Forms

This is the final article in our 4-part series investigating math in LEGO Robotics Competitions. Part 1 introduced the past research and the context of the present investigation — a local LEGO robotics competition where investigators conducted interviews about team strategies. Part 2 laid out the range of strategies that were observed, and teased out an interesting result: teams using math-based strategies seemed to have widely varying success in the competition, with math-users both leading the pack and trailing at the rear.
Part 3 looked in depth at the winning team and found that purposeful use of mathematics was central in both their programming and overall planning strategies. But what about the teams that used math, but still scored low? Would it have been better for them to choose a non-math-based strategy?

Focus Team Surveys

As mentioned in the previous article in the series, the research team met with four Focus Teams outside of the competition, hoping to gain greater insight into their solution strategies and what they got out of participating in the competition. Each of the four Focus Teams[1] completed two surveys before the competition, and the same two surveys again after the competition. The first survey consisted of 12 test-like questions that asked the students to solve problems involving robot motion (e.g., how many motor rotations are required to make this robot move this distance?). The second survey measured students' attitudes towards robotics and mathematics, including questions about their level of interest in robots and math and also their view of how valuable math is for doing robotics.

Who were the Focus Teams?

Two of the Focus Teams consisted of students from elementary grades, and are codenamed Team E1 and Team E2. The other two consisted of middle school age students, and are identified as Team M1 and Team M2. Table 1 shows the teams, their grade levels, the number of students on their team, the strategy the team used for their first move, and their rank and best score from the competition. Perhaps not surprisingly, the middle school age teams outperformed the elementary school age teams in the competition, as evidenced by their much higher ranks and final scores. This is not a universal effect, however, as there were a number of elementary school age teams who did do very well in the competition (ranked #5, #7, #9, & #12 out of 22 teams). Unfortunately none of those teams were Focus Teams, and so did not take the surveys.
Among the Focus Teams, two (Teams E2 & M2) used the math-based Calculate-Test-Adjust strategy for their first move, and the other two teams (Teams E1 & M1) used a non-math-based strategy. This provides a nice contrast to explore the effect of using math in a team's solution. The conclusion from Part 2 still stands out – using a math-based strategy leads to high competition scores in some cases (Team M2), but not in others (Team E2). But there may be more to tell about the teams than just their competition scores. What about the surveys?

The Learning Benefits of Using a Math-Based Strategy

Figure 1 shows the results from the robot math knowledge survey administered to the Focus Teams. The middle school age teams (Teams M1 & M2) have higher scores overall than the elementary school age teams (Teams E1 & E2), which is not surprising. The older students have more experience with mathematics in general, and it shows when they solve formal problems that make use of math. However, a more interesting pattern can be found by looking not just at the scores, but at the gains. The two teams that used the math-based Calculate-Test-Adjust strategy (Teams E2 & M2) both improved on their survey scores from the beginning of the competition to after, but the teams who used a non-math-based strategy (Teams E1 & M1) did not. This suggests that regardless of a team's initial level, using math in an explicit way in the competition solution improves student use of math when solving more general problems relating to robot movements. If increasing students' problem solving abilities using math is a goal of the robotics team, then just attempting to use a math-based strategy may have real advantages, regardless of how it impacts the team's overall success in the competition.

The Attitude Benefits of Using a Math-Based Strategy

The second survey measured students' attitudes toward math and robotics in general. Figure 2 below shows the results from each Focus Team for this survey.
Only 1 of the 4 Focus Teams had more positive views in each part of the attitude survey after the competition compared to before, and that was Team E2. Team E2 was the elementary school age team that used the math-based Calculate-Test-Adjust strategy in their solution. For this team, the experience preparing for and competing in the competition did have a positive impact on their interests in robotics and mathematics, as well as their views about the value of mathematics in robotics. Remarkably, this positive change in attitudes was attained in spite of the fact that Team E2 did not score highly in the actual competition (ranked #17 out of 22 teams). This result echoes a statement by a number of other coaches who, in the day-of-competition team interviews, stressed that they were participating to provide their students with a positive experience in robotics, not to win the competition. Perhaps it worked. It appears, though, that by using mathematics in the robotics competition, attitudes toward math itself get caught in the updraft, and benefit as well.

There are a number of positive outcomes that result from participating in a robotics competition. Better problem solving and more positive attitudes toward robotics and mathematics are two outcomes that appear to be attainable. These are in addition to, and possibly even preferable to, performing well in the competition itself!

Conclusion #3 – Even when a team's use of math doesn't lead to success on the challenge, just attempting to use math can have other benefits in terms of improving students' understanding and developing more positive attitudes about math and robots.

This series has been about finding out what makes successful teams successful. Being older, more experienced, and better at math sure seem like advantages for teams in a robot competition. But those are hardly the point, or even the whole story. The use of mathematics in solution strategies, however, is very much to the point.
Every coach has this option, and it appears to pay off in both tangible and intangible ways. A team with a high degree of fluency in mathematics can apply math in creative ways to springboard themselves to the top of the charts. A team that is less comfortable with mathematics but commits to using math anyway sets itself up for a different kind of success – real, measurable gains in student problem-solving capability and attitudes toward robotics and math. If, in trying to create more systematic solutions, students' failed attempts actually help them to understand more about the way the robots work, they will be able to apply those improved understandings to future problems. If, in the challenge of attempting to use math, a student comes to understand the role or context of mathematics better, it makes both robotics and mathematics more interesting, and helps the student to see math as having real, usable value in robotics and the world. And that certainly sounds like a success any FLL coach would be proud to report, trophy or no.

Thank you for reading our series on the benefits of math in LEGO robotics competitions. We hope that you found some useful information to think about when working with your team this year for the upcoming FLL competition. Please leave comments to let us know your thoughts on our articles and on the use of math in educational robotics more generally. Also, consider volunteering to help us in our next investigation. There are still many open questions about what helps a team be successful, and we hope to continue to investigate those questions and share what we find with the FLL community (send an email to Eli Silk if you are interested).

1. In actuality, five teams agreed to be Focus Teams, but one team's data was incomplete and therefore not fully counted. The patterns observed do continue to be true even if the incomplete data is left in.

2.
The number of students on the Focus Teams reported in the table is the actual number of students who participated in team activities. This is sometimes different from the number of students who completed the surveys. The surveys were only given on particular days, on which some students may have been unavailable. The number of students who completed the surveys for each Focus Team is shown in the x-axis of the results figures for the surveys.

Part 2 of 4: The Range of Strategies

Table of Contents
Part 1: Introduction and Background
Part 2: The Range of Strategies
Part 3: A Winning Strategy
Part 4: Success in Different Forms

Part 1 of this series set the stage for an investigation into student mathematics usage at a local LEGO robotics competition. In Part 2, we'll take a look at the types of strategies that teams came up with for solving the challenge, and how those different approaches fared in the competition.

Interviews with the Teams

22 teams from the greater Pittsburgh area participated in the 2010 May Madness robotics competition. Investigators from the University of Pittsburgh's Learning Research and Development Center (LRDC) and Carnegie Mellon University's Robotics Academy (RA) were able to interview 16 of them about their team sizes, grade levels, and experience levels for both students and mentors. They also asked teams to describe their solutions to the challenge and how they came up with those solutions.

The Different Strategies

As expected, different teams came up with very different solutions. In fact, they were so different that apples-to-apples comparisons became nearly impossible at the whole-strategy level. Fortunately, every strategy did include one common component: moving the robot to the center of the board to begin scoring points.
Table 1 breaks down the different strategies that teams used, and the number of teams that used each approach.[1]

That only 3 teams used a (non-rotation) Sensor-Based strategy is likely a direct consequence of the nature of the Botball Hybrid II challenge. In particular, the toilet paper tubes were not steady enough for a robot's touch sensor to contact them without tipping the tubes over. As a result, teams seeking to score using the tubes had to choose non-contact means of controlling their robot's movement. The 3 teams that did use a Sensor-Based strategy on their first move were all going for the nests, which are much heavier than the toilet paper tubes. However, for various reasons, even these teams abandoned use of their sensors in their moves later in the challenge. In addition, the board surface featured few marked lines, making line-following and line-tracking less attractive.

A Math-Based Strategy for Calculating Motor Rotations

The remaining 13 teams programmed their initial move using the rotation sensor, effectively moving a set distance forward. However, those 13 teams used decidedly different methods to choose their motor rotation values, especially the initial value. Some teams guessed; others used the view mode; but four teams chose to start with a math-based prediction. These groups all ended up using variants of a three-phase strategy called Calculate-Test-Adjust:

1. Measure how far the robot has to move, and use mathematical means to calculate a (theoretically correct) rotation value for the movement
2. Run the robot with the predicted value
3. Compensate for any observed overshoot or shortfall by making small "tweaks" to the rotation value

Students used several different mathematical relationships to arrive at their predictions. For example, one group measured how far the robot moved forward with each motor rotation, then calculated how many of those 1-motor-rotation distances the robot needed to move the total distance to the target.
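That group's Calculate step amounts to a one-line division; the sketch below uses invented measurements purely for illustration:

```python
# Hypothetical measurements, for illustration only
cm_per_rotation = 17.5       # observed distance covered in one motor rotation
target_distance_cm = 85.0    # measured distance from start to target

# Calculate: the theoretically correct motor-rotation value
rotations = target_distance_cm / cm_per_rotation   # about 4.86

# Test / Adjust: run the robot with this value, then tweak it by
# small amounts to compensate for any overshoot or shortfall.
```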
The students then entered this value into their program, tested it, and fine-tuned the value to get the robot to exactly the right spot. One notable quality of this strategy is that it is not purely mathematical – all 4 teams that used Calculate-Test-Adjust for their initial motor rotations value ended up having to refine their value with guessing or with the view mode afterward. A math-based calculation does not appear to be sufficient on its own for this type of problem.

The Relative Success of the Different Strategies

So how well did these math-using teams fare compared to their sensor-using, guess-and-testing, and view mode-ing peers? The 22 teams were ranked based on their best point score after 3 rounds of the competition. Figure 1 below shows the average rank of the teams who used each strategy. Bigger bars indicate a higher average ranking for the teams using that strategy — meaning teams who used that strategy had better scores in the competition. Looking at the data in this way, the View-Mode strategy was the most effective and the Sensor-Based strategy was the least effective. The Guess-Test-Adjust and the Calculate-Test-Adjust strategies seem to be in the middle and similar to each other. Given that this particular challenge was somewhat biased against the use of sensors, it probably makes sense that teams who used the Sensor-Based strategy did not fare well. But what of the others?

The View-Mode strategy did seem to do particularly well. The investigators theorize that this strategy leads to success for two reasons. First, teams that use this strategy can program their movements quickly. Figuring out the correct motor rotations value is straightforward and fast, so that frees the team up to spend their limited time improving other parts of their solution (e.g., making their robot base solid and their attachments functional).
Second, the View-Mode strategy is very reliable, so once teams get a motor rotation value by using this strategy, they have a lot of confidence that that value is the right one and will work well. In essence, the View-Mode strategy is easy to implement quickly and gives very reliable results, which explains why teams who chose that strategy tended to do well in the competition.

The Success of the Math-Based Strategy

Compared to rolling the robot on the ground and reading a number, both Guess-Test-Adjust and Calculate-Test-Adjust are slow to implement and potentially less reliable as well. And in the results, teams who used these strategies did okay in the competition, but not as well as teams who used the View-Mode strategy… case closed. Right?

Averages, it turns out, don't tell the whole story. Calculating the standard deviation of the ranks gives us a sense of how tightly clustered these different success levels are for each strategy. If everything were cut-and-dried, we'd see all the View-Mode teams clustered at the top, followed by the test-and-adjust teams, and the sensor-based teams at the bottom. Instead, when we add the standard deviation as error bars on the previous bar plot of the average ranks (see Figure 2), some things fall into place, and others fly loose. The View-Mode strategy was the least variable – teams using it were tightly clumped in the rankings – further supporting the idea that that strategy is straightforward and reliable. But the Calculate-Test-Adjust strategy has a huge variability (the error bars span almost the entire range of possible ranks)! Something important remains untold.

In fact, a closer look at the 4 teams that used the Calculate-Test-Adjust strategy shows that 2 of them were the top ranked teams in the entire competition (ranked #1 and #2 out of 22 teams). This suggests that using a math-based calculation strategy can be very powerful.
At the same time, the other two Calculate-Test-Adjust teams were #17 and #21 out of 22 in the rankings – the complete opposite end of the scoring spectrum. This dramatic separation in performance is telling (see Figure 3). It may not be enough to simply use a strategy; the result may hinge dramatically upon the strategy being used well. Perhaps when the Calculate-Test-Adjust strategy is implemented well, it is just as quick and just as reliable as the View-Mode strategy, if not better. Done without a full understanding, however, the calculations could turn into distractions. The research team theorizes that teams who are fluent with mathematics can use math-based calculations to their advantage by determining the correct motor rotation values for different moves relatively quickly. As with the View-Mode strategy, this time savings frees resources for use on building tasks and fine-tuning overall strategy. Teams that are less fluent in mathematics, however, would take longer to perform the math-based calculations, and make more errors, thus taking time away from working on other important parts of the task.

Conclusion #1 – Not many, but some teams do use math. And of the teams that do use math, there is widely varying success, from some of the most successful to some of the least successful.

Overall, teams found a range of ways to approach the challenge. Different challenges may favor different types of strategies, but the May Madness event saw a variety of approaches employed. Some strategies did seem to lead to better success in the competition. In particular, the View-Mode strategy seemed to be very successful for teams, presumably because it is quick and reliable. Not many teams chose to use the math-based Calculate-Test-Adjust strategy, but those who did ended up with both the highest scores in the competition and some of the lowest.
This suggests that for the math-based strategy more than any other, it matters not just that a team used that strategy, but how they used it. Fortunately, in addition to the day-of-competition interviews, the research team also met with a few of the teams outside of the competition to understand their solution strategies in more depth. The winning team in the competition was one of these. Did using math really help this team be successful? And if it did, then how? Continue on to Part 3 to find out about the winning team's strategy and their use of math.

1. There was one other strategy that teams used to determine how many motor rotations to use in their program. We call this the "Overshooting" strategy because it works in situations where it isn't critical that the robot moves a particular amount, as long as the robot moves far enough. For example, when approaching the nests it was okay if the robot went too far, because it would just push the nest forward a bit, but the nest would still be in a position where it was easy to grab. This strategy didn't work with the toilet paper tubes, because if the robot went too far and bumped into them, they would fall down and would then be much harder to grab. In cases when overshooting was acceptable, teams were able to choose a motor rotations value that was safely big enough without having to worry whether it was exactly right. No team used this strategy on their initial robot movement, and teams were more likely to use it when programming the manipulators, so we didn't include it in our primary list of strategies.

2. One could argue that the rotation sensor is a sensor like all the others. In particular, the programming logic is the same, so a strategy that used the rotation sensor could also be labeled Sensor-Based.
But here we think the distinction between the rotation sensor and the other sensors (e.g., touch, ultrasonic, and light sensors) is meaningful, as the rotation sensor is the only one that will make the robot move with little regard to what is out in the world. Strategies using the other sensors will move varying distances depending on the way the objects in the world are configured, but the rotation sensor strategy (within some error) will always move a consistent amount.

Part 3 of 4: A Winning Strategy

Table of Contents
Part 1: Introduction and Background
Part 2: The Range of Strategies
Part 3: A Winning Strategy
Part 4: Success in Different Forms

This is Part 3 of a 4-part series investigating how math may help in LEGO Robotics Competitions. Part 1 introduced the past research and the context of the present investigation — a local LEGO robotics competition where investigators conducted interviews about team strategies. Part 2 laid out the range of strategies that were observed, and teased out an interesting result: teams using math-based strategies seemed to have widely varying success in the competition, with math-users both leading the pack and trailing near the rear. So what was it that the most successful teams did that led to their success? In Part 3, we take a look at the winning team's strategy and see how they used math to great effect.

A Focus Team

In addition to short, standardized interviews with teams on the day of the competition, the research team also sat down for more in-depth interviews with four robotics teams outside of the competition, hoping to gain greater insight into their solution strategies. Two of these Focus Teams were composed of middle school aged students and two of elementary school aged students. One of the Focus Teams – codenamed M2 – happened to be the team that won the competition. Team M2 used a math-based Calculate-Test-Adjust strategy for their first move.
They were one of only four teams (out of 16 interviewed) who used a math-based strategy. But Team M2's overall solution was fascinating and worth sharing, as there is so much that can be learned from what they did.

Team M2

Team M2 was a school-based team consisting of 10 students, all from a gifted program in a suburban school. There were one 8th grader, six 7th graders, and three 6th graders. Four of the students had been to a competition before, but the rest were rookies. Their coach, a gifted teacher from the school, had been a coach for five previous robot competitions, so she was very experienced. They reported spending about 17 total hours preparing for the competition, with about 10 of those hours in just the last two weeks. This was, in fact, on the low end of total preparation time compared to other teams that were interviewed. Team M2 met during normal school hours, when the gifted teacher was able to pull the students from their regular classes, which may have constrained the amount of time they could meet.

Team M2's Robots and Game Strategy

Team M2 was large enough to have multiple robots, and so was able to split into two sub-teams. They divided the task into missions, with one sub-team working on the toilet paper tubes and the other sub-team working on the nests. They built one robot according to the Robotics Educator Model (REM) given in the LEGO® instructions, although they adapted it by substituting larger wheels. They also built a second robot entirely from scratch. The REM robot had two different attachments: one for collecting the toilet paper tubes and the other for loading and transporting the ping pong balls to the gutter and the empty tubes to the end zone scoring area. The second robot was used to retrieve the nests. They designed this second robot from scratch because they felt they needed a robot that was heavier than the REM robot design in order to effectively pull the nests back. Below are photos of Team M2's robots and attachments.
These robots as a whole were not very complex, but each robot design and attachment was well-tuned to specific parts of the challenge.

Team M2's Winning Round

Team M2 ended up with the high score in the competition, 91 points. See below for a video of their winning round. It is clear from the video of Team M2's robots in action that all of their movements are quick and reliable. They retrieve all three toilet paper tubes very fast and without any fumbling. As mentioned in Part 2, the research team suspects that this is because Team M2 was able to use the Calculate-Test-Adjust strategy to make efficient calculations that got them close to correct motor rotation values very quickly. The time savings allowed them to work on other aspects of the challenge, such as ensuring that both of their robot designs were robust and reliable. This, too, shows clearly in the video, as Team M2 uses their multiple robots and attachments to clear advantage. In general, Team M2 is a great example of an efficient and focused team that produced a high-quality solution.

Team M2's Other Math

Team M2 did use Calculate-Test-Adjust, a math-based strategy for movement, but perhaps the most exceptional aspect of Team M2's strategy was a completely separate use of mathematical thinking. One of the students on Team M2 did a systematic analysis of the points that the team could get, based on observations of their practice rounds. She measured the time they took to complete each mission and the points that they could get, and then identified the best ordering to help maximize their total points. She determined that their team could get the toilet paper tubes (and all 9 ping pong balls contained within them) back to base, then deposit the balls into the gutter and the tubes into the end zone, in 52 seconds for a total of 57 points. Then they would still have time to pursue the nests for additional points.
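The kind of points-versus-time analysis described here can be sketched in a few lines of code. The 57-point, 52-second figure for the tubes mission comes from the article; the nests entry and the greedy points-per-second ordering are illustrative assumptions, not Team M2's actual numbers or method.

```python
# Mission list: (name, points, seconds). The 57-point / 52-second tubes
# figure comes from the article; the nests entry is a hypothetical stand-in.
missions = [
    ("tubes and ping pong balls", 57, 52),
    ("nests", 20, 35),
]
ROUND_SECONDS = 90

# One simple planning rule: rank missions by points per second,
# then take them greedily while round time remains.
plan, time_left, total = [], ROUND_SECONDS, 0
for name, points, seconds in sorted(missions, key=lambda m: m[1] / m[2], reverse=True):
    if seconds <= time_left:
        plan.append(name)
        time_left -= seconds
        total += points

print(plan)       # tubes first: highest points per second
print(total)      # 77 points planned
print(time_left)  # 3 seconds of slack
```

Even a toy analysis like this makes the key insight visible: the tubes-and-balls mission dominates the scoring per second spent, so it should come first.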
In their winning round (see the video above), they execute this strategy almost perfectly, although a later mission ends up knocking one of their toilet paper tubes from the end zone scoring area. Although the team no longer had documentation of their analysis when interviewers met with them after the competition, the research team attempted to recreate it in Table 1 to illustrate how powerful such an analysis can be. When the points are broken down in this way, it is clear that the large majority of points are to be gained by going after the ping pong balls, half of which are in the toilet paper tubes, and putting them in the gutter. And this is exactly what Team M2 did, doing so very efficiently and reliably. Thus, Team M2's use of mathematics extended beyond programming into the planning process itself, and appears to have paid off very well.

Conclusion #2 – The most successful teams do use math purposefully and efficiently, and their math use is a prominent factor separating their solutions from the solutions of the rest of the teams.

It seems reasonable to think that Team M2 was an exceptional team, with some previous competition experience among its team members, students who were generally considered smart and good at math, and an experienced mentor. Nevertheless, it also seems clear that a big part of Team M2's success was a direct result of their use of math in their solution strategy, and that their math use gave them real advantages at multiple levels. A team that uses math effectively to quickly zero in on correct motor rotation values in their program can save valuable time. That time can then be used to make the rest of the robot more efficient and reliable, or even to build a second specialized robot to complement the first. In addition, using math as a larger strategy to do more systematic analysis of the points breakdown and the effectiveness of different mission solutions can have a major impact on how well a team maximizes its performance at the competition.
In reality, of course, not every team will walk in the door with the background to apply math as effectively as Team M2. After all, the observations in Part 2 show that there were teams who tried to use a math-based strategy but ended up performing poorly, and that the View-Mode strategy, which doesn't include any math, was the most straightforward, reliable, and effective strategy on average. Is math only a strategy that should be pursued by "elite" teams and students, then? Of course not! On the contrary, the final article in this series will provide evidence that the use of math in robotics competitions can produce winners in different ways… and more importantly, it can help produce the type of winning that will last long after the competition is over!

Part 1 of 4: Introduction and Background

Table of Contents
Part 1: Introduction and Background
Part 2: The Range of Strategies
Part 3: A Winning Strategy
Part 4: Success in Different Forms

Every September, thousands of FIRST® LEGO® League (FLL) coaches and mentors around the world crack their knuckles, dust off their parts bins, and prepare to dive into an intensive three-month odyssey of technical twists and twenty-first century tutelage as they guide their teams to success in the annual FLL competition. But what exactly is success in a world of gameboards and gracious professionalism? Is the highest scorer really the biggest winner? What do students actually gain through their participation? How does it happen? And so, how should the enlightened coach choose from the multitude of competition strategies that lie open as the new season dawns? Perhaps we can learn something from the recent past. Since 1999, the Robotics Academy (RA) has been helping teachers, mentors, coaches, and students have positive educational experiences with robotics. Among other things, the Academy develops curricula, offers teacher professional development, and hosts robotics competitions.
Recently, the Robotics Academy — in cooperation with the University of Pittsburgh's Learning Research and Development Center (LRDC) — has been investigating ways to deepen students' experiences with robotics by incorporating math in their activities. This blog series describes what RA and LRDC researchers found when they interviewed teams at a local LEGO robotics competition, looking to answer a few key questions:

• Are there opportunities to use math in a typical robotics competition problem?
• Does using math have any impact on a team's score?
• Can using math deliver "success" in any other sense?

In short, the investigators found that the answer to all three questions was an overwhelming yes — there are opportunities to use math in a LEGO robotics competition setting, and when teams do use math it seems to be helpful in ways not limited to points and trophies. Ultimately, the research team arrived at 3 major conclusions:

1. Not many, but some teams do use math. And of the teams that do use math, there is widely varying success, from some of the most successful to some of the least successful.
2. The most successful teams do use math purposefully and efficiently, and their math use is a prominent factor separating their solutions from the solutions of the rest of the teams.
3. Even when a team's use of math doesn't lead to success on the challenge, just attempting to use math can have other benefits in terms of improving students' understanding and developing more positive attitudes about math and robots.

Each remaining article in this series will examine one of these conclusions in detail and describe how we arrived at each one. But first, let's set the scene.

An Initial Investigation

Since 2000, the Robotics Academy has hosted the FIRST LEGO League (FLL) Pittsburgh State Tournament. Last Fall, the Academy decided to see what it could learn by interviewing participating teams on the day of the competition.
Robotics Academy researcher Ross Higashi interviewed a sample of the more than 70 teams that competed in the 2009 state competition and put together two Robotics Academy Blog posts that identified who an FLL team is and the connections to Science, Technology, Engineering, and Mathematics that they make. Although praising the article overall, a comment to one of the posts challenged researchers to go into more depth:

If "a few highly successful teams have shown great adherence to principles of good design", can Carnegie-Mellon or FIRST make their stories, plans, approaches available more widely to the community? Otherwise, without good examples, it will remain hit-or-miss for the vast majority of the teams.
Richard Ho, comment posted January 30, 2010 on the Robotics Academy Blog

And so a followup plan was devised. Surely another round of interviews could find good examples from which the whole FLL community could benefit! The research team's first opportunity to conduct interviews was at a local competition called May Madness, on Saturday, May 8, 2010 at the Sarah Heinz House in Pittsburgh's North Side neighborhood. Although not as large as the FLL regional championship, the May Madness event attracts the same types of teams and uses similar challenges.

The Challenge

The 2010 May Madness competition included a number of different events, including separate challenges for different age divisions, different robot platforms such as VEX and TETRIX, and even a non-robotic Alice storytelling competition. To provide the most FLL-relevant information, the interview team focused on the "Botball Hybrid II" LEGO MINDSTORMS NXT challenge, geared toward elementary and middle school age students. Although not quite as complex as typical FLL challenges in terms of the number of missions or the variety of objects on the board, the Botball Hybrid II challenge includes a number of elements that require sophisticated solutions.
Two teams occupy the board at the same time, a black team and a white team. Each team can have one robot on the board at a time, and the teams start at opposite ends of the board. The object is to get the most points possible in a 90-second round. Points are obtained by collecting ping pong balls and toilet paper tubes of the team's color, and also common nests and foam balls. Knocking the ping pong balls loose gets some points, but the most points are obtained by bringing the objects back to a team's end zone. Even more points are obtained by lifting the objects into the gutters on the side of the table. See the gallery of images below for pictures of the game board, the items on the board and their specifications, a list of the rules of the challenge, and the points system.

The Findings and the Future

How did teams try to solve this challenge? Did math come into the picture at any point… and if so, did it help? Should coaches bother encouraging students to try using math in a challenge like this? Each remaining article in this series will focus on answering one of these questions. Part 2 of the series describes the range of strategies that teams employed, Part 3 details the winning strategy, and Part 4 discusses some alternative versions of success that were observed. As you read through the research team's findings and interpretations, please let us know what you think by leaving a comment on the blog! And if you are planning to attend this year's Pittsburgh State Tournament FLL Competition, the Academy would love to have your team be a part of the next round of investigation (send an email to Eli Silk if you are interested). We hope you find these articles helpful and wish you the best of luck in the upcoming competition season!
Y-coordinate of a centroid

June 6th 2008, 11:03 AM
Find the y-coordinate of the centroid of the semi-annular plane region given by $1 \le x^2 + y^2 \le 4$ and $y > 0$.

June 6th 2008, 11:28 AM
The mass of the plate is given by an integral, but you do not need calculus to find the mass; you can use geometry:

$m = \frac{1}{2}\pi (2)^2 - \frac{1}{2}\pi (1)^2 = \frac{3\pi}{2}$

The region lies between $f(x)=\sqrt{4-x^2}$ on $[-2,2]$ and $g(x)=\sqrt{1-x^2}$ on $[-1,1]$ (take $g(x)=0$ elsewhere). The moment about the x-axis is

$M_x = \frac{1}{2}\int_{-2}^{2}\left(f(x)^2 - g(x)^2\right)\,dx$

Just expand the squares and integrate term by term; the y-coordinate of the centroid is then $\bar{y} = \frac{M_x}{m}$.

I hope this helps. Good luck.
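A quick numerical check of this setup: with $\bar{y} = M_x/m$, where $M_x = \frac{1}{2}\int_{-2}^{2}(f(x)^2 - g(x)^2)\,dx$, a short midpoint-rule script confirms the closed-form answer $\bar{y} = 28/(9\pi)$.

```python
import math

# Numerical check for the centroid of the semi-annulus 1 <= x^2 + y^2 <= 4, y >= 0.
# Mass (area) by geometry: m = (1/2)*pi*2^2 - (1/2)*pi*1^2 = 3*pi/2.
# Moment about the x-axis: M_x = (1/2) * integral_{-2}^{2} (f(x)^2 - g(x)^2) dx,
# with f(x) = sqrt(4 - x^2), and g(x) = sqrt(1 - x^2) on [-1, 1] (0 elsewhere).
# The centroid's y-coordinate is y_bar = M_x / m.

def moment_x(n=20_000):
    """Midpoint-rule approximation of M_x."""
    a, b = -2.0, 2.0
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        f_sq = 4 - x * x
        g_sq = max(1 - x * x, 0.0)
        total += 0.5 * (f_sq - g_sq) * h
    return total

m = 1.5 * math.pi           # 3*pi/2 from geometry
y_bar = moment_x() / m
print(y_bar)                # numeric value
print(28 / (9 * math.pi))   # exact value, about 0.9903
```

The exact moment is $M_x = 14/3$, so $\bar{y} = (14/3)/(3\pi/2) = 28/(9\pi) \approx 0.99$.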
Advanced Engineering Mathematics by K. A. Stroud - Boomerang Books For every $20 you spend on books, you will receive $1 in Boomerang Bucks loyalty dollars. You can use your Boomerang Bucks as a credit towards a future purchase from Boomerang Books. Note that you must be a Member (free to sign up) and that conditions do apply.
Physics Forums - View Single Post - coordinate time and proper time.

If there are a finite number of clocks then the central clock is not arbitrary. If you have an infinite number of clocks, then the central clock is arbitrary.

I don't see how the number of clocks makes a difference. In the ballistic explosion the central non-moving clock is at the heart of the event, experiencing no velocity-based time dilation, giving it a unique quality that none of the others have. In the spatial expansion version, I acknowledge that no clock is unique and in fact suggest that all the clocks are synchronised, since none of them have an actual velocity. In the first case I simply put forward that the central clock is actually central, non-moving, and not in a gravitational field, making it an example of an otherwise hypothetical concept that cannot be found so easily in our actual universe. But then I put the same thing forward for the second example, this time saying that the clocks are representative of the hypothetical coordinate clock.
Rosales, Leobardo - Department of Mathematics, Rice University • Euler-Lagrange Equations Q0: Recall the DIVERGENCE THEOREM. Suppose is a function which • Tilt-Excess Decay Lemma For M an n-rectifiable set let Px = projTxM . Let P = projRn , Q = projRk in • Main Regularity Theorem Theorem 1 (Allard's Regularity Theorem) Suppose (0, 1). Then • The Single and Two-Valued Minimal Surface Equation • Leobardo Rosales Research Statement 1 My research interests are in geometric analysis, particularly geometric measure theory. My • MAIN Regularity Theorem Theorem 1 (ALLARD's Regularity Theorem) There are constants • First some preliminary definitions: 1. Embedded Surface: S is an embedded surface in R3 • Problems in Minimizing Functionals Our goal is minimizise functionals. Let Rn • Lipschitz Approximation Throughout, we will let M be an n-rectifiable set in Rn+k • Tilt-Excess Decay Theorem Theorem 1 For (0, 1), then there are constants c(n, k, ), 0(n, k, ) • Approximation By Harmonic Functions Theorem 1 (Lemma) Given > 0 there is a constant (n, ) > 0 such • Leobardo Rosales Teaching Statement 1 My teaching philosophy stems from the debt I owe to higher education with respect to the • ABSTRACT: Recently, the author gave a complete geometric description of solutions to the Two-Valued Minimal Surface Equation defined over the • For q 2, the q-valued minimal surface equation is a PDE producing solutions u0 C(D \ {0}) for D the open unit disk in R2 • Minimal Immersions with Prescribed Boundary • Discontinuous Solutions to the Two-Valued Minimal Surface Equation • Surfaces Minimizing Boundary-Weighted Area The goal of the Fall VIGRE Geometric Calculus of Variations group is to
Annu. Rev. Astron. Astrophys. 1991. 29: 325-362 Copyright © 1991 by . All rights reserved

The source term for Einstein's equations is any conserved stress tensor. If this stress tensor is isotropic with strength p = -ρc^2 = constant, its effect is equivalent to that of a cosmological constant Λ. Such a term was originally postulated by Einstein and has an interesting history (see e.g. 124). The value of such a constant is severely constrained by cosmological considerations. Since the energy density associated with Λ is ρ_Λ = Λc^2/(8πG), and this cannot greatly exceed the critical density ρ_c = 3H_0^2/(8πG), one can safely conclude (in spite of any astronomical uncertainties!) that Λ ≲ 3H_0^2/c^2.

As discussed below, this small value is a deep mystery. To begin with, nothing prevents the existence of a bare cosmological constant Λ_0 in Einstein's equations (as postulated by Einstein). This will make the gravitational part of the Lagrangian dependent on two fundamental constants, Λ_0 and G, which differ widely in scale; the dimensionless combination made out of these fundamental constants, (Għ/c^3)Λ_0, has a value less than 10^-126! The surprise is in the smallness of |Λ|.

Quantum field theory provides a wide variety of contributions to the vacuum energy. Two potentials that differ only by an additive constant — the constant term (µ^4/4λ) in a symmetry-breaking potential, for example — give the same field dynamics but not the same gravitational effects. At a phase transition V changes by an amount of order E^4, where E is the energy scale at which the phase transition occurs: at the GUTs transition, it is 10^56 (GeV)^4; at the Salam-Weinberg transition, it changes by 10^10 (GeV)^4. These are enormous numbers compared to the present value of 10^-47 (GeV)^4. How a physical quantity can change by such a large magnitude and finally adjust itself to be zero at such fantastic accuracy is not clear.

Finally, one should not forget that the "zero-point energy" of quantum fields will also contribute to gravity (91, 131). Each degree of freedom contributes an amount of order k_max^4, where k_max is an ultraviolet cut-off. If we take general relativity to be valid up to Planck energies, then we may take k_max ≃ 10^19 GeV, giving a contribution ≃ 10^76 (GeV)^4.
If we assume that all the contributions are indeed there, then they have to be fine-tuned to cancel each other, for no good reason. Before the entry of GUTs into cosmology, we needed to worry only about the first and last contribution, both of which could be tackled in an ad hoc manner. One arbitrarily sets Λ_0 = 0 in the Lagrangian defining gravity and tries to remove the zero-point contribution by complicated regularization schemes. (Neither argument is completely water-tight but both seem plausible.) With the introduction of GUTs and inflationary scenarios, however, the cosmological constant becomes a dynamical entity and the situation becomes more serious. Notice that it is precisely the large change in V(φ) at these phase transitions that makes the cosmological constant a dynamical quantity.

Several mechanisms have been suggested in the literature to make the cosmological constant zero: supersymmetry (26, 79, 80, 135), complicated dynamical mechanisms (37, 97, 110, F. Wilczek and A. Zee, unpublished), probabilistic arguments from quantum gravity (12, 24, 53, 89), and the anthropic principle (123) are only a few of them. None of these seems to provide an entirely satisfactory solution. A somewhat different approach, in which scale invariance was used to set Λ = 0, can be found in (61). The smallness of the cosmological constant is probably the most important single problem that needs to be settled in cosmology. We have no idea as to what this mechanism is, but if it is based on some general symmetry consideration, it may demand vanishing of Λ.
Two Mark Questions With Answers
Design and Analysis of Algorithms CS1201
2 Marks Questions and Answers

1. Why is there a need to study algorithms?
From a practical standpoint, a standard set of algorithms from different areas of computing must be known, in addition to being able to design them and analyze their efficiencies. From a theoretical standpoint, the study of algorithms is the cornerstone of computer science.

2. What is algorithmics?
The study of algorithms is called algorithmics. It is more than a branch of computer science. It is the core of computer science and is said to be relevant to most of science, business and technology.

3. What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.

4. Give the diagram representation of the notion of algorithm.
(Diagram: a problem is solved by an algorithm, which a computer executes to map an input to an output.)

5. What is the formula used in Euclid's algorithm for finding the greatest common divisor of two numbers?
Euclid's algorithm is based on repeatedly applying the equality
gcd(m, n) = gcd(n, m mod n)
until m mod n is equal to 0; since gcd(m, 0) = m, the last value of m is the greatest common divisor.

6. What are the three different algorithms used to find the gcd of two numbers?
The three algorithms used to find the gcd of two numbers are
• Euclid's algorithm
• Consecutive integer checking algorithm
• Middle school procedure

7. What are the fundamental steps involved in algorithmic problem solving?
The fundamental steps are
• Understanding the problem
• Ascertaining the capabilities of the computational device
• Choosing between exact and approximate problem solving
• Deciding on appropriate data structures
• Algorithm design techniques
• Methods for specifying the algorithm
• Proving an algorithm's correctness
• Analyzing an algorithm
• Coding an algorithm

8. What is an algorithm design technique?
An algorithm design technique is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.

9. What is pseudocode?
A pseudocode is a mixture of a natural language and programming language constructs used to specify an algorithm. A pseudocode is more precise than a natural language, and its usage often yields more concise algorithm descriptions.

10. What are the types of algorithm efficiencies?
The two types of algorithm efficiencies are
• Time efficiency: indicates how fast the algorithm runs
• Space efficiency: indicates how much extra memory the algorithm needs

11. Mention some of the important problem types.
Some of the important problem types are as follows:
• Sorting
• Searching
• String processing
• Graph problems
• Combinatorial problems
• Geometric problems
• Numerical problems

12. What are the classical geometric problems?
The two classic geometric problems are
• The closest pair problem: given n points in a plane, find the closest pair among them
• The convex hull problem: find the smallest convex polygon that would include all the points of a given set

16. Which functions are exponential growth functions?
The functions 2^n and n! are exponential growth functions, because these two functions grow so fast that their values become astronomically large even for rather small values of n.

17. What is worst-case efficiency?
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input or inputs of size n for which the algorithm runs the longest among all possible inputs of that size.

18. What is best-case efficiency?
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input or inputs for which the algorithm runs the fastest among all possible inputs of that size.

19. What is average case efficiency?
The average case efficiency of an algorithm is its efficiency for an average case input of size n.
It provides information about an algorithm's behavior on a "typical" or "random" input.

20. What is amortized efficiency?
In some situations a single operation can be expensive, but the total time for the entire sequence of n such operations is always significantly better than the worst-case efficiency of that single operation multiplied by n. This is called amortized efficiency.

21. Define O-notation.
A function t(n) is said to be in O(g(n)), denoted by t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0

22. Define Ω-notation.
A function t(n) is said to be in Ω(g(n)), denoted by t(n) ∈ Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0

23. Define Θ-notation.
A function t(n) is said to be in Θ(g(n)), denoted by t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0

24. Mention the useful property which can be applied to the asymptotic notations and its use.
If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). This property is also true for the Ω and Θ notations. It will be useful in analyzing algorithms that comprise two consecutive executable parts.

25. What are the basic asymptotic efficiency classes?
The various basic efficiency classes are
• Constant: 1
• Logarithmic: log n
• Linear: n
• N-log-n: n log n
• Quadratic: n^2
• Cubic: n^3
• Exponential: 2^n
• Factorial: n!

26. Give a non-recursive algorithm to find the largest element in a list of n numbers.

ALGORITHM MaxElement(A[0..n-1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n-1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for I ← 1 to n-1 do
Give an non-recursive algorithm to find out the largest element in a list of n ALGORITHM MaxElement(A[0..n-1]) //Determines the value of the largest element in a given array //Input:An array A[0..n-1] of real numbers //Output: The value of the largest element in A maxval Å a[0] for I Å 1 to n-1 do Two Mark Questions With Answers Design and Analysis of Algorithms CS1201 5 if A[I] > maxval maxval Å A[I] return maxval 27. What are the two basic rules for sum manipulation? The two basic rules for sum manipulation are i l cai = c å i l i l (ai ± bi) = å i l ai ± å i l 28. Mention the two summation formulas? The two summation formulas are i l 1 = u-l+1 where l £ u are some lower and upper integer limits = å = 1+2+3+…+n = n(n+1)/2 » n2/2 Î Q (n2) 29. Write the general plan for analyzing the efficiency for non-recursive The various steps include . Decide on a parameter indicating input’s size. . Identify the algorithms basic operation. . Check whether the number of times the basic operation is executed depends on size of input. If it depends on some additional property the worst, average and best-case efficiencies have to be investigated separately. . Set up a sum expressing the number of times the algorithm’s basic operation is executed. . Using standard formulas and rules of manipulation, find a closed-form formula for the count or at least establish its order of growth. 30. Give a non-recursive algorithm for element uniqueness problem. ALGORITHM UniqueElements(A[0..n-1]) //Checks whether all the elements in a given array are distinct //Input :An array A[0..n-1] //Output Returns ‘true’ if all elements in A are distinct and ‘false’ for I Å to n-2 do for j Å I+1 to n-1 do if A[I] = A[j] return false return true 31. Mention the non-recursive algorithm for matrix multiplication? 
ALGORITHM MatrixMultiplication(A[0..n-1, 0..n-1], B[0..n-1, 0..n-1]) //Multiplies two square matrices of order n by the definition-based algorithm //Input: Two n-by-n matrices A and B //Output: Matrix C = AB for i ← 0 to n-1 do for j ← 0 to n-1 do C[i,j] ← 0.0 for k ← 0 to n-1 do C[i,j] ← C[i,j] + A[i,k]*B[k,j] return C 32. Write a non-recursive algorithm for finding the number of binary digits of a positive decimal integer. ALGORITHM Binary(n) //Input: A positive decimal integer n //Output: The number of binary digits in n's binary representation count ← 1 while n > 1 do count ← count + 1 n ← ⌊n/2⌋ return count 33. Write a recursive algorithm to find the nth factorial number. ALGORITHM F(n) //Computes n! recursively //Input: A non-negative integer n //Output: The value of n! if n = 0 return 1 else return F(n-1) * n 34. What is the recurrence relation for the number of multiplications, and the initial condition, for finding the nth factorial number? The recurrence relation and initial condition for the number of multiplications is M(n) = M(n-1) + 1 for n > 0, with M(0) = 0 35. Write the general plan for analyzing the efficiency of recursive algorithms. The various steps include . Decide on a parameter indicating the input's size. . Identify the algorithm's basic operation. . Check whether the number of times the basic operation is executed depends only on the size of the input. If it also depends on some additional property, the worst-, average- and best-case efficiencies have to be investigated separately. . Set up a recurrence relation, with the appropriate initial condition, for the number of times the basic operation is executed. . Solve the recurrence or at least ascertain the order of growth of its solution. 36. Write a recursive algorithm for solving the Tower of Hanoi problem. ALGORITHM . To move n > 1 disks from peg1 to peg3, with peg2 as auxiliary, first move recursively n-1 disks from peg1 to peg2 with peg3 as auxiliary.
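The three pseudocode algorithms above (matrix multiplication, binary digit count, recursive factorial) can be sketched in Python as follows (names are my own):

```python
def matrix_multiplication(A, B):
    # Definition-based multiplication of two n-by-n matrices: Theta(n^3).
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def binary_digits(n):
    # Number of bits in n's binary representation, n >= 1.
    count = 1
    while n > 1:
        count += 1
        n //= 2
    return count

def factorial(n):
    # Recursive n!; M(n) = M(n-1) + 1 multiplications, M(0) = 0.
    return 1 if n == 0 else factorial(n - 1) * n
```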
. Then move the largest disk directly from peg1 to peg3. . Finally move recursively n-1 disks from peg2 to peg3 with peg1 as auxiliary. . If n = 1, simply move the single disk from the source peg to the destination peg. 37. What is the basic operation in the Tower of Hanoi problem and give the recurrence relation for the number of moves? Moving a disk is considered the basic operation in the Tower of Hanoi problem, and the recurrence relation for the number of moves is M(n) = 2M(n-1) + 1 for n > 1, with M(1) = 1 38. Write a recursive algorithm to find the number of binary digits in the binary representation of an integer. ALGORITHM BinRec(n) //Input: A positive decimal integer n //Output: The number of binary digits in n's binary representation if n = 1 return 1 else return BinRec(⌊n/2⌋) + 1 39. Who introduced the Fibonacci numbers and how can they be defined by a simple recurrence? Leonardo Fibonacci introduced the Fibonacci numbers in 1202 as a solution to a problem about the size of a rabbit population. They can be defined by the simple recurrence F(n) = F(n-1) + F(n-2) for n > 1 and two initial conditions F(0) = 0 and F(1) = 1 40. What is the explicit formula for the nth Fibonacci number? The formula for the nth Fibonacci number is given by F(n) = (1/√5)(φⁿ − φ̂ⁿ) where φ = (1+√5)/2 and φ̂ = (1−√5)/2 41. Write a recursive algorithm for computing the nth Fibonacci number? ALGORITHM F(n) //Computes the nth Fibonacci number recursively by using its definition //Input: A non-negative integer n //Output: The nth Fibonacci number if n ≤ 1 return n else return F(n-1) + F(n-2) 42. Write a non-recursive algorithm for computing the nth Fibonacci number. ALGORITHM Fib(n) //Computes the nth Fibonacci number iteratively by using its definition //Input: A non-negative integer n //Output: The nth Fibonacci number F[0] ← 0; F[1] ← 1 for i ← 2 to n do F[i] ← F[i-1] + F[i-2] return F[n] 43.
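As a sketch, the Tower of Hanoi recursion and BinRec above translate to Python like this (function names are my own; the move list makes it easy to verify that M(n) = 2ⁿ − 1):

```python
def tower_of_hanoi(n, source, dest, aux):
    # Returns the list of (from_peg, to_peg) moves for n disks.
    if n == 1:
        return [(source, dest)]
    return (tower_of_hanoi(n - 1, source, aux, dest)   # n-1 disks to auxiliary
            + [(source, dest)]                          # largest disk directly
            + tower_of_hanoi(n - 1, aux, dest, source)) # n-1 disks onto it

def bin_rec(n):
    # Number of binary digits of a positive integer, recursively.
    return 1 if n == 1 else bin_rec(n // 2) + 1
```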
What is the Θ(log n) algorithm for computing the nth Fibonacci number based on? There exists a Θ(log n) algorithm for computing the nth Fibonacci number that manipulates only integers. It is based on the equality
[ F(n+1)  F(n)   ]     [ 1  1 ]ⁿ
[ F(n)    F(n-1) ]  =  [ 1  0 ]     for n ≥ 1
together with an efficient way of computing matrix powers. 44. What is the general plan for the empirical analysis of algorithm efficiency? The general plan is as follows . Understand the experiment's purpose. . Decide on the efficiency metric M to be measured and the measurement unit. . Decide on characteristics of the input sample. . Prepare a program implementing the algorithm for the experimentation. . Generate a sample of inputs. . Run the algorithm on the sample inputs and record the data observed. . Analyze the data obtained. 45. Give the various system commands used for timing the program implementing the algorithm. The system commands are as follows UNIX: the time command C & C++: the function clock Java: the method System.currentTimeMillis() 48. What is algorithm visualization? Algorithm visualization is a way to study algorithms. It is defined as the use of images to convey some useful information about algorithms. That information can be a visual illustration of an algorithm's operation, of its performance on different kinds of inputs, or of its execution speed versus that of other algorithms for the same problem. 49. What are the two variations of algorithm visualization? The two principal variations of algorithm visualization are . Static algorithm visualization: it shows the algorithm's progress through a series of still images. . Dynamic algorithm visualization (algorithm animation): it shows a continuous movie-like presentation of the algorithm's operations. 50. What are the features that are desired in algorithm animation? Peter Gloor, who was the principal developer of Animated Algorithms, suggested the following desirable features . Be consistent . Be interactive . Be clear and concise . Be forgiving to the user
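The matrix identity above yields the Θ(log n) algorithm by repeated squaring. A minimal Python sketch (names are my own; fib(n) reads F(n) off entry [0][1] of [[1,1],[1,0]]ⁿ):

```python
def mat_mult(X, Y):
    # 2x2 integer matrix product.
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def fib(n):
    # Computes F(n) with Theta(log n) matrix multiplications via repeated squaring.
    if n == 0:
        return 0
    result = [[1, 0], [0, 1]]   # identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n //= 2
    return result[0][1]
```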
. Adapt to the knowledge level of the user . Emphasize the visual component . Keep the user interested . Incorporate both symbolic and iconic representations . Include the algorithm's analysis and comparisons with other algorithms for the same problem . Include execution history 51. What are the applications of algorithm visualization? The two applications are . Research: based on expectations that algorithm visualization may help uncover some unknown feature of the algorithm. . Education: seeks to help students learning algorithms. 52. Define Brute force approach? Brute force is a straightforward approach to solving a problem, usually directly based on the problem's statement and definitions of the concepts involved. The brute force approach is the one easiest to apply. 53. What are the advantages of brute force technique? The various advantages of the brute force technique are . Brute force is applicable to a very wide variety of problems. It is used for many elementary but important algorithmic tasks. . For some important problems this approach yields reasonable algorithms of at least some practical value with no limitation on instance size. . The expense of designing a more efficient algorithm may be unjustifiable if only a few instances of the problem need to be solved. 55. Give the algorithm for selection sort. ALGORITHM SelectionSort(A[0..n-1]) for i ← 0 to n-2 do min ← i for j ← i+1 to n-1 do if A[j] < A[min] min ← j swap A[i] and A[min] 56. What is bubble sort? Another brute force approach to sorting is to compare adjacent elements of the list and exchange them if they are out of order, so we end up "bubbling up" the largest element to the last position in the list. The next pass bubbles up the second largest element, and so on, until after n-1 passes the list is sorted. Pass i (0 ≤ i ≤ n-2) can be represented as follows A[0], …, A[j] ↔ A[j+1], …, A[n-i-1] | A[n-i] ≤ … ≤ A[n-1] where the elements after the bar are already in their final positions. 57. Give an algorithm for bubble sort?
ALGORITHM BubbleSort(A[0..n-1]) //The algorithm sorts array A[0..n-1] by bubble sort //Input: An array A[0..n-1] of orderable elements //Output: Array A[0..n-1] sorted in ascending order for i ← 0 to n-2 do for j ← 0 to n-2-i do if A[j+1] < A[j] swap A[j] and A[j+1] 58. Give the benefit of application of the brute force technique to solve a problem. With the application of the brute force strategy, the first version of an algorithm obtained can often be improved with a modest amount of effort. 59. Explain the enhanced version of sequential search. Sequential search simply compares successive elements of a list with a given search key until either a match is encountered or the list is exhausted without finding a match. The enhancement in this version is to append the search key to the end of the list; then the search for the key will have to be successful, so we can eliminate a check for the list's end on each iteration. 60. Give the algorithm for the enhanced version of sequential search. ALGORITHM SequentialSearch2(A[0..n-1], K) //The algorithm implements sequential search with the search key as a sentinel //Input: An array A of n elements and a search key K //Output: The position of the first element in A[0..n-1] whose value is equal to K, or //-1 if no such element is found A[n] ← K i ← 0 while A[i] ≠ K do i ← i+1 if i < n return i else return -1 61. What is the improvement that can be applied to sequential search if the list is sorted? The straightforward improvement that can be incorporated in sequential search, if a given list is known to be sorted, is that searching in the list can be stopped as soon as an element greater than or equal to the search key is encountered. 62. Define brute force string matching.
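For reference, the bubble sort and sentinel sequential search above can be sketched in Python (names are my own; the sentinel is appended to a copy so the caller's list is untouched):

```python
def bubble_sort(a):
    # After pass i, the i+1 largest elements are in their final positions.
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j + 1] < a[j]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def sequential_search2(a, key):
    # Sentinel version: appending the key removes the end-of-list check.
    b = a + [key]
    i = 0
    while b[i] != key:
        i += 1
    return i if i < len(a) else -1
```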
In brute force string matching we are given a string of n characters called the text and a string of m characters called the pattern; the task is to find a substring of the text that matches the pattern, i.e., to find the index i of the leftmost character of the first matching substring in the text. 63. Mention the algorithm for brute force string matching. ALGORITHM BruteForceStringMatching(T[0..n-1], P[0..m-1]) //The algorithm implements brute force string matching //Input: An array T[0..n-1] of n characters representing the text //An array P[0..m-1] of m characters representing the pattern //Output: The position of the first character in the text that starts the first //matching substring if the search is successful, and -1 otherwise for i ← 0 to n-m do j ← 0 while j < m and P[j] = T[i+j] do j ← j+1 if j = m return i return -1 64. Give the general plan for divide-and-conquer algorithms. The general plan is as follows . A problem's instance is divided into several smaller instances of the same problem, ideally of about the same size. . The smaller instances are solved, typically recursively. . If necessary, the solutions obtained are combined to get the solution of the original problem. 65. State the Master theorem and its use. If f(n) ∈ Θ(n^d) where d ≥ 0 in the recurrence equation T(n) = aT(n/b) + f(n), then
T(n) ∈ Θ(n^d)           if a < b^d
T(n) ∈ Θ(n^d log n)     if a = b^d
T(n) ∈ Θ(n^(log_b a))   if a > b^d
The efficiency analysis of many divide-and-conquer algorithms is greatly simplified by the use of the Master theorem. 66. What is the general divide-and-conquer recurrence relation? An instance of size n can be divided into several instances of size n/b, with a of them needing to be solved. Assuming that size n is a power of b, to simplify the analysis, we get the following recurrence for the running time T(n) = aT(n/b) + f(n) where f(n) is a function that accounts for the time spent on dividing the problem into smaller ones and on combining their solutions. 67. Define mergesort.
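A direct Python sketch of the brute force string matcher above (function name is my own):

```python
def brute_force_string_match(t, p):
    # Slides the pattern along the text; O(nm) comparisons in the worst case.
    n, m = len(t), len(p)
    for i in range(n - m + 1):
        j = 0
        while j < m and p[j] == t[i + j]:
            j += 1
        if j == m:
            return i
    return -1
```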
Mergesort sorts a given array A[0..n-1] by dividing it into two halves A[0..(n/2)-1] and A[n/2..n-1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one. 68. Give the algorithm for mergesort. ALGORITHM Mergesort(A[0..n-1]) //Sorts an array A[0..n-1] by recursive mergesort //Input: An array A[0..n-1] of orderable elements //Output: Array A[0..n-1] sorted in nondecreasing order if n > 1 copy A[0..(n/2)-1] to B[0..(n/2)-1] copy A[(n/2)..n-1] to C[0..(n/2)-1] Mergesort(B) Mergesort(C) Merge(B, C, A) 69. Give the algorithm to merge two sorted arrays into one. ALGORITHM Merge(B[0..p-1], C[0..q-1], A[0..p+q-1]) //Merges two sorted arrays into one sorted array //Input: Arrays B[0..p-1] and C[0..q-1], both sorted //Output: Sorted array A[0..p+q-1] of the elements of B and C i ← 0; j ← 0; k ← 0 while i < p and j < q do if B[i] ≤ C[j] A[k] ← B[i]; i ← i+1 else A[k] ← C[j]; j ← j+1 k ← k+1 if i = p copy C[j..q-1] to A[k..p+q-1] else copy B[i..p-1] to A[k..p+q-1] 70. What is the difference between quicksort and mergesort? Both quicksort and mergesort use the divide-and-conquer technique in which the given array is partitioned into subarrays and solved. The difference lies in the way the arrays are partitioned: for mergesort the arrays are partitioned according to their position, and in quicksort they are partitioned according to the element values. 71. Give the algorithm for Quicksort. ALGORITHM Quicksort(A[l..r]) //Sorts a subarray by quicksort //Input: A subarray A[l..r] of A[0..n-1], defined by the left and right indices l and r //Output: The subarray A[l..r] sorted in nondecreasing order if l < r s ← Partition(A[l..r]) Quicksort(A[l..s-1]) Quicksort(A[s+1..r]) 72. Mention the algorithm used to partition an array for quicksort.
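The Mergesort and Merge pseudocode above can be sketched in Python as follows (names are my own; the array is sorted in place by writing merged values back into it):

```python
def merge(b, c, a):
    # Merges sorted lists b and c into a (which has length len(b) + len(c)).
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    # Copy the remaining tail of whichever list is not exhausted.
    if i == len(b):
        a[k:] = c[j:]
    else:
        a[k:] = b[i:]

def mergesort(a):
    if len(a) > 1:
        b = a[:len(a) // 2]   # first half
        c = a[len(a) // 2:]   # second half
        mergesort(b)
        mergesort(c)
        merge(b, c, a)
```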
ALGORITHM Partition(A[l..r]) //Partitions a subarray using the first element as the pivot //Input: A subarray A[l..r] of A[0..n-1] //Output: A partition of A[l..r], with the split position returned as the function's value p ← A[l] i ← l; j ← r+1 repeat repeat i ← i+1 until A[i] ≥ p repeat j ← j-1 until A[j] ≤ p swap(A[i], A[j]) until i ≥ j swap(A[i], A[j]) //undo the last swap when i ≥ j swap(A[l], A[j]) return j 73. What is binary search? Binary search is a remarkably efficient algorithm for searching in a sorted array. It works by comparing a search key K with the array's middle element A[m]. If they match, the algorithm stops; otherwise the same operation is repeated recursively for the first half of the array if K < A[m] and for the second half if K > A[m].
A[0] … A[m-1]   A[m]   A[m+1] … A[n-1]
(search here if K < A[m])    (search here if K > A[m])
74. What is a binary tree extension and what is its use? The binary tree extension can be drawn by replacing the empty subtrees by special nodes in a binary tree. The extra nodes, shown as little squares, are called external, and the original nodes, shown as little circles, are called internal. The extension of an empty binary tree is a single external node. The binary tree extension helps in the analysis of tree algorithms. 75. What are the classic traversals of a binary tree? The classic traversals are as follows . Preorder traversal: the root is visited before the left and right subtrees. . Inorder traversal: the root is visited after visiting the left subtree and before visiting the right subtree. . Postorder traversal: the root is visited after visiting the left and right subtrees. 76. Mention an algorithm to find out the height of a binary tree. ALGORITHM Height(T) //Computes recursively the height of a binary tree //Input: A binary tree T //Output: The height of T if T = ∅ return -1 else return max{Height(TL), Height(TR)} + 1 77. Draw the extension tree of the given binary tree. 78. What is the decrease and conquer approach and mention its variations?
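The binary search described above can be sketched iteratively in Python (function name is my own):

```python
def binary_search(a, k):
    # Searches sorted list a for key k; returns its index or -1.
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2
        if a[m] == k:
            return m
        elif k < a[m]:
            r = m - 1      # continue in the first half
        else:
            l = m + 1      # continue in the second half
    return -1
```

Each iteration halves the remaining range, giving the Θ(log n) worst-case number of comparisons.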
The decrease and conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem. The three major variations are . Decrease by a constant . Decrease by a constant factor . Variable size decrease 79. What is insertion sort? Insertion sort is an application of the decrease-by-one technique to sort an array A[0..n-1]. We assume that the smaller problem of sorting the array A[0..n-2] has already been solved to give us a sorted array of size n-1. Then an appropriate position for A[n-1] is found among the sorted elements and the element is inserted there. 80. Give the algorithm for insertion sort. ALGORITHM InsertionSort(A[0..n-1]) //Sorts a given array by insertion sort //Input: An array A[0..n-1] of n orderable elements //Output: Array A[0..n-1] sorted in non-decreasing order for i ← 1 to n-1 do v ← A[i] j ← i-1 while j ≥ 0 and A[j] > v do A[j+1] ← A[j] j ← j-1 A[j+1] ← v 81. What is a tree edge and back edge? In the depth-first search forest, whenever a new unvisited vertex is reached for the first time, it is attached as a child to the vertex from which it is being reached. Such an edge is called a tree edge because the set of all such edges forms a forest. The algorithm may also encounter an edge leading to a previously visited vertex other than its immediate predecessor. Such an edge is called a back edge because it connects a vertex to its ancestor, other than the parent, in the depth-first search forest. 82. What is a tree edge and cross edge? In the breadth-first search forest, whenever a new unvisited vertex is reached for the first time, the edge connecting it to its parent is called a tree edge. An edge leading to a previously visited vertex other than its immediate predecessor is called a cross edge. 84. What are the variations of transform and conquer? The variations are . Transformation to a simpler or more convenient instance of the same problem, called instance simplification. . Transformation to a different representation of the same instance, called representation change. . Transformation to an instance of a different problem for which an algorithm is already available, called problem reduction. 85. What is presorting? Presorting is the idea of sorting a list first so that a problem can be solved more easily than on an unsorted list.
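The insertion sort pseudocode above translates directly to Python (function name is my own):

```python
def insertion_sort(a):
    # Decrease-by-one: insert a[i] into the already-sorted prefix a[0..i-1].
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]   # shift larger elements right
            j -= 1
        a[j + 1] = v
    return a
```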
The time efficiency of algorithms that involve sorting before solving the problem depends on the sorting algorithm being used. 86. Give the algorithm for element uniqueness using presorting? ALGORITHM PresortElementUniqueness(A[0..n-1]) //Solves the element uniqueness problem by sorting the array first //Input: An array A[0..n-1] of orderable elements //Output: Returns "true" if A has no equal elements, "false" otherwise sort the array A for i ← 0 to n-2 do if A[i] = A[i+1] return false return true 87. Compare the efficiency of solving the element uniqueness problem using presorting and without sorting the elements of the array? The brute force algorithm compares pairs of the array's elements until either two equal elements are found or no more pairs are left. Its worst-case efficiency is in Θ(n²). The running time of the presorting-based algorithm depends on the time spent on sorting and the time spent on checking consecutive elements. The worst-case efficiency of the entire presorting-based algorithm is T(n) = Tsort(n) + Tscan(n) ∈ Θ(n log n) + Θ(n) = Θ(n log n) 88. Give the algorithm for computing the mode using presorting. ALGORITHM PresortMode(A[0..n-1]) //Computes the mode of an array by sorting it first //Input: An array A[0..n-1] of orderable elements //Output: The array's mode sort the array A i ← 0 modefrequency ← 0 while i ≤ n-1 do runlength ← 1; runvalue ← A[i] while i + runlength ≤ n-1 and A[i + runlength] = runvalue runlength ← runlength + 1 if runlength > modefrequency modefrequency ← runlength; modevalue ← runvalue i ← i + runlength return modevalue 89. Compare the efficiencies of the algorithms used to compute the mode before and after sorting the array of elements. The worst case for computing the mode without presorting is a list with no equal elements.
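The PresortMode pseudocode above can be sketched in Python (names are my own; sorted() stands in for the unspecified sorting step):

```python
def presort_mode(a):
    # Sort first, then scan runs of equal values; the longest run is the mode.
    a = sorted(a)
    i = 0
    mode_frequency = 0
    mode_value = None
    while i < len(a):
        run_length, run_value = 1, a[i]
        while i + run_length < len(a) and a[i + run_length] == run_value:
            run_length += 1
        if run_length > mode_frequency:
            mode_frequency, mode_value = run_length, run_value
        i += run_length
    return mode_value
```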
For such a list, the ith element is compared with the i-1 elements of the auxiliary list of distinct values seen so far, making the number of comparisons quadratic, Θ(n²), while the presorting-based algorithm's running time is dominated by the Θ(n log n) time spent on sorting. 91. Define a binary search tree. A binary search tree is a binary tree whose nodes contain elements of a set of orderable items, one element per node, so that all elements in the left subtree are smaller than the element in the subtree's root and all elements in the right subtree are greater than it. 92. What is a rotation in an AVL tree used for? If an insertion of a new node makes an AVL tree unbalanced, the tree is transformed by a rotation. A rotation in an AVL tree is a local transformation of its subtree rooted at a node whose balance has become either +2 or -2; if there are several such nodes, then the tree rooted at the unbalanced node that is closest to the newly inserted leaf is rotated. 93. What are the types of rotation? There are four types of rotations, two of which are the mirror images of the other two. The four rotations are . Single right rotation or R-rotation . Single left rotation or L-rotation . Double left-right rotation or LR-rotation . Double right-left rotation or RL-rotation 94. Write about the efficiency of AVL trees? As with any search tree, the critical characteristic is the tree's height. The tree's height is bounded above and below by logarithmic functions. The height h of any AVL tree with n nodes satisfies the inequalities ⌊log₂ n⌋ ≤ h < 1.4405 log₂(n+2) − 1.3277 The inequalities imply that the operations of searching and insertion are Θ(log n) in the worst case. The operation of key deletion in an AVL tree is more difficult than insertion, but it turns out to have the same efficiency class as insertion, i.e., logarithmic. 95. What are the drawbacks of AVL trees? The drawbacks of AVL trees are . Frequent rotations . The need to maintain balances for the tree's nodes . Overall complexity, especially of the deletion operation 96. What are 2-3 trees and who invented them? A 2-3 tree is a tree that can have nodes of two kinds: 2-nodes and 3-nodes.
A 2-node contains a single key K and has two children: the left child serves as the root of a subtree whose keys are less than K, and the right child serves as the root of a subtree with keys greater than K. A 3-node contains two ordered keys K1 and K2 (K1 < K2). The leftmost child serves as the root of a subtree with keys less than K1, the middle child serves as the root of a subtree with keys between K1 and K2, and the rightmost child serves as the root of a subtree with keys greater than K2. The last requirement of 2-3 trees is that all its leaves must be on the same level, so a 2-3 tree is always height balanced. 2-3 trees were introduced by John Hopcroft in 1970. 97. What is a heap? A heap is a partially ordered data structure, and can be defined as a binary tree with keys assigned to its nodes, one key per node, provided the following two conditions are met . The tree's shape requirement: the binary tree is essentially complete, that is, all the levels are full except possibly the last level, where only some rightmost leaves may be missing. . The parental dominance requirement: the key at each node is greater than or equal to the keys of its children. 98. What is the main use of a heap? Heaps are especially suitable for implementing priority queues. A priority queue is a set of items with an orderable characteristic called an item's priority, with the following operations . Finding an item with the highest priority . Deleting an item with the highest priority . Adding a new item to the set 99. Give three properties of heaps? The properties of a heap are . There exists exactly one essentially complete binary tree with n nodes. Its height is equal to ⌊log₂ n⌋. . The root of the heap is always the largest element. . A node of a heap considered with all its descendants is also a heap. 100. Give the main property of a heap that is implemented as an array.
A heap can be implemented as an array by recording its elements in the top-down, left-to-right fashion. It is convenient to store the heap's elements in positions 1 through n of such an array. In such a representation . The parental node keys will be in the first ⌊n/2⌋ positions of the array, while the leaf keys will occupy the last ⌈n/2⌉ positions. . The children of a key in the array's parental position i (1 ≤ i ≤ ⌊n/2⌋) will be in positions 2i and 2i+1, and correspondingly, the parent of the key in position i (2 ≤ i ≤ n) will be in position ⌊i/2⌋. 101. What are the two alternatives that are used to construct a heap? The two alternatives to construct a heap are . Bottom-up heap construction . Top-down heap construction 102. Give the pseudocode for bottom-up heap construction. ALGORITHM HeapBottomUp(H[1..n]) //Constructs a heap from the elements of a given array //Input: An array H[1..n] of orderable elements //Output: A heap H[1..n] for i ← ⌊n/2⌋ downto 1 do k ← i; v ← H[k] heap ← false while not heap and 2*k ≤ n do j ← 2*k if j < n if H[j] < H[j+1] j ← j+1 if v ≥ H[j] heap ← true else H[k] ← H[j]; k ← j H[k] ← v 103. What is the algorithm to delete the root's key from the heap? ALGORITHM . Exchange the root's key with the last key K of the heap. . Decrease the heap's size by one. . "Heapify" the smaller tree by sifting K down the tree exactly as in bottom-up heap construction: verify the parental dominance for K; if it holds, stop the process; if not, swap K with the larger of its children and repeat this operation until the parental dominance holds for K in its new position. 104. Who discovered heapsort and how does it work? Heapsort was discovered by J.W.J. Williams. It is a two-stage process that works as follows . Stage 1 (heap construction): construct a heap for a given array. . Stage 2 (maximum deletions): apply the root-deletion operation n-1 times to the remaining heap. 105.
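The bottom-up heap construction pseudocode above can be sketched in Python, using 1-based positions as in the array representation (h[0] is an unused placeholder; the function name is my own):

```python
def heap_bottom_up(h):
    # h[1..n] holds the keys; h[0] is unused to keep the 2k / 2k+1 index math.
    n = len(h) - 1
    for i in range(n // 2, 0, -1):
        k, v = i, h[i]
        heap = False
        while not heap and 2 * k <= n:
            j = 2 * k
            if j < n and h[j] < h[j + 1]:
                j += 1                 # pick the larger child
            if v >= h[j]:
                heap = True            # parental dominance holds for v
            else:
                h[k] = h[j]            # sift the larger child up
                k = j
        h[k] = v
    return h
```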
What is dynamic programming and who discovered it? Dynamic programming is a technique for solving problems with overlapping subproblems. These subproblems arise from a recurrence relating a solution to a given problem with solutions to its smaller subproblems. Rather than solving overlapping subproblems again and again, dynamic programming solves each subproblem only once and records the results in a table from which the solution to the original problem is obtained. It was invented by a prominent U.S. mathematician, Richard Bellman, in the 1950s. 106. Define transitive closure. The transitive closure of a directed graph with n vertices is defined as the n-by-n Boolean matrix T = {tij}, in which the element in the ith row (1 ≤ i ≤ n) and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path from the ith vertex to the jth vertex; otherwise, tij is 0. 107. What is the formula used by Warshall's algorithm? The formula for generating the elements of matrix R(k) from the matrix R(k-1) is rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1)) This formula implies the following rule for generating elements of matrix R(k) from the elements of matrix R(k-1) . If an element rij is 1 in R(k-1), it remains 1 in R(k). . If an element rij is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its row k and column j are both 1's in R(k-1). 108. Give Warshall's algorithm. ALGORITHM Warshall(A[1..n, 1..n]) //Implements Warshall's algorithm for computing the transitive closure //Input: The adjacency matrix A of a digraph with n vertices //Output: The transitive closure of the digraph R(0) ← A for k ← 1 to n do for i ← 1 to n do for j ← 1 to n do R(k)[i,j] ← R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j]) return R(n) 109.
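Warshall's algorithm can be sketched in Python, updating a single matrix in place (which is safe here, since a 1 never reverts to 0; the function name is my own):

```python
def warshall(adj):
    # adj is an n-by-n 0/1 adjacency matrix; returns the transitive closure.
    n = len(adj)
    r = [row[:] for row in adj]      # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r
```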
Give the Floyd’s algorithm ALGORITHM Floyd(W[1..n,1..n]) //Implements Floyd’s algorithm for the all-pair shortest–path problem //Input The weight matrix W of a graph //Output The distance matrix of the shortest paths’ lengths D Å W for k Å 1 to n do for i Å 1 to n do for j Å 1 to n do D[I,j] Å min{D[I,j], D[I,k] + D[k,j]} Two Mark Questions With Answers Design and Analysis of Algorithms CS1201 21 return D 110. How many binary search trees can be formed with ‘n’ keys? The total number of binary search trees with ‘n’ keys is equal to the nth Catalan number c(n) = ÷ ÷ ç çè n + for n >0, c(0) = 1, which grows to infinity as fast as 4n/n1.5. 111. Give the algorithm used to find the optimal binary search tree. ALGORITHM OptimalBST(P[1..n]) //Finds an optimal binary search tree by dynamic programming //Input An array P[1..n] of search probabilities for a sorted list of ‘n’ keys //Output Average number of comparisons in successful searches in the optimal //BST and table R of subtrees’ roots in the optimal BST for I Å 1 to n do C[I,I-1] Å 0 C[I,I] Å P[I] R[I,I] Å I C[n+1,n] Å 0 for d Å 1 to n-1 do for i Å 1 to n-d do j Å i +d minval Å ¥ for k Å I to j do if C[I,k-1]+C[k+1,j] < minval minval Å C[I,k-1]+C[k+1,j]; kmin Å k R[I,j] Å k Sum ÅP[I]; for s Å I+1 to j do sum Å sum + P[s] C[I,j] Å minval+sum Return C[1,n], R 112. What is greedy technique? Greedy technique suggests a greedy grab of the best alternative available in the hope that a sequence of locally optimal choices will yield a globally optimal solution to the entire problem. The choice must be made as follows . Feasible : It has to satisfy the problem’s constraints . Locally optimal : It has to be the best local choice among all feasible choices available on that step. . Irrevocable : Once made, it cannot be changed on a subsequent step of the algorithm 113. Mention the algorithm for Prim’s algorithm. 
ALGORITHM Prim(G) //Prim's algorithm for constructing a minimum spanning tree //Input: A weighted connected graph G = (V, E) //Output: ET, the set of edges composing a minimum spanning tree of G VT ← {v0} ET ← ∅ for i ← 1 to |V|-1 do find the minimum-weight edge e* = (v*, u*) among all the edges (v, u) such that v is in VT and u is in V-VT VT ← VT ∪ {u*} ET ← ET ∪ {e*} return ET 114. What are the labels in Prim's algorithm used for? Prim's algorithm makes it necessary to provide each vertex not in the current tree with information about the shortest edge connecting the vertex to a tree vertex. The information is provided by attaching two labels to a vertex . The name of the nearest tree vertex . The length (weight) of the corresponding edge 115. How are the vertices not in the tree split? The vertices not in the tree are split into two sets: the fringe, which contains the vertices adjacent to at least one tree vertex, and the unseen vertices, which are all the other vertices of the graph. 118. What is Kruskal's algorithm and who discovered it? Kruskal's algorithm is one of the greedy techniques to solve the minimum spanning tree problem. It was discovered by Joseph Kruskal when he was a second-year graduate student. 119. Give Kruskal's algorithm. ALGORITHM Kruskal(G) //Kruskal's algorithm for constructing a minimum spanning tree //Input: A weighted connected graph G = (V, E) //Output: ET, the set of edges composing a minimum spanning tree of G sort E in nondecreasing order of the edge weights w(ei1) ≤ … ≤ w(ei|E|) ET ← ∅ ecounter ← 0 k ← 0 while ecounter < |V|-1 k ← k+1 if ET ∪ {eik} is acyclic ET ← ET ∪ {eik}; ecounter ← ecounter + 1 return ET 120. What is a subset's representative? One element from each of the disjoint subsets in a collection is used as the subset's representative. Some implementations do not impose any specific constraints on such a representative; others do so by requiring the smallest element of each subset to be used as the subset's representative. 121. What is the use of Dijkstra's algorithm?
Dijkstra's algorithm is used to solve the single-source shortest-paths problem: for a given vertex called the source in a weighted connected graph, find the shortest paths to all its other vertices. The single-source shortest-paths problem asks for a family of paths, each leading from the source to a different vertex in the graph, though some paths may have edges in common. 122. What is encoding and mention its types? Encoding is the process in which a text of n characters from some alphabet is assigned a sequence of bits called codewords. There are two types of encoding . Fixed-length encoding . Variable-length encoding 123. What is the problem faced by variable-length encoding and how can it be avoided? Variable-length encoding, which assigns codewords of different lengths to different characters, introduces the problem of identifying how many bits of an encoded text represent the first character, or, more generally, the ith character. To avoid this, prefix-free codes (or simply prefix codes) are used. In prefix codes, no codeword is a prefix of the codeword of another character. 124. Mention Huffman's algorithm. ALGORITHM Huffman . Initialize n one-node trees and label them with the characters of the alphabet. Record the frequency of each character in its tree's root to indicate the tree's weight. . Repeat the following operation until a single tree is obtained: find two trees with the smallest weights, make them the left and right subtrees of a new tree, and record the sum of their weights in the root of the new tree as its weight. 126. What is a state-space tree? The processing in backtracking is implemented by constructing a tree of choices being made. This is called the state-space tree. Its root represents an initial state before the search for a solution begins. The nodes of the first level in the tree represent the choices made for the first component of a solution, the nodes of the second level represent the choices for the second component, and so on. 127. What is a promising node in the state-space tree? A node in a state-space tree is said to be promising if it corresponds to a partially constructed solution that may still lead to a complete solution. 128.
What is a non-promising node in the state-space tree?
A node in a state-space tree is said to be promising if it corresponds to a partially constructed solution that may still lead to a complete solution; otherwise it is called non-promising.

129. What do leaves in the state space tree represent?
Leaves in the state-space tree represent either non-promising dead ends or complete solutions found by the algorithm.

130. What is the manner in which the state-space tree for a backtracking algorithm is constructed?
In the majority of cases, a state-space tree for a backtracking algorithm is constructed in the manner of depth-first search. If the current node is promising, its child is generated by adding the first remaining legitimate option for the next component of a solution, and the processing moves to this child. If the current node turns out to be non-promising, the algorithm backtracks to the node's parent to consider the next possible option for its last component. Finally, if the algorithm reaches a complete solution to the problem, it either stops (if just one solution is required) or continues searching for other possible solutions.

131. What is n-queens problem?
The problem is to place 'n' queens on an n-by-n chessboard so that no two queens attack each other by being in the same row or in the same column or in the same diagonal.

132. Draw the solution for the 4-queen problem.

133. Define the Hamiltonian circuit.
The Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once. It is named after the Irish mathematician Sir William Rowan Hamilton (1805-1865). It is a sequence of n+1 adjacent vertices vi0, vi1, ……, vi(n-1), vi0, where the first vertex of the sequence is the same as the last one while all the other n-1 vertices are distinct.

134. What is the subset-sum problem?
Find a subset of a given set S = {s1, ………, sn} of 'n' positive integers whose sum is equal to a given positive integer 'd'.

135. When can a node be terminated in the subset-sum problem?
The sum of the numbers already included is recorded as the value s' of the node. The node can be terminated as a non-promising node if either of the two inequalities holds:
s' + s(i+1) > d (the sum s' is too large)
s' + Σ(j=i+1..n) sj < d (the sum s' is too small)

136. How can the output of a backtracking algorithm be thought of?
The output of a backtracking algorithm can be thought of as an n-tuple (x1, …, xn) where each coordinate xi is an element of some finite linearly ordered set Si. If such a tuple (x1, …, xi) is not a solution, the algorithm finds the next element in S(i+1) that is consistent with the values of (x1, …, xi) and the problem's constraints and adds it to the tuple as its (i+1)st coordinate. If such an element does not exist, the algorithm backtracks to consider the next value of xi, and so on.

137. Give a template for a generic backtracking algorithm.
ALGORITHM Backtrack(X[1..i])
//Gives a template of a generic backtracking algorithm
//Input: X[1..i] specifies the first i promising components of a solution
//Output: All the tuples representing the problem's solution
if X[1..i] is a solution write X[1..i]
else
    for each element x ∈ S(i+1) consistent with X[1..i] and the constraints do
        X[i+1] ← x
        Backtrack(X[1..i+1])

138. What are the tricks used to reduce the size of the state-space tree?
The various tricks are
. Exploit the symmetry often present in combinatorial problems. So some solutions can be obtained by the reflection of others. This cuts the size of the tree by about half.
. Preassign values to one or more components of a solution.
. Rearrange the data of a given instance.

139. What is the method used to find the solution in the n-queens problem by symmetry?
The board of the n-queens problem has several symmetries so that some solutions can be obtained from others by reflection.
Placements in the last n/2 columns need not be considered, because any solution with the first queen in square (1, i), n/2 ≤ i ≤ n, can be obtained by reflection from a solution with the first queen in square (1, n-i+1).

140. What are the additional features required in branch-and-bound when compared to backtracking?
Compared to backtracking, branch-and-bound requires:
. A way to provide, for every node of a state-space tree, a bound on the best value of the objective function on any solution that can be obtained by adding further components to the partial solution represented by the node.
. The value of the best solution seen so far.

141. What is a feasible solution and what is an optimal solution?
In optimization problems, a feasible solution is a point in the problem's search space that satisfies all the problem's constraints, while an optimal solution is a feasible solution with the best value of the objective function.

142. When can a search path be terminated in a branch-and-bound algorithm?
A search path at the current node in a state-space tree of a branch-and-bound algorithm can be terminated if
. The value of the node's bound is not better than the value of the best solution seen so far.
. The node represents no feasible solution because the constraints of the problem are already violated.
. The subset of feasible solutions represented by the node consists of a single point; in this case, compare the value of the objective function for this feasible solution with that of the best solution seen so far and update the latter with the former if the new solution is better.

143. Compare backtracking and branch-and-bound.
Backtracking: the state-space tree is constructed using depth-first search; it finds solutions for combinatorial non-optimization problems; no bounds are associated with the nodes in the state-space tree.
Branch-and-bound: the state-space tree is constructed using best-first search; it finds solutions for combinatorial optimization problems; bounds are associated with every node in the state-space tree.

144. What is the assignment problem?
Assigning 'n' people to 'n' jobs so that the total cost of the assignment is as small as possible. The instance of the problem is specified as an n-by-n cost matrix C so that the problem can be stated as: select one element in each row of the matrix so that no two selected elements are in the same column and their sum is the smallest possible.

145. What is best-first branch-and-bound?
It is sensible to consider a node with the best bound as the most promising, although this does not preclude the possibility that an optimal solution will ultimately belong to a different branch of the state-space tree. This strategy is called best-first branch-and-bound.

146. What is knapsack problem?
Given n items of known weights wi and values vi, i = 1, 2, …, n, and a knapsack of capacity W, find the most valuable subset of the items that fit in the knapsack. It is convenient to order the items of a given instance in descending order by their value-to-weight ratios. Then the first item gives the best payoff per weight unit and the last one gives the worst payoff per weight unit.

147. Give the formula used to find the upper bound for the knapsack problem.
A simple way to find the upper bound 'ub' is to add to 'v', the total value of the items already selected, the product of the remaining capacity of the knapsack W-w and the best per-unit payoff among the remaining items, which is v(i+1)/w(i+1):
ub = v + (W-w)(v(i+1)/w(i+1))

148. What is the traveling salesman problem?
The problem can be modeled as a weighted graph, with the graph's vertices representing the cities and the edge weights specifying the distances. Then the problem can be stated as finding the shortest Hamiltonian circuit of the graph, where the Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once.

149. What are the strengths of backtracking and branch-and-bound?
The strengths are as follows:
. They are typically applied to difficult combinatorial problems for which no efficient algorithm for finding an exact solution is known to exist.
. They hold hope for solving some instances of nontrivial sizes in an acceptable amount of time.
. Even if they do not eliminate any elements of a problem's state space and end up generating all its elements, they provide a specific technique for doing so, which can be of some value.
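For concreteness, the Prim pseudocode and the two labels of Q114 can be sketched as runnable code. This is an illustrative Python version (the sample graph `g`, the function name `prim`, and the heap-based bookkeeping are my own choices, not part of the question bank); the min-heap entries `(weight, tree_vertex, fringe_vertex)` play the role of the "nearest tree vertex" and "length of the corresponding edge" labels.

```python
import heapq

def prim(graph, start):
    """Prim's algorithm. `graph` maps each vertex to a dict {neighbor: weight}."""
    in_tree = {start}
    tree_edges = []
    # Fringe labels for neighbors of the start vertex: (edge length, tree vertex, fringe vertex)
    fringe = [(w, start, u) for u, w in graph[start].items()]
    heapq.heapify(fringe)
    while fringe and len(in_tree) < len(graph):
        w, v, u = heapq.heappop(fringe)   # minimum-weight edge e* = (v*, u*)
        if u in in_tree:
            continue                      # stale label: u already joined the tree
        in_tree.add(u)
        tree_edges.append((v, u, w))
        for nbr, w2 in graph[u].items():  # relabel the new fringe vertices
            if nbr not in in_tree:
                heapq.heappush(fringe, (w2, u, nbr))
    return tree_edges

# A small made-up weighted connected graph.
g = {"a": {"b": 3, "c": 1}, "b": {"a": 3, "c": 2, "d": 4},
     "c": {"a": 1, "b": 2, "d": 5}, "d": {"b": 4, "c": 5}}
mst = prim(g, "a")
```

On this graph the tree edges come out as a-c (1), c-b (2), b-d (4), total weight 7.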
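The generic backtracking template of Q137 can likewise be instantiated on the n-queens problem of Q131. This is an illustrative Python sketch (the helper names `promising` and `backtrack` follow the terminology of Q127 and Q137 but are my own code, not from the question bank); `X[i]` holds the column of the queen placed in row `i`.

```python
def solve_n_queens(n):
    """Return all placements of n non-attacking queens, one per row."""
    solutions = []

    def promising(X, col):
        # Promising (Q127): the new queen in row len(X) shares no column
        # and no diagonal with any queen already placed.
        for row, c in enumerate(X):
            if c == col or abs(c - col) == len(X) - row:
                return False
        return True

    def backtrack(X):
        if len(X) == n:                 # X[1..n] is a complete solution
            solutions.append(tuple(X))
            return
        for col in range(n):            # each element consistent with X[1..i]
            if promising(X, col):
                backtrack(X + [col])    # extend by the (i+1)st coordinate

    backtrack([])
    return solutions
```

For n = 4 this finds the two solutions asked about in Q132; for n = 3 it correctly finds none.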
What is the shape of the convex $n$-gon which gives the maximum of a function?

Supposing that the length of every edge of the convex $n$-gon $P_1P_2\cdots P_n$ is 1, what is the shape of the $n$-gon which gives the maximum of the following function $B_n$?
$$B_n=\sum_{1\le i<j\le n}|P_iP_j|^2$$
Here, $|P_iP_j|$ is the Euclidean length of the line segment from $P_i$ to $P_j$. I've already proved that $B_5$ reaches the maximum only if the pentagon is a regular pentagon, but I don't have any good idea for $n\ge6$. I need your help. This question (let's call this B) is related to the following question (let's call this A), which has been asked previously on mathoverflow without receiving any answers.

The question A: What is the shape of the $n$-gon $P_1P_2\cdots P_n$ which gives the maximum of $A_n$? The quantity $A_n$ is defined by
$$A_n=\frac{\sum_{i<j\le n}|P_iP_j|^2-\sum_i^n|P_iP_{i+1}|^2}{\sum_i^n|P_iP_{i+1}|^2}$$
Here, $|P_iP_j|$ is the Euclidean length of the line segment from $P_i$ to $P_j$. Note that $P_{n+1}=P_1$ and that the $n$-gon can be either convex or concave. I'm interested in $A_n$ because I knew the fact that the maximum of $A_4$ is $1$ only if $P_1P_2P_3P_4$ is a parallelogram. I've already proved the case $n=4$, but I don't have any good idea for $n\ge5$. Could you tell me how to solve this problem?

I think I can prove that $A_5$ reaches the max only if the pentagon is a regular pentagon. Please judge whether my proof is correct or not. I use a transforming operation through which the value of $A_5$ doesn't change or increases. Here is the operation, which consists of the following steps. First, find the two adjacent edges which have the largest value of $\left||P_iP_{i+1}|-|P_{i+1}P_{i+2}|\right|$, say $|P_1P_2|\ge|P_2P_3|$. Next, draw the circle $C_{13}$, whose center is the midpoint $M_{13}$ of the line segment $P_1P_3$ and whose radius is the length of the line segment $M_{13}P_2$.
Next, draw the perpendicular bisector $L_{13}$ of the line segment $P_1P_3$. The final step is to determine an intersection $Q_2$ where the line $L_{13}$ crosses the circle $C_{13}$. This is the transforming operation. The point $Q_2$ is outside of the circle $C_{45}$, whose center is the midpoint $M_{45}$ of the edge $P_4P_5$ and whose radius is the length of the line segment $M_{45}P_2$. This means the following two:
$$|P_2P_1|^2+|P_2P_3|^2=|Q_2P_1|^2+|Q_2P_3|^2$$
and
$$|P_2P_5|^2+|P_2P_4|^2\le|Q_2P_5|^2+|Q_2P_4|^2$$
Therefore, the $A_5$ for the pentagon $P_1Q_2P_3P_4P_5$ is larger than, or equal to, the $A_5$ for the pentagon $P_1P_2P_3P_4P_5$. Note that the lengths of the other line segments are unchanged. By applying this transforming operation repeatedly, you get a pentagon whose edges all have the same length. Without loss of generality, suppose that the edges all have length $1$. Then $A_5=2-\frac{2}{5}W$, where $W=\cos A+\cos B+\cos C+\cos D+\cos E$. $W$ is minimal only if the pentagon is a regular pentagon, so the proof is completed. If we can use this transforming operation for the $n$-gon, it's sufficient to solve B in order to solve problem A. This is why I'm asking B. My aim is to solve A. If you have a better idea to solve A, you don't need to solve B. Please teach me how to solve A.

geometry convex-geometry polygons

In the definition of $A_n$, do you really want to subtract a copy of the denominator from the numerator? This just decreases the value by 1, isn't it? – Sergei Ivanov Jul 20 '13 at 18:46

I see @Joseph O'Rourke already suggested in effect that you consider the problem of maximizing $B_n$ under the weaker condition that $\sum_i \left|P_i P_{i+1}\right|^2 = n$ (with $P_{n+1} \equiv P_1$).
If you always translate the polygon to put its centroid $\frac1n \sum_i \vec{P_i}$ at the origin then $\sum_i \left|P_i P_{i+1}\right|^2$ is a positive-definite quadratic form, as is $B_n$, so you're maximizing a Rayleigh quotient, which amounts to computing the dominant eigenvector and eigenspace of a reasonably simple matrix. If the max is attained by regular $n$-gons you're almost done. – Noam D. Elkies Jul 20 '13 at 19:38

@SergeiIvanov: I believe he or she is removing the edges so that the numerator sum runs over only diagonals. Essentially the ratio is diagonals/edges, and could be rephrased that way. – Joseph O'Rourke Jul 20 '13 at 21:05

I answered Question A at mathoverflow.net/questions/136678 (where it was first asked). – Sergei Ivanov Jul 20 '13 at 22:09

1 Answer

Just an observation concerning your problem A, maximizing the diagonal$^2$ sum over the edge$^2$ sum. The maximum appears to be achieved by the regular polygons, and moreover, equally achieved by shears of those polygons. Examples are shown below for $n=5,6,8$, where $A_5=\frac{1}{2}(3+\sqrt{5})$, $A_6=5$, and $A_8 \approx 12.656854$:
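As a quick numeric sanity check of the quantities above (a throwaway Python sketch with coordinates chosen by me, not taken from the question): any parallelogram gives $A_4=1$ by the parallelogram law, and a regular hexagon gives $A_6=5$, matching the value quoted in the answer.

```python
import math

def A(points):
    """A_n = (sum of squared diagonals) / (sum of squared edges),
    i.e. (sum over all pairs of |PiPj|^2 minus the edge sum) / edge sum."""
    n = len(points)
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    all_pairs = sum(d2(points[i], points[j])
                    for i in range(n) for j in range(i + 1, n))
    edges = sum(d2(points[i], points[(i + 1) % n]) for i in range(n))
    return (all_pairs - edges) / edges

# A parallelogram (any one works, by the parallelogram law).
par = [(0, 0), (3, 0), (4, 2), (1, 2)]
# A regular hexagon with unit circumradius (so unit edge length).
hexagon = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3))
           for k in range(6)]
```

Evaluating `A(par)` and `A(hexagon)` returns 1 and 5 up to rounding.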
Bouncing ball experiment In this experiment students should be in groups of 3. Students will drop a ball from different heights and measure the corresponding bounce. Since each group will use a different ball, they will generate different sets of data. They will be asked to discuss and compare their linear function with that of their classmates. They should practice measuring the ball bounce before they begin to collect data. A lesson plan for grades 8–12 Mathematics Learning outcomes Students will: • create a scatterplot, both manually and with a calculator, from personally collected data. • determine the equation of the best fit line, both manually and with a calculator. Teacher planning Time required for lesson 1 hour • 6-8 bouncing balls • graph paper • yardsticks • rulers Technology resources Graphing calculators • Students must know how to enter ordered pairs on the calculator and how to find the linear regression information. • They must have experience writing the equation of a line given the slope and y-intercept, or given two points. • Ask students what will happen to the bounce of the ball when we drop the ball from higher and higher points. Point out that this is an example of direct variation, or that this type of variation is called “direct” because what happens to one variable (increase or decrease) also happens to the other one. • Could the students predict exactly how high the ball would bounce when dropped from 36″? 72″? How could you make your predictions more accurate? 1. Practice dropping the ball and measuring its bounce from the first height. 2. Drop the ball three times from each height (12″, 24″, 36″, 48″, 60″) before recording the consistent bounce at each height. 3. Using your data, plot the ordered pairs on a separate piece of graph paper. Let X be the independent variable and represent the height from which the ball is dropped. Let Y be the dependent variable and represent the height of the bounce. 4. Draw the best fit line. 5. 
Write the equation of the line, using a point on the line and an estimated slope, or using two points on the line. 6. Generate a scatterplot using the data from above. 7. Determine the linear regression information and correlation coefficient using the calculator’s linear regression capability. 8. Write the equation of the line from the linear regression information. 9. Graph this equation, along with the one from step 5, on your calculator. • Using your equation from step 5, predict the height of the bounce from a 72″ drop. • Using your equation from step 8, predict the height of the bounce from a 72″ drop. • Actually drop the ball from 72″ and compare the results to your predictions. Which is more accurate? Why do you think this is true? • Predict how the linear equations of other groups will compare to yours. Are their lines going to have larger slope (steeper), smaller slope (flatter), or the same slope? Why do you think this? • Write a concluding statement addressing how the correlation coefficient is useful in this experiment. Supplemental information The students really enjoy the opportunity to collect their very own data. It is a fun way to practice many of the concepts taught with respect to linear functions and their equations. • Common Core State Standards □ Mathematics (2010) ☆ Grade 8 ○ Statistics & Probability ■ 8.SP.1Construct and interpret scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Describe patterns such as clustering, outliers, positive or negative association, linear association, and nonlinear association. ■ 8.SP.2Know that straight lines are widely used to model relationships between two quantitative variables. For scatter plots that suggest a linear association, informally fit a straight line, and informally assess the model fit by judging the closeness of the... 
■ 8.SP.3Use the equation of a linear model to solve problems in the context of bivariate measurement data, interpreting the slope and intercept. For example, in a linear model for a biology experiment, interpret a slope of 1.5 cm/hr as meaning that an additional... ☆ High School: Algebra ○ Creating Equations ■ ALG.CE.2Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales. ☆ High School: Statistics & Probability ○ Interpreting Categorical & Quantitative Data ■ SP.ICQ.6Represent data on two quantitative variables on a scatter plot, and describe how the variables are related. Fit a function to the data; use functions fitted to data to solve problems in the context of the data. Use given functions or choose a function... North Carolina curriculum alignment Mathematics (2004) Grade 9–12 — Algebra 1 • Goal 1: Number and Operations - The learner will perform operations with numbers and expressions to solve problems. • Goal 4: Algebra - The learner will use relations and functions to solve problems. □ Objective 4.01: Use linear functions or inequalities to model and solve problems; justify results. ☆ Solve using tables, graphs, and algebraic properties. ☆ Interpret constants and coefficients in the context of the problem. Grade 9–12 — Integrated Mathematics 1 • Goal 4: Algebra - The learner will use relations and functions to solve problems. □ Objective 4.01: Use linear functions or inequalities to model and solve problems; justify results. ☆ Solve using tables, graphs, and algebraic properties. ☆ Interpret the constants and coefficients in the context of the problem. Grade 9–12 — Introductory Mathematics • Goal 1: Number and Operations - The learner will understand and compute with real numbers. □ Objective 1.02: Develop flexibility in solving problems by selecting strategies and using mental computation, estimation, calculators or computers, and paper and pencil. 
• Goal 4: Algebra - The learner will understand and use linear relations and functions. □ Objective 4.03: Solve problems using linear equations and inequalities; justify symbolically and graphically.
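For teachers who want to check student results without a graphing calculator, the best-fit line and correlation coefficient from steps 5–8 of the procedure can be computed directly. This is a Python sketch with made-up sample data (real data will differ for each ball); the formulas are the standard least-squares slope, intercept, and Pearson r.

```python
def least_squares(xs, ys):
    """Slope m and intercept b of the best-fit line y = m*x + b,
    plus the correlation coefficient r (what the calculator's
    linear-regression mode reports)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = my - m * mx
    r = sxy / (sxx * syy) ** 0.5
    return m, b, r

# Hypothetical data: drop heights and bounce heights, in inches.
drops = [12, 24, 36, 48, 60]
bounces = [9, 17, 27, 35, 44]
m, b, r = least_squares(drops, bounces)
predicted_72 = m * 72 + b   # prediction for the 72-inch drop in the evaluation
```

With these invented numbers the fit is nearly perfect (r close to 1), illustrating the direct variation discussed in the introduction.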
Simulation of Blackjack: the odds are not with you July 19, 2013 By Francis Smart I often want to simulate outcomes varying across a set of parameters. In order to accomplish this in an efficient manner I have coded up a little function that takes parameter vectors and produces results. First I will show how to set it up with some dummy examples and next I will show how it can be used to select the optimal blackjack strategy. SimpleSim <- function(..., fun, pairwise=F) { # SimpleSim allows for the calling of a function varying # multiple parameters entered as vectors. In pairwise form # it acts much like apply. In non-paiwise form it makes a # combination of each possible parameter mix # in a manner identical to block of nested loops. returner <- NULL L <- list(...) # Construct a vector that holds the lengths of each object vlength <- unlist(lapply(L, length)) npar <- length(vlength) CL <- lapply(L, "[", 1) # Current list is equal to the first element # Pairwise looping if (pairwise) { # If pairwise is selected than all elements greater than 1 must be equal. # Checks if all of the elements of a vector are equal if (!(function(x) all(x[1]==x))(vlength[vlength>1])) { print(unlist(lapply(L, length))) stop("Pairwise: all input vectors must be of equal length", call. 
=F) for (i in 1:max(vlength)) { # Loop through calling the function CL[vlength>1] <- lapply(L, "[", i)[vlength>1] # Current list returner <- rbind(returner,c(do.call(fun, CL),pars="", CL)) } # End Pairwise # Non-pairwise looping if (!pairwise) { ncomb <- prod(vlength) # Calculate the number of combinations print(paste(ncomb, "combinations to loop through")) comb <- matrix(NA, nrow=prod(vlength), ncol=npar+1) comb[,1] <- 1:prod(vlength) # Create an index value comb <- as.data.frame(comb) # Converto to data.frame colnames(comb) <- c("ID", names(CL)) for (i in (npar:1)) { # Construct a matrix of parameter combinations comb[,i+1] <- L[[i]] # Replace one column with values comb<-comb[order(comb[,(i+1)]),] # Reorder rows for (i in 1:ncomb) { for (ii in 1:npar) CL[ii] <- comb[i,ii+1] returner <- rbind(returner,c(do.call(fun, CL),pars="", CL)) } # End Non-Pairwise } # END FUNCTION DEFINITION # Let's first define a simple function for demonstration minmax <- function(...) c(min=min(...),max=max(...)) # Pairwise acts similar to that of a multidimensional apply across columns SimpleSim(a=1:20,b=-1:-20,c=21:40, pairwise=T, fun="minmax") # The first set of columns are those of returns from the function "fun" called. # The second set divided by "par" are the parameters fed into the function. SimpleSim(a=1:20,b=-1:-20,c=10, pairwise=T, fun="minmax") # Non-pairwise creates combinations of parameter sets. # This form is much more resource demanding. SimpleSim(a=1:5,b=-1:-5,c=1:2, pairwise=F, fun="minmax") # Let's try something a little more interesting. # Let's simulate a game of black jack strategies assuming no card counting is possible. blackjack <- function(points=18, points.h=NULL, points.ace=NULL, cards=10, cards.h=NULL, cards.ace=NULL, sims=100, cutoff=10) { # This function simulates a blackjack table in which the player # has a strategy of standing (not asking for any more cards) # once he has either recieved a specific number of points or # a specific number of cards. 
This function repeates itself sims # of times. # This function allows for up to three different strategies to be played. # 1. If the dealer's hole card is less than the cuttoff # 2. If the dealer's hole card is greater than or equal to the cuttoff # 3. If the dealer's hole card is an ace # In order to use 3 level strategies input parameters as .h and .ace # It returns # of wins, # of losses, # of pushes (both player and dealer gets 21) # and the number of blackjacks. # This simulation assumes the number of decks used is large thus # the game is like drawing with replacement. if (is.null(points.h)) points.h <- points if (is.null(points.ace)) points.ace <- points.h if (is.null(cards.h)) cards.h <- cards if (is.null(cards.ace)) cards.ace <- cards.h bdeck <- c(11,2:9,10,10,10,10) # 11 is the ace bdresult <- c(ppoints=NULL, pcards=NULL, dpoints=NULL, dcards=NULL) for (s in 1:sims) { dhand <- sample(bdeck,1) # First draw the deal's revealed card phand <- sample(bdeck,2, replace=T) # Specify target's based on dealer's card if (dhand<cutoff) { pcuttoff <- points ccuttoff <- cards if (dhand>=cutoff) { pcuttoff <- points.h ccuttoff <- cards.h if (dhand==11) { pcuttoff <- points.ace ccuttoff <- cards.ace # player draws until getting above points or card count while ((sum(phand)<pcuttoff)&(length(phand)<ccuttoff)){ phand <- c(phand, sample(bdeck,1)) # If player goes over then player may change aces to 1s if (sum(phand)>21) phand[phand==11] <- 1 # Dealer must always hit 17 so hand is predetermined while (sum(dhand)<17) { dhand <- c(dhand, sample(bdeck,1)) # If dealer goes over then dearler may change aces to 1s if (sum(dhand)>21) dhand[dhand==11] <- 1 bdresult <- rbind(bdresult, c(ppoints=sum(phand), pcards=length(phand), dpoints=sum(dhand), dcards=length(dhand))) # Calculate the times that the player wins, pushes (ties), and loses pbj <- (bdresult[,1]==21) & (bdresult[,2]==2) dbj <- (bdresult[,3]==21) & (bdresult[,4]==2) pwins <- ((bdresult[,1] > bdresult[,3]) & (bdresult[,1] 
< 22)) | (pbj & !dbj) push <- (bdresult[,1] == bdresult[,3]) | (pbj & dbj) dwins <- !(pwins | push) # Specify the return. blackjack(points=18, sims=4000) # We can see unsurprisingly, that the player is not doing well. blackjack(points=18, points.h=19, sims=4000) # We can see that by adopting a more aggressive strategy for when # the dealer has a 10 point card or higher, we can do slightly better. # But overall, the dealer is still winning about 3x more than us. # We could search through different parameter combinations manually to # find the best option. Or we could use our new command SimpleSim! MCresults <- SimpleSim(fun=blackjack, points=15:21, points.h=18:21, points.ace=18:21, cutoff=9:10, cards=10, sims=100) # Let's now order our results from the most promising. # By the simulation it looks like we have as high as a 50% ratio of losses to wins. # Which means for every win there are 2 losses. # However, I don't trust it since we only drew 100 simulations. # In addition, this is the best random draw from all 224 combinations which each # have different probabilities. # Let's do the same simulation but with 2000 draws per. # This might take a little while. MCresults <- SimpleSim(fun=blackjack, points=15:21, points.h=18:21, points.ace=18:21, cutoff=9:10, cards=10, sims=5000) # Let's now order our results from the most promising. hist(unlist(MCresults[,1]), main="Across all combinations\nN(Win)/N(Loss)", xlab = "Ratio", ylab = "Frequency") # The best case scenario 38% win to loss ratio appears around where we started, # playing to hit 18 always and doing almost as well when the dealer is high # (having a 10 or ace) than playing for 19. # Overall, the odds are not in our favor. For every win we expect 1/.38 (2.63) losses.
Quantum gauge field theory in Cohesive homotopy type theory on the “synthetic” axiomatization of classical field theory/prequantum field theory within homotopy type theory in a flavor with three higher modalities added: cohesive homotopy type theory. There is also a refinement to genuine synthetic quantum field theory, see at Motivic quantization of local prequantum field theory and at Type-semantics for quantization For a survey of the general picture see at Synthetic Quantum Field Theory. For a more detailed exposition see at Classical field theory via Cohesive homotopy types. We implement in the formal language of homotopy type theory a new set of axioms called cohesion. Then we indicate how the resulting cohesive homotopy type theory naturally serves as a formal foundation for central concepts in gauge quantum field theory. This is a brief survey of work by the authors developed in detail elsewhere (Sh, Sc). Urs Schreiber, Prequantum physics in a cohesive topos, Talk at Quantum Physics and Logic 2011, (pdf) Lecture notes on more aspects of physics with an eye towards string theory in the context of cohesive homotopy type theory is in For further related references see at differential cohomology in a cohesive topos – References.
Subtracting Exponents with Same Base but Different Exponent

If you only have powers of 3, you cannot possibly have a factor of [itex]8 = 2^3[/itex]! Unless you happen to have [itex]8 = 3^2-3^0[/itex]. The correct answer is 1/3 as you were told: write [itex]3^{n+2}[/itex] as [itex]3(3^{n+1})[/itex] and cancel.

I get 3/8: [tex]\frac{3^{n+2}}{3^{n+3}-3^{n+1}} = \frac{3 \cdot 3^{n+1}}{3^2 \cdot 3^{n+1}-3^{n+1}} = \frac{3}{3^2-3^0} = \frac{3}{9-1} = \frac{3}{8}[/tex]

Perhaps HallsofIvy and the person who wrote the book's answer key missed the "-3^(n+1)" in the denominator?
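A quick exact-arithmetic check (a Python snippet I added, not from the thread) confirms that the second poster's value 3/8 is independent of n:

```python
from fractions import Fraction

def ratio(n):
    # 3^(n+2) / (3^(n+3) - 3^(n+1)), evaluated exactly
    return Fraction(3 ** (n + 2), 3 ** (n + 3) - 3 ** (n + 1))

values = {ratio(n) for n in range(10)}   # same fraction for every n tried
```

The set collapses to the single value 3/8, matching the hand simplification.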
Math Forum Discussions - Re: meaningful standards (fwd)
Date: May 31, 1995 8:45 PM
Author: DoctorCHEK@aol.com
Subject: Re: meaningful standards (fwd)

In Steve Means' last post he also stated:

Last week her class started on trig and she asked for HELP with problems in which they were asked to compute the trig ratios for right triangles with varying lengths given.

If the sides of these right triangles were given...and the assignment was merely to divide one number by another to find the ratios as you say.... why did this young lady need to ask for help? Please clarify...Harv Becker
Hi, how is finding the square root by successive averaging related to finding a fixed point? It's from the first video lecture, but I am finding it hard to understand.

The value guess = sqrt(x) is a fixed point of (average guess (/ x guess)) since x/sqrt(x) = sqrt(x).

Maybe it makes more sense this way: You want to find the square root of any number x. If you look at the function avg(guess, x/guess), you'll see that only if your guess = sqrt(x) does the function evaluate as avg(guess, x/guess) = guess. So your parameter is "guess" and the value of the function is also "guess", which is the definition of a fixed point. Bottom line: you were not looking for a fixed point, but for a function for which the fixed point is the thing you were looking for, and that is the avg function above.
{"url":"http://openstudy.com/updates/4e0b36de0b8bd74af49d47ed","timestamp":"2014-04-19T02:22:16Z","content_type":null,"content_length":"30713","record_id":"<urn:uuid:9a77c899-cf6a-4683-ae55-21ee9426209e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
The Correction of Aniseikonia with Ophthalmic Lenses KENNETH N. OGLE, "The Correction of Aniseikonia with Ophthalmic Lenses," J. Opt. Soc. Am. 26, 323-337 (1936) 1. Cf. Bibliography. 2. Under the shape category, aniseikonia also includes declination errors. Such errors can usually be interpreted in terms of a meridional magnification of the dioptric image at some oblique axis. 3. Zero verging power magnification lenses. 4. The term "retinal image" as used here, describes that section of the bundle of image forming rays in the vitreous humor of the eye which falls upon the retina, and to which the retinal elements 5. The sign system used is as follows: light is incident from the left; distances are measured from surfaces; distances to left are negative; distances to right are positive. All separations for purposes of development are taken negative. A reduced distance is an actual distance in an optical medium divided by the index of refraction of that medium. 6. Surface powers are defined by D=(n-1)/r, where r is the radius of the curve in meters and n is the index of refraction. 7. Cf. T. Smith, "The Primordial Coefficients of Asymmetrical Lenses," Trans. Opt. Soc. 29, 170 (1928). M. Herzberger, Strahlenoptik, Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen, Bd. 35, 61 (1931). They may be determined by other methods, e.g., cf. C. Pendlebury, Lenses and Systems of Lenses (London, 1884). J. G. Leatham, Symmetrical Optical Instrument (Cambridge, 1908). 8. If such a lens corrects an ametropic eye, for a given object distance, V is the reciprocal of the distance measured from the ocular surface of the lens to the plane that is conjugate to the retina of the ametropic eye. 9. Though this division of the magnification into factors for the eye and correcting lens is the most straightforward from the point of view of geometrical optics, it is not the only one, as will be shown later. 10.
Here t, c, h and u are taken positive. 11. This factor is designated the "effectivity factor," by T. Smith. Cf. Trans. Opt. Soc. 26, 31 (1924). 12. These relations, easily found by applying the Gauss theory to the object and image distances referred to a second pair of conjugate points, are identical with the vergence equations of A. Gullstrand. Cf. Helmholtz, Physiological Optics, Eng. Trans., Vol. I, p. 277ff. 13. The entrance pupil of the eye is the image of the real pupil formed by that part of the dioptric system which lies between the real pupil and the object. The distance of the entrance pupil from the pole of the cornea can be measured by means of a corneal microscope. The exit pupil will be the image of the real pupil formed by that part of the dioptric system lying between the true pupil and the retina. 14. This is essentially correct as concerns the human eye—see unpublished work, Determination of the Effective Entrance Pupil, by K. N. Ogle, October, 1933. 15. This may not be strictly true in cases of large spherical aberration. It must be borne in mind that this discussion pertains to centered systems. Actually the pupil is decentered and the dioptric system of the eye is both decentered and tipped relative to visual axis. 16. This relation differs from that of Eq. (19) in that h[e] here is measured from the entrance pupil instead of the cornea. In the schematic eye the entrance pupil lies about 3.2 mm behind the cornea. Because this expression is best adapted for either blurred or sharp imagery it is perhaps the best to use empirically. 17. If V[0]=0, D[1]=0, L[0]=angular magnification of plane parallel of thickness t mm. 18. Comparing the ocular images in any other manner explicitly involves the absolute magnitude of one of the k's. The ratio permits one to compare the sizes in percentages. 19. As would be the case in the testing instrument. 20. The Gauss coefficients for a four surface system are [equation] 21. Cf. T.
Smith, "Back Vertex Powers," Trans. Opt. Soc., 26, 35 (1924), for case when U=0, i.e., parallel incident light. 22. Taking c, h and u as positive here. 23. In general R differs from unity by a few percent, and it has been convenient to express R as R=1+e, and refer to the percent eikonic difference, viz., 100e. 24. Referred to the points before the eyes at which the vergence powers of the trial case lenses were specified. 25. These relations follow since the positions of the images formed by the spectacle lenses must be the same as were the positions of the images formed by the corresponding trial case lenses. 26. This additive property is found in most of the trial case sets manufactured today. 27. The L factor is unity for distant vision and hence does not enter into the cylindrical excess magnification. If cylindrical lenses are included before both eyes in the test, even though one is of zero power, the L factor drops out of the eikonic trial case lens ratio. 28. One means for doing this is to design the shape of the cylindrical lens for distant vision so that its shape factor S[01] just offsets the T factor, i.e., S[01]T=1. A different set of cylinders would be required for near vision unless cylindrical lenses are always placed before both eyes (though one may be of zero power). 29. In terms of the surface powers and thickness of the lens the vertex power (cf. Eq. (19)) can be written V[0]=D[1]+D[2]+e, where e is an allowance factor equal to S[0]D[1]^2 c (=D[1]^2 c, approximately) that can be found from prepared tables. 30. Dr. E. D. Tillyer first pointed out this simplification. 31. These lenses have excellent field properties with negligible distortion. 32. This follows from the series expansion of log[e] M, i.e., log[e] M = (M-1) - ½(M-1)^2 + ⅓(M-1)^3 - ....
When ½(M-1)^2 is below precision of measurements, (100) log[e]M is identical with percent
{"url":"http://www.opticsinfobase.org/josa/abstract.cfm?uri=josa-26-8-323","timestamp":"2014-04-20T17:55:53Z","content_type":null,"content_length":"71022","record_id":"<urn:uuid:1faa62ab-f04f-41d3-85ea-9feb8f8080a2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2006 [00349] [Date Index] [Thread Index] [Author Index] RE: Simple question from a biologist • To: mathgroup at smc.vnet.net • Subject: [mg67180] RE: [mg67139] Simple question from a biologist • From: "David Park" <djmp at earthlink.net> • Date: Sun, 11 Jun 2006 02:18:37 -0400 (EDT) • Sender: owner-wri-mathgroup at wolfram.com Although it hasn't gotten you into any trouble here, in general it is better to use symbol names that start with small case letters because then you can be sure they won't conflict with Mathematica defined symbols. So rewriting and taking the first (and only) solution... sol = Solve[{{{-x, 0, z}, {x, -y, 0}, {0, y, -z}} . {a, b, c} == 0, a + b + c == 1}, {a, b, c}][[1]] {a -> (y*z)/(x*y + x*z + y*z), b -> (x*z)/(x*y + x*z + y*z), c -> (x*y)/(x*y + x*z + y*z)} Then we could, for example, calculate all your rates in a list and check that they are equal. We just write the expression for each rate and then substitute the solution. You don't have to Solve again. {a x, b y, c z} /. sol Equal @@ % {(x*y*z)/(x*y + x*z + y*z), (x*y*z)/(x*y + x*z + y*z), (x*y*z)/(x*y + x*z + y*z)} Or, perhaps more what you want, you could define the rate function as rate[x_, y_, z_] = a x /. sol (x*y*z)/(x*y + x*z + y*z) Then you could, for example, plot the rate as a function of x and y for fixed values of z. Plot3D[rate[x, y, 1], {x, 0.00001, 2}, {y, 0.00001, 2}]; A couple of good books are 'The Beginner's Guide to Mathematica' by Jerry Glynn & Theodore Gray and 'Mathematica Navigator: Graphics and Methods of Applied Mathematics' by Heikki Ruskeepaa. But it is very worthwhile going through Part I of The Mathematica Book. And using Help as much as possible. David Park djmp at earthlink.net From: tnad [mailto:terry_najdi at hotmail.com] To: mathgroup at smc.vnet.net I'm a bit new to this so please bear with me.
I solved this equation: Sol = Solve[{{{-x, 0, z}, {x, -y, 0}, {0, y, -z}}.{A, B, C} == 0, A + B + C == 1}, {A, B, C}] and got the outputs of A, B and C in terms of x, y, and z each. Now I want to express a term called "rate" where rate = Ax = By = Cz in terms of x, y and z only. So I tried to do this: Solve[rate == Ax , rate] /. Sol but I cannot get the rate in terms of x, y and z. Is there a better way to do this? Also if someone knows of a better tutorial (better than the built-in tutorial) for Mathematica, please let me know.
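For readers without Mathematica, the same calculation can be sketched in Python with SymPy (an illustrative translation of the thread's code, not part of the original discussion):

```python
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c', positive=True)

# Same system as in the thread: M.{a, b, c} == 0 together with a + b + c == 1
M = sp.Matrix([[-x, 0, z], [x, -y, 0], [0, y, -z]])
equations = list(M * sp.Matrix([a, b, c])) + [a + b + c - 1]
sol = sp.solve(equations, [a, b, c], dict=True)[0]

# rate = a*x (= b*y = c*z) expressed in x, y, z only
rate = sp.simplify((a * x).subs(sol))
```

Substituting the solution shows a*x, b*y and c*z all reduce to x*y*z/(x*y + x*z + y*z), matching the Mathematica result above.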
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Jun/msg00349.html","timestamp":"2014-04-18T23:59:15Z","content_type":null,"content_length":"36307","record_id":"<urn:uuid:ff77c304-0a54-4b3e-899a-7eba9811c228>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
distance in kms between Bhiloda to Gandhinagar
You asked: distance in kms between Bhiloda to Gandhinagar
Assuming you meant
• Gandhinagar (Hindi: गांधीनगर, Gujarati: ગાંધીનગર), the capital of Gujarat State, India
Did you mean?
{"url":"http://www.evi.com/q/distance_in_kms_between_bhiloda_to_gandhinagar","timestamp":"2014-04-20T22:26:14Z","content_type":null,"content_length":"55705","record_id":"<urn:uuid:20509d55-0600-4bbf-93d6-052e6abb5b57>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
Singular value decomposition and least squares solutions
Results 1 - 10 of 110

- International Journal of Computer Vision, 1992. Cited by 894 (37 self).
Inferring scene geometry and camera motion from a stream of images is possible in principle, but is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that can overcome this difficulty by recovering shape and motion under orthography without computing depth as an intermediate step. An image stream can be represented by the 2FxP measurement matrix of the image coordinates of P points tracked through F frames. We show that under orthographic projection this matrix is of rank 3. Based on this observation, the factorization method uses the singular-value decomposition technique to factor the measurement matrix into two matrices which represent object shape and camera rotation respectively. Two of the three translation components are computed in a preprocessing stage. The method can also handle and obtain a full solution from a partially filled-in measurement matrix that may result from occlusions or tracking failures. The method gives accurate results, and does not introduce smoothing in either shape or motion. We demonstrate this with a series of experiments on laboratory and outdoor image streams, with and without occlusions.

- International Journal of Computer Vision, 1991. Cited by 185 (12 self).
The factorization method described in this series of reports requires an algorithm to track the motion of features in an image stream. Given the small inter-frame displacement made possible by the factorization approach, the best tracking method turns out to be the one proposed by Lucas and Kanade in 1981. The method defines the measure of match between fixed-size feature windows in the past and current frame as the sum of squared intensity differences over the windows. The displacement is then defined as the one that minimizes this sum. For small motions, a linearization of the image intensities leads to a Newton-Raphson style minimization. In this report, after rederiving the method in a physically intuitive way, we answer the crucial question of how to choose the feature windows that are best suited for tracking. Our selection criterion is based directly on the definition of the tracking algorithm, and expresses how well a feature can be tracked. As a result, the criterion is optimal by construction. We show by experiment that the performance of both the selection and the tracking algorithm are adequate for our factorization method, and we address the issue of how to detect occlusions. In the conclusion, we point out specific open questions for future research.

- ACM Transactions on Programming Languages and Systems, 1994. Cited by 173 (8 self).
This paper describes both the techniques themselves and our experience building and using register allocators that incorporate them. It provides a detailed description of optimistic coloring and rematerialization. It presents experimental data to show the performance of several versions of the register allocator on a suite of FORTRAN programs. It discusses several insights that we discovered only after repeated implementation of these allocators. Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors---compilers, optimization. General terms: Languages. Additional Key Words and Phrases: Register allocation, code generation, graph coloring. 1. INTRODUCTION The relationship between run-time performance and effective use of a machine's register set is well understood. In a compiler, the process of deciding which values to keep in registers at each point in the generated code is called register allocation.

- 1992. Cited by 82 (1 self).
This paper surveys the contributions of five mathematicians --- Eugenio Beltrami (1835--1899), Camille Jordan (1838--1921), James Joseph Sylvester (1814--1897), Erhard Schmidt (1876--1959), and Hermann Weyl (1885--1955) --- who were responsible for establishing the existence of the singular value decomposition and developing its theory.

- 1993. Cited by 63 (4 self).
SVDPACKC comprises four numerical (iterative) methods for computing the singular value decomposition (SVD) of large sparse matrices using ANSI C. This software package implements Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) for large sparse matrices. The package has been ported to a variety of machines ranging from supercomputers to workstations: CRAY Y-MP, IBM RS/6000-550, DEC 5000100, HP 9000-750, SPARCstation 2, and Macintosh II/fx. This document (i) explains each algorithm in some detail, (ii) explains the input parameters for each program, (iii) explains how to compile/execute each program, and (iv) illustrates the performance of each method when we compute lower rank approximations to sparse term-document matrices from information retrieval applications. A user-friendly software interface to the package for UNIX-based systems and the Macintosh II/fx is ...

- 2003. Cited by 62 (2 self).
Standard value function approaches to finding policies for Partially Observable Markov Decision Processes (POMDPs) are generally considered to be intractable for large models. The intractability of these algorithms is to a large extent a consequence of computing an exact, optimal policy over the entire belief space. However, in real-world POMDP problems, computing the optimal policy for the full belief space is often unnecessary for good control even for problems with complicated policy classes. The beliefs experienced by the controller often lie near a structured, low-dimensional manifold embedded in the high-dimensional belief space. Finding a good approximation to the optimal value function for only this manifold can be much easier than computing the full value function. We introduce a new method for solving large-scale POMDPs by reducing the dimensionality of the belief space. We use Exponential family Principal Components Analysis (Collins, Dasgupta, & Schapire, 2002) to represent sparse, high-dimensional belief spaces using low-dimensional sets of learned features of the belief state. We then plan only in terms of the low-dimensional belief features. By planning in this low-dimensional space, we can find policies for POMDP models that are orders of magnitude larger than models that can be handled by conventional techniques. We demonstrate the use of this algorithm on a synthetic problem and on mobile robot navigation tasks.

- IEEE Transactions on Multimedia, 2002. Cited by 60 (0 self).
Digital watermarking has been proposed as a solution to the problem of copyright protection of multimedia documents in networked environments. There are two important issues that watermarking algorithms need to address. Firstly, watermarking schemes are required to provide trustworthy evidence for protecting rightful ownership; secondly, good watermarking schemes should satisfy the requirement of robustness and resist distortions due to common image manipulations (such as filtering, compression, etc.). In this paper, we propose a novel watermarking algorithm based on singular value decomposition (SVD). Analysis and experimental results show that the new watermarking method performs well in both security and robustness.

- 1976. Cited by 41 (2 self).
This paper is concerned with least squares problems when the least squares matrix A is near a matrix that is not of full rank. A definition of numerical rank is given. It is shown that under certain conditions when A has numerical rank r there is a distinguished r dimensional subspace of the column space of A that is insensitive to how it is approximated by r independent columns of A. The consequences of this fact for the least squares problem are examined. Algorithms are described for approximating the stable part of the column space of A. 1. Introduction In this paper we shall be concerned with the following problem. Let A be an m x n matrix with m >= n, and suppose A is near (in a sense to be made precise later) a matrix B whose rank is less than n. Can one find a set of linearly independent columns of A that span a good approximation to the column space of B? The solution of this problem is important in a number of applications. In this paper we shall be chiefly interested in...

- ACM Trans. Math. Software, 1982. Cited by 38 (0 self).
The most well-known and widely used algorithm for computing the Singular Value Decomposition (SVD) A = UΣV^T of an m x n rectangular matrix A is the Golub-Reinsch algorithm (GR-SVD). In this paper, an improved version of the original GR-SVD algorithm is presented. The new algorithm works best for matrices with m >> n, but is more efficient even when m is only slightly greater than n (usually when m ~ 2n) and in some cases can achieve as much as 50 percent savings. If the matrix U is explicitly desired, then n^2 extra storage locations are required, but otherwise no extra storage is needed. The two main modifications are: (1) first triangularizing A by Householder transformations before bidiagonalizing it (this idea seems to be widely known among some researchers in the field, but as far as can be determined, neither a detailed analysis nor an implementation has been published before), and (2) accumulating the left Givens transformations in GR-SVD on an n x n array instead of on an m x n array. A PFORT-verified FORTRAN implementation is included. Comparisons with the EISPACK SVD routine are given.
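Several of the abstracts above lean on the same two facts: the SVD factors any matrix as A = UΣV^T, and truncating it gives the best low-rank approximation in the Frobenius norm. A small NumPy sketch, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))

# Thin SVD: A = U @ diag(s) @ Vt, singular values in descending order
U, s, Vt = np.linalg.svd(A, full_matrices=False)
svd_error = np.linalg.norm(A - U @ np.diag(s) @ Vt)

# Best rank-k approximation: keep only the k largest singular triplets
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Frobenius error of the truncation equals sqrt of the discarded s[i]^2
trunc_error = np.linalg.norm(A - A_k)
```

This truncation is the operation behind the rank-3 factorization of the measurement matrix, the term-document approximations in SVDPACKC, and the SVD-based watermarking scheme.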
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=275935","timestamp":"2014-04-17T06:30:40Z","content_type":null,"content_length":"38870","record_id":"<urn:uuid:c9aba64c-3adb-4ce5-9301-bea5e8f486b5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Temecula Precalculus Tutor ...Through the opportunity provided by WyzAnt, I would like to help students realize their own potential. My specialties are math and science. I am able to teach up to college level in all the following subjects: math, physics, chemistry, and biology. 38 Subjects: including precalculus, chemistry, statistics, calculus I have more than 20 years of experience teaching and tutoring Mathematics at the elementary, high school and college levels. I've been tutoring Spanish for several years. I have a Bachelor Degree in Mathematics and a Master in Mathematics Education. 8 Subjects: including precalculus, Spanish, geometry, trigonometry Hello! My name is Eric, and I hold a Bachelor's degree in Mathematics and Cognitive Science from the University of California - San Diego. I began tutoring math in high school, volunteering to assist an Algebra 1 class for 4 hours per week. 14 Subjects: including precalculus, calculus, physics, geometry ...I realize that half the battle is figuring out how to approach the test with confidence, and I can get you there, too. I studied for and passed the CBEST test myself. For the math portion: I understand the importance of learning to estimate. 43 Subjects: including precalculus, reading, chemistry, English ...Typically, the SAT math student will need to know all of his/her math through Algebra 2 and some pre-calculus/trig. Many of the SAT or ACT math problems do not take a lot of time to do. The problems are set up in such a way that the student should be looking for the obvious, ie: when something ... 20 Subjects: including precalculus, reading, Spanish, geometry
{"url":"http://www.purplemath.com/temecula_precalculus_tutors.php","timestamp":"2014-04-17T04:42:30Z","content_type":null,"content_length":"23876","record_id":"<urn:uuid:11f366e7-6430-4f61-ba51-39ae75b63726>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
News - Wolfram Alpha Pro Available - My Math Forum
February 11th, 2012, 05:09 PM #1 (Joined: Jan 2008, Posts: 21, Thanks: 0)

Here is a bit of news I came across recently and thought others here might be interested in.

From Stephen Wolfram's blog, Wolfram Alpha Pro is now available, and--for now--free trial subscriptions are available.

Stephen Wolfram Blog - Announcing Wolfram Alpha Pro
http://blog.stephenwolfram.com/2012/02/ ... alpha-pro/

You never know when the free trial period will end, so might as well try it while you can.
{"url":"http://mymathforum.com/math-software/24717-news-wolfram-alpha-pro-available.html","timestamp":"2014-04-16T15:59:59Z","content_type":null,"content_length":"27626","record_id":"<urn:uuid:eac38e4a-d2d7-4f93-959b-859e6d6d7bea>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Solar Cycle Model fails to predict the recent warming

Jan-Erik Solheim, Kjell Stordahl and Ole Humlum have recently published two papers supporting the myth that the sun is behind the recent climate changes. They make a model based on variations in the lengths of the solar cycles. The model predicts that we are now in a decade with sharply falling temperatures. However, as will be shown in this post, the model fails to keep up with the recent global warming. Temperatures fitted well with the model until the mid-1970s, but not later. It is therefore extremely unlikely that the prediction about sharply falling temperatures in this decade will be right.

The solar intensity varies by approximately 0.1% over a solar cycle. Both the variations and the average value of the intensity differ a little from one cycle to the next. These variations are a solar radiative forcing that affects the energy balance of the Earth. A solar cycle lasts on average for just over 11 years. The previous solar cycle, number 23, ended in November 2008 after having lasted for an unusually long time, well over 12 years. The current solar cycle 24 will probably last for the rest of the decade.

Jan-Erik Solheim, Kjell Stordahl and Ole Humlum (hereafter SSH) find a relationship between the mean temperature in a solar cycle and the length of the previous solar cycle [1, 2], which we will refer to as the Solar Cycle Model: A long solar cycle is followed by a solar cycle with a low temperature, and a short solar cycle is followed by a solar cycle with a high temperature. When a solar cycle has ended, its length is known, and the model can predict the temperature in the next solar cycle. Due to the long solar cycle 23, the model predicts that we are now in a solar cycle with sharply falling temperatures.
In this blog post, observed mean temperature is the average of the observed global surface temperatures in a solar cycle, and predicted temperature is the prediction by the Solar Cycle Model for the mean temperature in a solar cycle. A prediction is calculated based on the observed mean temperatures in the previous solar cycles and the lengths of the previous solar cycles.

Results based on temperatures in the northern hemisphere

SSH concentrate on some local temperature series from the northern hemisphere. The HadCRUT3 NH temperature series contains the combined land and sea surface temperatures for the entire northern hemisphere. We first examine how this series matches the Solar Cycle Model. Figure 1 shows the observed mean temperatures in solar cycles 10 to 23 as blue circles, and the mean temperature observed so far in the current solar cycle 24 as a blue star. It also shows the predicted temperatures in solar cycles 14 to 24 as red stars. The horizontal positions of the temperatures are in the middle of the solar cycles that they represent.

Figure 1. The observed and predicted temperatures up to now.

Figure 1 shows that the observed mean temperatures in solar cycles 21 to 23 are much higher than predicted. SSH predict sharply falling temperatures in the current solar cycle 24. But the mean temperature observed so far in solar cycle 24 is much higher than predicted. Solar cycle 24 still has many years to go, and the situation may of course change. But given the failures of the predictions for cycles 21 to 23, and the high mean temperature observed so far in cycle 24, the model's prediction for cycle 24 is extremely unlikely to be right.

Figure 2 shows the temperatures in solar cycles 10 to 21 as a function of the length of the previous solar cycle. Solar cycle 21 ended in October 1986.

Figure 2. The observed mean temperatures up to and including solar cycle 21, and the prediction for cycle 21, as a function of the length of the previous solar cycle.
The blue trend line sloping downwards in Figure 2 is the best fit to the blue circles, which show the observed mean temperatures in solar cycles 10 to 20. The trend is calculated with linear regression analysis. The observed mean temperatures fit well with the Solar Cycle Model when they are close to the trend line, and they do not fit well when they are far away from it. Solar cycle 20 lasted for 11.6 years, and the prediction for solar cycle 21 is therefore on the trend line at the horizontal value 11.6 years. The dotted red lines show the 95% confidence interval (uncertainty range) for the prediction.

The observed mean temperature in solar cycle 21 is displayed as a blue star in Figure 2. It is far above the upper limit of the 95% confidence interval around its prediction. According to the model, there is only a 0.41% probability of measuring such a high mean temperature in solar cycle 21.

Solar cycle 20 ended in June 1976. Figure 2 shows that the temperatures measured before the mid-1970s fit very well with the Solar Cycle Model, but that temperatures in the first solar cycle after the mid-1970s do not. Figure 2 shows when the Solar Cycle Model collapsed with respect to the ability to predict future temperatures.

The current solar cycle 24 started in December 2008, and it will probably last for the rest of the decade. Figure 3 shows how the Solar Cycle Model predicts its temperature, just as Figure 2 did for cycle 21.

Figure 3. The observed mean temperatures up to now, and the prediction for cycle 24, as a function of the length of the previous solar cycle.
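The construction behind Figures 2 and 3 is ordinary least squares plus a prediction interval for the next point. The sketch below uses made-up numbers (the cycle lengths, temperatures, noise level and the value 12.4 are synthetic stand-ins for illustration, not the paper's data):

```python
import numpy as np

# Synthetic stand-ins: mean temperature in a cycle vs. length of the previous cycle
rng = np.random.default_rng(1)
prev_length = rng.uniform(9.0, 13.0, size=11)                 # years
temperature = 1.0 - 0.15 * prev_length + rng.normal(0.0, 0.05, size=11)

slope, intercept = np.polyfit(prev_length, temperature, 1)    # the blue trend line

# Predict the next cycle's mean temperature from the cycle that just ended
x_new = 12.4
y_hat = intercept + slope * x_new

# 95% prediction interval (the dotted red lines); 2.262 is Student's t for df = 11 - 2
n = len(prev_length)
resid = temperature - (intercept + slope * prev_length)
s_err = np.sqrt(np.sum(resid ** 2) / (n - 2))
x_bar = prev_length.mean()
se = s_err * np.sqrt(1 + 1 / n + (x_new - x_bar) ** 2 / np.sum((prev_length - x_bar) ** 2))
lower, upper = y_hat - 2.262 * se, y_hat + 2.262 * se
```

An observed mean temperature above `upper` is the situation the post describes for cycles 21 to 24: the measurement falls outside the model's 95% range.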
Despite the greatly expanded confidence interval, the mean temperature observed so far in solar cycle 24 is far above the upper limit of the 95% confidence interval around its prediction.

Results based on other temperature series

This blog post concentrates on the temperatures in the northern hemisphere, and it therefore shows results based on the HadCRUT3 NH temperature series. Results based on the global HadCRUT3, NASA GISS and NCDC temperature series are about the same as the results based on HadCRUT3 NH; i.e. the observed mean temperatures match the predictions until the mid-1970s, but not later. SSH also use the HadCRUT3 NH temperature series, but they place more emphasis on some local temperature series. This blog post is a basic version of a longer article, which shows that the median of the results based on the local temperature series also diverges from the Solar Cycle Model predictions after the 1970s, in about the same way as do the results based on the HadCRUT3 NH temperature series. The longer article also contains more explanations, plots and statistics.

If there is a real, physical reason why temperatures fitted so well with the Solar Cycle Model until the mid-1970s, that reason must be a solar radiative forcing. If so, this forcing must still be present after the mid-1970s, although it no longer dominates. Another forcing, or several, must have become dominant. My analysis says nothing about what the new dominant forcing(s) may be. But it is natural to think of the human-induced forcings, which many scientists claim became dominant in the 1970s. Skeie et al. [3] show that the sum of the various human-induced forcings first became positive in the 1970s. Before 1970, the sum was small and its sign varied. After 1970, the sum has increased steadily, up to a substantial positive value in 2010.
This is the probable explanation why there is an increasing gap between the Solar Cycle Model's predictions and the observed mean temperatures after the middle of the 1970s.

A special thanks to Christian Moe for valuable comments and for language editing.

1. Solar Activity and Svalbard Temperatures, Jan-Erik Solheim, Kjell Stordahl and Ole Humlum.
2. The long sunspot cycle 23 predicts a significant temperature decrease in cycle 24, Jan-Erik Solheim, Kjell Stordahl and Ole Humlum.
3. Anthropogenic radiative forcing time series from pre-industrial times until 2010, Skeie et al.

Posted by Hans Petter Jacobsen on Wednesday, 19 December, 2012

The Skeptical Science website by Skeptical Science is licensed under a Creative Commons Attribution 3.0 Unported License.
Summarizable Address Blocks

Last month's article discussed basic OSPF. I want to say that with OSPF you should "summarize, summarize, summarize", so we need to discuss what that might mean. And it gives me the chance to sound off about addressing while we're at it. We'll start by reviewing subnetting, since I have a somewhat different perspective on it.

Why Subnetting

Let's start with a very basic point of view. Routing tells a router how to deliver a packet. The packet being routed has a specific IP address in it. Usually routers don't track every address (host routing), because it isn't necessary or appropriate. Instead, routers (and routing protocols) track subnets, which I like to think of as the names of the "wires" in a network. Of course, what we mean by "wires" is getting more abstract all the time. We used to have thick Ethernet (now that's a solid, definite, no-doubt-about-it wire). Now we have hubs (well, some of us do). And we have VLAN's, which are virtual wires. But the routers still track them: the logic is no different than in the days of 10Base5 cabling.

When TCP/IP version 4 was invented, computers (and routers) were scarce (and expensive). The original routing only was able to direct packets based on the Class A, B, C IP addresses everyone knows about. The routers assumed they knew how much of the address was significant for routing, as summarized in the following table.

│IP Address            │Class│Default Subnet Mask│Subnet Bits│
│1.0.0.0 - 126.0.0.0   │A    │255.0.0.0          │/8         │
│128.0.0.0 - 191.0.0.0 │B    │255.255.0.0        │/16        │
│192.0.0.0 - 223.0.0.0 │C    │255.255.255.0      │/24        │

(127 is class A but reserved for loopback). So a class A address by default has 8 bits significant for routing, class B 16, and class C 24. This is indicated with a slant as shown in the right column. The remaining bits are the host part of the address, assigned to various computers by the administrator.
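The table can be applied mechanically: the first octet alone determines the class and therefore the default mask. Here is a small Python sketch of that rule (the helper name is mine, not from the article):

```python
def default_mask(address):
    """Return the classful default subnet mask for a dotted-quad IP address."""
    first = int(address.split(".")[0])
    if 1 <= first <= 126:          # Class A
        return "255.0.0.0"
    if 128 <= first <= 191:        # Class B
        return "255.255.0.0"
    if 192 <= first <= 223:        # Class C
        return "255.255.255.0"
    raise ValueError("not a class A, B or C address")

print(default_mask("131.108.5.67"))   # a Class B address: 255.255.0.0
```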
The one bits in the mask (255 is 8 one bits in binary) indicate the network portion of the address, used for routing, and the zero bits indicate the host portion of the address. Internally, the router uses the network mask to select out the bits from a packet to look up in its routing table, by doing a LOGICAL AND operation on the packet address and the mask.

As time went on, it became clear that using new network numbers to route to additional Ethernet segments (wires, if you will) was going to use up addresses at a prodigious rate. A network administrator is at liberty to assign host addresses as they wish. So if the administrator assigns addresses so that the first bits in the host part of the address indicate the wire the host is on, and if there is some way to clue the router in on this, more flexible routing is gained. Allowing the administrator to specify the subnet mask was how the router was clued in to this scheme. The following approach is the official approach you'll find in almost every book on the subject.

Subnets, the Official Way

Let's look at how the router figures out how to route a packet in the presence of subnetting. First we'll do a simple version, then fancier versions. A packet arrives at the router with destination 131.108.5.67, default subnet mask 255.255.0.0 (since it's Class B). The router does a LOGICAL AND in binary:

      10000011.01101100.00000101.01000011   (131.108.5.67)
  AND 11111111.11111111.00000000.00000000   (255.255.0.0)
    = 10000011.01101100.00000000.00000000

(A programmer thinks of LOGICAL AND as keeping the bits above a '1' in the mask, and producing '0' as the result wherever there is '0' in the mask). The result, translated back to decimal, is 131.108.0.0. All we (or the router) have done is extracted the Class B network part of the address, the 131.108 part, the name of the company or organization that was assigned the address. The LOGICAL AND has extracted the first two octets (bytes) of the address. For simplicity, note that where the mask is 255, we keep the corresponding octet of the address. Where the mask is 0, we have a zero octet in the result.
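The LOGICAL AND the router performs can be reproduced octet by octet; a quick Python cross-check (my own illustration, not from the article):

```python
def mask_and(address, mask):
    """LOGICAL AND of a dotted-quad address and subnet mask, octet by octet."""
    octets = [int(a) & int(m)
              for a, m in zip(address.split("."), mask.split("."))]
    return ".".join(str(o) for o in octets)

print(mask_and("131.108.5.67", "255.255.0.0"))      # 131.108.0.0
print(mask_and("131.108.5.67", "255.255.255.224"))  # 131.108.5.64
```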
Suppose the router is using a subnet mask of 255.255.255.0. You can think of the '1' bits in the mask as measuring how much of the address to use for routing. (The mask is always 1's on the left, 0's on the right: that's a rule, the mask must have contiguous one bits). The result is then 131.108.5.0.

Now let's try it with the same address, but a mask of 255.255.255.224. The result is 131.108.5.64. So host .67 is in subnet .64. The router looks for a route to subnet 131.108.5.64.

Another Way

There's another way to reach the same result. We know what happens with mask octets of 255 and 0. If that's all that's in the mask, you've got subnetting on a byte boundary, the whole byte (or multiple bytes) forms the subnet, and you can just read the subnet off.

Here's the trick: if there's a byte in the mask like 224, then subtract it from 256: 256 - 224 = 32. It turns out that the subnets will be multiples of the result, 32, in the appropriate octet (the one that the 224 was in). Each subnet starts with the multiple of 32, and ends just before the next subnet (multiple of 32). The figure to the left shows this. Each subnet is shown in a box. The number at the top is the name of the subnet, the multiple of 32. The last number in each box is the subnet directed broadcast address, one before the next subnet. These two addresses are in red since they are not valid host addresses within that subnet. The remaining addresses are shown in green, as valid host addresses for the subnet.

So to determine the subnet of 131.108.5.67 for mask 255.255.255.224, we need to figure out what is the nearest multiple of 32 that's less than 67, since that's the name of the subnet. This gives us 64 as the subnet. Did you notice? We just figured out the subnet without having to do binary!

Let's try another. Say a packet is bound for 10.17.35.185 with mask 255.255.240.0. Then we calculate 256 - 240 = 16, so our subnet is a multiple of 16 in the third octet, or 32.
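The "256 minus the mask octet" trick can be written out directly: find the multiple of the step that the host octet falls into. A sketch (the helper name is mine):

```python
def subnet_octet(host_octet, mask_octet):
    """Apply the 256 - mask trick: subnets are multiples of (256 - mask_octet),
    so round the host octet down to the nearest such multiple."""
    step = 256 - mask_octet
    return (host_octet // step) * step

print(subnet_octet(67, 224))   # 131.108.5.67 with mask .224 -> subnet .64
print(subnet_octet(35, 240))   # 10.17.35.185, mask 240 in octet 3 -> 32
```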
But this is a class A address, so only the 10 is the given network address. That means the 17 is also part of the name of the subnet. In this case, the subnets look like 10.anything.multiple-of-16.0: 10.0.0.0, 10.0.16.0, 10.0.32.0, ..., 10.1.0.0, 10.1.16.0, 10.1.32.0, ..., up to 10.255.240.0. So the subnet of 10.17.35.185 is 10.17.32.0.

Summarizable Ranges of Addresses

Suppose we look at the addresses 200.1.80.0, 200.1.81.0, 200.1.82.0, up through 200.1.95.0. You can tell at a glance that this block of addresses is suitable for summarization with a routing protocol. That's because if you count the numbers 80, 81, ..., 95, there are 16 numbers there, and 16 is a power of two. Furthermore, the first number, 80, is also a multiple of that number 16. Whenever this pattern holds, the addresses are suitable.

So 145.123.72.0, 145.123.73.0, through 145.123.79.0 will summarize, since from 72 through 79 there are 8 numbers starting with a multiple of 8 (and 8 is a power of two). But 145.123.119.0, 145.123.120.0, through 145.123.139.0 will not summarize (at least not as one routing entry), since 119 through 139 amounts to 21 numbers, and 21 is not a power of two. Similarly, 119 through 126 do not summarize, since there are 8 numbers but they don't start with a multiple of 8.

Now comes the tie-in with subnetting as done above. Before, we calculated 256 - mask = multiples-of number. We can swap the mask and the "multiples of" number. In our first example, we're dealing with 16 numbers and multiples of 16, so we calculate 256 - 16 = 240. The numbers 80 through 95 were in the third octet, so we use mask 255.255.240.0. The claim is then that 200.1.80.0 with mask 255.255.240.0 represents the addresses in the range 200.1.80.0 through the end of 200.1.95.0, namely 200.1.95.255. The figure shows how this is working. If you look at it in binary, the 240 mask is saying that only the left 4 bits of the third octet are significant for routing purposes. In effect, we've tuned out the right 4 bits.
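The two conditions above, a power-of-two count of consecutive networks starting on a multiple of that count, can be checked mechanically, and Python's standard ipaddress module confirms the resulting summary. This is my own cross-check, not part of the original article:

```python
import ipaddress

def summarizable(start, count):
    """True if `count` consecutive octet values beginning at `start` can be
    covered by one summary route: the count must be a power of two and
    `start` a multiple of it."""
    power_of_two = count > 0 and (count & (count - 1)) == 0
    return power_of_two and start % count == 0

print(summarizable(80, 16))   # 200.1.80.0 .. 200.1.95.0   -> True
print(summarizable(72, 8))    # 145.123.72.0 .. 145.123.79.0 -> True
print(summarizable(119, 21))  # 119 .. 139: 21 is not a power of two -> False
print(summarizable(119, 8))   # 119 .. 126: 119 not a multiple of 8  -> False

# Cross-check: 200.1.80.0/20 covers exactly 200.1.80.0 through 200.1.95.255.
net = ipaddress.ip_network("200.1.80.0/20")
print(net.netmask)                                  # 255.255.240.0
print(ipaddress.ip_address("200.1.95.255") in net)  # True
print(ipaddress.ip_address("200.1.96.0") in net)    # False
```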
The gray box in the figure shows the bits that are "tuned out". But notice that this range of numbers counts in binary through all the possible combinations in those 4 bits. That's because we started with a multiple of 16 and, in effect, counted from 0 to 15 in binary. As long as our address matches the left bits, the 0101, the routing table entry will apply.

Another way of saying this: suppose a packet arrives with an address drawn from the range 200.1.80.0 through 200.1.95.255. We mask with 255.255.240.0, which tunes out the right four bits of octet three, and all of octet four. We're left with 200.1.80.0. So 200.1.80.0 with mask 255.255.240.0 indicates how to route for the entire range of addresses 200.1.80.0 through 200.1.95.255! The new way of writing this routing table entry is 200.1.80.0 /20. (Since 20 is 8 + 8 + 4, the number of 1 bits in the mask).

In our second example above, 72 through 79 is 8 numbers starting with a multiple of 8, so we calculate 256 - 8 = 248. All the numbers 72 to 79 are in the third octet again, and hence our summary is 145.123.72.0 with mask 255.255.248.0. Any address in the range 145.123.72.0 through 145.123.79.255, when logically ANDed with mask 255.255.248.0, will match 145.123.72.0. So one routing table entry for the latter, with mask 255.255.248.0, suffices to route to all the addresses in this range. The new notation for this routing entry: 145.123.72.0 /21. (Since 21 is 8 + 8 + 5, the number of 1 bits in the mask).

Next Month

Next month we'll look at how to think about the size of a /23, and then we'll see how this applies to OSPF.

Dr. Peter J. Welcher (CCIE #1773, CCSI #94014) is a Senior Consultant with Chesapeake NetCraftsmen. NetCraftsmen is a high-end consulting firm and Cisco Premier Partner dedicated to quality consulting and knowledge transfer. NetCraftsmen has eleven CCIE's (4 of whom are double-CCIE's, R&S and Security).
NetCraftsmen has expertise including large network high-availability routing/switching and design, VoIP, QoS, MPLS, network management, security, IP multicast, and other areas. See http://www.netcraftsmen.net for more information about NetCraftsmen. New articles will be posted under the Articles link. Questions, suggestions for articles, etc. can be sent to the author.

Copyright (C) 1998, Peter J. Welcher
September 2000

Adhesion and Cohesion

Hello, I'm doing a research project at Purdue University and one part of the project requires a mathematical analysis of fluid interaction with a surface. Specifically, I need to find an expression for the distance from a surface that the fluid will start to rise (or decline) due to the effects of adhesion and cohesion. An example of this is the meniscus in a cylinder of fluid. I'd imagine this expression would be a function of the properties of the fluid and the contact surface. Thanks.

Jeremy Davis

I'm afraid this level of detail is a bit beyond the remit of a publication like Plus (which is aimed at school level rather than University), but I can give you some pointers on the web. The rise/decline can be expressed as a function of the contact angle (which depends on the fluid and the surface). Have a look at: and see whether they give you any help!

Robert Hunt

Always a correct answer?

I desperately need some help and fast. I am an A-level student and have been faced with a 500 word essay on discussing: "The joy of maths is that it always has a correct answer". I've thought of a few things but need more ideas....got any? Thank you!!

Karen

Dear Karen,

As well as Godel's Theorem, which will certainly give you some ideas, you might like to think about the fact that whilst maths problems at SCHOOL are well-understood and the answer is known (by your teacher, for example), most maths problems which are being studied for real by researchers have UNKNOWN answers. So the difficulty is first to decide what the answer is, and then try to prove it. For example, imagine a connected graph (i.e., a collection of coloured points some of which are joined by lines - which could cross over). Now think about the number of different colours needed to colour in this graph so that no two points which are joined by a line have the same colour. The actual answer for the minimum number of colours needed is not known.
So far, all that mathematicians have managed to do is find some estimates ("upper bounds") for the correct answer. They are still trying to work out the real answer, but even rough estimates are useful (because the answer has lots of applications in the real world: for instance mobile phone networks only work because of this kind of maths). You can read more about this at Radio Controlled?, an article in Issue 8 of Plus. I do hope that you intend to give credit to Plus in your essay for giving you something to think about - otherwise we might be worried that you're cheating! Good luck!

Dr Robert Hunt. (Editor, Plus)

Galloping Gyroscopes

Firstly, congratulations on your work in bringing this magazine to the web. My question is directed at the authors (Kona Macphee and Hugh Hunt) of an article I recently read about the gyroscopic effect, Galloping Gyroscopes. I would much appreciate your passing this message on to either of the authors.

In the section titled "Conservation of Angular Momentum" it is stated that:

If you now tilt the gyroscope to the vertical, so that its spinning disc is in the horizontal plane, you have transferred its angular momentum to the vertical plane. Since the net angular momentum in this plane must remain zero (to conserve angular momentum - it was zero in this plane when we started) you'll find that you and the chair start spinning to compensate!

The upshot of this statement seems to be that the angular momentum transfer has been cancelled out. If you consider the gyroscope as a part of a larger system (include the person and the chair), why is it that the angular momentum is not simply transferred from the horizontal to the vertical? I understand that the net effect has to be zero, but you seem to be saying that the angular momentum in the vertical plane has to be re-directed to the horizontal plane. Any light you can shed on my query will be greatly appreciated.
I am trying to get a handle on this topic as I would like to be able to explain why pushing on the right handle bar of a motorcycle causes the bike to veer right.

Peter Metcher

Dear Peter,

The key to understanding this is to realise that angular momentum is a vector quantity, and is conserved as a vector quantity provided no couples act upon the system. It therefore makes sense to look at just the vertical component of this vector - i.e. the angular momentum associated with spin about a vertical axis. The chair swivels freely on a vertical axis, so the Earth is unable to exert a significant couple on the chair/gyroscope system in this direction. The vertical component of angular momentum of the chair/gyroscope system must therefore be constant. As the article says, this component was initially zero. If the gyroscope is turned so its axis is vertical, then the gyroscope contributes a non-zero amount to the vertical angular momentum. Since the total must be zero, the chair/sitter has to revolve in the opposite direction, so contributing an exactly balancing negative angular momentum to the system.

Can I get a printed copy?

Is it possible to have your publication mailed out for every issue, or is it primarily an on-line publication?

Triveni Perera

Dear Triveni,

It is primarily an on-line publication; however, it should print reasonably well if you select the 'get printable page' version first.

Modelling Problem

I am trying to track down the solution to a problem we face in real life. It's about 20 years since I did maths at University and I haven't touched it since, so I was wondering if you could provide the answer or point me to a book / person who may be able to answer.

We're not really the right people to ask, as Plus is a magazine for A-level pupils! But anyway, your problem as stated is possible to solve...

Tank full of liquid, volume = VT
Pipework going from tank and returning to tank, volume in pipework = VP
Flow rate of liquid through pipe = F.
This is the current steady state situation. Then an addition of different material is made to the tank. Question: how long (in tank "turnovers") does it take for the composition of the liquid to become "uniform" again?????

1. Instantaneous perfect mixing in the tank.
2. No mixing in the pipework.

Real situation: VP = approx 0.6 VT. Tank is stirred.

Paul Adeney

Let the volume fraction of the "different material" in the tank at time t be φ(t). In a short time dt, a volume F dt leaves the tank carrying fraction φ(t), and a volume F dt returns from the pipework. Since there is no mixing in the pipework, the returning fluid has the fraction it had when it left the tank, a transit time τ = VP/F earlier. Hence the new volume fraction is

φ(t + dt) = φ(t) + (F dt / VT) [φ(t − τ) − φ(t)],

which means (using elementary calculus) that

VT dφ/dt = F [φ(t − τ) − φ(t)].

This differential equation is only valid when t ≥ τ; for 0 ≤ t < τ the returning fluid still has the original fraction φ0, so that instead

VT dφ/dt = F [φ0 − φ(t)].

These differential equations can be solved analytically, but it's a bit messy. It's probably easier to solve them on a computer instead. However, of course, the fluid is NEVER perfectly "uniform" again, but merely approaches uniformity!

Yours, Robert Hunt. (Editor, Plus)

Application of Number

I am a lecturer specialising in teaching Application of Number. I am rather concerned that this is the Year of Mathematics, but the emphasis seems to be on higher level maths and not on the Application of Number (as in Key Skills). Do you know of anyone who is working in this area? I am keen to get networking!!

Lynn Tranter

Dear Lynn,

Start with the NRICH website and NRICH Primary website. Then follow the link to 'Other Maths Sites'. Plus is aimed at 'A' level and up. NRICH does have quite a bit of number work.
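Following up the Modelling Problem above: the suggestion to solve the equations on a computer is easy to carry out. The sketch below is mine, not from the letters page. It assumes plug flow in the pipe (fluid re-enters the tank with the fraction it had when it left, a transit time VP/F earlier) and made-up units, with VP = 0.6 VT as in the "real situation".

```python
# Forward-Euler simulation of the tank/pipe mixing problem.
# Perfect instantaneous mixing in the tank; plug flow (no mixing) in the pipe,
# modelled as a queue of parcels each of volume F*dt.
VT, VP, F = 1.0, 0.6, 1.0          # tank volume, pipe volume, flow rate
dt = 0.001
tau_steps = int(round(VP / F / dt))  # pipe transit time, in time steps

phi = 1.0                          # fraction in the tank just after the addition
pipe = [0.0] * tau_steps           # parcels currently travelling through the pipe

t = 0.0
# Run until tank and every pipe parcel agree to within 0.1%.
while max(pipe + [phi]) - min(pipe + [phi]) > 1e-3:
    incoming = pipe.pop(0)             # oldest parcel returns to the tank
    pipe.append(phi)                   # a new parcel leaves the tank now
    phi += (F / VT) * (incoming - phi) * dt
    t += dt

# The system never becomes exactly uniform; it approaches the fully mixed
# fraction 1.0 * VT / (VT + VP) = 0.625.
print(round(phi, 3), "after t =", round(t, 2))
```

As the letter says, the fraction only approaches uniformity; the simulation just tells you how long it takes to get within a chosen tolerance.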
Jasper, GA Math Tutor

Find a Jasper, GA Math Tutor

...In high school Abigail earned the National Merit scholarship through her exemplary SAT scores. She then went on to graduate from Georgia Tech with a degree in Applied Mathematics and a minor in Economics. She went through college at an accelerated pace of 3 years instead of 4, while maintaining her HOPE scholarship.
22 Subjects: including algebra 2, reading, differential equations, ACT Math

...I took Differential Equations in high school as well as in college. In addition to taking a semester of differential equations and one of partial differential equations, I took several classes on Advanced Calculus, which are proof-based classes where you fundamentally prove and construct the logi...
41 Subjects: including calculus, logic, linear algebra, differential equations

...My method of tutoring is to first find out what the student knows through an initial evaluation done in a unique child-friendly way. They may not be aware that they are being assessed. Next, I let the student tell me what they do not like about the subject we are working on and then show them fu...
26 Subjects: including algebra 2, phonics, study skills, special needs

I have a BS and MS in Physics from Georgia Tech and a Ph.D. in Mathematics from Carnegie Mellon University. I worked for 30+ years as an applied mathematician for Westinghouse in Pittsburgh. During that time I also taught as an adjunct professor at CMU and at Duquesne University in the Mathematics Departments.
10 Subjects: including algebra 1, algebra 2, calculus, geometry

...I have been a private tutor since my freshman year of college in 2009 and have tutored more than 90 students in the last five years. Because I am motivated by my own acquisition of knowledge, I have been able to tutor in many different subjects. I specialize in standardized test preparation including all sections of the PSAT, SAT, ACT, and ASVAB.
26 Subjects: including calculus, SAT math, algebra 1, algebra 2
Algorithm of the Week: Prim's Minimum Spanning Tree

Along with Kruskal's minimum spanning tree algorithm, there's another general algorithm that solves the problem: the algorithm of Prim. As we already know, the algorithm of Kruskal works in a pretty natural and logical way. Since we're trying to build an MST, which is naturally built from the minimal edges of the graph (G), we sort them in non-descending order and we start building the tree. During the whole process of building the final minimum spanning tree, Kruskal's algorithm keeps a forest of trees. The number of trees in that forest decreases on each step, and finally we get the minimum weight spanning tree.

A key point in Kruskal's approach is the way we get the "next" edge from G that should be added to one of the trees of the forest (or used to connect two trees from the forest). The only thing we should be aware of is to choose an edge connecting two vertices - u and v - that aren't in the same tree. That's all.

An important feature of Kruskal's algorithm is that it builds the MST just by sorting the edges by their weight and doesn't care about a particular starting vertex. At the same time there's another algorithm that builds an MST - the algorithm of Prim, designed by Robert Prim in 1957.

The idea behind Prim's algorithm is rather different from Kruskal's approach. During the process of building the MST this algorithm keeps a single tree, which is at every step a subtree of the final minimum weight spanning tree. On each step we choose an edge which we add to the growing tree that finally forms the MST. It is a somewhat unnatural approach! We start from a given vertex, and initially we don't choose the lightest edge of the graph. Thus during the whole process the tree grows, but outside the tree (T) there might be edges that are lighter than those in the tree (i.e. the edge (5, 1) from the tree above is lighter than (2, 5), but (2, 5) is added to the growing tree before the edge (5, 1)).
Compared to the Kruskal’s algorithm this time everything seems to be really unnatural. How we should be sure the final tree (T) will be a minimum spanning tree since we don’t get the lightest edge on each step? Actually we are sure that the final tree is a MST because of another obvious feature of the minimum spanning trees. They should “connect” all the vertices of G, thus somehow at least one edge reaching each vertex will appear in the MST. Thus we shouldn’t care where do we start, the only important thing is to choose the lightest edge that’s visible so far. This algorithm looks much like Dijkstra’s shortest path in a graph, because we start from a vertex, we push all the edges starting from this node to a priority queue and we chose the lightest edge. Going to the next node connected by this edge we append to the queue all the edges that aren’t in the queue. That way the queue grows and we get always the lightest edge – thus forming a priority queue. Now let’s summarize the algorithm of Prim Pseudo Code As an initial input we have the graph (G) and a starting vertex (s). 1. Make a queue (Q) with all the vertices of G (V); 2. For each member of Q set the priority to INFINITY; 3. Only for the starting vertex (s) set the priority to 0; 4. The parent of (s) should be NULL; 5. While Q isn’t empty 6. Get the minimum from Q – let’s say (u); (priority queue); 7. For each adjacent vertex to (v) to (u) 8. If (v) is in Q and weight of (u, v) < priority of (v) then 9. The parent of (v) is set to be (u) 10. The priority of (v) is the weight of (u, v) Indeed it looks much like Dijkstra’s algorithm. Here’s a PHP implementation of the algorithm of Prim, which directly follows the pseudo code. 
// Prim's algorithm
define('INFINITY', 100000000);

// the graph, as an adjacency matrix (0 means "no edge")
$G = array(
    0 => array( 0,  4, 0,  0,  0,  0, 0, 0,  8),
    1 => array( 4,  0, 8,  0,  0,  0, 0, 0, 11),
    2 => array( 0,  8, 0,  7,  0,  4, 2, 0,  0),
    3 => array( 0,  0, 7,  0,  9, 14, 0, 0,  0),
    4 => array( 0,  0, 0,  9,  0, 10, 0, 0,  0),
    5 => array( 0,  0, 4, 14, 10,  0, 0, 2,  0),
    6 => array( 0,  0, 2,  0,  0,  0, 0, 6,  7),
    7 => array( 0,  0, 0,  0,  0,  2, 6, 0,  1),
    8 => array( 8, 11, 0,  0,  0,  0, 7, 1,  0),
);

function prim(&$graph, $start)
{
    $q = array(); // queue: vertex => priority (best known edge weight)
    $p = array(); // parent

    foreach (array_keys($graph) as $k) {
        $q[$k] = INFINITY;
    }
    $q[$start] = 0;
    $p[$start] = NULL;

    while ($q) {
        // get the vertex with the minimum priority and remove it from the queue
        asort($q);
        $u = key($q);
        unset($q[$u]);

        foreach ($graph[$u] as $v => $weight) {
            if ($weight > 0 && isset($q[$v]) && $weight < $q[$v]) {
                $p[$v] = $u;
                $q[$v] = $weight;
            }
        }
    }

    return $p;
}

print_r(prim($G, 5));

It's a curious fact that the algorithm named after Robert Prim wasn't first discovered by him. It's considered that the Czech mathematician Vojtech Jarnik discovered it back in 1930. However, we now know this algorithm as the algorithm of Prim, who independently rediscovered it in 1957 as I said above, and finally Edsger Dijkstra described it in 1959. That's why his algorithm for finding the single-source shortest paths in a graph looks so much like this one. Perhaps by finding this algorithm on minimum spanning trees, Dijkstra discovered how we can find the shortest paths to all vertices using a priority queue - the two algorithms differ mainly in the priority they assign to each vertex. Because Jarnik found and described this algorithm 27 years earlier than Robert Prim, today it's perhaps fairer to call it the Prim-Jarnik algorithm.

Published at DZone with permission of Stoimen Popov, author and DZone MVB. (source)

Claude Lalyre replied on Tue, 2012/11/20 - 11:46am

The 30's, the 50's and the 60's! At that time they did not have laptops nor PCs, but they were really skilled with algorithms!
They found almost everything at that time, and today we are still using their algorithms. It seems there is no invention anymore...

Dilum Ranatunga replied on Tue, 2012/11/27 - 4:38pm, in reply to Claude Lalyre

Claude, I get that feeling sometimes, too. But take heart -- there are new things being imagined every day. For example, the SkipList is from 1990.
Parallel Raytracing: A Case Study on Partitioning and Scheduling on Workstation Clusters - Parallel Computing, 1998

Cited by 6 (3 self)

The development of parallel programs is primarily concerned with application speed. This has led to the development of parallel applications in which software engineering aspects play only subordinate roles. In order to increase software quality in parallel applications, we motivate the construction of parallel programs by composing active objects which interact by means of an object-oriented coordination model. This paper presents a formalism for specifying the behaviour of parallel active objects and a corresponding notion of behavioural types which can be used for verifying whether certain active objects conform to a specified behaviour. Our approach is based on high-level Petri nets which enable (besides other benefits) automated analysis, in particular for automated type checking of active objects. We illustrate the usefulness of our approach by presenting reusable active objects for a manager/worker architecture. Their correct interaction is shown by automated checking of behavioural types.

- Proc. High Performance Computing Symposium 2002, 2002

Cited by 6 (4 self)

Keywords: parallel ray tracing, pattern search, transmitter placement.
This paper explores the application of a global optimization technique to solve the optimal transmitter placement problem in wireless system design. An efficient pattern search algorithm—DIRECT (DIviding RECTangles) of Jones, Perttunen, and Stuckman (1993)—has been connected to a parallel 3D radio propagation ray tracing modeler running on a 200-node Beowulf cluster of workstations. The algorithm optimizes, for a given computational investment, the locations of a specified number of transmitters across the feasible region of the design space. The focus of the paper is on the implementations of the DIRECT algorithm and the parallel 3D ray tracing propagation model. Both simulation results and site measurement data are presented in support of the effectiveness of the present work. 1. - Coordination Languages and Models, number 1282 in Lecture Notes in Computer Science , 1997 "... The aim of this paper is to promote the idea of developing reusable coordination patterns for parallel computing, i.e. customizable components from which parallel applications can be built by software composition. To illustrate the idea, a fundamental manager/worker coordination pattern useful for p ..." Cited by 5 (3 self) Add to MetaCart The aim of this paper is to promote the idea of developing reusable coordination patterns for parallel computing, i.e. customizable components from which parallel applications can be built by software composition. To illustrate the idea, a fundamental manager/worker coordination pattern useful for programming a variety of parallel applications is presented. - in: Proc. Euro PVM/MPI’99, Lecture Notes in Computer Science , 1999 "... . The work presents an MPI parallel implementation of Pov-Ray, a powerful public domain ray tracing engine. The major problem in ray tracing is the large amount of CPU time needed for the elaboration of the image. With this parallel version it is possible to reduce the computation time or to rend ..." 
Cited by 3 (0 self) Add to MetaCart . The work presents an MPI parallel implementation of Pov-Ray, a powerful public domain ray tracing engine. The major problem in ray tracing is the large amount of CPU time needed for the elaboration of the image. With this parallel version it is possible to reduce the computation time or to render, with the same elaboration time, more complex or detailed images. The program was tested successfully on ParMa2, a low-cost cluster of personal computers running Linux operating system. The results are compared with those obtained with a commercial multiprocessor machine, a Silicon Graphics Onyx2 parallel processing system based on an Origin CC-NUMA architecture. 1 Introduction The purpose of this work is the implementation of a distributed version of the original code of Pov-Ray [1], that is a well known public domain program for ray tracing. The parallelization of this algorithm involves many problems that are typical of the parallel computation. The ray tracing process is very c... - In Procs. Intl. Conf. on Parallel and Distributed Processing Techniques and Applications (PDPTA`98 , 1998 "... To reduce the computation times required for rendering animations, a new incremental raytracing method that computes only the changed parts of images in an animation sequence is proposed. This method is integrated into a parallel version of the POV--Ray raytracing package implemented on a network of ..." Cited by 3 (0 self) Add to MetaCart To reduce the computation times required for rendering animations, a new incremental raytracing method that computes only the changed parts of images in an animation sequence is proposed. This method is integrated into a parallel version of the POV--Ray raytracing package implemented on a network of workstations using the MPI message passing interface. The parallelization relies on a manager/ worker scheme which incorporates dynamic task assignment to achieve load balancing. 
The results of several experiments indicate that the incremental raytracing method yields a reduction of computation time roughly proportional to the number of changed pixels in a particular animated scene. Almost linear speedups are obtained for the parallel versions running on up to 18 workstations. Keywords: Incremental raytracing, parallelism, network of workstations, MPI 1 Introduction Raytracing [1] is a technique used for producing complex graphics images involving reflections, shadows, transparent objects ... , 2002 "... This thesis investigates several issues in data and computation modeling for scientific problem solving environments (PSEs). A PSE is viewed as a software system that provides (i) a library of simulation components, (ii) experiment management, (ii) reasoning about simulations and data, and (iv) prob ..." Cited by 2 (1 self) Add to MetaCart This thesis investigates several issues in data and computation modeling for scientific problem solving environments (PSEs). A PSE is viewed as a software system that provides (i) a library of simulation components, (ii) experiment management, (ii) reasoning about simulations and data, and (iv) problem solving abstractions. Three specific ideas, in functionalities (ii)-(iv), form the contributions of this thesis. These include the EMDAG system for experiment management, the BSML markup language for data interchange, and the use of data mining for conducting non-trivial parameter studies. This work emphasizes data modeling and management, two important aspects that have been largely neglected in modern PSE research. All studies are performed in the context of S 4 W, a sophisticated PSE for wireless system design. - In Proceedings of the Next Generation Software Workshop, 16th International Parallel and Distributed Processing Symposium (IPDPS’02 , 2002 "... In this paper, a global optimization technique is applied to solve the optimal transmitter placement problem for indoor wireless systems. 
An e#cient pattern search algorithm---DIRECT (DIviding RECTangles) of Jones, Perttunen, and Stuckman (1993)---has been connected to a parallel 3D radio propagatio ..." Add to MetaCart In this paper, a global optimization technique is applied to solve the optimal transmitter placement problem for indoor wireless systems. An e#cient pattern search algorithm---DIRECT (DIviding RECTangles) of Jones, Perttunen, and Stuckman (1993)---has been connected to a parallel 3D radio propagation ray tracing modeler running on a 200-node Beowulf cluster of Linux workstations. Surrogate functions for a parallel WCDMA (wideband code division multiple access) simulator were used to estimate the system performance for the global optimization algorithm. Power coverage and BER (bit error rate) are considered as two di#erent criteria for optimizing locations of a specified number of transmitters across the feasible region of the design space. This paper briefly describes the underlying radio propagation and WCDMA simulations and focuses on the design issues of the optimization loop. - EUROGRAPHICS SYMPOSIUM ON PARALLEL GRAPHICS AND VISUALIZATION (2004) , 2004 "... This paper investigates assignment strategies (load balancing algorithms) for process farms which solve the problem of online placement of a constant number of independent tasks with given, but unknown, time complexities onto a homogeneous network of processors with a given latency. Results for the ..." Add to MetaCart This paper investigates assignment strategies (load balancing algorithms) for process farms which solve the problem of online placement of a constant number of independent tasks with given, but unknown, time complexities onto a homogeneous network of processors with a given latency. Results for the chunking and factoring assignment strategies are summarised for a probabilistic model which models tasks' time complexities as realisations of a random variable with known mean and variance. 
Then a deterministic model is presented which requires the knowledge of the minimal and maximal tasks' complexities. While the goal in the probabilistic model is the minimisation of the expected makespan, the goal in the deterministic model is the minimisation of the worstcase makespan. We give a novel analysis of chunking and factoring for the deterministic model. In the context of demand-driven parallel ray tracing, tasks' time complexities are unfortunately unknown until the actual computation finishes. Therefore we propose automatic self-tuning procedures which estimate the missing information in run-time. We experimentally demonstrate for an "everyday ray tracing setting" that chunking does not perform much worse than factoring on up to 128 processors, if the parameters of these strategies are properly tuned. This may seem surprising. However, the experimentally measured efficiencies agree with our theoretical predictions. "... In this paper we present Objective Linda, a coordination model in which objectorientation is combined with uncoupled, generative communication in order to enable object-oriented parallel programming in networked computing resources. Objective Linda provides suitable abstractions for structuring lar ..." Add to MetaCart In this paper we present Objective Linda, a coordination model in which objectorientation is combined with uncoupled, generative communication in order to enable object-oriented parallel programming in networked computing resources. Objective Linda provides suitable abstractions for structuring large software systems, supports interoperability between different programming languages and parallel architectures and simplifies the development of parallel applications. Its use is illustrated by presenting programming examples, a prototype implementation is described, and measurements for evaluating the implementation efficiency and the performance of parallel applications are presented. , 2004 "... 
Ray-tracing based radio wave propagation prediction models play an important role in the design of contemporary wireless networks as they may now take into account diverse physical phenomena including reflections, diffractions, and diffuse scattering. However, such models are computationally expensi ..." Add to MetaCart Ray-tracing based radio wave propagation prediction models play an important role in the design of contemporary wireless networks as they may now take into account diverse physical phenomena including reflections, diffractions, and diffuse scattering. However, such models are computationally expensive even for moderately complex geographic environments. In this paper, we propose a computational framework that functions on a network of workstations (NOW) and helps speed up the lengthy prediction process. In ray-tracing based radio propagation prediction models, orders of diffractions are usually processed in a stage-by-stage fashion. In addition, various source points (transmitters, diffraction corners, or diffuse scattering points) and different ray-paths require different processing times. To address these widely varying needs, we propose a combination of the phase-parallel and manager/workers paradigms as the underpinning framework. The phase-parallel component is used to coordinate different computation stages, while the manager/workers paradigm is used to balance workloads among nodes within each stage. The original computation is partitioned into multiple small tasks based on either raypath-level or source-point-level granularity. Dynamic load-balancing scheduling schemes are employed to allocate the resulting tasks to the workers.
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.46.4296","timestamp":"2014-04-23T19:02:07Z","content_type":null,"content_length":"40754","record_id":"<urn:uuid:0bf36ebc-d85c-4be6-928c-7454dc9bece8>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple connectedness via closed curves or simple closed curves?

I've recently read some papers and books involving simply connected domains in Euclidean space (dimension at least 2), where a domain is an open connected set. The usual definition is a (connected) set for which every continuous closed curve is (freely) contractible, while some authors only require that every continuous simple closed curve is contractible. The authors who define simple connectedness using simple closed curves do so in order to use Stokes' Theorem or the Jordan curve theorem somewhere in the sequel; however, they never mention (not even with a reference) that their definition is equivalent to the usual one!

My question is if there is a proof written down somewhere (with all the details) proving the equivalence (for domains in $\mathbb{R}^n$ with $n \geq 2$)? If not, does someone know of an "easy" proof using a minimal amount of knowledge, say that of a first course in topology?

Tags: at.algebraic-topology, gn.general-topology

Any curve in an open set of euclidean space is homotopic to a piecewise-linear curve (cover the curve with balls contained in the domain...). And in turn, any piecewise-linear curve is homotopic to a product of simple ones (product in the sense of the fundamental group). So if all simple curves are contractible, then all curves are contractible. Btw you may want to worry about base-points, and whether the homotopies (in the book you are reading) are assumed to be through simple curves or not. – Pierre Apr 5 '11 at 20:59

@Pierre: your comment seems like a correct answer to me. Would you be so kind as to leave it as such? – Pete L. Clark Apr 6 '11 at 2:55

Topologists invoke "transversality" for such questions, which refers to a series of theorems that rigorously prove intuitive statements like "every continuous function from a circle to a smooth manifold of dimension > 2 is $\epsilon$-homotopic to a smooth embedding", or "every continuous function from a circle to a smooth 2-manifold is $\epsilon$-homotopic to a finite union of simple closed curves." These theorems aren't trivial to prove, but they form the basis on which most statements like the one you ask follow. Careful expositions are found, e.g., in Hirsch's book. – Paul Apr 6 '11 at 4:04

@Pierre: Is there a reference to prove that "any piecewise-linear curve is homotopic to a product of simple ones (product in the sense of the fundamental group)"? Although this statement is intuitively clear, how does one prove it carefully? (Last week I asked a famous topologist exactly this question and he showed me carefully how to do it --- the proof is very long if one were to write it out in all details.) – simply connected Apr 6 '11 at 6:44

2 Answers

As pointed out by Pierre and Paul in comments, there are several standard ways to deal with this kind of issue. A good answer really depends on what you're assuming you start from, and where you're trying to go to. The Jordan curve theorem and Stokes' theorem are both fairly sophisticated and difficult for beginners to grasp, so it's a bit hard to see how only analyzing embedded curves is streamlining anything, except perhaps helping with people's intuitive images---but even so, it may do more harm than good. Perhaps it's worth pointing out that this statement is false in greater generality, for instance for closed subsets of $\mathbb R^3$. Here's an example in $\mathbb R^3$: consider a sequence of ellipsoids that get increasingly long and thin; to be specific, they can have axes of length $2^{-k}$, $2^{-k}$ and $2^k$.
Stack them in $\mathbb R^3$ with short axes contained in the $z$-axis, so each one touches the next in a single point with long axes parallel to the $x$-axis, and let $X$ be their union together with the $x$-axis. Any simple closed curve in $X$ is contained in a single ellipsoid, since to go from one to the next it has to cross a single point, so every simple closed curve is contractible. However, a closed curve in the $yz$-cross-section that goes down one side and back up the other side is not contractible. The fundamental group is in fact rather large and crazy. Anyway, here are some lines of reasoning that can overcome whatever hurdle needs to be overcome:

1. PL approximation, as suggested by Pierre: this is easy, the keyword is "simplicial approximation". I'll phrase it for maps of a circle to Euclidean space as in the question, even though essentially the same construction works in far greater generality. Given an open subset $U \subset \mathbb{R}^n$ and given a map $f: S^1 \to U$, then by compactness $S^1$ has a finite cover by neighborhoods that are components of $f^{-1}$ of a ball. If $U_i$ is a minimal cover of this form, there is a point $x_i$ that is in $U_i$ but not in any other of the elements of the cover; this gives a circular ordering to the $U_i$. There is a sequence of points $y_i \in U_i \cap U_{i+1}$, indices taken mod the number of elements of the cover. The line segment between $y_i$ and $y_{i+1}$ is contained in $U_i$, since balls are convex. (This generalizes readily to the statement that for any simplicial complex, there is a subdivision where the extension that is affine on each simplex has image contained in $U$. It also generalizes readily to the case that $U$ is an open subset of a PL or differentiable manifold).

2. Raising the dimension: if you take the graph of a map of $S^1$ into a space $X$, it is an embedding.
If you're (needlessly) worried about integrating differential forms on non-embedded curves, pull the forms back to the graph, where the curve is embedded. If you want to map to a subset of Euclidean space with the same homotopy type, just embed the graph of the map (a subset of $S^1 \times U$) into $\mathbb R^2 \times U$. (There's a very general technique to do this, if the domain is a manifold more complicated than $S^1$, even when it's just a topological manifold, using coordinate charts together with a partition of unity to embed the manifold in the product of its coordinate charts).

3. The actual issue for integration, using Stokes' theorem etc., is regularity --- to make it simple, restrict to rectifiable curves, and don't worry about embeddedness. Any continuous map into Euclidean space is easily made homotopic to a smooth curve, by convolving with a smooth bump function---the derivatives are computed by convolving with the variation of the bump, as you move from point to point.

4. Similarly, you can approximate any continuous map by a real-analytic function, if you convolve with a time-$\epsilon$ solution of the heat equation (a Gaussian with very small variance, wrapped around the circle). This remains in $U$ if $\epsilon$ is small enough. A real-analytic map either has finitely many double points, or is a covering space to its image; in either case you reduce simple connectivity to the case of simple curves.

5. Sard's theorem and transversality, as mentioned by Paul. Sard's theorem is nice and elegant and has many applications, including the statement that a generic smooth map of a curve into the plane is an immersion with finitely many self-intersection points, as is any generic smooth map of an $n$-manifold into a manifold of dimension $2n$. If the target dimension is greater than $2n$, then a generic smooth map is an embedding.
It took me a few minutes to figure this out, so in case this helps anyone else to "get" the picture in the second paragraph: a "short circle" means a circle like a belt on one of the (rotationally symmetric) ellipsoids with radius equal to the semi-minor axis, say $2^{-k}$. – j.c. Apr 7 '11 at 1:10

@jc7: Thanks for the comment. I hadn't anticipated this way my phrasing could mislead --- I'll edit and see if I can bring forward the intended mental image. – Bill Thurston Apr 7 '11

The equivalence is conceptually easy: each closed curve is a union of simple closed curves. If you can contract each simple closed curve, you can contract the whole curve. Each simple closed curve also lives in the set of closed curves, so the equivalence in the other direction is simple. This sort of proof shouldn't be too hard for you to construct, assuming you have the knowledge of a first course in topology. Some care might need to be taken in constructing the explicit homotopy and in dealing with a curve which has infinitely many self-intersection points, but these are both issues you should have seen in such a first course in topology.

This seems to fall short of rigorous. Suppose for instance that the image of the curve contains the unit cube in Euclidean space. How are you going to decompose it as a union of simple closed curves? – Pete L. Clark Apr 6 '11 at 2:14
{"url":"http://mathoverflow.net/questions/60734/simple-connectedness-via-closed-curves-or-simple-closed-curves?sort=oldest","timestamp":"2014-04-16T10:50:02Z","content_type":null,"content_length":"68018","record_id":"<urn:uuid:14911029-f078-4e5e-8825-8cfbb3c5bac1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
C3 Condensed Matter Major Option

C3 option coordinator: a [dot] boothroyd1 [at] physics [dot] ox [dot] ac [dot] uk (Prof Andrew Boothroyd)

Latest News: 16/10/2013 Students registered for C3 have been assigned to classes.

The 4th year condensed matter physics option covers all the topics introduced in the 3rd year course, but at an academically much more satisfying level. The course is primarily aimed at those interested in pursuing a research career, and is designed to take you to a level where you can comprehend research publications over a wide range of areas.

Slides from the introductory talk on the C3 option can be found here. Reading material and vacation exercises for students considering taking C3 in 2013/14 can be found here. Solutions to the vacation problems here.

Formal Syllabus
• Symmetry. Crystal structure, reciprocal lattice, Brillouin zones — general treatment for non-orthogonal axes. X-ray, neutron and electron diffraction. Disordered materials.
• Lattice dynamics. Measurement of phonon dispersion. Thermal properties of crystals. Phase transitions. Soft modes.
• Electronic structure of solids. Semiconductors. Transport of heat and electrical current. Fermiology. Landau quantisation. Low-dimensional structures.
• Lorentz oscillator model. Optical response of free electrons and lattice. Optical transitions in semiconductors. Excitons.
• Isolated magnetic ions. Crystal field effects. Magnetic resonance. Exchange interactions. Localized and itinerant magnets. Magnetic ordering and phase transitions, critical phenomena, spin waves.
• Conventional and unconventional superconductors. Thermodynamic treatment. London, BCS and Ginzburg–Landau theories. Flux quantization, Josephson effects, quantum interference.

Recommended Reading
• Ashcroft and Mermin, Solid State Physics, Saunders, 1976.
• Kittel, Introduction to Solid State Physics (8th Ed), Wiley, 2005.
• Burns, Solid State Physics, Academic Press, 1990.
• Chaikin and Lubensky, Principles of Condensed Matter Physics, Cambridge University Press, 2000.

Individual Topics
• Dove, Structure and Dynamics, Oxford University Press, 2003.
• Hammond, The Basics of Crystallography and Diffraction, Oxford University Press, 2001.
• Giacovazzo et al., Fundamentals of Crystallography, Oxford University Press, 2002.
• Radaelli, Symmetry in Crystallography, Oxford University Press, 2011.
• Singleton, Band Theory and Electronic Properties of Solids, Oxford University Press, 2001.
• Fox, Optical Properties of Solids, Oxford University Press, 2001.
• Blundell, Magnetism in Condensed Matter, Oxford University Press, 2001.
• Yosida, Theory of Magnetism, Springer, 1996.
• Annett, Superconductivity, Superfluids and Condensates, Oxford University Press, 2004.
• Tinkham, Introduction to Superconductivity, McGraw-Hill, 1996.
• Blundell, Superconductivity: A Very Short Introduction, Oxford University Press, 2009.

Condensed Matter Physics Subdepartment

Intercollegiate classes
Details of the classes for 2013/14 are listed here.

Lectures, Handouts and Problem Sets
The course is divided into five sections: crystal structure & dynamics, optical properties of solids, band theory, magnetism and superconductivity. Details of each part are given below. Lecture notes and problem sheets are only available for download within the university computer network (if you are outside Oxford you must use the login given in the lectures).

Crystal Structure & Dynamics (2013/14)
Prof PG Radaelli [10 lectures (MT), 2 classes]
Problem Sets
Lecture Notes
Lecture Slides
Supplementary Material: Mostly full ab initio derivations of all the important formulas and some additional topics.
Reference material:

Band Theory and Electronic Properties of Solids (2013/14)
Prof RJ Nicholas [10 lectures (MT), 2 classes]
Problem Sets
Lecture Notes
Handout 1 covers revision of 3rd year material. Lectures begin with material covered in Handout 2 (Bloch's Theorem).

Magnetism (2013/14)
Dr R Coldea [7 lectures (HT), 1 class]

Superconductivity (2012/13)
Dr Paul Goddard (Dr Peter Leek in 2013/14) [7 lectures (TT), 1 class]

Optical Properties of Solids (2013/14)
Dr LM Herz [6 lectures (TT), 1 class]
{"url":"http://www2.physics.ox.ac.uk/students/course-materials/c3-condensed-matter-major-option","timestamp":"2014-04-18T23:15:55Z","content_type":null,"content_length":"29737","record_id":"<urn:uuid:75f027a8-a0f8-4f5f-aa58-34d6ff6fe958>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
An Introduction "Truth is stranger than fiction; fiction has to make sense." - Leo Rosten Quantum Mechanics: An Introduction At the smallest, subatomic level, quantum mechanics has made remarkable discoveries about the behaviour of elementary particles. It would seem logical to assume that by studying these fundamental building blocks of our universe we might gain some of our best insights into the true nature of reality. The predictions of quantum mechanics are very much driven by experimental results. Quantum mechanics does not have much to say about "why" things happen, it can just be used to predict "how" things behave, that behaviour being based on the well-established results of experiments. And the most famous of those experiments is the double-slit experiment. In September 2002, the double-slit experiment was voted "the most beautiful experiment" by readers of Physics World. Richard Feynman is said to have remarked that it contains everything you need to know about quantum mechanics. But perhaps the most useful property of the experiment is that it shows just how weird quantum reality can be! In the double-slit experiment an electron gun is aimed at a screen with two slits, and the positions of the electrons is recorded after they pass through one of the two slits, making little dots on the screen. It is found that an interference pattern is produced on the screen just like the one produced by diffraction of a light or water wave passing through the two slits. There are bright bands ("constructive interference") and dark bands ("destructive interference"). This is quite strange: electron particles are interfering with each other as if they were waves. However, things turn much more weird if we only emit one electron at a time. We wait until an electron makes a dot on the screen before emitting another electron, so there is only ever one electron in the system at a time. 
A pattern slowly builds on the screen, dot-by-dot, as each individual electron hits the screen. What we see is quite incredible: the accumulation of dots on the screen eventually produces a pattern of light and dark bands - an interference pattern emerges even though there is only one electron in the system at any one time! The electron appears to be interfering with itself. So in some strange way it is as if the electron goes through both slits at once!! How can this be? Maybe half the electron is going through one slit and half through the other slit? But if we put a small detector screen on the other side of one slit we only detect whole electrons sometimes passing through (sometimes passing through the other slit). It's as if the electron does indeed pass through both slits at once, but when we make an attempt to detect it, it suddenly decides to act like a single particle which has gone through just one slit!

It is only in the last ten years or so that quantum mechanics has at last been able to shed some light on what is happening in the double-slit experiment (this will be considered later in the page on Quantum Decoherence), but before we get to that stage there is a fair bit of theory to be covered. And it's best to start at the beginning.

Quantum mechanics could be said to have started in 1900 when Max Planck made the discovery that light, which was considered to be purely wave-like, was in fact composed of energy which came in discrete packets (called "quanta"). In the Planck formula, the energy of the packets, e, is proportional to the light frequency, f, the constant of proportionality being Planck's constant, h:

e = hf

This result suggested that waves (light) were in fact composed of particles. The converse of this result came in 1923 when Louis de Broglie (pronounced to rhyme with "destroy") suggested that matter (particles) behaves as a wave (as is evident in the double-slit experiment), the wavelength, λ, of the matter wave being inversely proportional to the momentum, p, of the particle:

λ = h/p
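To get a feel for the sizes involved, here is a quick numeric check of the two formulae above. The chosen light frequency and electron speed are illustrative assumptions, not values from the text:

```python
# Numeric check of e = hf (Planck) and lambda = h/p (de Broglie).
h = 6.626e-34    # Planck's constant, in joule-seconds
c = 2.998e8      # speed of light, m/s
m_e = 9.109e-31  # electron rest mass, kg

# Planck, e = h f: energy of one quantum ("photon") of green light.
f_green = 5.6e14                 # Hz (illustrative)
e_photon = h * f_green           # ~3.7e-19 J -- so tiny that light looks continuous

# de Broglie, lambda = h / p: wavelength of an electron at 1% of light speed.
p_electron = m_e * (0.01 * c)    # momentum, kg m/s (illustrative speed)
lam_electron = h / p_electron    # ~2.4e-10 m, roughly the size of an atom
```

The tiny photon energy is why the granularity of light goes unnoticed in everyday life, while the sub-nanometre electron wavelength is why wave behaviour only shows up at atomic scales.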
Here's the derivation: for a light wave, e = hf, and a photon's energy is related to its momentum by e = pc. Since c = fλ, combining these gives p = e/c = hf/c = h/λ, i.e. λ = h/p.

We now know that absolutely everything in the known universe is made out of these strange particle/wave entities which obey these two formulae for quantum behaviour, given above. How can we make sense of this strange wave/particle duality? Would it be possible to combine these two results in a single equation, and in the process reveal more insights into the true nature of reality at the quantum level? Yes, Erwin Schrödinger did it in 1926. And we'll see that in the next page on The Quantum Casino.

The Heisenberg Uncertainty Principle

Now let us imagine that we want to measure the position of a particular particle. In order to do this, we must "see" the particle, so we have to shine some light of wavelength, λ, onto it. The uncertainty in the resulting position measurement, Δx, is roughly the wavelength of the light used:

Δx ≈ λ

When the light particle (photon) hits the particle under observation it inevitably alters its momentum (speed) according to the result of Louis de Broglie (given above):

Δp ≈ h/λ

and on combining these two equations we get:

Δx Δp ≈ h

which is the Heisenberg Uncertainty Principle.
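The trade-off in the argument above can be sketched in a few lines: shrinking the wavelength improves the position measurement but worsens the momentum disturbance by exactly the same factor, so the product stays pinned at h. (This is the order-of-magnitude argument of the text, not the rigorous ħ/2 bound of the full theory.)

```python
h = 6.626e-34  # Planck's constant, J s

def measurement_uncertainties(wavelength):
    """Order-of-magnitude uncertainties when measuring a particle's
    position with light of the given wavelength (the heuristic argument
    from the text, not the rigorous quantum-mechanical bound)."""
    dx = wavelength       # can't locate the particle more finely than one wavelength
    dp = h / wavelength   # photon recoil disturbs the momentum by about h / lambda
    return dx, dp

# From visible light down to gamma rays: dx shrinks, dp grows,
# and the product dx * dp stays (approximately) equal to h throughout.
for lam in (5e-7, 1e-9, 1e-12):
    dx, dp = measurement_uncertainties(lam)
    assert abs(dx * dp - h) / h < 1e-12
```

No choice of probing wavelength escapes the trade-off: better light for locating the particle is necessarily more disruptive light.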
Comments are now closed on this page. Hi Andrew, thanks for these wonderfully entertaining pages - I'm in the process of reading them all. I noticed a small typo on this page: s/proton/photon/g (1 occurrence). - John, 31st July 2007 Thanks so much for calling these pages "entertaining". That's probably the most satisfying thing you could have said. If I've succeeded in presenting physics in an entertaining manner (rather than dry and dull technical papers) then I'm very happy. Thanks also for the typo spot. I've fixed the text now. Typo and broken link reports are always very welcome. - Andrew Thomas, 31st July 2007 I'm no maths/physics student but surely if you measure a particles position TWICE you can determine it's momentum. If particle z is at location a,b,c (3d co-ordinates) and then measured one tick later at a2,b2,c2 why isn't it's momentum obvious? - Neil Barnes, 11th September 2007 That's a really good question, Neil, but you're thinking in terms of classical physics. What you suggest is perfectly correct for large, "ordinary objects" which are large enough so that quantum effects are not evident. But down at the smallest scales, things just don't work like that. When you start dealing with the smallest particles, they behave very differently, counter-intuitively. After all, how would we measure the position of the smallest, elementary particle? We could only do that by interacting with it, by getting the original particle to collide with another (measuring) particle - thus inevitably modifying the very position measurement we were trying to obtain! Even with large objects we have to modify them when we take a measurement. For example, when we take the temperature of an object we have to remove a small sample of heat from the object. Or when you check the air pressure in your car tyres you have to take a bit of air out of the tyres. 
These effects are so small with large objects that we do not notice them (so your proposal would be fine in that case), but we can't ignore these effects when we deal with quantum objects. So these elementary particles behave very differently to "ordinary objects". They behave according to quantum mechanical rules: sometimes they behave like a wave, sometimes like a particle. The more we constrain one property (say, the position property) to act like a particle, localised in space, the more the other property (momentum) acts like a wave, spread out in space. Wikipedia provides a helpful analogy with sound waves which resembles your "two points in space" idea: "It is meaningless to ask about the frequency spectrum at a single moment in time, because the measure of frequency is the measure of a repetition recurring over a period of time. Indeed, in order for a signal to have a relatively well-defined frequency, it must persist for a long period of time, and conversely, a signal that occurs at a relatively well-defined moment in time (i.e., of short duration) will necessarily encompass a broad frequency band." So the more you try to pin down the frequency of a wave, the more you need to stretch out your time measurements (the two position measurements you mention), and the more accurately you know the time, the less accurate is the frequency measurement. But thanks for your idea. You'd be correct for everyday objects, just not for quantum mechanical objects. - Andrew Thomas, 11th September 2007 In the Double slit experiment - if we shoot only one electron at a time at the two slitted screen, and do not attempt to measure which slit it went thru, do we get a separate interference pattern from each such single electron, or only from the conglomeration of many electrons? - Paul Gayet, 3rd October 2007 I see what you're getting at, and the text was not clear about this.
Each electron, after it has passed through the slits, makes only a single point on our detector screen: when we try to detect a particle, we see a particle. We see the interference pattern in the distribution of dots on the screen - that's when you see the light and dark areas. So we need many electrons making many dots. I've just amended the text to make that clearer. - Andrew Thomas, 3rd October 2007 Hi Andrew! I'm just a dilettante reader about quantum physics, but even so I want to thank you for stopping the final meltdown of all my neurons with your intelligible explanations. I'm just curious: there is no info about who you are and what is your work and why you do this kind of mathematical charity?! All comments date from a few months ago. Have you started this site so recently? Francisco Peres, 48, Curitiba - Brasil francperes@uol.com.br - Francisco Peres, 4th November 2007 Hi Francisco, thanks for your comment. I've been doing this site for three years but it only went live this year. I do it because I love it and I'm fascinated by these subjects. The underlying principles are usually surprisingly simple, and I think it's important to get that across. - Andrew Thomas, 4th November 2007 What the double slit experiment is telling us is that subatomic particles are able to divide their energy and still maintain the properties of the original particle. What enables them to rejoin? - eleeo, 13th November 2007 Eleeo, you'll have to read the remaining pages! Interpretations of quantum mechanics suggest that there is no such object as the "original particle" to which you refer. Instead, before we detect the particle, we must consider it to be unreal - and we have to consider it as being potentially in all positions at once! Only after we detect the particle does it become "real", with a clearly-defined position. But that's all covered in the later pages. - Andrew Thomas, 13th November 2007 Hi there...just wanted to let you know this is a fascinating website.
I am not a physicist, rather a ghost hunter who relies on scientific backing. Great resource for understanding how paranormal behavior is possible through quantum! - Rebel, 25th February 2008 Oh well, that wasn't quite my intention but that's certainly interesting! - Andrew Thomas, 25th February 2008 Hi this is Tiffany in Vancouver Wa. USA I am so glad I found this web site because you present the material so that I can understand it. It really gets me thinking alright. A real good starting point for my study of Quantum Mechanical Concepts. I am fond of saying that we are all here because we probably can be because I believe this to be true. Well--that is all I have to say. Tiffy - Tiffany, 19th March 2008 I am an engineer and I have at least 10 years of reading physics behind me, but this is one of the most accessible explanations I have read. Normally, I doze off in the middle. And here I am, taking notes!... 1/f ))) - DJ Fadereu, 21st March 2008 Hi Andrew, Your derivation of the uncertainty principle is very counter-intuitive. I'm not able to understand how you derived p = h/lambda => delta p = h/delta lambda; this seems incorrect (from a calculus viewpoint). Wouldn't an approach using operators be a more correct approach (although, it will be a more mathematically rigorous approach)? - Kedar, 17th May 2008 Hi Kedar, yes, you're right - it's not particularly rigorous. I'm trying to explain quantum mechanics to everyone, and show that even someone with school-level maths can follow the derivation of the Uncertainty Principle! Hence showing that quantum mechanics is nothing to be scared of. Thanks for your comment. - Andrew Thomas, 17th May 2008 Hi Andrew, Lovely effort. However, it would be worth checking your technical details in authoritative texts, even beginning with an undergraduate text such as Tipler's "Physics". E=hf and p=h/lambda were known to be true for photons. de Broglie then POSTULATED that these relations hold also for material particles.
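(Aside: de Broglie's postulate for material particles can be put into rough numbers, which shows why matter waves only show up at atomic scales. The masses and speeds below are illustrative round figures, not values from the page.)

```python
# de Broglie wavelength for a material particle: lambda = h / (m * v).
h = 6.626e-34  # Planck's constant in J*s

# An electron moving at about 1% of the speed of light:
m_e, v_e = 9.11e-31, 3e6            # kg, m/s
lam_electron = h / (m_e * v_e)      # roughly 2e-10 m, about atomic size

# A 160 g ball thrown at 40 m/s:
m_ball, v_ball = 0.16, 40.0
lam_ball = h / (m_ball * v_ball)    # roughly 1e-34 m

print(f"electron: {lam_electron:.2e} m, ball: {lam_ball:.2e} m")
# The electron's wavelength is comparable to the spacing of atoms in a
# crystal, so it visibly diffracts; the ball's wavelength is about 24
# orders of magnitude smaller than the ball itself, which is why
# classical mechanics works perfectly for everyday objects.
```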
See Tipler for an explanation of how he came to make these postulates. For a material particle, the relationship between the REST MASS of a particle and its energy is a little complicated. Even if you use the relativistic mass, m (for which E = mc^2 does hold), then still p = m u, where u is the velocity of the particle; in particular, p = m u < m c. See Tipler for more details. - Dick, 19th May 2008 Thanks for that, Dick. Like I said in my comment above, I'm not trying to be rigorous at all (yes, get a textbook if you want rigour). I'm just trying to convey an understanding in the simplest form possible. - Andrew Thomas, 19th May 2008 Hi, Andrew, That's fine. But, you will get physics students and even potential physics students coming to your pages. Indeed, you see from Kedar's comment how important it is to get whatever technical details you give right. So, it's best then if you don't include any technical stuff at all or otherwise try and get it checked out by a physicist friend who knows his/her stuff. You should find something like Tipler's "Physics" quite simple and experts have contributed to it. For more advanced material, I would suggest you make reference to where you've got it from; it will help people who are so enthused by your pages as to want to delve further into the subject. Cheers, Dick. P.S. In response to Kedar's query: p = h/lambda => delta p = delta(h/lambda) = h*delta(1/lambda), and delta(1/lambda) = -(1/(lambda^2))*delta(lambda). Well, approximately, as this is only strictly true if delta is an infinitesimal! - Dick, 21st May 2008 Thanks for that. I will think about what you say. I would say, though, that I think there is nothing technically incorrect about any of the maths on the site. But do let me know if you find any error! Thanks. - Andrew Thomas, 21st May 2008 Errors! On this page, which I've taken some care to look at, as I pointed out in a previous email, for a material particle, p = m u < m c, i.e., p = mc ONLY for massless particles such as the photon.
p is not equal to mc for a material particle with non-zero rest mass. Also for light of a definite wavelength, lambda, i.e., for a plane wave, Delta x is actually infinite, not lambda. And, in this case, Delta p is zero, as p has the definite value, p = h/lambda The rigorous way to derive the Uncertainty Relation is to do it for a wave packet. As I suggest, have a look at Tipler's "Physics" or similar simple text, if interested. Although in Tipler the general wave result for a wave packet is just quoted! But, it does give you an extra factor of 1/(2*pi). Mathematically, there's a further factor of 1/2 (see Wikipedia on the Uncertainty Principle). However, this might be a good case where it might have been best to just quote the result and used your wave pictures in momentum and in position to illustrate the Principle. Cheers:) - Dick, 22nd May 2008 Hi Dick, I'm very keen to correct errors and always welcome feedback. Thanks. Considering your first point, you're right about the energy of particles - I have modified the text to make it clear I am dealing with photons. Good spot, thanks. For the second point, we are taking a measurement of position of the particle (using incident light), so delta x is finite not infinite. So the lambda refers to the wavelength of the *incident* light used in the measurement, and it is the resolving power of that light which gives the delta x. That incident light then affects the momentum of the initial photon (so delta p is not zero). So it's the wavelength of the *incident* light which connects the change in position AND momentum of the initial photon. I think it's a really simple and ingenious derivation, and I stand by it. People have nagged me before about my maths so hopefully I have ironed out all the remaining errors on the site. But thanks for the details in your own comment and the reference to Tipler's book - people can follow that up if they want rigour. On this site, though, I keep it simple and clear. 
- Andrew Thomas, 22nd May 2008 Hi, Andrew, I don't want to go into it in detail. Remember that in Quantum Mechanics, the probability of finding a particle at a particular position, x, is proportional to |psi(x)|^2 (equal if psi(x) is a normalized wave function) where psi(x) is its wave function. For a wave with the definite wavelength, lambda, psi(x) = exp(i(kx-wt)), where k is the wave number, 2*pi/lambda, and w is the angular frequency. In which case, |psi(x)|^2 = 1, whatever the x. Thus, there is an equal probability of finding the particle anywhere. Hence, the uncertainty in the particle position is infinite. Just look at your own picture for a pure sinusoidal wave in x. In particular, the wave has a definite wavelength. There you can see that the particle position is very uncertain, but on the other hand the plot for p is just a spike, i.e., p is a definite number and so there is no uncertainty in p! Cheers:) - Dick, 23rd May 2008 Hi, Andrew, My apologies, as I had glanced over your work. Didn't realize that you were proposing a thought experiment to try to derive the Uncertainty Principle. The worry I have about the thought experiment is that it would also apply for a classical system, yet the Uncertainty Principle concerns the intrinsic quantum mechanical property of objects, their wave/particle duality. Unfortunately, the rigorous way of deriving it is quite mathematical. Cheers:) - Dick, 23rd May 2008 Hi Dick, thanks. Your comment (above) did get stored - my anti-spam software is over-zealous! - Andrew Thomas, 23rd May 2008 Hi, Andrew, Still worried about your thought experiment. What do you mean by stating that light of a particular wavelength, lambda, has resolution, Delta x approx= lambda? Consider a plane wave incident on an infinitely massive point object. Then, by observing the scattered spherical wave, the object's position is determined with certainty as being at the centre of the sphere of the scattered wave. Thus, in this case, Delta x = 0.
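(Editorial aside: Dick's plane-wave point earlier in this thread — that |psi(x)|^2 = 1 at every x for a wave of definite wavelength — takes only a few lines to verify. The k, w and t values below are arbitrary illustrations.)

```python
import cmath

# A plane wave of definite wavelength: psi(x) = exp(i(k*x - w*t)).
k = 2 * cmath.pi / 5e-7   # wave number for a 500 nm wavelength
w, t = 3.77e15, 1e-15     # arbitrary angular frequency and time

def psi(x):
    return cmath.exp(1j * (k * x - w * t))

# |psi(x)|^2 is 1 everywhere: equal probability of finding the particle
# at any position whatsoever.
for x in (0.0, 1.1e-7, 3.3e-7, 1.0):
    print(f"x = {x:g} m, |psi|^2 = {abs(psi(x)) ** 2:.6f}")
# A perfectly definite wavelength (definite momentum) therefore means a
# completely indefinite position, just as the uncertainty principle says.
```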
As I said, I don't see where in your thought experiment the intrinsic quantum mechanical nature of the object being observed comes in. Cheers:) - Dick, 3rd June 2008 Hi Dick, you're not happy, are you! The thought experiment I described was basically Heisenberg's Gamma Ray Microscope: I agree about your second point, and that link I just posted says it well: "Looking closer at this picture, modern physicists warn that it only hides an imaginary classical mechanical interaction one step deeper, in the collision between the photon and the electron. In fact Heisenberg's microscope, although it was a big help in developing and teaching the quantum theory, is not itself part of current understanding. The true quantum interaction, and the true uncertainty associated with it, cannot be demonstrated with any kind of picture that looks like everyday colliding objects. To get the actual result you must work through the formal mathematics that calculates probabilities for abstract quantum states." - Andrew Thomas, 3rd June 2008 Much of this stuff I don't totally understand. I do know that I find it fascinating and enjoy reading about it. Do you have to be a genius to actually be able to get into this stuff? Or could an average guy like me get into college for it? - Victor Wilson, 31st August 2008 Victor, a fairly decent mathematics education would help, but even if you don't have the maths you should be able to follow the principles of quantum mechanics. Basically, there are several competing interpretations (i.e., guesses at what is going on "behind the scenes") but nobody on the planet knows which interpretation is correct (or, indeed, if none of the interpretations is correct). So as long as you can understand the principles behind the interpretations you know as much about the truth of quantum mechanics as any Nobel prize winner!
In fact, you could even propose your own interpretation of quantum mechanics, and - as long as it agreed with experimental observation - it would be just as valid as the existing interpretations. I can recommend "Quantum" by Jim Al-Khalili as a good introductory book. - Andrew Thomas, 31st August 2008 I am actually inquiring of one of your pieces of art. I googled 'inside out' and a piece of yours (orange/brown) came up. I am organizing a conference for young women focusing on making the inside better so the outside can be its best (life/body) and I would like to use this piece of art on my posters and promo material. May I? - Jean, 6th October 2008 Yes, thanks for asking. Good luck with the conference. - Andrew Thomas, 6th October 2008 Hey Andrew, I am astonished at how simply you explain quantum mechanics, I've never found something like that. Now my problem is, you say that delta(x) ~~ lambda. How do you get to this formula? Thanks - Christoph, 10th October 2008 Thanks, Christoph. The relationship between the resolving power of a microscope and the wavelength of light is described here: It shows that there is a close relationship between delta(x) and the wavelength of the illuminating light. - Andrew Thomas, 10th October 2008 Thank you for this succinct and exceptionally clear explanation of the uncertainty principle. It is now very clear to me where Heisenberg and De Broglie erred. The uncertainty principle only applies to measurement using electromagnetic waves. It implies that there is a limit to the resolving power of any measuring instrument based on electro-magnetism that can ever be built. However, Heisenberg and De Broglie commit a logical fallacy in thinking that a particle cannot exist with a distinct position and momentum at the same time just because those parameters cannot be measured with accuracy at the same time using electromagnetism! There could exist other measuring systems that do not rely on EM waves, maybe using dark energy or neutrinos.
I would sincerely appreciate it if you can comment on my thought (info (at) biotele (dot) com) Thank you - biotele, 30th October 2008 Hi Biotele. No, I'm afraid my description might have misled you as quantum mechanics says there is a fundamental limitation on our knowledge no matter what method we use. We can never know a particle's position and momentum at the same time by any method. I just used the example of electromagnetic radiation as it was easier to understand and not too mathematical. - Andrew Thomas, 30th October 2008 This is one of the few resources (that I'm aware of) which explains things in such a way so everyone can understand how things work at the microscopic level, I must say. However, there are still some concepts that I don't understand. What does the uncertainty principle apply to? Elementary particles and photons? Why is it impossible to measure both position and momentum precisely? Is it because the observer changes the nature of things? "when we have light behaving as a normal light wave (with a colour) we can have no idea where the corresponding particle is" -- I thought light was made of electric and magnetic fields only. So where does the particle you're talking about come in? About the gifs you provided what are those waves referring to? Is it the space where you might find the particle within their envelope? According to de Broglie's postulate do particles behave literally like a wave or appear to act like one? It doesn't make sense to me to imagine that a particle following a wave path will produce the same results as a real wave (ie. radiation)... These kinds of questions are poorly described in almost every resource (articles, books, etc), even from lecturers. That's why quantum mechanics is not an easy subject to grasp, at least for me as I like to understand things in detail. - Tiago, 2nd November 2008 Hi Tiago, thanks.
In the most general sense, the uncertainty principle applies to everything physical, because everything is made of particles. So we can never know the exact position and momentum of ANYTHING. However, the effect would only have a noticeable impact for particles. You asked: '"Why is it impossible to measure both position and momentum precisely?" Well, if we knew the answer to that we would get a Nobel Prize. It's just the way the mathematics of quantum mechanics works. It's just the way the quantum world is. You asked: "I thought light was made of electric and magnetic fields only. So where does the particle you're talking about come in?" Light can be considered as a particle OR a wave, the particle is a photon. You need to read about wave/particle duality: Particles behave like a REAL wave, not as a particle following a wave. It's all very strange! But that's quantum mechanics. - Andrew Thomas, 3rd November 2008 Firstly, thanks for your reply. Imagining we were the same size as the smallest particles... Would we be able to measure them (position and momentum) precisely? Do the particles always behave like a wave when they aren't under any measurements? Is that why the quantum world is probabilistic? Are their wave-behaviour related to their wavefunction? - Tiago, 6th November 2008 Hi Tiago, "Imagining we were the same size as the smallest particles... Would we be able to measure them (position and momentum) precisely?" That's very imaginative! I wouldn't imagine that would make a difference - we seem fundamentally limited in our knowledge no matter what method we use. You asked "Do the particles always behave like a wave when they aren't under any measurements?" Well, it's true to say that before we detect a particle's position it acts as though it is in a superposition state, so yes, it behaves like a wave. You then asked: "Is that why the quantum world is probabilistic?" 
Well, not really, but when we detect the particle's position we get a random result, so that is where the probability comes in. You then asked: "Are their wave-behaviour related to their wavefunction?" Well, yes, the wave behaviour IS related to the wavefunction in that two wavefunctions for two different particles can add together to create constructive or destructive interference (just like a wave). And after we know the shape of the final wavefunction, Max Born said the probability of finding the particle at a certain position was equal to the square of the wavefunction. But the wavefunction is more than a simple probability wave as two wavefunctions can introduce interference effects (seen in the double-slit experiment) but probabilities can never be negative (and so can never produce interference). - Andrew Thomas, 6th November 2008 "we seem fundamentally limited in our knowledge no matter what method we use" - What does this uncertainty really have to do with, then? Is it due to the wave-particle duality or we cannot measure anything precisely (ie. I can't tell you that my desk is exactly 1m wide, because its precise real value goes to infinity (for instance, 1.00010000200001000300...m))? What do you mean with superposition state? You're saying that before any measurements a particle is spread in space so it doesn't have a well-defined position, only probabilities of where it might be; therefore the particle seems to exist in different places at the same time? Is that when quantum mechanics gets bizarre? :P I thought the wavefunction would give you the probability of finding a particle in x at a/any given time... I can't understand the difference between the wavefunction itself and its square... - Tiago, 6th November 2008 Hi Tiago, at its most fundamental level quantum mechanics is impossible to understand from an "intuitive, common-sense" viewpoint.
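(Aside: the point above — that wavefunctions can interfere while plain probabilities cannot — can be seen with two complex amplitudes. The amplitude values are made up purely for illustration.)

```python
import cmath

# Two routes to the same point on the screen, each with a complex
# amplitude.  Born rule: probability = |amplitude|^2, and amplitudes
# are added BEFORE squaring.
a1 = 0.5 * cmath.exp(1j * 0.0)         # route via slit 1
a2 = 0.5 * cmath.exp(1j * cmath.pi)    # route via slit 2, half a cycle out of phase

p_quantum = abs(a1 + a2) ** 2               # add amplitudes, then square
p_classical = abs(a1) ** 2 + abs(a2) ** 2   # add probabilities directly

print(f"amplitudes first: {p_quantum:.3f}, probabilities first: {p_classical:.3f}")
# The two out-of-phase amplitudes cancel (a dark fringe, probability ~0)
# even though each route alone has probability 0.25 -- an outcome no sum
# of non-negative probabilities could ever give.
```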
There's just nothing in our everyday experience which behaves the same as quantum mechanics, so the intuitive examples you provide (such as the length of the desk) are really not applicable. We can only express quantum behaviour through mathematical equations, and it is the mathematics which tells us that there is a fundamental uncertainty about our knowledge (through the uncertainty principle formula). Moving on to consider the wavefunction, the wavefunction is a very peculiar entity which again has no real parallel in our everyday experience as we can only express it accurately by using mathematics - the phase of the wave is a complex number. This is described in more detail on the "Quantum Casino" page. But, yes, by squaring the wavefunction we convert the complex number to a real, normal number which we can take to be the probability of finding the particle. - Andrew Thomas, 7th November 2008 Well Andrew, I guess my problem is that I was trying to understand the quantum world from an "intuitive, common-sense" viewpoint. So let me see if now I can understand it. The uncertainty principle is a built-in uncertainty in nature so to speak. It also comes from the fact that the observer plays an essential role in the behaviour of particles. If we aren't taking measurements about the position of a particle, the particle itself is spread out over space (wave behaviour), and as we don't know its position the particle seems to exist in every possible position where it might occur (superposition). Therefore the better we know about its momentum (I'm following the maths but this seems to make no sense at all lol), because its wavelength is related to where the particle might be found. However, if we try to measure the position of a particle, the wavefunction (wave behaviour) collapses and the momentum/wavelength (again this only makes sense using the maths lol... momentum of the particle or momentum of where it might be found?) is no longer certain.
Did I mention something wrong? Am I in a good way about the understanding of quantum mechanics? - Tiago, 9th November 2008 Tiago, yes, you've got it! It's a shame we can't explain it in easier terms, but that is basically the way it works. - Andrew Thomas, 10th November 2008 Hello everybody, Could one say: the future is superposition, the now is the wavecollaps and the past has no Heisenberg uncertainty ? - Frans , 21st December 2008 Hi Frans, yes, you could certainly say that. What you are suggesting is basically the quantum mechanical arrow of time (the wavefunction only collapses in the forward time direction). Maybe have a read on the section on entropy on the Quantum Decoherence page. - Andrew Thomas, 22nd December 2008 Hi Andrew, I am a sophomore in college completing an independent study in physics/mathematics on quantum mechanics. Specifically, the double-slit up to the uncertainty principle.--I just wanted to say that your clarity is unrivaled and I legitimately understand some quantum theory because of it. Thanks a ton, you've been very helpful. - Christopher Giuliano, 18th January 2009 Thanks a lot, Christopher. I'm glad it helped. - Andrew Thomas, 19th January 2009 i m doing MSC in physics .this site helps me so much in QM so thanks making this wonderful site frm INDORE MP (INDIA) - sapana, 23rd January 2009 The electron gun in Young's double slit experiment can presumably be aimed. 1)What happens as the gun is turned? 2)Laser light can be aimed precisely. What if you aim the gun to always pass through one slit? - Kingstilts, 24th January 2009 Hi Kingstilts, yes, you make a good point in that most descriptions of the double-slit experiment contain unstated assumptions regarding the type of light and the position of the slits. Firstly, you have to use a laser as the light has to have a single wavelength (monochromatic) in order to produce the interference pattern. 
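(Aside: the geometry behind this reply can be sketched numerically. The bright-fringe spacing in a double-slit setup is approximately lambda * L / d for slit separation d and screen distance L, so wider slit spacing squeezes the fringes together. The wavelength, screen distance and slit separations below are illustrative, not taken from the page.)

```python
# Double-slit fringe spacing: delta_y ~ lambda * L / d.
lam = 650e-9   # wavelength of a red laser, metres
L = 1.0        # distance from slits to screen, metres

for d in (0.5e-3, 5e-3):  # slit separations of 0.5 mm and 5 mm
    delta_y = lam * L / d
    print(f"d = {d * 1e3:g} mm -> fringe spacing = {delta_y * 1e3:.2f} mm")
# At 0.5 mm separation the fringes are ~1.3 mm apart and easy to see;
# at 5 mm they shrink to ~0.13 mm and start to blur together, which is
# one reason sub-millimetre slit separations are used in practice.
```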
In practice this means that the slits have to be less than a millimetre apart so they can both be in the laser beam. Thanks for your comment. - Andrew Thomas, 26th January 2009 What can one say Andrew other than Quantum Physics is amongst the most difficult of subjects to teach well and properly, and in this page and the 12 that follow (so far) you have not just met the challenge but have met and as we say in America, OWNED it. Well, well done, very well done. Thank you. I hyperlink your home page often to those who wish to be introduced to the clearest (that I can find) introduction to this very intimidating, but shouldn't be intimidating, subject. Thanks again. I particularly love the way you begin with the notorious (yet wonderful) double-slit experiment, then quickly segue into Uncertainty. I have never read a clearer, more concise condensation in my life. I call Uncertainty "The Wall," because when I learned it at age 22, its non-common-sensical result inflamed and killed what until then was a deep love for Math and Science, and not my bad professor's fault (though he was bad, not uncommon given America's "Publish or Perish" higher education philosophy ... plus he was Physics Dept. teaching 3rd-year Engineers Intro to Physics 2), but rather MY fault for not putting in what was probably the 30 extra minutes to understand what Heisenberg was really saying. I have since returned, because it is a REAL fact of life, Quantum Physics. Without Quantum Tunneling in particular, there is no transistor, no Microchips, no Electronic Computers, or cell phones. We all use Quantum Physics every day. Take nothing for granted. - Greg Sivco, USA-NJ, 23rd February 2009 Thanks a lot, Greg. I quite like the idea of "owning" quantum physics! - Andrew Thomas, 24th February 2009 "Own" is American slang for "you totally understand the subject." And yes you do own it. :-) I'm a bit sad you don't get more hits. I assume your Marketing method is "word of mouth"? I ask .... Can we do better?
IMO, every college kid studying this subject for the first time should come here first to make this allegedly "difficult" subject understandable, and before their first class in the subject. How can we help? - Greg Sivco, 23rd March 2009 Hi Greg, I get about 200 visitors a day which is a number I am more than happy with (the site ranks highly on Google for just about all its subject areas, and there's a lot of links on Wikipedia to the site). Most of those visitors appear to be just casual visitors who do not stay long. But about 5% take a real interest in the subject (people like yourself). To be honest, I'm really only interested in that "5%". I'm not particularly interested in increasing my visitor numbers as this is very much a niche field so I'd only be sucking in more casual visitors if I increased my visitor numbers. - Andrew Thomas, 24th March 2009 Hi Andrew, I admire your knowledge greatly, but find it or science lacking only the simplest wisdom or truth. GUT., TOE.. If you're looking for the truth of nature, One need only to remove any uncertainties or doubts. Once removed, both mathematically and empirically, = or equal is all that remains. Measure was the flaw. And truth was much more simple than thought. E = mc2 has been reduced to truth and I thought you would like to know. Have a truly equal day, Most people prefer theories and faiths over truth, because for most, truth has yet to be seen. - Michael J Ahles, 2nd April 2009 OK Mike but don't you accept that there are theories out there that ARE true, and that one of the faiths may be correct? - Greg Sivco, 2nd April 2009 This is an excellent QM presentation. In the 2 slit experiment you are claiming no other electrons in the system; what about the electrons of the material forming the 2 slits? What if it is the absence of matter (slits) that influences the spatial permittivity that causes the diffraction patterns? I don't really believe the test system has been thoroughly thought out!
Physical reality is neither weird nor strange; just unknown. Just thought I would give you all something to ponder other than accepting absurd mathematical paradigms for reality. - David, 6th April 2009 That's a good point about the electrons in the slits, David. Those electrons form part of the system as well, and the interaction of the beam with those electrons is all part of the dynamics of the whole system. As you will see on the Quantum Entanglement page, we have to consider the slits, the beam, and the screen as just **one** entangled system. - Andrew Thomas, 7th April 2009 If visible light is reflected by particles of unknown location, is it natural to assume that the more discrete portions of the EM spectrum are invisible to us because of this. Because they are definite, real, unchangeable. Whereas the flip of wind that catches the leaf in the tree across the street, is visible to us because it is glowing with possibility, and our curiosity-driven brainputers attach significance and meaning in order to study an aspect of our own nature more closely. - Sol, 6th May 2009 I have a layman Q: as a photon is massless, how does a single photon have energy? According to E=1/2*M*V^2 or E= MC^2, a massless photon cannot have energy. As a photon is massless, then it shall not affect a particle when we use a photon to observe a particle. - Wang, 10th May 2009 I have another Q: are there any tests done to test if space/time is quantised or not? - Wang, 10th May 2009 Hi Wang, firstly, a photon is considered to be pure energy, so no problem there. Secondly, yes, there are experimental attempts to show that space is quantised: - Andrew Thomas, 11th May 2009 Andrew, in the last link you provided, I think there is a small error.
See if you notice it and confirm if I'm right, thanks: "In these theories the vacuum is treated as an 'energy foam' in which microscopic quantum energy fluctuations occur on approximately the Planck length (10E-33 cm) and time (10E-19 GeV)" The 'error" I believe is in the units of time. Shouldn't they be "sec" not "GeV"? - Steven Colyer, 11th May 2009 That's a good spot, Steven. They actually mean the Planck energy (1.22 × 10E19 GeV). - Andrew Thomas, 11th May 2009 I'm making my way through your entire presentation, which is, well, ace. I shall probably prod you at intervals and will do so right now. There's actually an error -- rather, a misinterpretation -- towards the end of this particular page; you refer to the wavelength of the light as being, or perhaps as producing, its 'colour'. This isn't so, if you will excuse me. The colour of a beam or patch of light is not defined by its wavelength, but by the effect a light-beam of that wavelength has upon my conscious mind. This effect results from the conversion of photons by the retina into nerve impulses which are transmitted along the optic nerve to the primary optic cortex of my brain; these impulses are mingled with, and modified by, nerve impulses from other parts of my brain -- all of this being performed according to rules and processes which we understand quite adequately in physical terms -- and are then transferred, this time by a process about which we have not the faintest beginnings of an inkling of an idea, to 'my mind', where they produce the sensation of a colour. This 'sensation' -- one of a group of entities called 'qualia' in neurology -- has a property which we cannot even describe, let alone define or measure, but which we call 'colour.' 
We have, though, developed an accepted coding system, which allows us to label the conscious sensation thus produced as one (or more) of a series of names, such as 'red', 'blue', 'green' or 'pink' and even, imprecisely but understandably, as 'sort of greenish-brown.' These are the names of colours, and the name I give to a colour depends upon the wavelength of the light which originally impinged upon my retina and produced it. Importantly, though, and unlike wavelength, colour cannot be directly described by me to you, nor by you to anyone else. The only way we can use those words to one another and be understood when we do so is simply and solely because, as each of us has grown up, we have become used to pointing at different objects and saying to one another "that's a red ball", or (of a cloudless sky) "that's blue", or of grass that it is 'green.'

There is no way of knowing whether what I experience when I describe the sky as blue is the same as what you experience when you see the same patch of sky. We use the word 'blue' purely as a conventional label, so that we can talk to one another about the way things look to each of us. We cannot measure 'blueness', as we can measure the wavelength of blue light in nanometers. Blueness (we might fairly say) is not amenable to physics. We might also describe it as being 'immaterial'. And the interface between the material world of conventional physics which we access via our brains and other physical instruments, and the immaterial world which we access with our minds, is going to become crucial to our understanding of The Way Things Are, via the quantum world which, crucially, includes observation by the mind. - Martin Woodhouse, 22nd May 2009

Hi Martin, thanks for that. Yes, it's perhaps a bit sloppy to directly equate wavelength with colour without stating that some form of human interpretation is required.
- Andrew Thomas, 25th May

Appears that the word "of" is missing after "position" in "Now let us imagine that we want to measure the position a particular particle.". - Hamish, 8th June 2009

Thanks a lot, Hamish. I'll fix that now. - Andrew Thomas, 8th June 2009

Most simplistic explanation I have encountered on the web! I honestly don't understand the mathematics involved, but I should probably mention I am 13 years of age... However, I am very interested in quantum physics and will surely come back and check this site out once more after I receive my high school diploma ;) I'm always up to a good challenge and understand completely this material is not for me... - Dan, 31st July 2009

Awesome, being able to understand Einstein's derivations sounds like fun! ;) Thanks for everything, in the meantime, I've got some reading to do! - Dan, 2nd August 2009

Great site! I have a question: if we could, for the double slit experiment, slow the speed of the electron or any other type of particle so it is travelling at much less than the speed of light, would we still get the interference pattern? - quincy, 25th September 2009

Electrons do not travel at the speed of light. Only photons (radiation quanta) do. To date the largest objects sent through a double-slit experiment are C60 Buckminsterfullerene molecules, aka "buckyballs." They're working on dead viruses now, but the larger the object the harder it is to detect its wavelength. As far as varying speed is concerned, that's an interesting question and I haven't thought about it. My gut says it wouldn't matter, but I'm not sure. - Greg Sivco, 26th September 2009

Thank you Andrew for the way you approach this topic--I am still reading my way through the articles but it's already been very useful, including the comments and some links therein. I am a non-physicist with no calculus but a 40-year on and off interest in QM and related topics.
Lately I've been looking for REALLY simple ways to convey the 'weirdness' to high school students--this really helps; it's a pleasure. Cheerio from the US! - deWitt, 2nd October 2009

According to Jim Al-Khalili (Quantum, page 69): "The Uncertainty Principle is not as a result of the experimenter giving the electron (particle) an unavoidable random kick through the act of locating its position and hence changing its speed and direction. Rather it is a consequence of the nature of the wavefunctions that describe the electron's possible location and state of motion even before we look." - Neil Hudson, 23rd November 2009

Indeed, Jim Al-Khalili's book is excellent and he's quite correct. Although the mathematical workings I presented arrive at the correct answer, you should take this idea of "kicking" the particle using a photon with a pinch of salt. It's not really a valid way of considering the uncertainty principle. You have to just consider the maths, and the maths alone. No physical analogy is suitable. - Andrew Thomas, 23rd November 2009

It has been proposed that humans have reached the height of our knowledge of Quantum Mechanics. Some say that the human state is not physically capable of discovering anything more about the nature of the universe on a quantum scale. I do not agree with this. In my opinion, as long as the human brain has logic, then there are no limits to what we can learn. In general, we do not give ourselves enough credit for our intelligence, as the only life we can compare ourselves with are animals. I think by any standards, knowing about the basic unit of matter (the atom) and seeing impossibly distant galaxies using only our five senses, we have done exceedingly well. Just because we don't understand something at first, it doesn't mean we won't eventually. Albert Einstein admitted that he was not hugely more intelligent than anyone else in his field; he just stuck at the problem for longer.
Tenacity is the key, and as long as we 'stick at the problem' we will eventually master it. After all, what did they say about planes at first? They said it was impossible to fly like a bird, and yet we did it. They said it would be impossible to walk on the moon, and yet we did it. These are testaments to our ingenuity, and now, as some speculate that we have almost reached the height of our knowledge and cannot learn much more - I say, we will do it! Post your opinions on this, thank you. - Shaun, 10th January 2010

Hi Shaun, yes, I absolutely agree that there seems to be no end in sight for human progress, and the implications of this extraordinary progress are not widely appreciated (you might want to have a look at the final page of this website on "The Intelligent Universe" for more about this idea). However, when it comes to quantum mechanics, there is the possibility that there are some things which we are *fundamentally* forbidden from knowing - no matter how advanced our technology. The Heisenberg Uncertainty Principle described on this page is a good example: we are *fundamentally* prohibited from knowing both the position and momentum of a particle at the same time. That's a fundamental limitation on our knowledge - nothing to do with the inadequacy of our measuring equipment. So maybe there are some things which we can just never know? However, I tend to agree with you that as our technology and understanding increase we will find that most of these apparently fundamental limitations to our knowledge will disappear. - Andrew Thomas, 10th January 2010
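Two quantitative claims in the thread above, the Planck-scale figures and the "fundamental" uncertainty bound, can be checked with a few lines. This is an editorial sketch, not part of the page; the CODATA constant values and the choice of dx = 1e-10 m (roughly an atomic diameter) are my own inputs:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s (CODATA)
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s
GeV = 1.602176634e-10    # joules per GeV

# Planck length and Planck energy, the figures debated in the comments
l_planck = math.sqrt(hbar * G / c**3)          # ~1.6e-35 m, i.e. ~1.6e-33 cm
E_planck = math.sqrt(hbar * c**5 / G) / GeV    # ~1.22e19 GeV, as Andrew states

# Heisenberg bound: confining a particle to dx forces dp >= hbar / (2 * dx)
dx = 1e-10                                     # ~one atomic diameter (assumed)
dp_min = hbar / (2 * dx)                       # kg*m/s
```

With dx fixed, dp_min is a floor on the momentum spread whatever the measuring equipment, which is the sense in which the limitation is fundamental rather than instrumental.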
[Source: http://www.ipod.org.uk/reality/reality_quantum_intro.asp, archived 2014-04-19]
Measuring the circumference of the earth in ancient Greece

Many scientists in Ancient Greece believed the world was round. But none of them knew how big it was until the third century BC, when Eratosthenes (c. 276-194 BC), chief librarian of Alexandria, devised an ingenious way to measure the earth's size.

Eratosthenes knew of a special well near Syene, Egypt. At noon on June 21, the longest day of the year, the sun's rays penetrated all the way to the bottom of the well. This meant that the sun was directly overhead. Eratosthenes realized that if the sun was directly overhead in Syene, then its rays must be hitting at an angle in Alexandria, which was due north. If he could measure the angle by which the sun was off center, then he would have the clue he needed to extrapolate the size of the earth. So, at noon on June 21 in Alexandria, he took a measuring stick and measured the angle of the shadow it cast.

Eratosthenes knew that the angle of the shadow was equivalent to the angle formed by the two cities and the center of the earth. So, he divided that angle by 360, the number of degrees in a circle, to determine the fraction of the earth's circumference that separated the two cities. The answer was one-fiftieth. In other words, if you walked from Syene to Alexandria fifty times over, you would have walked the equivalent of the earth's circumference.

All that remained was to measure the precise distance between the two cities. Eratosthenes hired a pacer, a professional walker trained in taking perfectly equal steps. From the pacer's measurements, Eratosthenes estimated the circumference of the earth to be 24,700 miles. Today, using the same principle developed by Eratosthenes over 2,000 years ago, modern instruments put the distance around the equator at 24,902 miles.

– The Intellectual Devotional by David S. Kidder and Noah D. Oppenheim
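Eratosthenes' arithmetic is easy to replay. In the sketch below (not from the article), the 7.2-degree shadow angle and the 494-mile pacer distance are back-computed from the figures the article gives (one-fiftieth of a circle and a 24,700-mile result); they are illustrative values, not historical measurements in these units:

```python
# The shadow angle at Alexandria equals the angle subtended at the earth's
# centre by the Syene-Alexandria arc (alternate angles; sun rays ~parallel).
shadow_angle_deg = 7.2           # one-fiftieth of 360 degrees
pacer_distance_miles = 494       # Syene to Alexandria as paced (assumed value)

fraction_of_circle = shadow_angle_deg / 360   # 0.02, i.e. 1/50
circumference = pacer_distance_miles / fraction_of_circle

print(round(circumference))      # 24700, versus the modern 24,902 miles
```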
[Source: http://rogercostello.wordpress.com/2007/12/09/measuring-the-circumference-of-the-earth-in-ancient-greece/, archived 2014-04-21]
[FOM] Compactness and (weak) completeness
G. Aldo Antonelli antonelli at ucdavis.edu
Wed Dec 29 20:11:20 EST 2010

On 12/29/10 9:00 AM, Stephen Simpson wrote:
> It is provable in RCA_0 that the following are pairwise equivalent.
> 1. The completeness theorem for countable languages.
> 2. The compactness theorem for countable languages.
> 3. WKL_0, i.e., RCA_0 + Weak König's Lemma.
> However, full König's Lemma is equivalent over RCA_0 to the much stronger
> system ACA_0.
> A reference is Section IV.3 of my book Subsystems of Second Order
> Arithmetic.

The question (which at least I took myself to be presenting to the list) is
whether the *weaker* form of completeness also requires König's lemma. By the
weaker version of completeness I meant the claim that every valid first-order
formula is provable, the proof of which usually proceeds by extracting a
counter-model from a non-terminating proof search.

The reason this seems to require a combinatorial principle such as (weak) KL
is that a non-terminating truth tree (for instance) is such that, for every
n, it contains an open branch of length n. KL then delivers an infinite open
branch, from which a counter-model can be extracted in standard ways.

Thomas Forster put forward that such use of KL for weak completeness can be
dispensed with in the case of countable languages. Is this true or (which
comes to the same thing) mentioned in SSOA?

-- Aldo

G. Aldo Antonelli
Professor of Philosophy
University of California, Davis
antonelli at ucdavis.edu +1 530 554 1368
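For reference, the combinatorial principle at issue and the equivalence Simpson quotes can be stated as follows. This is a standard formulation added for the reader; it is not part of the archived message:

```latex
\textbf{Weak K\"onig's Lemma (WKL):} every infinite subtree
$T \subseteq 2^{<\omega}$ of the full binary tree has an infinite path.

\textbf{Theorem.} The following are pairwise equivalent over $\mathsf{RCA}_0$:
\begin{enumerate}
  \item the completeness theorem for countable first-order languages;
  \item the compactness theorem for countable first-order languages;
  \item \textrm{WKL}.
\end{enumerate}
By contrast, full K\"onig's lemma (for arbitrary finitely branching trees)
is equivalent over $\mathsf{RCA}_0$ to the stronger system $\mathsf{ACA}_0$.
```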
[Source: http://www.cs.nyu.edu/pipermail/fom/2010-December/015218.html, archived 2014-04-21]
Q: The domain of the function f(x) = 2x + 1 is x > 1. Is this true?

A1: The domain of all polynomials is R, or (-inf, inf).

A2: Don't you want to know the answer?
Asker: I already know it; I just wanted to double check.
A2: Good job.

A3: @chocodropa7 Hint: if you already know the answer, you would get a quicker response if you post it as an "answer check".
Asker: How do you do that? I'm new.
A3: In your question, you post something like "Please check my answer", then you post the question and your answer. Others would know that you have worked on it, and therefore won't actually be doing the homework for you, and most would be glad to check it for you right away.
Asker: OK, thank you.
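A quick way to see why the claim fails (my own illustration, not part of the thread): a polynomial such as f(x) = 2x + 1 involves no division, root, or logarithm, so it accepts every real input and nothing restricts the domain to x > 1.

```python
def f(x):
    return 2 * x + 1   # defined for every real x

# values at and below the supposed boundary x = 1 are perfectly fine
samples = {x: f(x) for x in (-3, 0, 1, 2.5)}
print(samples)   # {-3: -5, 0: 1, 1: 3, 2.5: 6.0}
```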
[Source: http://openstudy.com/updates/50de2501e4b0f2b98c86eb8c, archived 2014-04-19]
D'Agostini - Probability and Statistics

G. D'Agostini - Probability and Statistics [ Bayesian inference and its application to physics measurements ] (For most material in Italian see here)

"Bayesian reasoning in data analysis - A critical introduction", World Scientific Publishing 2003 (soft cover 2013)
"Così è... probabilmente. Il saggio, l'ingenuo e la signorina Bayes", with Dino Esposito (in Italian), December 2013 (PoD).

PAPERS etc. (including selected slides)
• Contributions at the School on Bayesian Analysis in Physics and Astronomy in Stellenbosch, South Africa, November 2013: → see dedicated web page
• "Probabilistic Reasoning in Frontier Physics", lecture at the HASCO Summer School 2013 (slides available at indico).
• "Probability, propensity and probability of propensities", talk given at the Garching Bayes Forum, February 2013.
• "Judgement Leanings, a graphical way to hopefully improve the perception and the communication of the `weights of evidence' and of the 'intensities of beliefs'", poster at PESFoG2012: Slides
• "Probably a discovery: Bad mathematics means rough scientific communication", arXiv:1112.3620v2: pdf (383k) [html version, containing also additional related links].
• "Reti Bayesiane: da modelli di conoscenza a strumenti inferenziali e decisionali" (with Serena Cenatiempo and Aldo Vannelli), Notiziario Tecnico di Telecom Italia, 2010, Numero 3, pp. 16-25 (local copy, pdf, 836k).
• "Improved iterative Bayesian unfolding", arXiv:1010.0632v1: ps.gz (384k), pdf (321k).
• "On Peirce's balancing reasons rule failure in his "large bag of beans" example", arXiv:1003.3659v1: ps.gz (113k), pdf (148k).
• "A defense of Columbo (and of the use of Bayesian inference in forensics): A multilevel introduction to probabilistic reasoning", arXiv:1003.2086v2: ps.gz (3.4M), pdf (771k) and html (the automatic translation from latex is quite poor).
• "L'inferenza probabilistica: ruolo nelle scienze sperimentali e suggerimenti per il suo insegnamento" (Probabilistic inference: its role in the experimental sciences and suggestions for its teaching), invited contribution to the Dossier Statistica, TRECCANI Scuola, February 2010.
• "On the so-called Boy or Girl Paradox", arXiv:1001.0708v1: ps.gz (93K) and pdf (150K).
• Sleeping Beauty problem:
• "About the proof of the so-called exact classical confidence intervals. Where is the trick?", physics/0605140v2: ps.gz (73K) and pdf (125K).
• "Dalle osservazioni alle ipotesi scientifiche" (From observations to scientific hypotheses), in Italian: invited contribution to TRECCANI Scuola, April 2006. Local version.
• "Fits, and especially linear fits, with errors on both axes, extra variance of the data points and other complications", physics/0511182: ps.gz (191K), pdf (259K) and HTML.
• "C'è statistica e statistica", reply to a question on "Gaussian statistics and Bayesian statistics" addressed to Scienza per tutti of the Frascati National Laboratories, published in Scaffali, 30 March 2005 (pdf, 45kB; local copy).
• "Telling the truth with statistics". Lectures for the CERN Academic Training, 21-25 February 2005.
□ Pdf files of transparencies: 1, 2, 3, 4, 5 (to be viewed in fullscreen with acroread).
□ Transparencies (as well as videos) also available at the CERN agenda server.
□ The ultimate confidence intervals calculator [see lecture 2, p. 24 (slide 95)].
(The title of the lectures, commented on in the first lecture, was proposed by the organisers.)
• "From Observations to Hypotheses: Probabilistic Reasoning Versus Falsificationism and its Statistical Variations". Invited talk at the 2004 Vulcano Workshop on Frontier Objects in Astrophysics and Particle Physics, Vulcano (Italy), May 24-29, 2004, physics/0412148v2: ps.gz (111k) or pdf (259k) file; HTML version.
• "Inferring the success parameter p of a binomial model from small samples affected by background", physics/0412069: ps.gz (1.1M) or pdf (403k) file; HTML version.
• "Asymmetric uncertainties: sources, treatment and possible dangers", physics/0403086: ps.gz (357k) or pdf (236k) file; HTML version.
• "Bayesian inference in processing experimental data: principles and basic applications", invited paper for Reports on Progress in Physics [66 (2003) 1383]. Preprint: physics/0304102: ps.gz (281k), pdf (473k), djvu (304k); HTML version. The same issue of RPP also contains a review by Volker Dose on "Bayesian inference in physics: case studies": 66 (2003) 1421 (many related publications of Dose and collaborators on data analysis can be found in the IPP Bayesian Data Analysis Group web page).
• "Bayesian model comparison applied to the Explorer-Nautilus 2001 coincidence data", with Pia Astone and Sabrina D'Antonio, gr-qc/0304096: ps.gz (164k), pdf (160k) and html (160k) versions. Published in Class. Quant. Grav. 20 (2003) 769. Note added (December 2004): as stated in my contribution to Vulcano04 (footnote 9), this paper should be taken more for its methodological contents than for the physical outcome.
• "Search for correlation between GRB's detected by BeppoSAX and gravitational wave detectors EXPLORER and NAUTILUS", astro-ph/0206431 (with the Explorer/Nautilus and BeppoSAX Collaborations). Published in Phys. Rev. D66 (2002) 102002.
• "Minimum bias legacy of search results", Seventh Topical Seminar on "The legacy of LEP and SLC", Siena, Italy, 8-11 October 2001. hep-ph/0201070: ps.gz (35k) or PDF (143k) file. Nucl. Phys. Proc. Suppl. 109 (2002) 148-152.
• "Inference of rho and eta of the CKM matrix - A simplified, intuitive approach" (based on a series of lectures given at the VI LNF Spring School "Bruno Touschek", Frascati, Italy, May 2001, hep-ph/0107067): ps.gz (1.1MB) or PDF (1.6MB) file.
html version
• "2000 CKM-Triangle Analysis: A critical review with updated experimental inputs and theoretical parameters" (with M. Ciuchini, E. Franco, V. Lubicz, G. Martinelli, F. Parodi, P. Roudeau, A. Stocchi), hep-ph/0012308: ps.gz (486k) or PDF (1.2MB) file. Published in JHEP 0107 (2001) 013. I also had some fun with referees with this paper. For more info and plots about the Unitarity Triangle analysis, see www.UTfit.org.
• "Role and meaning of subjective probability: some comments on common misconceptions", talk at the XXth International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, July 8-13, 2000, Gif-sur-Yvette (Paris), France (Maxent00): postscript file or PDF file (also at arXiv: physics/0010064). AIP Conference Proceedings (Melville) Vol. 568 (2001) 23-30.
• "Teaching Bayesian statistics in the scientific curricula", The ISBA Newsletter (later renamed 'bulletin'), Vol. 7, No. 1 (March 2000) pp. 18-19 (local txt file).
• "Uncertainties due to imperfect knowledge of systematic effects: general considerations and approximate formulae" (with Mirko Raso), CERN-EP/2000-026, February 2000, hep-ex/0002056: postscript file or PDF file. A reduced version of this paper has become part of Chapter 12 of Bayesian reasoning in data analysis.
• "Sceptical combination of experimental results: General considerations and application to epsilon-prime/epsilon", CERN-EP/99-139, October 1999, hep-ex/9910036: postscript file or PDF file. This paper will remain unpublished. People interested in the sociology of science might want to have a look at the 1st referee report, my reply, the 2nd referee report, my 2nd reply, and the final verdict. This paper, with small changes, has become Chapter 11 of Bayesian reasoning in data analysis. Please refer to that book for citations.
• ``Inferring the intensity of Poisson processes at limit of detector sensitivity (with a case study on gravitational wave burst search)" (with Pia Astone, special acknowledgements to Maddalena...), CERN-EP/99-126, August 1999, hep-ex/9909047: postscript file or PDF file. The substance of this paper has become a part of Chapter 13 of Bayesian reasoning in data analysis. • "Teaching statistics in the physics curriculum. Unifying and clarifying role of subjective probability", invited contribution to the special theme issue of the American Journal of Physics on Thermal and Statistical Physics (H. Gould and J. Tobochnik eds.): • ``Bayesian reasoning in high energy physics. Principles and applications'', CERN Yellow Report 99-03, July 1999 (vi + 175 pages). □ html version (latex2html translation not really perfect...) □ ps/pdf version (by parts): (Also available at CERN Yellow Report server). For FAQ's and clarifications see here (all comments are welcome!). A reviewed and extended version of the report has been published in 2003 by World Scientific Publishing. • ``Overcoming prior anxiety'', invited contribution to the monographic issue of the Revista de la Real Academia de Ciencias on Bayesian Methods in the Sciences (J.M. Bernardo ed.); postscript file or PDF file (also at Los Alamos xxx archive: physics/9906048). • Tutorial at Los Alamos Laboratory (March 1-3, 1999): for writeup of matter concerning lessons see report based on CERN lectures. The "six boxes" example, of which the viewgraphs are available in pdf format at LNAL, has been included in the AJP paper ("Teaching statistics ..."), see above in this list. • ``Constraints on the Higgs Boson Mass from Direct Searches and Precision Measurements'' (with G. Degrassi), DFPD-99/TH/02, February 1999, hep-ph/9902226, published in Eur. Phys. J. C10 (1999) 663-675: local postscript file or PDF file of preprint. 
You might be interested in what a `first rate' frequentist referee would think about the paper: see referee report. Updated analysis as a contribution to the Workshop on `Confidence limits' (see above): hep-ph/0001269.
• "Observation of a tossed coin remaining vertical" (15 Nov 1998), plus other oddities.
• "Bayesian Reasoning versus Conventional Statistics in High Energy Physics", invited talk at the XVIII International Workshop on Maximum Entropy and Bayesian Methods (Maxent98): postscript file or PDF file (also at Los Alamos xxx archive: physics/9811046). html version at Caltech.
• "Tutorial on measurement uncertainty" at the EC Summer School: Bayesian Signal Processing, Cambridge, July 1998. (No proceedings; a note with some original material only presented there is in preparation.)
• "Jeffreys Priors versus Experienced Physicist Priors - Arguments against Objective Bayesian Theory", contributed paper to "Valencia 6" (1998): postscript file or PDF file (also at Los Alamos xxx archive: physics/9811045).
• "Bayesian Reasoning in High Energy Physics: Principles and Applications", lecture notes of academic training at CERN (25-29 May 1998) - videos of lectures also available at the CERN Library. ** Final version **: CERN Yellow Report 99-03 (see above).
• "Measurements errors and measurement uncertainty - critical review and proposals for teaching", Internal Report n. 1094, Phys. Dept., May 1998: postscript file or PDF file, in Italian (86 pages). This is my manifesto concerning the use of subjective probability for the evaluation of uncertainty in measurement, as well as for a teaching method for the physics laboratory based on the Bayesian motto "Learning by Experience".
• "A theory of measurement uncertainty based on conditional probability", ROME1-1079-1996, Nov 1996, physics/9611016 (presented at JSM 1996 in Chicago, IL).
• "Probability and Measurement Uncertainty in Physics - a Bayesian Primer", DESY-95-242, Roma1 N. 1070, hep-ph/9512295 (original postscript file also at DESY).
An updated version is contained in the CERN Yellow Report 99-03 (see above). A revised version is now included in Bayesian reasoning in data analysis. Several ideas of this paper have been applied in the following notes by Mirko Raso (including Bayesian unfolding and the treatment of correlations due to systematics):
□ ZEUS-96-132, "Measurement of F2^ep from the 1995 shifted vertex data": ps and pdf file.
□ ZEUS-98-074, "An update of the F2^ep using the 1995 shifted vertex data": ps and pdf file; html version.
□ ZEUS-98-075, "A measurement of sigma^ep for Q^2 < 0.3 GeV^2, as a by-product of the F2^ep measurement": ps and pdf file; html version.
• Bayesian unfolding: For examples of analyses in which the Bayesian unfolding has been used, see Mirko Raso's collaboration notes on low-x Deep Inelastic Scattering at HERA. (See also the SLAC SPIRES Database for some papers that used the program.) → Improved version

Other papers with some relevant issues on data analysis:
• "On the use of the covariance matrix to fit correlated data", DESY-93-175, December 1993, Nucl. Instr. and Meth. in Phys. Res. A346 (1994) 306. (scanned version of DESY preprint at KEK);
• "Quark-gluon jet separation in the photoproduction region with a neural network algorithm" (with G. Barbagli and D. Monaldi), Roma1-992, Feb. 1992; Proc. of the Workshop on Physics at HERA, Hamburg, Germany, October 29-30, 1991;
• "Limits on the electron compositeness from the Bhabha scattering at PEP and PETRA", 25th Rencontre de Moriond on Electroweak Interactions and Unified Theories, Les Arcs, France, Mar 4-11, 1990. (scanned version of Rome preprint at KEK);
• "Determination of alpha_s and the Z0 mass from measurements of the total hadronic cross section in e+e- annihilation" (with W. De Boer and G. Grindhammer), Phys. Lett. B229 (1989) 160.
(scanned version of DESY preprint at KEK);
• "Determination of alpha_s and sin^2theta(W) from the R measurements at PEP and PETRA", 22nd Rencontre de Moriond, Les Arcs, France, Mar 15-21, 1987, Editions Frontières, pp. 325-336.
• "Determination of alpha_s and sin^2theta(W) from measurements of the total hadronic cross section in e+e- annihilation" (with CELLO Collaboration), Phys. Lett. 183B (1987) 400. (scanned version of DESY preprint at KEK);

Back to G. D'Agostini Home Page
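The iterative Bayesian unfolding referenced in the list above can be sketched in a few lines. This is my own minimal reconstruction of the idea behind the published algorithm, not the author's code; it assumes a fully efficient detector (response-matrix columns summing to 1, every effect bin reachable) and omits the efficiency and uncertainty treatment of the papers:

```python
import numpy as np

def bayesian_unfold(response, observed, n_iter=4):
    """response[j, i] = P(observe effect j | true cause i); columns sum to 1."""
    n_causes = response.shape[1]
    prior = np.full(n_causes, 1.0 / n_causes)            # flat initial prior
    for _ in range(n_iter):
        joint = response * prior                         # P(e_j|c_i) * P(c_i)
        post = joint / joint.sum(axis=1, keepdims=True)  # Bayes: P(c_i | e_j)
        unfolded = post.T @ observed                     # reassign counts to causes
        prior = unfolded / unfolded.sum()                # improved prior, next pass
    return unfolded

# sanity check: a diagonal (perfect) response leaves the spectrum untouched
counts = bayesian_unfold(np.eye(3), np.array([10.0, 20.0, 30.0]))
```

Each pass redistributes the observed counts among the causes according to the current posterior, then uses the result as the next prior, which is the iterative scheme the "Improved iterative Bayesian unfolding" entry refines.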
[Source: http://www.roma1.infn.it/~dagos/prob+stat.html, archived 2014-04-16]
Are there elementary-school curricula that capture the joy of mathematics?

UPDATE: Wow, thank you everyone for the great insights!

A couple of months ago I stumbled across Paul Lockhart's essay A Mathematician's Lament and it made perfect sense to me. I'm not meaning to argue this essay one way or the other, except to say that 12 years of what I did in math class really isn't mathematics as you -- and I, as an "enthusiastic amateur" -- enjoy it.

We're probably going to homeschool our daughter, who will be kindergarten age next fall. I feel there's a place for knowing your times tables and the like, but there's also a place for knowing that mathematics is more than arithmetic and formulas: it's discovery, playing with ideas, etc. Where I'm going is that I know enough to understand the difference, but I'm not quite confident enough to teach (or lead) this process effectively, since I'm not a professional mathematician.

Are there any curricula that would help provide some structure to facilitate this kind of learning, so that there is one less student who has a shortchanged opinion of the mathematics profession? Or, alternatively, if you felt that your elementary school math education hit the mark, how was it done? Thanks for your time!

mathematics-education soft-question

I converted the question to wiki for you. The "community wiki" checkbox should be just under the input field, above the preview. – Anton Geraschenko Nov 11 '09 at 18:07

I thought sure I looked for it there ... I'm not a stranger to Stack* sites. But thanks for fixing it in any case. – user1020 Nov 11 '09 at 19:28

The CW checkbox will not appear until you have amassed at least 10 reputation points so it may not have been there before asking the question.
– Jason Jan 7 '11 at 1:06 Just to mention this, I think it would be an excellent idea for the mathematics community to "crowdsource" a pool of inquiry-based questions that develop the ideas of the elementary mathematics curriculum in interesting ways. Such a list of questions could be compiled into a good IBL script for teaching mathematics "from the ground up". – Jon Bannon Oct 8 '13 at 23:18 "Are there elementary-school curricula that capture the joy of mathematics?" or even "Are there elementary-school curricula that capture mathematics?". I'd answer but I have learned better than that :-) – Wlodzimierz Holsztynski Oct 9 '13 at 3:17 add comment 11 Answers active oldest votes As far as a full curriculum goes, I don't believe there is one that does exactly what you want. Books (in the United States, at least) divide into two camps: "Constructivist" (e.g. Everyday Math, Connected Math) "Traditional" (e.g. Saxon, Singapore) Now, any search you make that even has a whiff of these terms will summon up loud and angry missives (try this article from the New York Times for an idea). Constructivist curriculum is an attempt to catch the "joy of mathematics" approach to learning; for example rather than a worksheet with addition problems there might be a question about all the different possible sets of numbers that add up to 20. The downside (as pointed out by the article above) is that (especially when taught by teachers who aren't themselves strong in mathematics) it can lead to basic skills being missed. up vote 9 down vote This is a problem Lockheart's Lament acknowledges. He seems to think students won't miss anything important. This can be true if the person steering the education is a mathematician, but with a non-specialist (i.e. most elementary school educators and homeschoolers) things can go horribly wrong. 
Now, it's possible to balance to pull off a fantastic curriculum, but the ones I know about (say, at the Russian School of Mathematics in Boston) are, as self-described by the teachers, not following a curriculum at all. That's great if the teachers are experts, but put homeschoolers in a quandry. I think the world is still waiting for an inquiry-type elementary curriculum that can be followed by non-experts and doesn't shortchange basic skills. So for now I'd suggest: a.) Pick a traditional curriculum (Singapore is fine, although do shop around). b.) Supplement. This very question is filling with lots of suggestions. I would read Papert's Mindstorms and "The Children's Machine", as they better explain the constructivist/constructionist point of view. A textbook is the antithesis of that philosophy, 1 so I have to agree that if you are homeschooling and are not an expert, you should use a traditional curriculum. However, you might read what people like Paolo Blikstein, Seymour Papert, and Idit Harel have done with computers in teaching mathematics. – Michael Hoffman Nov 12 '09 at 0:39 add comment Hello! I am mathematician and a homeschooler (that is - my children are homeschooled). If you want to capture the joy of mathematics, or the joy of anything for that matter, then it is my up vote personal belief that the best curriculum is no curriculum. That last sentence is a bit of an exaggeration, but still my experience has shown that the most important things children discover 5 down themselves, and the most joyful moments of discovery are when one discovers something for oneself. Since you emphasize joy in your question, this is my answer. Er... I hope you don't mean this literally. Truly non-structured education (where you don't think about what to cover in what order, what is a prerequisite to what, etc.) seems to me like 1 it would always be truly horrible (at least, it always was for me). 
An approach with "no curriculum" would only seem appropriate to me if you are so comfortable teaching mathematics that you know a curriculum innately, i.e. if you already know innately which concepts you want your children to discover, in what order, and how to make sure they do. – Ilya Grigoriev Dec 18 '09 at 17:24

I am a mathematician who was homeschooled, and is now homeschooling his children. In high school I was "unschooled" as Orr describes. My parents were neither educators, nor did they know what materials were best, and so I just bought a bunch of books about what I was interested in. The books necessarily organized the information of their respective topics. I think that any "loss" in this approach is offset by the deep engagement that pure inquiry allows. At the end of the day, I'm not a Fields medalist but I still love mathematics, and so my work is a natural extension of what I have always done. – Jon Bannon Oct 7 '13 at 14:27

Having written the above, though, let me say that I plan on providing a Dewey-esque "organization of experience" for my children. I plan to attempt to unobtrusively organize what they encounter, as Ilya describes. Provided this can be done well, it will maintain the spirit of inquiry and avoid a lot of wheel-spinning. – Jon Bannon Oct 7 '13 at 14:32

Our kids get their math at home: a combination of Singapore math and Gelfand's algebra. The second reference is the "beauty of math" stuff you refer to; obligatory quote: "Problem 231: Find a recording of Bach's Well-Tempered Clavier and enjoy it". The problem level is certainly higher than kindergarten, though. Gelfand wrote some other books aimed at kids, but IMHO the other ones are not that nice.

Arthur Benjamin has a series of video lectures called The Joy of Mathematics which I think is pretty good. It's not a curriculum, but all the lectures (or at least all the ones I've seen) introduce some new concepts and explore them in a pretty nice way.
I'm not sure if it's appropriate kindergarten material, but I'm pretty sure most of it is appropriate elementary school material. Here's a list of the lecture titles:

1. The Joy of Math—The Big Picture
2. The Joy of Numbers
3. The Joy of Primes
4. The Joy of Counting
5. The Joy of Fibonacci Numbers
6. The Joy of Algebra
7. The Joy of Higher Algebra
8. The Joy of Algebra Made Visual
9. The Joy of 9
10. The Joy of Proofs
11. The Joy of Geometry
12. The Joy of Pi
13. The Joy of Trigonometry
14. The Joy of the Imaginary Number i
15. The Joy of the Number e
16. The Joy of Infinity
17. The Joy of Infinite Series
18. The Joy of Differential Calculus
19. The Joy of Approximating with Calculus
20. The Joy of Integral Calculus
21. The Joy of Pascal's Triangle
22. The Joy of Probability
23. The Joy of Mathematical Games
24. The Joy of Mathematical Magic

Benjamin's book Secrets of Mental Math also looks good for teaching kids to play around with arithmetic and develop number sense.

Personally I was not homeschooled, but from around 4th grade I attended a math circle in Boston. Many of the other students were homeschooled, and I think this sort of thing fills exactly the niche you are asking about. So I suggest looking into math circles in your area, although with the caveat that, depending on where you are, the local math circles may be targeted at older students than the one in Boston (which is particularly welcoming for kindergarten-aged kids) and/or more focused on problem-solving. The Boston math circle is collaborative and inquiry based. As an example, a class of kindergarten kids might spend 10 weeks playing with math, starting with a question like "Are there numbers between numbers?" or "How many squares fit in a circle?"

Another recommendation: Conway's "The Book of Numbers" and Courant & Robbins' "What is Mathematics?"
both have a wealth of material of the sort you're interested in, perfectly suited to elementary school students (although perhaps only after they have some foundation in basic arithmetic).

And maybe Sam feels guilty plugging a book for which he provided the appendix, but "Out of the Labyrinth" is a really fantastic read for anyone who wants to know more about the Math Circle. – Jonah Ostroff Nov 12 '09 at 4:47

Montessori programs, maybe? And there are schools that lack curricula. In "Sudbury" schools, i.e. those that emulate the Sudbury Valley School, no one is given any disciplined instruction in any subject until and unless they request it. The original SVS claims that in more than four decades of their existence, all of their pupils have learned to read before leaving and 80% have gone on to colleges or universities. Schools with curricula can't generally claim those things. I think the requirement that all pupils must learn mathematics has led to a lot of dishonesty: university graduates have been taught that mathematics consists of clerical skills to be memorized and followed algorithmically.

The Montessori materials for teaching arithmetic and even algebra (the exact same materials work for both!) are among the best I've ever seen. They are physical and concrete and yet convey correctly the abstract concepts involved. – Deane Yang Jan 6 '11 at 17:52

In our kitchen, as the boys were growing up, we had "count-by" tables posted at random locations. In order to teach my youngest son how to multiply two-digit numbers, we focused upon learning squares. This idea proceeded along the lines of $20\times 20 = 400$. What is $2\times 20$? What is $2\times 20 + 1$? What is $20\times 20 + 2 \times 20 + 1$? What is $21\times 21$? Essentially, I taught him to internalize $n^2+2n+1$, $n^2-2n+1$, $n^2 \pm 4n+4$, and $n^2\pm 6n +9$. In this way, if a square was proximate to a square he knew, he could compute the unknown square.
Approximating products as the square of the average is a good trick. Finally, you can compute the product of two numbers of the same parity by squaring their average and subtracting the square of their distance to the average. If the numbers have differing parity, then you need to subtract numbers of the form $n(n+1)$. A lot of algebraic rules become easier to understand if alternative algorithms for multiplication are employed.

I also strongly encourage counting by eggs (via dozens): $1/12$, $1/6$, $1/4$, $1/3$, $5/12$, $1/2$, $7/12$, $2/3$, $3/4$, $5/6$, $11/12$, 1 dozen. And counting by other fractional quantities. You will have to train yourself to do the mental arithmetic as you teach your children. Finally, look at puzzle books and problem-solving books, and look for games that involve dice and (ordinary) playing cards.

There are many entry points into a love of mathematics. If by elementary school you mean grades K-5, there are not a lot of tools that such students have, so pattern hunting is part of the fun. However, in a general way, I think geometry and combinatorial geometry have been shortchanged as an entry, especially for very young kids. As students get further along in their education I think the possibilities pick up. In my own opinion I know of no better way to get started attracting kids to mathematics than the books of Martin Gardner. However, many children of parents who love mathematics may not share this passion with their parents.

I also regret that the role of mathematics in new technologies, and the applications of mathematics in general, get short shrift with people of all ages. I am biased because I am one of the co-authors of this book, but I think the book For All Practical Purposes (FAPP) gives some of the sense of excitement involved in using mathematics in a variety of settings.
FAPP has been through many editions, and some of the early editions are probably available cheaply on the web. I was involved with the first 4 chapters, which deal with urban operations research problems (Eulerian circuits, with applications to street sweeping, etc.; Hamiltonian circuits, with applications to package pickup and delivery, etc.). This book is designed for liberal arts students in college, but parents may find it of interest, too. Joe Malkevitch

Love Martin Gardner! I also appreciate your comment about the possibility that my daughter won't love mathematics as much as I do. But at the very least I want her to know that mathematics isn't just what is in her school math book. – user1020 Nov 11 '09 at 19:39

There is indeed a lot of mathematics that does not appear in any particular book. However, since your daughter is still so very young (not yet in kindergarten) you might consider getting her "mathematical toys" which you can have fun playing with along with her. For example, d-stix, jovo toys, and polydrons (though some of these have small parts which may be dangerous for small children) make it possible to construct various kinds of polyhedra. There are also kits for making tiling patterns. It seems to me some of the comments made here apply primarily to older children. – Joseph Malkevitch Nov 12 '09 at 4:36

You might also be interested in watching the documentary How do they do it in Hungary? by teachers.tv.

Here is something I've considered. I only have the first volume, though, and am not sure if it gets at the mathematics the way I'd like. One thing I can say is that the book is very funny to my 5-year-old. Another resource that I've heard is good but have not yet tried is Beast Academy, which was produced by the Art of Problem Solving. These present mathematics topics in a graphic novel format. Children purportedly have a hard time putting them down.

Perhaps MEP Math?
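The square-based tricks in the "count-by tables" answer above (nearby squares via $n^2 \pm 2n + 1$, and products of same-parity numbers via the square of the average) are easy to check mechanically. A small sketch, with helper names invented for illustration:

```python
def square_near(known_n, known_sq, target):
    """target**2 from a nearby known square, using
    (n + k)**2 = n**2 + 2*k*n + k**2."""
    k = target - known_n
    return known_sq + 2 * k * known_n + k * k

def product_same_parity(a, b):
    """For a, b of the same parity, a*b = m**2 - d**2, where m is the
    average and d the distance from each number to the average."""
    m = (a + b) // 2
    d = abs(a - b) // 2
    return m * m - d * d

print(square_near(20, 400, 21))     # 441, i.e. 400 + 2*20 + 1
print(product_same_parity(18, 24))  # 432 = 21**2 - 3**2
```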
simple.scatterplot: Two way distributions

John Verzani's book has a title page that shows a scatterplot with histograms of the x and y variables along the two axes. It is a very powerful way of looking at two distributions. The plot was generated through a function, simple.scatterplot. The function is made available as part of the UsingR package, which can be installed from CRAN. The code of simple.scatterplot is indeed quite simple and can be modified to, for example, show boxplots instead of histograms on the side.

That would be really interesting!! How will you get boxplots instead of histograms from simple.scatterplot?

If you just type simple.scatterplot and press return, you will see the definition of the simple.scatterplot function. You will notice that the trick is in defining a layout containing three boxes: one for the main plot and one each for the histograms/boxplots. Then the function just makes the three plots. We can easily replace the command for the histograms with a command for boxplots. We can modify the code and create another function that will give us Stata-type two-way boxplots. Will do that and post it here in a couple of days.

Wow! I could do it. I am learning. Thanks so much my R-teacher
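For reference, a rough sketch of the modification discussed above. This is untested, and the function name simple.boxscatter is invented; the real simple.scatterplot in UsingR should be consulted for its exact layout() call, which this only imitates:

```r
## Hypothetical variant of UsingR's simple.scatterplot that shows
## marginal boxplots instead of histograms. layout() splits the
## device into three boxes: main plot (1), top margin (2), right
## margin (3); the two boxplot() calls then fill the margins.
simple.boxscatter <- function(x, y, ...) {
  old.par <- par(no.readonly = TRUE)
  on.exit(par(old.par))
  layout(matrix(c(2, 0,
                  1, 3), nrow = 2, byrow = TRUE),
         widths = c(3, 1), heights = c(1, 3))
  par(mar = c(4, 4, 0.5, 0.5))
  plot(x, y, ...)                              # box 1: scatterplot
  par(mar = c(0.5, 4, 0.5, 0.5))
  boxplot(x, horizontal = TRUE, axes = FALSE)  # box 2: x distribution
  par(mar = c(4, 0.5, 0.5, 0.5))
  boxplot(y, axes = FALSE)                     # box 3: y distribution
}

## e.g. simple.boxscatter(rnorm(100), rnorm(100))
```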
formula de la parabola (the formula of the parabola)

dn6tish Posted: Friday 29th of Dec 08:17
Holla guys and gals! Recently I hired a math tutor to help me with some topics in math. My problem areas included topics such as formula de la parabola and adding exponents. Now that instructor turned out to be so dumb that instead of helping me, I'm now even more confused than I used to be. I still can't crack problems on those topics. And the exam time is nearing. I need someone to help me out. Is there anything in particular that can be done to get some sort of help? I have a fairly large set of questions to help me learn these topics, but the problem is I just can't crack them, no matter how much effort I put in. Please help!

espinxh Posted: Sunday 31st of Dec 07:03
Hello dear, formula de la parabola can be really tough if your concepts are not clear. I know this program, Algebrator, which has helped many beginners clear their concepts. I have used this software a couple of times when I was in college and I recommend it to every novice.
From: Norway

Admilal`Leker Posted: Sunday 31st of Dec 18:22
Algebrator really is a great piece of algebra software. I remember having difficulties with difference of squares, distance between points, and logarithms. Typing in a problem from the workbook and merely clicking Solve would give a step-by-step solution to the math problem. It has been of great help through several courses: College Algebra, Pre-Algebra, and Remedial Algebra. I seriously recommend the program.
From: NW AR, USA

Diocx Posted: Monday 01st of Jan 07:44
Ok, after hearing so much about Algebrator, I think it definitely is worth a try.
How can I get hold of it? Thanks!
From: Hopefully, soon to be somewhere else

Momepi Posted: Tuesday 02nd of Jan 17:24
Life can be tough when one has to work alongside one's studies. Visit http://www.algebra-cheat.com/graphing-linear-equations-in-two-variables.html; I am sure it will help you.
From: Ireland
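For what it's worth, the thread never actually states the formula being asked about. For $y = ax^2 + bx + c$, completing the square locates the vertex at $x = -b/(2a)$. A quick sketch (the function name is invented):

```python
def parabola_vertex(a, b, c):
    """Vertex of y = a*x**2 + b*x + c.

    Completing the square gives
    y = a*(x + b/(2a))**2 + (c - b**2/(4a)),
    so the vertex sits at x = -b/(2a)."""
    x = -b / (2 * a)
    y = c - b * b / (4 * a)
    return x, y

print(parabola_vertex(1, -4, 3))  # (2.0, -1.0) for y = x^2 - 4x + 3
```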
Results 1 - 10 of 36

Advances in Neural Information Processing Systems, 1990. Cited by 420 (5 self).
We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.
1 INTRODUCTION. Most successful applications of neural network learning to real-world problems have been achieved using highly structured networks of rather large size [for example (Waibel, 1989; LeCun et al., 1990)]. As applications become more complex, the networks will presumably become even larger and more structured. Design tools and techniques for comparing different architectures and minimizing the network size will be needed. More impor...

IEEE Transactions on Neural Networks, 1997. Cited by 66 (2 self).
In this survey paper, we review the constructive algorithms for structure learning in feedforward neural networks for regression problems.
The basic idea is to start with a small network, then add hidden units and weights incrementally until a satisfactory solution is found. By formulating the whole problem as a state space search, we first describe the general issues in constructive algorithms, with special emphasis on the search strategy. A taxonomy, based on the differences in the state transition mapping, the training algorithm and the network architecture, is then presented.
Keywords--- Constructive algorithm, structure learning, state space search, dynamic node creation, projection pursuit regression, cascade-correlation, resource-allocating network, group method of data handling.
I. Introduction. A. Problems with Fixed Size Networks. In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. Among them, the class of multi-layer feedforward networks is perhaps the most popular. Among...

1992. Cited by 38 (0 self).
Methods to speed up learning in back propagation and to optimize the network architecture have been recently studied. This paper shows how adaptation of the steepness of the sigmoids during learning treats these two topics in a common framework. The adaptation of the steepness of the sigmoids is obtained by gradient descent. The resulting learning dynamics can be simulated by a standard network with fixed sigmoids and a learning rule whose main component is a gradient descent with adaptive learning parameters. A law linking variation on the weights to variation on the steepness of the sigmoids is discovered. Optimization of units is obtained by introducing a tendency to decay to zero in the steepness values.
This decay corresponds to a decay of the sensitivity of the units. Units with low final sensitivity can be removed after a given transformation of the biases of the network. A decreasing initial distribution of the steepness values is suggested to obtain a good compromise between s...

Artif. Neural Networks, 1994. Cited by 31 (22 self).
This work explores diverse techniques for improving the generalization ability of supervised feed-forward neural networks via structural adaptation, and introduces a new network structure with sparse connectivity. Pruning methods, which start from a large network and proceed in trimming it until a satisfactory solution is reached, are studied first. Then, construction methods, which build a network from a simple initial configuration, are presented. A survey of related results from the disciplines of function approximation theory, nonparametric statistical inference and estimation theory leads to methods for principled architecture selection and estimation of prediction error. A network based on sparse connectivity is proposed as an alternative approach to adaptive networks. The generalization ability of this network is improved by partly decoupling the outputs. We perform numerical simulations and provide comparative results for both classification and regression problems to show the generalization abilities of the sparse network.

IEEE Trans. Neural Networks, 1997.
"... Abstract — The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization.
One popular approach tackling this problem is commonly known as pruning and consists o ..." Cited by 28 (0 self).
Abstract — The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach tackling this problem is commonly known as pruning and consists of training a larger than necessary network and then removing unnecessary weights/nodes. In this paper, a new pruning method is developed, based on the idea of iteratively eliminating units and adjusting the remaining weights in such a way that the network performance does not worsen over the entire training set. The pruning problem is formulated in terms of solving a system of linear equations, and a very efficient conjugate gradient algorithm is used for solving it, in the least-squares sense. The algorithm also provides a simple criterion for choosing the units to be removed, which has proved to work well in practice. The results obtained over various test problems demonstrate the effectiveness of the proposed approach.
Index Terms — Feedforward neural networks, generalization, hidden neurons, iterative methods, least-squares methods, network pruning, pattern recognition, structure simplification.

1991. Cited by 24 (1 self).
The gain of a node in a connectionist network is a multiplicative constant that amplifies or attenuates the net input to the node. The objective of this article is to explore the benefits of adaptive gains in back propagation networks.
First we show that gradient descent with respect to gain greatly increases learning speed by amplifying those directions in weight space that are successfully chosen by gradient descent on weights. Adaptive gains also allow normalization of weight vectors without loss of computational capacity, and we suggest a simple modification of the learning rule that automatically achieves weight normalization. Finally, we describe a method for creating small hidden layers by making hidden node gains compete according to similarities between nodes, with the goal of improved generalization performance. Simulations show that this competition method is more effective than the special case of gain decay. * In press: IEEE Transactions on Systems, Man and Cybernetics. S...

1995. Cited by 21 (0 self).
In this paper, we review the procedures for constructing feedforward neural networks in regression problems. While standard back-propagation performs gradient descent only in the weight space of a network with fixed topology, constructive procedures start with a small network and then grow additional hidden units and weights until a satisfactory solution is found. The constructive procedures are categorized according to the resultant network architecture and the learning algorithm for the network weights. The Hong Kong University of Science & Technology Technical Report Series, Department of Computer Science.
1 Introduction. In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. Among them, the class of multi-layer feedforward networks is perhaps the most popular.
Standard back-propagation performs gradient descent only in the weight space of a network with fixed topology; this approach is analogous to ...

Cited by 21 (4 self).
Learning when limited to modification of some parameters has a limited scope; the capability to modify the system structure is also needed to get a wider range of the learnable. In the case of artificial neural networks, learning by iterative adjustment of synaptic weights can only succeed if the network designer predefines an appropriate network structure, i.e., number of hidden layers, units, and the size and shape of their receptive and projective fields. This paper advocates the view that the network structure should not, as usually done, be determined by trial-and-error but should be computed by the learning algorithm. Incremental learning algorithms can modify the network structure by addition and/or removal of units and/or links. A survey of current connectionist literature is given on this line of thought. "Grow and Learn" (GAL) is a new algorithm that learns an association at one-shot due to being incremental and using a local representation. During the

1995. Cited by 19 (3 self).
Many scientific and industrial problems can be better understood by learning from samples of the task at hand.
For this reason, the machine learning and statistics communities devote considerable research effort on generating inductive-learning algorithms that try to learn the true "concept" of a task from a set of its examples. Often times, however, one has additional resources readily available, but largely unused, that can improve the concept that these learning algorithms generate. These resources include available computer cycles, as well as prior knowledge describing what is currently known about the domain. Effective utilization of available computer time is important since for most domains an expert is willing to wait for weeks, or even months, if a learning system can produce an improved concept. Using prior knowledge is important since it can contain information not present in the current set of training examples. In this thesis, I present three "anytime" approaches to connec...

1992. Cited by 17 (4 self).
(or derived) decision metrics are exemplified by MinLoad, which denotes the least among all the Load values.

    SENDER-SIDE RULES (s)
        Possible-destinations = { site : Load(site) - Reference(s) < d(s) }
        Destination = Random(Possible-destinations)
        IF Load(s) - Reference(s) > q1(s) THEN Send

    RECEIVER-SIDE RULES (r)
        IF Load(r) < q2(r) THEN Receive

Figure 3. The load-balancing policy considered in this thesis.

The sender-side rules are applied by the load-balancing software at the site of arrival (s) of a task. Reference can be either 0 or MinLoad; the other parameters --- d, q1, and q2 --- take non-negative floating-point values.
A remote destination (r) is chosen randomly from Destinations, a set of sites whose load index falls within a small neighborhood of Reference. If Destinations is the empty set, or if the rule for sending fails, then the task is executed locally at s, its site of arrival; ot...
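The pseudocode in Figure 3 above transcribes almost line for line. A sketch (the per-site Reference value is simplified to a scalar here, and the load indices and thresholds are invented):

```python
import random

def sender_side(s, load, reference, d, q1):
    """Sender-side rules from Figure 3: if this site is loaded, pick a
    random destination whose load is near the reference value.
    Returns a site name, or None to execute the task locally."""
    possible = [site for site in load
                if site != s and load[site] - reference < d]
    if load[s] - reference > q1 and possible:
        return random.choice(possible)
    return None  # empty destination set, or the sending rule failed

def receiver_side(r, load, q2):
    """Receiver-side rule from Figure 3: accept only if lightly loaded."""
    return load[r] < q2

# Invented load indices: site "a" is busy, "b" and "c" are idle.
load = {"a": 9.0, "b": 1.0, "c": 2.0}
dest = sender_side("a", load, reference=0.0, d=3.0, q1=5.0)
print(dest, receiver_side(dest, load, q2=4.0))
```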
Reference Angle

What is a Reference Angle? Rules of Angles and Reference Angles

Positive angles go in a counter-clockwise direction. Below is a picture of a positive fifty-degree angle.

Quadrant I: Every positive angle in Quadrant I is already acute, so the reference angle is the measure of the angle itself.

Quadrant II: To find the reference angle for an angle measuring $x^{\circ}$ in Quadrant II, the formula is $$ 180^{\circ} - x^{\circ} $$

Quadrant III: To find the reference angle for an angle measuring $x^{\circ}$ in Quadrant III, the formula is $$ x^{\circ} - 180^{\circ} $$

Quadrant IV: To find the reference angle for an angle measuring $x^{\circ}$ in Quadrant IV, the formula is $$ 360^{\circ} - x^{\circ} $$

Practice (remember that the reference angle always uses the x-axis as a frame of reference):

What is the reference angle for a 210° angle?
What is the reference angle for a 300° angle?

Word Problems
Problem 1) What is the reference angle for an angle that measures 91°?
Problem 2) What is the reference angle for an angle that measures 250°?
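The four quadrant rules above collapse into one small function. A sketch (the reduction mod 360 and the treatment of angles lying exactly on the axes are my own choices, not stated on the page):

```python
def reference_angle(theta):
    """Reference angle, in degrees, for a positive angle, using the
    quadrant formulas above. The angle is reduced mod 360 first."""
    theta %= 360
    if theta <= 90:           # Quadrant I: the angle itself
        return theta
    if theta <= 180:          # Quadrant II: 180 - x
        return 180 - theta
    if theta <= 270:          # Quadrant III: x - 180
        return theta - 180
    return 360 - theta        # Quadrant IV: 360 - x

for a in (210, 300, 91, 250):
    print(a, "->", reference_angle(a))  # 30, 60, 89, 70
```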
Infinite series (2)

Combining together the well known expressions...

$\int_{-\pi}^{\pi} \cos^{2n} x\cdot dx = \frac{2\cdot \pi}{2^{2n}}\binom {2n}{n} = \frac{2\cdot \pi}{2^{2n}}\cdot \frac{(2n)!}{(n!)^{2}}$ (1)

... and...

$\cosh x = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!}$ (2)

... we obtain...

$\int_{-\pi}^{\pi} \cosh(2\cdot \sqrt{t}\cdot \cos x)\cdot dx = \int_{-\pi}^{\pi} \sum_{n=0}^{\infty} \frac{2^{2n}\cdot t^{n}\cdot \cos^{2n} x}{(2n)!} \cdot dx = \sum_{n=0}^{\infty} \frac{2^{2n}\cdot t^{n}}{(2n)!} \cdot \int_{-\pi}^{\pi} \cos^{2n} x\cdot dx = 2 \pi \cdot \sum_{n=0}^{\infty} \frac{t^{n}}{(n!)^{2}}$ (3)

Setting t=1 in (3) we see that...

$\sum_{n=0}^{\infty} \frac{1}{(n!)^{2}} = \frac{1}{2 \pi} \int_{-\pi}^{\pi} \cosh(2\cdot \cos x)\cdot dx = \frac{1}{\pi} \int_{-1}^{1} \frac{\cosh 2\xi}{\sqrt{1-\xi^{2}}} \cdot d\xi$ (4)

The integral in (4) can probably be 'attacked' using the residue theorem... if we demonstrate that it doesn't contain the term $\pi$ the goal is realized...

Kind regards

$\chi$ $\sigma$

Last edited by chisigma; May 23rd 2009 at 11:05 AM. Reason: an x instead of a t in (3)... sorry...

Some years ago in another 'mathematical challenge' I had to find the 'generating function' of the sequence...

$a_{n} = \frac{1}{(n!)^{2}}$ (1)

After some effort I arrived at the identity...

$\frac {1}{2\pi} \int_{-\pi}^{\pi} \cosh(2\cdot \sqrt{t}\cdot \cos x)\cdot dx = \sum_{n=0}^{\infty} \frac{t^{n}}{(n!)^{2}}$ (2)

I don't think that (2) is very important for applications, but in any case it is interesting. The series converges for every $t$, so that (2) is defined also for $-1 < t < 0$, where the argument of the cosh in (2) is imaginary...

Kind regards

$\chi$ $\sigma$

Unfortunately, $\frac{1}{\pi} \int_{-1}^{1} \frac{\cosh 2\xi}{\sqrt{1-\xi^{2}}} d\xi$ is not very nice to compute.
It turns out that $\frac{1}{\pi} \int_{-1}^{1} \frac{\cosh 2\xi}{\sqrt{1-\xi^{2}}} d\xi = I_0(2)$, where $I_n(z)$ is the Modified Bessel Function of the First Kind. If you look at the identities for $I_0(z)$, it becomes evident that showing the irrationality of $I_0(2)$ is a fairly difficult thing to do. (Also, check out the last identity listed in the link.)

Last edited by chiph588@; March 28th 2010 at 11:47 AM.
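A quick numerical sanity check of identity (4) is possible. This is an illustrative script (not from the thread) comparing the partial sum of $\sum 1/(n!)^2$ with a midpoint-rule evaluation of the integral:

```python
import math

# Left-hand side of (4): partial sum of 1/(n!)^2 (converges very fast).
series = sum(1.0 / math.factorial(n) ** 2 for n in range(25))

# Right-hand side of (4): (1/(2*pi)) * integral over [-pi, pi] of
# cosh(2*cos x) dx, via the midpoint rule (spectrally accurate here
# because the integrand is smooth and 2*pi-periodic).
N = 512
dx = 2.0 * math.pi / N
integral = sum(math.cosh(2.0 * math.cos(-math.pi + (k + 0.5) * dx))
               for k in range(N)) * dx / (2.0 * math.pi)
```

Both numbers agree with the Bessel value $I_0(2) \approx 2.2795853$ mentioned in the follow-up.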
{"url":"http://mathhelpforum.com/math-challenge-problems/90129-infinite-series-2-a.html","timestamp":"2014-04-20T07:32:12Z","content_type":null,"content_length":"61658","record_id":"<urn:uuid:38c5a6c8-c219-46e5-b29b-7e1906f6358f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Seminar Details: LANS Informal Seminar "Nonlinear Least Squares: Unwrapping Blackboxes" DATE: December 17, 2008 TIME: - SPEAKER: Stefan Wild, MCS LOCATION: Building 221 Conference Room A261, Argonne National Laboratory One wouldn't want to count the number of scientific and engineering application areas where complex numerical simulators are now used. Many of these simulators rely on legacy codes and hence demand the use of derivative-free optimization algorithms, which treat the objective as a blackbox. In this talk I will discuss my recent efforts to unwrap some of these blackboxes to exploit the structure inherent in model calibration, parameter estimation, and synchronization problems (AKA nonlinear least squares). We extend our existing algorithm to one which builds an interpolating model for each of the simulator-dependent residuals. I'll show results supporting the improved efficiency of exploiting the structure of these problems and outline some directions of future research to solve ever-challenging problems. Please send questions or suggestions to Krishna: snarayan at mcs.anl.gov.
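For context, the "structure" in nonlinear least squares is that the objective is a sum of squared residuals, which a Gauss-Newton-type method exploits by linearizing each residual separately. Here is a generic one-parameter sketch of that idea (my own illustration, not the speaker's derivative-free algorithm):

```python
import math

# Synthetic calibration data from a known model y = exp(a * t) with a = 0.5.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * t) for t in ts]

def gauss_newton(a, iters=20):
    """Fit y = exp(a*t) by Gauss-Newton on the residual vector
    r_i(a) = exp(a*t_i) - y_i, exploiting the least-squares structure:
    only first derivatives of the residuals are needed."""
    for _ in range(iters):
        r = [math.exp(a * t) - y for t, y in zip(ts, ys)]
        J = [t * math.exp(a * t) for t in ts]        # dr_i/da
        num = sum(Ji * ri for Ji, ri in zip(J, r))   # J^T r
        den = sum(Ji * Ji for Ji in J)               # J^T J
        a -= num / den                               # Gauss-Newton step
    return a

a_hat = gauss_newton(0.2)
```

With exact data the residuals vanish at the solution, so the iteration recovers a = 0.5 quickly; derivative-free methods like the one in the talk replace the exact Jacobian with interpolating models of each residual.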
{"url":"http://www.mcs.anl.gov/research/LANS/events/listn/detail.php?id=510","timestamp":"2014-04-19T19:38:52Z","content_type":null,"content_length":"2947","record_id":"<urn:uuid:c22762ae-3b35-4914-bad4-61c07f1ffa02>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
TDC: Determination of Moments from Doppler Spectra

From the thousands of transmitted pulses of energy, the profiler records one Doppler spectrum for each range resolution volume. These spectra are recorded and archived for future calculations. The spectra can be described by their first three moments. The following sections describe how the moments are determined from the original Doppler velocity spectrum and how the moments are converted to reflectivity factor, mean Doppler velocity, and spectral width.

Spectral Moment Determination and Quality Control. From the recorded Doppler velocity spectrum the first three moments are determined; they correspond to the signal power, the reflectivity-weighted Doppler velocity, and the spectrum variance. In order to obtain calibrated profiles, the signal power from the zeroth moment is converted to signal-to-noise ratio (the ratio of signal power to noise power), which is used in the calculations below. The moments are integral expressions over the starting and ending velocities v[1] and v[2]. It is possible to use the whole Doppler velocity spectrum (from v[1] = -V[Nyquist] to v[2] = +V[Nyquist]) in the moment calculations, but these calculations are subject to bias due to multiple peaks and noise in the spectra. An automated procedure developed at the NOAA Aeronomy Laboratory to separate multiple peaks in the spectra is used to isolate the largest-amplitude peak from smaller peaks. In brief, the automated procedure selects the largest spectral peak and follows the spectrum down to the noise floor or until the spectrum increases in magnitude, consistent with another spectral peak. This procedure separates the Bragg scattering return, caused by turbulence from clear-air motions, from the Rayleigh scattering return from hard distributed targets. This automated procedure is a single-peak picking method with an intelligent neighboring-peak separator.
By stopping at the valley between neighboring peaks, this peak picking method is not as biased as naive moment estimators that do not separate different spectral peaks. The limitation of this method is that the largest-magnitude peak is selected regardless of the peaks selected at neighboring spectra, both in altitude and in time. This method cannot provide both the Bragg and Rayleigh components of the spectra: only the moments from the largest-magnitude peak are retained. The NOAA Aeronomy Laboratory is developing multi-peak picking methods that enable the separation of the Bragg and Rayleigh scattering components in the Doppler spectra.

Delete observations below the Threshold of Detectability. The empirically derived Threshold of Detectability determines the minimum signal-to-noise ratio for atmospheric observations. All observations below this threshold are not considered to result from atmospheric scattering processes. The Threshold of Detectability is range independent; in log units it is defined in terms of NFFT, the number of FFTs averaged to produce the final spectra, and NPTS, the number of points in the spectra. If an observed spectrum has a signal-to-noise ratio less than the Threshold of Detectability, the output file will contain the 'bad data' variable.

For each profile, correct the signal-to-noise ratio for enhanced noise. For Doppler spectra that span a large fraction of the Nyquist velocity range, there may not be enough spectral points to accurately represent the true noise level. For these broad Doppler spectra, the calculated noise may overestimate the true noise. Thus, the signal-to-noise ratio may be too low, and the subsequent equivalent reflectivity factor may also be too low. The goal is to replace the elevated noise values with a good estimate of the noise.
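The single-peak picking rule described above (start at the largest spectral point and walk down each side until reaching the noise floor or a valley before a neighboring peak) can be sketched as follows. This is an illustrative implementation, not the NOAA code:

```python
def pick_largest_peak(spectrum, noise_floor):
    """Single-peak picking: start at the global maximum and walk outward
    in both directions while the spectrum keeps descending and stays
    above the noise floor.  Stops at the noise floor or at a valley
    (the next sample turns upward, indicating a neighboring peak).
    Returns the inclusive (start, end) index range of the selected peak."""
    k = max(range(len(spectrum)), key=lambda i: spectrum[i])
    lo = k
    while lo > 0 and spectrum[lo - 1] <= spectrum[lo] and spectrum[lo - 1] > noise_floor:
        lo -= 1
    hi = k
    while hi < len(spectrum) - 1 and spectrum[hi + 1] <= spectrum[hi] and spectrum[hi + 1] > noise_floor:
        hi += 1
    return lo, hi
```

On a spectrum with two peaks, only the larger one is kept, which is exactly the limitation the text notes: the Bragg and Rayleigh components cannot both be recovered this way.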
Since the noise is range independent, the noise at the 10 highest range gates (where the spectra are not broad) is averaged to form an estimate of the true noise for that profile. The noise at each range is adjusted to produce an adjusted signal-to-noise ratio, which is used to calculate the equivalent reflectivity factor. The mean noise is the average of noise(j), the noise in linear units at the j-th range gate, over the 10 highest of the n range gates. The noise, the mean noise, and the signal-to-noise ratio s2n(j) can each be converted between linear and log units (10 log10 of the linear value). The adjusted signal-to-noise ratio, s2n', is calculated in log space by replacing the elevated noise with the mean noise, and then converted back to linear units.

Calculate the equivalent reflectivity factor from the adjusted signal-to-noise ratio. The liquid-water equivalent reflectivity factor z[e](j) is determined from the adjusted signal-to-noise ratio and the range-gate distance, in terms of ALRC, the Aeronomy Laboratory Radar Constant; NPW, the pulse width in nanoseconds; NCI, the number of coherent integrations; range(j), the range-gate distance in meters; and s2n'(j), the adjusted signal-to-noise ratio in linear units. The units of the ALRC are defined such that the units of z[e](j) are mm^6 m^-3. Typically, ALRC is a constant for a particular installation. The equivalent reflectivity factor can also be expressed in log units, with units of dBZe. All calculations are in reference to the liquid-water equivalent reflectivity factor. The minimum detectable reflectivity factor is obtained by setting s2n'(j) to the Threshold of Detectability.

Adjust the equivalent reflectivity by the Time Domain Averaging filter (TDA filter). Coherent integration is a digital filtering process used by profilers.
Coherent integration does not increase the signal-to-noise ratio per unit bandwidth in the signal band; it simply filters out much of the wideband noise. This digital filter is called the Time Domain Averaging filter (TDA filter). One side effect of using coherent integration is decreased power return at frequencies different from zero Doppler shift. This decreased power follows the sinc function, with a power response of unity at zero Doppler velocity and the first null located at +/- 2 v[Nyquist]. The power response of the TDA filter is a function of the velocity v (in m s^-1) and the Nyquist velocity v[Nyquist] (in m s^-1), which is itself determined by the inter-pulse period IPP. The calculated equivalent reflectivity factor can be corrected by the inverse of the TDA transfer function evaluated at the reflectivity-weighted mean Doppler velocity <v(j)> at the j-th range gate, giving the corrected reflectivity z(j). For simplicity, the subscript "TDA" is omitted in all references to the equivalent reflectivity factor, even though the correction has been applied.

Define Doppler velocity such that downward motion is negative. The Doppler velocity recorded by the profiler is defined as positive for motion toward the radar. The sign of the mean reflectivity-weighted Doppler velocity is inverted in the precipitation data products to be consistent with the meteorological convention of downward motion having a negative value.

Define spectral width as twice the spectrum standard deviation. The spectrum variance is defined as the second moment; the spectral width is defined as twice the spectrum standard deviation.
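The three moments and the derived spectral width can be sketched with discrete sums (the document's own definitions are integrals over the selected velocity range; this is an illustrative approximation):

```python
import math

def spectral_moments(velocities, spectrum):
    """Zeroth, first, and second moments of a Doppler spectrum, as in the
    text: signal power, reflectivity-weighted mean Doppler velocity, and
    spectrum variance.  Spectral width is defined as twice the spectrum
    standard deviation."""
    power = sum(spectrum)                                        # 0th moment
    mean_v = sum(v * s for v, s in zip(velocities, spectrum)) / power
    var = sum((v - mean_v) ** 2 * s
              for v, s in zip(velocities, spectrum)) / power     # 2nd moment
    width = 2.0 * math.sqrt(var)
    return power, mean_v, width
```

In practice these sums would run only over the velocity bins of the peak selected by the peak-picking step, not over the full Nyquist range.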
{"url":"http://esrl.noaa.gov/psd/psd3/boundary/MstToga/determine_moments.html","timestamp":"2014-04-17T17:06:09Z","content_type":null,"content_length":"20833","record_id":"<urn:uuid:1489f11b-1011-4486-b35d-18e25f3476e4>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Considering just positive integers, there are 6 factorizations of 495 into two factors: 1*495, 3*165, 5*99, 9*55, 11*45 and 15*33. Each of these corresponds to one of the six (x,y) pairs bobbym listed in post #4. For example: 1*495 = 248^2-247^2 and 24^2-9^2 = (24+9)(24-9) = 33*15. In general, for an odd composite number each of its unique factorizations (other than a perfect-square factorization) corresponds to a difference of squares. For 9 = 1*9 we get 5^2-4^2 = (5+4)(5-4) = 9*1, but 3*3 has no difference-of-squares representation unless we allow zero: 3^2-0^2 = (3+0)(3-0) = 3*3. But we were talking about POSITIVE integers. If M is an odd composite number and M = n*m where n and m are different, we get M = ((n+m)/2)^2 - ((n-m)/2)^2 as a difference of squares factorization. If I recall correctly this was involved in one of Fermat's methods of factoring odd composites.

Have a grrreeeeaaaaaaat day!
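The correspondence between factor pairs and differences of squares is easy to enumerate; here is an illustrative check for 495:

```python
def diff_of_squares(M):
    """All representations M = x^2 - y^2 with x > y >= 0, obtained from
    the factorizations M = n*m (n <= m) via x = (n+m)/2, y = (m-n)/2.
    For odd M both n and m are odd, so x and y are always integers."""
    reps = []
    for n in range(1, int(M ** 0.5) + 1):
        if M % n == 0:
            m = M // n
            reps.append(((n + m) // 2, (m - n) // 2))
    return reps
```

For M = 495 this yields exactly the six pairs discussed above, from (248, 247) for 1*495 down to (24, 9) for 15*33.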
{"url":"http://www.mathisfunforum.com/post.php?tid=18909","timestamp":"2014-04-16T10:32:19Z","content_type":null,"content_length":"18992","record_id":"<urn:uuid:0a8c1b6f-ca58-471d-986b-df1887e3ccf3>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

I'm doing this physics experiment tomorrow and i need help! The aim is to find the resultant of two forces....can anyone please give me a theory, discussion and conclusion for this lab??? i don't fully understand it:(
{"url":"http://openstudy.com/updates/5112d30fe4b0e554778a9d3d","timestamp":"2014-04-21T02:27:02Z","content_type":null,"content_length":"35694","record_id":"<urn:uuid:3897389d-5c38-4bab-93f6-9b4d883ba1b1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

A block is pulled by a string that makes an angle of 25º to the horizontal. If the mass of the block is 12.0 kg and the coefficient of friction is 0.25, what force would keep the block moving at a constant velocity?
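A worked numeric solution (my own sketch, assuming g = 9.8 m/s² and kinetic friction, neither of which is stated in the problem):

```python
import math

m = 12.0                     # kg
mu = 0.25                    # coefficient of friction
theta = math.radians(25.0)   # string angle above horizontal
g = 9.8                      # m/s^2 (assumed value)

# Constant velocity means zero net force:
#   horizontal: F*cos(theta) - mu*N = 0
#   vertical:   N + F*sin(theta) - m*g = 0
# Eliminating the normal force N gives:
F = mu * m * g / (math.cos(theta) + mu * math.sin(theta))
```

Under these assumptions the required pull is roughly 29 N; note that pulling at an angle reduces the normal force and hence the friction.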
{"url":"http://openstudy.com/updates/5071c3a3e4b04aa3791da2d2","timestamp":"2014-04-20T14:04:31Z","content_type":null,"content_length":"376190","record_id":"<urn:uuid:d6d6a426-f0c8-46c5-8040-36083a263bb7>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Updated DM4,5,6,C and PM5,6 Comparison Chart

Great work erayser! INFO--> DM, Proto, Shocktech, UL Frame weights & shots! DM info taken from DM4/5/6 manual based on 68/4500 tank. Proto manual does not list the same info, unfortunately. Industry standard is without barrel, battery, ASA, reg.

OP = Operation Pressure

DM = Dye Matrix
DM6 = 1.70lbs or 1lbs 11oz; 9.0"L x 7.5"H; 145psi OP, 1300 shots
DMC = 2.05lbs or 2lbs 01oz; 9.0"L x 8.0"H; 175psi OP, 1200 shots
DM5 = 2.15lbs or 2lbs 02oz; 9.0"L x 8.0"H; 175psi OP, 1200 shots
DM4 = 2.30lbs or 2lbs 05oz; 9.5"L x 8.5"H; 175psi OP, 1100 shots

PM = Proto Matrix
PM6 = 1.75lbs or 1lbs 12oz; 9.0"L x 8.0"H; 185psi OP, 1300 shots
PM5 = 2.00lbs or 2lbs 00oz; 9.0"L x 8.5"H; 225psi OP, 1300 shots

SM = Shocktech Matrix
SM5 = 1.88lbs or 1lbs 14oz; 9.0"L x 7.5"H; 175psi OP, 1200 shots
SM4 = 1.94lbs or 1lbs 15oz; 9.5"L x 7.5"H; 175psi OP, 1100 shots

Hyper 2 with elbow = .31lbs or 5.0oz.
UL 14" barrel = .28lbs or 4.5oz.
Alkaline 9-volt = .10lbs or 1.6oz.

DM5 stock frame = .61lbs or 9.8oz.
DM5 UL frame = .50lbs or 8.0oz.
PM5 stock frame = .51lbs or 8.2oz.
PM5 UL frame = .41lbs or 6.5oz.
UL Frame Info per Buckeye

Matrix Comparison Info:
06 Alias Intimidator = 1.6lbs or 1lbs 10oz; 9.0"L x 8.0"H; ? psi OP, ? shots
CP Short HP Reg with elbow = .23lbs or 3.7oz.

Compressed Air info anyone? Air has an average molecular weight of 29 (28 for N2 * 0.8 + 32 for O2 * 0.2). One mole of gas at STP occupies a volume of 22.4 L; therefore, 29 g (~1 oz) of air at STP occupies 22.4 L. Take your tank volume in liters (ft^3 x 28.3 or cu in x 0.0164) and correct it to STP (forget the temperature, just multiply the volume by 204 or 306, i.e. 3000 psi/14.7 psi or 4500 psi/14.7 psi). This gives you the gas volume at STP. Divide this volume by 22.4 and you get the weight of air in oz.
Using that, you get the following information: 3000 PSI = 204 atm 4500 PSI = 306 atm 47ci = 0.77 L 68ci = 1.11 L 88ci = 1.44 L 114ci = 1.87 L So now, just plug it into what I said above, and you get these weights: 47ci 3000PSI tank: 7.0 ounces of air 68ci 3000PSI tank: 10.1 ounces of air 88ci 3000PSI tank: 13.1 ounces of air 114ci 3000PSI tank: 17.0 ounces of air 47ci 4500PSI tank: 10.5 ounces of air 68ci 4500PSI tank: 15.1 ounces of air 88ci 4500PSI tank: 19.7 ounces of air 114ci 4500PSI tank: 25.5 ounces of air Air Info per Thordic- AO PS. See FAQ Halo Report: boards, batteries, myths, upgrades. Thanks and good luck, Give your Halo/Reloader The BONE!
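The recipe above, wrapped in a small function (my own sketch, using the post's approximation that a mole of air, ~29 g, is about one ounce):

```python
def air_weight_oz(volume_ci, pressure_psi):
    """Approximate weight of air in a tank, following the post's method:
    convert cubic inches to liters, scale to STP by pressure/14.7 psi,
    divide by 22.4 L/mol, and treat one mole of air (~29 g) as ~1 oz."""
    liters = volume_ci * 0.0164
    stp_liters = liters * pressure_psi / 14.7
    return stp_liters / 22.4
```

This reproduces the listed numbers to within roughly a tenth of an ounce (the small differences come from rounding the atm factors to 204 and 306 and from the 29 g ≈ 1 oz shortcut).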
{"url":"http://www.pbnation.com/showthread.php?t=1384522&page=2","timestamp":"2014-04-19T00:58:08Z","content_type":null,"content_length":"208339","record_id":"<urn:uuid:2928cf0a-3ed3-4c73-8235-f7d2ab974de0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: [Re: How to program a loop to calculate the value of an observation]

From	wgould@stata.com (William Gould, Stata)
To	statalist@hsphsun2.harvard.edu
Subject	Re: st: [Re: How to program a loop to calculate the value of an observation]
Date	Mon, 05 Feb 2007 11:24:57 -0600

Andreas Reinstaller <Andreas.Reinstaller@wu-wien.ac.at> wrote,

> I have found a solution that does not strike me to be particularly
> elegant, but that works. I just use the indices of the variables in
> Stata, where NoEnt_nsc1 TO_nsc1 TO_Nace NoEmpl_nsc1 LET are the
> variables I use:
> In a simple do file I have this code
> [do-file omitted]
> For 12000 obs it takes about 15 minutes to run on a dual core pentium
> with 2.8ghz and Windows XP in Stata 8.2.
> If somebody has ideas about how to improve the speed, I should be grateful.

The code in the omitted do-file is complicated and I do not know what it does. I have, however, performed a line-by-line translation of the code into Mata, where it should run faster. I cannot vouch for the fidelity of my translation. The translation reads:

------------------------------------------------ begin do-file ------
sort country time SecCode sizeclass
generate ntx=_n
generate newntx=.
tomata            // <- creates Mata vectors from every variable

mata:
size = st_nobs()
for (i=1; i<=size; i++) {
    nof = NoEnt_nsc1[i]
    if (nof == .) nof = 0
    szs = 0
    if (nof > 0) {
        if (nof == 1) {
            szs = (NoEnt_nsc1[i]*(100*(TO_nsc1[i]/NoEnt_nsc1[i])/TO_Nace[i])^2)
        }
        else {
            for (j=1; j<=nof; j++) {
                sz = ((100*((TO_nsc1[i]/NoEmpl_nsc1[i]*LET[i]) + (2*j*(TO_nsc1[i] - (NoEnt_nsc1[i]*(TO_nsc1[i]/NoEmpl_nsc1[i]*LET[i])))/(NoEnt_nsc1[i]*(NoEnt_nsc1[i]-1))))/TO_Nace[i])^2)
                szs = szs + sz
            }
        }
        newntx[i] = szs
    }
}
end

by country time SecCode: egen Ave_HHI_nsc1=sum(newntx)
------------------------------------------------ end do-file ------

To run the above Andreas will need -tomata-. This program was discussed in a previous Mata Matters column. To get the command, type

. ssc install tomata

In his code, Andreas has various Stata variables such as NoEnt_nsc1. All -tomata- does is create Mata vectors from the Stata variables, so I can refer to them in the Mata code. The vectors created are views onto the Stata data, so each takes only a few bytes of memory.

As Nick Cox mentioned, I suspect that if we knew the formula of what was being implemented, a more efficient way could be found to make the calculation. Nonetheless, it looks to me as if Andreas has code from another system that he knows works, and sometimes the easiest thing is just to translate blindly.

-- Bill

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-02/msg00126.html","timestamp":"2014-04-17T09:46:30Z","content_type":null,"content_length":"8263","record_id":"<urn:uuid:3e1c2e11-9355-4466-be4d-76653b781838>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Non-compact complex surfaces which are not Kähler

Not every complex manifold is a Kähler manifold (i.e. a manifold which can be equipped with a Kähler metric). All Riemann surfaces are Kähler, but in dimension two and above, at least for compact manifolds, there is a necessary topological condition (i.e. the odd Betti numbers are even). This condition is also sufficient in dimension two, but not in higher dimensions. Therefore the task of finding examples of compact complex manifolds which are not Kähler is reduced to topological considerations. In the non-compact setting, we can also find such manifolds. For example, let $H$ be a Hopf surface, which is a compact complex surface which is not Kähler. Then for $k > 0$, $M_{k+2} = H\times\mathbb{C}^k$ is a non-compact complex manifold which is not Kähler - any submanifold of a Kähler manifold is Kähler, and $H$ is a submanifold of $M_{k+2}$. This generates examples in dimensions three and above. So I ask the following question:

Does anyone know of some (easy) explicit examples of non-compact complex surfaces which are not Kähler?

1 Any complex submanifold of a Kähler manifold is Kähler, sure. Are you allowed to change the complex structure on $H\times\mathbb C^k$ to look for a Kähler structure? For example $S^3\times S^1$ is compact and not Kähler (no dimension-2 homology so no symplectic form), but you can embed it as the standard spheres of $\mathbb C^2\times\mathbb C$, and cutting out origins you get an alternative Kähler structure on $S^3\times S^1\times\mathbb C$. – Elizabeth S. Q. Goodman Feb 20 '12 at 7:11

5 I haven't given this much thought, but would the Hopf surface with a point deleted work? – David Speyer Feb 20 '12 at 7:29

2 Answers
Remove a point of $X-E$ to get $Y$. The second Betti number $b_2(Y)=0$ because it is homeomorphic to $S^3\times S^1-pt$. If $Y$ were Kähler then $\int_E\omega\not=0$, where $\omega$ up vote 19 down is the Kähler form, and this would imply that $b_2(Y)\not=0$. vote accepted Very very nice! – diverietti Feb 20 '12 at 16:53 Oh right--so, no $\omega$-positive curves in an exact symplectic manifold, in particular the 2nd-homology obstruction still is useful for closed curves in an open Kähler manifold. Nice. – Elizabeth S. Q. Goodman Feb 20 '12 at 17:34 the point is that often a non-kähler, and hence non-projective, manifold does not have any closed submanifold at all... so the hopf surface is a quite particular case... – diverietti Feb 20 '12 at 17:44 @diverietti: How sure are we about that "often"? The main examples of non-Kahler manifolds - Hopf manifolds and the Iwasawa manifold - are torus fibrations and thus contain submanifolds. The only examples I know of manifolds that contain no submanifolds are general tori and certain hyperkahler manifolds, and both are Kahler. – Gunnar Magnusson Feb 20 '12 at 21:59 @Gunnar: you might also know about Inoue surfaces, which are compact complex non-Kahler surfaces of class VII with $b_2=0$, and which have no complex curves. Higher-dimensional 4 versions of Inoue surfaces are the Oeljeklaus-Toma manifolds, which are non-Kahler and also have no closed complex subvarieties other than points, see arXiv.org/abs/1009.1101 – YangMills Feb 21 '12 at 0:39 add comment From any compact non-Kähler surface $X$ remove a point $p$. You're left with a non-Kahler, non-compact surface. For the proof, see Théorème 2.3 in A. Lamari - Courants kählériens et surfaces compactes. First, by a theorem of Shiffman, a Kähler form on $X\setminus {p}$ extends as a closed positive current to all of $X$. Then, locally around $p$, the singularity at $p$ up vote 8 can be fixed by using convolutions to obtain a smooth Kähler form on $X$. 
down vote Could you please give a reference of Shiffman's theorem? And spend some more words on the regularization part? It would be interesting! – diverietti Feb 21 '12 at 7:52 Shiffman's extension theorem appeared in "Extension of positive line bundles and meromorphic maps", Invent. Math., 15(1972), 332-347. It is the Main Lemma on page 333. I think the regularization part is due to Miyaoka "Extension theorems for Kähler metrics", Proc. Japan Acad., 50(1974), 407-410. You might want to check this paper for details on the regularization. It also appears in Lamari's paper cited above. – user20497 Feb 21 '12 at 9:21 Thanks! I'll take a look! – diverietti Feb 21 '12 at 23:50 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry dg.differential-geometry complex-geometry complex-manifolds kahler-manifolds or ask your own question.
{"url":"http://mathoverflow.net/questions/88996/non-compact-complex-surfaces-which-are-not-kahler","timestamp":"2014-04-19T07:51:05Z","content_type":null,"content_length":"69168","record_id":"<urn:uuid:c2770c7d-d190-492e-aeb2-2e2c4cf399e8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
[OS X TeX] change letters in math mode?

Bruno Voisin bvoisin at mac.com
Thu Oct 21 10:55:58 CEST 2004

On 21 Oct 04, at 10:19, Ingo Reich wrote:

> I finally got my Trump Mediaeval fonts working and think about using
> them together with my Lucida Bright fonts for math (a good alternative
> seems to be the pxfonts). First loading lucida (as usual, or just
> lucbmath.sty) and then the Trump Med. fonts however gives a bit
> strange looking result, since the roman letters in math mode are not
> substituted by the Trump Med. fonts. Somewhat modifying "lucbmath.sty"
> I tried to change this, but the result was simply that not only the
> (italic) roman letters changed, but also the greek letters (and maybe
> some other symbols, I don't really know). Is there a way (and is it
> reasonable) to only change the (italic) roman letters (brackets etc.),
> but keep the greek ones and the other symbols (mathematical relations
> etc.)?

Assuming the font is named "trump", you may try putting in the preamble of your LaTeX file stuff like:

\SetSymbolFont{operators}{bold} {\encodingdefault}{trump}{b}{n}
\SetMathAlphabet{\mathrm}{bold} {\encodingdefault}{trump}{b}{n}
\SetMathAlphabet{\mathit}{bold} {\encodingdefault}{trump}{b}{it}

The Greek letters, corresponding to the math alphabets letters or mathupright, should not be affected.

> And another problem: I'd also like to load a *scaled* version of the
> sans serif font that comes with the lucida package (the
> 'hls'-family), but independently from the lucida stylefiles. Any
> suggestions?

You'll have to rewrite the corresponding .fd file, for example /usr/local/teTeX/share/texmf.tetex/tex/latex/lucida/t1hls.fd, replacing stuff like

s*[0.95] hlsr8t

but this might conflict with the [lucidascale], [nolucidascale] and [lucidasmallscale] options defined in the Lucida style files. Have you tried simply loading the [lucidasmallscale] option?
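For instance, a rewritten `t1hls.fd` might contain declarations along these lines. This is a hypothetical sketch: the 0.92 scale factor is purely illustrative, and the bold font name shown is a guess, not taken from the actual Lucida package files:

```latex
% Hypothetical excerpt of a modified t1hls.fd (illustrative values only)
\DeclareFontFamily{T1}{hls}{}
\DeclareFontShape{T1}{hls}{m}{n}{<-> s * [0.92] hlsr8t}{}
\DeclareFontShape{T1}{hls}{b}{n}{<-> s * [0.92] hlsb8t}{}
```

The `s * [factor]` size function loads the external font scaled by the given factor at every size.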
Bruno Voisin --------------------- Info --------------------- Mac-TeX Website: http://www.esm.psu.edu/mac-tex/ & FAQ: http://latex.yauh.de/faq/ TeX FAQ: http://www.tex.ac.uk/faq List Post: <mailto:MacOSX-TeX at email.esm.psu.edu> More information about the Macostex-archives mailing list
{"url":"http://tug.org/pipermail/macostex-archives/2004-October/010532.html","timestamp":"2014-04-19T08:21:05Z","content_type":null,"content_length":"5396","record_id":"<urn:uuid:02e2a6d1-aa8a-4838-bc92-5f891e0e9fc2>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the Inverse of Square Root Function - ChiliMath

To find the inverse of a square root function, it is crucial to sketch or graph the given problem first to clearly identify what the domain and range are. I will utilize the domain and range of the original function to describe the domain and range of the inverse function by interchanging them. If you need additional information about what I meant by "domain and range interchange" between the function and its inverse, see my previous lesson about this.

Direction: Find the inverse functions of the following square root functions, if they exist. In addition, state their domain and range.

1) See solution 2) See solution 3) See solution 4) See solution 5) See solution

Example 1: Find the inverse function of

Every time I encounter a square root function with a linear term inside the radical symbol, I always think of it as "half of a parabola" drawn sideways. Since this is the positive case of the square root function, I am sure that its range will become increasingly more positive, in plain words, skyrocket to positive infinity. This particular square root function has this graph, with its domain and range identified. From this point, I will solve for the inverse algebraically by following the suggested steps. Basically: replace f(x) by y, interchange x and y in the equation, solve for y (which will soon be replaced by the appropriate inverse notation), and finally state the domain and range. Remember to use the techniques in solving radical equations to solve for the inverse. Squaring, or raising to the second power, the square root term should eliminate the radical. However, you must do it to both sides of the equation to keep it balanced. Make sure that you verify the domain and range of the inverse function against those of the original function. They must be "opposites of each other". Placing the graphs of the original function and its inverse in one coordinate axis... Can you see their symmetry along the line y = x?
See green dashed line.

Example 2: Find the inverse function of

This function is the "bottom half" of a parabola because the square root function is negative. That negative symbol is just −1 in disguise. In solving the equation, squaring both sides of the equation makes that −1 "disappear", since (−1)^2 = 1. Its domain and range will be the swapped "version" of the original function's.

Example 3: Find the inverse function of

This is the graph of the original function showing both its domain and range. Determining the range is usually a challenge. The best approach to finding it is to use the graph of the given function with its domain. Analyze how the function behaves along the y-axis while considering the x-values from the domain. Here are the steps to find the inverse of the given square root function. As you can see, it's really simple. Make sure that you do it carefully to prevent any unnecessary algebraic errors.

Example 4: Find the inverse function of

This function is one-fourth (a quarter) of a circle with radius 3 located in Quadrant II. Another way of seeing it: this is half of the semicircle located above the horizontal axis. I know that it will pass the horizontal line test because no horizontal line will intersect it more than once. This is a good candidate to have an inverse function. Again, I am able to easily describe the range because I have spent time graphing it. Well, I hope that you realize the importance of having a visual aid to help determine that "elusive" range. The presence of a squared term inside the radical symbol tells me that I will apply the square root operation to both sides of the equation to find the inverse. By doing so, I will have a plus or minus case. This is a situation where I must decide which one to pick as the correct inverse function. Remember that an inverse function is unique, so I can't allow having two.
The key is to consider the domain and range of the original function. I will swap them to get the domain and range of the inverse function. Use this information to match which of the two candidate functions satisfy the required conditions. Although they have the same domain, the range here is the "tie-breaker"! The range tells us that the inverse function has minimum value of y = − 3 and maximum value of y = 0. The positive square root case fails this condition since it has a minimum at y = 0 and maximum at y = 3. The negative case must be the obvious choice, even with further analysis. Example 5: Find the inverse function of It's helpful to see the graph of the original function because we can easily figure out both its domain and range. The negative sign of the square root function implies that it is found below horizontal axis. Notice that this is similar to Example 4 . It is also one-fourth of a circle but with radius of 5. The domain forces the quarter circle to stay in Quadrant IV. This is how we find its inverse algebraically. Did you pick the correct inverse function out of the two possible ones? The answer is the case with the positive sign. Now what? You can learn how to find the inverse of other kinds of functions. Go top of the page to check it out.
New York Math Tutor

...Teaching/tutoring is my passion and I am willing to go the extra mile to make sure I help every student I tutor. When you hire me as your professional math tutor, you get so much more than just an hourly-rate tutor. You're getting a tutor who is going to work around the clock to ensure your son's/daughter's academic performance increases.
9 Subjects: including SAT math, algebra 1, algebra 2, geometry

As a student at SUNY Geneseo I graduated cum laude with a Bachelor of Science in biology. I took the MCAT and received a score of 40 (14PS-14VR-12B), which placed me in approximately the 99.75th percentile of all test takers. I took the test July 27th, 2013.
24 Subjects: including algebra 1, algebra 2, ACT Math, chemistry

...I've also worked at an Apple retail store. I especially enjoy helping people better understand and get the most out of their technology. I am an advanced computer user and deeply familiar with Microsoft Windows.
37 Subjects: including trigonometry, calculus, chemistry, computer science

...Soon I will begin a master's degree at Columbia University. As a student, I know the stress that some students have over subjects that trouble them, and this allows me to be patient and help them work through the problems and build up their confidence. For the past 8 months I have tutored for th...
4 Subjects: including algebra 1, prealgebra, SAT math, elementary math

...I was often considered advanced while in school but I did not always apply myself. Teachers did not usually take the time to make the class interesting to an advanced student. Consequently, I did not always get good grades.
11 Subjects: including geometry, algebra 1, reading, public speaking
Difference between Maths and Science: Philosophy Forums

#41 - Posted Jul 17, 2011 - 10:21 AM by Rubinho (Initiate; Usergroup: Members; Topics: 2; Total Posts: 56):

Legion wrote: "Math is a study of language. Science is a study of natural systems."

I'm not so sure. What do you mean by "a study of"? Do you mean that the object of math is language? For that kind of statement, I think you will need a convincing argument, as it's somewhat counterintuitive. At least to me.

Math obviously needs language. The natural languages work out pretty well, but sometimes it needs a formal language too. But should we actually say that math is to language as science is to natural systems? I think science looks at how nature works and then makes assumptions about fundamental principles. (After being hit by a truck a certain number of times, anyone will believe that force equals mass times acceleration.) So the object of science is natural events, right? Not the laws or principles themselves. They remain hypotheses. I mean, it's one thing to say 'from all we observe, nature works this way', but it's another to say 'nature works this way--we know that for a fact'.

Now, if natural events are the object of science and natural laws are the axioms of this discipline, what takes these roles when we are talking about math? Well, as for the axioms, that is pretty clear. Some of the fundamental axioms would be Peano's. And there are others. Taken together, there seems to be a set of axioms from which all mathematical statements can be deduced. Is that not so?

But what takes the role of the object? So what is for math what natural events (e.g. something falling to the ground, something blowing up in a fireball, a tree growing, etc.) are for science? Well, we could say numbers. Okay, but is that all there's to it? Aren't there also sets, functions, vectors, matrices, and all sorts of things?
And are they really the object of math in the same way as natural events (or phenomena) are the objects of science? I don't think that these mathematical things are really the object of math. I think that they are for a mathematician what substances, bodies, and species are for a scientist. But the object of math is what a mathematician studies. And what a mathematician studies is what happens to those things when we do certain things with them. So we actually kind of experiment with things in math too. We do things with numbers and sets and functions and so on and see what happens. Right, I mean, what math is about is not to say that *there are* numbers, or *there are* sets, etc. We try to make statements about how these things combine.

Part of the difference is, then, that science looks at things which are visible, and math looks at things which are thinkable (this is actually a Platonic difference). The object of science is natural phenomena. The object of math is ... well, mathematical events, i.e. what happens to sets, numbers, etc. when we try certain things. The things (as opposed to objects) that science looks at are material. The things that math looks at are immaterial. They are actually metaphysical, while the things of science are physical.

It's interesting to see what this means for statements that try to express natural facts in mathematical language. 'Force equals mass times acceleration' should actually be: if you do to mass (in kg) and acceleration (in m/s^2) what a mathematician does when he multiplies two numbers, you get force (in N). Because actually, only a mathematician knows what multiplication really is, and only he knows how to handle the '25' in 25N. It is the scientist who can handle the 'N'.
Canonical Proofs for Linear Logic Programming Frameworks - Department of Computer Science, RMIT, 1997. Cited by 4 (4 self).

Logic programs consist of formulas of mathematical logic, and various proof-theoretic techniques can be used to design and analyse execution models for such programs. We briefly review the main problems, which are questions that are still elusive in the design of logic programming languages, from a proof-theoretic point of view. Existing approaches and analyses which lead to the various languages are all rather sophisticated and involve complex manipulations of proofs. All are designed for analysis on paper by a human, and many of them are ripe for automation. We aim to perform the automation of some aspects of proof-theoretic analyses, in order to assist in the design of logic programming languages. In this paper we describe the first steps towards the design of such an automatic analysis tool. We investigate the usage of particular proof manipulations for the analysis of logic programming strategies. We propose a more precise specification of sequent calculi inference rules that we use ...

In Workshop on Object-based Parallel and Distributed Computation, OBPDC'95, 1995. Cited by 1 (1 self).

There are several major approaches to model concurrent computations using logic.
In this context, one aim can be to achieve different forms of programming as logic, object-oriented or concurrent ones in a same logical language. Linear logic seems to be well-suited to describe computations that are concurrent and based on state transitions. In this paper, we propose and analyze a framework based on Full Intuitionistic Linear Logic (FILL), logical fragment with potentialities for non-determinisms management, as foundation of concurrent object-oriented programming, following the two paradigms proof-search as computation and proofs as computations.
2 identical wires A and B have the same length L and carry the same current I. Wire A is bent to form a square of side a. B1 and B2 are the values of magnetic induction at the center of the circle and the center of the square respectively. Find B1/B2.
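The question as extracted never says what wire B is bent into, but the reference to "the center of the circle" implies wire B forms a circle of circumference L. Under that assumption, the standard result is B1/B2 = π²/(8√2) ≈ 0.872, which the sketch below verifies numerically (the sample values of mu0, I and L are arbitrary, since the ratio is independent of them):

```python
import math

mu0, I, L = 4e-7 * math.pi, 2.0, 1.0   # arbitrary sample values; the ratio does not depend on them

r = L / (2 * math.pi)                  # radius of the circle made from wire B
B1 = mu0 * I / (2 * r)                 # field at the centre of a circular current loop

a = L / 4                              # side of the square made from wire A
B2 = 2 * math.sqrt(2) * mu0 * I / (math.pi * a)  # field at the centre of a current-carrying square

ratio = B1 / B2
assert abs(ratio - math.pi**2 / (8 * math.sqrt(2))) < 1e-12
print(ratio)   # ~ 0.872
```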
Statistical Thinking in Epidemiology

• Provides an alternative and intuitive understanding of statistical modeling using vector geometry
• Uses vector geometry extensively to explain the problems with collinearity in linear models and other complex statistical models
• Introduces a wide range of statistical models as analytical tools: from simple regression, analysis of covariance, multilevel models, latent growth models, and growth mixture models to partial least squares regression
• Examples come from real research settings and are discussed in great detail without over-simplification
• Discusses vital but often poorly understood statistical concepts, such as mathematical coupling, regression to the mean, collinearity, the reversal paradox and statistical interaction

While biomedical researchers may be able to follow instructions in the manuals accompanying the statistical software packages, they do not always have sufficient knowledge to choose the appropriate statistical methods and correctly interpret their results. Statistical Thinking in Epidemiology examines common methodological and statistical problems in the use of correlation and regression in medical and epidemiological research: mathematical coupling, regression to the mean, collinearity, the reversal paradox, and statistical interaction.

Statistical Thinking in Epidemiology is about thinking statistically when looking at problems in epidemiology. The authors focus on several methods and look at them in detail: specific examples in epidemiology illustrate how different model specifications can imply different causal relationships amongst variables, and model interpretation is undertaken with appropriate consideration of the context of implicit or explicit causal relationships. This book is intended for applied statisticians and epidemiologists, but can also be very useful for clinical and applied health researchers who want to have a better understanding of statistical thinking.
Throughout the book, statistical software packages R and Stata are used for general statistical modeling, and Amos and Mplus are used for structural equation modeling.

Table of Contents

Uses of Statistics in Medicine and Epidemiology
Structure and Objectives of This Book
Nomenclature in This Book
Vector Geometry of Linear Models for Epidemiologists
Basic Concepts of Vector Geometry in Statistics
Correlation and Simple Regression in Vector Geometry
Linear Multiple Regression in Vector Geometry
Significance Testing of Correlation and Simple Regression in Vector Geometry
Significance Testing of Multiple Regression in Vector Geometry
Path Diagrams and Directed Acyclic Graphs
Path Diagrams
Directed Acyclic Graphs
Direct and Indirect Effects
Mathematical Coupling and Regression to the Mean in the Relation between Change and Initial Value
Historical Background
Why Should Change Not Be Regressed on Initial Value? A Review of the Problem
Proposed Solutions in the Literature
Comparison between Oldham’s Method and Blomqvist’s Formula
Oldham’s Method and Blomqvist’s Formula Answer Two Different Questions
What Is Galton’s Regression to the Mean?
Testing the Correct Null Hypothesis
Evaluation of the Categorisation Approach
Testing the Relation between Changes and Initial Values When There Are More than Two Occasions
Analysis of Change in Pre-/Post-Test Studies
Analysis of Change in Randomised Controlled Trials
Comparison of Six Methods
Analysis of Change in Non-Experimental Studies: Lord’s Paradox
ANCOVA and t-Test for Change Scores Have Different Assumptions
Collinearity and Multicollinearity
Introduction: Problems of Collinearity in Linear Regression
Mathematical Coupling and Collinearity
Vector Geometry of Collinearity
Geometrical Illustration of Principal Components Analysis as a Solution to Multicollinearity
Example: Mineral Loss in Patients Receiving Parenteral Nutrition
Solutions to Collinearity
Is ‘Reversal Paradox’ a Paradox?
A Plethora of Paradoxes: The Reversal Paradox
Background: The Foetal Origins of Adult Disease Hypothesis (Barker’s Hypothesis)
Vector Geometry of the Foetal Origins Hypothesis
Reversal Paradox and Adjustment for Current Body Size: Empirical Evidence from Meta-Analysis
Testing Statistical Interaction
Introduction: Testing Interactions in Epidemiological Research
Testing Statistical Interaction between Categorical Variables
Testing Statistical Interaction between Continuous Variables
Partial Regression Coefficient for Product Term in Regression Models
Categorization of Continuous Explanatory Variables
The Four-Model Principle in the Foetal Origins Hypothesis
Categorization of Continuous Covariates and Testing Interaction
Finding Growth Trajectories in Lifecourse Research
Current Approaches to Identifying Postnatal Growth Trajectories in Lifecourse Research
Partial Least Squares Regression for Lifecourse Research
OLS Regression
PLS Regression
Concluding Remarks

Author Bio(s)

Dr Yu-Kang Tu is a Senior Clinical Research Fellow in the Division of Biostatistics, School of Medicine, and in the Leeds Dental Institute, University of Leeds, Leeds, UK. He was a visiting Associate Professor at the National Taiwan University, Taipei, Taiwan. First trained as a dentist and then as an epidemiologist, he has published extensively in dental, medical, epidemiological and statistical journals. He is interested in developing statistical methodologies to solve statistical and methodological problems such as mathematical coupling, regression to the mean, collinearity and the reversal paradox. His current research focuses on applying latent variable methods, e.g. structural equation modeling, latent growth curve modelling, and lifecourse epidemiology. More recently, he has been working on applying partial least squares regression to epidemiological data.
Prof Mark S Gilthorpe is professor of Statistical Epidemiology, Division of Biostatistics, School of Medicine, University of Leeds, Leeds, UK. Having completed a single honours degree in Mathematical Physics (University of Nottingham), he undertook a PhD in Mathematical Modelling (University of Aston in Birmingham), before initially embarking upon a career as a self-employed systems and data analyst and computer programmer, and eventually becoming an academic in biomedicine. Academic posts include systems and data analyst of UK regional routine hospital data in the Department of Public Health and Epidemiology, University of Birmingham; Head of Biostatistics at the Eastman Dental Institute, University College London; and founder and Head of the Division of Biostatistics, School of Medicine, University of Leeds. His research focus has persistently been the development and promotion of robust and sophisticated modelling methodologies for non-experimental (and sometimes large and complex) observational data within biomedicine, leading to extensive publications in dental, medical, epidemiological and statistical journals.

Editorial Reviews

"The graphical explanations proposed are quite convincing and these tools should be more exploited in statistical classes."
—Sophie Donnet, Université Paris-Dauphine, CHANCE, 25.4
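One of the book's recurring warnings, that regressing change scores on initial values manufactures association through mathematical coupling, is easy to demonstrate with a simulation. This is a hedged sketch in Python rather than the book's R/Stata, using synthetic data in which change and baseline are, by construction, unrelated:

```python
import random

random.seed(0)
n = 20000
# Baseline x and follow-up y drawn independently: there is no real
# relationship between the change and the baseline here.
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]
d = [yi - xi for xi, yi in zip(x, y)]   # change score, mathematically coupled to x

def ols_slope(xs, ys):
    """Ordinary least squares slope of ys on xs."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

slope = ols_slope(x, d)
# Because d = y - x and y is independent of x, the expected slope is -1:
# a spurious "change depends on initial value" effect.
assert -1.1 < slope < -0.9
```

The negative slope appears even though y was generated with no dependence on x at all, which is the coupling artefact the book's Oldham/Blomqvist material addresses.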
A153320 - OEIS %S 5,17,19,59,61,191,227,521,641,683,709,857,863,919,983,1031,1039,1097, %T 1117,1123,1151,1229,1423,1543,1579,1621,1699,1733,1759,1867,1871, %U 2153,2237,2287,2357,2383,2557,2621,2879,2971,3301,3329,3371,3581,3847,4021 %N Primes p such that p^2+-48 are also primes. %t fQ[n_]:=PrimeQ[n^2-48]&&PrimeQ[n^2+48];lst={};Do[If[fQ@Prime[n],AppendTo[lst,Prime[n]]],{n,7!}];lst %o (MAGMA) [p: p in PrimesUpTo(5000)|IsPrime(p^2-48) and IsPrime(p^2+48)] [From Vincenzo Librandi, Jan 30 2011] %Y Cf. A153116, A153119, A153120 %K nonn %O 1,1 %A _Vladimir Joseph Stephan Orlovsky_, Dec 23 2008
The Dream of Mind and Machine
by Edward Rothstein

Gödel, Escher, Bach: An Eternal Golden Braid
by Douglas R. Hofstadter
Basic Books, 777 pp., $18.50

“Well now, would you like to hear of a race-course, that most people fancy they can get to the end of in two or three steps, while it really consists of an infinite number of distances, each one longer than the previous one?”
—Lewis Carroll, What the Tortoise Said to Achilles

During the nineteenth century, mathematics, the acknowledged foundation of all the sciences, began to turn its attention to its own foundations. With the work of Boole, De Morgan, Frege, Peano, Russell, and others, a great project began to take shape: mathematics would examine its own structure with all the rigor it had brought to other explorations. The goal was a system that would be unquestionable and secure; a mathematical language would be developed whose simplicity and clarity would dispel all doubts, reveal all mathematical truths.

The goal of the project was similar to the dream of many seventeenth-century thinkers who wished to create or discover a “universal language” in which a set of symbols would be used to describe all of man’s knowledge. Its structure would be so tied to the structure of the universe that the language itself could be used to discover truths. Its syntax would prevent falsity. There would be no ambiguous meanings. Leibniz, with his ideas of a precise symbolic language and a calculus of reasoning, was, in fact, a major influence on the nineteenth-century systemization of logic which was to provide the syntax of the new mathematical language. In 1884, for example, Gottlob Frege claimed that arithmetic is “simply a development of logic,” and in the Principia Mathematica Russell and Whitehead attempted to show that all of “pure mathematics” might be derived from the propositions of logic.
The full extent of the project was articulated in 1904 by David Hilbert: mathematics would be based upon a system of axioms similar to the one later elaborated in Principia Mathematica. New mathematical statements would be produced according to a set of rules; no two statements produced would contradict each other, making mathematics consistent. Moreover, the system would “fill” its chosen universe; it would be complete, producing every truth. Mathematical knowledge would then be secure. Just by manipulating signs of mathematics according to the rules of logic—its “grammar”—one could eventually discover all necessarily true statements, and one would never produce a falsehood. The dream of producing such a language was shattered in 1931 by a twenty-five-year-old Austrian mathematician, Kurt Gödel. In a paper entitled (in English translation) “On Formally Undecidable Propositions of Principia Mathematica and Related Systems I,” he showed that in sufficiently powerful mathematical systems, consistency could not be proven in the way Hilbert wished, and more importantly, that such systems were incomplete: there are true statements in mathematics that the system will not produce, true statements, that is, that cannot be proven. This result has since taken its place among the other metaphorically powerful scientific results of our century—the Theory of Relativity and the Uncertainty Principle. Gödel’s result is known as the Incompleteness Theorem. The proof of the theorem is so shrouded in abstraction that it is impenetrable to an outsider; one can hardly read Gödel’s original paper without previous preparation.^* Yet in the contemporary intellectual climate it has become almost essential to gain some understanding of Gödel’s theorem, it is not only called upon in discussions of philosophy of mathematics, but also in essays on science, music, and literature. There is, fortunately, an excellent book about the theorem by Ernest Nagel and James R. 
Newman that gives the historical background and a simplified exposition that is as direct and brief as its title: Gödel’s Proof. There is also a playful presentation of the theorem in Raymond Smullyan’s recent What Is the Name of This Book? Now Douglas Hofstadter has also written a welcome guide for the layman through the mathematical thickets, but his intentions are more grandiose. “I began,” he writes, “intending to write an essay at the core of which would be Gödel’s Theorem. I imagined it would be a mere pamphlet.” What emerged instead is a large, expensive, eclectic, beautifully designed and lucidly written book that attempts to link Gödel’s theorem to the art of M.C. Escher and to the music of J.S. Bach. The text is punctuated with over 150 illustrations, ranging from Escher’s prints to a drawing of an arch constructed by termites. Hofstadter discusses cellular reproduction and Zen koans. Each chapter is preceded by playful dialogues between Achilles and the Tortoise from Zeno’s paradox, and assorted other characters, including the author himself. The dialogues are modeled on Bach’s musical compositions and bear titles like “Three-Part Invention,” “Chromatic Fantasy, and Feud,” and “The Magnificrab, Indeed.” The book, Hofstadter assures us, has “many levels of meaning”; it is meant to symbolize “at once Bach’s music, Escher’s drawings, and Gödel’s Theorem.” Hofstadter, an Assistant Professor of Computer Science at Indiana University, is also interested in arguing in this “metaphorical fugue on minds and machines” that someday it will be possible to create “Artificial Intelligence” in a computer. If Gödel’s theorem shattered a sort of dream, it seems as if Hofstadter is intent on creating another one, a dream filled with wordplay and association in which Gödel plays a metaphorical role. Such a dream could have dissipated into chaos; but instead the book is exhilarating, challenging, valuable, and frustrating. 
Hofstadter writes directly and playfully for the lay reader, explaining the most abstract and wideranging arguments in short sections of great virtuosity. He is sophisticated in his understanding of the systems he explores and is adventurously speculative about their limits. But the book resists simple evaluation; it is at once surprisingly subtle and annoyingly naïve, exuberantly clever and embarrassingly silly. In order to understand Hofstadter’s project, we shall first have to understand the one that preceded Gödel. It might seem, at first, that the mathematical project that Gödel shattered was a totally unrealistic one, particularly if it is seen as part of a continuing search for a “universal language.” But some such project was needed: the certainty of mathematical knowledge after the nineteenth century really was in doubt. Difficulties had arisen even in that most concrete field, geometry. The ancient “axiomatic method,” in which certain self-evident fundamental propositions are used as basic axioms from which the theorems may be derived, itself began to pose problems. There had always been a problem with Euclid’s system, despite its longevity. The first four of Euclid’s five axioms from which he derived plane geometry had seemed indubitable (e.g., a straight line segment can be drawn joining any two points). But the fifth axiom (whose modern equivalent is: through a point outside a line only one line can be drawn parallel to the given line) had always seemed less than self-evident, and over the centuries there were numerous attempts to demonstrate its superfluity by deriving it from the first four. Finally, it was recognized that it could not be proven; it was, in fact, independent of the other axioms. 
Yet this independence is puzzling, for if the fifth axiom were contradicted, if, for example, one were to assert that through a point outside a line no line can be drawn parallel to a given line, and if this new axiom were added to the first four, the results seem absurd. One would find, for example, that the sum of the angles of a triangle would be more than 180 degrees. Several mathematicians simultaneously realized that by reinterpreting certain undefined terms in Euclidean geometry, like “line,” sense could be made out of the absurdities of the system with a different fifth axiom. The system above, for example, could be modeled by a sphere in which “lines” are interpreted to be great circles (like longitudinal or equatorial circles). Not only do the first four axioms still hold, but so does the new fifth axiom: through a point outside a line (a great circle) no line (great circle) can be drawn parallel to it (since all great circles intersect). And on a sphere the sum of the angles of a triangle is indeed greater than 180 degrees. This new interpretation yielded a non-Euclidean geometry. The discovery of non-Euclidean geometries raised serious questions about how mathematics was to be understood. Mathematicians tended to assume that the terms of mathematics, whether “point,” or “line,” or “numeral,” referred clearly to objects in our experience. This was no longer certain. In fact, by reinterpreting those terms, one could generate quite different systems, some of which might have nothing to do with out experience at all. As Nagel and Newman put it: “It came to be acknowledged that the validity of a mathematical inference in no sense depends upon any special meaning that may be associated with the terms.” The business of mathematics was not interpretation, or truth, but logical derivation. 
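The sphere model's "absurdity" is easy to check numerically. By Girard's theorem, the area of a triangle on a unit sphere equals its angle sum minus π (the "spherical excess"). The sketch below uses the octant triangle cut out by the equator and two meridians 90 degrees apart, whose angles are all right angles:

```python
import math

# Angles of the octant triangle: three right angles.
angles = [math.pi / 2, math.pi / 2, math.pi / 2]

angle_sum_deg = math.degrees(sum(angles))
assert abs(angle_sum_deg - 270.0) < 1e-9   # far more than Euclid's 180 degrees

# Girard's theorem: area on the unit sphere = angle sum - pi (the spherical excess).
excess = sum(angles) - math.pi
assert abs(excess - math.pi / 2) < 1e-12

# Sanity check: eight such triangles tile the sphere, whose total area is 4*pi.
assert abs(8 * excess - 4 * math.pi) < 1e-12
```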
But if it was difficult in a field as concrete as geometry to determine which statements could be proven and what the contradictions were, how much more difficult would it be in more abstract systems? All of mathematics was becoming more abstract in the nineteenth century; it was no longer tied to a description of our universe, its space, or its numbers. It was also, then, cut off from the confirmation experience might give to mathematics. How could one be certain that one was not producing contradictions in these unearthly realms? David Hilbert, who did more than any other contemporary mathematician to refine and clarify the axiomatic foundations of geometry, believed that only the axiomatic method could guarantee the certainty of mathematical results. His dream, mentioned earlier, was nurtured by this belief. He proposed a radical “formalization” of mathematics; all mathematical signs would be drained of meaning in order to avoid such confusions as arose in geometry. Statements in mathematics would be no more than “strings” of such arbitrarily chosen signs as “strokes,” hyphens, and letters. Axioms would be strings that were given with the system. Then by using specified mechanical rules, these strings could be manipulated to produce other strings to be known as “theorems.” A “derivation” of a theorem would just be an “array” of strings, arranged according to the rules. Mathematics would become a mechanical, syntactical game. The study of such mathematical systems, known as “meta-mathematics,” was also to be systematized. The questions of meta-mathematics were substantial. For example, how can we determine whether a particular string is a theorem, i.e., whether it can be proven? Given a particular interpretation of the system, are there any contradictions within it? Hilbert’s formalist project has had enormous influence. 
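Hilbert's picture of mathematics as a "mechanical, syntactical game" is exactly what Hofstadter dramatizes in the book with his MU-puzzle: the MIU-system, whose single axiom "MI" and four typographical rules generate "theorems" with no appeal to meaning. A small sketch of a bounded search through that system (the length cap is only there to keep the search finite):

```python
def successors(s):
    """All strings obtainable from s by one application of an MIU rule."""
    out = set()
    if s.endswith("I"):                 # Rule I:   xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):               # Rule II:  Mx  -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):         # Rule III: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):         # Rule IV:  UU  -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

# Breadth-first search from the axiom "MI", truncated at length 6.
theorems, frontier = {"MI"}, {"MI"}
while frontier:
    frontier = {t for s in frontier for t in successors(s)
                if len(t) <= 6 and t not in theorems}
    theorems |= frontier

assert "MIU" in theorems    # MI -> MIU by Rule I
assert "MUI" in theorems    # MI -> MII -> MIIII -> MUI by Rules II, II, III
assert "MU" not in theorems # the famous non-theorem never shows up
```

The last assertion reflects a bounded search, not a proof; the actual argument that "MU" is underivable (an invariant on the count of I's modulo 3) is given in Hofstadter's book.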
By treating systems as if they were purely syntactic he focused attention upon the minute nuances in mathematical thought and helped make abstract rigor the measure of mathematical presentation. And by defining the realm of meta-mathematics, Hilbert explicitly raised questions about truth and provability in mathematics. The most thorough attempt to ground mathematics in a system was made by Russell and Whitehead in Principia Mathematica: all of mathematics was to be developed out of the rules of a logical calculus. Given such a system, Hilbert asked, can we prove that it is both consistent and complete, that it contains no contradictions and yields every true statement? If we could, if just a formalized arithmetic would have these properties, we would be on our way toward the construction of a powerful mathematical language. Hilbert recognized that in accomplishing this dual task of proving both consistency and completeness, one could not appeal to ordinary mathematical methods, for it was just those methods whose legitimacy was in question. Nor could proofs in meta-mathematics make reference to infinite numbers of strings or operations since such infinite sets had created paradoxes and confusion in the mathematics of the time. Such proofs would have to construct any mathematical object they wished to make use of, demonstrating its existence clearly and concretely. How did Gödel overturn Hilbert’s project? He proved that there can be no proof of consistency with Hilbert’s restrictions, and that powerful mathematical systems can never be made complete: there will always be some true statement which the system cannot prove. The irony is that in proving these results he remained within Hilbert’s restrictions upon meta-mathematical proofs. The Incompleteness Theorem, for example, is proven by an actual construction of a true nonprovable statement. In fact, the technique behind that construction is nearly as important as the result itself. 
Gödel began with the formal system of Principia Mathematica, but as he noted, he could have chosen any formal system that represented arithmetic. In such a system familiar arithmetical facts, such as “1 + 1 = 2,” would be represented by signs in the system that could be manipulated as if they had no meaning. Hofstadter, in his fine presentation of Gödel’s proof, uses a formal system he calls “Typographical Number Theory” (TNT) because, like all formal systems, it can be said to manipulate non-numerical characters: meaningless signs are shifted around according to given mechanical rules. These signs, of course, can also be given precise interpretations. For example, one axiom of TNT, “∀ a: (a + 0) = a,” means “0 added to any number yields the original number”; and the string “(S0 + S0) = SS0” means “1 + 1 = 2.” But because TNT is a formal system the sign “0” should not be seen as a numeral, and the sign “+” not as the operation of addition; these are just signs that appear where they do because of certain syntactical rules. Because such signs have no complicated meanings, Gödel was able to devise a “code” for the formal system that assigned a number to each formal sign. Each string was matched with a particular number determined by its signs, and, since a proof is nothing but an array of strings, it too could be represented in the code with its own “Gödel number.” Hofstadter, using a different code from Gödel’s, maps each sign of TNT onto a three-digit number. “S,” for example, is coded as “123,” and “+” is coded as “112.” The string “(S0 + S0) = SS0,” for example, has a Gödel number in Hofstadter’s coding of “362,123,666,112,123,666,323,111,123,123,666.” If the statement appeared in a proof, this number would be surrounded by others. Gödel’s code is quite similar to standard cryptograms.
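The mechanics of this coding are simple enough to sketch in a few lines of code. The toy below uses only the six TNT signs and three-digit codes quoted above (it is my illustration, not Hofstadter's full table):

```python
# Three-digit codes for the TNT signs quoted in the text above.
CODES = {'(': '362', ')': '323', 'S': '123', '0': '666', '+': '112', '=': '111'}
DECODE = {code: sign for sign, code in CODES.items()}

def encode(tnt_string):
    """Map a TNT string to its Goedel number, written as a digit string."""
    return ''.join(CODES[ch] for ch in tnt_string if ch != ' ')

def decode(digits):
    """Recover a TNT string, or None if the digits are not a valid coding."""
    if len(digits) % 3 != 0:
        return None
    chunks = [digits[i:i + 3] for i in range(0, len(digits), 3)]
    if any(chunk not in DECODE for chunk in chunks):
        return None
    return ''.join(DECODE[chunk] for chunk in chunks)

print(encode('(S0 + S0) = SS0'))  # 362123666112123666323111123123666
```

Running encode on "(S0 + S0) = SS0" reproduces the number given in the text digit for digit, while decode('999') returns None: the coding splits all numbers into those that translate back into strings of signs and those that do not. Whether a decodable string is actually a theorem is, of course, the further question the essay turns to.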
If one were to code letters of the alphabet into numbers, for example, one could code any message, and, conversely, given a string of numbers in code, one would be able to tell if they “made sense,” if they could be decoded, and what the message was. Such a coding would divide all strings of numbers into two classes: those that could be translated into a message and those that couldn’t, either because the numbers did not correspond to coded letters or because they did but made no sense. Similarly Gödel’s coding allowed one to characterize any number: a number would be a Gödel number if and only if it could be translated into a theorem or a proof in the formal system. In Hofstadter’s code, for example, one would be able to tell if a given number was a Gödel number by dividing it into groups of three digits; each of those groups would have to correspond to a sign in the system and the translated string of signs would not only have to “make sense” but also be a theorem or proof. One could similarly recover any string or proof given a Gödel number. This entire procedure may seem, as it did to me when I first learned the proof, ridiculously frivolous. The code is a peculiar interpretation of the formal system that seems to accomplish nothing more than moving the signs even further away from the theoretical facts expressed in basic numerals, like “1 + 1 = 2.” Yet, as Hofstadter clearly explains, the importance of the coding is that the formal system which it “maps,” or encodes, into numbers is a formal system which represents statements about numbers. This makes the meta-mathematical system more accessible, for a statement in the meta-mathematical system is a statement about the strings of TNT—e.g. 
“’(S0 + S0) = SS0’ is a theorem.” By using the code, such a statement can be translated into a statement about numbers—“The number ‘362,123,666,…,123,666’ [above] is a Gödel number [i.e., it corresponds to a string which is a theorem, produced by the formal system].” Meta-mathematical statements about theorems, then, may be considered statements about Gödel numbers. But there is still another twist: since one can define a Gödel number arithmetically, statements about Gödel numbers are statements in number theory which can be represented in the formal system TNT. So meta-mathematical statements can be represented within the system; statements about TNT can be represented within TNT: the formal system can, in a precise way, “talk” about itself. This would hardly be an achievement in a “natural” language like English, but it is remarkable for a system as seemingly silent as arithmetic. In English difficulty arises from this reflective power; the statement “This statement is false” is the classic example. If it is false the statement is true, and if it is true it is false. Using his code, Gödel created a similar statement in the formal system, namely, a string G which could be interpreted as saying “G is not provable.” This is not, in a formal system, a mere paradox. G is either provable or not. If it is provable, then since arithmetic is consistent, G is true. But G “says” “G is not provable,” so if it were true, it would be false and we have a contradiction. G, then, is not provable. But that is exactly what it “says,” so it is true. And so, we have an actual example of a true statement that cannot be proven. Arithmetic and all equally powerful systems are thus incomplete. Moreover, Gödel showed that arithmetic is essentially incomplete—there is no way to add strings to the system as axioms that will make it complete. 
This result may not, in our relativistic and uncertain age, have the same shattering effect upon us as it had upon mathematicians nearly fifty years ago, who believed that with axioms and logic one could reach, if not every philosophical truth, at least every mathematical one. But the gap between provability and truth has nevertheless taken on a larger metaphorical significance. Gödel’s theorem has been used, for example, to argue that the natural world will always elude our most powerful theories; man’s knowledge can never reach all of what is. The theorem has also been used in controversies in computer science. For, as Hofstadter makes clear, formal systems, with their syntactic shuttling of signs, are at the source of all computer activity. In 1936, before the first modern computer was built, Alan Turing created a formal theory of how a computer would work. And a formal system also lies at the lowest level of the most advanced contemporary machines; one finds, in the “hardware,” a formal system in which “bits” of binary information—1s and 0s—are manipulated as mechanically as Hilbert’s laws of inference manipulate mathematical strings. Gödel’s theorem would seem then to imply something quite precise about the limitations of computers. Hofstadter discusses an article by J.R. Lucas which had led to an excited interchange among philosophers and computer scientists. “Gödel’s theorem,” Lucas wrote, “seems to me to prove that Mechanism is false, that is, that minds cannot be explained as machines.” For a machine can easily be programmed to generate the theorems of a particular formal system, but it will fail to detect the Gödelian pitfalls latent in that system, which, while they can never be generated as theorems by the machine, still can be seen to be true by human minds. 
Lucas’s arguments are intricate, but the conclusion is that “the mind, being in fact ‘alive,’ can always go one better than any formal, ossified, dead system can.” As Michael Polanyi put it: “The powers of the mind exceed those of a logical inference machine.” This argument, and others like it, uses Gödel’s proof to elevate the mind at the expense of the machine. Hofstadter will have none of this. He believes that what holds for the machine also holds for the brain. For it too is a formal system. It has a basic “hardware” composed of perhaps ten thousand million neurons, each one of which either fires or doesn’t fire, according to strict mechanical laws. Over thirty years ago, in his classic Cybernetics, Norbert Wiener wrote: “The all-or-none character of the discharge of the neuron is precisely analogous to the single choice made in determining a digit on the binary scale.” If the lower level of the brain is also a formal system, then it too should be subject to the limitations of “Gödelization” that Lucas and others argue machines are subject to. Hofstadter gives a convincing mathematical argument for why the mind is as limited as a machine. But more importantly, it is one of the themes of Gödel, Escher, Bach that while there may be a formal system underlying all mental activity, the mind somehow transcends the formal system which supports it. Once the underlying formal system is powerful enough to reflect itself, even on the most elementary levels, Hofstadter argues, a new dynamic enters into the brain. Hofstadter makes an extended comparison between the brain and an ant colony; the complex organization of the ant colony displays a strategy and awareness while the individual ants seem to play the parts of neurons. In the brain, too, clusters of activity are formed, interpretation begins, different levels begin to interact.
Hofstadter writes: “Every aspect of thinking can be viewed as a high-level description of a system which, on a low level, is governed by simple, even formal rules.” The method of argument used in Gödel’s proof becomes more important for Hofstadter than its limiting result. The proof’s usage of codes, its creation of mappings, its mixing of levels, and dizzying self-reference, all seem to Hofstadter to carry metaphorical implications for the activity of intelligence itself. The most important notion for Hofstadter is the “indirect self-reference” found in the Gödel string. He makes this idea central to his text in numerous playful allusions, and in serious and sustained analysis. “Indirect self-reference” is one of the richest and most vivid of his conceptions; it is, he argues, a crucial process of our minds when we correct ourselves, solve puzzles, engage in interpretations. Without some ability to refer back to our own mental activity, and to transform it, intelligence itself would be impossible. The mind, Hofstadter writes, seems to act like the two hands drawing each other in one of Escher’s finest drawings. Such “strange loops,” he claims, may even be at the heart of life itself; in one of his more elaborate and daring metaphors, Hofstadter examines the transfer of information within the cell and finds the same formal complexities he found in mathematical logic and the human brain. DNA, for example, contains at once the program for the cell’s activity, the data which are manipulated by particular enzymes, and the language transcribed by RNA: it is a formal string which is interpreted on different levels. In fact, the entire cellular mechanism, involving transcription and translation of the genetic code from a DNA strand that indirectly directs its own self-replication, is mapped by Hofstadter in an elaborate chart onto the interpretations and codings of Gödel’s theorem, with its self-referring string.
Hofstadter has, it turns out, chosen an idiosyncratic Gödel numbering system for his proof so he could set up a close identification between that code and the recently deciphered and equally arbitrary genetic code. If life can grow out of the formal chemical substrate of the cell, if consciousness can emerge out of a formal system of firing neurons, then so too, Hofstadter seems to argue, will computers attain human intelligence. In a rudimentary way, the elementary formal systems of the computer already hint at the complexities of intelligence (perhaps because they have been designed intelligently). The basic strings of binary information are grouped into “words” and interpreted in a variety of ways: as numbers, addresses, commands. Higher-level computer languages hide the electronic clatter below, grouping the mechanical transfers of “bits” of information into patterns, aiming eventually at the high-level language of the programmer’s daily life. There is, Hofstadter acknowledges, a long way to go, but eventually, he thinks, a computer will be programmed to be indistinguishable from the human mind. This is not the dream of a solitary crank. Within the last few decades, artificial intelligence has become a major field of research. Margaret Boden, in her recent survey, Artificial Intelligence and Natural Man, conveys the tremendous excitement and the stubborn perseverance of the researchers in the field. Much has already been achieved: there is a master program for playing checkers; computers can understand spoken English in specific contexts, engage in analogical thinking, read Chinese characters. These are hardly signs of a general intelligence, but Hofstadter is frequently persuasive about the potentials of AI if only because of his insights into the processes involved in solving problems. 
In fact, his own intelligence is so interesting and lively that one feels such a mind would succeed in mirroring itself in its activities in some way, even in attempting to do so literally in a computer program. “Indirect self-reference,” a character called the Author says in one of Hofstadter’s dialogues, “is my favorite topic.” He not only wants to discuss it in its various forms—in Gödel’s proof, in intelligence, in the cell—he wants to create it in the text as well, create a literary model, turn the entire book into “one big self-referential loop.” Its end turns back to the beginning. The book claims to reveal many meanings, many hidden truths. Hofstadter quotes Hans David, who writes of Bach’s Musical Offering, “the reader, performer, or listener is to search for the Royal Theme in all its forms. The entire work, therefore, is a ricercar in the original, literal sense of the word.” Hofstadter playfully and immodestly hints at similar “unending subtleties” that might appear in the “many levels of meaning” in his own work. “By seeking, you will discover,” he quotes Bach. Much of what is to be discovered, however, has more to do with Escher than with Bach. Escher’s drawings are often amusing tricks or puzzles, exploiting self-reference, level interaction, and figure/ground play. They are coolly intriguing, unsettling at times, but are, for the most part, propositional tricks in picture form. Similarly, Hofstadter is attracted to verbal trickery; he hides acrostics, anagrams, puns, and self-referential jokes throughout the text. One of the dialogues is called “Crab Canon”; it attempts to imitate a musical crab canon in which a voice is played backward against itself. But the light-hearted “Crab Canon” later becomes the subject of a serious examination of multileveled meaning. Its “epigenesis” is shown to be a literary example of the phases of cell division, with discussions of its prophase, metaphase, anaphase, and telophase.
And when Hofstadter studied a DNA strand that interested him in relation to the canon, “I saw that the ‘A,’ ‘T,’ ‘C’ of Adenine, Thymine, Cytosine coincided—mirabile dictu—with the ‘A,’ ‘T,’ ‘C’ of Achilles, Tortoise, Crab [the characters in the dialogue].” This type of wit becomes all too solemn. Many of Hofstadter’s discussions also confuse profound insights with the most banal ones. He makes a useful analogy, for example, between the figure and ground in a drawing, and theoremhood and non-theoremhood in mathematics; one implication of Gödel’s theorem is that in certain formal systems, a figure and ground will not carry the same information. But there is no similar revelation to be gained in casually mentioning that, in music, melody is figure and harmony the ground. And again: Hofstadter indulges in a metaphorical consideration of the well-known theorem of the logician Alfred Tarski—which states that there is no mechanical criterion for determining the truth of statements in certain mathematical systems, including arithmetic. Hofstadter says that there is no decision procedure for beauty in art either and he suggests some link with Tarski’s theorem, which hardly seems significant. Such links become a serious problem because the book is concerned directly with the nature of links, or “maps,” between formal systems. The book is strongest in its explications of formal systems, in its discussions of Gödel, the brain, the computer, the cell. Hofstadter then uses various formal devices for connecting the systems he discusses. Texts and ideas are pared down to a syntax; maps and charts identify the most diverse objects through their shared structure. Such identifications, Hofstadter believes, contain suggestive meanings, as with the mapping showing the correspondence of the self-replication system of the cell and the self-referential system of Gödel’s code.
But Hofstadter’s “maps” can also be outrageously silly, as when he has his characters discuss tests for the “genuineness” of Zen koans, as a parallel to tests for whether number theoretical statements are theorems. Similarly, in identifying characters’ initials with DNA’s bases, or his book with the Musical Offering, he is punning on the lowest “structural” or “syntactic” level, and it is difficult to imagine such mappings having the higher significance Hofstadter implies they have. Formalisms outside mathematics must be filled out with interpretations before they are cavalierly mapped; the mapping may touch only on the most trivial aspects of the structure. The difficulty, then, lies not in creating formal parallels but in judging interpretations, not in the syntax but in the semantics. Curiously Hofstadter has no illusions about this. He discusses varying significances and the limitations of formalism quite clearly. But though he will talk generally about the complexities of language or music or beauty, he is much more at ease in expounding formalisms. He is drawn, for example, to the propositional in the visual arts: he discusses Escher and Magritte because they “illustrate” Hofstadter’s notions about levels and self-reference (often, it must be said, not at all subtly). And while Hofstadter loves Bach’s music, Bach’s presence in this book is based almost entirely upon his formal tricks in the canons and fugues. Hofstadter is particularly interested in a canon in the Musical Offering which, as it is played over and over, modulates up a key each time; this is Hofstadter’s prime example of “strange loops” in Bach’s music. But this is a “low level” fact, as Hofstadter might put it, and while he may recognize that music’s meaning lies in more complex relations, he still treats Bach on this trivial level, implying that the same self-reference that lies at the heart of Gödel’s and Escher’s work is also the source of Bach’s greatness. 
But every formal observation Hofstadter makes about Bach can be made of more contemporary composers who are explicitly formalists. Some of Stockhausen’s descriptions of his compositions are no more than formalist discussions of feed-back, loops, generation, etc. Yet Hofstadter in this book suggests no way of discussing the significant differences between such music and Bach’s. Hofstadter, we come to see, does not have much to say about how musical meaning may arise from musical form, nor about how to approach similar problems of meaning in language or the arts. He has great insight into formal systems and procedures of intelligent reasoning, but he does not succeed in overcoming the limitations of his own formal inquiry, even though the transcendence of merely formal inquiry is one of the main concerns of his book. Hofstadter hopes to demonstrate transcendence by establishing correspondences involving self-reference between Gödel’s proof, the cell, and the mind. He intends to reveal similar structures, to show that what holds in one system may well hold for another, to argue that unpredictable consequences may come into play when formal systems reach a certain level of complexity; Gödel’s string may be produced, life may emerge, and intelligence may be created. But these correspondences are rough and metaphorical, and are more assertive than convincing. In order to show how such systems might reach inventiveness, Hofstadter himself would have to articulate the theory of meaning that he seems to be reaching for, one that goes beyond a formalist notion of “exotic isomorphism.” He would have to move from syntactic, structural, and formal links of varying value and depth to a consideration of the semantics of language and art.
In one of Zeno’s paradoxes it is proven that in a race Achilles could never catch up with the Tortoise if the Tortoise had a head start, since Achilles would first have to cover half the distance between them, then a quarter, and an eighth, and so on ad infinitum. Lewis Carroll’s dialogue “What the Tortoise Said to Achilles,” which Hofstadter reprints and takes as a model, is a brilliant logical analogue to Zeno’s paradox, as the Tortoise shows how Achilles can never “catch up” with the conclusion of a simple syllogism. He takes a proposition Z from Euclid which follows logically from the premises A and B, and challenges Achilles to make him accept the conclusion Z. This means, however, that the Tortoise must accept another statement C, namely: “If A and B are true, then Z must be true.” The Tortoise still does not accept Z, so there is yet another statement D which the Tortoise must accept: “If A, B and C are true, then Z must be true.” And so on and on. As Hofstadter points out, there is a confusion in the paradox: a rule of inference (A and B imply Z) is taken to be a proposition itself. But the question of the infinite regress remains and recalls Wittgenstein’s questioning: why should one be compelled to accept a logical conclusion? Hofstadter would answer that there is always an “inviolable substrate” in the cell, in the mind, in the computer—a formal system which is the limit of infinite regress, the end of all searches for grounding one’s knowledge. I don’t see why we should accept this solution: a logical problem of infinite regress is not solved by reference to “hardware.” I prefer one of Hofstadter’s other formulations: “You can’t go on defending your patterns of reasoning forever. There comes a point where faith takes over.” And so, finally in this book, it does. Hofstadter has faith that AI will succeed in its quest, just as earlier Hilbert believed he would in his. 
For the achievement of AI’s program there may indeed be no obstacle comparable to the one that blocked Hilbert. But there is no clear way of knowing. Hofstadter does not minimize the difficulties, but they seem formidable. The brain itself may be more of a probabilistic system than a formal one, more of a cloud than a clock, to use Karl Popper’s metaphors, and its interaction with the rest of the body would also make it difficult to consider it as a different independent system. Researchers in AI hope they will never have to model the physiology of the brain; they concentrate upon the workings of the mind, attempting to mirror its features. But this project seems just as enormous. The achievement of AI’s project would mean that one would have a program, a finite set of instructions, that would give a literal structure of the human mind, a “string” of statements in which one could read a universal grammar of creativity. Such a project is probably of the same order of difficulty as the search for the secrets of life itself. But in some sense this dream of AI is the archetypal dream of much of contemporary inquiry. It is not to find a “universal language” whose syntax would reveal the truth of the world—Gödel proved such a language to be impossible—but to find a “universal hermeneutics” that would reduce the fullness of the world to an underlying syntax, to basic “structures,” and that would, conversely, be able to read the complexity surrounding us in these basic strings. This is the dream not only of AI but of much contemporary genetics, linguistics, and advanced literary theory, all of which stress the importance of self-reference. One does not need to appeal to Gödel’s theorem to believe as I do in the validity and value of much of this search, without believing the mysteries will ever be solved. 
“In a way,” Hofstadter confides at the beginning, “this book is a statement of my religion.” It clearly sets a transcendent goal which may always lie infinitely far away, but Hofstadter’s enthusiasm, boldness, and intelligence so engage us that if by the end we seem no closer to that goal, we have nevertheless been enriched by where we have been.

1. A translation of Gödel’s paper along with papers of Frege, Russell, Hilbert, and others is included in From Frege to Gödel: A Sourcebook in Mathematical Logic 1879-1931, edited by Jean van Heijenoort (Harvard University Press, 1967), an anthology of important papers in mathematical logic.
Imagining Numbers: (particularly the square root of minus fifteen) by Barry Mazur (2003)

Member reviews:

This is an interesting little book and I thoroughly enjoyed it. It sets out to help the reader understand and, more importantly, visualise imaginary numbers (i.e. the square-root of -1). The author tries to use poetic analogies, and historical perspective, as well as a lot of maths, to achieve this. Mazur does a great job with the maths. The explanations are so clear and the author gently leads the reader forward to interesting insights, and even encourages you to reach for a pen and paper to help with the understanding. He also details the historical story of how the leap was made away from the linear integer line to be able to see how imaginary numbers exist on the complex plane. For me the poetic analogies were the weakest aspect of this book. They did provide a break and a contrast, but it's the lucidity of the maths and gentle progress towards deeper understanding which makes this a good book. I want to read it again.

This book talks about imagination & the history of how complex numbers were imagined & understood by the mathematicians who first came across them. It is readable and well-written. It is maybe a bit

Literally a case of "mathematics for poets." The gentlest of intros to imaginary and complex numbers. It certainly doesn't explain things like raising one complex number to the power of another.

First words: Preface -- 1, 2, 3,... The "counting numbers" are part of us.
[Chapter I -- The Imagination and Square Roots] -- I. Picture this. Picture Rodin's Thinker, crouched in mental effort.

Amazon.com Product Description (ISBN 0374174695, Hardcover):

How the elusive imaginary number was first imagined, and how to imagine it yourself. Imagining Numbers (particularly the square root of minus fifteen) is Barry Mazur's invitation to those who take delight in the imaginative work of reading poetry, but may have no background in math, to make a leap of the imagination in mathematics. Imaginary numbers entered into mathematics in sixteenth-century Italy and were used with immediate success, but nevertheless presented an intriguing challenge to the imagination. It took more than two hundred years for mathematicians to discover a satisfactory way of "imagining" these numbers. With discussions about how we comprehend ideas both in poetry and in mathematics, Mazur reviews some of the writings of the earliest explorers of these elusive figures, such as Rafael Bombelli, an engineer who spent most of his life draining the swamps of Tuscany and who in his spare moments composed his great treatise "L'Algebra". Mazur encourages his readers to share the early bafflement of these Renaissance thinkers. Then he shows us, step by step, how to begin imagining, ourselves, imaginary numbers.
Ellipse Fit - File Exchange - MATLAB Central

[semimajor_axis, semiminor_axis, x0, y0, phi] = ellipse_fit(x, y)

Input:
x - a vector of x measurements
y - a vector of y measurements

Output:
semimajor_axis - magnitude of the ellipse's longer axis
semiminor_axis - magnitude of the ellipse's shorter axis
x0 - x coordinate of the ellipse center
y0 - y coordinate of the ellipse center
phi - angle of rotation in radians with respect to the x-axis

Algorithm used:
Given the quadratic form of an ellipse:

a*x^2 + 2*b*x*y + c*y^2 + 2*d*x + 2*f*y + g = 0        (1)

we need to find the best (in the least-squares sense) parameters a, b, c, d, f, g. To transform this into the usual form in which such estimation problems are presented, divide both sides of equation (1) by a and then move x^2 to the other side. This gives:

2*b'*x*y + c'*y^2 + 2*d'*x + 2*f'*y + g' = -x^2        (2)

where the primed parameters are the original ones divided by a. The usual estimation technique is then used, with the problem presented as M * p = rhs, where

M = [2*x*y  y^2  2*x  2*y  ones(size(x))],
p = [b'; c'; d'; f'; g'],
rhs = -x^2.

We seek the vector p, given by p = pseudoinverse(M) * rhs. From here on, formulas (19)-(24) in Wolfram MathWorld are used to recover the center, semi-axes, and rotation angle from the fitted coefficients.
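As a cross-check of the algorithm described above, here is the same least-squares estimation sketched in Python/NumPy. This is my own translation, not the File Exchange submission itself; the center and semi-axes use the MathWorld-style closed forms, and the rotation-angle convention at the end is an assumption:

```python
import numpy as np

def ellipse_fit(x, y):
    """Fit a*x^2 + 2b*x*y + c*y^2 + 2d*x + 2f*y + g = 0 with a normalized to 1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Least-squares system M * p = rhs, with p = [b, c, d, f, g] (the primed
    # parameters of equation (2) above) and rhs = -x^2.
    M = np.column_stack([2 * x * y, y**2, 2 * x, 2 * y, np.ones_like(x)])
    p, *_ = np.linalg.lstsq(M, -x**2, rcond=None)
    a, (b, c, d, f, g) = 1.0, p
    # Center and semi-axes from the quadratic-form coefficients.
    den = b * b - a * c
    x0 = (c * d - b * f) / den
    y0 = (a * f - b * d) / den
    num = 2 * (a * f * f + c * d * d + g * b * b - 2 * b * d * f - a * c * g)
    s = np.sqrt((a - c) ** 2 + 4 * b * b)
    ax1 = np.sqrt(num / (den * (s - (a + c))))
    ax2 = np.sqrt(num / (den * (-s - (a + c))))
    # Angle of the major axis, modulo pi (assumed convention).
    phi = (0.5 * np.arctan2(2 * b, a - c) + np.pi / 2) % np.pi
    return max(ax1, ax2), min(ax1, ax2), x0, y0, phi

# Points on an axis-aligned ellipse with semi-axes 3 and 1, centered at (1, -2).
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
print(ellipse_fit(1 + 3 * np.cos(t), -2 + np.sin(t)))
```

For points sampled exactly from a known ellipse, the fit recovers the center (1, -2) and semi-axes 3 and 1 to numerical precision.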
Homework Help

Posted by serin on Sunday, August 26, 2012 at 11:14am.

analyze the graph of the function. what is the domain of f(x)?
f(x)=x^2+x-72/x+7
{x|x is not equal to -7}
there are other answers, but i think its this one, am i correct?

• math - Reiny, Sunday, August 26, 2012 at 11:32am
correct if your denominator is x+7
(the way you typed it, the denominator is x)
I would write the answer as {x|x ∈ R, x ≠ -7}

• math - serin, Sunday, August 26, 2012 at 11:35am
reiny the way you typed it is not an option and yes x+7 is the denominator

• math - serin, Sunday, August 26, 2012 at 11:36am
{x|x ≠ 0 and x ≠ -7} is the closest to it

• math - Reiny, Sunday, August 26, 2012 at 11:54am
Too bad my answer is not an option. In the domain it should be stated that x can be any real number except -7. There are different ways to say this; my statement is just one of those ways. To say {x|x is not equal to -7} does not tell the whole story and in my opinion is insufficient.

• math - Steve, Sunday, August 26, 2012 at 3:12pm
While technically you are correct, Reiny, I'd have to take the answer in the likely context. This is obviously Algebra II or some such, and we are likely dealing only with real, or maybe complex, numbers. So, restricting our "domain", as it were, to that area, x not equal to -7 pretty much sums it up. If the problem was multiple choice, then I'd have picked the answer serin gave. Otherwise, maybe "x real and ≠ -7" or "{x|x real, x ≠ -7}" would have provided the required details.
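For what it's worth, the thread's conclusion is easy to verify with a few lines of plain Python (my own illustration): the denominator of the intended f(x) = (x^2+x-72)/(x+7) vanishes only at x = -7, and the numerator does not vanish there, so nothing cancels and the domain is all reals except -7.

```python
# f(x) = (x^2 + x - 72) / (x + 7): the domain excludes exactly the zeros of
# the denominator that do not cancel against the numerator.
def numerator(t):
    return t * t + t - 72

def denominator(t):
    return t + 7

# Quadratic formula for the numerator: discriminant 1 + 288 = 289 = 17^2,
# so the roots are t = (-1 +/- 17) / 2, i.e. 8 and -9.
roots = [(-1 + 17) / 2, (-1 - 17) / 2]

print(denominator(-7))                # 0: x = -7 is outside the domain
print(numerator(-7))                  # -30: nonzero, so no cancellation at -7
print([numerator(r) for r in roots])  # [0.0, 0.0]
```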
math - using the 7 steps outlined in sec 4.3 of your book,analyze the graph of ... college algebra !!! - analyze the graph of the following function. r(x)= x^2+x-6... College Algebra - 1.Answer the following for the given quadratic function. f(x... algebra - 3. write the function whose graph is the graph of y=sqrt of x, but is ... pre-calculus - analyze the graph of the function r(x)= 13x + 13 ________ 5x + 15... Math - flickr.(dotcom)/photos/77304025@N07/6782789880/ Create a function that ...
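For reference, the algebra behind the accepted answer (the factorization below is mine, not part of the thread): the numerator factors, but shares no factor with the denominator,

```latex
f(x) = \frac{x^2 + x - 72}{x + 7} = \frac{(x+9)(x-8)}{x+7},
```

so the only excluded point is where the denominator vanishes, giving the domain $\{x \in \mathbb{R} : x \neq -7\}$.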
Meeting Details

For more information about this meeting, contact Victor Nistor, Jinchao Xu, Stephanie Zerby, Xiantao Li, Yuxi Zheng, Kris Jenssen, Hope Shaffer.

Title: Optimal mixing and optimal stirring for fixed energy, fixed power or fixed palinstrophy flows
Seminar: Computational and Applied Mathematics Colloquium
Speaker: Evelyn Lunasin, U. Michigan Mathematics

We consider passive scalar mixing by a prescribed divergence-free velocity vector field in a periodic box and address the following question: Starting from a given initial inhomogeneous distribution of passive tracers, and given a certain energy budget, power budget or finite palinstrophy budget, what incompressible flow field best mixes the scalar quantity? We focus on the optimal stirring strategy recently proposed by Lin, Thiffeault and Doering (2011) that determines the flow field that instantaneously maximizes the depletion of the $H^{-1}$ mix-norm. We present an explicit example demonstrating finite-time perfect mixing with a finite energy constraint on the stirring flow. On the other hand, we establish that the $H^{-1}$ mix-norm decays at most exponentially in time if the two-dimensional incompressible flow is constrained to have constant palinstrophy. Finite-time perfect mixing is thus ruled out when too much cost is incurred by small-scale structures in the stirring. We conjecture an exponential lower bound on the $H^{-1}$ mix-norm in the case of a finite power constraint and discuss some related problems from other areas of analysis that are similarly suggestive of this conjecture.

Room Reservation Information
Room Number: MB106
Date: 09/07/2012
Time: 03:35pm - 04:25pm
Posts about regression on Daniel J. Hocking

Blog Archives

Here is improved code for calculating QIC from geeglm in geepack in R (original post). Let me know how it works. I haven't tested it much, but it seems that QIC may select overparameterized models. In the code below, I had to replace <- with = because wordpress didn't accept <- within code or pre tags. It should still work just fine.

Here is a quick example of how to run this function. First, highlight and run the code below in R. That will save the function in your workspace. Then run your gee model using geeglm in geepack (package available from CRAN). Next, run QIC(your_gee_model) and you get the QIC. You can then repeat this with alternative a priori models. Below the function is an example using data available as part of geepack.

[UPDATE: IMPROVED CODE AND EXTENSIONS ARE NOW AVAILABLE ON https://github.com/djhocking/qicpack INCLUDING AS AN R PACKAGE]

# QIC for GEE models
# Daniel J. Hocking
library(MASS)  # provides ginv()

QIC = function(model.R) {
  model.indep = update(model.R, corstr = "independence")

  # Quasilikelihood
  mu.R = model.R$fitted.values
  y = model.R$y
  type = family(model.R)$family
  quasi.R = switch(type,
    poisson = sum((y*log(mu.R)) - mu.R),
    gaussian = sum(((y - mu.R)^2)/-2),
    binomial = sum(y*log(mu.R/(1 - mu.R)) + log(1 - mu.R)),
    Gamma = sum(-y/mu.R - log(mu.R)),
    stop("Error: distribution not recognized"))

  # Trace term (penalty for model complexity)
  omegaI = ginv(model.indep$geese$vbeta.naiv)  # Omega-hat(I) via Moore-Penrose generalized inverse in the MASS package
  #AIinverse = solve(model.indep$geese$vbeta.naiv)  # solve via identity
  Vr = model.R$geese$vbeta
  trace.R = sum(diag(omegaI %*% Vr))
  px = length(mu.R)  # number non-redundant columns in design matrix

  # QIC
  QIC = 2*(trace.R - quasi.R)  # [EDIT: original post was missing '*']
  #QICu = (-2)*quasi.R + 2*px  # Approximation assuming model structured correctly

  output = c(QIC, quasi.R, trace.R, px)
  names(output) = c('QIC', 'Quasi Lik', 'Trace', 'px')
  output
}

Here's an example you can run in R.

library(geepack)
library(MASS)
data(dietox)
dietox$Cu = as.factor(dietox$Cu)
mf = formula(Weight ~ Cu * (Time + I(Time^2) + I(Time^3)))
gee1 = geeglm(mf, data = dietox, id = Pig, family = poisson, corstr = "ar1")
mf2 = formula(Weight ~ Cu * Time + I(Time^2) + I(Time^3))
gee2 = geeglm(mf2, data = dietox, id = Pig, family = poisson, corstr = "ar1")
anova(gee1, gee2)
mf3 = formula(Weight ~ Cu + Time + I(Time^2))
gee3 = geeglm(mf3, data = dietox, id = Pig, family = poisson, corstr = "ar1")
gee3.I = update(gee3, corstr = "independence")
gee3.Ex = update(gee3, corstr = "exchangeable")
sapply(list(gee1, gee2, gee3, gee3.I, gee3.Ex), QIC)

The output of this suggests that model gee1 is the best model. I have some concerns that QIC will almost inevitably choose the most complex model. More testing with simulated data will be needed.

               [,1]      [,2]      [,3]      [,4]      [,5]
QIC       -333199.7 -333188.0 -333187.5 -333181.8 -333153.6
Quasi Lik  166623.0  166622.7  166620.4  166622.2  166615.4
Trace          23.2      28.7      26.6      31.3      38.6
px           861.0     861.0     861.0     861.0     861.0

You will get warnings when running this model because it uses a Poisson distribution for continuous data. I will work on finding a better example in the future before I make this available as an R package.

I recently gave a talk at the Ecological Society of America (ESA) annual meeting in Portland, OR and a poster presentation at the World Congress of Herpetology meeting in Vancouver, BC, Canada. Both presentations compared generalized linear mixed models (GLMM) and generalized estimating equations (GEE) for analyzing repeated count data. I advocate for using GEE over the more common GLMM to analyze longitudinal count (or binomial) data when the specific subjects (sites as random effects) are not of special interest. The overall confidence intervals are much smaller in the GEE models and the coefficient estimates are averaged over all subjects (sites).
This means the interpretation of coefficients is the log change in Y for each 1-unit change in X on average (averaged across subjects). Below you can see my two presentations for more details.

I am comparing estimates from subject-specific GLMMs and population-average GEE models as part of a publication I am working on. As part of this, I want to visualize predictions of each type of model including 95% confidence bands. First I had to make a new data set for prediction. I could have compared fitted values with confidence intervals, but I am specifically interested in comparing predictions for particular variables while holding others constant. For example, soil temperature is especially important for salamanders, so I am interested in the predicted effects of soil temperature from the different models. I used the expand.grid and model.matrix functions in R to generate a new data set where soil temperature varied from 0 to 30 C. The other variables were held constant at their mean levels during the study. Because of the nature of the contrast argument in the model.matrix function, I had to include more than one level of the factor "season". I then removed all seasons except spring. In effect I am asking: what is the effect of soil temperature on salamander activity during the spring when the other conditions are constant (e.g. windspeed = 1.0 m/s, rain in the past 24 hours at its mean)?

This code is based on code from Ben Bolker via http://glmm.wikidot.com

# Compare Effects of SoilT with 95% CIs
newdat.soil <- expand.grid(
  SoilT = seq(0, 30, 1),
  RainAmt24 = mean(RainAmt24),
  RH = mean(RH),
  windspeed = mean(windspeed),
  season = c("spring", "summer", "fall"),
  droughtdays = mean(droughtdays),
  count = 0
)
newdat.soil$SoilT2 <- newdat.soil$SoilT^2

# Spring
newdat.soil.spring <- newdat.soil[newdat.soil$season == 'spring', ]

mm = model.matrix(terms(glmm1), newdat.soil)

Next I calculated the 95% confidence intervals for both the GLMM and GEE models.
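Both sets of intervals rest on the same quantity: the prediction variance diag(X V X^T), where X is the design matrix (mm above) and V is the estimated coefficient covariance matrix. A minimal pure-Python sketch of that computation, using toy numbers of my own rather than the salamander data:

```python
def pred_var(X, V):
    """Return diag(X V X^T): the variance of each row's linear
    prediction x_i . beta, given coefficient covariance matrix V."""
    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]
    XV = matmul(X, V)
    # Diagonal of (X V) X^T: row i of XV dotted with row i of X.
    return [sum(a * b for a, b in zip(XV[i], X[i])) for i in range(len(X))]

# Toy two-column design matrix (intercept + one predictor)
# and a toy coefficient covariance matrix.
X = [[1.0, 0.0],
     [1.0, 1.0]]
V = [[0.25, 0.00],
     [0.00, 0.04]]
print(pred_var(X, V))  # approximately [0.25, 0.29]
```

Approximate 95% bands then follow as prediction ± 2*sqrt(variance), just as in the R code.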
For the GLMM, plo and phi are the low and high confidence intervals for the fixed effects assuming zero effect of the random sites; tlo and thi account for the uncertainty in the random effects.

# GLMM predictions and intervals
newdat.soil$count = mm %*% fixef(glmm1)
pvar1 <- diag(mm %*% tcrossprod(vcov(glmm1), mm))
tvar1 <- pvar1 + VarCorr(glmm1)$plot[1]
newdat.soil <- data.frame(
  newdat.soil
  , plo = newdat.soil$count - 2*sqrt(pvar1)
  , phi = newdat.soil$count + 2*sqrt(pvar1)
  , tlo = newdat.soil$count - 2*sqrt(tvar1)
  , thi = newdat.soil$count + 2*sqrt(tvar1)
)

# GEE predictions and intervals
mm.geeEX = model.matrix(terms(geeEX), newdat.soil)
newdat.soil$count.gee = mm.geeEX %*% coef(geeEX)
tvar1.gee <- diag(mm.geeEX %*% tcrossprod(geeEX$geese$vbeta, mm.geeEX))
newdat.soil <- data.frame(
  newdat.soil
  , tlo.gee = newdat.soil$count.gee - 2*sqrt(tvar1.gee)
  , thi.gee = newdat.soil$count.gee + 2*sqrt(tvar1.gee)
)

The standard errors of the fixed effects are larger in the GEE model than in the GLMM, but when the variation associated with the random effects is accounted for, the uncertainty (95% CI) around the estimates is greater in the GLMM. This is especially evident when the estimated values are large, since the random effects are exponential on the original scale. This can be seen in the plots below.

Although this plot does the job, it isn't an efficient use of space, nor is it easy to compare exactly where the different lines fall. It would be nice to plot everything on one set of axes. The only trouble is that all the lines could be difficult to see just using solid and dashed/dotted lines. To help with this, I combined the plots but added color and shading using the polygon function.
The code and plot are below.

plot(newdat.soil.spring$SoilT, exp(newdat.soil.spring$count.gee),
     xlab = "Soil temperature (C)",
     ylab = "Predicted salamander observations",
     type = 'l', ylim = c(0, 25))
polygon(c(newdat.soil.spring$SoilT
          , rev(newdat.soil.spring$SoilT))
        , c(exp(newdat.soil.spring$thi.gee)
          , rev(exp(newdat.soil.spring$tlo.gee)))
        , col = 'grey'
        , border = NA)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$thi.gee), type = 'l', lty = 2)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$tlo.gee), type = 'l', lty = 2)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$count.gee), type = 'l', lty = 1, col = 2)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$count), col = 1)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$thi), type = 'l', lty = 2)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$tlo), type = 'l', lty = 2)

Now you can directly compare the results of the GLMM and GEE models. The predicted values (population-averaged) for the GEE are represented by the red line, while the average (random effects = 0, just fixed effects) from the GLMM is represented by the solid black line. The dashed lines represent the 95% confidence intervals for the GLMM and the shaded area is the 95% confidence envelope for the GEE model. As you can see, the GEE has much higher confidence in its prediction of soil temperature effects on salamander surface activity than the GLMM. This would not be apparent without visualizing the predictions with confidence intervals, because the standard errors of the fixed effects were lower in the GLMM than in the GEE. This is because the SEs in the GEE include the site-level (random effect) variation, while the GLMM SEs of the covariates do not include this variation and are interpreted as the effect of a change of 1 X on Y at a given site.

When conducting any statistical analysis it is important to evaluate how well the model fits the data and whether the data meet the assumptions of the model.
There are numerous ways to do this and a variety of statistical tests to evaluate deviations from model assumptions. However, there is little general acceptance of any of the statistical tests. Generally statisticians (which I am not, but I do my best impression) examine various diagnostic plots after running their regression models. There are a number of good sources of information on how to do this. My recommendation is Fox and Weisberg's An R Companion to Applied Regression, along with Applied Regression Analysis and Generalized Linear Models. The point of this post isn't to go over the details or theory but rather to discuss one of the challenges that I and others have had with interpreting these diagnostic plots.

Without going into the differences between standardized, studentized, Pearson's and other residuals, I will say that most of the model validation centers around the residuals (essentially the distance of the data points from the fitted regression line). Here is an example from Zuur and colleagues' excellent book, Mixed Effects Models and Extensions in Ecology with R.

So these residuals appear to exhibit homogeneity, normality, and independence. Those are pretty clear, although I'm not sure if the variation in residuals associated with the predictor (independent) variable Month is a problem. This might be a problem with heterogeneity. Most books just show a few examples like this and then residuals with clear patterning, most often increasing residual values with increasing fitted values (i.e. large values in the response/dependent variable result in greater variation, which is often corrected with a log transformation). A good example of this can be seen in (d) below in fitted vs. residuals plots (like the top left plot in the figure above). These are the types of idealized examples usually shown. I think it's important to show these perfect examples of problems, but I wish I could get expert opinions on more subtle, realistic examples.
These figures are often challenging to interpret because the density of points also changes along the x-axis. I don't have a good example of this but will add one in when I get one. Instead I will show some diagnostic plots that I've generated as part of a recent attempt to fit a Generalized Linear Mixed Model (GLMM) to problematic count data.

The assumption of normality (upper left) is probably sufficient. However, the plot of the fitted vs. residuals (upper right) seems to have more variation at mid-level values compared with the low or high fitted values. Is this pattern enough to be problematic and suggest a poor model fit? Is it driven by greater numbers of points at mid-level fitted values? I'm not sure. The diagonal dense line of points is generated by the large number of zeros in the dataset. My model does seem to have some problem fitting the zeros.

I have two random effects in my GLMM. The residuals across plots (5 independent sites/subjects on which the data was repeatedly measured: salamanders were counted on the same 5 plots repeatedly over 4 years) don't show any pattern. However, there is heterogeneity in residuals among years (bottom right). This isn't surprising given that I collected much more data over a greater range of conditions in some years. This is a problem for the model and this variation will need to be modeled better. So I refit the model and came up with these plots (different plots for further discussion rather than direct comparison):

Here you can see considerable variation from normality for the overall model (upper left) but okay normality within plots (lower right). The upper right plot is an okay example of what I was talking about with changes in density making interpretation difficult. There are far more points at lower values and a sparsity of points at very high fitted values. The eye is often pulled in the direction of the few points on the right, creating difficulty in interpretation.
To help with this I like to add a loess smoother or smoothing spline (solid line) and a horizontal line at zero (broken line). The smoothing line should be approximately straight and horizontal around zero. Basically it should overlay the horizontal zero line. Here's the code to do it in R for a fitted linear mixed model (lme1):

plot(fitted(lme1), residuals(lme1), xlab = "Fitted Values", ylab = "Residuals")
abline(h = 0, lty = 2)
lines(smooth.spline(fitted(lme1), residuals(lme1)))

This also helps determine if the points are symmetrical around zero. I often also find it useful to plot the absolute value of the residuals against the fitted values. This helps visualize if there is a trend in direction (bias). It can also help to better see changes in spread of the residuals indicating heterogeneity. The bias can be detected with a sloping loess or smoothing spline. In the lower left plot, you can see little evidence of bias but some evidence of heterogeneity (change in spread of points). Again, I am not sure if this is bad enough to invalidate the model, but in combination with the deviation from normality I would reject the fit of this model.

In a mixed model it can be important to look at variation across the values of the random effects. In my case here is an example of fitted vs. residuals for each of the plots (random sites/subjects). I used the following code, which takes advantage of the lattice package in R:

# Check for residual pattern within groups and differences between groups
library(lattice)
xyplot(residuals(glmm1) ~ fitted(glmm1) | Count$plot,
       main = "glmm1 - full model by plot",
       panel = function(x, y){
         panel.xyplot(x, y)
         panel.loess(x, y, span = 0.75)
         panel.lmline(x, y, lty = 2)  # Least squares broken line
       })

And here is another way to visualize a mixed model: You can see that the variation in the two random effects (Plot and Year) is much better in this model, but there are problems with normality and potentially heterogeneity.
Since violations of normality are of less concern than the other assumptions, I wonder if this model is completely invalid or if I could make some inference from it. I don't know and would welcome expert opinion. Regardless, this model was fit using a Poisson GLMM, and the deviance divided by the residual degrees of freedom (df) was 5.13, which is much greater than 1, indicating overdispersion. Therefore, I tried to fit the regression using a negative binomial distribution:

# Using glmmPQL via the MASS package.
# It is recommended to run the model first as a non-mixed model
# to get a starting value for the theta estimate:
library(MASS)
glmNB1 <- glm.nb(count ~ cday + cday2 + cSoilT + cSoilT2 + cRainAmt24 +
                   cRainAmt242 + RHpct + soak24 + windspeed,
                 data = Count, na.action = na.omit)

# Now run the full GLMM with the initial theta starting point from the glm
glmmPQLnb1 <- glmmPQL(count ~ cday + cday2 + cSoilT + cSoilT2 + cRainAmt24 +
                        cRainAmt242 + RHpct + soak24 + windspeed,
                      random = list(~1 | plot, ~1 | year), data = Count,
                      family = negative.binomial(theta = 1.480, link = log),
                      na.action = na.exclude)

Unfortunately, I got the following validation plots:

Clearly, this model doesn't work for the data. That is quite surprising given the fit of the Poisson, and given that the negative binomial is a more general distribution than the Poisson and usually handles overdispersed count data well.
Next I tried to run the model as if all observations were random:

library(lme4)
glmmObs1 <- lmer(count ~ cday + cday2 + cSoilT + cSoilT2 + cRainAmt24 +
                   cRainAmt242 + RHpct + soak24 + windspeed +
                   (1 | plot) + (1 | year) + (1 | obs),
                 data = Count, family = poisson)

Again I end up with more problematic validation/diagnostic plots.

So that's about it for now. Hopefully this post helps some people with model validation and interpretation of fitted vs. residual plots. I would love to hear opinions regarding interpretation of residuals and when some pattern is too much and when it is acceptable. Let me know if you have examples of other more subtle residual plots. Happy coding and may all your analyses run smoothly and provide clear interpretations!
OpenStudy question: FAN AND MEDAL WILL BE REWARDED <3

• macknojia: Need any help?
• Asker: Yes please ...
• Asker: You don't necessarily have to do it for me, just please explain to me how to get the answer.
• macknojia: In question 1 the plant is already 2 cm high, so the graph with an initial value of 2 is your answer.
• Asker: Initial value meaning slope? :o
• Asker: And they are both the same question, I just numbered the pages because not all of the graphs would fit on one page.
• Asker: @macknojia how about this question?
• macknojia: Initial value is where your x = 0; look at the graph in question 1: it starts from 2 (the y value). Slope is the rate at which your graph is increasing.
• macknojia: It's graph a. Look at the points the graph is talking about: (1, 2) and (3, 6).
• Asker: That makes sense!! Thank you!!
• Asker: This one??
• macknojia: Basically find the slope in every condition. In the first case take 2 points and use the slope formula. In the second case use the equation of a line to find m (the slope), and in the third case they tell you the rate at which the function changes.
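The slope hinted at in the thread, through the points (1, 2) and (3, 6), comes straight from the slope formula m = (y2 - y1)/(x2 - x1). A quick Python check (illustrative only, not part of the thread):

```python
def slope(p1, p2):
    """Slope of the line through two (x, y) points."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope undefined")
    return (y2 - y1) / (x2 - x1)

print(slope((1, 2), (3, 6)))  # 2.0
```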
multidimensional rotation terminology

Given an element $g$ of the orthogonal group $O(n)$, is there a name for the subspace of $R^n$ that's fixed by $g$, and a name for the orthogonal complement of this space? (The latter is what I really want to know. I'm guessing that the former is typically called the fixed subspace, but the latter subspace is more arcane. I'm inclined to call it the equatorial subspace, but I'd rather not give it a name if it already has one, or use the term "equatorial subspace" if it already means something else.)

Tags: linear-algebra, classical-groups

Comment (Tom Goodwillie, Nov 7 '12 at 4:42): If you do call the latter the "equatorial subspace", clearly you should call the former the "polar subspace".
Summer School

Very Long Baseline Interferometry and the VLBA

Proceedings from the 1993 NRAO Summer School, NRAO Workshop No. 22, held in Socorro, New Mexico, 23-30 June 1993
J. A. Zensus, P. J. Diamond, and P. J. Napier (Eds.)
ASP Conference Series, Vol. 82, 1995

This book is intended to serve two primary purposes: to provide an introduction and reference for the basic hardware and software concepts relevant to very long baseline interferometry (VLBI), and to describe the fundamental design properties, observing capabilities, and analysis approaches of the VLBA. The book should be useful for both experienced VLBI practitioners and novices to the field. We recommend two outstanding complementary texts: Interferometry and Synthesis in Radio Astronomy by A. R. Thompson, J. Moran, and G. W. Swenson, Jr., and Synthesis Imaging in Radio Astronomy, edited by R. A. Perley, F. R. Schwab, and A. H. Bridle (ASP Conference Series, Vol. 6).

Order Information

The proceedings are published (August 1995) and distributed by the Astronomical Society of the Pacific, Copyright © 1995. Orders can be placed by phone or fax (with credit card) directly to the ASP (Phone: 1-800-962-3412 or 1-415-337-2126; fax: 1-415-337-5205). The cost is $36 for members of the ASP or participants of the summer school, and $40 for non-members (plus shipping/tax). The NRAO is not selling or distributing copies of the book.

Conference photo

Below are links to preprints of the individual chapters in this book (in Postscript format). All material is Copyright © 1995, Astronomical Society of the Pacific. Please contact the authors for permission to reproduce in any way.

Part 1: Basic Theory

Interferometry and Coherence Theory
B. Clark bclark@nrao.edu [PS:0.5 Mb]
A brief overview of the fundamental principles underlying radio interferometry is presented using the terminology of modern optics.
This includes the fundamental equations of aperture synthesis, a definition of terms for interferometers, and a brief discussion of the problem of calibrating interferometer phase.

Correlator Theory
J. Romney jromney@nrao.edu [PS:0.6 Mb]
This lecture presents the general concepts of correlation and spatial coherence functions, as they apply to VLBI and VLBI correlator implementations. A uniform treatment, based in the spectral domain, is applied to both the conventional lag and the spectral or FX correlator architectures. Parallel presentations of both architectures are given, including their strengths and weaknesses, although more emphasis is placed on the FX correlator because it is less familiar.

Imaging Concepts
T. Cornwell tcornwel@nrao.edu [PS:5.7 Mb]
The three principal areas of imaging are discussed: Fourier-inversion, deconvolution, and self-calibration. The emphasis is on the major concepts in these areas. Other chapters in these proceedings cover the practical details of how one drives a particular imaging algorithm.

Part 2: The Very Long Baseline Array

VLBA Design: Goals and Implementation
P. Napier pnapier@nrao.edu [PS:0.9 Mb]
The Very Long Baseline Array (VLBA) is a highly flexible radio telescope designed for research in high resolution astronomy and geodesy. It is intended to make the technique of Very Long Baseline Interferometry (VLBI) available to observers who are not experts in radio interferometry. The top-level design goals for the Very Long Baseline Array are discussed and details of the design and performance of the antennas are presented.

The VLBA Receiving System: Antenna to Data Formatter
R. Thompson athompson@nrao.edu [PS:0.9 Mb]
Details of the design and performance are given for that part of the VLBA electronics system located between the output of the feeds and the input to the formatter.
This includes low-noise receivers, hydrogen maser and local oscillators, IF and baseband frequency converters, digital samplers and the calibration system.

VLBA Data Flow: Formatter to Tape
A. Rogers arogers@wells.haystack.edu [PS:1.1 Mb]
This chapter presents details of the design and performance of the formatter unit, which prepares digital data samples for recording, and of the tape recorders used to record the data collected at each VLBA site.

The VLBA Correlator
J. Benson jbenson@nrao.edu [PS:0.3 Mb]
This chapter presents details of the design and the data processing capabilities of the VLBA correlator.

What the VLBA Can Do For You
C. Walker cwalker@nrao.edu [PS:1.7 Mb]
This chapter gives a basic description of what can be done with the VLBA. First the types of observations for which the array was built are reviewed. The special characteristics of the array that facilitate such observations are described. Then the major factors that determine whether a particular experiment can be done are discussed in some detail. These factors are the resolution, uv coverage, sensitivity, and image quality.

Part 3: VLBI Data Analysis

Calibration Techniques for VLBI
J. Moran moran@cfa.harvard.edu and V. Dhawan vdhawan@nrao.edu [PS:1.2 Mb]
This chapter discusses the various techniques used to calibrate the amplitude and phase of VLBI data. The effect of residual calibration errors on image quality is also considered.

Spectral Line VLBI
M. Reid reid@cfa.harvard.edu [PS:0.8 Mb]
The special problems encountered when performing spectral-line VLBI observations are discussed in this chapter. In particular, the usefulness of Fourier transforming measured visibility data with respect to time, delay or frequency for purposes of fringe detection and calibration is clarified.

Fringe Fitting
W. Cotton bcotton@nrao.edu [PS:0.8 Mb]
The effects of errors in the model used by a VLBI correlator and ways to calibrate these errors are discussed.
First, a relatively theoretical discussion of the causes, effects, and methods of correcting these correlator model errors is given. Later sections provide practical demonstrations of the effects and techniques for dealing with them.

Practical Data Analysis
P. Diamond pdiamond@nrao.edu [PS:1.1 Mb]
Practical guidelines to data processing in AIPS are given.

Practical VLBI Imaging
C. Walker cwalker@nrao.edu [PS:2.2 Mb]
Practical aspects of VLBI imaging are discussed, with emphasis on the choices of parameters. Particular attention is paid to the interaction between the parameter choices and both the image appearance and the fit of the model (image) to the data. The process is illustrated using an image made from 13 cm data from a VLBA geodesy experiment.

Non-Imaging Data Analysis
T. Pearson tjp@astro.caltech.edu [PS:1.5 Mb]
In many types of observation it is impossible or inappropriate to make an image from the visibility data. This chapter addresses ways of interpreting visibility data directly, with an emphasis on model-fitting techniques.

Part 4: Advanced VLBI Topics

Polarization VLBI
W. Cotton bcotton@nrao.edu [PS:0.8 Mb]
Theoretical and practical aspects of imaging VLBI polarization data are discussed, with emphasis on linear polarization measurements.

Multi-Frequency Synthesis
J. Conway jconway@nrao.edu and R. Sault rsault@nrao.edu [PS:0.8 Mb]
Theory and practice of the technique of multi-frequency synthesis (MFS) are described, with emphasis on its application to VLBA observations.

VLBI Phase Referencing
A. Beasley tbeasley@nrao.edu and J. Conway jconway@nrao.edu [PS:0.4 Mb]
Phase-referencing allows phase calibration of VLBI data. Using regular observations of a nearby calibrator source, delay, delay-rate, and phase corrections can be derived and their effects removed from the target source visibility data. This technique allows imaging much weaker objects than is possible via fringe-fitting methods.
We examine the main types of phase error in VLBI observations (geometrical, instrumental, atmospheric, and ionospheric errors), and we demonstrate how phase-referencing reduces or removes them. Maximum switching times and angles, and the image dynamic ranges and sensitivities achievable using phase-referencing are discussed. Geodetic Measurements with VLBI D. Shaffer dshaffer@bootes.gsfc.nasa.gov [PS:0.7 Mb] The intent of this chapter is to familiarize VLBI observers with some of the capabilities of geodetic VLBI as well as provide a brief introduction to planning and analyzing a geodetic experiment. I have assumed that in many cases the observer will be more interested in getting results than in acquiring a deep understanding of the software used and the science behind it. Somebody who is interested in becoming a geodesist, with all the understanding that implies, should attach himself/herself to one of the extant geodetic VLBI groups. E. Fomalont efomalon@nrao.edu [PS:11.7 Mb] The special VLBI techniques used to obtain accurate absolute or relative positions of radio sources are discussed. Part 5: Other Networks and Global VLBI Global VLBI Networks R. Schilizzi rts@nfra.nl [PS:2.4 Mb] Current and future characteristics of the European VLBI Network (EVN) are described, and a brief description of the Asia-Pacific Telescope (APT) is given. Part 6: Planning a VLBI Observation Practical Experiment Preparation J. Wrobel jwrobel@nrao.edu [PS:0.2 Mb] A step-by-step guide to planning a VLBI imaging project is presented, with emphasis on practical aspects. azensus@nrao.edu 15 June 1995
Inferno's BLOOMFILTER(2)

Bloomfilter - Bloom filters

	include "sets.m";
	include "bloomfilter.m";
	bloomfilter := load Bloomfilter Bloomfilter->PATH;

	init: fn();
	filter: fn(d: array of byte, logm, k: int): Sets->Set;

A Bloom filter is a method of representing a set to support probabilistic membership queries. It uses independent hash functions of members of the set to set elements of a bit-vector.

Init should be called first to initialise the module.

Filter returns a Set s representing the Bloom filter for the single-member set {d}. K independent hash functions are used, each of range [0, 2^logm), to return a Bloom filter 2^logm bits wide. It is an error if logm is less than 3 or greater than 30.

Bloom filters can be combined by set union. The set represented by Bloom filter a is not a subset of another b if there are any members in a that are not in b.

Together, logm, k, and n (the number of members in the set) determine the false positive rate (the probability that a membership test will fail to rule out an element that is not in fact in the set). The probability of a false positive is approximately (1 - e^(-kn/2^logm))^k. For a given false positive rate, f, a useful formula to determine appropriate parameters is: k = ceil(-log₂(f)) and logm = ceil(log₂(nk)).

Create a 128 bit-wide Bloom filter f representing all the elements in the string array elems, with k=6:

	A, B, None: import Sets;
	for(i := 0; i < len elems; i++)
		f = f.X(A|B, filter(array of byte elems[i], 7, 6));

Test whether the string s is a member of f. If there were 12 elements in elems, the probability of a false positive would be approximately 0.0063.

	if(filter(array of byte s, 7, 6).X(A&~B, f).eq(None))
		sys->print("'%s' might be a member of f\n", s);
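The two formulas above are easy to experiment with outside Inferno. A small Python sketch (the function names here are mine, not part of the Bloomfilter module):

```python
import math

def false_positive_rate(logm, k, n):
    """Approximate false-positive probability (1 - e^(-k*n/m))^k
    for an m = 2**logm bit filter holding n members with k hashes."""
    m = 2 ** logm
    return (1.0 - math.exp(-k * n / m)) ** k

def parameters_for(f, n):
    """Choose k and logm for a target rate f and n members,
    via k = ceil(-log2 f) and logm = ceil(log2(n*k)) as above."""
    k = math.ceil(-math.log2(f))
    logm = math.ceil(math.log2(n * k))
    return logm, k

# The manual's example: logm = 7 (128 bits), k = 6, 12 members.
print(round(false_positive_rate(7, 6, 12), 4))  # 0.0063, as stated above
```

For example, a target rate of 0.01 with 100 members gives k = 7 and logm = 10, i.e. a 1024-bit filter.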
Algebra Tutors Neenah, WI 54956

A Mom Who is Good at Math :))
...My three children are grown now - one is a musician, one is an engineer and one is a nursing student. I like working with kids and seeing them discover that they really can do Geometry, Trigonometry, Calculus, Finite, Probability, etc. AND they can...
Offering 10+ subjects including algebra 1 and algebra 2
Amount of aluminum in vaccines - detailed list

post #1 of 77, 12/23/07 at 1:34pm, Thread Starter

I keep seeing posts stating that Pediarix has a lot less aluminum than its separate parts. Actually, the amount in Pediarix is only 25 mcg less than the sum of its parts. If you add up the separate vaccines that go into Pediarix, you can see that Pediarix only has 25 mcg less than its separate vaccines. Here is the list out of Dr Sears' book. The only one that he mentions that actually has a lot LESS than the sum of its parts is Comvax. I hope this clears up some misinformation that has been posted. If anyone sees anything incorrect that I have posted, please feel free to correct it.

"ActHib - 0 mcg
PedVaxHib - 225 mcg
Prevnar - 125 mcg
Daptacel (DTaP) - 330 mcg
Tripedia (DTaP) - 170 mcg
Infanrix (DTaP) - 625 mcg
Recombivax (HepB) - 250 mcg
Engerix B (HepB) - 250 mcg
Polio - 0 mcg
MMR - 0 mcg
Chic Pox - 0 mcg
Hep A - 250 mcg

Combo Vaccines:
Comvax (hep B as Recombivax and HIB as PedVaxHIB) - 225 mcg. This particular combo vax has LESS aluminum than getting the shots separately.
Pentacel (DTaP as Daptacel, HIB as ActHIB and polio) - 1500 mcg. This shot has a lot MORE aluminum than the sum of its parts. You actually get 5 times the amount of aluminum than if you were to get the shots separately.
Pediarix (DTaP as Infanrix, hepB as EngerixB and polio) - 850 mcg
TriHIBit (DTaP as Tripedia and HIB as ActHIB) - 170 mcg"

It is amazing how much they can vary for the same vaccine. And sad that it is not regulated more to minimize effects on children.

I just read the new Mothering article about Aluminum and immediately wanted to look up the mcg for the one vaccine Milena got before we stopped (tripedia). Of the 3 DTaPs, it appears to have the least amount of aluminum. Which I guess is reassuring?? ARGH. I'm SO glad we stopped vaxxing when we did. I am shocked at how many parents say "oh the thimerosal is out now, so they're fine." which is true until they find the NEXT thing.
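The combo-versus-separate arithmetic in the list is easy to check. A quick script using only the microgram figures quoted above (the dictionary layout is mine):

```python
# Aluminum content in micrograms per dose, from the list above.
single = {"Infanrix": 625, "EngerixB": 250, "polio": 0,
          "Daptacel": 330, "ActHib": 0, "Tripedia": 170,
          "Recombivax": 250, "PedVaxHib": 225}

# Each combo shot: (its own aluminum content, its component vaccines).
combos = {"Pediarix": (850, ["Infanrix", "EngerixB", "polio"]),
          "Pentacel": (1500, ["Daptacel", "ActHib", "polio"]),
          "TriHIBit": (170, ["Tripedia", "ActHib"]),
          "Comvax":   (225, ["Recombivax", "PedVaxHib"])}

for name, (combo_al, parts) in combos.items():
    separate = sum(single[p] for p in parts)
    print(f"{name}: {combo_al} mcg combined vs {separate} mcg separately "
          f"({combo_al - separate:+d} mcg)")
```

This reproduces the claims above: Pediarix comes out 25 mcg below the 875 mcg sum of its parts, Comvax well below, TriHIBit equal, and Pentacel far above the 330 mcg sum of its parts.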
Originally Posted by BethSLP
I am shocked at how many parents say "oh the thimerosal is out now, so they're fine." which is true until they find the NEXT thing.

That's exactly right. Even Dr Sears, who seems very pro-vaccine in general, stated in his book that he is afraid that aluminum is going to be the "next thimerosal". The amounts listed above are very concerning, seeing that the amount that is safe on any given day for an infant is the following:

Page 198, Dr Sears' book - "Using the 5 microgram per kilogram per day criterion from the first FDA document as a minimum amount we know a healthy baby can handle, a twelve-pound, two-month-old baby can safely get at least 30 micrograms of aluminum in one day."

Originally Posted by TanyaS
It is amazing how much they can vary for the same vaccine. And sad that it is not regulated more to minimize effects on children.

It is regulated: www.hhs.gov/nvpo/nvac/documents/Aluminumws.pdf

Originally Posted by amydep
The amounts listed above are very concerning, seeing that the amount that is safe on any given day for an infant is the following: Page 198, Dr Sears' book - "Using the 5 microgram per kilogram per day criterion from the first FDA document as a minimum amount we know a healthy baby can handle, a twelve-pound, two-month-old baby can safely get at least 30 micrograms of aluminum in one day."

I am not questioning whether this appears in Sears' book, I don't have it, but I don't think this figure is correct. Humans ingest much more than this starting in infancy.

Originally Posted by Science Mom
It is regulated: www.hhs.gov/nvpo/nvac/documents/Aluminumws.pdf
I am not questioning whether this appears in Sears' book, I don't have it, but I don't think this figure is correct. Humans ingest much more than this starting in infancy.

That was the meeting where no one could find where the maximum dosage figure came from, right? Where they were saying the original research was lost? SM, I know aluminum is regulated.
I said it was sad that it was not MORE regulated. Based on the list in the OP, the amount of aluminum seems to vary quite a bit between vaccines for the same VPDs. And thanks for that link. There is a lot of information in there that I will read after the holidays. If you are referring to the San Juan meeting, then yes, I believe so. The amount of aluminum in vaccines is considerably lower than the maximum isn't it? Originally Posted by amydep I keep seeing posts stating that Pediarix has a lot less aluminum than its separate parts. Actually, the amount in Pediarix is only 25 mcg less than the sum of its parts. If you add up the separate vaccines that go into Pediarix, you can see that Pediarix only has 25 mcg less than its separate vaccines. Here is the list out of Dr Sears book. The only one that he mentions that actually has a lot LESS than the sum of its parts is Comvax. I hope this clears up some misinformation that has been posted. If anyone sees anything incorrect that I have posted, please feel free to correct it. "ActHib - 0 mcg PedVaxHib - 225 mcg Prevnar - 125 mcg Daptacel (DTaP) - 330 mcg Tripedia (DTaP) - 170 mcg Infanrix (DTaP) - 625 mcg Recombivax (HepB) - 250 mcg Engerix B (HepB) - 250 mcg Polio - 0 mcg MMR - 0 mcg Chic Pox - 0 mcg Hep A - 250 mcg Combo Vaccines: Comvax (hep B as Recombivax and HIB as PedVaxHIB) - 225 mcg. This particular combo vax has LESS aluminum than getting the shots separately. Pentacel (DTaP as Daptacel, HIB as ActHIB and polio) - 1500 mcg. This shot has a lot MORE aluminum that the sum of its parts. You actually get 5 times the amount of aluminum than if you were to get the shots separately. Pediarix (DTaP as Infanrix, hepB as EngerixB and polio) - 850 mcg TriHIBit (DTaP as Tripedia and HIB as ActHIB) - 170 mcg" The amount of aluminum in the recommended indiv. dose of a biological product shall not exceed 1.250mg. When you give multiple vaccines containing aluminum it can exceed this. 
Originally Posted by julieann199930
The amount of aluminum in the recommended indiv. dose of a biological product shall not exceed 1.250mg. When you give multiple vaccines containing aluminum it can exceed this.

References please.

I found the following to be quite interesting as well, specifically the first two, I'll quote the whole list for posterity. http://www.hhs.gov/nvpo/nvac/documents/Aluminumws.pdf (your link, SM)

The second panel discussed "what we don't know" about aluminum-containing adjuvants and identified the following areas to be more thoroughly studied.
1. Toxicology and pharmacokinetics of aluminum adjuvants. Specifically, the processing of aluminum by infants and children.
2. Mechanisms by which aluminum adjuvants interact with the immune system.
3. Necessity of adjuvants in booster doses.
4. Definition of frequency and duration of the MMF lesion in normal people.
5. Role of aluminum in the pathophysiology of the MMF
6. Human control studies to assess the relationship between the "symptom complex" identified by Dr. Gherardi in patients who have the MMF lesion and the MMF lesion.
7. New adjuvant development.
8. Expanded trials of IM rather than the SQ route of injection for anthrax vaccine and non-needle vaccine administration

How about some perspective on what exactly a milligram of neurotoxin can do to the human brain. Aluminum is injected in milligram amounts. Yet it only takes .002 mg (or 2 mcg) of aluminum to alter normal gene expression in the brain.

Insider - do you have a reference for that? I am in a discussion with someone about this and that would be the perfect fact to include, but I can't find sources.

The minimal reduction of aluminium is not worth it especially when you consider that it is then all entering the one injection site.
Aluminium is known to cause the lumps in the muscle with some shots-increasing that dose of aluminium into the one spot rather than spreading it around can only be a bad thing, can only increase side effects like that. Here is a link to a post in a thread in the archives with some references on the dangers of aluminum, to get you started. Originally Posted by insider How about some perspective on what exactly a milligram of neurotoxin can do to the human brain. Aluminum is injected in milligram amounts. Yet it only takes .002 mg (or 2 mcg) of aluminum to alter normal gene expression in the brain. The daily estimated intake of an average adult is 8 mg - I will repeat, per day. Originally Posted by insider How about some perspective on what exactly a milligram of neurotoxin can do to the human brain. Aluminum is injected in milligram amounts. Yet it only takes .002 mg (or 2 mcg) of aluminum to alter normal gene expression in the brain. Yes, please references. I don't think how much aluminum an adult eats is really relevant. I'm concerned with how much gets to a baby's brain. That's where a neurotoxin does its damage. Thanks for sharing! I had no idea that the amount wasn't much less. Originally Posted by huggerwocky The daily estimated intake of an average adult is 8 mg - I will repeat, per day. Yes and we've been through the whole injection vs. ingestion thing before, too. No point in entering that whole debate again, because it really seems like we go over this topic again and again and again. I don't see much disagreement that there is a difference in the two when we're talking about pretty much every other drug. Edit: Your own link says the average infant's daily intake is less than 1 mg. It also says this: While it is true that most of our daily intake of aluminum comes from food, only a very small percentage - usually less than 1% - is actually absorbed by the body. Emphasis mine. 
Originally Posted by Science Mom
If you are referring to the San Juan meeting, then yes, I believe so. The amount of aluminum in vaccines is considerably lower than the maximum isn't it?

The max is .85 milligrams. How much might someone receive via vaxes on one day at a max? (with 4 al adjuvanted vaxes on one day). Also, how sound is the science behind the .85 milligram limit? Was that limit designed for an infant or an adult? How can we check that if no one knows where it came from??? If it's from the distant past, they might have had some goofy ideas about things back then. Or maybe not. Who knows?

I found an old link to the Puerto Rico discussion on it, btw:
I'm not getting this whole special system of equations thing. Please help me understand.

Question: Which system has an infinite number of solutions?
A: y = 2x - 5 and -2 = y - 2x.
B: x + 2 = y and 4 = 2y - x.
C: y + 3 = 2x and 4x = 2y - x.
D: 2y + 6 = 4x and -3 = y - 2x.

• one year ago
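One way to see the answer: rewrite each pair in slope-intercept form; a system has infinitely many solutions exactly when both equations reduce to the same line. A short check (the slope/intercept pairs below are my own algebra on the four options):

```python
from fractions import Fraction

# Each option as two (slope, intercept) pairs after solving for y.
systems = {
    "A": [(2, -5), (2, -2)],              # y = 2x - 5;  y = 2x - 2
    "B": [(1, 2), (Fraction(1, 2), 2)],   # y = x + 2;   y = x/2 + 2
    "C": [(2, -3), (Fraction(5, 2), 0)],  # y = 2x - 3;  y = 5x/2
    "D": [(2, -3), (2, -3)],              # y = 2x - 3;  y = 2x - 3
}

for label, (line1, line2) in systems.items():
    if line1 == line2:
        print(label, "same line: infinitely many solutions")
    elif line1[0] == line2[0]:
        print(label, "parallel lines: no solution")
    else:
        print(label, "one intersection point")
```

Option D is the only pair that collapses to a single line (both equations give y = 2x - 3), so it has infinitely many solutions; A is a pair of parallel lines with no solution, and B and C each meet in one point.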
Efficient SAT Techniques for Absolute Encoding of Permutation Problems: Application to Hamiltonian Cycles

Last modified: 2009-10-22

We study novel approaches for solving hard combinatorial problems by translation to Boolean Satisfiability (SAT). Our focus is on combinatorial problems that can be represented as a permutation of n objects, subject to additional constraints. In the case of the Hamiltonian Cycle Problem (HCP), these constraints are that two adjacent nodes in a permutation should also be neighbors in the original graph for which we search for a Hamiltonian cycle. We use the absolute SAT encoding of permutations, where for each of the n objects and each of its possible positions in a permutation, a predicate is defined to indicate whether the object is placed in that position. For implementation of this predicate, we compare the direct and logarithmic encodings that have been used previously, against 16 hierarchical parameterizable encodings of which we explore 416 instantiations. We propose the use of enumerative adjacency constraints (which enumerate the possible neighbors of a node in a permutation) instead of, or in addition to, the exclusivity adjacency constraints (which exclude impossible neighbors) that have been applied previously. We study 11 heuristics for efficiently choosing the first node in the Hamiltonian cycle, as well as 8 heuristics for static CNF variable ordering. We achieve at least 4 orders of magnitude average speedup on HCP benchmarks from the phase transition region, relative to the previously used encodings for solving HCPs via SAT, and the speedup increases with the size of the graphs.
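To illustrate the absolute encoding described in the abstract (a generic sketch, not the paper's optimized hierarchical encodings): with one Boolean variable x[i][j] per object i and position j, a permutation is exactly a one-to-one assignment, enforced by exactly-one constraints on every object and every position. A minimal pairwise CNF generator:

```python
def permutation_cnf(n):
    """CNF (list of clauses of DIMACS-style signed ints) forcing the n*n
    variables x[i][j] ("object i is at position j") to encode a permutation."""
    def var(i, j):
        return i * n + j + 1  # variables numbered 1 .. n*n

    clauses = []
    for i in range(n):
        # object i occupies at least one position ...
        clauses.append([var(i, j) for j in range(n)])
        # ... and at most one (pairwise exclusion)
        clauses += [[-var(i, j), -var(i, k)]
                    for j in range(n) for k in range(j + 1, n)]
    for j in range(n):
        # position j holds at least one object ...
        clauses.append([var(i, j) for i in range(n)])
        # ... and at most one
        clauses += [[-var(i, j), -var(k, j)]
                    for i in range(n) for k in range(i + 1, n)]
    return clauses

print(len(permutation_cnf(3)))  # 24 clauses over 9 variables
```

For an HCP instance one would then add adjacency constraints on top of this, e.g. exclusivity clauses (not x[u][p] or not x[v][p+1]) for every non-edge (u, v), or the enumerative form the abstract proposes.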
The Curious Case of the Eiffel Tower

Debate has simmered among engineers over just why Gustave Eiffel designed his famous tower the way he did. Now it appears that the matter has been put to rest, thanks in part to the equations of Michigan Tech mathematician Iosif Pinelis.

[Photo: Eiffel Tower]

Pinelis, a professor of mathematical sciences, first became intrigued by the problem in 2002, when mechanical engineering professor Patrick Weidman of the University of Colorado at Boulder visited Michigan Tech. Weidman presented two competing mathematical theories, each purporting to explain the Eiffel Tower's elegant design.

The first, by Christophe Chouard, argued that Eiffel engineered his tower so that its weight would counterbalance the force of the wind. Chouard had developed a complicated equation to support his theory, but finding its solutions was proving difficult. "Weidman and the mathematicians whom he had consulted could only find one solution, a parabola, of the infinitely many solutions that Chouard's equation must have," Pinelis said.

As anyone who has survived high-school geometry can testify, the Eiffel Tower's profile doesn't look anything like a parabola. Weidman asked Michigan Tech mathematicians if they could come up with any other solutions. Pinelis went back to his office and soon found an answer confirming Weidman's conjecture that Chouard's theory was wrong. It turns out that all existing solutions to Chouard's equation must either be parabola-like or explode to infinity at the top of the tower.

"The Eiffel Tower does not explode to infinity at the top, and its profile curves inward rather than outward," Pinelis notes. "That pretty much rules out Chouard's equation."

According to the second, longstanding theory of Eiffel Tower design, the wind pressure is counterbalanced by tension between the elements of the tower itself.
After examining Pinelis's equations, Weidman went to the historical record and found an 1885 letter from Eiffel to the French Civil Engineering Society affirming that Eiffel had indeed planned to counterbalance wind pressure with tension between the construction elements. Using that information, Weidman and his colleagues developed an equation whose solutions yielded the true shape of the Eiffel Tower. "The funny thing for me was that you didn't have to go into the historical investigation to disprove a wrong theory," Pinelis says. "The math confirms the logic behind the design. For me, it was more fun to do the math."
Pointwise Convergent & Monotone

March 21st 2009, 06:32 PM

Given $f_n(x) \rightarrow F$ pointwise on a domain $D$, where each $f_n(x)$ is monotone (either increasing or decreasing) on $D$, show that $F$ is monotone on $D$.

so what I have is: assume that $f_n(x)$ is monotone increasing; then $f_{n+1}(x) \geq f_n(x)$, therefore $f_{n+1}(x) - f_n(x) \geq 0$.

by definition (the one that I am using) a function is pointwise convergent if and only if $\lim_{n\rightarrow \infty} f_n(x) = F(x)$ and if and only if given an $\varepsilon >0 \ \exists \ n_0$ (dependent on both $\varepsilon$ and $x$) $\in \ \mathbb{N}$ such that $|f_n(x)-F(x)|<\varepsilon$

so based on that if I let $\lim_{n\rightarrow \infty} f_n(x) = F(x)$ and $\lim_{n\rightarrow \infty} f_{n+1}(x) = F'(x)$, therefore $\lim_{n\rightarrow \infty} f_{n+1}(x) = F'(x) \geq \lim_{n\rightarrow \infty} f_n(x) = F(x)$, so $F'(x) - F(x) \geq 0$, hence as $f_n(x)$ increases so does $F(x)$.

is this correct?

March 22nd 2009, 01:03 PM
{"url":"http://mathhelpforum.com/differential-geometry/79838-pointwise-convergent-monotone-print.html","timestamp":"2014-04-18T12:13:43Z","content_type":null,"content_length":"12555","record_id":"<urn:uuid:4304b6f3-b80b-414c-bb65-24c4c7e1ca7e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Agashe, Amod - Department of Mathematics, Florida State University

• Squareness in the special L-value and special L-values of twists
• Rational torsion points on elliptic curves
• The Birch and Swinnerton-Dyer formula for modular abelian varieties of analytic rank zero
• The Manin Constant, Congruence Primes, and the Modular Degree
• Generating the Hecke algebra (Amod Agashe, William A. Stein)
• On the notion of visibility of torsors
• The special L-value of the winding quotient of level a product of two distinct primes
• Visibility and the Birch and Swinnerton-Dyer conjecture for analytic rank one
• Rational torsion in elliptic curves and the cuspidal ...
• Théorie des nombres / Number Theory: On invisible elements of the Tate-Shafarevich group
• Some calculations regarding torsion and component groups ("Note: What I am calling observations below are really hunches based on part of the data ...")
• The Birch and Swinnerton-Dyer conjecture (proposal text)
• Constructing elliptic curves with known number of points
• Algebraic Geometry I Class Notes, Instructor: Amod Agashe
• Algebraic Geometry I Lectures 3, 4, and 5
• Algebraic Geometry I Lectures 6 and 7
• Algebraic Geometry I Lectures 9, 10, and 11
• Algebraic Geometry I Lectures 14 and 15 (October 22, 2008)
• Algebraic Geometry Notes 16, 17 (November 5, 2008)
• Algebraic Geometry I Lectures 22 and 23
• Visibility of Shafarevich-Tate Groups of Abelian Varieties
• Visibility for analytic rank one
• A visible factor of the Heegner index (Math. Res. Lett. 17 (2010))
• The Birch and Swinnerton-Dyer conjectural formula
• Suchandan Pal, December 1, 2008
• Appendix by A. Agashe and W. Stein ("In this appendix, we apply a result of J. Sturm to obtain a bound on the number of ...")
• Pure and Applied Mathematics Quarterly, Volume 2, Number 2
• Various remarks and comments at the beginning of the lecture regarding previous lectures
• A visible factor of the special L-value
• December 23, 2008: "Definition 1.1. If (a) is a homogeneous ideal of S then we define V((a)) ..."
• Mod-p reducibility, the torsion subgroup, and the Shafarevich-Tate group
• The Modular Degree, Congruence Primes, and Multiplicity One
• Visibility and the Birch and Swinnerton-Dyer conjecture for analytic rank zero
• The Modular Number, the Congruence Number, and Multiplicity One ("Abstract. Let N be a positive integer and let f be a new ...")
• Periods of quadratic twists of elliptic curves (arXiv:1012.0094v1 [math.NT], 1 Dec 2010)
Boston Algebra 2 Tutor
Find a Boston Algebra 2 Tutor

...I look forward to hearing from you! I have academic experience in Prealgebra and the base curriculum for Algebra 1, including the following: Expressions and Equations, Linear Equations, Representing Linear Equations, Linear Inequalities, Systems of Linear Equations, Polynomials, Factoring and Quad...
38 Subjects: including algebra 2, reading, English, physics

...Many of these other related areas of mathematics can be found in my profile, including algebra, calculus, computer science, discrete mathematics, differential equations, geometry, linear algebra, probability, and statistics. Finite mathematics is an overbroad subject serving as a typical busines...
63 Subjects: including algebra 2, chemistry, English, reading

Hi! My name is Rebecca and I am currently a freshman at Northeastern University studying International Business. I've always loved math and am able to tutor students up to Algebra 2.
11 Subjects: including algebra 2, reading, geometry, algebra 1

I am a school psychologist at a high school, with experience tutoring students both at the elementary and high school levels. I have also worked extensively with special needs students. My goal is to be supportive of student needs and to understand not only what students have difficulty with, but...
30 Subjects: including algebra 2, reading, English, writing

...I think instilling confidence in students is the key to successful tutoring. Every student is capable of thriving in the subject they are struggling in. They just need a strong foundation and a positive, patient person to help them establish that!
36 Subjects: including algebra 2, chemistry, English, reading
derivative of the logarithm of a complete homogeneous polynomial

I have the following complete homogeneous polynomial of degree $r$:
$$p(x_1, x_2, \ldots, x_n) = \sum_{\substack{i_1 + i_2 + \cdots + i_n = r \\ i_k \in \{0,1,\ldots,r\}}} \phi_{i_1}(x_1)\phi_{i_2}(x_2)\cdots\phi_{i_n}(x_n),$$
where the functions $\phi$ have the form $\phi_j(x_k) = a_{jk}x_k^j$ and the $a_{jk}$ are real positive numbers, that is, $a_{jk} > 0$. Specifically we set $a_{0k} = 1$ for all $k$. The above polynomial is therefore complete in the sense that all combinations of the $i_k$ are included in it. In particular, if all the $a_{jk}$ are one, we have the complete symmetric polynomial. The polynomial is thus defined by $(r+1) \times n$ coefficients.

I define the following derivatives:
$$D_l(i_1, i_2, \ldots, i_l) = \frac{\partial^l \log p}{\partial x_{i_1} \partial x_{i_2} \cdots \partial x_{i_l}},$$
where all the subscripts (the variables involved) $i_1, i_2, \ldots, i_l$ are different and $l \leq n$. My question is: can we obtain a set of conditions on the coefficients $a_{jk}$ so that the following inequalities hold for positive variables $x_1, x_2, \ldots, x_n$?

$D_2(i_1, i_2) < 0$ for all $(i_1, i_2) \in \{1,\ldots,n\}^2$;
$D_3(i_1, i_2, i_3) > 0$ for all $(i_1, i_2, i_3) \in \{1,\ldots,n\}^3$;
$D_4(i_1, i_2, i_3, i_4) < 0$ for all $(i_1, i_2, i_3, i_4) \in \{1,\ldots,n\}^4$;
... and finally $(-1)^n D_n(1, 2, 3, \ldots, n) < 0$.

Note that $D_1(i_1) > 0$ holds trivially. Any kind of help with this problem would be appreciated.

A simpler version of this problem arises from making the coefficients depend on $n$ parameters $b_k$, $k = 1, \ldots, n$: for example, we could define $a_{1k} = b_k$ and $a_{jk} = a_{j-1,k} \times j$, or something similar.

The above problem is equivalent to the following: find a multivariate discrete distribution $(I_1, \ldots, I_n)$ whose support is the set of nonnegative integers $i_k \in \{0,1,\ldots,r\}$ with $i_1 + i_2 + \cdots + i_n = r$, given by
$$\Pr(I_1 = i_1, \ldots, I_n = i_n) = \frac{\phi_{i_1}(x_1)\phi_{i_2}(x_2)\cdots\phi_{i_n}(x_n)}{p(x_1,\ldots,x_n)} = \frac{a_{i_1 1}x_1^{i_1}\, a_{i_2 2}x_2^{i_2} \cdots a_{i_n n}x_n^{i_n}}{p(x_1,\ldots,x_n)}. \quad (*)$$
Then the above conditions on the derivatives of the logarithm of the polynomial are equivalent to
$$E[H_{j_1} H_{j_2}] < 0, \quad E[H_{j_1} H_{j_2} H_{j_3}] > 0, \quad E[H_{j_1} H_{j_2} H_{j_3} H_{j_4}] < 0, \quad \ldots, \quad (-1)^n E[H_1 H_2 \cdots H_n] < 0,$$
with all the subscripts $j_k$ different, where $H_{j_k} = I_{j_k} - E[I_{j_k}]$ is the mean-centered variable. This equivalence arises from the following consideration: the distribution given by (*) is an exponential-family distribution in the "parameters" $x_1, \ldots, x_n$, and the polynomial $p(x_1, \ldots, x_n)$ is its normalizing constant (the constant that makes the probabilities sum to one). The derivatives $D_l(i_1, i_2, \ldots, i_l)$ then give the above expectations.

ra.rings-and-algebras st.statistics polynomials
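For a quick numerical sanity check of the smallest of these sign conditions, here is a hedged Python sketch (the function names and the test point are my own choices, not from the question). It builds the distribution (*) for small $n, r$ and computes the covariance $E[H_1 H_2]$. In the fully symmetric case $a_{jk} = 1$, $x_k = 1$, the covariance must be negative, since $\operatorname{Var}(I_1 + \cdots + I_n) = 0$ forces the pairwise covariances to equal $-\operatorname{Var}(I_1)/(n-1)$.

```python
from itertools import product

def compositions(r, n):
    """All exponent vectors (i_1, ..., i_n) of nonnegative integers summing to r."""
    return [c for c in product(range(r + 1), repeat=n) if sum(c) == r]

def distribution(a, x, r):
    """The distribution (*): P(I = c) proportional to prod_k a[c_k][k] * x[k]**c_k."""
    supp = compositions(r, len(x))
    w = []
    for c in supp:
        t = 1.0
        for k, i in enumerate(c):
            t *= a[i][k] * x[k] ** i
        w.append(t)
    z = sum(w)  # the polynomial p(x), i.e. the normalizing constant
    return supp, [wi / z for wi in w]

def cov12(a, x, r):
    """Covariance E[H_1 H_2] = E[I_1 I_2] - E[I_1] E[I_2] under (*)."""
    supp, p = distribution(a, x, r)
    e1 = sum(c[0] * pi for c, pi in zip(supp, p))
    e2 = sum(c[1] * pi for c, pi in zip(supp, p))
    e12 = sum(c[0] * c[1] * pi for c, pi in zip(supp, p))
    return e12 - e1 * e2

# Symmetric sanity check: all a_{jk} = 1 and all x_k = 1 give the uniform
# distribution on compositions, where the covariance is -Var(I_1)/(n-1) < 0.
n, r = 3, 2
a = [[1.0] * n for _ in range(r + 1)]
x = [1.0] * n
print(cov12(a, x, r))  # -5/18 for n = 3, r = 2
```

For general positive $a_{jk}$ and $x_k$ this only checks one instance numerically; the question itself asks for conditions guaranteeing the sign for all positive $x$.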
{"url":"http://mathoverflow.net/questions/136978/derivative-of-the-logarithm-of-a-complete-homogeneous-polynomials","timestamp":"2014-04-17T04:18:18Z","content_type":null,"content_length":"48012","record_id":"<urn:uuid:cddec2be-6a1e-4390-b914-6f65c954b6b7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Flattening techniques of Raynaud and Gruson

Suppose $R$ is an adic valuation ring with a finitely generated ideal of definition. Let $A$ be an $R$-algebra of topologically finite type, i.e. $A$ is isomorphic to $R\langle\zeta_1,\zeta_2,\ldots,\zeta_n\rangle/\mathfrak{a}$, where $\mathfrak{a}$ is an ideal of the restricted power series $R$-algebra $R\langle\zeta_1,\zeta_2,\ldots,\zeta_n\rangle$.

Claim: If $A$ is flat over $R$, then $A$ is in fact of topologically finite presentation, i.e. we can assume $\mathfrak{a}$ to be finitely generated.

What would be the idea of proving this? How should the flatness condition be understood here?

ac.commutative-algebra formal-schemes

Have you looked in the early parts of the paper "Formal and Rigid Geometry I" by Bosch and Lütkebohmert, or perhaps the part II sequel? It is addressed in one or both of those papers for sure. – user28172 May 28 '13 at 23:29
{"url":"http://mathoverflow.net/questions/132081/flattening-techniques-of-raynaud-and-gruson","timestamp":"2014-04-17T04:42:17Z","content_type":null,"content_length":"47179","record_id":"<urn:uuid:459ac8f7-548b-4431-8124-a7f3f6f7d654>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamical Systems

June 3rd 2005, 08:00 AM

Q1. Decide whether each of the following functions is 1-1, onto, a homeomorphism, or a diffeomorphism:
a. f(x) = x^(5/3)
b. f(x) = x^(4/3)
c. f(x) = 3x + 5
d. f(x) = e^x
e. f(x) = 1/x
f. f(x) = 1/x^2

Q2. Identify which of the following subsets of R are closed, open, or neither:
a. {x : x is an integer}
b. {x : x is a rational number}
c. {x : x = 1/n for some natural number n}
d. {x : sin(1/x) = 0}
e. {x : x sin(1/x) = 0}
f. {x : sin(1/x) > 0}

Q3. Sketch the graph of B(x).

Q4. Prove that B'(0) = 0.

Q5. Inductively prove that B^(n)(0) = 0 for all n. Conclude that B(x) is a C-infinity function.

Q6. Modify B(x) to construct a C-infinity function C(x) which satisfies:
a. C(x) = 0 if x is less than or equal to 0
b. C(x) = 1 if x is greater than or equal to 1
c. C'(x) > 0 if 0 < x < 1

Q7. Modify C(x) to construct a C-infinity bump function D(x) on the interval [a,b], where D(x) satisfies:
a. D(x) = 1 for a < x < b
b. D(x) = 0 for x < alpha and x > beta, where alpha < a and b < beta
c. D'(x) is not equal to 0 on the intervals (alpha, a) and (b, beta)

Q8. Use a bump function to construct a diffeomorphism f : [a,b] -> [c,d] which satisfies f'(a) = f'(b) = 1 and f(a) = c, f(b) = d.

Please, any kind of help would be appreciated.

June 4th 2005, 10:46 AM

For Q1:
a) homeomorphism from R to R, 1-1 and onto, not a diffeomorphism
b) none of the above
c) diffeomorphism from R to R, 1-1 and onto
d) diffeomorphism from R to R+, 1-1
e), f) none of the above

You still should convince yourself of every single assertion and give a proof.
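B(x) is not defined in the post; in the standard version of this exercise it is $B(x) = e^{-1/x}$ for $x > 0$ and $B(x) = 0$ for $x \le 0$, and Q6's transition function is then $C(x) = B(x)/(B(x) + B(1-x))$. A hedged Python sketch of that construction (the numerical checks are mine, and smoothness itself of course needs the inductive proof of Q5, not a computation):

```python
import math

def B(x):
    """The classic C-infinity flat function: exp(-1/x) for x > 0, else 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def C(x):
    """Smooth transition: 0 for x <= 0, 1 for x >= 1, strictly increasing between.
    Well defined because B(x) + B(1 - x) > 0 for every x."""
    return B(x) / (B(x) + B(1.0 - x))

# Quick checks of the three properties asked for in Q6.
assert C(-2.0) == 0.0 and C(3.0) == 1.0
assert C(0.5) == 0.5                      # symmetry: C(x) + C(1 - x) = 1
assert C(0.2) < C(0.4) < C(0.6) < C(0.8)  # increasing on (0, 1)
```

For Q7, one standard plateau construction (again an assumption about the intended exercise, not from the post) is $D(x) = C\big(\frac{x-\alpha}{a-\alpha}\big)\, C\big(\frac{\beta-x}{\beta-b}\big)$: the first factor rises from 0 at $\alpha$ to 1 at $a$, the second falls from 1 at $b$ to 0 at $\beta$.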
{"url":"http://mathhelpforum.com/advanced-algebra/396-dynamical-systems.html","timestamp":"2014-04-17T21:01:51Z","content_type":null,"content_length":"33052","record_id":"<urn:uuid:01bb4a62-dc63-4966-85a7-4818c08def55>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Videos for spheres - Homework Help Videos - Brightstorm

17 Videos for "spheres"

How to calculate the volume of a sphere.
How to calculate the surface area of a sphere given a radius.
How to calculate the diameter of a sphere, given its volume.
How to calculate the radius of a sphere given a radius.
How to calculate the surface area of a sphere given the volume.
How to define a circle and a sphere using the word locus.
How to calculate the volume of a hemisphere.
How to calculate the volume of a hemisphere with a piece missing.
How to calculate the surface area of a hemisphere.
How to find the locus of points that are equidistant from the endpoints of a segment.
How to find the locus of points that are equidistant from the sides of an angle.
How to find the ratio of surface areas and volumes of two spheres.
How to find an equation of the plane tangent to a sphere.
How to find the equation of a sphere with center (a,b,c) and radius r.
How to find the center and radius of a sphere when its equation is not in center-radius form by completing the square.
How to find the equation of a sphere with radius r when it is centered at the origin.
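The core formulas behind these videos are $V = \tfrac{4}{3}\pi r^3$ and $S = 4\pi r^2$; solving the first for $r$ gives $r = (3V/4\pi)^{1/3}$. A small Python sketch (function names are my own):

```python
import math

def sphere_volume(r):
    """V = (4/3) * pi * r^3."""
    return 4.0 / 3.0 * math.pi * r ** 3

def sphere_surface_area(r):
    """S = 4 * pi * r^2."""
    return 4.0 * math.pi * r ** 2

def radius_from_volume(v):
    """Invert V = (4/3) * pi * r^3 for r."""
    return (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)

r = 3.0
v = sphere_volume(r)
assert abs(radius_from_volume(v) - r) < 1e-12  # round trip recovers the radius
```

The ratio facts in the last videos follow directly: scaling $r$ by $k$ scales surface areas by $k^2$ and volumes by $k^3$.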
{"url":"http://www.brightstorm.com/tag/spheres/","timestamp":"2014-04-16T19:37:02Z","content_type":null,"content_length":"57678","record_id":"<urn:uuid:7c81de22-204f-4c08-a1a9-5269516c4870>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Seminar: Number Theory and Automorphic Forms
Organizers: Lior Silberman, Julia Gordon, Bill Casselman.
Contact: lior @ math.ubc.ca ; MATH 229B ; 604-827-3031
Winter 2011: Integral Quadratic Forms
• Reading Course number: MATH 620A, section 201
Clarification: the content on these pages is made available for traditional reuse, but is expressly excluded from the terms of UBC Policy 81.
Last modified Wednesday January 26, 2011
{"url":"http://www.math.ubc.ca/~lior/teaching/1011/620A_W11/","timestamp":"2014-04-19T07:19:01Z","content_type":null,"content_length":"2884","record_id":"<urn:uuid:868e80ad-621a-40a2-9f23-be7e7fc1ed01>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
More Differentiation Questions

December 10th 2008, 06:06 PM

I guess I should first ask: is it proper form here to start new threads or edit my old ones... either way, I'll post in this new one for now.

Trying to work out y' of the square root of (x-1)/(x+1) (all terms inside the root; I didn't know how to indicate that in text). I assume that I make it ((x-1)/(x+1))^(1/2), but I get into a very tangly equation when I use the quotient rule. Is this the right way to start here?

December 10th 2008, 06:14 PM

You have to use the chain rule and the quotient rule.
y = sqrt((x-1)/(x+1))
y = ((x-1)/(x+1))^0.5
dy/dx = 0.5 ((x-1)/(x+1))^(-0.5) multiplied by d/dx((x-1)/(x+1))
Can you take it from here?

December 10th 2008, 07:54 PM

I got the chain rule part, but I'm getting tangled up in the algebra, I think. I multiply 0.5 ((x-1)/(x+1))^(-0.5) by the derivative, which I worked out to be 2/(x+1)^2, and get confused.

December 10th 2008, 08:52 PM

y = sqrt((x-1)/(x+1))
y = ((x-1)/(x+1))^0.5
dy/dx = (1/2) ((x-1)/(x+1))^(-1/2) multiplied by ((x+1)(1) - (x-1)(1)) / (x+1)^2
dy/dx = (1/2) (sqrt(x+1)/sqrt(x-1)) multiplied by 2/(x+1)^2
dy/dx = sqrt(x+1) / (sqrt(x-1) (x+1)^2)

December 10th 2008, 09:08 PM

You can always use synthetic or long division to reduce the expression inside the radical to 1 - 2/(x+1). That makes it a little more bearable to differentiate, and you no longer have to worry about applying the quotient rule to the interior of the radical.
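A quick numerical cross-check of the thread's final answer, $\frac{d}{dx}\sqrt{\frac{x-1}{x+1}} = \frac{\sqrt{x+1}}{\sqrt{x-1}\,(x+1)^2} = \frac{1}{\sqrt{x-1}\,(x+1)^{3/2}}$, against a central difference (the test point is my own choice):

```python
import math

def f(x):
    """The original function sqrt((x - 1) / (x + 1)), defined for x > 1."""
    return math.sqrt((x - 1.0) / (x + 1.0))

def df_closed(x):
    """sqrt(x+1) / (sqrt(x-1) * (x+1)^2), simplified to 1 / (sqrt(x-1) * (x+1)^1.5)."""
    return 1.0 / (math.sqrt(x - 1.0) * (x + 1.0) ** 1.5)

def df_numeric(x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 2.0
assert abs(df_closed(x) - df_numeric(x)) < 1e-8  # the two agree at x = 2
```

This agreement is only a spot check, but it is a cheap way to catch algebra slips in chain/quotient-rule work.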
{"url":"http://mathhelpforum.com/calculus/64449-more-differentiation-questions-print.html","timestamp":"2014-04-21T11:48:01Z","content_type":null,"content_length":"6240","record_id":"<urn:uuid:31d8abc1-0751-4c5b-a0e5-68ef74cdfe35>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Pythagoreans and Music
Replies: 13 Last Post: Oct 23, 1997 12:11 PM

Re: Pythagoreans and music
Posted: Apr 18, 1996 10:36 AM

On 18 Apr 1996, Daniel A. Asimov wrote:
> In article <960417183558_193158113@emout15.mail.aol.com> Marksaul@aol.com writes:
> >In a message dated 96-04-16 18:58:40 EDT, you write:
> >
> >>I think that that (2^(6/12) in equal-tempering) is part
> >>of what's known as the Devil's Triad, AKA a diminished 7th;
> >>at any rate, a nice reference is _The Myth of Invariance_
> >>by McSomebody.
> >
> >I've never heard of the Devil's Triad, but the Devil's interval is an
> >augmented fourth (or diminished fifth: they're the same in the tempered
> >scale). The ratio is 2^(6/12). It's the interval you hear in the solo
> >violin at the beginning of Saint-Saens's Danse Macabre.
> ------------------------------------------------------------------------
> A solo violin is often played so its scales are NOT even-tempered,
> but Pythagorean (or so I've heard).
> So -- in for example Saint-Saens's "Danse Macabre", just what interval
> is actually played by the violin for this "Devil's interval"?
> Is it really 2^(6/12) = sqrt(2) = 1.414..., or is it an approximating
> rational number like 7/5 = 1.4?
> --Dan Asimov

After joining this conversation this morning, it occurred to me that (of course) the MOST dissonant interval will be that corresponding to the golden ratio 1.618... . Then around noon, I opened a paper someone had sent to me, and found it to contain the words:

"Although many contemporary music composers have been intending to gain control of the golden section, ... the golden section gives the critical dissonance."

Quite a coincidence!
Does anyone know of any "traditional" name for the "golden section interval"?

John Conway

Thread index (Date, Subject, Author):
4/12/96  Pythagoreans and music  dennis wallace
4/13/96  Re: Pythagoreans and music  John Conway
4/16/96  Re: Pythagoreans and music  Brian Hutchings
4/17/96  Re: Pythagoreans and music  Marksaul@aol.com
4/17/96  Re: Pythagoreans and music  Brian Hutchings
4/18/96  Re: Pythagoreans and music  John Conway
4/18/96  Re: Pythagoreans and music  John Conway
4/18/96  Re: Pythagoreans and music  Daniel A. Asimov
4/18/96  Re: Pythagoreans and music  John Conway
4/18/96  Re: Pythagoreans and music  Brian Hutchings
4/19/96  Re: Pythagoreans and music  Brian Hutchings
10/20/97  Geathsha Connachta  Ara PennDragon
10/22/97  Re: Pythagoreans in general  Eileen M. Klimick Schoaff
10/23/97  Re: Pythagoreans in general  Floor van Lamoen
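Asimov's question compares the equal-tempered tritone $2^{6/12} = \sqrt{2}$ with the just ratio $7/5$. The standard way to quantify such gaps is in cents, $1200\log_2(\text{ratio})$; a small sketch (entirely my own illustration, not from the thread):

```python
import math

def cents(ratio):
    """Size of an interval in cents: 1200 * log2(frequency ratio)."""
    return 1200.0 * math.log2(ratio)

tritone_et = 2.0 ** (6.0 / 12.0)   # equal-tempered tritone = sqrt(2)
assert abs(tritone_et - math.sqrt(2.0)) < 1e-15

print(round(cents(tritone_et), 2))                 # 600.0 cents by construction
print(round(cents(7.0 / 5.0), 2))                  # 582.51 cents, about 17.5 cents flatter
print(round(cents((1 + math.sqrt(5.0)) / 2), 2))   # the golden ratio: 833.09 cents
```

So the just tritone 7/5 sits roughly a sixth of a semitone below the equal-tempered one, and Conway's "golden section interval" is nowhere near a tritone at all, closer to a wide minor sixth.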
{"url":"http://mathforum.org/kb/message.jspa?messageID=1075927","timestamp":"2014-04-18T03:00:31Z","content_type":null,"content_length":"33241","record_id":"<urn:uuid:84dab9f4-3d3e-4d52-8c63-7327e1b9d03d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Central and bicentral vertices in a graph

January 29th 2011, 06:26 AM

How do I prove that a tree has exactly one center vertex, or possibly two (bicentral) when the two are neighbouring vertices? I need an informal proof if possible, thanks!

This is certainly no proof, but I can give you a method of finding the center (bicenter) of a tree. Suppose that $n>2$. First, remove all vertices of degree one along with any incident edges. We still have a tree remaining. If the number of remaining vertices is more than two, then repeat the process. If the number of remaining vertices is two you have a bicenter; if it is one you have the center.

The algorithm is OK, I'll check it. Just one quick question: in the recurrent steps, do I check whether there are more than two vertices of degree one in order to continue with the process, or is it about all vertices regardless of degree? Thanks!

I don't know exactly what that means. The idea is to work the tree down until one vertex or two vertices of degree one remain.

I meant: when I first start, I see all the leaves and remove them from the tree together with their incident edges. Now I have new leaves, because I previously deleted the edges that connected these new leaves to the vertices that were leaves before. So now I check whether my tree contains two or fewer vertices; if so I stop, and I conclude that if there are two vertices, either of them could play the role of center or bicenter. If that condition does not hold, I must continue to remove leaves together with their incident edges; eventually I'll end up with a tree with one or two vertices. Is that what you talked about?
My question up there was: in the step where you check whether the tree has more than two vertices, do you check for vertices of degree one, or for all vertices no matter their degree? But it is clear to me now, so thanks.

You keep removing leaves and their stems, removing all on the same level, until there remains only one vertex or there remain two vertices of degree one.

As I noted, I always get the root to be one of the center/bicenter vertices. Isn't it obvious that I can pick the root and one of its children as the vertices I look for? Because in a rooted tree, if you keep removing the leaves you will eventually end up with the root and one child of it. As I told you, the root is the central node always, because if you just take the tree and redraw it with all of its child nodes placed evenly around it, you will note that the root is actually the center of the graph, hence its central node. And as I told you, you can take one of its children randomly and assign it as the bicenter node. That's it, it just takes an eye to see it =), thanks.
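The leaf-stripping method described in the thread is easy to run; here is a hedged Python sketch (the adjacency-list representation and names are my own). On a path 0-1-2-3-4 it returns the single center {2}, and on a path of four vertices it returns the bicenter {1, 2}, which also shows that the root of a rooted tree is not always a center.

```python
def tree_center(adj):
    """Iteratively strip all current leaves until at most 2 vertices remain.
    adj: dict vertex -> set of neighbours (an undirected tree)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    remaining = set(adj)
    while len(remaining) > 2:
        leaves = [v for v in remaining if len(adj[v]) <= 1]
        for v in leaves:
            for u in adj[v]:
                adj[u].discard(v)
            adj[v].clear()
            remaining.discard(v)
    return remaining

def path(n):
    """Path graph 0 - 1 - ... - (n-1) as an adjacency dict."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

print(tree_center(path(5)))  # {2}: odd path has a single center
print(tree_center(path(4)))  # {1, 2}: even path has a bicenter
```

Each round removes every current leaf at once; since a tree on more than two vertices always has at least two leaves and at least one non-leaf, the loop terminates with exactly one or two vertices.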
{"url":"http://mathhelpforum.com/advanced-math-topics/169642-central-bicentral-vertex-graph.html","timestamp":"2014-04-21T00:22:57Z","content_type":null,"content_length":"52210","record_id":"<urn:uuid:4dfe0813-91c3-4363-b915-5e6cee67fa60>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with calc 1 problem: rate of change

November 21st 2010, 04:42 PM

I need help with this problem. Could someone show me how to get the correct answer? I get a wrong answer every time.

A point P is moving along the curve whose equation is y = √x. Suppose that x is increasing at the rate of 40 units/s when x = 3. How fast is the distance between P and the point (2,0) changing at that instant?

Please show your work. Thanks so much.

November 21st 2010, 04:53 PM

How about you show us what you've done? What is the distance between P and the point (2,0)?

November 21st 2010, 05:38 PM

I got it. Thanks for your reply though.
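The thread never reaches the answer, so here is a hedged check of the standard related-rates setup (my own working, worth re-deriving): with $D(x) = \sqrt{(x-2)^2 + x}$, since $y^2 = x$ on the curve, the chain rule gives $\frac{dD}{dt} = \frac{2(x-2)+1}{2\sqrt{(x-2)^2+x}} \cdot \frac{dx}{dt}$, which at $x = 3$ with $dx/dt = 40$ evaluates to $\frac{3}{4} \cdot 40 = 30$ units/s.

```python
import math

def dist(x):
    """Distance from P = (x, sqrt(x)) to (2, 0): sqrt((x-2)^2 + x)."""
    return math.sqrt((x - 2.0) ** 2 + x)

def dDdt(x, dxdt):
    """Chain rule: dD/dt = D'(x) * dx/dt."""
    return (2.0 * (x - 2.0) + 1.0) / (2.0 * dist(x)) * dxdt

rate = dDdt(3.0, 40.0)
print(rate)  # 30.0 units/s

# Cross-check the derivative with a central difference in x.
h = 1e-6
numeric = (dist(3.0 + h) - dist(3.0 - h)) / (2.0 * h) * 40.0
assert abs(rate - numeric) < 1e-6
```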
{"url":"http://mathhelpforum.com/calculus/164006-help-calc-1-problem-rate-change.html","timestamp":"2014-04-19T23:04:36Z","content_type":null,"content_length":"35835","record_id":"<urn:uuid:d1852e67-0d60-437d-91ff-65506b84f67b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Germansville Math Tutor Find a Germansville Math Tutor ...The parents of these students tell me they've seen a great improvement since I began working with their children. I passed Linear Algebra with an A in the Fall of 2010 at Kutztown University. I have completed many projects over the last 20 or more years in cross-stitching, sewing, and crochet. 43 Subjects: including trigonometry, linear algebra, ASVAB, logic ...I specialize in micro- and macroeconomics, from an introductory level up to an advanced level. I have master's degree work in labor economics, financial analysis and game theory. I have the utmost confidence in my ability to relate this material in a comprehensible manner to the student. 19 Subjects: including calculus, Microsoft Excel, precalculus, statistics ...Though I go to school and teach for a living, I'm also a writer. Last year I published my first novel. Creative and analytical writing appeal to me especially, not to mention fiction. 34 Subjects: including precalculus, ACT Math, SAT math, English ...I am a professional actress as well as a designer, director, dramaturge, stage manager, and writer. Since the age of four, I have been studying and practicing theatre. I graduated from the Lehigh Valley Charter High School for the Performing Arts as a Theatre Major and am currently pursuing a theatre degree in college. 28 Subjects: including trigonometry, public speaking, writing, precalculus ...I have had several students return saying how my way of tutoring continues to help them in their current studies. In my years of experience, I've learned that many students learn with many different ways. I believe that every student can learn, however, the tutor must reach them at their level. 8 Subjects: including linear algebra, Mathematica, algebra 1, prealgebra
{"url":"http://www.purplemath.com/Germansville_Math_tutors.php","timestamp":"2014-04-18T14:05:39Z","content_type":null,"content_length":"23807","record_id":"<urn:uuid:a3cd8a59-3b00-4c58-a614-27aecd2b0100>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Conversion factors for old French measurement units

Measurement Conversion Factors

The following are conversion factors for old measurement terms that may have been used in the Châteauguay Valley. The old French terms were used in Quebec, Canada and Louisiana, USA. Some modern terms are included for completeness. The units of inches, feet, yards, and miles, unless further qualified, are the English or US units in common use today. The numbers in square brackets [n] indicate the source of the information listed at the bottom of the page. For other terms, see the web links below, especially Russ Rowlett's Dictionary of Units of Measurement.

Units covered:
Arpent (Arpen)
Bushel (British and US)
Rod (Pole)

Some links regarding conversion factors and old measurement info:
□ A Dictionary of Units of Measurement. A listing of just about every imaginable unit, old and new, used in the world. Authored by Russ Rowlett of the University of North Carolina at Chapel Hill, NC.

1/ Templeton's Pocket Companion, 2nd Ed., D. Appleton & Co., New York, 1853.
2/ Handbook of Chemistry and Physics, 55th Ed., CRC Press, Cleveland, OH, 1974.
3/ Chemical Engineers' Handbook, 2nd Ed., edited by John H. Perry, McGraw-Hill Book Co., New York, 1941.
4/ Carte de L'Isle de Montreal et de ses Environs, H. Bellin, 1744.
5/ Larousse French-English/English-French Dictionary, Unabridged Ed., Larousse, Paris, 1993.
6/ A Dictionary of Units of Measurement - online.
7/ Jacques Proot (author of the Anglo-Saxon Weights and Measures web site, no longer on-line).
8/ The History of the County of Huntingdon and of the Seigniories of Chateauguay & Beauharnois, Robt. Sellar, Huntingdon, 1888.
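A page like this is essentially a table of factors; a minimal Python sketch of how such a table is typically used (the factors below are only standard modern ones I am sure of, such as rod = 16.5 ft and mile = 5280 ft; the old French values from the page are not reproduced here):

```python
# Lengths expressed in feet; only well-established modern units included.
FEET_PER = {
    "inch": 1.0 / 12.0,
    "foot": 1.0,
    "yard": 3.0,
    "rod": 16.5,     # a rod (pole) is 16.5 feet
    "mile": 5280.0,
}

def convert(value, src, dst):
    """Convert a length between any two units in the table."""
    return value * FEET_PER[src] / FEET_PER[dst]

print(convert(1.0, "mile", "rod"))   # 320.0 rods per mile
print(convert(4.0, "rod", "yard"))   # 22.0 yards in 4 rods
```

Adding the old French units would just mean adding their foot-equivalents (from the bracketed sources) as new entries in the same dictionary.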
{"url":"http://www.rootsweb.ancestry.com/~qcchatea/factors.htm","timestamp":"2014-04-20T08:52:31Z","content_type":null,"content_length":"13692","record_id":"<urn:uuid:bf458288-332e-42f3-8951-d24cbf2d9db0>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
subwords of the fibonacci word

The Fibonacci word is the limit of the sequence of words starting with "0" and satisfying the rules $0 \to 01$, $1 \to 0$. Equivalently, take initial conditions $S_0 = 0$, $S_1 = 01$ and the recursion $S_n = S_{n-1}S_{n-2}$. I want to know which words cannot appear as subwords in the limit $S_\infty$. At first I thought 000 and 11 were the only two that could not appear. Then I noticed 010101. Is there any characterization of which words can or cannot appear as subwords of the Fibonacci word? Loosely related, this word appears as the cutting sequence of the line of slope $\phi = (1 + \sqrt{5})/2$ through the origin.

co.combinatorics ds.dynamical-systems combinatorics-on-words

Some additional info: the Fibonacci word is not cube-free. For example, it contains the subword "10010 10010 10010". (Another comment: perhaps someone should make a "combinatorics-on-words" tag.) – Joel Reyes Noche Apr 4 '11 at 6:14

According to oeis.org/A003849, the first 24 terms are "010010100 10010 10010 10010". But the 10th to 24th terms are a cube. – Joel Reyes Noche Apr 4 '11 at 6:35

A list of the first 1652 subwords by T. D. Noe (according to oeis.org/A003849) can be found at oeis.org/A003849/a003849.txt – Joel Reyes Noche Apr 4 '11 at 6:51

@Joel: You are right. Was it $n+3$ instead of $n+2$? I guess I did not remember correctly. – Mark Sapir Apr 4 '11 at 7:20

@Mark. Thanks for the references and info. – Joel Reyes Noche Apr 4 '11 at 7:45

5 Answers

The Fibonacci word is one of the Sturmian words, so its complexity is $n+1$; that is, the number of different subwords of length $n$ is $n+1$. So most words are not subwords of the Fibonacci word. There are, as far as I remember, 12 different but equivalent definitions of Sturmian words. Some of them give restrictions on possible subwords (see Algebraic combinatorics on words by Lothaire, and an article by Berstel there).
A proof that the Fibonacci word's subword complexity is $p_f(n) = n+1$ can be found in Section 10.5 of Allouche and Shallit's Automatic Sequences (2003, Cambridge University Press). – Joel Reyes Noche Apr 4 '11 at 6:24

The easiest way (linear-time, computationally speaking) to determine whether a finite word $w$ is a factor (a subword) of the Fibonacci word $S_\infty$ is the following:

• Remove a trailing 0 from $w$, if present (just one); if $w$ begins with 1, add a leading 0.
• The word thus obtained should be uniquely parsed as (written as a concatenation of) 0 and 01; if not, then $w$ is not a factor of $S_\infty$ and you are done. If $w = x_1x_2\cdots x_k$ is such a parsing, let $y_i = 0$ for all $i$ such that $x_i = 01$, and $y_i = 1$ otherwise (that is, if $x_i = 0$).
• Apply the same algorithm to the new word $w' = y_1\cdots y_k$.
• The original $w$ is a factor of $S_\infty$ if and only if you eventually reach the word 0 or 1 by recursively applying the above procedure.

Correctness can be easily proved, as the Fibonacci word is the limit of the substitution $0\to 01$, $1\to 0$ (folklore, see e.g. Lothaire's Algebraic combinatorics on words). For instance, $w = 1010010010100$ is a factor, since the sequence of words generated by the above algorithm is:
$$w,\: 00101001,\: 10010,\: 010,\: 0\;.$$
If you need a more dynamical point of view, Sturmian shifts (such as Fibonacci) are neither of finite type nor sofic. However, it is not hard to get the list of minimal forbidden factors of the Fibonacci word, as follows. Let $S_i'$ be the $i$-th palindromic prefix of $S_\infty$, which you can obtain by removing the last two characters of $S_i$.
Then a finite word $w$ is a factor of the Fibonacci word if and only if it does not contain any of the following as factors, for all $k\geq 1$: $$1S_{2k-1}'1,\quad 0S_{2k}'0\;.$$ In other terms, the sequence of minimal forbidden factors is 11, 000, 10101, 00100100, 1010010100101, … See for instance Mignosi et al., Words and forbidden factors.

I don't know if there is a simple characterization, but it seems there is a simple algorithm. See Bartosz Walczak, A simple representation of subwords of the Fibonacci word, available at http://tcs.uj.edu.pl/~walczak/fibonacci.pdf

The Fibonacci sequence is a quasiperiodic sequence. Quasiperiodic sequences (and more generally, quasiperiodic lattices) can be generated e.g. by the "strip projection method". Roughly speaking: take the lattice $\mathbb{Z}^2$ and translate the unit cube $[0,1]^2$ along the line through the origin with irrational slope, thus getting a strip. Consider all edges of $\mathbb{Z}^2$ which lie inside the strip. Then the sequence of vertical and horizontal edges is a quasiperiodic sequence. Please cf. e.g. http://arxiv.org/pdf/cond-mat/9903010v1.pdf, fig. 4.2, for the case of a Fibonacci sequence (slope = golden ratio 0.618...). Thus, to test whether a word is a subword of the Fibonacci word, just "lift" it to a path in the edge graph of $\mathbb{Z}^2$ and test whether it is contained in a strip of golden-ratio slope and corresponding width (e.g. by projecting orthogonally).

You can even expand this method to get the (asymptotic) frequency of the subword, which is proportional to the difference between the length of the orthogonal projection of the subword and the length of the projection of the strip (i.e. the length of the projection of the unit square, by definition). The easiest cases are of course the two building blocks of the sequence, which lift to a vertical resp. horizontal edge of the lattice. Thus the ratio of their occurrence frequencies is again the golden ratio. But you can immediately calculate the frequency of any subword in the same way.
From a dynamical perspective this convergence is due to the unique ergodicity of the associated symbolic dynamical system, but it also has strong connections to continued fractions: you will notice that the continued fraction approximations to the golden ratio arise as ratios of zeroes to ones in certain subwords of the Fibonacci word. The excellent books by Lothaire (Algebraic combinatorics on words, mentioned above) and by Fogg (Substitutions in dynamics, arithmetics and combinatorics) are good references for the Fibonacci word and for Sturmian and balanced words more generally. add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics ds.dynamical-systems combinatorics-on-words or ask your own question.
{"url":"https://mathoverflow.net/questions/60514/subwords-of-the-fibonacci-word/60515","timestamp":"2014-04-17T07:45:45Z","content_type":null,"content_length":"77267","record_id":"<urn:uuid:10fcb06c-c9b2-4924-85c9-fffdc45ffd4b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Plot of the factorial maps with polygons of contour by level of a factor s.chull {ade4} R Documentation Plot of the factorial maps with polygons of contour by level of a factor performs the scatter diagrams with polygons of contour by level of a factor. s.chull(dfxy, fac, xax = 1, yax = 2, optchull = c(0.25, 0.5, 0.75, 1), label = levels(fac), clabel = 1, cpoint = 0, col = rep(1, length(levels(fac))), xlim = NULL, ylim = NULL, grid = TRUE, addaxes = TRUE, origin = c(0,0), include.origin = TRUE, sub = "", csub = 1, possub = "bottomleft", cgrid = 1, pixmap = NULL, contour = NULL, area = NULL, add.plot = FALSE) dfxy a data frame containing the two columns for the axes fac a factor partioning the rows of the data frame in classes xax the column number of x in dfxy yax the column number of y in dfxy optchull the number of convex hulls and their interval label a vector of strings of characters for the point labels clabel if not NULL, a character size for the labels, used with par("cex")*clabel cpoint a character size for plotting the points, used with par("cex")*cpoint. 
If zero, no points are drawn col a vector of colors used to draw each class in a different color xlim the ranges to be encompassed by the x axis, if NULL, they are computed ylim the ranges to be encompassed by the y axis, if NULL they are computed grid a logical value indicating whether a grid in the background of the plot should be drawn addaxes a logical value indicating whether the axes should be plotted origin the fixed point in the graph space, for example c(0,0) the origin axes include.origin a logical value indicating whether the point "origin" should be belonged to the graph space sub a string of characters to be inserted as legend csub a character size for the legend, used with par("cex")*csub possub a string of characters indicating the sub-title position ("topleft", "topright", "bottomleft", "bottomright") cgrid a character size, parameter used with par("cex")* cgrid to indicate the mesh of the grid pixmap an object 'pixmap' displayed in the map background contour a data frame with 4 columns to plot the contour of the map : each row gives a segment (x1,y1,x2,y2) area a data frame of class 'area' to plot a set of surface units in contour add.plot if TRUE uses the current graphics window The matched call. Daniel Chessel xy <- cbind.data.frame(x = runif(200,-1,1), y = runif(200,-1,1)) posi <- factor(xy$x > 0) : factor(xy$y > 0) coul <- c("black", "red", "green", "blue") s.chull(xy, posi, cpoi = 1.5, col = coul) Worked out examples > library(ade4) > ### Name: s.chull > ### Title: Plot of the factorial maps with polygons of contour by level of > ### a factor > ### Aliases: s.chull > ### Keywords: multivariate hplot > ### ** Examples > xy <- cbind.data.frame(x = runif(200,-1,1), y = runif(200,-1,1)) > posi <- factor(xy$x > 0) : factor(xy$y > 0) > coul <- c("black", "red", "green", "blue") > s.chull(xy, posi, cpoi = 1.5, col = coul)
String Seminars

This series consists of talks in the area of Superstring Theory.

We discuss boundary conditions and domain walls in 4d N=4 SYM, focusing on those preserving 4 supercharges. Along the way we revisit the old problem of the quantum-corrected moduli space of 3d N=2

The principal chiral sigma model (PCSM) in 1+1 dimensions is asymptotically free and has an SU(N)-valued field with massive excitations. We have found all the exact form factors and two-point function of the Noether-current operators at large N using the integrable bootstrap program. At finite N, only the first non-trivial form factors are found, which give a good long-distance approximation for the two-point function. We show how to use these new exact results to study non-integrable deformations. One example is the PCSM coupled to a Yang-Mills field.

Motivated by the connection between 4-manifolds and 2d N=(0,2) theories, we study the dynamics of a fairly large class of 2d N=(0,2) gauge theories. We see that the physics of such theories is very rich, much as the physics of 4d N=1 theories. We discover a new type of duality that is very reminiscent of the 4d Seiberg duality. Surprisingly, the new 2d duality is an operation of order three. We study the low-energy physics and use the elliptic genus to detect dynamical supersymmetry breaking.

I will be discussing the relation between scale and conformal symmetry in unitary Lorentz-invariant QFTs in four dimensions.

In this talk we will discuss how giant gravitons and their open string interactions emerge from super Yang-Mills theory. This is accomplished by diagonalizing the one-loop dilatation operator on a class of operators with bare dimension of order N. From the result of this diagonalization, the Gauss law governing the allowed open string excitations of giant gravitons is clearly visible. In addition, we show that this sector of the theory is integrable.

Three-dimensional N=2 theories with a U(1)_R symmetry can be placed on a compact three-manifold M preserving some supersymmetry if and only if M admits a transversely holomorphic foliation (THF). I will show that the partition function of the resulting theory is independent of the metric and depends holomorphically on the moduli of the THF. When applied to supersymmetric field theories on manifolds diffeomorphic to S^3 and S^2 x S^1, this result explains many of the

In M-theory, the only AdS7 supersymmetric solutions are AdS7 × S4 and its orbifolds. In this talk, I will describe a classification of AdS7 supersymmetric solutions in type II supergravity. While in IIB none exist, in IIA with Romans mass (which does not lift to M-theory) there are many new ones. The classification starts from a pure spinor approach reminiscent of generalized complex geometry. Without the need for any Ansatz, the method determines uniquely the form of the metric and fluxes, up to solving a system

We study the conformal bootstrap for 3D CFTs with O(N) global symmetry. We obtain rigorous upper bounds on the scaling dimensions of the first O(N) singlet and symmetric tensor operators appearing in the \phi_i x \phi_j OPE, where \phi_i is a fundamental of O(N). Comparing these bounds to previous determinations of critical exponents in the O(N) vector models, we find strong numerical evidence that the O(N) vector models saturate the bootstrap constraints at all values of
Eleven Dimensions in String Theory

2012-Mar-05, 02:11 AM #1

Is this mainstream? The theory of eleven dimensions (11D) actually was first postulated nearly 60 years ago using the Standard Model. It sat there as a possibility until it was pushed into prominence with String over 20 years ago, when String ideas started to prove themselves as the only possible way of expanding the Standard Model to encompass everything. Since the 11D Standard matched 11D String, this has become the key for advanced theories during the last 20 years. So we are referring to an idea that is half a century old and has survived decades of scientific debate and has only gotten stronger support. The Wired article has been based on incomplete data. This is common with popular journalism. Even journalists in technical magazines don't have the required background to have followed the debate fully. The idea that mainstream science doesn't consider String strong enough is invalid. Every one of the top theoretical physicists in the world uses String. Hawking is not alone in his use of the

Here is how we get the 11 plus dimensions. Consider a shadow on the ground. You don't have to look at the object making the shadow to calculate its size and shape, because you know how light is projected and the angles that the light has traveled. So from a two-dimensional object you can calculate what the three-dimensional object looks like that projected the 2D shadow (you can take this up to a 3D shadow and 4D object if you measure over time). Doing the same type of measurements on our 4D universe, you can calculate that 11 Ds or more project themselves onto our 4D universe by the way those shadows caused by the 11D make themselves felt on our 4D. Both the General theory and String come up with at least 11 dimensions. Since the 11-D General theory was developed in the 1950s, this is a relatively old physics concept.
There are just enough gaps in the real world that require more than 4-D to get the mathematics explaining them to work. Consider this: You have a shadow moving on the 2-D ground. You use mathematics to explain how it moves and distorts during the day. The math becomes very complicated. Now instead of using the 2-D math you create a 4-D world where there is a stationary tree that doesn't change directions or size and project a shadow onto the ground as a light source (sun) moves. The math is much easier and you have rendered the 4-D world into a 3-D event (motion across a 2-D ground with time the third dimension). This is what is happening with the 11-D universe and our 4-D ability to see it. The math tells us that there is a projection from more dimensions into our 4-D shadow world. We are now in the realm of CSI. We get clues about the crime, or in this case the way the universe works. We then build up what has to happen for these events to occur. Blood on the ground means that someone was bleeding. Typing the blood tells us about the person or persons who bled. We have never and can not physically see the event but the clues tell us that it happened. We can not see the 11-D but the physical clues tell us that they are there. M-Theory is not a theory in the mold that most people think of. It does little to explain a specific physical event. It is a framework that explains how the math describing something such as Quarks and the math explaining something such as a super nova relate to each other and how to pull the overlap between the two extremely different events together. It sets the parameters on how these different equations have to be framed so there can be a passing of information between the completely different events. M-theory is a theory that sets the conditions that all of the other, more specific, theories have to meet. Yep .. 11 dimensions String and M-Theory gets my vote. Mainstream 'Mathematical Theoretical Physics'. 
(Not so keen about the rest, though). Last edited by Selfsim; 2012-Mar-05 at 02:22 AM. Reason: Added qualifiers I've heard (and this is completely unverified) that 1 time, 3 real and 3 complex make 10. Where is that quote from? Is it mainstream? It's closer to mainstream than a lot of wild ideas, and the math works out (or so I'm told). The problem (as I understand it) with string theory is that there are still a few competing approaches to it, and none of them have been tested experimentally. Mainly because nobody can come up with an experiment to test the predictions. The energy requirements are beyond current technology, or something like that. "For shame, gentlemen, pack your evidence a little better against another time." -- John Dryden, "The Vindication of The Duke of Guise" 1684 The problem (as I understand it) with string theory is that there are still a few competing approaches to it, and none of them have been tested experimentally. Mainly because nobody can come up with an experiment to test the predictions. The energy requirements are beyond current technology, or something like that. Not quite … see here … String Theory Test Progress. So much emphasis is commonly placed on the test difficulty issues .. and yet I rarely see anyone attempting to explain why M-Theory manages to explain super condensed physics behaviours and the fundamental particles. Rather than spend this thread poking at the well-known gaps, why not have more explanations for why it manages to hang together so consistently, when it could easily fall apart for any number of reasons ? Its overall high degree of self-consistency is the more perplexing and interesting issue, as far as I'm concerned … and it is almost always avoided for some reason ... why is that ? So do dimensions multiply with each other when there are more than four? Where is that quote from? Is it mainstream? It's closer to mainstream than a lot of wild ideas, and the math works out (or so I'm told). 
The problem (as I understand it) with string theory is that there are still a few competing approaches to it, and none of them have been tested experimentally. Mainly because nobody can come up with an experiment to test the predictions. The energy requirements are beyond current technology, or something like that. Thanks - the quote is from a collection of posts here in a discussion of Stephen Hawking's book The Grand Design. I lack the knowledge of physics to assess it, and find the idea of more than four dimensions impossible to imagine. So much emphasis is commonly placed on the test difficulty issues .. and yet I rarely see anyone attempting to explain why M-Theory manages to explain super condensed physics behaviours and the fundamental particles. Except it doesn't really explain the fundamental particles - it just makes their properties depend on the configuration of the Calabi-Yau manifold which is not that much better than just plugging some numbers in. And posits that they are 'made up from' a more fundamental object that can have several configurations. I'm more of a fan of how it offers a means to explain why 3+1 dimensions are what we see. String theory is a nice idea. It may even be right. Far too early to say. The linked thread seems to have almost no actual tests for it. One speculative cosmological model, one attempt to link it to quantum information theory and a few other hints. It is not the only contender for the next standard model, it is one of a few. So long as that is never forgotten (a few of the more evangelical string theorists seem to forget that, intoxicated by its mathematical depth I guess) it is a valuable piece of the speculative physics mainstream. I shall remain sceptical... I claim to understand infinity. A concept I grasp. I also claim to understand a three dimensional space concept with a addendum of time as a locater in time space.. I have failed to establish a need for any other dimensional concept.. Is it MAINSTREAM.. 
No it's not. As any explanation I might accept as mainstream would need to be of science and testable.. and to be honest. Those that purport a many-dimension reality are a small group of self-appointed experts talking to themselves.. and amongst themselves.. I am NOT included and might be quite wrong. However... I would not be alone in thinking science fiction can be found as true, or not. To lift this eleven-dimension proposal from fiction to fact simply requires an explanation.. When I see that explanation I will judge it, and still waiting.... The Emperor is naked. Eh. It may not be "mainstream" but it is an interesting study of how mainstream scientists brainstorm and evaluate new ideas. Many of the documentaries about string theory are a who's who of scientists getting camera time; what they lack in detail (ok, sometimes they just laugh at questions) is made up for in publicising science in general. The subject reminds me of the "What if our star had a Nemesis companion that rained comets down on Earth?" semi-serious theory. While not likely or perhaps completely impossible with the 'scopes and surveys we have today, it is a fun experiment because the limits are easy to describe and the effects are immediately important. "Triangles are my favorite shape / Three points where two lines meet" - Tessellate, Alt-J I don't think much of that article excerpt, wherever it's from. It leaves out so much. Superstrings is a developing theory. There were two superstring revolutions, which, well, revolutionized the investigation over the last several decades. In 10 dimensions, there were five viable string theories. Witten added one dimension in order to unify the five, but that wasn't 60 years ago. It was Kaluza and Klein and Gunnar Nordstrom who first tried the idea of adding dimensions in an attempt to unify existing theories, and that was 90-100 years ago, and they only went to 5D.
I don't know what 11D theory the article is referencing from 60 years ago, and the article doesn't say. But the number of extra dimensions is not just ad hoc. Fortunately, string theory doesn't go about things in such an ad hoc fashion, picking an arbitrary number of dimensions, scaling up the size of the matrix or Riemann metric tensor, and seeing what forces you can or cannot accommodate. Instead, the theory tells you exactly how many dimensions are needed for the job, and that number is ten – the four dimensions of the “conventional “ spacetime we probe with our telescopes, plus six extra dimensions. -- Shing-Tung Yau I don't know how "mainstream" is defined, but string theory remains speculative. Speculative or not, the investigation has led to the solution of long-standing problems and conjectures in mathematics, so it has been productive. Some physicists say string theory is so beautiful and perfect, it must be right. And it may be, but we're not there yet. We were looking for a theory that explains our universe and why it is the way it is, with the physical constants the way they are, etc. Yau proved that 6D Calabi-Yau manifolds exist, mathematically, and some years later physicists decided Calabi-Yaus were just what string theory needed for the compactified dimensions. A lot of work has been done with this framework, but it's not really a given. Some investigators are looking into non-Kahler manifolds (Calabi-Yaus are Kahler). The unexpected result is that strings and Calabi-Yau lead us to the conclusion that there must be somewhere on the order of 10^500 viable solutions. That's a far cry from the single one we were looking for that explained our universe. Some physicists have taken this result and proposed highly controversial We happen to be living in a time when theory has gone far ahead of observation. Everyone is entitled to his own opinion, but not his own facts. Can't let this (or your last post#10) go past without comment Cougar ... 
(thanks for both of them. I get the firm feeling that we're heading towards a time where models will supersede, (or more like .. consume), traditional science 'Theory'. The mathematics in String Theory, (ST), for example, seems too complicated for most humans to manage and keep track of, so I might have a guess and say that somewhere, somehow, a computing solution will be developed to overcome our shortcomings in this area. (If it hasn't already). The behaviour of collective humans is already being directly influenced by extraordinarily complex climate and economic models, (and the seemingly obligatory, accompanying system crashes and their impacts (eg: the apparently unavoidable stock exchange crashes, of recent times)). I'm unsure whether the traditional philosophically based methods of distinguishing speculation, from theory, will continue to have quite the same meaning in the brave new 'modelled universes', created on our behalf, within these modelling environments (??). Does anyone really have a grasp of the full impact ST's 'extra dimensions' on our understanding of Astrophysical phenomena ..?.. and are we destined to have to blindly accept the output of some algorithm so complex, no human will ever really be able to evaluate its true physical authenticity/appropriateness ? The curious thing would seem to be that the data you mention, also seems well suited to exactly the same computing environments … when this data comes together with humanly incomprehensible modelling algorithms, whose output then is used to synthesise other models, do we also ultimately lose track of physical reality ? Reading through and around this thread.. I take exception to... "No human will ever really be able to evaluate its true physical authenticity..." I claim to. It's very easy. It's rubbish. It seems to me that a minority attempt to stamp their concepts of reality upon the rest of us. I repeat. It's rubbish. All I ask is 'show me ?' Be brave. Step forward. Speak clearly.. 
You understand String Theory as well as Witten and co then? You are welcome to believe that you have the authority and knowledge to dismiss it like that but I am not sure I believe you do. I may not like it all that much but I respect the minds behind it and would be very, very loath to just write it all off as rubbish. Yes 'Shaula' that would be my view.. I make assumptions of clarity. Uncomplicated and true.. No hidden agendas. Yes, that is why I have now asked a separate question.. That I do at this point dismiss it all as nonsense. and yes... I have read much of this subject., and in reading so much have not gained clarity.. Respect for the collective minds of many, I do doubt my resolve.. I find myself in a quandary. So, argument by incredulity. As long as you accept that your attitude to the theory is unscientific and based on your personal beliefs (and hence not one you should try to foist onto others) then that is all fine. I try to keep my personal beliefs out of any assessment I make of String theory when asked a question. I can point to weaknesses it has and to strengths. But I would never dismiss it as nonsense without understanding it. I don't think he's multiplying the 3 Real times the three Complex to get nine, probably just means that the complex ones have two dimensions each, the real and the imaginary. Thanks - the quote is from a collection of posts here in a discussion of Stephen Hawking's book The Grand Design. I lack the knowledge of physics to assess it, and find the idea of more than four dimensions impossible to imagine. It's from three separate posts, all by the poster WONK. The first two paragraphs are the last post to the thread (Sun Mar 04, 2012 5:41 pm), the third paragraph is from the middle of a post (Sat Mar 03, 2012 11:08 pm) and is the part you responded to. Last three paragraphs come his 4th post to that thread (Thu Nov 25, 2010 11:56 am). 
Could you please discuss the options for parity assignments in (on-shell) N = 2 five-dimensional Yang-Mills- Einstein supergravity theories (YMESGTs) coupled to tensor and hypermultiplets on the orbifold spacetime M[4] x S^1/Z[2] for us ? I've been wondering about this for some time now. Many thanks, Unless time is a complex value, which SR predicts when velocity is superluminal. I really don't know anything about string theory, but if 11 dimensions helps to better explain the four dimensions that I'm familiar with, then I'll believe it, although I would be forced to defer to the "experts" in telling me that the math is indeed correct. I'll be very supportive if it helps to do away with dark energy and dark matter (which I suspect are the modern day equivalent of epicycles; they help theory match the observations, but are fundamentally a theoretical dead end). Well, epicycles had a very specific rationale and procedure for explaining (and predicting) the observations. With dark matter, we have no specific understanding of what it is or how it works. In this case, the theory of strings may come to the rescue. All the particles and bosons have "superpartners" in string theory. One or more of the superpartners may turn out to be the dark matter we've been looking for. But as of yet, no superpartner has ever been detected. With ongoing operations at CERN, this may change in the not-too-distant future. Or it may not. When one doesn't know one way or the other, pronouncing it rubbish is premature. Everyone is entitled to his own opinion, but not his own facts. Astromark, do you believe black holes to be nonsense? They first existed only in the minds of theoretical physicists. They still have not been directly observed. They are considered real by inference; by observing the speed of stars orbiting close to them and other factors. You must believe atoms exist as we can now directly observe them but they too once existed only in the minds of theoretical physicists. 
How about electrons? We still haven't directly observed them. Are they nonsense? No, because the model we use for practical electronics, etc. employs them, but they once were considered nonsense by many people. Please Mark, get over this notion that if you can't directly see, smell, hear, taste or touch something, to discuss its possibility is utter nonsense. I think your frustration is primarily caused by knowing that you cannot discuss many theoretical models because you would have to invest much time and learning to get up to speed on them. I'm in the same boat, but so what. I can't understand doctors discussing theoretical operating techniques for the future or chemists discussing properties of a theoretical new compound, so should I write off what they're discussing as nonsense? Of course not. New discoveries are often born in theoretical models -- please don't try to stifle them unless they are being prematurely proposed as fact. "There are powers in this universe beyond anything you know. There is much you have to learn. Go to your homes. Go and give thought to the mysteries of the universe. I will leave you now, in peace." --Galaxy Being
Does this series converge?

November 3rd 2008, 05:54 PM

Does this series converge? Let

$g_k(x)= \left\{\begin{array}{cc} \frac {1}{k^2}, & \mbox{ if } |x| \leq k\\ \frac {1}{x^2}, & \mbox{ if } |x|>k\end{array}\right.$

Does the series $\sum ^ \infty _{k=1} g_k(x)$ converge pointwise or uniformly?

Proof so far. Define the partial sum $s_n= \sum ^n_{k=1} g_k (x)$. Now, in the case that $|x| \leq k$, we have $s_n(x)= \sum ^n_{k=1} \frac {1}{k^2} = \sum ^n_{k=1} ( \frac {1}{k})^2$. Now, does this one converge to $\frac {1}{k^2}$? If it does, then it is pointwise. In the case that $|x| > k$, then $s_n = \sum ^ n_{k=1} \frac {1}{x^2} = \frac {n-1}{x^2} = \frac {n}{x^2} - \frac {1}{x^2}$. Well, then this guy doesn't converge to $g_k$, so it ain't pointwise convergence then? Thanks, people, I'm really lost in series convergence here...

November 3rd 2008, 09:31 PM

if $|x| > k,$ then $g_k(x)=\frac{1}{x^2} < \frac{1}{k^2}.$ thus: $g_k(x) \leq \frac{1}{k^2},$ for all $x \in \mathbb{R}, \ k \in \mathbb{N}.$ hence by the Weierstrass M-test your series is uniformly convergent.
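The reply's bound can also be sanity-checked numerically. The sketch below is our own illustration (not part of the thread): it verifies that $g_k(x) \leq 1/k^2$ on a grid of sample points, which is exactly the majorant the M-test needs, and watches the partial sums settle.

```python
def g(k, x):
    """k-th term of the series: 1/k^2 when |x| <= k, else 1/x^2."""
    return 1.0 / k**2 if abs(x) <= k else 1.0 / x**2

def partial(n, x):
    """n-th partial sum s_n(x)."""
    return sum(g(k, x) for k in range(1, n + 1))

# M-test majorant: g_k(x) <= 1/k^2 for every x,
# since |x| > k implies 1/x^2 < 1/k^2.
xs = [t / 10.0 for t in range(-300, 301)]
for k in range(1, 40):
    assert all(g(k, x) <= 1.0 / k**2 for x in xs)
```

Because the majorant series $\sum 1/k^2$ converges, the original series converges uniformly, so `partial(n, x)` stabilizes at a rate independent of `x`.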
Question about Metric space

Let X be a metric space and x0 in X. Define a function f: X --> R (all real numbers) by f(x) = d(x,x0). Show that f is continuous. HINT: Prove the variant of the triangle inequality which says |d(x,z)-d(y,z)| <= d(x,y) for any x,y,z in X.

This shows how to prove a function in a metric space is continuous.
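The hint's inequality is exactly what makes f Lipschitz with constant 1, and hence continuous: given epsilon, take delta = epsilon. As an illustration added here (not part of the posted solution), the sketch below checks the reverse triangle inequality numerically for one concrete metric space, R^3 with the Euclidean metric.

```python
import math
import random

def d(p, q):
    """Euclidean metric on R^3 -- one concrete metric space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

random.seed(0)
for _ in range(1000):
    x, y, z = (tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(3))
    # With f(.) = d(., z), this says |f(x) - f(y)| <= d(x, y).
    assert abs(d(x, z) - d(y, z)) <= d(x, y) + 1e-12
```

The small `1e-12` slack only absorbs floating-point rounding; the inequality itself is exact.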
- Edexcel

Who is this course for?

Mathematics at AS and A Level is a course worth studying in its own right. It is challenging but interesting. It builds on work met at GCSE, but involves new ideas that some of the greatest minds of the millennium have produced. During AS, we offer options in Statistics and Mechanics, for those of you who have particular degree courses in mind. AS in Mathematics is very valuable as a supporting subject to many courses at A Level and degree level, especially in the sciences, Geography, Psychology, Sociology and medicine. A Level Mathematics is a much sought after qualification for entry to a wide variety of full-time courses in Higher Education. There are also many areas of employment that see Mathematics A Level as an important or vital qualification.

Formal Entry Requirements

All students studying for A Level would be expected to have five GCSEs A*-C in academic subjects (of which two must be B grades) including GCSE English Language. We will count Level 2 Btec Diplomas towards this total, but only merits and distinctions will be counted and each diploma will count as one GCSE. Additionally, you will need GCSE Maths at B or above in the higher paper.

What does the course involve?

Mathematics at AS and A Level is divided into three branches:

Pure Mathematics (Modules C1, C2, C3, C4)

Pure Mathematics at AS and A Level extends your knowledge of topics such as algebra and trigonometry as well as introducing new ideas such as calculus. If you enjoyed the challenge of problem solving at GCSE then you should find this course very appealing.

Mechanics (Modules M1, D1)

Mechanics describes the motion of objects and how they respond to forces acting upon them, from cars in the street to satellites orbiting a planet. You will learn techniques of mathematical modelling by turning a complicated problem into a simpler one that can be analysed and solved using mathematics. Many of the ideas you will meet form an essential introduction to modern fields of study such as cybernetics, robotics, biomechanics and sports science, as well as the more traditional areas of engineering and physics.

Statistics (Modules S1, D1)

Statistics covers the analysis of numerical data in order to arrive at conclusions about it. Many of the ideas met have applications in a wide range of other fields: from assessing what car insurance costs to how likely the Earth is going to be hit by a comet.

In order to get an AS Level, you will need to take three modules. For a full A Level, you will need to take three further modules. We offer two options:

Pure and Mechanics: This option is usually taken by students studying Science (especially Physics), Engineering or Construction.

Pure and Statistics: This option is usually taken by students studying Business (Economics, Accounting or Business Studies), Psychology or Biology.

How will I be assessed?

Both AS and A Level Mathematics are assessed through a series of written examinations. If necessary, students can retake any module. There is no course work involved in assessing mathematics.

Where can I go next?

Most A Level students go on to study at university. Some have used Mathematics to go directly into a career in accountancy. The study of mathematics opens the door to many varied professions. Obvious choices would be in the area of Science and Finance. To see the many varied careers that a student of mathematics and statistics may follow, from games programmer to weather forecasting, go to
Evaluating a limit to infinity

lim x/(sqrt(x^2+1000)) as x -> +inf

The answer is not as important as how it's actually done. Thanks in advance.

You can try L'Hopital's, but I think it will be a pain here. (We can use L'Hopital's if, when we take the limit, our function goes to $\frac 00$ or $\frac {\infty}{\infty}$.) Instead you should realize that as $x \to \infty$ the +1000 does not matter. Thus

$\lim_{x \to \pm \infty} \frac x{\sqrt{x^2 + 1000}} = \lim_{x \to \pm \infty}\frac x{\sqrt{x^2}} = \lim_{x \to \pm \infty} \frac x{|x|} = \left \{ \begin{array}{lr} 1 & \mbox{ as } x \to \infty \\ -1 & \mbox{ as } x \to - \infty\end{array} \right.$
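A quick numerical check of the answer (an illustration added here, not from the thread): dividing numerator and denominator by |x| gives x/sqrt(x^2+1000) = sign(x)/sqrt(1 + 1000/x^2), which makes the +-1 limits visible directly.

```python
import math

def f(x):
    return x / math.sqrt(x**2 + 1000)

# The +1000 washes out: f(x) = sign(x) / sqrt(1 + 1000 / x**2).
for x in (1e3, 1e6, 1e9):
    assert abs(f(x) - 1.0) < 1e-3
    assert abs(f(-x) + 1.0) < 1e-3
```

The error shrinks like 500/x^2, so the approach to +-1 is fast once |x| passes a few hundred.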
Perfect square sum problem

December 8th 2011, 07:02 AM #1

Extend the fraction 163/101 with a number such that the resulting fraction will have a numerator and a denominator whose sum is a square number. Please provide the smallest such number. I've had some problems with this task; hope that you can help me.

Last edited by emilhp; December 8th 2011 at 11:03 AM.

re: Perfect square sum problem

let $\frac{163}{101} = \frac{a}{c}$

$\frac{a}{c} = \frac{ab}{cb}$

sum of the numerator and denominator ...

$ab+cb = b(a+c)$

for the sum $b(a+c)$ to be a perfect square, $264b = (2^3 \cdot 3 \cdot 11 \cdot b)$ would have to be a perfect square ...

re: Perfect square sum problem

Isn't extending a fraction just multiplying it by a number? If that is the case, wouldn't the answer be $(\frac{161}{101})(\frac{161}{101})=\frac{161^2}{101^2}$ or $\frac{25921}{10201}$?
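The factorization in the first reply already pins down the answer: since 264 = 2^3 * 3 * 11, the smallest b making 264b a perfect square must supply the missing 2 * 3 * 11 = 66, and indeed 264 * 66 = 17424 = 132^2. A brute-force confirmation, added here as an illustration (the function name is ours):

```python
import math

def smallest_extender(num=163, den=101):
    """Smallest b such that num*b + den*b = (num + den)*b is a perfect square."""
    s = num + den  # 264 = 2^3 * 3 * 11
    b = 1
    while True:
        r = math.isqrt(s * b)
        if r * r == s * b:
            return b
        b += 1
```

So extending 163/101 by 66 gives 10758/6666, whose numerator and denominator sum to 132^2.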
Remembering Trig Functions

Date: 09/27/2001 at 20:50:40
From: Kim
Subject: Remembering trig functions

First of all, thanks for taking the time to read this. We are required to memorize the sine, cosine, tangent, secant, cosecant, and cotangent for 30, 45, and 60 degrees for precalculus class. I've made flashcards, but it's still pretty difficult. Do you have any tips on how to remember these?

Date: 09/27/2001 at 21:04:58
From: Doctor Peterson
Subject: Re: Remembering trig functions

Hi, Kim. People have different ways to remember; you may have to think about what sorts of things you remember most easily, and why, and then design a plan for memorization that fits your style. I can only tell you what fits my own style.

I like pictures and patterns; I learn connected facts better than isolated facts. So I first remember these relations among the trig functions:

          +             +             +
         /|            /|            /|
      1 / |sin   sec  / |tan   csc  / | 1
       /x |          /x |          /x |
      +---+         +---+         +---+
       cos            1            cot

(Actually, I remember the first of these and the definitions of the other functions in terms of the sine and cosine in order to reconstruct the other two.)

Then I remember two special triangles:

              +                 +
     sqrt(2) / |               /|
            /  |1           2 / |sqrt(3)
           /45 |             /60|
          +----+            +---+

You can figure these out for yourself using the Pythagorean theorem if you need to, or just memorize the numbers on the pictures.

Now you can just read off the function values from the pictures. For example, the sine of 60 degrees is the ratio of the "opposite" side (vertical) to the hypotenuse, from my first picture; in the 60-degree triangle, this is sqrt(3)/2.

Those ratios that I use often I will just know; but when I have trouble, I go back to these pictures in my mind to recall a ratio.

- Doctor Peterson, The Math Forum
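As a check on the values read off the special triangles (my addition, not part of the original answer), the exact forms can be compared against Python's `math` functions; the dictionary of exact values is just a transcription of the triangle pictures above.

```python
import math

# exact values read off the 45-45-90 and 30-60-90 triangles
exact = {
    ("sin", 30): 1 / 2,
    ("cos", 60): 1 / 2,
    ("sin", 45): 1 / math.sqrt(2),
    ("sin", 60): math.sqrt(3) / 2,
    ("tan", 60): math.sqrt(3),
}

funcs = {"sin": math.sin, "cos": math.cos, "tan": math.tan}
for (name, deg), value in exact.items():
    # math's trig functions take radians, hence the conversion
    assert math.isclose(funcs[name](math.radians(deg)), value)
    print(f"{name}({deg}) = {value:.6f}")
```

The same trick extends to sec, csc, and cot by taking reciprocals, matching the first set of triangles in the answer.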