What are rogue waves? | EarthSky.org

Image credit: sub_lime79

Rogue waves are gigantic ocean waves that can be over 100 feet high and seem to appear out of nowhere. Rogue waves are not tsunamis, said Tulane University physicist Lev Kaplan. Tsunamis are caused by undersea earthquakes. Rogue waves seem to appear out of nowhere, he said.

Let's take an example. Imagine you're in a storm where the average wave is 20 feet tall, right? Then, all of a sudden, one wave appears, or maybe two or three waves appear, that are 70 feet tall. Now, that would be an example of a rogue wave. But they can also appear in completely calm seas, so in that sense they're quite unpredictable.

That unpredictability is why rogue waves can be dangerous to fishermen and rig workers, or even cruise ships. Dr. Kaplan explained that, while rogue waves are still not well understood, two factors are thought to contribute to their formation. The first is the force of ocean currents. The second is the collision of several individual waves, which can combine their power. He explained:

This is a little like light being focused or concentrated in a microscope, or in a telescope. Sometimes you get focusing and increased wave heights.

Scientists will never be able to predict exactly when and where a rogue wave will show up, Kaplan said, because the sea is too chaotic a system. But computer modeling techniques are now being developed to help forecast the probability that a rogue wave will occur in a certain place at a certain time.

What we certainly hope is to be able to forecast risk. For example, by looking at the ocean in real time and measuring sea conditions, we might be able to say that, over the next 24 hours, in this particular area of ocean, the risk of a rogue wave occurring is 100 times greater than average.

Ocean movement is so complex, he said, that it could be 20 years before such a system is usable. Dr. Kaplan explained that he devised his mathematical models not by looking at ocean waves, but by looking at electrons – tiny subatomic particles that have wavelike movement. He said:

The way that we got into this research is that several years ago we were trying to understand electron transport in nanostructures, which are systems on a very, very, very small scale. And if you look at electrons, they actually behave like waves. What we decided to do was to take some of the mathematical tools that we used to understand how electrons move, and apply them to understand the behavior of waves in the ocean.

He explained in greater detail how the math behind his rogue wave computer model works:

If you take a random incoming sea and you assume that the currents that are bending the waves left and right are also completely random, then sometimes you get focusing and increased wave heights, and sometimes you get de-focusing and decreased wave heights. But you can show, mathematically, when the probability of seeing an extremely tall wave goes up. And further, you can calculate how often rogue waves will occur if you know how fast the currents are and how fast the waves are traveling.

Dr. Kaplan described a recent occurrence of a freak wave that caused fatalities:

There was a rogue wave that occurred in the Mediterranean last year that happened to encounter a cruise ship. Two people died on account of that incident. The wave in the Mediterranean was only 26 feet tall, but it fits the description of a rogue wave, in the sense that it came out of nowhere and was much, much taller than the surrounding sea. Another well-known incident came in 2001, in the south Atlantic Ocean, when a wave almost 100 feet tall hit two cruise ships.

He underscored that the research he's doing right now on rogue waves is very basic. It will be several decades before a good rogue wave forecasting system is in place, he said.
{"url":"http://earthsky.org/earth/lev-kaplan-rogue-waves-are-not-tsunamis","timestamp":"2014-04-19T17:02:09Z","content_type":null,"content_length":"34871","record_id":"<urn:uuid:7225ef8d-d853-4405-86ca-077307f2d177>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring Sector Complexity: Solution Space-Based Method

1. Introduction

In Air Traffic Control (ATC), controller workload has been an important topic of research. Many studies have been conducted in the past to uncover the art of evaluating workload, many of them centered on sector complexity or task demand [1,2,3,4]. All share the aim of understanding the workload imposed on the controller and the extent to which that workload can be measured. With world passenger traffic growing 4.8% annually, the volume of air traffic is expected to double in no more than 15 years [5]. Although more and more aspects of air transportation are being automated, the task of supervising air traffic is still performed by human controllers with limited assistance from automated tools and is therefore limited by human performance constraints [6]. The rise in air traffic leads to a rise in the Air Traffic Controller (ATCO) task load and, in the end, the ATCO's workload itself. The 2010 Annual Safety Review report by the European Aviation Safety Agency (EASA) [7] indicates that since 2006, the number of air traffic incidents with direct or indirect Air Traffic Management (ATM) contribution has decreased. However, the total number of major and serious incidents is increasing, with incidents related to separation minima infringements bearing the largest proportion. This category refers to occurrences in which the defined minimum separation between aircraft has been lost. With the growth of air traffic, combined with the increase of incidents relating to separation minima infringements, serious thought has to be put into investigating the causes of these incidents and into plans for how to solve them. Initiatives to design future ATM concepts have been addressed in both Europe and the United States, within the frameworks of Single European Sky ATM Research (SESAR) [8] and the Next Generation Air Transportation System (NextGen) [9]. An increased reliance on airborne and ground-based automated support tools is anticipated in the future ATM concepts of SESAR and NextGen. It is also anticipated that in both the SESAR and NextGen concepts a better management of human workload will be achieved. However, to enable that, a more comprehensive understanding of human workload is required, especially that of controllers. This chapter will start with a discussion of sector complexity and workload, followed by a review of previous and current sector complexity and workload measures. Next, a method called the Solution Space Diagram (SSD) is proposed as a sector complexity measure. Using the SSD, the possibility of measuring different sector design parameters is elaborated and future implications are discussed.

2. Sector complexity and workload

ATCO workload is cited as one of the factors that limit the growth of air traffic worldwide [10,11,12]. Thus, in order to maintain a safe and expeditious flow of traffic, it is important that the taskload and workload imposed on the ATCO are optimal. In an effort to distinguish between taskload and workload, Hilburn and Jorna [1] have defined that system factors such as airspace demands, interface demands and other task demands contribute to taskload, while operator factors like skill, strategy, experience and so on determine workload. This can be observed from Figure 1. ATCOs are subject to multiple task demand loads, or taskloads, over time. Their performance is influenced by the intensity of the task or demands they have to handle.
Up to a point, higher demands in a task relate to better performance. However, a demand that is too high or too low will lead to performance degradation. Thus, it is important that the demand is acceptable, to achieve optimal performance. Workload, or mental workload, can be assessed using a few methods, such as performance-based workload assessment through primary and secondary task performance, subjective workload assessment through continuous and discrete workload ratings, and, lastly, physiological measures. However, because physiological measures are less convenient to use than performance and subjective measures, and because it is generally difficult to distinguish between workload, stress and general arousal, they are not widely used in assessing workload [13]. Previous studies have also indicated that incidents in which separation violations occurred can happen even when the ATCO's workload is described as moderate [14,15]. These incidents can be induced by other factors such as inappropriate sector design. Sector design is one of the key components of airspace complexity. Airspace complexity depends on both structural and flow characteristics of the airspace. The structural characteristics are fixed for a sector, and depend on the spatial and physical attributes of the sector such as terrain, number of airways, airway crossings and navigation aids. The flow characteristics vary as a function of time and depend on features like the number of aircraft, mix of aircraft, weather, separation between aircraft, closing rates, aircraft speeds and flow restrictions. A combination of these structural and flow parameters influences the controller workload [16]. A good airspace design improves safety by avoiding high workload for the controller and at the same time promotes an efficient flow of traffic within the airspace. In order to have a good airspace design, the impact of complexity variables on controller workload has to be assessed. Much effort has been made to understand airspace complexity in order to measure or predict the controller's workload. In this chapter the solution space approach is adopted, to analyze in a systematic fashion how sector designs may have an impact on airspace complexity, and ultimately on controller workload.

2.1. Previous research on complexity factors

The Air Traffic Management (ATM) system provides services for safe and efficient aircraft operations. A fundamental function of ATM is monitoring and mitigating mismatches between air traffic demand and airspace capacity. In order to better assess airspace complexity, methods such as 'complexity maps' and the 'solution space' have been proposed by Lee et al. [17] and Hermes et al. [18]. Both act as airspace complexity measures, where a complexity map details the control activity as a function of the parameters describing the disturbances, and the solution space details the two-dimensional speed and heading possibilities of one controlled aircraft that will not induce separation violations. Much effort has been made to understand airspace complexity in order to measure the controllers' workload. Before introducing the solution space approach, some more common techniques are first briefly discussed.

2.1.1. Static density

One of the methods to measure complexity is the measurement of aircraft density, one of the measures commonly used to obtain an instant indication of sector complexity. It is defined as the number of aircraft per unit of sector volume.
Experiments indicated that, of all the individual sector characteristics, aircraft density showed the largest correlation with ATCO subjective workload ratings [19,20]. However, aircraft density has significant shortcomings in its ability to accurately measure and predict sector-level complexity [19,21]. This method is unable to sufficiently illustrate the dynamics of aircraft behavior in the sector. Figure 2 shows an example where eight aircraft flying in the same direction do not exhibit the same complexity rating as the same number of aircraft flying in various directions [18].

2.1.2. Dynamic density

Another measurement of sector complexity is dynamic density. This is defined as "the collective effort of all factors or variables that contribute to sector-level ATC complexity or difficulty at any point of time" [19]. Research on dynamic density by Laudeman et al. [22] and Sridhar et al. [16] has identified several variables for dynamic density, each of which is given a subjective weight. Characteristics that are considered include, but are not limited to, the number of aircraft, the number of aircraft with a heading change greater than 15° or a speed change greater than 10 knots, and the sector size. The calculation of dynamic density is given in Equation (1):

Dynamic Density = Σ (W_i × DV_i)    (1)

where dynamic density is a summation of each Dynamic Variable (DV) multiplied by its corresponding subjective weight (W). The calculation of the dynamic density is based on weights gathered from regression methods applied to samples of traffic data, comparing them to subjective workload ratings. Essentially, the assignment of weights based on regression methods means that a complexity analysis based on dynamic density can only be performed on scenarios that differ slightly from the baseline scenario. Therefore the metric is not generally applicable to just any situation [18].

2.1.3. Solution space-based approach

Previous work has shown that the SSD is a promising indicator of sector complexity, in which the Solution Space-based metric was proven to be a more objective and scenario-independent metric than the number of aircraft [18,23,24]. The Forbidden Beam Zone (FBZ) of Van Dam et al. [25] has been the basis for representing the SSD. It is based on analyzing conflicts between aircraft in the relative velocity plane. Figure 3 (a) shows two aircraft, the controlled aircraft (A[con]) and the observed aircraft (A[obs]). In this diagram, the protected zone (PZ) of the observed aircraft is shown as a circle with a radius of 5 NM (the common separation distance) centered on the observed aircraft. Intrusion of this zone is called a 'conflict', or 'loss of separation'. Two tangent lines to the left and right sides of the PZ of the observed aircraft are drawn towards the controlled aircraft. The area inside these tangent lines is called the FBZ. This potential conflict can be presented on an SSD. Figure 3 (b) shows the FBZ in the SSD of the controlled aircraft. The inner and outer circles represent the velocity limits of the controlled aircraft. Now, if the controlled aircraft's velocity lies inside the triangular-shaped area, it means that the aircraft is headed toward the PZ of the observed aircraft, will eventually enter it, and separation will be lost. The exploration of sector complexity effects on the Solution Space parameters and, moreover, on workload is important in order to truly understand how workload is imposed on controllers based on the characteristics of the sector.
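To make the FBZ construction above concrete, the short sketch below tests whether a controlled aircraft's velocity falls inside the FBZ of one observed aircraft. It is a minimal illustration of the relative-velocity test described in the text, not code from the chapter; the NumPy implementation, function name, and example geometry are my own assumptions.

```python
import numpy as np

R_PZ = 5.0  # protected-zone radius in NM (the 5 NM separation minimum from the text)

def in_forbidden_beam_zone(p_con, v_con, p_obs, v_obs, r_pz=R_PZ):
    """Return True if the controlled aircraft's velocity lies inside the FBZ of
    the observed aircraft, i.e. the relative velocity points into the cone of
    tangent lines to the observed aircraft's protected zone."""
    p_con, v_con = np.asarray(p_con, float), np.asarray(v_con, float)
    p_obs, v_obs = np.asarray(p_obs, float), np.asarray(v_obs, float)

    r = p_obs - p_con                 # relative position (NM)
    v_rel = v_con - v_obs             # relative velocity (kts)
    d = np.linalg.norm(r)
    if d <= r_pz:                     # already inside the protected zone
        return True
    if np.dot(v_rel, r) <= 0.0:       # diverging geometry: no future intrusion
        return False

    # Half-angle of the cone formed by the two tangent lines to the PZ.
    alpha = np.arcsin(r_pz / d)
    # Angle between the relative velocity and the line of sight to A_obs.
    cos_beta = np.dot(v_rel, r) / (np.linalg.norm(v_rel) * d)
    beta = np.arccos(np.clip(cos_beta, -1.0, 1.0))
    return beta < alpha               # inside the beam -> future loss of separation

# Example: two aircraft 30 NM from a 90-degree crossing point, both at 200 kts.
print(in_forbidden_beam_zone(p_con=(-30, 0), v_con=(200, 0),
                             p_obs=(0, -30), v_obs=(0, 200)))   # True: future conflict
```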
Given the hypothesis that sector parameters have a direct effect on the SSD's geometrical properties, the possibility of using the SSD in sector planning seems promising. Figure 4 shows the relationship between taskload and workload as described by Hilburn and Jorna [1], where we adapted the position of sector complexity within the diagram. The function of the SSD is included as a workload measure [18,23,24] and alleviator [26], alongside the possibility of aiding sector planning through the SSD being a sector complexity measure [24]. Initial work by Van Dam et al. [25] introduced the application of the Solution Space to aircraft separation problems from a pilot's perspective. Hermes et al. [18], d'Engelbronner et al. [23], Mercado-Velasco et al. [26] and Abdul Rahman et al. [24] transferred the idea of using the Solution Space in aircraft separation problems to ATC. Based on previous research, a high correlation was found to exist between the Solution Space and the ATCO's workload [18,23,24]. Abdul Rahman et al. [24] also investigated the possibility of measuring the effect of aircraft proximity and the number of streams on controller workload using the SSD, and discovered identical trends in subjective workload and the SSD area properties. Mercado-Velasco et al. [26] studied workload from a different perspective, looking at the possibility of using the SSD as an interface to reduce the controller's workload; that study indicated that the diagram could indeed reduce the controller's workload in a situation of increased traffic level [26].

3. Complexity measure using the solution space diagram

The results gathered here are based on offline simulations of more than 100 case studies with various situations, as detailed in this chapter. The affected SSD area has been investigated to understand the effects of sector complexities on the available solution space. Conclusions from previous work by Hermes et al. [18] and d'Engelbronner et al. [23] stated that the available area in the Solution Space that offers solutions has a strong (inverse) correlation with ATCO workload. In this case study, two area properties were investigated in order to measure the complexity construct of the situation: the total area affected (A[total]) and the mean area affected (A[mean]) for the whole sector. The A[total] percentage is the area covered by the FBZs as a percentage of the total area between the minimum and the maximum velocity circles in the SSD, based on the currently controlled aircraft. The A[mean] percentage affected is the A[total] affected for all aircraft in the sector divided by the number of aircraft. This gives an overview of the complexity metric for the whole sector. Both measures were used as a complexity rating, based on the findings in earlier studies where A[total] and A[mean] showed a higher correlation with the controller's workload than the static density [24].

4. Sector complexity variables

Previous research on sector complexity showed that the aircraft intercept angle [27,28,29], speed [27] and horizontal proximity [3,16] are some of the variables responsible for sector complexity. The goal of the present study is to systematically analyze the properties of the SSD due to changes in the sector design. It is hypothesized that using these properties we can obtain a more meaningful prediction of the sector's complexity (or task demand load) than with existing methods.
In a first attempt, we studied the effects of aircraft streams' (that is, the airways or routes) intercept angles, the speed differences and horizontal proximity between aircraft, and also the effect of the number of aircraft and their orientation on the SSD. For this purpose, several cases were studied. The cases investigated involved two intercepting aircraft at variable intercept angles, route lengths, and speed vectors. Quantitative analysis was conducted on the SSD area properties for the mentioned sector variables. In the study of quantitative measurement of sector complexity, it was assumed that a denser conflict space results in a higher rating for the complexity factor. In a later stage, a human-in-the-loop experiment will be conducted to verify the hypotheses gathered from the quantitative study and to provide a better understanding of the relationship between the SSD area properties and the workload indicated by the subjects. Figure 5 shows an example of one of the case studies with the speed vectors, route length, horizontal proximity, initial position, corresponding angle between the aircraft and the intercept angle properties. One sector complexity factor was changed at a time in order to investigate the effects of that factor on the SSD. Changes in these factors are translated into differences in the geometry of the FBZ and the area affected on the SSD. The diagram we hereby elaborate is based on three important assumptions. First, both aircraft are on the same flight level and are not ascending or descending during the flight. Secondly, it is assumed that both aircraft have the same weight class and thus the same minimum and maximum velocities. Lastly, the minimum separation distance, represented by a PZ with a radius of 5 NM around each aircraft, is maintained at the same size at all times. Different complexity factors are compared using a quantitative analysis.

4.1. Horizontal proximity

Previous research on sector complexity has shown that aircraft horizontal proximity [3,16] is one of the variables responsible for the sector complexity construct. There are several relationships that can be gathered from the FBZ. In order to analyze the relationship between the FBZ, the time to conflict and the position of the aircraft, some parameters have to be determined. These parameters can be found in Figure 6, where the absolute and relative space of the FBZ are illustrated in Figure 6 (a) and (b), respectively. In the absolute space (Figure 6 (a)), a two-aircraft situation with distance between aircraft (d) and minimum separation distance (R) is illustrated. The FBZ is then translated into the relative space (Figure 6 (b)), where the same situation is projected under the assumption that the controlled aircraft will be in direct collision with the observed aircraft in the future. Based on the figures, it is observed that the FBZ and the corresponding Solution Space share similar geometric characteristics. These, as shown in Figure 6, make it clear that the separation between aircraft in terms of time and horizontal proximity can be directly observed on the SSD through the width of the FBZ. A narrow FBZ translates to a longer time until loss of separation and also a larger separation distance between both aircraft. The relation can be seen in Equation (5) [34] and Equation (6), where the time (t) and distance between aircraft (d) are inversely proportional to the width (w) of the FBZ.
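As a rough numerical companion to this width–distance relation and to the A[total] metric defined in Section 3, the sketch below estimates the percentage of the controlled aircraft's velocity band covered by the FBZ by randomly sampling velocities between the minimum and maximum speed circles. It reuses the hypothetical in_forbidden_beam_zone helper from the earlier sketch; the speed band and geometry are illustrative assumptions, so the printed percentages will not reproduce the chapter's exact figures.

```python
import numpy as np

def affected_area_percentage(p_con, p_obs, v_obs, v_min=180.0, v_max=250.0,
                             n_samples=20_000, seed=0):
    """Monte Carlo estimate of A[total]: the fraction of the controlled
    aircraft's reachable velocity annulus (v_min..v_max, any heading)
    that lies inside the FBZ of the observed aircraft."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    speed = np.sqrt(rng.uniform(v_min**2, v_max**2, n_samples))  # uniform over the annulus
    hits = sum(in_forbidden_beam_zone(p_con, (s * np.cos(t), s * np.sin(t)), p_obs, v_obs)
               for t, s in zip(theta, speed))
    return 100.0 * hits / n_samples

# The FBZ half-angle, and with it the covered area, narrows as the pair separates,
# consistent with the inverse relation of Equation (6).
for d in (15.0, 25.0, 35.0, 45.0):
    half_angle = np.degrees(np.arcsin(5.0 / d))             # tangent-line half-angle, R = 5 NM
    area = affected_area_percentage(p_con=(0.0, 0.0),
                                    p_obs=(0.0, d),          # observed aircraft d NM away
                                    v_obs=(200.0, 0.0))      # crossing at 200 kts
    print(f"d = {d:4.1f} NM  FBZ half-angle = {half_angle:4.1f} deg  area ~ {area:4.1f} %")
```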
The importance of horizontal proximity has also been stressed in other research, where it is indicated that aircraft that fly closer to each other carry a larger weight in the Dynamic Density [3,16]. In order to see the effect of horizontal proximity on the SSD and to confirm the previous studies, more than 50 position conditions with an intercept angle of either 45°, 90° or 135° were studied. To simulate horizontal proximity, aircraft were assigned a different route length at a different time instance. It is important to ensure that only one property is changed at a time. During this study, the velocity of both aircraft was maintained at the same speed at all times. The effect of horizontal proximity on the SSD is shown in Figure 7. The situation in Figure 7 is based on aircraft flying with a fixed heading angle of 90°, with both aircraft having the same speed vector of 200 knots but a different route length. From the analysis, it was found that aircraft that are further apart from each other have a narrower FBZ width than those closer to each other. This can be seen in Figure 7, with the aircraft progressing from being nearest (Figure 7 (a)) to furthest (Figure 7 (d)) apart from one another. The same pattern also applies to the other intercept angles studied. The area affected is less dense for aircraft with a larger horizontal separation: the area affected within the SSD decreases from 11% for the case in Figure 7 (a) to 6% for the case in Figure 7 (d). This also shows that a large horizontal separation between aircraft results in a less dense SSD, and thus a lower complexity metric. A narrower width also implies that there are more options to solve a conflict. This can be seen in Figure 7, where in Figure 7 (a) and (b) there is no room for AC2 to resolve the conflict using a speed-only correction, whereas in Figure 7 (c) and (d) the conflict can be resolved by either increasing or decreasing the AC2 speed. Similar patterns were observed with different speed settings and speed boundaries in conjunction with different intercept angles. Figure 8 illustrates the percentage of area covered as a function of the horizontal distance and the intercept angle while keeping the same velocity vector. It can be seen from this figure that the area properties decrease with larger distances between both aircraft at any intercept angle. The regression rate of the SSD area properties against the horizontal distance is also similar for any other intercept angle, as indicated by Equation (6) regarding the width of the FBZ.

4.2. Speed variations

A previous study by Rantanen and Nunes [27] suggested speed as a confounding factor for conflict or intercept angles and the ability to detect a conflict. It was indicated in their research that increasing the speed differential between converging objects increased the temporal error, resulting in a lower accuracy. This is due to the fact that the controller now has to integrate two (rather than one) pieces of speed information and project their implications. This shows the importance of studying the effect of speed variations on sector complexity, especially when coupled with the intercept angle. A number of cases of aircraft pairs at the same distance from each other were investigated in this preliminary study. The first observation is illustrated in Figure 9, where the speed and the heading of the observed aircraft can be seen on the SSD mapping of the controlled aircraft through the position of the tip of the FBZ.
This is because the FBZ is obtained by transposing the triangular-shaped conflict zone by the observed aircraft's velocity vector. In a case such as that seen in Figure 9 (a) to (c), aircraft with the same horizontal separation at an intercept angle of 90° will result in a different SSD as a function of the 150, 200 and 250 knot speed settings. In Figure 9, AC1 will encounter a separation violation with AC2 in the future if the aircraft maintains its current heading and speed. However, giving speed or heading instructions to one or both aircraft can resolve the future separation issue. In this case, an increase (Figure 9 (a)) or decrease (Figure 9 (c)) in speed for AC2 will solve the future separation issue. It is not desired for on-course aircraft to change heading, in order to fulfill efficiency constraints; however, if required to maintain safety, it may be the proper way to resolve a conflict, as in Figure 9 (b). It was found that the higher the speed of the observed aircraft, the more the FBZ in the SSD is shifted outwards. The change in speed only affects the currently controlled aircraft's SSD. Because there is no change of speed for the controlled aircraft, AC2, the corresponding diagram for AC1 observing AC2 remains the same during the change of the speed vector of AC1. The total area affected on the SSD depends on the relative positions and the intercept angle of both aircraft, where a shift outwards will be translated into more or less SSD area percentage affected. This can be seen by comparing Figure 9 (a) to (c), where a shift outwards results in more area affected within the SSD, giving values of 8%, 11% and 15% area affected for cases (a), (b) and (c), respectively. Hence it can be hypothesized that larger relative speeds can result in a higher or lower complexity metric, depending on the position and intercept angle of the aircraft. The effect of speed differences was also investigated further for aircraft intercepting at 45°, 90° and 135° with more possible cases, and the results are illustrated in Figure 10. Differences in intercept angle, speed limit band (which may represent differences in aircraft performance limits or aircraft types) and the size of the speed limit band were investigated. Figure 10 shows the effect of speed differences for a 180–250 knot speed band, with both AC1 and AC2 at either 30 NM or 40 NM distance from the interception point at different intercept angles. Both aircraft's initial speeds were 250 knots and, to illustrate the effect of speed variations, one of the aircraft was given a gradual speed reduction toward 180 knots. The diamond shapes in Figure 10 indicate the minimum speed difference needed for the aircraft not to be in a future separation violation. Based on Figure 10, the effect of speed and distance is evident, with the 45°, 90° and 135° intercept angles showing a decrease in the SSD area properties with a larger relative distance while maintaining the trends of the graph. In the 90° and 135° cases, larger distances also indicated that a smaller speed difference (marked with a diamond) was needed in order for both aircraft not to be in a future separation violation.
Figure 10 also shows that aircraft flying at a smaller intercept angle needed less speed difference than aircraft flying at a larger intercept angle to avoid a future separation violation, given the same flight path length to the intercept point. The effect of the intercept angle, on the other hand, shows different patterns in SSD area properties with regard to the speed variations. The 45° intercept angle showed an increase of SSD area properties up until the intermediate speed limit, followed by a decrease of SSD area properties with increased speed differences. However, for the 90° and 135° intercept angle cases, the reduction of speed was followed by a continuing decrease in SSD area properties. The differences in pattern also indicate a difference in sector complexity behavior for distinct intercept angles. The effects of speed limit bands for the 45° intercept angle cases are illustrated in Figures 11 and 12. Figure 11 (a) shows the effect of different speed band values while maintaining the same size of the controlled aircraft's speed performance band, and Figure 12 (b) shows the effect of different sizes of the speed band. Based on both figures, irrespective of the speed band range (aircraft speed performance limit) or speed band size, the same pattern in area properties was found in all eight scenarios. The only difference was that the peak value of the SSD area properties (Figure 11 (a)) is greater for speed bands with higher speed limits. This is due to the fact that, with the same position between both aircraft, a higher speed (for AC1 in this case) indicates a higher possible relative speed (Vrel) for the maximum speed band, thus implying a broader FBZ (as can be seen in Equation (6) and Figure 11 (b)). The same pattern was illustrated with different speed band sizes (Figure 12), with higher peaks of the SSD area values for higher AC1 speeds.

4.3. Intercept angle

Based on previous research, the ability of the controller to ascertain whether or not an aircraft pair will lose separation (more commonly known as conflict detection) is affected by a variety of variables that include, but are not limited to, the convergence angle [27,28,29]. However, previous research also found that the conflict angle, as a factor affecting conflict detection ability, is often confounded with speed [27]. Nonetheless, in order to understand the intercept angle as part of the sector complexity measure, the effect of the intercept angle on the SSD area properties is important. Several types of crossing angles were studied. The main goal of the study was to investigate the effect of the crossing angle on sector complexity through the SSD. The effect of different intersection angles on the SSD is shown here for the case where the route lengths of AC1 and AC2 remain constant and equal at all times. Both aircraft were flying the same speed vector of 200 knots, but with different heading angles for AC2, namely 45°, 90° and 135°. Negative intercept angles were assigned to aircraft coming from the left, while positive intercept angles were assigned to aircraft coming from the right. As seen here, only the changes in the heading angle were investigated, while the other variables were fixed to a certain value. From the analysis, it was found that the larger the heading angle of intersecting aircraft, the less dense the area within the SSD. Figure 13 shows the resulting SSD for different intercept angles.
Figure 13 also shows the effect of aircraft coming from the right (Figure 13 (a) to (c)) or from the left (Figure 13 (d) to (e)) side of the controlled aircraft. It is concluded here that aircraft coming from either direction with the same intercept angle and route length will demonstrate the same complexity measure, due to the symmetrical nature of the conflict geometry. For aircraft with 45°, 90° and 135° intercept angles, the SSD area properties are 14%, 11% and 8%, respectively. The same area properties hold for the opposite angles. This also shows that a larger intercept angle results in a lower complexity metric based on the properties of the SSD, because the solution area covered by the conflict zone is smaller. However, this condition only applies if the observed aircraft has a route length larger than or equal to that of the controlled aircraft. This also means that the effect of the intercept angle on the complexity metric is only valid when the observed aircraft is approaching from a certain direction.

4.3.1. Front-side and backside crossings

It was found that there are differences between observing an aircraft crossing in front of or behind the controlled aircraft as the intercept angle increases. A case study was conducted in which an aircraft observed front-side and backside crossings at angles of 45° and 135°. Both aircraft had the same speed of 220 knots and intercepted at the same point of the route, giving the same flight length for each case observed (see Figure 5). In the case where the controlled aircraft, which was farther away, was observing an aircraft crossing in front at a certain angle, the area affected increased with an increasing intercept angle. The area affected measured in this case was 3% for the 45° intercept angle (Figure 14 (a)) compared to 5% for the 135° intercept angle (Figure 14 (b)). On the other hand, in the case where the controlled aircraft was observing an aircraft crossing from the backside, the area affected decreased with increasing intercept angle. The area affected measured in this case is 8% for the 45° intercept angle (Figure 14 (c)) compared to 3% for the 135° intercept angle (Figure 14 (d)). From these area-affected values it is concluded that a slightly higher complexity metric was found with an increasing intercept angle when the observed aircraft was already present in the sector and passing the controlled aircraft on the front side. The opposite situation appeared when the observed aircraft was approaching the sector and crossed the controlled aircraft from the backside. To study the effect of the intercept angle and the relative aircraft distance on the SSD area properties more extensively, several other cases were examined, and the results are illustrated in Figure 15. Figure 15 shows a static aircraft at 35 NM distance from the intercept point, observing an incoming or an already present aircraft in the sector at a variable intercept angle. Based on this initial study, it can be seen that observing a present aircraft in the sector (with a distance from the intercept point of less than 35 NM) leads to an increase of SSD area properties with an increasing intercept angle. In contrast, for an incoming aircraft (an aircraft at a distance of more than 35 NM), a larger intercept angle results in a less dense area inside the SSD. The results gained here match the initial observations discussed earlier.
Figure 16 shows the effect of the intercept angle and the relative aircraft distance to the intercept point from a different perspective, focusing on the effect of different intercept angles as a function of the distance to the intercept point. From the figure it is observed that, for larger intercept angles (120°, 135° and 150°), a larger distance results in a continuing decrease of the SSD area properties, thus relating to a lower complexity metric, whereas for smaller intercept angles (30° to 90°), a larger distance results in an initial increase of the SSD area properties, thus relating to a larger complexity metric, followed by decreasing SSD area properties after a certain distance (more than 35 NM). This also suggests that, for a larger intercept angle, an increase in distance always relates to a less complex situation, whereas for a smaller intercept angle, an increase in distance up to the point where the path lengths are equal relates to a more complex situation.

4.3.2. Time to conflict

The effect of the intercept angle on the sector complexity construct was also investigated from a different perspective, namely the Time to Conflict (TTC). As illustrated in Figure 17 (a), with a fixed TTC of 500 seconds, a larger conflict angle results in lower SSD area properties, and thus a lower sector complexity construct. However, this can be due to the larger distance between the aircraft for larger conflict angles, even with the same TTC value. That said, this also indicates that with a larger intercept angle, later conflict detection and lower initial situation awareness are to be expected. An example of the progression of a future conflict that will occur at an equal time in the future, with different conflict angles, is shown in Figure 17 (b). Based on Figure 17 (b), a larger conflict angle results in lower SSD area properties, but also in a faster rate of SSD progression toward total SSD occupation.

4.4. Number of aircraft and aircraft orientation

One of the methods to measure sector complexity is the measurement of aircraft density. Aircraft density is one of the measures commonly used to obtain an instant indication of sector complexity. It is defined as the number of aircraft per unit of sector volume. This section discusses the effects of the number of aircraft within a sector on the SSD area properties, together with the aircraft heading orientations. Figures 18 and 19 show the numbers of aircraft and the traffic orientations that were investigated here. An example SSD for two aircraft, AC1 and AC2, as indicated in Figures 18 and 19, is illustrated for all cases. For all four situations, all aircraft are free of conflicts. In the four-aircraft situations, illustrated in Figures 18 (a) and (d), an A[mean] of 9% and 16%, respectively, was gathered, whereas in the six-aircraft situations, illustrated in Figures 19 (a) and (d), an A[mean] of 15% and 20%, respectively, was gathered. Based on the SSD area properties, it is clear that more aircraft relates to higher SSD area properties, comparing the case in Figure 18 (a) to that in Figure 19 (a). The corresponding SSDs also illustrate the effect of adding two aircraft to AC1 and AC2, where two additional FBZs are present in Figure 19 (b) and (c) compared to Figure 18 (b) and (c). This case study also agrees with the notion that aircraft orientation influences the complexity construct of a sector, through the cases illustrated in Figures 18 and 19.
Here it can be seen that cases with converging aircraft (Figure 18 (d) and Figure 19 (d)) result in higher SSD area properties than cases where all aircraft have an equal heading (Figure 18 (a) and Figure 19 (a)). The SSD also shows the effect of heading, with Figure 18 (b) and (c) showing the FBZ of aircraft with one heading and Figure 18 (e) and (f) showing the FBZ of aircraft with several headings. The same four-aircraft situation in Figure 18 and six-aircraft situation in Figure 19 appear more complicated with several aircraft headings. The area properties of the situations in Figure 18 (d) (A[mean] of 16%) and Figure 19 (a) (A[mean] of 15%) also show that the SSD has the potential to be a good sector complexity measure; that is, it has the capability to illustrate that more aircraft does not necessarily mean higher complexity, and that the orientation of the aircraft within the sector matters more.

5. Solution space diagram in measuring workload

The complexity construct is an intricate topic. Multiple complexity variables are interrelated, and altering one variable in a single scenario may change other aspects of complexity as well. In order to measure complexity, it is hypothesized that sector complexity can be measured through the controller's workload, based on the notion that controller workload is a subjective attribute and is an effect of air traffic complexity [30]. The controller's workload can be measured through subjective ratings in varying scenario settings. Of the many different measurement techniques for subjective workload, the Instantaneous Self Assessment (ISA) method is one of the simplest tools with which an estimate of the perceived workload can be obtained during real-time simulations or actual tasks [33]. This method requires the operator to give a rating between 1 (very low) and 5 (very high), either verbally or by means of a keyboard, of the workload he or she perceives. Because the problems encountered in Air Traffic Control have a dynamic character and workload is likely to vary over time with changes in the traffic situation that an ATCO is dealing with, the measurement of workload through ISA should also be made at several moments in time. To enable the SSD to become an objective sector complexity and workload measure, the correlation between the subjective ratings given by participants and the SSD area properties should be studied at several moments in time. Figure 20 shows examples of a correlation study between SSD area properties and workload [24]. The plots show the subjective workload ratings in conjunction with the SSD area properties taken every minute in six different scenarios per subject. A total of 120 subjective ratings were gathered, together with 120 SSD instants at which SSD area assessments were conducted. With this practice, the correlation between the SSD area properties and the workload indicated by the controller can be determined. Previous experiments have shown that the SSD area properties yield a higher correlation with workload than the static density [23,24]. The possibility of using the SSD to measure workload as a function of different sector design parameters was also explored; the SSD area properties showed themselves capable of illustrating the same trend in the complexity measure as the ISA ratings [24].
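As an illustration of the kind of correlation study described above, the short sketch below computes a Pearson correlation between per-minute ISA ratings and the corresponding A[mean] values. The data arrays are invented placeholders, not the experimental data behind Figure 20; only the procedure (pairing each one-minute ISA rating with an SSD area assessment and correlating the two series) follows the text.

```python
import numpy as np

# Hypothetical per-minute samples from one scenario: ISA ratings (1-5) and
# the A[mean] percentage of the SSD measured at the same instants.
isa_ratings = np.array([1, 2, 2, 3, 3, 4, 4, 5, 3, 2], dtype=float)
a_mean_pct  = np.array([6, 9, 11, 14, 15, 19, 22, 25, 16, 10], dtype=float)

# Pearson correlation between the subjective ratings and the SSD area metric.
r = np.corrcoef(isa_ratings, a_mean_pct)[0, 1]
print(f"Pearson r between ISA ratings and A[mean]: {r:.2f}")
```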
However, to understand more about the complexity construct, a more focused study is needed of the effects of different sector complexity factors on the SSD, such as the number of streams, the orientation of the streams, and the position of the in-point and out-point of a route within the sector. This preliminary study will then serve as the driver of more elaborate research in the future.

6. Future research

The exploration of sector complexities in terms of the Solution Space parameters and, moreover, workload is important in order to truly understand how workload is imposed on controllers. Because this preliminary investigation showed that various sector parameters and traffic properties are reflected by the geometry of the conflict and solution spaces in the SSD, the possibility of using the SSD in sector planning seems promising. This has also opened up the possibility of quantifying workload objectively, using the SSD as a sector complexity and workload measure. Apart from using the SSD for offline planning purposes, the capability to quantify sector complexity and/or workload also has a potential role in dynamic airspace assessment. This enables a more dynamic airspace sectorization or staff planning than the conventional maximum-number-of-aircraft limit, which is primarily driven by the air traffic controller's ability to monitor and provide separation, communication and flow-control services to the aircraft in the sector. Other than using the SSD as a sector planning aid, it is also envisioned that in the future the SSD can be used as an operational tool. It is anticipated that by using the SSD as a display, controllers will have additional visual assistance to navigate aircraft within the airspace. The SSD can serve as a collision avoidance tool or as a support tool for ATCOs, to indicate sector bottlenecks and hotspots. Finally, the possibility of implementing the SSD for a three-dimensional problem is not far out of reach. Initial studies have been conducted on an analytical 3D SSD [31] and an interface-based 3D SSD [32]. In the analytical solution, the 3D SSD area for the observed aircraft (A[obs]) is composed of two intersecting circles (from the top and the bottom of the protected area), and the flight envelope of the controlled aircraft (A[con]) comprises the rotation of the performance envelope through 360 degrees around its vertical axis, resulting in a donut-shaped solution space. A simplified diagram of the solution space constructed from the protected area of the observed aircraft and the flight envelope of the controlled aircraft is illustrated in Figure 21. Further studies need to be conducted to verify the capability of the 3D SSD to efficiently measure workload or sector complexity. In a different study, the altitude dimension was integrated into a 2D-based SSD ATCO display [32]. The altitude-extended SSD was calculated by filtering the aircraft according to their Altitude Relevance Bands and cutting off the SSD conflict zones at the slowest and fastest possible climb and descent profiles. In this way, the algorithm can discard conflict zones that can never lead to a conflict. Based on this algorithm, a display prototype has been developed that is able to show the effect of altitude changes to the controller. This display will be used in the future to perform a human-in-the-loop experiment to assess the benefits of including altitude information in 2D SSD ATCO displays.

7. Discussion

The SSD represents the space of velocity vectors that are conflict free.
The remaining conflict areas were used as an indication of the level of difficulty that a controller has to handle. When conflict zones in the SSD occupy more area, fewer possible solutions are available to resolve future separation violations. The capability of the SSD area properties to measure the dynamic behavior of the sector was proven in previous studies [23,24]. The ongoing research is aimed at understanding the possibility of using the SSD to investigate the effects of various sector design properties on complexity and controller workload. Based on the results gathered from the simulations, the complexity effects of the intercept angle, aircraft speed, horizontal proximity, the number of aircraft, and aircraft orientation can be illustrated through the covered area percentage of the SSD. Each sector complexity factor is portrayed differently on the SSD. It is assumed that a denser area is related to a higher complexity measure. From the initial study conducted, it is concluded that a higher intercept angle results in a smaller complexity metric, but also that this condition only applies if the observed aircraft has a route length larger than or equal to that of the controlled aircraft. For the horizontal proximity properties, it was found that aircraft that are farther apart have a lower complexity metric. The effect of speed, on the other hand, depends on the position and intercept angle of the observed aircraft, where a larger speed may result in a higher or lower complexity metric. The number of aircraft within a sector also has a strong influence on sector complexity, and this was also portrayed in the SSD. However, aircraft orientation proved to be an equally important characteristic that affects the SSD area properties and thus sector complexity. It should be noted that these sector complexity parameters do not change individually at each instant, because of the dynamic behavior of the aircraft within the sector. As an initial stage of an investigation, this case study provides the basis for hypotheses that will be tested systematically in subsequent studies. To further understand the behavior of the SSD, it is important to investigate other and more combinations of sector complexity metrics. In future studies, the findings regarding the relationship between sector complexity factors and SSD metrics should be validated by means of human-in-the-loop experiments, to also get the ATCO's insight into the perceived workload and how this can be related to the sector complexity mapped on the SSD.
{"url":"http://www.intechopen.com/books/advances-in-air-navigation-services/measuring-sector-complexity-solution-space-based-method","timestamp":"2014-04-17T22:12:13Z","content_type":null,"content_length":"126929","record_id":"<urn:uuid:40723c50-1595-401c-a492-cca177d0e6a8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
blokhead writes:

Here is a simple approach. Say your starting vertex is A. First, find the biconnected component (http://mathworld.wolfram.com/BiconnectedGraph.html) containing A. Biconnectivity means that there is no single vertex whose removal would disconnect the graph. But a consequence of this condition is what we're really looking for:

- Two vertices are in the same biconnected component if and only if there is a cycle in the graph that visits both.

Now that you have A's biconnected component, there are a few cases:

- The component contains just the single vertex A. This means A has 0 or 1 friends, and a cycle of your definition is not possible.
- The component has more than just A, but A is immediate friends with everyone in the component. Then all of A's non-immediate friends are outside of this component and therefore do not share a common cycle with A. This means that there is no cycle of your definition (you insist that the cycle visit a non-immediate-friend of A).
- Otherwise, there is a non-immediate friend of A within the component, say B. By the definition of biconnectivity, there is a cycle that visits both A and B. You can find it by any sort of search (breadth-first or depth-first) from A. Just find two vertex-disjoint paths from A to B. You can restrict your search to within this biconnected component for efficiency (the cycle connecting the two vertices must stay within the biconnected component).

Sorry this is at such a high level with no code ;), but I think you should have no problem, if you were already considering implementing A* search on your own. Check out Graph.pm, it has a method for computing biconnected components, and that will be the main step in this algorithm.

Now, if you are interested in finding the smallest such cycle, I'm not sure exactly how to do it. You might want to find the non-immediate-friend of A who is closest to A, and use him as the basis of your cycle. But I don't think this is guaranteed to give the shortest cycle overall. It would at least be a reasonable heuristic.

-- blokhead
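Since the note stops short of code, here is a minimal sketch of the same idea in Python using networkx rather than the Perl Graph.pm module the author mentions; the friendship graph, function name, and the "non-immediate friend" check are illustrative assumptions.

```python
import networkx as nx

def cycle_through_non_friend(G, a):
    """Return a cycle containing vertex `a` and some vertex that is in a's
    biconnected component but not an immediate neighbour of `a`, or None."""
    # 1. Find the biconnected component(s) containing a.
    comps = [c for c in nx.biconnected_components(G) if a in c]
    neighbours = set(G[a])
    for comp in comps:
        # 2. Look for a non-immediate friend of a inside the component.
        for b in comp:
            if b != a and b not in neighbours:
                # 3. Two vertex-disjoint paths from a to b form the cycle;
                #    restrict the search to the component, as suggested.
                sub = G.subgraph(comp)
                p1, p2 = list(nx.node_disjoint_paths(sub, a, b))[:2]
                return p1 + list(reversed(p2))[1:-1]   # a ... b ... back toward a
    return None

# Tiny example: a is directly friends with b and c; d is only a friend-of-friends.
G = nx.Graph([("a", "b"), ("b", "d"), ("d", "c"), ("c", "a")])
print(cycle_through_non_friend(G, "a"))   # e.g. ['a', 'b', 'd', 'c']
```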
{"url":"http://www.perlmonks.org/index.pl?displaytype=xml;node_id=644964","timestamp":"2014-04-17T10:11:06Z","content_type":null,"content_length":"2948","record_id":"<urn:uuid:f515c1c8-3718-4262-9f28-7ff59842f6c4>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Grammar Algebra?

Newsgroups: comp.compilers
From: jan@klikspaan.si.hhs.nl
Organization: Compilers Central
Date: Mon, 2 Nov 1992 13:41:48 GMT
Keywords: parse, theory, comment
References: 92-10-126 92-10-122

The moderator writes:
> Intersection and difference [of grammars] start to get interesting.

Anton Marin Ertl writes:
> Very interesting, as we can use intersection to construct all-time
> favourites like a^n b^n c^n:

Which is not definable with a context-free grammar. But intersection can be done with regular languages:

- Start with two languages defined by deterministic finite automata.
- Merge the automata by assigning a new start state and epsilon transitions from the new start state to the start states of both originals.
- Create a deterministic equivalent with the subset construction, with a small modification: final states are sets of states containing final states from both originals.
- Next, remove all states that are sets of states of only one of the original automata, together with all transitions to or from them.
- An alternative for the last step is converting the deterministic automaton to a grammar and making that `proper' to remove all useless parts.

Or do I overlook something?

Jan Schramp, Haagse Hogeschool - Sector Informatica. Louis Couperusplein 19, 2514 HP Den Haag, Netherlands. E-mail: jan@si.hhs.nl

[I'd think you might have trouble intersecting machines with equivalent but not identical sets of states. -John]
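The construction sketched in the post reaches the same language as the textbook product construction for DFA intersection. The sketch below shows that more direct route in Python; the dictionary encoding of a DFA and the example machines are my own choices, not anything from the thread.

```python
from itertools import product

def intersect_dfas(d1, d2):
    """Product construction: the result accepts a string iff both DFAs accept it.
    Each DFA is (states, alphabet, delta, start, finals) with
    delta[(state, symbol)] -> state."""
    states1, alpha, delta1, start1, finals1 = d1
    states2, _,     delta2, start2, finals2 = d2
    states = set(product(states1, states2))
    delta = {((p, q), a): (delta1[(p, a)], delta2[(q, a)])
             for (p, q) in states for a in alpha}
    finals = {(p, q) for (p, q) in states if p in finals1 and q in finals2}
    return states, alpha, delta, (start1, start2), finals

def accepts(dfa, word):
    _, _, delta, state, finals = dfa
    for a in word:
        state = delta[(state, a)]
    return state in finals

# Example over {a, b}: strings with an even number of a's, intersected with
# strings ending in b.
even_a = ({0, 1}, {"a", "b"},
          {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}, 0, {0})
ends_b = ({0, 1}, {"a", "b"},
          {(0, "a"): 0, (1, "a"): 0, (0, "b"): 1, (1, "b"): 1}, 0, {1})
both = intersect_dfas(even_a, ends_b)
print(accepts(both, "aab"), accepts(both, "ab"))   # True False
```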
{"url":"http://compilers.iecc.com/comparch/article/92-11-007","timestamp":"2014-04-17T19:12:52Z","content_type":null,"content_length":"5647","record_id":"<urn:uuid:2d174d15-a7b9-4a6a-93e0-3bf983b92b9b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Resistive Bridge Basics: Part Two

Keywords: Bridge Circuits, Bridges, Wheatstone Bridge, Sigma Delta, Sigma-Delta Converters

Abstract: Bridge circuits are a very useful way to make accurate measurements of resistance and other analog values. This article, the continuation of Part One, covers how to interface bridge circuits implemented with higher signal-output silicon strain gauges to analog-to-digital converters (ADCs). Featured are sigma-delta ADCs, which provide a low-cost way to implement a pressure sensor when utilizing silicon strain gauges.

Part one of this topic, application note 3426, "Resistive Bridge Basics: Part One", discusses why resistive bridges are used and basic bridge configurations, and discusses bridges with small-output signals like those made from bonded-wire or foil strain gauges. This application note, Part Two, focuses on high-output silicon strain gauges and their excellent fit with high-resolution, sigma-delta analog-to-digital converters (ADCs). Examples of how to calculate the required ADC resolution and the dynamic range for uncompensated sensors are given. This note shows how the characteristics of ADCs and silicon strain gauges can be exploited to create simpler ratiometric circuits and a simplified circuit for applications using current-driven sensors.

Silicon Strain-Gauge Background

The advantage of the silicon strain gauge is its high sensitivity. Strain in silicon causes its bulk resistance to change. This results in signals an order of magnitude larger than those from foil or bonded-wire strain gauges, where resistance changes are only due to the dimensional changes of the resistor. The large signal from silicon strain gauges allows lower cost electronics to be used with them. The cost and difficulty, however, of physically mounting and attaching wires to these small, brittle devices has limited their use in bonded strain-gauge applications. Nonetheless, the silicon strain gauge is optimal in MEMS (Micro Electro Mechanical Structures) applications. Using MEMS, mechanical structures can be created in silicon, and multiple strain gauges can be manufactured as an integral part of those mechanical structures. Thus, the MEMS process provides a robust, low-cost solution to the total design problem without the need to handle individual strain gauges.

The most common example of a MEMS device is the silicon pressure sensor, which first became popular in the 1970s. These pressure sensors are fabricated using standard semiconductor processing techniques, plus a special etching step. The special etch selectively removes silicon from the back side of the wafer to create hundreds of thin square diaphragms with strong silicon frames surrounding them. On the front side of the wafer, one strain-sensitive resistor is implanted on each edge of each diaphragm. Metal traces connect the four resistors around an individual diaphragm to create a fully active Wheatstone bridge. A diamond saw is then used to free the individual sensors from the wafer. At this point the sensors are fully functional, but must have pressure ports attached and wires connected before they are useful. These small sensors are inexpensive and relatively robust. There is a negative, however: these sensors suffer from large temperature effects and have a wide tolerance on initial offset and sensitivity.
Pressure Sensor Example

For illustrative purposes, a pressure sensor is used here; the principles involved are applicable to any system using a similar type of bridge as a sensor. One model for the output of a raw pressure sensor is seen in Equation 1. The magnitude and range of the variables in Equation 1 yield a wide range of V[OUT] values for a given pressure (P). Variations in V[OUT] exist between different sensors at the same temperature and for a single sensor as the temperature varies. To provide a consistent and meaningful output, each sensor must be calibrated to compensate for part-to-part variations and drift over temperature. For many years calibration was done with analog circuitry. Modern electronics, however, are making digital calibration cost competitive with analog, and the resulting accuracy of digital calibration can be much better. A few analog "tricks" can be used to simplify digital calibration without sacrificing accuracy.

V[OUT] = V[B] × (P × S[0] × (1 + S[1] × (T - T[0])) + U[0] + U[1] × (T - T[0])) (Eq. 1)

Where V[OUT] is the output of the bridge, V[B] is the bridge excitation voltage, P is the applied pressure, T[0] is the reference temperature, S[0] is the sensitivity at T[0], S[1] is the temperature coefficient of sensitivity (TCS), U[0] is the offset or unbalance of the bridge at T[0] with no pressure applied, and U[1] is the offset temperature coefficient (OTC).

Equation 1 uses first-order polynomials to model the sensor. For many applications it may be necessary to use higher-order polynomials, piecewise linear techniques, or even piecewise second-order approximations with a lookup table for the coefficients. Regardless of which model is used, digital calibration requires the ability to digitize V[OUT], V[B], and T, as well as a way to determine all the coefficients and perform the necessary calculations. Equation 2 is Equation 1 rearranged to solve for P. Equation 2 more clearly shows the information needed for the digital computation, typically by a microcontroller (µC), to output an accurate pressure value.

P = (V[OUT]/V[B] - U[0] - U[1] × (T - T[0]))/(S[0] × (1 + S[1] × (T - T[0]))) (Eq. 2)

Brute-Force Circuit

The brute-force method shown in the Figure 1 circuit uses a single high-resolution ADC to digitize V[OUT] (AIN1/AIN2), temperature (AIN3/AIN4), and V[B] (AIN5/AIN6). These measurements are then sent to a µC where the actual pressure is calculated. The bridge is powered directly from the same power supply as the ADC, the voltage reference, and the µC. A resistance temperature detector (RTD), denoted Rt in the schematic, measures temperature; the input MUX on the ADC allows for measurement of the bridge, the RTD, or the supply voltage. To determine the calibration coefficients, the entire system (or at least the RTD and bridge) is placed in an oven and measurements are made at several temperatures as a calibrated pressure source stresses the bridge. The measurement data is then manipulated by the test system to determine the calibration coefficients. The resulting coefficients are downloaded to the µC and stored in nonvolatile memory.

Figure 1. Circuit directly measures the variables needed to calculate the actual pressure (excitation voltage, temperature, and bridge output).

Key considerations in designing such a circuit are the dynamic range and ADC resolution. The minimum requirements will depend on the application and the exact specifications of the sensor and RTD used. For illustrative purposes, the following specifications are used.
System specifications
• Full-scale pressure: 100psi
• Pressure resolution: 0.05psi
• Temperature range: -40°C to +85°C
• Power supply: 4.75 to 5.25V

Pressure sensor specifications
• S[0] (sensitivity): 150 to 300µV/V/psi
• S[1] (temperature coefficient of sensitivity): -2500ppm/°C, max
• U[0] (offset): -3 to +3mV/V
• U[1] (offset temperature coefficient): -15 to +15µV/V/°C
• R[B] (input resistance): 4.5kΩ
• TCR (temperature coefficient of resistance): 1200ppm/°C
• RTD: PT100
□ Alpha: 3850ppm/°C (ΔR/°C = 0.385Ω nominal)
□ Value at -40°C: 84.27Ω
□ Value at 0°C: 100Ω
□ Value at 85°C: 132.80Ω
□ For more details on the PT100, see Maxim application note 3450, "Positive analog feedback compensates PT100 transducer."

Voltage Resolution

The minimum acceptable voltage resolution is based on the smallest response of V[OUT] to the minimum detectable pressure change. This condition occurs when using the lowest sensitivity sensor at the maximum temperature with the lowest supply voltage. Note that the offset terms in Equation 1 are not factors here because resolution is only dependent on the response to pressure. Using Equation 1 and the appropriate assumptions from above:

ΔV[OUT] min = 4.75V × 0.05psi/count × 150µV/V/psi × (1 + (-2500ppm/°C) × (85°C - 25°C)) ≈ 30.3µV/count

Therefore: minimum ADC resolution = 30µV/count

Input Range

Input range is determined by the largest possible input voltage and the smallest, or most negative, input voltage. The conditions that create the largest value of V[OUT] in Equation 1 are: maximum pressure (100psi), minimum temperature (-40°C), maximum supply voltage (5.25V), an offset of 3mV/V, offset TC of -15µV/V/°C, a TCS of -2500ppm/°C, and the highest sensitivity die (300µV/V/psi). The most negative signal will be with no pressure applied (P = 0), the supply voltage at 5.25V, an offset of -3mV/V, at a temperature of -40°C, with an OTC of +15µV/V/°C. Again, using Equation 1 and the appropriate assumptions from above:

V[OUT] max = 5.25V × (100psi × 300µV/V/psi × (1 + (-2500ppm/°C) × (-40°C - 25°C)) + 3mV/V + (-0.015mV/V/°C) × (-40°C - 25°C)) ≈ 204mV

V[OUT] min = 5.25V × (-3mV/V + (0.015mV/V/°C × (-40°C - 25°C))) ≈ -21mV

Therefore: ADC input range = -21mV to +204mV

Bits of Resolution

The nominal ADC for this application has an input range of -21mV to +204mV and a voltage resolution of 30µV/count. The total number of counts for this ADC would be (204mV + 21mV)/(30µV/count) = 7500 counts, or slightly less than 13 bits of dynamic range. A 13-bit converter would meet the requirements of this application, if the sensor's output range exactly matched the input range of the ADC. As -21mV to +204mV does not match the input range of common ADCs, either the input signal must be level-shifted and amplified, or a higher resolution ADC must be utilized. Fortunately modern sigma-delta converters, with their high resolution, bipolar inputs, and internal amplifiers, make the use of a higher resolution ADC practical. These sigma-delta ADCs provide an economical solution without requiring additional components. This not only reduces board size, but it also eliminates the drift errors associated with the amplification and level-shifting circuitry that would otherwise be needed. Typical sigma-delta converters operating from a 5V supply will use a 2.5V reference and have an input range of ±2.5V. To meet the resolution requirements of our pressure-sensor application, such an ADC would need a dynamic range of (2.5V - (-2.5V))/(30µV/count) = 166,667 counts.
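The worst-case arithmetic above is easy to reproduce numerically. The short Python sketch below is not part of the original application note; it simply re-evaluates Equation 1 at the corner conditions listed in the specifications, and the function and variable names are my own.

# Sketch (not from the app note): re-evaluating Equation 1 at the worst-case
# corners above to reproduce the resolution and input-range numbers.

def v_out(vb, p, s0, s1, u0, u1, t, t0=25.0):
    """Equation 1: bridge output in volts for excitation vb (V) and pressure p (psi)."""
    return vb * (p * s0 * (1 + s1 * (t - t0)) + u0 + u1 * (t - t0))

s1 = -2500e-6          # TCS, -2500 ppm/degC

# Minimum detectable step: lowest sensitivity, hottest, lowest supply, offsets ignored
dv_min = v_out(4.75, 0.05, 150e-6, s1, 0.0, 0.0, 85.0)
print("min resolution  ~ %.1f uV/count" % (dv_min * 1e6))          # ~30.3

# Largest and most negative bridge outputs
v_max = v_out(5.25, 100.0, 300e-6, s1,  3e-3, -15e-6, -40.0)
v_min = v_out(5.25,   0.0, 300e-6, s1, -3e-3, +15e-6, -40.0)
print("input range     ~ %.0f mV to +%.0f mV" % (v_min * 1e3, v_max * 1e3))

# Counts needed by a +/-2.5V-input sigma-delta ADC at 30 uV/count
print("counts required ~ %.0f" % (5.0 / 30e-6))                     # ~166,667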
A dynamic range of 166,667 counts is equivalent to 17.35 bits of resolution and well within the capability of many ADCs, such as the 18-bit MAX1400. If a SAR ADC were needed, it would be quite expensive to use an 18-bit converter in a 13-bit application that will yield an 11-bit result. Employing an 18-bit (17 bits plus sign) sigma-delta converter is, however, quite practical, even if the three MSBs are essentially unused. Besides being affordable, the sigma-delta converter has a high input impedance and excellent noise performance.

An alternate approach to an 18-bit ADC uses a lower resolution converter with an internal amplifier, such as the 16-bit MAX1416. Selecting an internal gain of 8 has the effect of shifting the ADC reading 3 bits toward the MSB, thereby using all the converter's bits and reducing the converter requirement to 15 bits. When choosing between a high-resolution converter without gain, and a lower resolution converter with gain, be sure to consider the noise specifications at the applicable gain and conversion rate. The useful resolution of a sigma-delta converter is frequently limited by its noise.

Temperature Measurement

If the only reason for measuring temperature is to compensate the pressure sensor, then the temperature measurement does not need to be accurate, only repeatable with a unique temperature corresponding to each measured value. This allows a lot of flexibility and loose design criteria. There are three basic design requirements: avoid self-heating, have adequate temperature resolution, and stay within the ADC's measurement range. Selecting a maximum voltage for Vt that is close to the maximum pressure signal ensures that the same ADC and internal gain can be used for temperature and pressure measurement. In this example the maximum input voltage is +204mV. To allow for resistor tolerances, the maximum temperature voltage can be conservatively selected as +180mV. Limiting the voltage across Rt to +180mV also eliminates any problems with self-heating of Rt. Once the maximum voltage has been selected, the value of R1 is calculated to provide this maximum voltage at 85°C (Rt = 132.8Ω) when V[B] = 5.25V. R1 can be calculated with Equation 3, where Vtmax is the maximum voltage allowed across Rt. Temperature resolution is then found by dividing the ADC's voltage resolution by the change in Vt with temperature. Equation 4 summarizes the temperature resolution calculation. (Note: The calculated minimum voltage resolution is used in this example, which creates a conservative design. You may wish to use the actual noise-free resolution of the ADC.)

R1 = Rt × (V[B]/Vtmax - 1) (Eq. 3)

R1 = 132.8Ω × (5.25V/0.18V - 1) ≈ 3.7kΩ

T[RES] = V[RES] × (R1 + Rt)²/(V[B] × R1 × ΔRt/°C) (Eq. 4)

Where T[RES] is the temperature-measurement resolution in °C per count of the ADC.

T[RES] = 30µV/count × (3700Ω + 132.8Ω)²/(4.75V × 3700Ω × 0.38Ω/°C) ≈ 0.07°C/count

A 0.07°C temperature resolution will be adequate for most applications. If, however, higher resolution is needed, several options are available: use a higher resolution ADC; replace the RTD with a thermistor; or utilize the RTD in a bridge circuit so that a higher gain can be used inside the ADC. Note that to achieve a useful temperature reading, the software must compensate for any changes in the supply voltage. An alternate approach connects R1 to V[REF] instead of V[B]. This makes Vt independent of V[B], but also increases the load on the voltage reference.
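The R1 selection and temperature-resolution estimate can be checked with the following Python sketch. It is my own illustration, not part of the application note; it uses the PT100 values from the specification table and evaluates Equations 3 and 4 directly (with the unrounded R1, so the result lands slightly below the 0.07°C/count quoted above).

# Sketch: choosing R1 with Equation 3 and estimating temperature resolution
# with Equation 4 for the PT100 divider.

vt_max = 0.180      # V, conservative maximum voltage across the RTD
rt_85  = 132.80     # ohms, PT100 at +85 degC
vb_max = 5.25       # V, highest supply (sets the largest Vt)
vb_min = 4.75       # V, lowest supply (worst case for resolution)
v_res  = 30e-6      # V/count, ADC voltage resolution from above
drt_dt = 0.385      # ohm/degC, nominal PT100 slope

# Equation 3: series resistor that keeps Vt <= vt_max at +85 degC
r1 = rt_85 * (vb_max / vt_max - 1)
print("R1    ~ %.0f ohm" % r1)                      # ~3.7k

# Equation 4: degC per ADC count, evaluated at the hot end and lowest supply
t_res = v_res * (r1 + rt_85) ** 2 / (vb_min * r1 * drt_dt)
print("T_RES ~ %.3f degC/count" % t_res)            # ~0.07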
Brute Force with a Touch of Elegance

Silicon strain gauges and ADCs have some characteristics that allow the circuit in Figure 1 to be simplified. From Equation 1 it can be seen that the output of the bridge is directly proportional to the supply voltage (V[B]). Sensors with this characteristic are called ratiometric sensors. Equation 5 is a generic equation for all ratiometric sensors with temperature-dependent errors. Equation 5 can be created by starting with Equation 1 and replacing everything to the right of V[B] with the general function f(p,t), where p is the intensity of the property being measured and t is the temperature.

V[OUT] = V[B] × f(p,t) (Eq. 5)

ADCs also have a ratiometric property; their output is directly proportional to the ratio of the input voltage and the reference voltage. Equation 6 describes a generic ADC's digital reading (D) in terms of the input signal (Vs), the reference voltage (V[REF]), the full-scale reading (FS), and the scale factor (K). The scale factor accounts for variations in architecture, as well as any internal amplification.

D = (Vs/V[REF]) × FS × K (Eq. 6)

The performance of the ADC can be seen by replacing Vs in Equation 6 with the equivalent of V[OUT] from Equation 5. The result is Equation 7.

D = (V[B]/V[REF]) × f(p,t) × FS × K (Eq. 7)

In Equation 7 the ratio of V[B] to V[REF] is important, but their absolute values are not. Consequently, the voltage reference in the circuit shown in Figure 1 is not needed. The reference voltage for the ADC can come from a simple resistor-divider that maintains a constant ratio of V[B]/V[REF]. This change not only eliminates the voltage reference, but it also eliminates the need to measure V[B] and all the software required to compensate for changes in V[B]. This technique works for all ratiometric sensors. The temperature sensor created by placing R1 in series with Rt is also ratiometric, so the voltage reference is not needed for temperature measurement either. This circuit is shown in Figure 2.

Figure 2. An example of a ratiometric circuit. The output of the pressure sensor, the RTD voltage, and the reference voltage for the ADC are all directly proportional to the supply voltage. This eliminates the need for an absolute voltage reference and simplifies the calculations necessary to determine the actual pressure.

Eliminating the RTD

Silicon-based resistors are highly temperature sensitive, a property that can be exploited by using the bridge resistance as the temperature sensor for the system. This not only reduces cost, but it also yields better results because it eliminates any temperature gradient that may have existed between the RTD and the stress-sensitive bridge. As mentioned previously, absolute accuracy of the temperature measurement is not important as long as the temperature measurement is repeatable and unique. The requirement for uniqueness limits this temperature-sensing method to bridges whose impedance remains constant as pressure is applied. Fortunately, most silicon sensors use a fully active bridge that meets this requirement. Figure 3 is a circuit where a temperature-dependent voltage is created by placing a resistor (R1) in series with the low-voltage side of the bridge. Adding this resistor reduces the voltage across the bridge and hence its output. This is generally not a large voltage reduction, but it may be enough to require an increase in gain or a decrease in the reference voltage. Equation 8 can be used to calculate a conservative value of R1.
It works well when R1 < R[B]/2, which will be true for most applications.

R1 = (R[B] × V[RES])/(V[DD] × TCR × T[RES] - 2.5 × V[RES]) (Eq. 8)

Where R[B] is the input resistance of the sensor bridge, V[RES] is the voltage resolution of the ADC, V[DD] is the supply voltage, TCR is the temperature coefficient of resistance of the sensor bridge, and T[RES] is the desired temperature resolution.

Figure 3. An example of a ratiometric circuit that uses the output of the bridge for pressure measurement and the resistance of the bridge for temperature measurement.

Continuing with the previous example and assuming a desired temperature resolution of 0.05°C:

R1 = (4.5kΩ × 30µV/count)/((5V × 1200ppm/°C × 0.05°C/count) - (2.5 × 30µV/count)) = 0.6kΩ

This result is valid since R1 is less than half of R[B]. In this example, adding R1 will cause a 12% drop in V[B]. In selecting the converter, however, it was necessary to round up from 17.35 bits to 18 bits of resolution. This increase in resolution more than compensates for the reduction in V[B]. As temperature increases, the resistance of the bridge rises, causing more voltage to be dropped across it. This change in V[B] with temperature creates an additional TCS term. Fortunately this term is positive and the inherent TCS of the sensor is negative, so placing a resistor in series with the sensor will actually reduce the uncompensated TCS error. The calibration techniques above are still valid; they just need to compensate a slightly smaller error.

Current-Driven Bridges

A special class of silicon piezoresistive sensors exists that are referred to as constant-current sensors or current-driven sensors. These sensors have been specially processed so that when they are powered by a current source, the sensitivity is constant over temperature (TCS ≈ 0). It is common for current-driven sensors to have additional resistors added that eliminate, or significantly reduce, the offset error and OTC error. In essence, analog techniques are being used to calibrate the sensor. This frees the designer from the expensive task of measuring every part over temperature and pressure. The absolute accuracy of these sensors over a wide temperature range is generally not as good as the sensors calibrated digitally. Digital techniques can still be used to improve the performance of these sensors, and temperature information is easily obtained by measuring the voltage across the bridge, which will typically increase at a rate greater than 2000ppm/°C. The circuit in Figure 4 shows a current source powering the bridge. The same voltage reference used to establish a constant current also supplies the reference voltage for the ADC.

Figure 4. This circuit uses a current-driven sensor powered by a conventional current source.

Eliminating the Current Source

Understanding how current-driven sensors compensate for TCS allows the circuit in Figure 5 to achieve the same results as the circuit in Figure 4 without including a current source. Current-driven sensors still have an excitation voltage (V[B]); however, V[B] is not fixed by a voltage supply. V[B] is determined by the bridge's resistance and the current through the bridge. As mentioned earlier, silicon resistors have a positive temperature coefficient. This causes V[B] to increase with temperature when the bridge is powered from a current source.
If the bridge's TCR is equal in magnitude and opposite in sign to TCS, then V[B] will increase with temperature at the right rate to compensate for decreasing sensitivity, and TCS will be near zero over a limited temperature range.

Figure 5. Circuit uses a current-driven sensor, but does not require a current source or a voltage reference.

An equation for the ADC's output in the circuit in Figure 4 can be obtained by starting with Equation 7 and replacing V[B] with I[B] times R[B]. This results in Equation 9, where R[B] is the input resistance of the bridge and I[B] is the current through the bridge.

D = (I[B] × R[B]/V[REF]) × f(p,t) × FS × K (Eq. 9)

The circuit shown in Figure 5 can provide the same performance as the circuit in Figure 4, but without using a current source or voltage reference. This can be shown by comparing the output of the two circuits. The output of the ADC in Figure 5 is found by starting with Equation 7 and substituting the appropriate equations for V[B] and V[REF]. This results in Equation 10.

Repeat of Equation 7: D = (V[B]/V[REF]) × f(p,t) × FS × K

In the circuit in Figure 5, V[B] = V[DD] × R[B]/(R1 + R[B])

And V[REF] = V[DD] × R1/(R1 + R[B])

Substituting these into Equation 7 yields Equation 10.

D = (R[B]/R1) × f(p,t) × FS × K (Eq. 10)

If R1 is selected to equal V[REF]/I[B], then Equations 9 and 10 are identical, which, in turn, shows that the circuit in Figure 5 provides the same results as the circuit in Figure 4. For identical results, R1 must equal V[REF]/I[B], but this is not a requirement to achieve proper temperature compensation. As long as R[B] is multiplied by a temperature-independent constant, temperature compensation will be achieved. The value of R1 can be selected to best fit the system requirements.

When using the circuit in Figure 5, it is important to remember that the ADC's reference voltage changes with temperature. This makes the ADC unsuitable for monitoring other system voltages. In fact, if a temperature-sensitive measurement is needed for additional compensation, it can be obtained by using an additional ADC channel to measure the supply voltage. Also, when using the circuit in Figure 5, care should be taken to ensure that V[REF] is within the specified range for the ADC.

The relatively large output of silicon piezoresistive strain gauges allows them to interface directly with low-cost, high-resolution, sigma-delta ADCs. This eliminates the cost and errors associated with amplification and level-shifting circuits. In addition, the thermal properties of these strain gauges and the ratiometric characteristics of the ADC can be used to significantly reduce the complexity of highly accurate circuits.
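As a closing numerical check of the ratiometric identities behind Equations 7, 9 and 10, the Python sketch below (my own illustration, not from the application note) uses made-up component values: f_pt stands in for f(p,t), and the drive current and reference voltage in the last block are hypothetical.

# Sketch: the supply voltage cancels in Equation 7 / Figure 5, and Equations 9
# and 10 agree when R1 = Vref/Ib.

def adc_reading(v_in, v_ref, fs=2**17, k=1.0):
    """Equation 6: generic ADC reading D = (Vs/Vref) * FS * K."""
    return (v_in / v_ref) * fs * k

f_pt = 0.0123           # stand-in for f(p,t), the bridge's fractional output
r_b  = 4500.0           # bridge input resistance, ohms
r1   = 600.0            # series resistor from the Equation 8 example, ohms

# Figure 5: both the bridge excitation and the ADC reference are resistor-divider
# fractions of the same supply, so the supply voltage drops out of D.
for v_dd in (4.75, 5.00, 5.25):
    v_b   = v_dd * r_b / (r1 + r_b)
    v_ref = v_dd * r1 / (r1 + r_b)
    print("Vdd=%.2f V -> D = %.1f" % (v_dd, adc_reading(v_b * f_pt, v_ref)))

# Equations 9 vs 10: with R1 = Vref/Ib the two circuits give the same reading.
i_b, v_ref = 1e-3, 2.5                      # hypothetical drive current and reference
d_fig4 = (i_b * r_b / v_ref) * f_pt * 2**17
d_fig5 = (r_b / (v_ref / i_b)) * f_pt * 2**17
print("Eq.9: %.1f   Eq.10: %.1f" % (d_fig4, d_fig5))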
{"url":"http://www.maximintegrated.com/app-notes/index.mvp/id/3545","timestamp":"2014-04-18T02:59:03Z","content_type":null,"content_length":"89808","record_id":"<urn:uuid:57257c3f-415c-45e8-9379-5aabf6c22057>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
Real Math: Sexist, Racist, or Just Hard? Real Math: Sexist, Racist, or Just Hard? Real Math: Sexist, Racist, or Just Hard? A group of 200 prominent mathematicians and scientists has called on U.S. Education Secretary Richard W. Riley to rescind his department's ringing endorsement of 10 elementary and secondary mathematics programs, arguing that the programs are damaging to children because they omit instruction in basic mathematics skills. While agreeing that children need to master basic skills, Riley defended the endorsed programs by claiming each had improved student learning. Last fall the U.S. Department of Education (DoEd) endorsed a Top 10 list of elementary and secondary mathematics programs favored by its own Mathematics and Science Expert Panel. Five programs received “exemplary” status, and five others were named “promising.” In write-ups of the programs on the government Web site, the panelists said this about the “promising” Everyday Mathematics for K-6: “This enriched curriculum includes such features as problem-solving about everyday situations; linking past experiences to new concepts; sharing ideas through discussion; developing concept readiness through hands-on activities and explorations; cooperative learning through partner and small-group activities; and enhancing home-school partnerships.” To which San Francisco Chronicle columnist Debra J. Saunders responded: “Sounds more like marriage counseling than math class.” Indeed, virtually all of the DoEd-blessed curricula extol the merits of “real world” or “real life” applications of math, with lots of group work, partner quizzes, student role-playing, journals with children’s entries on how they feel about math, copious use of calculators, and group estimating. That’s according to the official descriptions. In general, the federal government’s Top 10 are from what is called the 'Whole Math' genre--a kissing cousin of Whole Language--where basic skills and teacher-directed instruction are played down in favor of pupil-led discovery, or constructivism. MathLand, another program designated as “promising” by the DoEd panel, has an exercise called Fantasy Lunch. Second-graders are invited to conjure up their fantasy lunch, then draw it, and finally cut up the imaginary food and put it into a bag. Connected Math, rated “exemplary,” emphasizes higher-level thinking skills, but California rejected the program for middle-school students because it omits the division of fractions and other basic computational skills. The constructivist approach to mathematics has its fans, notably the National Council of Teachers of Mathematics (NCTM). This is the group that spurred the Whole Math movement with its 1989 standards, to which DoEd’s Top 10 adhere. When The Wall Street Journal recently editorialized against Whole Math, several supporters of Everyday Math fired back on January 13 with letters, contending the program has helped students grasp mathematical concepts, which in turn has brought about increases in achievement. But DoEd’s unqualified embrace of the constructivist approach--sometimes called the "New-New Math"--prompted a counterattack by the heaviest artillery yet in the Math Wars. On November 18, 1999, Secretary Richard Riley and staff spilled their morning coffee over a full-page Washington Post advertisement signed by 200 mathematicians, scientists, and other experts calling on Riley to withdraw the federal endorsement of the 10 math programs. 
Among the signers were four Nobel laureates in physics and two winners of the Fields Medal, the highest honor for mathematicians. The high-powered group protested the absence of active research mathematicians from DoEd’s Expert Panel. They also objected that DoEd's Top-10 programs omitted basic skills, such as multiplying multi-digit numbers and dividing fractions. “These programs [the Top 10] are among the worst in existence,” said Cal State/Northridge math professor David Klein, who helped draft the letter. “It would be a joke except for the damaging effect it has on children.” Some of the panelists fought back. For example, Steven Leinwand accused the 200 scholars of being interested in “math for the elite” alone. Leinwand, math consultant for Connecticut’s education department, said the NCTM and DoEd believe “math needs to empower all students.” However, it was Leinwand who in 1994 wrote in Education Week that continuing to teach children multi-digit computational algorithms was “downright dangerous.” Although a statutory prohibition prevents DoEd from dictating curricula, Congress provided a way around that restriction in 1994 when it passed the Goals 2000: Educate America Act. Title IX called on DoEd’s Office of Educational Research and Improvement to set up Expert Panels to endorse top programs in gender equity, safe and drug-free schools, technology, and math and science. Title IX, like Goals 2000 itself, stressed the idea of equalizing academic outcomes for all sub-groups in the student population. Secretary Riley commented that NCTM has published “the prevailing standards in the country, so we thought that would make sense.” But critics see a deliberate integration of ideological agendas. The architects of NCTM’s 1989 standards declared that social injustices had given white males an advantage over women and minorities in math, and they promised NCTM’s reinvented math would equalize scores. Equality would be achieved by eliminating the “computational gate.” Klein argues this Whole Math approach “hurts the students with the least resources the most" by depriving them of the computational basics they need as a foundation for higher math. "If kids get a good, solid program in arithmetic, they have a good chance of learning algebra," he explained, "and algebra’s one of the main gates into colleges.” The Whole Math programs are based on the assumption that "minorities and women are too dumb to learn real mathematics," he said. Robert Holland is a senior fellow at the Lexington Institute, a public-policy think tank in Arlington, Va. His e-mail address is rholl1176@yahoo.com. Articles By Robert G. Holland
{"url":"http://news.heartland.org/newspaper-article/2000/03/01/real-math-sexist-racist-or-just-hard","timestamp":"2014-04-16T13:13:48Z","content_type":null,"content_length":"37648","record_id":"<urn:uuid:990830c4-e7d9-49f1-902c-2fbb2262ebd4>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Elimination technique questions that frankly, I don't understand.

November 26th 2009, 09:37 PM #1

1) Eliminate t to give an equation that relates x and y.
x=(e^(2t))+9 and y=e^(7t)

2) Eliminate t to give an equation that relates x and y.
x=tan(t) and y=sec^2(t)-3

3) Show that this equation represents a circle by rearranging it into the centre-radius form of the equation of a circle. State the coordinates of the center and the radius of the circle:

LOLWUT?! Help would be soo much appreciated.

November 26th 2009, 10:24 PM #2

1. Solve the 2nd equation for t:

$y=e^{7t}~\implies~7t=\ln(y)~\implies~\boxed{t=\frac17 \ln(y)}$

and plug this term into the first equation:

$x=\left(e^{2 \cdot \frac17 \ln(y)}\right)+9~\implies~x=\sqrt[7]{y^2}+9~\implies~y^2=(x-9)^7\ ,\ x>9$

2) Eliminate t to give an equation that relates x and y.
x=tan(t) and y=sec^2(t)-3
This one is for you!

3) Show that this equation represents a circle by rearranging it into the centre-radius form of the equation of a circle. State the coordinates of the center and the radius of the circle:

1. Re-arrange the equation:
$21x^2 -4x+21y^2+84y = -83$
2. Complete the squares:
$21\left(x^2 - \frac4{21} x + \frac{4}{441}\right) +21\left(y^2+4y+4\right) = -83+21 \cdot \frac4{441} + 21 \cdot 4 = \frac{25}{21}$
3. Divide the equation by the leading factor of the brackets on the LHS:
$\left(x-\frac2{21}\right)^2+(y+2)^2=\left(\frac5{21}\right)^2$
4. Determine the coordinates of the centre and the length of the radius.

November 26th 2009, 10:27 PM #3

You're dope. Thanks

November 27th 2009, 02:49 AM #4 MHF Contributor

So "you're dope" is different from "you're a dope"?

I would have done (1) slightly differently, solving only for $e^t$ rather than t itself. From the first equation, $x= e^{2t}+ 9$, $x- 9= e^{2t}= (e^t)^2$, $e^t= (x-9)^{1/2}$. From the second equation, $y= e^{7t}= (e^t)^7$, $e^t= y^{1/7}$. Putting those together, $(x-9)^{1/2}= y^{1/7}$, which, taking both sides to the 14th power, is the same as $(x-9)^7= y^2$
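For anyone who wants to double-check the algebra above, here is a short sympy sketch. It is my own addition, not part of the thread, and it only verifies result (1) and the completed-square form in (3); the symbol names are arbitrary.

# Symbolic sanity check of the eliminations above. Requires sympy.
import sympy as sp

t, x, y = sp.symbols('t x y', positive=True)

# Problem 1: x = e^(2t) + 9, y = e^(7t)  ->  y^2 = (x - 9)^7
x_t = sp.exp(2*t) + 9
y_t = sp.exp(7*t)
print(sp.simplify(y_t**2 - (x_t - 9)**7))          # 0, so the relation holds

# Problem 3: the centre-radius form matches the rearranged equation
orig   = 21*x**2 + 21*y**2 - 4*x + 84*y + 83       # == 0
circle = 21*((x - sp.Rational(2, 21))**2 + (y + 2)**2 - sp.Rational(25, 441))
print(sp.expand(orig - circle))                    # 0, same equation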
{"url":"http://mathhelpforum.com/pre-calculus/116968-elimination-technique-questions-frankly-i-don-t-understand.html","timestamp":"2014-04-16T14:57:05Z","content_type":null,"content_length":"44534","record_id":"<urn:uuid:278509ab-c98a-42d2-97f0-0b3e9fb0d727>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
[LLVMbugs] [Bug 7319] New: Clang rejects enum compare gcc doesn't and prints insane candidate list bugzilla-daemon at llvm.org bugzilla-daemon at llvm.org Tue Jun 8 15:39:18 CDT 2010 Summary: Clang rejects enum compare gcc doesn't and prints insane candidate list Product: clang Version: unspecified Platform: PC OS/Version: All Status: NEW Severity: normal Priority: P Component: C++ AssignedTo: unassignedclangbugs at nondot.org ReportedBy: clattner at apple.com CC: llvmbugs at cs.uiuc.edu, dgregor at apple.com On this code: #include <iostream> typedef enum { } MyEnum; template<typename X> bool operator>(const X &inX1, const X &inX2) return inX2 < inX1; int main (int argc, const char * argv[]) { MyEnum e1, e2; if (e1 > e2) std::cout << "its larger!\n" << std::endl; std::cout << "its smaller!\n" << std::endl; std::cout << "Done!\n" << std::endl; return 0; GCC accepts this code but clang rejects it. Clang also does it in a particularly horrible way, producing: t.cc:19:8: error: use of overloaded operator '>' is ambiguous if (e1 > e2) ~~ ^ ~~ t.cc:10:6: note: candidate function [with X = MyEnum] bool operator>(const X &inX1, const X &inX2) t.cc:19:8: note: built-in candidate operator>(MyEnum, MyEnum) if (e1 > e2) t.cc:19:8: note: built-in candidate operator>(int, int) t.cc:19:8: note: built-in candidate operator>(double, int) t.cc:19:8: note: built-in candidate operator>(unsigned int, int) t.cc:19:8: note: built-in candidate operator>(unsigned long, int) t.cc:19:8: note: built-in candidate operator>(long double, int) t.cc:19:8: note: built-in candidate operator>(long long, int) t.cc:19:8: note: built-in candidate operator>(float, int) t.cc:19:8: note: built-in candidate operator>(unsigned long long, int) t.cc:19:8: note: built-in candidate operator>(long, int) t.cc:19:8: note: built-in candidate operator>(int, long) t.cc:19:8: note: built-in candidate operator>(int, long long) t.cc:19:8: note: built-in candidate operator>(int, unsigned int) t.cc:19:8: note: built-in candidate operator>(int, unsigned long) t.cc:19:8: note: built-in candidate operator>(int, unsigned long long) t.cc:19:8: note: built-in candidate operator>(int, float) t.cc:19:8: note: built-in candidate operator>(int, double) t.cc:19:8: note: built-in candidate operator>(int, long double) t.cc:19:8: note: built-in candidate operator>(unsigned long long, double) t.cc:19:8: note: built-in candidate operator>(unsigned long long, long double) t.cc:19:8: note: built-in candidate operator>(unsigned long long, float) t.cc:19:8: note: built-in candidate operator>(unsigned long long, unsigned long t.cc:19:8: note: built-in candidate operator>(unsigned long long, unsigned t.cc:19:8: note: built-in candidate operator>(float, long) t.cc:19:8: note: built-in candidate operator>(float, long long) t.cc:19:8: note: built-in candidate operator>(float, unsigned int) t.cc:19:8: note: built-in candidate operator>(float, unsigned long) t.cc:19:8: note: built-in candidate operator>(float, unsigned long long) t.cc:19:8: note: built-in candidate operator>(float, float) t.cc:19:8: note: built-in candidate operator>(unsigned long long, unsigned int) t.cc:19:8: note: built-in candidate operator>(float, double) t.cc:19:8: note: built-in candidate operator>(long double, long double) t.cc:19:8: note: built-in candidate operator>(long double, double) t.cc:19:8: note: built-in candidate operator>(long double, float) t.cc:19:8: note: built-in candidate operator>(long double, unsigned long long) t.cc:19:8: note: built-in candidate operator>(long double, unsigned 
long) t.cc:19:8: note: built-in candidate operator>(long double, unsigned int) t.cc:19:8: note: built-in candidate operator>(long double, long long) t.cc:19:8: note: built-in candidate operator>(long double, long) t.cc:19:8: note: built-in candidate operator>(double, long double) t.cc:19:8: note: built-in candidate operator>(double, double) t.cc:19:8: note: built-in candidate operator>(double, float) t.cc:19:8: note: built-in candidate operator>(double, unsigned long long) t.cc:19:8: note: built-in candidate operator>(double, unsigned long) t.cc:19:8: note: built-in candidate operator>(double, unsigned int) t.cc:19:8: note: built-in candidate operator>(double, long long) t.cc:19:8: note: built-in candidate operator>(double, long) t.cc:19:8: note: built-in candidate operator>(float, long double) t.cc:19:8: note: built-in candidate operator>(unsigned int, long) t.cc:19:8: note: built-in candidate operator>(long long, long double) t.cc:19:8: note: built-in candidate operator>(long long, double) t.cc:19:8: note: built-in candidate operator>(long long, float) t.cc:19:8: note: built-in candidate operator>(long long, unsigned long long) t.cc:19:8: note: built-in candidate operator>(long long, unsigned long) t.cc:19:8: note: built-in candidate operator>(long long, unsigned int) t.cc:19:8: note: built-in candidate operator>(long long, long long) t.cc:19:8: note: built-in candidate operator>(long long, long) t.cc:19:8: note: built-in candidate operator>(long, long double) t.cc:19:8: note: built-in candidate operator>(long, double) t.cc:19:8: note: built-in candidate operator>(long, float) t.cc:19:8: note: built-in candidate operator>(long, unsigned long long) t.cc:19:8: note: built-in candidate operator>(long, unsigned long) t.cc:19:8: note: built-in candidate operator>(long, unsigned int) t.cc:19:8: note: built-in candidate operator>(long, long long) t.cc:19:8: note: built-in candidate operator>(long, long) t.cc:19:8: note: built-in candidate operator>(unsigned long long, long long) t.cc:19:8: note: built-in candidate operator>(unsigned long long, long) t.cc:19:8: note: built-in candidate operator>(unsigned long, long double) t.cc:19:8: note: built-in candidate operator>(unsigned long, double) t.cc:19:8: note: built-in candidate operator>(unsigned long, float) t.cc:19:8: note: built-in candidate operator>(unsigned long, unsigned long t.cc:19:8: note: built-in candidate operator>(unsigned long, unsigned long) t.cc:19:8: note: built-in candidate operator>(unsigned long, unsigned int) t.cc:19:8: note: built-in candidate operator>(unsigned long, long long) t.cc:19:8: note: built-in candidate operator>(unsigned long, long) t.cc:19:8: note: built-in candidate operator>(unsigned int, long double) t.cc:19:8: note: built-in candidate operator>(unsigned int, double) t.cc:19:8: note: built-in candidate operator>(unsigned int, float) t.cc:19:8: note: built-in candidate operator>(unsigned int, unsigned long long) t.cc:19:8: note: built-in candidate operator>(unsigned int, unsigned long) t.cc:19:8: note: built-in candidate operator>(unsigned int, unsigned int) t.cc:19:8: note: built-in candidate operator>(unsigned int, long long) 1 error generated. Configure bugmail: http://llvm.org/bugs/userprefs.cgi?tab=email ------- You are receiving this mail because: ------- You are on the CC list for the bug. More information about the LLVMbugs mailing list
{"url":"http://lists.cs.uiuc.edu/pipermail/llvmbugs/2010-June/013338.html","timestamp":"2014-04-18T15:38:49Z","content_type":null,"content_length":"10618","record_id":"<urn:uuid:b2bbe986-214c-4fca-b3af-93af946abfec>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Difference map algorithm

The difference map algorithm is a search algorithm for general constraint satisfaction problems. It is a meta-algorithm in the sense that it is built from more basic algorithms that perform projections onto constraint sets. From a mathematical perspective, the difference map algorithm is a dynamical system based on a mapping of Euclidean space. Solutions are encoded as fixed points of the mapping. Although originally conceived as a general method for solving the phase problem, the difference map algorithm has been used for the boolean satisfiability problem, protein structure prediction, Ramsey numbers, diophantine equations, and Sudoku. Since these applications include NP-complete problems, the scope of the difference map is that of an incomplete algorithm. Whereas incomplete algorithms can efficiently verify solutions (once a candidate is found), they cannot prove that a solution does not exist.

The difference map algorithm is a generalization of two iterative methods: Fienup's hybrid input-output phase retrieval algorithm and the Douglas-Rachford algorithm for convex optimization. Iterative methods, in general, have a long history in phase retrieval and convex optimization. The use of this style of algorithm for hard, non-convex problems is a more recent development.

The problem to be solved must first be formulated as a set intersection problem in Euclidean space: find a point x in the intersection of the constraint sets A and B. Another prerequisite is an implementation of the projections P[A] and P[B] that, given an arbitrary input point x, return a point in the constraint set A or B that is nearest to x. One iteration of the algorithm is given by the mapping:

x → D(x) = x + β [P[A](f[B](x)) - P[B](f[A](x))] ,

f[A](x) = P[A](x) - (P[A](x) - x)/β ,

f[B](x) = P[B](x) + (P[B](x) - x)/β .

The real parameter β can have either sign; optimal values depend on the application and are determined through experimentation. As a first guess, the choice β = 1 (or β = -1) is recommended because it reduces the number of projection computations per iteration:

D(x) = x + P[A](2 P[B](x) - x) - P[B](x) .

The progress of the algorithm is monitored by inspecting the norm of the difference of the two projections:

Δ = | P[A](f[B](x)) - P[B](f[A](x)) | .

When this vanishes, at fixed points of the map, a point common to both constraint sets has been found and the algorithm is terminated. The set of fixed points in a particular application will normally have a large dimension, even when the solution set is a single point.

Example: logical satisfiability

Incomplete algorithms, such as stochastic local search, are widely used for finding satisfying truth assignments to boolean formulas. As an example of solving an instance of 2-SAT with the difference map algorithm, consider the following formula (~ indicates NOT):

(q[1] or q[2]) and (~q[1] or q[3]) and (~q[2] or ~q[3]) and (q[1] or ~q[2])

To each of the eight literals in this formula we assign one real variable in an eight-dimensional Euclidean space. The structure of the 2-SAT formula can be recovered when these variables are arranged in a table:

x[11] x[12]
(x[21]) x[22]
(x[31]) (x[32])
x[41] (x[42])

Rows are the clauses in the 2-SAT formula and literals corresponding to the same boolean variable are arranged in columns, with negation indicated by parentheses. For example, the real variables x[11], x[21] and x[41] correspond to the same boolean variable (q[1]) or its negation, and are called replicas. It is convenient to associate the values 1 and -1 with TRUE and FALSE rather than the traditional 1 and 0.
With this convention, the compatibility between the replicas takes the form of the following linear equations: x[11] = -x[21] = x[41] x[12] = -x[31] = -x[42] x[22] = -x[32] The linear subspace where these equations are satisfied is one of the constraint spaces, say A, used by the difference map. To project to this constraint we replace each replica by the signed replica average, or its negative: a[1] = (x[11] - x[21] + x[41]) / 3 x[11] → a[1] x[21] → -a[1] x[41] → a[1] The second difference map constraint applies to the rows of the table, the clauses. In a satisfying assignment, the two variables in each row must be assigned the values (1, 1), (1, -1), or (-1, 1). The corresponding constraint set, B, is thus a set of 3^4 = 729 points. In projecting to this constraint the following operation is applied to each row. First, the two real values are rounded to 1 or -1; then, if the outcome is (-1, -1), the larger of the two original values is replaced by 1. Examples: (-.2, 1.2) → (-1, 1) (-.2, -.8) → (1, -1) It is a straightforward exercise to check that both of the projection operations described minimize the Euclidean distance between input and output values. Moreover, if the algorithm succeeds in finding a point x that lies in both constraint sets we know (i) the clauses associated with x are all TRUE and (ii) the assignments to the replicas are consistent with a truth assignment to the original boolean variables. To run the algorithm one first generates an initial point x[0], say -0.5 -0.8 (-0.4) -0.6 (0.3) (-0.8) 0.5 (0.1) Using β = 1, the next step is to compute P[B](x[0]) : 1 -1 (1) -1 (1) (-1) 1 (1) This is followed by 2P[B](x[0]) - x[0], 2.5 -1.2 (2.4) -1.4 (1.7) (-1.2) 1.5 (1.9) and then projected onto the other constraint, P[A](2P[B](x[0]) - x[0]) : 0.53333 -1.6 (-0.53333) -0.1 (1.6) (0.1) 0.53333 (1.6) Incrementing x[0] by the difference of the two projections gives the first iteration of the difference map, D(x[0]) = x[1] : -0.96666 -1.4 (-1.93333) 0.3 (0.9) (0.3) 0.03333 (0.7) Here is the second iteration, D(x[1]) = x[2] : -0.3 -1.4 (-2.6) -0.7 (0.9) (-0.7) 0.7 (0.7) This is a fixed point: D(x[2]) = x[2]. The iterate is unchanged because the two projections agree. From P[B](x[2]) , 1 -1 (-1) 1 (1) (-1) 1 (1) we can read off the satisfying truth assignment: q[1] = TRUE, q[2] = FALSE, q[3] = TRUE. Chaotic dynamics In the simple 2-SAT example above, the norm of the difference map increment Δ decreased monotonically to zero in three iterations. This contrasts the behavior of Δ when the difference map is given a hard instance of 3-SAT, where it fluctuates strongly prior to the discovery of the fixed point. As a dynamical system the difference map is believed to be chaotic, and that the space being searched is a strange attractor. Phase retrieval In phase retrieval a signal or image is reconstructed from the modulus (absolute value, magnitude) of its discrete Fourier transform. For example, the source of the modulus data may be the Fraunhofer diffraction pattern formed when an object is illuminated with coherent light. The projection to the Fourier modulus constraint, say P[A], is accomplished by first computing the discrete Fourier transform of the signal or image, rescaling the moduli to agree with the data, and then inverse transforming the result. 
This is a projection, in the sense that the Euclidean distance to the constraint is minimized, because (i) the discrete Fourier transform, as a unitary transformation, preserves distance, and (ii) rescaling the modulus (without modifying the phase) is the smallest change that realizes the modulus constraint. To recover the unknown phases of the Fourier transform the difference map relies on the projection to another constraint, P[B]. This may take several forms, as the object being reconstructed may be known to be positive, have a bounded support, etc. In the reconstruction of the Wikipedia logo, for example, the effect of the projection P[B] was to zero all values outside a rectangular support and also to zero all negative values within the support.
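To make the phase-retrieval description concrete, here is a minimal numpy sketch of the difference-map iteration in the simplified β = 1 form. It is my own illustration: the 32×32 test object, its rectangular support, and the iteration count are arbitrary, and convergence is not guaranteed for a general instance.

# Difference-map update D(x) = x + P_A(2*P_B(x) - x) - P_B(x) for phase retrieval.
import numpy as np

rng = np.random.default_rng(0)

# Hidden test object: non-negative and confined to a small rectangular support
true_obj = np.zeros((32, 32))
true_obj[8:20, 10:22] = rng.random((12, 12))
fourier_modulus = np.abs(np.fft.fft2(true_obj))      # the "measured" data
support = np.zeros((32, 32), dtype=bool)
support[8:20, 10:22] = True

def P_A(x):
    """Fourier-modulus projection: keep the phases, rescale the moduli to the data."""
    F = np.fft.fft2(x)
    F = fourier_modulus * np.exp(1j * np.angle(F))
    return np.fft.ifft2(F).real

def P_B(x):
    """Support/positivity projection: zero outside the support and clip negatives."""
    return np.maximum(np.where(support, x, 0.0), 0.0)

x = rng.random((32, 32))                              # random starting point
for _ in range(200):
    pb = P_B(x)
    pa = P_A(2 * pb - x)                              # beta = 1 shortcut
    x += pa - pb
    delta = np.linalg.norm(pa - pb)                   # the monitored difference
print("final delta:", delta)                          # small when a fixed point is near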
{"url":"http://www.reference.com/browse/wiki/Difference_map_algorithm","timestamp":"2014-04-19T12:43:00Z","content_type":null,"content_length":"92555","record_id":"<urn:uuid:baac448b-7f14-433b-a558-01400bd46891>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Simulating the Gambler’s Ruin
April 14, 2013 By Wesley

The gambler’s ruin problem is one where a player has a probability p of winning and probability q of losing. For example, let’s take a skill game where player x can beat player y with probability 0.6 by getting closer to a target. The game play begins with player x being allotted 5 points and player y allotted 10 points. After each round a player’s points either decrease by one or increase by one. We can then determine the probability that player x will annihilate player y. The player that reaches 15 wins and the player that reaches zero is annihilated. There is a wide range of applications for this type of problem that goes beyond gambling.

This is actually a fairly simple problem to solve on pencil and paper and to determine an exact probability. Without going into too much detail, we can determine the probability of annihilation by $\frac{1-\left(\frac{q}{p}\right)^i}{1-\left(\frac{q}{p}\right)^N}$. In this example it works out to be $\frac{1-\left(\frac{.4}{.6}\right)^5}{1-\left(\frac{.4}{.6}\right)^{15}} \approx 0.8703$. But this is a relatively boring approach and coding up an R script makes everything that much better. So here is a simulation of this same problem estimating that same probability, plus it provides additional information on the distribution of how many turns the game would take.

gen.ruin = function(n, x.cnt, y.cnt, x.p){
  x.cnt.c = x.cnt
  y.cnt.c = y.cnt
  x.rnd = rbinom(n, 1, p=x.p)
  x.rnd[x.rnd==0] = -1
  y.rnd = x.rnd*-1
  x.cum.sum = cumsum(x.rnd)+x.cnt
  y.cum.sum = cumsum(y.rnd)+y.cnt
  ruin.data = cumsum(x.rnd)+x.cnt

  if( any( which(ruin.data>=x.cnt+y.cnt) ) | any( which(ruin.data<=0) ) ){
    cut.data = 1+min( which(ruin.data>=x.cnt+y.cnt), which(ruin.data<=0) )
    ruin.data[cut.data:length(ruin.data)] = 0
  }
  return(ruin.data)
}

n.reps = 10000
ruin.sim = replicate(n.reps, gen.ruin(n=1000, x.cnt=5, y.cnt=10, x.p=.6))
ruin.sim[ruin.sim==0] = NA

hist( apply(ruin.sim==15 | is.na(ruin.sim), 2, which.max), nclass=100, col='8',
      main="Distribution of Number of Turns", xlab="Turn Number")
abline(v=mean(apply(ruin.sim==15 | is.na(ruin.sim), 2, which.max)), lwd=3, col='red')
abline(v=median(apply(ruin.sim==15 | is.na(ruin.sim), 2, which.max)), lwd=3, col='green')

x.annihilation = apply(ruin.sim==15, 2, which.max)
( prob.x.annilate = length(x.annihilation[x.annihilation!=1]) / n.reps )

state.cnt = ruin.sim
state.cnt[state.cnt!=15] = 0
state.cnt[state.cnt==15] = 1

mean.state = apply(ruin.sim, 1, mean, na.rm=T)
plot(mean.state, xlim=c(0,which.max(mean.state)), ylim=c(0,20), ylab="Points",
     xlab="Number of Plays", pch=16, cex=.5, col='green')
lines(mean.state, col='green')
points(15-mean.state, pch=16, cex=.5, col='blue')
lines(15-mean.state, col='blue')
{"url":"http://www.r-bloggers.com/simulating-the-gamblers-ruin/","timestamp":"2014-04-16T22:26:19Z","content_type":null,"content_length":"38060","record_id":"<urn:uuid:676b4465-afe7-4754-ba07-6991e5401205>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: DECRYPTION PROCESSOR AND DECRYPTION PROCESSING METHOD Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A decryption processor for calculating a plaintext through decryption of a ciphertext c includes, a first part that calculates m' through modular exponentiation modulo a first prime number p wherein an exponent is a shifted value of d (mod (p-1)), and a base is a value of c (mod p); a second modular exponentiation part that calculates m' through modular exponentiation modulo a second prime number q, wherein an exponent is a value of d (mod (q-1)) and a base is a value of c (mod q); a composition part that calculates m through calculation of ((u×(m' ) (mod q))×p+m' by using the values m' and m' and a private key u corresponding to p (mod q); and a shift release part that calculates the plaintext m through calculation of m (mod n)) (mod n) by using the value m A decryption processor for calculating a plaintext m through decryption of a ciphertext c by using a first prime number p, a second prime number q, a public key e, and a private key d, the decryption processor comprising:a first modular exponentiation part that calculates a value m' through modular exponentiation modulo the first prime number p, wherein an exponent is a value obtained by shifting, with a numerical value s, a value d calculated in accordance with d (mod (p-1)), and a base is a value c calculated in accordance with c (mod p);a second modular exponentiation part that calculates a value m' through modular exponentiation modulo the second prime number q, wherein an exponent is a value obtained by shifting, with the numerical value s, a value d calculated in accordance with d (mod (q-1)), and a base is a value c calculated in accordance with c (mod q);a composition part that calculates a value m through calculation of ((u×(m' ) (mod q))×p+m' by using the values m' and m' calculated respectively by the first modular exponentiation part and the second modular exponentiation part and a private key u corresponding to a calculation result of p (mod q); anda shift release part that calculates the plaintext m through calculation of m (mod n)) (mod n) by using the value m calculated by the composition part. The decryption processor according to claim 1,wherein the first modular exponentiation part, the second modular exponentiation part, and the shift release part use a random number of two bits or less as the numerical value s. The decryption processor according to claim 1,wherein the first modular exponentiation part, the second modular exponentiation part, and the shift release part use a constant of two bits or less as the numerical value s. The decryption processor according to claim 1,wherein the shift release part calculates the plaintext m by calculating c (mod n) by a left-to-right binary method. The decryption processor according to claim 1,wherein the shift release part calculates the plaintext m by a right-to-left binary method. 
A computer readable medium recording a program causing a computer to execute a decryption processing method for calculating a plaintext m through decryption of a ciphertext c by using a first prime number p, a second prime number q, a public key e, and a private key d, the method comprising:calculating a value m' through modular exponentiation modulo the first prime number p, wherein an exponent is a value obtained by shifting, with a numerical value s, a value d calculated in accordance with d (mod (p-1)), and a base is a value c calculated in accordance with c (mod p);calculating a value m' through modular exponentiation modulo the second prime number q, wherein an exponent is a value obtained by shifting, with the numerical value s, a value d calculated in accordance with d (mod (q-1)), and a base is a value c calculated in accordance with c (mod q);calculating a value m through calculation of ((u×(m' ) (mod q))×p+m' by using the values m' and m' calculated respectively by the value m' calculation step and the value m' calculation step, and a private key u corresponding to a calculation result of p (mod q); andcalculating the plaintext m through calculation of m (mod n)) (mod n) by using the value m calculated in the value m calculation step. The computer readable medium according to claim 6, wherein the value m' calculation step, the value m' calculation step, and the plaintext m calculation step use a random number of two bits or less as the numerical value s. The computer readable medium according to claim 6, wherein the value m' calculation step, the value m' calculation step, and the plaintext m calculation step use a constant of two bits or less as the numerical value s. The computer readable medium according to claim 6, wherein the plaintext m calculation step uses a left-to-right binary method to calculate c (mod n). The computer readable medium according to claim 6, wherein the plaintext m calculation step uses a right-to-left binary method. A method for calculating a plaintext m through decryption of a ciphertext c by using a first prime number p, a second prime number q, a public key e, and a private key d, to be executed by a computer, the method comprising:calculating a value m' through modular exponentiation modulo the first prime number p, wherein an exponent is a value obtained by shifting, with a numerical value s, a value d calculated in accordance with d (mod (p-1)), and a base is a value c calculated in accordance with c (mod p);calculating a value m' through modular exponentiation modulo the second prime number q, wherein an exponent is a value obtained by shifting, with the numerical value s, a value d calculated in accordance with d (mod (q-1)), and a base is a value c calculated in accordance with c (mod q);calculating a value m through calculation of ((u×(m' ) (mod q))×p+m' by using the values m' and m' calculated respectively by the value m' calculation step and the value m' calculation step, and a private key u corresponding to a calculation result of p (mod q); andcalculating the plaintext m through calculation of m (mod n)) (mod n) by using the value m calculated by the value m calculation step. The method according to claim 11, wherein the value m' calculation step, the value m' calculation step, and the plaintext m calculation step use a random number of two bits or less as the numerical value s. 
The method according to claim 11, wherein the value m' calculation step, the value m' calculation step, and the plaintext m calculation step use a constant of two bits or less as the numerical value s. The method according to claim 11, wherein the plaintext m calculation step uses a left-to-right binary method to calculate c (mod n). The method according to claim 11, wherein the plaintext m calculation step uses a right-to-left binary method. This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2009-8464, filed on Jan. 19, 2009, the entire contents of which are incorporated herein by reference. FIELD [0002] Embodiments discussed herein are related to a decryption processor and a decryption processing method. BACKGROUND [0003] The cryptosystem is roughly divided into a common key cryptosystem and a public key cryptosystem. In the system designated as the common key cryptosystem, the same key (secret key) is used for encryption and decryption, and the security is retained by keeping the secret key as information unknown to a third party other than a transmitter and a receiver. In the public key cryptosystem, different keys are used for encryption and decryption, and the security is retained by keeping a key (private key) used for decryption of a ciphertext as secret information of a receiver alone while a key (public key) used for encryption is open to the public. One of techniques of the field of cryptography is decryption technique. The decryption technique is a technique of guessing secret information such as a secret key on the basis of available information such as a ciphertext, and there are various methods for the decryption technique. One method in the spotlight recently is designated as power analysis attack (hereinafter referred to as "PA"). The PA is a method developed by Paul Kocher in 1998, in which power consumption data obtained by providing various input data to an encryption device included in a smartcard or the like is collected and analyzed so as to guess key information stored in the encryption device. It is known that a secret key of both the common key cryptosystem and the public key cryptosystem may be guessed from an encryption device by employing the PA. There are two kinds of PA, that is, single power analysis (hereinafter referred to as "SPA") and differential power analysis (hereinafter referred to as "DPA"). The SPA is a method for guessing a secret key on the basis of the feature of single power consumption data of an encryption device, and the DPA is a method for guessing a secret key by analyzing differences among a large number of pieces of power consumption data. At this point, an RSA cryptosystem will be described. The RSA cryptosystem security is based on difficulty of prime factorization. Although it is easy to calculate a composite number n=p×q on the basis of two prime numbers p and q of 1024 bits each, it is difficult to obtain the prime factors p and q on the basis of the composite number n alone (i.e., prime factorization is difficult), which is the premise of the security of the RSA cryptosystem. The RSA cryptosystem has two functions of encryption and decryption. Two kinds of decryptions are known: one is decryption not using Chinese remainder theorem (hereinafter referred to as "CRT") (i.e., decryption without the CRT) and the other is decryption using the CRT (i.e., decryption with the CRT). 
The encryption, the decryption without the CRT and the decryption with the CRT are respectively illustrated in FIGS. 13, 14 and 15. The encryption process and the decryption process without the CRT respectively illustrated in FIGS. 13 and 14 are very simple. In the encryption process, a ciphertext c is output through modular exponentiation of c:=m^e (mod n) modulo a composite number n, wherein the base is a plaintext m and the exponent is a public key e. In the decryption without the CRT, a plaintext m is output through modular exponentiation of m:=c^d (mod n) modulo the composite number n, wherein the base is a ciphertext c and the exponent is a private key d. Incidentally, the private key d has a value satisfying a relationship with the public key e of e×d=1 (mod (p-1)(q-1)). With respect to the calculation of modular exponentiation, a plurality of calculation algorithms are known including a binary method and a window method, and resistance to the SPA or the DPA depends upon the algorithm to be employed. The decryption with the CRT is a rapid version of the algorithm attained by reducing the amount of computation of the decryption without the CRT. In general, the amount of computation of the modular exponentiation is in proportion to (bit length of exponent)×(bit length of modulus)×(bit length of modulus). For example, with respect to the RSA cryptosystem wherein each of the prime factors p and q is a 1024-bit value and the composite number n is a 2048-bit value, the bit length of the private key d is 2048 bits. This is because e×d=1 (mod (p-1)(q-1)), namely, d=e^(-1) (mod (p-1)(q-1)), and the private key d has a value satisfying 0<d<(p-1)×(q-1), and therefore, the bit length of the private key d is equal to that of (p-1)(q-1), namely, 2048 bits. In this case, the necessary amount of computation of the modular exponentiation is 2048×2048×2048=8589934592. In general, the bit length of an exponent is substantially the same as the bit length of a modulus in the RSA decryption without the CRT. In other words, the amount of computation of the RSA decryption without the CRT is in proportion to the third power of the bit length of the modulus. On the contrary, the decryption with the CRT illustrated in FIG. 15 is known to have the amount of computation reduced to 1/4 of that of the decryption without the CRT. The decryption with the CRT includes the following three stages of CRT-1, CRT-2 and CRT-3:
CRT-1: Modular reduction of a ciphertext c modulo p or q (steps 301 and 302 of FIG. 15);
CRT-2: Modular exponentiation modulo p or q (steps 303 and 304 of FIG. 15);
CRT-3: Calculation of the result of the modular exponentiation modulo n based on the results of the modular exponentiation modulo p and q (CRT composition) (step 305 of FIG. 15).
Most (95% or more) of the decryption with the CRT corresponds to the modular exponentiation of the stage CRT-2, which is modular exponentiation modulo a prime number p or q wherein a base is c_p=c (mod p) or c_q=c (mod q) and an exponent is a private key d_p=d (mod (p-1)) or d_q=d (mod (q-1)). The bit length of the modulus p or q is a half of that of the composite number n, namely, 1024 bits, and the bit length of the exponent d_p or d_q is also a half of that of the private key d, namely, 1024 bits. Accordingly, the amount of computation of the modular exponentiation to be performed at step 303 or 304 is 1024×1024×1024=1073741824, which is 1/8 of the amount of computation of the modular exponentiation for the bit length of 2048 bits. Since the processing with the 1/8 amount of computation is repeated twice, the amount of computation of the decryption with the CRT is 1/8×2=1/4 of the amount of computation attained without the CRT. When the decryption with the CRT is employed, the amount of computation may be reduced to one fourth of that attained by the decryption without the CRT, namely, an operation speed four times as high as that attained by the decryption without the CRT may be realized.
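As a point of reference for the discussion that follows, the three stages CRT-1 through CRT-3 can be summarized in the following minimal sketch; the variable names and the toy key values are ours and are chosen only for illustration, whereas a real implementation works on 1024-bit primes:

```python
# Minimal sketch of RSA decryption with the CRT (FIG. 15), toy parameters only.
p, q = 61, 53                        # first and second prime numbers (illustrative)
n = p * q                            # public modulus
e = 17                               # public key
d = pow(e, -1, (p - 1) * (q - 1))    # private key, e*d = 1 (mod (p-1)(q-1))

d_p, d_q = d % (p - 1), d % (q - 1)  # CRT exponents d_p, d_q
u = pow(p, -1, q)                    # private key u = p^(-1) (mod q)

def crt_dec(c):
    c_p, c_q = c % p, c % q                      # CRT-1: modular reduction of the ciphertext
    m_p = pow(c_p, d_p, p)                       # CRT-2: modular exponentiation modulo p
    m_q = pow(c_q, d_q, q)                       #        and modulo q
    return (((u * (m_q - m_p)) % q) * p + m_p) % n   # CRT-3: CRT composition (Garner)

m = 1234                             # plaintext
c = pow(m, e, n)                     # encryption c := m^e (mod n)
assert crt_dec(c) == m               # decryption with the CRT recovers m
```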
On the other hand, the decryption with the CRT has a disadvantage that it includes a large number of operations using the prime number p or q, as illustrated in FIG. 15. Since the security of the RSA cryptosystem is based on the difficulty of the prime factorization of n=p×q, the RSA cryptosystem loses its security if the value of the prime number p or q is revealed to an attacker. Since the power consumption tends to be correlated with the prime number p or q in such operation processing using the prime number p or q, there is a problem that the prime number p or q is easily revealed through the PA. The PA is known as a means for an attacker to attack an encryption device implementing the decryption without the CRT of FIG. 14 or the decryption with the CRT of FIG. 15, that is, processing using a private key, for obtaining a private key d, d_p, d_q, p, or q. Now, the conventionally known SPA or DPA attacks against the decryption of FIG. 14 or 15 will be described. (Power Analysis Attack) (Outline of SPA) At this point, the outline of the SPA will be described. The SPA is an attack made for guessing a private key used in an encryption device by using information obtained through observation of a single power waveform. This is an effective attack against an encryption device in which there is correlation between the content of the encryption and the shape of the power consumption waveform. (Power Analysis Attack 1 using SPA (targeting decryption with CRT): Attack 1) Now, the power analysis attack using the SPA targeting the decryption with the CRT (hereinafter referred to as the attack 1) will be described. An SPA attack targeting the decryption with the CRT illustrated in FIG. 15 is disclosed in Japanese Patent No. 4086503. The disclosed attack targets the remainder processing with the prime number p or q performed at step 301 or 302. Whether or not the attack succeeds depends upon the implementation form of the remainder processing of step 301 or 302. In the implementation form against which the attack succeeds, when Z=X mod Y is to be calculated, X and Y are compared with each other; when X<Y, X is output as the remainder result Z, and merely when X≧Y, the modular reduction Z=X (mod Y) is calculated and output, as described below. As a premise for the attack disclosed in Japanese Patent No. 4086503 to hold, the encryption device should perform the decryption with the CRT by employing this implementation. Specifically, the following processing is performed in the operation of Z=X (mod Y) in this method:
if (X<Y) then output X as Z;
if (X≧Y) then calculate Z=X (mod Y) and output Z
(this processing is hereinafter designated as "processing MOD_ALG"). In the processing MOD_ALG, the input X and the modulus Y are compared with each other, and the modular reduction is not executed when X<Y, while the modular reduction is executed merely when X≧Y. In other words, it is determined whether or not the modular reduction is to be executed in accordance with the relationship in magnitude between X and Y.
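The processing MOD_ALG can be sketched as follows (the function name is ours); the point exploited by the attack described next is that whether the remainder operation is executed at all depends on the comparison between X and Y:

```python
def mod_alg(X, Y):
    # Processing MOD_ALG: compare the input X with the modulus Y first.
    if X < Y:
        return X          # no reduction is executed; X is output as Z unchanged
    else:
        return X % Y      # the remainder operation Z = X (mod Y) is executed
# The attack 1 assumes that the two branches consume visibly different power,
# so a single trace of Z := c (mod p) reveals whether c < p or c >= p.
```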
If the attacker can observe the execution of the modular reduction by using the power consumption, the relationship in magnitude between X and Y, that is, internal data of the encryption device, may be known from the power consumption. When this property is applied to step 301 or 302 of FIG. 15, the attacker can reveal the prime number p or q. At step 301 or 302, the remainder processing with the prime number p or q is performed on the input ciphertext c. It is noted that, in an implementation in an encryption device such as a smartcard, although the private key (d_p, d_q, p, q or u) is a value that is held within the device and cannot be externally input, the ciphertext c is a value that may be externally input by a third party. In other words, the attacker can determine whether c<p or c≧p with respect to the controllable ciphertext c by observing the power consumption in the remainder processing of step 301 or 302. When such a determination is possible, the prime number p can be easily obtained by using the dichotomizing search illustrated in FIG. 16. FIG. 16 illustrates an algorithm for narrowing the candidate values of the prime number p by repeatedly halving the difference between the maximum value p_max and the minimum value p_min of the range bracketing p-ε, with the minimum value of p-ε held as p_min and the maximum value of p-ε held as p_max. In the above, ε is a parameter corresponding to the maximum value of the decision error occurring in the power analysis, and ε≧0. The magnitude of the parameter ε depends upon the attacking method to be employed. The parameter ε changes in accordance with the means for determining at step 404 whether or not p_mid+ε<p. As the means for this determination, when it is determined whether p_mid<p or p_mid≧p by executing the SPA against the decryption with the CRT with CRT_DEC(p_mid) input, ε=0. When the DPA described below is employed, the parameter ε has a value of approximately 1000. As illustrated at step 401, p_min is initialized to an initial value of 0 and p_max is initialized to an initial value of 2^α (wherein α is the bit length of the prime number p). Thereafter, in a loop of steps 402 through 407, processing for narrowing the range of the prime number p by halving the difference between p_min and p_max is performed. This narrowing processing is performed by calculating the median value p_mid of p_min and p_max and determining the relationship in magnitude between p_mid and p through the attack using the power consumption. As illustrated at step 403, the median value p_mid of p_min and p_max is given as p_mid:=(p_min+p_max)/2. It is determined whether or not p_mid+ε<p with respect to the thus given value p_mid by the attack using the power consumption. When p_mid+ε<p is true, it means that p_mid<p-ε. Therefore, while keeping p_max as the maximum value, p_mid is set as the new minimum value of p-ε, and hence, processing of p_min:=p_mid is performed at step 405. When p_mid+ε<p is false, it means that p_mid≧p-ε. Therefore, while keeping p_min as the minimum value, p_mid is set as the new maximum value of p-ε, and hence, processing of p_max:=p_mid is performed (whereas the symbol ":=" means that the result of the right side is substituted in the left side). By repeating the above-described processing, the difference between the maximum value p_max and the minimum value p_min is repeatedly halved, and when the difference is as small as p_max-p_min≦π as illustrated at step 402, it is determined that the range of the prime number p has been sufficiently narrowed, and candidate values of the prime number p are output.
At step 408, processing for determining the maximum value and the minimum value of the prime number p on the basis of the range of p-ε sufficiently narrowed (to a difference of not more than π) is performed. In the case where p_min≦p-ε≦p_max when ε≧0, the minimum value of the prime number p is p_min and the maximum value is p_max+ε, and therefore, the processing of p_max:=p_max+ε is executed with respect to the maximum value p_max of the prime number p. As a result of the processing, p_min≦p≦p_max holds. At step 409, [p_min, p_min+1, . . . , p_max] are output as the candidate values of the prime number p and the processing is terminated. Since the number of candidate values of the prime number p is halved every time the loop of steps 403 through 407 is executed, the repetition of the loop terminates in calculation time in proportion to α. For example, when the prime number p has a bit length of 1024 bits, the number of repetitions of the loop is 1024 at most, and thus, the prime number p can be very efficiently obtained.
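This dichotomizing search can be sketched as follows; `leak_is_less` stands for the power-analysis decision of step 404 (our naming for an idealized oracle with error parameter ε), and it is simulated here by a direct comparison only so that the sketch runs:

```python
def narrow_p(leak_is_less, alpha, eps, pi=16):
    # FIG. 16: narrow the candidate range of the prime p by dichotomizing search.
    # pi is the threshold below which the range is considered small (our choice).
    p_min, p_max = 0, 2 ** alpha          # step 401: initial range
    while p_max - p_min > pi:             # step 402: loop until the range is small
        p_mid = (p_min + p_max) // 2      # step 403: median value
        if leak_is_less(p_mid):           # step 404: decide p_mid + eps < p by SPA/DPA
            p_min = p_mid                 # step 405: p_mid lies below p - eps
        else:
            p_max = p_mid                 # otherwise p_mid >= p - eps
    p_max += eps                          # step 408: widen by the decision error
    return list(range(p_min, p_max + 1))  # step 409: candidate values of p

# Illustration with a toy "prime" and an error-free oracle (eps = 0, as in the SPA case):
secret_p = 104729
candidates = narrow_p(lambda x: x < secret_p, alpha=17, eps=0)
assert secret_p in candidates and len(candidates) <= 17
```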
(Outline of DPA) Next, the DPA will be described. The DPA is an attack for guessing a private key used in an encryption device by observing a plurality of power waveforms and obtaining differences among the plural power waveforms. The DPA is effective in an environment where there is correlation between the data read/written in an encryption device and the power consumed in the read/write. It is known in general that power consumption has a property of increasing in proportion to the number of one's (1's) included in the binary data read/written in an encryption device. In the DPA, this property is used for obtaining a private key. (Power Analysis Attack 1 using DPA (targeting decryption without CRT): Attack 2) Now, the power analysis attack using the DPA targeting the decryption without the CRT (hereinafter designated as the attack 2) will be described. Among the attacks using the DPA against the RSA cryptosystem, the most popularly known method is an attack for obtaining an exponent d by measuring the power consumption in executing the modular exponentiation of c^d (mod n). This attack is effective against the decryption without the CRT illustrated in FIG. 14. When such a private key is revealed to an attacker, an arbitrary ciphertext can be decrypted, and hence, the security of the RSA cryptosystem cannot be retained. In other words, the private key d is a significant property to be protected from the attack by the SPA or the DPA, similarly to the prime numbers p and q. In order to make this attack succeed, the attacker is required to know the processing method of the modular exponentiation algorithm executed within the encryption device. The types of processing methods of the modular exponentiation algorithm are basically roughly divided into the binary method and the window method, and the types are very limited; therefore, even when all the attacking methods imaginable with respect to each type of modular exponentiation algorithm are tried, it takes merely several times as much effort for the attacker, and hence, this requirement does not cause a serious problem for the attacker. Assuming that the modular exponentiation algorithm implemented in an encryption device is the window method and that the attacker knows it, the attacking method for obtaining an exponent d on the basis of the power consumption in the modular exponentiation of c^d (mod n) will be described. Although the window method is exemplarily employed in the following description, the DPA is also effective against other modular exponentiation algorithms such as the binary method. At this point, an operation by the window method and the DPA attack against the window method will be described. The modular exponentiation is a process for calculating v satisfying a relationship of v=c^d (mod n) among an exponent d, a base c and a modulus n. As an algorithm for efficiently performing this process, the window method is known. Assuming that the binary expression of the exponent d is expressed as d=(d_{u-1}, d_{u-2}, . . . , d_0)_2, FIG. 17 illustrates an algorithm of the modular exponentiation for calculating m=c^d (mod n) by the window method. The outline of the operation performed in FIG. 17 is illustrated in FIG. 18. The operation of FIG. 17 will now be described. First, processing of creating a table w satisfying a relationship of w[x]=c^x (mod n) is performed for 0≦x<2^k. After creating the table, u/k block values b_i (i=0, 1, . . . ) are created by dividing d=(d_{u-1}, d_{u-2}, . . . , d_0)_2 of u bits by every k bits, namely, blocks b_i=(d_{ik+k-1}, . . . , d_{ik})_2. Table lookup processing using each block b_i (m:=m×w[b_i] (mod n)) and k-fold squaring processing (m:=m^(2^k) (mod n)) are repeated for calculating m=c^d (mod n). Now, a method in which an attacker guesses the exponent d used within an encryption device employing the window method by using the DPA will be described. In the RSA cryptosystem, the exponent d is a private key and is a significant property to be protected from an attacker. Since the exponent d generally has a value of 1024 or more bits, if the value is to be obtained by a brute force approach, it takes 2^1024 efforts and hence is impossible. In the DPA, however, attention is paid to the processing of the window method for dividing the exponent d by every k bits. For example, in the processing illustrated in FIG. 18, the exponent d is divided into blocks b_i by every 4 bits, and the intermediate data of each block b_i, that is, m:=m×w[b_i] (mod n), is calculated. Since the value of m:=m×w[b_i] is read/written as internal data of the encryption device, the attacker can obtain information on the block b_i by measuring the power consumption in reading/writing the calculation result m of m:=m×w[b_i]. The block b_i is data as small as k bits (which is 4 bits in the exemplary case of FIG. 18), and therefore, when the brute force approach to the k-bit value b_i is repeated with respect to all the bit values of the exponent d, the attacker can efficiently obtain the value of the exponent d. For example, when k=2 and d is a 2048-bit value, the exponent d is divided into 1024 2-bit blocks b_i; there is no need for the attacker to execute the brute force approach to all 2048 bit values, but merely 2-bit, namely, four, kinds of brute force approaches are repeated 1024 times, so that the number of necessary efforts is 4×1024=4096 alone, and thus, the value of the exponent d can be efficiently obtained. In the brute force approach with respect to every k bits, it is necessary for the attacker to select the correct value out of the 2^k candidate values by the DPA, and the method for selecting the correct value will now be described.
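As a reference point, the window-method exponentiation of FIG. 17 that this attack targets can be sketched as follows (the function and variable names are ours, k is the window width, and no countermeasure is applied):

```python
def window_exp(c, d, n, k=2):
    # Precompute the table w[x] = c^x (mod n) for 0 <= x < 2^k.
    w = [pow(c, x, n) for x in range(2 ** k)]
    # Split the exponent d into k-bit blocks b_i, most significant block first.
    u = max(1, (d.bit_length() + k - 1) // k)
    blocks = [(d >> (i * k)) & (2 ** k - 1) for i in reversed(range(u))]
    m = 1
    for b_i in blocks:
        m = pow(m, 2 ** k, n)      # k squarings: m := m^(2^k) (mod n)
        m = (m * w[b_i]) % n       # table lookup: m := m * w[b_i] (mod n)
    return m                       # m = c^d (mod n)

assert window_exp(7, 2753, 3233) == pow(7, 2753, 3233)
# The DPA described below guesses each k-bit block b_i from the power consumed
# when the intermediate value m := m * w[b_i] (mod n) is written back.
```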
For example, when k=2 and d=(d_5, d_4, d_3, d_2, d_1, d_0)_2, the divided blocks are b_2=(d_5, d_4)_2, b_1=(d_3, d_2)_2 and b_0=(d_1, d_0)_2, and in the modular exponentiation by the window method illustrated in FIG. 17, m=c^d (mod n) is calculated through the following processing 1 through processing 5:
Processing 1: m:=w[b_2] (mod n)=c^(b_2) (mod n)
Processing 2: m:=m^4 (mod n)=c^(4×b_2) (mod n)
Processing 3: m:=m×w[b_1] (mod n)=c^(4×b_2+b_1) (mod n)
Processing 4: m:=m^4 (mod n)=c^(16×b_2+4×b_1) (mod n)
Processing 5: m:=m×w[b_0] (mod n)=c^(16×b_2+4×b_1+b_0) (mod n)=c^d (mod n)
If the attacker knows that the encryption device implements the window method, the attacker also knows that the aforementioned processing 1 through 5 are performed in the encryption device. Therefore, the values of b_2, b_1 and b_0 are guessed one by one through the DPA performed as follows, so as to guess the value of the exponent d: 501: The encryption device is provided with N values a_i (wherein i is 1, 2, . . . , N) as bases so as to cause it to calculate a_i^d (mod n). The data of the power consumed in the device at this point, i.e., the power consumption data P(a_i, time), is measured with respect to each value of i. 502: The 2-bit value b_2 is predicted as a value b'_2, and the following procedures (1) and (2) are repeated until it is determined that b_2=b'_2. (1) With attention paid to the intermediate data of the processing 1, a value of m=a_i^(b'_2) (mod n) is simulated on the basis of the predicted value b'_2, and the data P(a_i, time) (wherein i=1, 2, . . . , N) is classified into two sets G_1 and G_0: G_1=[P(a_i, time) | least significant bit of a_i^(b'_2) (mod n)=1], G_0=[P(a_i, time) | least significant bit of a_i^(b'_2) (mod n)=0]. (2) A power difference curve Δ expressed as Δ=(average power of G_1)-(average power of G_0) is created on the basis of the sets G_1 and G_0. As a result, for example, in a time-power curve as illustrated in FIG. 19(A), when a spike as illustrated in FIG. 19(B) appears, it is determined that b_2=b'_2 (namely, b_2 is successfully guessed), and when a substantially even curve as illustrated in FIG. 19(C) is obtained, it is determined that b_2≠b'_2. 503: The 2-bit value b_1 is predicted as a value b'_1, and the following procedures (1) and (2) are repeated until it is determined that b_1=b'_1. (1) With attention paid to the intermediate data of the processing 3, a value of m=a_i^(4×b_2+b'_1) (mod n) is simulated on the basis of the previously guessed value b_2 and the predicted value b'_1, and the data P(a_i, time) (wherein i=1, 2, . . . , N) is classified into two sets G_1 and G_0: G_1=[P(a_i, time) | least significant bit of a_i^(4×b_2+b'_1) (mod n)=1], G_0=[P(a_i, time) | least significant bit of a_i^(4×b_2+b'_1) (mod n)=0]. (2) A power difference curve Δ expressed as Δ=(average power of G_1)-(average power of G_0) is created on the basis of the sets G_1 and G_0. As a result, when a spike as illustrated in FIG. 19(B) appears, it is determined that b_1=b'_1 (namely, b_1 is successfully guessed), and when a substantially even curve as illustrated in FIG. 19(C) is obtained, it is determined that b_1≠b'_1. 504: The 2-bit value b_0 is predicted as a value b'_0, and the following procedures (1) and (2) are repeated until it is determined that b_0=b'_0. (1) With attention paid to the intermediate data of the processing 5, a value of m=a_i^(16×b_2+4×b_1+b'_0) (mod n) is simulated on the basis of the previously guessed values b_2 and b_1 and the predicted value b'_0, and the data P(a_i, time) (wherein i=1, 2, . . . , N) is classified into two sets G_1 and G_0: G_1=[P(a_i, time) | least significant bit of a_i^(16×b_2+4×b_1+b'_0) (mod n)=1], G_0=[P(a_i, time) | least significant bit of a_i^(16×b_2+4×b_1+b'_0) (mod n)=0]. (2) A power difference curve Δ expressed as Δ=(average power of G_1)-(average power of G_0) is created on the basis of the sets G_1 and G_0. As a result, when a spike as illustrated in FIG. 19(B) appears, it is determined that b_0=b'_0 (namely, b_0 is successfully guessed), and when a substantially even curve as illustrated in FIG.
19(C) is obtained, it is determined that b When b is correctly predicted, the value of m simulated by the attacker is generated also in the encryption device to be read/written, and therefore, since a differential power waveform in which the numbers of zero's (0's) and one's (1's) included in the value m are extremely biased between the sets G and G as in the aforementioned sets G and G is created, there arises a difference in the power consumption, and this difference in the power consumption is observed as a spike waveform as illustrated in FIG. 19(B). When b is incorrectly predicted, the value of m simulated by the attacker is not generated in the encryption device, and a value completely different from the simulated value is read/written, and therefore, even when a differential power waveform in which the numbers of zero's (0's) and one's (1's) included in the value m are extremely biased between the sets G and G as in the aforementioned sets G and G is created, a spike waveform cannot be obtained. When the prediction of b is incorrect, the sets G and G are sets obtained by randomly classifying the whole set G of the data P(a , time) (wherein i=1, 2, . . . , N) into two groups, and therefore, the average power consumption is substantially equivalent between the sets G and G , resulting in a substantially even differential waveform as illustrated in FIG. 19(C). (Power Analysis Attack 2 using DPA (targeting decryption with CRT): Attack 3) Next, power analysis attack using the DPA targeting the decryption with the CRT (hereinafter designated as the attack 3) will be described. The attack using the SPA against the stage CRT-1 of the decryption with the CRT, namely, the modular exponentiation of a ciphertext (base) c using prime numbers p and q, has been already described. The DPA is also applicable to this processing. In the attack using the SPA, with respect to the base c controlled by an attacker and input to the encryption device, it is determined whether c≧p or c<p by using a single power consumption waveform. On the contrary, in the attack using the DPA, with respect to a base c input to the encryption device, it is determined whether or not c+ε<p by using a difference among a plurality of power consumption waveforms, whereas ε is an error parameter. When it is successfully determined that c+ε<p, candidate values of the prime number p can be narrowed by using the dichotomizing search illustrated in FIG. 16. Even when the search as illustrated in FIG. 16 is employed, however, the number of candidate values of the prime number p cannot be reduced to ε+π or smaller. When the number of candidate values of the prime number p is sufficiently small (of, for example, ε+π<2 ) for the brute force approach, however, the value ε+π does not cause a serious problem for narrowing the value of the prime number p. The SPA attack against the stage CRT-1 described above is carried out on the assumption that the modular exponentiation algorithm represented by Z=X (mod Y) is performed in accordance with the processing MOD_ALG, namely, that the algorithm for switching the processing in accordance with the relationship in magnitude between X and Y is implemented, and on the other hand, the DPA attack described below is effective against an encryption device always executing the operation Z=X (mod Y) regardless of the relationship in magnitude between X and Y. [0060]FIG. 
20 illustrates an algorithm for determining, with respect to a parameter x controllable by an attacker, whether or not x+ε<p by using the DPA. Differently from the attack using the SPA, this determination is made not for obtaining accurate decision but for determining whether or not x+ε<p with respect to the error parameter ε. When the error parameter ε is too small, there is a possibility that accurate determination cannot be made depending upon the power consumption characteristic of the encryption device. This is because of the difference between the SPA where the determination is made by using a single power waveform and the DPA where the determination is made by using differences among a plurality of waveforms, and the error parameter ε is in proportion to the number of waveforms necessary for successfully performing the DPA. It is known in general that the DPA is successfully carried out by using differences among approximately 1000 pieces of data, and therefore, the error parameter ε has a value also as small as approximately 1000. The principle for successfully performing the attack algorithm illustrated in FIG. 20 will be described. The result of the modular exponentiation represented by Z=X (mod Y) is always Z=X regardless of the implemented algorithm of the modular exponentiation when X<Y. Specifically, the value Z, that is, the output result Z to be read or written in the encryption device, is X (i.e., Z=X) when X<Y. In the above described sets G and G , with respect to all bases a represented as x≦a <x+ε, when a <p, namely, when x+ε<p, a value calculated as a (mod p) is always a , and this value is read/written in a memory within the encryption device. The numbers of zero's (0's) and one's (1's) included in the sets G ,j and G ,j as all the operation results of a (mod p) are greatly biased with respect to all difference curves with j=0, 1, . . . and log ε-1, and therefore, a spike as illustrated in FIG. 19(B) appears on power difference curves obtained as G ,j with respect to all values of j. On the contrary, when a ≧p with respect to all bases a represented as a =x, x+1, . . . , x+ε, namely, when x≧p, the operation result of a (mod p) is always a p wherein λ is an integer. When the error parameter ε is sufficiently smaller than the prime number p, the integer λ is highly likely to be a constant λ regardless of the value of i, and therefore, the operation result of a (mod p) is a p. The value of a and the 0th, 1st, . . . , or log ε-1th bit value from the least significant bit of a -λp are the same or different depending upon the influence of the propagation of carry through subtraction of λp. Specifically, the 0th, 1st, . . . , or log ε-1th bit value from the least significant bit of a -λp is not always the same as the 0th, 1st, . . . , or log ε-1th bit value from the least significant bit of a and is varied depending upon the values of a and λp. In other words, a spike does not always appear on all the power difference curves obtained as G ,j, but no spike appears or merely a spike with a small height appears depending upon the value of j, and a sufficiently high spike cannot be obtained with respect to all the values of j. The same is true when, with respect to all bases a represented as a =x, x+1, . . . , x+ε, some a satisfy a ≧p and the other a satisfy a <p, and also in this case, a spike does not appear with respect to all the values of j. Accordingly, when a sufficiently high spike as illustrated in FIG. 
19(B) appears on a power difference curve obtained as G ,j, it can be determined that x+ε<p. (Countermeasure against Power Analysis Attack) Against the RSA cryptosystems illustrated in FIGS. 14 and 15, the attacking methods by the SPA or the DPA described as the attack 1, the attack 2 and the attack 3 above are known. Also, countermeasures against these attacks are known. Now, conventionally known two types of countermeasures (i.e., a countermeasure 1 and a countermeasure 2) against the attacks 1, 2 and 3 will be (Countermeasure 1) The countermeasure 1 is illustrated in FIG. 21 . In FIG. 21 , steps 1101 and 1102 correspond to the stage CRT-1, steps 1103, 1104, 1105 and 1106 correspond to the stage CRT-2, and steps 1107 and 1108 correspond to the stage CRT-3. Constants R, R and R used in FIG. 21 are constants stored in an encryption device and have values not open to the public. Through the processing using these constants, the attacks 1 and 3 can be prevented. Differently from the decryption method of FIG. 15 , at steps 1101 and 1102, with respect to a new base c×R, which is obtained by multiplying a constant R satisfying R>p and R>q by c, modular exponentiation of c' :=c×R (mod p) and c' :=c×R (mod q) is executed. At steps 1103 and 1104, exponential modular exponentiations modulo p and q wherein bases are these c' and c' thus corrected by R and exponents are d and d are executed, and the result is stored as m' and m' . The resultant calculated values are m' (mod p)=R (mod p) and m' (mod q)=R (mod q). When these values are compared with m (mod p) and m (mod q), which are calculated through the modular exponentiation performed at steps 303 and 304 of FIG. 15 , there is a difference derived from the constant R or R . Processing for correcting this difference for calculating c (mod p) and c (mod q) is executed at steps 1105 and 1106. This processing is executed by using previously calculated constants R dp (mod p) and R dq (mod q) through calculation of m (mod p)=c dp (mod p)=c (mod p) and m (mod q)=c dq (mod q)=c (mod q). The correction for m (mod p) and m (mod q) is processing to be performed for CRT composition performed at step 1107. When these values are provided as inputs for the CRT composition of step 1107, m :=((u×(m )) (mod q))×p+m (mod n) is calculated to be output. Through the countermeasure 1 illustrated in FIG. 21 , the processing for executing the modular exponentiation with p and q after multiplication by the constant R is performed at steps 1101 and 1102, resulting in realizing the countermeasure against the attack 1. Since R is the constant satisfying R>p and R>q, relationships of c×R≧p and c×R≧q always hold excluding a case of a special input of c=0, and hence, in the calculation of Z=X (mod Y) of the processing (MOD_ALG), there always arises branching of X≧Y alone, and hence, the attacker cannot obtain effective information. Merely when c=0, branching of X<Y is caused, but this merely leads to obvious information of 0<p. Accordingly, when the countermeasure 1 illustrated in FIG. 21 is employed, the attacker cannot obtain effective information about p through the branching processing of MOD_ALG, and thus, the attack 1 can be prevented. Furthermore, the countermeasure 1 illustrated in FIG. 
21 also exhibits an effect to prevent the attack 3 for the following reason: Since c×R (mod p) and c×R (mod q) are calculated at steps 1101 and 1102 by using the constant R unknown to an attacker, the attacker cannot guess the value of c×R about c and hence cannot guess the value of c×R (mod p) as well. If the value of R is known to the attacker, a similar attack can be executed by executing the attack 3 with c=g×R (mod n) input instead of c. This is because the value calculated at step 1101 is c×R (mod p)=(g×R )×R (mod p)=g (mod p) in this case, and the modular exponentiation is executed with respect to g, which can be controlled by the attacker, and hence, the attacker can attain a situation similar to that in the attack 3. A relational expression of R (mod n)=R (mod p) is used in this case, and this relational expression is derived from a generally known property, about n=p×q and an arbitrary integer a, of a (mod n)=a (mod p)=a (mod q). When R is an unknown constant, however, the attacker cannot calculate g×R (mod n) by using g, and hence, the countermeasure 1 attains security against the attack 3. In other words, the security of the countermeasure 1 is attained on the assumption that the constants R, R and R have values unknown to an attacker. As long as these constants are unknown to the attacker, the security is retained but there is a potential risk as follows: when common constants are used in all solids of the encryption device, if these constants are revealed from one solid, there is a potential risk that the security of all the solids is endangered. Furthermore, when the countermeasure 1 is employed, since it is necessary to store the constants R, R and R within the device, cost of memory addition for recording these values is required. Since the constant R satisfies the relationships of R>p and R>q, a memory area with at least a bit length of p or q is necessary. Assuming that the bit length of p or q is a half of the bit length of n, the memory area necessary for the constant R is an area of (log n)/2 bits. The memory area necessary for each of R and R is the same as that of p or q and is an area of (log n)/2 bits. In total, the memory area necessary for storing the constants R, R and R is an area of 3(log n)/2 bits. In general RSA cryptosystem, a value not less than 1024 bits is used as n, and therefore, a memory area of 1536 bits or more is necessary. Additional cost of the amount of computation is that of multiplication by R performed at steps 1101 and 1102 and that of multiplication by R and R performed at steps 1105 and 1106, but the additional cost of these amounts of computation occupies a very small proportion in the whole amount of computation and is negligibly small. In summary, the countermeasure 1 can prevent the attacks 1 and 3. The additional cost necessary for the countermeasure 1 is the memory area for storing the constants R, R and R , and the necessary memory area is evaluated as 3 (log n)/2 bits (i.e., at least 1536 bits). Moreover, as a potential risk, when the constants R, R and R are commonly used in all solids of the encryption device, it is possible that the security of all the solids is endangered when these constants are revealed from one solid. (Countermeasure 2) A variety of countermeasures are known as a method for preventing the attack 2. 
All the countermeasures include, in common, processing of generating a random number within the encryption device in executing the calculation of c^d (mod n) and randomizing the intermediate data generated in the middle of the calculation of c^d (mod n) by using the random number. In the attack 2, an attacker simulates the intermediate data created in the middle of the calculation of c^d (mod n) based on the input c and creates the power difference curve Δ on the basis of the simulation. Therefore, the simulation performed in the attack 2 is made invalid by randomizing the intermediate data obtained in the middle of the calculation, so as to prevent the attack 2. Although the intermediate data generated in the middle of the calculation of the modular exponentiation of c^d (mod n) is randomized in this method, it is necessary to ultimately output the same value c^d (mod n) as in the general modular exponentiation, and therefore, it is also necessary to release the randomization. As countermeasures against the attack 2 through the randomization of the intermediate data, a variety of methods are known, which are different from one another in the method of randomizing and the method of releasing the randomization. The additional cost of the amount of computation and the memory necessary for the countermeasure depends upon the difference in these methods. As a typical countermeasure against the attack 2, randomization of an exponent will now be described (as a countermeasure 2). [0077]FIG. 22 illustrates a countermeasure against the attack 2 through the randomization of an exponent (i.e., the countermeasure 2). As the basic idea of this countermeasure, the randomization of the exponent used in the modular exponentiation is employed as the countermeasure against the attack 2. The randomization of the exponent is performed by using a randomized exponent d'=d+r×φ(n) instead of the exponent d, whereas r is a random number of 20 bits, φ(x) is the order corresponding to a modulus x, and the order corresponding to the modulus x has a property of a^φ(x)=1 (mod x) with respect to an arbitrary integer a. When there is a relationship of n=p×q between the prime numbers p and q, it is known that φ(n)=(p-1)(q-1), φ(p)=p-1 and φ(q)=q-1. Since the bit string of the exponent d+r×φ(n) given by the random number r of 20 bits is randomly varied, the intermediate data obtained in the middle of the calculation of the modular exponentiation is randomized, but the ultimately calculated value is always equal to c^d (mod n) (see FIG. 23). The ultimately calculated value is always equal to c^d (mod n) because c^(d+r×φ(n))=c^d×(c^φ(n))^r (mod n), and owing to the property of the order, c^φ(n)=1 (mod n) holds with respect to an arbitrary integer c, and therefore, c^(d+r×φ(n)) (mod n)=c^d×1^r (mod n)=c^d (mod n) holds with respect to an arbitrary random number r.
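A minimal sketch of this exponent randomization is given below; the toy key values and the function name are ours, and a real implementation draws a fresh 20-bit random number r for every decryption:

```python
import secrets

p, q = 61, 53
n, phi_n = p * q, (p - 1) * (q - 1)         # phi(n) = (p-1)(q-1)
e = 17
d = pow(e, -1, phi_n)

def dec_randomized_exponent(c):
    r = secrets.randbits(20)                # 20-bit random number r
    d_dash = d + r * phi_n                  # randomized exponent d' = d + r*phi(n)
    return pow(c, d_dash, n)                # intermediate data differ run to run,
                                            # but c^d' = c^d * (c^phi(n))^r = c^d (mod n)

m = 1234
c = pow(m, e, n)
assert dec_randomized_exponent(c) == pow(c, d, n) == m
```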
The additional cost, accompanying the countermeasure 2, of computation time is caused because d'=d+r×φ(n) is used instead of the exponent d. While the bit length of the exponent d is log n, the bit length of d'=d+r×φ(n) is given as log n+20. The processing time necessary for the modular exponentiation is obtained as (bit length of modulus)×(bit length of modulus)×(bit length of exponent). When the countermeasure 2 is employed, the bit length of the exponent is increased from log n to log n+20, and therefore, the computation time is increased, as compared with the computation time when the countermeasure 2 is not employed, to (log n+20)/(log n) times. When log n=1024, this ratio is 1044/1024, or approximately 1.02, and therefore, the computation time is slightly increased as the additional cost, but this increase occupies a very small proportion of the whole computation time. Therefore, the countermeasure 2 is known as an efficient countermeasure. As the additional cost of the memory area, a 20-bit area for storing the random number r and a log n-bit area for storing the order φ(n), which is not used in the decryption without the CRT illustrated in FIG. 14, are necessary. In summary, the countermeasure 2 can prevent the attack 2. The additional cost of the amount of computation necessary for the countermeasure 2 corresponds to the cost of using the exponent d'=d+r×φ(n) instead of the exponent d, and the amount of computation is (log n+20)/(log n) times as large as that of the processing not employing the countermeasure illustrated in FIG. 14. When n has a 1024-bit value, however, the increased amount of computation is as small as 2%. As the additional cost of the memory area, a memory area of (20+log n) bits in total is necessary for both the random number r and the order φ(n). Since n is generally a value of 1024 or more bits, an additional memory of 1044 bits or more is necessary. (Summary of Countermeasure 1 and Countermeasure 2) At this point, the features of the conventionally known countermeasures 1 and 2 will be summarized. The countermeasure 1 (namely, the countermeasure for the decryption method of FIG. 15) is effective against the attacks 1 and 3; the amount of computation is substantially the same as that of the processing illustrated in FIG. 15, and the additional cost of the memory is 3(log n)/2 bits (≧1536 bits). Incidentally, when the constants R, R_1 and R_2 are commonly used in all solids, the countermeasure 1 has a problem that all the solids may be made vulnerable if the constants R, R_1 and R_2 are revealed. On the other hand, the countermeasure 2 (namely, the countermeasure for the decryption method of FIG. 14) is effective against the attack 2; the additional cost of the amount of computation is (log n+20)/(log n) times as large as that of FIG. 14, and the additional cost of the memory is (20+log n) bits (≧1044 bits). (Problems of Countermeasures 1 and 2) As described so far, the attacks described as the attacks 1, 2 and 3 are known against the RSA cryptosystems illustrated in FIGS. 14 and 15, and these attacks can be prevented by the conventional countermeasures described as the countermeasures 1 and 2. In other words, the conventionally known attacks 1, 2 and 3 can be prevented by the conventionally known countermeasures 1 and 2. Incidentally, guess methods using the SPA or the DPA for common key cryptosystems such as DES or AES and guess methods using the SPA or the DPA for the RSA cryptosystem or public key cryptosystems such as the elliptic curve cryptosystem are disclosed in the documents mentioned below. Also, a decryption method highly secured against side channel attacks is disclosed in the documents mentioned below. International Publication WO00/59157 pamphlet. Paul Kocher, Joshua Jaffe, and Benjamin Jun, "Differential Power Analysis", in Proceedings of Advances in Cryptology-CRYPTO '99, Lecture Notes in Computer Science vol. 1666, Springer-Verlag, 1999, pp. 388-397. Thomas S. Messerges, Ezzy A. Dabbish and Robert H. Sloan, "Power Analysis Attacks of Modular Exponentiation in Smartcards", Cryptographic Hardware and Embedded Systems (CHES'99), Lecture Notes in Computer Science vol. 1717, Springer-Verlag, pp.
144-157 Jean-Sebastein Coron, "Resistance against Differential Power Analysis for Elliptic Curves Cryptosystems", Cryptographic Hardware and Embedded Systems (CHES'99), Lecture Notes in Computer Science vol. 1717, Springer-Verlag, pp. 292-302, 1999 Alfred J. Menezes et al., "HANDBOOK OF APPLIED CRYPTOGRAPHY" (CRC press) pp. 615 SUMMARY [0091] According to an aspect of the invention, a decryption processor for calculating a plaintext m through decryption of a ciphertext c by using a first prime number p, a second prime number q, a public key e and a private key d, includes, a first modular exponentiation part that calculates a value m' through modular exponentiation modulo the first prime number p, wherein an exponent is a value obtained by shifting, with a numerical value s, a value d calculated in accordance with d (mod (p-1)) and a base is a value c calculated in accordance with c (mod p); a second modular exponentiation part that calculates a value m' through modular exponentiation modulo the second prime number q, wherein an exponent is a value obtained by shifting, with the numerical value s, a value d calculated in accordance with d (mod (q-1)) and a base is a value c calculated in accordance with c (mod q); a composition part that calculates a value m through calculation of ((u×(m' ) (mod q))×p+m' by using the values m' and m' calculated respectively by the first modular exponentiation part and the second modular exponentiation part, and a private key u corresponding to a calculation result of p (mod q); and a shift release part that calculates the plaintext m through calculation of m (mod n)) (mod n) by using the value m calculated by the composition part. The object and advantages of the invention will be realized and achieved by means of the elements and combinations particularly pointed out in the claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed. BRIEF DESCRIPTION OF DRAWINGS [0094]FIG. 1 is a diagram illustrating a countermeasure against attack 4 utilizing randomization of a base according to a presupposed technology of an embodiment of the invention. [0095]FIG. 2 is a diagram illustrating an exemplary hardware configuration of a decryption processor according to the embodiment. [0096]FIG. 3 is a diagram illustrating exemplary functional blocks of the decryption processor of the embodiment. [0097]FIG. 4 is a diagram illustrating exemplary basic processing performed by the decryption processor of the embodiment. [0098]FIG. 5 is a diagram illustrating an attack algorithm targeting a decryption processor executing a read/write operation of an operation result of x (mod p) on a data value controllable by an attacker in which the DPA is used for determining whether or not (x+ε) [0099]FIG. 6 is a diagram illustrating expressions used for obtaining an approximate value of p in the embodiment. FIG. 7 is a diagram illustrating exemplary processing of Example 1 according to the embodiment. [0101]FIG. 8 is a diagram illustrating exemplary processing of Example 2 according to the embodiment. FIG. 9 is a diagram illustrating exemplary processing of Example 3 according to the embodiment. [0103]FIG. 10 is a diagram illustrating exemplary processing of Example 4 according to the embodiment. FIG. 11 is a diagram illustrating exemplary processing of Example 5 according to the embodiment. [0105]FIG. 
12 is a diagram illustrating exemplary processing of Example 6 according to the embodiment. [0106]FIG. 13 is a diagram illustrating a method of RSA cryptosystem. [0107]FIG. 14 is a diagram illustrating a method of decryption without CRT for the RSA. [0108]FIG. 15 is a diagram illustrating a method of decryption with the CRT for the RSA. FIG. 16 is a diagram illustrating an attack algorithm for narrowing a range of p through a combination of dichotomizing search and power analysis. FIG. 17 is a diagram illustrating modular exponentiation by the window method. [0111]FIG. 18 is a diagram illustrating processing in the window method. FIGS. 19(A), 19(B) and 19(C) are diagrams respectively illustrating a power consumption curve, a power difference curve having a spike, and a substantially even power difference curve. [0113]FIG. 20 is a diagram illustrating an attack algorithm targeting a decryption processor executing a read/write operation of an operation result of x (mod p) on a data value x controllable by an attacker in which the DPA is used for determining whether or not x+ε<p. [0114]FIG. 21 is a diagram illustrating a countermeasure algorithm against attacks 1 and 3. [0115]FIG. 22 is a diagram illustrating a countermeasure algorithm against attack 2. [0116]FIG. 23 is a diagram illustrating a process for randomizing intermediate data of the countermeasure 2 and outputting an operation result obtained after releasing the randomization. [0117]FIG. 24 is a diagram explaining the contents of attack 4. DESCRIPTION OF EMBODIMENTS [0118] The aforementioned countermeasures, however, do not always have resistance against a new attacking method. In general, a countermeasure in security is meaningless unless the countermeasure is equally resistive against all attacking methods. For example, even through one countermeasure may prevent nine kinds of attacking methods out of ten kinds of attacking methods, if the countermeasure cannot prevent the remaining one attacking method, a private key is revealed by that attacking method, and hence, an attacker may freely decrypt all encrypted data. Accordingly, a countermeasure for an encryption device is preferably be resistive against all attacking methods. Therefore, a countermeasure is preferably resistive not only against the conventional attacks 1, 2 and 3 but also against a new attacking method realized by extending the conventional attacking methods. Now, an example of the new attacking method obtained by extending the conventional attack 3 will be described (as attack 4), and it will be described that the attack 4 cannot be prevented by the conventional countermeasures 1 and 2. (Attack 4) The new attacking method is achieved by extending the attack against the decryption with the CRT described as the attack 3 and illustrated in FIG. 15 . The attack 3 targets the modular exponentiation expressed as c :=c (mod p) and c :=c (mod q) performed at steps 301 and 302 of FIG. 15 . Since the attack targets this processing, a modular exponentiation for c, which is controllable by an attacker, is carried out, and the attack based on the DPA illustrated in FIG. 20 is employed for determining whether or not c<p-ε for the prime number p (wherein ε is an error parameter with a small value of approximately 1000), and this information is used for narrowing candidate values of the prime number p to a number applicable to brute force approach. When this idea is applied so as to employ the attacking method based on the DPA illustrated in FIG. 
20 in the processing of m (mod p) and m (mod q) of step 303 of FIG. 15 , the number of candidate values of the prime number p may be similarly narrowed. Hereinafter, this attack is designated as the attack 4, and this attacking method will now be described. The basic idea of the attack 4 is illustrated in FIG. 24 . This idea is different from that of the attack 3 in a method of giving the processing CRT_DEC(c) of FIG. 15 . In the conventional method, an attacker generates x for the determination of x+ε<p, and the resultant x is directly given to the input c of FIG. 15 as CRT_DEC(x). On the contrary, in the attack 4, an attacker generates x for the determination of x+ε<p, and a value y=x (mod n) calculated based on public keys e and n is generated with respect to x, so as to give CRT_DEC(y) as the input to the processing of FIG. 15 . Since the public keys e and n have values open to the outside of the encryption device, the attacker can freely generate x (mod n) from x. When x (mod n) is given instead of x, in the modular exponentiation performed at steps 303 and 304 of FIG. 15 , namely, at steps 1513 and 1514 of FIG. 24 , processing for respectively calculating m (mod p)=x (mod p) and m (mod q)=x (mod q) and reading/writing the resultant values in a memory of the encryption device is caused (the equalities hold because of a known property that (a =1 (mod p) and (a =1 (mod q) hold for an arbitrary integer a with respect to e, d , d , p and q, that is, keys of the RSA cryptosystem). In other words, power consumption substantially equal to power consumption caused in executing the modular exponentiations of x (mod p) and x (mod q) for x controllable by the attacker is caused, and therefore, the attacking method based on the DPA illustrated in FIG. 20 may be employed. When the attacking method based on the DPA illustrated in FIG. 20 is employed, the number of candidate values of the prime number p may be narrowed to a number applicable to exhaustive search through the dichotomizing search illustrated in FIG. 16, so as to obtain the prime numbers p and q. This attack 4 may not be prevented by the conventional countermeasures 1 and 2 due to the following reasons. In the countermeasure 1, in order to correctly execute the CRT composition, the constants and R are multiplied for calculating m (mod p) and m (mod q) at steps 1105 and 1106 of FIG. 21 corresponding to a stage immediately before the CRT composition. In other words, when c=x (mod n) is input, the calculation results obtained at steps 1105 and 1106 are x (mod p) and x (mod q), and therefore, the attack 4 may be successful. Alternatively, the countermeasure 2 is not a countermeasure against the decryption with the CRT, but is a countermeasure against the modular exponentiation, and hence is applicable to the modular exponentiation of steps 303 and 304 of FIG. 15 . Even through this application, however, the attack 4 may not be prevented for the following reason: although intermediate data of the modular exponentiation is randomized as illustrated in FIG. 23 , ultimately calculated values are always constant, and when this processing is applied to the modular exponentiation performed at steps 303 and 304 of FIG. 15 , m (mod p) and m (mod q) are calculated. Therefore, the attack 4 may be executed. In this manner, there is a problem that the conventional countermeasures 1 and 2 are not resistive against the attack 4. In an embodiment described below, a countermeasure against problem 1 of vulnerability against the attack 4 will be described. 
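Before describing that countermeasure, the key step of the attack 4, namely that giving CRT_DEC(x^e (mod n)) causes the stage CRT-2 to operate on x (mod p) and x (mod q), can be checked numerically as follows (toy parameters and variable names are ours):

```python
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
d_p, d_q = d % (p - 1), d % (q - 1)

x = 47                      # value chosen by the attacker for the comparison with p
y = pow(x, e, n)            # attacker computes y = x^e (mod n) from the public keys
# Inside CRT_DEC(y), steps 303/304 of FIG. 15 (1513/1514 of FIG. 24) then compute:
m_p = pow(y % p, d_p, p)    # = x^(e*d_p) = x (mod p)
m_q = pow(y % q, d_q, q)    # = x^(e*d_q) = x (mod q)
assert m_p == x % p and m_q == x % q
# The power consumption therefore correlates with x (mod p) for an x the attacker
# controls, which is exactly the situation exploited by the DPA of FIG. 20.
```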
Furthermore, the countermeasure provided in this embodiment not only prevents the attack 4 but may also incur minimum additional costs for the computation and memory. Before describing the countermeasure of this embodiment, a basic countermeasure for preventing the attack 4 will be described as a premise. This countermeasure is also based on the idea of changing the value of a base, and although the idea of changing the value of a base may cope with the attack 4, the countermeasure has another problem, which will be described below. Thereafter, the countermeasure against the attack 4 according to this embodiment will be described. The countermeasure described as the premise (hereinafter referred to as the countermeasure 3) is illustrated in FIG. 1 . In this countermeasure, the decryption with the CRT is not directly executed for a base c but a random number S is generated (at step 1601) and the decryption with the CRT, namely, CRT_DEC (c×S (mod n)), is executed after randomization with c×S (mod n), and the result is stored in a work variable area W (at step 1602). As a result, W=(c×S) (mod n) is calculated. In order to release the randomization with the random number S, the decryption with the CRT is executed with an inverse of the random number S in accordance with processing of CRT_DEC (S (mod n)), and the result is stored as m. As a result, m=S d (mod n) is calculated. Ultimately, an operation of m:=(c×S) (mod n) is performed in accordance with m:=W×m (mod n), and the result of releasing the randomization is stored as m. Through the series of calculations, a base input in the processing of CRT_DEC is randomized with S, and therefore, the attack 4 may be prevented. On the contrary, additional cost accompanying the countermeasure is caused. As additional cost of the amount of computation, since the decryption with the CRT is executed twice, the amount of computation is twice as large as that of FIG. 15 . As additional cost of the memory area, a work area for storing the random number S and a work area W for storing the result of the decryption with the CRT performed at step 1602 are additionally desired. Since the random number S has the same bit length as the prime numbers p and q, a memory of (log n)/2 bits is preferred for the random number S. Since a bit length the same as that of n is preferred for the work area W, a memory of (log n) bits is preferred for the work area W. This evaluation of the additional cost is a minimum cost evaluation independent of the form of implementation and is a very optimistic evaluation obtained by ignoring additional costs of a memory area for calculating c×S (mod n) at step 1602, a temporary memory area for calculating S (mod n), and the amount of computation for calculating the inverse S of the random number S. In summary, although the attack 4 may be prevented through the randomization of a base by the countermeasure 3, there is a problem that additional costs due to the amount of computation and the memory are generated. The amount of computation is twice as much as that of the processing illustrated in FIG. 15 , and the additional cost of the memory is (3log n)/2 bits. The problems of the countermeasure 3 are as follows: (Problem 2) the amount of computation is twice as large as that of the processing not employing the countermeasure illustrated in FIG. 15 ; and (Problem 3) a memory area of (3log n)/2 bits 1536 bits) is desired as the additional memory. 
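For reference, the countermeasure 3 of FIG. 1 described above can be sketched as follows; the toy keys and the names are ours, and the doubled amount of computation is visible as the two calls to the CRT decryption routine:

```python
import math
import secrets

# Toy RSA-CRT keys (illustrative only).
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
d_p, d_q, u = d % (p - 1), d % (q - 1), pow(p, -1, q)

def crt_dec(c):
    # Plain decryption with the CRT (FIG. 15), the routine being protected.
    m_p, m_q = pow(c % p, d_p, p), pow(c % q, d_q, q)
    return (((u * (m_q - m_p)) % q) * p + m_p) % n

def dec_blinded_base(c):
    # Countermeasure 3 (FIG. 1): randomize the base, then release the randomization.
    while True:
        S = secrets.randbelow(n - 2) + 2          # random number S (step 1601)
        if math.gcd(S, n) == 1:                   # S must be invertible modulo n
            break
    W = crt_dec((c * S) % n)                      # step 1602: W = (c*S)^d (mod n)
    m_r = crt_dec(pow(S, -1, n))                  # second CRT decryption: S^(-d) (mod n)
    return (W * m_r) % n                          # (c*S)^d * S^(-d) = c^d (mod n)

m = 1234
assert dec_blinded_base(pow(m, e, n)) == m
```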
EMBODIMENT [0136] With respect to a decryption processor for overcoming the aforementioned problems of the presupposed technology, an example of the hardware configuration is illustrated in FIG. 2 and an example of functional blocks is illustrated in FIG. 3 The hardware configuration of the decryption processor will be first described with reference to FIG. 2 . The decryption processor 10 of this embodiment may be built in an encryption device such as a smartcard. As illustrated in FIG. 2 , the decryption processor 10 of this embodiment includes an ECC (Elliptic Curve Cryptosystem) processor 101, a CPU (Central Processing Unit) 102, a ROM (Read-Only Memory) 103, an I/F 104, an EEROM (Electrically Erasable ROM) 105, a RAM (Random Access Memory) 106, and a data bus 107 connecting the elements. Furthermore, it is assumed that the decryption processor 10 has terminals Vcc and GND connected to an oscilloscope 20 measuring the power consumption for performing the PA. The ECC processor 101 performs the following processing on a ciphertext C, which is externally obtained through the I/F 104, based on a private key d stored in the EEROM 105. Also, the CPU 102 controls the decryption processor 10. The ROM 103 stores programs to be executed by the ECC processor 101 and the CPU 102. The I/F 104 mediates input and output of data to and from the decryption processor 10. The EEROM 105 is a ROM in which data is electrically erasable and stores the private key d for the ECC. The RAM 106 temporarily stores programs to be executed by the ECC processor 101 and the CPU 102. Next, an example of the functional blocks of the decryption processor 10 will be described with reference to FIG. 3 . The decryption processor 10 includes a modular exponentiation part 1 (including a first modular exponentiation part and a second modular exponentiation part), a composition part 2, and a shift release part 3. These functional parts are realized by the ECC processor 101 executing a program having an algorithm described below. In calculating a plaintext m from a ciphertext c by using prime numbers p and q, a public key e and a private key d, the modular exponentiation part 1 calculates a value m' through modular exponentiation modulo p wherein an exponent is a value obtained through a shift operation, with a numerical value s, of a value d calculated in accordance with d (mod (p-1)), and a base is a value c calculated in accordance with c (mod p). Furthermore, the modular exponentiation part 1 calculates a value m' through modular exponentiation modulo p wherein an exponent is a value obtained through the shift operation, with the numerical value s, of a value d calculated in accordance with d (mod (q-1)), and a base is a value c calculated in accordance with c (mod q). The composition part 2 calculates a value m through calculation of ((u×(m' ) (mod q))×p+m' by using the values m' and m' calculated by the modular exponentiation part 1 and a private key u corresponding to the calculation result of p (mod q). The shift release part 3 calculates the plaintext m by calculating m (mod n)) (mod n) by using the value m calculated by the composition part 2. The basic idea for overcoming the aforementioned problems of the presupposed technique is illustrated in FIG. 4 . When the countermeasure of the presupposed technique is to be employed for preventing the attack 4, the aforementioned problems are caused. 
Specifically, in employing the method in which the base c is randomized, since the result of the randomization has to be released, the computation time is essentially doubled for executing the release processing. Such a problem is caused because the base c of the decryption with the CRT is randomized in the countermeasure of the presupposed technique. When a constant or a random number used in the randomization is indicated by R, a calculation expressed as m:=(c×R)^d (mod n) is executed for ultimately calculating c^d (mod n). In other words, in order to release the randomization of the randomized value expressed as (c×R)^d and return it to c^d (mod n), a value expressed as (R^(-1))^d (mod n) is required. In order to obtain this value, the decryption with the CRT is additionally performed once in the countermeasure of the presupposed technique, and hence, the amount of computation is increased. In other words, in order to execute randomization of the base, the amount of computation or the memory area is increased as additional cost. In consideration of this, according to this embodiment, in order to overcome the problems, shifting (and randomization) of the exponent is employed instead of the randomization of the base. When such a randomized or shifted value is released before the CRT composition, however, a value of c^d (mod p) or c^d (mod q) is generated in the middle of the calculation, which causes vulnerability against the attack 4, and therefore, the randomized or shifted data is released after completing the CRT composition. The release of the shifting or randomization is performed as follows. Through the CRT composition at step 1905, m = c^(d-s) (mod n) is calculated, and this result is multiplied by c^s (mod n) in accordance with m := m×c^s (mod n) = c^(d-s)×c^s (mod n) = c^d (mod n) at step 1906; thus, the shifting or randomization is released so as to obtain c^d (mod n). When the shifting/randomization is released after the CRT composition, the attack 4 may be prevented, and the problem 1 of the vulnerability against the attack 4 may be overcome. If s has a large value of 1024 bits or the like, a large amount of computation or a large memory area is required as the additional cost in this embodiment; however, when the shifting of the exponent is employed as in this embodiment, the security may be retained with s set to a small value of approximately 2 or 3, and therefore, the additional cost of this countermeasure is very small. Specifically, the effort to calculate c^s (mod n) based on c, for multiplying by the calculation result of step 1905, corresponds to the additional cost. Since c is already given, the amount of computation for calculating c^s (mod n) with s having a small value of approximately 2 or 3 is at most s multiplications, which is negligibly small as compared with the entire amount of computation. Thus, the problem 2 may be overcome. Furthermore, the additional memory for calculating c^s (mod n) corresponds to a memory area for storing the parameter s, which has a size of log s bits. When s has a small value of approximately 2 or 3, this is 2 bits at most, which is a very small memory area as the additional cost. Thus, the problem 3 is overcome. Furthermore, a condition for security of this embodiment against the attack 4 is that the relationship e×s>3 holds between the public key e and the shift parameter s.
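As reconstructed above, the "shift" is read here as a subtraction of s from the CRT exponents d_p and d_q, with the release performed only after the composition. The following minimal Python sketch (our own naming and toy key sizes, not code from the patent figures) checks that this procedure indeed returns c^d mod n.

```python
# Sketch of the exponent-shift countermeasure: compute c^(d-s) with the CRT,
# recombine, and only then release the shift by multiplying by c^s mod n.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
u = pow(p, -1, q)                          # u = p^-1 mod q

def shifted_crt_dec(c, s=3):
    dp = d % (p - 1) - s                   # shift both CRT exponents by s
    dq = d % (q - 1) - s                   # (assumed to mean d_p - s, d_q - s)
    mp = pow(c % p, dp, p)                 # m'_p = c_p^(d_p - s) mod p
    mq = pow(c % q, dq, q)                 # m'_q = c_q^(d_q - s) mod q
    m = mp + p * ((u * (mq - mp)) % q)     # CRT composition (step 1905): c^(d-s) mod n
    return (m * pow(c, s, n)) % n          # release (step 1906): multiply by c^s mod n

c = pow(42, e, n)
assert shifted_crt_dec(c) == pow(c, d, n) == 42
```

Only the final pow(c, s, n) and one extra multiplication are added to the ordinary CRT flow, which is the reason the overhead stays negligible for small s.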
Since the public key e of the RSA cryptosystem satisfies e≧3, this condition is always met when the parameter s has a value not less than 2, and hence, the parameter s may be a small value of approximately 2 or 3. The reason why the condition for the security against the attack 4 is e×s>3 will be described later. As far as the condition e×s>3 is met, the security may be retained even if the value of the parameter s is open to an attacker. In other words, even if the parameter s is revealed, the security may be retained. Accordingly, when the method of this embodiment is employed, the decryption with the CRT that addresses all the problems 1, 2 and 3 may be realized. At this point, the criterion, e×s>3, of the security against the attack 4 in this embodiment will be described. As the criterion of the parameter setting for attaining security against the attack 4 by employing the method of this embodiment, the condition e×s>3 met by the public key e and the shift parameter s of the exponent is recommended. Now, the reason for this criterion of the security will be described. In describing the reason for the criterion of the security, an attack corresponding to an extended type of the attack 4 will be described. When this extended type attack is used, bit values corresponding to the upper 1/g bits out of all the bits of the prime number p may be guessed by measuring the power consumption in the calculation of x^g (mod p) with respect to x controllable by an attacker. This property is expressed as the processing of (EXTEND_DPA) as follows: (EXTEND_DPA): When the decryption processor 10 performs the calculation of x^g (mod p) for a prime number p, an attacker may obtain bit values corresponding to the upper 1/g bits out of all the bits of the prime number p by executing the DPA utilizing the power consumption in the calculation. For example, in an encryption processor performing the calculation of x^3 (mod p), when this extended type attack is used, bit values of the upper 1/3 bits of the prime number p are revealed. Even when part of the bit values of the prime number p is revealed, the value of the prime number p is not always obtained. A general criterion for an allowable range of partial bit value leakage of a prime number p is disclosed in Johannes Blomer and Alexander May, "New Partial Key Exposure Attacks on RSA", CRYPTO 2003, pp. 27-43, LNCS 2729. According to the criterion disclosed in this document, it is known that the prime factorization of n=p×q succeeds when the bit values of the upper 1/2 bits of the prime number p are revealed. Accordingly, in consideration of the extended type attack, the partial bit values of the prime number p to be revealed should be suppressed to be smaller than the upper 1/2 bits. Such an extended type attack is assumed because, when the exponential shifting with the parameter s is employed, although the calculation of x (mod p) is avoided, the calculation of y^(e×s-1) (mod p) is executed instead. This is because, when c=x^e (mod n) is substituted in the calculation of m'_p = c_p^(d_p-s) (mod p) of step 1903, in consideration of the property of (x^e)^d = x (mod p), m'_p = c_p^(d_p-s) (mod p) = (x^e)^(d-s) (mod p) = x^(1-e×s) (mod p) = y^(e×s-1) (mod p) is given (wherein y = x^(-1) (mod n), which is equal to x^(-1) (mod p)). Since the countermeasure of this embodiment includes this calculation, even when the extended DPA of the attack 4 is executed against this calculation, the revealed partial bit values of the prime number p should be suppressed to be smaller than the upper 1/2 bits.
When the processing of (EXTEND_DPA) is applied, it is understood that e×s-1>2 is the condition for the security. In other words, even when the extended type attack is employed, e×s>3 is the condition for retaining the security of the RSA cryptosystem. Next, the principle and the method of the extended type attack of the attack 4 described as the processing of (EXTEND_DPA) will be described. In the attack 4, it is determined whether or not x+ε<p with respect to x controllable by an attacker by using the DPA attack illustrated in FIG. 20. The DPA attack of FIG. 20 may determine whether or not x+ε<p for the following reason. With respect to data strings a_i controllable by the attacker and satisfying x≦a_i≦x+ε, when the power consumption in reading/writing data expressed as a_i (mod p) is measured so as to create the difference curves of steps 1002 and 1003, if a sufficiently high spike appears on all the difference curves, it may be determined that a_i<p with respect to all the data strings a_i, and hence, it is determined that x+ε<p. This determination method may be extended to a case where the power consumption in reading/writing data expressed as (a_i)^g (mod p), wherein g is a constant, is measured. This extended type DPA is illustrated in FIG. 5. The principle of successfully performing this attack is the same as that of the attack illustrated in FIG. 20. The result of the modular exponentiation expressed as Z=X^g (mod Y) is Z=X^g when X^g<Y, regardless of the algorithm implemented for the modular exponentiation. In other words, the value Z to be read or written in the decryption processor 10 as the output result is X^g when X^g<Y. In the above-described sets G_0 and G_1, when (a_i)^g<p, namely, (x+ε)^g<p, with respect to all bases a_i expressed as x≦a_i<x+ε, all the values calculated as (a_i)^g (mod p) are (a_i)^g, which are read/written in a memory within the decryption processor 10. The numbers of zeros (0's) and ones (1's) included in the sets G_{0,j} and G_{1,j}, which include the calculation results of (a_i)^g (mod p), are largely biased in all the difference curves with j=0, 1, . . . , log ε-1, and hence, a spike as illustrated in FIG. 19(B) appears on the power difference curve generated from G_{0,j} and G_{1,j} with respect to all values of j; otherwise, no spike or only a low spike appears. Accordingly, when a spike as illustrated in FIG. 19(B) appears with a sufficient height on the power difference curve generated from G_{0,j} and G_{1,j}, it may be determined that (x+ε)^g<p. When the power analysis illustrated in FIG. 5 is applied to step 404 of the dichotomizing search algorithm of FIG. 16, the maximum value of x satisfying (x+ε)^g<p may be obtained. In other words, an integer value of x with which the value of (x+ε)^g is the closest to p may be obtained. When x is obtained, the attacker may obtain the bit values of the upper 1/g bits of the prime number p. This is because (x+ε)^g ≈ p, namely, x ≈ p^(1/g)-ε, and hence, when x is raised to the power g, an approximate value of p may be obtained. The calculation expressions are illustrated in FIGS. 6(A) and 6(B); in the expression of FIG. 6(A), a term in which the degree of p is not more than (g-2)/g is much smaller than the magnitude of p, and hence the expression may be approximated as shown in FIG. 6(B). In other words, when x is raised to the power g, the approximate value of p may be obtained within an error range of ε×g×p^((g-1)/g). This error, ε×g×p^((g-1)/g), affects the bit values of the lower (g-1)/g bits of the prime number p, and hence, the bit values of the upper 1/g bits are not affected by this error.
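To make the arithmetic behind this argument concrete, the following self-contained simulation replaces the power measurement with an ideal oracle that answers whether (x+ε)^g < p, runs the dichotomizing search, and checks that x^g reproduces the upper bits of p within the stated error bound. The prime, g, ε and all names are illustrative choices of ours, not values taken from FIGS. 5, 6 or 16.

```python
# Simulation of the extended-DPA principle: an oracle for "(x + eps)^g < p ?"
# is enough to find x close to p^(1/g) - eps, after which x^g approximates p,
# leaving only roughly the lower (g-1)/g bits of p uncertain.
p = (1 << 127) - 1            # a 127-bit prime, standing in for the secret prime
g = 3                         # exponent of the monitored computation x^g mod p
eps = 1 << 10                 # width of the probed interval

def oracle(x):
    """Stands in for the power-analysis decision of FIG. 5."""
    return (x + eps) ** g < p

lo, hi = 0, 1 << 64           # oracle(lo) is True, oracle(hi) is False
while lo + 1 < hi:            # dichotomizing (binary) search for the boundary
    mid = (lo + hi) // 2
    if oracle(mid):
        lo = mid
    else:
        hi = mid
x = lo                        # largest x with (x + eps)^g < p

approx = x ** g               # raising x to the power g approximates p ...
error = p - approx            # ... within about g * eps * p^((g-1)/g)
assert 0 < error < 2 * g * eps * round(p ** ((g - 1) / g))
print(hex(p)[:9], hex(approx)[:9])   # the leading (upper) hex digits coincide
```

Running this shows the leading digits of p and x^g agreeing, while the low-order digits differ, which is exactly the "upper 1/g bits" leakage described in the text.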
Therefore, the attacker obtains the value x with which (x+ε)^g is the closest to p through the dichotomizing search of FIG. 16 and the DPA of FIG. 5, and when x^g is calculated with respect to the obtained x, the bit values of the upper 1/g bits of the prime number p may be guessed.

EXAMPLES

According to the algorithm illustrated in FIG. 4, 2×3=6 examples may be assumed: two choices for the type of the error parameter s (i.e., a random number or a constant) and three choices for the calculation method for c^s (mod n) of step 1906 (i.e., two sorts of multiplication performed 2×log s times by employing the binary method, and one sort of multiplication performed s times). Now, the respective examples will be described. In calculating c^s (mod n) by the binary method, the number of multiplications is reduced to 2×log s, while one work variable of log n bits is additionally required. For this additional variable, an additional memory of log n bits is required. Even though such a work memory is additionally used, the additional memory is still smaller than the additional memory of (3×log n)/2 bits described as the problem 3, and hence, this method is a superior method. As far as a small parameter s of approximately 2 or 3 is used, the example where the multiplication is performed s times for calculating c^s (mod n) is more efficient, because it does not need a work memory and the number of multiplications is substantially the same. FIG. 7 illustrates the algorithm used in Example 1. In this example, the error parameter s is given as a constant, and for calculating c^s (mod n), c is multiplied s times (at step 2106). Differently from the decryption with the CRT illustrated in FIG. 15, shifting processing of the exponent is executed by using the constant s at steps 2103 and 2104, and c is multiplied s times at step 2106 for releasing the result of the shifting processing after the CRT composition. The additional cost in the amount of computation corresponds to the amount of computation required for this release processing. When log n is expressed as T, the amount of computation for the modular exponentiation of steps 2103 and 2104 is 2×(T/2)^3 = T^3/4. On the other hand, the amount of computation for one multiplication of c modulo n is (bit length of modulus n)×(bit length of modulus n) = T^2, and hence, when the multiplication is executed s times, the amount of computation is s×T^2. Specifically, while the basic amount of computation is T^3/4, the additional amount of computation is s×T^2, and the proportion therebetween is (s×T^2)/(T^3/4) = (4×s)/T. In the case where, for example, T = log n = 1024, when the parameter s has a value of approximately 2 or 3, the proportion of the additional amount of computation is 12/1024 or less, and thus, the amount of computation is increased by approximately 1%. In this manner, the influence of the additional amount of computation on the entire amount of computation is negligibly small. The additional cost of memory is a memory area of log s bits for storing the constant s and a memory area for the s multiplications for releasing the shifting. When the parameter s has a small value of approximately 2 or 3, the additional memory area is 2 bits at most, and this additional cost is negligibly small. FIG. 8 illustrates the algorithm used in Example 2. In this example, the error parameter s is given as a random number (at step 2200), and for calculating c^s (mod n), c is multiplied s times (at step 2206). Differently from the decryption with the CRT illustrated in FIG. 15,
randomization processing of the exponent is executed by using the random number s at steps 2203 and 2204, and c is multiplied s times at step 2206 for releasing the result of the randomization processing after the CRT composition. The additional cost in the amount of computation corresponds to the amount of computation for this processing. When log n is expressed as T, the amount of computation for the modular exponentiation of steps 2203 and 2204 is 2×(T/2)^3 = T^3/4. On the other hand, the amount of computation for one multiplication of c modulo n is (bit length of modulus n)×(bit length of modulus n) = T^2, and hence, when the multiplication is executed s times, the amount of computation is s×T^2. Specifically, while the basic amount of computation is T^3/4, the additional amount of computation is s×T^2, and the proportion therebetween is (s×T^2)/(T^3/4) = (4×s)/T. In the case where, for example, T = log n = 1024, when the random number s is randomly selected from 1, 2 and 3, the proportion of the additional amount of computation is 12/1024 or less, and thus, the amount of computation is increased by approximately 1%. In this manner, the influence of the additional amount of computation on the entire amount of computation is negligibly small. Even when the parameter s is a random number of 4 bits, (4×s)/T ≤ 4×16/1024 = 64/1024 < 0.07, and hence the amount of computation is increased by at most approximately 7%; therefore, the increase is negligibly small. When the parameter s has a further larger value, however, an example employing the binary method for calculating c^s (mod n), as in Examples 3, 4, 5 and 6 below, is more efficient from the viewpoint of the amount of computation. The additional cost of memory is a memory area of log s bits for storing the random number s and a memory area for the s multiplications for releasing the randomization. When the parameter s has a small value of approximately 2 or 3, the additional memory area is 2 bits at most, and this additional cost is negligibly small. FIG. 9 illustrates the algorithm used in Example 3. In this example, the error parameter s is given as a constant, and for calculating c^s (mod n), multiplication is performed 2×log s times by employing a left-to-right binary method (at steps 2307 through 2310), as sketched below. The additional cost in the amount of computation corresponds to the amount of computation for executing the left-to-right binary method and is given as 2×log s multiplications. When log n is expressed as T, the amount of computation for the modular exponentiation of steps 2303 and 2304 is 2×(T/2)^3 = T^3/4. On the other hand, the amount of computation for one multiplication of c modulo n is (bit length of modulus n)×(bit length of modulus n) = T^2, and hence, when the multiplication is executed 2×log s times, the amount of computation is 2×log s×T^2. Specifically, while the basic amount of computation is T^3/4, the additional amount of computation is 2×log s×T^2, and the proportion therebetween is (2×log s×T^2)/(T^3/4) = (8×log s)/T. In the case where, for example, T = log n = 1024, when an 8-bit value is used as the constant s, the proportion of the additional amount of computation is 64/1024 < 0.07, and thus, the amount of computation is increased by approximately 7%. In this manner, the influence of the additional amount of computation on the entire amount of computation is negligibly small. The additional cost of memory is a memory area of log s bits for storing the constant s and a memory area of log n bits for a work area W for executing the left-to-right binary method. When the parameter s has a length of approximately 8 bits, the sum of the additional memories is 8 + log n bits.
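The left-to-right binary method invoked in Examples 3 and 5 (Examples 4 and 6 use its right-to-left mirror) amounts to the textbook square-and-multiply loop below, using one work variable W of log n bits and at most 2×log s modular multiplications; this is a generic sketch, not code lifted from FIGS. 9 through 12.

```python
def pow_left_to_right(c, s, n):
    """c^s mod n by the left-to-right binary method: one work variable W,
    at most 2*log2(s) modular multiplications (a squaring for every bit of s,
    plus a multiply for every 1-bit)."""
    W = 1
    for bit in bin(s)[2:]:       # scan the bits of s from the most significant
        W = (W * W) % n          # square
        if bit == '1':
            W = (W * c) % n      # conditional multiply
    return W

assert pow_left_to_right(7, 200, 3233) == pow(7, 200, 3233)
```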
As compared with the additional memory of 1.5×log n bits required in the countermeasure 3, and considering that log n≧1024 for a general RSA parameter, 8 + log n < 1.5×log n, and thus, processing with a smaller amount of additional memory may be realized in this example. FIG. 10 illustrates the algorithm used in Example 4. In this example, the error parameter s is given as a constant, and for calculating c^s (mod n), multiplication is performed 2×log s times by employing a right-to-left binary method (at steps 2407 through 2410). This example is the same as Example 3 except that the right-to-left binary method is employed. The additional cost in the amount of computation corresponds to the amount of computation for executing the right-to-left binary method and is given as 2×log s multiplications. When log n is expressed as T, the amount of computation for the modular exponentiation of steps 2403 and 2404 is 2×(T/2)^3 = T^3/4. On the other hand, the amount of computation for one multiplication of c modulo n is (bit length of modulus n)×(bit length of modulus n) = T^2, and hence, when the multiplication is executed 2×log s times, the amount of computation is 2×log s×T^2. Specifically, while the basic amount of computation is T^3/4, the additional amount of computation is 2×log s×T^2, and the proportion therebetween is (2×log s×T^2)/(T^3/4) = (8×log s)/T. In the case where, for example, T = log n = 1024, when an 8-bit value is used as the constant s, the proportion of the additional amount of computation is 64/1024 < 0.07, and thus, the amount of computation is increased by approximately 7%. In this manner, the influence of the additional amount of computation on the entire amount of computation is negligibly small. The additional cost of memory is a memory area of log s bits for storing the constant s and a memory area of log n bits for a work area W for executing the right-to-left binary method. When the parameter s has a length of approximately 8 bits, the sum of the additional memories is 8 + log n bits. As compared with the additional memory of 1.5×log n bits for the countermeasure 3, and considering that log n≧1024 for a general RSA parameter, 8 + log n < 1.5×log n, and thus, processing with a smaller amount of additional memory may be realized in this example. FIG. 11 illustrates the algorithm used in Example 5. In this example, the error parameter s is given as a random number, and for calculating c^s (mod n), multiplication is performed 2×log s times by employing the left-to-right binary method (at steps 2507 through 2510). This example is the same as Example 3 except for step 2500, where the parameter s is given as a random number. The additional cost in the amount of computation corresponds to the amount of computation for executing the left-to-right binary method and is given as 2×log s multiplications. When log n is expressed as T, the amount of computation for the modular exponentiation of steps 2503 and 2504 is 2×(T/2)^3 = T^3/4. On the other hand, the amount of computation for one multiplication of c modulo n is (bit length of modulus n)×(bit length of modulus n) = T^2, and hence, when the multiplication is executed 2×log s times, the amount of computation is 2×log s×T^2. Specifically, while the basic amount of computation is T^3/4, the additional amount of computation is 2×log s×T^2, and the proportion therebetween is (2×log s×T^2)/(T^3/4) = (8×log s)/T. In the case where, for example, T = log n = 1024, when an 8-bit value is used as the random number s, the proportion of the additional amount of computation is 64/1024 < 0.07, and thus, the amount of computation is increased by approximately 7%.
In this manner, the influence of the additional amount of computation on the entire amount of computation is negligibly small. The additional cost of memory is a memory area of log s bits for storing the random number s and a memory area of log n bits for a work area W for executing the left-to-right binary method. When the parameter s has a length of approximately 8 bits, the sum of the additional memories is 8 + log n bits. As compared with the additional memory of 1.5×log n bits required in the countermeasure 3, and considering that log n≧1024 for a general RSA parameter, 8 + log n < 1.5×log n, and thus, processing with a smaller amount of additional memory may be realized in this example. FIG. 12 illustrates the algorithm used in Example 6. In this example, the error parameter s is given as a random number, and for calculating c^s (mod n), multiplication is performed 2×log s times by employing the right-to-left binary method (at steps 2607 through 2610). This example is the same as Example 4 except for step 2600, where the parameter s is given as a random number. The additional cost in the amount of computation corresponds to the amount of computation for executing the right-to-left binary method and is given as 2×log s multiplications. When log n is expressed as T, the amount of computation for the modular exponentiation of steps 2603 and 2604 is 2×(T/2)^3 = T^3/4. On the other hand, the amount of computation for one multiplication of c modulo n is (bit length of modulus n)×(bit length of modulus n) = T^2, and hence, when the multiplication is executed 2×log s times, the amount of computation is 2×log s×T^2. Specifically, while the basic amount of computation is T^3/4, the additional amount of computation is 2×log s×T^2, and the proportion therebetween is (2×log s×T^2)/(T^3/4) = (8×log s)/T. In the case where, for example, T = log n = 1024, when an 8-bit value is used as the random number s, the proportion of the additional amount of computation is 64/1024 < 0.07, and thus, the amount of computation is increased by approximately 7%. In this manner, the influence of the additional amount of computation on the entire amount of computation is negligibly small. The additional cost of memory is a memory area of log s bits for storing the random number s and a memory area of log n bits for a work area W for executing the right-to-left binary method. When the parameter s has a length of approximately 8 bits, the sum of the additional memories is 8 + log n bits. As compared with the additional memory of 1.5×log n bits required in the countermeasure 3, and considering that log n≧1024 for a general RSA parameter, 8 + log n < 1.5×log n, and thus, processing with a smaller amount of additional memory may be realized in this example. The effects attained by this embodiment will now be described. According to the present invention, all of the problems 1 through 3 may be addressed, and the security may be retained even when the parameter s, that is, the shift value, is revealed. The problem 1 is the vulnerability against the attack 4, and the attack 4 may be prevented by the method described in this embodiment. The problem 2 is the additional cost in the amount of computation; when the method described in this embodiment is employed, the doubled amount of computation required in the countermeasure 3 is not necessary, and the countermeasure against the attack 4 may be realized with an overhead in the amount of computation as small as 1% through 7% as compared with that of the decryption method of FIG. 15 not employing the countermeasure.
The problem 3 is the additional cost of memory; when Example 1 or 2 of the embodiment is employed, the additional amount of memory is merely log s bits. While additional memory of 1536 bits is necessary in the countermeasure 3 assuming that n has a bit length of 1024 bits, a small parameter s of approximately 2 or 3 is used in Example 1 or 2, and hence, additional memory of merely 2 bits is required. Thus, Example 1 or 2 provides a superior method. Alternatively, when Example 3, 4, 5 or 6 of this embodiment is employed, additional memory of log s + log n bits is required. Although this is a larger amount of additional memory than that of Example 1 or 2, when n has a bit length of 1024 bits and s has a bit length as small as approximately 8 bits, the additional memory amount is 1032 bits. Therefore, this method is still superior to the countermeasure 3, where the additional memory amount is 1536 bits. Moreover, since the shift value s used in this embodiment is a value that causes no problem in the security even if externally revealed, the decryption processor of this embodiment uses no fixed parameter that endangers the whole decryption processor when externally revealed. Accordingly, the present embodiment provides a superior method. Furthermore, the decryption processor of this embodiment may be provided as a computer composed of a central processing unit, a main memory, an auxiliary memory, and the like. Also, a program for causing the computer used as the decryption processor to execute the aforementioned steps may be provided as a decryption processing program. When the program is stored in a computer-readable recording medium, the computer used as the decryption processor may execute the program. The computer-readable recording medium includes an internal memory device to be internally loaded in a computer, such as a ROM, a RAM and a hard disk drive, and a portable recording medium such as a CD-ROM, a flexible disk, a DVD disk, a magneto-optical disk and an IC card.
{"url":"http://www.faqs.org/patents/app/20100232603","timestamp":"2014-04-21T08:53:02Z","content_type":null,"content_length":"152229","record_id":"<urn:uuid:f1ed1ec2-ba97-4f64-8063-35eadecb4e2d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
[pypy-svn] r77834 - pypy/extradoc/talk/pepm2011 arigo at codespeak.net arigo at codespeak.net Tue Oct 12 15:54:09 CEST 2010 Author: arigo Date: Tue Oct 12 15:54:05 2010 New Revision: 77834 Typos. Swap $u$ and $v$ in \texttt{get}, to match the \texttt{set}. Modified: pypy/extradoc/talk/pepm2011/escape-tracing.pdf Binary files. No diff available. Modified: pypy/extradoc/talk/pepm2011/math.lyx --- pypy/extradoc/talk/pepm2011/math.lyx (original) +++ pypy/extradoc/talk/pepm2011/math.lyx Tue Oct 12 15:54:05 2010 @@ -60,7 +60,7 @@ \begin_layout Standard \begin_inset Formula \begin{eqnarray*} -u,v,w & \in & V\mathrm{\,\, are\, Variables\, in\, the\, original\, trace}\\ +u,v & \in & V\mathrm{\,\, are\, Variables\, in\, the\, original\, trace}\\ u^{*},v^{*},w^{*} & \in & V^{*}\,\mathrm{\, are\, Variables\, in\, the\, optimized\, trace}\\ T & \in & \mathfrak{T}\mathrm{\,\, are\, runtime\, types}\\ F & \in & \left\{ L,R\right\} \,\mathrm{\, are\, fields\, of\, objects}\\ @@ -78,7 +78,7 @@ \begin_layout Standard \begin_inset Formula \begin{eqnarray*} & & v=\mathrm{new}(T)\,\,\mathrm{make\, a\, new\, object}\\ - & & v=\mathrm{get}(u,F)\,\,\mathrm{read\, a\, field}\\ + & & u=\mathrm{get}(v,F)\,\,\mathrm{read\, a\, field}\\ & & \mathrm{set}\left(v,F,u\right)\,\,\mathrm{write\, a\, field}\\ & & \mathrm{guard}(v,T)\,\,\mathrm{check\, the\, type\, of\, an\, object}\end{eqnarray*} @@ -206,7 +206,7 @@ \begin_inset Text \begin_layout Plain Layout -\begin_inset Formula ${\displaystyle \frac{\,}{v=\mathtt{get}(u,F),E,H\overset{\mathrm{run}}{\Longrightarrow}E\left[v\mapsto H\left(E\left(u\right)\right)_{F}\right],H}}$ +\begin_inset Formula ${\displaystyle \frac{\,}{u=\mathtt{get}(v,F),E,H\overset{\mathrm{run}}{\Longrightarrow}E\left[u\mapsto H\left(E\left(v\right)\right)_{F}\right],H}}$ @@ -385,7 +385,7 @@ \begin_inset Text \begin_layout Plain Layout -\begin_inset Formula ${\displaystyle \frac{E(u)\in\mathrm{dom}(S)}{v=\mathtt{get}(u,F),E,S\overset{\mathrm{opt}}{\Longrightarrow}\left\langle \,\right\rangle ,E\left[v\mapsto S(E(u))_{F}\right],S}}$ +\begin_inset Formula ${\displaystyle \frac{E(v)\in\mathrm{dom}(S)}{u=\mathtt{get}(v,F),E,S\overset{\mathrm{opt}}{\Longrightarrow}\left\langle \,\right\rangle ,E\left[u\mapsto S(E(v))_{F}\right],S}}$ @@ -408,7 +408,7 @@ \begin_inset Text \begin_layout Plain Layout -\begin_inset Formula ${\displaystyle \frac{E(u)\notin\mathrm{dom}(S)\, v^{*}\,\mathrm{fresh}}{v=\mathtt{get}(u,F),E,S\overset{\mathrm{opt}}{\Longrightarrow}\left\langle v^{*}=\mathtt{get}(E(u),F)\right\rangle ,E\left[v\mapsto v^{*}\right],S}}$ +\begin_inset Formula ${\displaystyle \frac{E(v)\notin\mathrm{dom}(S)\, u^{*}\,\mathrm{fresh}}{u=\mathtt{get}(v,F),E,S\overset{\mathrm{opt}}{\Longrightarrow}\left\langle u^{*}=\mathtt{get}(E(v),F)\right\rangle ,E\left[u\mapsto u^{*}\right],S}}$ Modified: pypy/extradoc/talk/pepm2011/paper.tex --- pypy/extradoc/talk/pepm2011/paper.tex (original) +++ pypy/extradoc/talk/pepm2011/paper.tex Tue Oct 12 15:54:05 2010 @@ -542,11 +542,11 @@ The static object associated with $p_{5}$ would know that it is a -\texttt{BoxedInteger}, and that the \texttt{intval} field contains $i_{4}$, the -one associated with $p_{6}$ would know that its \texttt{intval} field contains -the constant -100. +\texttt{BoxedInteger} whose \texttt{intval} field contains $i_{4}$; the +one associated with $p_{6}$ would know that it is a \texttt{BoxedInteger} +whose \texttt{intval} field contains the constant -100. 
-The following operations, that use $p_{5}$ and $p_{6}$ could then be +The following operations on $p_{5}$ and $p_{6}$ could then be optimized using that knowledge: @@ -580,13 +580,13 @@ a static object is stored in a globally accessible place, the object needs to actually be allocated, as it might live longer than one iteration of the loop and because the partial evaluator looses track of it. This means that the static -objects needs to be turned into a dynamic one, \ie lifted. This makes it +object needs to be turned into a dynamic one, \ie lifted. This makes it necessary to put operations into the residual code that actually allocate the static object at runtime. This is what happens at the end of the trace in Figure~\ref{fig:unopt-trace}, when the \texttt{jump} operation is hit. The arguments of the jump are at this point static objects. Before the -jump is emitted, they are \emph{lifted}. This means that the optimizers produces code +jump is emitted, they are \emph{lifted}. This means that the optimizer produces code that allocates a new object of the right type and sets its fields to the field values that the static object has (if the static object points to other static objects, those need to be lifted as well) This means that instead of the jump, @@ -617,6 +617,7 @@ The final optimized trace of the example can be seen in +XXX why does it says ``Figure 4.1'' here but ``Figure 4'' in the label? The optimized trace contains only two allocations, instead of the original five, and only three \texttt{guard\_class} operations, from the original seven. @@ -640,7 +641,7 @@ \emph{Object Domains:} - u,v,w & \in & V & \mathrm{\ variables\ in\ trace}\\ + u,v & \in & V & \mathrm{\ variables\ in\ trace}\\ T & \in & \mathfrak{T} & \mathrm{\ runtime\ types}\\ F & \in & \left\{ L,R\right\} & \mathrm{\ fields\ of\ objects}\\ l & \in & L & \mathrm{\ locations\ on\ heap} @@ -675,8 +676,8 @@ variables are locations (i.e.~pointers). Locations are mapped to objects, which are represented by triples of a type $T$, and two locations that represent the fields of the object. When a new object is created, the fields are initialized -to null, but we require that they are immediately initialized to a real -location, otherwise the trace is malformed. +to null, but we require that they are initialized to a real +location before being read, otherwise the trace is malformed. We use some abbreviations when dealing with object triples. To read the type of an object, $\mathrm{type}((T,l_1,l_2))=T$ is used. Reading a field $F$ from an @@ -687,7 +688,7 @@ Figure~\ref{fig:semantics} shows the operational semantics for traces. The interpreter formalized there executes one operation at a time. Its state is -represented by an environment and a heap, which are potentially changed by the +represented by an environment $E$ and a heap $H$, which are potentially changed by the execution of an operation. The environment is a partial function from variables to locations and the heap is a partial function from locations to objects. Note that a variable can never be null in the environment, otherwise the trace would @@ -699,12 +700,12 @@ $E[v\mapsto l]$ denotes the environment which is just like $E$, but maps $v$ to -The new operation creates a new object $(T,\mathrm{null},\mathrm{null})$, on the +The \texttt{new} operation creates a new object $(T,\mathrm{null},\mathrm{null})$ on the heap under a fresh location $l$ and adds the result variable to the environment, -mapping to the new location $l$. +mapping it to the new location $l$. 
-The \texttt{get} operation reads a field $F$ out of an object and adds the result -variable to the environment, mapping to the read location. The heap is +The \texttt{get} operation reads a field $F$ out of an object, and adds the result +variable to the environment, mapping it to the read location. The heap is The \texttt{set} operation changes field $F$ of an object stored at the location that @@ -771,53 +772,54 @@ The state of the optimizer is stored in an environment $E$ and a \emph{static heap} $S$. The environment is a partial function from variables in the -unoptimized trace to variables in the optimized trace (which are written with a +unoptimized trace $V$ to variables in the optimized trace $V^*$ (which are +themselves written with a $\ ^*$ for clarity). The reason for introducing new variables in the optimized trace is that several variables that appear in the unoptimized trace can turn into the same variables in the optimized trace. Thus the environment of the -optimizer serves a function similar to that of the environment in the semantics. +optimizer serves a function similar to that of the environment in the semantics: sharing. The static heap is a partial function from $V^*$ into the -set of static objects, which are triples of a type, and two elements of $V^*$. -An variable $v^*$ is in the domain of the static heap $S$ as long as the +set of static objects, which are triples of a type and two elements of $V^*$. +A variable $v^*$ is in the domain of the static heap $S$ as long as the optimizer can fully keep track of the object. The image of $v^*$ is what is statically known about the object stored in it, \ie its type and its fields. The fields of objects in the static heap are also elements of $V^*$ (or null, for short periods of time). -When the optimizer sees a new operation, it optimistically removes it and +When the optimizer sees a \texttt{new} operation, it optimistically removes it and assumes that the resulting object can stay static. The optimization for all further operations is split into two cases. One case is for when the involved variables are in the static heap, which means that the operation can be performed at optimization time and removed from the trace. These rules mirror -the execution semantics closely. The other case is that nothing is known about -the variables, which means the operation has to be residualized. +the execution semantics closely. The other case is for when not enough is known about +the variables, and the operation has to be residualized. -If the argument $u$ of a \texttt{get} operation is mapped to something in the static +If the argument $v$ of a \texttt{get} operation is mapped to something in the static heap, the get can be performed at optimization time. Otherwise, the \texttt{get} -operation needs to be emitted. +operation needs to be residualized. If the first argument $v$ to a \texttt{set} operation is mapped to something in the static heap, then the \texttt{set} can performed at optimization time and the static heap -is updated. Otherwise the \texttt{set} operation needs to be emitted. This needs to be -done carefully, because the new value for the field stored in the variable $u$ +updated. Otherwise the \texttt{set} operation needs to be residualized. This needs to be +done carefully, because the new value for the field, from the variable $u$, could itself be static, in which case it needs to be lifted first. 
-I a \texttt{guard\_class} is performed on a variable that is in the static heap, the type check +If a \texttt{guard\_class} is performed on a variable that is in the static heap, the type check can be performed at optimization time, which means the operation can be removed if the types match. If the type check fails statically or if the object is not -in the static heap, the \texttt{guard\_class} is put into the residual trace. This also needs to +in the static heap, the \texttt{guard\_class} is residualized. This also needs to lift the variable on which the \texttt{guard\_class} is performed. Lifting takes a variable that is potentially in the static heap and makes sure that it is turned into a dynamic variable. This means that operations are -emitted that construct an object that looks like the shape described in the +emitted that construct an object with the shape described in the static heap, and the variable is removed from the static heap. Lifting a static object needs to recursively lift its fields. Some care needs to be taken when lifting a static object, because the structures described by the static heap can be cyclic. To make sure that the same static object is not lifted -twice, the liftfield operation removes it from the static heap \emph{before} +twice, the \texttt{liftfield} operation removes it from the static heap \emph{before} recursively lifting its fields. @@ -1120,7 +1122,7 @@ hint in first js paper by michael franz \cite{mason_chang_efficient_2007} -SPUR, a tracing JIT for C# seems to be able to remove allocations in a similar +SPUR, a tracing JIT for C\# seems to be able to remove allocations in a similar way to the approach described here, as hinted at in the technical report \cite{XXX}. However, no details for the approach and its implementation are More information about the Pypy-commit mailing list
{"url":"https://mail.python.org/pipermail/pypy-commit/2010-October/043611.html","timestamp":"2014-04-18T08:12:15Z","content_type":null,"content_length":"15682","record_id":"<urn:uuid:03f55f36-5497-42ca-bc8c-78fcc3dd8e86>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Williams College Office of the Registrar: Courses 2000-2001

CSCI 256(S) Algorithm Design and Analysis

Given a list of descriptions of all of the buildings in Manhattan, how could you effectively produce a description of the Manhattan skyline as seen from a boat on the East River? Or, suppose that for all of the towns in the U.S., you have information on all of the roads that connect them. How would you determine the shortest route between any two of the towns? The most obvious ways of solving these problems turn out to be very inefficient. This course is concerned with investigating methods of designing efficient and reliable algorithms to solve these and other computational problems. By carefully analyzing the underlying structure of the problem to be solved it is often possible to dramatically decrease the resources (amount of time and/or space) needed to find a solution. Through this analysis we can also give proof that an algorithm will perform correctly and determine its running time and space requirements. We will present several algorithm design strategies that build on data structures and programming techniques introduced in Computer Science 136. These include: induction, divide-and-conquer, dynamic programming, and greedy algorithms. Particular topics to be considered will include shortest path and other network problems; problems in computational geometry; searching, sorting and order statistics and some advanced data structures such as balanced binary search trees, heaps, and union-find structures. In addition, an introduction to complexity theory and the complexity classes P and NP will be provided. As time permits, additional topics such as probabilistic and parallel algorithms will be studied. Evaluation will be based primarily on problem assignments, programs, and exams.

Prerequisites: Computer Science 136 and Mathematics 251. Hour: BRUCE
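As a concrete taste of the divide-and-conquer strategy advertised in this course description, the sketch below solves the skyline problem from the opening paragraph in O(n log n); the (left, right, height) input format and the function names are our own illustrative choices, not part of the course materials.

```python
def merge(a, b):
    """Merge two skylines given as lists of (x, height) key points."""
    h1 = h2 = 0
    i = j = 0
    merged = []
    while i < len(a) or j < len(b):
        if j == len(b) or (i < len(a) and a[i][0] < b[j][0]):
            x, h1 = a[i]; i += 1
        elif i == len(a) or b[j][0] < a[i][0]:
            x, h2 = b[j]; j += 1
        else:                                  # same x-coordinate: consume both
            x, h1 = a[i]; h2 = b[j][1]
            i += 1; j += 1
        h = max(h1, h2)
        if not merged or merged[-1][1] != h:   # record only height changes
            merged.append((x, h))
    return merged

def skyline(buildings):
    """Divide and conquer: split the buildings, solve halves, merge skylines."""
    if not buildings:
        return []
    if len(buildings) == 1:
        left, right, height = buildings[0]
        return [(left, height), (right, 0)]
    mid = len(buildings) // 2
    return merge(skyline(buildings[:mid]), skyline(buildings[mid:]))

# Two overlapping buildings described as (left, right, height).
print(skyline([(1, 5, 11), (2, 7, 6)]))        # [(1, 11), (5, 6), (7, 0)]
```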
{"url":"http://web.williams.edu/admin/registrar/catalog/depts0001/csci/csci256.html","timestamp":"2014-04-20T18:35:59Z","content_type":null,"content_length":"2910","record_id":"<urn:uuid:6b6328cb-f9f8-40bb-a6e8-0a99c8f1b43e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
hi Agnishom and bobbym

Here are two new diagrams.

I constructed the square ABCD. I put point G somewhere on AD. I made an isosceles triangle FGC so that FG = FC. I found the centre of the square and rotated point F by 270 degrees around this point to fix H. That makes BH = FG = FC. I constructed a line perpendicular to GF at G and bisected the angle to make angle FGJ = 45.

As you can see in the first diagram, J and H are different points. But this method of construction allows me to move G along the line. My plan was to find the position where J and H coincide. Because of the size of the points, it's hard to be exact about this (Euclid had dimensionless points but they cost extra). So I measured angles GHB and GJB and tried to make them equal. The second diagram shows my best attempt.

So the 90-degree case seems to be a special case and the only one where angle FGH = 45 (not counting G at A or B). Your original diagram does make it look like there's a right angle there, but, in geometry, it is dangerous to assume things just because they look that way. Later I'll have a go at the proof when I assume the angle is 90.
{"url":"http://www.mathisfunforum.com/post.php?tid=18478&qid=241820","timestamp":"2014-04-16T07:35:16Z","content_type":null,"content_length":"22115","record_id":"<urn:uuid:57aa1337-0ca5-4ad7-a78f-bcd4c806d7ec>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Operational eruption forecasting at high-risk volcanoes: the case of Campi Flegrei, Naples

High-risk volcanic events are commonly preceded by long periods of unrest during which scientists are asked to provide near real-time forecasts. The rarity of such events, inaccessibility of the underground volcanic system, non-linear behaviors, and limited datasets constitute major sources of uncertainty. In order to provide reasoned guidance in the face of uncertainties, monitoring observations and conceptual/theoretical models must be incorporated into a formal and structured probabilistic scheme using evidence science principles. As uncertainty and subjectivity are inescapable components of volcanic hazard forecasts, they must be dealt with and clearly communicated to decision-makers and society. Here, we present the set-up of an automated near-real-time tool for short-term eruption forecasting for Campi Flegrei caldera (CFc), Italy. The tool, based on a Bayesian Event Tree scheme, takes account of all the available information, and the subjectivity of choices is dealt with through a 5-year-long elicitation experiment with a team of about 30 of the major experts on the geological history, dynamics and monitoring of CFc. The tool provides prompt probabilistic assessment in near real-time, making it particularly suitable for tracking a rapidly evolving crisis, and it is easily reviewable once new observations and/or models become available. The quantitative rules behind the tool, which represent the group view of the elicited community of experts, are defined during a period of quiescence, thus allowing prior scrutiny of any scientific input into the model and minimizing the external stress on scientists during an actual emergency phase. Notably, the results also show that CFc may pose a higher threat to the city of Naples than the better-known Mount Vesuvius.

Keywords: Volcanic hazard; Eruption forecasting; Aleatory and epistemic uncertainty; Expert elicitation; Bayesian event tree

The Campi Flegrei caldera (CFc) directly threatens a population of several hundred thousand who live inside the caldera, and the city of Naples itself (∼1 million inhabitants), just outside the caldera. The latest eruption occurred in 1538, ∼4,000 years after the previous one that closed a period of intense eruptive activity (Orsi et al. 1996). The 1538 eruption was preceded by uplift of the caldera floor, seismic swarms, and visible variations in fumarolic output that lasted at least several decades (Guidoboni and Ciucciarelli 2011). The eruption was explosive and resulted in the construction of the new hill of Monte Nuovo in the western caldera sector (di Vito et al. 1987). After about 4 centuries of caldera subsidence, the present unrest started in the 1950's in the form of uprise of the caldera floor, seismic swarms, and changes in the flow, areal extent, and composition of fumaroles (Del Gaudio et al. 2010; Orsi et al. 1999). Unrest has concentrated in discrete periods of time, with major crises in 1969-71 and 1982-84, the latter culminating in the evacuation of about 40,000 people from the city of Pozzuoli. Several other minor uplift periods have followed and continue, requiring the development of plans for scientific and civil protection operations. Such plans depend on the capability to interpret in real-time the observed dynamics and anticipate at least several days in advance the occurrence of a new eruption.
The extreme complexity of volcanic processes, nonlinearities, limited knowledge, and the large number of degrees of freedom make deterministic predictions of volcanic system evolution extremely difficult, if not impossible (Mader et al. 2006; Newhall and Dzurisin 1988). The additional complexity of decision-making and civil protection operations, especially in highly inhabited areas like CFc, requires evaluations to be made on time windows of up to weeks, further amplifying the influence of uncertainties. As a consequence, a probabilistic approach is needed in order to manage the uncertainties and build a quantitative reference frame for managing scientific evidence within a rational decision-support process (Marzocchi and Woo 2009). However, past pre-eruptive data at CFc are not available, with the exception of the descriptive, macroscopic observations reported in the chronicles related to the 1538 eruption (Guidoboni and Ciucciarelli 2011). This is unfortunately a common situation at volcanoes globally. On the other hand, volcanologists have developed sophisticated conceptual and theoretical models and deployed advanced monitoring systems that provide relevant information about the status of the volcano. The problem is therefore to integrate such heterogeneous information into a formal probabilistic scheme for eruption forecasting. With such a purpose, we have set up a real-time tool for short-term eruption forecasting at CFc (BETEF_CF; see Figure 1A). The statistical model adopted is BET_EF (Bayesian Event Tree for Eruption Forecasting, Marzocchi et al. 2008). BET_EF is based on an Event Tree logic (Newhall and Hoblitt 2002), in which branches are logical steps from a general starting event (the onset of unrest, node 1), through specific subsequent events (the presence of magma driving the unrest, node 2), to the final outcome (the onset of an eruption, node 3), as reported in Figure 1A. BET_EF assesses probabilities at all nodes through Bayesian inference, including any possible source of information (theoretical beliefs, models, past data, and volcano monitoring), and accounting for both aleatory and epistemic uncertainty. Then, the probability of eruption is calculated by multiplying the probabilities at each node. Using a simplified formalism, the probability of eruption is given by

P(eruption) = P(unrest) × P(magmatic unrest | unrest) × P(eruption | magmatic unrest),

i.e., the product of the probabilities assessed at nodes 1, 2 and 3.

Figure 1. Schematic representation of the BET_EF model's settings (Marzocchi et al. 2008). In panel A, the three nodes of the Event Tree are represented. At each node, a Bayesian inference scheme is performed assuming a Beta distribution for the probability, both for the analysis of anomalies (panel B) and for the background analysis (panel C). BET_EF automatically switches between these two regimes, based on the observed state of unrest (P[unrest] in panel B). During unrest episodes, the model is based on the analysis of monitoring anomalies (panel B), and it is set through the parameters T1, T2 and w[i], the thresholds and weight of each monitoring measure, respectively. On the left, an example of a fuzzy threshold is reported: the x-axis gives the possible values of the parameter, while the y-axis gives the degree of truth of the statement 'the parameter is anomalous', given a measurement equal to x. On the right, the basic principles of the transformation from anomalies to probabilities are reported; Bayesian inference is performed on the parameters a and b. The background assessment is based on Bayesian inference on the probabilities (panel C), where theoretical models set the prior distributions (through the average Λ and the equivalent number of data Θ), which are then updated with the available past data (through the number of successes y and of trials n). More details can be found in the text and in Marzocchi et al. (2008).
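To fix ideas on the background assessment sketched in panel C, the following toy code uses one common Beta-Binomial convention, in which a prior with mean Λ and "equivalent number of data" Θ is updated with y successes out of n past trials; the exact parameterization used in BET_EF (Marzocchi et al. 2008) may differ in detail, so treat this purely as an illustration.

```python
# Illustrative Beta-Binomial update for a single event-tree node (one common
# convention; the exact BET_EF parameterization may differ in detail).
def beta_prior(prior_mean, equivalent_n):
    """Prior Beta(a, b) with the requested mean and 'equivalent number of data'."""
    a = prior_mean * equivalent_n
    b = (1.0 - prior_mean) * equivalent_n
    return a, b

def beta_update(a, b, successes, trials):
    """Posterior after observing `successes` out of `trials` past windows."""
    return a + successes, b + (trials - successes)

a, b = beta_prior(prior_mean=0.5, equivalent_n=1.0)   # maximally diffuse prior
a, b = beta_update(a, b, successes=2, trials=30)      # e.g. 2 unrest months out of 30
print(a / (a + b))                                    # posterior mean, roughly 0.08
```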
The method is described in detail in Marzocchi et al. (2008), and a free generic software tool is available online (Marzocchi et al. 2009), whose input can be defined by users so that it can be applied to different volcanoes. A key feature of BET_EF is that it automatically updates the forecast procedure depending on the occurrence of relevant anomalies in the volcanic activity. Whenever anomalies occur, BET_EF bases its forecast on the interpretation of the evolving monitoring measures (see Figure 1B). When only background activity is registered, the eruption forecast addresses only the expected long-term activity (see Figure 1C). The definition of what is background vs anomaly and the interpretation of anomalies represent the core of the analysis, i.e., the selection of the parameters of interest and the quantitative definition of anomalies. For CFc, the lack of previous pre-eruptive observations makes this analysis a necessary but rather subjective step that can be treated formally through expert opinion. Expert opinion analysis is well established in many fields, including global political trends and economics, whenever decisions are made under limited knowledge and a high level of subjectivity (e.g., Cornish 1977), and in volcanology (Aspinall 2006; Neri et al. 2008). Weighting of experts varies, but it is a fundamental part of the elicitation process (e.g., Cook 1991), even though equal-weight procedures are still often used. Here we adopt a consensus-based expert scoring scheme and an innovative expert elicitation method that uses a structured and iterative process for developing consensus. This scheme resembles the Delphi method in its basic principles (e.g., Linstone and Turoff 1975), but it is targeted to overcome its main criticisms (vague questionnaire items, unequal treatment of respondents, significant number of dropouts; see Cook 1991 and references therein). In this process, expert opinion is weighted on the basis of mutual recognition among experts, expressed through a regularly repeated blind procedure. Consensus, in our opinion, is indeed critical for the effective applicability of the results. Differing from most expert elicitations (e.g., Neri et al. 2008), experts have been asked to select monitoring parameters and relative critical thresholds at each node of the event tree, instead of being asked directly for probabilities. In this way the individual and collective specialist knowledge is more effectively exploited, since the experts are asked to discuss and express themselves directly in their field of expertise: their knowledge enters the statistical model without the filter of personal sensitivity to probabilities, a subject that is unfamiliar to many volcanologists. In the following, we report the set-up of BETEF_CF, based on (i) the results of a 5-year-long elicitation experiment and (ii) the analysis of CFc "background" activity. The applicability of BETEF_CF is then demonstrated with a retrospective analysis of the observed unrest dynamics at CFc in the period 1981-2010.

Development of the model BETEF_CF

The goal of this paper is the set-up of the model BET_EF for CFc (hereinafter, BETEF_CF).
This model estimates, in near-real time, the probability of occurrence in the time window (t_0, t_0 + τ) of episodes of "unrest" (node 1), "magmatic unrest" (node 2) and "eruption" (node 3). Note that, in this formulation, we concentrate on magmatic activity only. Of course, one of the future developments will be the parallel treatment of non-magmatic phenomena, such as phreatic eruptions. For practical reasons, τ is set to 1 month, as for Mt. Vesuvius (Marzocchi et al. 2004; Marzocchi et al. 2008). BETEF_CF switches between two distinct regimes, hereinafter referred to as short-term and long-term analyses, that is:

• When a state of unrest is detected at t = t_0 by BETEF_CF, all further probabilistic assessments are based on the analysis of changes in the volcanic system over rather short time frames (days to weeks). This situation is hereinafter referred to as short-term assessment. More precisely, monitoring anomalies are transformed (using a simple transfer function, see Figure 1B) into subjective probability distributions relative to the occurrence of "magmatic unrest" and "eruption", respectively. Here, the basic input for BETEF_CF is the definition of the anomalies to be accounted for at each node. This goal is achieved by defining (i) a list of parameters of interest at each node and (ii) thresholds (in a fuzzy perspective) to identify anomalies for each of these parameters.

• When anomalies are not observed at time t = t_0, BETEF_CF considers the so-called background probabilities, hereinafter also referred to as long-term probabilities. Such long-term probabilities are based on theoretical models and the analysis of past data since 1980 (the date after which anomalies can be reasonably defined with the available recordings from the monitoring system of CFc), considering (i) the definition of unrest used for short-term assessments, (ii) the fact that no anomalies are recorded at t = t_0, and (iii) the fact that CFc has been experiencing a long-term uplift since the 1970s.

In both cases, the choice of the parameters and relative thresholds, at all the nodes, is the core of BETEF_CF, since it controls all probability assessments in the short-term regime and it defines the reference background status of CFc (no unrest) for the long-term assessments. The subjectivity of this choice is herein dealt with through expert elicitation.

Result 1: expert elicitations

We invited experts to multiple panels. The goal of each panel was to define the input for the model BETEF_CF. At each panel, each expert defined a list of parameters, their relevant thresholds to define the occurrence of anomalies, and their weights indicating the perceived importance of each parameter. The parameters are relative to the different nodes of the event tree in Figure 1A. At each panel, the opinion of each expert was weighted by their peers. Five elicitation sessions were organized, preceded by seminars, analysis of previous elicitation results, and debate, and followed by public discussion, during approximately 5 years covering two sequential projects funded by the Italian Dipartimento della Protezione Civile (INGV-DPC 2005, 2007). A complete list of the elicited experts can be found in the Endnotes and in Selva et al. (2009). During the five expert meetings, the quantitative definition of the monitoring parameters, and their availability in real-time from the monitoring network at CFc, were carefully considered.
In addition, for each parameter, the concept of an anomaly's inertia has been developed, which defines how long a given change remains significant for forecasting purposes: for example, if a new fracture opens today, for how many days will this count as an "anomaly" before it is no longer significant? During the first elicitations (I and II), each expert was free to define both parameters and inertia. After these elicitation sessions, each proposed definition was collectively discussed. After elicitation II, a committee (a subset of the group of experts) was put in charge of preparing a list of parameters complete with their operative definition and inertia, based on the proposals of the first two elicitations and the subsequent discussions. This list was collectively reviewed before elicitation III, and adopted from there on. Of course, these definitions are reviewable in the future, and indeed a few minor changes were discussed and implemented before elicitations IV and V. Note that, according to the operative definition of the parameters' inertia, it decreases from node 1 to 3, consistent with the view that changes are expected to become progressively more rapid when approaching the eruption. This is of course an assumption that reflects the group's view, and it may significantly affect the model's forecasts if only a few anomalies, with effective inertia much greater than the defined one, are recorded before an event. This was considered an acceptable assumption.

In each elicitation session, experts were individually elicited in a blind procedure, with the following objectives:

(i) An individual weight w[e] was anonymously assigned to each expert by the other members of the panel. To do this, each expert voted for up to 5 other experts with a weight of 1 or 2 (self-voting was not permitted). This vote was about each expert's understanding of CFc. The expert's weight w[e] is then computed as the sum of all votes received.

(ii) Each expert identified the monitoring parameters and thresholds that are relevant at each node of the event tree. An expert could also select a parameter without defining thresholds, if he/she judged this to be outside his/her own expertise. In addition, at nodes 2 and 3 (magmatic unrest and eruption, respectively), for each parameter the expert selected a weight w[i] (equal to 1 or 2, where i runs over all parameters) to indicate how informative an anomaly of that parameter is at that node. Each parameter received a score s that is the sum of the weights w[e] of the experts that indicated that parameter, as computed in step (i). Two score thresholds (s[M] and s[m]) were defined after each elicitation session in order to classify the parameters according to high, intermediate and low score. The parameters with high score (s≥s[M]) were selected, whereas the parameters with intermediate score (s[M]>s>s[m]) were still selected, but assigning them a probability of acceptance p[a] equal to (s−s[m])/(s[M]−s[m]). The parameters with low score (s≤s[m]) were rejected.

(iii) Threshold values and weights for each parameter were identified from the estimates provided by the experts through a weighted procedure, with weights w[e]. Lower and upper thresholds were selected as the 50th percentile of the corresponding distribution. The weight of each parameter for forecasting purposes (w[i] in Figure 1) is assigned as the product of the 50th percentile of the distribution multiplied by the probability of acceptance p[a].
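The scoring rules in steps (i)-(ii) can be made concrete with a small toy example. The sketch below (Python) uses invented expert names, votes, scores and thresholds; it is not part of the elicitation software, and only illustrates the arithmetic described above.

# Hypothetical toy example of the scoring scheme: expert weights w_e come from peer
# votes, each parameter gets a score s = sum of the weights of the experts who selected
# it, and intermediate scores get a probability of acceptance p_a = (s - s_m)/(s_M - s_m).

expert_weight = {"A": 5, "B": 3, "C": 2}           # w_e, from peer voting (invented)
selections = {                                      # which experts picked each parameter
    "VT_per_day": ["A", "B", "C"],
    "CLVD_presence": ["C"],
    "ground_uplift_rate": ["A", "B"],
}
s_M, s_m = 7, 2                                     # score thresholds fixed after the session

def score(param):
    return sum(expert_weight[e] for e in selections[param])

def acceptance(s):
    if s >= s_M:
        return 1.0                                  # high score: selected
    if s <= s_m:
        return 0.0                                  # low score: rejected
    return (s - s_m) / (s_M - s_m)                  # intermediate: probabilistic acceptance

for p in selections:
    s = score(p)
    print(p, s, acceptance(s))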
The results of each elicitation session are reported in Tables 1, 2 and 3 for seismological, geodetic, and geochemical parameters, respectively. Note that parameters are of two types: fuzzy or Boolean. For fuzzy parameters, two thresholds are reported, between which measures progressively evolve from "normal" to "anomalous". In particular, they should be interpreted as in Figure 1B. For example, considering the results of Elicitation V, the number of VT per day (M > 0.8) is considered surely anomalous if greater than 15, possibly anomalous if between 5 and 15, and not anomalous if less than 5. Boolean parameters, reported as YES/NO, represent single observations which alone constitute an anomaly.

Table 1. Elicitation results for seismological parameters
Table 2. Elicitation results for geodesy parameters
Table 3. Elicitation results for geochemistry and thermal parameters

The tables illustrate the progressive convergence of the expert group decisions from highly scattered initial views toward a shared and stable group opinion. The results of subsequent elicitation sessions also show convergence of opinions towards a few stable parameters at each node (Figure 2). Some initially inconsistent definitions of parameters and thresholds were removed through time (e.g., Presence of CLVD, since the present resources at CFc do not allow its real-time assessment). Through the years we observed a progressively greater willingness of individual experts to openly illustrate the limits as well as the successes of their models, and to become more open to modifying previously preferred quantifications in favor of others that emerged collectively from the expert group decision process (e.g., minimum magnitudes, thermal anomalies, seismic event counting).

Figure 2. Convergence in the number of selected parameters through elicitation sessions. The number of selected parameters (here we only show those with p[a]=1) decreases significantly through elicitation sessions, showing the convergence process of the experts. The vertical dashed line highlights the significant change that occurred after elicitation III, when the number of selected parameters fell. The results of the last elicitation do not differ substantially from those of the previous session, and results show that both the number (in the figure) and the definition of parameters (in Tables 1, 2 and 3) are stable. Thus, those results represent the outcome of the experiment. More details can be found in the text.

In Table 4, we report the results of the last elicitation. Notably, the trends evident in the table do not reflect the choices of any single expert or subset of experts; rather, they emerge as an intrinsic group decision. It is therefore remarkable that the elicitation process produced a clear and consistent picture of the expected dynamics that might lead to a possible eruption at CFc. An example can be seen by inspecting the seismic parameters at the three nodes. The "Unrest" node turns out to be sensitive simply to the occurrence of earthquakes; at the "Magmatic" node, the depth of hypocenters and waveforms become relevant; finally, acceleration of seismic activity is believed to be critical at the "Eruption" node. Similar consistent trends also emerge from the parameters referring to geodetic and geochemical observations, overall providing a scientifically plausible and sound picture. It is also worth noting that the relevance of fuzzy parameters progressively decreases when moving from node 1 to 3, while the relevance of Boolean parameters increases. This reflects (i) the decreasing confidence of the experts (there is previous instrumental experience of unrest episodes at CFc, while that experience is missing for pre-eruptive phases), and (ii) the global experience suggesting that an eruption at a long-dormant volcano is usually preceded by a macroscopic (easily visible) escalation of phenomena.
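The fuzzy thresholds in Tables 1-3 can be read as a simple transfer function from a measured value to a degree of anomaly between 0 and 1 (Figure 1B). The sketch below (Python) assumes a linear ramp between the lower and upper thresholds; this linear form is an assumption for illustration rather than a statement of the exact function used in BET_EF.

# Minimal sketch of a fuzzy anomaly degree: 0 below the lower threshold, 1 above the
# upper threshold, and (assumed here) a linear ramp in between.

def degree_of_anomaly(value, lower, upper):
    if value <= lower:
        return 0.0
    if value >= upper:
        return 1.0
    return (value - lower) / (upper - lower)

# Example with the VT-per-day thresholds quoted above (lower=5, upper=15):
for vt in (3, 5, 10, 15, 40):
    print(vt, degree_of_anomaly(vt, 5, 15))

# A Boolean (YES/NO) parameter is simply 0 or 1; the overall degree of unrest is then
# the largest degree over all monitored parameters (see node 1 below).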
In order to check the stability of the results and the possible existence of systematic divergences of opinion between high-weight and low-weight (or zero-weight) experts, the entire procedure was repeated by assigning the same weight to each elicited expert (w[e]=1). The results show that the individual-weighting and equal-weighting procedures produce similar probability distributions, but that individual weighting yielded narrower distributions, i.e., more unanimity, than equal weighting, especially around parameters judged to be critical. In other words, the individual-weighting results are less dispersed than the equal-weighting results, and hence provide more informative distributions, even though the 50th percentile values are similar. For the sake of example, in Figure 3 we report the comparison between the individual-weighted and equal-weighted results for node 1, in selecting the parameters (upper panel) and in assessing lower thresholds (bottom panel). Analytical results for all nodes and all parameters are available in Selva et al. (2009).

Figure 3. Sensitivity test on the experts' weighting scheme. Top panel: comparison of the parameters' scores s, as assessed through the individual-weighted (assessed from the experts' weights w[e]) and equal-weighted (assessed imposing w[e]=1) procedures, relative to node 1 of elicitation V. The results from the two methods appear well correlated, showing that the selection of parameters (s>s[m]) is rather stable with respect to w[e]. In the bottom panel, we report the statistics on the lower threshold for selected parameters at node 1 of elicitation V, evaluated with the individual-weighted (red) and equal-weighted (blue) procedures. Bars indicate the confidence interval (80%) and stars represent the median. The results show that the two procedures result in equivalent medians, but the equal-weighted procedure generally provides larger confidence intervals. Equivalent results for all nodes and parameters can be found in Selva et al. (2009).
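The weighted aggregation behind this sensitivity test can be sketched as follows (Python, with invented numbers): each expert's proposed threshold is weighted by his/her peer-assigned weight w[e], and the 50th percentile of the resulting weighted distribution is taken; the equal-weighted variant simply sets all w[e]=1. The step-wise percentile below is one simple choice, not necessarily the estimator used by the authors.

import numpy as np

def weighted_percentile(values, weights, q):
    """q in [0, 100]; simple step-wise weighted percentile."""
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights) / weights.sum()
    idx = min(np.searchsorted(cum, q / 100.0), len(values) - 1)
    return values[idx]

thresholds = [3, 5, 5, 8, 10, 20]      # lower thresholds proposed by six experts (toy values)
w_e        = [5, 4, 3, 2, 1, 1]        # their peer-assigned weights (toy values)

print(weighted_percentile(thresholds, w_e, 50))                    # individual-weighted median
print(weighted_percentile(thresholds, [1] * len(thresholds), 50))  # equal-weighted median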
Result 2: BETEF_CF settings

The BETEF_CF code yields long- and short-term eruption forecasts in the form of probability distributions of the expected frequencies of each node's event ("unrest per month" for node 1, "magmatic unrest" given unrest for node 2, "eruption" given magmatic unrest for node 3), see Figure 1A. The parameters of each distribution are set through Bayesian inference according to the logic described in Figure 1, panels B and C, and in the text. Here, we report in detail how the elicitation results, together with other relevant models/past data, are used to parameterize BETEF_CF at each node.

Node 1: defining background state and unrest phase at CFc

Node 1 of the event tree considers whether there is either (i) unrest, or (ii) no unrest, in the time interval (t[0], t[0] + τ), where t[0] is the present time, and τ is the time window considered (1 month in this application). The definitions of background and unrest are necessarily subjective, since they have to reflect the specific aim of the forecast. Slow secular subsidence over preceding centuries was interrupted by caldera floor uplift beginning about 60 years ago. However, classifying all of the past 60 years as "unrest" is of no use for short-term forecasting and for decision makers. Instead, unrest is pragmatically defined as a state of the volcano that forces us to face the question at node 2: is what is being observed due to magma movements? The corresponding definition of the background state is therefore that of a "normal" state in the frame of the present long-lasting unrest at Campi Flegrei. In this respect, the BETEF_CF code requires as input a list of monitoring parameters and their thresholds that identify anomalies with respect to the background activity, i.e., a phase of unrest. The output of the expert elicitation sessions for node 1 is reported in Table 4.

When at least one anomaly is detected by BETEF_CF, the probability of unrest is set to the degree of unrest, that is, the largest degree of anomaly detected over all the parameters (see Marzocchi et al. 2008 and Figure 1B). When no anomalies are detected, BETEF_CF estimates the long-term probability of unrest. To do so, we set the prior information to a uniform distribution (maximum ignorance, i.e., Λ[1]=0.5 and Θ[1]=1, see Figure 1B and Marzocchi et al. 2008). In order to define the likelihood distribution in the Bayesian inference scheme, we divide the period 1981-2009 into subsequent non-overlapping time windows of length τ=1 month. Then, we count the number of months that started with no unrest up to that time window (number of 'trials') and the number of times that a new unrest episode starts within the time window, the latter corresponding to the number of observed unrest episodes up to then (number of 'successes') (Marzocchi et al. 2008; Sandri et al. 2009). In this count, we also include partial unrest episodes with a fractional value equal to the measured Degree of Unrest η (see Marzocchi et al. 2008, ESM). With these parameters, the probability distribution relative to the occurrence of unrest (node 1) in the next τ is completely defined for each of the time windows, and it changes with time as new information is acquired in a sort of learning procedure. At the end of the examined period (Jan. 1st 1981 - Dec. 31st 2009), the number of trials at node 1 was n[1]=306, while the successes were y[1]=7.4. The posterior distribution is therefore a Beta distribution with parameters α=8.4 and β=299.6.
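Given the reported posterior Beta(α=8.4, β=299.6), its mean and percentiles can be obtained directly, for instance with scipy; this is only a convenience for the reader, not part of the BETEF_CF code.

from scipy.stats import beta

a, b = 8.4, 299.6
print("mean monthly probability of unrest:", a / (a + b))          # ~0.027
print("10th-90th percentiles:", beta.ppf(0.1, a, b), beta.ppf(0.9, a, b))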
Node 2: magmatic unrest

In case of unrest, we must focus on quantifying whether the unrest is due (i) to new magma, or (ii) to other causes (e.g., hydrothermal, tectonic, etc.). A hydrothermal eruption could threaten areas within a few kilometers of a vent. At CFc this is serious and deserves attention in future work. Here, though, we focus only on magmatic unrest that can lead to magmatic eruptions. The distinction between magmatic and non-magmatic unrest involves some subjective considerations, because the presence of magma in a volcanic system is obvious. Pragmatically, we identify magmatic unrest when magma is in motion (e.g., significant reactivation of convection in a magma chamber, or dyke intrusion). If BETEF_CF detects unrest at node 1, the short-term analysis is based on the anomalies recorded for the parameters relative to node 2. The output of the expert elicitation sessions for node 2 is again reported in Table 4. These anomalies are then transformed into probability distributions. When BETEF_CF does not detect unrest at node 1, the probabilistic analysis at node 2 is based on the long-term assessment. In this case, the prior information is given by a uniform distribution (Λ[2]=0.5 and Θ[2]=1, see Figure 1B and Marzocchi et al. 2008), since specific knowledge on the relative frequency of occurrence of magmatic unrest episodes, with respect to other unrest types, is not available. Similarly, the magmatic vs. hydrothermal origin of the unrest episodes since 1981 is still debated (e.g., Bonafede 1991; De Siena 2010, and references therein); for this reason, we have chosen not to consider past data at node 2. The posterior distribution is therefore a Beta distribution with parameters α=β=1.

Node 3: magmatic eruption

In case of unrest with a magmatic origin, at node 3 we consider whether (i) the magma will reach the surface (i.e., it will erupt), or (ii) it will not, in the time interval (t[0], t[0] + τ). If BETEF_CF detects unrest at node 1, the short-term analysis is based on the anomalies recorded for the parameters relative to node 3. The output of the expert elicitation sessions for node 3 is again reported in Table 4. These anomalies are then transformed into probability distributions. When BETEF_CF does not detect unrest at node 1, the probabilistic analysis at node 3 is based on the long-term assessment. In this case, prior information was derived from the worldwide database of unrest at calderas similar to CFc (Newhall and Dzurisin 1988). This database shows a frequency of unrest culminating in an eruption at silicic calderas (with unrest in the past 100 years and repose of more than 100 years, as in the CFc case) of about 1 out of 6. Here, allowing for our ignorance of the nature of unrest (see above for node 2), we estimate that 50% of unrest episodes might be magmatic, and so the prior best guess at node 3 (Λ[3], see Figure 1B and Marzocchi et al. 2008) is set at (1/6)/0.5 = 0.33. For this prior model, we also set the maximum allowed epistemic uncertainty (the equivalent number of data Θ[3]=1; see Figure 1B and Marzocchi et al. 2008). Note that an informative (even if weak) prior model is introduced only at node 3, and not at the previous nodes. This reflects the effective lack of credible models about volcanic unrest episodes. Indeed, future improvements on this issue will allow us to use more informative prior distributions at the different nodes of BET_EF. At node 3, past data are represented by the number of eruptions ('successes') compared to the number of observed magmatic unrest episodes (node 2), which we do not know (see above). Thus, allowing again for an expectancy of 50% of magmatic unrest out of all unrest episodes, the background assessment at node 3 of BETEF_CF accounts for no observed eruptions (y[3]=0) out of n[3]=0.5·y[1]=3.7 supposedly magmatic unrest episodes since 1981. With these parameters, the background probability distribution relative to node 3 (eruption) is completely defined by a Beta distribution with parameters α=0.67 and β=5.03.
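The node-3 numbers quoted above can be reproduced with the same Beta parameterisation assumed in the earlier sketch; again, that parameterisation is an assumption consistent with the reported values, not a statement of the exact BET_EF implementation.

# Sketch reproducing the node-3 background Beta parameters reported in the text.
lam3, theta3 = (1.0 / 6.0) / 0.5, 1.0      # prior best guess ~0.33, equivalent data 1
y3, n3 = 0.0, 0.5 * 7.4                    # no eruptions out of ~3.7 assumed magmatic unrests

a0 = lam3 * (theta3 + 1.0)                 # prior alpha, ~0.67
b0 = (1.0 - lam3) * (theta3 + 1.0)         # prior beta, ~1.33
a3, b3 = a0 + y3, b0 + (n3 - y3)
print(a3, b3)                              # ~0.67, ~5.03, as reported above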
Result 3: Current and retrospective application

The main result of this paper is the set-up of the BETEF_CF model, as reported in Tables 4 and 5. This model, based on the group opinion of experts, is able to analyze the continuous flux of information coming from the CFc monitoring system, estimating the monthly probability of eruption in almost real time through Eq. 1.

Table 5. Background settings of BETEF_CF

If no anomalies are detected, BETEF_CF provides the background monthly probability of eruption at CFc (Figure 1C). As shown above, this background analysis is based on theoretical models and data since 1981, and it accounts only for the long-term ongoing uplift dynamics of CFc. The long-term (background) expected (mean) eruption probability in the following 1 month, updated to 31st December 2009, is 1.6·10^−3, with an 80% confidence interval [4·10^−5, 4·10^−3] defined by the 10th and 90th percentiles of the distribution. This confidence interval reflects the epistemic uncertainty on the expected probability estimate, as it propagates through nodes 1 to 3. Notably, this estimated background monthly probability of eruption is of the same order of magnitude as that of the better-known Mt. Vesuvius, as estimated through an analogous procedure in Marzocchi et al. (2008). This implies that the hazard exposure of Naples due to CFc, even in quiet periods, is higher than for Vesuvius, given that the expected eruption sizes are comparable (Marzocchi et al. 2004; Orsi et al. 2009), but the city center is closer to the eruptive vents of CFc, and more directly downwind (Selva et al. 2010, 2012).

Whenever anomalies are detected, monitoring measures start being informative about the short-term behavior of the system, and the forecasts provided by BETEF_CF account for their fast evolution in near real time. Such a strategy has been shown to provide results in agreement with more classical processes of expert decision-making during crises (e.g., Sandri et al. 2009; Lindsay et al. 2010), which usually involve the set-up of a team receiving real-time data and discussing them collectively to achieve consensus. BETEF_CF can speed the delivery of analysis to decision-makers. In Figure 4 we show a retrospective application of the BETEF_CF code to track the unrest evolution at CFc in the period 1981-2010. At the beginning of this time interval, the monitoring capability was not comparable to the present one; this inhomogeneity poses some constraints on the resolution of the probability variations through time. Nonetheless, this example highlights the main features of the BETEF_CF code applied to a real case. In Figure 4 we also report the eruption probability distribution at three different times; each distribution displays the estimated probability (central value) and the associated epistemic uncertainty (dispersion around the central value) (Marzocchi et al. 2008).

Figure 4. Retrospective application of BETEF_CF from 1981 on. At each time t[0], BETEF_CF is calibrated with the data for t<t[0]. In panel A, we report the average (best estimate) probability of unrest (blue), magmatic unrest (green) and eruption (red) for the following 1 month. In panels B, C and D, at three different times, we report a snapshot of the cumulative distribution (percentiles) of the probability of eruption, highlighting the epistemic uncertainty on the estimated probability.

Spikes in the probability values (main figure) represent unrest episodes, during which monthly probabilities are much greater than the background ones. The major 1982-84 unrest period (Barberi et al. 1984), as well as each of the minor uplift phases that followed, is correctly identified as anomalous. In particular, BETEF_CF shows that starting from mid-1982 the volcano was certainly in an anomalous state (probability of 100% at node 1), the average probability that the unrest was due to active magma movements was about 70%, and the probability of an eruption in a time window of one month was about 20%, with a peak of nearly 40% in the period June-September 1983 (in October the evacuation took place). Such a high value is in agreement with the perception of some volcanologists at the time (Civetta and Gasparini 2012), even if explicit quantifications of probabilities were not available.
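The background values quoted earlier in this section (mean monthly eruption probability of about 1.6·10^−3, with an 80% confidence interval of roughly [4·10^−5, 4·10^−3]) can be approximately recovered by propagating the three background Beta distributions through the event tree by Monte Carlo sampling, as sketched below. Whether the published interval was obtained exactly in this way, i.e., by independent sampling of the three nodes, is an assumption here.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
p1 = rng.beta(8.4, 299.6, n)    # unrest per month (node 1)
p2 = rng.beta(1.0, 1.0, n)      # magmatic unrest given unrest (node 2)
p3 = rng.beta(0.67, 5.03, n)    # eruption given magmatic unrest (node 3)

p_erupt = p1 * p2 * p3
print("mean:", p_erupt.mean())                          # ~1.6e-3
print("10th-90th percentiles:", np.percentile(p_erupt, [10, 90]))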
In late 1984 the eruption probability returned to lower values, around 10%, and the crisis was definitively over at the beginning of the following year. The so-called mini-uplift phases that have punctuated the activity of CFc since the year 2000 are similar to each other in terms of probabilities, with the eruption probability always less than 10%.

BETEF_CF represents a valuable tool that can be used in real time during an episode of caldera unrest. However, it is not intended to replace advisory groups, expert panels, or other means of evaluation that are commonly set up during major crises; rather, it represents an additional powerful tool that can help by focusing discussion and by saving time in exploring the changing eruption parameter-uncertainty space (Lindsay et al. 2010). Some of the characteristics of BETEF_CF make this procedure unique and highly desirable:

i) the estimates from BETEF_CF are quantitative and reproducible, therefore allowing a fully transparent process of scientific evaluation during the crisis;
ii) they are not the product of one single expert or restricted to a limited sub-group of experts, but instead represent a decision distilled from a large community, thus giving more robustness to the forecasts;
iii) the forecasts are unaffected by temporal, political or sociological demands during the crisis, but are objective since they are based on new data and previously quantified consensus views;
iv) the calibration through expert elicitation can be updated with the most recent and robust scientific results. The expert community decides through a blind process if new results should be included in BETEF_CF, and the weight to assign to them, so that scientific robustness is the only driver of new updates;
v) BETEF_CF provides a clear aid to volcano scientists during a crisis, represented by the interpretation of observations and the provision of forecasts, helping distinguish in a clear and unambiguous way the role of volcanologists from that of decision-makers.

Whilst we report here the specific case of CFc, the approach can be generalized to other volcanoes where little or no pre-eruptive data are available.

We report a complete list of the participants in all the elicitation sessions, in strict alphabetical order. In parentheses, we report the elicitation sessions in which each researcher participated. This list can also be found at the elicitations’ website (Selva et al. 2009).

1. Belardinelli M.E. (V), University of Bologna, Italy
2. Berrino G. (I,III), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
3. Bianco F. (I,III,IV,V), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
4. Bonafede M. (I,II,III,V), University of Bologna, Bologna, Italy
5. Bruno P.P. (I), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
6. Caliro S. (III,V), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
7. Chiodini G. (I,II,III,V), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
8. Civetta L. (I,II,III,IV,V), University of Naples “Federico II”, Naples, Italy
9. D’Auria L. (I), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
10. De Siena L. (III,IV), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
11. De Vita S. (II), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
12. Del Pezzo E. (III,IV,V), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Italy
13. Del Gaudio C. (I,II,III), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
14. Di Vito M.A. (I,II,III,IV), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Italy
15. Festa G. (III,IV,V), University of Naples “Federico II”, Naples, Italy
16. Giudicepietro F. (III,IV,V), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
17. Longo A. (V), Sezione di Pisa, Istituto Nazionale di Geofisica e Vulcanologia, Pisa, Italy
18. Macedonio G. (II,III,IV), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
19. Martini M. (II), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
20. Marzocchi W. (I,II,III,IV,V), sezione Roma 1, Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy
21. Montagna C.P. (III,IV,V), sezione di Pisa, Istituto Nazionale di Geofisica e Vulcanologia, Pisa, Italy
22. Moretti R. (II,III,IV,V), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy, and Second University of Naples, Naples, Italy
23. Neri A. (I), sezione di Pisa, Istituto Nazionale di Geofisica e Vulcanologia, Pisa, Italy
24. Orsi G. (I,II,III,IV), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
25. Papale P. (I,II,III,IV,V), sezione di Pisa, Istituto Nazionale di Geofisica e Vulcanologia, Pisa, Italy
26. Ricciardi G.P. (I,II,III), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
27. Ricco C. (I), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
28. Rinaldi A.P. (IV), sezione di Bologna, Istituto Nazionale di Geofisica e Vulcanologia, Bologna, Italy
29. Russo G. (I,III), Osservatorio Vesuviano, Istituto Nazionale di Geofisica e Vulcanologia, Naples, Italy
30. Saccorotti G. (II,III,IV,V), sezione di Pisa, Istituto Nazionale di Geofisica e Vulcanologia, Pisa, Italy
31. Sandri L. (II,III,IV,V), sezione di Bologna, Istituto Nazionale di Geofisica e Vulcanologia, Bologna, Italy
32. Sbrana A. (I), University of Pisa, Pisa, Italy
33. Scandone R. (III,IV,V), University of Rome “Roma 3”, Rome, Italy
34. Scarlato P. (IV,V), sezione Roma 1, Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy
35. Scarpa R. (III,IV), University of Salerno, Salerno, Italy
36. Todesco M. (I,III,IV), sezione di Bologna, Istituto Nazionale di Geofisica e Vulcanologia, Bologna, Italy

CFc: Campi Flegrei caldera; BET: Bayesian Event Tree; BET_EF: Bayesian Event Tree for Eruption Forecasting; BETEF_CF: Bayesian Event Tree for Eruption Forecasting, implemented for Campi Flegrei, Italy; VT: volcano-tectonic events; LP: long-period event; VLP: very-long-period event; ULP: ultra-long-period event; CLVD: Compensated-Linear-Vector-Dipole event.

Authors’ contributions

JS coordinated the writing of the paper, prepared all related materials, implemented the software codes and performed the analysis. WM and JS conceived and co-led the elicitation experiment. PP coordinated the elicitation meetings with WM. LS participated in code development and data analysis. All authors read and approved the final manuscript.
The work described in this paper has been carried out in the framework of the INGV-DPC projects Progetto V3: Ricerche sui vulcani attivi, precursori, scenari, pericolosità e rischio (2004-2006), and Progetto V1: UNREST - Realizzazione di un metodo integrato per la definizione delle fasi di unrest ai Campi Flegrei (2007-2009), funded by the Italian Civil Protection ’Dipartimento della Protezione Civile’ (INGV-DPC 2005, 2007). The analysis of the background activity and the publication have been also supported by the project Quantificazione del Multi-Rischio con approccio Bayesiano: un caso studio per i rischi naturali della città di Napoli (2010-2013), funded by the Italian Ministry of Education, Universities and Research (Ministero dell’Istruzione, dell’Università e della Ricerca).
{"url":"http://www.appliedvolc.com/content/1/1/5","timestamp":"2014-04-20T05:42:49Z","content_type":null,"content_length":"130154","record_id":"<urn:uuid:f88b0b62-2fe6-49de-9204-7d90a8783d11>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Counterintuitive Data Science Methods May Yield Keener Analytical Insights - Smarter Analytics Blog
James Kobielus | 2013-08-02

Mathematics is not a hermetic metaphysical pursuit, but rather a field where researchers craft and tweak algorithmic approaches that are suited to various problem domains. The best mathematicians know it's a dead-end to develop new approaches with any or all of these limitations: they have no real-world applications, consume an inordinate amount of computing resources, and/or are so complex that almost no one else understands or knows how to apply them. The best statistical-analysis algorithms provide tools for collective discovery of quantitative relationships--preferably, where science comes into the picture, of an empirical nature.

However, sometimes the traditional approaches get in the way of data-driven insight extraction. The underlying algorithms can just as easily obscure key quantitative relationships as reveal them. New branches of the mathematical arts often emerge to help scientists see patterns that are otherwise dark. Think of Newton, modern physics, and the pivotal impact of the calculus. Think of Mandelbrot, modern chaos theory, and fractal dimensionality.

As more scientists incorporate big data into their working methods, they're going to re-assess whether the mathematical and statistical algorithms in their data-science toolkits are as effective at peta-scale as in "small data" territory. One key criterion is whether machine-learning algorithms can continue to calculate "good enough" predictions from data at extreme scales. One key way to define "good enough" is "efficiently executable with available big-data platforms in an acceptable timeframe while delivering actionable results."

In that regard, I recently came across an excellent article presenting a new mathematical approach for tuning otherwise "inferior" machine-learning algorithms for big data. Within the context of the article, the author, Brian Dalessandro, essentially defines "inferior" as any algorithm that degrades the quality of training-set data that is used to tune the statistical model. What was most noteworthy about the discussion was the counterintuitive thrust of the approach: an algorithm that is inferior on one attribute (e.g., data quality) can also be superior on others (e.g., predictive accuracy, efficient linear scaling, cost-effectiveness on big-data platforms).

Dalessandro outlines an approach that relies on "stochastic gradient descent" (SGD) and feature-hashing algorithms to reduce the "dimensionality" (i.e., the number of features/variables) being modeled. From a statistical analysis standpoint, the dimensionality-reduction approach increases one type of modeling error ("optimization error") in order to reduce the other types ("estimation error" and "approximation error") that contribute to modeling accuracy. Dalessandro makes it clear why this algorithmic approach is suited to big data: "By choosing SGD, one introduces more optimization error into the model, but using more data reduces both estimation and approximation errors.
If the data is big enough, the trade-off is favorable to the modeler." Essentially, it's favorable to the modeler in analytical problem domains, such as natural language processing, in which the approach's optimization errors are not showstoppers. He also mentions other benefits, such as enabling more complex feature/variable sets to be modeled within constrained memory resources and providing a more privacy-friendly way to store and use personal data. But he also notes a trade-off: the approach introduces more chaos into the modeling results.

Though highly arcane, this is the soul of practical data science: fitting the mathematical, statistical, and algorithmic approaches to the problem at hand and adapting them to the big-data resources at our disposal. Like any engineering discipline, this involves making trade-offs among algorithmic approaches. It's applied math on the proverbial steroids.

Connect with me on Twitter @jameskobielus
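As a concrete, hedged illustration of the approach described in the article (this sketch is not taken from Dalessandro's post, and the texts, labels and parameter values are invented placeholders), feature hashing and SGD can be combined in a few lines with scikit-learn:

# Illustrative sketch only: hash raw text features into a fixed-size space, then fit a
# linear model with stochastic gradient descent. Hash collisions add noise (more
# "optimization error"), but memory stays bounded regardless of vocabulary size.

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

texts = ["cheap meds now", "meeting at noon", "win a prize", "lunch tomorrow?"]
labels = [1, 0, 1, 0]                               # toy spam / not-spam labels

vectorizer = HashingVectorizer(n_features=2**18)    # feature hashing bounds dimensionality
X = vectorizer.transform(texts)

clf = SGDClassifier(max_iter=5)                     # SGD scales linearly with the data
clf.fit(X, labels)
print(clf.predict(vectorizer.transform(["free prize meds"])))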
{"url":"https://www-304.ibm.com/connections/blogs/smarteranalytics/entry/counterintuitive_data_science_methods_may_yield_keener_analytical_insights?lang=en_us","timestamp":"2014-04-19T02:52:59Z","content_type":null,"content_length":"121751","record_id":"<urn:uuid:ad2ceb53-2c3c-490a-9319-fcd6d0c5c3b8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/katlin95/medals","timestamp":"2014-04-18T18:42:39Z","content_type":null,"content_length":"97884","record_id":"<urn:uuid:f09cf3a6-c99d-49a9-969c-6b6502f2a0b6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Introductory textbook for Linear Recurrence Sequences

What is a good introductory text for linear recurrence sequences? What are the necessary prerequisites for it? (My background is in Euclidean Fourier analysis.) After browsing through several books, my perception is that one is supposed to know a fair bit of algebraic number theory, algebraic geometry, Diophantine equations, etc. (I am not sure if the subjects I mentioned are the only or even the appropriate areas, so please correct me if I am wrong.) Is there a book which builds/gives the necessary material as it progresses? I will appreciate any suggestion which you may think is going to be helpful. Thank you.

Tags: reference-request, exponential-polynomials

What do you want to know about linear recurrences? Most of their basic properties are summarized in the relevant chapters of Stanley's Enumerative Combinatorics. – Qiaochu Yuan Oct 25 '10 at 14:27
@Qiaochu Yuan Thanks for the reference. I am interested in understanding the results related to zero multiplicity. – Vagabond Oct 25 '10 at 14:47
The book "Finite Fields", by Rudolf Lidl and Harald Niederreiter, contains a nice chapter on linear recurring sequences. – Amy Glen Oct 25 '10 at 14:58
@Amy Glen Thank you. – Vagabond Oct 25 '10 at 15:30
I searched Google and got Allen's thesis, Multiplicities of Linear Recurrence Sequences: math.ucla.edu/~pballen/PAllen_MMath_Thesis.pdf – SandeepJ Oct 25 '10 at 17:34
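For readers unfamiliar with the objects being discussed, here is a quick illustration (not from the thread) of a linear recurrence sequence and of its zero multiplicity, i.e., the number of vanishing terms; the recurrence used is an invented toy example.

def linear_recurrence(coeffs, initial, n_terms):
    """a_n = coeffs[0]*a_{n-1} + coeffs[1]*a_{n-2} + ...; returns the first n_terms."""
    seq = list(initial)
    while len(seq) < n_terms:
        seq.append(sum(c * seq[-1 - i] for i, c in enumerate(coeffs)))
    return seq

# a_n = a_{n-2} with a_0 = 0, a_1 = 1 gives 0, 1, 0, 1, ...: infinitely many zero terms.
seq = linear_recurrence([0, 1], [0, 1], 20)
print(seq)
print("zeros among the first 20 terms:", sum(1 for a in seq if a == 0))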
{"url":"http://mathoverflow.net/questions/43522/introductory-text-book-for-linear-recurrence-sequences","timestamp":"2014-04-20T01:38:13Z","content_type":null,"content_length":"52258","record_id":"<urn:uuid:044ea9d1-1ba9-4d4d-a7ad-af9ce37b7869>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
$K_2$ and L-functions of elliptic curves. Computer calculations. (English) Zbl 0629.14002
Applications of algebraic K-theory to algebraic geometry and number theory, Proc. AMS-IMS-SIAM Joint Summer Res. Conf., Boulder/Colo. 1983, Part I, Contemp. Math. 55, 79-88 (1986).

[For the entire collection see Zbl 0588.00014.]
The paper concerns some computer calculations done by Grayson in the fall of 1981 to compare the value of the regulator on $K_2$ of an elliptic curve with the value of the L-function at $s=2$. All curves on the Swinnerton-Dyer table [see "Modular functions of one variable. IV", Proc. Internat. Summer School 1972, Univ. Antwerp, RUCA, Lect. Notes Math. 476 (1975; Zbl 0315.14014)] with Weil conductor $\le 180$, negative discriminant, and a rational torsion point of order $\ge 5$ were considered. These computations are explained by a modified form of a conjecture advanced by Bloch and Beilinson. They provide evidence for a lot of exotic relations between special values of Eisenstein-Kronecker-Lerch series and values of Hasse-Weil L-functions for elliptic curves without complex multiplication.

1991 Mathematics Subject Classification:
14-04 Machine computation, programs (algebraic geometry)
14C35 Applications of methods of algebraic $K$-theory
14G10 Zeta-functions and related questions
14H45 Special curves and curves of low genus
14H52 Elliptic curves
{"url":"http://zbmath.org/?q=an:629.14002","timestamp":"2014-04-17T09:43:50Z","content_type":null,"content_length":"22159","record_id":"<urn:uuid:d8ad4054-c246-405b-a00e-def00968e205>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Lindale, GA ACT Tutor Find a Lindale, GA ACT Tutor As a person that went to two different high schools, I had trouble fitting in. I solved my own problem by going to study hall and help others in chemistry, which I did for at least a year. The school gave me an award for my efforts I did that year. 17 Subjects: including ACT Math, chemistry, calculus, geometry ...My strength lies not just in knowledge of the subjects but in the process of how to integrate study habits with the use of internet technologies to make any challenging subject manageable. I understand how difficult it can be find the right tutor and to trust that you will get value for the time, effort and money you invest. I have helped many friends and family with difficult 29 Subjects: including ACT Math, reading, writing, geometry ...These students were all diagnosed with one of the following: Asperger's, autism, bi-polar, split personality, ADD, ADHD as well other unique disorders. My students ranged from 4th through 12th grade. Most of my students read at least 3-4 below their grade level. 47 Subjects: including ACT Math, chemistry, English, physics I have 33 years of Mathematics teaching experience. During my career, I tutored any students in the school who wanted or needed help with their math class. I usually tutored before and after school, but I've even tutored during my lunch break and planning times when transportation was an issue. 13 Subjects: including ACT Math, calculus, algebra 1, algebra 2 ...From ages 12-17 I was selected for Georgia All State Band all 6 years for percussion. I was section leader of my high school drumline in grades 10, 11, and 12. I began teaching percussion at age 16 when I was in 11th grade. 19 Subjects: including ACT Math, chemistry, physics, calculus
{"url":"http://www.purplemath.com/lindale_ga_act_tutors.php","timestamp":"2014-04-20T06:40:14Z","content_type":null,"content_length":"23621","record_id":"<urn:uuid:47d46d0c-c825-497d-89e2-22a392edac51>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Volume II: The Universal Law. The General Theory of Physics and Cosmology (Concise Version)
by Georgi Stankov, Copyright 1999, 290 pages

Short Summary

The concise version of volume II on physics and cosmology is a further elaboration of the integrated physical and mathematical axiomatics of the Universal Law into a unified theory of physics and cosmology, and it can be read independently of volume I on physics and mathematics. It contains the most important derivations of the Universal Law, which succinctly prove that there is only one Universal Law of Nature. However, we recommend that the reader begin with volume I and only then proceed to volume II.

This volume contains the basic achievements of the new General Theory of Physics and Cosmology of the Universal Law. It covers the physical theory as presented in standard textbooks on physics and cosmology for students worldwide, and it goes further than that by presenting new derivations of fundamental natural constants and particular physical laws from the Universal Law which are not yet known to physicists.

The book has adopted the common didactic pattern of presenting the physical theory as it can be found in most standard textbooks on this subject. It begins with a thorough introduction to the basic statements of the new physical and mathematical axiomatics of the Universal Law in the form of a classical scientific publication, which, nonetheless, represents a completely new theoretical approach in this discipline (see the scientific publication on the new axiomatics below). In volume I, the new axiomatics was derived ontologically and epistemologically from the primary term of human consciousness by presenting abundant empirical evidence from various fields of science and human cognition. Its validity was presented as the crowning result of an extensive empirical and introspective research effort. In the concise version of volume II, the new physical and mathematical axiomatics is placed at the beginning, so that all subsequent results and mathematical derivations in physics and cosmology are grounded in this irrefutable system of logical human thinking. This is a completely new didactic and ontological approach in science and physics. It is precisely this new approach that makes the comprehension of the physical material much easier for the reader, as it automatically eliminates all the semantic and gnostic confusion that is currently observed in conventional physics and hinders an understanding of the Nature of All-That-Is.

After the new physical and mathematical axiomatics is thoroughly introduced, the book proceeds with the mathematical applications of the Universal Law in classical mechanics, wave theory, thermodynamics, electromagnetism and quantum mechanics, and ends with the theory of relativity and cosmology, and their explanation within the new physical axiomatics. Each chapter contains numerous exercises concerning practical applications of the Universal Law that should be solved by the students. In some cases possible solutions are suggested. There are many chapters in the book that contain new derivations of natural constants and physical laws, as well as further applications of the Universal Law that are presented for the first time in the history of this discipline and go beyond conventional physical knowledge. These chapters are specially designated to make the reader aware of their novelty.
This includes many new fundamental physical constants and laws, such as Stankov’s law on photon thermodynamics, which builds the theoretical background for the future use of free photon energy in the 5th dimension. This is one of the major achievements of the new Physical Theory of the Universal Law as presented in this volume.

The new physical theory in volume II (concise version) confirms all the mathematical results obtained in physics so far. At the same time it disproves some fundamental concepts that have not really been challenged by any scientist until now. In the first place, the second law of thermodynamics, also known as the law of entropy, is refuted, while its mathematical derivations are obtained from the Universal Law and explained by its theory. This theoretical achievement eliminates the fundamental antinomy in present-day science – the existence of highly organized organic matter with evolving consciousness versus dissipating matter as stated by the law of entropy. The refutation of this law is of great importance in the End Times of mankind and this planet, as this law rejects the possibility of Ascension of human beings to the 5th and higher dimensions. In fact, it rejects the existence of such levels of highly organized energy.

This volume can therefore be used as a complete textbook on the new theory of physics and cosmology by all physicists, students of physics and other natural sciences with a very good knowledge of conventional physics and mathematics. In addition, the reader must be aware of the numerous blunders, inconsistencies and paradoxes in present-day physics from an epistemological and methodological point of view in order to fully grasp and appreciate the new presentation of this discipline in volume II. This appears to be a great challenge to most specialists, as long as they stick to their old, wrong physical dogmas and show no inclination to enter new ways of perceiving the Nature of All-That-Is.

The comprehension of Volume II in its concise and full version is a necessary prerequisite for an understanding of the theory of the New Gnosis of the Universal Law as it has been developed by the author in five philosophical books published on this website. The new physical theory in volume II is thus the foundation of the new transcendental physics that will be available to all human beings after their Ascension in the 5th dimension by the end of 2012. Therefore, it is accurate to say that the physical theory in this volume is the new Science of Ascension.

1.1 Mathematical methods of presenting space-time
1.2 Newton’s laws and their applications
1.3 Work and energy in mechanics
1.4 Space-time of rotations
1.5 Kepler’s laws
1.6 Newton’s law of gravity is a derivation of the universal equation (ND)
1.7 The ontology of Newton’s law from consciousness – A paradigm of how physical laws are introduced in physics (ND)
1.8 Mass and mind
1.9 Mass, matter, and photons (ND)
1.10 Mechanics of solids and fluids
2. WAVE THEORY
2.1 Oscillations
2.2 Mechanical waves
2.3 Standing waves and quantum mechanics
2.4 Wave equation
2.5 The action potential as a wave
2.6 The Doppler effect
3.1 What is temperature?
3.2 The ideal-gas laws
3.3 Boltzmann’s law and the kinetic theory of gases
3.4 Heat and the first law of thermodynamics (ND)
3.5 Laws of radiation (ND)
3.6 Entropy and the second law of thermodynamics (ND)
3.7 Stankov’s law of photon thermodynamics (ND)
4.1 Etymology of concepts
4.2 Basic quantities and units of electricity (ND)
- The charge of the basic photon q[p] is the elementary area (K[s]) of space-time
- The fundamental unit of charge e is the geometric area of the electron
4.3 What are permittivity and permeability of free space (ND)?
4.4 Coulomb’s law and the electric field
4.5 Gauss’s law and its applications
4.6 Nabla- and Laplace-operators
4.7 Electric potential
4.8 Capacitance, dielectrics, and electrostatic energy
4.9 Electric current and superconductivity (ND)
- The theory of superconductivity in the light of the Universal Law
4.10 The magnetic field (ND)
4.11 The quantum Hall effect (ND)
4.12 Precursors of Maxwell’s equations – electromagnetism of matter
4.13 Maxwell’s equations are derivations of the Universal Law
4.14 The wave equation is the differential form of the universal equation
5.1 Bohr model of energy quantization anticipates the inhomogeneity of space-time (ND)
5.2 Schrödinger wave equation of quantum mechanics is an application of the universal equation (ND)
5.3 Heisenberg uncertainty principle is an intuitive notion of the Universal Law
5.4 Selected solutions of quantum mechanics in the light of the Universal Law – How to calculate the mass of neutrinos?
6. SPACE-TIME CONCEPT OF PHYSICS
6.1 Classical mechanics
6.2 The concept of relativity in electromagnetism
6.3 The space-time concept of the special and general theory of relativity
6.4 Rest mass is a synonym for the certain event, Relativistic mass is a synonym for Kolmogoroff’s probability set
7. COSMOLOGY
7.1 Introduction
7.2 Hubble’s law is an application of the Universal Law for the visible universe
7.3 From Newton’s law to the visible universe (ND)
7.4 The cosmological outlook of traditional physics in the light of the Universal Law
7.5 The role of the CBR-constant in cosmology
7.6 Pitfalls in the conventional interpretation of redshifts
7.7 What do „Planck’s parameters of big bang“ really mean (ND)?
7.8 Adiabatic expansion of the universe
7.9 Derivation rule of absolute constants

In 1995, I discovered the Universal Law of nature (the Law); I showed that all physical laws and their applications can be derived from this one law within mathematical formalism, and explained it epistemologically. This has led to the development of a unified theory of physics and cosmology, which is an axiomatization (axiomatics) of physics and mathematics. Thus physics is applied mathematics. The major results of this theory are: all terms in physics can be axiomatically derived from the primary term – energy = space-time (primary axiom). Energy (space-time) is closed, infinite, continuous, inhomogeneous (discrete), and constant; it is in a state of permanent energy exchange. The continuum (the set of all numbers) is equivalent to the primary term. The new axiomatics can be empirically verified. Thus the validity of mathematics as challenged by Gödel’s theorem can be proven in the real world (proof of existence). This eliminates the continuum hypothesis and the foundation crisis of mathematics. The Universal Law describes space-time in terms of mathematics.
The universal equation is E = E[A]·f, where E is energy exchange, E[A] is a specific constant amount (quantum) of exchanged energy, called „action potential“, and f = E/E[A] is called „absolute time“. It is a dimensionless quotient. The Universal Law is a „law of energy“. Energy (space-time) is the only real thing. All physical quantities, such as mass, charge, force, and momentum, are abstract subsets of space-time that are defined within mathematics (objects of thought). They are dimensionless numbers that belong to the continuum. Since they contain space-time as an element (U-subsets), they can be derived in an axiomatic manner from the primary term. For instance, mass is a synonym for an energy (space-time) relationship and charge is a synonym for area, that is, the SI unit 1 coulomb is equivalent to 1 m^2. This leads to the unification of physics and cosmology on the basis of mathematical formalism. This new physical and mathematical axiomatics also integrates gravitation with the other forces. It will be outlined in the Mathematical Appendix that follows this Ebook.
{"url":"http://www.stankovuniversallaw.com/volume-ii-the-universal-law-the-general-theory-of-physics-and-cosmology-concise-version/","timestamp":"2014-04-19T22:20:44Z","content_type":null,"content_length":"51563","record_id":"<urn:uuid:80f98ed8-7c1b-48c2-8261-263de90611e6>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
From: Tamas Nepusz
Subject: Re: [igraph] Generating a clustered graph
Date: Wed, 10 Aug 2011 13:10:50 +0200
{"url":"http://lists.gnu.org/archive/html/igraph-help/2011-08/msg00028.html","timestamp":"2014-04-16T10:43:46Z","content_type":null,"content_length":"7001","record_id":"<urn:uuid:55244a74-2fc5-44a2-9ec1-3129f28f6286>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
DOCUMENTA MATHEMATICA, Extra Vol. ICM III (1998), 523-532 DOCUMENTA MATHEMATICA , Extra Volume ICM III (1998), 523-532 Jan-Olov Strömberg Title: Computation with Wavelets in Higher Dimensions In dimension $d$, a lattice grid of size $N$ has $N^d$ points. The representation of a function by, for instance, splines or the so-called non-standard wavelets with error $\epsilon$ would require $O (\epsilon^{-ad})$ lattice point values (resp. wavelet coefficients), for some positive $a$ depending on the spline order (resp. the properties of the wavelet). Unless $d$ is very small, we easily will get a data set that is larger than a computer in practice can handle, even for very moderate choices of $N$ or $\epsilon$. I will discuss how to organize the wavelets so that functions can be represented with $O((\log (1/\epsilon))^{a(d-1)}\epsilon^{-a})$ coefficients. Using wavelet packets, the number of coefficients may be further reduced. 1991 Mathematics Subject Classification: Primary Secondary Keywords and Phrases: Full text: dvi.gz 17 k, dvi 38 k, ps.gz 74 k. Home Page of DOCUMENTA MATHEMATICA
{"url":"http://www.emis.de/journals/DMJDMV/xvol-icm/15/Stromberg.MAN.html","timestamp":"2014-04-19T22:29:15Z","content_type":null,"content_length":"1821","record_id":"<urn:uuid:4ad2de43-09f3-42ec-8d9a-bc5aaf142ee6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Rate and Yield Calculator |- MyCalculators.com ~ This calculator will calculate the rate or annualized compounded rate (yield) based on the compounding period selected. If you don't understand compounding, please look at my FAQ page below to get more information. Privacy Policy © 1995- MyCalculators.com This calculator will calculate the rate or annualized compounded rate (yield) based on the compounding period selected. If you don't understand compounding, please look at my FAQ page below to get more information.
{"url":"http://www.mycalculators.com/ca/yldcalculatorm.html","timestamp":"2014-04-17T19:37:24Z","content_type":null,"content_length":"9182","record_id":"<urn:uuid:cc6c6bc1-722d-4562-898f-29cd524a28a3>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] [SciPy-User] Simple pattern recognition Zachary Pincus zachary.pincus@yale.... Mon Sep 21 12:57:53 CDT 2009 I believe that pretty generic connected-component finding is already available with scipy.ndimage.label, as David suggested at the beginning of the thread... This function takes a binary array (e.g. zeros where the background is, non-zero where foreground is) and outputs an array where each connected component of non-background pixels has a unique non-zero "label" value. ndimage.find_objects will then give slices (e.g. bounding boxes) for each labeled object (or a subset of them as specified). There are also a ton of statistics you can calculate based on the labeled objects -- look at the entire ndimage.measurements namespace. On Sep 21, 2009, at 1:45 PM, Gökhan Sever wrote: > I asked this question at http://stackoverflow.com/questions/1449139/simple-object-recognition > and get lots of nice feedback, and finally I have managed to > implement what I wanted. > What I was looking for is named "connected component labelling or > analysis" for my "connected component extraction" > I have put the code (lab2.py) and the image (particles.png) under: > http://code.google.com/p/ccnworks/source/browse/#svn/trunk/AtSc450/ > labs > What do you think of improving that code and adding into scipy's > ndimage library (like connected_components()) ? > Comments and suggestions are welcome :) > On Wed, Sep 16, 2009 at 7:22 PM, Gökhan Sever > <gokhansever@gmail.com> wrote: > Hello all, > I want to be able to count predefined simple rectangle shapes on an > image as shown like in this one: http://img7.imageshack.us/img7/2327/particles.png > Which is in my case to count all the blue pixels (they are ice-snow > flake shadows in reality) in one of the column. > What is the way to automate this task, which library or technique > should I study to tackle it. > Thanks. > -- > Gökhan > -- > Gökhan > _______________________________________________ > SciPy-User mailing list > SciPy-User@scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-September/045401.html","timestamp":"2014-04-20T11:38:35Z","content_type":null,"content_length":"5624","record_id":"<urn:uuid:105139f1-2926-48e0-b8b8-32ee6ad67d6e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Equations and Inequalities in One Variable Equations and Inequalities in One Variable 2.1 Linear Equations: The Addition and Multiplication Properties of Equality ... Carla is making a planter out of an empty can for her mother's birthday. ... – PowerPoint PPT presentation Number of Views:419 Avg rating:3.0/5.0 Slides: 89 Added by: Anonymous more less Transcript and Presenter's Notes
{"url":"http://www.powershow.com/view/12c934-NTE3M/Equations_and_Inequalities_in_One_Variable_powerpoint_ppt_presentation","timestamp":"2014-04-16T19:44:00Z","content_type":null,"content_length":"137406","record_id":"<urn:uuid:1450f3ad-b4cf-4f09-9dea-5dc9a65799d0>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
South Waltham, MA Algebra 2 Tutor Find a South Waltham, MA Algebra 2 Tutor ...I have tutored math to elementary school students. I have a good sense for the curriculum, and I have a good sense for what concepts are easy and what concepts are difficult for some students. I studied Arabic intensively and know the language well. 47 Subjects: including algebra 2, chemistry, reading, calculus ...I will travel throughout the area to meet in your home, library, or wherever is comfortable for you.Materials Physics Research Associate, Harvard, current Geophysics postdoctoral fellow, MIT, 2010-2012 Physics PhD, Brandeis University, 2010 -Includes experience teaching and lecturing Physics... 16 Subjects: including algebra 2, calculus, physics, geometry I am a chemistry PhD candidate in my fourth year at Boston College graduate school. I conduct exciting, cutting-edge research in a materials chemistry laboratory. Since entering graduate school, I have been a general chemistry lab teaching assistant, physical chemistry II (quantum) teaching assistant, and advanced physical chemistry (graduate level) teaching assistant. 10 Subjects: including algebra 2, chemistry, calculus, algebra 1 ...I also worked at Framingham State University, in their CASA department, which provides walk-in tutoring for FSU students. I did this from 1998-2000. At CASA I tutored algebra through calculus.I am a homeschooling mom who has currently been homeschooling for 8 years. 25 Subjects: including algebra 2, reading, English, ESL/ESOL ...I also have an experience with private tutoring of this subject, preparing students to improve their knowledge and prepare to the midterm and final exams. Teaching and coaching for years, I have developed some strategies and methodologies to help students to understand the subject and grasp the materials efficiently. I believe that WyzAnt students will benefit from working with me. 23 Subjects: including algebra 2, chemistry, English, calculus Related South Waltham, MA Tutors South Waltham, MA Accounting Tutors South Waltham, MA ACT Tutors South Waltham, MA Algebra Tutors South Waltham, MA Algebra 2 Tutors South Waltham, MA Calculus Tutors South Waltham, MA Geometry Tutors South Waltham, MA Math Tutors South Waltham, MA Prealgebra Tutors South Waltham, MA Precalculus Tutors South Waltham, MA SAT Tutors South Waltham, MA SAT Math Tutors South Waltham, MA Science Tutors South Waltham, MA Statistics Tutors South Waltham, MA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Auburndale, MA algebra 2 Tutors Cherry Brook, MA algebra 2 Tutors Cochituate, MA algebra 2 Tutors East Somerville, MA algebra 2 Tutors East Watertown, MA algebra 2 Tutors Hastings, MA algebra 2 Tutors Kendal Green, MA algebra 2 Tutors Kenmore, MA algebra 2 Tutors North Natick, MA algebra 2 Tutors Reservoir, MS algebra 2 Tutors Stony Brook, MA algebra 2 Tutors Waltham, MA algebra 2 Tutors West Newton, MA algebra 2 Tutors West Somerville, MA algebra 2 Tutors Winter Hill, MA algebra 2 Tutors
{"url":"http://www.purplemath.com/South_Waltham_MA_Algebra_2_tutors.php","timestamp":"2014-04-19T15:10:56Z","content_type":null,"content_length":"24634","record_id":"<urn:uuid:e156d15a-fd78-4e17-837e-c702a2e2eea8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Riddle Thread This is a discussion on Riddle Thread within the A Brief History of Cprogramming.com forums, part of the Community Boards category; Foolish.... Riddle #6: natural languages Last edited by CornedBee; 03-28-2009 at 11:51 AM. A class that doesn't overload all operators just isn't finished yet. -- SmugCeePlusPlusWeenie A year spent in artificial intelligence is enough to make one believe in God. -- Alan J. Perlis Riddle #6: natural languages NaA (not an adjective). Last edited by CornedBee; 03-28-2009 at 11:51 AM. Riddle #6: natural languages "Everything I say is a lie." "This statement is wrong." I feel that this falls into the same category of oxymorons. If barish were barish, it'd be fooish. If it'd be fooish, it wouldn't be fooish. Last edited by CornedBee; 03-28-2009 at 11:52 AM. All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law Riddle #6: natural languages Last edited by CornedBee; 03-28-2009 at 11:52 AM. I might be wrong. Thank you, anon. You sure know how to recognize different types of trees from quite a long way away. Quoted more than 1000 times (I hope). Riddle #4: if-statement considered harmful Maybe. This used to be true for CS students at my university (that's why I knew about it). Now, the corresponding lecture isn't mandatory anymore. It seems to me that the focus has shifted from logic towards algorithms (which is good for me, because these two are the only topics that I appreciate in CS). BTW, my fellow students in theoretical philosophy tended to be much better in boolean logic than most CS students. Last edited by CornedBee; 03-28-2009 at 11:53 AM. All things begin as source code. Source code begins with an empty file. -- Tao Te Chip Riddle #6: natural languages If barish were barish, it'd be fooish. If it'd be fooish, it wouldn't be fooish. For those who don't see it right away, here's the full proof: Suppose that "barish" is barish. Then it denotes a property that it has itself, hence it is fooish. Contradiction. Suppose that "barish" is fooish. Then by definition of "fooish", it must be barish. Contradiction. Hence "barish" is neither barish nor fooish. Grelling–Nelson paradox Or more correctly, the Grelling-Nelson antinomy. Back in 1908, they hadn't heard of "foo" and "bar" yet, so instead they used autological and heterological respectively. In 1884, Gottlob Frege published "The Foundations of Arithmetic", where he tries to deduce mathematics from logic (Frege is way ahead of his time: a few years earlier, Leopold Kronecker said "God made the integers, all the rest is the work of man"). In a letter to Frege, Bertrand Russell points out a fundamental flaw in Frege's design, which later came to be known as "Russell's antinomy". This letter is fun to read, but unfortunately I only have access to the original version, which is in German. Anyway, the Grelling-Nelson antinomy is a reformulation of Russell's antinomy which is a bit easier to understand (the original version isn't too hard either). In 1931, Kurt Gödel used the same technique in his proof of the First Incompleteness Theorem which states that "any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete". Read this as "math is either inconsistent or incomplete or both, and so are any of its reformulations". 
I highly appreciate the proof and I consider it to be the most important discovery of the 20th century, so I will summarize it below: 1) in a given theory T, assign a unique number to each theorem, e.g. theorem 1, theorem 2, theorem 3, ..., n. 2) add a new theorem n+1 which claims that "theorem X is not provable". 3) apply this theorem to itself, which yields a claim similar to "This sentence is not provable". This claim is either true or false. If it is true, than the theory is incomplete (it lacks a proof). If it is false, then there exists a proof, hence the theory is inconsistent (it has a proof for something that says that there is no proof). In today's mathematics, there are a lot of claims for which no proof can exist. The first one discovered (40 years after Gödel) is the continuum hypothesis which claims that "there is no set whose cardinality is strictly between that of the integers and that of the real numbers". Being unprovable means that you cannot find such a set, but you also can't show that it doesn't exist. Another funny issue that suffers from the same problem: one can prove that it's possible to split a ball from the 3-dimensional space of the reals into six pieces such that out of these, you can construct two balls which both have the same size and volume as the original ball. But one can also prove that it's not possible to find a concrete decomposition such that it works. I'm planning to write an article about the whole story to improve my English skills. Is this stuff somehow appealing to non-mathematicians? Last edited by CornedBee; 03-28-2009 at 11:52 AM. All things begin as source code. Source code begins with an empty file. -- Tao Te Chip Riddle #7: two smallest elements The natural numbers are usually ordered in the following way: 1 < 2 < 3 < 4 < ... There is exactly one element (the first) which has no predecessor. Note that we are free to invent our own order for the natural numbers, e.g. 2 < 1 < 3 < 4 < ... Can you find an order for the natural numbers such that there are two elements that have no predecessor? All things begin as source code. Source code begins with an empty file. -- Tao Te Chip Riddle #7: two smallest elements Does the order have to be total and strict? If it has to be, then there cannot be more than one. Assume there were e1 and e2 that are both "smallest". By the rules of total, strict ordering, either e1 < e2 or e2 < e1 must be true, otherwise e1 = e2. Last edited by CornedBee; 03-28-2009 at 11:53 AM. All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law Yes, the order has to be total and strict. Note that an element without a specific predecessor is not necessarily a smallest element (although a smallest element certainly doesn't have a Hint: can you find an order such that there is no smallest element? Last edited by Snafuist; 03-28-2009 at 05:42 AM. All things begin as source code. Source code begins with an empty file. -- Tao Te Chip Riddle #7: two smallest elements There's ... < 3 < 2 < 1, where no element has no predecessor, but exactly one has no successor. But I'm not sure if that could really be called a distinct ordering. Predecessor and successor are just a matter of perspective. An ordering that really has no predecessor would be ... < 5 < 3 < 1 < 2 < 4 < ..., i.e. mirror the odd (or even) numbers. int oddneg(unsigned i) { return isodd(i) ? 
-(int)i : (int)i; } bool operator <(unsigned l, unsigned r) { oddneg(l) < oddneg(r); } Last edited by CornedBee; 03-28-2009 at 11:53 AM. All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law Riddle #7: two smallest elements As there are infinitely many natural numbers, the two orderings ... < 2 < 1 and 1 < 2 < ... are distinct because they have different properties (there's a clear difference between successor and predecessor). If the natural numbers were finite, these two orderings would be the same (apart from the labeling of the symbols, which is irrelevant). An ordering that really has no predecessor would be ... < 5 < 3 < 1 < 2 < 4 < ..., i.e. mirror the odd (or even) numbers. That's exactly what I had in mind. From here, it's only a small step to an order where two elements don't have a predecessor. Hint: an element may have an ancestor, even if it doesn't have a predecessor. Last edited by CornedBee; 03-28-2009 at 11:53 AM. All things begin as source code. Source code begins with an empty file. -- Tao Te Chip Riddle #7: two smallest elements So: 1 < 3 < 5 < ... < 2 < 4 < 6 < ... i.e. order all odd numbers before all even numbers, or vice versa. Given the definition of predecessor as "a is pred. of b if a < b and there's no c such that a < c < b", 2 has no predecessor. The second hint did it. Last edited by CornedBee; 03-28-2009 at 11:53 AM. All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law Riddle #7: two smallest elements It may sound silly, but I feel happy now. Last edited by CornedBee; 03-28-2009 at 11:54 AM. All things begin as source code. Source code begins with an empty file. -- Tao Te Chip Riddle #6: natural languages >I'm planning to write an article about the whole story to improve my English skills. Is this stuff somehow appealing to non-mathematicians? I'd read it. I was actually planning on taking a class this summer that looks at the foundations of logics, including things loke Gödel's theorem. I'll be away on an internship instead though so I won't get to take the course. Last edited by CornedBee; 03-28-2009 at 11:54 AM. Bonus Riddle Since Snafuist posts all of the riddles he doesn't get the fun of trying to solve them. Here's one you guys may enjoy, the solution is more computer sciency than you might think Every year in a village of dwarves, an evil ogre comes to play a deadly game. The dwarves get lined up by hight and are not permitted to look any direction but forward (this means each dwarf can only see the dwarfs ahead of him/her). Each dwarf is then given a hat, which he can not see. The hat is either black or white. This means each dwarf can see all of the hats in front of him, but not his own or any behind him. Starting from the tallest dwarf, they go in order down the line calling out either "black" or "white". If a dwarf correctly identifies the colour of their hat, they live. If they get the colour wrong, they die. The dwarves spend all year planning a strategy to save the most dwarves as possible. Problem: design a strategy to save the most dwarves. You can assume that all dwarves are cooperative since they want to save as many of themselves as possible. Each dwarf calls out the colour of the final dwarfs hat. 
When it gets to the final dwarf, he knows the colour of his hat (because he's heard it n-1 times), and he just says his own hat colour. This is guaranteed to save at least one dwarf. You should easily be able to find a solution that saves half of the dwarves. The optimal solution will always save at least n-1 dwarves. 03-27-2009 #61 03-27-2009 #62 03-27-2009 #63 03-27-2009 #64 The larch 03-28-2009 #65 Complete Beginner 03-28-2009 #66 Complete Beginner 03-28-2009 #67 Complete Beginner 03-28-2009 #68 03-28-2009 #69 Complete Beginner 03-28-2009 #70 03-28-2009 #71 Complete Beginner 03-28-2009 #72 03-28-2009 #73 Complete Beginner 03-28-2009 #74 03-28-2009 #75
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/114010-riddle-thread-5.html","timestamp":"2014-04-18T11:31:41Z","content_type":null,"content_length":"112220","record_id":"<urn:uuid:37412e66-4b38-493e-931f-ad04af4e93bd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: factor completely 8v^2-104v+288 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50bbd7ece4b0bcefefa04db1","timestamp":"2014-04-20T11:17:13Z","content_type":null,"content_length":"50935","record_id":"<urn:uuid:be2ff6e9-0a85-4872-a8c7-36094b3f1747>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Let ${M}^{n}$ be a compact oriented $n$-manifold and let $x:{M}^{n}\to {ℝ}^{n+1}$ be an immersion of ${M}^{n}$ into Euclidean space ${ℝ}^{n+1}$. Let $u :{M}^{n}\to {S}^{n}\subset {ℝ}^{n+1}$ be the Gauss map of $x$, and let $F:{S}^{n}\to {ℝ}^{+}$ be a position function in the unit sphere ${S}^{n}$. Consider compactly supported variations of $M$ that preserve volume and consider the critical points of ${J}_{F}\left(x\right)={\int }_{M}F\left(\gamma \right)dM$ under such variations; for $F\equiv 1$, these critical points are hypersurfaces of constant mean curvature. Of course, a notion of stability is naturally defined for the above variational problem. The author proves that $M$ is stable iff it is $k{W}_{F}$, where $k$ is a (computable) constant and ${W}_{F}$ is the Wulff shape of $F$ defined as follows. Let $\varphi :{S}^{n}\to {ℝ}^{n+1}$ be given by $\varphi \left(u \right)={F}_{u }+abla F$, where $abla F$ is the gradient of $F$ in the standard metric of ${S}^{n}$. The Wulff shape ${W}_{F}$ is the hypersurface given by ${W}_{F}=\ varphi \left({S}^{n}\right)$. If $F\equiv 1$, then ${W}_{F}={S}^{n}$, hence the author’s result generalizes the theorem of J. L. Barbosa and M. P. do Carmo [Math. Z. 185, 339-353 (1984; Zbl 0529.53006)] that a compact hypersurface of constant mean curvature in ${ℝ}^{n+1}$ is stable iff it is a round sphere. 53A10 Minimal surfaces, surfaces with prescribed mean curvature 52A15 Convex sets in 3 dimensions (including convex surfaces) 49Q05 Minimal surfaces (calculus of variations)
{"url":"http://zbmath.org/?q=an:0924.53009","timestamp":"2014-04-21T07:22:17Z","content_type":null,"content_length":"25549","record_id":"<urn:uuid:2d050c8d-74a7-4dd8-9102-a8247d5f0247>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
A Multivariable Adaptive Control Design with Applications to Air-heat Tunnel Using Delta Models Department of Process Control, University of Tomas Bata in Zlin, Faculty of Applied Informatics, Zlin, Czech Republic The article describes the design of adaptive controller for autonomous and non-autonomous control of nonlinear laboratory model hot-air tunnel using delta models. Synthesis of the controller is based on a matrix approach and polynomial theory. Autonomous control is solved using compensators. The controller was verified by simulation and the real-time experiment on nonlinear laboratory model hot-air tunnel. The recursive least squares method in delta domain is used in identification part of the proposed controller. At a glance: Figures Keywords: adaptive control, delta model, real time control, multivariable control American Journal of Mechanical Engineering, 2013 1 (7), pp 293-299. DOI: 10.12691/ajme-1-7-28 Received October 28, 2013; Revised November 13, 2013; Accepted November 25, 2013 © 2013 Science and Education Publishing. All Rights Reserved. Cite this article: • Petr, Navrátil. "A Multivariable Adaptive Control Design with Applications to Air-heat Tunnel Using Delta Models." American Journal of Mechanical Engineering 1.7 (2013): 293-299. • Petr, N. (2013). A Multivariable Adaptive Control Design with Applications to Air-heat Tunnel Using Delta Models. American Journal of Mechanical Engineering, 1(7), 293-299. • Petr, Navrátil. "A Multivariable Adaptive Control Design with Applications to Air-heat Tunnel Using Delta Models." American Journal of Mechanical Engineering 1, no. 7 (2013): 293-299. Import into BibTeX Import into EndNote Import into RefMan Import into RefWorks 1. Introduction Most of the control circuits are implemented as one-dimensional circuits. For a large number of objects it is necessary to control several variables relating to one system simultaneously. There are a number of possible approaches to design multivariable control systems. These approaches are based on different mathematical apparatus and hence the different forms of mathematical description of dynamic systems. This problem can be solved using the method of synthesis based on matrix approach and polynomial theory. This method is based on the description of multivariable systems using matrix fractions. Synthesis is easily algorithmizable for a digital computer. All the linear control tasks can be converted to an equation of the same type, only the coefficients of the equation depends on the task condition. To avoid loop interactions, multivariable systems can be decoupled into separate loops known as single input, single output (SISO) systems ^[7, 12]. Decoupling may be done using several different techniques. In our case the decoupling is realized by means of compensator placed ahead of the system. 2. Delta Models The Z-transfer function is used to describe discrete-time dynamic system. When the sampling period decreases the z-transfer functions have some disadvantages ^[1]. The disadvantage of the discrete models can be avoided by introducing a more suitable discrete model ^[5, 6, 10]. It is possible to introduce new discrete operator ^[10]. 
This operator has following properties: leads to a model that provides a simple linear constraints on models with the shift-operator converges to the continuous derivatives with sampling period goes to zero converges so that the inverse operator is causal Define operator and associated complex variable to fulfilled following condition By substituting In following parts only the forward ^[1, 3, 9]. Regression model (ARX) is useful to apply this method of identification. This model can be express in its compact form where y(k) is the process output variable, u(k) is the controller output variable and n(k) is the non-measurable random component). The description of the model and relations for feedback control of the model with two inputs and two outputs are derived in the following sections. The polynomials of the second degree are supposed in the description using the matrix fraction. 3. Description of Two-input Two-output System The system with two inputs and two outputs is the simplest and also the most common multivariable circuit. The internal structure of the system is described by single input single output transfer functions. These transfer functions uniquely identifies relationships between variables. The internal structure of the system is depicted in Figure 1. The transfer matrix of the system is The matrix Polynomial matrices A, B are left indivisible decomposition of matrix A[1], B[1] are right indivisible decomposition of matrix The matrices of discrete model take following form and the differential equations of the model are 3.1. Recursive Identification In the case of the above described system with two inputs and two outputs it is necessary to identify a total of sixteen unknown parameters of ARX model described by equation (11). The unknown parameters of the The parameter vector takes the form: The data vector is The detailed description of recursive identification algorithm for TITO system is designed in ^[8]. 4. Designing of Feedback MIMO System Transfer function of the controller takes the form matrix fraction. Polynomial matrices P, Q are left indivisible decomposition of matrix P[1], Q[1], are right indivisible decomposition of matrix Block diagram of closed loop can be seen in Figure 2. To ensure permanent zero control error is necessary to include the matrix of integrator. The matrix of integrator takes the form The control law can be derived from block diagram It is possible to derive following equation for the system output Equation (19) can be modified to give The stability control system is ensured if the transfer controller matrix is given by the solution matrix diophantine equation where M is a stable diagonal polynomial matrix. The behavior of closed loop system is given by the roots of this polynomial matrix. If the system is to be stable the roots of this polynomial matrix must be inside the circle with center at point -1/T[0] with radius 1/T[0]. The degree of the controller matrix polynomials depends on the internal properness of the closed loop. The structure of matrices P[1] and Q[1] was chosen so that the number of unknown controller parameters equals the number of algebraic equations resulting from the solution of the diophantine equations using the uncertain coefficients method. The solution to the diophantine equation results in a two sets of eight algebraic equations with unknown controller parameters. The controller parameters are given by solving these equations. These sets can be rewrite in a matrix form. 5. 
Autonomous Control Using Compensators Multiple-input multiple-output systems describe processes with more than one input and more than one output which require multiple control loops. These systems can be complicated through loop interactions that result in variables with unexpected effects. Loop interactions need to be avoided because changes in one loop might cause destabilizing changes in another loop. Decoupling the variables of that system will improve the control of that process. There are several ways to control multivariable systems with internal interactions. Some make use of decentralized PID controllers, whilst others are composed of a string of single input – single output methods ^[2, 4]. One possibility is the serial insertion of a compensator ahead of the system ^[8, 11, 12]. The aim here is to suppress of undesirable interactions between the input and output variables so that each input affects only one controlled variable. The resulting transfer function Compensator inserted in series before the system is chosen so that the product of the matrices was diagonal matrix. When matrix H is diagonal the decoupling conditions are fulfilled. Several well – known compensators are given in ^[8, 11, 12]. Control algorithms were derived for the model above with two compensators. These will be referred to as C[1][ ]and C[2]. Compensator C[1][ ]is based on finding the inversion of the controlled system. Matrix H is, therefore, a unit matrix. System output takes the form The stability of the closed loop is given by solution of following diophantine equation The structure of polynomial matrices of controller and matrix The controller parameters are given by solution of diophantine equation (29) using the uncertain coefficients method. The control law is described by matrix equation Compensator C[2 ]is adjugated to matrix B. When C[2] was included in the design of the closed loop the model was simplified by considering matrix A as diagonal. The multiplication of matrix B and adjugated matrix B results in diagonal matrix H. The determinants of matrix B represent the diagonal elements. When matrix is non-diagonal, its inverted form must be placed ahead of the system in order to obtain diagonal matrix H, otherwise it may increase the order of the controller and sophistication of the closed loop system. Although designed for a diagonal matrix, compensator C[2 ]also improves the control process for non – diagonal matrix A in the controlled system. This is demonstrated in the simulation results. Equation for system output takes the form The matrix The stability of the closed loop is given by solution of following diophantine equation The structure of the matrix P1 and Q1 is chosen so that the number of algebraic equations after multiplication of matrix diophantine equation corresponds to the number of unknown parameters. The structure of controller polynomial matrices takes the form Polynomial matrix M takes the form selected with regard to the structure of other matrices in diophantine equation Solving the diophantine equation defines a set of algebraic equations. These equations are subsequently used to obtain the unknown controller parameters. The control law is given by the block diagram 6. Simulation Examples A program and diagrams to simulate and verify the algorithms was created in the program system MATLAB - SIMULINK. Verification by simulation was carried out on a range of systems with varying dynamics. The control of the model below is given here as our example. 
The right side control matrices are denoted as follows: without compensator – M[1], with compensator C[1]-M[2], and with compensator C[2]-M[3]. The same initial conditions for system identification were used for all the types of adaptive control we tested. The initial parameter estimates were chosen to be The simulation results are shown in Figure 6-8. It is possible to draw several conclusions from the simulation results of the experiments on linear static systems. The basic requirement to ensure permanent zero control error was satisfied in all cases. The criteria on which we judge the quality of the control process are the overshoot on the controlled values and the speed with which zero control error is achieved. According to these criteria the controller incorporating compensator C[1] performed the best. However, this controller appears to be unsuited to adaptive control due to the size of the overshoot and the large numbers of process and controller outputs. The controller which uses compensator C2 seems to work best in adaptive control. The addition of compensators in series ahead of the system caused that change in one of control variables change only the corresponding process variable in all cases. Compensators actually provide autonomous control loop. With regards to decoupling, it is clear that controllers with compensators greatly reduce interaction. 7. Laboratory Experiment The verification of the proposed algorithms for autonomous and non-autonomous adaptive multivariable control on the real object under laboratory conditions has been realized using experimental laboratory model – air-heat tunnel. It is a suitable tool for the laboratory experimental verification of control algorithms and controller parameter settings. The model is composed of the heating coils, primary and secondary ventilator and a thermal resistor covered by tunnel. The heating coils are powered by controllable source of voltage and serves as the source of heat energy while the purpose of ventilators is to ensure and measure the flow of air inside the tunnel. Connecting the real model - hot-air tunnel is made using a technology card Advantech PCL 812, which is connected to the motherboard. The controller output variables are the inputs to the ventilator and heating coils and the process output variables are temperature and airflow at the tunnel. There are interactions between the control loops. The task was to apply the methods we designed for the adaptive control of a model representing a nonlinear system with variable parameters which is, therefore, hardly to control deterministically. Adaptive control using recursive identification both with and without the use of compensators was performed. As indicated in the simulation, compensator C[1] was shown to be unsuitable and control broke down. The other two methods provided satisfactory results. The time responses of the control for both cases are shown in Figure 10 and Figure 11. The figures demonstrate that control with a compensator reduces interaction. Process output variable y1 is the temperature and process output variable y2 is the airflow. The variables u1 and u2 are the controller outputs–inputs to the heating coils and ventilator. 8. Conclusion The aim of this study was to use algebraic methods for synthesis of multivariable control systems for adaptive control using delta models. The used algorithms are based on the pole placement method of the characteristic polynomial matrix. 
The adaptive control of a two-variable system based on polynomial theory and using delta models was designed. Decoupling problems were solved by the use of compensators. The designs were simulated and used to control a laboratory model. Experimental verification of proposed control algorithms were realized on laboratory model air-heat tunnel. The simulation results proved that these methods are suitable for the control of linear systems. The control tests on the laboratory model gave satisfactory results despite the fact that the nonlinear dynamics were described by a linear model. Due to the fact that the proposed controller is designed as an adaptive it can be used for control of non-linear and time-invariant systems. [1] Bobál, V., Böhm, J., Fessl, J., Machácek, J., Digital Self-tuning Controllers: Algorithms, Implementation and Applications, Springer, London, 2005, 318. [2] Landau, Y. D., Adaptive control: algorithms, analysis and application, Springer, New York, 2011, 587. [3] Keesman, K., System identification: an introduction. Springer, NewYork, 2011, 323. [4] Caravani, P., Moder linear control design a time-domain approach: an introduction, Springer, New York, 2011, 323. [5] Feur, G., Goodwin, C., Sampling in digital signal processing and control, Birkhäuser, Boston, 1996. [6] Middleton, R.H., Goodwin, G.C., Digital control and estimation - A unified approach, Prentice Hall, Englewood Cliffs, New York, 1990. [7] Skogestad, S., Postletwaite, J., Multivariable feedback control - Analysis and design, J. Willey, New York, 1996. [8] Bobál, V., Macháček, J., Dostál, P., Multivariable adaptive decoupling control of a thermo-analyzer, Journal of Systems and Control Engieneering, 215, 365-374, 2001. [9] Kulhavý, R., Restricted exponential forgetting in real time identification, Automatica, 23, 586-600, 1987. [10] Mukhopadhyay, S., Patra, A., Rao, G.P., New class of dicrete-time models for continuous time systems, International Journal of Control, 55, 1161-1187, 1992. [11] Peng, Z., A general decoupling precompensator for linear multivariable systems with application to adaptive control, IEEE Trans. Aut. Control, AC-35,344-348, 1990. [12] Wittenmark, B., Middelton, R.H., Goodwin G.C., Adaptive decoupling of multivariable systems, International Journal of Control, 46, 1993-2009, 1987.
{"url":"http://pubs.sciepub.com/ajme/1/7/28/","timestamp":"2014-04-18T13:07:12Z","content_type":null,"content_length":"95394","record_id":"<urn:uuid:a1951682-5cd0-43b9-9697-42f187d6daa0>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/harsh314/answered","timestamp":"2014-04-17T06:46:40Z","content_type":null,"content_length":"113287","record_id":"<urn:uuid:0a752c0d-d07e-4ff3-9d53-91d703825199>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem with recursive function for an expression Problem with recursive function for an expression Hey everyone i need help with this function that displays the result of the following expression : expr=1/1+2/2+3/3+5/4+8/5+... where the numerator represents a fibonaci number. int fibo(int n) if(n==0 || n==1) return 1; return fibo(n-1)+fibo(n-2); float expr(int n) return 1; return (fibo(n)/(n))+expr(n-1); int main() int n,m=1; printf("dami n"); printf("fibonaci de %d=%d\n",n,fibo(n)); printf("expresie de %d=%f",n,expr(n)); Nominal Animal Compile with warnings enabled, and your compiler will help you: sanzor.c: In function ‘main’: sanzor.c:22:11: warning: unused variable ‘m’ [-Wunused-variable] sanzor.c:24:10: warning: ignoring return value of ‘scanf’, declared with attribute warn_unused_result [-Wunused-result] sanzor.c:29:1: warning: control reaches end of non-void function [-Wreturn-type] Are you aware that float type has just seven significant digits or precision? You might wish to use double for this kind of thing. Edit 1: Fibonacci(0) = 0, Fibonacci(1) = 1, Fibonacci(2) = 1. Your fibo() function has wrong seed values (Fibonacci(1) = 1, Fibonacci(2) = 1). Edit 2: If you have two integer variables or expressions, / does an integer division. Therefore, on line 14, your code does an integer division, discarding any fractional part of the result. To calculate the division using floating-point values, cast one or both to float or double. In other words, if you have integer variables or expressions a and b, then a/b is also an integer: the fractional decimals have been discarded. If you want the result as a double, then use a/(double)b for example. That casts b to double type, thus promoting the result to double type too. Thank you i thought that if the function returns a float result it is enough...
{"url":"http://cboard.cprogramming.com/c-programming/152712-problem-recursive-function-expression-printable-thread.html","timestamp":"2014-04-18T22:27:16Z","content_type":null,"content_length":"9159","record_id":"<urn:uuid:386a9fc8-3e17-4a27-adfa-4e2add395da2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
SOC 504 Course Website Spring '11 Syllabus NOTE: These notes are under revision. Any particular set has not been revised unless there is an "*" next to it. Soc 504 Class Notes: Background Mathematics, Probability Theory, and Classical Statistics and Estimation Reviews are now found here Simple Regression I Notes Simple Regression II Notes Multiple Regression I Notes Multiple Regression II Notes Dummy Variables, Interactions, and Nonlinearity Notes Outliers and Influential Cases Notes Multicollinearity Notes Nonnormal and Heteroscedastic Errors Notes Alternative Estimation Methods Notes Missing Data Notes Generalizations of the Regression Model Notes (TO BE UPDATED)
{"url":"http://www.princeton.edu/~slynch/soc504/soc504index.html","timestamp":"2014-04-23T08:55:28Z","content_type":null,"content_length":"1580","record_id":"<urn:uuid:fbdfb754-ee7b-40eb-9089-4cad2a286b9a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
st: What is the best way to graph CI bands that partially overlap? [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: What is the best way to graph CI bands that partially overlap? From Joseph Coveney <jcoveney@bigplanet.com> To Statalist <statalist@hsphsun2.harvard.edu> Subject st: What is the best way to graph CI bands that partially overlap? Date Mon, 10 Jul 2006 17:20:53 +0900 Plotting the confidence interval as a tinted band around the prediction line seems to be a good way to present the model fit, and that is what -lfitci- does. A problem arises when plotting two categories' or groups' model fits on the same graph. Wherever the CI bands overlap, the CI band of the last fit plotted will obscure the earlier one's. Using -twoway rline- is an alternative, but it gives rise to a graph with six lines for two treatment groups (two prediction lines and two sets each of upper and lower confidence limit lines). Even with softening of the CL lines (reduced intensity, line thickness, etc.), the graph looks crowded and requires more audience effort. One approach I've seen recently is to separately plot the overlapping regions in a mixed color. Say, the CI band of the first plot is blue and that of the second is red, then the overlapping region is plotted in purple. I cannot recall where I saw the example of this approach, but it seems doable in Stata, using indicator variables for flagging when the lower CL of one fit is less than the upper CL of the other, and then overlaying a -twoway rarea ucl1 lcl2 x if flag-, or something similar. It seems cumbersome, though, and flagging could get complicated if the predictions cross, such as with an interaction. I suppose that another alternative for categorical predictors would be to use a connected-symbols or bars for the predictions with an overlaid range-spike-with-cap for the CI. This, too, would seem to make for an overly busy graph if there are more than just a couple of categories in the predictor, and doesn't seem ideal if the categories are samplings of an underlying predictor that is inherently continuous (time, drug Is there a consensus, or even a plurality, of opinion how best to depict model fits including CIs when there is some overlapping of the CIs? Joseph Coveney * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2006-07/msg00217.html","timestamp":"2014-04-18T22:19:29Z","content_type":null,"content_length":"7671","record_id":"<urn:uuid:6f93f7da-7f0f-413d-8241-6b08ae9c5260>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Rate and Yield Calculator |- MyCalculators.com ~ This calculator will calculate the rate or annualized compounded rate (yield) based on the compounding period selected. If you don't understand compounding, please look at my FAQ page below to get more information. Privacy Policy © 1995- MyCalculators.com This calculator will calculate the rate or annualized compounded rate (yield) based on the compounding period selected. If you don't understand compounding, please look at my FAQ page below to get more information.
{"url":"http://www.mycalculators.com/ca/yldcalculatorm.html","timestamp":"2014-04-17T19:37:24Z","content_type":null,"content_length":"9182","record_id":"<urn:uuid:cc6c6bc1-722d-4562-898f-29cd524a28a3>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Area of Quadrilateral, Given Angles and Two Opposite Sides Date: 05/13/2002 at 13:23:20 From: Simon Murtha-Smith Subject: Find the area of quadrilaterals with specific side and angle configurations My question is this: If I have a quadrilateral with all the angles known and two opposite sides known, can I find the area? I have tried splitting the shape in two to form two triangles, each with one side and one angle known, and then using the law of sines and the law of cosines; but I end up stuck with an angle inside of a sin or cos and cannot get it out. This is my work so far. A quadrilateral with known, opposite sides, a and c, and angles A, B, C, and D might look something | | | | | | c a | | | | | | The angles are not necessarily 90 degree angles. I then draw the diagonal | \ | | \ | | \ | c e a | \ | | \ | | \ | I now solve for the angles formed by the two triangles. | \180-(C-x) | | \ | |180-D-x\ | c e a | \ | | \C-x| | x \ | Now using the law of sines I make the following equations: e/sin B = a/sin(180-C+x) e = -a*sin B / sin(x-C) e/sin D = c/sin(180-D-x) e = c*sin D / sin(x+D) from these I get sin(x+D)/sin(x-C) = ( c*sin D )/( a*sin B ) I can then expand the sines on the left side, but it is useless. I end up with ((sin x)(cos D) + (cos x)(sin D)) ((sin x)(cos C) - (cos x)(sin C)) I can't go any further. There doesn't seem to be a way to pull the x out in order to solve for all the angles and then solve for the whole quadrilateral. I really need help on this one. I am so close but I can't it solved. Thanks for your help. Simon Murtha-Smith Date: 05/13/2002 at 16:04:08 From: Doctor Rick Subject: Re: Find the area of quadrilaterals with specific side and angle configurations Hi, Simon. You've done good work so far. I don't see another approach, and this one should work in principle, so let's try to take it Cross-multiplying (that is, multiplying through by both denominators), we have a*sin(B)*sin(x+D) = c*sin(D)*sin(x-C) which can be expressed using the angle-sum formulas as a*sin(B)(sin(x)*cos(D)+cos(x)*sin(D)) = We want to solve this equation for x. The best way to do so is to rewrite the equation so it involves only one trig function of x. Solving this equation for the trig function of x, we'll be one step from a solution for x. I'll do a bit of rearrangement to reduce the number of times x appears; this should give you a good hint. sin(x)(a*sin(B)*cos(D)-c*sin(D)*cos(C)) = Let us know what you come up with; this could be good material for our archives. Check my work, I'm doing it quickly! - Doctor Rick, The Math Forum Date: 05/13/2002 at 19:52:21 From: Simon Murtha-Smith Subject: Find the area of quadrilaterals with specific side and angle configurations Thank you very much Dr. Rick. That was very useful. I managed to get the answer in terms of x. This is what I continued to do from where you left off. You wrote: sin(x)(a*sin(B)cos(D) - c*sin(D)cos(C)) = -cos(x)(a*sin(B)sin(D) - c*sin(D)sin(C)) I then divided by cos(x) to get tan(x)(a*sin(B)cos(D) - c*sin(D)cos(C)) = -(a*sin(B)sin(D) - c*sin(D)sin(C)) I then brought everything but the tan(x) over to get -(a*sin(B)sin(D) - c*sin(D)sin(C)) tan(x) = ---------------------------------- (a*sin(B)cos(D) - c*sin(D)cos(C)) Then I just took the arctan to get -(a*sin(B)sin(D) - c*sin(D)sin(C)) x = arctan( ---------------------------------- ) (a*sin(B)cos(D) - c*sin(D)cos(C)) Thanks for all your help. 
I am working on a project on finding the area of irregular polygons using trigonometry and this was one particular case of a polygon that I knew I could solve for the area but wasn't sure how to finish the problem.
{"url":"http://mathforum.org/library/drmath/view/60686.html","timestamp":"2014-04-16T04:37:48Z","content_type":null,"content_length":"9591","record_id":"<urn:uuid:d9b163ee-c667-465b-ab12-45411755f0fd>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
[racket] static variables question From: Joe Gilray (jgilray at gmail.com) Date: Sun Feb 19 12:52:21 EST 2012 Hi Neil, thanks for the input. My responses inline below: On Sun, Feb 19, 2012 at 2:02 AM, Neil Van Dyke <neil at neilvandyke.org> wrote: > Some random feedback, from a quick glance at the code (I didn't work > through the algorithm)... > Do you mean to do "[candidate-primes (primes-from-to 2 1000)]" each time > that "factor" is called? Yes, basically I'm trying to avoid generating all the primes up to (sqrt n) if possible. > 2 and 1000 are magic numbers, and should at least be given names, for > documentation purposes and so that the constants appear in only one place. Yes, they are "start" and "end", but when I tried this: (let* loop-factors ([facts '()] [x n] [start 2] [end 1000] [candidate-primes (primes-from-to start end)]) The compiler said: let*: bad syntax (not a sequence of identifier--expression bindings) in: loop-factors > Also consider whether these magic numbers are arbitrary limits that are > unnecessary, and then whether they should be arguments or irrelevant to the > algorithm. Good point, but (factor 923472398 2 1000) seemed ugly to me. I suppose I could put them in a helper function? > For more idiomatic Racket, try to get rid of the "set!" forms, such as by > passing those values as part of a named-"let". OK, but I'd need to see an example. If "end" is always non-negative, is "(or (> end (integer-sqrt x)) (> (* > 1.25 end) (integer-sqrt x)))" redundant? This is where I'm trying to be "smart" about how many primes are using during the factorization. Let's say you are at a point in the algorithm where start is 16000, end is 32000 and (integer-sqrt x) is 65000. Further let's say that primes have only been generated (by primes-from-to) up to 32000. It may be possible that one of the primes in [start end] will divide x and you will be done at 32000, but if not, you will need to get more primes with (primes-from-to start end). Normally you'd get [32000 64000], but if x is prime the you will then have to come back for [64000 65000] before being done. It is much more efficient to make one call (primes-from-to 32000 65000) than to make two calls. Anyway that was my thinking > You are doing "(integer-sqrt x)" a few times, when "x" can't change in > between. You might want to have the code look like "(integer-sqrt x)" is > computed only once in the block of code in which "x" can't change. Just > good practice, IMHO; I don't promise that it makes a performance difference. Hmmm, yes, I left that as an exercise for the compiler! More seriously, will the Racket compiler optimizes those calls? You have some tests in nested "if" forms that look redundant. Perhaps > those can be adjusted so that redundant tests don't happen, such as by > getting rid of the "and" and shuffling around the "if" forms and their > clauses. (Sometimes this can be done easily; other times, it requires > creating little helper procedures, and gets messy.) Yes, I've obviously got some learning to do. thanks. > Formatting-wise, you might consider generally putting newlines between > each of the three parts of an "if" form. It's easier to distinguish the > parts at a glance, especially if the parts contain parens, and you can also > sometimes better see symmetries/asymmetries between the branches. > Lots of people use "append" casually, but if you get to the point of > fine-tuning this code or some other code, I usually try to build up lists > with the code pattern "(cons NEW-ELEMENT EXISTING-LIST)". 
Sometimes this > means doing a "reverse" after the list is complete. However, assuming that > the rest of the algorithm is essentially the same, whether avoiding > "append" is actually faster can come down to characteristics of the garbage > collector (and both the sizes and the lifetimes of the lists can be > relevant), so I think you'd have to evaluate performance empirically. Interesting! Is there a Racket profiler available? > You can use "(zero? X)" instead of "(= 0 X)". This is a minor point of > personal style: I have a bit of folk wisdom that, if you're testing numeric > equality with a constant in an algorithm, such as a number that changes in > a loop, most often that constant should be 0, and using "zero?" > acknowledges that. I like it. thanks again. > Joe Gilray wrote at 02/19/2012 04:05 AM: > Here's a slight reworking of the factor function. I think it is prettier >> and my in the spirit of Racket/Scheme. > -- > http://www.neilvandyke.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.racket-lang.org/users/archive/attachments/20120219/022f5463/attachment.html> Posted on the users mailing list.
{"url":"http://lists.racket-lang.org/users/archive/2012-February/050550.html","timestamp":"2014-04-17T22:34:01Z","content_type":null,"content_length":"10961","record_id":"<urn:uuid:ac27fd8c-8741-49fd-97e9-69332eb60941>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Solutions of ordinary differential equations as limits of pure jump Markov processes - in Handbook of Randomized Computing, 2000. Cited by 99 (2 self).
To motivate this survey, we begin with a simple problem that demonstrates a powerful fundamental idea. Suppose that n balls are thrown into n bins, with each ball choosing a bin independently and uniformly at random. Then the maximum load, or the largest number of balls in any bin, is approximately log n / log log n with high probability. Now suppose instead that the balls are placed sequentially, and each ball is placed in the least loaded of d ≥ 2 bins chosen independently and uniformly at random. Azar, Broder, Karlin, and Upfal showed that in this case, the maximum load is log log n / log d + Θ(1) with high probability [ABKU99]. The important implication of this result is that even a small amount of choice can lead to drastically different results in load balancing. Indeed, having just two random choices (i.e., ...

- IN PROCEEDINGS OF IEEE INFOCOM, 2000. Cited by 68 (11 self).
High performance Internet routers require a mechanism for very efficient IP address look-ups. Some techniques used to this end, such as binary search on levels, need to construct quickly a good hash table for the appropriate IP prefixes. In this paper we describe an approach for obtaining good hash tables based on using multiple hashes of each input key (which is an IP address). The methods we describe are fast, simple, scalable, parallelizable, and flexible. In particular, in instances where the goal is to have one hash bucket fit into a cache line, using multiple hashes proves extremely suitable. We provide a general analysis of this hashing technique and specifically discuss its application to binary search on levels.

- THEORETICAL COMPUTER SCIENCE, 2001. "..."

- STOC, 2000. Cited by 35 (4 self).
Let X be a set of n Boolean variables and denote by C(X) the set of all 3-clauses over X, i.e. the set of all 8·(n choose 3) possible disjunctions of three distinct, non-complementary literals from variables in X. Let F(n, m) be a random 3-SAT formula formed by selecting, with replacement, m clauses uniformly at random from C(X) and taking their conjunction. The satisfiability threshold conjecture asserts that there exists a constant r3 such that as n → ∞, F(n, rn) is satisfiable with probability that tends to 1 if r < r3, but unsatisfiable with probability that tends to 1 if r > r3. Experimental evidence suggests r3 ≈ 4.2. We prove r3 > 3.145, improving over the previous best lower bound r3 > 3.003 due to Frieze and Suen. For this, we introduce a satisfiability heuristic that works iteratively, permanently setting the value of a pair of variables in each round. The framework we develop for the analysis of our heuristic allows us to also derive most previous lower bounds for random 3-SAT in a uniform manner and with little effort.

- IEEE Transactions on Information Theory, 2001. Cited by 30 (11 self).
Congestion control algorithms used in the Internet are difficult to analyze or simulate on a large scale, i.e., when there are large numbers of nodes, links and sources in a network. The reasons for this include the complexity of the actual implementation of the algorithm and the randomness introduced in the packet arrival and service processes due to many factors such as arrivals and departures of sources and uncontrollable short flows in the network. To make the analysis or simulation tractable, often deterministic fluid approximations of these algorithms are used. These approximations are in the form of either deterministic delay differential equations, or more generally, deterministic functional differential equations (FDEs). In this paper, we ignore the complexity introduced by the window-based implementation of such algorithms and focus on the randomness in the network. We justify the use of deterministic models for proportionally-fair congestion controllers under a limiting regime where the number of flows in a network is large.

- In Proceedings of the 10th Annual ACM Symposium on Parallel Algorithms and Architectures, 1998. Cited by 19 (0 self).
In this paper we develop models for and analyze several randomized work stealing algorithms in a dynamic setting. Our models represent the limiting behavior of systems as the number of processors grows to infinity using differential equations. The advantages of this approach include the ability to model a large variety of systems and to provide accurate numerical approximations of system behavior even when the number of processors is relatively small. We show how this approach can yield significant intuition about the behavior of work stealing algorithms in realistic settings.

- UNIVERSITY OF ILLINOIS, 1999. Cited by 17 (7 self).
We investigate variations of a novel, recently proposed load balancing scheme based on small amounts of choice. The static setting is modeled as a balls-and-bins process. The balls are sequentially placed into bins, with each ball selecting d bins randomly and going to the bin with the fewest balls. A similar dynamic setting is modeled as a scenario where tasks arrive as a Poisson process at a bank of FIFO servers and queue at one for service. Tasks probe a small random sample of servers in the bank and queue at the server with the fewest tasks.

- Recently, 2011. Cited by 1 (0 self).
We develop an approach to the construction of Lyapunov functions for the forward equation of a finite state nonlinear Markov process. Nonlinear Markov processes can be obtained as a law of large numbers limit for a system of weakly interacting processes. The approach exploits this connection and the fact that relative entropy defines a Lyapunov function for the solution of the forward equation for the many particle system. Candidate Lyapunov functions for the nonlinear Markov process are constructed via limits, and verified for certain classes of models.

- 2000.
Petra Berenbrink, Dept. of Mathematics & Computer Science, Paderborn University, D-33095 Paderborn, Germany, pebe@uni-paderborn.de; Artur Czumaj, Department of Computer and Information Science, New Jersey Institute of Technology, University Heights, Newark, NJ 07102-1982, USA, czumaj@cis.njit.edu; Tom Friedetzky, Institut für Informatik, Technische Universität München, D-80290 München, Germany, friedetz@informatik.tu-muenchen.de; Nikita D. Vvedenskaya, Institute of Information Transmission Problems, Russian Academy of Science, Moscow 101447, Russia, ndv@iitp.ru.
Abstract: In recent years, the task of allocating jobs to servers has been studied with the "balls and bins" abstraction. Results in this area exploit the large decrease in maximum load that can be achieved by allowing each job (ball) a little freedom in choosing its destination server (bin). In this paper we examine an infinite and parallel allocation process (see [ABS98]) which is related to the "balls and bins" abs...

- 2001.
3-SAT is a canonical NP-complete problem: satisfiable and unsatisfiable instances cannot generally be distinguished in polynomial time. However, random 3-SAT formulas show a phase transition: for any large number of variables n, sparse random formulas (with m ≤ 3.145n clauses) are almost always satisfiable, dense ones (with m ≥ 4.596n clauses) are almost always unsatisfiable, and the transition occurs sharply when m/n crosses some threshold. It is believed that the limiting threshold is around 4.2, but it is not even known that a limit exists. Proofs of the satisfiability...
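For readers skimming these abstracts, the "power of two choices" phenomenon described in the load-balancing entries above is easy to reproduce in a few lines. The following Python sketch is not taken from any of the cited papers; the value of n and the use of Python's standard random module are arbitrary illustrative choices.

import random

def max_load(n, d, seed=0):
    # Throw n balls into n bins; each ball picks d candidate bins uniformly at
    # random and goes into the least loaded one. Return the maximum bin load.
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        candidates = [rng.randrange(n) for _ in range(d)]
        best = min(candidates, key=lambda b: bins[b])
        bins[best] += 1
    return max(bins)

n = 100_000
print("d = 1 max load:", max_load(n, 1))  # grows like log n / log log n
print("d = 2 max load:", max_load(n, 2))  # grows like log log n / log 2, i.e. much more slowly

Running this with larger or smaller n shows the same qualitative gap between one and two choices.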
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1384432","timestamp":"2014-04-23T22:57:22Z","content_type":null,"content_length":"37078","record_id":"<urn:uuid:6a4dfc06-954b-45db-a32c-07a758adcb51>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Information Geometry (Part 7)

Today, I want to describe how the Fisher information metric is related to relative entropy. I’ve explained both these concepts separately (click the links for details); now I want to put them together.

But first, let me explain what this whole series of blog posts is about. Information geometry, obviously! But what’s that? Information geometry is the geometry of ‘statistical manifolds’. Let me explain that concept twice: first vaguely, and then precisely.

Vaguely speaking, a statistical manifold is a manifold whose points are hypotheses about some situation. For example, suppose you have a coin. You could have various hypotheses about what happens when you flip it. For example: you could hypothesize that the coin will land heads up with probability $x$, where $x$ is any number between 0 and 1. This makes the interval $[0,1]$ into a statistical manifold. Technically this is a manifold with boundary, but that’s okay.

Or, you could have various hypotheses about the IQ’s of American politicians. For example: you could hypothesize that they’re distributed according to a Gaussian probability distribution with mean $x$ and standard deviation $y$. This makes the space of pairs $(x,y)$ into a statistical manifold. Of course we require $y \ge 0$, which gives us a manifold with boundary. We might also want to assume $x \ge 0$, which would give us a manifold with corners, but that’s okay too. We’re going to be pretty relaxed about what counts as a ‘manifold’ here.

If we have a manifold whose points are hypotheses about some situation, we say the manifold ‘parametrizes’ these hypotheses. So, the concept of statistical manifold is fundamental to the subject known as parametric statistics. Parametric statistics is a huge subject! You could say that information geometry is the application of geometry to this subject.

But now let me go ahead and make the idea of ‘statistical manifold’ more precise. There’s a classical and a quantum version of this idea. I’m working at the Centre for Quantum Technologies, so I’m being paid to be quantum—but today I’m in a classical mood, so I’ll only describe the classical version.

Let’s say a classical statistical manifold is a smooth function $p$ from a manifold $M$ to the space of probability distributions on some measure space $\Omega$. We should think of $\Omega$ as a space of events. In our first example, it’s just $\{H, T\}$: we flip a coin and it lands either heads up or tails up. In our second it’s $\mathbb{R}$: we measure the IQ of an American politician and get some real number.

We should think of $M$ as a space of hypotheses. For each point $x \in M$, we have a probability distribution $p_x$ on $\Omega$. This is a hypothesis about the events in question: for example “when I flip the coin, there’s a 55% chance that it will land heads up”, or “when I measure the IQ of an American politician, the answer will be distributed according to a Gaussian with mean 0 and standard deviation 100.”

Now, suppose someone hands you a classical statistical manifold $(M,p)$. Each point in $M$ is a hypothesis. Apparently some hypotheses are more similar than others. It would be nice to make this precise. So, you might like to define a metric on $M$ that says how ‘far apart’ two hypotheses are. People know lots of ways to do this; the challenge is to find ways that have clear meanings.

Last time I explained the concept of relative entropy. Suppose we have two probability distributions on $\Omega$, say $p$ and $q$.
Then the entropy of $p$ relative to $q$ is the amount of information you gain when you start with the hypothesis $q$ but then discover that you should switch to the new improved hypothesis $p$. It equals:

$\int_\Omega \; \frac{p}{q} \; \ln(\frac{p}{q}) \; q d \omega$

You could try to use this to define a distance between points $x$ and $y$ in our statistical manifold, like this:

$S(x,y) = \int_\Omega \; \frac{p_x}{p_y} \; \ln(\frac{p_x}{p_y}) \; p_y d \omega$

This is definitely an important function. Unfortunately, as I explained last time, it doesn’t obey the axioms that a distance function should! Worst of all, it doesn’t obey the triangle inequality.

Can we ‘fix’ it? Yes, we can! And when we do, we get the Fisher information metric, which is actually a Riemannian metric on $M$. Suppose we put local coordinates on some patch of $M$ containing the point $x$. Then the Fisher information metric is given by:

$g_{ij}(x) = \int_\Omega \partial_i (\ln p_x) \; \partial_j (\ln p_x) \; p_x d \omega$

You can think of my whole series of articles so far as an attempt to understand this funny-looking formula. I’ve shown how to get it from a few different starting-points, most recently back in Part 3. But now let’s get it starting from relative entropy!

Fix any point in our statistical manifold and choose local coordinates for which this point is the origin, $0$. The amount of information we gain if we move to some other point $x$ is the relative entropy $S(x,0)$. But what’s this like when $x$ is really close to $0$? We can imagine doing a Taylor series expansion of $S(x,0)$ to answer this question. Surprisingly, to first order the answer is always zero! Mathematically:

$\partial_i S(x,0)|_{x = 0} = 0$

In plain English: if you change your mind slightly, you learn a negligible amount — not an amount proportional to how much you changed your mind. This must have some profound significance. I wish I knew what. Could it mean that people are reluctant to change their minds except in big jumps?

Anyway, if you think about it, this fact makes it obvious that $S(x,y)$ can’t obey the triangle inequality. $S(x,y)$ could be pretty big, but if we draw a curve from $x$ to $y$, and mark $n$ closely spaced points $x_i$ on this curve, then $S(x_{i+1}, x_i)$ is zero to first order, so it must be of order $1/n^2$, so if the triangle inequality were true we’d have

$S(x,y) \le \sum_i S(x_{i+1},x_i) \le \mathrm{const} \, n \cdot \frac{1}{n^2}$

for all $n$, which is a contradiction.

In plain English: if you change your mind in one big jump, the amount of information you gain is more than the sum of the amounts you’d gain if you change your mind in lots of little steps! This seems pretty darn strange, but the paper I mentioned in part 1 helps:

• Gavin E. Crooks, Measuring thermodynamic length.

You’ll see he takes a curve and chops it into lots of little pieces as I just did, and explains what’s going on.

Okay, so what about second order? What’s

$\partial_i \partial_j S(x,0)|_{x = 0} ?$

Well, this is the punchline of this blog post: it’s the Fisher information metric:

$\partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}$

And since the Fisher information metric is a Riemannian metric, we can then apply the usual recipe and define distances in a way that obeys the triangle inequality. Crooks calls this distance thermodynamic length in the special case that he considers, and he explains its physical meaning.
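Before the proofs, here is a quick numerical sanity check of both claims, the vanishing first derivative and the Fisher metric at second order, for the coin-flipping manifold from the top of this post. This sketch is mine, not part of the original argument; it uses the standard fact that the Fisher information of the Bernoulli family with parameter x is 1/(x(1-x)).

import numpy as np

def S(p, q):
    # relative entropy: sum over events of (p/q) ln(p/q) q  =  sum of p ln(p/q)
    return float(np.sum(p * np.log(p / q)))

def coin(x):
    # the statistical manifold from the start of the post: x = probability of heads
    return np.array([x, 1.0 - x])

x = 0.3
q = coin(x)
g = 1.0 / (x * (1.0 - x))   # Fisher information of the Bernoulli family at x

h = 1e-4                    # a small step in the coordinate x
S_plus, S_minus = S(coin(x + h), q), S(coin(x - h), q)

print("first derivative :", (S_plus - S_minus) / (2 * h))  # close to 0
print("second derivative:", (S_plus + S_minus) / h**2)     # close to g
print("Fisher metric g  :", g)

The second difference uses S(q,q) = 0, so (S_plus + S_minus)/h^2 is the usual central estimate of the second derivative at the origin.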
Now let me prove that

$\partial_i S(x,0)|_{x = 0} = 0$

$\partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}$

This can be somewhat tedious if you do it by straightforwardly grinding it out—I know, I did it. So let me show you a better way, which requires more conceptual acrobatics but less brute force.

The trick is to work with the universal statistical manifold for the measure space $\Omega$. Namely, we take $M$ to be the space of all probability distributions on $\Omega$! This is typically an infinite-dimensional manifold, but that’s okay: we’re being relaxed about what counts as a manifold here. In this case, we don’t need to write $p_x$ for the probability distribution corresponding to the point $x \in M$. In this case, a point of $M$ just is a probability distribution on $\Omega$, so we’ll just call it $p$.

If we can prove the formulas for this universal example, they’ll automatically follow for every other example, by abstract nonsense. Why? Because any statistical manifold with measure space $\Omega$ is the same as a manifold with a smooth map to the universal statistical manifold! So, geometrical structures on the universal one ‘pull back’ to give structures on all the rest. The Fisher information metric and the function $S$ can be defined as pullbacks in this way! So, to study them, we can just study the universal example. (If you’re familiar with ‘classifying spaces for bundles’ or other sorts of ‘classifying spaces’, all this should seem awfully familiar. It’s a standard math trick.)

So, let’s prove that

$\partial_i S(x,0)|_{x = 0} = 0$

by proving it in the universal example. Given any probability distribution $q$, and taking a nearby probability distribution $p$, we can write

$\frac{p}{q} = 1 + f$

where $f$ is some small function. We only need to show that $S(p,q)$ is zero to first order in $f$. And this is pretty easy. By definition:

$S(p,q) = \int_\Omega \; \frac{p}{q} \, \ln(\frac{p}{q}) \; q d \omega$

or in other words,

$S(p,q) = \int_\Omega \; (1 + f) \, \ln(1 + f) \; q d \omega$

We can calculate this to first order in $f$ and show we get zero. But let’s actually work it out to second order, since we’ll need that later:

$\ln (1 + f) = f - \frac{1}{2} f^2 + \cdots$

$(1 + f) \, \ln (1+ f) = f + \frac{1}{2} f^2 + \cdots$

\begin{aligned} S(p,q) &= \int_\Omega \; (1 + f) \; \ln(1 + f) \; q d \omega \\ &= \int_\Omega f \, q d \omega + \frac{1}{2} \int_\Omega f^2\, q d \omega + \cdots \end{aligned}

Why does this vanish to first order in $f$? It’s because $p$ and $q$ are both probability distributions and $p/q = 1 + f$, so

$\int_\Omega (1 + f) \, q d\omega = \int_\Omega p d\omega = 1$

but also

$\int_\Omega q d\omega = 1$

so subtracting we see

$\int_\Omega f \, q d\omega = 0$

So, $S(p,q)$ vanishes to first order in $f$. Voilà!

Next let’s prove the more interesting formula:

$\partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}$

which relates relative entropy to the Fisher information metric. Since both sides are symmetric matrices, it suffices to show their diagonal entries agree in any coordinate system:

$\partial^2_i S(x,0)|_{x = 0} = g_{ii}$

Devoted followers of this series of posts will note that I keep using this trick, which takes advantage of the polarization identity.

To prove

$\partial^2_i S(x,0)|_{x = 0} = g_{ii}$

it’s enough to consider the universal example. We take the origin to be some probability distribution $q$ and take $x$ to be a nearby probability distribution $p$ which is pushed a tiny bit in the $i$th coordinate direction. As before we write $p/q = 1 + f$.
We look at the second-order term in our formula for $S(p,q)$:

$\frac{1}{2} \int_\Omega f^2\, q d \omega$

Using the usual second-order Taylor’s formula, which has a $\frac{1}{2}$ built into it, we can say

$\partial^2_i S(x,0)|_{x = 0} = \int_\Omega f^2\, q d \omega$

On the other hand, our formula for the Fisher information metric gives

$g_{ii} = \left. \int_\Omega \partial_i \ln p \; \partial_i \ln p \; q d \omega \right|_{p=q}$

The right hand sides of the last two formulas look awfully similar! And indeed they agree, because we can show that

$\left. \partial_i \ln p \right|_{p = q} = f$

How? Well, we assumed that $p$ is what we get by taking $q$ and pushing it a little bit in the $i$th coordinate direction; we have also written that little change as $p/q = 1 + f$ for some small function $f$. So,

$\partial_i (p/q) = f$

and thus:

$\partial_i p = f q$

and thus:

$\partial_i \ln p = \frac{\partial_i p}{p} = \frac{fq}{p}$

so

$\left. \partial_i \ln p \right|_{p=q} = f$

as desired.

This argument may seem a little hand-wavy and nonrigorous, with words like ‘a little bit’. If you’re used to taking arguments involving infinitesimal changes and translating them into calculus (or differential geometry), it should make sense. If it doesn’t, I apologize. It’s easy to make it more rigorous, but only at the cost of more annoying notation, which doesn’t seem good in a blog post.

Boring technicalities

If you’re actually the kind of person who reads a section called ‘boring technicalities’, I’ll admit to you that my calculations don’t make sense if the integrals diverge, or we’re dividing by zero in the ratio $p/q$. To avoid these problems, here’s what we should do. Fix a $\sigma$-finite measure space $(\Omega, d\omega)$. Then, define the universal statistical manifold to be the space $P(\Omega, d\omega)$ consisting of all probability measures that are equivalent to $d\omega$, in the usual sense of measure theory. By Radon-Nikodym, we can write any such measure as $q d \omega$ where $q \in L^1(\Omega, d\omega)$. Moreover, given two of these guys, say $p d \omega$ and $q d\omega$, they are absolutely continuous with respect to each other, so we can write

$p d \omega = \frac{p}{q} \; q d \omega$

where the ratio $p/q$ is well-defined almost everywhere and lies in $L^1(\Omega, q d\omega)$. This is enough to guarantee that we’re never dividing by zero, and I think it’s enough to make sure all my integrals converge.

We do still need to make $P(\Omega,d \omega)$ into some sort of infinite-dimensional manifold, to justify all the derivatives. There are various ways to approach this issue, all of which start from the fact that $L^1(\Omega, d\omega)$ is a Banach space, which is about the nicest sort of infinite-dimensional manifold one could imagine. Sitting in $L^1(\Omega, d\omega)$ is the hyperplane consisting of functions $q$ with

$\int_\Omega q d\omega = 1$

and this is a Banach manifold. To get $P(\Omega,d \omega)$ we need to take a subspace of that hyperplane. If this subspace were open then $P(\Omega,d \omega)$ would be a Banach manifold in its own right. I haven’t checked this yet, for various reasons.

For one thing, there’s a nice theory of ‘diffeological spaces’, which generalize manifolds. Every Banach manifold is a diffeological space, and every subset of a diffeological space is again a diffeological space. For many purposes we don’t need our ‘statistical manifolds’ to be manifolds: diffeological spaces will do just fine. This is one reason why I’m being pretty relaxed here about what counts as a ‘manifold’.
For another, I know that people have worked out a lot of this stuff, so I can just look things up when I need to. And so can you! This book is a good place to start: • Paolo Gibilisco, Eva Riccomagno, Maria Piera Rogantin and Henry P. Wynn, Algebraic and Geometric Methods in Statistics, Cambridge U. Press, Cambridge, 2009. I find the chapters by Raymond Streater especially congenial. For the technical issue I’m talking about now it’s worth reading section 14.2, “Manifolds modelled by Orlicz spaces”, which tackles the problem of constructing a universal statistical manifold in a more sophisticated way than I’ve just done. And in chapter 15, “The Banach manifold of quantum states”, he tackles the quantum version! 25 Responses to Information Geometry (Part 7) 1. Interesting stuff! (On the copy-editing front: it’s missing a “latex” in $L^1(\Omega, d\omega)$.) □ Thanks. I just did that to see if anyone would read that far. I hope everyone got the nasty joke in paragraph 5. 2. And you can retrieve the Fisher information metric from other divergences, as discussed on p. 4 of Snoussi’s The geometry of prior selection. □ Thanks! I’m guessing the `δ-divergence’ defined on top of this page (in equation 2) is the Rényi version of relative entropy. This is a nice paper! I’m still not at the point of understanding all this stuff, but I’m getting a lot closer. 3. It would be interesting to know under which conditions Fisher’s metric defines a flat Levi-Civita connection. □ Nice question! We can make a bit of progress on this in the simple case where our space of events $\Omega$ has a finite number of points, say $\Omega = \{1,2,\dots,n\}$ Then the universal statistical manifold is the space of all probability distributions on $\Omega$. This is the simplex: $\{ p \in [0,1]^n \; \; : \; \; \sum_{i = 1}^n p_i = 1 \}$ So for example for $n = 3$ it’s a triangle. But the Fisher metric does not make this simplex flat! Instead, it turns out to be isometric to a piece of a sphere: $\{ x \in [0,1]^n \; \; : \; \; \sum_{i = 1}^n x_i^2 = 1 \}$ So, for example, when $n = 3$ it’s shaped like an eighth of a sphere. Knowing this, we can begin to understand when the Fisher metric on a general statistical manifold is flat (at least when $\Omega$ is finite.) We need to map that manifold into the universal statistical manifold so that its image is a flat submanifold sitting inside this piece of a sphere! ☆ mmm, I’ll have to think about this, It’s not straightforward to me that Fisher’s curvature should be that of a sphere… (as I process information I elaborate it on a new blog, you might want to take a look at it from time to time) 4. One important concept for interpretation is that on any statistical manifold events shouldn’t be just points but rather collections of points of various densities. One reason is independent events are interpenetrating point clouds. □ John F wrote: One important concept for interpretation is that on any statistical manifold events shouldn’t be just points… Just to clear up any possible miscommunication: in my setup, I’m calling points in the statistical manifold ‘hypotheses’, not ‘events’. There are two spaces floating around here: the statistical manifold $M$ whose points are hypotheses, and the measure space $\Omega$ whose points are events. Each point in $M$ gives rise to a probability distribution on $\Omega$. So, each hypothesis is a probability distribution of events. E.g., when $\Omega = \{H, T\}$, a typical hypothesis is “the coin will land heads-up with probability 60%”. 
☆ Your setup is confusing if you are used to ordinary (say Bayesian) statistical inference! I think it translates like this. A Bayesian prior is a distribution over hypotheses. You don’t jump from one hypothesis to another, but weight them differently as the likelihood changes as you get more data. Usually, as you get more data, the posterior becomes more like a multivariate gaussian. To estimate the (inverse of the) covariance matrix, you can take second derivatives of -log(posterior), evaluated at the mode of the posterior. The result is the Fisher information matrix (not metric). In statistical inference, you are therefore interested in how the curvature of log(posterior) changes at a single point as more data becomes available. You could say: data tells the log (posterior) how to curve. ☆ It’s not miscommunication, just stupidity on my part. I thought we were discussing how far apart two distributions are, instead it is “how far apart two hypotheses are”? I can kind of imagine smoothly mapping points in M to distributions in Ω. I have trouble imagining smoothly mapping independent event distributions to points. As Graham suggests I think in practice there is fuzziness with points in M being special cases of pointy distributions in M. But even with points, what about inference as parameter estimation? Probably we need homework examples. Is this an example: Suppose during the week I’m likely to have to stop by a store to get milk, and/or bread, and/or dog food, and/or Coke, depending on the day and time -say for my family more likely to need dog food later in the week. Then an event is me picking up some set of those, and an hypothesis is the set of probabilities of me picking each up? Then in this case different times have different hypotheses, so you can guess the time from what I got. 5. Tomate discusses Fisher information, starting from a rework of John Baez’s post Information Geometry (Part 7), but then moving on to new ideas. 6. In France, “information Geometry” is included in a larger mathematical domain “Geometric Science of Information”, that is debated in Brillouin Seminar launched in 2009 : You can register for Brillouin Seminar News : Recently, Brillouin Seminar has organized : - In 2011, a French-Indian Workshop on “Matrix Information Geometries” at Ecole Polytechnique. Proceedings will be published by Springer in 2012. Slides and abstracts are available on website : - In 2012, a Symposium on “Information Geometry and Optimal Transport Theory” at Institut Henri Poincaré in Paris with GDR CNRS MSPC. All slides are available on the website : You can find a very recent French PhD Dissertation in English on this subject by Yang Le and supervised by Marc Arnaudon : Medians of probability measures in Riemannian manifolds 7. Treating the Fisher information metric as an infinitesimal Kullback-Leibler is interesting, but it begs the question: Given probability distributions P and Q (parameterized by some lambda), what is the geodesic distance between the two? Well, I suppose this is a rhetorical question, its of course $\displaystyle{ \int\sqrt{\frac{d\lambda^j}{dt}g_{jk}\frac{d\lambda^k}{dt}} \;dt}$ for some path $\lambda(t)$ connecting P and Q, we know this from textbooks. But what is this, intuitively? The point is that since we have a metric, we then also have geodesics connecting any two points (assuming simply connected, convex manifold, etc). Since the KL doesn’t satisfy the triangle inequality, it clearly cannot be measuring the geodesic. So what is it measuring? 
Well, the partial answer is that KL does not need to make an assumption of a parameter lambda, or of a “path” connecting P and Q, and so is “general” in this way. But still, I have no particular intuition here…

□ Err, well, clearly I mangled things a bit above, and can’t figure out how to edit to fix it. So, please “understand what I mean, not what I say”. [Moderator's note: I fixed your comment. There is no ability for you to retroactively edit comments. On this blog, as on most WordPress blogs, you need to type $latex x = y$ to get the equation x = y The ‘latex’ must occur directly after the dollar sign, with no space in between. There must be a space after it. And double dollar signs don’t work, at least on this blog.]

☆ Oh, hmm, right. Silly me. The geodesic length is the square root of the Jensen-Shannon divergence, as pointed out by Gavin Crooks in the quoted article. I’m not masochistic enough to try to verify this explicitly, although it would be a good exercise… Next question: in principle, one could calculate the Levi-Civita symbols, the curvature tensor, the Ricci tensor, “simply” by plugging in the expression of the Fisher metric, and turning the crank. My question then is: does one get lucky, do these expressions simplify, or do they remain a big mess? Again, since Jensen-Shannon does follow the geodesic, and since it’s a symmetrized form of Kullback-Leibler, I guess this means that KL somehow follows a path that just starts in the wrong direction, and then curves around? Well, of course it does, but intuitively, it’s…?

☆ Hi, Linas! Good to see you here! You wrote:

The geodesic length is the square root of the Jensen–Shannon divergence, as pointed out by Gavin Crooks in the quoted article.

Since I forget what the Jensen–Shannon divergence is, I imagine many other readers will too, so let’s remind ourselves by peeking at the Wikipedia article… Oh, okay. It’s a well-known way of making the relative entropy (also called the ‘Kullback-Leibler divergence’ by people who enjoy obscure technical terms) into an actual metric on the space of probability distributions. Given two probability distributions $p$ and $q$ on a measure space $(\Omega, \omega)$, the entropy of $p$ relative to $q$ is

$\displaystyle{ S(p,q) = \int_\Omega \; p\, \ln(\frac{p}{q}) \, d \omega }$

To make this symmetrical, we define a new probability distribution

$m = \displaystyle{ \frac{1}{2}(p+q)}$

that’s the midpoint of $p$ and $q$ in some obvious naive sense, and define the Jensen–Shannon distance to be

$\displaystyle{ JSD(p,q) = \frac{1}{2} \left( S(p,m) + S(q,m) \right) }$

Unlike the relative entropy, this is obviously symmetric:

$JSD(p,q) = JSD(q,p)$

And unlike the relative entropy, but less obviously, it also obeys the triangle inequality!

$JSD(p,r) \le JSD(p,q) + JSD(q,r)$

Nonetheless, I always thought this idea was a bit of a ‘trick’, and thus not worth studying. For a second you made me think the Jensen–Shannon distance was just the square of the geodesic distance as measured by the Fisher information metric. And that would mean it’s not just a trick! But unfortunately that’s not true… and probably not even what you were trying to say. The truth is here, and it’s less pretty.

○ Yellow caution alert: Over the last few days, I quadrupled the size of the wikipedia article on the Fisher information metric, synthesizing from the Crooks paper, some of your posts, and other bits and pieces culled via google. i.e. you’re reading what I wrote… (Disclaimer: I learn things by expanding WP articles. They’re not peer-reviewed.
Mistakes can creep in. They’re sometimes sloppy and disorganized; it doesn’t pay to be a perfectionist on WP).

☆ Linas wrote:

Next question: in principle, one could calculate the Levi-Civita symbols, the curvature tensor, the Ricci tensor, “simply” by plugging in the expression of the Fisher metric, and turning the crank. My question then is: does one get lucky, do these expressions simplify, or do they remain a big mess?

In the end it’s all incredibly beautiful. I think the right path was sketched earlier here. First, observe that the space of probability distributions on an $n$-element set is an $(n-1)$-simplex. Then, show that the Fisher information metric makes this simplex isometric to a portion of a round $(n-1)$-sphere! Once we know this, it’s clear that the curvature will be very nice and simple.

But last week Simon Willerton and I were trying to check that the simplex with its Fisher information metric really is isometric to a portion of a sphere. We got stuck, mainly because the ‘obvious’ map from the simplex to the sphere, namely the one that sends a point $(p_1 , \dots, p_n)$ such that $p_i \ge 0$ and $p_1 + \cdots + p_n = 1$ to the point $(p_1 , \dots, p_n) /\sqrt{p_1^2 + \cdots + p_n^2}$ on the unit sphere, didn’t seem to work. We couldn’t guess the right one, and we couldn’t find a mistake in our calculation. I tried to cheat by looking up the answer, but I couldn’t quickly find it, and then I got distracted. It’s all well-known stuff, to those who know it well.

In a recent paper, Gromov hints that this relation between the Fisher information metric on the simplex and the round metric on the sphere is secretly a link between probability theory and quantum theory (where states lie on a sphere, but in a complex Hilbert space). But he doesn’t elaborate, and he might have been…

○ Perhaps Gromov is hinting at the random Gaussian unitary ensemble? I’ve only gotten 3-4 pages into his paper. I imagine there should be all sorts of ‘secret links’, not just for simplexes and Hilbert spaces, but for generic homogeneous spaces. At the risk of stating ‘obvious’ things you understand better than I … Whenever one’s got some collection of operators acting on some space, you can ask what happens if you start picking out operators ‘at random’ (after specifying a measure that defines what ‘random’ means). By starting with something that’s homogeneous, you get gobs of symmetry, so measures and metrics are … um, ahem, cough cough ‘sphere-like’. Generalizations of Legendre polynomials and Clebsch-Gordan coefficients of good-ol su(2) to various wild hypergeometric series. There’s a vast ocean of neato results coming out of this: there’s also gobs of hyperbolic behaviour, lots of ergodic trajectories, and so analogs of the Riemann zeta that magically seem to obey the Riemann hypothesis (anything ergodic/hyperbolic seems to generalize something or other from number theory, this seems to be a general rule). I’m sure Gromov is quite aware of these things, given his … wish there were more hours in the day, it feels like I’m in a candy store, sometimes…

☆ Sometimes jokes turn out to be true. I got motivated again to cheat and look up the answer, and it’s very fascinating!
The trick is to think of an amplitude as being like the square root of a probability—as usual in quantum mechanics. We can map a probability distribution $(p_1 , \dots, p_n)$ to a point on the sphere of radius 1, namely

$(p_1^{1/2}, \dots, p_n^{1/2})$

And then the Fisher information metric on the simplex gets sent to the usual round metric on a portion of the sphere… up to a constant factor. If you want the metrics to match exactly, you should map the simplex to the sphere of radius 2, via

$(p_1 , \dots, p_n) \mapsto 2 (p_1^{1/2}, \dots, p_n^{1/2})$

I’m not sure how important that is: perhaps it simply means the Fisher information metric was defined slightly wrong, and should include a factor of 1/2 out front. But what’s the deep inner meaning of this relation between probability distributions and points on the sphere? It seems to be saying that the ‘right’ way to measure distances between probability distributions is to treat them as coming from quantum states, and measure the distance between those!

○ Turns out that trick you provide above leads directly to the Fubini-Study metric. Set the phase part of the complex wave-function to zero, apply the Fubini-Study metric, and you get (four times) the Fisher information metric. The Bures metric is identical to the Fubini-Study metric, except that it’s normally written for mixed states, while the Fubini-Study metric is normally expressed w/ pure states. The differences in notation obscure a lot. Wikipedia knows all, including references. The part that awes me is that log p and the wave-function phase alpha put together gives a symplectic form. This is surely somehow important, but I don’t know why.

○ Very intriguing stuff! I’m glad you’re trying to improve the Wikipedia article on Fisher information by adding some of these known but obscure facts. Someday some kid will read this stuff, put all the puzzle pieces together, and unify classical mechanics, quantum mechanics and probability theory in a new way. Unless we get there first, of course.

○ The other part that is weird is that log p materializes as if it were a vector in a tangent space. This suggests that the Shannon entropy has some geometric interpretation, but I can’t tell what it is.

8. Huh. Will ponder. If I manage to clean this up, will insert into the WP article :-) A quickie google search for ‘fisher information metric’ and ‘curvature’ brings up papers complicated enough to suggest that few are aware of this: I guess a brute-force attack must get mired.
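As a small numerical illustration of the isometry discussed in this thread (my own sketch, not part of the comments), the Python snippet below uses the fact, which follows from the formula for $g_{ij}$ in the post when $\Omega$ is a finite set, that the squared Fisher length of a tangent vector $v$ at a point $p$ of the simplex is $\sum_k v_k^2/p_k$. It then checks that a short step on the simplex has the same length as the chord between the image points under the radius-2 map $p \mapsto 2\sqrt{p}$.

import numpy as np

rng = np.random.default_rng(0)
p = rng.random(5); p /= p.sum()             # a point in the interior of the probability simplex
v = rng.standard_normal(5); v -= v.mean()   # a tangent vector: components sum to zero

eps = 1e-6
q = p + eps * v                             # a nearby point, still on the simplex

fisher_len = eps * np.sqrt(np.sum(v**2 / p))               # Fisher length of the small step p -> q
sphere_len = np.linalg.norm(2*np.sqrt(q) - 2*np.sqrt(p))   # Euclidean distance between the images

print(fisher_len, sphere_len)               # the two lengths agree to high accuracy

For small eps the two numbers agree to many digits, which is the radius-2 version of the statement above.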
{"url":"http://johncarlosbaez.wordpress.com/2011/03/02/information-geometry-part-7/","timestamp":"2014-04-17T18:24:30Z","content_type":null,"content_length":"137123","record_id":"<urn:uuid:ca5b82bd-79dd-4066-bf30-81e6be45bcd6>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Method and Device for Analysis and Visualization of a Network

Patent application title: Method and Device for Analysis and Visualization of a Network
Inventors: Geoffrey Canright (Oslo, NO), Kenth Engø-Monsen (Fredrikstad, NO), Åsmund Weltzien (Oslo, NO)
IPC8 Class: AH04L1226FI
USPC Class: 370254
Class name: Multiplex communications network configuration determination
Publication date: 2009-12-03
Patent application number: 20090296600

A method of analyzing and visualizing a network, said network including nodes interconnected by links, characterized in that said method includes steps of:
mapping a topology of the network;
creating an adjacency representation A of said network;
calculating an Eigenvector Centrality (EVC) score for each node;
identifying a set of neighbouring nodes from said adjacency representation A for each node in the network;
from said set of neighbouring nodes and EVC score, identifying for each node a neighbouring node thereto, wherein said neighbouring node has a highest calculated EVC score; and
creating a representation with entries for each link in the network, in which the entry for a given link is set to indicate if it is a link between a node and its neighbour with the highest EVC score, said representation being a Steepest Ascent Graph (SAG) of the network.

A method as claimed in claim 13, wherein said representations A and are implemented as matrices.

A method as claimed in claim 12, said method including additional steps of:
(a) multiplying a start vector s, s_i = i, i being a node number, with a Steepest Ascent Graph of the network expressed as a matrix;
(b) iterating step (a) until the start vector s converges to a stable vector s*; and
(c) reading off a regional membership of each node from said stable vector s*.

A method as claimed in claim 15, said method including additional steps of:
identifying nodes which are local maxima of the Steepest Ascent Graph (SAG) as center nodes;
grouping the nodes into regions surrounding each identified center node;
removing said center nodes and the links to said center nodes from the Steepest Ascent Graph (SAG);
identifying neighbouring nodes of said center nodes as subregion head nodes; and
grouping nodes into subregions surrounding each identified subregion head node, the nodes of a subregion being linked, via one or more hops, to the subregion head node in the Steepest Ascent Graph (SAG).
A method as claimed in claim 16, said method including additional steps of:
identifying neighbouring nodes of said head nodes as sub-subregion head nodes; and
grouping nodes into sub-subregions surrounding each identified sub-subregion head node, the nodes of a sub-subregion being linked to the sub-subregion head node in the Steepest Ascent Graph (SAG).

A method as claimed in claim 16, said method including additional steps of:
identifying regions of said network;
calculating a Steepest Ascent Graph (SAG) for each region separately; and
displaying one or more of the Steepest Ascent Graphs (SAG) on a display unit using force-balancing.

A method as claimed in claim 16, said method including additional steps of:
identifying regions and center nodes in said network;
calculating a Steepest Ascent Graph for each region separately;
identifying subregions in said network;
determining the size of each subregion;
selecting a threshold size T;
removing subregions smaller than said threshold size T from the graphs;
for each graph calculating the net link strength between each pair of subregions;
removing the center node from each region;
building a coarse-grained graph in which each subregion is represented as a single node using inter-subregion net link strengths as links; and
displaying the coarse-grained graphs for each region on the display unit using force-balancing.

A device for analyzing and visualizing a network, said network including nodes interconnected by links, characterized in that the device includes a controller and data storage for a database, the database being adapted to receive setup information for said nodes, the controller being operable:
to map a topology of the network from said setup information;
to create an adjacency representation A of said network;
to identify a set of neighbouring nodes from said adjacency representation A for each node in the network;
to calculate an Eigenvector Centrality (EVC) score for each node;
from said set of neighbouring nodes and EVC scores, to identify for each node a neighbouring node thereto, said neighbouring node having a highest calculated EVC score; and
to create a representation with entries for links in the network, in which an entry for a given link is set to indicate it is a link between a node and its neighbour with the highest EVC score, said representation being a Steepest Ascent Graph (SAG) of the network.

A device as claimed in claim 20, wherein the device includes an interface for interfacing to said network, the device being adapted to retrieve setup information from said nodes.

A device as claimed in claim 21, wherein the device is operable to retrieve traffic data from said nodes.

A computer readable medium bearing computer code which, when executed by a processor, controls the processor to carry out the steps described in claim

FIELD OF THE INVENTION

[0001] The present invention addresses the problem of understanding and controlling the flow of information in networks, with the aim of spreading or preventing spreading of information in said networks. The invention involves analyzing the structure of a given network, based on the measured topology (the nodes of the network and the links between them). The networks in question may be any kinds of networks, but the invention is particularly applicable in communication networks.

TECHNICAL BACKGROUND

[0002] There exist many methods for defining well connected clusters in a network; but only the regions-analysis method disclosed in the applicant's earlier Norwegian patent applications NO 20035852 and NO 20053330 has been shown to have direct utility for understanding and controlling spreading of information on the network. Specifically, in NO 20035852 we have presented a basic method for analyzing networks.
This method is valid whenever the links of the network may be viewed as symmetric--i.e., whenever flow of information over a link may (at least approximately) be assumed to be equally likely in either direction on the link. A principal output of this method is the assignment of each node to a region (well connected cluster) of the network. The analysis predicts that information spreading will be relatively faster within regions than between them. Hence knowledge of these regions is useful for controlling the spread of information--that is, either hindering the spread of harmful information (such as computer viruses) or aiding the spread of useful information.

Geoffrey Canright and Kenth Engø-Monsen, "Roles in Networks", Science of Computer Programming, 53 (2004) 195-214, is a research article which describes the analysis method in detail. Geoffrey Canright and Kenth Engø-Monsen, "Spreading on networks: a topographic view", to appear in Proceedings, European Conference on Complex Systems, 2005 (ECCS05), and Geoffrey S. Canright and Kenth Engø-Monsen, "Epidemic spreading over networks: a view from neighbourhoods", Telektronikk 101, 65-85 (2005), are further research articles which demonstrate that our definition of regions is indeed extremely useful for understanding how information is spread over a network. Also, in the last paper mentioned, we present methods for modifying the structure of a given network, towards the goal of either helping or hindering information flow. Results of some limited tests of these design methods are presented, which are also described in the Norwegian patent application NO 20053330. The test results reported in the last paper indicate that design and modification techniques that are based on our regions analysis can significantly affect the rate of information spreading.

One shortcoming of our region analysis method is that there has not so far been found any useful way to refine the analysis, i.e., to define subregions within each region. That is, the method allows one to sort the nodes of the networks into a number of regions, defined by their being well connected internally. However, the number of such regions is determined by the analysis, and hence is not subject to any choice by the user of the analysis. Also, for a sufficiently well connected network, the method can give the answer that the network is composed of a single region. Thus, if a user of this approach wishes to examine smaller subregions than those given by the analysis, new methods are needed. In many cases, it is desirable to be able to iteratively refine the analysis, defining sub-subregions, etc.

M. Girvan and M. Newman, "Community structure in social and biological networks", Proc. Natl. Acad. Sci. USA, 99 (2002), pp. 8271-8276, describes a method for network analysis which also breaks down a given network into well connected clusters. The Girvan-Newman method has the advantage that the breakdown may be refined as many times as wished, giving subregions, sub-subregions, etc. However, the Girvan-Newman method has no demonstrated connection to the important practical problem of understanding the spreading of information.

Another shortcoming of the region analysis method as described in NO 20035852 is that it can be too demanding in terms of computing power when handling large graphs. An important technical aspect of the regions-analysis method is the calculation of the steepest-ascent graph (SAG) for a given network. This graph is used to assign nodes uniquely to regions.
We have discovered, in working with multi-million-node graphs, that it is important to be able to calculate this SAG in an efficient manner. Specifically, we found that an ordinary approach to calculating the SAG in such cases might take several hundred years to complete--thus rendering the whole approach practically impossible.

Finally, we note that a highly desirable feature of any method of network analysis is the possibility for visualizing the resulting structure (as given by the analysis). There has been, and continues to be, a huge volume of work on the problem of visualizing networks. However, the problem of finding a good visualization which presents our `regional` view of a network is largely unsolved. An overview of current techniques for visualization of graphs may be found in Giuseppe Di Battista, Peter Eades, Roberto Tamassia, and Ioannis G. Tollis, Graph Drawing: Algorithms for the Visualization of Graphs, Prentice Hall PTR, Upper Saddle River, N.J., USA (1998).

SUMMARY OF THE INVENTION

[0009] Thus, a principal objective of the present invention is to provide a method and device for network analysis that solves the shortcomings of prior art methods as mentioned above. The analysis method of the present invention is based on the use of the steepest ascent graph (SAG). The method according to the present invention for analysis and visualization of a network, said network including a number of nodes inter-connected by links, is as defined in the appended claim 1. Specifically, the method includes at least the steps of mapping the topology of the network, calculating an Adjacency matrix A of said network, from said Adjacency matrix A extracting a neighbour list for each node in the network, calculating an Eigenvector Centrality (EVC) score for each node, from said neighbour list and EVC score identifying the neighbour of the node with the highest EVC score, and creating a matrix with entries for each link in the network, in which the entry for a given link is set to 1 if it is a link between a node and its neighbour with the highest EVC score, said matrix being the Steepest Ascent Graph (SAG) of the network. The invention also includes a device, a computer program product and a computer readable medium as claimed in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] The invention will now be given a detailed description, in reference to the appended drawings, in which:
FIG. 1 shows a simple test graph with 16 nodes,
[0014] FIG. 2 shows the same graph with contour lines removed,
FIG. 3 shows the subregions of the test graph in FIG. 1,
[0016] FIG. 4 shows the sub-subregions obtained by further refinement of the largest subregion in FIG. 3,
[0017] FIG. 5 is a schematic tree visualization of the test graph in FIG. 1,
FIG. 6 is a visualization of the Gnutella network using prior art technique,
[0019] FIG. 7 shows the steepest-ascent graph of the same network,
FIG. 8 is another prior art visualization of the Gnutella network, taken at another point in time,
[0021] FIG. 9 is the corresponding visualization using the steepest-ascent approach,
FIG. 10 shows the graph in FIG. 8, but with the nodes colored according to their region membership,
[0023] FIG. 11 is the subregion visualization for the two-region graph of FIG. 9,
[0024] FIG. 12 is the same graph with a threshold set for subregion size, i.e. small subregions are not shown,
[0025] FIG. 13 shows the subregion visualization for the one-region graph of FIG. 7, also with a threshold set on subregion size.
DETAILED DESCRIPTION

Defining Subregions and Refining

[0026] We invoke our topographic picture in order to describe the ideas behind this invention. In this picture, each region is a `mountain`, and the eigenvector centrality (EVC) index of each node is its `height`. For each region, the top of the mountain is called its Center--this is the highest node in the region. We then note that the steepest-ascent graph gives a picture of the `ridge` structure of the mountain. That is, each link which is retained in the steepest-ascent graph is a link from a node to that node's highest (in EVC) neighbor. These links thus represent the likeliest path for information flow towards or from the Center of the region. Furthermore, there is one such `ridge line` (including lower branches) for each neighbor of the Center. Hence we define a subregion as simply that branch of the SAG (which is a tree) which ends at one neighbor of the Center. That is, each neighbor of the Center sits at the head of a subtree of the SAG tree; and we identify each subtree as a subregion. This definition is not arbitrary, since each subtree represents in fact the set of likeliest paths for information flow between the nodes in the subtree and the Center.

This definition also has the obvious advantage that it allows for iterative refinement. Since a subregion is simply a subtree of the SAG, one can readily define sub-subregions as sub-subtrees. That is, one simply moves `down` the subtree from its head, until the first branching of the subtree. Each branch of the subtree then is defined as a distinct sub-subregion. The extension to even further refinements should be clear from this definition.

We illustrate the definition of subregions with an example. FIG. 1 shows a simple graph with 16 nodes. `Contour lines` of constant `height` are also shown. It is clear from the figure that a regions analysis gives two regions--one with 12 nodes on the left, and one with 4 nodes on the right. For each region, the Center node is marked with blue color.

[0030] FIG. 2 shows the same graph, with contour lines removed, and with those links lying on the SAG marked with thick lines. Hence the SAG is clearly visible in FIG. 2. Now we define the subregions for each region. For each region, we remove the Centers, and all links connected to them. Those nodes that were neighbors of a Center are now `heads` of their subregion. These nodes are colored black (see FIG. 3). Nodes which are at the `leaves` of the tree, i.e. at the end of a chain of links, are still red. Nodes which are both head and leaf (because they represent a one-node subregion) are black/red. Finally, there is a green node which is neither head nor leaf. Each connected subgraph in FIG. 3 is a subregion of the graph of FIG. 2. Thus we find that there is one subregion with six nodes, one with two nodes, and six subregions with only one node.

We note that trials with empirically measured (peer-to-peer) networks have indicated that one can find typically a wide variation in the size of the subregions, and that, even with large empirical networks, one-node subregions are not unusual. Hence FIG. 3 is typical (except for the small size of the whole graph) of the real networks we have examined so far (these having about 1000 nodes).

The graph of FIG. 1 allows for one further step of refinement. We illustrate this in FIG. 4, in which we refine the largest subregion of the graph. Refinement consists of removing the head of the subregion, and its links.
(If the head has only one neighbor below it, we remove that one also--and so on, until the removed head has multiple neighbors.) There are now three sub-subregions--that is, one for each neighbor of the removed head. The green node is now seen as head of its sub-subregion. The process of refinement is almost completely analogous to the process of defining subregions; also, any further refinements (on larger graphs than that in these Figures) are precisely like the refinement process illustrated here.

Efficient Calculation of the Steepest-Ascent Graph

As noted earlier, we found that applying a straightforward algorithm for finding the SAG gave a projected running time of about 200 years for a test graph with 10 million nodes. The problem here was that the entire test graph did not fit in the fast (RAM) memory of a machine with 4 GB of RAM. Hence we had to resort to `external-memory` algorithms, i.e. approaches which only read in a part of the problem at a time, operate on that part, delete it, and then read in the next part. (For a reference on external memory algorithms, see: External Memory Algorithms, pub. American Mathematical Society, Jan. 1, 1999.) Running time is then strongly constrained by the number of read operations for external memory--these operations are many times (orders of magnitude) slower than access times for RAM.

The present invention solves this problem by giving an algorithm which is optimal in terms of the number of accesses to external memory. That is, our new algorithm reads the neighbor list of each node (which is a column of the adjacency matrix) exactly once. Doing so reduced the running time for our 10-million-node example from (expected) 200 years to 58 hours.

The method builds on the insight that steepest ascent from any given node is actually determined by (a) its highest neighbor, plus (b) steepest ascent from this neighbor. In other words, for each node, we need only find--once and for all--its single highest neighbor (if there is one--otherwise it is a Center, i.e. a local maximum). Thus, for each node, we find and store that one piece of information, and forget all other links. In short, the SAG requires finding and storing exactly one link for each node. This link is found after a single access to the node's neighbor list, and stored in a separate data structure for the SAG.

In detail, calculation of the SAG begins with several input structures. First of all, we need the adjacency matrix A expressing the topology of the graph (A_ij = 1 if there is a link between nodes i and j, and 0 otherwise). The 1's in the i'th column (or row) of A thus give the node numbers of those nodes which are neighbors of node i; it is in this sense that we say that we can extract the neighbor list of a node from a column of A. We also need a vector e giving the eigenvector centrality (EVC) score e_i for each node i. We then use the neighbor list of a given node g, and the EVC scores of these neighbors (taken from the vector e), in order to find the single neighbor h of g that has the highest EVC score. We store this result in a new matrix, the SAG matrix, by placing a 1 at the corresponding entry. The SAG matrix is in fact the steepest-ascent graph (SAG). It is highly sparse, since it has only one link for each node. Hence it is much more feasible to store all of the SAG matrix in RAM than it is to store all of A (which is typically, in terms of storage requirements, 10-20 times as large). Of course, one need store only the 1's for any sparse, binary matrix, such as A or the SAG matrix; but still the former has many more 1's than the latter.
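As an illustration of the construction just described, the following Python sketch (mine, not the patent's code) computes the EVC vector by power iteration and then reads each node's neighbour list exactly once, recording only the link to the highest-EVC neighbour. The dense NumPy adjacency matrix and the Python dictionary used for the SAG are simplifying assumptions; in the setting described above, A would be read column-by-column from external storage and the SAG stored as a sparse binary matrix.

import numpy as np

def eigenvector_centrality(A, iters=200):
    # Power iteration: e converges to the principal eigenvector of A
    # (assumes a connected, non-bipartite graph so that the iteration settles down).
    e = np.ones(A.shape[0])
    for _ in range(iters):
        e = A @ e
        e /= np.linalg.norm(e)
    return e

def steepest_ascent_graph(A, e):
    # For each node g, keep only the link to its single highest-EVC neighbour h.
    # Nodes with no higher neighbour are Centers (local maxima of EVC).
    highest = {}     # g -> h : exactly one stored link per non-Center node
    centers = []
    for g in range(A.shape[0]):
        nbrs = np.flatnonzero(A[g])          # neighbour list of g, read exactly once
        if len(nbrs) == 0:
            centers.append(g)                # isolated node: trivially a local maximum
            continue
        h = nbrs[np.argmax(e[nbrs])]
        if e[h] > e[g]:
            highest[g] = h
        else:
            centers.append(g)
    return highest, centers

# Tiny example (a path 0-1-2-3 attached to a triangle 3-4-5):
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1
e = eigenvector_centrality(A)
sag, centers = steepest_ascent_graph(A, e)
print(sag, centers)

Here the dictionary highest plays the role of the sparse SAG matrix: one retained link per node, everything else forgotten.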
The efficiency of this method, in terms of the number of read-access events for columns of A, is clear. A naive approach would pick a node g, find its highest neighbor h, then find h's highest neighbor, and so on, until a Center is reached. This naive approach gives immediate region-membership information for each chosen node g; but it clearly requires many more read-access events in the case that A is externally stored. Our method, instead, defers determination of region membership until the entire SAG is stored in M.

One then determines region membership as follows. One builds a start vector s, such that s_i = i. That is, one simply places the node number at that node's entry. Multiplication of s by M sends each node number `downhill` in the SAG tree--for example, in the above notation, multiplication by M will send the number at h to g (and to all other nodes having h as their highest neighbor). Repeated multiplication by M results in a stable vector s*, where the entry in s* for each node g gives the node number of the Center whose region g belongs to. (In the exceedingly rare case that a node belongs to two regions, it will receive the sum of the node numbers for the two Centers--a case that is easily detected.) We note that only a few multiplications by M are needed, as the s vector converges exactly to s* after a number of multiplications equal to the radius of the largest region (measured in number of hops). Typical graphs, even very large graphs, have small radii due to `small worlds` effects.

A modified version of the procedure detailed in the previous paragraph can be used in the calculation of subregions. First the SAG must be updated in two ways: i) remove the Center node from the tree, causing the SAG to decompose into a number of separate trees, and ii) add a self-referencing link to the new root node of each new tree. Subregion membership is then determined by the same procedure given above, applied to each separate tree.

We describe two methods for visualising the structure of a network, based on the analysis method presented here. We call these two methods `Tree visualization` and `Subregion visualization`.

Tree Visualization

For Tree visualization we proceed as follows:
1. First consider each region as an isolated subgraph, i.e., ignore inter-region (`bridge`) links.
2. Find the SAG for each region separately.
3. Use freely available force-balance packages to display the resulting tree structures on the screen. For multiple regions, one can display multiple trees.
4. One can also calculate a `net link strength` between any given pair of subregions--either from the same region, or from distinct regions. One can then use this net link strength to determine which subregions (subtrees) should lie closest to one another in the tree (SAG) representing one region.

FIG. 5 shows the tree visualization for the graph of FIG. 1. This figure is only schematic--that is, we have not used any force-balance package to lay out the nodes. A practical approach to tree visualization is outlined above. Our approach uses freely available software to actually lay out the nodes in the plane; the new idea simply comes from discarding all links other than those in the SAG. In other words, tree visualization involves building the SAG (as outlined above), and then simply feeding the SAG as a graph to a force-balance visualization program such as UCINet (UCINet and NetDraw may be downloaded from: http://www.analytictech.com/). We offer more realistic examples of tree visualization in FIGS. 6-10.
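The region-membership propagation described above (repeated multiplication of the start vector by the sparse SAG matrix M, with s_i = i initially) can be sketched as follows. This is a minimal illustration, not the patented implementation; the dictionary representation and the explicit self-loop at each Center are assumptions made so that the fixed point is reached cleanly.

```python
def region_membership(sag, num_iters):
    """Propagate Center node-numbers 'downhill' through the SAG.

    sag[g] = h  means h is g's highest-EVC neighbor; sag[g] = None marks a Center.
    Equivalent to repeated multiplication s <- M s with s_i = i initially,
    where each Center is given a self-referencing link so its entry persists.
    """
    s = {g: g for g in sag}                                   # start vector: s_i = i
    parent = {g: (h if h is not None else g) for g, h in sag.items()}
    for _ in range(num_iters):                 # ~ radius of the largest region
        s = {g: s[parent[g]] for g in s}       # each node copies its parent's number
    return s                                    # s[g] = node number of g's Center


# Toy usage with the 3-node example from the previous sketch:
sag = {0: 1, 1: None, 2: 1}
print(region_membership(sag, num_iters=2))     # {0: 1, 1: 1, 2: 1} -> all in Center 1's region
```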
FIG. 6 shows a snapshot of the Gnutella peer-to-peer file-sharing network, taken in 2001. It has about 1000 nodes. The visualization in FIG. 6 was performed using NetDraw, a component of the network analysis package UCINet. This is thus a state-of-the-art visualization; but it reveals (as is common with large networks) a structureless mess.

FIG. 7 shows the same graph, laid out again by NetDraw; but the input to NetDraw was the steepest-ascent graph as found by our analysis. We see that our analysis finds only one region; but FIG. 7 reveals a rich internal subregion structure for this one region. In fact, many layers of substructure are already visible in FIG. 7; and it is clear that refinement of the subregions will only bring out this substructure even more clearly. FIG. 8 shows a different Gnutella snapshot, again with about 1000 nodes, again drawn using the full link structure and NetDraw. FIG. 9 shows that our analysis finds two regions for this snapshot. Again the contrast (compare FIGS. 8 and 9) is striking. FIG. 10 is the same layout as in FIG. 8, but with the nodes colored according to their region membership (as found by our analysis). The point of FIG. 10 is that the two-region structure is partially visible in the layout using the full link structure (assuming one knows how to assign the nodes to regions). Hence FIG. 10 gives some indication of the network's structure--more than does FIG. 8--but FIG. 9 shows both the two-region main structure, and many levels of substructure, much more clearly.

There are many subregions for the single region in FIG. 7, and for each of the two regions in FIG. 9. Clearly, for a tree structure, all subregions should radiate outwards from the center; but there is no obviously best criterion for determining which subregions are `neighbors` as they are laid out in a ring around the Center. The layouts shown in these two figures used the simple, standard mechanism of force-balance algorithms, in which every node has a degree of repulsion with respect to every other. Thus the force balance itself was allowed to determine the radial ordering of the subregions. We see that the results of using this simple default method are good.

It is also possible to use more information to guide the radial ordering of the subtrees. One can define and calculate a measure of `net link strength` (as described in more detail below) between any given pair of subregions, and then use this net link strength to guide the placement of the subtrees. For example, one can place a fictitious extra link between the respective heads of each pair of subtrees, giving a weight to this link that is determined by the net link strength between the subtrees (subregions). The force-balance method will then tend to drive subtrees towards one another if they have a high net link strength between them. We note that the use of net link strength may have an advantage with very large graphs. That is, for very large graphs, even the SAG tree structure may be too time-consuming to lay out with force balancing. In such a case, using extra inter-head links, with a high link weight compared to the SAG links, is likely to speed up convergence--perhaps considerably. Methods for calculating net link strength will be given in the next subsection, since this quantity plays a crucial role in subregion visualization. Finally, we emphasize that tree visualization is readily suited for displaying refinements of the subregions.
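The tree-visualization step of feeding only the SAG links to a generic force-balance layout can be sketched in a few lines. This is an illustration under assumed tooling (networkx and matplotlib rather than the UCINet/NetDraw programs named above), reusing the toy SAG format from the earlier sketches.

```python
# Minimal sketch: draw only the SAG links with a force-balance (spring) layout.
import networkx as nx
import matplotlib.pyplot as plt

sag = {0: 1, 2: 1, 3: 1, 4: 2, 5: 2, 1: None}    # toy SAG: node -> highest neighbor
G = nx.Graph()
G.add_edges_from((g, h) for g, h in sag.items() if h is not None)  # discard all other links

pos = nx.spring_layout(G, seed=42)               # force-balance layout of the tree only
nx.draw(G, pos, with_labels=True, node_color="lightgray")
plt.show()
```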
Refinement of a given subregion picture simply gives a new set of subtrees, which may then be handled precisely as for the case of multiple trees from multiple regions. FIG. 4 is (again) a schematic example of one step of refinement, starting from the tree visualization of FIG. 3.

Subregion Visualization

The procedure for Subregion visualization is as follows:
1. First consider each region as an isolated subgraph, i.e., ignore inter-region (`bridge`) links.
2. Find the SAG for each region separately.
3. For each subregion, determine its size (number of nodes).
4. Choose a threshold size T. Subregions of size smaller than T are not displayed, to save clutter. All subsequent steps apply only to subregions of size ≥ T.
5. For each SAG, calculate the net link strength between each pair of subregions.
6. Remove the Center of each region, so that the subregions are decoupled from one another at the Center. Their only remaining coupling is then the pairwise coupling given by the net inter-subregion link strength; the resulting structure is no longer a tree.
7. For each region, build a `coarse-grained graph` by representing each subregion as a single node, and using the inter-subregion net link strengths as the links. Display the resulting coarse-grained graphs for each region, using a freely available force-balance package. The displayed size of the nodes in the coarse-grained graphs may be used to indicate the size (number of actual nodes) of the corresponding subregion; and the net link strengths may be displayed using the thickness of the displayed links in the coarse-grained graph.

Subregion visualization requires a few more steps to explain than does tree visualization. For this reason, we repeat the steps given above, adding further details where appropriate.

1. First consider each region as an isolated subgraph, i.e., ignore inter-region (`bridge`) links.
2. Find the SAG for each region separately.
3. For each subregion, determine its size (number of nodes).

These three steps are clear.

4. Choose a threshold size T. Subregions of size smaller than T are not displayed, to save clutter. All subsequent steps apply only to subregions of size ≥ T.

It is always useful in visualization to be able to choose a level of resolution, i.e., the level of detail that one wishes to have displayed. Subregion visualization already removes much detail by simply displaying each subregion as a single node. However, there can be very large variation in the size of the subregions. For example, the graph of FIG. 7 yields subregions of size ranging from 1 to about 350--with a large number of tiny subregions, and only a few large ones. Furthermore, we expect this kind of distribution to be typical of many real networks. Hence it can be desirable to suppress the display of the many tiny subregions, and focus on the large ones.

5. For each SAG, calculate the net link strength between each pair of subregions.

In principle, there are many ways to define this net link strength. We give here a formula, based on two ideas: (i) links with high EVC get more weight; (ii) many links give more weight than few. To implement these two ideas, we define the `arithmetic link centrality` a_ij for a link between nodes i and j to be the arithmetic average of the two nodes' EVC scores:
a_ij = (e_i + e_j) / 2.   (1)

Alternatively, one can define the `geometric link centrality` g_ij for a link between nodes i and j to be the geometric average of the two nodes' EVC scores: g_ij = sqrt(e_i * e_j). We then define the net link strength between two subregions α and β to be the sum of the link centralities over all links connecting α and β. This gives

L(α, β) = Σ_{i in α} Σ_{j in β} a_ij,   (2)

where a_ij is taken to be zero when there is no link between i and j. We note finally that one can violate the instruction in step 1, for graphs with multiple regions. That is, an even more thorough overview may be obtained by calculating, and including the effects of, all inter-subregion net link strengths--both those between subregions in the same region, and those between subregions in different regions. [Formula (2) is equally valid for a pair of subregions taken from two distinct regions.] This will allow the resulting display to take into account inter-regional relations, so that the final layout reflects most clearly the whole set of relationships. Our default choice is however to treat each region separately.

6. Remove the Center of each region, so that the subregions are decoupled from one another at the Center. Their only remaining coupling is then the pairwise coupling given by the net inter-subregion link strength; the resulting structure is no longer a tree.

Here we see that the subregions are now treated as individual nodes (as far as visualization is concerned). They have a `size` (from step 3), and they have internode links with link strengths given as detailed in step 5. The Center is removed as it does not belong to any subregion; and the aim of subregion visualization is to display the subregions (only) and their relationships to one another. Thus we end up with a visualization problem with S nodes (for S subregions of size ≥ T), and, in general, links of some strength between most pairs of nodes. Thus our coarse-grained graph is in fact a dense graph--it is not sparse, since most of the possible links are present. However, two aspects make this visualization problem much easier than the problem of visualizing the entire network. First, the number S of subregions for a given region is guaranteed to be very much smaller than the number N of nodes in the graph--it is not more than the number of neighbors of the Center of the region (a number much less than N already), and is likely to be much smaller than even that number if the threshold size T is set to exclude many small subregions. Secondly, there are likely to be large differences among the various net link strengths in the resulting dense graph. These differences make convergence in the force-balance method much easier than it would be if all links had the same, or nearly the same, strength.

7. For each region, build a `coarse-grained graph` by representing each subregion as a single node and using the inter-subregion net link strengths as the links. Display the resulting coarse-grained graphs for each region, using a freely available force-balance package. The node size in the coarse-grained graphs may be used to indicate the number of nodes in the corresponding subregion; and the net link strengths may be displayed using the thickness of the displayed links in the coarse-grained graph.

All of the techniques needed for this step are publicly available. There are of course other ways (e.g., colors) to indicate scalar measures of node size and link strength. We do not exclude any such method here.
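A minimal sketch of the net-link-strength calculation of step 5, using the arithmetic link centrality of formula (1). The data structures are assumptions for illustration; the same loop applies whether the two subregions belong to the same region or to distinct regions.

```python
def net_link_strength(links, evc, subregion_of, alpha, beta):
    """Net link strength L(alpha, beta) of formula (2): the sum of arithmetic link
    centralities a_ij = (e_i + e_j)/2 over all links with one endpoint in alpha
    and the other in beta."""
    total = 0.0
    for i, j in links:                            # iterate over the graph's links once
        if {subregion_of[i], subregion_of[j]} == {alpha, beta}:
            total += (evc[i] + evc[j]) / 2.0      # arithmetic link centrality, formula (1)
    return total


# Toy usage: four nodes, two subregions 'A' and 'B', three links.
links = [(1, 2), (2, 3), (3, 4)]
evc = {1: 0.9, 2: 0.7, 3: 0.6, 4: 0.4}
subregion_of = {1: 'A', 2: 'A', 3: 'B', 4: 'B'}
print(net_link_strength(links, evc, subregion_of, 'A', 'B'))   # 0.65: only link (2, 3) crosses
```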
The essential information that we want to include in this invention is that both the node (subregion) size and the net (inter-subregion) link strength can and should be displayed in subregion visualization; they are an important part of the total picture of how the subregions are related to one another.

FIG. 11 shows the subregion visualization for the two-region graph of FIG. 9, with threshold T=1--that is, all subregions are shown. For comparison, in FIG. 12 we have set T=10. The reduction in clutter is significant. We note that it is not trivially easy to find correspondences between subregion structures in FIG. 9 and those in FIG. 11 or 12. We believe that this is because each type of visualization emphasizes different, but useful, structural information about the network under study. That is, the two methods are complementary, rather than redundant. Some main features can however be found to correspond. For example, the largest `red` subregion in FIG. 11 corresponds to the entire `lower half` of the red region in FIG. 9; we know that the lower half is a subregion, because the Center of that region is at the hub of the upper half. The same kind of correspondence may be found for the blue region. For completeness, we show in FIG. 13 the subregion visualization for the one-region graph of FIG. 7, with T=10. Here again we see one very large subregion, corresponding to the `upper half` of FIG. 7.

There are many conceivable applications of the inventive method. We list several here:
Analysis and improvement of information flow in organizations
Systems for supporting other kinds of social networks, e.g. online communities
Security for computer networks, e.g. virus control
Novel strategies for controlling the spreading of diseases among animals and humans
Limiting the spread of damage in technological networks, for example power networks

The method may be performed in a device including a controller and a storage device. The controller may be realized as a server, and the storage device may be a database controlled by the server. The storage device/database stores setup information regarding each node in a network. The setup information includes information on the connections/interfaces to/from each node. The device may also be interfaced to the network, and be adapted to retrieve this information from the nodes. In other cases this information must be gathered in other ways, e.g. when the nodes in question are not communication nodes. For communication nodes, traffic information may be gathered from each node, such as traffic counts.

The method according to the present invention may be implemented as software, hardware, or a combination thereof. A computer program product implementing the method or a part thereof comprises software or a computer program run on a general-purpose or specially adapted computer, processor or microprocessor. The software includes computer program code elements or software code portions that make the computer perform the method using at least one of the steps according to the inventive method. The program may be stored, in whole or in part, on, or in, one or more suitable computer-readable media or data storage means such as a magnetic disk, CD-ROM or DVD disk, hard disk, magneto-optical memory storage means, in RAM or volatile memory, in ROM or flash memory, as firmware, or on a data server.
It will be understood by those skilled in the art that various modifications and changes may be made to the present invention without departure from the scope thereof, which is defined by the appended claims.
{"url":"http://www.faqs.org/patents/app/20090296600","timestamp":"2014-04-20T02:56:53Z","content_type":null,"content_length":"69619","record_id":"<urn:uuid:21e1c79b-05b2-4aaf-aaa0-9f951eb75df1>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
[Scipy-svn] r4191 - trunk/scipy/cluster
scipy-svn@scip...
Sun Apr 27 08:06:17 CDT 2008

Author: damian.eads
Date: 2008-04-27 08:06:16 -0500 (Sun, 27 Apr 2008)
New Revision: 4191

Tightened language in vq's module summary docstring.

Modified: trunk/scipy/cluster/vq.py

--- trunk/scipy/cluster/vq.py	2008-04-27 12:55:26 UTC (rev 4190)
+++ trunk/scipy/cluster/vq.py	2008-04-27 13:06:16 UTC (rev 4191)
@@ -51,13 +51,19 @@
     is the code corresponding to the code_book[i] centroid.

     whiten(obs) --
-        Normalize a group of observations so each feature has unit variance.
+        Normalize a group of observations so each feature has unit
+        variance.
     vq(obs,code_book) --
-        Calculate code book membership of obs.
+        Calculate code book membership of a set of observation
+        vectors.
     kmeans(obs,k_or_guess,iter=20,thresh=1e-5) --
-        Train a codebook for mimimum distortion using the k-means algorithm.
+        Clusters a set of observation vectors. Learns centroids with
+        the k-means algorithm, trying to minimize distortion. A code
+        book is generated that can be used to quantize vectors.
     kmeans2 --
-        Similar to kmeans, but with several initialization methods.
+        A different implementation of k-means with more methods for
+        initializing centroids. Uses maximum number of iterations as
+        opposed to a distortion threshold as its stopping criterion.

 __docformat__ = 'restructuredtext'

@@ -580,10 +586,10 @@
        centroids to generate. If minit initialization string is
        'matrix', or if a ndarray is given instead, it is
        interpreted as initial cluster to use instead.
-    niter : int
-        Number of iterations of k-means to run. Note that this
-        differs in meaning from the iters parameter to the kmeans
-        function.
+    iter : int
+        Number of iterations of the k-means algrithm to run. Note
+        that this differs in meaning from the iters parameter to
+        the kmeans function.
     thresh : float
        (not used yet).
     minit : string

More information about the Scipy-svn mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-svn/2008-April/002138.html","timestamp":"2014-04-17T01:19:40Z","content_type":null,"content_length":"4570","record_id":"<urn:uuid:d56b5424-f1a1-496c-9555-4b8ee8f18e8b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Using a five-step procedure for inferential statistical analyses.

Abstract: Many statistics texts pose inferential statistical problems in a disjointed way. By using a simple five-step procedure as a template for statistical inference problems, the student can solve problems in an organized fashion. The problem and its solution will thus be a stand-by-itself organic whole and a single unit of thought and effort. The described procedure can be used for both parametric and nonparametric inferential tests. The example given is a chi-square goodness-of-fit test of a genetics experiment involving a dihybrid cross in corn that follows a 9:3:3:1 ratio. This experimental analysis is commonly done in introductory biology labs.

Key Words: Five-step procedure; statistical inference; chi-square; goodness-of-fit; dihybrid cross; introductory biology.

Subject: Biology (Study and teaching); Mathematical statistics (Study and teaching); Teaching (Methods)
Author: Kamin, Lawrence F.
Pub Date: 03/01/2010
Publication: The American Biology Teacher (National Association of Biology Teachers), ISSN 0002-7685, March 2010, Vol. 72, Issue 3.

Full Text: Inferential statistics is an indispensable tool for biological hypothesis testing. Early in their science education, students learn about the scientific method and how inductive rather than deductive reasoning is used to make the logical leap from particular experimental results to one or more general conclusions. However, before any conclusion can be reached, the experimental results must be tested for statistical significance. After all, there is a chance that any difference between two or more experimental treatments or tests is attributable to random events. Therefore, we use statistics "to compare the data with our ideas and theories, to see how good a match there is" (Hand, 2008: p. 10). The five-step procedure presented here was designed to aid in this process.

Science teachers must lead students through a strange new statistical landscape that combines logic, jargon, and mathematical calculations such as variance, standard deviation, sum of squares, and calculated test statistics. Concepts like Type I errors, one-tailed or two-tailed alternative hypotheses, and p value must be defined and related to specific examples. But even in excellent statistics and biostatistics texts, data are given, a value for α (level of significance) is given, and then, typically, a "What do you conclude?" question is asked. As an afterthought, usually a part B to the problem, students are asked to give the p value for their conclusion. This method of posing statistics problems has always struck me as disjointed.

I believe that the following simple procedure allows the given problem to be stated, viewed, and solved as a stand-by-itself organic whole. This procedure both formalizes and crystallizes student thinking. Another advantage of this five-step procedure is that it can be used for essentially all statistical inference tests--both parametric and nonparametric.
I was taught this technique in a graduate-level course in statistics, and I have been using it ever since.

[GRAPHIC OMITTED]

* The Five General Steps in Hypothesis Testing

Step 1. Write down the null and alternative hypotheses in both symbols and words, using complete sentences.
Step 2. Calculate the test statistic to the appropriate number of significant figures.
Step 3. (a) State the given α (probability of a Type I error). (b) Calculate the degrees of freedom. (c) Give the region of rejection both in symbols and in a graph.
Step 4. Draw a conclusion based on the calculated test statistic. (a) If the test statistic is in the region of rejection (RR), reject the null hypothesis and state the conclusion in one or more complete sentences. (b) If the test statistic is not in RR, accept the null hypothesis and state the conclusion in one or more complete sentences.
Step 5. Bracket the p value.

A chi-square goodness-of-fit test is quite commonly used to check the appropriateness of a proposed model that uses categorical data. One popular experiment involves checking to see if a cross involving corn plants results in the Mendelian dihybrid phenotypic ratio of 9 purple smooth to 3 purple wrinkled to 3 yellow smooth to 1 yellow wrinkled corn grains. The following example and data are from such an experiment from one of my botany lab groups.

[FIGURE 1 OMITTED]

Step 1. H_o: The data fit the model of 9 purple smooth to 3 purple wrinkled to 3 yellow smooth to 1 yellow wrinkled corn grains. H_a: H_o is false.
Step 2. (The observed counts, expected counts, and their contributions to the chi-square statistic are given in the table at the end of this article.)
Step 3. (a) α = 0.05 (b) df = 4 - 1 = 3 (c) RR = (7.815, ∞)
Step 4. χ²_calc = 3.218 does not lie in RR; therefore, I accept H_o (the null hypothesis) and conclude that the data fit the model proposed in H_o above.
Step 5. 0.30 < p < 0.40

* Comments

Step 1. For this example, no symbols were used in Step 1, although one could use, for example, p_1 = 9/16, p_2 = 3/16, p_3 = 3/16, and p_4 = 1/16. In a test for means equality, the null hypothesis might be as follows: H_o: μ_1 = μ_2; and H_a might be μ_1 ≠ μ_2 or μ_1 < μ_2 or μ_1 > μ_2, where μ refers to the population mean. Regarding H_a, for this example, one could state that the data do not fit the proposed model or simply that H_o is false.

Step 2. The "expected" counts are calculated under the assumption that H_o is true. Thus, the expected count for purple smooth corn grains was calculated as 9/16 × 361 (the total of all corn grains). The chi-square statistic is simply the sum of the last column in the table given in Step 2, or Σ (Obs - Exp)² / Exp. For this example, it is 3.218. The chi-square statistic was calculated to the same number of significant figures as in the chi-square table. It is assumed that the instructor has informed students of the conditions for validity of this test, namely that (1) the data represent a random sample from a large population, (2) the data are whole (counting) numbers and not percentages or standardized scores, and (3) the expected count for each class is ≥ 5 (Samuels & Witmer, 2003: chapter 10; Mendenhall et al., 1990: pp. 665-666).

Step 3. The probability of a Type I error, α, must be given as part of the problem. A Type I error is made when a true null hypothesis (H_o) is rejected. The degrees of freedom (df) are calculated as k - 1, where k is the number of data classes.
The chi-square statistic (χ²) has a domain of zero to infinity. The region of rejection (RR) is obtained from a statistical table of chi-square values.

Step 4. This is the important "Decision Rule" of many statistics books. By plotting the χ²_calc value of 3.218 on the graph in Step 3, one can see that 3.218 does not lie in the region of rejection (RR) but, rather, lies in the region of acceptance; this means that the null hypothesis is accepted. Since an absolute truth is not known, in the sense that the conclusion could be wrong, most statisticians prefer stating that there is insufficient evidence to reject the null hypothesis. Failing to reject H_o, under the constraints of committing a Type I or Type II error, is a better decision than simply accepting it, even though the two choices appear to give a similar conclusion. At this point, depending on time and the level of the class, the instructor may wish to discuss Type II errors. A Type II error is made if a false null hypothesis is accepted (not rejected). The probability of a Type II error (β) can be calculated after the fact (Glover & Mitchell, 2006: section 5.3; Schork & Remington, 2000: pp. 174-181), looked up in tables for some tests (Portney & Watkins, 2009: p. 853), or controlled for by calculating the sample size needed for a given β value (Mendenhall et al., 1990: pp. 443-446). The instructor may also wish to explain why, in most cases, a Type I error is more insidious than a Type II error and that most problems thus give the value for α without ever mentioning β.

Step 5. Most statistics books offer excellent explanations for the concept of "p value." One of the best and simplest explanations I have found is: "The term p-value is used to describe the probability that we would observe a value of the test statistic as extreme or more extreme than that actually observed, if the null hypothesis were true" (Hand, 2008: p. 88). In some statistics books, 0.20 is the largest value for p found in the chi-square table. In that case, Step 5 for this example would be written as p > 0.20.

* Discussion

The five-step procedure for general hypothesis testing given here allows students to follow a handy template or procedure for statistical inference tests. This procedure formalizes the approach to problem solving and forces the math and logic involved in such tests to form an organic whole. The five steps stand as a unified entity. The problem is stated, a test statistic is calculated, a conclusion is reached based on a given value for α, and a confidence level is given as the last step (see Step 5 in the Comments section above). The problem and its solution thus stand as a single unit of thought and effort.

DOI: 10.1525/abt.2010.72.3.11

Glover, T. & Mitchell, K. (2006). An Introduction to Biostatistics. Long Grove, IL: Waveland Press.
Hand, D.J. (2008). Statistics: A Very Short Introduction. NY: Oxford University Press.
Mendenhall, W., Wackerly, D.D. & Scheaffer, R.L. (1990). Mathematical Statistics with Applications, 4th Ed. Boston: PWS-Kent.
Portney, L.G. & Watkins, M.P. (2009). Foundations of Clinical Research: Applications to Practice, 3rd Ed. Upper Saddle River, NJ: Prentice Hall.
Samuels, M.L. & Witmer, J.A. (2003). Statistics for the Life Sciences, 3rd Ed. Upper Saddle River, NJ: Prentice Hall.
Schork, M.A. & Remington, R.D. (2000). Statistics with Applications to the Biological and Health Sciences, 3rd Ed. Upper Saddle River, NJ: Prentice Hall.
LAWRENCE F. KAMIN is Professor of Biological Sciences at Benedictine University, 1344 Yorkshire Drive, Carol Stream, IL 60188; e-mail: lkamin@ben.edu.

Phenotypic Class    Observed    Expected    (Obs - Exp)²/Exp
Purple smooth       210         203.06      0.2372
Purple wrinkled     74          67.69       0.5882
Yellow smooth       55          67.69       2.3790
Yellow wrinkled     22          22.56       0.0139
Totals:             361         361.00      3.2183
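As a quick check of the numbers in this table, the same goodness-of-fit computation can be reproduced in a few lines of Python (shown here as an illustration only; the article itself works from a chi-square table rather than software). The hand computation reproduces χ² = 3.2183 with df = 3 and p ≈ 0.36, consistent with Step 5's bracket 0.30 < p < 0.40.

```python
# Minimal check of the worked chi-square example (observed corn-grain counts and a
# 9:3:3:1 expected ratio). scipy is used only to look up the p value.
from scipy import stats

observed = [210, 74, 55, 22]                      # purple smooth, purple wrinkled,
                                                  # yellow smooth, yellow wrinkled
total = sum(observed)                             # 361 grains
expected = [total * r / 16 for r in (9, 3, 3, 1)] # 203.06, 67.69, 67.69, 22.56

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                            # k - 1 = 3
p = stats.chi2.sf(chi_sq, df)                     # upper-tail probability

print(f"chi-square = {chi_sq:.4f}, df = {df}, p = {p:.3f}")
# chi-square = 3.2183, df = 3, p = 0.359 -> consistent with 0.30 < p < 0.40, so the
# null hypothesis (data fit the 9:3:3:1 model) is not rejected at alpha = 0.05.

# The same result in one call:
print(stats.chisquare(observed, f_exp=expected))
```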
{"url":"http://www.biomedsearch.com/article/Using-five-step-procedure-inferential/245037750.html","timestamp":"2014-04-20T16:56:49Z","content_type":null,"content_length":"22463","record_id":"<urn:uuid:39d44914-e6d1-4e87-80d1-213039a0eae5>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Critical coupling in dissipative surface-plasmon resonators with multiple ports

Optics Express, Vol. 18, Issue 25, pp. 25702-25711 (2010)

We theoretically investigate resonant absorption in a multiple-port surface-plasmon polariton (SPP) resonator near the condition of critical coupling, at which internal loss is comparable to radiation coupling. We show that total absorption is obtainable in a multiple-port system by properly configuring multiple coherent lightwaves at the condition of critical coupling. We further derive analytic expressions for the partial absorbance at each port, the total absorbance, and their sum rule, which provide a non-perturbing method to probe coupling characteristics of highly localized optical modes. Rigorous simulation results modeling a surface-plasmon resonance grating in the multiple-order diffraction regime show excellent agreement with the analytic expressions. © 2010 OSA

1. Introduction

Coupling of optical power between waveguides and resonators is of use in many photonic devices and systems [1,4]. On coupling a free-space input wave to a periodic waveguide, a leaky-mode resonance can localize light, thereby enhancing light-matter interaction [6]. Effective confinement of light at a resonance site can yield high absorption if the site possesses dissipative loss [7]. Many means providing strong absorption of light have been suggested utilizing, for example, attenuated total reflection prisms [9], diffraction gratings [11], metallic mesoporous surfaces [12], or metamaterials [13]. These resonance structures can be mapped onto the same category of resonators, namely dissipative open resonators [14].
Substantial light absorption can be induced by the excitation of surface-plasmon polaritons (SPP) on metallic nanostructures. Whereas Wood's anomalies were observed in reflectance spectra of corrugated metal surfaces [10], enormous interest in SPP resonance phenomena has arisen owing to their considerable role in contemporary nanoplasmonics [15]. Although the various nanoplasmonic systems may exhibit diverse embodiments, each possesses features that are inherent in dissipative resonators: a metal-dielectric interface forms a resonator; heat dissipation in the metal represents an internal resonator loss; SPPs trapped at the interface have well-defined eigenfrequencies and eigenmodes. Several analytic approaches treat such resonance systems, including use of the Fresnel reflection formula [16], the multiple interference model [17], scattering-matrix formalisms [7], or temporal coupled-mode theory [18]. A common conclusion is that total absorption of electromagnetic waves, and thus maximum enhancement of localized intensity, occurs only at the condition of critical coupling [1,12,14,19], when the internal loss rate (γ_int) equals the radiation coupling rate (γ_rad). In particular, the critical coupling condition for SPP resonators was derived by analogically considering the excitation of SPP modes as a problem of surface-wave localization on a one-dimensional dissipative oscillator with semitransparent walls [7].
This oscillator model also suggested that the transmission coefficients of the two walls, which play the same role as the coupling coefficients of a resonator with two coupling ports, can be estimated in a phenomenological way by measuring the transformation ratio of incoming waves into evanescent waves. It is formidable, however, to apply this oscillator model to a general SPP system because of its particular resonance geometry with two coupling ports. Therefore, it is of interest to develop an analytic model that derives the critical coupling condition γ_rad = γ_int in a universal way, especially for SPP resonators with multiple coupling ports.

In this paper, we use a multiple-port resonator model grounded in lossless temporal coupled-mode theory [18]. Upon introducing a small internal loss, we find that the model provides a thorough understanding of the resonant absorption phenomena of multiple-port, dissipative SPP resonators. We also present rigorous simulation results that show excellent agreement with our analytic estimation of resonant absorption at a multiple-port, grating-coupled SPP resonator.

2. Temporal coupled-mode theory for a dissipative resonator

Figure 1 schematically illustrates a resonance mode, with amplitude R(t), coupled with pairs of incoming (f_{m+}) and outgoing (f_{m-}) radiation modes. We assume that |R|² and |f_{m±}|² are normalized to represent the energy content of the resonance mode and the power transported by the ports' radiation modes, respectively, and that R(t) dissipates its energy via both internal loss and radiation coupling, with a total damping rate γ_tot = γ_rad + γ_int. The temporal behavior of R(t), resonant at a frequency ω_0, can be generally described by the coupled-mode Eqs. (1), using the vector notation |X_±〉 = (X_{1±}, …, X_{m±}, …, X_{N±}), in the limit that γ_tot/ω_0 << 1 [18]. We assume the modal amplitudes to describe slowly varying envelopes of the time-domain electromagnetic fields [Eqs. (3)-(6)], where {e_m, h_m} and {e_R, h_R} are normalized field solutions of the frequency-domain Maxwell equations for the incoming radiation mode at port m and for the resonance mode, respectively. Thus, {e_m, -h_m} is a field solution for the outgoing radiation mode at port m. (C.C.) represents the complex conjugate of the left-side term. Accordingly, Eq. (1) differs slightly from the formulation used by Refs. [2] and [18] in the factor describing the harmonic time dependence. Energy conservation and time-reversal symmetry in mode coupling enforce fundamental constraints relating the direct scattering matrix C and the mode coupling coefficients |κ_±〉: CC^† = 1, |κ_+〉 = |κ_-〉 ≡ |κ〉, 〈κ|κ〉 = 2γ_rad, and C|κ〉* = -|κ〉, as shown by Fan et al. [18].
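Because the coupled-mode picture is easiest to grasp numerically, here is a small illustrative sketch of the standard single-mode temporal CMT on which Eq. (1) is built (following the Fan et al. form cited above, with an internal decay rate added). The single-port reduction, the sign conventions, and the parameter values are assumptions for illustration; the point is only that the on-resonance absorbance reaches unity when γ_rad = γ_int.

```python
import numpy as np

def absorbance(omega, omega0, gamma_rad, gamma_int):
    """Single-port, single-mode CMT absorbance (illustrative sketch; the C = -1
    direct-reflection convention and power-normalized coupling are assumptions)."""
    # Steady-state resonance amplitude driven by a unit incoming wave at frequency omega.
    R = np.sqrt(2 * gamma_rad) / (1j * (omega0 - omega) + gamma_rad + gamma_int)
    f_out = -1.0 + np.sqrt(2 * gamma_rad) * R          # outgoing = direct + leakage
    return 1.0 - np.abs(f_out) ** 2                    # absorbed fraction of input power

omega0, gamma_rad = 1.0, 1e-3
for gamma_int in (0.5e-3, 1e-3, 2e-3):                 # over-, critically, under-coupled
    eta = gamma_rad / (gamma_rad + gamma_int)
    peak = absorbance(omega0, omega0, gamma_rad, gamma_int)
    print(f"gamma_int/gamma_rad={gamma_int/gamma_rad:.1f}  "
          f"peak A={peak:.3f}  4*eta*(1-eta)={4*eta*(1-eta):.3f}")
# Peak absorbance equals 1.000 only in the critically coupled case (gamma_int = gamma_rad).
```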
For later convenience, we define |κ_m|²/2 ≡ γ_m as a partial radiation coupling rate at port m, and γ_rad = γ_1 + … + γ_m + … + γ_N as the total radiation coupling rate. Although an internal loss is introduced in Eq. (1), the constraints between the coupling coefficients are identical to those of the lossless resonator discussed in [18], as a result of two assumptions: time-reversal symmetry of the coupled-mode Eqs. (1) under a loss-gain interchange (γ_int → -γ_int), and lossless direct scattering, i.e., C is unitary (see Appendix for detailed discussions).

3. Critical coupling, total absorption, and sum rule in partial absorbances

Consider a case of total resonance absorption, when |f_+〉 ≠ 0 and |f_-〉 = 0. In this instance, with the help of the fundamental constraints, the coupled-mode Eqs. (1) reduce to Eqs. (7) and (8) at ω_0. In Eq. (8), it is evident that the outgoing radiation mode is suppressed by destructive interference between the directly scattered part C|f_+(t)〉 and the leakage radiation part R(t)|κ〉 [18]. In the absence of the outgoing radiation mode, Eq. (7) gives a steady-state solution for R(t) only if γ_rad = γ_int (critical coupling); otherwise R(t) is exponentially growing (γ_rad > γ_int) or becomes evanescent (γ_rad < γ_int) with time. By temporally growing R(t) for γ_rad > γ_int, the leakage radiation part R(t)|κ〉 also temporally increases with R(t). Thus, to satisfy Eq. (8) we should grow |f_+(t)〉 at the same rate as R(t), so that the directly scattered part C|f_+(t)〉 cancels the excess of the leakage radiation part. On the other hand, in the case of γ_rad < γ_int, keeping |f_-〉 = 0 necessarily requires an exponential decrease of |f_+(t)〉. Finally, the following results show that the fundamental physics of total resonance absorption in a multi-port system is identical to that of a single-port system [16,17,19]. (i) The incoming radiation mode is strongly confined to the resonance mode due to destructive interference in the outgoing radiation mode. (ii) The destructive interference becomes complete when γ_rad = γ_int, because the amplitudes of the two interfering parts balance so that they exactly cancel each other; otherwise the outgoing radiation mode survives, with excessive leakage radiation (γ_rad > γ_int) or direct scattering (γ_rad < γ_int). (iii) The resonance mode grows until its internal loss dissipates the same power as that coupled from the incoming radiation mode; i.e., the incoming radiation is totally absorbed by the internal dissipation of the resonance mode. In a multi-port system, however, the destructive interference must occur simultaneously at all outgoing ports to achieve total absorption.
In other words, the magnitudes and phases of the incoming radiation modes at all ports should be orchestrated properly. The incoming mode configuration for total absorption is found by solving Eqs. (7) and (8) with the fundamental constraints on the coupling constants. At the critical coupling condition (γ_rad = γ_int), they yield

|f_+〉 = F_0 |κ〉*,   (9)

where F_0 is an arbitrary constant. Note that |κ〉* can be interpreted as the time reversal of the leakage radiation mode, i.e., phase-conjugated leakage radiation, as |κ〉 represents the leakage radiation for unit excitation of the resonance mode [refer to the second term on the right-hand side of Eqs. (2)]. Thus, the outgoing radiation modes at all ports are suppressed simultaneously by destructive interference if the magnitudes and phases of the incoming radiation modes at all ports are given by the time-reversal form of the leakage radiation mode. Finally, we can conclude that total resonance absorption is obtainable for a multiple-port system by having the incoming radiation mode given by Eq. (9) at the critical coupling condition γ_rad = γ_int.

With the incoming radiation mode given by Eq. (9) at an arbitrary frequency ω, we obtain spectral responses of the outgoing radiation mode [Eq. (10)] and of the total absorbance,

A_tot(ω) = 4 γ_rad γ_int / [(ω - ω_0)² + γ_tot²],   (11)

respectively. An important observation from these solutions is that the outgoing radiation mode can be expressed as a scalar multiple of the time-reversed incoming radiation mode [Eq. (12)], with θ = arg(F_0). Note in Eq. (12) that the set of operations ω → -ω together with complex conjugation of a modal amplitude produces its exact time reversal (see Appendix for details). Thus, |f_+〉* on the right-hand side of Eq. (12) represents the time reversal of the incoming radiation mode. This means that scattering from this particular configuration of incoming radiation modes into all available ports acts like reflection in a single-port resonance system, with its reflection coefficient given by Eq. (13). Thus the total absorbance in Eq. (11) is identical to the absorbance obtained for a single-port system by previous theories such as the Kretschmann theory [16], the multiple interference model [17], and quantum Hamiltonian mapping [12].

In many practical cases, however, the incoming radiation mode is not given by the phase-conjugated leakage radiation stated in Eq. (9) but is instead incident at one particular port, producing outgoing radiation modes at all available ports. With a single-port incidence, the interferences between the direct-scattering and leakage-radiation parts at all ports cannot be identically destructive, and thus the incoming energy cannot be totally absorbed even at the critical coupling condition. The partial absorbance in this case can be derived by considering the excitation strength of the resonance mode. As resonance absorption is described by the internal decay of the resonance mode, the absorbed power must be P_abs = 2γ_int|R|², and the absorbance is the ratio of P_abs to the incident power.
For a single incoming radiation mode at port m with amplitude f_{m+} and frequency ω, the partial absorbance is

A_m(ω) = 4 γ_int γ_m / [(ω - ω_0)² + γ_tot²].   (14)

It is worth noting that Eqs. (11) and (14) allow an absorbance sum rule,

Σ_m A_m(ω_0) = 4 η_rad (1 - η_rad),   (15)

where the radiative decay probability η_rad is the ratio γ_rad/γ_tot. Note again that the accumulated peak absorbance is unity only when η_rad = 0.5, that is, γ_rad = γ_int. We may therefore conclude that the critical coupling condition is a universal constraint for achieving total absorption at a dissipative open resonator and is not limited to a specific geometry or number of coupling ports.

Another noteworthy result gathered from the sum rule in Eq. (15) relates to light-emitting applications [20]. By externally measuring the peak absorbance at every single port, we can determine the relative coupling rates (or relative magnitudes of the coupling coefficients) of all ports through the relations A_m(ω_0) ∝ |κ_m|² and Σ_m A_m(ω_0) = 4 η_rad (1 - η_rad). Therefore, external measurement of absorption peaks is a practical way to investigate the details of coupling at each port, and thus of multiple-port plasmonic resonators.

Based on partial absorbance measurements, one can easily find the coherent configuration of multiple incoming radiation modes for total absorption, i.e., Eq. (9). We refer to this particular form of the incoming radiation mode as |u〉 ≡ U^{1/2}|κ〉*, where U is a normalization factor. First, its relative power at port m, |u_m|², is simply given by the partial absorbance A_m(ω_0), since |u_m|² = γ_m/γ_rad = [γ_tot²/(4 γ_int γ_rad)] A_m(ω_0). If we normalize |u〉 so that 〈u|u〉 = 1, then U = (2γ_rad)^{-1}. Second, the phase arg(u_m) can be found by consecutive maximization of the accumulated absorption. Suppose that two radiation modes are incoming at ports 1 and 2, with their powers properly chosen according to Eq. (16) but their relative phase unknown. In this case, the incoming mode amplitudes at the two ports can be written as f_{1+} = ψ_1 exp(iφ_1) and f_{2+} = ψ_2 exp(iφ_2), where φ_1 and φ_2 are arbitrary initial phases at port 1 and port 2, respectively. The absorbance in this case of double-port incidence is given by Eq. (17). This absorbance is phase sensitive and is maximized to A_1 + A_2 at the phase-matching condition on φ_1 - φ_2, as a result of maximizing the resonance-mode amplitude. Thus, with port 1 as a reference port, consecutive maximizations of the absorption over all remaining ports finally yield an incoming mode configuration |f_+〉 = exp(iφ_1)|u〉, whose magnitudes and phases at all ports are orchestrated so that the outgoing modes at all ports are suppressed simultaneously by destructive interference while the resonance mode is maximally excited. It is worth noting in Eq. (17) that if the two incoming radiation modes are mutually incoherent, the term sin[…] is time-averaged to a value of 1/2 and A^(2) = (A_1² + A_2²)/(A_1 + A_2), which is always less than the A_1 + A_2 of the coherent case.

4. Phasor representation of absorption response

As discussed in Sec. 3, scattering in a multi-port system reduces to a simple reflection response when the incoming radiation modes at all ports are coherent and are given by a scalar multiple of the phase-conjugated leakage radiation mode. The total reflection coefficient in Eq. (13) can be rewritten as

ρ_tot(ω) = η_rad [1 + exp(i2ψ(ω))] - 1,   (18)

where ψ(ω) = tan^{-1}[(ω - ω_0)/γ_tot].
By representing ρ_tot(ω) on a complex plane, we can intuitively explain the resonance behavior of a multiple-port, dissipative resonator in the vicinity of critical coupling. Figure 2(a) shows a phasor representation of ρ_tot(ω) as the vectorial superposition of the leakage radiation amplitude (black arrow) and the directly scattered amplitude -1 (red arrow). The first term on the right-hand side of Eq. (18), η_rad[1 + exp(i2ψ)], traces a circle of radius η_rad, as shown explicitly by the factor exp(i2ψ). On increasing ω from the lower to the upper far-off-resonance limit, this term rotates counterclockwise from the lower limit (2ψ = -π) to the upper limit (2ψ = +π), passing through the value 2η_rad at ω_0 (the blue dot at 2ψ = 0), while the directly scattered amplitude (red arrow) remains constant. ρ_tot(ω) is then represented by a blue arrow pointing to the position 2ψ on the blue circle that crosses the real axis at ω_0. Therefore, the absorption spectrum defined by A_tot(ω) = 1 - |ρ_tot(ω)|² [Eq. (11)] can be obtained geometrically as the square of the blue segment of length A_tot^{1/2}, which is stretched perpendicularly from ρ_tot(ω) to the rim of the outer unit circle.

Figure 2(b) schematically shows the spectral traces of ρ_tot(ω) for three typical cases: under coupling (red circle, γ_int > γ_rad), critical coupling (green circle, γ_int = γ_rad), and over coupling (blue circle, γ_int < γ_rad). The grey unit circle represents the lossless case (γ_int = 0). The corresponding absorbance and reflection-phase spectra shown in Figs. 2(c) and 2(d) intuitively explain all the essential features of the Brewster absorption phenomenon [21,22]. The absorption peak is unity only at critical coupling, because only the green circle can cross the origin at resonance. The phase spectra in Fig. 2(d) also show quite different resonance behaviors for the three cases. An abrupt π-phase jump occurs at the resonance frequency when the system satisfies the critical coupling condition. In the case of under coupling, the phase behavior shows a peak/dip profile with a phase difference Δβ_max. It is easy to show geometrically that sin(Δβ_max/2) = η_rad/(1 - η_rad), providing an alternative way to estimate η_rad.

5. Consistency with rigorous simulation

To numerically confirm the critical coupling condition for total absorption and the absorbance sum rule in a multiple-port SPP resonator, we present in Fig. 3 an example of grating-induced SPP resonance. Figure 3(a) shows a single SPP mode propagating along the +x direction on an Ag surface (the x-y plane) corrugated by a periodic array of 25-nm-deep and 175-nm-wide (FWHM) Gaussian grooves with a period (Λ) of 700 nm. The SPP mode is coupled with multiple ports; for example, it is coupled with two ports, Port 1 and Port 2, carrying two incoming modes, f_{1+} and f_{2+}, and two outgoing modes, f_{1-} and f_{2-}, as depicted in Fig. 3(a). Each of the incoming modes excites the SPP mode via diffraction under the phase-matching condition that the in-plane wavevector of the incoming mode plus m grating vectors (2π/Λ) equals k_SPP, where m denotes the diffraction order. The incoming mode incident at the corresponding angle excites the SPP mode via +m-order diffraction, and its zeroth-order reflection corresponds to f_{m-}. The excited SPP mode also loses its energy toward f_{1-} and f_{2-} as leakage radiation modes.
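The analytic quantities that the simulation is compared against (the peak partial absorbances and their sum rule from Section 3) are straightforward to evaluate directly. The sketch below does so for an arbitrary three-port example; the rates are illustrative assumptions, not the Ag-grating parameters of Fig. 3, and the expressions used are the reconstructed Eqs. (14) and (15) quoted above.

```python
# Illustrative check of the partial-absorbance relation and the sum rule for a
# multi-port dissipative resonator (three ports with arbitrarily chosen rates).
import numpy as np

gamma_m = np.array([0.4e-3, 0.3e-3, 0.3e-3])     # partial radiation rates, ports 1..3 (assumed)
gamma_rad = gamma_m.sum()

for gamma_int in (0.5e-3, 1.0e-3, 2.0e-3):       # over-, critically, under-coupled
    gamma_tot = gamma_rad + gamma_int
    eta = gamma_rad / gamma_tot                   # radiative decay probability
    A_m = 4 * gamma_int * gamma_m / gamma_tot**2  # peak partial absorbance at each port
    print(f"gamma_int/gamma_rad = {gamma_int/gamma_rad:.1f}: "
          f"sum A_m = {A_m.sum():.3f}, 4*eta*(1-eta) = {4*eta*(1-eta):.3f}")
# The two printed numbers coincide in every case, and both equal 1.000 only when
# gamma_int = gamma_rad (critical coupling), as the sum rule states.
```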
The numerical calculation used here is a coordinate-transformation algorithm known as the Chandezon method [23]. Ag is modeled as a Drude metal whose plasma and collision (Γ) frequencies are 6.35 × 2πc and 0.125 × 2πc (c is the speed of light in vacuum) at room temperature (300 K), respectively. The dark bands in the R_0 spectrum in Fig. 3(b) clearly show a periodic series of SPP dispersion curves. At the three wavelengths λ_0 = 825 nm, 580 nm, and 440 nm, marked by horizontal dotted lines, the grating configuration corresponds to an SPP resonator with N = 1, 2, and 3 coupling ports, respectively. The two-port case depicted in Fig. 3(a) thus corresponds to λ_0 = 580 nm, and the circular dots on the dispersion curves mark the two pairs of incoming and outgoing plane waves forming the two coupled ports.

Dependences of A_tot(ω_0) and A_m(ω_0) on η_rad are shown in Figs. 3(c)-3(e). The numerical results are represented by the red, blue, green, and black squares for A_1, A_2, A_3, and A_tot, respectively. In the numerical calculation, we first find a scattering matrix S describing the system response such that |f_-〉 = S|f_+〉, and then the absorbance at each port is obtained from the relation A_m = 1 - Σ_n |S_nm|². To obtain six different values of η_rad (from left to right, shown by the square symbols in all three plots), the collision frequency Γ of Ag is varied over 10%, 20%, 40%, 60%, 80%, and 100% of its room-temperature value Γ_0. In the low-collision-frequency limit, γ_int is linearly proportional to Γ while γ_rad remains constant; therefore, the exact values of η_rad are obtained by a linear extrapolation of the absorption bandwidth to Γ = 0. The analytical results obtained from Eqs. (14) and (15) are also plotted as solid curves [25]. Note that the difference between the square symbols and the analytic curves over the whole η_rad range is less than 0.02; this excellent agreement strongly supports our analytic theory. We also confirmed that det(S) = 0, which means A_tot = 1, at Γ/Γ_0 = 0.4837 and 0.4396 for λ_0 = 580 and 440 nm, respectively, for which the η_rad values are exactly 0.5. Consequently, we can say that the results in Figs. 3(c)-3(e) confirm the universality of the critical coupling condition for total absorption at a dissipative open resonator with multiple coupling ports.

6. Conclusion

We have explored the resonance behavior of multiple-port, dissipative surface-plasmon resonators near the critical coupling condition. Generalizing the temporal coupled-mode theory of leaky-mode resonators by incorporating a small internal loss provides an analytic expression for the absorption spectra that explicitly reveals the universality of the matching condition, not limited to specific resonance geometries and numbers of coupling ports.
The phasor representation of this expression intuitively explains all the main features of the complex resonance behavior in the vicinity of critical coupling. Our model depends only on the internal loss and radiation coupling rates; therefore, we can investigate the internal characteristics of resonators in depth by externally measuring the peak absorbance at each coupling port. In particular, the sum rule for the peak absorbance will be a practical guideline for estimating the light extraction efficiency of nanoplasmonic light emitters [20], because efficient emission of light mediated by SPP resonance is analogous to the time reversal of strong light absorption in a dissipative leaky-mode resonator.

Appendix: The fundamental constraints on the coupling constants in a dissipative system

The loss-gain interchange requirement for time reversibility corresponds to the conjugation invariance (C-invariance) of linear electromagnetism, which has often been quoted to explain the propagation characteristics of electromagnetic waves in negative-index metamaterials. The discussion concerning field characteristics in a negative-index metamaterial is based on the invariance of the frequency-domain Maxwell equations under the set of operations {E, H} → {E*, H*} and {ε, μ} → {−ε*, −μ*} [26]. The C-invariance has another interpretation as the time reversal of fields, since the frequency-domain Maxwell equations are also invariant under the set of operations {E, H} → {E*, −H*} and {ε, μ} → {ε*, μ*}. Therefore, if {E, H} is a solution of the frequency-domain Maxwell equations in a system with {ε, μ}, then {E*, −H*} must be a solution for the conjugated system with {ε*, μ*}. The physical interpretation of this fundamental property is that the field-scattering process in a dissipative system is reversible when the absorbing process is reversed to a gain process in the time-reversal operation. The operation {E, H} → {E*, −H*} reverses the phase and group velocities of field propagation, while the operation {ε, μ} → {ε*, μ*} turns a material loss into a gain by changing the signs of Im(ε) and Im(μ). Note that the loss-gain interchange corresponds to the time reversal of the absorbing process, which is physically forbidden by the second law of thermodynamics: it is inevitable for the absorbed energy to be thermally redistributed into other quanta in an irreversible way at the macroscopic level. In other words, the dissipative system is time irreversible, but the electromagnetic scattering process itself is reversible under the allowance of the loss-gain interchange, and thus the scattering coefficients in a dissipative system are subject to the time-reversal symmetry provided by the C-invariance of linear electromagnetism. Direct scattering in an absorbing system is not lossless in general; i.e., C is not unitary. However, in most cases anomalously strong absorption due to a resonance arises in a system that exhibits negligible absorption off resonance. Thus, for SPP resonance structures consisting of noble metals, it is acceptable to assume lossless direct scattering in Eq. (2).
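A compact check of the conjugation-invariance statement used above (added here; it assumes the exp(−iωt) time convention, which may differ from the paper's): starting from ∇×E = iωμH and ∇×H = −iωεE, complex conjugation gives ∇×E* = −iωμ*H* and ∇×H* = iωε*E*, so the pair {E′, H′} = {E*, −H*} satisfies ∇×E′ = iωμ*H′ and ∇×H′ = −iωε*E′, i.e., the original equations with {ε, μ} → {ε*, μ*}. A lossy medium (Im ε > 0 in this convention) is thereby mapped to the corresponding gain medium, which is exactly the loss-gain interchange invoked in this appendix.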
Based on the above two arguments, namely time-reversal symmetry in a dissipative system and the assumption of lossless direct scattering, we obtain the fundamental constraints on |κ_±⟩ and C. The time reversal of fields is given by the operation {E(t), H(t)} → {E(−t), −H(−t)} in the time domain. Thus, in a time-reversed situation, Eqs. (A1) and (A2) hold for the ports' radiation modes and Eqs. (A3) and (A4) for the resonance mode. By comparing Eqs. (A1)–(A4) with Eqs. (3)–(6), the time reversal of the fields corresponds to the transformation of the modal amplitudes given in Eq. (A5). With the additional operation γ_int → −γ_int as a loss-gain interchange, the coupled-mode Eqs. (1) transform into Eqs. (A6) in the time-reversed situation. Requiring Eqs. (A6) to be identical to Eqs. (1) yields the fundamental constraints CC† = I, |κ_+⟩ = |κ_−⟩ ≡ |κ⟩, ⟨κ|κ⟩ = 2γ_rad, and C|κ⟩* = −|κ⟩, which are identical to those for a lossless resonance system.

Acknowledgments

This work was supported by the National Research Foundation of Korea grant funded by the Korea Government (MEST) [2010-0000256] and the IT R&D program [2008-F-022-01] of the MKE/IITA, Korea.

References and links

1. A. Yariv, "Universal relations for coupling of optical power between microresonators and dielectric waveguides," Electron. Lett. 36(4), 321–322 (2000).
2. H. A. Haus, Waves and Fields in Optoelectronics (Prentice-Hall, Englewood Cliffs, NJ, 1984).
3. C. Manolatou, M. J. Khan, S. Fan, P. R. Villeneuve, H. A. Haus, and J. D. Joannopoulos, "Coupling of modes analysis of resonant channel add-drop filters," IEEE J. Quantum Electron. 35(9), 1322–1331 (1999).
4. Y. Xu, Y. Li, R. K. Lee, and A. Yariv, "Scattering-theory analysis of waveguide-resonator coupling," Phys. Rev. E 62, 7389–7404 (2000).
5. Y. Ding and R. Magnusson, "Use of nondegenerate resonant leaky modes to fashion diverse optical spectra," Opt. Express 12(9), 1885–1891 (2004), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-12-9-1885.
6. K. J. Lee, R. LaComb, B. Britton, M. Shokooh-Saremi, H. Silva, E. Donkor, Y. Ding, and R. Magnusson, "Silicon-layer guided-mode resonance polarizer with 40-nm bandwidth," IEEE Photon. Technol. Lett. 20(22), 1857–1859 (2008).
7. K. Yu. Bliokh, Y. P. Bliokh, V. Freilikher, A. Z. Genack, B. Hu, and P. Sebbah, "Localized modes in open one-dimensional dissipative random systems," Phys. Rev. Lett. 97(24), 243904 (2006).
8. E. Kretchmann and H. Reather, "Radiative decay of non-radiative surface plasmons excited by light," Z. Naturforsch. A 23, 2135–2136 (1968).
9. A. Otto, "Excitation of nonradiative surface plasma waves in silver by the method of frustrated total reflection," Z. Phys. 216(4), 398–410 (1968).
10. R. W. Wood, "On the remarkable case of uneven distribution of light in a diffracted grating spectrum," Philos. Mag. 4, 396–402 (1902).
11. J. Le Perchec, P. Quémerais, A. Barbara, and T. López-Ríos, "Why metallic surfaces with grooves a few nanometers deep and wide may strongly absorb visible light," Phys. Rev. Lett. 100(6), 066408 (2008).
12. T. V. Teperik, F. J. García de Abajo, A. G. Borisov, M. Abdelsalam, P. N. Bartlet, Y. Sugawara, and J. J. Baumberg, "Omnidirectional absorption in nanostructured metal surfaces," Nat. Photonics 2(5), 299–301 (2008).
13. N. I. Landy, S. Sajuyigbe, J. J. Mock, D. R. Smith, and W. J. Padilla, "Perfect metamaterial absorber," Phys. Rev. Lett. 100(20), 207402 (2008).
14. K. Yu. Bliokh, Yu. P. Bliokh, V. Freilikher, S. Savel'ev, and F. Nori, "Colloquium: Unusual resonators: Plasmonics, metamaterials, and random media," Rev. Mod. Phys. 80(4), 1201–1213 (2008).
15. A. V. Zayats, I. I. Smolyaninov, and A. A. Maradudin, "Nano-optics of surface plasmon polaritons," Phys. Rep. 408(3-4), 131–314 (2005).
16. K. Kurihara and K. Suzuki, "Theoretical understanding of an absorption-based surface plasmon resonance sensor based on Kretchmann's theory," Anal. Chem. 74(3), 696–701 (2002).
17. A. Sharon, S. Glasberg, D. Rosenblatt, and A. A. Friesem, "Metal-based resonant grating waveguide structures," J. Opt. Soc. Am. A 14(3), 588–595 (1997).
18. S. Fan, W. Suh, and J. D. Joannopoulos, "Temporal coupled-mode theory for the Fano resonance in optical resonators," J. Opt. Soc. Am. A 20(3), 569–572 (2003).
19. Y. P. Bliokh, J. Felsteiner, and Y. Z. Slutsker, "Total absorption of an electromagnetic wave by an overdense plasma," Phys. Rev. Lett. 95(16), 165003 (2005).
20. J. A. Schuller, E. S. Barnard, W. Cai, Y. C. Jun, J. S. White, and M. L. Brongersma, "Plasmonics for extreme light concentration and manipulation," Nat. Mater. 9(3), 193–204 (2010).
21. M. C. Hutley and D. Maystre, "The total absorption of light by a diffraction grating," Opt. Commun. 19(3), 431–436 (1976).
22. R. A. Depine, V. L. Brudny, and J. M. Simon, "Phase behavior near total absorption by a metallic grating," Opt. Lett. 12(3), 143–145 (1987).
23. J. Chandezon, M. T. Dupuis, G. Cornet, and D. Maystre, "Multicoated gratings: a differential formalism applicable in the entire optical region," J. Opt. Soc. Am. 72(7), 839–846 (1982).
24. E. D. Palik, Handbook of Optical Constants of Solids II (Academic Press, San Diego, 1998).
25. J. Yoon, S. H. Song, and J.-H. Kim, "Extraction efficiency of highly confined surface plasmon-polaritons to far-field radiation: an upper limit," Opt. Express 16, 1269 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-2-1269.
26. A. Lakhtakia, "Conjugation symmetry in linear electromagnetism in extension of materials with negative real permittivity and permeability scalars," Microw. Opt. Technol. Lett. 40(2), 160–161 (2004).
27. J. Yoon, S. H. Song, C. H. Oh, and P. S. Kim, "Backpropagating modes of surface polaritons on a cross-negative interface," Opt. Express 13(2), 417–427 (2005), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-13-2-417.
{"url":"http://www.opticsinfobase.org/oe/fulltext.cfm?uri=oe-18-25-25702&id=208370","timestamp":"2014-04-20T23:36:30Z","content_type":null,"content_length":"325891","record_id":"<urn:uuid:b0807c69-5ab0-4c1b-8bc4-00298d09aff8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there a fusion rule in positive characteristic?

Verlinde's fusion gives a certain "tensor product" of representations of loop groups. The category of representations of loop groups has (essentially equivalent) two incarnations. One is analytic, based on $LG=\operatorname{Map}(S^1,G)$, and one algebraic, based on $L\mathfrak g=\mathfrak g[t,t^{-1}]$. The latter of these makes sense in positive characteristic. In both cases, one constructs the "fusion" of two positive energy representations of the loop group via holomorphic induction on a thrice punctured sphere (in the analytic model, this is a disc in $\mathbb C$ minus two interior discs, and in the algebraic model, this is $\mathbb P^1(\mathbb C)$ minus three points). Can one define a fusion product like this in positive characteristic? I have done some searches online, but haven't even managed to figure out whether there is a reasonable category (corresponding to the category of representations of positive energy in characteristic zero) of representations of positive characteristic loop groups where we could expect such a fusion product to exist.

Tags: conformal-field-theory, loop-groups, rt.representation-theory, qa.quantum-algebra

1 Answer

To answer the question in the header, there is certainly a relevant fusion rule in positive characteristic. This arises in a purely representation-theoretic context in the work of various people (Olivier Mathieu and Henning Andersen in particular). But along the way relationships have to be built among a number of representation categories in order to arrive at a transparent version of fusion rules. I'm not sure about online access, but two useful papers in Comm. Math. Physics from the early 1990s are:

H.H. Andersen, "Tensor products of quantized tilting modules" (1992)
H.H. Andersen and J. Paradowski, "Fusion categories arising from semisimple Lie algebras" (1995)

These papers arise indirectly from the influential Verlinde paper in Nuclear Physics B. Loop algebras or affine Lie algebras have representation theory in negative levels shown by Kazhdan and Lusztig to share many features with the theory for quantum groups at a root of unity based on the same type of Lie algebra. (In turn, this quantum group theory transfers in a subtle way to modular Lie algebra settings in prime characteristic.) An essential shared ingredient is the organizing role of an affine Weyl group.

In order to get a rigorous mathematical framework for "truncated" tensor products appearing in the fusion rules of Verlinde, use is made of "tilting" modules, which include for small highest weights the classical-looking "Weyl modules" whose tensor products are reasonably well understood. But the tilting modules are usually more complicated, having finite filtrations with quotients which are Weyl modules (and similar filtrations involving dual Weyl modules). A key fact is that the category of tilting modules is closed under tensoring. This is the refined setting in which it makes sense to tensor, break up into a direct sum of indecomposables, and then truncate, allowing only a finite number of indecomposable objects to survive: others have "quantum dimension zero" and disappear from the picture. Technically it gets fairly complicated, but the underlying ideas are transparent from the viewpoint of representation theory.
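For concreteness, here is the kind of truncated tensor product being described, in the best-known special case (this worked example is an addition, not part of the answer): for $\widehat{\mathfrak{sl}}_2$ at level $k$, with the level-$k$ integrable highest weights labelled by $a \in \{0,1,\dots,k\}$, the Verlinde fusion rule reads
$$a \otimes b \;=\; \bigoplus_{\substack{c=|a-b| \\ c \,\equiv\, a+b \ (\mathrm{mod}\ 2)}}^{\min(a+b,\;2k-a-b)} c,$$
which is the ordinary $\mathfrak{sl}_2$ Clebsch-Gordan decomposition with the summands beyond the level-$k$ wall discarded; in the tilting-module picture the discarded indecomposable summands are exactly those of quantum dimension zero.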
The "tilting" modules themselves continue to be studied, but roughly speaking their formal characters are predicted by Kazhdan-Lusztig theory in the settings I indicated. (The unsolved problems are mostly in prime characteristic, where a lot is known but not everything.) After the progress made in the early 1990s there were only a few attempts at surveys of the work for mathematicians, including one by Andersen, Tilting modules for algebraic groups (1995). In any case there is substantial literature out there, which I can comment further on if it's useful.
{"url":"http://mathoverflow.net/questions/87485/is-there-a-fusion-rule-in-positive-characteristic","timestamp":"2014-04-19T17:39:18Z","content_type":null,"content_length":"53987","record_id":"<urn:uuid:e69c12e4-3f19-4326-b812-76f7e7516fbf>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
convert 0.2 to percent notation? would that be 20%
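A quick worked check (added here; not part of the original thread): converting a decimal to percent notation just multiplies by 100%, so
$$0.2 = 0.2 \times 100\% = 20\%,$$
confirming the poster's answer.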
{"url":"http://openstudy.com/updates/50fc75e8e4b010aceb334aa3","timestamp":"2014-04-20T08:31:16Z","content_type":null,"content_length":"37032","record_id":"<urn:uuid:dee53a08-981a-41e1-971a-95bc507c2449>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
exponential growth help

Question: Rabbit populations are considered to grow exponentially. A farmer buys 8 rabbits and finds that within 6 months he now has 24 rabbits. How many more months must the farmer wait until he has 216 rabbits?

Reply: Find k. $24=8e^{-k(6)}$. Solve for k, then use it in $216=8e^{-kt}$ and solve for t.
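A worked continuation of the hint (added here; not part of the original thread), written with the usual growth form $P(t)=8e^{kt}$, which is equivalent to the reply's $8e^{-kt}$ with $k$ of the opposite sign:
$$24=8e^{6k}\;\Rightarrow\; e^{6k}=3\;\Rightarrow\; k=\tfrac{\ln 3}{6},\qquad 216=8e^{kt}\;\Rightarrow\; e^{kt}=27=3^{3}\;\Rightarrow\; kt=3\ln 3\;\Rightarrow\; t=18\ \text{months}.$$
Since the farmer is already 6 months in when he counts 24 rabbits, he must wait $18-6=12$ more months. (Equivalently: the population triples every 6 months, and $216/24=9=3^{2}$, so two more tripling periods are needed.)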
{"url":"http://mathhelpforum.com/calculus/16898-exponential-growth-help.html","timestamp":"2014-04-20T09:41:56Z","content_type":null,"content_length":"32170","record_id":"<urn:uuid:1fd89ee2-e6a6-4a33-8c1e-b3413106d43a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
tmoertel - Google challenge task and computing the digits of e

Now, one can find the digits of e online, or one can ask Mathematica to spit them out, but that wouldn't be as fun as computing them. I had recalled so-called "spigot algorithms" for computing π digit by digit, and I thought now would be a good time to learn more about the subject and apply it to the digits of e. A quick Google search led to Jeremy Gibbons's "An Unbounded Spigot Algorithm for the Digits of π" paper. I found that the explanation in the paper jumped around, so I'll explain it in a different way.

The idea is that you can take, e.g., a series representation of π and map each portion of that series to a function that transforms an approximation of π into a slightly better approximation. For example, given a series representation like the following (written using Horner's rule):

π = 2 + 1/3 * (2 + 2/5 * (2 + 3/7 * ( ... )))

You can then take each term like (2 + 1/3 * ...) and create a function, say, f₁ that represents it:

f₁ x = 2 + 1/3 x

and continue for the remaining terms in the infinite series:

f₂ x = 2 + 2/5 x
f₃ x = 2 + 3/7 x
f₄ x = 2 + 4/9 x
f₅ x = 2 + 5/11 x

and so on. Then, you can compute π by composing the functions:

π = f₁ . f₂ . f₃ . f₄ ...

Now, here's the cool part. With a little analysis (or even insight) we can determine that any tail of the above expression is going to represent a value in the range [3,4]. I.e., if we drop the first four terms, the resulting expression f₅ . f₆ . f₇ ... is going to represent a value somewhere between 3 and 4. If we drop the first million terms, the value is still going to be in [3,4]. Now why is this important? Because it lets us drop all of the terms after some point and replace them with a 3 or 4 to yield a lower or higher bound on the full computation, and as those bounds converge we can extract digits – as many as we desire.

For example, let's drop all of the terms after f₁ and replace them with 3 and 4 to yield lower and upper bounds.

f₁ 3 = 2 + 1/3 * 3 = 3
f₁ 4 = 2 + 1/3 * 4 = 3.333333

So, with just a single term, we know that the full, infinite computation of π must yield a result within [3, 3.333333]. In other words, we know that the first digit of π must be 3! Let's add another term and see what additional precision we can obtain:

(f₁ . f₂) 3 = 3.0666666
(f₁ . f₂) 4 = 3.2

The tenths digit still differs, but we know it must be a 0, 1, or 2. Let's add a third term and see if we can resolve it:

(f₁ . f₂ . f₃) 3 = 3.10476
(f₁ . f₂ . f₃) 4 = 3.16190

Great! Now we have computed π to two places: 3.1. That's the basic idea. We can compute successive digits by adding additional terms to the chain of functions until the digits are revealed. The rest is mostly optimization.

One optimization is that we can represent each of the functions by a 2x2 matrix such that composition becomes matrix multiplication. This lets us maintain a single matrix that represents the chain so far. We can compose additional terms by simply multiplying the chain's matrix by the new terms' matrices. Further, we can "shift out" digits as they are revealed to keep the digit of interest in the ones position. That way, we need only test the integer part of the upper and lower bounds for equality in order to determine whether the next digit is revealed. When it becomes revealed, we subtract it out and multiply the remaining fractional part by ten to bring the next digit of interest into the ones position. Conveniently, another matrix multiplication is all we need.
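Here is a rough Python transcription of the scheme just described (a sketch of Gibbons's streaming idea, not the author's Haskell code): the running composition is kept as the linear fractional map x ↦ (q·x + r)/t, each term f_i(x) = 2 + i/(2i+1)·x is folded in by a matrix product, and a digit is emitted whenever the bounds obtained at x = 3 and x = 4 agree.

```python
def pi_digits(n):
    """Yield n decimal digits of pi by composing the terms
    f_i(x) = 2 + i/(2i+1)*x and squeezing the tail between 3 and 4."""
    q, r, t = 1, 0, 1   # current composition, as x -> (q*x + r) / t
    i = 1               # index of the next unused term
    produced = 0
    while produced < n:
        lo = (3 * q + r) // t   # lower bound: tail replaced by 3
        hi = (4 * q + r) // t   # upper bound: tail replaced by 4
        if lo == hi:
            yield lo
            produced += 1
            # shift the digit out: x -> 10 * (x - digit)
            q, r = 10 * q, 10 * (r - lo * t)
        else:
            # absorb the next term f_i: multiply by [[i, 2*(2i+1)], [0, 2i+1]]
            q, r, t = q * i, q * 2 * (2 * i + 1) + r * (2 * i + 1), t * (2 * i + 1)
            i += 1

print("".join(str(d) for d in pi_digits(10)))   # -> 3141592653
```

Because everything stays in exact integer arithmetic, the emitted digits are exact; the same machinery adapts to the e series discussed next by swapping in the terms 1 + x/(i+1) and the bounds 1 and 2.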
Now, to turn this method to computing e, I used Google to find Sebah and Gourdon's "The constant e and its computation" and the following series representation:

e = 2 + 1/2 * (1 + 1/3 * (1 + 1/4 * (1 + 1/5 * ( ... ))))

I then observed that, with the exception of the first term, each term takes values in the range [1,2] back into a subrange thereof. And we can make the first term follow this rule, too, by computing e – 1 instead of e:

e – 1 = 1 + 1/2 * (1 + 1/3 * (1 + 1/4 * (1 + 1/5 * ( ... ))))

With this representation in hand, it was easy to use Gibbons's methods to compute e. I wrote my implementation in my favorite programming language, Haskell, which (I'm pleased to note) Gibbons also used for his implementation. All in all, reading Gibbons's paper and implementing the e spigot was the most fun part of solving the Google challenge task (and the follow-up task). Thank you for reading, and a big thanks to Google for providing an entertaining problem to solve.

Comments:

The distribution of Hugs includes a program to print digits of e that uses quite a different method.

Take a look at the list of my HuSi diaries. At the end of the summary for this very diary entry, you'll note a curious thing: a little lost arrow. The challenge: explain how it got there. If you've got the mad skillz to explain it, post your explanations as responses to this comment.

I know what's up. Mozilla Firefox can't parse HTML. Konqueror doesn't show any lost arrows. Firefox does. If you look at the source for that page, you'll see that Firefox mis-highlights syntax near that place. I conclude that it mis-parses as well.

But what throws the parser off track? Or, more specifically, how is the HTML broken?

I'm no HTMLanguage lawyer. But a double dash in the title must be responsible.

Mozilla parses it right after all.

No wonder everybody hates SGML.

here (note: I posted this 11 days ago. this is why i'm the ceo of microsoft, and you're no one in particular.)
{"url":"http://www.hulver.com/scoop/story/2004/7/22/153549/352","timestamp":"2014-04-17T18:38:04Z","content_type":null,"content_length":"28413","record_id":"<urn:uuid:8d82952b-1c44-4aac-88d7-dc24fd13eb23>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
Continuing this series (earlier posts here, here and here) on the paper of Weinberger and Yu, I'm expecting to make two more posts: this one, which will say something about the class of groups for which they can prove their Finite Part Conjecture, and one more, which will say something about what can be done with the conjecture once one knows it. Continue reading

"Finite part of operator K-theory" III – repeat

(I posted this yesterday but it seems to have vanished into the ether – I am trying again.) This series of posts addresses the preprint "Finite Part of Operator K-theory for Groups Finitely Embeddable into Hilbert Space and the Degree of Non-rigidity of Manifolds" (ArXiv e-print 1308.4744, http://arxiv.org/abs/1308.4744) by Guoliang Yu and Shmuel Weinberger. In my previous post I gave the description of their main conjecture (let's call it the Finite Part Conjecture) and showed how it would follow from the Baum-Connes conjecture (or, simply, from the statement that the Baum-Connes assembly map was an injection). Continue reading

Rope solo experiment

Well, the weather has been too good not to go climbing, so I headed down to Donation yesterday afternoon. Much to my surprise, I had the whole place to myself. The picture (not very good) shows the rope solo system that I used. This is my first time using the Petzl MiniTraxion for this purpose. (One advantage of having climbed a couple of walls is that one has a lot of interesting gear to play with on an occasion like this.) Steph Davis has a good post on rope solo systems (and the follow-up comments are helpful too). What I did was tie the climbing rope in to the anchor at its midpoint, with a figure-8 knot on a steel biner, so I had two independent anchored strands. One strand went through the mini-traxion, which rode on a full-strength chest harness (also clipped to the sit-harness by a short sling). The other strand went through a Gri-Gri. When climbing, the mini-traxion side was weighted to my gear bag and then fed automatically; the Gri-Gri side I pulled rope through when convenient, and also tied off with a separate backup knot every now and again. To transition to rappel, just open the cam on the mini-traxion and rappel on the Gri-Gri. At first I was pretty nervous trusting the system and moved very slowly and inelegantly. But it soon became clear that it would work fine. Still, I should probably have started with something a bit easier. Things feel different when you are the only one around! If working something steep, one should carry prusiks so as to be able to unweight the mini-trax and transition to rappel when hanging in space. Of course, one could bring ascenders; but that seems to be taking the idea of raiding the aid box to an unnecessary extreme!

Connes Embeddings and von Neumann Regular Closures of Group Algebras

This is an interesting paper of Gabor Elek's which touches on some things I've posted about recently – especially (i) the Atiyah conjecture and (ii) the idea (which shows up in the work of Ara et al) that one can use some kind of "asymptotic rank" instead of "asymptotic trace" in some contexts where you want to build "continuous dimension" type invariants.
{"url":"http://sites.psu.edu/johnroe/tag/exhaustion/","timestamp":"2014-04-19T04:56:10Z","content_type":null,"content_length":"39855","record_id":"<urn:uuid:03f5d3a1-2722-4321-b2ff-1220fdbc219c>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
HELP Vectors - Finding equation of line of intersection between 2 planes

Question: I got a question to do as an assignment. I have been able to do the first part, which is finding the angle between 2 planes, but am not able to find the equation of the line of intersection of the 2 planes. Can anyone check the question and help me out please? Also check if the first part is right... Thanks again...

Reply 1: First part looks OK. Second part: Solve simultaneously:
3x - y + 4z = 3 .... (1)
4x - 2y + 7z = 8 .... (2)
There are an infinite number of solutions. One possible form for these solutions is obtained by letting z = t, say. Then
3x - y = 3 - 4t .... (1')
4x - 2y = 8 - 7t .... (2')
Solve (1') and (2') simultaneously to get x and y in terms of t. This solution is the line of intersection of the two planes (written in parametric form).

Reply 2: 1. Your calculation of the angle is OK.
2. Line of intersection between 2 planes.
a) Check if the normal vectors (3, -1, 4) and (4, -2, 7) are collinear. Since $(3,-1,4) \ne k\cdot (4,-2,7)$ the planes are not parallel and therefore there exists a set of common points forming a line.
b) Calculate x and y in terms of z from the given system of equations:
$\begin{array}{l}3x-y+4z=3 \\ 4x-2y+7z=8\end{array}$
I've got
$\begin{array}{l}-2y+5z=12 \\ 6x+3z=-6\end{array}$
c) Now set z = t, solve the equations for x and y respectively, and substitute t instead of z:
$l:\left\{\begin{array}{l}x=-\dfrac12t-1 \\y=\dfrac52t-6 \\z=t\end{array}\right.$
d) This is the parametric equation of the line:
$\vec r = (-1, -6, 0)+t\left(-\dfrac12\ ,\ \dfrac52\ ,\ 1\right)$
e) I've attached a drawing of the 2 planes and the line of intersection.
EDIT: Since the intersection line belongs to both planes, the direction vector of the line must be perpendicular to both normal vectors:
$(3,-1,4) \times (4,-2,7) = (1,-5,-2) = (-2)\cdot \left(-\dfrac12\ ,\ \dfrac52\ ,\ 1\right)$

Follow-up: What about if I need the equation in cartesian form please?

Reply: Take the equation of the line in parametric form, solve each equation for t, and set the results equal:
$\begin{array}{l}x=-\dfrac12t-1 \\y=\dfrac52t-6 \\z=t\end{array}~~\implies~~\begin{array}{l}-2x-2=t \\\dfrac{2y+12}5=t \\z=t\end{array}$
which will yield:
$-2x-2=\dfrac{2y+12}{5}=z$

Follow-up: I am getting a different answer.. a different direction vector.. Can anyone please check my work and let me know where my mistake is please?...

Reply: There's no mistake. You should have noticed that <1, -5, -2> and <-1/2, 5/2, 1> both have the same direction: <1, -5, -2> = -2 <-1/2, 5/2, 1>. It's up to you which one you use.

Follow-up: Oh yes.... Thank you very much man... I succeeded in completing it... Thanks ya again...
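Since the attached sheet is not reproduced here, the first part of the exercise can also be reconstructed from the planes quoted in the replies (this computation is added for completeness and assumes those are indeed the assigned planes):
$$\cos\theta=\frac{|n_1\cdot n_2|}{|n_1||n_2|}=\frac{|(3)(4)+(-1)(-2)+(4)(7)|}{\sqrt{3^2+(-1)^2+4^2}\,\sqrt{4^2+(-2)^2+7^2}}=\frac{42}{\sqrt{26}\sqrt{69}}\approx 0.992,$$
so the angle between the planes is $\theta\approx 7.4^{\circ}$.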
{"url":"http://mathhelpforum.com/calculus/56741-help-vectors-finding-equation-line-intersection-between-2-planes.html","timestamp":"2014-04-19T10:00:45Z","content_type":null,"content_length":"61773","record_id":"<urn:uuid:40a60685-4fbf-4a1f-a67d-bc00e692aba0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
If S and T are non-zero numbers and $\frac{1}{S} + \frac{1}{T} = S + T$, which of the following must be true?

A. $ST = 1$
B. $S + T = 1$
C. $\frac{1}{S} = T$
D. $\frac{S}{T} = 1$
E. None of the above

SirGMAT: Explanation provided: $\frac{1}{S} + \frac{1}{T} = S+T$ --> $\frac{T+S}{ST} = S+T$ --> cross-multiply: $S+T=(S+T)*ST$ --> $(S+T)(ST-1)=0$. Either $S+T=0$ or $ST=1$. Now, notice that if $S+T=0$ is true then none of the options must be true. The correct answer is E.

I understand the working of the equation; however, what I would have done is intervene at the following step: $S+T = (S+T)*ST$ --> $(S+T)/(S+T)=ST$ --> $ST = 1$. Is there some rule which forbids me from taking this step? Or is the only option to work through the equation as given and realize that "$S+T = 0$" rules out every option other than E?

Vips0000: Yes, this is not a correct way of cancelling. I'll show you one example: $5 \cdot 0 = 3 \cdot 0$. If we cancel 0 on both sides, we get $5=3$. Is that correct? No. The crux (and the rule) is: we can cancel out a term only when we know it's not 0. So the way it is done in the explanation is absolutely correct and the right method.

MacFauz: There was this fun derivation that my math teacher showed us in school, just to demonstrate how cancelling 0 can yield wrong results. He claimed that he could prove that 1 = 2 and hence all numbers are equal. It goes as below:

Let $a=b$.
Multiplying both sides by a, we get $a^2 = ab$.
Subtracting $b^2$ from both sides: $a^2 - b^2 = ab - b^2$.
Factoring: $(a+b)(a-b) = b(a-b)$.
Cancelling $(a-b)$ on both sides: $a+b = b$.
Since $a=b$: $a+a = a$, so $2a = a$, and hence $2=1$.

SirGMAT: Thanks -- I even thought of the zero issue, but mistakenly memorized the prompt as saying that the expression itself could not be 0, though it only says that each number alone is non-zero.

Bunuel (official explanation): $\frac{1}{S} + \frac{1}{T} = S + T$ --> $\frac{T+S}{ST}=S+T$ --> cross-multiply: $S+T=(S+T)*ST$ --> $(S+T)(ST-1)=0$ --> either $S+T=0$ or $ST=1$. So, if $S+T=0$ is true then none of the options must be true. Answer: E.

kartik222: Hi Bunuel, in this step: $s+t = (s+t)st$, can't we just cancel $(s+t)$ and get $st =1$?

Bunuel: Never reduce an equation by a variable (or an expression with a variable) if you are not certain that it doesn't equal zero. We cannot divide by zero. So, if you divide (reduce) $s+t = (s+t)st$ by $(s+t)$, you assume, with no ground for it, that $(s+t)$ does not equal zero and thus exclude a possible solution (notice that both $st=1$ and $s+t=0$ satisfy the equation).

Marcab: Bunuel, I need to ask one thing about your solution. For inequalities, there is a rule that if you don't know the sign of the denominator, then don't cross-multiply. In your solution, how can you be so sure of the sign of ST? Please let me know if I am missing something.

Bunuel: We are concerned with the sign when we cross-multiply an inequality because that operation might flip its sign (> to <, for example), but it is always safe to cross-multiply an equation.

Sachin9: Since either $S+T=0$ or $ST=1$, and the question asks what must be true, the answer is E? Is my reasoning right?

Bunuel: Yes; if for example $s=1$ and $t=-1$ (so $s+t=0$), then none of the options is true (none of the options MUST be true).

Sachin9: My question is that since the equation results in two solutions joined by an OR, and the question asks for what MUST be true, can we say the answer is E on that basis alone? For something to be a must, we cannot have solution 1 OR solution 2; we need a single solution, or solution 1 AND solution 2.

Vips0000: I would say don't generalize this point. You know that because there are two solutions, none of the given options needs to be a MUST. But if you really had an option that said $(S+T)(ST-1)=0$, then that one would have to be true.

Sachin9: I didn't understand this. If $(S+T)(ST-1)=0$, then either $S+T=0$ or $ST-1=0$; we still have an OR here.

Vips0000: If the question were instead "If S and T are non-zero numbers and $\frac{1}{S}+\frac{1}{T}=S+T$, which of the following must be true? A. $ST=1$ B. $S+T=1$ C. $\frac{1}{S}=T$ D. $\frac{S}{T}=1$ E. None of the above F. $(S+T)(ST-1)=0$", then your generalization would go wrong, because answer choice F must hold true.

Sachin9: I get it. But in all other cases, the generalization will hold good, right?

Vips0000: Rules are good; generalizations are not.
{"url":"http://gmatclub.com/forum/if-s-and-t-are-non-zero-numbers-and-141887.html","timestamp":"2014-04-16T16:35:35Z","content_type":null,"content_length":"291772","record_id":"<urn:uuid:2ba9f337-f595-449e-b0d9-1c58c1075110>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: Limits question [Challenge]
{"url":"http://openstudy.com/updates/519e3b8be4b04449b2214326","timestamp":"2014-04-19T07:19:54Z","content_type":null,"content_length":"77660","record_id":"<urn:uuid:91bae42f-b153-4abb-92b2-faaef96542c5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: how do you solve an equation by completing the square
{"url":"http://openstudy.com/updates/4f68f9a5e4b0f81dfbb5c8ac","timestamp":"2014-04-16T10:34:46Z","content_type":null,"content_length":"139993","record_id":"<urn:uuid:23fa479f-8a98-491d-ab90-a809b8f02c65>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Recursion – The Art and Ideas Behind M. C. Escher’s Drawings The above Escher work, Circle Limit III, was published in 1959, a full 20 years before Benoit Mandelbrot, the creator of the Mandelbrot set, began to study fractals. One can see that the image has an origin of sorts in the very middle, where each fish is perfectly aligned with each other. Once we look further out, we see the pattern becoming more complex and more fish fitting in the same area until the very border of the circle has indistinguishable (but really infinite) repetitions of the same design. If we were to zoom in on one of these tiny designs, we’d see that they go on forever. This is the very essence of recursion and an apt example of a fractal.
{"url":"http://www.pxleyes.com/blog/2010/06/recursion-the-art-and-ideas-behind-m-c-eschers-drawings/","timestamp":"2014-04-19T04:19:34Z","content_type":null,"content_length":"45895","record_id":"<urn:uuid:763e3831-1f20-407c-ba8f-0896db98af19>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
An incremental genetic algorithm approach to multiprocessor Results 1 - 10 of 23 "... Workflow scheduling is one of the key issues in the management of workflow execution. Scheduling is a process that maps and manages execution of inter-dependent tasks on distributed resources. It introduces allocating suitable resources to workflow tasks so that the execution can be completed to sat ..." Cited by 18 (4 self) Add to MetaCart Workflow scheduling is one of the key issues in the management of workflow execution. Scheduling is a process that maps and manages execution of inter-dependent tasks on distributed resources. It introduces allocating suitable resources to workflow tasks so that the execution can be completed to satisfy objective functions specified by users. Proper scheduling can have significant impact on the performance of the system. In this chapter, we investigate existing workflow scheduling algorithms developed and deployed by various Grid projects. - in Proc. IEEE Intl. Conf. Cluster Computing , 2006 "... The paper addresses the problem of matching and scheduling of DAG-structured application to both minimize the makespan and maximize the robustness in a heterogeneous computing system. Due to the conflict of the two objectives, it is usually impossible to achieve both goals at the same time. We give ..." Cited by 12 (3 self) Add to MetaCart The paper addresses the problem of matching and scheduling of DAG-structured application to both minimize the makespan and maximize the robustness in a heterogeneous computing system. Due to the conflict of the two objectives, it is usually impossible to achieve both goals at the same time. We give two definitions of robustness of a schedule based on tardiness and miss rate. Slack is proved to be an effective metric to be used to adjust the robustness. We employ ǫ-constraint method to solve the bi-objective optimization problem where minimizing the makespan and maximizing the slack are the two objectives. Overall performance of a schedule considering both makespan and robustness is defined such that user have the flexibility to put emphasis on either objective. Experiment results are presented to validate the performance of the proposed algorithm. "... Abstract: A heterogeneous computing environment is a suite of heterogeneous processors interconnected by high-speed networks, thereby promising high speed processing of computationally intensive applications with diverse computing needs. Scheduling of an application modeled by Directed Acyclic Graph ..." Cited by 7 (1 self) Add to MetaCart Abstract: A heterogeneous computing environment is a suite of heterogeneous processors interconnected by high-speed networks, thereby promising high speed processing of computationally intensive applications with diverse computing needs. Scheduling of an application modeled by Directed Acyclic Graph (DAG) is a key issue when aiming at high performance in this kind of environment. The problem is generally addressed in terms of task scheduling, where tasks are the schedulable units of a program. The task scheduling problems have been shown to be NP-complete in general as well as several restricted cases. In this study we present a simple scheduling algorithm based on list scheduling, namely, low complexity Performance Effective Task Scheduling (PETS) algorithm for heterogeneous computing systems with complexity O (e) (p+ log v), which provides effective results for applications represented by DAGs. 
The analysis and experiments based on both randomly generated graphs and graphs of some real applications show that the PETS algorithm substantially outperforms the existing scheduling algorithms such as Heterogeneous Earliest Finish Time (HEFT), Critical-Path-On a Processor (CPOP) and Levelized Min Time (LMT), in terms of schedule length ratio, speedup, efficiency, running time and frequency of best results. Key words: DAG, task graph, task scheduling, heterogeneous computing system, schedule length, "... Multiprocessor task scheduling is an important and computationally difficult problem. Multiprocessors have emerged as a powerful computing means for running real-time applications, especially that a uni-processor system would not be sufficient enough to execute all the tasks. That computing environm ..." Cited by 6 (0 self) Add to MetaCart Multiprocessor task scheduling is an important and computationally difficult problem. Multiprocessors have emerged as a powerful computing means for running real-time applications, especially that a uni-processor system would not be sufficient enough to execute all the tasks. That computing environment requires an efficient algorithm to determine when and on which processor a given task should execute. A task can be partitioned into a group of subtasks and represented as a DAG (Directed Acyclic Graph), that problem can be stated as finding a schedule for a DAG to be executed in a parallel multiprocessor system. The problem of mapping meta-tasks to a machine is shown to be NP-complete. The NP-complete problem can be solved only using heuristic approach. The execution time requirements of the applications ’ tasks are assumed to be stochastic. In multiprocessor scheduling problem, a given program is to be scheduled in a given multiprocessor system such that the program’s execution time should be minimized. The last job must be completed as early as possible. "... Abstract:- In multiprocessor systems, an efficient scheduling of a parallel program onto the processors that minimizes the entire execution time is vital for achieving a high performance. The problem of multiprocessor scheduling can be stated as finding a schedule for a general task graph to be exec ..." Cited by 3 (0 self) Add to MetaCart Abstract:- In multiprocessor systems, an efficient scheduling of a parallel program onto the processors that minimizes the entire execution time is vital for achieving a high performance. The problem of multiprocessor scheduling can be stated as finding a schedule for a general task graph to be executed on a multiprocessor system so that the schedule length can be minimize. This scheduling problem is known to be NP- Hard. In multiprocessor scheduling problem, a given program is to be scheduled in a given multiprocessor system such that the program’s execution time is minimized. The objective is makespan minimization, i.e. we want the last job to complete as early as possible. The tasks scheduling problem is a key factor for a parallel multiprocessor system to gain better performance. A task can be partitioned into a group of subtasks and represented as a DAG ( Directed Acyclic Graph), so the problem can be stated as finding a schedule for a DAG to be executed in a parallel multiprocessor system so that the schedule can e minimized. This helps to reduce processing time and increase processor utilization. Genetic algorithm (GA) is one of the widely used technique for this optimization. 
But there are some shortcomings which can be reduced by using GA with another optimization technique, such as simulated annealing (SA). This combination of GA and SA is called memetic algorithms. This paper proposes a new algorithm by using this memetic algorithm technique. "... Abstract. We propose an efficient method of extracting knowledge when scheduling parallel programs onto processors using an artificial immune system (AIS). We consider programs defined by Directed Acyclic Graphs (DAGs). Our approach reorders the nodes of the program according to the optimal executio ..." Cited by 1 (0 self) Add to MetaCart Abstract. We propose an efficient method of extracting knowledge when scheduling parallel programs onto processors using an artificial immune system (AIS). We consider programs defined by Directed Acyclic Graphs (DAGs). Our approach reorders the nodes of the program according to the optimal execution order on one processor. The system works in either learning or production mode. In the learning mode we use an immune system to optimize the allocation of the tasks to individual processors. Best allocations are stored in the knowledge base. In the production mode the optimization module is not invoked, only the stored allocations are used. This approach gives similar results to the optimization by a genetic algorithm (GA) but requires only a fraction of function evaluations. "... Abstract—As Distributed Systems begin to rely more and more on Service Oriented Architectures there is an increasingly need to store information remotely and to access to it by means of services. In this frame scheduling heuristics play an important role as they help reduce task execution costs. We ..." Cited by 1 (1 self) Add to MetaCart Abstract—As Distributed Systems begin to rely more and more on Service Oriented Architectures there is an increasingly need to store information remotely and to access to it by means of services. In this frame scheduling heuristics play an important role as they help reduce task execution costs. We propose a model that follows a nature inspired paradigm to represent the scheduling heuristics itself. Services are used to access remotely available data required by the algorithm. Furthermore a model to share the schedule data among multiple distributed scheduling algorithms that run in parallel is devised. Keywords-scheduling algorithms; distributed computing; nature inspired scheduling I. , 2006 "... Task scheduling for parallel and distributed systems is an NP-complete problem, which is well documented and studied in the literature. A large set of proposed heuristics for this problem mainly target to minimize the completion time or the schedule length of the output schedule for a given task gra ..." Cited by 1 (0 self) Add to MetaCart Task scheduling for parallel and distributed systems is an NP-complete problem, which is well documented and studied in the literature. A large set of proposed heuristics for this problem mainly target to minimize the completion time or the schedule length of the output schedule for a given task graph. An additional objective, which is not much studied, is the minimization of number of processors allocated for the schedule. These two objectives are both conflicting and complementary, where the former is on the time domain targeting to improve task utilization and the latter is on the resource domain targeting to improve processor utilization. 
In this paper, we unify these two objectives with a weighting scheme that allows to personalize the importance of the objectives. In this paper, we present a new genetic search framework for task scheduling problem by considering the new objective. The performance of our genetic algorithm is compared with the scheduling algorithms in the literature that consider the heterogeneous processors. The results of the synthetic benchmarks and task graphs that are extracted from well-known applications clearly show that our genetic algorithm-based framework outperforms the related work with respect to normalized cost values, for various task graph characteristics. "... Multiprocessors have become powerful computing means for running real-time applications and their high performance depends greatly on parallel and distributed network environment system. Consequently, several methods have been developed to optimally tackle the multiprocessor task scheduling problem ..." Cited by 1 (0 self) Add to MetaCart Multiprocessors have become powerful computing means for running real-time applications and their high performance depends greatly on parallel and distributed network environment system. Consequently, several methods have been developed to optimally tackle the multiprocessor task scheduling problem which is called NPhard problem. To address this issue, this paper presents two approaches, Modified List Scheduling Heuristic (MLSH) and hybrid approach composed of Genetic Algorithm (GA) and MLSH for task scheduling in multiprocessor system. Furthermore, this paper proposes three different representations for the chromosomes of genetic algorithm: task list (TL), processor list (PL) and combination of both (TLPLC). Intensive simulation experiments have been conducted on different random and real-world application graphs such as Gauss-Jordan, LU decomposition, Gaussian elimination and Laplace equation solver problems. Comparisons have been done with the most related algorithms like: list scheduling heuristics algorithm LSHs, Bipartite GA (BGA) [1] and Priority based Multi-Chromosome (PMC) [2]. The achieved results show that the proposed approaches significantly surpass the other approaches in terms of task execution time (makespan) and processor efficiency. , 2009 "... evolution for scheduling workflow applications on global Grids ..."
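The recurring objective in these abstracts is the schedule length (makespan) of a DAG whose tasks are mapped onto heterogeneous processors. As a concrete illustration, here is a small, generic C++ sketch of how that objective is evaluated for one candidate assignment; it is not code from any of the cited papers, and the task graph, execution costs, communication delays and assignment in main() are invented for the example.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// A task knows its predecessors and its execution cost on each processor.
struct Task {
    std::vector<int> preds;
    std::vector<double> execCost;
};

// Makespan of a schedule: tasks are visited in topological order, each task is
// assigned to one processor, and an edge between tasks on different processors
// incurs a communication delay. GA/list-scheduling heuristics minimize this value.
double makespan(const std::vector<Task>& dag,                   // topological order
                const std::vector<int>& proc,                   // processor per task
                const std::vector<std::vector<double>>& comm,   // comm[i][j]: delay of edge i->j
                int numProcs) {
    std::vector<double> finish(dag.size(), 0.0);
    std::vector<double> procFree(numProcs, 0.0);  // when each processor becomes idle
    double length = 0.0;
    for (size_t t = 0; t < dag.size(); ++t) {
        double ready = 0.0;  // earliest moment all inputs of task t are available
        for (int p : dag[t].preds)
            ready = std::max(ready, finish[p] + (proc[p] == proc[t] ? 0.0 : comm[p][t]));
        double start = std::max(ready, procFree[proc[t]]);
        finish[t] = start + dag[t].execCost[proc[t]];
        procFree[proc[t]] = finish[t];
        length = std::max(length, finish[t]);
    }
    return length;
}

int main() {
    // Tiny diamond DAG: t0 feeds t1 and t2, both feed t3; two processors.
    std::vector<Task> dag = {
        {{},     {2, 3}},
        {{0},    {3, 2}},
        {{0},    {4, 4}},
        {{1, 2}, {2, 1}},
    };
    std::vector<std::vector<double>> comm(4, std::vector<double>(4, 1.0));
    std::vector<int> assignment = {0, 0, 1, 1};
    std::printf("makespan = %.1f\n", makespan(dag, assignment, comm, 2));
}
```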
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=575242","timestamp":"2014-04-19T20:50:35Z","content_type":null,"content_length":"39617","record_id":"<urn:uuid:eefe989d-d92c-4540-9b61-0994bb3f5272>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding the traveling sales algorithm
10-19-2005 #1 Understanding the traveling sales algorithm
I'm having trouble understanding the traveling salesman algorithm. Each list node represents a city with an x and y coordinate in a circular linked list. I've managed to get each city into the linked list through a file that is passed as an argument, and print it out. What I don't understand is the distance for the sales trip:
/* code tags used to keep table alignment */
City | x coord | y coord
city1 4.40 7.70
city2 3.30 8.80
/* code tags used to keep table alignment */
Starting distance for sales trip = 3.11
City | x coord | y coord
city1 4.40 7.70
city2 3.30 8.80
Reduced distance for sales trip = 3.11
The formula for determining the distance between two cities is:
double x1 = 4.40; double x2 = 3.30; double y1 = 7.70; double y2 = 8.80;
->> printf("%.2f", sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2))* 2);
which gives 3.11. Sounds easy, right? But it gets more confusing. The original text file, before the algorithm is run:
Nowhere 28.8 17.6
Somewhere 5.6 79.12
Whoknows 55.3 49.9
Elsewhere 22.22 11.11
/* code tags used to keep table alignment */
City | x coord | y coord
Elsewhere 22.22 11.11
Nowhere 28.80 17.60
Somewhere 5.60 79.12
Whoknows 55.30 49.90
Starting distance for sales trip = 183.62
City | x coord | y coord
Elsewhere 22.22 11.11
Nowhere 28.80 17.60
Whoknows 55.30 49.90
Somewhere 5.60 79.12
Reduced distance for sales trip = 178.69
I just don't get how the 'Starting distance for sales trip' and 'Reduced distance for sales trip' are determined. Can anyone give me any hints?
Last edited by Axel; 10-19-2005 at 08:49 AM.

And just an addition: the formula should be used to calculate the shortest path. For example, if going from A to C to B to D is shorter than ABCD, then C and B should be swapped in the linked list. I'm still trying to figure out how to swap an element first before I get to that. Another question: if I have 4 cities (A, B, C, D), how can I compare their distances? Because I only have 2 pointers to the list, start and next. How do I keep track of the 3rd element etc. so I can compare it to the others? I'm still confused on how to get the "Starting distance for sales trip" and "Reduced distance for sales trip".

The travelling salesman problem concerns itself with the efficiency of algorithms. A brute force solution would work; however, it would be regarded as horribly inefficient. If there are N locations to visit, then there are N! possible itineraries. (N! is pronounced N factorial, which means if N was '5' then 5! would be 5x4x3x2x1.) You can imagine how long it would take to compute a problem where N was greater than 10. Anyway, you need to find a more efficient solution. One such solution would be Dijkstra's algo. However, there is no guarantee that it will find the shortest path.

OK, with the algorithm I'm trying to develop, N won't be greater than 10, so that's fine. I'm just not sure how to compare each linked list node's x and y coordinates.

So your problem is with the linked list? Have you tried to obtain a solution without linked lists to ensure you get the same answer as given by your teacher? One possible solution is to implement a brute force algorithm: for example, find all the possible routes you can take and calculate their total distance...
ABCD = 17m
ABDC = 14m
ADBC = 12m
DCBA = 19m
Since there are only four locations you should have 4! possible itineraries, which is (4x3x2x1=24) possibilities.
Then it would be a simple case of looping through your list and identifying the shortest journey. However, if your teacher wants you to work with linked lists only then this may not be an option. Secondly, I noticed a flaw in your formula: ->> printf("%.2f", sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2))* 2); Should that be there? Hmm.. Last edited by treenef; 10-20-2005 at 06:03 AM. Should that be there? Hmm.. no it shouldn't but that's the only way i could get close to the answer, 3.11. I've tried many different ways, but i can get to the answer given. I guess i could do it the way you said, i'll give it a go. Remember that the shortest path from a-d have to be swapped to. I'm still stuck on http://cboard.cprogramming.com/showthread.php?t=71117 so i have to get that working first. Thanks for your help Have you tried to obtain a solution without linked lists yes i've tried hard coding the variables and tried using the formula manually without a link list and i can't get the same result. for example with the city 1 and city 2 example: double x1 = 4.40; double x2 = 3.30; double y1 = 7.70; double y2 = 8.80; double total = sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2)); doesn't give me 3.30 this may not be an option. Is there a way of doing it through a linked list? That's the problem i'm having how can i compare, in a while loop 4 destinations when i only have a next a start pointer i have to figure out how to calculate and get reduced sales/sales trip first and so far all my calculations don't get to the correct answer. Last edited by Axel; 10-20-2005 at 07:50 AM. /* code tags used to keep table alignment */ City | x coord | y coord Elsewhere 22.22 11.11 Nowhere 28.80 17.60 Somewhere 5.60 79.12 Whoknows 55.30 49.90 Starting distance for sales trip = 183.62 I don't know if I use my corrected formula I get the same result... Here's my working.... First of all let us denote the following as such: Elsewhere = A Nowhere = B Whoknows= D Then A to B to C to D to A... A to B = √(22.22-28.80)²+(11.11-17.60)² = 9.24 B to C = √(28.80-5.60)²+(17.60-79.12)² = 65.75 C to D = √(5.60-55.30)²+(79.12-49.90)² = 57.65 D to A = √(55.30-22.22)²+(49.90-11.11)² = 50.97 Then 9.24 + 65.75 + 57.65 + 50.97 = 183.62 QED I think in the case you have given: /* code tags used to keep table alignment */ Starting distance for sales trip = 3.11 City | x coord | y coord city1 4.40 7.70 city2 3.30 8.80 Reduced distance for sales trip = 3.11 You have to remember the full tour goes from city 1 to city 2 and then BACK again to city 1. Last edited by treenef; 10-20-2005 at 02:50 PM. Yes, thanks alot! i understand that bit now. Now time to get my linked list working. Before the list is ordered: Nowhere 28.80 17.60 Somewhere 5.60 79.12 Whoknows 55.30 49.90 Elsewhere 22.22 11.11 I just don't get the logic behind when it's ordered. Going from A to B: A to B = √(22.22-28.80)²+(11.11-17.60)² = 9.24 is the shortest shouldn't D to A = √(55.30-22.22)²+(49.90-11.11)² = 50.97 come second because its the second shortest? ok i just figured out the i have to start with the city that has the lowest alphabet and then apply the algorithm. I'm still having trouble figuring out the distance between the cities so i can reorder them. shouldn't D to A = √(55.30-22.22)²+(49.90-11.11)² = 50.97 come second because its the second shortest? No, it appears to me you don't really know what is going on do you? Let's be honest. It doesn't matter what order the list is. 
In fact the only way to solve this would be to find all 24 possibilities and then calculate the TOTAL distance for the entire tour. And of those 24 any one of them could yield the shortest distance. You just don't know until you have tried each one. Do you understand on a visual level why you need to do: D to A = √(55.30-22.22)²+(49.90-11.11)² to find the ACTUAL distance between the coordinates? Moreover, using 'linked lists', which in my mind is far from trivial, is asking a bit much if you don't really understand the basics? Have you tried googling 'travelling salesman' to get a better idea of what is going on. ok i understand now, so i have to go to all the cities and keep swapping till i have the shortest total route? and once re-ordered that should reflect the new citieis are that are being printed I think the part where i was confused was, why are the cities reordered? Does it reflect the new order of the link list (i.e. after the finding the shortest total route)? treenef, after reading various resources about the traveling sales man algorithm i don't understand why the list is reordered. i was wondering if you could explain the logic behind why the list is re-ordered. I know how to calculate the total distance now, but i need to figure out why it's ordered that way in order to do the calculations through my code (i.e. i read the original text and if i do the formula i get the wrong answer, i need to find out why it's in that order then i can start applying the if you could show a sample from how you got from the unordered list to the answer that would be great. Another question, how is the Reduced distance for sales trip calculated? this is driving me nuts oh silly me, i didn't even post the original text before the algorithm is run: City | x coord | y coord Nowhere 28.80 17.60 Somewhere 5.60 79.12 Whoknows 55.30 49.90 Elsewhere 22.22 11.11 hmm this is strange, i just applied the forumla to the unsorted list and it give me the correct answer: total += sqrt( ( pow( (28.80 - 5.60), 2) + pow( (17.60 - 79.12), 2) )); total += sqrt( ( pow( (5.60 - 55.30), 2) + pow( (79.12 - 49.90), 2) )); total += sqrt( ( pow( (55.30 - 22.22), 2) + pow( (49.90 - 11.11), 2) )); total += sqrt( ( pow( (22.22 - 28.80), 2) + pow( (11.11 - 17.60), 2) )); = 183.62 even more confused :| 10-20-2005 #2 10-20-2005 #3 Super Moderater. Join Date Jan 2005 10-20-2005 #4 10-20-2005 #5 Super Moderater. Join Date Jan 2005 10-20-2005 #6 10-20-2005 #7 10-20-2005 #8 Super Moderater. Join Date Jan 2005 10-21-2005 #9 10-21-2005 #10 10-21-2005 #11 Super Moderater. Join Date Jan 2005 10-21-2005 #12 10-22-2005 #13 10-22-2005 #14 10-22-2005 #15
{"url":"http://cboard.cprogramming.com/c-programming/71086-understanding-traveling-sales-algorithm.html","timestamp":"2014-04-16T16:48:31Z","content_type":null,"content_length":"105628","record_id":"<urn:uuid:3d9f9760-f2f6-4cdd-9297-b2944dc9e7c3>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
When can a Connection Induce a Riemannian Metric for which it is the Levi-Civita Connection? up vote 48 down vote favorite As we all know, for a Riemannian manifold $(M,g)$, there exists a unique torsion free connection $\nabla_g$, the Levi-Civita connection, that is compatible witht metric. I was wondering if one can reverse this situation: Given a manifold with $M$ with connection $\nabla$, when does there exist a Riemannian metric $g$ for which $\nabla$ is the Levi-Civita connection. If this were true for complex projective manifolds it would make me be very happy. dg.differential-geometry riemannian-geometry 11 Levi-Civita is one person --- not two; so do not write it "Levi--Civita" – Anton Petrunin Feb 5 '11 at 18:49 @Anton: Thanks for the correction, I wasn't aware of this convention. – Jean Delinez Feb 5 '11 at 19:37 1 @Anton, thanks! I had honestly thought all these years that it was named after two people (and had typeset it that way). – Joel Fine Feb 11 '11 at 2:12 2 Can someone with enough reputation change the $\Delta$'s in the question and in Bill Thurston's answer to $\nabla$'s? This is highly non-standard notation, since $\Delta$ is usually the Laplacian. – Spiro Karigiannis Apr 17 '11 at 18:50 @Spiro (honestly I am a bit surprised you don't already have the rep): since the notation between the question and the answer matches, perhaps it is okay? – Willie Wong Apr 17 '11 at 23:54 show 2 more comments 3 Answers active oldest votes Bill and Willie have (of course) given correct answers in terms of the holonomy of the given torsion-free connection $\nabla$ on the $n$-manifold $M$. However, it should be pointed out that, practically, it is almost impossible to compute the holonomy of $\nabla$ directly, since this would require integrating the ODE that define parallel transport with respect to $\ nabla$. Even though they are linear ODE, for most connections given explicitly by some functions $\Gamma^i_{jk}$ on a domain, one cannot perform their integration. Although, as Bill pointed out, you cannot always tell from local considerations whether $\nabla$ is a metric connection, you can still get a lot of information locally, and this usually suffices to determine the only possibilities for $g$. The practical tests (carried out essentially by differentiation alone) were of great interest to the early differential geometers, but they don't get much mention in the modern literature. For example, one should start by computing the curvature $R$ of $\nabla$, which is a section of the bundle $T\otimes T^\ast\otimes \Lambda^2(T^\ast)$. (To save typing, I won't write the $M$ for the manifold.) Taking the trace (i.e., contraction) on the first two factors, one gets the $2$-form $tr(R)$. This must vanish identically, or else there cannot be any solutions of $\nabla g = 0$ for which $g$ is nondegenerate. (Geometrically, $\nabla$ induces a connection on $\Lambda^n(T^\ast)$ (i.e., the volume forms on $M$) and $tr(R)$ is the curvature of this connection. If this connection is not flat, then $\nabla$ doesn't have any parallel volume forms, even locally, and hence cannot have any parallel metrics.) To get more stringent conditions, one should treat $g$ as an unknown section of the bundle $S^2(T^\ast)$, pair it with $R$ (i.e., 'lower an index') and symmetrize in the first two factors, giving a bilinear pairing $\langle g, R\rangle$ that is a section of $S^2(T^\ast)\otimes \Lambda^2(T^\ast)$. 
By the Bianchi identities, the equation $\langle g, R\rangle = 0$ must be satisfied by any solution of $\nabla g = 0$. Notice that these are linear equations on the coefficients of $g$. For most $\nabla$ when $n>2$, this is a highly overdetermined system that up vote 44 has no nonzero solutions and you are done. Even when $n=2$, this is usually $3$ independent equations for $3$ unknowns, and there is no non-zero solution. down vote accepted Often, though, the equations $\langle g, R\rangle = 0$ define a subbundle (at least on a dense open set) of $S^2(T^\ast)$ of which all the solutions of $\nabla g= 0$ must be sections. (As long as $R$ is nonzero, this is a proper subbundle. Of course, when $R=0$, the connection is flat, and the sheaf of solutions of $\nabla g = 0$ has stalks of dimension $n(n{+}1)/2$.) The equations $\nabla g = 0$ for $g$ a section of this subbundle are then overdetermined, and one can proceed to differentiate them and derive further conditions. In practice, when there is a $\nabla$-compatible metric at all, this process spins down rather rapidly to a line bundle of which $g$ must be a section, and one can then compute the only possible $g$ explicitly if one can take a primitive of a closed $1$-form. For example, take the case $n=2$, and assume that $tr(R)\equiv0$ but that $R$ is nonvanishing on some simply-connected open set $U\subset M$. In this case, the equations $\langle g, R\ rangle = 0$ have constant rank $2$ over $U$ and hence define a line bundle $L\subset S^*(T^\ast U)$. If $L$ doesn't lie in the cone of definite quadratic forms, then there is no $\ nabla$-compatible metric on $U$. Suppose, though, that $L$ has a positive definite section $g_0$ on $U$. Then there will be a positive function $f$ on $U$, unique up to constant multiples, so that the volume form of $g = f\ g_0$ is $\nabla$-parallel. (And $f$ can be found by solving an equation of the form $d(\log f) = \phi$, where $\phi$ is a closed $1$-form on $U$ computable explicitly from $\nabla$ and $g_0$. This is the only integration required, and even this integration can be avoided if all you want to do is test whether $g$ exists, rather than finding it explicitly.) If this $g$ doesn't satisfy $\nabla g = 0$, then there is no $\nabla$-compatible metric. If it does, you are done (at least on $U$). The complications that Bill alludes to come from the cases in which the equations $\langle g, R\rangle = 0$ and/or their higher order consequences (such as $\langle g, \nabla R\rangle = 0$, etc.) don't have constant rank or you have some nontrivial $\pi_1$, so that the sheaf of solutions to $\nabla g = 0$ is either badly behaved locally or doesn't have global sections. Of course, those are important, but, as a practical matter, when you are faced with determining whether a given $\nabla$ is a metric connection, they don't usually arise. 2 @Robert: I'm glad to see your answer, thanks for giving an answer for the more typical generic circumstances. As you suggest, I ended up focusing on unlikely pathological cases --- I was curious. – Bill Thurston Apr 18 '11 at 8:52 @Bill: Actually, the kind of cautionary examples that you highlighted are very important, and I, too, always bring them up when I'm lecturing on the subject. The old sources (such as 3 Cartan), though wonderful, tend to be cavalier about constant rank assumptions. (A HW exercise I like to give is to construct a metric connection on $R^4$ that is locally Kahler, but not Kahler.) 
I do like to counterbalance those kinds of examples with `practical advice' about computing holonomy, partly because it's interesting and partly because it tends not to be treated in most modern texts. – Robert Bryant Apr 18 '11 at 12:07 add comment First, there's a very simple criterion for whether $\nabla$ is an orthogonal connection: look at the holonomy of $\nabla$ around closed loops in the manifold, and ask whether they preserve a quadratic form. The set of quadratic forms preserved by a linear transformation is a linear subspace of all quadratic forms, so there's some linear subspace of quadratic forms preserved by the holonomy. The condition that $\nabla$ is torsion free doesn't depend on a metric, so it's straightforward to check. The necessary and sufficient condition for $\nabla$ to be a Levi-Civita condition is that its holonomy preserve at least one positive definite quadratic form, and that it be torsion-free. up vote Note that the condition on holonomy is global: it can't be reduced to some set of pointwise identities involving $\nabla$, or even the local behavior of $\nabla$. For instance, take $\nabla$ 38 down to be the standard flat connection in $\mathbb R^n \setminus 0$ modulo the linear transformation $x \rightarrow 2x$. Since $\nabla$ is preserved by $x \rightarrow 2x$, it descends to the vote quotient $S^{n-1} \times \mathbb R$. It can locally be expressed as a Levi-Civita connection, but there is no globally-defined metric for which it is the Levi-Civita connection. It's also possible to concoct simply-connected examples with a connection that is locally Levi-Civita, but not globally Levi-Civita. For instance: inside $S^3$ embed a copy of $T^2 \times I$, and make a Riemannian metric that for which $T^2 \times I$ is isometric to $[0,1] \times \mathbb E^2$ modulo a discrete group of translations, and for which each component of the complement has holonomy (as usual) equal to the full $SO(3)$. Make a second, similar metric, but where the $T^2$ has a different shape. Make a hybrid of the two, combining half from one $S^ 3$ and the other half from the other $S^3$, glued together by an affine map of the torus. The flat connection is identified by the gluing map, but the holonomy does not globally preserve a Riemannian metric. 4 Exercise: make a connection on $S^2$ that is locally Levi-Civita but not globally Levi-Civita. – Bill Thurston Feb 5 '11 at 19:13 Thanks a lot for your answer. Just one question: what is an "orthogonal" connection? – Jean Delinez Feb 5 '11 at 19:40 2 An orthogonal connection with respect to a quadratic form is one that preserves that quadratic form. If you're given the quadratic form $g$, this is the identity $X(\left < Y, Z \right > _g = \left < \Delta_X Y , Z\right >_g + \left < Y, \Delta_X Z \right >$. Note that if $Y$ and $Z$ range over a local orthonormal basis, this reduces to skewsymmetry of the matrix for $\ Delta_X$ expressed in terms of the basis. – Bill Thurston Feb 5 '11 at 20:23 @Bill Great. Thanks a lot for your help. – Jean Delinez Feb 11 '11 at 17:19 Changed $\Delta$ to $\nabla$. – Deane Yang Apr 29 '11 at 13:55 add comment To start with, you need the connection to be torsion free. After that, there is a characterisation of metric connections given by Schmidt, CMP 29 (1973) 55-59, which states that the linear up vote torsion-free connection is metric if and only if the holonomy group is a sub-group of the orthogonal group of the desired signature. 18 down 3 In other words, the holonomy group must be pre-compact. 
Given that this condition is non-local, it is unlikely that there is a more manageable equivalent formulation. – Sergei Ivanov Feb 5 '11 at 19:05 1 The statement is essentially trivial, not clear what Schmidt could write on these 4 pages... – Anton Petrunin Feb 5 '11 at 19:11 @Sergei: Right. The connection is Levi-Civita for some Riemannian metric if and only if it is torsion-free with relatively compact holonomy group. But what about pseudo-Riemannian metrics? – George Lowther Feb 5 '11 at 19:45 @George: The question was about Riemannian metrics. For pseudo-Riemannian ones, maybe there is another reasonable criterion when a group is conjugate to a subgroup of the group preserving a non-degenerate product. Maybe it is that no element of (the closure of) this group has a real eigenvalue other than $\pm 1$? (I'm not quite sure this is equivalent, but it looks plausible.) – Sergei Ivanov Feb 5 '11 at 21:06 @George, @Sergei: since the set of quadratic forms preserved by any single group element is a linear subspace, it's already an easy condition, to look at the intersection of these 2 subspaces among all holonomy elements. The infinitesimal intesections are also linear, so it's a matter of parallel transporting these local invariant subspaces to compare globally. For the orthogonal case, I think this is easier than checking whether the holonomy is compact. Eigenvalues aren't enough: the holonomy might be the group that conserves some subspace, and is orthogonal on subspace and quotient. – Bill Thurston Feb 5 '11 at 22:17 add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry riemannian-geometry or ask your own question.
{"url":"https://mathoverflow.net/questions/54434/when-can-a-connection-induce-a-riemannian-metric-for-which-it-is-the-levi-civita/62042","timestamp":"2014-04-19T15:35:12Z","content_type":null,"content_length":"87842","record_id":"<urn:uuid:ab4b20e8-372e-4c3c-99ae-7901755eae5b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
A geometric viewpoint on computation? Let me try to explain what I am trying to do in this work related to “computing with space“. The goal is to understand the process of emergence, in its various precise mathematical forms, like: - how the dynamics of a big number of particles becomes the dynamics of a continuous system? Apart the physics BS of neglecting infinities, I know of very few mathematically correct approaches. From my mixed background of calculus of variations and continuous media mechanics, I can mention an example of such an approach in the work of Andrea Braides on the $\Gamma$-convergence of the energy functional of a discrete system to the energy functional of a continuous system and atomistic models of solids. - how to endow a metric space (like a fractal, or sub-riemannian space) with a theory of differential calculus? Translated: how to invent “smoothness” in spaces where there is none, apparently? Because smoothness is certainly emergent. This is part of the field of non-smooth calculus. - how to explain the profound resemblance between geometrical results of Gromov on groups with polynomial growth and combinatorial results of Breuillard, Gree, Tao on approximate groups? In both cases a nilpotent structure emerges from considering larger and larger scales. The word “explain” means here: identify a general machine at work in both results. - how to explain the way our brain deals with visual input? This is a clear case of emergence because the input is the excitation of some receptors of the retina and the output is almost completely not understood, except that we all know that we see objects which are moving and complex geometrical relations among them. A fly sees as well, read From insect vision to robot vision by N. Franceschini, J.M. Pichon, C. Blanes. Related to this paper, I cite from the abstract (boldfaced by me): We designed, simulated, and built a complete terrestrial creature which moves about and avoids obstacles solely by evaluating the relative motion between itself and the environment. The compound eye uses an array of elementary motion detectors (EMDS) as smart, passive ranging sensors. Like its physiological counterpart, the visuomotor system is based on analogue, continuous-time processing and does not make use of conventional computers. It uses hardly any memory to adjust the robot’s heading in real time via a local and intermittent visuomotor feedback loop. More generally, there seems to be a “computation” involved in vision, massively parallel and taking very few steps (up to six), but it is not understood how this is a computation in the mathematical, or computer science sense. Conversely, the visual performances of any device based on computer science computation up to now, are dwarfed by any fly. I identified a “machine of emergence” which is in work in some of the examples given above. Mathematically, this machine should have something to do with emergent algebras, but what about the computation part? Probably geometers reason like flies: by definition, a geometrical statement is invariant up to the choice of maps. A sphere is not, geometrically speaking, a particular atlas of maps on the sphere. For a geometer, reproducing whatever it does by using ad-hoc enumeration by natural numbers, combinatorics and Turing machines is nonsense, because profoundly not geometrical. On the other hand, the powerful use and control of abstraction is appealing to the geometer. 
This justifies the effort to import abstraction techniques from computer science and to replace the non-geometrical stuff by … whatever has more of a geometrical character. For the moment, such efforts are mostly a source of frustration, a familiar feeling for any mathematician. But at some point, in these times of profound changes in mathematics as well as in society, something beautiful, clear and streamlined will emerge from all these collective efforts.
{"url":"http://chorasimilarity.wordpress.com/2012/05/20/a-geometric-viewpoint-on-computation/","timestamp":"2014-04-19T14:30:01Z","content_type":null,"content_length":"90172","record_id":"<urn:uuid:79be497e-bbae-4873-9d18-0ea3a9b03fcf>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving the theorem (Apollonius) for the triangle ABC where . . . . Prove the theorem (Apollonius) for the triangle ABC where A, B, C are the respective points (-a,0), (a,0), (b,c) on the Cartesian plane. Would I do this using vectors in component form? Otherwise I have no idea how to do it.
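One coordinate route (a sketch; it assumes the intended statement is the median form of Apollonius' theorem, for the median from C to the midpoint M of AB, and that the third vertex is C = (b, c)):

With $A=(-a,0)$, $B=(a,0)$, $C=(b,c)$ and $M=(0,0)$ the midpoint of $AB$,
$$CA^2 + CB^2 = \big[(b+a)^2 + c^2\big] + \big[(b-a)^2 + c^2\big] = 2a^2 + 2b^2 + 2c^2 = 2AM^2 + 2CM^2,$$
which is exactly Apollonius' theorem $CA^2 + CB^2 = 2AM^2 + 2CM^2$. Writing the vectors in component form leads to the same computation.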
{"url":"http://www.physicsforums.com/showthread.php?p=1359616","timestamp":"2014-04-21T07:14:59Z","content_type":null,"content_length":"21906","record_id":"<urn:uuid:c770f7a2-3b21-43b2-aa01-6c9b26fe424c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Old Greenwich Algebra 2 Tutor ...Since the MCAT is made up of various sections ranging from the Physical and Biological Sciences to Verbal and Writing, it is very important to understand how the test is set up. I have a very strong background in the Biological Sciences and am very comfortable with the Verbal and Writing parts o... 45 Subjects: including algebra 2, English, chemistry, GED ...Currently I am an economist with an investment bank based in NYC. Prior to that I was an Adjunct Professor at a private university in New York. I have over six years of experience teaching and tutoring students from economics to history. 16 Subjects: including algebra 2, calculus, statistics, accounting ...While there, I tutored students in everything from counting to calculus, and beyond. I then earned a Masters of Arts in Teaching from Bard College in '07. I've been tutoring for 8+ years, with students between the ages of 6 and 66, with a focus on the high school student and the high school curriculum. 26 Subjects: including algebra 2, calculus, physics, geometry ...Moreover, I relate to many younger students. It is easy for me to get along with people and help them with any problems they have, in education and even personally if necessary. A little about me, I grew up in New York with great family and friends. 29 Subjects: including algebra 2, English, chemistry, geometry ...The position required the ability to explain a variety of math subjects to high school students, which ranged from basic math to advance topics like Pre-Calculus and AP Calculus. During my undergraduate career at UIC, I taught general chemistry and classical physics as a supplemental instructor ... 13 Subjects: including algebra 2, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Old_Greenwich_Algebra_2_tutors.php","timestamp":"2014-04-19T20:18:14Z","content_type":null,"content_length":"24096","record_id":"<urn:uuid:0b393401-bf90-43d1-918d-244088984bae>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Essentials of interpretation. Intro.
“Essentials of interpretation” is a new series which consists of small lessons on the interpretation of computer programs. The lessons are implemented in JavaScript and contain detailed comments. The sources can be found in the appropriate GitHub repository. Available lessons: currently the first lesson is available, where we describe the simplest interpreter of arithmetic expressions (AE). Every lesson also contains exercises, which readers are encouraged to implement to improve their understanding of the topic covered. Besides the code sources from the repository (which are quite compact), the corresponding chapters of the series are planned. At the end, we’re going to implement a (simple?) language with support for basic expressions, variables stored in environments, and functions as closures (that is, quite easily implementing higher-order functions). A previous work is the implementation of Scheme on Coffee language — an interpreter of Scheme written in CoffeeScript. The inspiration for the series comes from two books on programming languages theory (PLT) and interpretation: SICP (Structure and Interpretation of Computer Programs) by Harold Abelson and Gerald Jay Sussman with Julie Sussman, and also PLAI (Programming Languages: Application and Interpretation) by Shriram Krishnamurthi. The books are recommended as additional literature. Have fun with implementing programming languages! Dmitry Soshnikov, 2011-08-10.
Tags: Essentials of interpretation, Interpreter, PLAI, SICP

10. August 2011 at 19:19: very good initiative
10. August 2011 at 23:03: Thanks, Dmitry, it’s very interesting! Look forward to further explanations.
11. August 2011 at 00:41: That will be definitely something huge!
11. August 2011 at 11:28: My solution to homework assignment #2:

    var handle = {
      '+': function (a, b) { return a + b; },
      '-': function (a, b) { return a - b; },
      '*': function (a, b) { return a * b; },
      '/': function (a, b) { return a / b; },
      '_default': function (a) { return +a; }
    };

    function evaluate(exp) {
      var symbol = +exp;
      if (exp == symbol) {
        return symbol;
      }
      symbol = exp[0];
      return (handle.hasOwnProperty(symbol) && handle[symbol] || handle._default)(evaluate(exp[1]), evaluate(exp[2]));
    }

11. August 2011 at 15:49: @scriptLover, @Robert Polovsky, @John Merge, thanks, guys! @Mathias Bynens, yes, quite an elegant solution; very good. And by the way, the second lesson in a source code view is available!
11. August 2011 at 21:00: This be interesting. I love SICP, even though I haven’t worked out through all the exercises yet =/ This is what I’ve come up with for lesson 1’s first two exercises:
11. August 2011 at 21:26: @Quildreen Motta very good solution. I see you also implemented variables using environments — great, congrats! The next step is nested functions and scope chain lookups.
11. September 2011 at 19:11: I’ve been using Sage (sagemath.org) recently for linear algebra and other math work. Their notebooks are quite nice and provide a way for wonderful collaboration. BUT .. I realize Python is a lovely language, but I’ve recently discovered how sophisticated JS is, and how complete it has made the JS world: node.js for servers, JS + HTML5 for clients, and JSON for communications between the two. So I’ve got two questions relating to your work: 1 – How would I go about implementing operator overloading? It is a pretty important part of mathematics, I think, and keeps the notation uncluttered. 2 – Could you point me to good existing mathematics in JS work? — Owen
14.
September 2011 at 20:25 @Owen Densmore Sage seems a powerful framework for math (not sure how it correlates in tools with e.g. MatLab, but anyway, seems a good helper). Python is also very interesting language (actually, it has a similar semantics as JS: also completely dynamic, you may augment objects and classes at runtime, though, not built-in classes as in JS and Ruby). And about operator overloading, in the simplest way just comparing the types of operands. If they (or one of them) are strings, use concatenation, if operands are floats, use different algorithm, Unfortunately I’m not aware about mathematics framework such as Sage in JS. But seems Python also fits nice This series though isn’t about mathematics much, but about interpretation of computer programs in general (though of course we touch some abstract data and operations, including primitive, such as math addition, etc, on them). 5. July 2013 at 12:42 Thank you, thank you and thank you
{"url":"http://dmitrysoshnikov.com/courses/essentials-of-interpretation-intro/","timestamp":"2014-04-20T23:27:31Z","content_type":null,"content_length":"51154","record_id":"<urn:uuid:90579bdf-29d8-4441-a91f-5aa8c7d6b77b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Best known algos for calculating nCr % M I have often encountered calculating nCr problems the last one being one of the IIIT-M problems but it probably had weak test cases that got easy solutions AC. The other one https:// cs3.interviewstreet.com/challenges/dashboard/#problem/50877a587c389 of codesprint was completey out of my minds and hands , I tried the inverse euler , power of p in n! ( I would also like if someone could explain me the theory of this method ) but all went in vain . Anyone who could suggest probably the best algos for cases when nCr is to be calculated with large n and r with and without % M where M is prime/not prime + the above method which I coded but couldn't understand how it was so. This question is marked "community wiki". asked 15 Nov '12, 14:45 accept rate: 5% I encountered ^nC[r] for the first time on GCJ 08, Round 3 Problem D .. link to analysis. The first key idea is that of Lucas' Theorem. Lucas's Theorem reduces ^nC[r] % M to (^n[0]C[r[0]] % M) (^n[1]C[r[1]] % M) ... (^n[k]C[r[k]] % M) (n[k]n[k-1]...n[0]) is the base M representation of n (r[k]r[k-1]...r[0]) is the base M representation of r • Note, if any of the above terms is zero because r[i] > n[i] or any other degeneracy, then the binomial coefficient ^nC[r] % M = 0 This means that any of the terms in the expansion of ^n[i]C[r[i]] is not divisible by M. But this is only half the job done. Now you have to calculate ^nC[r] % M (ignoring subscripts for brevity) for some 0 ≤ r ≤ n < M There are no ways around it, but to calculate [ n! / ( r! (n-r)! ) ] % M Without loss of generality, we can assume r ≤ n-r Remember, you can always do the Obvious. Calculate the binomial and then take a modulo. This is mostly not possible because the binomial will be too large to fit into either int or long long int (and Big Int will be too slow) This can then be simplified by using some clever ideas from Modular Arithmetic. For brevity, we say B % M = A^-1 % M It is not always possible to calculate modular multiplicative inverses. If A and M are not co-prime, finding a B will not be possible. For example, A = 2, M = 4. You can never find a number B such that 2*B % 4 = 1 Most problems give us a prime M. This means calculating B is always possible for any A < M. For other problems, look at the decomposition of M. In the codesprint problem you mentioned 142857 = 3^3 * 11 * 13 * 37 You can find the result of ^nC[r] % m for each m = 27, 11, 13, 37. Once you have the answers, you can reconstruct the answer modulo 142857 using Chinese Remainder Theorem. These answers can be found by Naive Methods since, m is small. I have also seen problems where M is a product of large primes, but square free. In these cases, you can calculate the answers modulo the primes that M is composed of using modular inverses (a little more about that below), and reconstruct the answer using CRT. I am yet to see a problem where M is neither, but if it is. I do not know if there is a way to calculate binomial coefficients generally (since you cannot calculate modular inverses, and neither can you brote force). I can dream of a problem where there are powers of small primes, but square-free larger ones for a Number Theory extravaganza. 
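To make the first half of this recipe concrete, here is a hedged C++ sketch of Lucas' theorem for a prime modulus p (for example the factors 11, 13, 37 mentioned above). It is illustrative only: it assumes p is small, it rebuilds the factorial table on every call for brevity, and it does not handle the prime-power (27) or CRT steps discussed in the answer; the numbers in main() are arbitrary.

```cpp
#include <cstdio>
#include <vector>

// nCr % p for 0 <= r <= n < p, p prime, via factorials and Fermat inverses.
// Fine when p is small (e.g. the m = 11, 13, 37 factors mentioned above).
static long long smallBinom(long long n, long long r, long long p) {
    if (r < 0 || r > n) return 0;
    std::vector<long long> fact(p, 1);
    for (long long i = 1; i < p; ++i) fact[i] = fact[i - 1] * i % p;
    // Modular inverse via Fermat: a^(p-2) % p, by repeated squaring.
    auto power = [&](long long a, long long e) {
        long long res = 1 % p;
        for (a %= p; e > 0; e >>= 1, a = a * a % p)
            if (e & 1) res = res * a % p;
        return res;
    };
    return fact[n] * power(fact[r] * fact[n - r] % p, p - 2) % p;
}

// Lucas: write n and r in base p and multiply the digit-wise binomials.
static long long lucasBinom(long long n, long long r, long long p) {
    long long result = 1;
    while (n > 0 || r > 0) {
        long long ni = n % p, ri = r % p;
        if (ri > ni) return 0;                     // a zero digit term kills the product
        result = result * smallBinom(ni, ri, p) % p;
        n /= p; r /= p;
    }
    return result;
}

int main() {
    // e.g. C(1000000000, 123456789) mod 13
    std::printf("%lld\n", lucasBinom(1000000000LL, 123456789LL, 13));
}
```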
There is one other way to calculate ^nC[r] for any M which is small enough (say M ≤ 5000), or for small n and r (say r ≤ n ≤ 5000), by using the following recursion with memoization:
^nC[r] = ^n-1C[r] + ^n-1C[r-1]
Since there are no divisions involved (no multiplications either), the answer is easy and precise to calculate even if the actual binomials would be very large.

So, back to calculating [ n! / ( r! (n-r)! ) ] % M. You can convert it to
n * (n-1) * ... * (n-r+1) * r^(-1) * (r-1)^(-1) * ... * 1
Of course, each product is maintained modulo M. This may be fast enough for problems where M is large and r is small. But sometimes, n and r can be very large. Fortunately, such problems always have a small enough M :D The trick is, you pre-calculate factorials modulo M, and pre-calculate inverse factorials modulo M:
fact[n] = n * fact[n-1] % M
ifact[n] = modular_inverse(n) * ifact[n-1] % M
The Modular Multiplicative Inverse for a prime M is in fact very simple. From Fermat's Little Theorem,
A^(M-1) % M = 1
Hence, A * A^(M-2) % M = 1
Or in other words, A^(-1) % M = A^(M-2) % M, which is easy (and fast) to find using repeated squaring.
There is one last link I wish to paste to make this complete. Modular inverses can also be found using the Extended Euclid's Algorithm. I have only had to use it once or twice among all the problems I ever solved.
answered 15 Nov '12, 18:57 gamabunta ♦♦ accept rate: 14%

@gamabunta: this one is a good recipe for a tutorial on codechef and topcoder. @admins do try to make it a tutorial ..
@gamabunta: Firstly, thanks - well written. "Lucas's Theorem reduces nCr % M to (n0Cr0 % M) (n1Cr1 % M) ... (nkCrk % M), where (nknk-1...n0) is the base M representation of n and (rkrk-1...r0) is the base M representation of r." This is only valid where M is prime. But in that particular problem M is not prime, so can we reduce it into base-M form (and does Lucas' Theorem still hold)? And if we use CRT, how can we use it? Thanks, Anu :)
@gamabunta: yes, probably if you explain the Chinese Remainder Theorem that would be even more beneficial... I have read it a lot of times but forget it too easily.
Amazing detail. Cheers. Just a great feeling seeing someone spend so much time and energy typing this out for others. :) P.S. @anudeep2011: Lucas' Theorem holds true even for prime powers (like 27 = 3^3); this is called the generalized Lucas' theorem. (14 Mar, 01:31) pvaish
I agree completely with what @kavishrox wrote... Admins, would it be possible to have some sort of "pin" feature, so that the "tutorial"-like posts wouldn't go down? ;) It would greatly help newbies and ofc make this community even more respected :D
answered 17 Nov '12, 04:56 accept rate: 7%

To take as an example: "142857 = 27 * 11 * 13 * 37. You can find the result of nCr % m for each m = 27, 11, 13, 37." As 27 is not a prime, I suppose we would have to resort to the last method of finding all fact[n] and inverse-fact[n] for 27. So we want nCr and we have x! for all x; WHY do we need the inverse factorial? Is it for (n-r)! and r!? To find the inverse of (n-r)! and r! mod M and then multiply all of the factorials and then find % M?
answered 18 Nov '12, 15:08 accept rate: 0%

What if I do not want the modulo, just nCr, which doesn't fit in even a long long?
answered 15 Dec '13, 13:31 accept rate: 0%

Would someone please explain to me what these "inverse factorials" are?
answered 18 Nov '12, 10:10 accept rate: 0%

How would you calculate nCr mod 27? You can't use the inverse modulo here,
Also, one can't do this by nCr = (n-1)Cr + (n-1)C(r-1), given that the minimum space needed would be just two rows, but each of size r, which in this case is 1000000000? answered 17 Jul '13, 16:53 accept rate: 0%
{"url":"http://discuss.codechef.com/questions/3869/best-known-algos-for-calculating-ncr-m","timestamp":"2014-04-19T06:51:12Z","content_type":null,"content_length":"59330","record_id":"<urn:uuid:112deec7-f2e4-4b05-9dbe-5d51b7d05d3f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: Chain Rule Calculus f(x)=(2x-7)^3 I NEEED HELP!!
{"url":"http://openstudy.com/updates/5063dbb9e4b0da5168bdfeb2","timestamp":"2014-04-17T09:54:45Z","content_type":null,"content_length":"37076","record_id":"<urn:uuid:a76a7d05-9c8b-4af9-a38f-36e9652943ac>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Books in Canada - Review

A flourishing new genre in the trade book market might be called "academics rushing into public print with abstruse speculations and an occasional joke." Many of these books, if not about cosmology or molecular biology, focus on the human brain. The question often asked is something like "Why are we as smart as we seem to be (or think we are)?" and the answer is usually a put-down: "Because of an evolutionary accident." The standard scientific explanation for all our human talents and complex social organization is that we have a "big brain," with billions of individual neurons and trillions of interconnections. Given the big brain and a few environmental pressures and opportunities, all of our great accomplishments and big ideas (especially those about ourselves) were almost bound to become manifest.

I can't imagine anyone claiming that human abilities are unrelated to the kind of brain we have, so it might seem that there isn't too much to argue about. Nevertheless, all these books are characterized by an earnest intention to persuade, though it is difficult to determine whether the entreaty is addressed to lay readers or academic colleagues.

Keith Devlin, a British mathematician working in California, has already explored mathematics as a subject in a score of books and other media, both academic and popular. In The Math Gene, he wants to account for how mathematics came on the scene in the first place, and why, like language, it seems to be an exclusively human characteristic. He begins by disowning his title. There is no such thing as a math gene, he says, but it is a common metaphor for an innate facility. (It is?) His explanation for mathematics is that the potential is wired into our brains, and he has some interesting and provocative ideas about why it actually emerged. But his exposition is patchy because it looks in two directions at the same time. He addresses colleagues who share the same knowledge base and can answer him back, but also readers whose understanding of the technical issues must be assumed to be minimal, and who probably couldn't care less about the actual conclusions that Devlin reaches. So Devlin has to wrap his hypotheses in a cocoon of stories (and that misleading title), blunting the sharp edges of argument and transforming academic slog into a breathtaking tale of excitement and discovery.

Surprisingly, Devlin doesn't have much time for arithmetic and numbers, though this doesn't stop him from devoting his opening chapters to the topics. Infants, for example, can discriminate between two and three objects or sounds soon after birth, and by the age of two can make some simple calculations. Many other animals can do the same things. These abilities are probably not mathematical in any intellectual sense, but more like estimation, or a sensitivity to quantities that are not counted. Devlin has a reason for this difference, but can't bring himself to discuss it before dealing with two other matters.

The first is a discussion of individuals who lost aspects of their ability to understand numbers or perform mathematical operations after severe injury to their brain. There is a certain morbid fascination about all this, but it surely doesn't come as a surprise that damage to a crucial part of the brain is likely to affect intellectual functioning, just as jamming a screwdriver into the motherboard of your computer will probably interfere with its speed and accuracy.
The neurological exposition is followed by an abrupt excursion into group theory, which essentially concerns the basic relationships of mathematics without the numbers. If group theory doesn't mean anything to you, don't worry. Devlin says it doesn't matter, except that it will give an idea of the kind of thing he is talking about when he refers to higher mathematics.

Devlin defines mathematics as the science of patterns. One could argue that mathematics isn't always a science (certainly not the way most people do it), and that it is arbitrary, or at least metaphorical, to refer to mathematical relationships as patterns, when they are unlike most other kinds of patterns that we encounter. However, the definition enables Devlin to make his first significant point: that our mathematical ability comes from having a brain that is basically a pattern-creating and pattern-recognizing organ. This is why he thinks higher mathematics is probably easier and more natural than arithmetic, because counting, calculating and other activities with numbers are linear, and unsuited to the kind of brain we have. He even suggests that it might be a mistake to begin children's mathematical education with numbers, rather than plunging them directly into more patterned mathematical activities, such as group theory (an idea reminiscent of the educational turmoil of the 1950s and 1960s when children were introduced to the so-called New Math precisely through group theory and other topics normally taught in higher grades).

Devlin is now ready to explain why mathematics is possible. The technical term he uses is "mental representation," although "reflection" and "imagination" would probably serve just as well. He argues that the human brain is able to achieve four levels of representation (at least when thinking about mathematics) while other creatures must be satisfied with just one or two. The first level of representation is the here and now. All you can think about, if you can think at all, is the situation you are currently in. The second level of representation is when you can think about something not present at the moment, but that you know because you have encountered it before. The third level is when you can think about something that is new, but put together from elements of your past experience. Chimps may be able to do this, Devlin thinks, but he is dubious. Only humans, however, are able to attain the fourth level, which is representing (reflecting upon) abstractions that were never part of concrete experience. He calls this "off-line thinking." It enables us, he says, to live in the "wide-open spaces" of symbolic thought, seeing real and imaginary events as patterns rather than as sequences of rules.

What gives us that unique ability for abstract thought, apart from our large brain? Devlin argues that it is only language that enables us to pursue and understand mathematical patterns and other aspects of off-line thought. Given language, mathematics is inevitable. But language itself would never exist without off-line thinking, continues Devlin. In fact, language and off-line thinking boil down to the same thing. You can't have one without the other, and they developed together, not as a direct result of evolution, but as a by-product of our big brain. What brought about the big brain, and therefore language and mathematics? Evolutionary chance. Some people, by chance, were born with slightly bigger brains. Others, no doubt, were born with slightly bigger feet.
Those with bigger brains begot chains of bigger-brained offspring, at least one of which survived to beget everyone in the world today. Those with bigger feet, fortunately, did not.

And why did off-line thinking and language develop? It is here that Devlin is most original, or outrageous. When our ancestors left the trees (Devlin blames meteorites) and took to an organized terrestrial life, they needed to keep track of personal relationships. Language as we know it, with its complex reflective syntax and story patterns, developed as a way to talk about what everyone was doing. Gossip, in other words (Devlin's word). Our attention to patterns and relationships is the reason, in the long run, we got numbers and mathematics. Numbers, as Devlin's sub-title asserts, are like gossip. Mathematicians do their job by exploring the relationships they find in the topics they study.

Why should many people find mathematics inaccessible? Here Devlin is relentless in asserting that "math is hard." Math involves tedious exercises and its value is inadequately explained, so many students aren't motivated to make the effort. It's like running a marathon, he says. You have to want to succeed.

It's an interesting tale, like all speculations about evolution, even if untestable and devoid of any practical utility.

Frank Smith lives in Victoria, BC. His latest publication is The Book of Learning and Forgetting (Teachers College Press).
{"url":"http://www.booksincanada.com/article_view.asp?id=3046","timestamp":"2014-04-17T12:01:55Z","content_type":null,"content_length":"17446","record_id":"<urn:uuid:b73b9b53-a6ac-4e1d-b2ef-5dcda97442ed>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Micro Problem Set 6

Introductory Microeconomics: Problem Set 6
Utility maximisation and applications.

1. Alice consumes only cheese and dates. Her utility function is U = 2c^0.5 + d, where c is the quantity of cheese she consumes and d is the quantity of dates. Her income is fixed at m > 0. The price of cheese is p > 0, and the price of dates is 1.
a. What is Alice's budget constraint? Will she spend all her income on cheese and dates?
b. What is Alice's demand function for cheese? (You may assume that Alice's income is sufficiently large that she buys positive quantities of each good.) Is there anything interesting or unusual about this demand function? Explain.
c. Find an expression for Alice's demand for dates, and show that her income elasticity of demand is greater than 1.
d. Bob obtains twice as much utility from consuming cheese and dates as Alice; his utility function is U = 4c^0.5 + 2d. Bob's income is twice that of Alice. Compare their demands for cheese and dates.

2. Gordon is an employee of a company that allows him to choose the number of hours he works per day. His preferences for consumption of goods and leisure can be represented as follows: U = C^2F, where C stands for consumption (measured in expenditure) and F stands for free time or leisure. Gordon always sleeps for 8 hours each night and this is not included in F. The company pays Gordon a wage of £10 per hour and Gordon also has income from a trust fund that pays him £40 per day. Gordon spends all of his income on consumption goods.
a. How many hours a day does Gordon work and how much does he spend on consumption goods?
b. In 1998, the government imposed a 50% tax on labour income. How did Gordon's work hours and consumption levels change?
c. Explain the changes in part (b) in terms of income and substitution effects. Use a diagram in your answer.
d. In 1999 the government decided to impose a lump-sum tax on each individual equal to the tax revenue collected in 1998. Now how many hours does Gordon work and how much does he consume?
e. Compare Gordon's utility in 1998 and 1999 and comment on the difference.

3. A consumer's utility function is given by U = x^0.2y^0.8, and her income is M = 1000. She initially faces a price vector p[0] = (1,1), which then changes to p[1] = (2,1).
a. Calculate the Compensating Variation and the Equivalent Variation of the price change.
b. Illustrate your answer with an appropriate diagram.

4. Think about the market for salt. Suppose your household only buys two goods, "salt" and "all other goods". Suppose the price of salt trebles.
a. Represent on a diagram the magnitude of the substitution effect. How big is this effect?
b. What about the magnitude of the income effect?
c. What can we conclude about the change in your optimal choice that is induced by this enormous increase in the price of salt?
d. What if your household were choosing between "housing" and "all other goods", and you were analysing the impact on your optimal choice of an increase of, say, 50% in housing prices? How would your optimal choice change?

5. Consider the consumption and savings decision of a person who lives for two periods, working in the first and enjoying retirement in the second. Explain with the aid of indifference curve diagrams how her plans change in the following scenarios, carefully stating any assumptions you are making.
a. Wages (in the current period) rise.
b. Interest rates (available in the current period) rise.
c. Prices are expected to rise in Period 2.
d. How does her utility vary in each case?
6. a. Explain how a worker will vary the number of hours she works in response to a rise in the wage rate, decomposing the change into income and substitution effects.
b. Write down the Slutsky equation for this problem.
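The following worked sketch is not part of the problem set; it just illustrates, with sympy, the kind of derivation question 1(b) asks for, assuming an interior optimum so that the budget constraint binds and d = m - pc can be substituted into the utility function.

# Sketch (not from the problem set): Alice's demand for cheese from U = 2*sqrt(c) + d
# with budget p*c + d = m, assuming an interior solution.
import sympy as sp

c, p, m = sp.symbols('c p m', positive=True)
U = 2 * sp.sqrt(c) + (m - p * c)          # substitute d = m - p*c into the utility

foc = sp.diff(U, c)                       # first-order condition dU/dc = 0
c_star = sp.solve(sp.Eq(foc, 0), c)[0]    # demand for cheese
d_star = sp.simplify(m - p * c_star)      # demand for dates

print(c_star)   # 1/p**2 : cheese demand is independent of income m
print(d_star)   # m - 1/p : every extra unit of income is spent on dates

The income-independence of the cheese demand is the "interesting or unusual" feature part (b) points to: with quasi-linear utility, income effects fall entirely on dates, which is also what drives the income-elasticity result in part (c).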
{"url":"http://users.ox.ac.uk/~pemb3023/micro1_ps6.html","timestamp":"2014-04-19T05:02:16Z","content_type":null,"content_length":"4903","record_id":"<urn:uuid:61297392-5aa6-4af4-9d74-182ec1f9204a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHOD FOR DETERMINING THE POWER OF AN INTRAOCULAR LENS

For the pre-operative calculation of the power of an intraocular lens, three input parameters are needed: the axial length of the eye (AL), the refractive power of the cornea, and the distance between the front of the cornea and the back focal plane of the intraocular lens, the so-called effective lens position (ELP). The invention shows a novel approach to the determination of the ELP.

Claims:

1. A method for calculating the power of an intraocular lens comprising the steps of: measuring an axial separation (ACD') between the front surface of the cornea and the plane of the iris root; and determining the power of the intraocular lens using the measured axial separation together with other measured parameters and empirically determined lens constants.

2. The method of claim 1, where the plane of the iris root is determined using an optical coherence measurement at an infrared wavelength.

3. The method of claim 2, where the optical coherence measurement consists of one or more meridional scans.

4. The method of claim 3, where the location of the iris root is measured as the outer end points of the strongly scattering layer in the back of the iris.

5. The method of claim 4, where the other measured parameters include the axial eye length and the two central radii of the corneal front surface.

6. The method of claim 5, where the other measured parameters further include the lens thickness.

7. A method for calculating the power of an intraocular lens comprising the steps of: measuring an axial separation (ACD'') between the front surface of the cornea and a line connecting the posterior edge of the ciliary body; and determining the power of the intraocular lens using the measured axial separation together with other measured parameters and empirically determined lens constants.

8. A method for calculating the power of an intraocular lens comprising the steps of: determining the location of the crystalline lens equator based on a combination of the anterior chamber depth, crystalline lens thickness and the anterior radius of curvature of the crystalline lens; using the determined location of the crystalline lens equator to predict the location of the intraocular lens equator; and using the predicted location of the intraocular lens equator in a calculation to determine the appropriate power of an intraocular lens.

9. A method as recited in claim 8, wherein said step of determining the location of the crystalline lens equator is further based on the posterior radius of curvature of the crystalline lens.

10. A method as recited in claim 8, wherein the step of predicting the location of the intraocular lens equator is further based on a determination of the location of the iris root.

11. A method of determining the power of an intraocular lens comprising the steps of: generating image information of the eye using an optical coherence tomography system; identifying the location of the iris root based on the image information; and determining the power of the intraocular lens based on the location of the iris root coupled with additional measurements including the anterior cornea radius and axial eye length.

12. A method as recited in claim 11, wherein the location of the iris root is used to calculate the effective intraocular lens position and wherein the calculated effective intraocular lens position is used to determine the power of the intraocular lens.
13. A method as recited in claim 11, wherein said additional measurements include at least one of the posterior corneal radius and anterior chamber depth.

14. A method of determining the power of an intraocular lens comprising the steps of: measuring the crystalline lens radius or radii or thickness with one of an optical coherence tomography (OCT) system, a slit projection system and an ultrasound system; and determining the power of the intraocular lens based on said measurements and additional measurements including one or more of the axial eye length, anterior cornea radius, posterior corneal radius, anterior chamber depth (ACD).

15. A method as recited in claim 14, where said determining step is performed using a formula or a ray-tracing algorithm.

16. A method as recited in claim 14, wherein said determining step is further based on a calculation of the effective lens position (ELP), wherein the ELP is derived from one or more additional parameters including the crystalline lens radius and the crystalline lens thickness.

PRIORITY

[0001] This application is a continuation of PCT/EP2008/004406, filed Jun. 3, 2008. This application claims priority to U.S. Provisional Application, Ser. No. 60/933,012, filed Jun. 4, 2007, the disclosure of which is incorporated herein by reference.

TECHNICAL FIELD

[0002] Biometry, particularly measurement of geometrical parameters in the anterior segment, for the calculation of the refractive power of intraocular lenses (IOL).

BACKGROUND OF THE INVENTION

[0003] For the pre-operative calculation of the power of an intraocular lens, three input parameters are needed: the axial length of the eye (AL), the refractive power of the cornea, and the distance between the front of the cornea and the back focal plane of the intraocular lens, the so-called effective lens position (ELP). To a good approximation, the post-operative axial length can be substituted by the corresponding value measured pre-operatively. The axial length can be measured either ultrasonically or optically using partial coherence interferometry (PCI). Also--at least for eyes that have not undergone keratorefractive surgery--the post-op corneal power can be predicted based on the pre-op measurement of the front surface corneal radii. This prediction is based on assumptions about the corneal index of refraction and the ratio of front and back surface corneal radii. Keratometry can be measured using manual or automatic optical keratometers, or extracted from a corneal topography obtained via Placido ring projection. The effective lens position, on the other hand, is inherently a post-operative value. In fact the final position of an IOL does not manifest itself until a number of weeks after surgery, when the capsular bag has shrunk around the implant. A pre-op parameter the ELP approximately corresponds to is the distance from the front of the cornea to the front of the crystalline lens, the so-called anterior chamber depth (ACD). The ACD can be measured ultrasonically or optically using slit projection, or it can be predicted based on the diameter of the clear cornea (the so-called white-to-white distance, WTW) and its central curvature. In commonly used IOL calculation formulas, the ELP is predicted using an empirical fit of several parameters such as ACD and AL. Olsen has suggested that the prediction can be improved by inclusion of additional parameters such as the lens thickness (LT), corneal radius, and pre-op refraction (Acta Ophthalmol. Scand. 2007: 85: 84-87).
Most commonly used IOL calculation formulas are based on the same vergence formula to model focusing by the intraocular lens; they only differ in the method for predicting ELP. SUMMARY OF THE INVENTION [0006] It is the purpose of this invention to provide a better ELP prediction by measuring a pre-op quantity that correlates closely to the post-op position of the IOL. In one preferred embodiment, an OCT device is used to indentify the iris root. The axial separation (ACD') between the front surface of the cornea and the plane of the iris root is then determined. The power of the intraocular lens is determined using the measured axial separation together with other measured parameters and empirically determined lens constants. BRIEF DESCRIPTION OF THE DRAWINGS [0007]FIG. 1 is a cross sectional view of the eye. [0008]FIG. 2 is a cross sectional view of the eye illustrating the axial distance (ACD') between the front surface of the cornea and the plane of the iris root. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS [0009] As seen in the attached FIG. 1 , in cross-sectional images of the anterior segment of the eye obtained by optical coherence tomography (OCT) at infrared wavelengths (e.g., 1310 nm) a strongly scattering layer 10 is visible near the back surface of the iris. This structure is commonly interpreted as the iris pigment epithelium. On the other hand, it has also been suggested that this scattering region may correspond to the iris dilator muscle. At the periphery, the absorbing layer ends at a well-defined radial position (location 12 in FIG. 1 ), which is anatomically close to, or co-located with the iris root. In a meridional cross sectional OCT scan of the cornea the two peripheral end points of the scattering layer in the iris can be used to uniquely identify two iris root points. A line connecting the two iris root points can be used to define a root-to-root line. This line is shown as item 20 in FIG. 2 The separation of the root-to-root line from the front of the cornea 16 can be used to define a modified anterior chamber depth parameter ACD' (see FIG. 2 ). This can be defined in one of several different ways. Most simply, ACD' can be defined as the longest perpendicular distance from the root-to-root line to the front surface of the cornea. If the patient fixates in a direction parallel to the direction of the OCT scan during measurement, the line of sight can be uniquely identified in the acquired cross sectional image. ACD' can then alternatively be defined as the longest distance from the root-to-root line to the corneal front surface, measured parallel to the line of sight. Finally, the inner limits of the iris in the OCT scan can be used to mark the mid-point of the pupil along the root-to-root line. As a third alternative, the parameter ACD' can be measured from this mid-point to the corneal front surface, in a direction parallel to the line of sight. The parameter ACD' can be used to predict the post-op effective lens position. Like with presently used formulas this can be done by empirically determining regression coefficients for a set of parameters, such as ACD', AL, and/or LT. Other measured parameters can include the two central radii of the corneal front surface. The prediction of ELP thus obtained can be used in IOL calculation formulas together with empirical lens constants. 
Another possible approach relies on a measurement associated with a region located at the most anterior portion of a highly scattering layer posterior to the sclera (location 14 of FIG. 1). This layer is presumably a pigmented layer along the posterior boundary of the ciliary muscle. Similar to the approach shown in FIG. 2, a line connecting these two opposing points (14) can be drawn and the separation between this line and the front of the cornea (ACD''--not shown) can be defined and used to predict the IOL position after surgery. The value for ACD'' can be used alone or in conjunction with the value for ACD'.

As is well known in the art, the determination of regression coefficients requires large data sets and produces formulas that have limited physical interpretation. The larger the number of measurements to be included and the more complex the formula, the more data are required to develop those formulas. This can especially be a drawback in the modification of IOL calculation formulas for newly developed IOLs. The IOL calculation formula may instead take the form of regression formulas to calculate intermediate parameters such as the position of the IOL equator and the effective power of the lens. For example, the ELP is determined by a combination of anatomical features, such as the distance from the corneal vertex to the sulcus, by the design of the IOL and by surgical technique. Various surrogate measurements may be combined. For example, the ACD' characterizes the position of the iris root. A combination of the traditional ACD, LT, anterior radius of curvature of the crystalline lens, and possibly also posterior radius of curvature of the crystalline lens, characterizes the crystalline lens equator. These and other surrogate measurements (including ACD'') can be combined into a regression formula for predicting the position of the IOL equator. The ELP prediction can then be calculated as a combination of the optical power, derived from the radii of curvature and index of refraction, and the predicted IOL equator. The resulting ELP estimate can be integrated into an IOL calculation formula.

A particular embodiment of the inventive method consists in the following sequence: The axial length (AL) of a patient eye is measured using partial coherence interferometry (PCI), the modified anterior chamber depth ACD' is determined using optical coherence tomography (OCT) and the corneal power is determined using a suitable keratometric setup. The keratometric setup can be a stand-alone keratometer or integrated into a combination device such as the IOLMaster. After obtaining these measurements, the values are processed together with the desired target refraction using the Haigis formula to determine the required power of an intraocular lens.

In the Haigis formula,

D_L = n / (L - d) - n / (n/z - d)

with

z = D_C + ref / (1 - ref * dBC)  and  D_C = (n_C - 1) / R_C

where [0016]
D_L: IOL refraction (power)
D_C: cornea refraction (power)
R_C: cornea radius
n_C: refractive index of the cornea
ref: refraction to be obtained after surgery
dBC: spectacle distance from cornea
d: optical anterior chamber depth ACD
L: eye length
n: refractive index of the eye (1.336)

d is normally predicted using a function based on a multi-variable regression analysis from a large sample of surgeon- and IOL-specific outcomes for a wide range of axial lengths (AL) and anterior chamber depths (ACD).
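To make the calculation sequence concrete, here is a small Python sketch of the Haigis formula as quoted above. It is my own illustration, not code from the patent: the regression constants a0, a1, a2 and the corneal index n_C below are placeholders of roughly realistic magnitude, not validated lens constants.

# Sketch of the Haigis vergence formula (illustrative only; a0, a1, a2 and n_C are placeholders).
def haigis_iol_power(AL_mm, R_mm, ACD_mm, ref=0.0, dBC=0.012,
                     n=1.336, n_C=1.3315, a0=0.4, a1=0.4, a2=0.1):
    # AL (axial length), corneal radius R and ACD in millimetres; powers in dioptres.
    d = (a0 + a1 * ACD_mm + a2 * AL_mm) / 1000.0   # predicted ELP d (regression), in metres
    L = AL_mm / 1000.0
    D_C = (n_C - 1.0) / (R_mm / 1000.0)            # corneal power D_C = (n_C - 1) / R_C
    z = D_C + ref / (1.0 - ref * dBC)
    return n / (L - d) - n / (n / z - d)

# Plausible biometry: AL = 23.5 mm, R = 7.7 mm, ACD = 3.2 mm, target refraction 0 D.
print(round(haigis_iol_power(AL_mm=23.5, R_mm=7.7, ACD_mm=3.2), 1))   # roughly 19 D

In the approach proposed in the patent, the measured ACD' (or ACD'') would simply take the place of ACD in the regression used to predict d.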
In the preferred embodiment, the modified anterior chamber depth parameter ACD' (or ACD'') would be used in place of ACD in the regression fit. In other common IOL formulas an expression equivalent to d is used too, as the table shows:

SRK/T: d = A-constant
Hoffer Q: d = pACD
Holladay 1: d = Surgeon Factor
Holladay 2: d = ACD

Also in these formulas d may be substituted by the modified anterior chamber depth ACD' (or ACD''). The invention is not limited to the embodiments described; other uses of the measured values ACD' or ACD'' for IOL calculation also fall within the scope of protection.
{"url":"http://www.faqs.org/patents/app/20100134763","timestamp":"2014-04-18T09:29:21Z","content_type":null,"content_length":"42323","record_id":"<urn:uuid:69e659a1-758c-4c54-ab90-a455dd45c37d>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: Can anyone please help me with this question? Find Y prime: Y=cos^-1(2x+5).
{"url":"http://openstudy.com/updates/5111d647e4b09cf125bde79f","timestamp":"2014-04-18T16:42:33Z","content_type":null,"content_length":"565418","record_id":"<urn:uuid:25e83af1-2107-4bec-9793-50a106018642>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US6366835 - Height estimating apparatus 1. Field of the Invention This invention relates to a height estimating apparatus and method suitable particularly, but not exclusively, for estimating a first height of a vehicle above a first reference surface. 2. Discussion of Prior Art Conventional apparatus used in aircraft navigation systems to estimate height of the aircraft above a fixed reference surface typically include instruments such as the baro-altimeter to estimate height above sea level, radar altimeter to estimate height above ground level, and can also include various configurations of laser and radar devices to estimate a height from obstacles to the aircraft. The baro-altimeter measurement is combined with various outputs from the aircraft's navigation system to provide a baro-inertial height, which is an estimate of the aircraft's height above sea level. The baro-inertial height is the least accurate of the various height measurements described above because the performance of the baro-altimeter is dependent on atmospheric conditions and the flight dynamics of the aircraft. Various systems are available as an alternative for estimating height above sea level, which are partially or wholly independent of the baro-altimeter input. These include using secondary surveillance radar, where a body external to the aircraft reports a measured height of the aircraft by radio, but this suffers from the limitation that someone or something is required to communicate the height to the aircraft and that the relevant surveillance instrumentation, additional to the navigation instrumentation on the aircraft, is required in order to make the measurements. Further alternatives include using the height output from a satellite range triangulation system such as the Global Positioning System (GPS) described in the Nato unclassified report STANAG 4294 (published by the military agency for standardisation) but this relies on satellite signals being available, or using the radar altimeter measurement, but as this measures height above ground level, it is unsuitable for mountainous terrain where the ground level is a significant distance from that of the sea. There is thus a need for an improved height estimating apparatus which is substantially independent of flight dynamics and atmospheric conditions, and which can estimate height above sea level. According to a first aspect of the present invention there is provided apparatus for estimating a first height of a vehicle above a first reference surface, including a system for determining position, velocity and attitude incorporating at least one sensing means operable to provide an output signal indicative of a vertical specific force of the vehicle, error-estimating means for receiving as input signals a horizontal reference velocity and position of the vehicle, and a radar altimeter measurement of a second height of the vehicle above a second reference surface and for providing as an output signal estimates of errors associated with the sensing means output signal, and integrating means for receiving said sensing means and error-estimating means output signals, and for subtracting the estimates of errors from the signal indicative of vertical specific force while performing a double integration of the results of the subtraction, to provide an output indicative of the required estimated first height. 
Preferably the at least one sensing means is an inertial vertical specific force sensor operable to provide the output signal indicative of the vertical specific force of the vehicle. Conveniently said first reference surface is sea level and said second reference surface is ground level. Advantageously the error-estimating means includes a Kalman filter and a gravity corrector. Preferably there are provided first, second and third estimator stations at which the output signal estimates of errors associated with the sensing means output signal are stored. Conveniently the integrating means includes first and second subtractors and first and second integrators. Advantageously the horizontal velocity and position of the vehicle forming an input to the Kalman filter is provided by the system for determining position, velocity and attitude. According to a further aspect of the present invention there is provided a method for estimating a first height of a vehicle above a first reference surface, including the steps of operating at least one sensing means, forming part of a system for determining position, velocity and attitude of a vehicle, to provide an output signal indicative of a vertical specific force of the vehicle, inputting a horizontal velocity and position of the vehicle and a radar altimeter measurement of a second height of the vehicle above a second reference surface to error-estimating means, establishing in said error-estimating means estimates of errors associated with the sensing means output signal, and subtracting the estimates of errors from the sensing means output signal while performing a double integration of the results of the subtraction, to provide the required estimated first height. Preferably said double integration includes a first integration, which first integration integrates the vertical specific force to provide a vertical velocity, and a second integration, which second integration integrates the vertical velocity in order to provide the required estimated first height. Conveniently the at least one sensing means is a vertical specific force sensor, with estimates of a bias associated with the vertical specific force sensor being provided by a Kalman filter, together with estimates of a vertical velocity error and estimates of a vertical height error associated with the system for determining position, velocity and attitude. Advantageously said subtraction of estimates of errors includes a first subtraction, which first subtraction is effected while performing the first integration and subtracts the estimate of the bias associated with the specific force sensor and a correction for gravity supplied by the gravity corrector from said vertical specific force on a continuous basis, together with a subtraction of the vertical velocity error at discrete intervals, and a second subtraction, which second subtraction is effected on the second integration at discrete intervals and subtracts the vertical height error estimate therefrom to provide the required estimated first height. 
Preferably the double integration is effected at a processing cycle frequency of substantially 50 Hz and outputs the required estimated first height at an output rate of substantially 50 Hz, the radar altimeter measurement of a second height of the vehicle above a second reference surface is input to the estimating means at an input rate of substantially 12.5 Hz and the estimate of the bias of the vertical specific force sensor, the estimate of gravity, the vertical velocity error estimate and the vertical height error estimate are used to correct the double integration with a correction cycle frequency in the range of from 2 to 4 Hz. For a better understanding of the present invention, and to show how the same may be carried into effect, reference will now be made, by way of example, to the accompanying drawings, in which: FIG. 1 is a perspective view of first and second reference surfaces relative to a typical trajectory of an airborne vehicle, and FIG. 2 is a block diagram illustrating an apparatus according to the present invention for carrying out the method of the present invention for estimating a first height of a vehicle above a first of the two reference surfaces of FIG. 1. Apparatus of the invention for estimating a first height of a vehicle above a first reference surface as shown in FIGS. 1 and 2 is intended for use in situations requiring the estimate of first height to be substantially independent of variations in atmospheric and dynamic conditions. Such apparatus is thus particularly suited for use in navigation systems of airborne vehicles, specifically as a replacement for standard height measurement devices such as the baro-altimeter, which is sensitive to atmospheric conditions and the flight dynamics of the aircraft. Thus as shown in FIGS. 1 and 2 of the accompanying drawings, apparatus for estimating a first height 1 of a vehicle 2 above a first reference surface 3, includes a system 4 for determining position and velocity incorporating at least one sensing means 4 b operable to provide an output signal 5 indicative of a vertical specific force of the vehicle 2. The apparatus also includes error-estimating means, preferably a gravity corrector 19 and a Kalman filter 6. The Kalman filter 6 is for receiving as input signals a radar altimeter measurement 7 of a second height 1 b of the vehicle 2 above a second reference surface 8, and a reference velocity and position in the horizontal plane which may be provided by the system 4, and for providing as an output signal 9 estimates of errors associated with the sensing means output signal 5. The output signal 9 estimates of errors are stored at first, second and third estimator stations 15, 16, 17. Integrating means 10 forms a further part of the apparatus, which integrating means is for receiving and manipulating said sensing means and error-estimating means output signals 5, 9. This manipulation includes subtracting the estimates of errors from the signal 5 indicative of a vertical specific force at first and second subtractors 20 21, whilst performing a double integration at first and second integrators 11, 12 on the results of the subtraction, so as to provide an output indicative of the required estimated first height 1. The at least one sensing means 4 b preferably an inertial vertical specific force sensor operable to provide the output signal 5 indicative of the vertical specific force of the vehicle. As shown in FIG. 
1, the first reference surface 3 is sea level, the second reference surface 8 is ground level, and the first height estimate 1 therefore provides an estimate of the height of the vehicle 2 above sea level. The apparatus of the invention, described above, is operable to estimate a first height 1 of the vehicle 2 above a first reference surface 3 by implementing a method of the invention. The method of the invention includes the steps of operating the at least one sensing means 4 b to provide an output signal 5 indicative of a vertical specific force of the vehicle 2, and inputting the radar altimeter measurement 7 to the error-estimating means 6. A further input to the error-estimating means includes horizontal position and velocity 4 a, which preferably is provided by the system 4 for determining position, velocity and attitude. In the method, the integrating means 10 combines the steps of subtracting the estimates of errors from the sensing means output signal 5 whilst performing a double integration on the results of the subtraction. The double integration includes a first integration by the first integrator 11, which first integration integrates the vertical specific force to provide a vertical velocity, and a second integration by the second integrator 12, which second integration integrates the vertical velocity in order to provide the required estimate for first height 1. The estimates of errors include an estimate of a bias associated with the vertical specific force sensor, stored at estimator station 15, an estimate of a vertical velocity error, stored at estimator station 16, and a vertical height error, stored at estimator station 17, all of which are provided by the Kalman filter 6 represented in FIG. 2. The subtraction of the estimates of errors stored at the estimator stations 15, 16, 17 includes a first subtraction at the first subtractor 20 and a second subtraction at the second subtractor 21. The first subtraction is effected on a continuous basis while performing the first integration and subtracts the estimate of the bias associated with the specific force sensor and a correction for gravity supplied by the gravity corrector 19 from said vertical specific force. The first subtraction also includes a subtraction of the vertical velocity error from the first integration at discrete intervals as shown schematically by a first switch 13 in FIG. 2, and subtraction of the vertical velocity error therefore intermittently affects the subtractor 20. These estimates of errors, output from estimator stations 15, 16 and gravity corrector 19, are shown as a collective input 18 a to subtractor 20, having first been synchronised at collecting station 18 b. The second subtraction is effected on the second integration at discrete intervals, as indicated by a second switch 14 in FIG. 2. The correction for gravity 19 compensates for latitude and height with respect to gravity on the surface of the earth at the equator, and is preferably estimated using an output from a further sensing means (not shown) together with the estimate for first height 1. Successful operation of the above method is dependent on synchronisation of the various inputs to the integrating means 10, and a preferred timing schedule is as follows. 
The integrating means 10 runs at a processing cycle frequency of substantially 50 Hz and outputs the estimate of first height at substantially 50 Hz. An estimate of the second height 1 b is input to the error-estimating means 6 at a rate of approximately 12.5 Hz, and the error estimates output from the estimator stations 15, 16, 17 are sent to update the integrating means at a rate of between 2 and 4 Hz; once an update has been performed, the inputs from the estimator stations 15, 16, 17 to the integrating means are reset to zero for the remainder of the integrating means processing cycle, so that the error estimates are only applied once per update cycle.
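As a rough illustration of the scheme described above (not code from the patent), the sketch below double-integrates a vertical specific force signal at 50 Hz, continuously subtracting a gravity estimate and an accelerometer bias estimate, and applying discrete velocity and height corrections when the error-estimating means supplies them; all variable names and values are my own placeholders.

# Illustrative double-integration loop (50 Hz) with discrete error corrections.
DT = 1.0 / 50.0                     # integrating-means processing cycle

def integrate_height(specific_force, corrections, g_est=9.81, bias_est=0.0,
                     v0=0.0, h0=0.0):
    # specific_force: vertical specific force samples [m/s^2], up positive.
    # corrections: dict {cycle index: (velocity_error, height_error)} from the Kalman filter;
    # each correction is applied once and then treated as zero (cf. switches 13 and 14).
    v, h = v0, h0
    heights = []
    for k, f in enumerate(specific_force):
        v += (f - g_est - bias_est) * DT   # first subtractor + first integrator
        h += v * DT                        # second integrator
        if k in corrections:               # corrections arrive at roughly 2-4 Hz
            dv, dh = corrections[k]
            v -= dv                        # subtract vertical velocity error estimate
            h -= dh                        # subtract vertical height error estimate
        heights.append(h)
    return heights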
{"url":"http://www.google.com/patents/US6366835?dq=6263352","timestamp":"2014-04-21T07:35:40Z","content_type":null,"content_length":"74410","record_id":"<urn:uuid:8493c3a1-faa7-44da-855f-e44cf83172aa>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix Structure Plots

[Example structure plots: bcsstk02, dw2048, fidapm05, qc324]

Structure plots provide a quick visual check on the sparsity pattern of the matrix. A structure plot is a rectangular array of dots; a dot is black if the corresponding matrix element is nonzero, otherwise it is white. For an n by m matrix, the (1,1) matrix element is at the top left, while the (n,m) element is at the bottom right.

Note that we use the term nonzero loosely here. For sparse matrices, we say an element is nonzero if its value is explicitly recorded in the file defining the matrix; the value in the file may actually be zero.

Producing high fidelity structure plots of fixed size is quite difficult when the order of the matrix is not commensurate with the number of pixels used in the display. When the order of the matrix is smaller than the number of available pixels, more than one pixel is used to display a single matrix element. Scaling up the size of such an image can lead to plaid effects or fuzziness. These can be seen above, where bcsstk02 (a 66x66 matrix) and fidapm05 (a 42x42 matrix) are displayed in 150x150 pixel arrays. Note that the detailed views of such matrices linked to from the matrix home pages are not constrained to a fixed size, and hence do not suffer from such artifacts.

At the other extreme, when the order of the matrix is much larger than the number of pixels used to display it, the matrix can appear denser than it is. For example, the matrix psmigr_3 is a 3140x3140 matrix with only 5.5% of its entries nonzero. However, the nonzeros are distributed fairly uniformly, and hence when it is scaled down to be much smaller than 3140x3140 pixels it looks quite dense.

Because of such difficulties, matrix structure plots should only be used to obtain a rough idea of the nonzero structure. We are working on methods to improve these visualizations. The structure plots in the Matrix Market were produced using IBM Visualization Data Explorer, MATLAB, and xv.

The Matrix Market is a service of the Mathematical and Computational Sciences Division / Information Technology Laboratory / National Institute of Standards and Technology. Last change in this page: 9 July 2002.
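For readers who want to reproduce this kind of plot themselves, here is a minimal Python sketch using scipy and matplotlib (the plots in the Matrix Market were made with the tools listed above, not with this code; the filename is a placeholder for any .mtx file downloaded from the collection):

# Minimal structure ("spy") plot of a Matrix Market file.
import scipy.io
import matplotlib.pyplot as plt

A = scipy.io.mmread("bcsstk02.mtx")   # read the .mtx file into a (sparse) matrix
plt.spy(A, markersize=1)              # black marker wherever an entry is explicitly stored
plt.title("bcsstk02 structure plot")
plt.show()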
{"url":"http://math.nist.gov/MatrixMarket/structureplots.html","timestamp":"2014-04-20T10:59:20Z","content_type":null,"content_length":"4977","record_id":"<urn:uuid:fab61dd4-b034-45a4-936c-5cfe225c1dea>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
A Survey of the Almagest

Ptolemy's Almagest is one of the great books of mathematics, one of the best examples of effective mathematical modeling. Ptolemy sets out to create a mathematical description of the planets and their motions. Based on vast amounts of empirical data, Euclidean geometry, and trigonometry, he produces a model that allows him to predict the future motions of the planets with a fair amount of accuracy.

Of course, today Ptolemy is mostly known as the one who established a geocentric model of the universe. In this, he is the victim of his success: precisely because his model is so good at predicting planetary motion, it was easy to conclude that the model was the reality. Since today everyone knows that the Earth is not at the center of the universe, Ptolemy was wrong. Similarly, Ptolemy argued that it was impossible for the Earth to be rotating quickly enough to account for the daily rotation of the heavens. We all know this was a mistake, though many of my students find it hard to figure out why Ptolemy's argument against the Earth's motion is wrong.

Of course, most of the people who talk about the Almagest have never read it, even though there exists an excellent English translation by G. J. Toomer. After all, the Almagest is a notoriously difficult book. Already in late antiquity there were several commentaries that stepped the student through Ptolemy's arguments. Modern readers need a guide as well.

Olaf Pedersen's Survey, first published in 1974 (so ten years before Toomer's translation), is precisely such a guide. It has served many readers since 1974; in fact, in his (new) introduction Alexander Jones describes it as "the first book one puts in the hands of a student approaching the Almagest." Besides writing an introduction for the new edition, Jones has added extensive annotations. Presumably in order to keep the production costs down, these have been added in the back of the book. So the core of this book is a photographic reproduction of Pedersen's text. Black bars in the margins indicate sections where new scholarship requires some correction and/or addition be made, and the corresponding notes need to be looked for in the added pages at the end. One must, then, read with a finger in each part, flipping back and forth. Not ideal, but an acceptable compromise.

Pedersen locates the sections he is discussing by citing pages in Heiberg's Greek edition; Toomer's translation gives these in the margin, making it easy to use the two books together, which is certainly what any serious student must do. Toomer refers frequently to Pedersen, and of course Jones's notes refer frequently to Toomer. Jones may be a little too optimistic when he says that modern students "no longer expect to be led by the hand"; reading Ptolemy, even with Pedersen's help, will be hard work. But reading it without such help would be almost impossible.

G. J. Toomer, Ptolemy's Almagest. Princeton University Press, 1998. (Originally published by Duckworth, 1984)

Fernando Q. Gouvêa is Carter Professor of Mathematics at Colby College in Waterville, ME. He loves reading old books.
{"url":"http://www.maa.org/publications/maa-reviews/a-survey-of-the-almagest","timestamp":"2014-04-19T21:31:00Z","content_type":null,"content_length":"98485","record_id":"<urn:uuid:5fbfa505-4f5f-4b5d-b2d3-6adcf40b0d6d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
Fachbereich Mathematik

A Mathematical Model for Diffusion and Exchange Phenomena in Ultra Napkins (1992)
Joachim Weickert
The performance of napkins is nowadays improved substantially by embedding granules of a superabsorbent into the cellulose matrix. In this paper a continuous model for the liquid transport in such an Ultra Napkin is proposed. Its main feature is a nonlinear diffusion equation strongly coupled with an ODE describing a reversible absorption process. An efficient numerical method based on a symmetrical time splitting and a finite difference scheme of ADI-predictor-corrector type has been developed to solve these equations in a three dimensional setting. Numerical results are presented that can be used to optimize the granule distribution.

Moduli spaces for torsion free modules on curve singularities I (1993)
Gert-Martin Greuel, Gerhard Pfister

A Model for the Cloudiness of Fabrics (1995)
Joachim Weickert
Cloudy inhomogeneities in artificial fabrics are graded by a fast method which is based on a Laplacian pyramid decomposition of the fabric image. This band-pass representation takes into account the scale character of the cloudiness. A quality measure of the entire cloudiness is obtained as a weighted mean over the variances of all scales.

Wavelet Thresholding: Beyond the Gaussian I.I.D. Situation (1995)
Michael H. Neumann, Rainer von Sachs
With this article we would first like to give a brief review of wavelet thresholding methods in non-Gaussian and non-i.i.d. situations, respectively. Many of these applications are based on Gaussian approximations of the empirical coefficients. For regression and density estimation with independent observations, we establish joint asymptotic normality of the empirical coefficients by means of strong approximations. Then we describe how one can prove asymptotic normality under mixing conditions on the observations by cumulant techniques. In the second part, we apply these non-linear adaptive shrinking schemes to spectral estimation problems for both a stationary and a non-stationary time series setup. For the latter one, in a model of Dahlhaus on the evolutionary spectrum of a locally stationary time series, we present two different approaches. Moreover, we show that in classes of anisotropic function spaces an appropriately chosen wavelet basis automatically adapts to possibly different degrees of regularity for the different directions. The resulting fully-adaptive spectral estimator attains the rate that is optimal in the idealized Gaussian white noise model up to a logarithmic factor.

Stochastic Reconstruction of Loading Histories from a Rainflow Matrix (1995)
Klaus Dreßler, Michael Hack, Wilhelm Krüger
This paper is devoted to the mathematical description of the solution of the so-called rainflow reconstruction problem, i.e. the problem of constructing a time series with an a priori given rainflow matrix. The algorithm we present is mathematically exact in the sense that no approximations or heuristics are involved. Furthermore it generates a uniform distribution of all possible reconstructions and thus an optimal randomization of the reconstructed series. The algorithm is a genuine on-line scheme. It is easily adjustable to all variants of rainflow, such as symmetric and asymmetric versions and different residue techniques.
Fatigue Lifetime Estimation Based on Rainflow Counted Data Using the Local Strain Approach (1995)
Klaus Dreßler, Michael Hack
In the automotive industry, both the local strain approach and rainflow counting are well known and approved tools in the numerical estimation of the lifetime of a newly developed part. This paper is devoted to the combination of both tools, and a new algorithm is given that takes advantage of the inner structure of the most used damage parameters.

Multiscale Texture Enhancement (1995)
Joachim Weickert
The ideas of texture analysis by means of the structure tensor are combined with the scale-space concept of anisotropic diffusion filtering. In contrast to many other nonlinear diffusion techniques, the proposed one uses a diffusion tensor instead of a scalar diffusivity. This allows true anisotropic behaviour. The preferred diffusion direction is determined according to the phase angle of the structure tensor. The diffusivity in this direction is increasing with the local coherence of the signal. This filter is constructed in such a way that it gives a mathematically well-founded scale-space representation of the original image. Experiments demonstrate its usefulness for the processing of interrupted one-dimensional structures such as fingerprint and fabric images.

Nonlinear Diffusion Scale-Spaces: From the Continuous to the Discrete Setting (1995)
Joachim Weickert
A survey on continuous, semidiscrete and discrete well-posedness and scale-space results for a class of nonlinear diffusion filters is presented. This class does not require any monotony assumption (comparison principle) and, thus, allows image restoration as well. The theoretical results include existence, uniqueness, continuous dependence on the initial image, maximum-minimum principles, average grey level invariance, smoothing Lyapunov functionals, and convergence to a constant steady state.

Symmetric subgroups of rational groups of hermitian type (1995)
Bruce Hunt

Modular subvarieties of arithmetic quotients of bounded symmetric domains (1995)
Bruce Hunt
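Several of the abstracts above concern nonlinear diffusion filtering of images. As a very rough illustration of the basic idea only, here is one explicit step of a scalar-diffusivity (Perona-Malik-type) diffusion in Python; the anisotropic, tensor-valued filters described in the abstracts are considerably more involved than this sketch, and the step size and contrast parameter below are arbitrary.

# One explicit step of nonlinear diffusion with a scalar diffusivity (illustration only).
import numpy as np

def diffusion_step(u, dt=0.2, lam=4.0):
    dN = np.roll(u, -1, axis=0) - u         # finite differences to the four neighbours
    dS = np.roll(u,  1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u,  1, axis=1) - u
    g = lambda d: np.exp(-(d / lam) ** 2)   # diffusivity decreases with local contrast
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)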
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/15997/start/0/rows/10/doctypefq/article/sortfield/year/sortorder/asc","timestamp":"2014-04-20T21:50:12Z","content_type":null,"content_length":"42751","record_id":"<urn:uuid:a758e800-bacb-45a5-9035-a67c74e22da9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Ritz method From Encyclopedia of Mathematics A method for solving problems in variational calculus and, in general, finite-dimensional extremal problems, based on optimization of a functional on finite-dimensional subspaces or manifolds. Let the problem of finding a minimum point of a functional Banach space Complete system), be given (a so-called coordinate system). In the Ritz method, the minimizing element in the are defined by the condition that This is equivalent to the problem of finding a minimum point of the quadratic functional which can be written in the form are determined from the linear system of equations One can also arrive at a Ritz approximation without making use of the variational statement of the problem (1). Namely, by defining the approximation (2) from the condition (the Galerkin method), one arrives at the same system of equations (3). That is why the Ritz method for equation (1) is sometimes called the Ritz–Galerkin method. Ritz's method is widely applied when solving eigenvalue problems, boundary value problems and operator equations in general. Let Completely-continuous operator). By virtue of the above requirements, consists of positive eigenvalues: Ritz's method is based on a variational determination of eigenvalues. For instance, by carrying out minimization only over the subspace and the vector of coefficients where [2]). W. Ritz [4] proposed his method in 1908, but even earlier Lord Rayleigh had applied this method to solve certain eigenvalue problems. In this connection the Ritz method is often called the Rayleigh–Ritz method, especially if one speaks about solving an eigenvalue problem. [1] M.M. Vainberg, "Variational method and method of monotone operators in the theory of nonlinear equations" , Wiley (1973) (Translated from Russian) [2] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations" , Wolters-Noordhoff (1972) (Translated from Russian) [3] S.G. [S.G. Mikhlin] Michlin, "Variationsmethoden der mathematischen Physik" , Akademie Verlag (1962) (Translated from Russian) [4] W. Ritz, "Ueber eine neue Methode zur Lösung gewisser Variationsprobleme der mathematischen Physik" J. Reine Angew. Math. , 135 (1908) pp. 1–61 [a1] G.H. Golub, C.F. van Loan, "Matrix computations" , Johns Hopkins Univ. Press (1989) [a2] G.J. Fix, "An analyse of the finite element method" , Prentice-Hall (1973) [a3] J. Stoer, R. Bulirsch, "Einführung in die numerische Mathematik" , II , Springer (1978) [a4] P.G. Ciarlet, "The finite element method for elliptic problems" , North-Holland (1975) How to Cite This Entry: Ritz method. G.M. Vainikko (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Ritz_method&oldid=19210 This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
{"url":"http://www.encyclopediaofmath.org/index.php/Ritz_method","timestamp":"2014-04-16T13:04:20Z","content_type":null,"content_length":"29722","record_id":"<urn:uuid:3fdc2f2d-173c-4d7c-90cc-01636f216ed3>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Space Travel and index . chapter one . chapter two How will Space Travel and Space Commerce be conducted when the best propulsion systems are constant acceleration, slower than light, drives? by Roger Bourke White Jr., copyright 1997, August 2002 Chapter One It takes a long time to travel from one star system to another. Even at the speed of light, it takes four years to travel from our solar system to the Alpha Centuri, the next nearest solar system. For most science fiction writers, this is too long, so they fabricate some sort of Faster-than-light (FTL) technology so their story line can advance. (Warp, sub-space and hyperspace drives being some perennial favorite names.) The problem with this is: there is no FTL technology in our real world foreseeable future, and no theoretical breakthrough that would permit one. In our real world, the fastest ships that can be devised with known sciences and technologies will be constant acceleration ships that can rapidly approach the speed of light, but never surpass it. What will space travel and space commerce be like with Slower-than-light (STL), constant acceleration propulsion? The Basics: how long will it take to get from place to place? How long will it take to get from place to place? There are two answers to this question: how long will it take in the planetary frame of reference, and how long will it take in the space ship frame of reference? The answers are quite different. What an observer will see from a planetary frame of reference is a ship that accelerates rapidly to near the speed of light, then continues to slowly approach the speed of light until midway through the journey, when it reverses the process and starts slowing down. The calculation of how long this journey will take can be simplified by assuming that the trip consists of three parts: 1. An accelerating to light speed part 2. A traveling at light speed part 3. A decelerating from light speed part Here is the basic math for this simplified calculation: #1 G=32 ft/sec-sec (Earth's gravity) = 1 LY/yr-yr This is a simple conversion of units of Earth's gravitational force from those good for Earthly problems (ft/sec-sec) to those good for interstellar space problems (LY/yr-yr). If a ship accelerates at one G, then it will reach light speed in a year, and have traveled half a light year. So, to get up to light speed, and back down to zero speed (phases 1 and 3), takes two years, and during that time the ship travels one light year. The rest of the journey takes place at light speed. So, a journey to Alpha Centuri takes: Two years to start, stop, and travel one light year plus... Three years to travel the remaining distance. Total: Five years. A trip to Sirius, twelve light years distant, takes thirteen years. A trip to the Galactic Core, 30,000 light years distant, takes 30,000 years. Another curiosity emerges: if the ship accelerates at 2G instead of one G, how much does that cut travel time? The answer is: hardly at all! That part of the journey which is taking place at near light speed (phase 2) is unaffected. Journey time to somewhere nearby, such as Alpha Centuri, will be measurably affected, but a journey even to somewhere as distant as Serius will show little change in travel time. A very simple rule of thumb emerges for calculating the travel times of constant acceleration ships -- as seen from planetary perspective: Travel time = Take the distance in light years, and add a year. 
Travel time in ship reference The travel time in the ship time reference looks quite different. Travel time looks Newtonian, and acceleration does make quite a difference in travel time. These same journeys experienced from inside the space ship are covered by a completely different formula: it's simply Newtonian d=1/2att. The space ship flying to Alpha Centuri would experience a flight of: 1/2 the journey: t=sqrt(2*2LY/1LY/yr-yr)=2 , full journey = 4 years The trip to Sirius would be: 1/2 the journey: t=sqrt(2*6LY/1LY/yr-yr)=3.5 , full journey = 7 years The galactic center would be: 1/2 the journey: t=sqrt(2*15,000LY/1LY/yr-yr)=170 , full journey = 340 years Note that from the space ship reference acceleration is important to journey time, and note that in ship times these journeys are always much faster than they appear in planetary reference. Here is a table of these same journeys being undertaken in different acceleration regimes. │ Destination │ 1G │ 2G │ 5G │ 10G │ Planet time │ │ Alpha Centauri │ 4 │ 2.8 │ 1.8 │ 1.3 │ 5 │ │ Sirius │ 7 │ 5 │ 3 │ 2.2 │ 13 │ │ Galactic Core │ 340 │ 244 │ 155 │ 110 │ 30,000 │ index . chapter one . chapter two . wikipedia article
{"url":"http://www.whiteworld.com/technoland/stories-nonfic/2008-stories/STL-commerce-01.htm","timestamp":"2014-04-17T15:28:32Z","content_type":null,"content_length":"7039","record_id":"<urn:uuid:e9647732-df5a-429d-bd67-fe25de4869fe>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: why does x^2+y^2 = 4-c have no real solutions? (book example about functions of several variables and graphing functions) • one year ago • one year ago Best Response You've already chosen the best response. This equation could actually have real solutions. It depends on the value of c. The equation of a circle with midpoint (0,0) and radius r is: x²+y²=r² This means, as long as 4-c in your equation is positive, (which means c <4) all the points on the circle with midpoint (0,0) and radius sqrt(4-c) are (real) solutions. If c=4, then only the pair x=0, y=0 is a solution. If c>4, there are no real solutions. Best Response You've already chosen the best response. That makes sense, but if I consider all solutions on one graph, would that make the valid ones, not valid? Does my question make sense? Best Response You've already chosen the best response. I'm not really sure what you mean, but if it is required that c can be any real value, then the conclusion remains: for some c there are solutions, for other there are none. For each of the values of c that are valid (c<4) you can draw graphs of these solutions. In the image you see the graphs of the solutions for c = {-5,-4,-3,...,3,4} The largest circle is the graph for c=-5, the origin is the solution when c=4. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/510df6bbe4b09cf125bccf14","timestamp":"2014-04-17T04:23:36Z","content_type":null,"content_length":"34174","record_id":"<urn:uuid:db24be43-0a05-46c4-bc05-20bcf6fb96d6>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Dunwoody, GA Algebra Tutor Find a Dunwoody, GA Algebra Tutor ...I am currently steadily trying to get into a GATAPP Program. I am a partner in a family owned business. The business has been in operation for over sixteen years and is an accounting outfit where we work for various small businesses in several states. 18 Subjects: including algebra 1, reading, writing, accounting ...When students realize that all of algebra follows rules that they already know, they can usually relax and have fun with it. Geometry is the subject where math teachers bring in more abstract concepts and many students are left behind. This is great for tutoring because there are a limited number of equations to learn and everything can be demonstrated by real world objects and 17 Subjects: including algebra 2, algebra 1, chemistry, physics ...Abigail was recently the Tutor of the Month for another tutoring website! "Abigail really knows her stuff. I can't say enough great things about her. She always makes herself available and has infinite patience and knowledge in Math. 22 Subjects: including algebra 1, algebra 2, reading, calculus ...I have recommendations from both. I home schooled three boys in Algebra II. I have also successfully tutored students in Algebra II. 15 Subjects: including algebra 2, algebra 1, chemistry, statistics ...All the best, ArisOne of the most difficult classes, calculus can be a killer. I probably have taught and tutored this subject more than any other, having taught it for the last 15 years without a break and having graded the AP calculus test a few years back. My Ph.D is in the area of Partial ... 20 Subjects: including algebra 2, algebra 1, calculus, statistics Related Dunwoody, GA Tutors Dunwoody, GA Accounting Tutors Dunwoody, GA ACT Tutors Dunwoody, GA Algebra Tutors Dunwoody, GA Algebra 2 Tutors Dunwoody, GA Calculus Tutors Dunwoody, GA Geometry Tutors Dunwoody, GA Math Tutors Dunwoody, GA Prealgebra Tutors Dunwoody, GA Precalculus Tutors Dunwoody, GA SAT Tutors Dunwoody, GA SAT Math Tutors Dunwoody, GA Science Tutors Dunwoody, GA Statistics Tutors Dunwoody, GA Trigonometry Tutors Nearby Cities With algebra Tutor Alpharetta algebra Tutors Chamblee, GA algebra Tutors Decatur, GA algebra Tutors Doraville, GA algebra Tutors Duluth, GA algebra Tutors Johns Creek, GA algebra Tutors Mableton algebra Tutors Norcross, GA algebra Tutors North Springs, GA algebra Tutors Roswell, GA algebra Tutors Sandy Springs, GA algebra Tutors Smyrna, GA algebra Tutors Snellville algebra Tutors Tucker, GA algebra Tutors Woodstock, GA algebra Tutors
{"url":"http://www.purplemath.com/dunwoody_ga_algebra_tutors.php","timestamp":"2014-04-19T15:02:46Z","content_type":null,"content_length":"23847","record_id":"<urn:uuid:f38cbf31-4907-4a31-9bf7-d5d3c3654193>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Brackets Indices Division Multiplication Addition Subtraction BIDMAS is a mnemonic to help us to remember the correct order in which to do calculations. If we had, say, 7 + 3 x 2 and did the calculation from left to right 7 + 3 = 10. We then multiply by 2 and the answer is 20 This answer is wrong because we did the calculation in the wrong order. BIDMAS tells us that we must do multiplication before addition. So, 3 x 2 is 6. We can add the 7 now to get 13. By using BIDMAS we get the correct answer. Always resolve Brackets first. Then Indices. Division and/or Multiplication next and finally Addition and/or Subtraction.
{"url":"http://www.bidmas.com/","timestamp":"2014-04-17T00:57:03Z","content_type":null,"content_length":"2034","record_id":"<urn:uuid:95fa1afd-3faf-42cf-bb64-ab805052eb81>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Tips for Success in Undergraduate Math Courses by Jessica Purcell. Some very good advice for college calculus students. Read this carefully and do as it suggests. Common Errors in Undergraduate Mathematics by Eric Schechter The calculus page problems list problems by D. A. Kouba at UC Davis Assorted Applets The Web page contains a plethora of interesting resources to help you visualize important mathematical concepts. Most of these are Java applets, which run interactively as plug-ins in your web browser. To use them you must have Java enabled in your browser preferences - by default, it probably is already. • Some Java applets from MIT's OpenCourseWare page for 18.013A: • Function and Derivative Animations, by Przemyslaw Bogacki and Gordon Melrose (.avi files, can play e.g. with Windows Media® Player) • Java applets of secant lines and tangent lines (from IES, Manipula Math): • Secant lines for a function with two non-differentiable points (applet by Daniel J. Heath) • Animations of secant lines approaching (or not approaching) tangent lines (by Douglas N. Arnold): • Constructing functions that are continuous but nowhere differentiable (!), applet from Maths Online • Chain rule applet (from a multimedia calculus course by Scott Sarra) • First and second derivatives applet (by Scott Sarra) • More first and second derivatives, with parameters you can tweak (applet from Maths Online) • Derivatives of a^x, sin x, cos x (applets by Daniel J. Heath) • Converging to the number e (applet from IES, Manipula Math); note that the simulation doesn't let you go far enough to approach that close to e • Zooming in on a tangent line (animation by Douglas N. Arnold) • Linear approximation of sin x at 0 (applet from UBC Calculus Online) • Finding a function's extremum, applet from Maths Online • Rolle's theorem and the mean value theorem (applet from IES, Manipula Math) • Some nice integral applets (by Daniel J. Heath): • Numerical Integration Simulation (by Joseph L. Zachary) • Some applets on volumes of solids (from IES, Manipula Math): • Direction field applet (by Scott Sarra) • Yet another direction field applet (from IES, Manipula Math) • Parametric equation applet (by Scott Sarra) • Another parametric equation applet (from IES, Manipula Math) • Cycloid animation (AVI) (by Przemyslaw Bogacki and Gordon Melrose) • Cycloid applet (from Maths Online) • Computing arc length (AVI) (animation by Przemyslaw Bogacki and Gordon Melrose) • Approximating arc length (applet by Daniel J. Heath) • Polar curve applet (from IES, Manipula Math) • Several polar curve animations (by Przemyslaw Bogacki and Gordon Melrose): □ limacons: □ rose curves: ☆ r = 3 cos(3t) (AVI) ☆ r = 5 sin(2t) (AVI) □ spiral: r = t (AVI) □ circle: r = cos t (AVI) • Converging and diverging series animations (by Przemyslaw Bogacki and Gordon Melrose): • Taylor approximations (applet by Daniel J. Heath) And just for the fun of it, there's the Quadratic Formula Song (WMA - 1.1 MB).
{"url":"http://ocw.mit.edu/courses/mathematics/18-01-single-variable-calculus-fall-2006/related-resources/","timestamp":"2014-04-21T15:11:19Z","content_type":null,"content_length":"38053","record_id":"<urn:uuid:71d6dd2e-763c-4707-892e-fff8f056d6af>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
need help!! May 23rd 2006, 12:43 AM need help!! how can i proof :( : $<br /> v=(mu/a)^{1/2} (1+e cosE/1-e cosE)^{1/2}<br />$ where $v =(v_{Ap}^2 + v_{peri}^2)^{1/2}$ if the planet orbit on ellipse ??? May 23rd 2006, 02:57 AM Originally Posted by sweet how can i proof :( : $<br /> v=(mu/a)^{1/2} (1+e cosE/1-e cosE)^{1/2}<br />$ where $v =(v_{Ap}^2 + v_{peri}^2)^{1/2}$ if the planet orbit on ellipse ??? A bit more explanation of what the symbols stand for is needed, also we are guessing what you have been asked to prove, I assume it is: $<br /> v=(mu/a)^{1/2} \left(\frac{1+e\ \cos(E)}{1-e\ \cos(E)}\right)^{1/2}<br />$ but that leaves us guessing is $mu$ supposed to be $\mu$? Is $e$ the eccentricity? $a$ the semi-major axis? Also what is $E$? We can guess but it is just making it more work for the helpers here. May 23rd 2006, 04:23 AM ok ;) mu = $<br /> \mu<br />$ e= the eccentricity a= the semi-major axis E=the eccentric anomaly we want to proof $<br /> v=(\mu/a)^{1/2} \left(\frac{1+e\ \cos(E)}{1-e\ \cos(E)}\right)^{1/2}<br />$ if the planet orbit on ellipse ??? $<br /> v =(v_{Ap}^2 + v_{peri}^2)^{1/2}<br />$ Thanks for helping me! May 23rd 2006, 06:21 AM Originally Posted by sweet ok ;) mu = $<br /> \mu<br />$ e= the eccentricity a= the semi-major axis E=the eccentric anomaly we want to proof $<br /> v=(\mu/a)^{1/2} \left(\frac{1+e\ \cos(E)}{1-e\ \cos(E)}\right)^{1/2}<br />$ if the planet orbit on ellipse ??? $<br /> v =(v_{Ap}^2 + v_{peri}^2)^{1/2}<br />$ Thanks for helping me! Since the eccentric anomaly is a variable depending on the point of the planet in its orbit and v as defined here is a constant, as are e mu and a this cannot be true for almost all values of eccentricity. Or have I misunderstood something? May 23rd 2006, 07:53 AM E divined like in the graph and v isn't constant it's a Velocity May 23rd 2006, 08:41 AM Originally Posted by sweet E divined like in the graph and v isn't constant it's a Velocity What's this: $<br /> v =(v_{Ap}^2 + v_{peri}^2)^{1/2}<br />$ May 23rd 2006, 09:08 AM $v_{AP}$ is Aphelion Velocity $v_{peri}$ is pericentre Velocity and v is aggregate of $v_{AP} ,v_{peri}$ $<br /> v^2=v_{peri}^2+v_{AP}^2<br />$ May 23rd 2006, 09:11 AM Originally Posted by sweet $v_{AP}$ is Aphelion Velocity $v_{peri}$ is pericentre Velocity and v is aggregate of $v_{AP} ,v_{peri}$ $<br /> v^2=v_{peri}^2+v_{AP}^2<br />$ which would make it a constant velocity. May 23rd 2006, 09:15 AM Originally Posted by CaptainBlack which would make it a constant velocity. why u say it's constant :confused: :confused: May 23rd 2006, 12:44 PM Originally Posted by sweet $v_{AP}$ is Aphelion Velocity $v_{peri}$ is pericentre Velocity and v is aggregate of $v_{AP} ,v_{peri}$ $<br /> v^2=v_{peri}^2+v_{AP}^2<br />$ I know what a "perihelion" velocity would be. What is a "pericenter" velocity? May 23rd 2006, 12:52 PM Originally Posted by topsquark I know what a "perihelion" velocity would be. What is a "pericenter" velocity? i'm sorry it's Perihelion velocity not pericenter velocity :rolleyes: May 24th 2006, 12:45 PM Originally Posted by sweet i'm sorry it's Perihelion velocity not pericenter velocity :rolleyes: In that case I agree with CaptainBlack. The perihelion and aphelion speeds are constant (meaning they don't vary with how many orbits have occurred.) Thus v^2 as you have defined it will also be constant. Since the eccentric anomaly changes over the course of the orbit (if I'm reading your diagram correctly) there must be an error in your formula. 
May 24th 2006, 03:21 PM Originally Posted by topsquark In that case I agree with CaptainBlack. The perihelion and aphelion speeds are constant (meaning they don't vary with how many orbits have occurred.) Thus v^2 as you have defined it will also be constant. Since the eccentric anomaly changes over the course of the orbit (if I'm reading your diagram correctly) there must be an error in your formula. ok i agree with u that v is aconstant ...but the problem is how can i proof the formula $<br /> v=(mu/a)^{1/2} \left(\frac{1+e\ \cos(E)}{1-e\ \cos(E)}\right)^{1/2}<br />$ it,s verey hard .... May 25th 2006, 03:37 AM Originally Posted by sweet ok i agree with u that v is aconstant ...but the problem is how can i proof the formula $<br /> v=(mu/a)^{1/2} \left(\frac{1+e\ \cos(E)}{1-e\ \cos(E)}\right)^{1/2}<br />$ it,s verey hard .... v is constant but E is not. The only way that formula can work is if you are calculating v for a particular point on the orbit (that is to say for a particular value of E), which we are not May 25th 2006, 09:29 AM Originally Posted by topsquark v is constant but E is not. The only way that formula can work is if you are calculating v for a particular point on the orbit (that is to say for a particular value of E), which we are not No No No. v is now the speed of the planet as a function of eccentric anomaly.
{"url":"http://mathhelpforum.com/advanced-math-topics/3077-need-help-print.html","timestamp":"2014-04-20T14:08:38Z","content_type":null,"content_length":"23166","record_id":"<urn:uuid:76afad7b-e120-4b1e-80ed-7d21dea07d39>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help February 23rd 2010, 06:44 AM #1 Feb 2010 The speed (r) of a race car around a track varies inversely as the time it takes to go one lap around the track. When Car 44 is traveling 170 mph, it takes him 1.5 mins to complete one lap around the track. If he can complete a lap in 1.45 mins, how fast would he be traveling? The distance a stone falls when dropped off a cliff is proportional to the square of the time it falls. If the stone falls 64.4 ft in 2 secs, how far would it have fallen in 3 secs? No entiendo. I don't understand. Thanks for any help! if stone fall 64.4 ft in 2 sec x ft would be in three seconds 64.4ft : 2 = x : 3 from this we got The speed (r) of a race car around a track varies inversely as the time it takes to go one lap around the track. When Car 44 is traveling 170 mph, it takes him 1.5 mins to complete one lap around the track. If he can complete a lap in 1.45 mins, how fast would he be traveling? The distance a stone falls when dropped off a cliff is proportional to the square of the time it falls. If the stone falls 64.4 ft in 2 secs, how far would it have fallen in 3 secs? No entiendo. I don't understand. Thanks for any help! For 2 The distance is not "linearly" proportional to time, it's proportional to the square of the time elapsed since release, as gravity accelerates the stone to the cliff base or water. $distance\ fallen=kt^2$ After 3 seconds $d=16.1(3^2)\ feet$ 1) if car traveling 170 mph complete lap for 1.5 or(90 sec) for x speed (mph) he would travel the lap 1.45(87 sec) 170mph : 90 s = x mph : 87 s 90x = 170 * 87 90x = 14790mph*s x = 14790 mph*s/90s x = 164.33 mph... here it is ....this should be fine Archie Meade that was nice...didnt readed well...anyway i cant speak so well english and dont know good math terms ... but i checked now ..by the way thanks... i dont know if this second answer is correct Inverse relationship... $r=\frac{k}{t}$ $170=\frac{k}{\left(\frac{1.5}{60}\right)}$ by converting minutes to hours. $r=\frac{4.25}{\left(\frac{1.45}{60}\right)}=\frac{ 60(4.25)}{1.45}$ mph $\frac{distance}{time}=average\ speed$ $(av\ speed)time=distance$ distance = lap, so it's the same both times If the racecar takes a shorter time, it's average speed must have increased. Hi icefirez, you are almost there! but you left out an important piece at the start.... Distance is the same for both laps, so February 23rd 2010, 09:09 AM #2 Feb 2010 February 23rd 2010, 09:22 AM #3 MHF Contributor Dec 2009 February 23rd 2010, 09:26 AM #4 Feb 2010 February 23rd 2010, 09:31 AM #5 Feb 2010 February 23rd 2010, 09:45 AM #6 MHF Contributor Dec 2009 February 23rd 2010, 11:53 AM #7 MHF Contributor Dec 2009
{"url":"http://mathhelpforum.com/pre-calculus/130319-explain.html","timestamp":"2014-04-17T08:44:04Z","content_type":null,"content_length":"51379","record_id":"<urn:uuid:e282bb05-ee7e-4de4-94ac-1483b3310342>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
FGA explained FGA explained One of the series of Grothendieck’s works is FGA (see entry EGA for overall description of EGA, FGA and SGA). A summer school in Trieste 2003, has tried to summarize some of the main historical breakthroughs of FGA in modern exposition. The proceedings of that school are an updated version of freely available materials on the ICTP web. • Barbara Fantechi, Lothar Göttsche, Luc Illusie, Steven L. Kleiman, Nitin Nitsure, Angelo Vistoli, Fundamental algebraic geometry. Grothendieck’s FGA explained, Mathematical Surveys and Monographs 123, Amer. Math. Soc. 2005. x+339 pp. MR2007f:14001 Revised on September 1, 2010 09:59:47 by Zoran Škoda
{"url":"http://www.ncatlab.org/nlab/show/FGA+explained","timestamp":"2014-04-17T18:36:49Z","content_type":null,"content_length":"14734","record_id":"<urn:uuid:388a0a58-e82a-4aa9-8c19-d24ea9b73a8f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Batch point-in-polygon operations on GeoJSON files. npm install points2polygons Want to see pretty graphs? Log in now! 6 downloads in the last week 12 downloads in the last month Given a list of polygons and points, points2polygons will determine if each point is inside a polygon, using point-in-polygon. If found, the point will be added to the polygon. For example: Given a list of polygons, polygons.geojson: "type": "FeatureCollection", "features": [ "type": "Feature", "properties": { }, "geometry": { "type": "Polygon", "coordinates": [ and a list of points, points.csv: address, lat, lon 111 Point Lane, 1, 1 points2polygons will assign points to matching polygons, and generate a new GeoJSON file: "type": "FeatureCollection", "features": [ "type": "Feature", "properties": { "points": [ "type": "Feature", "properties": { "address": "111 Point Lane" "geometry": { "type": "Point", "coordinates": [1,1] "geometry": { "type": "Polygon", "coordinates": [ Points with no polygons will be placed in their own GeoJSON file, orphans.geojson. points2polygons can perform sum and count aggregations. Let me explain. Say you have a town.geojson: "type": "Feature", "properties": { "name": "Polygonville" "geometry": { "type": "Polygon", "coordinates": [ and houses.csv: address, color, value, latitude, longitude 111 Point Lane, red, 100, 1, 1 222 Point Lane, green, 200, 2, 2 333 Point Lane, red, 300, 3, 3 Running points2polygons --polygons town.geojson --points houses.csv will correctly place the houses in our town, and generate something like this: "type": "FeatureCollection", "features": [ "type": "Feature", "properties": { "name": "Polygonville", "points": [ // a house "type": "Feature", "properties": { "address": "111 Point Lane", "color": "red", "value": "100" "geometry": { "type": "Point", "coordinates": [1,1] // another house "type": "Feature", "properties": { "address": "222 Point Lane", "color": "green", "value": "200" "geometry": { "type": "Point", "coordinates": [2,2] "geometry": { "type": "Polygon", "coordinates": [ But what I really want is a count of red and green houses in my town. In other words, I want to by color. Use the --count param. Running points2polygons --polygons town.geojson --points houses.csv --count color generates something like this: "type": "FeatureCollection", "features": [ "type": "Feature", "properties": { "name": "Polygonville", "green": 1, // there is one green house in the town "red": 2 // there are two red houses in the town "geometry": { "type": "Polygon", "coordinates": [ properties doesn't contain points anymore, only the aggregation result. Pretty incredible! But what I really want is a total of house values, by color. In other words, I want to value, and group by color. Use the --groupBy and --sum params. Running points2polygons --polygons town.geojson --points houses.csv --groupBy color --sum value generates something like this: "type": "FeatureCollection", "features": [ "type": "Feature", "properties": { "name": "Polygonville", "green": 200, // this town's green houses are worth a total of 200 "red": 400 // this towns's red houses are worth a total of 400 "geometry": { "type": "Polygon", "coordinates": [ Pretty incredible! npm install points2polygons Using it as a console utility ➜ points2polygons Batch point-in-polygon operations. Creates a GeoJSON file of polygons containing points. 
Usage: points2polygons -y, --polygons a GeoJSON file of polygons [required] -t, --points a CSV file of points [required] -i, --latitude latitude field [default: "latitude"] -e, --longitude longitude field [default: "longitude"] -d, --delimiter delimiter character [default: ","] -o, --output a GeoJSON file of polygons containing points [default: "output.json"] -c, --count aggregate points and count - group by this field [default: null] -g, --groupBy aggregate points and sum - group by this field [default: null] -s, --sum aggregate points and sum - sum this field [default: null] Using it as a library .batch(polygons, points, showProgress, count, groupBy, sum) • polygons: (required) a GeoJSON object of polygons. • points: (required) a GeoJSON object of points. • showProgress: (optional) a callback that gets fired per point processed, and receives the current point index. • count: (optional) if provided, will aggregate points and count this field. See example. • groupBy: (optional) if provided, will aggregate points and sum, grouping by this field. See example. • sum: (optional) if provided, will aggregate points and sum, summing by this field. See example. Returns an object with two properties: • polygons: same as input, but each polygon has a points property containing corresponding points. • orphans: a GeoJSON object containing points with no polygons. See also
{"url":"https://www.npmjs.org/package/points2polygons","timestamp":"2014-04-18T06:14:44Z","content_type":null,"content_length":"16837","record_id":"<urn:uuid:5c801aa7-485e-42e8-9f1b-21716a7307ff>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Angular Momentum Angular Momentum: An Illustrated Guide to Rotational Symmetries for Physical Systems, Volume 1 William Jackson Thompson Develops angular momentum theory in a pedagogically consistent way, starting from the geometrical concept of rotational invariance. Uses modern notation and terminology in an algebraic approach to derivations. Each chapter includes examples of applications of angular momentum theory to subjects of current interest and to demonstrate the connections between various scientific fields which are provided through rotations. Includes Mathematica and C language programs. We haven't found any reviews in the usual places. References from web pages ANGULAR MOMENTUM. An Illustrated Guide to Rotational. Symmetries for Physical Systems. WILLIAM. J. THOMPSON. University of North Carolina ... doi.wiley.com/ 10.1002/ 9783527617821.fmatter Angular Momentum : an Illustrated Guide to Rotational Symmetries ... Estamos abrindo esta discussão para conversarmos sobre a obra ANGULAR MOMENTUM : AN ILLUSTRATED GUIDE TO ROTATIONAL SYMMETRIES FOR PHYSICAL SYSTEMS ... forum.comprar-livro.com.br/ 1_117748_0.html Bibliographic information
{"url":"http://books.google.com.au/books?id=O25fXV4z0B0C&pg=PA5","timestamp":"2014-04-18T21:14:50Z","content_type":null,"content_length":"102646","record_id":"<urn:uuid:82c9cf73-bf62-4711-b3ec-963a3fb8b20c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Brentwood, MD Precalculus Tutor Find a Brentwood, MD Precalculus Tutor ...I have also worked with high school students on Math and Science. While working on my Molecular Biology BS from Johns Hopkins University, I tutored college students on Math (including Calculus) and Science (including Chemistry). I have worked with individual students and small groups. I like to... 40 Subjects: including precalculus, English, reading, chemistry ...I offer tutoring for any high school math subject up to and including AP Calculus AB and BC. I also help students improve their scores for the quantitative portions of the SAT and ACT. I did very well on the math portion of the SAT by scoring 720. 11 Subjects: including precalculus, calculus, geometry, algebra 1 ...I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated. I was born and raised in Seoul, Korea where my parents still live. I came to the States to finish high school and college. 17 Subjects: including precalculus, chemistry, physics, calculus ...I have tutored high school and college level math. I understand all the concepts well and can explain them in a manner in which they make sense. I have degrees in Electrical Engineering and 23 Subjects: including precalculus, chemistry, Spanish, calculus I have tutored students in Loudoun County for the past 6 years in all levels of math through A/P Calculus, Chemistry, A/P Chemistry, Physics and SAT / ACT prep. My undergraduate is in Chemical Engineering from Virginia Tech, and my MBA is in marketing from Wharton. I truly enjoy working with students and believe that each one has specific learning styles. 17 Subjects: including precalculus, chemistry, calculus, geometry Related Brentwood, MD Tutors Brentwood, MD Accounting Tutors Brentwood, MD ACT Tutors Brentwood, MD Algebra Tutors Brentwood, MD Algebra 2 Tutors Brentwood, MD Calculus Tutors Brentwood, MD Geometry Tutors Brentwood, MD Math Tutors Brentwood, MD Prealgebra Tutors Brentwood, MD Precalculus Tutors Brentwood, MD SAT Tutors Brentwood, MD SAT Math Tutors Brentwood, MD Science Tutors Brentwood, MD Statistics Tutors Brentwood, MD Trigonometry Tutors Nearby Cities With precalculus Tutor Berwyn Heights, MD precalculus Tutors Bladensburg, MD precalculus Tutors Colmar Manor, MD precalculus Tutors Cottage City, MD precalculus Tutors Edmonston, MD precalculus Tutors Fairmount Heights, MD precalculus Tutors Hyattsville precalculus Tutors Mount Rainier precalculus Tutors North Brentwood, MD precalculus Tutors Riverdale Park, MD precalculus Tutors Riverdale Pk, MD precalculus Tutors Riverdale, MD precalculus Tutors Seat Pleasant, MD precalculus Tutors University Park, MD precalculus Tutors West Hyattsville, MD precalculus Tutors
{"url":"http://www.purplemath.com/Brentwood_MD_Precalculus_tutors.php","timestamp":"2014-04-17T04:16:09Z","content_type":null,"content_length":"24380","record_id":"<urn:uuid:f9f0aeb1-31df-47fa-ab82-5092a1ae47f6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
¿Quién Inventó el Punto Decimal? 9. Who invented the decimal point? The decimal point goes along with place-value notation. According to Edward deBono's very comprehensive book place-value notation goes back at least to the Sumerians in Babylonia in the 18th century BCE, who wrote numbers in base 60 with cuneiform script. They had no zero symbol, however, merely leaving a space where a zero should be. This source claims that Indian mathematicians picked up the Babylonian place-value idea and adapted it to decimal notation. Quoting deBono: Indian mathematicians simplified the Babylonian number notation and changed from base 60 to base 10, thus creating the modern decimal system. Very little evidence exists of the chronology of Indian number symbols but it seems that, like the Babylonians, the Indians for a long time saw no need to write a symbol for zero. The earliest example of Indian use of the decimal system with a zero dates from AD 595. The earliest definite reference to the Hindu numerals beyond the borders of India is in a note written by a Mesopotamian bishop, Severus Sebokht, about AD 650, which speaks of `nine signs', not mentioning the zero. By the end of the 8th century, some Indian astronomical tables had been translated at Baghdad and these signs became known to Arabian scholars of the time. In 824, the scholar al-Khwarizmi wrote a small book on numerals, and 300 years later it was translated into Latin by Adelard of Bath. Some historians believe that these number symbols came to Europe even before they arrived in Baghdad, but the oldest European manuscript containing them dates from AD 976 in Spain. From the same source: Far away from the mainstream of Western history, the Mayan culture of Central America, which died out at the end of the 9th century, developed a place-value system of notation with a symbol for zero. Mayan numbers were written vertically and are read from bottom upwards. The Mayans worked in base 20... It is conjectured that the Mayans first used their zero symbols at about the same time as the Babylonians used theirs on the other side of the earth, but the oldest Mayan numerical inscription dates from no earlier than the end of the 3rd century AD. But there's still the question of the decimal point . Francesco Pellos (or Pelizzati) of Nice used a decimal point to indicate division of a number by a power of 10, in his 1492 book on commercial arithmetic. The 16th century German mathematician Bartholomäus Pitiscus (or Petiscus) (1561-1613) uses a decimal point in his book on trigonometry. Tomado de: A Science History Quizz También pueden consultar: Math-History Timeline No comments:
{"url":"http://alejandralopezrodriguez.blogspot.com/2009/10/quien-invento-el-punto-decimal.html","timestamp":"2014-04-19T04:35:37Z","content_type":null,"content_length":"44236","record_id":"<urn:uuid:b410da29-5402-4544-bc9f-c33a60bf89f2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Portability requires multi-parameter type classes Stability provisional Maintainer numericprelude@henning-thielemann.de Safe Haskell Safe-Infered Abstraction of bases of finite dimensional modules class C a v => C a v whereSource It must hold: Module.linearComb (flatten v `asTypeOf` [a]) (basis a) == v dimension a v == length (flatten v `asTypeOf` [a]) basis :: a -> [v]Source basis of the module with respect to the scalar type, the result must be independent of argument, undefined should suffice. flatten :: v -> [a]Source scale a vector by a scalar dimension :: a -> v -> IntSource the size of the basis, should also work for undefined argument, the result must be independent of argument, undefined should suffice. C Double Double C Float Float C Int Int C Integer Integer (C a v0, C a v1) => C a (v0, v1) (C a v0, C a v1, C a v2) => C a (v0, v1, v2) C a => C (T a) (T a) Instances for atomic types Instances for composed types
{"url":"http://hackage.haskell.org/package/numeric-prelude-0.3.0.1/docs/Algebra-ModuleBasis.html","timestamp":"2014-04-19T12:39:58Z","content_type":null,"content_length":"7470","record_id":"<urn:uuid:0ac2a76f-175f-4367-959d-f84369c432b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Sparse matrix-vector multiplication forms the heart of iterative linear solvers used widely in scientific computations (e.g., finite element methods). In such solvers, the matrix-vector product is computed repeatedly, often thousands of times, with updated values of the vector until convergence is achieved. In an SIMD architecture, each processor has to fetch the updated off-processor vector elements while computing its share of the product. In this paper, we report on run-time optimization of array distribution and off-processor data fetching to reduce both the communication and computation time. The optimization is applied to a sparse matrix stored in a compressed sparse row-wise format. Actual runs on test matrices produced up to a 35 percent relative improvement over a block distribution with a naive multiplication algorithm while simulations over a wider range of processors indicate that up to a 60 percent improvement may be possible in some cases.
{"url":"http://www.cs.rpi.edu/research/html/94-13abstract.html","timestamp":"2014-04-16T13:28:04Z","content_type":null,"content_length":"1343","record_id":"<urn:uuid:29122773-0872-4f6c-a47c-288d559d83a8>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the optimal solution to a Linear Program October 7th 2009, 09:31 PM #1 Sep 2009 Seattle, WA Finding the optimal solution to a Linear Program Hi there, It's the beginning of a new quarter and I can tell my basic linear algebra is failing me at this point. I just need help solving a multi-dimensional linear program. So far this is the LP. Maximize: 6*x1 + 3*x2 + 8*x3 + 3*x4 + 9*x5 + 5*x6 Subject to: x1 + x2 <= 480 x3 + x4 <= 400 x5 + x6 <= 230 x1 + x3 + x5 <= 420 x2 + x4 + x6 <= 250 x1, x2, ..., x6 >= 0 From there I have no idea where to go. I know how to solve a two-dimensional LP graphically but I just can't figure this one out. Thanks in advance! :) Last edited by hjhart; October 7th 2009 at 09:34 PM. Reason: Errors Hi there, It's the beginning of a new quarter and I can tell my basic linear algebra is failing me at this point. I just need help solving a multi-dimensional linear program. So far this is the LP. Maximize: 6*x1 + 3*x2 + 8*x3 + 3*x4 + 9*x5 + 5*x6 Subject to: x1 + x2 <= 480 x3 + x4 <= 400 x5 + x6 <= 230 x1 + x3 + x5 <= 420 x2 + x4 + x6 <= 250 x1, x2, ..., x6 >= 0 From there I have no idea where to go. I know how to solve a two-dimensional LP graphically but I just can't figure this one out. Thanks in advance! If you have never done a course that covered solving multidimensional LP problems then you will not get far with this, but here is the Wikipedia article on the simplex algorithm. Also the solver in Excel (and most other spreadsheet programs) will solve LP problems. Thanks CaptainBlack, we've just gotten into the Simplex method this week. :) October 11th 2009, 02:50 AM #2 Grand Panjandrum Nov 2005 October 11th 2009, 11:52 PM #3 Sep 2009 Seattle, WA
{"url":"http://mathhelpforum.com/advanced-applied-math/106797-finding-optimal-solution-linear-program.html","timestamp":"2014-04-17T12:05:09Z","content_type":null,"content_length":"37330","record_id":"<urn:uuid:16493de5-9b1d-490b-b985-67f6ebfbefd9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
Computational Techniques in Theoretical Physics Section 6 Exact Solution in One Dimension Like so many other problems in theoretical physics, the percolation problem can be solved exactly in one dimension, and some aspects of that solution seem to be valid also for higher dimensional Problem and System: • Site percolation on an infinitely long linear chain • `lattice' sites are placed in fixed distances. • Each of these lattice sites is randomly occupied with probability p. Definition of a cluster in 1-dim: • A group of neighboring occupied sites containing no empty site in between. • A single empty site splits the group into two different clusters. • In order that the cluster is separated from the other clusters, the site neighboring the left end of the cluster must be empty; and the same is true for the right end of the cluster. Example of a central 5-cluster. Status of sites: □ All five central sites occupied. □ Two neighbors at both ends empty. Probability of this configuration: The probability of each site being occupied is p. Since all sites are occupied randomly, the probability of two arbitrary sites being occupied is p^2, for three being occupied is p^3, and for five being occupied is p^5. (This product property of the probabilities is valid only for statistically independent events, as for random percolation). The probability of one end having an empty neighbor is (1-p), and the probability for both ends empty is (1-p)^2 as the two ends are statistically independent. Total probability of this configuration: Cluster number per lattice site: If the total chain length is L, with L -> infinity, then the total number of five-clusters, apart from effects from the chain ends, is: L p^5(1-p)^2 We see that it is practical to talk about the number of clusters per lattice site, which is the total number divided by L and thus n[5] = p^5(1-p)^2 This normalized cluster number is thus independent of the lattice size L and equals the probability that a fixed site is the end of a cluster. Cluster number for clusters containing s sites. We define n[s] as the number of such s-clusters per lattice site: n[s] = p^s (1-p)^2 This normalized cluster number is crucial for many of our discussions in two or three dimensions. It equals the probability, in an infinite chain, of an arbitrary site being the left hand end of the cluster. For p<1, the cluster number goes exponentially to zero if the cluster size s goes to infinity. Percolation threshold For p= 1, all sites of the chain are occupied, and the whole chain constitutes one single cluster. For every p smaller than unity, there will be some holes in the chain where a site is not occupied, which means that there is no continuous row of occupied sites, i.e., no one-dimensional cluster, connecting the two ends. In other words, there is no percolating cluster for p below unity. Thus the percolation threshold is unity: p[c] = 1 Relationship between n[s] and p at p< p[c] We have learned that: P + sum [s = 1, 2, 3, ... ] n[s] s = p. For p < pc, P=0, therefore: sum [s = 1, 2, 3, ... 
] n[s] s = p This law can also be checked directly from expression for n[s] = p^s (1-p)^2 and the formula for the geometric series: sum[s=1 -> infinity] p^s (1-p)^2 s = (1-p)^2 sum[s=1 -> infinity] p d(p^s)/dp = (1-p)^2 p [ d (sum[s=1 -> infinity] p^s) ] / dp = (1-p)^2 p [ d (p/(1-p)) ] / dp = p For higher dimension the above equation is also valid except that one has to take into account the sites in the infinite cluster separately, if one does not include them in the sum over all cluster sizes. There this equation is restricted to p < p[c] Even in one dimension at p = p[c] = 1 there is only one cluster covering the whole lattice. Thus s=infinite value and n[s] = 0, which makes the equation undefined at p=1. Average cluster size We have defined cluster size S as: S = ( sum [s = 1 to infinity] n[s] s^2 ) / ( sum [s = 1 to infinity] n[s] s ) Let us now calculate this mean cluster size explicitly: The denominator is simply p, as shown above. The numerator is, by substituting n[s] = p^s (1-p)^2: (1-p)^2 sum [s = 1 to infinity] ( s^2 p^s ) = (1-p)^2 (p d/dp)^2 sum [s = 1 to infinity] ( p^s ) where the trick from our previous derivation is applied twice in order to calculate sum by using suitable derivatives of easier sums. S = (1 + p) / (1 - p) (p < p[c]) The mean cluster size diverges if we approach the percolation threshold. We will obtain similar results in more than one dimension. This divergence is very plausible, for if there is an infinite cluster present above the percolation threshold, then slightly below the threshold one already has very large (though finite) clusters. Thus a suitable average over these sizes is also getting very large, if one is only slightly below the threshold. Correlation function Correlation function or pair connectivity g(r) has been defined as the probability that a site a distance r apart from an occupied site belongs to the same cluster. For r=0 that probability g(0) equals unity. For r=1 the neighboring site belongs to the same cluster if and only if it is occupied; this is the case with probability p. For a site at distance r, this site and the (r-1) sites in between this site and the origin at r=0 must be occupied without exception, which happens with probability pr. Thus: g(r) = p^r for all p and r. For p<1 this correlation function goes to zero exponentially if the distance r goes to infinity: g(r) = exp(-r/xi) xi = - 1/ln p ~ 1/(p[c] - p) The last equality in the above equation is valid only for p close to p[c] = 1 and uses the expansion ln(1-x) = - x for small x. The quantity xi is called the correlation length and we see that it also diverges at the threshold. We will see in higher dimensions that the correlation length is proportional to a typical cluster diameter. This relation is quite obvious here. The length of a cluster with s sites is (s-1), not much different form s if s is large. Thus the average length xi varies as the average cluster size S: S ~ A xi, (p -> p[c]). where A is a constant. Unfortunately, this relation becomes more complicated in higher dimensions. Rather more generally valid is a relation between the sum over all distances r of the correlation function, and the mean cluster size: sum[r] g(r) = S. Beyond 1-dim system: • In one dimensional systems, certain quantities diverge at the percolation threshold, and that the divergence can be described by simple power laws like 1/(p[c] - p), at least asymptotically close to p[c]. The same seems true in higher dimensions where the problems have not been solved exactly. 
• The quantities S and xi have counterparts in higher dimensions for thermal phase transitions. In fluids near their critical point, critical opalescence is observed in light-scattering experiments, since the compressibility (analogous to S) and the correlation length xi diverge there. • One may utilize one-dimensional percolation further by calculating the cluster numbers in finite one-dimensional chains. Then one can check the general concepts of finite-size scaling and • The one-dimensional case is now solved exactly, whereas for the d-dimensional case only small clusters will be treated exactly. There is another case with an exactly known solution, the Bethe
{"url":"http://xin.cz3.nus.edu.sg/group/teach/comphys/sec06.htm","timestamp":"2014-04-21T13:00:18Z","content_type":null,"content_length":"14292","record_id":"<urn:uuid:74659de8-99f1-444c-94b0-a3ad564919bc>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenMx - Advanced Structural Equation Modeling Thu, 12/16/2010 - 17:08 Hi. What am I doing wrong? Hi. What am I doing wrong? I'm using FIML. I supply the matrix names: > standardizeRAM(Fit4238,Amat=Fit4238@output$matrices$s4238.A,Smat=Fit4238@output$matrices$s4238.S,Mmat=Fit4238@output$matrices$s4238.M) Error in standardizeRAM(Fit4238, Amat = Fit4238@output$matrices$s4238.A, : I need either mxRAMObjective or the names of the A and S matrices. I've also tried with quotes around the matrix names. I've also tried: Thu, 12/16/2010 - 17:17 The correct syntax The correct syntax is: standardizeRAM(Fit4238, Amat="A", Smat="S", Mmat="M") Fri, 12/17/2010 - 11:10 Thanks. That does work. Thanks. That does work. Aren't these names the standard names for these matrices? I would have tried these names but it seemed to simple and that something more was needed. Fri, 12/17/2010 - 11:22 Yeah, I'd imagine most people Yeah, I'd imagine most people will use matrices named "A", "S" and "M". I avoided the default because I didn't want to make unnecessary assumptions, and I hadn't come up with a good way to catch when someone had a matrix called "A" and it wasn't the A matrix (for instance, some people use words like "asym" or "arrows"). The function does automatically grab the names when you use the RAMObjective, but not with FIML. Sun, 11/07/2010 - 11:04 Drupal freaked out a bit. Drupal freaked out a bit. Here's the function. Sun, 11/07/2010 - 10:42 Thu, 10/21/2010 - 06:57 Hi Ryne: This will be very Hi Ryne: This will be very helpful for people! Question: Shouldn't rescale <- invSDs[as.numeric(p$row)] * invSDs[as.numeric(p$col)] rescale <- invSDs[p$row] * invSDs[p$col] Best, wishes Thu, 10/21/2010 - 11:06 You're right. I did that as a You're right. I did that as a work-around to mxSummary, which returned the row names as numeric 1:4 and column names as characters 1:4. I now realize that won't work if someone actually specifies dimnames for their A and S matrices. I added a patch that gives the invSDs default names of the numbers 1:length(invSDs) as characters, so that invSDs[1] and invSDs["1"] both work. However, if there are dimnames on the columns of the A matrix, those are populated instead. I'm choosing not to check whether the dimnames of A and the dimnames of S are identical, because they're usually either NULL or autopopulated to be equal through type="RAM". One would have to manually set A and S to have different dimnames. I might as well describe the function a little better. The primary argument is an existing model to be standardized. If it is type="RAM" or uses the mxRAMObjective, you don't have to do anything else. If you don't use the RAM objective (say, you use an algebra to to the RAM matrices), you have to supply the names (as character strings) of the A and S matrices. The output (varied by the argument 'return') is either an mxSummary-style list of standardized parameters and standard errors (return="parameters"), the standardized matrices (return="matrices") or a model with the standardized A and S matrices populated (return="model"). The last option also returns a standardized M matrix, which is all zeros by definition, whereas the parameters and matrices options don't return the M matrices or their free parameters. It should be noted that return="model" only changes the model matrices; no changes are made to the 'output' slot of a returned model, summary(standardizeRAM(model)) will look exactly like summary(model). 
If you want a standardized parameter list a la summary, use standardizeRAM(model).
{"url":"http://openmx.psyc.virginia.edu/thread/718","timestamp":"2014-04-18T19:13:23Z","content_type":null,"content_length":"43586","record_id":"<urn:uuid:9bb8de9e-5c24-4b60-a778-bb501525e923>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
San Juan Capistrano Algebra 2 Tutor ...I use math in my everyday life and even for leisure, I love doing Sudoku and logic problems. I taught math to my three children who are now in college. They also now have a love for math and feel confident in their own math skills. 5 Subjects: including algebra 2, algebra 1, elementary math, prealgebra ...My name is Shadi and I'm an Engineering major student at IVC. I've been tutoring mathematics, chemistry and also physics for almost two years. Right now I'm working as a math tutor at IVC's campus and am also working with a tutoring agency. 7 Subjects: including algebra 2, chemistry, calculus, physics ...I approach my tutoring and teaching as a team effort, not a top down approach. Education is my passion and I welcome new challenges. I prefer problems that require me to think outside the box. 81 Subjects: including algebra 2, reading, English, physics ...I previously lectured at UCI - Mechanical Engineering Department. I currently teach calculus, algebra and statistics at a number of schools in SoCal as well as online. I have completed my CFA Charter exams in 2008 and I am available for tutoring CFA candidates of all three levels. 12 Subjects: including algebra 2, calculus, statistics, algebra 1 ...You should consider becoming my student if: You would like to work with one of very few tutors in the world who has received a perfect 36 on the ACT and a 2400 on the SAT. You would like to work somebody who has not only tutored, but designed curriculum, and coordinated programs for many differ... 36 Subjects: including algebra 2, chemistry, English, Spanish
{"url":"http://www.purplemath.com/san_juan_capistrano_algebra_2_tutors.php","timestamp":"2014-04-21T02:43:46Z","content_type":null,"content_length":"24311","record_id":"<urn:uuid:0c69c885-9faf-45b2-84c5-ba45136f9611>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] T + ~Con(T) Refutes Its Goedel Sentence
Richard Heck richard_heck at brown.edu
Sun Jun 9 11:05:29 EDT 2013

The purpose of this post is simply to make explicit something that emerged from the discussion last week of PA + ~Con(PA), and to correct something silly that I said. The general observation, which is presumably well known but which I've never seen mentioned in the textbooks from which I've taught, is that, for sufficiently strong T, T + ~Con(T) proves the negation of that theory's Goedel sentence. This gives a nice example of why omega-consistency really is needed for Goedel's proof of the first incompleteness theorem, and therefore of why Rosser's improvement really is an improvement.

To fill in a few details, let T be an r.e. theory containing R, and let G(T) be its Goedel sentence (for some choice of a provability predicate). Recall that Goedel's proof of G1 divides into two parts:
(i) If T is consistent, then T does not prove G(T).
(ii) If T is omega-consistent, then T does not prove ~G(T).
Let T~ be T + ~Con(T). Since T~ is consistent if T is, T~ never proves G(T~), provided T is consistent. However, since T~ is omega-inconsistent, Goedel's proof leaves open the possibility that T~ proves ~G(T~). Which, as we shall now see, it will, if T is sufficiently strong (and "contains I\Sigma_1" will be more than sufficient).

Proof: As Arnon observed, if T is sufficiently strong, then T will prove G(T) <--> Con(T). Now T~ proves ~Con(T), trivially. But it also proves Con(T~) --> Con(T), almost equally trivially. So T~ proves ~Con(T~). But then, if T is sufficiently strong, so is T~, so T~ proves G(T~) <--> Con(T~), and so proves ~G(T~).

As I said, nothing terribly surprising, really, but a nice example. Just historically: does anyone know if this was known, say, before Rosser?

One related question: if I am not mistaken, the claims made right before the proof, that are central to G2, can be strengthened: if T is sufficiently strong, then T proves G(U) <--> Con(U), for ANY r.e. theory U. Right?

Richard Heck
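Displayed as a chain of derivabilities, with $\tilde{T}$ standing for T + ~Con(T), the proof above reads as follows (this is only a restatement of the post's argument in LaTeX, not an addition to it):

\begin{align*}
\tilde{T} &\vdash \neg\mathrm{Con}(T) && \text{trivially, since } \tilde{T} = T + \neg\mathrm{Con}(T)\\
\tilde{T} &\vdash \mathrm{Con}(\tilde{T}) \rightarrow \mathrm{Con}(T) && \text{almost equally trivially}\\
\tilde{T} &\vdash \neg\mathrm{Con}(\tilde{T}) && \text{from the previous two lines}\\
\tilde{T} &\vdash G(\tilde{T}) \leftrightarrow \mathrm{Con}(\tilde{T}) && \text{since } \tilde{T} \text{ is sufficiently strong}\\
\tilde{T} &\vdash \neg G(\tilde{T}) && \text{combining the previous two lines}
\end{align*}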
{"url":"http://www.cs.nyu.edu/pipermail/fom/2013-June/017358.html","timestamp":"2014-04-18T10:36:27Z","content_type":null,"content_length":"4684","record_id":"<urn:uuid:3a0fb4b2-1e69-4222-86c7-ea3bfae254f4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
May 2001

Suppose you have an infinitely large sheet of paper (mathematicians refer to this hypothetical object as the plane). You also have a number of different colours - pots of paint, perhaps. Your aim is to colour every point on the plane using the colours available. That is, each point must be assigned one colour. Can you do this so that, for any two points on the plane which are exactly 1cm apart, they are given different colours?

It's not too hard to prove that you can't paint the plane in this way with only 3 colours, no matter how hard you try, and that it can be done with 7 colours. But no-one knows whether it's possible to do it with 4, 5 or 6 colours.

This problem is from the branch of mathematics known as Ramsey Theory. Maybe you can solve it! See if you can prove that no way of painting the plane in 3 colours can work, and try to find a way of doing it with 7 colours - or find out how here if you get stuck.

About the author
Helen Joyce is an assistant editor of Plus.
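A hint for anyone who wants a nudge before following the link (this sketch is the standard one and is not part of the original article): for the 3-colour impossibility, glue two equilateral triangles with 1cm sides along a common edge; if only 3 colours are used, each triangle already uses all three, so the two far corners of the resulting rhombus, which are √3 cm apart, must share a colour. Hence every pair of points √3 cm apart shares a colour, the whole circle of radius √3 cm around any point is a single colour, and that circle contains two points exactly 1cm apart, a contradiction. For the 7-colour painting, tile the plane with regular hexagons slightly less than 1cm across and colour them periodically with 7 colours so that each hexagon touches only the six other colours; with the right hexagon size, two points of the same colour lie either in one hexagon (less than 1cm apart) or in different same-coloured hexagons (more than 1cm apart).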
{"url":"http://plus.maths.org/content/comment/reply/4750","timestamp":"2014-04-19T12:02:15Z","content_type":null,"content_length":"22196","record_id":"<urn:uuid:9bba18c1-742d-4678-a79a-0a9bd068753b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximately known matrix

Question: What linear algebraic quantities can be calculated precisely for a nonsingular matrix whose entries are only approximately known (say, entries in the matrix are all huge numbers, known up to an accuracy of plus or minus some small number)? Clearly not the determinant or the trace, but probably the signature, and maybe some sort of twisted signatures? What is a reference for this sort of stuff? (Numerical linear algebra, my guess for the name of such a field, seems to mean something else.)

3 Answers

Answer 1 (accepted): SVD is stable, and in some sense incorporates all the stable data you can have, so the answer is: "anything you can see on the SVD". Specifically, you can easily see the signature (assuming the matrix is far enough from being singular).
Could you add some details? What do you mean by "SVD is stable"? – Daniel Moskovich Dec 9 '09 at 10:11
I mean that you can easily bound the change in the output by the change in the input (and the input itself), which is something you cannot do for e.g. LU decomposition. Caveat: if some eigenvalue of the SVD is (close to) multiple, then the output is of course the span of the related vectors, and not the vectors themselves. – David Lehavi Dec 9 '09 at 11:05
I assume you mean "far enough from singular". – Harald Hanche-Olsen Dec 9 '09 at 13:59
@Harald: Thanks – David Lehavi Dec 9 '09 at 15:02
This is a great answer. What about the change in the singular values? Also, is there a name for this field or a reference for related problems? – Daniel Moskovich Dec 10 '09

Answer 2: If an invariant of nonsingular matrices is locally constant (I guess this is what's meant by "can be calculated precisely"), then it can only depend on the connected component of the linear group, which means only the orientation (sign of the determinant) can be calculated. For symmetric matrices, the same argument shows that any calculable quantity is a function of the signature, since any matrix can be connected to a standard representative of one of the signature classes using a continuous version of orthogonalization.
I think the words "approximate" and "numerical" in the question hint that this is not really what Daniel has in mind.... – David Lehavi Dec 9 '09 at 15:15
This is a fair interpretation of "precisely," I think. Perhaps Daniel should rephrase his question if that's not what he has in mind. – Qiaochu Yuan Dec 9 '09 at 15:50

Answer 3: The name "matrix analysis" seems to be associated with questions like this. (This is an answer instead of a comment because I lack brownie points.)
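A small numerical illustration of the accepted answer, written as an R sketch of my own rather than taken from the thread: the signature of a symmetric matrix with huge, imprecisely known entries is insensitive to the measurement error, and by Weyl's inequality each singular value moves by at most the spectral norm of the perturbation.

set.seed(1)
n <- 5
B <- matrix(rnorm(n * n, sd = 1e6), n, n)
A <- (B + t(B)) / 2                     # the "true" symmetric matrix, huge entries
E <- matrix(runif(n * n, -0.5, 0.5), n, n)
E <- (E + t(E)) / 2                     # small symmetric uncertainty in the entries

signature <- function(M) {              # count positive and negative eigenvalues
  ev <- eigen(M, symmetric = TRUE, only.values = TRUE)$values
  c(positive = sum(ev > 0), negative = sum(ev < 0))
}

signature(A)                            # signature of the true matrix
signature(A + E)                        # unchanged, since A is far from singular relative to E

# Weyl's inequality: each singular value moves by at most the spectral norm of E,
# which is the sense in which "anything you can see on the SVD" is stable.
max(abs(svd(A + E)$d - svd(A)$d))       # small
norm(E, type = "2")                     # an upper bound on the line above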
{"url":"http://mathoverflow.net/questions/8331/approximately-known-matrix/8335","timestamp":"2014-04-21T15:38:44Z","content_type":null,"content_length":"63289","record_id":"<urn:uuid:a96a51a1-a911-4857-9528-ced6303feec4>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
AdForceField is an abstract class that defines the interface for objects that calculate the energy and forces associated with the elements of an AdSystem or AdInteractionSystem object. AdForceField classes essentially represent complex potential energy functions. Note: many of the methods necessary for a working AdForceField subclass are defined in separate protocols.

The function an AdForceField object represents can be extended by adding objects that conform to the AdForceFieldTerm protocol. Note that this requires that the AdForceField subclass implements the AdForceFieldExtension protocol.

Terms & Components

AdForceField differentiates between objects that perform potential energy calculations ('Terms') and the potential energy values they calculate ('Components'). For example, AdNonbondedTerm objects calculate two potentials, Electrostatic and VanDerWaals, and hence have two components. Most terms have only one component; e.g., in AdCharmmForceField the HarmonicBond term calculates the HarmonicBond potential. Although internally you may not use different objects to calculate different terms, you must still define this mapping, because it is required to enable force-field extension.

All AdForceField subclasses should observe AdSystemContentsDidChangeNotification from their systems.

Extra documentation: not all force fields will calculate forces, and vice versa.
Affected by task: units.
Possible extra functionality: ability to use external force matrices.
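The term/component split described above can be illustrated generically. The sketch below is in R and is purely hypothetical: it does not use Adun's actual Objective-C API, and every name in it is invented for illustration; it only mirrors the idea that a force field is a collection of terms, each of which contributes one or more named energy components.

# Purely illustrative: a force field as a set of 'terms', each returning named
# energy 'components'; the force field's total energy is the sum over all components.
state <- list(
  k = c(100, 100), bond_lengths = c(1.02, 0.98), bond_rest = c(1.0, 1.0),  # two toy bonds
  qq = c(0.5, -0.3), r = c(2.1, 3.4),                                      # two toy nonbonded pairs
  a = c(1e4, 1e4), b = c(50, 50)
)

harmonic_bond_term <- function(s) {      # one term, one component
  c(HarmonicBond = sum(0.5 * s$k * (s$bond_lengths - s$bond_rest)^2))
}
nonbonded_term <- function(s) {          # one term, two components
  c(Electrostatic = sum(s$qq / s$r),
    VanDerWaals   = sum(s$a / s$r^12 - s$b / s$r^6))
}

force_field <- list(HarmonicBond = harmonic_bond_term, Nonbonded = nonbonded_term)
components  <- unlist(lapply(force_field, function(term) term(state)))
components                               # named term.component -> value mapping
total_energy <- sum(components)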
{"url":"http://home.gna.org/adun/class_ad_force_field.html","timestamp":"2014-04-17T13:39:52Z","content_type":null,"content_length":"24472","record_id":"<urn:uuid:3e8e7807-b3f2-4852-9325-0603a26a671e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Sherman Oaks Trigonometry Tutor Find a Sherman Oaks Trigonometry Tutor ...Geometry in particular requires a lot of real-world applications. Trig and precalc concepts are rampant on standardized tests, so I see and tutor them frequently. In particular, the SAT Math II Subject Test deals with trigonometry and some precalculus often. 60 Subjects: including trigonometry, reading, Spanish, chemistry ...Additionally, I have tutored and coached cross-country and track and field at St. Augustine High School, Our Lady of Peace Academy, and Patrick Henry High School (in San Diego, CA) for almost 10 combined years. Furthermore, in the time between my college graduation and my matriculation to law s... 58 Subjects: including trigonometry, English, reading, writing ...I have a lot of experience in this field. I have done many sessions of one-on-one tutoring and have worked in a math lab. With a good teacher and a student's effort, anyone can understand main concepts in math. 10 Subjects: including trigonometry, calculus, geometry, algebra 1 I have my Bachelor and Master of Science in Mechanical Engineering from University of Southern California. I have 8+ years of experience tutoring students from the Beverly Hills school district in most subject areas of math and science (pre-alegbra, algebra, geometry, trigonometry, precalculus, calculus, and chemistry). I'm passionate of making sure my students succeed in school. 21 Subjects: including trigonometry, reading, algebra 1, chemistry ...After graduation I went on to medical school at the University of Pittsburgh School of Medicine. Upon completion of my medical education I entered into a surgical residency. After completing a year of residency I decided that I no longer was interested in pursuing a career in surgery, and so I decided to enter another field of medicine that I am currently applying for. 18 Subjects: including trigonometry, chemistry, English, geometry
{"url":"http://www.purplemath.com/sherman_oaks_ca_trigonometry_tutors.php","timestamp":"2014-04-18T03:47:56Z","content_type":null,"content_length":"24402","record_id":"<urn:uuid:984324b8-f94d-4b2e-8390-269f3a7acc13>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Loss Models: From Data to Decisions, 3rd Edition (One Year Online): Preparation for Actuarial Exam C/4
ISBN: 978-0-470-30027-5
October 2011, ©2011

This online, multi-color, self-looping electronic product has full text with searchable links; more than 75 plugged-in data sets (in EXCEL); thousands of uniquely-designed and randomly-selected sample SOA/CAS/CIA test exercises, complete with hints and worked-out solutions; and multiple forms of simulated exams; and a built-in record-keeping system. There are three modules in this application:
• The Prologue Module describes the book, its authors, and how to best use the product.
• The Chapter Modules represent the text proper, complete with examples and exercise/solution sets (some static, some with spreadsheet functionality, and some with regeneration functionality).
• The Exam Modules present simulations of the actuarial Exam C/4. Each exam features multiple choice questions similar in content and difficulty to those on C/4. Detailed solutions are provided.
Upon ordering, customers will receive an email that contains a registration code, which is needed to access the application. To explore our additional offerings in actuarial exam preparation visit www.wiley.com/go/c4actuarial
Online3e is the perfect electronic substitute for a traditional linear book. Price includes one-year access/subscription. Once purchased, the product is nonreturnable.

1. Modeling. 1.1 The model-based approach. 1.2 Organization of this book.
2. Random variables. 2.1 Introduction. 2.2 Key functions and four models.
3. Basic distributional quantities. 3.1 Moments. 3.2 Quantiles. 3.3 Generating functions and sums of random variables. 3.4 Tails of distributions. 3.5 Measures of Risk.
4. Characteristics of actuarial models. 4.1 Introduction. 4.2 The role of parameters.
5. Continuous models. 5.1 Introduction. 5.2 Creating new distributions. 5.3 Selected distributions and their relationships. 5.4 The linear exponential family. 5.5 TVaR for continuous distributions. 5.6 Extreme value distributions.
6. Discrete distributions and processes. 6.1 Introduction. 6.2 The Poisson distribution. 6.3 The negative binomial distribution. 6.4 The binomial distribution. 6.5 The (a, b, 0) class. 6.6 Counting processes. 6.7 Truncation and modification at zero. 6.8 Compound frequency models. 6.9 Further properties of the compound Poisson class. 6.10 Mixed Poisson distributions. 6.11 Mixed Poisson processes. 6.12 Effect of exposure on frequency. 6.13 An inventory of discrete distributions. 6.14 TVaR for discrete distributions.
7. Multivariate models. 7.1 Introduction. 7.2 Sklar's theorem and copulas. 7.3 Measures of dependency. 7.4 Tail dependence. 7.5 Archimedean copulas. 7.6 Elliptical copulas. 7.7 Extreme value copulas. 7.8 Archimax copulas.
8. Frequency and severity with coverage modifications. 8.1 Introduction. 8.2 Deductibles. 8.3 The loss elimination ratio and the effect of inflation for ordinary deductibles. 8.4 Policy limits. 8.5 Coinsurance, deductibles, and limits. 8.6 The impact of deductibles on claim frequency.
9. Aggregate loss models. 9.1 Introduction. 9.2 Model choices. 9.3 The compound model for aggregate claims. 9.4 Analytic results. 9.5 Computing the aggregate claims distribution. 9.6 The recursive method. 9.7 The impact of individual policy modifications on aggregate payments. 9.8 Inversion methods. 9.9 Calculations with approximate distributions.
9.10 Comparison of methods. 9.11 The individual risk model. 9.12 TVaR for aggregate losses.
10. Discrete-time ruin models. 10.1 Introduction. 10.2 Process models for insurance. 10.3 Discrete, finite-time ruin probabilities.
11. Continuous-time ruin models. 11.1 Introduction. 11.2 The adjustment coefficient and Lundberg's inequality. 11.3 An integrodifferential equation. 11.4 The maximum aggregate loss. 11.5 Cramer's asymptotic ruin formula and Tijms' approximation. 11.6 The Brownian motion risk process. 11.7 Brownian motion and the probability of ruin.
12. Review of mathematical statistics. 12.1 Introduction. 12.2 Point estimation. 12.3 Interval estimation. 12.4 Tests of hypotheses.
13. Estimation for complete data. 13.1 Introduction. 13.2 The empirical distribution for complete, individual data. 13.3 Empirical distributions for grouped data.
14. Estimation for modified data. 14.1 Point estimation. 14.2 Means, variances, and interval estimation. 14.3 Kernel density models. 14.4 Approximations for large data sets.
15. Parameter estimation. 15.1 Method of moments and percentile matching. 15.2 Maximum likelihood estimation. 15.3 Variance and interval estimation. 15.4 Non-normal confidence intervals. 15.5 Bayesian estimation. 15.6 Estimation for discrete distributions. 15.6.7 Exercises.
16. Model selection. 16.1 Introduction. 16.2 Representations of the data and model. 16.3 Graphical comparison of the density and distribution functions. 16.4 Hypothesis tests. 16.5 Selecting a model.
17. Estimation and model selection for more complex models. 17.1 Extreme value models. 17.2 Copula models. 17.3 Models with covariates.
18. Five examples. 18.1 Introduction. 18.2 Time to death. 18.3 Time from incidence to report. 18.4 Payment amount. 18.5 An aggregate loss example. 18.6 Another aggregate loss example. 18.7 Comprehensive exercises.
19. Interpolation and smoothing. 19.1 Introduction. 19.2 Polynomial interpolation and smoothing. 19.3 Cubic spline interpolation. 19.4 Approximating functions with splines. 19.5 Extrapolating with splines. 19.6 Smoothing splines.
20. Credibility. 20.1 Introduction. 20.2 Limited fluctuation credibility theory. 20.3 Greatest accuracy credibility theory. 20.4 Empirical Bayes parameter estimation.
21. Simulation. 21.1 Basics of simulation. 21.2 Examples of simulation in actuarial modeling. 21.3 Examples of simulation in finance.
Appendix A: An inventory of continuous distributions. Appendix B: An inventory of discrete distributions. Appendix C: Frequency and severity relationships. Appendix D: The recursive formula. Appendix E: Discretization of the severity distribution. Appendix F: Numerical optimization and solution of systems of equations.

STUART A. KLUGMAN, PhD, is Principal Financial Group Distinguished Professor of Actuarial Science at Drake University. A Fellow of the Society of Actuaries, Dr. Klugman was vice president of the SOA from 2001–2003. HARRY H. PANJER, PhD, is Professor Emeritus in the Department of Statistics and Actuarial Science at the University of Waterloo, Canada. Past president of both the Canadian Institute of Actuaries and the Society of Actuaries, Dr. Panjer has published numerous articles on risk modeling in the fields of finance and actuarial science. GORDON E. WILLMOT, PhD, is Munich Re Chair in Insurance and Professor in the Department of Statistics and Actuarial Science at the University of Waterloo, Canada. Dr.
Willmot has authored or coauthored over sixty published articles in the areas of risk theory, queueing theory, distribution theory, and stochastic modeling in insurance.
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470300272.html","timestamp":"2014-04-16T04:55:56Z","content_type":null,"content_length":"49514","record_id":"<urn:uuid:94c9c72d-3fb0-4590-91fd-b5b70675a22c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
A Neuron-Weighted Learning Algorithm and its Hardware Implementation in Associative Memories
Tao Wang, Xinhau Zhuang, XiaoLiang Xing, Xipeng Xiao
IEEE Transactions on Computers, vol. 42, no. 5, pp. 636-640, May 1993, doi:10.1109/12.223686

Abstract: A novel learning algorithm for a neuron-weighted associative memory (NWAM) is presented. The learning procedure is cast as a global minimization, solved by a gradient descent rule. An analog neural network for implementing the learning method is described. Some computer simulation experiments are reported.

Index Terms: hardware implementation; associative memories; learning algorithm; neuron-weighted associative memory; NWAM; global minimization; gradient descent rule; analog neural network; computer simulation experiments; content-addressable storage; learning (artificial intelligence); neural chips; neural nets.
{"url":"http://www.computer.org/csdl/trans/tc/1993/05/t0636-abs.html","timestamp":"2014-04-16T07:50:21Z","content_type":null,"content_length":"52388","record_id":"<urn:uuid:737be326-db33-4cd5-a83b-1e599fa45c9e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Fractions of Money b Number of results: 11,736 Harry spent one third of his money on a hat, 1/6 of his money on socks, and $12.00 on a tie. How much did Harry spend? Saturday, March 16, 2013 at 5:24pm by Christine math: percent and fractions What are "nice" fractions? The problems? If a shirt was originally marked $30 and the store is offering 20% off, how much money will you save if you buy it? If you invest $1,000 and earn %50 on it in one year, what is the percent interest you've earned? Sunday, April 6, 2008 at 10:33pm by Ms. Sue math (fractions) Three people share a sum of money. The first receives 2/7 and the second receives a 2/5 share of the money. What fracion does the third receive? Monday, January 20, 2014 at 1:03pm by Mack fractions, decimals,and percents JJ -- I gave you two uses of fractions and two uses of percents, plus money -- $1.25, $0.99, and so on. Monday, November 17, 2008 at 9:15pm by Ms. Sue Fractions of Money b. What is two thirds of £33? Saturday, March 2, 2013 at 12:25pm by hind Maybe it will help if we see it with spaces and get rid of what seem to be typos.. 1. 16 nickels a.circle 3 of the nickels b.how much money is that 3 nickels = 15 cents 2. b. says circle 5/6 of the dimes then it says how much money is that ? I assume there are 6 dimes. You ... Wednesday, February 18, 2009 at 9:18pm by Ms. Sue We use decimals when we use money. We use fractions when we follow a recipe; i.e. 1/2 cup sugar, 1/4 teaspoon salt, etc. Then we have to know how to multiply and divide fractions when we increase or reduce the size of the recipe. How about decimals for sports statistics? How ... Tuesday, October 12, 2010 at 4:14pm by Ms. Sue By the way, fractions are also a great part of standard music notation. You need to know fractions to read musical notes. Monday, July 27, 2009 at 1:14am by mathland Like fractions are fractions with the same denominator. You can add and subtract like fractions easily - simply add or subtract the numerators and write the sum over the common denominator. Before you can add or subtract fractions with different denominators, you must first ... Tuesday, January 17, 2012 at 9:07pm by Laruen math -fractions ordering fractions 1/2,7/8,9/10,1/3,3/5,1/4 write the above fractions in order. Monday, January 10, 2011 at 4:20pm by arjel I know but wouldn't you have to go through confusing fractions and multiply by fractions to get a whole number of x? I am trying to follow the teachers directions which do not include Tuesday, December 14, 2010 at 8:08pm by David what is the difference and similarities if solving an equation with varibles in the equation if it had 1 or 2 or even 3 variables? Here is one: Money left in Wallet= money earned + money found - money lost - money spent. I don't understand? can you please explain... Saturday, January 13, 2007 at 1:33am by jasmine20 math fractions ADAM WANTS TO COMPARE THE FRACTIONS 3/12,1/6,AND1/3.HE WANTS TO ORDER THEM FROM LEAST TO GREATEST AND REWRITE THEM SO THEY ALL HAVE THE SAME DENOMINATOR?EXPLAIN HOW ADAM CAN REWRITE THE FRACTIONS?If anyone could helpme explain this to my 4 th grader.i am really bad with fractions Tuesday, February 26, 2013 at 4:49pm by ttuuffyy 8th grade youtube(dot)com/watch?v=BeCQWUl1p00&feature=related be sure to change your mix fractions into improper fractions when you divide fractions, you multiply the reciprocal Sunday, September 12, 2010 at 9:14pm by Anonymous Algebra 1-Fractions Or, eliminating fractions, I should say. So, I need some help. See, I am really not a big fan of fractions. 
But I need to eliminate fractions to do a math problem. First one is 1/2-x=3/8. I know how to find LCD, then multiply both sides, distributive property, etc etc. But ... Monday, September 16, 2013 at 8:16pm by Breighton 6 = 5 4/4 5 4/4 = 3/4 = 5 3/4 You could find the common denominator for those fractions and convert them to equivalent fractions. But the easier way is to convert these fractions to decimals. 1/3 = 0.33 4/9 = 0.44 and so on Monday, April 18, 2011 at 7:51pm by Ms. Sue ( egyptian fractions are fractions where the numerator can only be one) find two egyptian fractions where when added together it equalls 11/32 Tuesday, May 8, 2012 at 8:02pm by livy 3rd grade math Change all fractions to equivalent fractions with a denominator of 60. Either that or change these fractions to decimals. Wednesday, March 20, 2013 at 7:02pm by Ms. Sue 3rd grade math Change all fractions to equivalent fractions with a denominator of 60. Either that or change these fractions to decimals. Wednesday, March 20, 2013 at 7:03pm by Ms. Sue Algebra 1 c. multiplying x/6-5/8=4 by 6 did not eliminate all the fractions/ What could you have multiplied to get rid of all the fractions? Explain how you got your answer and write the equivelent equation that has no fractions. HELP ME PLEASE!!!!!!! I don't understand this. Monday, May 11, 2009 at 8:06pm by Kelsie Fractions of Money b. 1) 1/3 * 18 = 18/3 = 6 6) 3 is a third of 9 I'll be glad to check your answers for the other problems. Saturday, March 2, 2013 at 12:25pm by Ms. Sue Applied Business Math/Colleg student 2/3+1/6+11/12= First you need to change these fractions to equivalent fractions with a common denominator. If you post the equivalent fractions, we'll be glad to help you solve this problem. Saturday, January 29, 2011 at 8:19pm by Ms. Sue Shelly and Marcom are selling popcorn for their music club. Each of them received a case of popcorn to sell. Shelly has sold 7/8 of her case and Marcon has sold 5/6 of his case. Which of the following explains how to find the portion of popcorn they have sold together? A.Add ... Thursday, November 8, 2012 at 7:45pm by Jerald The simplest way is to change the fractions to decimals and multiply. 13.25 * 11.5 = ? If your teacher wants you to use the fractions, then change the two numbers to mixed fractions. 53/4 * 23/2 = 1219/8 = 152 3/8 Saturday, July 25, 2009 at 4:04pm by Ms. Sue Convert all of these fractions to equivalent fractions with a common denominator. An easier way is to use a calculator to convert each of these fractions to decimals. Then you can compare them easily. 7/10 = 0.7 5/12 = 0.42 1/2 = 0.5 5/16 = 0.31 Tuesday, February 15, 2011 at 10:25pm by Ms. Sue When we add or subtract fractions, we must have a common denonimator. 2 1/4 = 2 2/8 When I added the two fractions, I added the whole numbers and the numerators. That is 5 9/8. But since 9/8 is larger than 1, I simplified the fraction to 6 1/8. Which part doesn't the child ... Thursday, March 14, 2013 at 5:09pm by Ms. Sue simple equations ( math) You want to solve for the amount of money that Peter has. So you let your variable equal that amount of money. Now, you know that Peter's money + Joseph's money = $100. Can you express the amount of money Joseph has in terms of x, the amount of money Peter has? Sunday, January 10, 2010 at 2:07pm by Marth b. What do think the fractions that are expressed as terminating decimals have in common? Think about equivalent fractions and common multiples. c. 
Do these fractions follow the same pattern as what you decided about the first set of fractions? d. Why or why not? Note: The ... Monday, August 2, 2010 at 11:22am by Ms. Sue Change the fractions to equivalent fractions with the same denominator. 2/3 = 16/24 3/8 = 9/24 5/12 = 10/24 Add the numerators. Simplify the answer. What do you get? Tuesday, May 28, 2013 at 7:17pm by Ms. Sue It is possible to divide fractions by fractions. You can write a unit rate dealing with fractions by using fraction division. For example: If ½ of the apples are rotten in every ¾ of the boxes then the unit rate is: ⅔ rotten apples per box. The only difference is that ... Thursday, September 5, 2013 at 8:24pm by Graham Actually, that is the best reason. I use the following criterium. If I see one of the variables having a coefficient of 1 OR -1, I solve for that variable and use substitution, resulting in no fractions, unless the equation contains fractions to begin with. As a matter of fact... Monday, February 18, 2008 at 9:52pm by Reiny 1. Why is representative money more useful than commodity money? B. Representative money has value because the government says it does. C. Representative money exists in unlimited supply, so more people use it. D. Representative money is portable, durable, divisible, and ... Monday, June 4, 2012 at 10:03pm by Anonymous Susan had some money. she spent one-sixth of her money on Saturday.on Sunday,she spent one-half of the remaining money and gave $20 to her niece. she spent the rest of her money at an average of $15 for the next 5days.how much money did Susan have at first? Tuesday, December 27, 2011 at 12:03am by Da S Separate the fractions 2/6,2/5,6/13,1/25,7/8and 9/29into two categories: those that can be written as a terminating decimal and those that cannot. Write an explanation of how you made your decisions. b. Form a conjecture about which fractions can be expressed as terminating ... Sunday, September 27, 2009 at 7:22pm by Anonymous Every month, a girl gets allowance. Assume last year she had no money, and kept it up to now. Then she spends 1/2 of her money on clothes, then 1/3 of the remaining money on games, and then 1/4 of the remaining money on toys. After she bought all of that, she had $7777 left. ... Thursday, April 30, 2009 at 9:41am by xxx Fractions don't format well here. Try using a/b for fractions. I'll do #1 and yo can post your own answers for the others, which we will be happy to check. #1. 7 2/3 + 8 5/6 One way is to add the whole numbers, then add the fractions: 7+8 + 2/3 + 5/6 15 + 2/3 + 5/6 Now, set ... Tuesday, December 20, 2011 at 2:14pm by Steve during column chromatography, if four fractions are done, how do we know what is in each beaker and why. I know that beakers 1 and 3 will contain the majority of the fractions being separated but what about beakers 2 and 4. How do we know that they are the residues of the ... Monday, October 22, 2007 at 9:16pm by Del : Fractions are an important part of your daily lives. Describe some practical applications for fractions in your daily life and some challenges that you have experienced regarding the use of Tuesday, May 18, 2010 at 10:30am by ree fractions from least to greatest Yes, I can. You'll be able to do it too after you change these fractions to equivalent fractions with the same denominator. Hint: The common denominator is 60. 3/4 = 45/60 Wednesday, November 7, 2012 at 5:07pm by Ms. 
Sue money spent in meat shop=4xRs money spent in drugstore= xRs money spent in book store=x-15Rs money remaining=5Rs sum=4x+x+x-15+5 =6x-10 Tuesday, February 9, 2010 at 3:32pm by jagadheeswar Help me on this one :( Express y= (7-3x-x^2)/[((1-x)^2)(2+x)] in partial fractions. Hence, prove that if x^3 and higher powers of x may be neglected, then y=(1/8)(28+30x+41x^2) I did the first part of expressing it in partial fractions. (Since it's very difficult to type out ... Wednesday, March 3, 2010 at 10:24am by Keira 1. Will you lend me some money? 2. He lent me some money. 3. Can I borrow you some money? 4. Can I borrow some money from you? 5. I borrowed some money from him. 6. He borrowed some money from me. (Are the sentences grammatical? Would you like to check them?) Friday, November 20, 2009 at 2:38pm by rfvv math -fractions Change the fractions to equivalent fractions with the same denominator. 7/8 = 21/24 5/6 = 20/24 Follow the same directions I posted before -- except you could draw rectangles, rather than circles. Tuesday, January 4, 2011 at 6:25pm by Ms. Sue PUC raised money for the recent disaster victims.They decided to allocate the money in manner: 1/3 of the money goes for medical supplies, 1/4 of what was left brought tents,2/3 of the remaining went to water purification systems, the rest of the money was spent on shipping ... Wednesday, September 21, 2011 at 6:14pm by Michelle PUC raised money for the recent disaster victims.They decided to allocate the money in manner: 1/3 of the money goes for medical supplies, 1/4 of what was left brought tents,2/3 of the remaining went to water purification systems, the rest of the money was spent on shipping ... Wednesday, September 21, 2011 at 7:01pm by Michelle You need to change the fractions to have a common denominator. Check this site to see how to change fractions so they have a common denominator. http://www.themathpage.com/ARITH/ Wednesday, February 6, 2008 at 3:26pm by Ms. Sue Sarah was given a sum of money. She spent the same amount of money each day. She spent 2/7 of her money in 6 days. After another 5 days, she had $20 left. How much money did she have at first. Sunday, May 19, 2013 at 5:51am by Sarah hi to all of you teachers i hope you will help me to get the answers for this question below.. and thanks god bless you. Perk spent 1/4 of his money on a food and another 3/10 of his money on a drink. a. What fraction of his money did he spend in total? b. What fraction of ... Sunday, February 13, 2011 at 4:07pm by oscar maths urgent 1/5 of J's money is equal to 1/3 of S's money. The difference in their amount is 1/2 of A's money. If Adam gives $120 to S, S will have the same amount of money as J. How much do the 3 people have altogether?(please show me a model method, not algebra) Friday, December 30, 2011 at 2:00pm by Mayyday Change these fractions to equivalent fractions with a common denominator. 3/7 = 27/63 1/9 = 7/63 2/3 = 42/63 Thursday, February 16, 2012 at 8:12pm by Ms. Sue I got part a. I do not understand the rest. b. form a conjecture about which fractions can be expressed as terminating decimals. c. test your conjecture on the following fractions; 6/12, 7/15, 28/ 140, and 0/7. d. use the idea of equivalent fractions and common multiples to ... Monday, August 2, 2010 at 11:22am by Betty If Will gives Molly $9 dollars he will have the same amount of money as her. If Molly give Will $9, the ratio of money she has to the money will has will be 1:2. 
How much money does will have in the Wednesday, February 5, 2014 at 8:18pm by Dylan math -fractions Change these fractions to equivalent fractions with the same denominator. 1/2 = 60/120 7/8 = 105/120 9/10 = 108/120 1/3 = 40/120 3/5 = 72/120 1/4 = 30/120 Now you can arrange them in order. Monday, January 10, 2011 at 4:20pm by Ms. Sue Snow spent 5/8 of her money on books and another 1/6 of her money on stationeries. What fraction of Kathy’s money was left? Sunday, February 13, 2011 at 3:35am by rowena Your pay stub deducts money for FICA. What does this mean? A. Money is being withheld for personal exemptions and deductions. B. Money is being withheld for excise and estate taxes. C. Money is being withheld to fund Social Security and Medicare. D. Money is being withheld for... Friday, November 30, 2012 at 10:43am by bob A gift of money Yes this is the question and if you take the offer you get the money. and the money has to be on your stomach for 10 min. Wednesday, September 15, 2010 at 6:29pm by Anonymous social studies Economics has to do with money -- the way people earn money and how they spend money. Wednesday, October 27, 2010 at 7:13pm by Ms. Sue the sum of two fractions is 7/12. therre difference is 1/12. they have the same denominators. what are the two fractions? Thursday, January 20, 2011 at 3:03pm by jayjay 4th grade Equivalent fractions are fractions that may look different, but are equal to each other. Two equivalent fractions may have a different numerator and a different denominator. a/b=c/d Thursday, May 6, 2010 at 5:19pm by Luke social studies Money! Money! Money! China wants the huge amounts of money it can earn with trade and tourism. It learned that cooperation with other countries is far more profitable than isolation. And now that China is making so much more money, many of its people are able to live much more... Tuesday, March 11, 2008 at 9:14pm by Ms. Sue math 4th grade You can find these decimals by long division. The other way to solve this is to find a common denominator. Then convert all of the fractions to fractions with a common denominator. That's complicated with these five fractions. Tuesday, April 13, 2010 at 9:50pm by Ms. Sue A merchant visited 3 fairs. At the first, he doubled his money and spent $30; at the second he tripled his money and spent $54; at the third, he quadrupled his money and spent $72, and then had $48 left. How much money had he at the start? Wednesday, April 3, 2013 at 11:50am by Barb I need to complete a project based on fiat money. I need to design my own money and decide what amount of bills and/or money I want to produce for my economy. I understand that producing more money causing inflation, but I need to know: How much money I should intially produce... Friday, September 17, 2010 at 6:28pm by Tom he spend (1/4+1/2)=3/4 of his total money. so has remaining money=(1-3/4)=1/4 of total money. he had total money=$58*4=$432 Sunday, March 10, 2013 at 3:03am by saiko She spends 3/5 of her money, so she had 2/5 of it left Then she spends (3/5) of her remaining money, which would leave her with (2/5)(2/5) = 4/25 of her money so 4/25 of her money equals 8 1/25 of her money equals 8/4 then 25/25 of the money equals 25(8/4) = 50 Wednesday, May 12, 2010 at 6:44am by Reiny A man has $32 and decides to plant a garden. He cannot stand the thought of spending all of his money on any single day, so each day he goes to the store and spends half of his money on tomato plants. 
When he no longer has enough money to buy additional plants, how much money ... Thursday, August 2, 2012 at 1:55pm by Anonymous contemporary math Julius deposited his money in a money market account paying 1.05% compounded monthly. How much total money will Juliushave after 5 years? Thursday, July 4, 2013 at 9:23am by Anonymous 0k i dont know the answer for these percents, decimals, and fractions. You have to change decimals to percents, fractions to decimals, and percents to fractions. 0.23 3/100 32 1/2% 0.25 3/5 75% 1/8 0.835 10% 95% 4% 120% 0.3333.... 1.05 1/6 If you can please help me anyone =\ Friday, March 14, 2008 at 1:23am by Kenya A woman goes to a store that sells used books and spends half of her money on books. Each day, she returns to the store and again spends half of her remaining money on books because she does not want to spend all of her money in one day. When she no longer has enough money to ... Thursday, August 2, 2012 at 5:06pm by Anonymous How do I do this problem: Chelsea sold baked goods to raise money for various charitable organizations. She gave a third of the money raised to the American Red Cross. Then gave a fourth of that money to United Way. THEN gave half of the money to Twin city Missions. If Twin ... Wednesday, September 30, 2009 at 1:10pm by McKenzie Math repost for Grace Check this site. http://themathpage.com/arith/add-fractions-subtract-fractions-1.htm Friday, September 28, 2007 at 6:50pm by Ms. Sue Hypothetical Economy: -Money Supply= $200 billion -Quantity of money demanded for transactions=$150 Billion -Quantity of money demanded as an asset=$10 billion at 12% interest -increaseing by $10 billion for each 2 percentage point fall in the interest rate. A. What is the ... Monday, January 29, 2007 at 8:26pm by Frank fractions, decimals,and percents Recipes call for 1/2 cup, 1/4 teaspoon, etc. Decimals? Think about money. Percents? Computer downloads use percetages to show you much of the download is completed. Monday, November 17, 2008 at 9:15pm by Ms. Sue english essay, please help me to correct. can you help to correct my answer and english grammar. thank you so much. question below: 1. How does money is importance in our society right now? answer: The importance of money in our society right is more than anything else because everything is needed of money. money is ... Monday, October 3, 2011 at 5:34pm by jessie division of fractions Division of fractions is sometimes said to be multiplication of inverses. Elaborate with an explanation and example. If you have a/b divided by c/d this is the same as a/b * d/c If e.g., we have 2/3 divided by 7/9 then we have 2/3 * 9/7 = 6/7 When dividing fractions the rule ... Saturday, October 7, 2006 at 9:12pm by Brenda In A Year, Seema Earns RS 1,50,000. Find The Ratio Of Money That Seema Earns To The Money She Saves And Money That She Saves To The Money She Spends? Saturday, December 19, 2009 at 12:56pm by naisha kailash gave one-third of his money to keshav.keshav gave three-fourth of tha money he received from kailash to saket.if sanket got rs. 900 less than the money kailsh had ,how much money did kesav get from kailash. Tuesday, May 31, 2011 at 12:48am by shweta Math Fractions What is the first step to adding multiple fractions (3) with differenct denominators? Wednesday, July 8, 2009 at 1:40pm by Angie ordering numbers fractions The common denominator is 42. What are the equivalent fractions for 5/6 and 7/21? Sunday, January 23, 2011 at 10:28pm by Ms. 
Sue write the following fractions in increasing order 45/44 5/4/and 8/13 Monday, September 12, 2011 at 12:53pm by michael rounding fractions to 0, 0.5, or 1. where would 0.599 lay? Can you round fractions down? Thursday, January 26, 2012 at 6:16pm by lexi Decimals to Fractions(simplify the fractions) 0.2=1/5? 0.9=9/10 0.80=4/5? 0.55=11/20 Tuesday, October 9, 2012 at 5:46pm by Jerald Decimals to Fractions(simplify the fractions) 1.5=1/2 0.5=1/2 0.06=3/50 0.75=3/4 2.25=1/4 0.60=3/5 Tuesday, October 9, 2012 at 5:46pm by Jerald The least common denominator is 238. Change these fractions to equivalent fractions. Thursday, November 29, 2012 at 7:09pm by Ms. Sue If Im naming fractions are these fractions in the right order from least to greatest? 1/4 1/3 1/2? Thursday, January 31, 2013 at 5:52pm by Jerald 4th grade math fractions three different improper fractions that equal 4 1/2 Tuesday, March 23, 2010 at 8:24pm by rita I don't see where you need to use fractions to find the total. 9 + 6 = 15 Tuesday, February 11, 2014 at 9:41am by PsyDAG 4th grade math Do you know how to change these fractions to equivalent fractions with the same denominator? Monday, March 24, 2014 at 6:14pm by Ms. Sue Adding Fractions Before you can add fractions, you have to have common denominators. That is, both fractions need to have the same number in the denominator (the bottom number). The easiest way to find a common denominator is to multiply the two denominators. In this case 3 * 8 = 24. http://... Monday, March 14, 2011 at 10:43pm by Writeacher Determine whether each of the following would lead to an increase, a decrease, or no change in the quantity of money people wish to hold. Also determine whether there is a shift in the money demand curve or a movement along a given money demand curve a. A decrease in the price... Wednesday, October 11, 2006 at 6:49pm by jada Math- Fractions I know this seems easy, but i stink at fractions. What is .105 as a fraction? Monday, November 19, 2007 at 7:59pm by Ariana Dividing fractions and how do you reduce fractions isnt it dive the bottom number by the top Saturday, October 25, 2008 at 3:22pm by kate Joanne, do you want me to help Change the fractions to equivalent fractions with the same denominator?? Sunday, November 7, 2010 at 3:47pm by Erin x - 1/3 = 4/5 x = 4/5 + 1/3 Convert the fractions to equivalent fractions with a common denominator. Add. Sunday, November 7, 2010 at 8:36pm by Ms. Sue I would change the mixed fractions to improper fractions first. Monday, June 27, 2011 at 4:11pm by bobpursley Change the fractions to equivalent fractions with a common denominator or to decimals. Tuesday, January 31, 2012 at 6:04pm by Ms. Sue how to you compare 8 fractions with different denominators including improper fractions? Tuesday, February 14, 2012 at 4:47pm by john If both fractions are portions of the 27, you have none left. Wednesday, March 21, 2012 at 9:42pm by PsyDAG Equivalent Fractions Jerald -- do the same thing to number 5 that you did to the other fractions. Wednesday, October 3, 2012 at 5:52pm by Ms. Sue Change all of these fractions to equivalent fractions with a denominator of 12. Wednesday, February 6, 2013 at 9:02pm by Ms. Sue Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=Fractions+of+Money+b","timestamp":"2014-04-20T04:54:25Z","content_type":null,"content_length":"38196","record_id":"<urn:uuid:8741d8a9-6bb7-4a13-a405-267057578b61>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
MECHANICAL ENGINEERING
The bending moment created by a center support on a steel beam is approximated by the formula PL3, in which P is the load on each side of the center support and L is the length of the beam on each side of the center support (assuming a symmetrical beam and load). If the total length of the beam is 24 ft (12 ft on each side of the center) and the total load is 4,124 lb (2,062 lb on each side of the center), what is the bending moment (in ft-lb3) at the center?
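As transcribed, the formula and the requested unit look garbled ("PL3" is presumably $PL^3$ and "ft-lb3" presumably a cubed length unit), so the following is only the arithmetic under that reading, not a vetted engineering result. With P = 2,062 lb and L = 12 ft on each side:

$PL^3 = 2{,}062 \times 12^3 = 2{,}062 \times 1{,}728 = 3{,}563{,}136$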
{"url":"http://www.chegg.com/homework-help/mechanical-engineering-bending-moment-created-center-support-chapter-10-problem-78x4-solution-9780073384177-exc","timestamp":"2014-04-19T02:15:38Z","content_type":null,"content_length":"95788","record_id":"<urn:uuid:54a3cab4-2077-47d4-9d12-6fe3e3b4fd43>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00470-ip-10-147-4-33.ec2.internal.warc.gz"}