Dataset columns: content (string, lengths 86 to 994k characters) and meta (string, lengths 288 to 619 characters).
Map scale confusion.

#1 (Beron): Now, I'm not sure where to post this, so I felt it'd be safest to do so here. I have a bit of a problem with map scaling... and math. To make a long story short, it was a biblical miracle I ever passed math at all. Numbers and I don't mix, so when I come across a mathematical problem I panic. This is what it is. My 'world' is around the size of Earth, but just a bit larger. The surface area is 150,000,000 km^2 for land and 365,000,000 km^2 for water, and so a total of 515,000,000 km^2 for the entire planet. My goal is to work out how many km correspond to 1 cm on the map. Apparently the proper way to do this is to work with random numbers, like 1:10000000, but I need the scale to match the exact size of the planet; it's vital to the lore behind the map. To make life easier (just a tad) I've separated this "pangea" into different continents; still, I don't think the size of the number will help at all. The map is being worked on an A4 sketch pad, portrait, and so is 624 cm^2 in area. If any of you are knowledgeable in this field of work, I'll thank you a thousand times over (using a gif of course xD). I apologise if I come across as arrogant, but this thing has been eating my coconut for days. Thanks a lot, Beron.

#2: Most global maps don't preserve areas anyway, and the only projection that is equal-area and fits exactly into a rectangle is Cylindrical Equal Area. "Apparently the proper way to do this is to work with random numbers, like 1:10000000" — uh, randomness does not enter into it. A number by itself is not random; it's the process that produced it that is. Not that 1/10000000 is a number most people would think of as being "random" anyway, or that randomness would be in any way relevant to scaling maps. "but I need the scale to be the exact size of earth" — this makes absolutely no sense; I have no clue what you are trying to accomplish. If you hadn't already said the planet is larger than Earth, I'd assume you meant they need to be the same size, but as it is, I have no clue. I'm sorry, but if you have a clear idea of what you want, you aren't expressing it effectively.

#3: 515,000,000 km^2 makes for a map of something like 16,047 km by 32,094 km, but that is huge. At a resolution of 300 dpi you would be able to have a document of 160 by 320 cm. That's 18898 x 37795 pixels, which in my experience, depending on your computer, is about the limit for document size. Even with that, you might encounter performance issues if you have a lot of stuff on your map. So that's probably the maximum you can go.

#4 (Feanaaro): Perhaps the original request is a little confused, but I think I get what Beron wants. Now, the area of a sphere (no planet is a perfect sphere, but close enough) is 4πR^2. This means that if the total area of the planet is 515,000,000 km^2, then its radius is the square root of Area/4π, which is approximately 6402 km (I guess it's OK to be approximate, since the planet is not a perfect sphere to begin with), which is in fact just a little bigger than Earth's. A radius of 6402 km gives a circumference (2πR) of approximately 40225 km. So, let's say that the equator of this planet is 40225 km long. Then, assuming you are using a rectangular projection in which the scale is constant at the equator (which is most of them, I guess, or at least most of those that are most commonly used), that is the width of the map at the equator. If you use an A4 size in landscape mode, that means that 297 mm = 40225 km, thus 1 mm ≈ 135.4 km and 1 cm ≈ 1354 km, so 1 cm^2 ≈ 1,833,316 km^2. If you count by pixels instead of mm, just make the appropriate proportion with the width of 40225 km. Unless I made some errors with the calculation, but you get the idea anyway.

#5: Oops, my numbers assumed that the planet was a rectangle. You are probably right, Feanaaro, and I think it's easier to use pixels instead of centimeters. It's irrelevant if you don't plan to print.

#6 (Beron): "This makes absolutely no sense. I have no clue what you are trying to accomplish. If you hadn't already said the planet is larger than Earth, I'd assume you meant they need to be the same size but as it is, I have no clue." Intoxication and silly 5 am ideas make for incoherence, sorry :-) I'll try better with good sleep and a slight hangover xD. So, thank you ever so much for the replies, guys/gals. So, if I'm not mistaken, if I consider the entirety of the map, including sea and land, 1 cm will be approximately equal to 1354 km? It seems to me that Feanaaro assumes that I'm working with a sphere? I know I said 'planet', but if it makes things easier, the map I'm drawing works like a typical 'map' of the globe. Unless that's completely irrelevant. If that doesn't make things easier, maybe splitting the globe into sections and working on them individually will help? For example: the world at its current age is akin to Earth's pangea: it's one supercontinent. I considered the future and made outlines of what will eventually become 8 separate land masses, the largest being 30 million square kilometres (around the size of Africa) and the smallest 10 million square kilometres. Again, I apologise for the inconvenience, but I still haven't grasped it fully yet. Since 1 cm = 1354 km on a global scale, how do I represent 30 million square kilometres on a continental map?

#7 (Feanaaro): If your equator is 40225 km long, your planet is roughly spherical, and you are working on an A4 sheet, then 1 cm ≈ 1354 km. This is really simple and there is no need to make the planet a flat square or split it into parts or anything. 1 cm^2 ≈ 1,833,316 km^2, therefore 30 million km^2 ≈ 16.4 cm^2, or a little more than a 4 cm × 4 cm square. However, with all due respect, if, given the km^2 size of 1 cm^2 on the map, you are not able to calculate how many cm^2 you would need for a given number of km^2, perhaps you should not worry about these things in the first place, and just map without bothering to consider scale (which is how humans have mapped the world for most of their history, anyway).

#8: There is an issue with the scale calculation above. If you are talking about a rectangular map of a spherical world, then the scale of 1 cm = 1354 km is only true at the equator. The closer to the poles you get, the greater the distortion. In effect your ratio scale approaches 1:1, until 1 cm on the map corresponds to 1 cm on the ground at the poles. That is because the top and bottom neat lines of the map are each actually a single point, the North and South Pole respectively. To make a long story short, follow Feanaaro's advice. Scale is elusive on a global map that isn't an actual globe, unless you have a program that allows you to recalculate your map at different projections.
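To make the arithmetic above easy to re-run, here is a short sketch in Python. It assumes a spherical planet and a cylindrical-style map whose full width spans the equator, with the A4 landscape width of 29.7 cm and the areas taken from the thread.

import math

# Planet surface area from the thread (km^2)
area_total = 515_000_000

# Sphere: A = 4*pi*R^2  ->  R = sqrt(A / (4*pi))
radius = math.sqrt(area_total / (4 * math.pi))        # ~6402 km
circumference = 2 * math.pi * radius                  # ~40225 km

# Map: A4 landscape width = 29.7 cm spans the whole equator
map_width_cm = 29.7
km_per_cm = circumference / map_width_cm              # ~1354 km per cm (at the equator)
km2_per_cm2 = km_per_cm ** 2                          # ~1.83 million km^2 per cm^2

# Area of the largest continent from the thread
continent_km2 = 30_000_000
continent_cm2 = continent_km2 / km2_per_cm2           # ~16.4 cm^2 on the map

print(f"radius ~ {radius:.0f} km, equator ~ {circumference:.0f} km")
print(f"1 cm ~ {km_per_cm:.0f} km, 1 cm^2 ~ {km2_per_cm2:,.0f} km^2")
print(f"30 million km^2 ~ {continent_cm2:.1f} cm^2 (about a 4 cm x 4 cm square)")

As the replies note, this scale is only exact at the equator for a cylindrical projection; distortion grows toward the poles.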
{"url":"http://www.cartographersguild.com/general-miscellaneous-mapping/24958-map-scale-confusion.html","timestamp":"2014-04-17T22:17:16Z","content_type":null,"content_length":"88663","record_id":"<urn:uuid:6e4de8db-c666-4658-b731-de7930b6f1d1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Effect of p-Layer and i-Layer Properties on the Electrical Behaviour of Advanced a-Si:H/a-SiGe:H Thin Film Solar Cell from Numerical Modeling Prospect. International Journal of Photoenergy, Volume 2012 (2012), Article ID 946024, 7 pages. Research Article. Department of Electrical Engineering, Shahid Chamran University of Ahvaz, Ahvaz 61357-831351, Iran. Received 9 August 2011; Revised 26 September 2011; Accepted 30 September 2011. Academic Editor: Leonardo Palmisano. Copyright © 2012 Peyman Jelodarian and Abdolnabi Kosarian. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The effect of p-layer and i-layer characteristics, such as thickness and doping concentration, on the electrical behaviour of a-Si:H/a-SiGe:H thin-film heterostructure solar cells, such as the electric field, photogeneration rate, and recombination rate through the cell, is investigated. Introducing Ge atoms into the Si lattice in Si-based solar cells is an effective approach to improving their characteristics. In particular, the current density of the cell can be enhanced without deteriorating its open-circuit voltage. Optimization shows that for an appropriate Ge concentration, the efficiency of the a-Si:H/a-SiGe solar cell is improved by about 6% compared with the traditional a-Si:H solar cell. This work presents a novel numerical evaluation and optimization of amorphous silicon double-junction (a-Si:H/a-SiGe:H) thin-film solar cells and focuses on the optimization of an a-SiGe:H midgap single-junction solar cell based on the optimization of the doping concentration of the p-layer, the thicknesses of the p-layer and i-layer, and the Ge content in the film. A maximum efficiency of 23.5%, with a short-circuit current density of 267 A/m^2 and an open-circuit voltage of 1.13 V, has been achieved for the double-junction solar cell.

1. Introduction. Hydrogenated amorphous silicon-germanium alloys (a-SiGe:H) are widely used in multijunction solar cells, where their main advantage is the capability of shifting the optical band gap to lower energies by increasing the germanium concentration in the film [1–3]. In order to avoid a drastic reduction of the short-circuit current in a cell with small thickness, it is necessary to increase the absorbance of the material and optimize the cell structure [4]. Using a lower band-gap material such as a-SiGe in the active base region of the cell is a possible approach, owing to its compatibility with the mature Si-based cell process. An appreciable increase of the photocurrent in the small band-gap a-SiGe material is highly expected, because of the increased absorption of photons. On the other hand, a drop of the open-circuit voltage is expected, due to the reduction of the SiGe band gap with increasing Ge concentration [5, 6]. As a result, a compromise between the cell's parameters is necessary in the optimization procedure. The optimization of the band offset between a-SiGe and a-Si, either in the valence band or in the conduction band, can help design a more effective back surface field, where the photogenerated carriers are reflected away from the highly dislocated surfaces and interfaces, so that the recombination velocity at the back side of the cell is significantly decreased.
The band offsets at the a-Si/SiGe heterointerface depend on the Ge concentration in the film. In addition to a lower surface recombination velocity, the smaller band gap of a SiGe film with high Ge concentration gives a larger light absorption coefficient, which can lead to more electron-hole pair generation and a higher short-circuit current. Both effects can result in a higher cell efficiency of the a-SiGe-based solar cell [4]. The optimization process of a multijunction structure involves both the design of the individual junction layers, which produces an optimum output power, and the design of a series-stacked configuration of these junction layers, which yields the highest possible overall output current. Figure 1 shows the double-junction structure for a p-i-n a-Si solar cell used in this work [7].

2. Numerical Modeling. Referring to its I-V characteristics, the most important parameters of a solar cell are the short-circuit current, the open-circuit voltage, the fill factor (FF), and the efficiency. The short-circuit current is due to the generation and collection of light-generated carriers. So, ideally, it is equal to the light-generated current and is the largest current that can flow through the solar cell. However, an appreciable fraction of the generated carriers recombine in the bulk and at the interfaces. The recombination losses are characterized by the diffusion length and the minority-carrier lifetime in the active region. Modeling of the device is based on the simultaneous solution of the transport equations, namely the Poisson equation, the continuity equations, and the current-density equations for holes and electrons [8]. The quantities entering these equations are the dielectric constant, the hole potential, the electron charge, the current densities due to transport by holes and electrons, the generation rate for carriers, and the recombination rate. The net charge density is expressed in terms of the free hole and electron densities, the concentrations of ionized acceptors and donors, and the net charge densities due to the trapping of holes and electrons in tail states and dangling-bond states, respectively. The values of the trapped charges are calculated from the energy levels and total densities of donors and acceptors and from the spatially varying concentrations of free carriers in the solar cell. Disordered materials like amorphous silicon and silicon-germanium contain a large number of defect states within the band gap of the material. To accurately model devices made of amorphous silicon, one should use a continuous density of states. The density of defect states (DOS) is specified with a combination of Gaussian distributions of midgap states and exponentially decaying band-tail states [8, 9]. Here, it is assumed that the total density of states is a combination of four components: two tail bands (an acceptor-like conduction-band tail and a donor-like valence-band tail), which are modeled by exponential distributions, and two deep-level bands (one donor-like and the other acceptor-like), which are modeled by Gaussian distributions [9, 10]. The distributions are written as functions of the trap energy, measured with respect to the conduction-band and valence-band edges, with subscripts denoting the Gaussian (deep-level), tail, acceptor, and donor states, respectively [9, 10]. For the exponential tail distributions, the DOS is described by its conduction- and valence-band edge intercept densities and by its characteristic decay energies.
For the Gaussian distributions, the DOS is described by its peak energy, its characteristic decay energy, and its total density of states. The ionized densities of the acceptor-like and donor-like states are obtained from these distributions together with the corresponding ionization probabilities of the tail and Gaussian acceptor states and of the donor states. In the p-i-n structure of Figure 1 the p-type a-Si:H layer is highly doped, since its higher conductivity allows more light to pass through to the intrinsic layer. As a result, the open-circuit voltage as well as the short-circuit current could be improved. Moreover, this layer is made thinner than the other layers to give the carriers less opportunity and time to recombine before reaching the intrinsic layer. Table 1 shows the set of input parameters used in our simulation. The transparent conductive ITO layer has a thickness of 75 nm. For the a-SiGe:H p-i-n solar cell, it is widely accepted that the most important parameters affecting the device properties are the gap states of the films. In this work, the AM1.5 spectrum was used for illumination of the samples.

3. Results and Discussion. The performance figures of a one-square-meter p-i-n single-junction a-SiGe:H solar cell (SiO2/ITO/p-a-Si:H/i-a-SiGe:H/n-a-Si:H/silver), namely the short-circuit current, open-circuit voltage, fill factor (FF), conversion efficiency, and I-V curve, are obtained as functions of the doping concentration, the atomic percent of Ge in the a-SiGe layer, and the thicknesses of the i-layer and p-layer. The variations of the electrical properties inside the solar cell, such as the distributions of the photogeneration rate, the recombination rate, and the electric field, with the layer properties are also obtained. In order to collect the maximum number of electron-hole pairs generated by absorbed photons, the electric field at the p-i interface should be as high as possible; however, the recombination rate also increases with increasing electric field, resulting in a reduction in the efficiency. So, there must be a compromise between the recombination rate and the electric field. The a-SiGe:H solar cell I-V and P-V curves as a function of the p-layer doping concentration are shown in Figure 2. The a-SiGe:H solar cell efficiency as a function of the p-layer doping concentration is shown in Figure 3. The other parameters are the same as in Table 1. According to Figures 2 and 3, the simulation indicates that for p-layer concentrations larger than 10^18 cm^-3 the cell has a high efficiency, due to the drastic increase in the short-circuit current of the device. However, care is needed to achieve such a high acceptor concentration in the film without degrading the film quality. Figure 4 shows the recombination rate and the electric field distribution through the cell as a function of the p-layer doping concentration. As shown in the figure, a p-layer concentration of 5 × 10^18 cm^-3 builds the highest electric field at the p-i interface, which results in more carriers being collected through the ITO layer, but it also gives the highest recombination rate in the i-layer. The a-SiGe:H solar cell I-V and P-V curves as a function of the p-layer thickness are shown in Figure 5. The effect of the p-layer thickness on the efficiency of the cell is shown in Figure 6. The other parameters used are the same as in Table 1. The thickness of the p-layer was changed from 10 to 90 nm.
The photogeneration rate, the recombination rate, and the electric field distribution through the cell as a function of the p-layer thickness are shown in Figures 7 and 8, respectively. These figures show that a cell with a thinner p-layer has a lower recombination rate in the p-layer and a higher electric field at the p-i interface, which results in a higher current density. The SiGe band gap, which is engineered by altering the Ge concentration, is a critical parameter in the SiGe solar cell design. Introducing Ge atoms into the Si lattice in Si-based solar cells is an effective approach to improving their characteristics. In particular, the current density of the cell can be enhanced without deteriorating its open-circuit voltage. The efficiency of the solar cell as a function of the Ge concentration in the SiGe layer is shown in Figure 9, where it is varied between 0 and 30 at.%. The other parameters used are the same as in Table 1. The simulation shows that a maximum efficiency of 18.5% is obtained at the optimized Ge concentration of 17 at.%, which represents a 6% improvement in the overall efficiency of the cell compared with the common single-junction a-Si:H solar cell (Figure 9). In an amorphous thin-film p-i-n solar cell, a thick absorber layer (i-layer) can absorb more light to generate electrons and holes (carriers); however, a thicker i-layer degrades the drift electric field for carrier transport. On the other hand, a thin i-layer cannot absorb enough light. The thickness of the i-layer is therefore a key parameter that can limit the performance of amorphous thin-film solar cells. The a-SiGe:H solar cell efficiency as a function of the i-layer thickness is shown in Figure 10. The other parameters used are the same as in Table 1. As shown in the figure, the maximum efficiency is obtained at an i-layer thickness of 0.8 µm. Based on the optimized a-Si:H/SiGe:H single-junction solar cell described previously, a double-junction solar cell (Figure 1) has been designed. The I-V and P-V characteristics of the optimized single- and double-junction solar cells are shown in Figure 11. As can be seen, the current density of the double-junction solar cell is lower than that of the single-junction solar cell, because of the current limitation of the a-Si:H subcell. The figures show that for the double-junction cell a short-circuit current density, open-circuit voltage, maximum output power, and fill factor of 267 A/m^2, 1.13 V, 235 W/m^2, and 0.795 are obtained, respectively.

4. Conclusion. A two-dimensional computer simulation is presented here, using the standard continuous density-of-states model for deep and shallow states in the amorphous silicon band gap, for the optimization of single- and double-junction hydrogenated amorphous silicon-germanium solar cells. The simulation and modeling process indicates that it is possible to optimize the solar cell performance and improve the efficiency of the cell by an appropriate selection of the thickness and doping concentration of the layers. This work was partly supported by Khuzestan Regional Electricity Company and Jundishapour Water and Energy Research Institute, Iran.

1. A. Gordijn, R. J. Zambrano, J. K. Rath, and R. E. I. Schropp, "Highly stable hydrogenated amorphous silicon germanium solar cells," IEEE Transactions on Electron Devices, vol. 49, no. 5, pp. 949–952, 2002.
2. S. Guha and J. Yang, "Science and technology of amorphous silicon alloy photovoltaics," IEEE Transactions on Electron Devices, vol. 46, no. 10, pp. 2080–2085, 1999.
3. A. Banerjee, K. Hoffman, X. Xu, J. Yang, and S. Guha, "Back reflector texture and stability issues in high efficiency multijunction amorphous silicon alloy solar cells," in Proceedings of the IEEE 1st World Conference on Photovoltaic Energy Conversion, vol. 1, pp. 539–542, Waikoloa, Hawaii, USA, 1994.
4. M. H. Liao and C. H. Chen, "The investigation of optimal Si-SiGe hetero-structure thin-film solar cell with theoretical calculation and quantitative analysis," IEEE Transactions on Nanotechnology, vol. 10, no. 4, pp. 770–773, 2011.
5. C. Lee, H. Efstathiadis, J. E. Raynolds, and P. Haldar, "Two-dimensional computer modeling of single junction a-Si:H solar cells," in Proceedings of the 34th IEEE Photovoltaic Specialists Conference (PVSC '09), pp. 001118–001122, Philadelphia, Pa, USA, June 2009.
6. Z. Q. Li, Y. G. Xiao, and Z. M. S. Li, "Modeling of multi-junction solar cells by Crosslight APSYS," in Proceedings of the High and Low Concentration for Solar Electric Applications.
7. A. Kosarian and P. Jelodarian, "Optimization and characterization of advanced solar cells based on thin-film a-Si:H/SiGe hetero-structure," in Proceedings of the 19th Iranian Conference on Electrical Engineering (ICEE '11), pp. 1–5, 2011.
8. H. Tasaki, W. Y. Kim, M. Hallerdt, M. Konagai, and K. Takahashi, "Computer simulation model of the effects of interface states on high-performance amorphous silicon solar cells," Journal of Applied Physics, vol. 63, no. 2, pp. 550–560, 1988.
9. M. Kemp, M. Meunier, and C. G. Tannous, "Simulation of the amorphous silicon static induction transistor," Solid State Electronics, vol. 32, no. 2, pp. 149–157, 1989.
10. M. Hack and J. G. Shaw, "Numerical simulations of amorphous and polycrystalline silicon thin-film transistors," in Proceedings of the Conference on Solid State Devices and Materials, pp. 999–1002, Sendai, Japan, 1990.
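As an aside, the defect-state model described in Section 2 (two exponential band tails plus donor-like and acceptor-like Gaussian deep levels) can be sketched numerically as below. This is an illustrative Python sketch only; all parameter names and values are assumptions for the example and are not the values of Table 1 of the paper.

import numpy as np

# Illustrative parameters (assumed, not from the paper's Table 1)
E_g = 1.55                       # mobility gap (eV); energy E measured from the valence-band edge
N_TA, E0_TA = 1e21, 0.03         # conduction-band tail: edge density (cm^-3 eV^-1), decay energy (eV)
N_TD, E0_TD = 1e21, 0.05         # valence-band tail
N_GA, E_GA, W_GA = 1e17, 0.95, 0.15   # acceptor-like Gaussian deep states: density, peak, width
N_GD, E_GD, W_GD = 1e17, 0.60, 0.15   # donor-like Gaussian deep states

def dos_total(E):
    """Total density of gap states g(E) = tails + Gaussians (cm^-3 eV^-1)."""
    g_ta = N_TA * np.exp(-(E_g - E) / E0_TA)           # acceptor-like tail decaying from E_C
    g_td = N_TD * np.exp(-E / E0_TD)                   # donor-like tail decaying from E_V
    g_ga = N_GA * np.exp(-((E - E_GA) / W_GA) ** 2)    # acceptor-like deep Gaussian
    g_gd = N_GD * np.exp(-((E - E_GD) / W_GD) ** 2)    # donor-like deep Gaussian
    return g_ta + g_td + g_ga + g_gd

E = np.linspace(0.0, E_g, 7)
for e, g in zip(E, dos_total(E)):
    print(f"E = {e:4.2f} eV : g(E) = {g:9.3e} cm^-3 eV^-1")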
{"url":"http://www.hindawi.com/journals/ijp/2012/946024/","timestamp":"2014-04-20T12:33:30Z","content_type":null,"content_length":"117824","record_id":"<urn:uuid:9314203e-ad69-4c10-b8cd-ebe693c5d39b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial Actions and Power Sets International Journal of Mathematics and Mathematical Sciences Volume 2013 (2013), Article ID 376915, 4 pages Research Article Partial Actions and Power Sets ^1Departamento de Matemáticas y Estadística, Universidad del Tolima, Ibagué, Colombia ^2Departamento de Matemática, Universidade Federal de Santa Maria, Santa Maria, RS, Brazil Received 4 October 2012; Accepted 26 December 2012 Academic Editor: Stefaan Caenepeel Copyright © 2013 Jesús Ávila and João Lazzarin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We consider a partial action with enveloping action . In this work we extend α to a partial action on the ring and find its enveloping action . Finally, we introduce the concept of partial action of finite type to investigate the relationship between and . 1. Introduction Partial actions of groups appeared independently in various areas of mathematics, in particular, in the study of operator algebras. The formal definition of this concept was given by Exel in 1998 [1 ]. Later in 2003, Abadie [2] introduced the notion of enveloping action and found that any partial action possesses an enveloping action. The study of partial actions on arbitrary rings was initiated by Dokuchaev and Exel in 2005 [3]. Among other results, they prove that there exist partial actions without an enveloping action and give sufficient conditions to guarantee the existence of enveloping actions. Many studies have shown that partial actions are a powerful tool to generalize many well-known results of global actions (see [3, 4] and the literature quoted therein). The theory of partial actions of groups has taken several directions over the past thirteen years. One way is to consider actions of monoids and groupoids rather than group actions. Another is to consider sets with some additional structure such as rings, topological spaces, ordered sets, or metric spaces. Partial actions on the power set and its compatibility with its ring structure have not been considered. This work is devoted to study some topics related to partial actions on the power set arising from partial actions on the set and its enveloping actions. In Section 1, we present some theoretical results of partial actions and enveloping actions. In Section 2, we extend a partial action on the set to a partial action on the ring . In addition, we introduce the concept of partial action of finite type to investigate the relationship between the enveloping action of and , the power set of the enveloping action of . 2. Preliminaries In this section, we present some results related to the partial actions, which will be used in Section 2. Other details of this theory can be found in [2, 3]. Definition 1. A partial action of the group on the set is a collection of subsets , , of and bijections such that for all , the following statements hold.(1) and is the identity of . (2). (3), for all . The partial action will be denoted by or . Examples of partial actions can be obtained by restricting a global action to a subset. More exactly, suppose that acts on by bijections and let be a subset of . Set and let be the restriction of to , for each . Then, it is easy to see that is a partial action of on . In this case, is called the restriction of to . 
In fact, for any partial action there exists a minimal global action (enveloping action of ), such that is the restriction of to [2, Theorem 1.1]. To define a partial action of the group on the ring , it is enough to assume in Definition 1 that each , , is an ideal of and that every map is an isomorphism of ideals. Natural examples of partial actions on rings can be obtained by restricting a global action to an ideal. In this case, the notion of enveloping action is the following ([3, Definition 4.2]). Definition 2. A global action of a group on the ring is said to be an enveloping action for the partial action of on a ring , if there exists a ring isomorphism of onto an ideal of such that for all , the following conditions hold. (1). (2), for all . (3) is generated by . In general, there exist partial actions on rings which do not have an enveloping action [3, Example 3.5]. The conditions that guarantee the existence of such an enveloping action are given in the following result [3, Theorem 4.5]. Theorem 3. Let be a unital ring. Then a partial action of a group on admits an enveloping action if and only if each ideal , is a unital ring. Moreover, if such an enveloping action exists, it is unique up equivalence. 3. Results In this section, we consider a nonempty set and a partial action the of the group on . By [2, Theorem 1.1] there exists an enveloping action for . That is, there exist a set and a global action of on , where each is a bijection of , such that the partial action is given by restriction. Thus, we can assume that , is the orbit of , for each and for all and all . The action on can be extended to an action on . Moreover, since , , is a bijective function, we have that and for all and all . Therefore, the group acts on the ring . This action will also be denoted by . Proposition 4. If acts partially on then acts partially on the ring . Proof. Let a partial action of on and consider the collection , where , , and is defined by for all and all . It is clear that , , is a well-defined function, and it is a bijection. Now, we must prove that defines a partial action of on the ring . We verify 2 and 3 of Definition 1, since 1 is evident. (2) If , then . Thus, for each . Hence, , and we conclude that . (3) For all , we have that . Since (item 2), then . In conclusion, for all . Finally, for all , we have that and , because each , , is a bijection. Therefore, acts partially on the ring . The partial action of on will also be denoted by . In the previous proposition, note that each ideal , , has the identity element . Thus, by Theorem 3, we conclude that there exists an enveloping action for the partial action . In the following result, we find this enveloping action and show its relationship with . Proposition 5. Let be a partial action of on the nonempty set . The following statements hold. (1)is an ideal of . (2)is a -invariant ideal of . (3)The enveloping action ofis, where each, ,acts onby restriction. Proof. (1) It is a direct consequence of the inclusion . (2) Since is an ideal of , we have that is an ideal of , and it is clear that is -invariant. (3) We must prove 1, 2, and 3 of Definition 2. Note that by item 2, the action on is global. Moreover, we can identify with because is an ideal of . The item 3 is consequence of 2. To prove 2, let . Then, . Since and is the enveloping action of , we have that for all and all . Thus, , and we conclude that for all . To prove 1, let . Then, and thus . Hence, for all . For the other inclusion, let such that . Then, . 
Since is the enveloping action of , we have for all . Hence, and thus . We conclude that is the enveloping action of . The final result shows that , the enveloping action of , is a subaction of . Thus it is natural to ask in which case or equivalently when is the enveloping action of . To solve this problem, we first define the concept of partial action of finite type. Definition 6. Let be a partial action of on the set with enveloping action . is said to be of finite type if there exist such that . A partial action of on the ring is called of finite type [5, Definition 1.1] if there exists a finite subset of , such that, for any . If the partial action has an enveloping action, then it can be characterized as follows [5, Proposition 1.2]. Proposition 7. Let be a partial action of on the ring with enveloping action . The following statements are equivalent. (1)is of finite type. (2)There exist such that . (3) has an identity element. The following theorem is the main result of this work. Without loss of generality, we can assume that in Definition 6 and Proposition 7. First, we prove the following specialization of [5, Proposition 1.10]. Proposition 8. Under the previous assumptions, if with , then is the identity element of . Proof. By induction on , it is enough to consider the case with two summands, that is, . Then, . Since the addition is the symmetric difference and the product is the intersection, we obtain that . Theorem 9. Let be a partial action of on the set with enveloping action . The following statements are equivalent. (1) is of finite type. (2). (3) is of finite type. Proof. . Suppose that there exist such that . By Proposition 8, the identity element of the ring is . So, , and since is an ideal of , we conclude that . . If , then . So, is a ring with identity, and by Proposition 7 the result follows. . If is of finite type, then there exist such that . Thus, for each , there exist such that , which implies that for each . Hence, . In conclusion, is of finite type. To illustrate the results obtained, we include the following examples. Example 10. Let be the set of even integers and the group . We define a partial action of on as follows: if is even and if is odd; for , an even integer is defined by for all , and for , an odd integer is the empty function. The enveloping action of is where and is the action of on , defined by for all . Since and the set of odd integers , we have . Hence, , and thus is the enveloping action of . Example 11. Let where is a fixed integer and is the group . We define a partial action of on as follows: and for ; is the identity and is the empty function in other case. The enveloping action of is , that of the previous example. Note that each singleton of is an element of for some integer . So, the enveloping action of coincides with the collection of all finite subsets of . Hence, because is an infinite set, and we conclude that . In [5] it was proved that if is a ring and is a partial action of a group on with enveloping action , then is right (left) Noetherian (Artinian), if and only if is right (left) Noetherian (Artinian) and is of finite type (Corollary 1.3). Under the same assumptions, they also proved that is semisimple if and only if is semisimple and is of finite type (Corollary 1.8). By using these results and Theorem 9 we obtain the following result. Proposition 12. Under the previous assumptions, the following statements are equivalent. (1) is finite, and is of finite type. 
(2)The ring is Noetherian (Artinian, semisimple), and is of finite type. (3)The ring is Noetherian (Artinian, semisimple). Proof. It is enough to observe that is finite if and only if the ring is noetherian (Artinian, semisimple) and apply Theorem 9. The authors are grateful to the referee for the several comments which help to improve the first version of this paper. This work was partially supported by “Oficina de Investigaciones y Desarrollo Científico de la Universidad del Tolima” and by “Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul.”
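As a concrete illustration of the objects in this paper (not part of the original article), the sketch below realizes the power set of a finite window of the even integers as a Boolean ring, with symmetric difference as addition and intersection as multiplication as in the proof of Proposition 8, and implements the translation partial action of Example 10 extended to subsets. The finite truncation of the set and all names are assumptions made for the example.

# Power set of a finite piece of X = even integers as a Boolean ring,
# with the translation partial action of Example 10 extended to subsets.
X = frozenset(range(-10, 12, 2))          # finite window of the even integers (illustrative)

def add(A, B):      # ring addition = symmetric difference
    return (A - B) | (B - A)

def mul(A, B):      # ring multiplication = intersection
    return A & B

def theta(g, A):
    """Partial action of g in Z on a subset A of X: defined only for even g,
    where it translates A by g (restricted back to the window X)."""
    if g % 2 != 0:
        return None                        # alpha_g is the empty map for odd g
    return frozenset(x + g for x in A if x + g in X)

A = frozenset({0, 2, 4})
B = frozenset({2, 6})

# Boolean-ring sanity checks: A + A = 0 and A * A = A
assert add(A, A) == frozenset()
assert mul(A, A) == A
# The extended map respects products (intersections): theta_4(A * B) = theta_4(A) * theta_4(B)
assert theta(4, mul(A, B)) == mul(theta(4, A), theta(4, B))
print(theta(4, A), theta(3, A))            # translated set, then None for odd g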
{"url":"http://www.hindawi.com/journals/ijmms/2013/376915/","timestamp":"2014-04-16T12:13:40Z","content_type":null,"content_length":"306515","record_id":"<urn:uuid:9590f8ab-c1a9-4834-8289-a9e32bc968a8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Inequality Proof

(October 27th 2008, 07:21 PM, #1) Hi, I think this is part of Euclidean geometry because the term started off with it. I actually posted this in the urgent homework help section, but there was no reply and I am in great need of help right now. I am sorry for reposting, but I really need help with this problem and it's due tomorrow. How would you prove it? Can you please explain? I really don't know how to do it. The diagram can be seen here (link not preserved). Given: AB = AC; line segment BC bisects angle ABC; line segment DE bisects angle ADB. Prove: BE > DE. By "prove" I mean it is supposed to be written in statements/reasons form. Thank you.

(November 2nd 2008, 01:05 AM, #2) The question should read: BE bisects angle ABC, and D is any point on BC other than C. The exterior angle ADB is greater than the interior angle ACD of triangle ACD. Also, angle ACD = angle ABD (since AB = AC). Hence angle EBD (which is half of angle ABD) is less than angle EDB (which is half of angle ADB); therefore BE is greater than DE. Are you satisfied? Please let me know.
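A quick numeric sanity check of the claim (not a proof), written in Python under the reply's reading of the problem: AB = AC, BE bisects angle ABC, D is a point on BC other than C, DE bisects angle ADB, and E is taken as the intersection point of the two bisectors. The specific coordinates are arbitrary choices for the check.

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def intersect(p, d, q, e):
    """Intersection of the lines p + t*d and q + s*e."""
    mat = np.array([d, -e]).T
    t, _ = np.linalg.solve(mat, q - p)
    return p + t * d

A = np.array([0.0, 2.0])
B = np.array([-1.0, 0.0])
C = np.array([1.0, 0.0])          # AB = AC by symmetry

for lam in (0.2, 0.5, 0.8):       # D strictly between B and C
    D = B + lam * (C - B)
    dir_BE = unit(unit(A - B) + unit(C - B))   # internal bisector of angle ABC at B
    dir_DE = unit(unit(A - D) + unit(B - D))   # internal bisector of angle ADB at D
    E = intersect(B, dir_BE, D, dir_DE)
    BE = np.linalg.norm(E - B)
    DE = np.linalg.norm(E - D)
    print(f"lam={lam}: BE={BE:.4f}  DE={DE:.4f}  BE > DE? {BE > DE}")

For each position of D the check prints BE > DE, consistent with the exterior-angle argument in the reply.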
{"url":"http://mathhelpforum.com/geometry/56112-inequality-proof.html","timestamp":"2014-04-18T17:44:24Z","content_type":null,"content_length":"30599","record_id":"<urn:uuid:3064a98a-95b9-47cb-b350-902a8d6e0ed2>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Compiling Error : "Value of index is not static" I'm implementing a simple ripple carry adder using instances of a simple adder (called CLA...that works..).The ripple carry adder is generic , N is the bit widht. Unfortunately when I loop to generate the simple adders ("CLA" components..) , I receive 2 errors : value "I" is not static (at the component "as": entity) , and value of index is not static(at aLess1: entity) . library IEEE; use IEEE.STD_LOGIC_1164.ALL; use IEEE.STD_LOGIC_ARITH.ALL; use IEEE.STD_LOGIC_UNSIGNED.ALL; entity RCA is Port ( A : in STD_LOGIC_VECTOR(N-1 downto 0);--nel caso da 3 a 0 (cioč 4) B : in STD_LOGIC_VECTOR(N-1 downto 0); S : out STD_LOGIC_VECTOR(N-1 downto 0); Carry_in : in STD_LOGIC;--un solo bit di uscita globale Carry_out : out STD_LOGIC);--un solo bit di uscita globale end RCA; architecture behavioral of RCA is signal c : std_logic_vector(N-1 to 0); -- internal carry signal COMPONENT CLA -- GENERIC (N: INTEGER := 32); PORT ( a : IN UNSIGNED ((N-1) DOWNTO 0); b : IN UNSIGNED ((N-1) DOWNTO 0); c_in : IN STD_LOGIC; s : OUT UNSIGNED ((N-1) DOWNTO 0); c_out : OUT STD_LOGIC; overflow : OUT STD_LOGIC END COMPONENT; a0: entity CLA generic map(1) port map(A(0)=>a(0),B(0)=>b(0),C_in=>Carry_in,C_out=>c(0),S(0)=>S(0)); middle: for I in 1 to N-2 generate as: entity CLA generic map(1) port map(A(I)=>a(I),B(I)=>b(I),C_in=> c(I-1),C_out=>c(I), S(I)=>s(I)); end generate middle; aLess1: entity CLA generic map(1) port map(A(N-1)=>a(N-1),B(N-1)=>b(N-1),C_in=> c(N-2) ,C_out=>Carry_out,S(N-1)=>s(N-1)); end Behavioral; Could you help me please? Thanks a lot in advance..
{"url":"http://www.velocityreviews.com/forums/t643782-compiling-error-value-of-index-is-not-static.html","timestamp":"2014-04-19T22:48:29Z","content_type":null,"content_length":"38065","record_id":"<urn:uuid:d5d19f0a-b481-4800-860f-dfa7b07aa159>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Find each exact value if 0 < x < pi/2 and 0 < y < pi/2.
{"url":"http://openstudy.com/updates/50d943d8e4b069916c853d76","timestamp":"2014-04-18T19:18:02Z","content_type":null,"content_length":"42075","record_id":"<urn:uuid:a213a9b2-49b8-4c16-84df-e6edff1cbb87>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Finite subgroup of an infinite group

(December 12th 2011, #1) Claim: the only subgroup of finite order of an infinite group is the identity. My proof: let G be a group of infinite order, let H be a subgroup, and let $x \in H$ with $x \neq 1$. Then $\{x, x^2, x^3, x^4, \dots\} \subseteq H$, which makes H have infinite order. Am I right? Is there any counterexample?

Re: Finite subgroup of an infinite group

Re: Finite subgroup of an infinite group
More generally, take any field $k$; if there exists a non-zero $x \in k$ with $x^n = 1$ (i.e. a non-trivial $n^{\text{th}}$ root of unity), then the group $\mu_n$ of $n^{\text{th}}$ roots of unity forms a finite subgroup of $k^\times$. This clearly generalizes Dr. Revilla's example (so, for example, the cube roots of unity in $\mathbb{C}$). But perhaps an even more trivial counterexample: take any infinite group $G$ and consider $G\times\mathbb{Z}_2$....
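A quick computational illustration of the counterexample above (Python; purely numerical, so equality is checked up to a floating-point tolerance): the cube roots of unity form a finite subgroup of the infinite group of non-zero complex numbers under multiplication.

import cmath
from math import pi

n = 3
roots = [cmath.exp(2j * pi * k / n) for k in range(n)]   # the n-th roots of unity

def close(z, w, eps=1e-9):
    return abs(z - w) < eps

def in_roots(z):
    return any(close(z, r) for r in roots)

# Identity present, closure under multiplication and under inverses:
assert in_roots(1 + 0j)
assert all(in_roots(a * b) for a in roots for b in roots)
assert all(in_roots(1 / a) for a in roots)
print(f"The {n} roots of unity form a subgroup of C* of order {len(roots)}.")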
{"url":"http://mathhelpforum.com/advanced-algebra/194143-finite-subgroup-infinite-group.html","timestamp":"2014-04-20T21:18:30Z","content_type":null,"content_length":"43577","record_id":"<urn:uuid:25e66d66-2fdd-4a24-a2d5-196e2f502ffc>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Confusion about flat functors Let $\mathbf{C}$ be a small category and let $\mathbf{E}$ be a locally small and cocomplete category. For the purposes of this discussion, I am defining a flat functor to be a functor $A : \mathbf{C} \to \mathbf{E}$ such that the induced functor $(-) \otimes_\mathbf{C} A : \widehat{\mathbf{C}} \to \mathbf{E}$ is left exact. Now, when $\mathbf{E} = \mathbf{Set}$, I know this is equivalent to the definition given in the nLab article flat functor: the comma category $(1 \downarrow A)$ is cofiltered if and only if $A$ is flat in the above sense (and I guess this extends without problems to the case $(E \downarrow A)$ for a general set $E$). This is Theorem 3 in [Sheaves and Geometry and Logic, Ch. VII, §6]. The article also asserts that, when $\mathbf{E}$ is a topos, if $A$ is representably flat, then $A$ is flat in the above sense. This seems dubious, since we should at least take $\mathbf{E}$ to be a locally small and cocomplete topos (e.g. a Grothendieck topos). Regardless, assuming $\mathbf{E}$ is a sufficiently nice topos, is the converse true? That is, if $A$ is flat, is $A$ representably flat? For some reason Mac Lane and Moerdijk phrase everything internally in terms of $\mathbf{E}$ in [Ch. VII, §8], and the claim that comes closest is Lemma 4. Ultimately though, what I want to know is this: when $\mathbf{C}$ is a finitely complete category and $\mathbf{E}$ is a locally small cocomplete topos, are 1. $A$ being a flat functor 2. $A$ being a representably flat functor 3. $A$ being left exact all equivalent? For the case $\mathbf{E} = \mathbf{Set}$, Mac Lane and Moerdijk prove (1) ⇒ (3) ⇒ (2) ⇔ (1), and I can see that (1) ⇒ (3) ⇒ (2) hold even when $\mathbf{E}$ is only assumed to be a locally small and cocomplete category. The troublesome step is (2) ⇒ (1), which the nLab article asserts is true but gives no reference for… You may be interested in this blog post. Does that answer some or all of your questions? Interesting – thanks! If I understand correctly, all the definitions coincide when $\mathbf{C}$ is finitely complete, which is probably good enough for most purposes. But the post doesn’t discuss flatness in terms of the Yoneda extension being left exact… how is this definition related to the others, in the general case (of toposes)? You may find info about this in Postulated colimits and left exactness of Kan extensions :: A Kock. This is a retyped version of an old (1989) preprint. If (and that’s a big if) I’m reading what it says there right: if all the colimits used to calculate $Lan_y F$ are what is called there postulated, then $Lan_y F$ preserves finite limits iff $F$ is flat where $F$ flat is in the internal-logic sense that Mike is referring to in the post he linked to above. Now, in a general Grothendieck topos all small colimits are postulated (Proposition 2.1 there), thus we get the characterization in terms of $Lan_y F$. Concerning the technical-looking definition of postulated colimit, a nice, more conceptual way of looking at it is also in that paper: If $E$ is small with subcanonical topology, a colimit is postulated if it is preserved by the Yoneda embedding of $E$ into the topos $\widehat E$ of sheaves on $E$ You may also find interesting all the definitions coincide when C is finitely complete Yes, at least when the codomain category is a topos, or more generally a site with finite limits and extremal-epimorphic covering families. 
If the codomain is a general site, then covering-flat doesn’t imply representably-flat even if both domain and codomain are finitely complete. flatness in terms of the Yoneda extension being left exact… how is this definition related to the others It is also equivalent (when the codomain is a topos). I think this is VII.9.1 in Sheaves in Geometry and Logic. I have reorganized and added material to the page flat functor, so that hopefully it can answer this question by itself in the future. That is much clearer, thanks! Still, there’s one little thing I have to be sure of: when $\mathbf{C}$ is small, and $\mathbf{E}$ is a locally small and cocomplete topos, does being representably flat imply being internally flat? This would provide the (2) ⇒ (1) step in my original post. Also, one wonders if the business about being internally flat should be rephrased in terms of internal diagrams or somesuch (since being cocomplete implies that any small category $\mathbf{C}$ can be lifted to an internal category in $\mathbf{E}$). does being representably flat imply being internally flat? If you read carefully, yes. Representable flatness is covering-flatness relative to the trivial topology; internal flatness is covering-flatness relative to the canonical topology. Since the canonical topology contains the trivial topology, the one implies the other. Also, one wonders if the business about being internally flat should be rephrased in terms of internal diagrams or somesuch It certainly could be. I don’t know that it should be. (-: I’ve been thinking about flat functors and I have some further questions. 1. The definition in terms of the Yoneda extension is strongly analogous to the definition of “flat module” in commutative algebra – it works word-for-word and symbol-for-symbol if I think of a presheaf $\mathbb{C}^{op} \to \mathbf{Set}$ as a “right $\mathbb{C}$-module” and write the Yoneda extension of a “left $\mathbb{C}$-module” as ${-} \otimes_{\mathbb{C}} F$. But the definition in terms of elements seems to give something very different. 2. Every representable copresheaf is projective and is flat. This is even true for internal copresheaves on internal categories in an elementary topos. But are projective copresheaves flat in general, or have I been thinking about commutative algebra too much? 3. Let $G$ be an internal group in an elementary topos $\mathcal{E}$, and let $X$ be an internal left $G$-torsor. Then, since $X \to 1$ is epic, there is a (generalised) element $x$ of $X$, and we can define an internal morphism $f : G \to X$ by $g \mapsto g \cdot x$. This is an internal epimorphism/monomorphism by transitivity/freeness (resp.). This means $f : G \to X$ is an internal isomorphism. (Right…?) This seems disturbing, since it looks as if $\mathcal{E}$believes that any two left $G$-torsors are isomorphic. What’s really going on? But the definition in terms of elements seems to give something very different. Why do you say that? are projective copresheaves flat in general No. Consider the coproduct of two representables; this is projective, but it doesn’t preserve terminal objects. …we can define an internal morphism… I think I know what you’re getting at, but words like “internal morphism” aren’t usually the way people talk. I think it would be more common to say that “there exists a morphism $G\to X$” is internally true. Anyway, why does it bother you that $\mathcal{E}$ believes any two $G$-torsors are isomorphic? That doesn’t make them externally isomorphic. Why do you say that? 
Well, by “definition in terms of elements” I mean something like this. Let $R$ be a ring. A left $R$-torsor is a left $R$-module $M$ with the following properties: 1. There is a non-zero element of $M$. 2. Given $m \in M$ and $n \in M$, there exist elements $r, s$ of $R$ and an element $p$ of $M$ such that $r \cdot p = m$ and $s \cdot p = n$. 3. Given $r \in R$, $s \in R$, and $m \in M$ such that $r \cdot m = s \cdot m$, there is an element $t$ of $R$ and an element $p$ of $M$ such that $t r = t s$ and $t \cdot p = m$. In the case that $R$ is a field, we see that a left $R$-torsor must be a one-dimensional vector space over $R$. I think if $R$ is an integral domain then it can be any faithful $R$-submodule of its fraction field. But this is far from a complete classification of flat $R$-modules, no? No. Consider the coproduct of two representables; this is projective, but it doesn’t preserve terminal objects. Ah, thanks. I wondered if I needed a connectedness hypothesis of some kind. Anyway, why does it bother you that $\mathcal{E}$ believes any two $G$-torsors are isomorphic? That doesn’t make them externally isomorphic. Well, I was expecting that the internal logic would be able to distinguish between non-isomorphic $G$-torsors. But I suppose the question is rather delicate. Is there some logical formula in the internal language of $\mathcal{E}$ which may be roughly interpreted as “$X$ is isomorphic to $G$ as $G$-torsors” but which is not automatically valid for all $G$-torsors $X$? Then, since $X \to 1$ is epic, there is a (generalised) element $x$ of $X$, and we can define an internal morphism $f : G \to X$ by $g \mapsto g \cdot x$. Actually what you would get is a morphism $G\times U \to G\times X \to X$, where $x:U\to X$is the generalised element, and hence that $X$ is isomorphic to $G$ as a $G$-object in the internal language (witnessed by the morphism $G\times U \to X\times U$). by “definition in terms of elements” I mean something like this. I think you’ve been thinking about commutative algebra too much. (-: Elementwise definitions of concepts in Set-based category theory don’t generally carry over to enriched category theory. Is there some logical formula in the internal language of $\mathcal{E}$ which may be roughly interpreted as “$X$ is isomorphic to $G$ as $G$-torsors” but which is not automatically valid for all $G$-torsors $X$? No, I don’t think so. Just like in sets, any two isomorphic $G$-torsors have all the same properties, the same is true internally. Externally, a $G$-torsor $X$ is trivial if and only if there is a global element of $X$, so I guess it boils down to whether the internal logic can tell whether something has a global element or not. The most obvious formulation, “There exists a morphism $1 \to X$,” does not work, since it really means “$X$ is inhabited.” That’s quite disappointing, because it seems to be saying that cohomology is invisible to the internal logic… But isn’t what you’re thinking about cohomology defined in the external logic? H^1(1,G) is a quotient of the hom-groupoid in the bicategory of internal groupoids and anafunctors, and this is an external construction. Yes, cohomology is essentially by definition about the difference between internal and external. So it’s to be expected that “internal cohomology” is trivial, just like the cohomology of $Set$ is I haven’t seen that point of view before. It makes a lot of sense. Thanks!
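The elementwise conditions quoted above can be checked mechanically for small examples. Below is a brute-force check (Python), written for the toy case R = M = Z/5Z, i.e. a field acting on a one-dimensional vector space over itself; the encoding of the three conditions follows the numbered list in the comment above and is only an illustration, not part of the discussion.

# Brute-force check of the three flatness-style conditions from the comment,
# for R = M = Z/5Z (a field, acting on itself as a 1-dimensional vector space).
p = 5
R = range(p)
M = range(p)

def act(r, m):               # the module action: multiplication mod p
    return (r * m) % p

# (1) There is a non-zero element of M.
cond1 = any(m != 0 for m in M)

# (2) For all m, n there exist r, s, p0 with r.p0 = m and s.p0 = n.
cond2 = all(
    any(act(r, p0) == m and act(s, p0) == n
        for r in R for s in R for p0 in M)
    for m in M for n in M
)

# (3) If r.m = s.m then there exist t, p0 with t*r = t*s and t.p0 = m.
cond3 = all(
    any((t * r) % p == (t * s) % p and act(t, p0) == m
        for t in R for p0 in M)
    for r in R for s in R for m in M if act(r, m) == act(s, m)
)

print(cond1, cond2, cond3)   # expected: True True True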
{"url":"http://nforum.mathforge.org/discussion/3665/confusion-about-flat-functors/?Focus=31480","timestamp":"2014-04-18T18:11:04Z","content_type":null,"content_length":"71111","record_id":"<urn:uuid:4d1a429b-0ce0-41cf-a523-28384fce90e9>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: address standard error of rho in bivariate probit
From: Maarten buis <maartenbuis@yahoo.co.uk>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: address standard error of rho in bivariate probit
Date: Sun, 13 Aug 2006 09:10:13 +0100 (BST)

--- Bernhard Ganglmair <bganglma@uni-bonn.de> wrote:
> version 8
> 1) I estimate two bivariate probit models (bivar1, bivar2) using two
> different samples and would like to test for equality of the
> correlation coefficient rho that I get for each sample. rho I
> address with e(rho), but how do I address the standard error of rho
> that is reported in the output table? ereturn list gives me p-values
> and chi-squared for the null, but no standard errors directly.

The correlation is stored as one of the "b" parameters, only in the form of a Fisher's Z transformed correlation. It is stored as _b[athrho:_cons]. _b[variable name] usually gives you the regression coefficient of that variable. _b[equation name:variable name] gives you the regression coefficient of that variable in that equation. You can get the constant by typing _b[equation name:_cons]. Fisher's Z transformation also happens to be the arc-hyperbolic tangent of rho, which explains the weird equation name. You can transform the variable back to the correlation metric by taking the hyperbolic tangent. See the example below:

*-----begin example----
version 8.2
sysuse auto, clear
gen rep2 = rep78 <=3
biprobit rep2 foreign mpg price
matrix list e(b)
nlcom rho: tanh(_b[athrho:_cons])
*-----end example------

> 2) To test for equality of rho in the two samples I thought of
> running a suest on bivar1 and bivar2 and then conduct a simple Wald
> test using test, but suest seems to have lost the results for rho
> in bivar1 and bivar2. Anybody some suggestions how such a test could
> be run?

I would perform the test on the transformed correlations since a) it is easier to perform because you can use the already available _b[athrho:_cons], and b) the sampling distribution of the transformed correlation is more likely to be normally distributed than that of the correlation coefficient itself.

Maarten L. Buis
Department of Social Research Methodology
Vrije Universiteit Amsterdam
Boelelaan 1081
1081 HV Amsterdam
The Netherlands
visiting address: Buitenveldertselaan 3 (Metropolitan), room Z434
+31 20 5986715
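For reference, the suggested test on the Fisher-transformed correlations can also be written down outside Stata. The sketch below (Python) compares two independent estimates of athrho using their standard errors with a simple Wald/z statistic; the numbers are made-up placeholders, and in Stata itself one would still proceed via -suest-/-test- or -nlcom- as discussed above.

from math import tanh, sqrt, erf

# Made-up placeholder estimates of athrho (Fisher's Z of rho) and their
# standard errors from two independently estimated bivariate probit models.
athrho1, se1 = 0.45, 0.12
athrho2, se2 = 0.20, 0.15

# Wald/z test of H0: athrho1 = athrho2 (independent samples)
z = (athrho1 - athrho2) / sqrt(se1**2 + se2**2)
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal p-value

print(f"rho1 = {tanh(athrho1):.3f}, rho2 = {tanh(athrho2):.3f}")
print(f"z = {z:.3f}, two-sided p = {p_value:.3f}")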
{"url":"http://www.stata.com/statalist/archive/2006-08/msg00307.html","timestamp":"2014-04-18T03:20:55Z","content_type":null,"content_length":"8363","record_id":"<urn:uuid:a406b86d-782f-4d7b-b1f7-ed19f24e8196>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
From HaskellWiki I am not really a Haskell programmer, I mainly use Python. My main objective for creating an account here is to be able to publish comments on the talk page relating to Project Euler, because I disagree with the decision of this community to publish solutions to project Euler problems here. I have a user with the same name on Project Euler. Besides this small disagreement, this wiki seems like a nice place. One day, if I get the time to learn Haskell, I will study it in greater detail. :-)
{"url":"http://www.haskell.org/haskellwiki/index.php?title=User:Slaunger&oldid=36315","timestamp":"2014-04-19T15:19:19Z","content_type":null,"content_length":"13601","record_id":"<urn:uuid:2e5cdbd0-82cc-4b4c-9cc6-426560b30741>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Standard Equation of Ellipse

(April 6th 2011, 08:33 PM) Find the standard form of the equation of the ellipse with the given characteristics. Vertices: (0,2), (8,2); minor axis of length 2. With that information I found the center: (4,2). Minor axis of length 2 means I would use 2b = 2, so b = 1. I am lost after that.

(April 6th 2011, 08:53 PM) Your equation is $\displaystyle \frac{(x-h)^2}{a^2}+\frac{(y-k)^2}{b^2}=1$. You are given the length of the minor axis to be 2, therefore you have the points (0,2), (8,2), (4,1), (4,3) and centre (4,2), and the semi-major axis is 4. You have lots of information here; firstly, can you find (h,k)?

(April 6th 2011, 09:06 PM) (h,k) is the center, which is (4,2). Can you please explain where you got the points (4,1), (4,3) and how you found the semi-major axis to be 4? Is it because the distance between the center and one of the vertices is 4?

(April 6th 2011, 09:11 PM) Good work with finding (h,k). You're on your way. You told me the minor axis was 2.
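Putting the pieces together, here is a short numeric check (Python) of the resulting equation (x-4)^2/16 + (y-2)^2/1 = 1, with center (4,2), a = 4, b = 1:

h, k = 4, 2          # center, midway between the vertices (0,2) and (8,2)
a = 4                # semi-major axis: distance from the center to a vertex
b = 1                # semi-minor axis: half of the minor-axis length 2

def on_ellipse(x, y):
    return abs((x - h)**2 / a**2 + (y - k)**2 / b**2 - 1) < 1e-12

# The vertices and co-vertices should all satisfy the equation:
for pt in [(0, 2), (8, 2), (4, 1), (4, 3)]:
    print(pt, on_ellipse(*pt))     # True for each point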
{"url":"http://mathhelpforum.com/pre-calculus/177091-standrad-equation-ellipse-print.html","timestamp":"2014-04-16T07:30:58Z","content_type":null,"content_length":"6080","record_id":"<urn:uuid:0dfdb413-1201-4199-a673-cb0f19c82d13>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Help please with reflection problem!? Posted by Knights on Friday, February 22, 2013 at 2:29pm. A laser is shot from vertex A of square ABCD of side length 1, towards point P on BC so that BP = 3/4. The laser reflects off the sides of the square until it hits another vertex, at which point it stops. What is the length of the path the laser takes? Help?? Won't the laser bounce around until infinity?
• Help please with reflection problem!? - Reiny, Friday, February 22, 2013 at 4:06pm
No, the laser follows the old billiard-table rule: the angle of incidence equals the angle of reflection. I am going to enlarge your square to a 4 by 4, thus making your first bounce BP = 3. Make a reasonable sketch; graph paper might be a good idea. From P it will bounce and hit CD at Q. Look at triangles ABP and PCQ; they are both right-angled and similar. BP^2 + AB^2 = AP^2, so 9 + 16 = AP^2 and AP = √25 = 5. So each triangle formed by a bounce will have the ratio 3 : 4 : 5. We are interested in the sum of the paths formed by the hypotenuses. In the second triangle, the short side PC is 1, so by ratios PQ/5 = 1/3, giving PQ = 5/3. Find CQ by ratios; then you can find QD. On my diagram, I have the following paths: from A to P ---- 5 units; from P to Q on CD --- done: 5/3 units; from Q to R on AD, R is close to D; from R to S on BC; from S to T on AB; from T to U on AD; and ahhhh from U to C, which is a vertex. At this point you should notice that there is a lot of symmetry: AP = RS = UC, PQ = TU, QR = TS. So once you have found QR, again by using the ratios 3:4:5, you have found the 3 different path lengths. Add up the 7 lengths, and don't forget to divide by 4, my original step to avoid some initial fractions.
• Help please with reflection problem!? - Knights, Friday, February 22, 2013 at 4:50pm
Thank you very much it helps a lot!!
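A quick way to confirm the final length is the standard "unfolding" trick: reflect the square instead of the beam, so the bouncing path straightens into a line that ends at the first image of a vertex it meets. A short sketch of that check, using the same 4-by-4 scaling as the reply above and nothing beyond the numbers stated in the problem:

# ----- begin example -----
from math import hypot

# Square scaled to side 4, as in the reply above: the beam leaves A = (0,0)
# toward P = (4,3) on BC, i.e. with direction (4, 3).
side, dx, dy = 4, 4, 3

# Unfolding: reflected copies of the square tile the plane, and every image of
# a vertex sits at a point (side*m, side*n).  Because dx equals the side here,
# the straightened beam t*(dx, dy) can only pass through such a point at a
# whole-number t, so we scan for the first t where both coordinates qualify.
t = 1
while (t * dx) % side or (t * dy) % side:
    t += 1

length_scaled = hypot(t * dx, t * dy)           # path length in the 4-by-4 square
print(t, length_scaled, length_scaled / side)   # 4 20.0 5.0  -> answer: 5
# ----- end example -----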
{"url":"http://www.jiskha.com/display.cgi?id=1361561358","timestamp":"2014-04-19T00:30:18Z","content_type":null,"content_length":"10131","record_id":"<urn:uuid:cc33d636-18b1-4906-8bf6-a8ed727c982e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: graph hbar blabel and text option Stata 11 Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: RE: graph hbar blabel and text option Stata 11 From Nick Cox <n.j.cox@durham.ac.uk> To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu> Subject st: RE: graph hbar blabel and text option Stata 11 Date Mon, 24 Oct 2011 14:44:29 +0100 Without getting into specifics, this strikes me as very much a problem in which you should reduce your dataset to what needs to be plotted (-collapse- yields a dataset of means directly) and then use -graph twoway bar-. You could make some progress with -graph hbar- in solving these problems but I suspect not all. But specifically with -blabel()- the options are precisely what is documented. The scope for subverting them is about zero. The -twoway- way offers much greater control. I note the absence of complete documentation of exactly what you typed. This never does any harm. Kaulisch, Marc I am constructing a (simple) graph hbar (mean)... Graph with four yvars and a one over-var. While polishing the graph some questions appear: 1. Is it possible to add a (useful) blabel command? I use the option . blabel(bar, pos(center) format(%2.1fc) color(white)) But now I want to add a total outside the bar. Is this possible? Adding blabel(total) changes the values of the barlabels into cumulative sums which is not what I want... 2. Would it be possible to change the color of specific barlabels? Giving my scheme some bars are dark and some light. Using white is great for the dark ones but not for the light ones... 3. I would like to add text to each bar at the end of x-axis. Which are the xaxis coordinates for the bars? I see that text(13 1) places the text at the bottom and text(13 100) at the top, but is there a way to identify the right place for each bar automatically? Or is this a trial and error process? * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2011-10/msg01069.html","timestamp":"2014-04-16T19:15:00Z","content_type":null,"content_length":"8809","record_id":"<urn:uuid:cbfd2620-d06d-4343-8cff-93154ccbac57>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Hedging Greeks for a portfolio of options using linear and quadratic programming Sinha, Pankaj and Johar, Archit (2010): Hedging Greeks for a portfolio of options using linear and quadratic programming. Download (183Kb) The aim of this paper is to develop a hedging methodology for making a portfolio of options delta, vega and gamma neutral by taking positions in other available options, and simultaneously minimizing the net premium to be paid for the hedging. A quadratic programming solution for the problem is formulated, and then it is approximated to a linear programming solution. A prototype for the linear programming solution has been developed in MS Excel using VBA. Item Type: MPRA Paper Original Title: Hedging Greeks for a portfolio of options using linear and quadratic programming English Title: Hedging Greeks for a portfolio of options using linear and quadratic programming Language: English Keywords: Hedging, Greeks, portfolio of options G - Financial Economics > G1 - General Financial Markets > G11 - Portfolio Choice; Investment Decisions C - Mathematical and Quantitative Methods > C0 - General > C00 - General C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C63 - Computational Techniques; Simulation Modeling C - Mathematical and Quantitative Methods > C8 - Data Collection and Data Estimation Methodology; Computer Programs > C88 - Other Computer Software Subjects: G - Financial Economics > G0 - General > G00 - General C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C10 - General C - Mathematical and Quantitative Methods > C4 - Econometric and Statistical Methods: Special Topics > C44 - Operations Research; Statistical Decision Theory C - Mathematical and Quantitative Methods > C0 - General > C02 - Mathematical Methods C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C61 - Optimization Techniques; Programming Models; Dynamic Analysis Item ID: 20834 Depositing Pankaj Sinha Date Deposited: 22. Feb 2010 11:26 Last Modified: 11. Feb 2013 20:26 [1] Hull, J. C. (2009). Options, Futures, and Other Derivatives, Prentice Hall. [2] Rendleman, R. J. (1995). An LP approach to option portfolio selection, Advances in Futures and Options Research, 8, 31–52. [3] Papahristodoulou, C. (2004). Option strategies with linear programming, European Journal of Operational Research, 157, 246–256. [4] Horasanlı, M. (2008). Hedging strategy for a portfolio of options and stocks with linear programming, Applied Mathematics and Computation, 199, 804–810. URI: http://mpra.ub.uni-muenchen.de/id/eprint/20834
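As a rough illustration of the linear-programming formulation described in the abstract, the sketch below chooses non-negative positions in a few hedging options (plus a freely signed position in the underlying for delta) so that the combined delta, gamma and vega cancel a given portfolio's Greeks while the premium paid is minimized. All instrument Greeks, premiums and portfolio exposures are invented illustrative numbers, and this is a simplified reading of the abstract rather than the authors' exact model (which also develops a quadratic-programming version).

# ----- begin example -----
import numpy as np
from scipy.optimize import linprog

# Hypothetical hedging instruments: three options plus the underlying.
# Per-unit delta, gamma, vega and premium (all illustrative numbers).
names   = ["call_A", "call_B", "put_C", "underlying"]
delta   = np.array([0.55, 0.35, -0.40, 1.0])
gamma   = np.array([0.06, 0.09,  0.07, 0.0])
vega    = np.array([0.25, 0.30,  0.28, 0.0])
premium = np.array([4.00, 2.20,  3.10, 0.0])   # the underlying costs no *premium*

# Hypothetical existing portfolio exposures to be neutralized.
port_delta, port_gamma, port_vega = 40.0, -25.0, -100.0

# Equality constraints: hedge Greeks must exactly cancel the portfolio Greeks.
A_eq = np.vstack([delta, gamma, vega])
b_eq = np.array([-port_delta, -port_gamma, -port_vega])

# Long-only option positions (x >= 0); the underlying may be long or short.
bounds = [(0, None), (0, None), (0, None), (None, None)]

res = linprog(c=premium, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
if res.success:
    for name, x in zip(names, res.x):
        print(f"{name:>10}: {x:10.2f}")
    print(f"net premium paid: {premium @ res.x:.2f}")
else:
    print("No feasible hedge with these instruments:", res.message)
# ----- end example -----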
{"url":"http://mpra.ub.uni-muenchen.de/20834/","timestamp":"2014-04-19T19:41:04Z","content_type":null,"content_length":"19333","record_id":"<urn:uuid:2e545262-30b2-4c3a-8130-2eab12b39e05>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Homoeopathic Principles Vis-A-Vis Modern Research Methodology Date posted: June 16, 2012 Prof. (Dr.) Niranjan Mohanty Two centuries back, a German physician called Dr.C.F.S Hahnemann who amazed this medical world with his astounding discovery of this scientific technique, to establish the curative power of substances in infinitesimal dilutions. The most brilliant minds of this era flocked to him. During the course of his life time he introduced ideas and principles of standardization which were trail blazers of that time. The methods were brilliant considering the technological limitations of that time. However technology has advanced astronomically but today’s scientific research methodology can run smoothly with the principles of homoeopathy or not is the subject of my discussion. Let us delineate the principles of homoeopathy around which it is revolving. From advent of the Hahnemann’s era to the present era many erudite, veterans& scholars of Homoeopathy have contributed for the growth & development of homoeopathy and have evolved many principles for the purpose of practices. The major axioms which are universally accepted by classical Homoeopathy are as follows:- 1. Law of Similia 2. Law of Simplex 3. Law of Minimum Dose 4. Doctrine of Drug Proving 5. Theory of Chronic Disease 6. Theory of Vital Force 7. Doctrine of Suppression 8. Doctrine of Individualization 9. Obstacles to Cure 10. Doctrine of Analogy 11. Doctrine of Concomitants 12. Doctrine of Generalization & Others 13. Totality of symptoms & others. The principles required for clinical practices and necessarily relates directly for adoption of Research Methodology are as follows: 1. Law of similia 2. Law of simplex 3. Law of minimum Indirectly for building of totality of symptoms, principle involved are as follows: 1. Individualization 2. Doctrine of analogy 3. Doctrine of concomitant 4. Doctrine of generalization 5. Totality of symptoms (1) The principles involved during treatment & follow up of the cases are as follows: 1. 1) Theory of chronic disease 2. 2) Doctrine of suppression 3. 3) Obstacles to cure Now let me briefly describe few lines on research and research methodology. Research is a quest for knowledge through diligent search or investigation or experiment aimed at the discovery and interpretation of new knowledge. (2) Research is an art of scientific Investigation. While conducting research in homoeopathy above axioms are to be adhered to “A careful investigation or inquiry specially through search for new facts in any branch of nowledge.” (3) A systematized effort to gain new knowledge. (4) Research methodology is a systematic body of procedures and technique applied in carrying out our investigation or experimentation targeted at obtaining new knowledge (WHO). Research techniques that are used for conduction of research are three types such as • Library Research. • Field Research. • Laboratory Research. It is a way to systematically solve the research problem. It is a science of studying how research is done. Mainly research approaches are two types which are as follows: a) Quantitative • Inferential • Experimental • Simulation b) Quantitative Looking to the categories of research we visualize following types such as: A. Empirical and theoretical Research: The philosophical approach to research is basically of two types a) Empirical. b) Theoretical / Conceptual. a) Empirical: – Health research mainly follows the empirical approach. • It is based on observation and experience more than upon theory and abstraction. 
• For example, Epidemiological research depends upon the systematic collection of observations on the health related phenomenon of interest in defined population. • Empirical research can be qualitative or quantitative in nature. • Health science research deals with information of a quantitative nature. For the most part, this involves • The identification of the population of interest. • The characteristics (variables) of the individuals (units) in the population. • The study of variability of these characteristics among the individuals in the population. • Quantification in empirical research is achieved by three numerical procedures i) Measurement of variables. ii) Estimation of population parameters. iii) Statistical testing of hypothesis. • Empirical research relies on experience or observation without due regard to system or theory. • We can call it experimental type of research. b) Theoretical / Conceptual Research • It is related to some abstract, idea(s), theories. Examples are • In abstraction with mathematical models. • Advances in understanding of disease occurrence and causation cannot be made without a comparison of the theoretical constructs with that which we actually observe in populations. • Empirical & theoretical research complements each other in developing an understanding of the phenomena, in predicting future events. B. Basic and applied: a) Basic / fundamental / pure: • It means formation of theory or generalization. • Basic research is usually considered to involve a research for knowledge without a defined goal of utility or specific purpose. b) Applied / Action: • To find out the solution for immediate problems. • Applied research is problem oriented. C. Other categories of Research: a) Longitudinal / one time research. b) Field setting / laboratory / simulation research. c) Clinical / diagnostic research. d) Historical research. e) Conclusion oriented / decision oriented research. (5) Beliefs / attitudes / practice in society by man) Several fundamental principles are used in scientific inquiry. A) Order • Scientific method is not common sense. • In arriving at conclusion “common sense” can’t be employed e.g. – Draft of air causes Allergic Rhinitis. • To arrive at conclusion an organized observation of entities or events which are classified or ordered on the basis of common properties and behaviors are required. • It is this commonality of properties and behaviors that allows predictions which carried to the ultimate, become laws e.g. A number of Allergic Rhinitis cases are studied and it is found a number cases are having a group of common causes from which prediction is made and there after etiology become conclusive as from ‘Allergens’. B) Inference and chance • Reasoning or inference is the force of advances in research • In terms of logic, It means that a statement / or a conclusion ought to be accepted because one or more other statements / premises (evidence) are true. • Inferential suppositions, presumptions or theories may be so developed, through careful constructions, as to pose testable hypothesis. • The testing of hypothesis is the basic method of advancing knowledge in science. • Two distinct approaches or arguments have evolved in the development of inferences. They are such as: deductive and inductive. • In deduction, the conclusion necessarily follows from premises (evidence) / statements, as in syllogism e.g. [All ‘A’ is ‘B’, all ‘B’ is ‘C’ therefore all ‘A’ is ‘C’] or in algebraic equations. 
• Deduction can be distinguished by the fact that it moves from the general to the specific. • It dose not allow for the elements of chance or uncertainty. • Deductive inference, therefore are suitable to theoretical research. Induction: - Inductive reasoning is distinguished by the fact that it moves from the specific to the general (from sample to population). It builds Health research being primarily empirical depends entirely upon inductive reasoning. The conclusion dose not necessarily follows from the premises or evidence (facts). We can say only that the conclusion is more likely to be valid it the premises are true, i.e. there is a possibility that the premises may be true but the conclusion is false. Chance must, therefore, be fully accounted for. Mill’s canons of inductive reasoning are frequently utilized in the formation of hypothesis. These methods include: a) Method of difference – When the frequency of a diseases is markedly dissimilar under two circumstances (For example, the difference in frequency of Lung Cancer in Smokers and Non-smokers. b) Method of agreement – In a factor or its absence is common to a number of different circumstances that are found to be associated with the presence of disease, the factor or its absence may be casually associated with the disease (e.g. the occurrence of Hepatitis A is associated with patient contact, crowding & poor sanitation and hygiene, each conducive to the transmission of the Hepatitis virus). c) Method of concomitant variation, or the dose response effect – Example – – Increase expression of goiter with direct level of iodine in the diet. Increasing frequency of leukemia with increasing radiation exposure. Increase prevalence of elephantiasis in areas of increasing filarial endemicity. d) Method of analogy – The distribution and frequency of a disease or effect may be similar enough to that of a some other disease to suggest commonality in cause (e.g. Hepatitis B virus infection and Cancer of the Liver). (6) Designing and methodology of an experiment or a study consists of a series of guideposts to keep one going in right direction and sometime it may be tentative and not final. The steps are as follows: 1. Introduction: Definition of the problem: – Define the problem you intend to study such as Smoking and Lung Cancer, Cholesterol & C.A.D etc. • Relevance of the problem with fields of application of proposed research result. • Rationale of the study: – What necessitate to carry out the study. 2 .Review of literature: – Critically review the literature on the problem under study • Any such work done by others in the past • Clarify • Want to confirm the findings • Challenge the conclusion • Extend the work further • Bridge some gaps in the existing knowledge 3. Aim & Objectives: - • Define the aims and objectives of the study. • State whether nature of the problem has to be studied or solution has to be found by different methods. • Primary • secondary 4. Hypothesis:- • State your hypothesis. • After the problem and purpose are clear and literature is reviewed. • You have to start precisely with an assumption positive or negative, e.g. constitutional medicine is more effective for ‘Lymphangitis’ than pathological prescription with Hydrocotyle ‘Q.’ 5. Plan of action: - “Prepare an over all plan or design of the investigation for studying the problem and meeting the objectives.” A) Definition of the population under study i) It may be country / state / districts / town / village / families / specific groups. 
ii) Age group iii) Income group iv) Occupation v) Sexes vi) Define clearly who are to be included and who are not to be included, i.e. (Inclusion and exclusion criteria) B) Selection of the sample a) It should be unbiased. b) Sufficiently large in size to represent population under study. a) Sample size The size of sample is very vital in an scientific study. Ordinarily should not be less than 30. A sample small in size, is a biased one & should never be depended upon for drawing any conclusions, therefore however a large sample is considered as large enough. Normally cut off is taken as 30. A sample of size greater than 30 is considered large enough for statistical purpose. For Qualitative Data In such data we deal with proportions such as morbidity rates and cure rates. For finding the suitable size of the sample, the assumption usually made is that the allowable error does not exceed 10% or 20% of the positive character. The size can be calculated by the following formula with a desired allowable error (L) at 5% risk that the true estimate will not exceed allowable error by 10% or 20% of ‘p’ n=4pq/L^2 Where ‘p’ is the positive character, q =1-p and L= allowable error, 10% or 20% of ‘p’ For Quantitative Data In such data we deal with the means of a sample and of the universe. If the SD (s) in a population is known from the past experience, the size of sample can be determined by the following formulae with the desired allowable error (L). At 5% risk the true estimate will lie beyond the allowable error (variation). Hence, the first step is to decide how large an error due to sampling defects can be tolerated or allowed in the estimates. Such allowable error has to be stated by the investigator. The second step is to express the allowable error in terms of confidence limits. Suppose L is the allowable error in the sample mean and we are willing to take a 5% chance that error will exceed L. so we may put: L=2s/Ön or Ön=2s/L or n= 4s^2/L^2 Sample size for analytical studies: a. Testing equality of two proportions: p[1 = ]p[2 ]The sample measures used are the sample proportions, and the sampling distribution used in testing this null hypothesis is either the standard normal distribution (z), or equivalently the chi-square • Set type I error:a; • Determine ‘minimum clinically significant difference’:d; • Make a guess as to the ‘proportion’ in one group (usually ‘control’): p[1]; • Determine the power required to detect this difference: (1-b). The sample size required is: For example, suppose we are interested in determining the sample size required in a clinical trial of a new drug that is expected to improve survival. Suppose the traditional survival rate is 40%, i.e., p[1] = 0.4. We are interested in detecting whether the new drug improves survival by at least 10%, i.e., d = 0.10, therefore p[2] = 0.50. Suppose we want a type I error of 5%, i.e.,a = 0.05, therefore Z[1-a] = 1.96; we also want the type II error (b) to be 5%, or we want to detect a difference of 10% or more with a probability of 95%: therefore Z [b] = -1.645. Substituting these values in the above equation given n = 640. Thus the study would require 640 subjects in each of the two groups to assure a probability of detecting an increase in the survival rate of 10% or more with 95%certainty, if the statistical test used 5% as the level of significance. b. 
Sample size for a case-control study

Suppose that long-term use of oral contraceptives (OC) increased the risk for coronary heart disease (CHD) and that one wished to detect an increase in relative risk of at least 30% (equivalently, OR > 1.3) by means of a case-control study. What would be the proper sample size? The test of hypothesis in the study will be equivalent to testing whether the proportion of women using OC is the same among those with CHD and those without CHD. We need to determine what proportion of women without CHD (controls) use OC; let us say 20%. Then we decide what will be the minimum difference that should be detected by the statistical test. Since we need to detect an OR > 1.3, this translates to an increased use (24.5%) among the CHD patients, giving a difference of 4.5% to be detected. Choosing a and b to be 5% each, the sample size, using the above formula, would be 2220, i.e., we need to study 2220 cases and 2220 controls.

Sometimes the ratio of cases to controls may not be one to one; e.g., when the disease is rare, the number of cases available for study may be limited, and we may have to increase the number of controls (1:2, 1:3, etc.) to compensate. In such cases, the calculation of the sample size will incorporate these differences. Computer programmes such as EPIINFO allow for these variations.

c. Comparison of two population means

When the study involves comparing the means of two samples, the sample measure that is used is the difference of the sample means. This has an approximately normal distribution. The standard error of the difference depends on the standard deviations of the measurements in each of the populations, and depending on whether these are the same or different, different formulae have to be used. In the simplest (and most commonly used) scenario, the two standard deviations are considered to be the same. We will illustrate the procedure. We need to determine, as in case a, the minimum difference (d) in the means that we are interested in detecting by the statistical test, the two types of statistical errors (a and b), and the standard deviation (s). Then the required sample size is calculated using the following formula:

n = [(Z[1-a] - Z[b]) s / d]^2

For example, suppose we want to test a drug that reduces blood pressure. We want to say the drug is effective if the reduction in blood pressure is 5 mm Hg or more, compared with the placebo. Suppose we know that systolic blood pressure in the population is distributed normally with a standard deviation of 8 mm Hg. If we choose a = 0.05 and b = 0.05, the sample size required in this study will be:

n = [(1.96 + 1.645) x 8/5]^2 = 34 subjects in each group.

If the design is such that the two groups are not independent (e.g., matched studies or paired experiments), or if the standard deviations are different for the two groups, the formulae should be adjusted accordingly.

d. Comparison of more than two groups

When considering sample size calculations for studies involving comparisons of more than two groups, whether comparing proportions or means, several other issues (e.g., which comparisons are more important than others; whether errors of each paired comparison, or for the study as a whole, matter more) have to be taken into account. Accordingly, the formulae for each of these situations are much more complicated. In multivariate analyses, such as those using multiple linear regression, logistic regression, or comparison of survival curves, simple formulae for the calculation of sample sizes are not available. Some attempts at estimating sample sizes using nomograms, or by simulating experiments and calculating sample sizes from these simulations, have recently appeared in the statistical literature. We will not discuss these here. When planning experiments, one of the crucial steps is deciding how large the study should be, and appropriate guidance should be sought from experts.
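The two worked examples above can be reproduced with a few lines of code, using the same simple large-sample formulae quoted in the text (this is only an illustration of those formulae, not a general-purpose sample-size calculator):

# ----- begin example -----
import math
from scipy.stats import norm

def n_two_proportions(p1, p2, alpha=0.05, power=0.95):
    """Per-group n for testing p1 = p2 (simple large-sample formula)."""
    z_a = norm.ppf(1 - alpha / 2)        # 1.96 for a two-sided 5% test
    z_b = norm.ppf(power)                # 1.645 for 95% power
    d = abs(p2 - p1)
    return (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / d ** 2

def n_two_means(sd, d, alpha=0.05, power=0.95):
    """Per-group n for detecting a difference d in means, common SD sd."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ((z_a + z_b) * sd / d) ** 2

# Survival example: 40% vs 50%, alpha = 5%, power = 95%
print(math.ceil(n_two_proportions(0.40, 0.50)))   # 637 (the text quotes 640)

# Blood-pressure example: SD = 8 mm Hg, detect a 5 mm Hg reduction
print(math.ceil(n_two_means(sd=8, d=5)))          # 34
# ----- end example -----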
In multivariate analysis, such as those using multiple linear regression, logistic regression, or comparison of survival curves, simple formulae for the calculation of sample sizes are not available. Some attempts at estimating sample sizes using nomograms, or by simulating experiments and calculating sample sizes based on these simulated experiments, have recently appeared in the statistical literature. We will not discuss these here. When planning experiments, one of the crucial steps is in deciding how large the study should be, and appropriate guidance should be sought from experts b) Sampling methods: 1) Simple random sampling – Choose random number from the table. ii) Stratified sampling - (Selecting 50% male & 50% female) iii) Systemic sampling – Systematically it is chosen. iv) Cluster sampling - Cluster may be identified (households) and random samples of cluster. v) Multistage sampling – In several stages. c) Specifying the nature of study: i) Longitudinal studies • Prospective study • Retrospective study ii) Cohort studies: – A group of persons exposed to some sort of environment e.g. new born & mother exposed to radiation. • Prospective • Retrospective iii) Interventional studies: – In these there are three phases. • Diagnostic / Identification • Intervention by treatment • Assessment phase for result iv) Experimental studies • Experimental or trial are made • A drug is given and results are wanted v) Cross sectional studies (Non experimental):- Such studies are one time or at a point of time study of all persons in a representative sample. It is conducted in field and not in laboratories. Example: – Examination of children 2- 12 yrs and classify their nutritional grade. Prevalence of pregnancy in age group of 20-25 yrs. vi) Control studies: – Most of the experimental studies need a control as a yard stick of evidence. Example: – Growth of child with constitutional medicine & control group with no medicine. Control group must be identical. To rule out subjective bias in subjects under study single or double blind trial should be made. (7) C) Research strategies & design: The selection of a research strategy is the case of research design and is probably the single most important decision the investigator has to make: Research strategy must include the followings: 1) Use of controls 2) Blinding – double or single 3) Study of instruments 4) Case recording format 5) Categorization – a) test group and b) control group 6) Parameter to assess the improvement – positive and negative response 7) Observations / Results 8) Presentation of data 9) Result analysis / Statistical evaluation Statistical tools or tests of significance:- For testing of hypothesis there is a large no. of tests available in the statistics. The most commonly tests for clinical study are follows: Z – test, t – test and c^2 – test. The other tests are also being used e. g. Variance ratio test and Analysis of Variance test. Z – test: It has two applications: a) To test the significance of difference between a sample mean (X) and a known value of population (m). Z = X – m / SE (X) Where X = Sample mean m = Population mean SE = Standard error b) To test the significance of difference of two sample means or between experiment sample mean and a control sample mean. Z = observed difference between two sample means / Standard error of difference between two sample means. Requirements to apply Z – test: 1. The sample or samples must be randomly selected. 2. The data must be quantitative. 3. 
The variable is assumed to follow normal distribution in the population. 4. The sample size must be larger than 30. t – test: Requirements to apply t – test: 1. The sample or samples must be randomly selected. 2. The data must be quantitative. 3. The variable is assumed to follow normal distribution in the population. 4. The sample size must be less than 30. t = X – m / SE (X) Where X = Sample mean m = Population mean SE = Standard error Chi – square test (c^2 – test): It is a non-parametric test not based on any assumption or distribution of any variable. It is very useful in research. It is most commonly use when data are in frequencies such as in the no. of responses in two or more categories. It has got the following three very important applications in medical statistics as tests of: 1. Proportion – To find significance in same type of data. 2. Association between two events. 3. Goodness of fit – To test fitness of an observed frequency distribution of qualitative data to a theoretical distribution. The test determines whether an observed frequency distribution differs from the theoretical distribution by chance or if the sample is drawn from a different population. To apply c^2 – test three essential requirements are needed such as: 1. A random sample 2. Qualitative data 3. Lowest expected frequency (value) not less than 5 c^2 = å (O – E)^2 / E Where O = Observed value E = Expected value Variance ratio test: (F – test): This means comparison of sample variance. It is applied to test the homogeneity of variances. F = S[1]^2 / S[2]^2 S[1]^2 = Variance of first sample S[2]^2 = Variance of second sample (S[1]^2 > S[2]^2) ANOVA test (Analysis of Variance test): This test is not confined to comparing two sample means but more than two samples drawn from corresponding normal population. (8) 10) Discussion 11) Conclusion 12) Summary 13) Bibliography A model case of scientific paper presentation on the caption “ Psoriasis in Homoeopathic practice” of the author will be projected to justify that homoeopathic principles can run hand in hand in conducting research work on homoeopathy as per the modern Research Methodology. 1, Roberts, A.Herbert, The Principles and art of cure of Homoeopathy, reprint edition 1997. 2,5,7. Health Research Methodology, A guide for training in research Methods,2 ^nd edition.W.H.O.regional office for the Western Pacific, Manila 2001,page-1. 3. Collin’s cobuild English Dictionary for advanced learners major new edition, Harper Collins publishers, 1995. 4. English Dictionary, Read man. 6. Klinbanm D.G., Kupper L.L, Morgenstern H.,, Epidemiological Research Principles and auantitative methods London, Life time Learning Publication 1982. 7,8. Mahajan B. W., Methods in Biostatistics,Jaypee brothers Medical Pub., New Delhi,6 ^th edition,reprint-1994. Prof. (Dr.) Niranjan Mohanty. M. D. (Hom.) Dean of the Homoeopathic Faculty, Utkal University, Orissa. Principal-cum-Superintendent, H.O.D, P.G Department of Repertory Dr. A. C. H. M. C. & H, Bhubaneswar. National President, I. I. H. P. Member C.C.H New Delhi. 1. Comments will be moderated. Please use a genuine email ID and provide your name, to avoid rejection. 2. Comments that are abusive, personal, incendiary or irrelevant cannot be published. 3. Please write complete sentences. Do not type comments in all capital letters, or in all lower case letters, or using abbreviated text. (example: u cannot substitute for you, d is not 'the', n is not 'and') Comment moderation is enabled. Your comment may take some time to appear.
{"url":"http://www.similima.com/homoeopathic-principles-vis-a-vis-modern-research-methodology","timestamp":"2014-04-18T18:28:15Z","content_type":null,"content_length":"87988","record_id":"<urn:uuid:85066f69-cf73-45d0-81c8-99dbdb3a22f7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: This is the first comprehensive introduction to the theory of mass transportation with its many--and sometimes unexpected--applications. In a novel approach to the subject, the book both surveys the topic and includes a chapter of problems, making it a particularly useful graduate textbook. In 1781, Gaspard Monge defined the problem of "optimal transportation" (or the transferring of mass with the least possible amount of work), with applications to engineering in mind. In 1942, Leonid Kantorovich applied the newborn machinery of linear programming to Monge's problem, with applications to economics in mind. In 1987, Yann Brenier used optimal transportation to prove a new projection theorem on the set of measure preserving maps, with applications to fluid mechanics in mind. Each of these contributions marked the beginning of a whole mathematical theory, with many unexpected ramifications. Nowadays, the Monge-Kantorovich problem is used and studied by researchers from extremely diverse horizons, including probability theory, functional analysis, isoperimetry, partial differential equations, and even meteorology. Originating from a graduate course, the present volume is intended for graduate students and researchers, covering both theory and applications. Readers are only assumed to be familiar with the basics of measure theory and functional analysis. Graduate students and research mathematicians interested in probability theory, functional analysis, isoperimetry, partial differential equations, and meteorology. "Villani writes with enthusiasm, and his approachable style is aided by pleasant typography. The exposition is far from rigid. ... As an introduction to an active and rapidly growing area of research, this book is greatly to be welcomed. Much of it is accessible to the novice research student possessing a solid background in real analysis, yet even experienced researchers will find it a stimulating source of novel applications, and a guide to the latest literature." -- Geoffrey Burton, Bulletin of the LMS "Cedric Villani's book is a lucid and very readable documentation of the tremendous recent analytic progress in `optimal mass transportation' theory and of its diverse and unexpected applications in optimization, nonlinear PDE, geometry, and mathematical physics." -- Lawrence C. Evans, University of California at Berkeley "The book is clearly written and well organized and can be warmly recommended as an introductory text to this multidisciplinary area of research, both pure and applied - the mass transportation -- Studia Universitatis Babes-BolyaiMathematica "This is a very interesting book: it is the first comprehensive introduction to the theory of mass transportation with its many - and sometimes unexpected - applications. In a novel approach to the subject, the book both surveys the topic and includes a chapter of problems, making it a particularly useful graduate textbook." -- Olaf Ninnemann for Zentralblatt MATH • Introduction • The Kantorovich duality • Geometry of optimal transportation • Brenier's polar factorization theorem • The Monge-Ampère equation • Displacement interpolation and displacement convexity • Geometric and Gaussian inequalities • The metric side of optimal transportation • A differential point of view on optimal transportation • Entropy production and transportation inequalities • Problems • Bibliography • Table of short statements • Index
{"url":"http://cust-serv@ams.org/bookstore-getitem/item=GSM-58","timestamp":"2014-04-18T06:39:33Z","content_type":null,"content_length":"18011","record_id":"<urn:uuid:a8e469b9-f021-423f-abba-114de3a52d6d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
Riverton, NJ SAT Math Tutor Find a Riverton, NJ SAT Math Tutor ...As always, tutoring is tailored to the specific needs and disposition of the student. To become a certified teacher I have taken the following Praxis Exams: Reading, Writing, Mathematics, Biology Content Knowledge, Biology Content Knowledge Part 2, General Science Content Knowledge Part 1, and General Science Content Knowledge Part 2. I have scored highly on all of these tests. 37 Subjects: including SAT math, chemistry, reading, writing ...Students I tutor are mostly college-age, but range from middle school to adult. As a tutor with multiple years of experience tutoring people in precalculus- and calculus-level courses, tutoring precalculus is one of my main focuses. With a physics and engineering background, I encounter math at and above this level every day. 9 Subjects: including SAT math, physics, calculus, geometry ...I have done extensive research into autism and how to work with students with this condition. I understand their needs and behaviors. I have the experience, skills, and knowledge to work with these students. 43 Subjects: including SAT math, English, reading, writing ...This included a semester abroad, at the University of Tuebingen, in Tuebingen, Germany. While there, I took several classes with other German students (as opposed to classes designed for American students). I also learned a great deal about other subjects, including Chemistry, all levels of Mat... 12 Subjects: including SAT math, chemistry, reading, calculus ...I went to college and got a Cum Laude degree in Linguistics and Languages. I took a class on phonetics and phonology, which involved the sounds of language and how they are made with the mouth. I have a lengthy knowledge of how words are broken down into syllables, letters, and their smallest sounds, or their individual phonemes. 34 Subjects: including SAT math, English, writing, reading
{"url":"http://www.purplemath.com/riverton_nj_sat_math_tutors.php","timestamp":"2014-04-19T12:24:53Z","content_type":null,"content_length":"24234","record_id":"<urn:uuid:d30d097d-adc1-4321-8145-41c736e37dd1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
the Stoermer be a rotational symmetric vector potential in cylinder coordinates This vector potential generates a magnetic field which is rotational symmetric. The Hamiltonian system of the Störmer problem describes the motion of a single charged particle of charge q, mass m and velocity v in the magnetic dipole field B. This is a sketch. For more details see Alex Dragt, Trapped orbits in a magnetic dipole field; Rev. of Geophys.3(2) (1965), pp.255-298 and More literature. [Added August 6, 2012: Thanks to Sateesh R. Mane for some corrections on this page]. Let z-axes and where M is the magnetic dipole moment of the field. This vector potential generates a magnetic field q, mass m and velocity v in the magnetic dipole field B. The relativistic Hamilton function of the particle is in cylinder coordinates given by where c is the speed of light, Because v^2 and so H, equivalent to K, which has the form of a non-relativistic Hamiltonian: The equivalence of the two Hamilton function K and H can be seen by observing that the partial derivatives with respect to all dynamical variables agree. Because H is invariant under the one-parameter group of rotations along the z axes, there is by Noether's theorem an integral After elimination of the variable V: After introducing dimensionless variables The potential V vanishes on the 'Thalweg' The Störmer problem is to analyze the two degree of freedom Hamiltonian system with Hamiltonian H(q,p)=E with 0<E<1/32 contains a compact component on which the flow is area preserving. The two dimensional hyper-surface q[1]=0 in the energy surface is a Poincaré surface and the ruturn map is a symplectic map. The second iterate of this return map can be written as a composition of integrable twist maps. The first map is to shoot the particle from q[2]=0 to the north (with p[2]>0) and wait until it comes back. The second map is to shoot the particle from q[2]=0 to the south and wait until it comes back to the equator q[2]=0. Both maps are twist maps in the plane which have a single fixed point which is the initial condition which shoots the particle into the dipole. That this map is integrable has been shown by finding a regularization of the motion near the singularity of the dipole. The two fixed points of the two maps do not agree, the two maps don't commute. When shooting to the north pole, the particle will not bounce back to the south pole. The situation is similar to take a two horn-like surfaces of revolution and glue them together a bit tilted, leading to a two dimensional surface of revolution with one line, where the metric is discontinuous. Writing this in Texas, it is a Longhorn, the left horn representing the charged particle on the north hemisphere, the right horn representing the charged particle on the south hemisphere. The geodesic flow on this surface of revolution is a Hamiltonian system which is very similar to the Hamiltonian system of the Störmer problem. The Katok-Strelcyn conditions required for Pesin theory are satisfied. An open problem is to establish positive Kolmogorov-Sinai entropy. One would have to show the positivity of the Lyapunov exponent on a set of positive Lebesgue measure.
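For readers who want to experiment numerically, here is a minimal sketch of the trapped motion. It assumes the standard dimensionless form of the Stoermer Hamiltonian, H = (p_rho^2 + p_z^2)/2 + (1/rho - rho/r^3)^2/2 with r = sqrt(rho^2 + z^2), which is consistent with the 1/32 energy threshold and the Thalweg (rho = 1, z = 0) mentioned above; the initial condition is an arbitrary illustrative choice.

# ----- begin example -----
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    """Hamilton's equations for the dimensionless Stoermer problem."""
    rho, z, p_rho, p_z = y
    r = np.hypot(rho, z)
    f = 1.0 / rho - rho / r**3                      # V = f**2 / 2
    df_drho = -1.0 / rho**2 - 1.0 / r**3 + 3.0 * rho**2 / r**5
    df_dz = 3.0 * rho * z / r**5
    return [p_rho, p_z, -f * df_drho, -f * df_dz]

def energy(y):
    rho, z, p_rho, p_z = y
    r = np.hypot(rho, z)
    return 0.5 * (p_rho**2 + p_z**2) + 0.5 * (1.0 / rho - rho / r**3) ** 2

# Start on the Thalweg (rho = 1, z = 0, where V = 0) with a small momentum,
# so the energy is below the trapping threshold 1/32.
y0 = [1.0, 0.0, 0.05, 0.10]
print(f"E = {energy(y0):.5f}  (trapped regime: 0 < E < 1/32 = {1/32})")

sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-10, atol=1e-12)
print("rho stays in:", sol.y[0].min(), "to", sol.y[0].max())
print("energy drift:", abs(energy(sol.y[:, -1]) - energy(y0)))
# ----- end example -----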
{"url":"http://www.dynamical-systems.org/stoermer/info.html","timestamp":"2014-04-20T06:43:29Z","content_type":null,"content_length":"11728","record_id":"<urn:uuid:5b198ab6-914a-48cb-9f3c-2f3466bdfff1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
Little Elm Prealgebra Tutor Find a Little Elm Prealgebra Tutor ...Sometimes, this is all a student needs in order to achieve success in the classroom.I have a degree in elementary education from Texas Wesleyan University, as well as a grade 1 - 8 Texas teaching certificate. I have more than 10 years of public school teaching experience, a master's degree in gi... 39 Subjects: including prealgebra, reading, English, chemistry ...I prefer regular sessions so that I can observe a student's learning style and adapt accordingly. I believe it is important to build a relationship with the student first because they say "no one cares how much you know until they know how much you care." I have always worked very hard in my cou... 7 Subjects: including prealgebra, algebra 1, algebra 2, SAT math ...I have taught Algebra 1 for many years. My experience is with all levels but I am drawn to students with learning differences. I can build students' confidence in themselves and help them be better problem solvers. 10 Subjects: including prealgebra, geometry, ASVAB, algebra 1 ...I even used it for personal projects, such as cataloguing my personal book and movie library. I have used Outlook for 3 years, as it was the emailing system used by the school district and universities I worked for. This was necessary for all work-related communications. 69 Subjects: including prealgebra, Spanish, reading, writing I am a local teacher that loves Chemistry!! I have loved Chemistry since high school! I majored in Chemistry and received my BS in Chem in 1999. I have taught high school Chemistry for 5 years. 2 Subjects: including prealgebra, chemistry Nearby Cities With prealgebra Tutor Addison, TX prealgebra Tutors Copper Canyon, TX prealgebra Tutors Corinth, TX prealgebra Tutors Cross Roads, TX prealgebra Tutors Crossroads, TX prealgebra Tutors Double Oak, TX prealgebra Tutors Fairview, TX prealgebra Tutors Frisco, TX prealgebra Tutors Hickory Creek, TX prealgebra Tutors Highland Village, TX prealgebra Tutors Lake Dallas prealgebra Tutors Lakewood Village, TX prealgebra Tutors Oak Point, TX prealgebra Tutors Shady Shores, TX prealgebra Tutors The Colony prealgebra Tutors
{"url":"http://www.purplemath.com/Little_Elm_prealgebra_tutors.php","timestamp":"2014-04-21T02:36:38Z","content_type":null,"content_length":"23878","record_id":"<urn:uuid:1e8684d8-4d29-4c5e-99f1-879ed3dc2a7b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponential and Logarithm Mapping on Stiefel Manifold up vote 4 down vote favorite The Stiefel Manifold is defined as $$ \mathrm{St}(p,n):= \{ X\in \mathbb{R}^{n\times p} :\ X^T X = I_p \}. $$ Recall that the tangent space at a point $X\in \mathrm{St}(p,n)$ is given by $$ T_X{\mathrm{St}(p,n)} = \{\xi\in \mathbb{R}^{n\times p}:\ X^T\xi + \xi^T X = 0 \}. $$ Given a point $X\in \mathrm{St}(p,n)$ and a tangent vector $\xi \in T_X{\mathrm{St}(p,n)}$, it is possible to express the exponential map $\exp_X(\xi)$ using the matrix exponential function. A formula is given in the paper www.mit.edu/~wingated/introductions/stiefel-mfld.pdf . My question is whether the inverse $\log_X(\cdot): \mathrm{St}(p,n) \to T_X\mathrm{St}(p,n)$ can also be expressed using the matrix logarithm. A related question is the following: Instead of the exponential function one may define other retractions such as $$ R_X(\xi) = (X+\xi)(I_p + \xi^T\xi)^{-1/2} $$ or the closest point projection $$ R_X(\xi) = \pi(X+\xi), $$ where $\pi$ maps a matrix $A$ to the closest element in the Stiefel manifold. This projection can be easily computed using SVD. Again, the question is wheter one can find simple formulas for the inverses of these retractions. Any helpful comments would be greatly appreciated. edit: at this point I do not really care which metric is used (the Euclidean metric or the one inherited from the orthogonal group). riemannian-geometry matrices add comment 5 Answers active oldest votes There are some potentially useful formulae in Section 7 of this document: up vote 3 down vote I have not seen them in the literature, though I may not have looked in the right place. They are directly applicable to complex Grassmannians, but it may be possible to adapt add comment If one uses the Hilbert-Schmidt inner product then Section 3.1.3 of "A new geometric metric in the space of curves, and applications to tracking deforming objects by prediction and filtering" by Sundaramoorthi, Mennuci, Soatto, and Yezzi in SIAM up vote 2 down Journal on Imaging Sciences (2010) gives a numerical method for computing the log. So I guess that implies that as of 2010 the authors were not able to find any closed form expression in the literature. add comment Have a look at the paper: • Y.~A. Neretin: On Jordan angles and the triangle inequality in Grassmann manifold}, Geometriae Dedicata, 86 (2001). up vote 2 down vote There are explicit formulas for geodesics and even for the geodesic distance on real Grassmannians, and the Riemannian logarithm in terms of $\arccos$. Geodesics on the Grassmannian correspond to horizontal geodesics on the Stiefel manifold, and I am sure that one adapt Neretins formulas to Stiefel manifolds. add comment A list of possible retractions for the compact Stiefel manifold (including the one mentioned in the question) and their inverses is available in: up vote 2 down T. Kaneko, S. Fiori and T. Tanaka, "Empirical Arithmetic Averaging over the Compact Stiefel Manifold," IEEE Transactions on Signal Processing, Vol. 61, No. 4, pp. 883 - 894, vote February 2013 add comment Peter, I'm a non-geometer but I'm a bit curious about a possible subtlety related to your answer. On page 316 of the authors make a careful distinction between the Euclidean metric $$g_{E}(Δ,Δ)=Tr(Δ^{∗}Δ)$$ on the tangent space and the so-called "canonical metric" $$g_{C}(Δ,Δ)=Tr(Δ^{∗}(I-(1/2)UU^ up vote 1 {∗})Δ)$$ on the tangent space at U. 
The latter metric is called "canonical" because it comes from viewing the manifold $V_{n,p}$ of isometries $U:C^{p}→C^n$ as the quotient $$V_{n,p}=O_{n}/O_{n-p}.$$ Am I correct in assuming that your answer applies to the canonical metric but not to the euclidean one?
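To make the retractions in the question concrete, here is a small NumPy sketch: it implements the retraction R_X(xi) = (X + xi)(I + xi^T xi)^{-1/2} and the SVD-based closest-point projection quoted above, and checks that both land back on the Stiefel manifold (for a tangent xi the two coincide, since both equal the orthogonal polar factor of X + xi). This illustrates the formulas themselves, not any particular paper's algorithm; the test matrices are random.

# ----- begin example -----
import numpy as np

def random_stiefel(n, p, rng):
    """A random point on St(p, n) via the QR decomposition."""
    q, _ = np.linalg.qr(rng.standard_normal((n, p)))
    return q

def random_tangent(x, rng):
    """Project a random matrix onto the tangent space at x."""
    z = rng.standard_normal(x.shape)
    xz = x.T @ z
    return z - x @ (xz + xz.T) / 2        # removes the symmetric part of X^T Z

def retract_polar(x, xi):
    """R_X(xi) = (X + xi)(I + xi^T xi)^{-1/2}."""
    p = x.shape[1]
    w, v = np.linalg.eigh(np.eye(p) + xi.T @ xi)   # symmetric positive definite
    inv_sqrt = v @ np.diag(w ** -0.5) @ v.T
    return (x + xi) @ inv_sqrt

def project(a):
    """Closest Stiefel point to A in the Frobenius norm, via the SVD."""
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    return u @ vt

rng = np.random.default_rng(0)
x = random_stiefel(6, 3, rng)
xi = random_tangent(x, rng)

for y in (retract_polar(x, xi), project(x + xi)):
    print(np.allclose(y.T @ y, np.eye(3)))               # True: back on St(3, 6)

print(np.allclose(retract_polar(x, xi), project(x + xi)))  # True: same map here
# ----- end example -----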
{"url":"http://mathoverflow.net/questions/84955/exponential-and-logarithm-mapping-on-stiefel-manifold","timestamp":"2014-04-19T04:52:11Z","content_type":null,"content_length":"64768","record_id":"<urn:uuid:6f32c882-217b-4260-8c45-ee7c0fdcb837>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig Help Word Problem June 19th 2008, 08:12 AM #1 Jun 2008 Trig Help Word Problem I just don't get these type of problems at all. Please help if at all possible the problem is as follows: An astronaut on the moon tosses a ball upwards; when will the ball initially be 35 feet above the lunar surface if the ball's height(feet) follows: -2.6t^2 + 32.1t + 5.5 and t is in seconds from release? I just don't get these type of problems at all. Please help if at all possible the problem is as follows: An astronaut on the moon tosses a ball upwards; when will the ball initially be 35 feet above the lunar surface if the ball's height(feet) follows: -2.6t^2 + 32.1t + 5.5 and t is in seconds from release? The height $h(t)=-2.6t^2+32.1t+5.5$ Since we are given a height we set $h(t)=35$ to get $35=-2.6t^2+32.1t+5.5 \iff 0=-2.6t^2+32.1t-29.5$ We can multiply the equation by 10 to eliminate the decimals $0=-26t^2+321t-295$ we can then factor this to get (you could also use the quadratic formula) $-(t-1)(26t-295)=0$ so t=1 or $t=\frac{295}{26}\approx 11.3$ Good luck. I just don't get these type of problems at all. Please help if at all possible the problem is as follows: An astronaut on the moon tosses a ball upwards; when will the ball initially be 35 feet above the lunar surface if the ball's height(feet) follows: -2.6t^2 + 32.1t + 5.5 and t is in seconds from release? First, I don't see how we need trig here... All you need to do here is set $35=-2.6t^2+32.1t+5.5$ and solve for t [this is a quadratic that can be easily solved using the quadratic equation]. Note that you'll get two values both of which should be positive. The smaller value of t tells you the time it takes to initially reach the height of 35 feet (before reaching the maximum height). The second t value tells you when it reaches a height of 35 after it has reached its maximum height and starts to fall back to the surface. If you still have questions, feel free to ask. June 19th 2008, 08:20 AM #2 June 19th 2008, 08:23 AM #3
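A quick numerical check of the algebra above (any root finder will do; NumPy here):

# ----- begin example -----
import numpy as np

# Height model: h(t) = -2.6 t^2 + 32.1 t + 5.5.  Setting h(t) = 35 and moving
# everything to one side gives -2.6 t^2 + 32.1 t - 29.5 = 0.
roots = np.roots([-2.6, 32.1, -29.5])
print(sorted(roots))                  # approximately [1.0, 11.35]

# The smaller root is when the ball first reaches 35 ft (on the way up);
# the larger root is when it passes 35 ft again on the way down.
print(round(min(roots), 2), "seconds")   # 1.0
# ----- end example -----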
{"url":"http://mathhelpforum.com/trigonometry/41981-trig-help-word-problem.html","timestamp":"2014-04-16T14:35:55Z","content_type":null,"content_length":"40234","record_id":"<urn:uuid:f8c5d56f-b4b1-4042-bafc-4f766cd2a08c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: ABS: Current Research in Euclidean Geometry Replies: 0 ABS: Current Research in Euclidean Geometry Posted: Sep 10, 1992 10:26 AM Euclidean Geometry: a summary of some current research In his article in the January 1991 issue of SIAM News, "Euclidean Geometry Alive and Well," Barry Cipra describes three recent results in Euclidean geometry. The first, an optimization problem called the Steiner ratio conjecture, was solved by Ding Zhu Du and Frank Hwang. The Steiner ratio conjecture involves objects called trees, which are line segments that connect a given (finite) set of points, without ever looping back on itself. Thus, in a tree, if you follow a path, you can never get back to your starting point, or even to a point you have already moved through. A tree can be thought of as an optimization problem: how can you connect all the dots with the shortest length of string? Obviously, one thing you don't want to do is double back on yourself (this would be redundant), and that is exactly what having a tree insures will not happen. The question then becomes: What is the shortest tree one can have connecting all the dots? But this is not exactly the question that the Steiner ratio conjecture answers. The Steiner ratio conjecture has to do with the ratio of the shortest tree connecting a set S of points, and the shortest Steiner tree connecting the same set S of points. Steiner trees are different from "normal" trees because in connecting the dots, you can add extra dots. Obviously, normal trees are just special cases (no extra points) of Steiner trees. Why add the extra points? They can make the tree length shorter, as this diagram illustrates. a a | /\ | / | d / / \ / / \ / / \ / \ b c b Let's say that abc is an equilateral triangle with sides of length 1. The Steiner tree on the right, formed by adding the center of the triangle, d, as a point, is shorter than the tree using only points a, b, and c. The normal tree has length 2; the Steiner tree has length Sqrt(3). It turns out that this ratio, Sqrt(3)/2, is the smallest that the ratio can be, provided that you are measuring the shortest Steiner tree and the shortest regular tree possible (these do exist). The second result that Cipra discusses is an algorithm for triangulating large polygons (large meaning many (n) vertices). Bernard Chazelle has found an algorithm that will triangulate (cut up into triangles, with certain rules about what kind of intersections the triangles can have) a polygon in an amount of time proportional to the number of vertices that the polygon has. Since a more naive approach yields a time proportional to the square of the number of vertices, this algorithm makes it much less time consuming for a computer to triangulate a polygon. The third result Cipra summarizes is the long-held conjecture that the best way to stack spheres is the "face centered cubic lattice packing" or, in layman's terms, the way grocers stack oranges. Wu-Yi Hsiang proves this problem that Kepler proposed in 1611. However,as the problem has such a long history, mathematicians will be going over Hsiang's proof with a fine tooth comb, and, right now at least, the jury is not in on whether or not Hsiang's proof is valid. For further details see "Euclidean Geometry Alive and Well," by Barry A. 
Cipra, in the January 1991 issue of SIAM News.
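The equilateral-triangle figures quoted above are easy to verify numerically; for instance (Python, with the triangle placed at convenient coordinates):

# ----- begin example -----
from math import sqrt, dist

# Unit equilateral triangle and its centre, which is the added Steiner point.
a, b, c = (0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2)
centre = ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)

ordinary = dist(a, b) + dist(a, c)                  # two sides: length 2
steiner = sum(dist(centre, v) for v in (a, b, c))   # 3/sqrt(3) = sqrt(3)

print(ordinary, steiner, steiner / ordinary)   # 2.0  1.732...  0.866... = sqrt(3)/2
# ----- end example -----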
{"url":"http://mathforum.org/kb/thread.jspa?threadID=356891","timestamp":"2014-04-16T22:07:10Z","content_type":null,"content_length":"17539","record_id":"<urn:uuid:c57245d5-979a-40e5-a1c6-d9ff2e075fbe>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Lecture Notes Ref - Some Statistical Basics, Data Analysis with Epi Info. (A) Before Data Are Collected • Research question ----> Study Design ---> Study Protocol---> Measurements (assigning numbers according to prior set rules) ----> Operationalized variables ----> Data collection form (e.g., survey questionnaire, other data form) • Type of variables □ Categorical (qualitative, nominal) e.g., SEX □ Ordinal (ranked), e.g., Leikert scales □ Continuous (quantitative, scale), e.g., AGE (B) Data Collection • When doing survey research, avoid selection bias by using chance mechanisms to select your sample • Avoid information bias by measuring what you purport to measure (i.e., measurement must be precise and valid, or you are wasting your time or you are an activist for a cause) (C) Descriptive Statistics • Explore the distribution of each variable: □ Shape (e.g., symmetry, kurtosis, modality) □ Central location □ Spread (dispersion) • Graphs □ Categorical data - bar graphs or pie charts □ Continuous data ☆ Histograms (moderate to large data sets) ☆ Stem-&-leaf (small to moderate data sets) e.g., Data are: {93, 82, 84, 71} More info on stem-and-leaf plots: http://www.sjsu.edu/faculty/gerstman/StatPrimer/Freq.PDF • Summary stats □ Central location ☆ Mean = arithmetic average ☆ Median = mid-point of ordered array ☆ Mode = most common value (seldom used) □ Spread ☆ Sum of squares = sum of squared deviations around the mean ☆ Variance = average sum of squares ☆ Standard deviation = square root of variance ☆ Interquartile range = Q3 - Q1 (robust measure of spread) ☆ Coefficient of variation (unit independent measure of standard deviation; seldom used) □ Other points on the distribution (e.g., quartiles, percentiles, z-scores - not covered in HS267) (D) Inferential Statistics • Parameters vs. statistics □ Parameters - from population, hypothetical/unobserved ("counterfactual"), numeric constants, notation - Greek (e.g., "mu") or hatless (e.g., RR) □ Statistics - fro m sample, calculated/observed, random variables, notation - Roman or with hats (e.g., "x bar") • Estimation - predicting most likely notation of parameter □ Point estimate (e.g., "x bar" estimates "mu") □ Interval estimate (e.g., 95% confidence interval for mu) • Hypothesis testing □ Frequently used, often misunderstood; do not rely on as sole source of info. □ Two-by-two table of correct retention, incorrect rejection (type I error), correct rejection, incorrect retention (type II error) □ alpha = Pr(type I error) □ beta = Pr(type II error) □ power = 1 - beta □ Goal: minimize alpha, maximize power □ Retention of the null hypothesis does not imply it is true! (E) Reporting Results • Important! - see chapter for specifics • Use APA reporting style (make free use of manual) (F) Approach Toward Data Analysis • Understand research question and how this translates into study design, measurements, and parameter estimation • Describe data - graphs and summary stats • Estimation - point and interval • Hypothesis test • Narrative Summary - telling a meaningful story • Power and sample size (esp. important when results are insignificant)
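As a small illustration of the summary statistics listed above, here is how the toy data set {93, 82, 84, 71} from the stem-and-leaf example could be summarized (Python/NumPy; the variance follows the "average sum of squares" definition given above):

# ----- begin example -----
import numpy as np

x = np.array([93, 82, 84, 71], dtype=float)

mean = x.mean()
median = np.median(x)
ss = ((x - mean) ** 2).sum()     # sum of squares around the mean
var = x.var()                    # average sum of squares, as defined above
sd = x.std()                     # (statistical software often divides by n - 1)
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1

print(f"mean={mean:.2f} median={median:.1f} SS={ss:.2f}")
print(f"variance={var:.2f} SD={sd:.2f} IQR={iqr:.2f}")
# ----- end example -----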
Convergent & Discriminant Validity

Convergent and discriminant validity are both considered subcategories or subtypes of construct validity. The important thing to recognize is that they work together -- if you can demonstrate that you have evidence for both convergent and discriminant validity, then you've by definition demonstrated that you have evidence for construct validity. But, neither one alone is sufficient for establishing construct validity. I find it easiest to think about convergent and discriminant validity as two inter-locking propositions. In simple words I would describe what they are doing as follows:

• measures of constructs that theoretically should be related to each other are, in fact, observed to be related to each other (that is, you should be able to show a correspondence or convergence between similar constructs)
• measures of constructs that theoretically should not be related to each other are, in fact, observed to not be related to each other (that is, you should be able to discriminate between dissimilar constructs)

To estimate the degree to which any two measures are related to each other we typically use the correlation coefficient. That is, we look at the patterns of intercorrelations among our measures. Correlations between theoretically similar measures should be "high" while correlations between theoretically dissimilar measures should be "low". The main problem that I have with this convergent-discriminant idea has to do with my use of the quotations around the terms "high" and "low" in the sentence above. The question is simple -- how "high" do correlations need to be to provide evidence for convergence and how "low" do they need to be to provide evidence for discrimination? And the answer is -- we don't know! In general we want convergent correlations to be as high as possible and discriminant ones to be as low as possible, but there is no hard and fast rule. Well, let's not let that stop us. One thing that we can say is that the convergent correlations should always be higher than the discriminant ones. At least that helps a bit. Before we get too deep into the idea of convergence and discrimination, let's take a look at each one using a simple example.

Convergent Validity

To establish convergent validity, you need to show that measures that should be related are in reality related. In the figure below, we see four measures (each is an item on a scale) that all purport to reflect the construct of self esteem. For instance, Item 1 might be the statement "I feel good about myself" rated using a 1-to-5 Likert-type response format. We theorize that all four items reflect the idea of self esteem (this is why I labeled the top part of the figure Theory). On the bottom part of the figure (Observation) we see the intercorrelations of the four scale items. This might be based on giving our scale out to a sample of respondents. You should readily see that the item intercorrelations for all item pairings are very high (remember that correlations range from -1.00 to +1.00). This provides evidence that our theory that all four items are related to the same construct is supported. Notice, however, that while the high intercorrelations demonstrate that the four items are probably related to the same construct, that doesn't automatically mean that the construct is self esteem. Maybe there's some other construct that all four items are related to (more about this later).
But, at the very least, we can assume from the pattern of correlations that the four items are converging on the same thing, whatever we might call it.

Discriminant Validity

To establish discriminant validity, you need to show that measures that should not be related are in reality not related. In the figure below, we again see four measures (each is an item on a scale). Here, however, two of the items are thought to reflect the construct of self esteem while the other two are thought to reflect locus of control. The top part of the figure shows our theoretically expected relationships among the four items. If we have discriminant validity, the relationship between measures from different constructs should be very low (again, we don't know how low "low" should be, but we'll deal with that later). There are four correlations between measures that reflect different constructs, and these are shown on the bottom of the figure (Observation). You should see immediately that these four cross-construct correlations are very low (i.e., near zero) and certainly much lower than the convergent correlations in the previous figure. As above, just because we've provided evidence that the two sets of two measures each seem to be related to different constructs (because their intercorrelations are so low) doesn't mean that the constructs they're related to are self esteem and locus of control. But the correlations do provide evidence that the two sets of measures are discriminated from each other.

Putting It All Together

OK, so where does this leave us? I've shown how we go about providing evidence for convergent and discriminant validity separately. But as I said at the outset, in order to argue for construct validity we really need to be able to show that both of these types of validity are supported. Given the above, you should be able to see that we could put both principles together into a single analysis to examine both at the same time. This is illustrated in the figure below. The figure shows six measures, three that are theoretically related to the construct of self esteem and three that are thought to be related to locus of control. The top part of the figure shows this theoretical arrangement. The bottom of the figure shows what a correlation matrix based on a pilot sample might show. To understand this table, you need to first be able to identify the convergent correlations and the discriminant ones. There are two sets or blocks of convergent coefficients (in green), one 3x3 block for the self esteem intercorrelations and one 3x3 block for the locus of control correlations. There are also two 3x3 blocks of discriminant coefficients (shown in red), although if you're really sharp you'll recognize that they are the same values in mirror image (Do you know why? You might want to read up on correlations to refresh your memory). How do we make sense of the patterns of correlations? Remember that I said above that we don't have any firm rules for how high or low the correlations need to be to provide evidence for either type of validity. But we do know that the convergent correlations should always be higher than the discriminant ones. Take a good look at the table and you will see that in this example the convergent correlations are always higher than the discriminant ones. I would conclude from this that the correlation matrix provides evidence for both convergent and discriminant validity, all in one analysis!
But while the pattern supports discriminant and convergent validity, does it show that the three self esteem measures actually measure self esteem or that the three locus of control measures actually measure locus of control? Of course not. That would be much too easy. So, what good is this analysis? It does show that, as you predicted, the three self esteem measures seem to reflect the same construct (whatever that might be), the three locus of control measures also seem to reflect the same construct (again, whatever that is) and that the two sets of measures seem to be reflecting two different constructs (whatever they are). That's not bad for one simple analysis.

OK, so how do we get to the really interesting question? How do we show that our measures are actually measuring self esteem or locus of control? I hate to disappoint you, but there is no simple answer to that (I bet you knew that was coming). There are a number of things we can do to address that question. First, we can use other ways to address construct validity to help provide further evidence that we're measuring what we say we're measuring. For instance, we might use a face validity or content validity approach to demonstrate that the measures reflect the constructs we say they are (see the discussion on types of construct validity for more information). One of the most powerful approaches is to include even more constructs and measures. The more complex our theoretical model (if we find confirmation of the correct pattern in the correlations), the more we are providing evidence that we know what we're talking about (theoretically speaking). Of course, it's also harder to get all the correlations to give you the exact right pattern as you add lots more measures. And, in many studies we simply don't have the luxury to go adding more and more measures because it's too costly or demanding. Despite the impracticality, if we can afford to do it, adding more constructs and measures will enhance our ability to assess construct validity using approaches like the multitrait-multimethod matrix and the nomological network.

Perhaps the most interesting approach to getting at construct validity involves the idea of pattern matching. Instead of viewing convergent and discriminant validity as differences of kind, pattern matching views them as differences in degree. This seems a more reasonable idea, and helps us avoid the problem of how high or low correlations need to be to say that we've established convergence or discrimination.
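The pattern described here, high correlations within each construct's 3x3 block and low correlations across blocks, is easy to simulate. The following Python sketch (my own illustration; the item loadings, noise level, and sample size are made up) generates three noisy items for each of two independent latent constructs and prints the 6x6 correlation matrix, so the convergent and discriminant blocks can be seen directly:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 300
    self_esteem = rng.normal(size=n)      # latent construct 1
    locus = rng.normal(size=n)            # latent construct 2, independent of construct 1

    def items(latent, k=3, noise=0.6):
        """k noisy observed items that all load on the same latent construct."""
        return np.column_stack([latent + noise * rng.normal(size=n) for _ in range(k)])

    X = np.column_stack([items(self_esteem), items(locus)])
    R = np.corrcoef(X, rowvar=False)
    print(np.round(R, 2))
    # The two 3x3 diagonal blocks (convergent correlations) come out around 0.7;
    # the off-diagonal 3x3 blocks (discriminant correlations) hover near zero.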
2006 Commonwealth Games Re: 2006 Commonwealth Games I suppose people could think that ... but even though the Commonwealth Games began in 1930 (the modern era of Olympic Games began in 1896), they were first proposed in 1891. But the history goes much deeper - there were Olympic-style games held in England since the early 1600s, and Pierre de Coubertin actually visited one of these in 1890 for inspiration. So I suppose you could say they both have equally valid origins. "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Summary: On the Geometric Ergodicity of Metropolis-Hastings
Yves F. Atchadé and François Perron
(March, 2003; revised August, 2005)

Under a compactness assumption, we show that a φ-irreducible and aperiodic Metropolis-Hastings chain is geometrically ergodic if and only if its rejection probability is bounded away from unity. In the particular case of the Independence Metropolis-Hastings algorithm, we obtain that the whole spectrum of the induced operator is contained in (and in many cases equal to) the essential range of the rejection probability of the chain, as conjectured by Liu (1996).

Key words: Geometric ergodicity, Markov chain operators, Metropolis-Hastings algorithm.
MSC Numbers: 65C05, 65C40, 60J27, 60J35

1 Introduction

The Metropolis-Hastings (MH) algorithm is a very flexible algorithm used to approximately sample from complicated distributions in high-dimensional spaces. If π is the probability distribution of interest, such an algorithm generates a Markov chain (Xn) which admits π as its stationary distribution. Geometric ergodicity characterizes a global stability property of the chain that is particularly useful from a statistical point of view. For example, if the Markov chain is geometrically ergodic, central limit theorems for empirical sums of functionals of the chain are easier to obtain (see e.g. Jones …)
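Since the abstract ties geometric ergodicity of the Independence Metropolis-Hastings sampler to its rejection probability, a toy simulation may help fix ideas. The sketch below is my own example, not from the paper: the target and proposal densities are arbitrary choices, picked so that the proposal has heavier tails than the target, which keeps the ratio π/q bounded and the empirical rejection probability well below one.

    import numpy as np

    rng = np.random.default_rng(1)

    # Target: standard normal (up to a constant). Proposal: N(0, 2^2), drawn
    # independently of the current state.
    log_pi = lambda x: -0.5 * x**2
    scale = 2.0
    log_q = lambda x: -0.5 * (x / scale)**2 - np.log(scale)

    x = 0.0
    n, rejections = 50_000, 0
    for _ in range(n):
        y = scale * rng.standard_normal()
        # Independence-sampler acceptance ratio: [pi(y) q(x)] / [pi(x) q(y)]
        log_alpha = (log_pi(y) - log_q(y)) - (log_pi(x) - log_q(x))
        if np.log(rng.random()) < log_alpha:
            x = y
        else:
            rejections += 1
    print("empirical rejection probability:", rejections / n)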
Parametric polynomial time perceptron rescaling algorithm
Andriy Kharechko
In: Algorithms and Complexity in Durham 2006: Proceedings of the Second ACiD Workshop, Texts in Algorithmics 7 (2006), College Publications, King's College, London, UK. ISBN 1-904987-38-9

Let us consider a linear feasibility problem with a possibly infinite number of inequality constraints posed in an on-line setting: an algorithm suggests a candidate solution, and the oracle either confirms its feasibility, or outputs a violated constraint vector. This model can be solved by subgradient optimisation algorithms for non-smooth functions, also known as the perceptron algorithms in the machine learning community, and its solvability depends on the problem dimension and the radius of the constraint set. The classical perceptron algorithm may have an exponential complexity in the worst case when the radius is infinitesimal. To overcome this difficulty, the space dilation technique was exploited in the ellipsoid algorithm to make its running time polynomial. A special case of the space dilation, the rescaling procedure is utilised in the perceptron rescaling algorithm with a probabilistic approach to choosing the direction of dilation. A parametric version of the perceptron rescaling algorithm is the focus of this work. It is demonstrated that some fixed parameters of the latter algorithm (the initial estimate of the radius and the relaxation parameter) may be modified and adapted for particular problems. The generalised theoretical framework makes it possible to determine convergence of the algorithm with any chosen set of values of these parameters, and suggests a potential way of decreasing the complexity of the algorithm which remains the subject of current research.
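For readers unfamiliar with the "classical perceptron algorithm" mentioned in the abstract, here is a minimal Python sketch of it for a homogeneous linear feasibility problem with an oracle that returns a violated constraint. This is my own illustration, not code from the paper: the toy constraint matrix and the normalization step are arbitrary choices, and the sketch does not include the rescaling procedure itself.

    import numpy as np

    def perceptron(oracle, dim, max_iter=10_000):
        """Classical perceptron for a homogeneous feasibility problem a_i . x > 0:
        keep adding (normalized) violated constraint vectors to the candidate x."""
        x = np.zeros(dim)
        for _ in range(max_iter):
            a = oracle(x)
            if a is None:                      # oracle confirms feasibility
                return x
            x += a / np.linalg.norm(a)
        return None                            # gave up within the iteration budget

    # Toy oracle over three fixed constraints (rows of A); a real oracle could draw
    # from an infinite constraint family, as in the on-line setting described above.
    A = np.array([[1.0, 0.2], [0.8, 1.0], [0.3, 0.9]])

    def oracle(x):
        violated = A @ x <= 0
        return A[violated][0] if violated.any() else None

    print(perceptron(oracle, dim=2))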
Particle 1 of mass 170 g and speed 2.90 m/s undergoes a one-dimensional collision with stationary particle 2 of mass 400 g.
(a) What is the magnitude of the impulse on particle 1 if the collision is elastic?
(b) What is the magnitude of the impulse on particle 1 if the collision is completely inelastic?
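A short worked computation (my addition, not part of the original posting) using the standard one-dimensional collision formulas; it gives roughly 0.69 kg·m/s for the elastic case and 0.35 kg·m/s for the completely inelastic case:

    m1, v1 = 0.170, 2.90          # kg, m/s (170 g moving at 2.90 m/s)
    m2 = 0.400                    # kg, initially at rest

    # (a) elastic collision: standard 1-D result for particle 1's final velocity
    v1_elastic = (m1 - m2) / (m1 + m2) * v1
    J_elastic = abs(m1 * (v1_elastic - v1))          # about 0.69 kg*m/s

    # (b) completely inelastic collision: both particles share a common final velocity
    v_common = m1 * v1 / (m1 + m2)
    J_inelastic = abs(m1 * (v_common - v1))          # about 0.35 kg*m/s

    print(J_elastic, J_inelastic)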
Can someone please explain why the cycloid curve has a vertical movement on the cusps? Thank you!

Best response:

As far as I understand it, the instantaneous direction of motion is vertical because the tangent to the graph of the motion at the cusp is vertical.

Direction: The vertical tangent is explained in both the lecture (using approximations) and the Related Reading (using calculus). The slope (dy/dx) of the tangent to the curve signifies the direction of the curve, which in this case is the direction of motion. So the instantaneous direction of motion is vertical.

Velocity: However, the Related Reading explains that the velocity of the point at the cusp is 0. So, the point has no instantaneous movement at the cusp (v = change in position / change in time = 0). We have position
\[ OP = \langle x, y \rangle = \langle a\theta - a\sin\theta,\; a - a\cos\theta \rangle, \]
so
\[ \text{velocity} = \frac{d(OP)}{d\theta} = \langle a - a\cos\theta,\; a\sin\theta \rangle, \]
which at the cusp is \( \langle a - a\cos 0,\; a\sin 0 \rangle = \langle 0, 0 \rangle \).

So at the cusp, the direction of motion is vertical, while the instantaneous velocity is 0.
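The computation in the answer above can also be checked symbolically. This small SymPy sketch (my addition) confirms that both velocity components vanish at theta = 0 while the slope dy/dx blows up, i.e. the tangent at the cusp is vertical:

    import sympy as sp

    a, t = sp.symbols('a theta', positive=True)
    x = a * t - a * sp.sin(t)      # cycloid traced by a point on a rolling circle of radius a
    y = a - a * sp.cos(t)

    vx, vy = sp.diff(x, t), sp.diff(y, t)
    print(vx.subs(t, 0), vy.subs(t, 0))        # both 0: the speed vanishes at the cusp
    print(sp.limit(vy / vx, t, 0, '+'))        # oo: dy/dx is infinite, so the tangent is vertical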
[SciPy-User] OT: advice on modelling a time series
From: josef.pktd@gmai...
Date: Fri Mar 12 10:32:38 CST 2010

On Fri, Mar 12, 2010 at 10:46 AM, Robin <robince@gmail.com> wrote:
> Hello,
> While not directly Python related I am always impressed with the quality of
> scientific advice available on this list, and was hoping I could receive some...
> I have a limited amount of an experimentally obtained time series from a biological
> system. I would like to come up with a generative model which would allow production
> of large quantities of data with statistics as similar as possible to the experimental
> data. The time series represents a position, and I am particularly interested in
> transient high velocity/acceleration events (which are often not very visible by eye
> in the position trace), so ideally any model should reproduce those with particular care.
> An example plot of a small section of the data (pos, vel and acc) (1s) is available here:
> http://i41.tinypic.com/ou42de.jpg
> If it makes any difference it is sampled at 4kHz. I tried fitting a basic autoregressive
> model. An order 38 model reproduced the position signal visually quite well, but velocity
> and acceleration were far too regular. I tried fitting one to the velocity, but I think
> the events of interest are too far apart in bins so the order required is too large.
> So, could anyone point me to anything that would be helpful in python (so far I did the
> AR with a matlab package I found)? Also any suggestions for how to proceed would be
> great - other than reading the wikipedia article I am completely new to this type of AR
> modelling. So far the only ideas I have involve either downsampling the signal (to try
> to reduce the order of AR model needed), or splitting it in frequency to low f/high f
> components and attempting to model them separately then recombine. Do either of these
> seem sensible?
> Is it likely some non-linear model would be required (pos, vel and acc all have high
> kurtosis), or are normal AR models capable of recreating this kind of fine structure
> if tweaked sufficiently?
> Thanks in advance for any pointers,

In statsmodels we are working on some time series analysis, but it is still a bit too early for real use. We have AR, but for this kind of data I would recommend scikits.talkbox which has a Levinson-Durbin recursion implemented that gives a more robust estimate of longer AR polynomials (maybe nitime also has it now).

I don't know of any implementation of non-linear models for time series analysis in python, e.g. a Markov switching or threshold model, or of any models that would allow for fat tails or asymmetric shocks.

If you just want to generate sample data with similar features, then this will be much easier than estimation. (I have some tentative simulation code for continuous time diffusion processes but not cleaned up.)

Your acceleration data looks like a GARCH process, that is, the variance is autocorrelated but not (much) the mean. There also, I have an initial version but not yet good enough to be reliable.

From the graph, it also looks like the three observations are strongly related, so separate (univariate) modeling doesn't look like the most appropriate choice.

> Cheers
> Robin
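As a starting point for the AR experiments discussed above, here is a self-contained least-squares AR(p) fit in plain NumPy. This is my own sketch: it deliberately avoids depending on any particular statsmodels or talkbox API, and the AR(2) coefficients used to generate the test data are arbitrary.

    import numpy as np

    def fit_ar(x, p):
        """Least-squares fit of x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]."""
        X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
        y = x[p:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs, y - X @ coeffs

    # Simulate an AR(2) series with known coefficients, then recover them.
    rng = np.random.default_rng(0)
    n = 4000
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(2, n):
        x[t] = 1.6 * x[t - 1] - 0.7 * x[t - 2] + e[t]

    coeffs, resid = fit_ar(x, p=2)
    print(coeffs)          # close to [1.6, -0.7]
    # To generate surrogate data with similar second-order statistics, run the fitted
    # recursion forward, driving it with resampled (bootstrapped) residuals.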
What is etale descent?

What is etale descent? I have a vague notion that, for example, given a variety $V$ over a number field $K$, etale descent will produce (sometimes) a variety $V'$ over $\mathbb{Q}$ of the same complex dimension which is isomorphic to $V$ over $K$ and such that $V(K)=V'(\mathbb{Q})$. Is this at all right? How does one do such a thing?

5 Answers

Answer (accepted):

Let $L/K$ be a Galois field extension and consider a variety $Y$ over $L$. The theory of (Galois) descent addresses the question whether $Y$ can be defined over $K$. More precisely, the question is: "does there exist a variety $X$ over $K$ such that $Y = X \times_{Spec(K)} Spec(L)$". Now assume such $X$ does exist. In this case $Y$ is endowed with a $Gal(L/K)$ action coming from an action on the second factor. Conversely, if $Y$ has a Galois action compatible with the action on $Spec(L)$, then $Y$ descends to some $X$ defined over $K$. $X$ is actually a quotient of $Y$ by $Gal(L/K)$ (so that the conjugate points glue together to form one point on $X$). Note that the set of $K$-points of $X$ is the set of Galois fixed points.

Example. $K = \mathbf R$ and $L = \mathbf C$. For any real variety, the set of complex points admits the action of $\mathbf Z/2$ by complex conjugation. Conversely, if a complex variety is endowed with conjugation, it descends to a real variety. This is in fact an exercise in Hartshorne.

Remarks. The theory of descent also classifies all possible $X$'s arising from $Y$. Such $X$'s are called forms of $Y$. They are in 1-1 correspondence with a certain Galois cohomology.

Answer:

As I write, the question looks like a muddle of two distinct notions:

1) Restriction of scalars. Given $L/K$ finite and a variety $V/L$, there's a variety $W/K$ of dimension $(\dim V)[L:K]$ with $W(K)=V(L)$ canonically. For example, over the complexes the variety ${\mathbf C}^*$ is defined by the equation $z\not=0$ and its restriction of scalars to the reals is (isomorphic to) the subspace of affine 2-space defined by $x^2+y^2\not=0$.

2) Descent. $L/K$ finite again, but this time separable too, and let's even make it Galois for simplicity. Given $V/K$ one can imagine $V$ as a variety over $L$. Over $L$, $V$ is suspiciously isomorphic to its conjugates. Descent (vaguely) is the idea that conversely, given a variety over $L$ isomorphic to all its conjugates (in a good way), it's indeed the base change to $L$ of a variety over $K$.

Comments:
Ah yes, you are right, I muddled these two things together. Is there a nice explicit way to think about Weil restriction of scalars in more complicated situations, for example over number fields? – David Hansen Dec 1 '09 at 0:11
@Alex: just to let you know, the mechanics of this site mean that it's highly likely that the only person who will see your comments are the person who wrote the answer that you're commenting on (i.e. me, in this case). Commenting on my answer does not bump the question to the top of the list, and there are no messages sent to the original poster or others involved. In short, what I'm saying is that if you want to say something to me then sure, submit a comment on my answer, but if you want to say something to the OP you're much better off commenting on the question or submitting another answer. – Kevin Buzzard Sep 20 '10 at 19:55
Thanks for pointing this out, Kevin! I have now posted it as an answer, so I will just delete my comments, so as not to clutter up the thread. – Alex B.
Sep 25 '10 at 9:32

Answer: You may be interested in Illusie's survey.

Answer:

This used to be a comment, but as Kevin pointed out you might never have found out that I left one. So just in case this is still of any relevance, I will repeat it here. I know, this thread is old so maybe you have already figured it out yourself, but in case this is not so, here goes: you asked in a comment whether there was any explicit way of thinking about Weil restriction of scalars and indeed there is.

Let $L/K$ be a finite extension of fields and $V/L$ a variety, given by a set of equations $f_i(x_1,…,x_n)$ with coefficients in $L$. Fix a basis $u_1,…,u_m$ for $L/K$. Each variable $x_r$ is a variable in $L$ but you can instead write $x_r=\sum_s y_{r,s}u_s$, where now $y_{r,s}$ are variables in $K$. Do the same with the coefficients of the $f_i$. By comparing the coefficients of each $u_s$, you get $[L:K]$ equations over $K$ and this new system describes a variety over $K$ - the Weil restriction of scalars. You immediately check that its dimension is indeed $[L:K]$ times the dimension of the original variety. If you do this with an explicit simple example, like Weil restricting an elliptic curve from $\mathbb{Q}(i)$ to $\mathbb{Q}$, then you will get a much better feel for what's going on.

Comments:
See also: en.wikipedia.org/wiki/Weil_restriction – S. Carnahan♦ Sep 25 '10 at 11:49
Good point. Always worth checking Wikipedia before posting. – Alex B. Sep 25 '10 at 13:23

Answer:

Let $K/k$ be a finite separable extension (not necessarily Galois) and $Y$ a quasi-projective variety over $K$. The functor $k\text{-Alg} \to Sets: A \mapsto Y(A\otimes_k K)$ is representable by a quasi-projective $k$-scheme $Y_0=R_{K/k}(Y)$. We have a functorial adjunction isomorphism $Hom_{k\text{-schemes}}(X,R_{K/k}(Y))=Hom_{K\text{-schemes}}(X\otimes_k K,Y)$ and the $k$-scheme $Y_0=R_{K/k}(Y)$ is said to be obtained from the $K$-scheme $Y$ by Weil descent. For example if you quite modestly take $X=Spec(k)$, you get $(R_{K/k}(Y))(k)=Y_0(k)=Y(K)$, a formula that Buzzard quite rightfully mentions. If $Y=G$ is an algebraic group over $K$, its Weil restriction $R_{K/k}(G)$ will be an algebraic group over $k$.

As the name says this is due (in a different language) to André Weil: The field of definition of a variety. Amer. J. Math. 78 (1956), 509–524. Chapter 16 of Milne's online Algebraic Geometry book is a masterful exposition of descent theory, which will give you many properties of $(R_{K/k}(Y))(k)$ (with proofs), and the only reasonable thing for me to do is stop here and refer you to his wonderful notes.

Comments:
I've never seen the word "descent" used to refer to Weil restriction (aka restriction of scalars) before. As far as I can tell, Weil restriction is a functor, and descent is a way to determine if an object is in the essential image of the left adjoint to that functor. – S. Carnahan♦ Nov 28 '09 at 2:43
Yes, this is correct (as the very book I recommend confirms!) I was going to edit it, and I noticed you (probably) already did. How come "edited" didn't appear below the answer? – Georges Elencwajg Nov 28 '09 at 8:42
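To make the explicit recipe for Weil restriction above concrete, here is a small SymPy computation (my own illustration; the curve y^2 = x^3 + 1 over Q(i) is just a convenient example). Writing x = x1 + i*x2 and y = y1 + i*y2 and separating real and imaginary parts produces two equations over Q in four variables, so the dimension doubles, as expected for [Q(i):Q] = 2:

    import sympy as sp

    x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
    x = x1 + sp.I * x2          # a variable over Q(i) written in the basis {1, i}
    y = y1 + sp.I * y2

    f = sp.expand(y**2 - x**3 - 1)      # the curve y^2 = x^3 + 1 over Q(i)
    re, im = f.as_real_imag()
    print(sp.Eq(re, 0))   # y1^2 - y2^2 - x1^3 + 3*x1*x2^2 - 1 = 0
    print(sp.Eq(im, 0))   # 2*y1*y2 - 3*x1^2*x2 + x2^3 = 0
    # These two equations over Q in (x1, x2, y1, y2) cut out the Weil restriction;
    # its dimension is 2 = [Q(i):Q] times the dimension of the original curve.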
Re: st: Interaction term in OLS regression

From: Robert A Yaffee <bob.yaffee@nyu.edu>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Interaction term in OLS regression
Date: Sun, 08 Feb 2009 11:46:59 -0500

If scatterplots reveal that the functional form of these variables is linear, then there is no need to include polynomial terms. The main effect may not be statistically significant, owing to the ratio of the contribution to the R^2 contributed by c not being at least 1.96 times its std error. The interaction with the f variable means that the joint (multiplicative) effect of c*f, over and above the individual main effects (c and f taken separately), is significantly different at different levels of f. These joint effects exhibit sufficiently large differences in their contribution to the slope (trend) over their standard error to obtain statistical significance. Plotting different levels of c or f against the other interacting variable should reveal this phenomenon.

Robert A. Yaffee, Ph.D.
Research Professor
Silver School of Social Work
New York University
Biosketch: http://homepages.nyu.edu/~ray1/Biosketch2008.pdf
CV: http://homepages.nyu.edu/~ray1/vita.pdf

----- Original Message -----
From: Antonio Silva <asilva100@live.com>
Date: Sunday, February 8, 2009 9:00 am
Subject: st: Interaction term in OLS regression
To: Stata list <statalist@hsphsun2.harvard.edu>

> List: sorry for the earlier post that did not have a subject line. My mistake. Here is the original post:
> Hello Statalist:
> I have an OLS model that looks like this: y = constant + b + c + d + e + f. c is the variable in which I am most interested.
> In the basic model, c turns out NOT to be significant (it is not even close). However, when I include an interaction term in the model, c*f, c turns out to be highly significant.
> So the new model looks like this: y = constant + b + c + d + e + f + c*f. The interaction term, c*f, is highly significant as well (though in many versions f is NOT significant). My question is this: Is it defensible JUST to report the results of the fully specified model--that is, the one with the interaction? I kind of feel bad knowing that the first model does not produce the results I desire (I am very happy c ends up significant in the full model--it helps support my hypothesis). I have heard from others that if the variable of interest is NOT significant without the interaction term in the model but IS significant WITH the interaction term, I should either a) report the results of both models; or b) assume the data are screwy and back away... What do you all think? Thanks so much.
> Antonio Silva
> Anyway, I received several good responses. And here are my responses to those responses. Any further feedback is appreciated.
> First, OLS seems appropriate, though I understand the desire to do something more. The DV is a continuous variable that is normally distributed. Diagnostics show the model works well... So I really don't think any other method makes sense here.
> Second, the interaction is exactly what the theory holds, which is nice. I guess my confusion lies here: why would the variable not be significant without the interaction term included? The theory holds that c would affect everyone, but would affect different values of f differently.
> So I would expect that the model without the interaction would also produce some good results on c, but it does not.
> Thanks again...
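The situation the poster describes, c insignificant on its own but significant once c*f is included, is exactly what happens when the effect of c changes sign across the range of f and averages out to about zero. The following simulation (in Python, purely as an illustration; the coefficients, noise level, and sample size are my own arbitrary choices) reproduces that pattern with a hand-rolled OLS t-statistic:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    c = rng.standard_normal(n)
    f = rng.uniform(0, 1, n)
    # True model: the effect of c is (1 - 2*f), i.e. +1 at f = 0 and -1 at f = 1,
    # so it averages out to roughly zero over the sample.
    y = 1 + (1 - 2 * f) * c + 0.5 * f + rng.standard_normal(n)

    def t_stats(columns, y):
        X = np.column_stack([np.ones(len(y))] + columns)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (len(y) - X.shape[1])
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
        return beta / se                     # order: constant, then the given columns

    print(t_stats([c, f], y))                # t on c is near zero: "not significant"
    print(t_stats([c, f, c * f], y))         # t on both c and c*f is large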
Elizabeth Buchanan Cowley May 22, 1874 - April 13, 1945 Elizabeth Cowley was born in Allegheny, Pennsylvania. She studied at the Indiana State Normal School of Pennsylvania for two years and received a degree in July, 1893. The next four years were spent teaching in the public schools in Pennsylvania. In 1897 she entered Vassar College, where she earned her A.B. degree in mathematics in 1901. She was awarded the graduate scholarship in mathematics and astronomy for the following academic year and received her A.M. degree in 1902. Part of her work on "position and proper motions of 45 stars" involved working out the definitive orbit of a comet. She then received an appointment as instructor of mathematics at Vassar College in 1902. During the summers of 1903, 1905, and 1905 she studied mathematics and physics at the University of Chicago with Bolza, Dickson, Millikan, Moulton, and Slaught. In February, 1906, Cowley began work at Columbia University. In 1908 she received her Ph.D. degree from Columbia, the fourth woman to receive a Ph.D. in mathematics from that institution. Her thesis, written under the direction of Cassius Keyser, was on "Plane curves of the eighth order with two real four-fold points having distinct tangents and with no other point singularities." Later studies took her to universities in Gottingen and Munich. Cowley taught at Vassar College from 1902 to 1926. In 1913 she was promoted to assistant professor, in 1916 to associate professor. In 1926 she took a three-year leave of absence to go to Pittsburgh to be with her mother. From 1908 until 1926 she served as an associate editor for the Dutch review journal Revue Semestrielle des Publications Mathématiques, in addition to her responsibilities at Vassar. Cowley officially resigned from Vassar in 1929 to stay in Pittsburgh. From 1926 to 1937 Cowley taught plane and solid geometry at the Allegheny Senior High School in Pittsburgh. She served as the vice-president and president of the mathematics section of the Pennsylvania State Education Association. She also served for a number of years as a reader in mathematics for the College Entrance Examination Board. Cowley wrote a number of articles that were published in journals such as the Journal of Educational Research, the Bulletin of the American Mathematical Society, the American Mathematical Monthly, The Mathematics Teacher, and the Journal of the American Association of University Women. In 1907 Cowley and Ida Whiteside submitted a prize-winning paper on "Definitive Orbit of Comet 1826II," published by the Astronomische Nachrichten, for which they received a prize of 100 marks from the German Astronomical Society. A 1926 paper in the American Mathematical Monthly discussed a "Note on a Linear Diophantine Equation." This note concerned a generalized version of the classic arithmetic measuring problem where it is required to divide into two equal parts the contents of an 8-ounce vase if the only empty vases hold 5 ounces and 3 ounces respectively. Nearly the same problem appeared in the 1995 movie "Die Hard 3" in which a bomb will explode if the hero (played by Bruce Willis) cannot solve that problem within a couple of minutes. Cowley also wrote a book on plane geometry (1932) and another book on solid geometry (1934). Both of these books were written primarily for use in the high schools. Cowley wrote a 1928 article for the MAA Monthly in which she addressed the controversy about teaching solid geometry in the high schools. 
She noted that the former requirement of solid geometry in the freshman year of college had been abolished by practically every college, and that critics wanted to replace solid geometry in the first year by an introduction to the calculus, placing more stress upon solid geometry in the high school. Cowley defended the teaching of solid geometry in the schools, saying that "students who have studied this subject in a good high school compete successfully with those who have had the college course when they take uniform examinations in the subject or when they pursue college courses in analytic geometry and in the calculus."

1. Ph.D. Thesis, The New Era Printing Company, Lancaster, PA, 1908.
2. Helen Brewster Owens Papers. Schlesinger Library, Radcliffe College.
3. Vassar College Library Archives.
Entropy of the Ising model

Consider the standard Ising model on $[0,N]^2$ for $N$ large. By that I mean the square-lattice Ising model without external field, inside an $N$-by-$N$ square. What is its entropy for $N$ large? It must behave asymptotically as $c(\beta)N^2$ for some constant $c(\beta)$ depending on the inverse temperature $\beta$. What is $c(\beta)$? Has it been computed?

Comment: Start with the partition function, see e.g. p. 480-1 (search for the page numbers) of amazon.com/Modern-Course-Statistical-Physics/dp/0471595209. Then do some thermodynamics. – Steve Huntsman Jul 24 '11 at 14:11

Answer (accepted):

To expand on Steve Huntsman's comment, the entropy follows from Onsager's result for the free energy per site,
$$ F = -\beta^{-1}\left[\ln 2 + \frac{1}{2}\frac{1}{(2\pi)^2}\int_0^{2\pi}d\theta_1\int_0^{2\pi}d\theta_2\, \ln(\cosh 2\beta E_1\cosh 2\beta E_2 - \sinh 2\beta E_1\cos\theta_1 - \sinh 2\beta E_2\cos\theta_2)\right], $$
and the thermodynamic relation,
$$ S = -\frac{\partial F}{\partial T}, $$
for the entropy per site. Here $\beta=1/(k_B T)$ and $E_1$ and $E_2$ are the horizontal and vertical interaction strengths. If you set both interaction strengths equal to 1 and use units where Boltzmann's constant equals 1, then the critical temperature is $2/\ln(\sqrt2 + 1)\approx 2.269$. If you plot $S$, you should find that it interpolates between 0 at low temperature and $\ln 2$ at high temperature, as expected. At the critical temperature, the graph has infinite slope.

Comments:
Thanks for the expanded explanation. I need to do some reading before I can make sense of this answer. – Boris Bukh Jul 25 '11 at 17:36
The double integral expression for F looks a bit unpleasant, but it's really just some sort of hypergeometric function, i.e. a solution to a linear second order ODE. It just happens not to be expressible in terms of anything more elementary. Feel free to email me if you have questions. – Will Orrick Jul 25 '11 at 20:23
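For anyone who wants to see the numbers, here is a short numerical evaluation of Onsager's formula and the entropy per site (my own sketch, not part of the thread; the grid size and finite-difference step are arbitrary choices, and k_B is set to 1). It should show S running from near 0 at low temperature up to ln 2, roughly 0.693, at high temperature:

    import numpy as np

    def free_energy(T, E1=1.0, E2=1.0, n=400):
        """Onsager free energy per site at temperature T (k_B = 1), estimated by
        averaging the integrand over an n x n grid of angles."""
        beta = 1.0 / T
        th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        t1, t2 = np.meshgrid(th, th)
        integrand = np.log(np.cosh(2 * beta * E1) * np.cosh(2 * beta * E2)
                           - np.sinh(2 * beta * E1) * np.cos(t1)
                           - np.sinh(2 * beta * E2) * np.cos(t2))
        return -(np.log(2.0) + 0.5 * integrand.mean()) / beta

    def entropy(T, dT=1e-4):
        """S = -dF/dT by a central finite difference."""
        return -(free_energy(T + dT) - free_energy(T - dT)) / (2 * dT)

    for T in [1.0, 2.0, 2.269, 3.0, 5.0, 20.0]:
        print(f"T = {T:6.3f}   S = {entropy(T):.4f}")
    # S rises from near 0 at low T towards ln 2 at high T, with a steep
    # (formally infinite) slope at the critical temperature near 2.269.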
by Recursive Step: Church’s analysis of effective calculability "... Abstract. Church’s Thesis asserts that the only numeric functions that can be calculated by effective means are the recursive ones, which are the same, extensionally, as the Turingcomputable numeric functions. The Abstract State Machine Theorem states that every classical algorithm is behaviorally e ..." Cited by 21 (10 self) Add to MetaCart Abstract. Church’s Thesis asserts that the only numeric functions that can be calculated by effective means are the recursive ones, which are the same, extensionally, as the Turingcomputable numeric functions. The Abstract State Machine Theorem states that every classical algorithm is behaviorally equivalent to an abstract state machine. This theorem presupposes three natural postulates about algorithmic computation. Here, we show that augmenting those postulates with an additional requirement regarding basic operations gives a natural axiomatization of computability and a proof of Church’s Thesis, as Gödel and others suggested may be possible. In a similar way, but with a different set of basic operations, one can prove Turing’s Thesis, characterizing the effective string functions, and—in particular—the effectively-computable functions on string representations of numbers. - BULLETIN OF SYMBOLIC LOGIC , 2002 "... The four authors present their speculations about the future developments of mathematical logic in the twenty-first century. The areas of recursion theory, proof theory and logic for computer science, model theory, and set theory are discussed independently. ..." Cited by 8 (0 self) Add to MetaCart The four authors present their speculations about the future developments of mathematical logic in the twenty-first century. The areas of recursion theory, proof theory and logic for computer science, model theory, and set theory are discussed independently. , 2007 "... The Abstract State Machine Thesis asserts that every classical algorithm is behaviorally equivalent to an abstract state machine. This thesis has been shown to follow from three natural postulates about algorithmic computation. Here, we prove that augmenting those postulates with an additional requ ..." Cited by 2 (0 self) Add to MetaCart The Abstract State Machine Thesis asserts that every classical algorithm is behaviorally equivalent to an abstract state machine. This thesis has been shown to follow from three natural postulates about algorithmic computation. Here, we prove that augmenting those postulates with an additional requirement regarding basic operations implies Church’s Thesis, namely, that the only numeric functions that can be calculated by effective means are the recursive ones (which are the same, extensionally, as the Turing-computable numeric functions). In particular, this gives a natural axiomatization of Church’s Thesis, as Gödel and others suggested may be possible. , 2000 "... this paper please consult me first, via my home page. ..." , 2008 "... Results going back to Turing and Gödel provide us with limitations on our ability to algorithmically decide the truth or falsity of mathematical assertions in a number of important mathematical contexts. Here we adapt some of this earlier work to very simplified mathematical models of discrete dete ..." Add to MetaCart Results going back to Turing and Gödel provide us with limitations on our ability to algorithmically decide the truth or falsity of mathematical assertions in a number of important mathematical contexts. 
Here we adapt some of this earlier work to very simplified mathematical models of discrete deterministic physical systems involving a few moving bodies (twelve point masses) in potentially infinite one dimensional space. There are two kinds of such limiting results that must be carefully distinguished. Results of the first kind state the nonexistence of any algorithm for determining whether any statement among a given set of statements is true or false. Results of the second kind are much deeper and present much greater challenges. They point to specific statements A, where we can neither prove nor refute A using accepted principles of mathematical reasoning. We give a brief survey of these limiting results. These include limiting results of the first kind: from number theory, group theory, and topology, in mathematics, and from idealized computing devices in theoretical computer science. We present a new limiting result of the first kind for simplified physical systems. We conjecture some related limiting results of the second kind, for simplified physical systems. "... Abstract. We discuss some of the opportunities and problems which may confront the field of automated reasoning in the years ahead. We focus on various issues related to the development of a Universal Automated Information System for Science and Technology, and the problem of developing institutiona ..." Add to MetaCart Abstract. We discuss some of the opportunities and problems which may confront the field of automated reasoning in the years ahead. We focus on various issues related to the development of a Universal Automated Information System for Science and Technology, and the problem of developing institutional support for long-term projects. 1 , 2008 "... Abstract. Results going back to Turing and Gödel provide us with limitations on our ability to algorithmically decide the truth or falsity of mathematical assertions in a number of important mathematical contexts. Here we adapt some of this earlier work to very simplified mathematical models of disc ..." Add to MetaCart Abstract. Results going back to Turing and Gödel provide us with limitations on our ability to algorithmically decide the truth or falsity of mathematical assertions in a number of important mathematical contexts. Here we adapt some of this earlier work to very simplified mathematical models of discrete deterministic physical systems involving a few moving bodies (twelve point masses) in potentially infinite one dimensional space. There are two kinds of such limiting results that must be carefully distinguished. Results of the first kind state the nonexistence of any algorithm for determining whether any statement among a given set of statements is true or false. Results of the second kind are much deeper and present much greater challenges. They point to specific statements A, where we can neither prove nor refute A using accepted principles of mathematical reasoning. We give a brief survey of these limiting results. These include limiting results of the first kind: from number theory, group theory, and topology, in mathematics, and from idealized computing devices in theoretical computer science. We present a new limiting result of the first kind for simplified physical systems. We conjecture some related limiting results of the second kind, for simplified physical systems.
Posts about this week’s finds on Azimuth This Week’s Finds (Week 307) 14 December, 2010 I’d like to take a break from interviews and explain some stuff I’m learning about. I’m eager to tell you about some papers in the book Tim Palmer helped edit, Stochastic Physics and Climate Modelling. But those papers are highly theoretical, and theories aren’t very interesting until you know what they’re theories of. So today I’ll talk about "El Niño", which is part of a very interesting climate cycle. Next time I’ll get into more of the math. I hadn’t originally planned to get into so much detail on the El Niño, but this cycle is a big deal in southern California. In the city of Riverside, where I live, it’s very dry. There is a small river, but it’s just a trickle of water most of the time: there’s a lot less "river" than "side". It almost never rains between March and December. Sometimes, during a "La Niña", it doesn’t even rain in the winter! But then sometimes we have an "El Niño" and get huge floods in the winter. At this point, the tiny stream that gives Riverside its name swells to a huge raging torrent. The difference is very dramatic. So, I’ve always wanted to understand how the El Niño cycle works — but whenever I tried to read an explanation, I couldn’t follow it! I finally broke that mental block when I read some stuff on William Kessler‘s website. He’s an expert on the El Niño phenomenon who works at the Pacific Marine Environmental Laboratory. One thing I like about his explanations is that he says what we do know about the El Niño, and also what we don’t know. We don’t know what triggers it! In fact, Kessler says the El Niño would make a great research topic for a smart young scientist. In an email to me, which he has allowed me to quote, he said: We understand lots of details but the big picture remains mysterious. And I enjoyed your interview with Tim Palmer because it brought out a lot of the sources of uncertainty in present-generation climate modeling. However, with El Niño, the mystery is beyond Tim’s discussion of the difficulties of climate modeling. We do not know whether the tropical climate system on El Niño timescales is stable (in which case El Niño needs an external trigger, of which there are many candidates) or unstable. In the 80s and 90s we developed simple "toy" models that convinced the community that the system was unstable and El Niño could be expected to arise naturally within the tropical climate system. Now that is in doubt, and we are faced with a fundamental uncertainty about the very nature of the beast. Since none of us old farts has any new ideas (I just came back from a conference that reviewed this stuff), this is a fruitful field for a smart young person. So, I hope some smart young person reads this and dives into working on El Niño! But let’s start at the beginning. Why did I have so much trouble understanding explanations of the El Niño? Well, first of all, I’m an old fart. Second, most people are bad at explaining stuff: they skip steps, use jargon they haven’t defined, and so on. But third, climate cycles are hard to explain. There’s a lot about them we don’t understand — as Kessler’s email points out. And they also involve a kind of "cyclic causality" that’s a bit tough to mentally process. At least where I come from, people find it easy to understand linear chains of causality, like "A causes B, which causes C". For example: why is the king’s throne made of gold? Because the king told his minister "I want a throne of gold!" 
And the minister told the servant, "Make a throne of gold!" And the servant made the king a throne of gold. Now that’s what I call an explanation! It’s incredibly satisfying, at least if you don’t wonder why the king wanted a throne of gold in the first place. It’s easy to remember, because it sounds like a story. We hear a lot of stories like this when we’re children, so we’re used to them. My example sounds like the beginning of a fairy tale, where the action is initiated by a "prime mover": the decree of a king. There’s something a bit trickier about cyclic causality, like "A causes B, which causes C, which causes A." It may sound like a sneaky trick: we consider "circular reasoning" a bad thing. Sometimes it is a sneaky trick. But sometimes this is how things really work! Why does big business have such influence in American politics? Because big business hires lots of lobbyists, who talk to the politicians, and even give them money. Why are they allowed to do this? Because big business has such influence in American politics. That’s an example of a "vicious circle". You might like to cut it off — but like a snake holding its tail in its mouth, it’s hard to know where to start. Of course, not all circles are "vicious". Many are "virtuous". But the really tricky thing is how a circle can sometimes reverse direction. In academia we worry about this a lot: we say a university can either "ratchet up" or "ratchet down". A good university attracts good students and good professors, who bring in more grant money, and all this makes it even better… while a bad university tends to get even worse, for all the same reasons. But sometimes a good university goes bad, or vice versa. Explaining that transition can be hard. It’s also hard to explain why a La Niña switches to an El Niño, or vice versa. Indeed, it seems scientists still don’t understand this. They have some models that simulate this process, but there are still lots of mysteries. And even if they get models that work perfectly, they still may not be able to tell a good story about it. Wind and water are ultimately described by partial differential equations, not fairy tales. But anyway, let me tell you a story about how it works. I’m just learning this stuff, so take it with a grain of salt… The "El Niño/Southern Oscillation" or "ENSO" is the largest form of variability in the Earth’s climate on times scales greater than a year and less than a decade. It occurs across the tropical Pacific Ocean every 3 to 7 years, and on average every 4 years. It can cause extreme weather such as floods and droughts in many regions of the world. Countries dependent on agriculture and fishing, especially those bordering the Pacific Ocean, are the most affected. And here’s a cute little animation of it produced by the Australian Bureau of Meteorology: Let me tell you first about La Niña, and then El Niño. If you keep glancing back at this little animation, I promise you can understand everything I’ll say. Winds called trade winds blow west across the tropical Pacific. During La Niña years, water at the ocean’s surface moves west with these winds, warming up in the sunlight as it goes. So, warm water collects at the ocean’s surface in the western Pacific. This creates more clouds and rainstorms in Asia. Meanwhile, since surface water is being dragged west by the wind, cold water from below gets pulled up to take its place in the eastern Pacific, off the coast of South America. I hope this makes sense so far. But there’s another aspect to the story. 
Because the ocean’s surface is warmer in the western Pacific, it heats the air and makes it rise. So, wind blows west to fill the "gap" left by rising air. This strengthens the westward-blowing trade winds. So, it’s a kind of feedback loop: the oceans being warmer in the western Pacific helps the trade winds blow west, and that makes the western oceans even warmer. Get it? This should all make sense so far, except for one thing. There’s one big question, and I hope you’re asking it. Namely: Why do the trade winds blow west? If I don’t answer this, my story so far would work just as well if I switched the words "west" and "east". That wouldn’t necessarily mean my story was wrong. It might just mean that there were two equally good options: a La Niña phase where the trade winds blow west, and another phase — say, El Niño — where they blow east! From everything I’ve said so far, the world could be permanently stuck in one of these phases. Or, maybe it could randomly flip between these two phases for some reason. Something roughly like this last choice is actually true. But it’s not so simple: there’s not a complete symmetry between west and east. Why not? Mainly because the Earth is turning to the east. Air near the equator warms up and rises, so new air from more northern or southern regions moves in to take its place. But because the Earth is fatter at the equator, the equator is moving faster to the east. So, the new air from other places is moving less quickly by comparison… so as seen by someone standing on the equator, it blows west. This is an example of the Coriolis effect: By the way: in case this stuff wasn’t tricky enough already, a wind that blows to the west is called an easterly, because it blows from the east! That’s what happens when you put sailors in charge of scientific terminology. So the westward-blowing trade winds are called "northeasterly trades" and "southeasterly trades" in the picture above. But don’t let that confuse you. (I also tend to think of Asia as the "Far East" and California as the "West Coast", so I always need to keep reminding myself that Asia is in the west Pacific, while California is in the east Pacific. But don’t let that confuse you either! Just repeat after me until it makes perfect sense: "The easterlies blow west from West Coast to Far East".) Okay: silly terminology aside, I hope everything makes perfect sense so far. The trade winds have a good intrinsic reason to blow west, but in the La Niña phase they’re also part of a feedback loop where they make the western Pacific warmer… which in turn helps the trade winds blow west. But then comes an El Niño! Now for some reason the westward winds weaken. This lets the built-up warm water in the western Pacific slosh back east. And with weaker westward winds, less cold water is pulled up to the surface in the east. So, the eastern Pacific warms up. This makes for more clouds and rain in the eastern Pacific — that’s when we get floods in Southern California. And with the ocean warmer in the eastern Pacific, hot air rises there, which tends to counteract the westward winds even more! In other words, all the feedbacks reverse themselves. But note: the trade winds never mainly blow east. During an El Niño they still blow west, just a bit less. So, the climate is not flip-flopping between two symmetrical alternatives. It’s flip-flopping between two asymmetrical alternatives. I hope all this makes sense… except for one thing. There’s another big question, and I hope you’re asking it. 
Namely: Why do the westward trade winds weaken? We could also ask the same question about the start of the La Niña phase: why do the westward trade winds get stronger? The short answer is that nobody knows. Or at least there’s no one story that everyone agrees on. There are actually several stories… and perhaps more than one of them is true. But now let me just show you the data: The top graph shows variations in the water temperature of the tropical Eastern Pacific ocean. When it’s hot we have El Niños: those are the red hills in the top graph. The blue valleys are La Niñas. Note that it’s possible to have two El Niños in a row without an intervening La Niña, or vice versa! The bottom graph shows the "Southern Oscillation Index" or "SOI". This is the air pressure in Tahiti minus the air pressure in Darwin, Australia. You can see those locations here. So, when the SOI is high, the air pressure is higher in the east Pacific than in the west Pacific. This is what we expect in a La Niña: that’s why the westward trade winds are strong then! Conversely, the SOI is low in the El Niño phase. This variation in the SOI is called the Southern Oscillation. If you look at the graphs above, you’ll see how one looks almost like an upside-down version of the other. So, the El Niño/La Niña cycle is tightly linked to the Southern Oscillation. Another thing you’ll see from these graphs is that the ENSO cycle is far from perfectly periodic! Here’s a graph of the Southern Oscillation Index going back a lot further: This graph was made by William Kessler. His explanations of the ENSO cycle are the first ones I really understood. My own explanation here is a slow-motion, watered-down version of his. Any mistakes are, of course, mine. To conclude, I want to quote his discussion of theories about why an El Niño starts, and why it ends. As you’ll see, this part is a bit more technical. It involves three concepts I haven’t explained yet:
• The "thermocline" is the border between the warmer surface water in the ocean and the cold deep water, 100 to 200 meters below the surface. During the La Niña phase, warm water is blown to the western Pacific, and cold water is pulled up to the surface of the eastern Pacific. So, the thermocline is deeper in the west than in the east. When an El Niño occurs, the thermocline flattens out.
• "Oceanic Rossby waves" are very low-frequency waves in the ocean’s surface and thermocline. At the ocean’s surface they are only 5 centimeters high, but hundreds of kilometers across. They move at about 10 centimeters/second, requiring months to years to cross the ocean! The surface waves are mirrored by waves in the thermocline, which are much larger, 10-50 meters in height. When the surface goes up, the thermocline goes down.
• The "Madden-Julian Oscillation" or "MJO" is the largest form of variability in the tropical atmosphere on time scales of 30-90 days. It’s a pulse that moves east across the Indian Ocean and Pacific ocean at 4-8 meters/second. It manifests itself as patches of anomalously high rainfall and also anomalously low rainfall. Strong Madden-Julian Oscillations are often seen 6-12 months before an El Niño starts.
With this bit of background, let’s read what Kessler wrote: There are two main theories at present. The first is that the event is initiated by the reflection from the western boundary of the Pacific of an oceanic Rossby wave (type of low-frequency planetary wave that moves only west).
The reflected wave is supposed to lower the thermocline in the west-central Pacific and thereby warm the SST [sea surface temperature] by reducing the efficiency of upwelling to cool the surface. Then that makes winds blow towards the (slightly) warmer water and really start the event. The nice part about this theory is that the Rossby waves can be observed for months before the reflection, which implies that El Niño is predictable. The other idea is that the trigger is essentially random. The tropical convection (organized large-scale thunderstorm activity) in the rising air tends to occur in bursts that last for about a month, and these bursts propagate out of the Indian Ocean (known as the Madden-Julian Oscillation). Since the storms are geostrophic (rotating according to the turning of the earth, which means they rotate clockwise in the southern hemisphere and counter-clockwise in the north), storm winds on the equator always blow towards the east. If the storms are strong enough, or last long enough, then those eastward winds may be enough to start the sloshing. But specific Madden-Julian Oscillation events are not predictable much in advance (just as specific weather events are not predictable in advance), and so to the extent that this is the main element, then El Niño will not be predictable. In my opinion both these two processes can be important in different El Niños. Some models that did not have the MJO storms were successful in predicting the events of 1986-87 and 1991-92. That suggests that the Rossby wave part was a main influence at that time. But those same models have failed to predict the events since then, and the westerlies have appeared to come from nowhere. It is also quite possible that these two general sets of ideas are incomplete, and that there are other causes entirely. The fact that we have very intermittent skill at predicting the major turns of the ENSO cycle (as opposed to the very good forecasts that can be made once an event has begun) suggests that there remain important elements that await explanation. Next time I’ll talk a bit about mathematical models of the ENSO and another climate cycle — but please keep in mind that these cycles are still far from fully understood!

To hate is to study, to study is to understand, to understand is to appreciate, to appreciate is to love. So maybe I’ll end up loving your theory. – John Archibald Wheeler

This Week’s Finds (Week 306)
7 December, 2010

This week I’ll interview another physicist who successfully made the transition from gravity to climate science: Tim Palmer. JB: I hear you are starting to build a climate science research group at Oxford. What led you to this point? What are your goals? TP: I started my research career at Oxford University, doing a PhD in general relativity theory under the cosmologist Dennis Sciama (himself a student of Paul Dirac). Then I switched gear and have spent most of my career working on the dynamics and predictability of weather and climate, mostly working in national and international meteorological and climatological institutes. Now I’m back in Oxford as a Royal Society Research Professor in climate physics. Oxford has a lot of climate-related activities going on, both in basic science and in impact and policy issues. I want to develop activities in climate physics. Oxford has wonderful Physics and Mathematics Departments and I am keen to try to exploit human resources from these areas where possible.
The general area which interests me is uncertainty in climate prediction: finding ways to estimate uncertainty reliably and, of course, to reduce uncertainty. Over the years I have helped develop new techniques to predict uncertainty in weather forecasts. Because climate is a nonlinear system, the growth of initial uncertainty is flow dependent. Some days, when the system is in a relatively stable part of state space, accurate weather predictions can be made a week or more ahead of time. In other more unstable situations, predictability is limited to a couple of days. Ensemble weather forecast techniques help estimate such flow dependent predictability, and this has enormous practical relevance. How to estimate uncertainty in climate predictions is much more tricky than for weather prediction. There is, of course, the human element: how much we reduce greenhouse gas emissions will impact on future climate. But leaving this aside, there is the difficult issue of how to estimate the accuracy of the underlying computer models we use to predict climate. To say a bit more about this, the problem is to do with how well climate models simulate the natural processes which amplify the anthropogenic increases in greenhouse gases (notably carbon dioxide). A key aspect of this amplification process is associated with the role of water in climate. For example, water vapour is itself a powerful greenhouse gas. If we were to assume that the relative humidity of the atmosphere (the amount of water vapour as a percentage of the amount at which the air would be saturated) was constant as the atmosphere warms under anthropogenic climate change, then humidity would amplify the climate change by a factor of two or more. On top of this, clouds — i.e. water in its liquid rather than gaseous form — have the potential to further amplify climate change (or indeed decrease it depending on the type or structure of the clouds). Finally, water in its solid phase can also be a significant amplifier of climate change. For example, sea ice reflects sunlight back to space. However, as sea ice melts, e.g. in the Arctic, the underlying water absorbs more of the sunlight than before, again amplifying the underlying climate change signal. We can approach these problems in two ways. Firstly we can use simplified mathematical models in which plausible assumptions (like the constant relative humidity one) are made to make the mathematics tractable. Secondly, we can try to simulate climate ab initio using the basic laws of physics (here, mostly, but not exclusively, the laws of classical physics). If we are to have confidence in climate predictions, this ab initio approach has to be pursued. However, unlike, say, temperature in the atmosphere, water vapour and cloud liquid water have more of a fractal distribution, with both large and small scales. We cannot simulate accurately the small scales in a global climate model with a fixed (say 100 km) grid, and this, perhaps more than anything, is the source of uncertainty in climate predictions. This is not just a theoretical problem (although there is some interesting mathematics involved, e.g. of multifractal distribution theory and so on). In the coming years, governments will be looking to spend billions on new infrastructure for society to adapt to climate change: more reservoirs, better flood defences, bigger storm sewers etc etc. It is obviously important that this money is spent wisely.
Hence we need to have some quantitative and reliable estimate of certainty that in regions where more reservoirs are to be built, the climate really will get drier and so on. There is another reason for developing quantitative methods for estimating uncertainty: climate geoengineering. If we spray aerosols in the stratosphere, or whiten clouds by spraying sea salt into them, we need to be sure we are not doing something terrible to our climate, like shutting off the monsoons, or decreasing rainfall over Amazonia (which might then make the rainforest a source of carbon for the atmosphere rather than a sink). Reliable estimates of uncertainty of regional impacts of geoengineering are going to be essential in the future. My goals? To bring quantitative methods from physics and maths into climate decision making. One area that particularly interests me is the application of nonlinear stochastic-dynamic techniques to represent unresolved scales of motion in the ab initio models. If you are interested to learn more about this, please see this book: • Tim Palmer and Paul Williams, editors, Stochastic Physics and Climate Modelling, Cambridge U. Press, Cambridge, 2010. JB: Thanks! I’ve been reading that book. I’ll talk about it next time on This Week’s Finds. Suppose you were advising a college student who wanted to do something that would really make a difference when it comes to the world’s environmental problems. What would you tell them? TP: Well although this sounds a bit of a cliché, it’s important first and foremost to enjoy and be excited by what you are doing. If you have a burning ambition to work on some area of science without apparent application or use, but feel guilty because it’s not helping to save the planet, then stop feeling guilty and get on with fulfilling your dreams. If you work in some difficult area of science and achieve something significant, then this will give you a feeling of confidence that is impossible to be taught. Feeling confident in one’s abilities will make any subsequent move into new areas of activity, perhaps related to the environment, that much easier. If you demonstrate that confidence at interview, moving fields, even late in life, won’t be so difficult. In my own case, I did a PhD in general relativity theory, and having achieved this goal (after a bleak period in the middle where nothing much seemed to be working out), I did sort of think to myself: if I can add to the pool of knowledge in this, traditionally difficult area of theoretical physics, I can pretty much tackle anything in science. I realize that sounds rather arrogant, and of course life is never as easy as that in practice. JB: What if you were advising a mathematician or physicist who was already well underway in their career? I know lots of such people who would like to do something "good for the planet", but feel that they’re already specialized in other areas, and find it hard to switch gears. In fact I might as well admit it — I’m such a person myself! TP: Talk to the experts in the field. Face to face. As many as possible. Ask them how your expertise can be put to use. Get them to advise you on key meetings you should try to attend. JB: Okay. You’re an expert in the field, so I’ll start with you. How can my expertise be put to use? What are some meetings that I should try to attend? TP: The American Geophysical Union and the European Geophysical Union have big multi-session conferences each year which include mathematicians with an interest in climate. 
On top of this, mathematical science institutes are increasingly holding meetings to engage mathematicians and climate scientists. For example, the Isaac Newton Institute at Cambridge University is holding a six-month programme on climate and mathematics. I will be there for part of this programme. There have been similar programmes in the US and in Germany very recently. Of course, as well as going to meetings, or perhaps before going to them, there is the small matter of some reading material. Can I strongly recommend the Working Group One report of the latest IPCC climate change assessments? WG1 is tasked with summarizing the physical science underlying climate change. Start with the WG1 Summary for Policymakers from the Fourth Assessment Report:
• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Summary for Policymakers.
and, if you are still interested, tackle the main WG1 report:
• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Cambridge U. Press, Cambridge, 2007.
There is a feeling that since the various so-called "Climategate" scandals, in which IPCC were implicated, climate scientists need to be more open about uncertainties in climate predictions and climate prediction models. But in truth, these uncertainties have always been openly discussed in the WG1 reports. These reports are absolutely not the alarmist documents many seem to think, and, I would say, give an extremely balanced picture of the science. The latest report dates from 2007. JB: I’ve been slowly learning what’s in this report, thanks in part to Nathan Urban, whom I interviewed in previous issues of This Week’s Finds. I’ll have to keep at it. You told me that there’s a big difference between the "butterfly effect" in chaotic systems with a few degrees of freedom, such as the Lorenz attractor shown above, and the "real butterfly effect" in systems with infinitely many degrees of freedom, like the Navier-Stokes equations, the basic equations describing fluid flow. What’s the main difference? TP: Everyone knows, or at least thinks they know, what the butterfly effect is: the exponential growth of small initial uncertainties in chaotic systems like the Lorenz system, an effect made famous by James Gleick in his excellent popular book:
• James Gleick, Chaos: Making a New Science, Penguin, London, 1998.
But in truth, this is not the butterfly effect as Lorenz had meant it (I knew Ed Lorenz quite well). If you think about it, the possible effect of a flap of a butterfly’s wings on the weather some days later involves not only an increase in the amplitude of the uncertainty, but also in its scale. If we think of a turbulent system like the atmosphere, comprising a continuum of scales, its evolution is described by partial differential equations, not a low order set of ordinary differential equations. Each scale can be thought of as having its own characteristic dominant Lyapunov exponent, and these scales interact nonlinearly. If we want to estimate the time for a flap of a butterfly’s wings to influence a large scale weather system, we can imagine summing up all the Lyapunov timescales associated with all the scales from the small scales to the large scales. If this sum diverges, then very good, we can say it will take a very long time for a small scale error or uncertainty to influence a large-scale system.
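To put a rough formula behind this (my gloss on the argument, not Palmer’s notation): suppose the eddy sizes form a hierarchy ℓ_n = ℓ_0/2^n, and let τ(ℓ_n) be the characteristic time it takes an error confined to eddies of size ℓ_n to contaminate the next larger scale. Then the time for an uncertainty at the very smallest scales to reach the largest scale is roughly

T ≈ τ(ℓ_1) + τ(ℓ_2) + τ(ℓ_3) + … = Σ_n τ(ℓ_n)

and everything hinges on how quickly τ(ℓ) shrinks with ℓ: if the terms shrink slowly enough, the sum diverges and the large scales stay predictable for a long time; if they shrink quickly enough, the sum is finite.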
But alas, simple scaling arguments suggest that there may be situations (in 3 dimensional turbulence) where this sum converges. Normally, we think of convergence as a good thing, but in this case it means that the small scale uncertainty, no matter how small scale it is, can affect the accuracy of the large scale prediction… in finite time. This is quite different to the conventional butterfly effect in low order chaos, where arbitrarily long predictions can be made by reducing initial uncertainty to sufficiently small levels. JB: What are the practical implications of this difference? TP: Climate models are finite truncations of the underlying partial differential equations of climate. A crucial question is: how do solutions converge as the truncation gets better and better? More practically, how many floating point operations per second (flops) does my computer need to have, in order that I can simulate the large-scale components of climate accurately? Teraflops, petaflops, exaflops? Is there an irreducible uncertainty in our ability to simulate climate no matter how many flops we have? Because of the "real" butterfly effect, we simply don’t know. This has real practical implications. JB: Nobody has proved existence and uniqueness for solutions of the Navier-Stokes equations. Indeed, the Clay Mathematics Institute is offering a million-dollar prize for settling this question. But meteorologists use these equations to predict the weather with some success. To mathematicians that might seem a bit strange. What do you think is going on here? TP: Actually, for certain simplifications to the Navier-Stokes equations, such as making them hydrostatic (which damps acoustic waves), existence and uniqueness can be proven. And for weather forecasting we can get away with the hydrostatic approximation for most applications. But in general existence and uniqueness haven’t been proven. The "real" butterfly effect is linked to this. Well, obviously the Intergovernmental Panel on Climate Change can’t wait for the mathematicians to solve this problem, but as I tried to suggest above, I don’t think the problem is just an arcane mathematical conundrum, but rather may help us understand better what is possible to predict about climate change and what not. JB: Of course, meteorologists are really using a cleverly discretized version of the Navier-Stokes equations to predict the weather. Something vaguely similar happens in quantum field theory: we can use "lattice QCD" to compute the mass of the proton to reasonable accuracy, but nobody knows for sure if QCD makes sense in the continuum. Indeed, there’s another million-dollar Clay Prize waiting for the person who can figure that out. Could it be that sometimes a discrete approximation to a continuum theory does a pretty good job even if the continuum theory fundamentally doesn’t make sense? TP: There you are! Spend a few years working on the continuum limit of lattice QCD and you may end up advising government on the likelihood of unexpected consequences on regional climate arising from some geoengineering proposal! The idea that two so apparently different fields could have elements in common is something bureaucrats find hard to get their heads round. We at the sharp end in science need to find ways of making it easier for scientists to move fields (even on a temporary basis) should they want to. This reminds me of a story.
When I was finishing my PhD, my supervisor, Dennis Sciama, announced one day that the process of Hawking radiation, from black holes, could be understood using the Principle of Maximum Entropy Production in non-equilibrium thermodynamics. I had never heard of this Principle before, no doubt a gap in my physics education. However, a couple of weeks later, I was talking to a colleague of a colleague who was a climatologist, and he was telling me about a recent paper that purported to show that many of the properties of our climate system could be deduced from the Principle of Maximum Entropy Production. That there might be such a link between black hole theory and climate physics was one reason that I thought changing fields might not be so difficult after all. JB: To what extent is the problem of predicting climate insulated from the problems of predicting weather? I bet this is a hard question, but it seems important. What do people know about this? TP: John Von Neumann was an important figure in meteorology (as well, for example, as in quantum theory). He oversaw a project at Princeton just after the Second World War to develop a numerical weather prediction model based on a discretised version of the Navier-Stokes equations. It was one of the early applications of digital computers. Some years later, the first long-term climate models were developed based on these weather prediction models. But then the two areas of work diverged. People doing climate modelling needed to represent lots of physical processes: the oceans, the cryosphere, the biosphere etc, whereas weather prediction tended to focus on getting better and better discretised representations of the Navier-Stokes equations. One rationale for this separation was that weather forecasting is an initial value problem whereas climate is a "forced" problem (e.g. how does climate change with a specified increase in carbon dioxide?). Hence, for example, climate people didn’t need to agonise over getting ultra accurate estimates of the initial conditions for their climate forecasts. But the two communities are converging again. We realise there are lots of synergies between short term weather prediction and climate prediction. Let me give you one very simple example. To know whether anthropogenic climate change is going to be catastrophic to society, or something we will be able to adapt to without too many major problems, we need to understand, as mentioned above, how clouds interact with increasing levels of carbon dioxide. Clouds cannot be represented explicitly in climate models because they occur on scales that can’t be resolved due to computational constraints. So they have to be represented by simplified "parametrisations". We can test these parametrisations in weather forecast models. To put it crudely (to be honest, too crudely): if the cloud parametrisations (and corresponding representations of water vapour) are systematically wrong, then the forecasts of tomorrow’s daily maximum temperature will also be systematically wrong. To give another example, I myself for a number of years have been developing stochastic methods to represent truncation uncertainty in weather prediction models. I am now trying to apply these methods in climate prediction. The ability to test the skill of these stochastic schemes in weather prediction mode is crucial to having confidence in them in climate prediction mode. There are lots of other examples of where a synergy between the two areas is important.
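To give a concrete flavour of the kind of stochastic scheme Palmer is alluding to, here is a toy sketch (my illustration, not his actual scheme; the function and all numbers are made up): the tendency returned by a deterministic parametrization is multiplied by a slowly varying AR(1) random factor, so that repeated forecasts sample the uncertainty coming from unresolved processes.

    import numpy as np

    rng = np.random.default_rng(0)

    def parametrized_tendency(state):
        # Stand-in for a deterministic parametrization (e.g. convective heating).
        return -0.1 * state

    phi, sigma = 0.95, 0.3      # AR(1) autocorrelation and noise amplitude (illustrative)
    state, r = 1.0, 0.0         # model state and stochastic perturbation factor
    dt = 1.0

    for step in range(100):
        # Update the slowly varying random factor.
        r = phi * r + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal()
        # Perturb the parametrized tendency multiplicatively and step the model.
        state += dt * (1.0 + r) * parametrized_tendency(state)

    print(state)

Running an ensemble of such forecasts, each with its own noise sequence, is one way to get a flow-dependent spread of outcomes rather than a single deterministic answer.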
JB: When we met recently, you mentioned that there are currently no high-end supercomputers dedicated to climate issues. That seems a bit odd. What sort of resources are there? And how computationally intensive are the simulations people are doing now? TP: By "high end" I mean very high end: that is, machines in the petaflop range of performance. If one takes the view that climate change is one of the gravest threats to society, then throwing all the resources that science and technology allows, to try to quantify exactly how grave this threat really is, seems quite sensible to me. On top of that, if we are to spend billions (dollars, pounds, euros etc.) on new technology to adapt to climate change, we had better make sure we are spending the money wisely — no point building new reservoirs if climate change will make your region wetter. So the predictions that it will get drier in such and such a place had better be right. Finally, if we are to ever take these geoengineering proposals seriously, we’d better be sure we understand the regional consequences. We don’t want to end up shutting off the monsoons! Reliable climate predictions really are essential. I would say that there is no more computationally complex problem in science than climate prediction. There are two key modes of instability in the atmosphere, the convective instabilities (thunderstorms) with scales of kilometers and what are called baroclinic instabilities (midlatitude weather systems) with scales of thousands of kilometers. Simulating these two instabilities, and their mutual global interactions, is beyond the capability of current global climate models because of computational constraints. On top of this, climate models try to represent not only the physics of climate (including the oceans and the cryosphere), but the chemistry and biology too. That introduces considerable computational complexity in addition to the complexity caused by the multi-scale nature of climate. By and large, individual countries don’t have the financial resources (or at least they claim they don’t!) to fund such high end machines dedicated to climate. And the current economic crisis is not helping! On top of which, for reasons discussed above in relation to the "real" butterfly effect, I can’t go to government and say: "Give me a 100 petaflop machine and I will absolutely definitely be able to reduce uncertainty in forecasts of climate change by a factor of 10". In my view, the way forward may be to think about internationally funded supercomputing. So, just as we have internationally funded infrastructure in particle physics and astronomy, so too in climate prediction. Why not? Actually, very recently the NSF in the US gave a consortium of climate scientists from the US, Europe and Japan a few months of dedicated time on a top-end Cray XT4 computer called Athena. Athena wasn’t quite in the petaflop range, but not too far off, and using this dedicated time, we produced some fantastic results, otherwise unachievable, showing what the international community could achieve, given the computational resources. Results from the Athena project are currently being written up — they demonstrate what can be done where there is a will from the funding agencies. JB: In a Guardian article on human-caused climate change you were quoted as saying "There might be a 50% risk of widespread problems or possibly only 1%. Frankly, I would have said a risk of 1% was sufficient for us to take the problem seriously enough to start thinking about reducing emissions."
It’s hard to argue with that, but starting to think about reducing emissions is vastly less costly than actually reducing them. What would you say to someone who replied, "If the risk is possibly just 1%, it’s premature to take action — we need more research first"? TP: The implication of your question is that a 1% risk is just too small to worry about or do anything about. But suppose that the next time you checked in to fly to Europe, they said at the desk that there was a 1% chance that volcanic ash would cause the aircraft engines to fail mid-flight, leading the plane to crash, killing all on board. Would you fly? I doubt it! My real point is that in assessing whether emissions cuts are too expensive, given the uncertainty in climate predictions, we need to assess how much we value things like the Amazon rainforest, or (preventing the destruction of) countries like Bangladesh or the African Sahel. If we estimate the damage caused by dangerous climate change — let’s say associated with a 4 °C or greater global warming — to be at least 100 times the cost of taking mitigating action, then it is worth taking this action even if the probability of dangerous climate change were just 1%. But of course, according to the latest predictions, the probability of realizing such dangerous climate changes is much nearer 50%. So in reality, it is worth cutting emissions if the value you place on current climate is comparable to or greater than the cost of cutting emissions. Summarising, there are two key points here. Firstly, rational decisions can be made in the light of uncertain scientific input. Secondly, whilst we do certainly need more research, that should not itself be used as a reason for inaction. Thanks, John, for allowing me the opportunity to express some views about climate physics on your web site. JB: Thank you!

The most important questions of life are, for the most part, really only problems of probability. – Pierre Simon, Marquis de Laplace

This Week’s Finds (Week 305)
5 November, 2010

Nathan Urban has been telling us about a paper where he estimated the probability that global warming will shut down a major current in the Atlantic Ocean:
• Nathan M. Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale observations with a simple model, Tellus A, July 16, 2010.
We left off last time with a cliff-hanger: I didn’t let him tell us what the probability is! Since you must have been clutching your chair ever since, you’ll be relieved to hear that the answer is coming now, in the final episode of this interview. But it’s also very interesting how he and Klaus Keller got their answer. As you’ll see, there’s some beautiful math involved. So let’s get started… JB: Last time you told us roughly how your climate model works. This time I’d like to ask you about the rest of your paper, leading up to your estimate of the probability that the Atlantic Meridional Overturning Circulation (or "AMOC") will collapse. But before we get into that, I’d like to ask some very general questions. For starters, why are scientists worried that the AMOC might collapse? Last time I mentioned the Younger Dryas event, a time when Europe became drastically colder for about 1300 years, starting around 10,800 BC. Lots of scientists think this event was caused by a collapse of the AMOC.
And lots of them believe it was caused by huge amounts of fresh water pouring into the north Atlantic from an enormous glacial lake. But nothing quite like that is happening now! So if the AMOC collapses in the next few centuries, the cause would have to be a bit different. NU: In order for the AMOC to collapse, the overturning circulation has to weaken. The overturning is driven by the sinking of cold and salty, and therefore dense, water in the north Atlantic. Anything that affects the density structure of the ocean can alter the overturning. As you say, during the Younger Dryas, it is thought that a lot of fresh water suddenly poured into the Atlantic from the draining of a glacial lake. This lessened the density of the surface waters and reduced the rate at which they sank, shutting down the overturning. Since there aren’t any large glacial lakes left that could abruptly drain into the ocean, the AMOC won’t shut down in the same way it previously did. But it’s still possible that climate change could cause it to shut down. The surface waters from the north Atlantic can still freshen (and become less dense), either due to the addition of fresh water from melting polar ice and snow, or due to increased precipitation to the northern latitudes. In addition, they can simply become warmer, which also makes them less dense, reducing their sinking rate and weakening the overturning. In combination, these three factors (warming, increased precipitation, meltwater) can theoretically shut down the AMOC if they are strong enough. This will probably not be as abrupt or extreme an event as the Younger Dryas, but it can still persistently alter the regional climate. JB: I’m trying to keep our readers in suspense for a bit longer, but I don’t think it’s giving away too much to say that when you run your model, sometimes the AMOC shuts down, or at least slows down. Can you say anything about how this tends to happen, when it does? In your model, that is. Can you tell if it’s mainly warming, or increased precipitation, or meltwater? NU: The short answer is "mainly warming, probably". The long answer: I haven’t done experiments with the box model myself to determine this, but I can quote from the Zickfeld et al. paper where this model was published. It says, for their baseline collapse experiment, In the box model the initial weakening of the overturning circulation is mainly due to thermal forcing [...] This effect is amplified by a negative feedback on salinity, since a weaker circulation implies reduced salt advection towards the northern latitudes. Even if they turn off all the freshwater input, they find substantial weakening of the AMOC from warming alone. Freshwater could potentially become the dominant effect on the AMOC if more freshwater is added than in the paper’s baseline experiment. The paper did report computer experiments with different freshwater inputs, but upon skimming it, I can’t immediately tell whether the thermal effect loses its dominance. These experiments have also been performed using more complex climate models. This paper reports that in all the models they studied, the AMOC weakening is caused more by changes in surface heat flux than by changes in surface water flux: • J. M. 
Gregory et al., A model intercomparison of changes in the Atlantic thermohaline circulation in response to increasing atmospheric CO[2] concentration, Geophysical Research Letters 32 (2005).
However, that paper studied "best-estimate" freshwater fluxes, not the fluxes on the high end of what’s possible, so I don’t know whether thermal effects would still dominate if the freshwater input ends up being large. There are papers that suggest freshwater input from Greenland, at least, won’t be a dominant factor any time soon:
• J. H. Jungclaus et al., Will Greenland melting halt the thermohaline circulation?, Geophysical Research Letters 33 (2006), L17708.
• E. Driesschaert et al., Modeling the influence of Greenland ice sheet melting on the Atlantic meridional overturning circulation during the next millennia, Geophysical Research Letters 34 (2007).
I’m not sure what the situation is for precipitation, but I don’t think that would be much larger than the meltwater flux. In summary, it’s probably the thermal effects that dominate, both in complex and simpler models. Note that in our version of the box model, the precipitation and meltwater fluxes are combined into one number, the "North Atlantic hydrological sensitivity", so we can’t distinguish between those sources of water. This number is treated as uncertain in our analysis, lying within a range of possible values determined from the hydrologic changes predicted by complex models. The Zickfeld et al. paper experimented with separating them into the two individual contributions, but my version of the model doesn’t do that. JB: Okay. Now back to what you and Klaus Keller actually did in your paper. You have a climate model with a bunch of adjustable knobs, or parameters. Some of these parameters you take as "known" from previous research. Others are more uncertain, and that’s where the Bayesian reasoning comes in. Very roughly, you use some data to guess the probability that the right settings of these knobs lie within any given range. How many parameters do you treat as uncertain? NU: 18 parameters in total. 7 model parameters that control dynamics, 4 initial conditions, and 7 parameters describing error statistics. JB: What are a few of these parameters? Maybe you can tell us about some of the most important ones — or ones that are easy to understand. NU: I’ve mentioned these briefly in "week304" in the model description. The AMOC-related parameter is the hydrologic sensitivity I described above, controlling the flux of fresh water into the North Atlantic. There are three climate related parameters:
• the climate sensitivity (the equilibrium warming expected in response to doubled CO[2]),
• the ocean heat vertical diffusivity (controlling the rate at which oceans absorb heat from the atmosphere), and
• "aerosol scaling", a factor that multiplies the strength of the aerosol-induced cooling effect, mostly due to uncertainties in aerosol-cloud interactions.
I discussed these in "week302" in the part about total feedback estimates. There are also three carbon cycle related parameters:
• the heterotrophic respiration sensitivity (describing how quickly dead plants decay when it gets warmer),
• CO[2] fertilization (how much faster plants grow in CO[2]-elevated conditions), and
• the ocean carbon vertical diffusivity (the rate at which the oceans absorb CO[2] from the atmosphere).
The initial conditions describe what the global temperature, CO[2] level, etc. were at the start of my model simulations, in 1850.
The statistical parameters describe the variance and autocorrelation of the residual error between the observations and the model, due to measurement error, natural variability, and model error. JB: Could you say a bit about the data you use to estimate these uncertain parameters? I see you use a number of data sets. NU: We use global mean surface temperature and ocean heat content to constrain the three climate parameters. We use atmospheric CO[2] concentration and some ocean flux measurements to constrain the carbon parameters. We use measurements of the AMOC strength to constrain the AMOC parameter. These are all time series data, mostly global averages — except the AMOC strength, which is an Atlantic-specific quantity defined at a particular latitude. The temperature data are taken by surface weather stations and are for the years 1850-2009. The ocean heat data are taken by shipboard sampling, 1953-1996. The atmospheric CO[2] concentrations are measured from the Mauna Loa volcano in Hawaii, 1959-2009. There are also some ice core measurements of trapped CO[2] at Law Dome, Antarctica, dated to 1854-1953. The air-sea CO[2] fluxes, for the 1980s and 1990s, are derived from measurements of dissolved inorganic carbon in the ocean, combined with measurements of manmade chlorofluorocarbon to date the water masses in which the carbon resides. (The dates tell you when the carbon entered the ocean.) The AMOC strength is reconstructed from station measurements of poleward water circulation over an east-west section of the Atlantic Ocean, near 25 °N latitude. Pairs of stations measure the northward velocity of water, inferred from the ocean bottom pressure differences between northward and southward station pairs. The velocities across the Atlantic are combined with vertical density profiles to determine an overall rate of poleward water mass transport. We use seven AMOC strength estimates measured sparsely between the years 1957 and 2004. JB: So then you start the Bayesian procedure. You take your model, start it off with your 18 parameters chosen somehow or other, run it from 1850 to now, and see how well it matches all this data you just described. Then you tweak the parameters a bit — last time we called that "turning the knobs" — and run the model again. And then you do this again and again, lots of times. The goal is to calculate the probability that the right settings of these knobs lie within any given range. Is that about right? NU: Yes, that’s right. JB: About how many times did you actually run the model? Is this the sort of thing you can do on your laptop overnight, or is it a mammoth task? NU: I ran the model a million times. This took about two days on a single CPU. Some of my colleagues later ported the model from Matlab to Fortran, and now I can do a million runs in half an hour on my laptop. JB: Cool! So if I understand correctly, you generated a million lists of 18 numbers: those uncertain parameters you just mentioned. Or in other words: you created a cloud of points: a million points in an 18-dimensional space. Each point is a choice of those 18 parameters. And the density of this cloud near any point should be proportional to the probability that the parameters have those values. That’s the goal, anyway: getting this cloud to approximate the right probability density on your 18-dimensional space. To get this to happen, you used the Markov chain Monte Carlo procedure we discussed last time. Could you say in a bit more detail how you did this, exactly? NU: There are two steps.
One is to write down a formula for the probability of the parameters (the "Bayesian posterior distribution"). The second is to draw random samples from that probability distribution using Markov chain Monte Carlo (MCMC). Call the parameter vector θ and the data vector y. The Bayesian posterior distribution p(θ|y) is a function of θ which says how probable θ is, given the data y that you’ve observed. The little bar (|) indicates conditional probability: p(θ|y) is the probability of θ, assuming that you know y happened. The posterior factorizes into two parts, the likelihood and the prior. The prior, p(θ), says how probable you think a particular 18-dimensional vector of parameters is, before you’ve seen the data you’re using. It encodes your "prior knowledge" about the problem, unconditional on the data you’re using. The likelihood, p(y|θ), says how likely it is for the observed data to arise from a model run using some particular vector of parameters. It describes your data generating process: assuming you know what the parameters are, how likely are you to see data that looks like what you actually measured? (The posterior is the reverse of this: how probable are the parameters, assuming the data you’ve observed?) Bayes’s theorem simply says that the posterior is proportional to the product of these two pieces:

p(θ|y) ∝ p(y|θ) × p(θ)

If I know the two pieces, I multiply them together and use MCMC to sample from that probability distribution. Where do the pieces come from? For the prior, we assumed bounded uniform distributions on all but one parameter. Such priors express the belief that each parameter lies within some range we deemed reasonable, but we are agnostic about whether one value within that range is more probable than any other. The exception is the climate sensitivity parameter. We have prior evidence from computer models and paleoclimate data that the climate sensitivity is most likely around 2 or 3 °C, albeit with significant uncertainties. We encoded this belief using a "diffuse" Cauchy distribution peaked in this range, but allowing substantial probability to be outside it, so as to not prematurely exclude too much of the parameter range based on possibly overconfident prior beliefs. We assume the priors on all the parameters are independent of each other, so the prior for all of them is the product of the prior for each of them. For the likelihood, we assumed a normal (Gaussian) distribution for the residual error (the scatter of the data about the model prediction). The simplest such distribution is the independent and identically distributed ("iid") normal distribution, which says that all the data points have the same error and the errors at each data point are independent of each other. Neither of these assumptions is true. The errors are not identical, since they get bigger farther in the past, when we measured data with less precision than we do today. And they’re not independent, because if one year is warmer than the model predicts, the next year is likely to also be warmer than the model predicts. There are various possible reasons for this: chaotic variability, time lags in the system due to finite heat capacity, and so on. In this analysis, we kept the identical-error assumption for simplicity, even though it’s not correct. I think this is justifiable, because the strongest constraints on the parameters come from the most recent data, when the largest climate and carbon cycle changes have occurred.
That is, the early data are already relatively uninformative, so if their errors get bigger, it doesn’t affect the answer much. We rejected the independent-error assumption, since there is very strong autocorrelation (serial dependence) in the data, and ignoring autocorrelation is known to lead to overconfidence. When the errors are correlated, it’s harder to distinguish between a short-term random fluctuation and a true trend, so you should be more uncertain about your conclusions. To deal with this, we assumed that the errors obey a correlated autoregressive "red noise" process instead of an uncorrelated "white noise" process. In the likelihood, we converted the red-noise errors to white noise via a "whitening" process, assuming we know how much correlation is present. (We’re allowed to do that in the likelihood, because it gives the probability of the data assuming we know what all the parameters are, and the autocorrelation is one of the parameters.) The equations are given in the paper. Finally, this gives us the formula for our posterior distribution. JB: Great! There’s a lot of technical material here, so I have many questions, but let’s go through the whole story first, and come back to those. NU: Okay. Next comes step two, which is to draw random samples from the posterior probability distribution via MCMC. To do this, we use the famous Metropolis algorithm, which was invented by a physicist of that name, along with others, to do computations in statistical physics. It’s a very simple algorithm which takes a "random walk" through parameter space. You start out with some guess for the parameters. You randomly perturb your guess to a nearby point in parameter space, which you are going to propose to move to. If the new point is more probable than the point you were at (according to the Bayesian posterior distribution), then accept it as a new random sample. If the proposed point is less probable than the point you’re at, then you randomly accept the new point with a certain probability. Otherwise you reject the move, staying where you are, treating the old point as a duplicate random sample. The acceptance probability is equal to the ratio of the posterior distribution at the new point to the posterior distribution at the old point. If the point you’re proposing to move to is, say, 5 times less probable than the point you are at now, then there’s a 20% chance you should move there, and an 80% chance that you should stay where you are. If you iterate this method of proposing new "jumps" through parameter space, followed by the Metropolis accept/reject procedure, you can prove that you will eventually end up with a long list of (correlated) random samples from the Bayesian posterior distribution. JB: Okay. Now let me ask a few questions, just to help all our readers get up to speed on some jargon. Lots of people have heard of a "normal distribution" or "Gaussian", because it’s become sort of the default choice for probability distributions. It looks like a bell curve. When people don’t know the probability distribution of something — like the tail lengths of newts or the IQs of politicians — they often assume it’s a Gaussian. But I bet fewer of our readers have heard of a "Cauchy distribution". What’s the point of that? Why did you choose that for your prior probability distribution of the climate sensitivity? NU: There is a long-running debate about the "upper tail" of the climate sensitivity distribution. High climate sensitivities correspond to large amounts of warming.
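As an aside before going on, here is a minimal sketch of the Metropolis accept/reject random walk Nathan described a few paragraphs back. This is a generic illustration, not the code used in the paper; the target distribution is just a stand-in.

    import numpy as np

    rng = np.random.default_rng(42)

    def log_posterior(theta):
        # Stand-in target; the real one is the log of likelihood times prior
        # over all 18 parameters.
        return -0.5 * np.sum(theta**2)

    ndim, nsteps, step_size = 2, 50_000, 0.5   # illustrative settings
    theta = np.zeros(ndim)                      # initial guess
    logp = log_posterior(theta)
    samples = []

    for _ in range(nsteps):
        proposal = theta + step_size * rng.standard_normal(ndim)  # random perturbation
        logp_new = log_posterior(proposal)
        # Accept with probability min(1, p_new / p_old); otherwise stay put.
        if np.log(rng.random()) < logp_new - logp:
            theta, logp = proposal, logp_new
        samples.append(theta.copy())

    samples = np.array(samples)   # correlated draws from the target distribution

Swapping in the actual log of likelihood times prior, and an 18-dimensional parameter vector, gives the procedure described above.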
As you can imagine, policy decisions depend a lot on how likely we think these extreme outcomes could be, i.e., how quickly the "upper tail" of the probability distribution drops to zero. A Gaussian distribution has tails that drop off exponentially quickly, so very high sensitivities will never get any significant weight. If we used it for our prior, then we’d almost automatically get a "thin tailed" posterior, no matter what the data say. We didn’t want to put that in by assumption and automatically conclude that high sensitivities should get no weight, regardless of what the data say. So we used a weaker assumption, which is a "heavy tailed" prior distribution. With this prior, the probability of large amounts of warming drops off more slowly, as a power law, instead of exponentially fast. If the data strongly rule out high warming, we can get a thin tailed posterior, but if they don’t, it will be heavy tailed. The Cauchy distribution, a limiting case of the "Student t" distribution that students of statistics may have heard of, is one of the most conservative choices for a heavy-tailed prior. Probability drops off so slowly at its tails that its variance is infinite. JB: The issue of "fat tails" is also important in the stock market, where big crashes happen more frequently than you might guess with a Gaussian distribution. After the recent economic crisis I saw a lot of financiers walking around with their tails between their legs, wishing their tails had been fatter. I’d also like to ask about "white noise" versus "red noise". "White noise" is a mathematical description of a situation where some quantity fluctuates randomly with time in a way so that its value at any time is completely uncorrelated with its value at any other time. If you graph an example of white noise, it looks really spiky. If you play it as a sound, it sounds like hissy static — quite unpleasant. If you could play it in the form of light, it would look white, hence the name. "Red noise" is less wild. Its value at any time is still random, but it’s correlated to the values at earlier or later times, in a specific way. So it looks less spiky, and it sounds less high-pitched, more like a steady rainfall. Since it’s stronger at low frequencies, it would look more red if you could play it in the form of light — hence the name "red noise". If I understand correctly, you’re assuming that some aspects of the climate are noisy, but in a red noise kind of way, when you’re computing p(y|θ): the likelihood that your data takes on the value y, given your climate model with some specific choice of parameters θ. Is that right? You’re assuming this about all your data: the temperature data from weather stations, the ocean heat data from shipboard samples, the atmospheric CO[2] concentrations at Mauna Loa volcano in Hawaii, the ice core measurements of trapped CO[2], the air-sea CO[2] fluxes, and also the AMOC strength? Red, red, red — all red noise? NU: I think the red noise you’re talking about refers to a specific type of autocorrelated noise ("Brownian motion"), with a power spectrum that is inversely proportional to the square of frequency. I’m using "red noise" more generically to speak of any autocorrelated process that is stronger at low frequencies. Specifically, the process we use is a first-order autoregressive, or "AR(1)", process. It has a more complicated spectrum than Brownian motion.
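For readers who want something concrete, here is a small sketch (not from the paper; the numbers are purely illustrative) of an AR(1) "red noise" process, together with the "whitening" step mentioned earlier:

    import numpy as np

    rng = np.random.default_rng(1)
    n, rho, sigma = 500, 0.8, 1.0   # series length, lag-1 autocorrelation, innovation size

    # AR(1) residuals: e[t] = rho * e[t-1] + white noise
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + sigma * rng.standard_normal()

    # Whitening: if rho is known, e[t] - rho * e[t-1] should be (nearly) uncorrelated again.
    w = e[1:] - rho * e[:-1]

    def lag1_correlation(x):
        return np.corrcoef(x[:-1], x[1:])[0, 1]

    print(lag1_correlation(e), lag1_correlation(w))   # large for e, near zero for w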
AR(1) sounds easier for computers to generate. NU: It’s not only easier for computers, but closer to the spectrum we see in our analysis. Note that when I talk about error I mean "residual error", which is the difference between the observations and the model prediction. If the residual error is correlated in time, that doesn’t necessarily reflect true red noise in the climate system. It could also represent correlated errors in measurement over time, or systematic errors in the model. I am not attempting to distinguish between all these sources of error. I’m just lumping them all together into one total error process, and assuming it has a simple statistical form. We assume the residual errors in the annual surface temperature, ocean heat, and instrumental CO[2] time series are AR(1). The ice core CO[2], air-sea CO[2] flux, and AMOC strength data are sparse, and we can’t really hope to estimate the correlation between them, so we assume their residual errors are uncorrelated. Speaking of correlation, I’ve been talking about "autocorrelation", which is correlation within one data set between one time and another. It’s also possible for the errors in different data sets to be correlated with each other ("cross correlation"). We assumed there is no cross correlation (and residual analysis suggests only weak correlation between data sets). JB: I have a few more technical questions, but I bet most of our readers are eager to know: so, what next? You use all these nifty mathematical methods to work out p(θ|y), the probability that your 18 parameters have any specific value given your data. And now I guess you want to figure out the probability that the Atlantic Meridional Overturning Current, or AMOC, will collapse by some date or other. How do you do this? I guess most people want to know the answer more than the method, but they’ll just have to wait a few more minutes. NU: That’s easy. After MCMC, we have a million runs of the model, sampled in proportion to how well the model fits historic data. There will be lots of runs that agree well with the data, and a few that agree less well. All we do now is extend each of those runs into the future, using an assumed scenario for what CO[2] emissions and other radiative forcings will do in the future. To find out the probability that the AMOC will collapse by some date, conditional on the assumptions we’ve made, we just count what fraction of the runs have an AMOC strength of zero in whatever year we care about. JB: Okay, that’s simple enough. What scenario, or scenarios, did you consider? NU: We considered a worst-case "business as usual" scenario in which we continue to burn fossil fuels at an accelerating rate until we start to run out of them, and eventually burn the maximum amount of fossil fuels we think there might be remaining (about 5000 gigatons worth of carbon, compared to the roughly 500 gigatons we’ve emitted so far). This assumes we get desperate for cheap energy and extract all the hard-to-get fossil resources in oil shales and tar sands, all the remaining coal, etc. It doesn’t necessarily preclude the use of non-fossil energy; it just assumes that our appetite for energy grows so rapidly that there’s no incentive to slow down fossil fuel extraction. We used a simple economic model to estimate how fast we might do this, if the world economy continues to grow at a similar rate to the last few decades. JB: And now for the big question: what did you find? How likely is it that the AMOC will collapse, according to your model?
Of course it depends how far into the future you look. NU: We find a negligible probability that the AMOC will collapse this century. The odds start to increase around 2150, rising to about a 10% chance by 2200, and a 35% chance by 2300, the last year considered in our scenario. JB: I guess one can take this as good news or really scary news, depending on how much you care about folks who are alive in 2300. But I have some more questions. First, what’s a "negligible probability"? NU: In this case, it’s less than 1 in 3000. For computational reasons, we only ran 3000 of the million samples forward into the future. There were no samples in this smaller selection in which the AMOC had collapsed by 2100. The probability rises to 1 in 3000 in the year 2130 (the first time I see a collapse in this smaller selection), and 1% in 2152. You should take these numbers with a grain of salt. It’s these rare "tail-area events" that are most sensitive to modeling assumptions. JB: Okay. And second, don’t the extrapolations become more unreliable as you keep marching further into the future? You need to model not only climate physics but also the world economy. In this calculation, how many gigatons of carbon dioxide per year are you assuming will be emitted in 2300? I’m just curious. In 1998 it was about 27.6 gigatons. By 2008, it was about 30.4. NU: Yes, the uncertainty grows with time (and this is reflected in our projections). And in considering a fixed emissions scenario, we’ve ignored the economic uncertainty, which, so far out into the future, is even larger than the climate uncertainty. Here we’re concentrating on just the climate uncertainty, and are hoping to get an idea of bounds, so we used something close to a worst-case economic scenario. In this scenario carbon emissions peak around 2150 at about 23 gigatons carbon per year (84 gigatons CO[2]). By 2300 they’ve tapered off to about 4 GtC (15 GtCO[2]). Actual future emissions may be less than this, if we act to reduce them, or there are fewer economically extractable fossil resources than we assume, or the economy takes a prolonged downturn, etc. Actually, it’s not completely an economic worst case; it’s possible that the world economy could grow even faster than we assume. And it’s not the worst case scenario from a climate perspective, either. For example, we don’t model potential carbon emissions from permafrost or methane clathrates. It’s also possible that climate sensitivity could be higher than what we find in our analysis. JB: Why even bother projecting so far out into the future, if it’s so uncertain? NU: The main reason is because it takes a while for the AMOC to weaken, so if we’re interested in what it would take to make it collapse, we have to run the projections out a few centuries. But another motivation for writing this paper is policy related, having to do with the concept of "climate commitment" or "triggering". Even if it takes a few centuries for the AMOC to collapse, it may take less time than that to reach a "point of no return", where a future collapse has already been unavoidably "triggered". Again, to investigate this question, we have to run the projections out far enough to get the AMOC to collapse. We define "the point of no return" to be a point in time such that, if CO[2] emissions were immediately reduced to zero and kept there forever, the AMOC would still collapse by the year 2300 (an arbitrary date chosen for illustrative purposes).
This is possible because even if we stop emitting new CO[2], existing CO[2] concentrations, and therefore temperatures, will remain high for a long time (see "week303"). In reality, humans wouldn’t be able to reduce emissions instantly to zero, so the actual "point of no return" would likely be earlier than what we find in our study: we couldn’t economically reduce emissions fast enough to avoid triggering an AMOC collapse. (In this study we ignore the possibility of negative carbon emissions, that is, capturing CO[2] directly from the atmosphere and sequestering it for a long period of time. We’re also ignoring the possibility of climate geoengineering, which is global cooling designed to cancel out greenhouse warming.) So what do we find? Although we calculate a negligible probability that the AMOC will collapse by the end of this century, the probability that, in this century, we will commit later generations to a collapse (by 2300) is almost 5%. The probabilities of "triggering" rise rapidly, to almost 20% by 2150 and about 33% by 2200, even though the probability of experiencing a collapse by those dates is about 1% and 10%, respectively. You can see this in a figure from our paper. The take-home message is that while most climate projections are currently run out to 2100, we shouldn’t fixate only on what might happen to people this century. We should consider what climate changes our choices in this century, and beyond, are committing future generations to experiencing. JB: That’s a good point! I’d like to thank you right now for a wonderful interview that really taught me — and I hope our readers — a huge amount about climate change and climate modelling. I think we’ve basically reached the end here, but as the lights dim and the audience files out, I’d like to ask just a few more technical questions. One of them was raised by David Tweed. He pointed out that while you’re "training" your model on climate data from the last 150 years or so, you’re using it to predict the future in a world that will be different in various ways: a lot more CO[2] in the atmosphere, hotter, and so on. So, you’re extrapolating rather than interpolating, and that’s a lot harder. It seems especially hard if the collapse of the AMOC is a kind of "tipping point" — if it suddenly snaps off at some point, instead of linearly decreasing as some parameter changes. This raises the question: why should we trust your model, or any model of this sort, to make such extrapolations correctly? In the discussion after that comment, I think you said that ultimately it boils down to 1) whether you think you have the physics right, and 2) whether you think the parameters change over time. That makes sense. So my question is: what are some of the best ways people could build on the work you’ve done, and make more reliable predictions about the AMOC? There’s a lot at stake here! NU: Our paper is certainly an early step in making probabilistic AMOC projections, with room for improvement. I view the main points as (1) estimating how large the climate-related uncertainties may be within a given model, and (2) illustrating the difference between experiencing, and committing to, a climate change. It’s certainly not an end-all "prediction" of what will happen 300 years from now, taking into account all possible model limitations, economic uncertainties, etc. To answer your question, the general ways to improve predictions are to improve the models, and/or improve the data constraints. I’ll discuss both.
Although I’ve argued that our simple box model reasonably reproduces the dynamics of the more complex model it was designed to approximate, that complex model itself isn’t the best model available for the AMOC. The problem with using complex climate models is that it’s computationally impossible to run them millions of times. My solution is to work with "statistical emulators", which are tools for building fast approximations to slow models. The idea is to run the complex model a few times at different points in its parameter space, and then statistically interpolate the resulting outputs to predict what the model would have output at nearby points. This works if the model output is a smooth enough function of the parameters, and there are enough carefully-chosen "training" points. From an oceanographic standpoint, even current complex models are probably not wholly adequate (see the discussion at the end of "week304"). There is some debate about whether the AMOC becomes more stable as the resolution of the model increases. On the other hand, people still have trouble getting the AMOC in models, and the related climate changes, to behave as abruptly as they apparently did during the Younger Dryas. I think the range of current models is probably in the right ballpark, but there is plenty of room for improvement. Model developers continue to refine their models, and ultimately, the reliability of any projection is constrained by the quality of models available. Another way to improve predictions is to improve the data constraints. It’s impossible to go back in time and take better historic data, although with things like ice cores, it is possible to dig up new cores to analyze. It’s also possible to improve some historic "data products". For example, the ocean heat data is subject to a lot of interpolation of sparse measurements in the deep ocean, and one could potentially improve the interpolation procedure without going back in time and taking more data. There are also various corrections being applied for known biases in the data-gathering instruments and procedures, and it’s possible those could be improved too. Alternatively, we can simply wait. Wait for new and more precise data to become available. But when I say "improve the data constraints", I’m mostly talking about adding more constraints that I simply didn’t include in the analysis, or looking at existing data in more detail (like spatial patterns instead of global averages). For example, the ocean heat data mostly serves to constrain the vertical mixing parameter, controlling how quickly heat penetrates into the deep ocean. But we can also look at the penetration of chemicals in the ocean (such as carbon from fossil fuels, or chlorofluorocarbons). This is also informative about how quickly water masses mix down to the ocean depths, and indirectly informative about how fast heat mixes. I can’t do that with my simple model (which doesn’t have the ocean circulation of any of these chemicals in it), but I can with more complex models. As another example, I could constrain the climate sensitivity parameter better with paleoclimate data, or more resolved spatial data (to try to, e.g., pick up the spatial fingerprint of industrial aerosols in the temperature data), or by looking at data sets informative about particular feedbacks (such as water vapor), or at satellite radiation budget data. There is a lot of room for reducing uncertainties by looking at more and more data sets. However, this presents its own problems.
Not only is this simply harder to do, but it runs more directly into limitations in the models and data. For example, if I look at what ocean temperature data implies about a model’s vertical mixing parameter, and what ocean chemical data imply, I might find that they imply two inconsistent values for the parameter! Or that those data imply a different mixing than is implied by AMOC strength measurements. This can happen if there are flaws in the model (or in the data). We have some evidence from other work that there are circumstances in which this can happen: • A. Schmittner, N. M. Urban, K. Keller and D. Matthews, Using tracer observations to reduce the uncertainty of ocean diapycnal mixing and climate-carbon cycle projections, Global Biogeochemical Cycles 23 (2009), GB4009. • M. Goes, N. M. Urban, R. Tonkonojenkov, M. Haran, and K. Keller, The skill of different ocean tracers in reducing uncertainties about projections of the Atlantic meridional overturning circulation, Journal of Geophysical Research — Oceans, in press (2010). How to deal with this, if and when it happens, is an open research challenge. To an extent it depends on expert judgment about which model features and data sets are "trustworthy". Some say that expert judgment renders conclusions subjective and unscientific, but as a scientist, I say that such judgments are always applied! You always weigh how much you trust your theories and your data when deciding what to conclude about them. In my response I’ve so far ignored the part about parameters changing in time. I think the hydrological sensitivity (North Atlantic freshwater input as a function of temperature) can change with time, and this could be improved by using a better climate model that includes ice and precipitation dynamics. Feedbacks can fluctuate in time, but I think it’s okay to treat them as a constant for long term projections. Some of these parameters can also be spatially dependent (e.g., the respiration sensitivity in the carbon cycle). I think treating them all as constant is a decent first approximation for the sorts of generic questions we’re asking in the paper. Also, all the parameter estimation methods I’ve described only work with static parameters. For time varying parameters, you need to get into state estimation methods like Kalman or particle filters. JB: I also have another technical question, which is about the Markov chain Monte Carlo procedure. You generate your cloud of points in 18-dimensional space by a procedure where you keep either jumping randomly to a nearby point, or staying put, according to that decision procedure you described. Eventually this cloud fills out to a good approximation of the probability distribution you want. But, how long is "eventually"? You said you generated a million points. But how do you know that’s enough? NU: This is something of an art. Although there is an asymptotic convergence theorem, there is no general way of knowing whether you’ve reached convergence. First you check to see whether your chains "look right". Are they sweeping across the full range of parameter space where you expect significant probability? Are they able to complete many sweeps (thoroughly exploring parameter space)? Is the Metropolis test accepting a reasonable fraction of proposed moves? Do you have enough effective samples in your Markov chain? (MCMC generates correlated random samples, so there are fewer "effectively independent" samples in the chain than there are total samples.) 
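(To make a couple of those checks concrete: below is a toy one-parameter Metropolis sampler on a made-up target, followed by the acceptance rate and a crude effective-sample-size estimate. It is purely illustrative; the target, step size, and chain length are invented, and this is not the setup used in the paper.)

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta):
    # Made-up one-dimensional target: a standard normal "posterior".
    return -0.5 * theta**2

def metropolis(n_steps, step_size):
    chain = np.empty(n_steps)
    theta, logp = 0.0, log_posterior(0.0)
    accepted = 0
    for i in range(n_steps):
        proposal = theta + step_size * rng.standard_normal()  # tweak the knob a little
        logp_new = log_posterior(proposal)
        if np.log(rng.random()) < logp_new - logp:            # Metropolis accept/reject test
            theta, logp = proposal, logp_new
            accepted += 1
        chain[i] = theta
    return chain, accepted / n_steps

def effective_sample_size(chain, max_lag=200):
    # Crude estimate: N / (1 + 2 * sum of positive autocorrelations).
    x = chain - chain.mean()
    var = np.dot(x, x) / len(x)
    tau = 1.0
    for lag in range(1, max_lag):
        rho = np.dot(x[:-lag], x[lag:]) / (len(x) * var)
        if rho <= 0:
            break
        tau += 2 * rho
    return len(chain) / tau

chain, acceptance = metropolis(n_steps=50000, step_size=0.5)
print(acceptance, effective_sample_size(chain))
```

If the acceptance rate is extreme, or the effective sample size is a tiny fraction of the chain length, that usually means the proposal step needs retuning.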
Then you can do consistency checks: start the chains at several different locations in parameter space, and see if they all converge to similar distributions. If the posterior distribution shows, or is expected to show, a lot of correlation between parameters, you have to be more careful to ensure convergence. You want to propose moves that carry you along the "principal components" of the distribution, so you don’t waste time trying to jump away from the high probability directions. (Roughly, if your posterior density is concentrated on some low dimensional manifold, you want to construct your way of moving around parameter space to stay near that manifold.) You also have to be careful if you see, or expect, multimodality (multiple peaks in the probability distribution). It can be hard for MCMC to move from one mode to another through a low-probability "wasteland"; it won’t be inclined to jump across it. There are more advanced algorithms you can use in such situations, if you suspect you have multimodality. Otherwise, you might discover later that you only sampled one peak, and never noticed that there were others. JB: Did you do some of these things when testing out the model in your paper? Do you have any intuition for the "shape" of the probability distribution in 18-dimensional space that lies at the heart of your model? For example: do you know if it has one peak, or several? NU: I’m pretty confident that the MCMC in our analysis is correctly sampling the shape of the probability distribution. I ran lots and lots of analyses, starting the chain in different ways, tweaking the proposal distribution (jumping rule), looking at different priors, different model structures, different data, and so on. It’s hard to "see" what an 18-dimensional function looks like, but we have 1-dimensional and 2-dimensional projections of it in our paper. I don’t believe that it has multiple peaks, and I don’t expect it to. Multiple peaks usually show up when the model behavior is non-monotonic as a function of the parameters. This can happen in really nonlinear systems (and with threshold systems like the AMOC), but during the historic period I’m calibrating the model to, I see no evidence of this in the model. There are correlations between parameters, so there are certain "directions" in parameter space that the posterior distribution is oriented along. And the distribution is not Gaussian. There is evidence of skew, and nonlinear correlations between parameters. Such correlations appear when the data are insufficient to completely identify the parameters (i.e., different combinations of parameters can produce similar model output). This is discussed in more detail in another of our papers: • Nathan M. Urban and Klaus Keller, Complementary observational constraints on climate sensitivity, Geophysical Research Letters 36 (2009), L04708. In a Gaussian distribution, the distribution of any pair of parameters will look ellipsoidal, but our distribution has some "banana" or "boomerang" shaped pairwise correlations. This is common, for example, when the model output is a function of the product of two parameters. JB: Okay. It’s great that we got a chance to explore some of the probability theory and statistics underlying your work. It’s exciting for me to see these ideas being used to tackle a big real-life problem. Thanks again for a great interview. Maturity is the capacity to endure uncertainty. – John Finley This Week’s Finds (Week 304) 15 October, 2010 About 10,800 BC, something dramatic happened.
The last glacial period seemed to be ending quite nicely, things had warmed up a lot — but then, suddenly, the temperature in Europe dropped about 7 °C! In Greenland, it dropped about twice that much. In England it got so cold that glaciers started forming! In the Netherlands, in winter, temperatures regularly fell below -20 °C. Throughout much of Europe trees retreated, replaced by alpine landscapes and tundra. The climate was affected as far as Syria, where drought punished the ancient settlement of Abu Hureyra. But it doesn’t seem to have been a world-wide event. This cold spell lasted for about 1300 years. And then, just as suddenly as it began, it ended! Around 9,500 BC, the temperature in Europe bounced back. This episode is called the Younger Dryas, after a certain wildflower that enjoys cold weather, whose pollen is common in this period. What caused the Younger Dryas? Could it happen again? An event like this could wreak havoc, so it’s important to know. Alas, as so often in science, the answer to these questions is "we’re not sure". We’re not sure, but the most popular theory is that a huge lake in Canada, formed by melting glaciers, broke its icy banks and flooded out into the Saint Lawrence River. This lake is called Lake Agassiz. At its maximum, it held more water than all the lakes in the world now put together. In a massive torrent lasting for years, the water from this lake rushed out to the Labrador Sea. By floating atop the denser salt water, this fresh water blocked a major current that flows in the Atlantic: the Atlantic Meridional Overturning Circulation, or AMOC. This current brings warm water north and helps keep northern Europe warm. So, northern Europe was plunged into a deep freeze! That’s the theory, anyway. Could something like this happen again? There are no glacial lakes waiting to burst their banks, but the concentration of fresh water in the northern Atlantic has been increasing, and ocean temperatures are changing too, so some scientists are concerned. The problem is, we don’t really know what it takes to shut down the Atlantic Meridional Overturning Circulation! To make progress on this kind of question, we need a lot of insight, but we also need some mathematical models. And that’s what Nathan Urban will tell us about now. First we’ll talk in general about climate models, Bayesian reasoning, and Monte Carlo methods. We’ll even talk about the general problem of using simple models to study complex phenomena. And then he’ll walk us step by step through the particular model that he and a coauthor have used to study this question: will the AMOC run amok? Sorry, I couldn’t resist that. It’s not so much "running amok" that the AMOC might do, it’s more like "fizzling out". But accuracy should never stand in the way of a good pun. On with the show: JB: Welcome back! Last time we were talking about the new work you’re starting at Princeton. You said you’re interested in the assessment of climate policy in the presence of uncertainties and "learning" – where new facts come along that revise our understanding of what’s going on. Could you say a bit about your methodology? Or, if you’re not far enough along on this work, maybe you could talk about the methodology of some other paper in this line of research. NU: To continue the direction of discussion, I’ll respond by talking about the methodology of a few papers along the lines of what I hope to work on here at Princeton, rather than about my past papers on uncertainty quantification.
They are Keller and McInerney on learning rates: • Klaus Keller and David McInerney, The dynamics of learning about a climate threshold, Climate Dynamics 30 (2008), 321-332. Keller and coauthors on learning and economic policy: • Klaus Keller, Benjamin M. Bolker and David F. Bradford, Uncertain climate thresholds and optimal economic growth, Journal of Environmental Economics and Management 48 (2004), 723-741. and Oppenheimer et al. on "negative" learning (what happens when science converges to the wrong answer): • Michael Oppenheimer, Brian C. O’Neill and Mort Webster, Negative learning, Climatic Change 89 (2008), 155-172. The general theme of this kind of work is to statistically compare a climate model to observed data in order to understand what model behavior is allowed by existing data constraints. Then, having quantified the range of possibilities, plug this uncertainty analysis into an economic-climate model (or "integrated assessment model"), and have it determine the economically "optimal" course of action. So: start with a climate model. There is a hierarchy of such models, ranging from simple impulse-response or "box" models to complex atmosphere-ocean general circulation models. I often use the simple models, because they’re computationally efficient and it is therefore feasible to explore their full range of uncertainties. I’m moving toward more complex models, which requires fancier statistics to extract information from a limited set of time-consuming simulations. Given a model, the next step is to apply a Monte Carlo analysis of its parameter space. Climate models cannot simulate the entire Earth from first principles. They have to make approximations, and those approximations involve free parameters whose values must be fit to data (or calculated from specialized models). For example, a simple model cannot explicitly describe all the possible feedback interactions that are present in the climate system. It might lump them all together into a single, tunable "climate sensitivity" parameter. The Monte Carlo analysis runs the model many thousands of times at different parameter settings, and then compares the model output to past data in order to see which parameter settings are plausible and which are not. I use Bayesian statistical inference, in combination with Markov chain Monte Carlo, to quantify the degree of "plausibility" (i.e., probability) of each parameter setting. With probability weights for the model’s parameter settings, it is now possible to assign probabilities to the possible future outcomes predicted by the model. This describes, conditional on the model and data used, the uncertainty about the future climate. JB: Okay. I think I roughly understand this. But you’re using jargon that may cause some readers’ eyes to glaze over. And that would be unfortunate, because this jargon is necessary to talk about some very cool ideas. So, I’d like to ask what some phrases mean, and beg you to explain them in ways that everyone can understand. To help out — and maybe give our readers the pleasure of watching me flounder around — I’ll provide my own quick attempts at explanation. Then you can say how close I came to understanding you. First of all, what’s an "impulse-response model"? When I think of "impulse response" I think of, say, tapping on a wineglass and listening to the ringing sound it makes, or delivering a pulse of voltage to an electrical circuit and watching what it does.
And the mathematician in me knows that this kind of situation can be modelled using certain familiar kinds of math. But you might be applying that math to climate change: for example, how the atmosphere responds when you pump some carbon dioxide into it. Is that about right? NU: Yes. (Physics readers will know "impulse response" as "Green’s functions", by the way). The idea is that you have a complicated computer model of a physical system whose dynamics you want to represent as a simple model, for computational convenience. In my case, I’m working with a computer model of the carbon cycle which takes CO[2] emissions as input and predicts how much CO[2] is left in the air after natural sources and sinks operate on what’s there. It’s possible to explicitly model most of the relevant physical and biogeochemical processes, but it takes a long time for such a computer simulation to run. Too long to explore how it behaves under many different conditions, which is what I want to do. How do you build a simple model that acts like a more complicated one? One way is to study the complex model’s "impulse response" — in this case, how it behaves in response to an instantaneous "pulse" of carbon to the atmosphere. In general, the CO[2] in the atmosphere will suddenly jump up, and then gradually relax back toward its original concentration as natural sinks remove some of that carbon from the atmosphere. The curve showing how the concentration decreases over time is the "impulse response". You derive it by telling your complex computer simulation that a big pulse of carbon was added to the air, and recording what it predicts will happen to CO[2] over time. The trick in impulse response theory is to treat an arbitrary CO[2] emissions trajectory as the sum of a bunch of impulses of different sizes, one right after another. So, if emissions are 1, 3, and 7 units of carbon in years 1, 2, and 3, then you can think of that as a 1-unit pulse of carbon in year one, plus a 3-unit pulse in year 2, plus a 7-unit pulse in year 3. The crucial assumption you make at this point is that you can treat the response of the complex model to this series of impulses as the sum of scaled and shifted copies of the "impulse response" curve that you worked out for a single pulse. Therefore, just by running the model in response to a single unit pulse, you can work out what the model would predict for any emissions trajectory, by adding up its response to a bunch of individual pulses. The impulse response model makes its prediction by summing up lots of copies of the impulse response curve, with different sizes and at different times. (Technically, this is a convolution of the impulse response curve, or Green’s function, with the emissions trajectory curve.) JB: Okay. Next, what’s a "box model"? I had to look that up, and after some floundering around I bumped into a Wikipedia article that mentioned "black box models" and "white box models". A black box model is where you’ve got a system, and all you pay attention to is its input and output — in other words, what you do to it, and what it does to you, not what’s going on "inside". A white box model, or "glass box model", lets you see what’s going on inside but not directly tinker with it, except via your input. Is this at all close? I don’t feel very confident that I’ve understood what a "box model" is. NU: No, box models are the sorts of things you find in "systems dynamics" theory, where you have "stocks" of a substance and "flows" of it in and out.
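(Before the climate examples below, here is the smallest possible stock-and-flow sketch: two invented boxes exchanging a substance at made-up rates, just to show what the bookkeeping looks like.)

```python
# Two generic "stocks" A and B, with a flow from A to B proportional to how much
# is in A, and a smaller return flow from B back to A.  All numbers are invented.
k_ab, k_ba = 0.10, 0.02          # flow rate constants, per year
A, B = 100.0, 0.0                # initial stock sizes, arbitrary units

for year in range(200):
    flow_ab = k_ab * A           # what leaves A this year...
    flow_ba = k_ba * B           # ...and what returns from B
    A += flow_ba - flow_ab       # each box just tracks what flows in minus what flows out
    B += flow_ab - flow_ba

print(A, B)                      # the stocks settle toward a ratio set by the two rate constants
```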
In the carbon cycle, the "boxes" (or stocks) could be "carbon stored in wood", "carbon stored in soil", "carbon stored in the surface ocean", etc. The flows are the sources and sinks of carbon. In an ocean model, boxes could be "the heat stored in the North Atlantic", "the heat stored in the deep ocean", etc., and flows of heat between them. Box models are a way of spatially averaging over a lot of processes that are too complicated or time-consuming to treat in detail. They’re another way of producing simplified models from more complex ones, like impulse response theory, but without the linearity assumption. For example, one could replace a three dimensional circulation model of the ocean with a couple of "big boxes of water connected by pipes". Of course, you have to then verify that your simplified model is a "good enough" representation of whatever aspect of the more complex model that you’re interested in. JB: Okay, sure — I know a bit about these "box models", but not that name. In fact the engineers who use "bond graphs" to depict complex physical systems made of interacting parts like to emphasize the analogy between electrical circuits and hydraulic systems with water flowing through pipes. So I think box models fit into the bond graph formalism pretty nicely. I’ll have to think about that. Anyway: next you mentioned taking a model and doing a "Monte Carlo analysis of its parameter space". This time you explained what you meant, but I’ll still go over it. Any model has a bunch of adjustable parameters in it, for example the "climate sensitivity", which in a simple model just means how much warmer it gets per doubling of atmospheric carbon dioxide. We can think of these adjustable parameters as knobs we’re allowed to turn. The problem is that we don’t know the best settings of these knobs! And even worse, there are lots of allowed settings. In a Monte Carlo analysis we randomly turn these knobs to some setting, run our model, and see how well it does — presumably by comparing its results to the "right answer" in some situation where we already know the right answer. Then we keep repeating this process. We turn the knobs again and again, and accumulate information, and try to use this to guess what the right knob settings are. More precisely: we try to guess the probability that the correct knob settings lie within any given range! We don’t try to guess their one "true" setting, because we can’t be sure what that is, and it would be silly to pretend otherwise. So instead, we work out probabilities. Is this roughly right? NU: Yes, that’s right. JB: Okay. That was the rough version of the story. But then you said something a lot more specific. You say you "use Bayesian statistical inference, in combination with Markov chain Monte Carlo, to quantify the degree of "plausibility" (or probability) of each parameter setting." So, I’ve got a couple more questions. What’s "Markov chain Monte Carlo"? I guess it’s some specific way of turning those knobs over and over again. NU: Yes. For physicists, it’s a "random walk" way of turning the knobs: you start out at the current knob settings, and tweak each one just a little bit away from where they currently are. In the most common Markov chain Monte Carlo (MCMC) algorithm, if the new setting takes you to a more plausible setting of the knobs, you keep that setting.
If the new setting produces an outcome that is less plausible, then you might keep the new setting (with a probability given by the ratio of the new setting’s plausibility to the old one’s), or you might stay at the existing setting and try again with a new tweaking. The MCMC algorithm is designed so that the sequence of knob settings produced will sample randomly from the probability distribution you’re interested in. JB: And what’s "Bayesian statistical inference"? I’m sorry, I know this subject deserves a semester-long graduate course. But like a bad science journalist, I will ask you to distill it down to a few sentences! Sometime I’ll do a whole series of This Week’s Finds about statistical inference, but not now. NU: I can distill it to one sentence: in this context, it’s a branch of statistics which allows you to assign probabilities to different settings of model parameters, based on how well those settings cause the model to reproduce the observed data. The more common "frequentist" approach to statistics doesn’t allow you to assign probabilities to model parameters. It has a different take on probability. As a Bayesian, you assume the observed data is known and talk about probabilities of hypotheses (here, model parameters). As a frequentist, you assume the hypothesis is known (hypothetically), and talk about probabilities of data that could result from it. They differ fundamentally in what you treat as known (data, or hypothesis) and what probabilities are applied to (hypothesis, or data). JB: Okay, and one final question: sometimes you say "plausibility" and sometimes you say "probability". Are you trying to distinguish these, or say they’re the same? NU: I am using "probability" as a technical term which quantifies how "plausible" a hypothesis is. Maybe I should just stick to "probability". JB: Great. Thanks for suffering through that dissection of what you said. I think I can summarize, in a sloppy way, as follows. You take a model with a bunch of adjustable knobs, and you use some data to guess the probability that the right settings of these knobs lie within any given range. Then, you can use this model to make predictions. But these predictions are only probabilistic. Okay, then what? NU: This is the basic uncertainty analysis. There are several things that one can do with it. One is to look at learning rates. You can generate "hypothetical data" that we might observe in the future, by taking a model prediction and adding some "observation noise" to it. (This presumes that the model is perfect, which is not the case, but it represents a lower bound on uncertainty.) Then feed the hypothetical data back into the uncertainty analysis to calculate how much our uncertainty in the future could be reduced as a result of "observing" this "new" data. See Keller and McInerney for an example. Another thing to do is decision making under uncertainty. For this, you need an economic integrated assessment model (or some other kind of policy model). Such a model typically has a simple description of the world economy connected to a simple description of the global climate: the world population and the economy grow at a certain rate which is tied to the energy sector, policies to reduce fossil carbon emissions have economic costs, fossil carbon emissions influence the climate, and climate change has economic costs.
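(The bookkeeping described here and in the next paragraph can be caricatured in a few lines. Every number and functional form below is invented purely to show the shape of the calculation; it is not any real integrated assessment model.)

```python
import numpy as np

years = np.arange(2020, 2121)
t = years - 2020

def discounted_welfare(abatement):
    """Cartoon world: one number for the economy, one for the climate, all made up."""
    gdp = 100.0 * 1.02 ** t                         # economy growing 2% per year
    emissions = (1 - abatement) * 10.0 * 1.01 ** t  # emissions tied to the economy, minus the policy
    warming = 0.0005 * np.cumsum(emissions)         # toy climate: warming tracks cumulative emissions
    damages = 0.01 * warming**2 * gdp               # climate change has economic costs...
    abatement_cost = 0.02 * abatement**2 * gdp      # ...and so does cutting emissions
    consumption = gdp - damages - abatement_cost
    utility = np.log(consumption)                   # a dollar matters more when you have fewer of them
    discount = 1.03 ** -t                           # downweight future utility relative to the present
    return np.sum(discount * utility)

print(discounted_welfare(0.0), discounted_welfare(0.5))   # "business as usual" vs. a 50% cut
```

With uncertainty included, one would average this kind of welfare number over many possible climate outcomes, weighted by their probabilities.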
Different models are more or less explicit about these components (is the economy treated as a global aggregate or broken up into regional economies, how realistic is the climate model, how detailed is the energy sector model, etc.) If you feed some policy (a course of emissions reductions over time) into such a model, it will calculate the implied emissions pathway and emissions abatement costs, as well as the implied climate change and economic damages. The net costs or benefits of this policy can be compared with a "business as usual" scenario with no emissions reductions. The net benefit is converted from "dollars" to "utility" (accounting for things like the concept that a dollar is worth more to a poor person than a rich one), and some discounting factor is applied (to downweight the value of future utility relative to present). This gives "the (discounted) utility of the proposed policy". So far this has not taken uncertainty into account. In reality, we’re not sure what kind of climate change will result from a given emissions trajectory. (There is also economic uncertainty, such as how much it really costs to reduce emissions, but I’ll concentrate on the climate uncertainty.) The uncertainty analysis I’ve described can give probability weights to different climate change scenarios. You can then take a weighted average over all these scenarios to compute the "expected" utility of a proposed policy. Finally, you optimize over all possible abatement policies to find the one that has the maximum expected discounted utility. See Keller et al. for a simple conceptual example of this applied to a learning scenario, and this book for a deeper discussion: • William Nordhaus, A Question of Balance, Yale U. Press, New Haven, 2008. It is now possible to start elaborating on this theme. For instance, in the future learning problem, you can modify the "hypothetical data" to deviate from what your climate model predicts, in order to consider what would happen if the model is wrong and we observe something "unexpected". Then you can put that into an integrated assessment model to study how much being wrong would cost us, and how fast we need to learn that we’re wrong in order to change course, policy-wise. See that paper by Oppenheimer et al. for an example. JB: Thanks for that tour of ideas! It sounds fascinating, important, and complex. Now I’d like to move on to talking about a specific paper of yours. It’s this one: • Nathan Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle, and Atlantic meridional overturning circulation system: A Bayesian fusion of century-scale observations with a simple model, Tellus A 62 (2010), 737-750. Before I ask you about the paper, let me start with something far more basic: what the heck is the "Atlantic meridional overturning circulation" or "AMOC"? I know it has something to do with ocean currents, and how warm water moves north near the surface of the Atlantic and then gets cold, plunges down, and goes back south. Isn’t this related to the "Gulf Stream", that warm current that supposedly keeps Europe warmer than it otherwise would be? NU: Your first sentence pretty much sums up the basic dynamics: the warm water from the tropics cools in the North Atlantic, sinks (because it’s colder and denser), and returns south as deep water. As the water cools, the heat it releases to the atmosphere warms the region. This is the "overturning circulation". But it’s not synonymous with the Gulf Stream. 
The Gulf Stream is a mostly wind-driven phenomenon, not a density-driven current. The "AMOC" has both wind-driven and density-driven components; the latter is sometimes referred to as the "thermohaline circulation" (THC), since both heat and salinity are involved. I haven’t gotten into salinity yet, but it also influences the density structure of the ocean, and you can read Stefan Rahmstorf’s review articles for more (read the parts on non-linear behavior): • Stefan Rahmstorf, The thermohaline ocean circulation: a brief fact sheet. • Stefan Rahmstorf, Thermohaline ocean circulation, in Encyclopedia of Quaternary Sciences, edited by S. A. Elias, Elsevier, Amsterdam, 2006. JB: Next, why are people worrying about the AMOC? I know some scientists have argued that shortly after the last ice age, the AMOC stalled out due to lots of fresh water from Lake Agassiz, a huge lake that used to exist in what’s now Canada, formed by melting glaciers. The idea, I think, was that this event temporarily killed the Gulf Stream and made temperatures in Europe drop enormously. Do most people believe that story these days? NU: You’re speaking of the "Younger Dryas" abrupt cooling event around 11 to 13 thousand years ago. The theory is that a large pulse of fresh water from Lake Agassiz lessened the salinity in the Atlantic and made it harder for water to sink, thus shutting down the overturning circulation and decreasing its release of heat in the North Atlantic. This is still a popular theory, but geologists have had trouble tracing the path of a sufficiently large supply of fresh water, at the right place and the right time, to shut down the AMOC. There was a paper earlier this year claiming to have finally done this: • Julian B. Murton, Mark D. Bateman, Scott R. Dallimore, James T. Teller and Zhirong Yang, Identification of Younger Dryas outburst flood path from Lake Agassiz to the Arctic Ocean, Nature 464 (2010), 740-743. but I haven’t read it yet. The worry is that this could happen again — not because of a giant lake draining into the Atlantic, but because of warming (and the resulting changes in precipitation) altering the thermal and salinity structure of the ocean. It is believed that the resulting shutdown of the AMOC will cause the North Atlantic region to cool, but there is still debate over what it would take to cause it to shut down. It’s also debated whether this is one of the climate "tipping points" that people talk about — whether a certain amount of warming would trigger a shutdown, and whether that shutdown would be "irreversible" (or difficult to reverse) or "abrupt". Cooling Europe may not be a bad thing in a warming world. In fact, in a warming world, Europe might not actually cool in response to an AMOC shutdown; it might just warm more slowly. The problem is if the cooling is abrupt (and hard to adapt to), or prolonged (permanently shifting climate patterns relative to the rest of the world). Perhaps worse than the direct temperature change could be the impacts on agriculture or ocean ecosystems, resulting from major reorganizations of regional precipitation or ocean circulation patterns. JB: So, part of your paper consists of modelling the AMOC and how it interacts with the climate and the carbon cycle. Let’s go through this step by step. First: how do you model the climate? You say you use "the DOECLIM physical climate component of the ACC2 model, which is an energy balance model of the atmosphere coupled to a one-dimensional diffusive ocean model".
I guess these are well-known ideas in your world. But I don’t even know what the acronyms stand for! Could you walk us through these ideas in a gentle way? NU: Don’t worry about the acronyms; they’re just names people have given to particular models. The ACC2 model is a computer model of both the climate and the carbon cycle. The climate part of our model is called DOECLIM, which I’ve used to replace the original climate component of ACC2. An "energy balance model" is the simplest possible climate model, and is a form of "box model" that I mentioned above. It treats the Earth as a big heat sink that you dump energy into (e.g., by adding greenhouse gases). Given the laws of thermodynamics, you can compute how much temperature change you get from a given amount of heat input. This energy balance model of the atmosphere is "zero dimensional", which means that it treats the Earth as a featureless sphere, and doesn’t attempt to keep track of how heat flows or temperature changes at different locations. There is no three dimensional circulation of the atmosphere or anything like that. The atmosphere is just a "lump of heat-absorbing material". The atmospheric "box of heat" is connected to two other boxes, which are land and ocean. In DOECLIM, "land" is just another featureless lump of material, with a different heat capacity than air. The "ocean" is more complicated. Instead of a uniform box of water with a single temperature, the ocean is "one dimensional", meaning that it has depth, and temperature is allowed to vary with depth. Heat penetrates from the surface into the deep ocean by a diffusion process, which is intended to mimic the actual circulation-driven penetration of heat into the ocean. It’s worth treating the ocean in more detail since oceans are the Earth’s major heat sink, and therefore control how quickly the planet can change temperature. The three parameters in the DOECLIM model which we treat as uncertain are the climate (temperature) sensitivity to CO[2], the vertical mixing rate of heat into the ocean, and the strength of the "aerosol indirect effect" (how strong a cooling effect industrial aerosols in the atmosphere create through their influence on cloud behavior). JB: Okay, that’s clear enough. But at this point I have to raise an issue about models in general. As you know, a lot of climate skeptics like to complain about the fallibility of models. They would surely become even more skeptical upon hearing that you’re treating the Earth as a featureless sphere with the same temperature throughout at any given time — and treating the temperature of ocean water as depending only on the depth, not the location. Why are you simplifying things so much? How could your results possibly be relevant to the real world? Of course, as a mathematical physicist, I know the appeal of simple models. I also know the appeal of reducing the number of dimensions. I spent plenty of time studying quantum gravity in the wholly unrealistic case of a universe with one less dimension than our real world! Reducing the number of dimensions makes the math a lot simpler. And simplified models give us a lot of insight which — with luck — we can draw upon when tackling the really hard real-world problems. But we have to be careful: they can also lead us astray. How do you think about results obtained from simplified climate models? Are they just mathematical warmup exercises? That would be fine — I have no problem with that, as long as we’re clear about it.
Or are you hoping that they give approximately correct answers? NU: I use simple models because they’re fast and it’s easier to expose and explore their assumptions. My attitude toward simple models is a little of both the points of view you suggest: partly proof of concept, but also hopefully approximately correct, for the questions I’m asking. Let me first argue for the latter perspective. If you’re using a zero dimensional model, you can really only hope to answer "zero dimensional questions", i.e. about the globally averaged climate. Once you’ve simplified your question by averaging over a lot of the complexity of the data, you can hope that a simple model can reproduce the remaining dynamics. But you shouldn’t just hope. When using simple models, it’s important to test the predictions of their components against more complex models and against observed data. You can show, for example, that as far as global average surface temperature is concerned, even simpler energy balance models than DOECLIM (e.g., without a 1D ocean) can do a decent job of reproducing the behavior of more complex models. See, e.g.: • Isaac M. Held, Michael Winton, Ken Takahashi, Thomas Delworth, Fanrong Zeng and Geoffrey K. Vallis, Probing the fast and slow components of global warming by returning abruptly to preindustrial forcing, Journal of Climate 23 (2010), 2418-2427. for a recent study. The differences between complex models can be captured merely by retuning the "effective parameters" of the simple model. For example, many of the complexities of different feedback effects can be captured by a tunable climate sensitivity parameter in the simple model, representing the total feedback. By turning this sensitivity "knob" in the simple model, you can get it to behave like complex models which have different feedbacks in them. There is a long history in climate science of using simple models as "mechanistic emulators" of more complex models. The idea is to put just enough physics into the simple model to get it to reproduce some specific averaged behavior of the complex model, but no more. The classic "mechanistic emulator" used by the Intergovernmental Panel on Climate Change is called MAGICC. BERN-CC is another model frequently used by the IPCC for carbon cycle scenario analysis — that is, converting CO[2] emissions scenarios to atmospheric CO[2] concentrations. A simple model that people can play around with themselves on the Web may be found here: • Ben Matthews, Chooseclimate. Obviously a simple model cannot reproduce all the behavior of a more complex model. But if you can provide evidence that it reproduces the behavior you’re interested in for a particular problem, it is arguably at least as "approximately correct" as the more complex model you validate it against, for that specific problem. (Whether the more complex model is an "approximately correct" representation of the real world is a separate question!) In fact, simple models are arguably more useful than more complex ones for certain applications. The problem with complex models is, well, their complexity. They make a lot of assumptions, and it’s hard to test all of them. Simpler models make fewer assumptions, so you can test more of them, and look at the sensitivity of your conclusions to your assumptions. If I take all the complex models used by the IPCC, they will have a range of different climate sensitivities. But what if the actual climate sensitivity is above or below that range, because all the complex models have limitations?
I can’t easily explore that possibility in a complex model, because "climate sensitivity" isn’t a knob I can turn. It’s an emergent property of many different physical processes. If I want to change the model’s climate sensitivity, I might have to rewrite the cloud physics module to obey different dynamical equations, or something complicated like that — and I still won’t be able to produce a specific sensitivity. But in a simple model, "climate sensitivity" is a "knob", and I can turn it to any desired value above, below, or within the IPCC range to see what happens. After that defense of simple models, there are obviously large caveats. Even if you can show that a simple model can reproduce the behavior of a more complex one, you can only test it under a limited range of assumptions about model parameters, forcings, etc. It’s possible to push a simple model too far, into a regime where it stops reproducing what a more complex model would do. Simple models can also neglect relevant feedbacks and other processes. For example, in the model I use, global warming can shut down the AMOC, but changes in the AMOC don’t feed back to cool the global temperature. But the cooling from an AMOC weakening should itself slow further AMOC weakening due to global warming. The AMOC model we use is designed to partly compensate for the lack of explicit feedback of ocean heat transport on the temperature forcing, but it’s still an approximation. In our paper we discuss what we think are the most important caveats of our simple analysis. Ultimately we need to be able to do this sort of analysis with more complex models as well, to see how robust our conclusions are to model complexity and structural assumptions. I am working in that direction now, but the complexities involved might be the subject of another interview! JB: I’d be very happy to do another interview with you. But you’re probably eager to finish this one first. So we should march on. But I can’t resist one more comment. You say that models even simpler than DOECLIM can emulate the behavior of more complex models. And then you add, parenthetically, "whether the more complex model is an ‘approximately correct’ representation of the real world is a separate question!" But I think that latter question is the one that ordinary people find most urgent. They won’t be reassured to know that simple models do a good job of mimicking more complicated models. They want to know how well these models mimic reality! But maybe we’ll get to that when we talk about the Markov chain Monte Carlo procedure and how you use that to estimate the probability that the "knobs" (that is, parameters) in your model are set correctly? Presumably in that process we learn a bit about how well the model matches real-world data? If so, we can go on talking about the model now, and come back to this point in due time. NU: The model’s ability to represent the real world is the most important question. But it’s not one I can hope to fully answer with a simple model. In general, you won’t expect a model to exactly reproduce the data. Partly this is due to model imperfections, but partly it’s due to random "natural variability" in the system. (And also, of course, to measurement error.) Natural variability is usually related to chaotic or otherwise unpredictable atmosphere-ocean interactions, e.g. at the scale of weather events, El Niño, etc. Even a perfect model can’t be expected to predict those.
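(One way to picture the role of natural variability and residual error: take a smooth toy "model prediction" and add the kind of AR(1) noise mentioned near the start of this interview. Everything below is invented for illustration.)

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1850, 2011)

# A smooth toy "model prediction" (an S-shaped warming curve, not from any real model).
prediction = 0.8 / (1.0 + np.exp(-(years - 1970) / 30.0))

# AR(1) residuals: this year's error is a fraction of last year's error plus fresh noise.
rho, sigma = 0.6, 0.1
residual = np.zeros(len(years))
for i in range(1, len(years)):
    residual[i] = rho * residual[i - 1] + sigma * rng.standard_normal()

observations = prediction + residual   # what "perfect model plus natural variability" would look like
print(np.corrcoef(residual[:-1], residual[1:])[0, 1])   # close to rho, by construction
```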
With a simple model it’s really hard to tell how much of the discrepancy between model and data is due to model structural flaws, and how much is attributable to expected "random fluctuations", because simple models are too simple to generate their own "natural variability". To really judge how well models are doing, you have to use a complex model and see how much of the discrepancy can be accounted for by the natural variability it predicts. You also have to get into a lot of detail about the quality of the observations, which means looking at spatial patterns and not just global averages. This is the sort of thing done in model validation studies, "detection and attribution" studies, and observation system papers. But it’s beyond the scope of our paper. That’s why I said the best I can do is to use simple models that perform as well as complex models for limited problems. They will of course suffer any limitations of the complex models to which they’re tuned, and if you want to read about those, you should read those modeling papers. As far as what I can do with a simple model, yes, the Bayesian probability calculation using MCMC is a form of data-model comparison, in that it gives higher weight to model parameter settings that fit the data better. But it’s not exactly a form of "model checking", because Bayesian probability weighting is a relative procedure. It will be quite happy to assign high probability to parameter settings that fit the data terribly, as long as they still fit better than all the other parameter settings. A Bayesian probability isn’t an absolute measure of model quality, and so it can’t be used to check models. This is where classical statistical measures of "goodness of fit" can be helpful. For a philosophical discussion, see: • Andrew Gelman and Cosma Rohilla Shalizi, Philosophy and the practice of Bayesian statistics, available as arXiv:1006.3868. That being said, you do learn about model fit during the MCMC procedure in its attempt to sample highly probable parameter settings. When you get to the best fitting parameters, you look at the difference between the model fit and the observations to get an idea of what the "residual error" is — that is, everything that your model wasn’t able to predict. I should add that complex models disagree more about the strength of the AMOC than they do about more commonly discussed climate variables, such as surface temperature. This can be seen in Figure 10.15 of the IPCC AR4 WG1 report: there is a cluster of models that all tend to agree with the observed AMOC strength, but there are also some models that don’t. Some of those that don’t are known to have relatively poor physical modeling of the overturning circulation, so this is to be expected (i.e., the figure looks like a worse indictment of the models than it really is). But there is still disagreement between some of the "higher quality" models. Part of the problem is that we have poor historical observations of the AMOC, so it’s sometimes hard to tell what needs fixing in the models. Since the complex models don’t all agree about the current state of the AMOC, one can (and should) question using a simple AMOC model which has been tuned to a particular complex model. Other complex models will predict something altogether different. (And in fact, the model that our simple model was tuned to is also simpler than the IPCC AR4 models.)
In our analysis we try to get around this model uncertainty by including some tunable parameters that control both the initial strength of the AMOC and how quickly it weakens. By altering those parameters, we try to span the range of possible outcomes predicted by complex models, allowing the parameters to take on whatever range of values is compatible with the (noisy) observations. This, at a minimum, leads to significant uncertainty in what the AMOC will do. I’m okay with the idea of uncertainty — that is, after all, what my research is about. But ultimately, even projections with wide error bars still have to be taken with a grain of salt, if the most advanced models still don’t entirely agree on simple questions like the current strength of the AMOC. JB: Okay, thanks. Clearly the question of how well your model matches reality is vastly more complicated than what you started out trying to tell me: namely, what your model is. Let’s get back to that. To recap, your model consists of three interacting parts: a model of the climate, a model of the carbon cycle, and a model of the Atlantic meridional overturning circulation (or "AMOC"). The climate model, called "DOECLIM", itself consists of three interacting parts: • the "land" (modelled as a "box of heat"), • the "atmosphere" (modelled as a "box of heat"), • the "ocean" (modelled as a one-dimensional object, so that temperature varies with depth) Next: how do you model the carbon cycle? NU: We use a model called NICCS (nonlinear impulse-response model of the coupled carbon-cycle climate system). This model started out as an impulse response model, but because of nonlinearities in the carbon cycle, it was augmented by some box model components. NICCS takes fossil carbon emissions to the air as input, and calculates how that carbon ends up being partitioned between the atmosphere, land (vegetation and soil), and ocean. For the ocean, it has an impulse response model of the vertical advective/diffusive transport of carbon in the ocean. This is supplemented by a differential equation that models nonlinear ocean carbonate buffering chemistry. It doesn’t have any explicit treatment of ocean biology. For the terrestrial biosphere, it has a box model of the carbon cycle. There are four boxes, each containing some amount of carbon. They are "woody vegetation", "leafy vegetation", "detritus" (decomposing organic matter), and "humus" (more stable organic soil carbon). The box model has some equations describing how quickly carbon gets transported between these boxes (or back to the atmosphere). In addition to carbon emissions, both the land and ocean modules take global temperature as an input. (So, there should be a red arrow pointing to the "ocean" too — this is a mistake in the figure.) This is because there are temperature-dependent feedbacks in the carbon cycle. In the ocean, temperature determines how readily CO[2] will dissolve in water. On land, temperature influences how quickly organic matter in soil decays ("heterotrophic respiration"). There are also purely carbon cycle feedbacks, such as the buffering chemistry mentioned above, and also "CO[2] fertilization", which quantifies how plants can grow better under elevated levels of atmospheric CO[2]. The NICCS model also originally contained an impulse response model of the climate (temperature as a function of CO[2]), but we removed that and replaced it with DOECLIM. The NICCS model itself is tuned to reproduce the behavior of a more complex Earth system model.
The three key uncertain parameters treated in our analysis control the soil respiration temperature feedback, the CO[2] fertilization feedback, and the vertical mixing rate of carbon into the ocean. JB: Okay. Finally, how do you model the AMOC? NU: This is another box model. There is a classic 1961 paper by Stommel: • Henry Stommel, Thermohaline convection with two stable regimes of flow, Tellus 2 (1961), 224-230. which models the overturning circulation using two boxes of water, one representing water at high latitudes and one at low latitudes. The boxes contain heat and salt. Together, temperature and salinity determine water density, and density differences drive the flow of water between boxes. It has been shown that such box models can have interesting nonlinear dynamics, exhibiting both hysteresis and threshold behavior. Hysteresis means that if you warm the climate and then cool it back down to its original temperature, the AMOC doesn’t return to its original state. Threshold behavior means that the system exhibits multiple stable states (such as an ocean circulation with or without overturning), and you can pass a "tipping point" beyond which the system flips from one stable equilibrium to another. Ultimately, this kind of dynamics means that it can be hard to return the AMOC to its historic state if it shuts down from anthropogenic climate change. The extent to which the real AMOC exhibits hysteresis and threshold behavior remains an open question. The model we use in our paper is a box model that has this kind of nonlinearity in it: • Kirsten Zickfeld, Thomas Slawig and Stefan Rahmstorf, A low-order model for the response of the Atlantic thermohaline circulation to climate change, Ocean Dynamics 54 (2004), 8-26. Instead of Stommel’s two boxes, this model uses four boxes. It has three surface water boxes (north, south, and tropics), and one box for an underlying pool of deep water. Each box has its own temperature and salinity, and flow is driven by density gradients between them. The boxes have their own "relaxation temperatures" which each box tries to restore itself to upon perturbation; these parameters are set in a way that attempts to compensate for a lack of explicit feedback on global temperature. The model’s parameters are tuned to match the output of an intermediate complexity climate model. The input to the model is a change in global temperature (temperature anomaly). This is rescaled to produce different temperature anomalies over each of the three surface boxes (accounting for the fact that different latitudes are expected to warm at different rates). There are similar scalings to determine how much freshwater input, from both precipitation changes and meltwater, is expected in each of the surface boxes due to a temperature change. The main uncertain parameter is the "hydrological sensitivity" of the North Atlantic surface box, controlling how much freshwater goes into that region in a warming scenario. This is the main effect by which the AMOC can weaken. Actually, anything that changes the density of water alters the AMOC, so the overturning can weaken due to salinity changes from freshwater input, or from direct temperature changes in the surface waters. However, the former is more uncertain than the latter, so we focus on freshwater in our uncertainty analysis. JB: Great! I see you’re emphasizing the uncertain parameters; we’ll talk more later about how you estimate these parameters, though you’ve already sort of sketched the idea.
So: you’ve described to me the three components of your model: the climate, the carbon cycle and the Atlantic meridional overturning circulation (AMOC). I guess to complete the description of your model, you should say how these components interact — right?

NU: Right. There is a two-way coupling between the climate module (DOECLIM) and the carbon cycle module (NICCS). The global temperature from the climate module is fed into the carbon cycle module to predict temperature-dependent feedbacks. The atmospheric CO[2] predicted by the carbon cycle module is fed into the climate module to predict temperature from its greenhouse effect.

There is a one-way coupling between the climate module and the AMOC module. Global temperature alters the overturning circulation, but changes in the AMOC do not themselves alter global temperature.

There is no coupling between the AMOC module and the carbon cycle module, although there technically should be: both the overturning circulation and the uptake of carbon by the oceans depend on ocean vertical mixing processes. Similarly, the climate and carbon cycle modules have their own independent parameters controlling the vertical mixing of heat and carbon, respectively, in the ocean. In reality these mixing rates are related to each other. In this sense, the modules are not fully coupled, insofar as they have independent representations of physical processes that are not really independent of each other. This is discussed in our caveats.

JB: There’s one other thing that’s puzzling me. The climate model treats the "ocean" as a single entity whose temperature varies with depth but not location. The AMOC model involves four "boxes" of water: north, south, tropical, and deep ocean water, each with its own temperature. That seems a bit schizophrenic, if you know what I mean. How are these temperatures related in your model? You say "there is a one-way coupling between the climate module and the AMOC module." Does the ocean temperature in the climate model affect the temperatures of the four boxes of water in the AMOC model? And if so, how?

NU: The surface temperature in the climate model affects the temperatures of the individual surface boxes in the AMOC model. The climate model works only with globally averaged temperature. To convert a (change in) global temperature to (changes in) the temperatures of the surface boxes of the AMOC model, there is a "pattern scaling" coefficient which converts global temperature (anomaly) to temperature (anomaly) in a particular box. That is, if the climate model predicts 1 °C of warming globally, that might be more or less than 1 °C of warming in the North Atlantic, tropics, etc. For example, we generally expect to see "polar amplification", where the high northern latitudes warm more quickly than the global average. These latitudinal scaling coefficients are derived from the output of a more complex climate model under a particular warming scenario, and are assumed to be constant (independent of warming scenario).

The temperature from the climate model which is fed into the AMOC model is the global (land+ocean) average surface temperature, not the DOECLIM sea surface temperature alone. This is because the pattern scaling coefficients in the AMOC model were derived relative to global temperature, not sea surface temperature.

JB: Okay. That’s a bit complicated, but I guess some sort of consistency is built in, which prevents the climate model and the AMOC model from disagreeing about the ocean temperature.
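As a purely illustrative picture of that pattern-scaling step, the conversion from one global number to per-box forcings is just a set of fixed multipliers, something like the sketch below. The coefficients are invented here, not the calibrated values from the paper.

```python
# Illustrative pattern scaling (coefficients invented, not the calibrated ones).
# One global-mean temperature anomaly drives all three AMOC surface boxes.

PATTERN = {'north': 1.4, 'tropics': 0.8, 'south': 0.9}               # K of box warming per K of global warming
HYDRO_SENSITIVITY = {'north': 0.05, 'tropics': 0.01, 'south': 0.02}  # freshwater input per K (invented units)

def amoc_forcing(global_temperature_anomaly):
    """Convert a global-mean anomaly into per-box temperature and freshwater forcings."""
    temp = {box: c * global_temperature_anomaly for box, c in PATTERN.items()}
    freshwater = {box: h * global_temperature_anomaly for box, h in HYDRO_SENSITIVITY.items()}
    return temp, freshwater

# One-way coupling: the climate module's global temperature feeds the AMOC module,
# but nothing flows back the other way.
temperatures, freshwater = amoc_forcing(1.0)   # e.g. 1 K of global warming
```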
That’s what I was worrying about. Thanks for leading us through this model. I think this level of detail is just enough to get a sense for how it works. And now that I know roughly what your model is, I’m eager to see how you used it and what results you got!

But I’m afraid many of our readers may be nearing the saturation point. After all, I’ve been talking with you for days, with plenty of time to mull it over, while they will probably read this interview in one solid blast! So, I think we should quit here and continue in the next episode.

So, everyone: I’m afraid you’ll just have to wait, clutching your chair in suspense, for the answer to the big question: will the AMOC get turned off, or not? Or really: how likely is such an event, according to this simple model?

…we’re entering dangerous territory and provoking an ornery beast. Our climate system has proven that it can do very strange things. – Wallace S. Broecker

This Week’s Finds (Week 303)
30 September, 2010

Now for the second installment of my interview with Nathan Urban, a colleague who started out in quantum gravity and now works on "global climate change from an Earth system perspective, with an emphasis on Bayesian data-model calibration, probabilistic prediction, risk assessment, and decision analysis".

But first, a word about Bali. One of the great things about living in Singapore is that it’s close to a lot of interesting places. My wife and I just spent a week in Ubud. This town is the cultural capital of Bali — full of dance, music, and crafts. It’s also surrounded by astounding terraced rice paddies.

In his book Whole Earth Discipline, Stewart Brand says "one of the finest examples of beautifully nuanced ecosystem engineering is the thousand-year-old terraced rice irrigation complex in Bali". Indeed, when we took a long hike with a local guide, Made Dadug, we learned that all the apparent "weeds" growing in luxuriant disarray near the rice paddies were in fact carefully chosen plants: cacao, coffee, taro, ornamental flowers, and so on. "See this bush? It’s citronella — people working on the fields grab a pinch and use it for mosquito repellent." When a paddy loses its nutrients they plant sweet potatoes there instead of rice, to restore the soil.

Irrigation is managed by a system of local water temples, or "subaks". It’s not a top-down hierarchy: instead, each subak makes decisions in a more or less democratic way, while paying attention to what neighboring subaks do. Brand cites the work of Steve Lansing on this subject:

• J. Stephen Lansing, Perfect Order: Recognizing Complexity in Bali, Princeton U. Press, Princeton, New Jersey, 2006.

Physicists interested in the spontaneous emergence of order will enjoy this passage:

This book began with a question posed by a colleague. In 1992 I gave a lecture at the Santa Fe Institute, a recently created research center devoted to the study of "complex systems." My talk focused on a simulation model that my colleague James Kremer and I had created to investigate the ecological role of water temples. I need to explain a little about how this model came to be built; if the reader will bear with me, the relevance will soon become clear.

Kremer is a marine scientist, a systems ecologist, and a fellow surfer. One day on a California beach I told him the story of the water temples, and of my struggles to convince the consultants that the temples played a vital role in the ecology of the rice terraces.
I asked Jim if a simulation model, like the ones he uses to study coastal ecology, might help to clarify the issue. It was not hard to persuade him to come to Bali to take a look. Jim quickly saw that a model of a single water temple would not be very useful. The whole point about water temples is that they interact. Bali is a steep volcanic island, and the rivers and streams are short and fast. Irrigation systems begin high up on the volcanoes, and follow one after another at short intervals all the way to the seacoast. The amount of water each subak gets depends less on rainfall than on how much water is used by its upstream neighbors. Water temples provide a venue for the farmers to plan their irrigation schedules so as to avoid shortages when the paddies need to be flooded. If pests are a problem, they can synchronize harvests and flood a block of terraces so that there is nothing for the pests to eat. Decisions about water taken by each subak thus inevitably affect its neighbors, altering both the availability of water and potential levels of pest infestations. Jim proposed that we build a simulation model to capture all of these processes for an entire watershed. Having recently spent the best part of a year studying just one subak, the idea of trying to model nearly two hundred of them at once struck me as rather ambitious. But as Jim pointed out, the question is not whether flooding can control pests, but rather whether the entire collection of temples in a watershed can strike an optimal balance between water sharing and pest control. We set to work plotting the location of all 172 subaks lying between the Oos and Petanu rivers in central Bali. We mapped the rivers and irrigation systems, and gathered data on rainfall, river flows, irrigation schedules, water uptake by crops such as rice and vegetables, and the population dynamics of the major rice pests. With these data Jim constructed a simulation model. At the beginning of each year the artificial subaks in the model are given a schedule of crops to plant for the next twelve months, which defines their irrigation needs. Then, based on historic rainfall data, we simulate rainfall, river flow, crop growth, and pest damage. The model keeps track of harvest data and also shows where water shortages or pest damage occur. It is possible to simulate differences in rainfall patterns or the growth of different kinds of crops, including both native Balinese rice and the new rice promoted by the Green Revolution planners. We tested the model by simulating conditions for two cropping seasons, and compared its predictions with real data on harvest yields for about half the subaks. The model did surprisingly well, accurately predicting most of the variation in yields between subaks. Once we knew that the model’s predictions were meaningful, we used it to compare different scenarios of water management. In the Green Revolution scenario, every subak tries to plant rice as often as possible and ignores the water temples. This produces large crop losses from pest outbreaks and water shortages, much like those that were happening in the real world. In contrast, the “water temple” scenario generates the best harvests by minimizing pests and water shortages. Back at the Santa Fe Institute, I concluded this story on a triumphant note: consultants to the Asian Development Bank charged with evaluating their irrigation development project in Bali had written a new report acknowledging our conclusions. 
There would be no further opposition to management by water temples. When I finished my lecture, a researcher named Walter Fontana asked a question, the one that prompted this book: could the water temple networks self-organize? At first I did not understand what he meant by this. Walter explained that if he understood me correctly, Kremer and I had programmed the water temple system into our model, and shown that it had a functional role. This was not terribly surprising. After all, the farmers had had centuries to experiment with their irrigation systems and find the right scale of coordination. But what kind of solution had they found? Was there a need for a Great Designer or an Occasional Tinkerer to get the whole watershed organized? Or could the temple network emerge spontaneously, as one subak after another came into existence and plugged in to the irrigation systems? As a problem solver, how well could the temple networks do? Should we expect 10 percent of the subaks to be victims of water shortages at any given time because of the way the temple network interacts with the physical hydrology? Thirty percent? Two percent? Would it matter if the physical layout of the rivers were different? Or the locations of the temples? Answers to most of these questions could only be sought if we could answer Walter’s first large question: could the water temple networks self-organize? In other words, if we let the artificial subaks in our model learn a little about their worlds and make their own decisions about cooperation, would something resembling a water temple network emerge? It turned out that this idea was relatively easy to implement in our computer model. We created the simplest rule we could think of to allow the subaks to learn from experience. At the end of a year of planting and harvesting, each artificial subak compares its aggregate harvests with those of its four closest neighbors. If any of them did better, copy their behavior. Otherwise, make no changes. After every subak has made its decision, simulate another year and compare the next round of harvests. The first time we ran the program with this simple learning algorithm, we expected chaos. It seemed likely that the subaks would keep flipping back and forth, copying first one neighbor and then another as local conditions changed. But instead, within a decade the subaks organized themselves into cooperative networks that closely resembled the real ones. Lansing describes how attempts to modernize farming in Bali in the 1970′s proved problematic: To a planner trained in the social sciences, management by water temples looks like an arcane relic from the premodern era. But to an ecologist, the bottom-up system of control has some obvious advantages. Rice paddies are artificial aquatic ecosystems, and by adjusting the flow of water farmers can exert control over many ecological processes in their fields. For example, it is possible to reduce rice pests (rodents, insects, and diseases) by synchronizing fallow periods in large contiguous blocks of rice terraces. After harvest, the fields are flooded, depriving pests of their habitat and thus causing their numbers to dwindle. This method depends on a smoothly functioning, cooperative system of water management, physically embodied in proportional irrigation dividers, which make it possible to tell at a glance how much water is flowing into each canal and so verify that the division is in accordance with the agreed-on schedule. 
Modernization plans called for the replacement of these proportional dividers with devices called "Romijn gates," which use gears and screws to adjust the height of sliding metal gates inserted across the entrances to canals. The use of such devices makes it impossible to determine how much water is being diverted: a gate that is submerged to half the depth of a canal does not divert half the flow, because the velocity of the water is affected by the obstruction caused by the gate itself. The only way to accurately estimate the proportion of the flow diverted by a Romijn gate is with a calibrated gauge and a table. These were not supplied to the farmers, although $55 million was spent to install Romijn gates in Balinese irrigation canals, and to rebuild some weirs and primary canals. The farmers coped with the Romijn gates by simply removing them or raising them out of the water and leaving them to rust.

On the other hand, Made said that the people in his village really appreciated this modern dam. Using gears, it takes a lot less effort to open and close than the old-fashioned kind.

Later in this series of interviews we’ll hear more about sustainable agriculture from Thomas Fischbacher. But now let’s get back to Nathan!

JB: Okay. Last time we were talking about the things that altered your attitude about climate change when you started working on it. And one of them was how carbon dioxide stays in the atmosphere a long time. Why is that so important? And is it even true? After all, any given molecule of CO[2] that’s in the air now will soon get absorbed by the ocean, or taken up by plants.

NU: The longevity of atmospheric carbon dioxide is important because it determines the amount of time over which our actions now (fossil fuel emissions) will continue to have an influence on the climate, through the greenhouse effect.

You have heard correctly that a given molecule of CO[2] doesn’t stay in the atmosphere for very long. I think it’s about 5 years. This is known as the residence time or turnover time of atmospheric CO[2]. Maybe that molecule will go into the surface ocean and come back out into the air; maybe photosynthesis will bind it in a tree, in wood, until the tree dies and decays and the molecule escapes back to the atmosphere. This is a carbon cycle, so it’s important to remember that molecules can come back into the air even after they’ve been removed from it.

But the fate of an individual CO[2] molecule is not the same as how long it takes for the CO[2] content of the atmosphere to decrease back to its original level after new carbon has been added. The latter is the answer that really matters for climate change. Roughly, the former depends on the magnitude of the gross carbon sink, while the latter depends on the magnitude of the net carbon sink (the gross sink minus the gross source).

As an example, suppose that every year 100 units of CO[2] are emitted to the atmosphere from natural sources (organic decay, the ocean, etc.), and each year (say with a 5-year lag), 100 units are taken away by natural sinks (plants, the ocean, etc.). The 5-year lag actually doesn’t matter here; the system is in steady-state equilibrium, and the amount of CO[2] in the air is constant. Now suppose that humans add an extra 1 unit of CO[2] each year. If nothing else changes, then the amount of carbon in the air will increase every year by 1 unit, indefinitely. Far from the carbon being purged in 5 years, we end up with an arbitrarily large amount of carbon in the air.
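Nathan’s bookkeeping argument is easy to check in a few lines of code. This sketch just implements the toy numbers from his example (fixed natural source and sink of 100 units per year, plus 1 extra human unit per year); the initial stock of 500 units is invented so that the toy residence time comes out near the 5 years he mentions.

```python
# Toy accounting from the example above: the gross natural source and sink are both
# fixed at 100 units/yr, and humans add 1 extra unit/yr.  Individual molecules are
# swapped out quickly, yet the total stock never stops growing.

stock = 500.0                                  # atmospheric carbon, arbitrary units (invented)
natural_source, natural_sink = 100.0, 100.0    # fixed gross fluxes, units per year
human_emissions = 1.0                          # extra units per year

for year in range(1, 101):
    stock += natural_source + human_emissions - natural_sink
    if year in (1, 10, 100):
        residence_time = stock / natural_sink  # rough lifetime of an individual molecule
        print(f"year {year:3d}: stock = {stock:.0f}, residence time ~ {residence_time:.1f} yr")
```

The residence time stays near five years throughout, while the excess carbon accumulates without limit, which is exactly the distinction between residence time and adjustment time that Nathan goes on to explain.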
Even if you only add carbon to the atmosphere for a finite time (e.g., by running out of fossil fuels), the CO[2] concentration will ultimately reach, and then perpetually remain at, a level equivalent to the amount of new carbon added. Individual CO[2] molecules may still get absorbed within 5 years of entering the atmosphere, and perhaps fewer of the carbon atoms that were once in fossil fuels will ultimately remain in the atmosphere. But if natural sinks are only removing an amount of carbon equal in magnitude to natural sources, and both are fixed in time, you can see that if you add extra fossil carbon the overall atmospheric CO[2] concentration can never decrease, regardless of what individual molecules are doing. In reality, natural carbon sinks tend to grow in proportion to how much carbon is in the air, so atmospheric CO[2] doesn’t remain elevated indefinitely in response to a pulse of carbon into the air. This is kind of the biogeochemical analog to the "Planck feedback" in climate dynamics: it acts to restore the system to equilibrium. To first order, atmospheric CO[2] decays or "relaxes" exponentially back to the original concentration over time. But this relaxation time (variously known as a "response time", "adjustment time", "recovery time", or, confusingly, "residence time") isn’t a function of the residence time of a CO[2] molecule in the atmosphere. Instead, it depends on how quickly the Earth’s carbon removal processes react to the addition of new carbon. For example, how fast plants grow, die, and decay, or how fast surface water in the ocean mixes to greater depths, where the carbon can no longer exchange freely with the atmosphere. These are slower processes. There are actually a variety of response times, ranging from years to hundreds of thousands of years. The surface mixed layer of the ocean responds within a year or so; plants within decades to grow and take up carbon or return it to the atmosphere through rotting or burning. Deep ocean mixing and carbonate chemistry operate on longer time scales, centuries to millennia. And geologic processes like silicate weathering are even slower, tens of thousands of years. The removal dynamics are a superposition of all these processes, with a fair chunk taken out quickly by the fast processes, and slower processes removing the remainder more gradually. To summarize, as David Archer put it, "The lifetime of fossil fuel CO[2] in the atmosphere is a few centuries, plus 25 percent that lasts essentially forever." By "forever" he means "tens of thousands of years" — longer than the present age of human civilization. This inspired him to write this pop-sci book, taking a geologic view of anthropogenic climate change: • David Archer, The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth’s Climate, Princeton University Press, Princeton, New Jersey, 2009. A clear perspective piece on the lifetime of carbon is: • Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008. which is based largely on this review article: • David Archer, Michael Eby, Victor Brovkin, Andy Ridgwell, Long Cao, Uwe Mikolajewicz, Ken Caldeira, Katsumi Matsumoto, Guy Munhoven, Alvaro Montenegro, and Kathy Tokos, Atmospheric lifetime of fossil fuel carbon dioxide, Annual Review of Earth and Planetary Sciences 37 (2009), 117-134. For climate implications, see: • Susan Solomon, Gian-Kasper Plattner, Reto Knutti and Pierre Friedlingstein, Irreversible climate change due to carbon dioxide emissions, PNAS 106 (2009), 1704-1709. 
• M. Eby, K. Zickfeld, A. Montenegro, D. Archer, K. J. Meissner and A. J. Weaver, Lifetime of anthropogenic climate change: millennial time scales of potential CO[2] and surface temperature perturbations, Journal of Climate 22 (2009), 2501-2511. • Long Cao and Ken Caldeira, Atmospheric carbon dioxide removal: long-term consequences and commitment, Environmental Research Letters 5 (2010), 024011. For the very long term perspective (how CO[2] may affect the glacial-interglacial cycle over geologic time), see: • David Archer and Andrey Ganopolski, A movable trigger: Fossil fuel CO[2] and the onset of the next glaciation, Geochemistry Geophysics Geosystems 6 (2005), Q05003. JB: So, you’re telling me that even if we do something really dramatic like cut fossil fuel consumption by half in the next decade, we’re still screwed. Global warming will keep right on, though at a slower pace. Right? Doesn’t that make you feel sort of hopeless? NU: Yes, global warming will continue even as we reduce emissions, although more slowly. That’s sobering, but not grounds for total despair. Societies can adapt, and ecosystems can adapt — up to a point. If we slow the rate of change, then there is more hope that adaptation can help. We will have to adapt to climate change, regardless, but the less we have to adapt, and the more gradual the adaptation necessary, the less costly it will be. What’s even better than slowing the rate of change is to reduce the overall amount of it. To do that, we’d need to not only reduce carbon emissions, but to reduce them to zero before we consume all fossil fuels (or all of them that would otherwise be economically extractable). If we emit the same total amount of carbon, but more slowly, then we will get the same amount of warming, just more slowly. But if we ultimately leave some of that carbon in the ground and never burn it, then we can reduce the amount of final warming. We won’t be able to stop it dead, but even knocking a degree off the extreme scenarios would be helpful, especially if there are "tipping points" that might otherwise be crossed (like a threshold temperature above which a major ice sheet will disintegrate). So no, I don’t feel hopeless that we can, in principle, do something useful to mitigate the worst effects of climate change, even though we can’t plausibly stop or reverse it on normal societal timescales. But sometimes I do feel hopeless that we lack the public and political will to actually do so. Or at least, that we will procrastinate until we start seeing extreme consequences, by which time it’s too late to prevent them. Well, it may not be too late to prevent future, even more extreme consequences, but the longer we wait, the harder it is to make a dent in the problem. I suppose here I should mention the possibility of climate geoengineering, which is a proposed attempt to artificially counteract global warming through other means, such as reducing incoming sunlight with reflective particles in the atmosphere, or space mirrors. That doesn’t actually cancel all climate change, but it can negate a lot of the global warming. There are many risks involved, and I regard it as a truly last-ditch effort if we discover that we really are "screwed" and can’t bear the consequences. There is also an extreme form of carbon cycle geoengineering, known as air capture and sequestration, which extracts CO[2] from the atmosphere and sequesters it for long periods of time. 
There are various proposed technologies for this, but it’s highly uncertain whether this can feasibly be done on the necessary scales.

JB: Personally, I think society will procrastinate until we see extreme climate changes. Recently millions of Pakistanis were displaced by floods: a quarter of their country was covered by water. We can’t say for sure this was caused by global warming — but it’s exactly the sort of thing we should expect. But you’ll notice, this disaster is nowhere near enough to make politicians talk about cutting fossil fuel usage! It’ll take a lot of disasters like this to really catch people’s attention. And by then we’ll be playing a desperate catch-up game, while people in many countries are struggling to survive. That won’t be easy. Just think how little attention the Pakistanis can spare for global warming right now.

Anyway, this is just my own cheery view. But I’m not hopeless, because I think there’s still a lot we can do to prevent a terrible situation from becoming even worse. Since I don’t think the human race will go extinct anytime soon, it would be silly to "give up".

Now, you’ve just started a position at the Woodrow Wilson School at Princeton. When I was an undergrad there, this school was the place for would-be diplomats. What’s a nice scientist like you doing in a place like this? I see you’re in the Program in Science, Technology and Environmental Policy, or "STEP program". Maybe it’s too early for you to give a really good answer, but could you say a bit about what they do?

NU: Let me pause to say that I don’t know whether the Pakistan floods are "exactly the sort of thing we should expect" to happen to Pakistan, specifically, as a result of climate change. Uncertainty in the attribution of individual events is one reason why people don’t pay attention to them. But it is true that major floods are examples of extreme events which could become more (or less) common in various regions of the world in response to climate change.

Returning to your question, the STEP program includes a number of scientists, but we are all focused on policy issues because the Woodrow Wilson School is for public and international affairs. There are physicists who work on nuclear policy, ecologists who study environmental policy and conservation biology, atmospheric chemists who look at ozone and air pollution, and so on. Obviously, climate change is intimately related to public and international policy. I am mostly doing policy-relevant science but may get involved in actual policy to some extent. The STEP program has ties to other departments such as Geosciences, interdisciplinary umbrella programs like the Atmospheric and Ocean Sciences program and the Princeton Environmental Institute, and NOAA’s nearby Geophysical Fluid Dynamics Laboratory, one of the world’s leading climate modeling centers.

JB: How much do you want to get into public policy issues? Your new boss, Michael Oppenheimer, used to work as chief scientist for the Environmental Defense Fund. I hadn’t known much about them, but I’ve just been reading a book called The Climate War. This book says a lot about the Environmental Defense Fund’s role in getting the US to pass cap-and-trade legislation to reduce sulfur dioxide emissions. That’s quite an inspiring story! Many of the same people then went on to push for legislation to reduce greenhouse gases, and of course that story is less inspiring, so far: no success yet. Can you imagine yourself getting into the thick of these political endeavors?
NU: No, I don’t see myself getting deep into politics. But I am interested in what we should be doing about climate change, specifically, the economic assessment of climate policy in the presence of uncertainties and learning. That is, how hard should we be trying to reduce CO[2] emissions, accounting for the fact that we’re unsure what climate the future will bring, but expect to learn more over time. Michael is very interested in this question too, and the harder problem of "negative learning":

• Michael Oppenheimer, Brian C. O’Neill and Mort Webster, Negative learning, Climatic Change 89 (2008), 155-172.

"Negative learning" occurs if what we think we’re learning is actually converging on the wrong answer. How fast could we detect and correct such an error? It’s hard enough to give a solid answer to what we might expect to learn, let alone what we don’t expect to learn, so I think I’ll start with the former.

I am also interested in the value of learning. How will our policy change if we learn more? Can there be any change in near-term policy recommendations, or will we learn slowly enough that new knowledge will only affect later policies? Is it more valuable — in terms of its impact on policy — to learn more about the most likely outcomes, or should we concentrate on understanding better the risks of the worst-case scenarios? What will cause us to learn the fastest? Better surface temperature observations? Better satellites? Better ocean monitoring systems? What observables should we be looking at?

The question "How much should we reduce emissions?" is, partially, an economic one. The safest course of action from the perspective of climate impacts is to immediately reduce emissions to a much lower level. But that would be ridiculously expensive. So some kind of cost-benefit approach may be helpful: what should we do, balancing the costs of emissions reductions against their climate benefits, knowing that we’re uncertain about both.

I am looking at so-called "economic integrated assessment" models, which combine a simple model of the climate with an even simpler model of the world economy to understand how they influence each other. Some argue these models are too simple. I view them more as a way of getting order-of-magnitude estimates of the relative values of different uncertainty scenarios or policy options under specified assumptions, rather than something that can give us "The Answer" to what our emissions targets should be.

In a certain sense it may be moot to look at such cost-benefit analyses, since there is a huge difference between "what may be economically optimal for us to do" and "what we will actually do". We have not yet come close to meeting current policy recommendations, so what’s the point of generating new recommendations? That’s certainly a valid argument, but I still think it’s useful to have a sense of the gap between what we are doing and what we "should" be doing.

Economics can only get us so far, however (and maybe not far at all). Traditional approaches to economics have a very narrow way of viewing the world, and tend to ignore questions of ethics. How do you put an economic value on biodiversity loss? If we might wipe out polar bears, or some other species, or a whole lot of species, how much is it "worth" to prevent that? What is the Great Barrier Reef worth? Its value in tourism dollars? Its value in "ecosystem services" (the more nebulous economic activity which indirectly depends on its presence, such as fishing)?
Does it have intrinsic value, and is worth something (what?) to preserve, even if it has no quantifiable impact on the economy whatsoever? You can continue on with questions like this. Does it make sense to apply standard economic discounting factors, which effectively value the welfare of future generations less than that of the current generation? See for example: • John Quiggin, Stern and his critics on discounting and climate change: an editorial essay, Climatic Change 89 (2008), 195-205. Economic models also tend to preserve present economic disparities. Otherwise, their "optimal" policy is to immediately transfer a lot of the wealth of developed countries to developing countries — and this is without any climate change — to maximize the average "well-being" of the global population, on the grounds that a dollar is worth more to a poor person than a rich person. This is not a realistic policy and arguably shouldn’t happen anyway, but you do have to be careful about hard-coding potential inequities into your models: • Seth D. Baum and William E. Easterling, Space-time discounting in climate change adaptation, Mitigation and Adaptation Strategies for Global Change 15 (2010), 591-609. More broadly, it’s possible for economics models to allow sea level rise to wipe out Bangladesh, or other extreme scenarios, simply because some countries have so little economic output that it doesn’t "matter" if they disappear, as long as other countries become even more wealthy. As I said, economics is a narrow lens. After all that, it may seem silly to be thinking about economics at all. The main alternative is the "precautionary principle", which says that we shouldn’t take suspected risks unless we can prove them safe. After all, we have few geologic examples of CO[2] levels rising as far and as fast as we are likely to increase them — to paraphrase Wally Broecker, we are conducting an uncontrolled and possibly unprecedented experiment on the Earth. This principle has some merits. The common argument, "We should do nothing unless we can prove the outcome is disastrous", is a strange burden of proof from a decision analytic point of view — it has little to do with the realities of risk management under uncertainty. Nobody’s going to say "You can’t prove the bridge will collapse, so let’s build it". They’re going to say "Prove it’s safe (to within a certain guarantee) before we build it". Actually, a better analogy to the common argument might be: you’re driving in the dark with broken headlights, and insist “You’ll have to prove there are no cliffs in front of me before I’ll consider slowing down.” In reality, people should slow down, even if it makes them late, unless they know there are no cliffs. But the precautionary principle has its own problems. It can imply arbitrarily expensive actions in order to guard against arbitrarily unlikely hazards, simply because we can’t prove they’re safe, or precisely quantify their exact degree of unlikelihood. That’s why I prefer to look at quantitative cost-benefit analysis in a probabilistic framework. But it can be supplemented with other considerations. For example, you can look at stabilization scenarios: where you "draw a line in the sand" and say we can’t risk crossing that, and apply economics to find the cheapest way to avoid crossing the line. 
Then you can elaborate that to allow for some small but nonzero probability of crossing it, or to allow for temporary "overshoot", on the grounds that it might be okay to briefly cross the line, as long as we don’t stay on the other side indefinitely. You can tinker with discounting assumptions and the decision framework of expected utility maximization. And so on.

JB: This is fascinating stuff. You’re asking a lot of really important questions — I think I see about 17 question marks up there. Playing the devil’s advocate a bit, I could respond: do you know any answers? Of course I don’t expect "ultimate" answers, especially to profound questions like how much we should allow economics to guide our decisions, versus tempering it with other ethical considerations. But it would be nice to see an example where thinking about these issues turned up new insights that actually changed people’s behavior. Cases where someone said "Oh, I hadn’t thought of that…", and then did something different that had a real effect.

You see, right now the world as it is seems so far removed from the world as it should be that one can even start to doubt the usefulness of pondering the questions you’re raising. As you said yourself, "We’re not yet even coming close to current policy recommendations, so what’s the point of generating new recommendations?"

I think the cap-and-trade idea is a good example, at least as far as sulfur dioxide emissions go: the Clean Air Act Amendments of 1990 managed to reduce SO[2] emissions in the US from about 19 million tons in 1980 to about 7.6 million tons in 2007. Of course this idea is actually a bunch of different ideas that need to work together in a certain way… but anyway, some example related to global warming would be a bit more reassuring, given our current problems with that.

NU: Climate change economics has been very influential in generating momentum for putting a price on carbon (through cap-and-trade or otherwise), in Europe and the U.S., by showing that such a policy has the potential to be a net benefit considering the risks of climate change. SO[2] emissions markets are one relevant piece of this body of research, although the CO[2] problem is much bigger in scope and presents more problems for such approaches. Climate economics has been an important synthesis of decision analysis and scientific uncertainty quantification, which I think we need more of.

But to be honest, I’m not sure what immediate impact additional economic work may have on mitigation policy, unless we begin approaching current emissions targets. So from the perspective of immediate applications, I also ponder the usefulness of answering these questions.

That, however, is not the only perspective I think about. I’m also interested in how what we should do is related to what we might learn — if not today, then in the future. There are still important open questions about how well we can see something potentially bad coming, the answers to which could influence policies. For example, if a major ice sheet begins to substantially disintegrate within the next few centuries, would we be able to see that coming soon enough to step up our mitigation efforts in time to prevent it? In reality that’s a probabilistic question, but let’s pretend it’s a binary outcome. If the answer is "yes", that could call for increased investment in "early warning" observation systems, and a closer coupling of policy to the data produced by such systems.
(Well, we should be investing more in those anyway, but people might get the point more strongly, especially if research shows that we’d only see it coming if we get those systems in place and tested soon.)

If the answer is "no", that could go at least three ways. One way it could go is that the precautionary principle wins: if we think that we could put coastal cities under water, and we wouldn’t see it coming in time to prevent it, that might finally prompt more preemptive mitigation action. Another is that we start looking more seriously at last-ditch geoengineering approaches, or carbon air capture and sequestration. Or, if people give up on modifying the climate altogether, then it could prompt more research and development into adaptation. All of those outcomes raise new policy questions, concerning how much of what policy response we should aim for.

Which brings me to the next policy option. The U.S. presidential science advisor, John Holdren, has said that we have three choices for climate change: mitigate, adapt, or suffer. Regardless of what we do about the first, people will likely be doing some of the other two; the question is how much. If you’re interested in research that has a higher likelihood of influencing policy in the near term, adaptation is probably what you should work on. (That, or technological approaches like climate/carbon geoengineering, energy systems, etc.)

People are already looking very seriously at adaptation (and in some cases are already putting plans into place). For example, the Port Authority of Los Angeles needs to know whether, or when, to fortify their docks against sea level rise, and whether a big chunk of their business could disappear if the Northwest Passage through the Arctic Ocean opens permanently. They have to make these investment decisions regardless of what may happen with respect to geopolitical emissions reduction negotiations. The same kinds of learning questions I’m interested in come into play here: what will we know, and when, and how should current decisions be structured knowing that we will be able to periodically adjust those decisions?

So, why am I not working on adaptation? Well, I expect that I will be, in the future. But right now, I’m still interested in a bigger question, which is how well we can bound the large risks and our ability to prevent disasters, rather than just finding the best way to survive them. What is the best and the worst that can happen, in principle?

Also, I’m concerned that right now there is too much pressure to develop adaptation policies to a level of detail which we don’t yet have the scientific capability to provide. While global temperature projections are probably reasonable within their stated uncertainty ranges, we have a very limited ability to predict, for example, how precipitation may change over a particular city. But that’s what people want to know. So scientists are trying to give them an answer. But it’s very hard to say whether some of those answers right now are actionably credible. You have to choose your problems carefully when you work in adaptation. Right now I’m opting to look at sea level rise, partly because it is less affected by some of the details of local meteorology.

JB: Interesting. I think I’m going to cut our conversation here, because at this point it took a turn that will really force me to do some reading! And it’s going to take a while.
But it should be worth it.

The climatic impacts of releasing fossil fuel CO[2] to the atmosphere will last longer than Stonehenge, longer than time capsules, longer than nuclear waste, far longer than the age of human civilization so far. – David Archer

This Week’s Finds (Week 302)
9 September, 2010

In "week301" I sketched a huge picture with a very broad brush. Now I’d like to start filling in a few details: not just about the problems we face, but also about what we can do to tackle them. For the reasons I explained last time, I’ll focus on what scientists can do.

As I’m sure you’ve noticed, different people have radically different ideas about the mess we’re in, or if there even is a mess. Maybe carbon emissions are causing really dangerous global warming. Maybe they’re not — or at least, maybe it’s not as bad as some say. Maybe we need to switch away from fossil fuels to solar power, or wind. Maybe nuclear power is the answer, because solar and wind are intermittent. Maybe nuclear power is horrible! Maybe using less energy is the key. But maybe boosting efficiency isn’t the way to accomplish that. Maybe the problem is way too big for any of these conventional solutions to work. Maybe we need carbon sequestration: like, pumping carbon dioxide underground. Maybe we need to get serious about geoengineering — you know, something like giant mirrors in space, to cool down the Earth. Maybe geoengineering is absurdly impractical — or maybe it’s hubris on our part to think we could do it right! Maybe some radical new technology is the answer, like nanotech or biotech. Maybe we should build an intelligence greater than our own and let it solve our problems. Maybe all this talk is absurd.

Maybe all we need are some old technologies, like traditional farming practices, or biochar: any third-world peasant can make charcoal and bury it, harnessing the power of nature to do carbon sequestration without fancy machines. In fact, maybe we need to go back to nature and get rid of the modern technological civilization that’s causing our problems. Maybe this would cause massive famines. But maybe they’re bound to come anyway: maybe overpopulation lies at the root of our problems and only a population crash will solve them. Maybe that idea just proves what we’ve known all along: the environmental movement is fundamentally anti-human. Maybe all this talk is just focusing on symptoms: maybe what we need is a fundamental change in consciousness. Maybe that’s not possible. Maybe we’re just doomed. Or maybe we’ll muddle through the way we always do. Maybe, in fact, things are just fine!

To help sift through this mass of conflicting opinions, I think I’ll start by interviewing some people. I’ll start with Nathan Urban, for a couple of reasons. First, he can help me understand climate science and the whole business of how we can assess risks due to climate change. Second, like me, he started out working on quantum gravity! Can I be happy switching from pure math and theoretical physics to more practical stuff? Maybe talking to him will help me find out.

So, here is the first of several conversations with Nathan Urban. This time we’ll talk about what it’s like to shift careers, how he got interested in climate change, and the issue of "climate sensitivity": how much the temperature changes if you double the amount of carbon dioxide in the Earth’s atmosphere.
JB: It’s a real pleasure to interview you, since you’ve successfully made a transition that I’m trying to make now — from "science for its own sake" to work that may help save the planet. I can’t resist telling our readers that when we first met, you had applied to U.C. Riverside because you were interested in working on quantum gravity. You wound up going elsewhere… and now you’re at Princeton, at the Woodrow Wilson School of Public and International Affairs, working on "global climate change from an Earth system perspective, with an emphasis on Bayesian data-model calibration, probabilistic prediction, risk assessment, and decision analysis". That’s quite a shift! I’m curious about how you got from point A to point B. What was the hardest thing about it? NU: I went to Penn State because it had a big physics department and one of the leading centers in quantum gravity. A couple years into my degree my nominal advisor, Lee Smolin, moved to the Perimeter Institute in Canada. PI was brand new and didn’t yet have a formal affiliation with a university to support graduate students, so it was difficult to follow him there. I ended up staying at Penn State, but leaving gravity. That was the hardest part of my transition, as I’d been passionately interested in gravity since high school. I ultimately landed in computational statistical mechanics, partly due to the Monte Carlo computing background I’d acquired studying the dynamical triangulations approach to quantum gravity. My thesis work was interesting, but by the time I graduated, I’d decided it probably wasn’t my long term career. During graduate school I had become interested in statistics. This was partly from my Monte Carlo simulation background, partly from a Usenet thread on Bayesian statistics (archived on your web page ), and partly from my interest in statistical machine learning. I applied to a postdoc position in climate change advertised at Penn State which involved statistics and decision theory. At the time I had no particular plan to remain at Penn State, knew nothing about climate change, had no prior interest in it, and was a little skeptical that the whole subject had been exaggerated in the media … but I was looking for a job and it sounded interesting and challenging, so I accepted. I had a great time with that job, because it involved a lot of statistics and mathematical modeling, was very interdisciplinary — incorporating physics, geology, biogeochemistry, economics, public policy, etc. — and tackled big, difficult questions. Eventually it was time to move on, and I accepted a second postdoc at Princeton doing similar things. JB: It’s interesting that you applied for that Penn State position even though you knew nothing about climate change. I think there are lots of scientists who’d like to work on environmental issues but feel they lack the necessary expertise. Indeed I sometimes feel that way myself! So what did you do to bone up on climate change? Was it important to start by working with a collaborator who knew more about that side of things? NU: I think a physics background gives people the confidence (or arrogance!) to jump into a new field, trusting their quantitative skills to see them through. It was very much like starting over as a grad student again — an experience I’d had before, switching from gravity to condensed matter — except faster. I read. A lot. But at the same time, I worked on a narrowly defined project, in collaboration with an excellent mentor, to get my feet wet and gain depth. 
The best way to learn is probably to just try to answer some specific research question. You can pick up what you need to know as you go along, with help. (One difficulty is in identifying a good and accessible problem!) I started by reading the papers cited by the paper upon whose work my research was building. The IPCC Fourth Assessment Report came out shortly after that, which cites many more key references. I started following new articles in major journals, whatever seemed interesting or relevant to me. I also sampled some of the blog debates on climate change. Those were useful to understand what the public’s view of the important controversies may be, which is often very different from the actual controversies within the field. Some posters were willing to write useful tutorials on some aspects of the science as well. And of course I learned through research, through attending group meetings with collaborators, and talking to people. It’s very important to start out working with a knowledgeable collaborator, and I’m lucky to have many. The history of science is littered with very smart people making serious errors when they get out of their depth. The physicist Leo Szilard once told a biologist colleague to "assume infinite intelligence and zero prior knowledge" when explaining to him. The error some make is in believing that intelligence alone will suffice. You also have to acquire knowledge, and become intimately familiar with the relevant scientific literature. And you will make mistakes in a new field, no matter how smart you are. That’s where a collaborator is crucial: someone who can help you identify flaws in arguments that you may not notice yourself at first. (And it’s not just to start with, either: I still need collaborators to teach me things about specific models, or data sets, that I don’t know.) Collaborators also can help you become familiar with the literature faster. It’s helpful to have a skill that others need. I’ve built up expertise in statistical data-model comparison. I read as many statistics papers as I do climate papers, have statistician collaborators, and can speak their own language. I can act as an intermediary between scientists and statisticians. This expertise allows me to collaborate with some climate research groups who happen to lack such expertise themselves. As a result I have a lot of people who are willing to teach me what they know, so we can solve problems that neither of us alone could. JB: You said you began with a bit of skepticism that perhaps the whole climate change thing had been exaggerated in the media. I think a lot of people feel that way. I’m curious how your attitude evolved as you began studying the subject more deeply. That might be a big question, so maybe we can break it down a little: do you remember the first thing you read that made you think "Wow! I didn’t know that!"? NU: I’m not sure what was the first. It could have been that most of the warming from CO[2] is currently thought to come from feedback effects, rather than its direct greenhouse effect. Or that ice ages (technically, glacial periods) were only 5-6 °C cooler than our preindustrial climate, globally speaking. Many people would guess something much colder, like 10 °C. It puts future warming in perspective to think that it could be as large, or even half as large, as the warming between an ice age and today. 
"A few degrees" doesn’t sound like much (especially in Celsius, to an American), but historically, it can be a big deal — particularly if you care about the parts of the planet that warm faster than the average rate. Also, I was surprised by the atmospheric longevity of CO[2] concentrations. If CO[2] is a problem, it will be a problem that’s around for a long time. JB: These points are so important that I don’t want them to whiz past too quickly. So let me back up and ask a few more questions here. By "feedback effects", I guess you mean things like this: when it gets warmer, ice near the poles tends to melt. But ice is white, so it reflects sunlight. When ice melts, the landscape gets darker, and absorbs more sunlight, so it gets warmer. So the warming effect amplifies itself — like feedback when a rock band has its amplifiers turned up too high. On the other hand, any sort of cooling effect also amplifies itself. For example, when it gets colder, more ice forms, and that makes the landscape whiter, so more sunlight gets reflected, making it even colder. Could you maybe explain some of the main feedback effects and give us numbers that say how big they are? NU: Yes, feedbacks are when a change in temperature causes changes within the climate system that, themselves, cause further changes in temperature. Ice reflectivity, or "albedo", feedback is a good example. Another is water vapor feedback. When it gets warmer — due to, say, the CO[2] greenhouse effect — the evaporation-condensation balance shifts in favor of relatively more evaporation, and the water vapor content of the atmosphere increases. But water vapor, like CO[2], is a greenhouse gas, which causes additional warming. (The opposite happens in response to cooling.) These feedbacks which amplify the original cause (or "forcing") are known to climatologists as "positive feedbacks". A somewhat less intuitive example is the "lapse rate feedback". The greenhouse effect causes atmospheric warming. But this warming itself causes the vertical temperature profile of the atmosphere to change. The rate at which air temperature decreases with height, or lapse rate, can itself increase or decrease. This change in lapse rate depends on interactions between radiative transfer, clouds and convection, and water vapor. In the tropics, the lapse rate is expected to decrease in response to the enhanced greenhouse effect, amplifying the warming in the upper troposphere and suppressing it at the surface. This suppression is a "negative feedback" on surface temperature. Toward the poles, the reverse happens (a positive feedback), but the tropics tend to dominate, producing an overall negative feedback. Clouds create more complex feedbacks. Clouds have both an albedo effect (they are white and reflect sunlight) and a greenhouse effect. Low clouds tend to be thick and warm, with a high albedo and weak greenhouse effect, and so are net cooling agents. High clouds are often thin and cold, with low albedo and strong greenhouse effect, and are net warming agents. Temperature changes in the atmosphere can affect cloud amount, thickness, and location. Depending on the type of cloud and how temperature changes alter its behavior, this can result in either positive or negative feedbacks. There are other feedbacks, but these are usually thought of as the big four: surface albedo (including ice albedo), water vapor, lapse rate, and clouds. 
For the strengths of the feedbacks, I’ll refer to climate model predictions, mostly because they’re neatly summarized in one place:

• Section 8.6 of the Intergovernmental Panel on Climate Change Fourth Assessment Report, Working Group 1 (AR4 WG1).

There are also estimates made from observational data. (Well, data plus simple models, because you need some kind of model of how temperatures depend on CO[2], even if it’s just a simple linear feedback model.) But observational estimates are more scattered in the literature and harder to summarize, and some feedbacks are very difficult to estimate directly from data. This is a problem when testing the models. For now, I’ll stick to the models — not because they’re necessarily more credible than observational estimates, but just to make my job here easier.

Conventions vary, but the feedbacks I will give are measured in units of watts per square meter per kelvin. That is, they tell you how much of a radiative imbalance, or power flux, the feedback creates in the climate system in response to a given temperature change. The reciprocal of a feedback tells you how much temperature change you’d get in response to a given forcing.

Water vapor is the largest feedback. Referring to this paper cited in the AR4 WG1 report:

• Brian J. Soden and Isaac M. Held, An assessment of climate feedbacks in coupled ocean-atmosphere models, Journal of Climate 19 (2006), 3354-3360.

you can see that climate models predict a range of water vapor feedbacks of 1.48 to 2.14 W/m^2/K. The second largest in magnitude is the lapse rate feedback, -0.41 to -1.27 W/m^2/K. However, water vapor and lapse rate feedbacks are often combined into a single feedback, because stronger water vapor feedbacks also tend to produce stronger lapse rate feedbacks. The combined water vapor+lapse rate feedback ranges from 0.81 to 1.20 W/m^2/K. Clouds are the next largest feedback, 0.18 to 1.18 W/m^2/K. But as you can see, different models can predict very different cloud feedbacks. It is the largest feedback uncertainty. After that comes the surface albedo feedback. Its range is 0.07 to 0.34 W/m^2/K.

People don’t necessarily find feedback values intuitive. Since everyone wants to know what that means in terms of the climate, I’ll explain how to convert feedbacks into temperatures. First, you have to assume a given amount of radiative forcing: a stronger greenhouse effect causes more warming. For reference, let’s consider a doubling of atmospheric CO[2], which is estimated to create a greenhouse effect forcing of 4±0.3 W/m^2. (The error bars represent the range of estimates I’ve seen, and aren’t any kind of statistical bound.) How much greenhouse warming? In the absence of feedbacks, about 1.2±0.1 °C of warming. How much warming, including feedbacks? To convert a feedback to a temperature, add it to the so-called "Planck feedback" to get a combined feedback which accounts for the fact that hotter bodies radiate more infrared. Then divide it into the forcing and flip the sign to get the warming. Mathematically, this is….

JB: Whoa! Slow down! I’m glad you finally mentioned the "Planck feedback", because this is the mother of all feedbacks, and we should have talked about it first. While the name "Planck feedback" sounds technical, it’s pathetically simple: hotter things radiate more heat, so they tend to cool down. Cooler things radiate less heat, so they tend to warm up. So this is a negative feedback. And this is what keeps our climate from spiralling out of control.
This is an utterly basic point that amateurs sometimes overlook — I did it myself at one stage, I’m embarrassed to admit. They say things like: "Well, you listed a lot of feedback effects, and overall they give a positive feedback — so any bit of warming will cause more warming, while any bit of cooling will cause more cooling. But wouldn’t that mean the climate is unstable? Are you saying that the climate just happens to be perched at an unstable equilibrium, so that the slightest nudge would throw us into either an ice age or a spiral of ever-hotter weather? That’s absurdly unlikely! Climate science is a load of baloney!" (Well, I didn’t actually say the last sentence: I realized I must be confused.) The answer is that a hot Earth will naturally radiate away more heat, while a cold Earth will radiate away less. And this is enough to make the total feedback negative. NU: Yes, the negative Planck feedback is crucial. Without this stabilizing feedback, which is always present for any thermodynamic body, any positive feedback would cause the climate to run away unstably. It’s so important that other feedbacks are often defined relative to it: people call the Planck feedback λ[0], and they call the sum of the rest λ. Climatologists tend to take it for granted, and talk about just the non-Planck feedbacks, λ. As a side note, the definition of feedbacks in climate science is somewhat confused; different papers have used different conventions, some in opposition to conventions used in other fields like engineering. For a discussion of some of the ways feedbacks have been treated in the literature, see: • J. R. Bates, Some considerations of the concept of climate feedback, Quarterly Journal of the Royal Meteorological Society 133 (2007), 545-560. JB: Okay. Sorry to slow you down like that, but we’re talking to a mixed crowd here. So: you were saying how much it warms up when we apply a radiative forcing F, some number of watts per square meter. We could do this by turning up the dial on the Sun, or, more realistically, by pouring lots of carbon dioxide into the atmosphere to keep infrared radiation from getting out. And you said: take the Planck feedback λ[0], which is negative, and add to it the sum of all other feedbacks, which we call λ. Divide F by the result, and flip the sign to get the warming. NU: Right. Mathematically, that’s T = -F/(λ[0]+λ), where λ[0] = -3.2 W/m^2/K is the Planck feedback and λ is the sum of the other feedbacks. Let’s look at the forcing from doubled CO[2]: F = 4.3 W/m^2. Here I’m using values taken from Soden and Held. If the other feedbacks vanish (λ=0), this gives a "no-feedback" warming of T = 1.3 °C, which is about equal to the 1.2 °C that I mentioned above. But we can then plug in other feedback values. For example, the water vapor feedbacks 1.48-2.14 W/m^2/K will produce warmings of 2.5 to 4.1 °C, compared to only 1.3 °C without water vapor feedback. This is a huge temperature amplification. If you consider the combined water vapor+lapse rate feedback, that’s still a warming of 1.8 to 2.2 °C, almost a doubling of the "bare" CO[2] greenhouse warming. JB: Thanks for the intro to feedbacks — very clear. So, it seems the "take-home message", as annoying journalists like to put it, is this. When we double the amount of carbon dioxide in the atmosphere, as we’re well on the road to doing, we should expect significantly more than the 1.2 degree Celsius rise in temperature that we’d get without feedbacks. What are the best estimates for exactly how much?
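To make that arithmetic concrete, here is a minimal Python sketch that plugs the Soden & Held numbers quoted above into T = -F/(λ[0]+λ); the only inputs are the forcing and feedback values already given in the interview:

# Equilibrium warming T = -F / (lambda_0 + lam), using the values quoted above.
F = 4.3          # forcing from doubled CO2, W/m^2 (Soden & Held)
lambda_0 = -3.2  # Planck feedback, W/m^2/K

def warming(lam):
    # lam = sum of the non-Planck feedbacks, in W/m^2/K
    return -F / (lambda_0 + lam)

print(warming(0.0))                   # no extra feedbacks: about 1.3 K
print(warming(1.48), warming(2.14))   # water vapor alone: about 2.5 to 4.1 K
print(warming(0.81), warming(1.20))   # water vapor + lapse rate: about 1.8 to 2.2 K

These reproduce the 1.3 °C, the 2.5 to 4.1 °C, and the 1.8 to 2.2 °C figures NU just quoted.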
NU: The IPCC currently estimates a range of 2 to 4.5 °C for the overall climate sensitivity (the warming due to a doubling of CO[2]), compared to the 1.2 °C warming with no feedbacks. See Section 8.6 of the AR4 WG1 report for model estimates and Section 9.6 for observational estimates. An excellent review article on climate sensitivity is: • Reto Knutti and Gabriele C. Hegerl, The equilibrium sensitivity of the Earth’s temperature to radiation changes, Nature Geoscience 1 (2008), 735-748. I also recommend this review article on linear feedback analysis: • Gerard Roe, Feedbacks, timescales, and seeing red, Annual Review of Earth and Planetary Sciences 37 (2009), 93-115. But note that there are different feedback conventions; Roe’s λ is the negative of the reciprocal of the Soden & Held λ that I use, i.e. it’s a direct proportionality between forcing and temperature. JB: Okay, I’ll read those. Here’s another obvious question. You’ve listed estimates of feedbacks based on theoretical calculations. But what’s the evidence that these theoretical feedbacks are actually right? NU: As I mentioned, there are also observational estimates of feedbacks. There are two approaches: to estimate the total feedback acting in the climate system, or to estimate all the individual feedbacks (that we know about). The former doesn’t require us to know what all the individual feedbacks are, while the latter lets us verify our understanding of the individual physical feedback processes. I’m more familiar with the total feedback method, and have published my own simple estimate as a byproduct of an uncertainty analysis about the future ocean circulation: • Nathan M. Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale observations with a simple model, Tellus A, July 16, 2010. I will stick to discussing this method. To make a long story short, the observational and model estimates generally agree to within their estimated uncertainty bounds. But let me explain a bit more about where the observational estimates come from. To estimate the total feedback, you first estimate the radiative forcing of the system, based on historic data on greenhouse gases, volcanic and industrial aerosols, black carbon (soot), solar activity, and other factors which can change the Earth’s radiative balance. Then you predict how much warming you should get from that forcing using a climate model, and tune the model’s feedback until it matches the observed warming. The tuned feedback factor is your observational estimate. As I said earlier, there is no totally model-independent way of estimating feedbacks — you have to use some formula to turn forcings into temperatures. There is a balance between using simple formulas with few assumptions, or more realistic models with assumptions that are harder to verify. So far people have mostly used simple models, not only for transparency but also because they’re fast enough, and have few enough free parameters, to undertake a comprehensive uncertainty analysis. What I’ve described is the "forward model" approach, where you run a climate model forward in time and match its output to data. For a trivial linear model of the climate, you can do something even simpler, which is the closest to a "model independent" calculation you can get: statistically regress forcing against temperature. This is the approach taken by, for example: • Piers M. de F.
Forster and Jonathan M. Gregory, The climate sensitivity and its components diagnosed from Earth radiation budget data, Journal of Climate 19 (2006), 39-52. In the "total feedback" forward model approach, there are two major confounding factors which prevent us from making precise feedback estimates. One is that we’re not sure what the forcing is. Although we have good measurements of trace greenhouse gases, there is an important cooling effect produced by air pollution. Industrial emissions create a haze of aerosols in the atmosphere which reflects sunlight and cools the planet. While this can be measured, this direct effect is also supplemented by a far less understood indirect effect: the aerosols can influence cloud formation, which has its own climate effect. Since we’re not sure how strong that is, we’re not sure whether there is a strong or a weak net cooling effect from aerosols. You can explain the observed global warming with a strong feedback whose effects are partially cancelled by a strong aerosol cooling, or with a weak feedback along with weak aerosol cooling. Without precisely knowing one, you can’t precisely determine the other. The other confounding factor is the rate at which the ocean takes up heat from the atmosphere. The oceans are, by far, the climate system’s major heat sink. The rate at which heat mixes into the ocean determines how quickly the surface temperature responds to a forcing. There is a time lag between applying a forcing and seeing the full response realized. Any comparison of forcing to response needs to take that lag into account. One way to explain the surface warming is with a strong feedback but a lot of heat mixing down into the deeper ocean, so you don’t see all the surface warming at once. Or you can do it with a weak feedback, and most of the heat staying near the surface, so you see the surface warming quickly. For a discussion, see: • Nathan M. Urban and Klaus Keller, Complementary observational constraints on climate sensitivity, Geophysical Research Letters 36 (2009), L04708. We don’t know precisely what this rate is, since it’s been hard to monitor the whole ocean over long time periods (and there isn’t exactly a single "rate", either). This is getting long enough, so I’m going to skip over a discussion of individual feedback estimates. These have been applied to various specific processes, such as water vapor feedback, and involve comparing, say, how the water vapor content of the atmosphere has changed to how the temperature of the atmosphere has changed. I’m also skipping a discussion of paleoclimate estimates of past feedbacks. It follows the usual formula of "compare the estimated forcing to the reconstructed temperature response", but there are complications because the boundary conditions were different (different surface albedo patterns, variations in the Earth’s orbit, or even continental configurations if you go back far enough) and the temperatures can only be indirectly inferred. JB: Thanks for the summary of these complex issues. Clearly I’ve got my reading cut out for me. What do you say to people like Lindzen, who say negative feedbacks due to clouds could save the day? NU: Climate models tend to predict a positive cloud feedback, but it’s certainly possible that the net cloud feedback could be negative. However, Lindzen seems to think it’s so negative that it makes the total climate feedback negative, outweighing all positive feedbacks. 
That is, he claims a climate sensitivity even lower than the "bare" no-feedback value of 1.2 °C. I think Lindzen’s work has its own problems (there are published responses to his papers with more details). But generally speaking, independent of Lindzen’s specific arguments, I don’t think such a low climate sensitivity is supportable by data. It would be difficult to reproduce the modern instrumental atmospheric and ocean temperature data with such a low sensitivity. And it would be quite difficult to explain the large changes in the Earth’s climate over its geologic history if there were a stabilizing feedback that strong. The feedbacks I’ve mentioned generally act in response to any warming or cooling, not just from the CO[2] greenhouse effect, so a strongly negative feedback would tend to prevent the climate from changing much at all. JB: Yes, ever since the Antarctic froze over about 12 million years ago, it seems the climate has become increasingly "jittery": As soon as I saw the incredibly jagged curve at the right end of this graph, I couldn’t help but think that some positive feedback is making it easy for the Earth to flip-flop between warmer and colder states. But then I wondered what "tamed" this positive feedback and kept the temperature between certain limits. I guess that the negative Planck feedback must be involved. NU: You have to be careful: in the figure you cite, the resolution of the data decreases as you go back in time, so you can’t see all of the variability that could have been present. A lot of the high frequency variability (< 100 ky) is averaged out, so the more recent glacial-interglacial oscillations in temperature would not have been easily visible in the earlier data if they had occurred back then. That being said, there has been a real change in variability over the time span of that graph. As the climate cooled from a "greenhouse" to an "icehouse" over the Cenozoic era, the glacial-interglacial cycles were able to start. These big swings in climate are a result of ice albedo feedback, when large continental ice sheets form and disintegrate, and weren’t present in earlier greenhouse climates. Also, as you can see from the last 5 million years: the glacial-interglacial cycles themselves have gotten bigger over time (and the dominant period changed from 41 to 100 ky). As a side note, the observation that glacial cycles didn’t occur in hot climates highlights the fact that climate sensitivity can be state-dependent. The ice albedo feedback, for example, vanishes when there is no ice. This is a subtle point when using paleoclimate data to constrain the climate sensitivity, because the sensitivity at earlier times might not be the same as the sensitivity now. Of course, they are related to each other, and you can make inferences about one from the other with additional physical reasoning. I do stand by my previous remarks: I don’t think you can explain past climate if the (modern) sensitivity is below 1 °C. JB: I have one more question about feedbacks. It seems that during the last few glacial cycles, there’s sometimes a rise in temperature before a rise in CO[2] levels. I’ve heard people offer this explanation: warming oceans release CO[2]. Could that be another important feedback? NU: Temperature affects both land and ocean carbon sinks, so it is another climate feedback (warming changes the amount of CO[2] remaining in the atmosphere, which then changes temperature). 
The ocean is a very large repository of carbon, and both absorbs CO[2] from, and emits CO[2] to, the atmosphere. Temperature influences the balance between absorption and emission. One obvious influence is through the "solubility pump": CO[2] dissolves less readily in warmer water, so as temperatures rise, the ocean can absorb carbon from the atmosphere less effectively. This is related to Henry’s law in chemistry. JB: Henry’s law? Hmm, let me look up the Wikipedia article on Henry’s law. Okay, it basically just says that at any fixed temperature, the amount of carbon dioxide that’ll dissolve in water is proportional to the amount of carbon dioxide in the air. But what really matters for us is that when it gets warmer, this constant of proportionality goes down, so the water holds less CO[2]. Like you said. NU: But this is not the only process going on. Surface warming leads to more stratification of the upper ocean layers and can reduce the vertical mixing of surface waters into the depths. This is important to the carbon cycle because some of the dissolved CO[2] which is in the surface layers can return to the atmosphere, as part of an equilibrium exchange cycle. However, some of that carbon is also transported to deep water, where it can no longer exchange with the atmosphere, and can be sequestered there for a long time (about a millennium). If you reduce the rate at which carbon is mixed downward, so that relatively more carbon accumulates in the surface layers, you reduce the immediate ability of the ocean to store atmospheric CO[2] in its depths. This is another potential positive feedback. Another important process, which is more of a pure carbon cycle feedback than a climate feedback, is carbonate buffering chemistry. The straight Henry’s law calculation doesn’t tell the whole story of how carbon ends up in the ocean, because there are chemical reactions going on. CO[2] reacts with carbonate ions and seawater to produce bicarbonate ions. Most of the dissolved carbon in the surface waters (about 90%) exists as bicarbonate; only about 0.5% is dissolved CO[2], and the rest is carbonate. This "bicarbonate buffer" greatly enhances the ability of the ocean to absorb CO[2] from the atmosphere beyond what simple thermodynamic arguments alone would suggest. A keyword here is the "Revelle factor", which is the ratio of the relative (percentage) change in CO[2] to the relative change in total carbon in the ocean. (A Revelle factor of 10, which is about the ocean average, means that a 10% increase in CO[2] leads to a 1% increase in dissolved inorganic carbon.) As more CO[2] is added to the ocean, chemical reactions consume carbonate and produce hydrogen ions, leading to ocean acidification. You have already discussed this on your blog. In addition to acidification, the chemical buffering effect is lessened (the Revelle factor increased) when there are fewer carbonate ions available to participate in reactions. This weakens the ocean carbon sink. This is a feedback, but it is a purely carbon cycle feedback rather than a climate feedback, since only carbonate chemistry is involved. There can also be an indirect climate feedback, if climate change alters the spatial distribution of the Revelle factor in the ocean by changing the ocean’s circulation. For more on this, try Section 7.3.4 of the IPCC AR4 WG1 report and Sections 8.3 and 10.2 of: • J. L. Sarmiento and N. Gruber, Ocean Biogeochemical Dynamics, Princeton U. Press, Princeton, 2006. JB: I’m also curious about other feedbacks.
For example, I’ve heard that methane is an even more potent greenhouse gas than CO[2], though it doesn’t hang around as long. And I’ve heard that another big positive feedback mechanism might be the release of methane from melting permafrost. Or maybe even from "methane clathrates" down at the bottom of the ocean! There’s a vast amount of methane down there, locked in cage-shaped ice crystals. As the ocean warms, some of this could be released. Some people even worry that this effect could cause a "tipping point" in the Earth’s climate. But I won’t force you to tell me your opinions on this — you’ve done enough for one week. Instead, I just want to make a silly remark about hypothetical situations where there’s so much positive feedback that it completely cancels the Planck feedback. You see, as a mathematician, I couldn’t help wondering about this formula: T = -F/(λ[0]+λ) The Planck feedback λ[0] is negative. The sum of all the other feedbacks, namely λ, is positive. So what if they add up to zero? Then we’d be dividing by zero! When I last checked, that was a no-no. Here’s my guess. If λ[0]+λ becomes zero, the climate loses its stability: it can drift freely. A slight tap can push it arbitrarily far, like a ball rolling on a flat table. And if λ were actually big enough to make λ[0]+λ positive, the climate would be downright unstable, like a ball perched on top of a hill! But all this is only in some linear approximation. In reality, a hot object radiates power proportional to the fourth power of its temperature. So even if the Earth’s climate is unstable in some linear approximation, the Planck feedback due to radiation will eventually step in and keep the Earth from heating up, or cooling down, indefinitely. NU: Yes, we do have to be careful to remember that the formula above is obtained from a linear feedback analysis. For a discussion of climate sensitivity in a nonlinear analysis to second order, see: • I. Zaliapin and M. Ghil, Another look at climate sensitivity, Nonlinear Processes in Geophysics 17 (2010), 113-122. JB: Hmm, there’s some nice catastrophe theory in there — I see a fold catastrophe in Figure 5, which gives a "tipping point". Okay. Thanks for everything, and we’ll continue next week! The significant problems we have cannot be solved at the same level of thinking with which we created them. – Albert Einstein This Week’s Finds (Week 301) 27 August, 2010 The first 300 issues of This Week’s Finds were devoted to the beauty of math and physics. Now I want to bite off a bigger chunk of reality. I want to talk about all sorts of things, but especially how scientists can help save the planet. I’ll start by interviewing some scientists with different views on the challenges we face — including some who started out in other fields, because I’m trying to make that transition myself. By the way: I know “save the planet” sounds pompous. As George Carlin joked: “Save the planet? There’s nothing wrong with the planet. The planet is fine. The people are screwed.” (He actually put it a bit more colorfully.) But I believe it’s more accurate when he says: I think, to be fair, the planet probably sees us as a mild threat. Something to be dealt with. And I am sure the planet will defend itself in the manner of a large organism, like a beehive or an ant colony, and muster a defense. I think we’re annoying the biosphere. I’d like us to become less annoying, both for its sake and our own.
I actually considered using the slogan how scientists can help humans be less annoying — but my advertising agency ran a focus group, and they picked how scientists can help save the planet. Besides interviewing people, I want to talk about where we stand on various issues, and what scientists can do. It’s a very large task, so I’m really hoping lots of you reading this will help out. You can explain stuff, correct mistakes, and point me to good sources of information. With a lot of help from Andrew Stacey, I’m starting a wiki where we can collect these pointers. I’m hoping it will grow into something interesting. But today I’ll start with a brief overview, just to get things rolling. In case you haven’t noticed: we’re heading for trouble in a number of ways. Our last two centuries were dominated by rapid technology change and a rapidly soaring population: The population is still climbing fast, though the percentage increase per year is dropping. Energy consumption per capita is also rising. So, from 1980 to 2007 the world-wide usage of power soared from 10 to 16 terawatts. 96% of this power now comes from fossil fuels. So, we’re putting huge amounts of carbon dioxide into the air: 30 billion metric tons in 2007. So, the carbon dioxide concentration of the atmosphere is rising at a rapid clip: from about 290 parts per million before the industrial revolution, to about 370 in the year 2000, to about 390 now: As you’d expect, temperatures are rising: But how much will they go up? The ultimate amount of warming will largely depend on the total amount of carbon dioxide we put into the air. The research branch of the National Academy of Sciences recently put out a report on these issues: • National Research Council, Climate Stabilization Targets: Emissions, Concentrations, and Impacts over Decades to Millennia, 2010. Here are their estimates: You’ll note there’s lots of uncertainty, but a rough rule of thumb is that each doubling of carbon dioxide will raise the temperature around 3 degrees Celsius. Of course people love to argue about these things: you can find reasonable people who’ll give a number anywhere between 1.5 and 4.5 °C, and unreasonable people who say practically anything. We’ll get into this later, I’m sure. But anyway: if we keep up “business as usual”, it’s easy to imagine us doubling the carbon dioxide sometime this century, so we need to ask: what would a world 3 °C warmer be like? It doesn’t sound like much… until you realize that the Earth was only about 6 °C colder during the last ice age, and the Antarctic had no ice the last time the Earth was about 4 °C warmer. You also need to bear in mind the shocking suddenness of the current rise in carbon dioxide levels: You can see several ice ages here — or technically, ‘glacial periods’. Carbon dioxide concentration and temperature go hand in hand, probably due to some feedback mechanisms that make each influence the other. But the scary part is the vertical line on the right where the carbon dioxide shoots up from 290 to 390 parts per million — instantaneously from a geological point of view, and to levels not seen for a long time. Species can adapt to slow climate changes, but we’re trying a radical experiment here. But what, specifically, could be the effects of a world that’s 3 °C warmer? You can get some idea from the National Research Council report. Here are some of their predictions. I think it’s important to read these, to see that bad things will happen, but the world will not end. 
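A tiny Python sketch of that rule of thumb, just to show the arithmetic. It assumes warming scales with the logarithm of the CO2 concentration ratio, at roughly 3 °C per doubling; that logarithmic rule and the 3 °C figure are the rough assumptions stated above, nothing more precise:

import math

def rough_warming(c_new_ppm, c_old_ppm, per_doubling=3.0):
    # Eventual warming estimate: sensitivity per doubling times log2 of the concentration ratio.
    return per_doubling * math.log2(c_new_ppm / c_old_ppm)

print(rough_warming(390, 290))       # the rise so far: roughly 1.3 C of eventual warming
print(rough_warming(2 * 290, 290))   # a full doubling: 3 C, by construction

This ignores the ocean time lag discussed earlier, so it is an eventual (equilibrium) figure, not a prediction for any particular year.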
Psychologically, it’s easy to avoid taking action if you think there’s no problem — but it’s also easy if you think you’re doomed and there’s no point. Between their predictions (in boldface) I’ve added a few comments of my own. These comments are not supposed to prove anything. They’re just anecdotal examples of the kind of events the report says we should expect. • For 3 °C of global warming, 9 out of 10 northern hemisphere summers will be “exceptionally warm”: warmer in most land areas than all but about 1 of the summers from 1980 to 2000. This summer has certainly been exceptionally warm: for example, worldwide, it was the hottest June in recorded history, while July was the second hottest, beat out only by 2003. Temperature records have been falling like dominos. This is a taste of the kind of thing we might see. • Increases of precipitation at high latitudes and drying of the already semi-arid regions are projected with increasing global warming, with seasonal changes in several regions expected to be about 5-10% per degree of warming. However, patterns of precipitation show much larger variability across models than patterns of temperature. Back home in southern California we’re in our fourth year of drought, which has led to many wildfires. • Large increases in the area burned by wildfire are expected in parts of Australia, western Canada, Eurasia and the United States. We are already getting some unusually intense fires: for example, the Black Saturday bushfires that ripped through Victoria in February 2009, the massive fires in Greece later that year, and the hundreds of wildfires that broke out in Russia this July. • Extreme precipitation events — that is, days with the top 15% of rainfall — are expected to increase by 3-10% per degree of warming. The extent to which these events cause floods, and the extent to which these floods cause serious damage, will depend on many complex factors. But today it’s hard not to think about the floods in Pakistan, which left about 20 million homeless, and ravaged an area equal to that of California. • In many regions the amount of flow in streams and rivers is expected to change by 5-15% per degree of warming, with decreases in some areas and increases in others. • The total number of tropical cyclones should decrease slightly or remain unchanged. Their wind speed is expected to increase by 1-4% per degree of warming. It’s a bit counterintuitive that warming could decrease the number of cyclones, while making them stronger. I’ll have to learn more about this. • The annual average sea ice area in the Arctic is expected to decrease by 15% per degree of warming, with more decrease in the summertime. The area of Arctic ice reached a record low in the summer of 2007, and the fabled Northwest Passage opened up for the first time in recorded history. Then the ice area bounced back. This year it was low again… but what matters more is the overall trend: • Global sea level has risen by about 0.2 meters since 1870. The sea level rise by 2100 is expected to be at least 0.6 meters due to thermal expansion and loss of ice from glaciers and small ice caps. This could be enough to permanently displace as many as 3 million people — and raise the risk of floods for many millions more. Ice loss is also occurring in parts of Greenland and Antarctica, but the effect on sea level in the next century remains uncertain.
• Up to 2 degrees of global warming, studies suggest that crop yield gains and adaptation, especially at high latitudes, could balance losses in tropical and other regions. Beyond 2 degrees, studies suggest a rise in food prices. The first sentence there is the main piece of good news — though not if you’re a poor farmer in central Africa. • Increased carbon dioxide also makes the ocean more acidic and lowers the ability of many organisms to make shells and skeleta. Seashells, coral, and the like are made of aragonite, one of the two crystal forms of calcium carbonate. North polar surface waters will become under-saturated for aragonite if the level of carbon dioxide in the atmosphere rises to 400-450 parts per million. Then aragonite will tend to dissolve, rather than form from seawater. For south polar surface waters, this effect will occur at 500-660 ppm. Tropical surface waters and deep ocean waters are expected to remain supersaturated for aragonite throughout the 21st century, but coral reefs may be negatively impacted. Coral reefs are also having trouble due to warming oceans. For example, this summer there was a mass dieoff of corals off the coast of Indonesia due to ocean temperatures that were 4 °C higher than normal. • Species are moving toward the poles to keep cool: the average shift over many types of terrestrial species has been 6 kilometers per decade. The rate of extinction of species will be enhanced by climate change. I have a strong fondness for the diversity of animals and plants that grace this planet, so this particularly perturbs me. The report does not venture a guess for how many species may go extinct due to climate change, probably because it’s hard to estimate. However, it states that the extinction rate is now roughly 500 times what it was before humans showed up. The extinction rate is measured in extinctions per million years per species. For mammals, it’s shot up from roughly 0.1-0.5 to roughly 50-200. That’s what I call annoying the biosphere! So, that’s a brief summary of the problems that carbon dioxide emissions may cause. There’s just one more thing I want to say about this now. Once carbon dioxide is put into the atmosphere, about 50% of it will stay there for decades. About 30% of it will stay there for centuries. And about 20% will stay there for thousands of years: This particular chart is based on some 1993 calculations by Wigley. Later calculations confirm this idea: the carbon we burn will haunt our skies essentially forever: • Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008. This is why we’re in serious trouble. In the above article, James Hansen puts it this way: Because of this long CO[2] lifetime, we cannot solve the climate problem by slowing down emissions by 20% or 50% or even 80%. It does not matter much whether the CO[2] is emitted this year, next year, or several years from now. Instead … we must identify a portion of the fossil fuels that will be left in the ground, or captured upon emission and put back into the ground. But I think it’s important to be more precise. We can put off global warming by reducing carbon dioxide emissions, and that may be a useful thing to do. But to prevent it, we have to cut our usage of fossil fuels to a very small level long before we’ve used them up. Theoretically, another option is to quickly deploy new technologies to suck carbon dioxide out of the air, or cool the planet in other ways.
But there’s almost no chance such technologies will be practical soon enough to prevent significant global warming. They may become important later on, after we’ve already screwed things up. We may be miserable enough to try them, even though they may carry significant risks of their own. So now, some tough questions: If we decide to cut our usage of fossil fuels dramatically and quickly, how can we do it? How should we do it? What’s the least painful way? Or should we just admit that we’re doomed to global warming and learn to live with it, at least until we develop technologies to reverse it? And a few more questions, just for completeness: Could this all be just a bad dream — or more precisely, a delusion of some sort? Could it be that everything is actually fine? Or at least not as bad as you’re saying? I won’t attempt to answer any of these now. We’ll have to keep coming back to them, over and over. So far I’ve only talked about carbon dioxide emissions. There are lots of other problems we should tackle, too! But presumably many of these are just symptoms of some deeper underlying problem. What is this deeper problem? I’ve been trying to figure that out for years. Is there any way to summarize what’s going on, or is it just a big complicated mess? Here’s my attempt at a quick summary: the human race makes big decisions based on an economic model that ignores many negative externalities. A ‘negative externality’ is, very roughly, a way in which my actions impose a cost on you, for which I don’t pay any price. For example: suppose I live in a high-rise apartment and my toilet breaks. Instead of fixing it, I realize that I can just use a bucket — and throw its contents out the window! Whee! If society has no mechanism for dealing with people like me, I pay no price for doing this. But you, down there, will be very unhappy. This isn’t just theoretical. Once upon a time in Europe there were few private toilets, and people would shout “gardyloo!” before throwing their waste down to the streets below. In retrospect that seems disgusting, but many of the big problems that afflict us now can be seen as the result of equally disgusting externalities. For example: • Carbon dioxide pollution caused by burning fossil fuels. If the expected costs of global warming and ocean acidification were included in the price of fossil fuels, other sources of energy would more quickly become competitive. This is the idea behind a carbon tax or a ‘cap-and-trade program’ where companies pay for permits to put carbon dioxide into the atmosphere. • Dead zones. Put too much nitrogen and phosphorus in the river, and lots of algae will grow in the ocean near the river’s mouth. When the algae dies and rots, the water runs out of dissolved oxygen, and fish cannot live there. Then we have a ‘dead zone’. Dead zones are expanding and increasing in number. For example, there’s one about 20,000 square kilometers in size near the mouth of the Mississippi River. Hog farming, chicken farming and runoff from fertilized crop lands are largely to blame. • Overfishing. Since there is no ownership of fish, everyone tries to catch as many fish as possible, even though this is depleting fish stocks to the point of near-extinction. There’s evidence that populations of all big predatory ocean fish have dropped 90% since 1950. Populations of cod, bluefin tuna and many other popular fish have plummeted, despite feeble attempts at regulation. • Species extinction due to habitat loss.
Since the economic value of intact ecosystems has not been fully reckoned, in many parts of the world there’s little price to pay for destroying them. • Overpopulation. Rising population is a major cause of the stresses on our biosphere, yet it costs less to have your own child than to adopt one. (However, a pilot project in India is offering cash payments to couples who put off having children for two years after marriage.) One could go on; I haven’t even bothered to mention many well-known forms of air and water pollution. The Acid Rain Program in the United States is an example of how people eliminated an externality: they imposed a cap-and-trade system on sulfur dioxide pollution. Externalities often arise when we treat some resource as essentially infinite — for example fish, or clean water, or clean air. We thus impose no cost for using it. This is fine at first. But because this resource is free, we use more and more — until it no longer makes sense to act as if we have an infinite amount. As a physicist would say, the approximation breaks down, and we enter a new regime. This is happening all over the place now. We have reached the point where we need to treat most resources as finite and take this into account in our economic decisions. We can’t afford so many externalities. It is irrational to let them go on. But what can you do about this? Or what can I do? We can do the things anyone can do. Educate ourselves. Educate our friends. Vote. Conserve energy. Don’t throw buckets of crap out of apartment windows. But what can we do that maximizes our effectiveness by taking advantage of our special skills? Starting now, a large portion of This Week’s Finds will be the continuing story of my attempts to answer this question. I want to answer it for myself. I’m not sure what I should do. But since I’m a scientist, I’ll pose the question a bit more broadly, to make it a bit more interesting. How scientists can help save the planet — that’s what I want to know. Addendum: In the new This Week’s Finds, you can often find the source for a claim by clicking on the nearest available link. This includes the figures. Four of the graphs in this issue were produced by Robert A. Rohde and more information about them can be found at Global Warming Art. During the journey we commonly forget its goal. Almost every profession is chosen as a means to an end but continued as an end in itself. Forgetting our objectives is the most frequent act of stupidity. — Friedrich Nietzsche This Week’s Finds in Mathematical Physics (Week 300) 11 August, 2010 This is the last of the old series of This Week’s Finds. Soon the new series will start, focused on technology and environmental issues — but still with a hefty helping of math, physics, and other science. When I decided to do something useful for a change, I realized that the best way to start was by interviewing people who take the future and its challenges seriously, but think about it in very different ways. So far, I’ve done interviews with: • Tim Palmer on climate modeling and predictability. • Thomas Fischbacher on sustainability and permaculture. • Eliezer Yudkowsky on artificial intelligence and the art of rationality. I hope to do more. I think it’ll be fun having This Week’s Finds be a dialogue instead of a monologue now and then. Other things are changing too. I started a new blog!
If you’re interested in how scientists can help save the planet, I hope you visit: 1) Azimuth, http://johncarlosbaez.wordpress.com This is where you can find This Week’s Finds, starting now. Also, instead of teaching math in hot dry Riverside, I’m now doing research at the Centre for Quantum Technologies in hot and steamy Singapore. This too will be reflected in the new This Week’s Finds. But now… the grand finale of This Week’s Finds in Mathematical Physics! I’d like to take everything I’ve been discussing so far and wrap it up in a nice neat package. Unfortunately that’s impossible – there are too many loose ends. But I’ll do my best: I’ll tell you how to categorify the Riemann zeta function. This will give us a chance to visit lots of our old friends one last time: the number 24, string theory, zeta functions, torsors, Joyal’s theory of species, groupoidification, and more. Let me start by telling you how to count. I’ll assume you already know how to count elements of a set, and move right along to counting objects in a groupoid. A groupoid is a gadget with a bunch of objects and a bunch of isomorphisms between them. Unlike an element of a set, an object of a groupoid may have symmetries: that is, isomorphisms between it and itself. And unlike an element of a set, an object of a groupoid doesn’t always count as “1 thing”: when it has n symmetries, it counts as “1/nth of a thing”. That may seem strange, but it’s really right. We also need to make sure not to count isomorphic objects as different. So, to count the objects in our groupoid, we go through it, take one representative of each isomorphism class, and add 1/n to our count when this representative has n symmetries. Let’s see how this works. Let’s start by counting all the n-element sets! Now, you may have thought there were infinitely many sets with n elements, and that’s true. But remember: we’re not counting the set of n-element sets – that’s way too big. So big, in fact, that people call it a “class” rather than a set! Instead, we’re counting the groupoid of n-element sets: the groupoid with n-element sets as objects, and one-to-one and onto functions between these as isomorphisms. All n-element sets are isomorphic, so we only need to look at one. It has n! symmetries: all the permutations of n elements. So, the answer is 1/n!. That may seem weird, but remember: in math, you get to make up the rules of the game. The only requirements are that the game be consistent and profoundly fun – so profoundly fun, in fact, that it seems insulting to call it a mere “game”. Now let’s be more ambitious: let’s count all the finite sets. In other words, let’s work out the cardinality of the groupoid where the objects are all the finite sets, and the isomorphisms are all the one-to-one and onto functions between these. There’s only one 0-element set, and it has 0! symmetries, so it counts for 1/0!. There are tons of 1-element sets, but they’re all isomorphic, and they each have 1! symmetries, so they count for 1/1!. Similarly the 2-element sets count for 1/2!, and so on. So the total count is 1/0! + 1/1! + 1/2! + … = e The base of the natural logarithm is the number of finite sets! You learn something new every day. Spurred on by our success, you might want to find a groupoid whose cardinality is π.
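Before chasing π, here is a minimal numerical check of the count we just did: sum 1/n! over isomorphism classes of finite sets and watch it converge to e. Nothing is assumed beyond the sum written above; the Python is just bookkeeping:

import math

# Groupoid cardinality of the groupoid of finite sets:
# one isomorphism class for each n, contributing 1/n! (an n-element set has n! symmetries).
total = sum(1.0 / math.factorial(n) for n in range(15))
print(total)    # 2.718281828...
print(math.e)   # agrees with e to many decimal places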
It’s not hard to do: you can just find a groupoid whose cardinality is 3, and a groupoid whose cardinality is .1, and a groupoid whose cardinality is .04, and so on, and lump them all together to get a groupoid whose cardinality is 3.14… But this is a silly solution: it doesn’t shed any light on the nature of π. I don’t want to go into it in detail now, but the previous problem really does shed light on the nature of e: it explains why this number is related to combinatorics, and it gives a purely combinatorial proof that the derivative of e^x is e^x, and lots more. Try these books to see what I mean: 2) Herbert Wilf, Generatingfunctionology, Academic Press, Boston, 1994. Available for free at http://www.cis.upenn.edu/~wilf/. 3) F. Bergeron, G. Labelle, and P. Leroux, Combinatorial Species and Tree-Like Structures, Cambridge, Cambridge U. Press, 1998. For example: if you take a huge finite set, and randomly pick a permutation of it, the chance every element is mapped to a different element is close to 1/e. It approaches 1/e in the limit where the set gets larger and larger. That’s well-known – but the neat part is how it’s related to the cardinality of the groupoid of finite sets. Anyway, I have not succeeded in finding a really illuminating groupoid whose cardinality is π, but recently James Dolan found a nice one whose cardinality is π^2/6, and I want to lead up to that. Here’s a not-so-nice groupoid whose cardinality is π^2/6. You can build a groupoid as the “disjoint union” of a collection of groups. How? Well, you can think of a group as a groupoid with one object: just one object having that group of symmetries. And you can build more complicated groupoids as disjoint unions of groupoids with one object. So, if you give me a collection of groups, I can take their disjoint union and get a groupoid. So give me this collection of groups: Z/1×Z/1, Z/2×Z/2, Z/3×Z/3, … where Z/n is the integers mod n, also called the “cyclic group” with n elements. Then I’ll take their disjoint union and get a groupoid, and the cardinality of this groupoid is 1/1^2 + 1/2^2 + 1/3^2 + … = π^2/6 This is not as silly as the trick I used to get a groupoid whose cardinality is π, but it’s still not perfectly satisfying, because I haven’t given you a groupoid of “interesting mathematical gadgets and isomorphisms between them”, as I did for e. Later we’ll see Jim’s better answer. We might also try taking various groupoids of interesting mathematical gadgets and computing their cardinality. For example, how about the groupoid of all finite groups? I think that’s infinite – there are just “too many”. How about the groupoid of all finite abelian groups? I’m not sure, that could be infinite too. But suppose we restrict ourselves to abelian groups whose size is some power of a fixed prime p? Then we’re in business! The answer isn’t a famous number like π, but it was computed by Philip Hall here: 4) Philip Hall, A partition formula connected with Abelian groups, Comment. Math. Helv. 11 (1938), 126-129. We can write the answer using an infinite product: 1/(1-p^-1)(1-p^-2)(1-p^-3) … Or, we can write the answer using an infinite sum: p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + … Here p(n) is the number of “partitions” of n: that is, the number of ways to write it as a sum of positive integers in decreasing order. For example, p(4) = 5 since we can write 4 as a sum in 5 ways:
4 = 4
4 = 3+1
4 = 2+2
4 = 2+1+1
4 = 1+1+1+1
If you haven’t thought about this before, you can have fun proving that the infinite product equals the infinite sum.
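If you would rather check it numerically than prove it, here is a small Python sketch comparing a truncated version of the product with a truncated version of the sum, at p = 2. The truncation depth is an arbitrary choice of mine:

# Compare the truncated product 1/(1-p^-1)...(1-p^-N) with the truncated sum of p(n)/p^n, at p = 2.
p = 2.0
N = 30   # truncation depth (arbitrary)

# Partition numbers p(0), ..., p(N) by dynamic programming over allowed part sizes.
part = [1] + [0] * N
for k in range(1, N + 1):
    for m in range(k, N + 1):
        part[m] += part[m - k]

product = 1.0
for k in range(1, N + 1):
    product /= 1.0 - p ** (-k)

series = sum(part[n] / p ** n for n in range(N + 1))
print(product, series)   # both are about 3.4627, and the agreement improves as N grows

For p = 2 both sides converge to about 3.46; for larger p they converge faster, since the terms shrink like p^-n.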
It’s a cute fact, and quite famous. But Hall proved something even cuter. This number p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + … is also the cardinality of another, really different groupoid. Remember how I said you can build a groupoid as the “disjoint union” of a collection of groups? To get this other groupoid, we take the disjoint union of all the abelian groups whose size is a power of p. Hall didn’t know about groupoid cardinality, so here’s how he said it: The sum of the reciprocals of the orders of all the Abelian groups of order a power of p is equal to the sum of the reciprocals of the orders of their groups of automorphisms. It’s pretty easy to see that the sum of the reciprocals of the orders of all the Abelian groups of order a power of p is p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + … To do this, you just need to show that there are p(n) abelian groups with p^n elements. If I show you how it works for n = 4, you can guess how the proof works in general:
4 = 4 Z/p^4
4 = 3+1 Z/p^3 × Z/p
4 = 2+2 Z/p^2 × Z/p^2
4 = 2+1+1 Z/p^2 × Z/p × Z/p
4 = 1+1+1+1 Z/p × Z/p × Z/p × Z/p
So, the hard part is showing that p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + … is also the sum of the reciprocals of the sizes of the automorphism groups of all abelian groups whose size is a power of p. I learned of Hall’s result from Aviv Censor, a colleague who is an expert on groupoids. He had instantly realized this result had a nice formulation in terms of groupoid cardinality. We went through several proofs, but we haven’t yet been able to extract any deep inner meaning from them: 5) Avinoam Mann, Philip Hall’s “rather curious” formula for abelian p-groups, Israel J. Math. 96 (1996), part B, 445-448. 6) Francis Clarke, Counting abelian group structures, Proceedings of the AMS, 134 (2006), 2795-2799. However, I still have hopes, in part because the math is related to zeta functions… and that’s what I want to turn to now. Let’s do another example: what’s the cardinality of the groupoid of semisimple commutative rings with n elements? What’s a semisimple commutative ring? Well, since we’re only talking about finite ones, I can avoid giving the general definition and take advantage of a classification theorem. Finite semisimple commutative rings are the same as finite products of finite fields. There’s a finite field with p^n elements whenever p is prime and n is a positive integer. This field is called F[p^n], and it has n symmetries. And that’s all the finite fields! In other words, they’re all isomorphic to these. This is enough to work out the cardinality of the groupoid of semisimple commutative rings with n elements. Let’s do some examples. Let’s try n = 6, for example. This one is pretty easy. The only way to get a finite product of finite fields with 6 elements is to take the product of F[2] and F[3]: F[2] × F[3] This has just one symmetry – the identity – since that’s all the symmetries either factor has, and there’s no symmetry that interchanges the two factors. (Hmm… you may need to check this, but it’s not hard.) Since we have one object with one symmetry, the groupoid cardinality is 1/1 = 1 Let’s try a more interesting one, say n = 4. Now there are two options: F[4], and F[2] × F[2]. The first option has 2 symmetries: remember, F[p^n] has n symmetries. The second option also has 2 symmetries, namely the identity and the symmetry that switches the two factors. So, the groupoid cardinality is 1/2 + 1/2 = 1 But now let’s try something even more interesting, like n = 16.
Now there are 5 options: The field F[16] has 4 symmetries because 16 = 2^4, and any field F[p^n] has n symmetries. F[8]×F[2] has 3 symmetries, coming from the symmetries of the first factor. F[4]×F[4] has 2 symmetries in each factor and 2 coming from permutations of the factors, for a total of 2×2×2 = 8. F[4]×F[2]×F[2] has 2 symmetries coming from those of the first factor, and 2 symmetries coming from permutations of the last two factors, for a total of 2×2 = 4 symmetries. And finally, F[2]×F[2]×F[2]×F[2] has 24 symmetries coming from permutations of the factors. So, the cardinality of this groupoid works out to be 1/4 + 1/3 + 1/8 + 1/4 + 1/24 Hmm, let’s put that on a common denominator: 6/24 + 8/24 + 3/24 + 6/24 + 1/24 = 24/24 = 1 So, we’re getting the same answer again: 1. Is this just a weird coincidence? No: this is what we always get! For any positive integer n, the groupoid of n-element semisimple commutative rings has cardinality 1. For a proof, see: 7) John Baez and James Dolan, Zeta functions, at http://ncatlab.org/johnbaez/show/Zeta+functions Now, you might think this fact is just a curiosity, but actually it’s a step towards categorifying the Riemann zeta function. The Riemann zeta function is ζ(s) = ∑[n > 0] n^-s It’s an example of a “Dirichlet series”, meaning a series of this form: ∑[n > 0] a[n] n^-s In fact, any reasonable way of equipping finite sets with extra stuff gives a Dirichlet series – and if this extra stuff is “being a semisimple commutative ring”, we get the Riemann zeta function. To explain this, I need to remind you about “stuff types”, and then explain how they give Dirichlet series. A stuff type is a groupoid Z where the objects are finite sets equipped with “extra stuff” of some type. More precisely, it’s a groupoid with a functor to the groupoid of finite sets. For example, Z could be the groupoid of finite semisimple commutative rings – that’s the example we care about now. Here the functor forgets that we have a semisimple commutative ring, and only remembers the underlying finite set. In other words, it forgets the “extra stuff”. In this example, the extra stuff is really just extra structure, namely the structure of being a semisimple commutative ring. But we could also take Z to be the groupoid of pairs of finite sets. A pair of finite sets is a finite set equipped with honest-to-goodness extra stuff, namely another finite set! Structure is a special case of stuff. If you’re not clear on the difference, try this: 8) John Baez and Mike Shulman, Lectures on n-categories and cohomology, Sec. 2.4: Stuff, structure and properties, in n-Categories: Foundations and Applications, eds. John Baez and Peter May, Springer, Berlin, 2009. Also available as arXiv:math/0608420. Then you can tell your colleagues: “I finally understand stuff.” And they’ll ask: “What stuff?” And you can answer, rolling your eyes condescendingly: “Not any particular stuff – just stuff, in general.” But it’s not really necessary to understand stuff in general here. Just think of a stuff type as a groupoid where the objects are finite sets equipped with extra bells and whistles of some particular type. Now, if we have a stuff type, say Z, we get a list of groupoids Z(n). How? Simple! Objects of Z are finite sets equipped with some particular type of extra stuff. So, we can take the objects of Z(n) to be the n-element sets equipped with that type of extra stuff. The groupoid Z will be a disjoint union of these groupoids Z(n).
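Here is a minimal Python sketch that mechanically re-does the bookkeeping above for small n: it enumerates the ways of writing an n-element semisimple commutative ring as a product of finite fields, counts symmetries the same way as above (k symmetries for F[p^k], times permutations of repeated factors), and sums the reciprocals. The function names are mine, invented for illustration; the math is exactly the computation in the text:

from collections import Counter
from fractions import Fraction
from math import factorial

def field_symmetries(q):
    # If q = p^k for a prime p, return k (the number of automorphisms of F[q]); otherwise None.
    n, p = q, 2
    while p * p <= n:
        if n % p == 0:
            k = 0
            while n % p == 0:
                n //= p
                k += 1
            return k if n == 1 else None
        p += 1
    return 1   # q itself is prime

def factor_multisets(n, smallest=2):
    # All multisets (as nondecreasing tuples) of integers >= smallest whose product is n.
    if n == 1:
        yield ()
        return
    for d in range(smallest, n + 1):
        if n % d == 0:
            for rest in factor_multisets(n // d, d):
                yield (d,) + rest

def groupoid_cardinality(n):
    total = Fraction(0)
    for factors in factor_multisets(n):
        syms = [field_symmetries(q) for q in factors]
        if None in syms:
            continue   # some factor size is not a prime power, so there is no such field
        aut = 1
        for q, count in Counter(factors).items():
            aut *= field_symmetries(q) ** count * factorial(count)
        total += Fraction(1, aut)
    return total

for n in range(2, 31):
    print(n, groupoid_cardinality(n))   # the cardinality is 1 for every n

For n = 16 the five factorizations contribute 1/4 + 1/3 + 1/8 + 1/4 + 1/24, exactly as above.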
We can encode the cardinalities of all these groupoids into a Dirichlet series: z(s) = ∑[n > 0] |Z(n)| n^-s where |Z(n)| is the cardinality of Z(n). In case you’re wondering about the minus sign: it’s just a dumb convention, but I’m too overawed by the authority of tradition to dream of questioning it, even though it makes everything to come vastly more ugly. Anyway: the point is that a Dirichlet series is like the “cardinality” of a stuff type. To show off, we say stuff types categorify Dirichlet series: they contain more information, and they’re objects in a category (or something even better, like a 2-category) rather than elements of a set. Let’s look at an example. When Z is the groupoid of finite semisimple commutative rings, then |Z(n)| = 1 so the corresponding Dirichlet series is the Riemann zeta function: z(s) = ζ(s) So, we’ve categorified the Riemann zeta function! Using this, we can construct an interesting groupoid whose cardinality is ζ(2) = ∑[n > 0] n^-2 = π^2/6 How? Well, let’s step back and consider a more general problem. Any stuff type Z gives a Dirichlet series z(s) = ∑[n > 0] |Z(n)| n^-s How can we use this to concoct a groupoid whose cardinality is z(s) for some particular value of s? It’s easy when s is a negative integer (here that minus sign raises its ugly head). Suppose S is a set with s elements: |S| = s Then we can define a groupoid as follows: Z(-S) = ∑[n > 0] Z(n) × n^S Here we are playing some notational tricks: n^S means “the set of functions from S to our favorite n-element set”, the symbol × stands for the product of groupoids, and ∑ stands for what I’ve been calling the “disjoint union” of groupoids (known more technically as the “coproduct”). So, Z(-S) is a groupoid. But this formula is supposed to remind us of a simpler one, namely z(-s) = ∑[n > 0] |Z(n)| n^s and indeed it’s a categorified version of this simpler formula. In particular, if we take the cardinality of the groupoid Z(-S), we get the number z(-s). To see this, you just need to check each step in this calculation: |Z(-S)| = |∑ Z(n) × n^S| = ∑ |Z(n) × n^S| = ∑ |Z(n)| × |n^S| = ∑ |Z(n)| × n^s = z(-s) The notation is supposed to make these steps seem plausible. Even better, the groupoid Z(-S) has a nice description in plain English: it’s the groupoid of finite sets equipped with Z-stuff and a map from the set S. Well, okay – I’m afraid that’s what passes for plain English among mathematicians! We don’t talk to ordinary people very often. But the idea is really simple. Z is some sort of stuff that we can put on a finite set. So, we can do that and also choose a map from S to that set. And there’s a groupoid of finite sets equipped with all this extra baggage, and isomorphisms between those. If this sounds too abstract, let’s do an example. Take our favorite example, where Z is the groupoid of finite semisimple commutative rings. Then Z(-S) is the groupoid of finite semisimple commutative rings equipped with a map from the set S. If this still sounds too abstract, let’s do an example. Do I sound repetitious? Well, you see, category theory is the subject where you need examples to explain your examples – and n-category theory is the subject where this process needs to be repeated n times. So, suppose S is a 1-element set – we can just write S = 1 Then Z(-1) is a groupoid where the objects are finite semisimple commutative rings with a chosen element. The isomorphisms are ring isomorphisms that preserve the chosen element.
And the cardinality of this groupoid is |Z(-1)| = ζ(-1) = 1 + 2 + 3 + … Whoops – it diverges! Luckily, people who study the Riemann zeta function know that 1 + 2 + 3 + … = -1/12 They get this crazy answer by analytically continuing the Riemann zeta function ζ(s) from values of s with a big positive real part, where it converges, over to values where it doesn’t. And it turns out that this trick is very important in physics. In fact, back in "week124" – "week126", I explained how this formula ζ(-1) = -1/12 is the reason bosonic string theory works best when our string has 24 extra dimensions to wiggle around in besides the 2 dimensions of the string worldsheet itself. So, if we’re willing to allow this analytic continuation trick, we can say that THE GROUPOID OF FINITE SEMISIMPLE COMMUTATIVE RINGS WITH A CHOSEN ELEMENT HAS CARDINALITY -1/12 Someday people will see exactly how this is related to bosonic string theory. Indeed, it should be just a tiny part of a big story connecting number theory to string theory… some of which is explained here: 9) J. M. Luck, P. Moussa, and M. Waldschmidt, eds., Number Theory and Physics, Springer Proceedings in Physics, Vol. 47, Springer-Verlag, Berlin, 1990. 10) C. Itzykson, J. M. Luck, P. Moussa, and M. Waldschmidt, eds., From Number Theory to Physics, Springer, Berlin, 1992. Indeed, as you’ll see in these books (or in "week126"), the function we saw earlier: 1/(1-p^-1)(1-p^-2)(1-p^-3) … = p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + … is also important in string theory: it shows up as a “partition function”, in the physical sense, where the number p(n) counts the number of ways a string can have energy n if it has one extra dimension to wiggle around in besides the 2 dimensions of its worldsheet. But it’s the 24th power of this function that really matters in string theory – because bosonic string theory works best when our string has 24 extra dimensions to wiggle around in. For more details, see: 11) John Baez, My favorite numbers: 24. Available at http://math.ucr.edu/home/baez/numbers/24.pdf But suppose we don’t want to mess with divergent sums: suppose we want a groupoid whose cardinality is, say, ζ(2) = 1^-2 + 2^-2 + 3^-2 + … = π^2/6 Then we need to categorify the evaluation of Dirichlet series at positive integers. We can only do this for certain stuff types – for example, our favorite one. So, let Z be the groupoid of finite semisimple commutative rings, and let S be a finite set. How can we make sense of Z(S) = ∑[n > 0] Z(n) × n^-S ? The hard part is n^-S, because this has a minus sign in it. How can we raise an n-element set to the -Sth power? If we could figure out some sort of groupoid that serves as the reciprocal of an n-element set, we’d be done, because then we could take that to the Sth power. Remember, S is a finite set, so to raise something (even a groupoid) to the Sth power, we just multiply a bunch of copies of that something – one copy for each element of S. So: what’s the reciprocal of an n-element set? There’s no answer in general – but there’s a nice answer when that set is a group, because then that group gives a groupoid with one object, and the cardinality of this groupoid is just 1/n. Here is where our particular stuff type Z comes to the rescue. Each object of Z(n) is a semisimple commutative ring with n elements, so it has an underlying additive group – which is a group with n elements. So, we don’t interpret Z(n) × n^-S as an ordinary product, but something a bit sneakier, a “twisted product”.
An object in Z(n) × n^-S is just an object of Z(n), that is an n-element semisimple commutative ring. But we define a symmetry of such an object to be a symmetry of this ring together with an S-tuple of elements of its underlying additive group. We compose these symmetries with the help of addition in this group. This ensures that |Z(n) × n^-S| = |Z(n)| × n^-s when |S| = s. And this in turn means that |Z(S)| = |∑ Z(n) × n^-S| = ∑ |Z(n) × n^-S| = ∑ |Z(n)| × n^-s = ζ(s) So, in particular, if S is a 2-element set, we can write S = 2 for short and get |Z(2)| = ζ(2) = π^2/6 Can we describe the groupoid Z(2) in simple English, suitable for a nice bumper sticker? It’s a bit tricky. One reason is that I haven’t described the objects of Z(2) as mathematical gadgets of an appealing sort, as I did for Z(-1). Another closely related reason is that I only described the symmetries of any object in Z(2) – or more technically, morphisms from that object to itself. It’s much better if we also describe morphisms from one object to another. For this, it’s best to define Z(n) × n^-S with the help of torsors. The idea of a torsor is that you can take the one-object groupoid associated to any group G and find a different groupoid, which is nonetheless equivalent, and which is a groupoid of appealing mathematical gadgets. These gadgets are called “G-torsors”. A “G-torsor” is just a nonempty set on which G acts freely and transitively: 12) John Baez, Torsors made easy, http://math.ucr.edu/home/baez/torsors.html All G-torsors are isomorphic, and the group of symmetries of any G-torsor is G. Now, any ring R has an underlying additive group, which I will simply call R. So, the concept of “R-torsor” makes sense. This lets us define an object of Z(n) × n^-S to be an n-element semisimple commutative ring R together with an S-tuple of R-torsors. But what about the morphisms between these? We define a morphism between these to be a ring isomorphism together with an S-tuple of torsor isomorphisms. There’s a trick hiding here: a ring isomorphism f: R → R’ lets us take any R-torsor and turn it into an R’-torsor, or vice versa. So, it lets us talk about an isomorphism from an R-torsor to an R’-torsor – a concept that at first might have seemed nonsensical. Anyway, it’s easy to check that this definition is compatible with our earlier one. So, we see: THE GROUPOID OF FINITE SEMISIMPLE COMMUTATIVE RINGS EQUIPPED WITH AN n-TUPLE OF TORSORS HAS CARDINALITY ζ(n) I did a silly change of variables here: I thought this bumper sticker would sell better if I said “n-tuple” instead of “S-tuple”. Here n is any positive integer. While we’re selling bumper stickers, we might as well include this one: THE GROUPOID OF FINITE SEMISIMPLE COMMUTATIVE RINGS EQUIPPED WITH A PAIR OF TORSORS HAS CARDINALITY π^2/6 Now, you might think this fact is just a curiosity. But I don’t think so: it’s actually a step towards categorifying the general theory of zeta functions. You see, the Riemann zeta function is just one of many zeta functions. As Hasse and Weil discovered, every sufficiently nice commutative ring R has a zeta function. The Riemann zeta function is just the simplest example: the one where R is the ring of integers. And the cool part is that all these zeta functions come from stuff types using the recipe I described! How does this work? Well, from any commutative ring R, we can build a stuff type Z[R] as follows: an object of Z[R] is a finite semisimple commutative ring together with a homomorphism from R to that ring. 
Then it turns out the Dirichlet series of this stuff type, say ζ[R](s) = ∑[n > 0] |Z[R](n)| n^-s is the usual Hasse-Weil zeta function of the ring R! Of course, that fact is vastly more interesting if you already know and love Hasse-Weil zeta functions. You can find a definition of them either in my paper with Jim, or here: 13) Jean-Pierre Serre, Zeta and L functions, Arithmetical Algebraic Geometry (Proc. Conf. Purdue Univ., 1963), Harper and Row, 1965, pp. 82–92. But the basic idea is simple. You can think of any commutative ring R as the functions on some space – a funny sort of space called an “affine scheme”. You’re probably used to spaces where all the points look alike – just little black dots. But the points of an affine scheme come in many different colors: for starters, one color for each prime power p^k! The Hasse-Weil zeta function of R is a clever trick for encoding the information about the numbers of points of these different colors in a single function. Why do we get points of different colors? I explained this back in "week205". The idea is that for any commutative ring k, we can look at the homomorphisms f: R → k and these are called the “k-points” of our affine scheme. In particular, we can take k to be a finite field, say F[p^n]. So, we get a set of points for each prime power p^n. The Hasse-Weil zeta function is a trick for keeping track of how many F[p^n]-points there are for each prime power p^n. Given all this, you shouldn’t be surprised that we can get the Hasse-Weil zeta function of R by taking the Dirichlet series of the stuff type Z[R], where an object is a finite semisimple commutative ring k together with a homomorphism f: R → k. Especially if you remember that finite semisimple commutative rings are built from finite fields! In fact, this whole theory of Hasse-Weil zeta functions works for gadgets much more general than commutative rings, also known as affine schemes. They can be defined for “schemes of finite type over the integers”, and that’s how Serre and other algebraic geometers usually do it. But Jim and I do it even more generally, in a way that doesn’t require any expertise in algebraic geometry. Which is good, because we don’t have any. I won’t explain that here – it’s in our paper. I’ll wrap up by making one more connection explicit – it’s sort of lurking in what I’ve said, but maybe it’s not quite obvious. First of all, this idea of getting Dirichlet series from stuff types is part of the groupoidification program. Stuff types are a generalization of “structure types”, often called “species”. André Joyal developed the theory of species and showed how any species gives rise to a formal power series called its generating function. I told you about this back in "week185" and "week190". The recipe gets even simpler when we go up to stuff types: the generating function of a stuff type Z is just ∑[n ≥ 0] |Z(n)| z^n Since we can also describe states of the quantum harmonic oscillator as power series, with z^n corresponding to the nth energy level, this lets us view stuff types as states of a categorified quantum harmonic oscillator! This explains the combinatorics of Feynman diagrams: 14) Jeffrey Morton, Categorified algebra and quantum mechanics, TAC 16 (2006), 785-854, available at http://www.emis.de/journals/TAC/volumes/16/29/16-29abs.html Also available as arXiv:math/0601458. 
And, it’s a nice test case of the groupoidification program, where we categorify lots of algebra by saying “wherever we see a number, let’s try to think of it as the cardinality of a groupoid”: 15) John Baez, Alex Hoffnung and Christopher Walker, Higher-dimensional algebra VII: Groupoidification, available as arXiv:0908.4305 But now I’m telling you something new! I’m saying that any stuff type also gives a Dirichlet series, namely ∑[n > 0] |Z(n)| n^-s This should make you wonder what’s going on. My paper with Jim explains it – at least for structure types. The point is that the groupoid of finite sets has two monoidal structures: + and ×. This gives the category of structure types two monoidal structures, using a trick called “Day convolution”. The first of these categorifies the usual product of formal power series, while the second categorifies the usual product of Dirichlet series. People in combinatorics love the first one, since they love chopping a set into two disjoint pieces and putting a structure on each piece. People in number theory secretly love the second one, without fully realizing it, because they love taking a number and decomposing it into prime factors. But they both fit into a single picture! There’s a lot more to say about this, because actually the category of structure types has five monoidal structures, all fitting together in a nice way. You can read a bit about this here: 16) nLab, Schur functors, http://ncatlab.org/nlab/show/Schur+functor This is something Todd Trimble and I are writing, which will eventually evolve into an actual paper. We consider structure types for which there is a vector space of structures for each finite set instead of a set of structures. But much of the abstract theory is similar. In particular, there are still five monoidal structures. Someday soon, I hope to show that two of the monoidal structures on the category of species make it into a “ring category”, while the other two – the ones I told you about, in fact! – are better thought of as “comonoidal” structures, making it into a “coring category”. Putting these together, the category of species should become a “biring category”. Then the fifth monoidal structure, called “plethysm”, should make it into a monoid in the monoidal bicategory of biring categories! This sounds far-out, but it’s all been worked out at a decategorified level: rings, corings, birings, and monoids in the category of birings: 17) D. Tall and Gavin Wraith, Representable functors and operations on rings, Proc. London Math. Soc. (3), 1970, 619-643. 18) James Borger and B. Wieland, Plethystic algebra, Advances in Mathematics 194 (2005), 246-283. Also available at http://wwwmaths.anu.edu.au/~borger/papers/03/paper03.html 19) Andrew Stacey and S. Whitehouse, The hunting of the Hopf ring, Homology, Homotopy and Applications, 11 (2009), 75-132, available at http://intlpress.com/HHA/v11/n2/a6/ Also available as Borger and Wieland call a monoid in the category of birings a “plethory”. The star example is the algebra of symmetric functions. But this is basically just a decategorified version of the category of Vect-valued species. So, the whole story should categorify. In short: starting from very simple ideas, we can very quickly find a treasure trove of rich structures. Indeed, these structures are already staring us in the face – we just need to open our eyes. They clarify and unify a lot of seemingly esoteric and disconnected things that mathematicians and physicists love. 
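A quick numerical sanity check, not from the paper: the claim that |Z(n)| = 1 can be verified directly by listing the finite semisimple commutative rings with n elements (products of finite fields whose orders multiply to n) and summing 1/|Aut|, using the standard fact that the automorphism group of such a product combines permutations of repeated isomorphic factors with the Galois group of each field. The Python sketch below does exactly that, and then checks numerically that the resulting Dirichlet series evaluated at s = 2 approaches π^2/6.

from fractions import Fraction
from math import factorial, isqrt, pi
from collections import Counter

def prime_power(q):
    """Return (p, d) if q = p**d with p prime and d >= 1, else None."""
    if q < 2:
        return None
    for p in range(2, isqrt(q) + 1):
        if q % p == 0:
            d = 0
            while q % p == 0:
                q //= p
                d += 1
            return (p, d) if q == 1 else None
    return (q, 1)  # q itself is prime

def semisimple_rings(n, min_q=2):
    """Yield the multisets of finite-field orders whose product is n (listed in nondecreasing order)."""
    if n == 1:
        yield ()
        return
    for q in range(min_q, n + 1):
        if n % q == 0 and prime_power(q):
            for rest in semisimple_rings(n // q, q):
                yield (q,) + rest

def Z(n):
    """Groupoid cardinality: sum of 1/|Aut(R)| over iso classes of semisimple commutative rings with n elements."""
    total = Fraction(0)
    for fields in semisimple_rings(n):
        aut = 1
        for q, m in Counter(fields).items():
            d = prime_power(q)[1]          # |Aut(F_{p^d})| = d, the Galois group
            aut *= factorial(m) * d ** m   # permute equal factors, twist each by its Galois group
        total += Fraction(1, aut)
    return total

assert all(Z(n) == 1 for n in range(2, 65))                   # |Z(n)| = 1, as claimed

print(sum(1.0 / n**2 for n in range(1, 10**6)), pi**2 / 6)    # the Dirichlet series at s = 2 tends to pi^2/6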
I think we are just beginning to glimpse the real beauty of math and physics. I bet it will be both simpler and more startling than most people expect. I would love to spend the rest of my life chasing glimpses of this beauty. I wish we lived in a world where everyone had enough of the bare necessities of life to do the same if they wanted – or at least a world that was safely heading in that direction, a world where politicians were looking ahead and tackling problems before they became desperately serious, a world where the oceans weren’t dying. But we don’t. Certainty of death. Small chance of success. What are we waiting for? – Gimli
{"url":"http://johncarlosbaez.wordpress.com/category/this-weeks-finds/page/2/","timestamp":"2014-04-16T07:45:49Z","content_type":null,"content_length":"353772","record_id":"<urn:uuid:3f9f6f44-da65-4063-bfd2-3b2693ab186d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Fermat’s Last Theorem: A one-operation proof Quote by So? All you need to do is to (correctly) derive one contradiction in order to prove that one of your original assumptions must be false. Do you think that Victor's proof of FLT is wrong? Moshe Klein
{"url":"http://www.physicsforums.com/showpost.php?p=718757&postcount=73","timestamp":"2014-04-20T11:15:05Z","content_type":null,"content_length":"8297","record_id":"<urn:uuid:3ca0cb5d-05b4-439c-8363-ce632a353661>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
The Acceleration Of Point C Is 0.9 (ft/s^2) Downward ... | Chegg.com The acceleration of point C is 0.9 (ft/s^2) downward and the angular acceleration of the beam is 0.8 (rad/s^2) clockwise. Knowing that the angular velocity of the beam is zero at the instant considered, determine the acceleration of each cable. Would cable A be moving at all or staying still so that it would make solving for the acceleration easier? Mechanical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/acceleration-point-c-09-ft-s-2-downward-theangular-acceleration-beam-08-rad-s-2-clockwise--q498295","timestamp":"2014-04-21T10:32:20Z","content_type":null,"content_length":"18471","record_id":"<urn:uuid:0c9668e4-f7af-4424-acda-b197f3ba7d33>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the damping ratio (zeta) of an nth order system from a transfer function I am having trouble with some of my homework. I am not quite sure how to find the damping ratio from a third order system when the transfer function (of s) is the only information supplied. Could anyone help me with this? I would like a method that would work with any nth order system, although my current problem is third order. Also, I must find the damping ratio WITHOUT using differential equations to convert the transfer function to a function of time. Here is a transfer function that may be used as an example: s/2 + 1 Thanks to anyone who is willing to contribute!
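Since the transfer function quoted above appears to have lost its denominator, here is one common way to get a damping ratio out of a higher-order transfer function without converting to the time domain, sketched under the assumption that what is wanted is the damping ratio of the dominant complex pole pair. The denominator below is made up for illustration and is not taken from the thread.

import numpy as np

# Hypothetical example, NOT the system from the post:
# G(s) = (s/2 + 1) / (s^3 + 4 s^2 + 6 s + 4)
den = [1, 4, 6, 4]

poles = np.roots(den)
complex_poles = poles[np.abs(poles.imag) > 1e-9]

# Dominant pair = the complex poles closest to the imaginary axis (slowest-decaying mode)
p = complex_poles[np.argmax(complex_poles.real)]
zeta = -p.real / abs(p)     # compare with s^2 + 2*zeta*wn*s + wn^2
wn = abs(p)

print(np.round(poles, 3))              # -2 and -1 +/- 1j
print(round(zeta, 3), round(wn, 3))    # 0.707 1.414

# Caveat: a genuine third-order system has no single damping ratio; this is only the
# damping ratio of its dominant second-order mode.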
{"url":"http://www.physicsforums.com/showthread.php?t=528117","timestamp":"2014-04-20T14:09:10Z","content_type":null,"content_length":"20378","record_id":"<urn:uuid:131e404e-898c-4762-8c8b-5bb70aeee3b8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the value of N for Simpson's Rule October 13th 2009, 12:51 PM #1 Sep 2009 Finding the value of N for Simpson's Rule Find a value of N such that Simpson's Rule approximates ∫x^-1/4 dx with an error of at most 10^-2 (but don't calculate Simpson's Rule). The answer sheet says the answer is 4, but I can't get that answer. So far all I found was the fourth derivative. fourth derivative = 585/256 x^(-17/4) In class, we took a similar problem and graphed the fourth derivative to see where it was even to and plugged the value in for the derivative to find the k4, but I tried to do this, and I got a huge number not even close to 4. Can anyone help me out, please? Find a value of N such that Simpson's Rule approximates ∫x^-1/4 dx with an error of at most 10^-2 (but don't calculate Simpson's Rule). The answer sheet says the answer is 4, but I can't get that answer. So far all I found was the fourth derivative. fourth derivative = 585/256 x^(-17/4) In class, we took a similar problem and graphed the fourth derivative to see where it was even to and plugged the value in for the derivative to find the k4, but I tried to do this, and I got a huge number not even close to 4. Can anyone help me out, please? Find the smallest even positive integer n satisfying $\frac{(b-a)^5}{180 n^4} \, \text{max}_{[a, b]} |f^{(4)}(x)| \leq \frac{1}{100}$ where $a = 2, \, b = 5, \, f(x) = x^{-1/4}$ and $\text{max}_{[a, b]} |f^{(4)}(x)|$ is the maximum value of $|f^{(4)}(x)|$ over the interval [a, b] of integration. October 13th 2009, 06:40 PM #2
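As a quick check of the numbers, assuming the limits of integration are 2 and 5 as in the reply and using the standard Simpson's rule error bound (b-a)^5/(180 n^4) * max|f''''(x)|: since f''''(x) = 585/256 x^(-17/4) is decreasing, its maximum on [2, 5] is at x = 2, and a short loop finds the required n.

a, b = 2.0, 5.0
K4 = 585.0 / 256.0 * a ** (-17.0 / 4.0)     # max of |f''''| on [a, b], attained at x = a

n = 2
while (b - a) ** 5 / (180 * n ** 4) * K4 > 1e-2:
    n += 2                                  # Simpson's rule needs an even number of subintervals
print(n)                                    # -> 4, matching the answer sheet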
{"url":"http://mathhelpforum.com/calculus/107822-finding-value-n-simpson-s-rule.html","timestamp":"2014-04-21T06:11:16Z","content_type":null,"content_length":"35397","record_id":"<urn:uuid:c7c69621-abf7-4897-9ab8-4dae4ab268a3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample Probability October 30th 2008, 05:21 PM #1 Junior Member Apr 2008 The monthly sales of a chocolate company is believed to be normally distributed with a mean of 1,000 bars and a standard deviation of 150 bars.Find the probability that the total sales in the next year is over 12,600 bars with assuming that the sales of the 12 months are independent. Let Y be the random variable sales in 1 year. You should realise that Y ~ Normal $\left(\mu = (12)(1,000) = 12,000 ~ \sigma^2 = 12 (150^2) = 270,000 \Rightarrow \sigma = 519.6\right)$ (read this: Sum of normally distributed random variables - Wikipedia, the free encyclopedia). Find Pr(Y > 12,600). Would it be different if the monthly sales of a chocolate company is believed to be normally distributed to exactly normal distributed?? And i still do not understand why we use sampling distributions So when to use sampling distributions and normal distributions Would it be different if the monthly sales of a chocolate company is believed to be normally distributed to exactly normal distributed?? Mr F says: No. And i still do not understand why we use sampling distributions Mr F says: The idea of sampling distibution is NOt being used here. The distribution of the sum of 12 normal random variables is required. It should be clear from my previous post why this sum is required. So when to use sampling distributions and normal distributions Would it be different if the monthly sales of a chocolate company is believed to be normally distributed to exactly normal distributed?? Mr F says: No. But then isn't in approximately(believed) normal distributed meaning the population can be not normal as well And i still do not understand why we use sampling distributions Mr F says: The idea of sampling distibution is NOt being used here. The distribution of the sum of 12 normal random variables is required. It should be clear from my previous post why this sum is required. So when to use sampling distributions and normal distributions This is the main question please help like to is there any tips on when to use sampling distributions instead of normal October 30th 2008, 07:14 PM #2 November 25th 2008, 07:31 PM #3 Junior Member Apr 2008 November 25th 2008, 07:59 PM #4 November 26th 2008, 07:31 AM #5 Junior Member Apr 2008
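For completeness, the probability the reply asks for can be evaluated numerically. A short check with SciPy, using the yearly total Y ~ Normal(mean 12,000, variance 270,000) derived above:

from scipy.stats import norm

mu = 12 * 1000
sigma = (12 * 150**2) ** 0.5                 # about 519.6

print(norm.sf(12600, loc=mu, scale=sigma))   # P(Y > 12600), roughly 0.124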
{"url":"http://mathhelpforum.com/advanced-statistics/56690-sample-probability.html","timestamp":"2014-04-18T18:49:26Z","content_type":null,"content_length":"46104","record_id":"<urn:uuid:0d36e0b5-d695-4c50-a35b-09290344450c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Bounds Of The Poisson Ratio. A Cube Of Isotropic ... | Chegg.com Bounds of the Poisson ratio. A cube of isotropic elastic material (Young's Modulus E, Poisson ratio v) with edge length L has in its undeformed state the volume V[0]=L^3. After deformation its new edge lengths are L(1+σ/E); L(1-vσ/E); L(1-vσ/E), where σ is the stress tensor component corresponding to a uniform extension. What is the volume of the deformed body (V[0]+ΔV) given that the ratio σ/E is small compared to unity?
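This is not a worked Chegg solution, just a small symbolic check of the first-order expansion the problem is asking for, writing t for the small ratio σ/E:

import sympy as sp

nu, t = sp.symbols('nu t', positive=True)   # t stands for sigma/E

# Deformed edges are L(1+t), L(1-nu*t), L(1-nu*t), so the relative volume change is
rel_change = (1 + t) * (1 - nu * t) ** 2 - 1

first_order = sp.series(rel_change, t, 0, 2).removeO()
print(sp.expand(first_order))   # first-order term is (1 - 2*nu)*t, i.e. dV is approximately V0*(1 - 2*nu)*sigma/E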
{"url":"http://www.chegg.com/homework-help/questions-and-answers/bound-s-poissonratio-cube-isotropic-elastic-material-young-s-moduluse-poisson-ratio-v-edge-q807614","timestamp":"2014-04-24T07:43:47Z","content_type":null,"content_length":"18630","record_id":"<urn:uuid:caa342cb-6b5f-4726-bcca-4364ca588f23>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Kansas Geological Survey, Public Information Circular (PIC) 3 Prev. Page--Where Earthquakes Occur || Next Page--Earthquakes in Kansas Recording and Measuring Earthquakes Earthquakes generate vibrations called seismic waves that travel through the earth in all directions from the focus, the point beneath the earth's surface where the earthquake begins. The point on the earth's surface directly above the focus, where the strongest shaking occurs, is called the epicenter (Fig. 4). Shaking decreases with distance from the epicenter. Figure 4--Spatial relationship between focus and epicenter of an earthquake. Seismologists use sensitive instruments called seismographs to record these waves. Seismographs can electronically amplify seismic waves more than 10,000 times and are sensitive enough to detect strong earthquakes originating anywhere in the world. The time, location, and magnitude of an earthquake can be determined from a graphical plot of the data called a seismogram. To measure the strength of an earthquake, seismologists use two different scales: the Modified Mercalli Intensity Scale and the Richter Magnitude Scale. The Modified Mercalli Intensity Scale gauges earthquakes by their effect on people and structures. It was originally developed in 1902 in Italy, and it relies on newspaper and eyewitness reports. This scale is used to gauge the size of earthquakes that occurred before sensitive instruments existed to measure them. It has 12 levels designated by roman numerals, ranging from imperceptible shaking (I) to catastrophic destruction The other measure of earthquake size is the Richter Magnitude Scale. It was developed in 1935 by Charles F. Richter of the California Institute of Technology as a mathematical device to compare the size of earthquakes. Using the record of the seismic waves plotted on the seismograph, seismologists determine the magnitude mathematically from the size of the recorded waves and from the calculated distance between the earthquake focus and the seismograph recording station. The Richter Scale expresses magnitude in whole numbers and decimal fractions. Each increase in magnitude by a whole number represents a tenfold increase in measured wave size. In terms of energy, each whole-number increase represents 31 times more energy released--for instance, a Richter 5.3 earthquake releases 31 times more energy than an earthquake of Richter 4.3. The world averages about 20 earthquakes each year that have Richter magnitudes of 7 or larger. The largest earthquake ever recorded in the world was a magnitude 9.5 in Chile in 1960. Sensitive seismographs are capable of recording nearby earthquakes with Richter magnitudes of -1 or smaller. A person with a sledge hammer can generate the equivalent of a Richter magnitude -4 earthquake. An earthquake's magnitude does not necessarily express the damage it caused. In a densely populated area, an earthquake may do far more damage than one of greater magnitude that occurs in a remote area. For example, the magnitude 6.8 earthquake that hit Kobe, Japan, on January 16, 1995, killed 6,308 people and injured thousands of others. Though it was the deadliest earthquake in 1995, its magnitude was lower than 25 other earthquakes recorded that year. Although the Richter Magnitude Scale and the Modified Mercalli Intensity Scale are not strictly comparable, they can be roughly correlated for locations near the epicenter of an earthquake (Fig. 5). 
In Kansas, for example, the Palco earthquake of June 1989 had a Modified Mercalli (MM) intensity of IV and a Richter magnitude of 4.0, and the 1867 Manhattan earthquake had a MM intensity of VII and a Richter magnitude that was probably between 5.0 and 5.5. Figure 5--Approximate comparison for the U. S. Midcontinent of Modified Mercalli and Richter scales at locations very near the epicenter.
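The "31 times more energy" figure quoted above comes from the standard Gutenberg-Richter energy relation, in which radiated energy grows roughly as 10^(1.5 M). That relation is not stated in the circular itself, but it makes the arithmetic easy to check:

def energy_ratio(m1, m2):
    # Radiated seismic energy scales roughly as 10**(1.5 * M)
    return 10 ** (1.5 * (m1 - m2))

print(energy_ratio(5.3, 4.3))   # one whole magnitude unit  -> about 31.6
print(energy_ratio(9.5, 6.8))   # 1960 Chile quake vs. the Kobe quake -> about 11,000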
{"url":"http://www.kgs.ku.edu/Publications/pic3/pic3_3.html","timestamp":"2014-04-19T10:03:30Z","content_type":null,"content_length":"6058","record_id":"<urn:uuid:4490b005-04ab-4802-8fea-3d0149ee9162>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
theoretical problem (no calculations) with electric field from moving charge 1. The problem statement, all variables and given/known data I'm in a physics 2 (electricity and magnetism) course, and I'm having trouble with something the professor discussed in class. I tried approaching him about it afterwards, but it still didn't make sense to me. I really think he made a mistake here, and I'd like input. we're discussing magnetism, and as a precursor to that we're discussing electric fields generated by moving charges. so we have a point charge q moving at a relativistic speed v to the right on the x-axis. ( Relativistic effects make the electric field more powerful in directions perpendicular to the x-axis than parallel to the x-axis, but otherwise I don't think the prof mentioned anything that depended on the speed being relativistic- I think I qould have the same problem for non-relativistic speeds.) Fine. At t=t0, the charge is at some negative point on the x-axis p0 (all space and time coordinates are in a stationary reference frame). at t=t1, the charge is at the origin. at that point, the charge decelerates over the extremely brief period dt to v=0. we began discussing what happens as the effects of the stop propagate (at the speed of light, of course), and this is where I got lost. we're looking at the field at time t2 at a point Pa which is farther away than c(t2-t1) such that the effect has not has time to reach Pa yet. My professor's version: at t2, the field within distance c(t2-t1)is that of a stationary point charge at the origin. outside, the information that the charge has stopped moving has not reached Pa yet. therefore, the field at Pa is as though the charge had continued moving, so at t2, the field at Pa is that of a moving point charge at v(t2-t1) on the x-axis. when the information that the charge stopped gets to Pa, it will suddenly shift, in Pa's perspective, from being at a positive point on the x-axis to being at the origin. (he did not discuss any relativistic effects, like changing time or location coordinates, other than the boosted field in the y direction.) My version: at t2, the field within distance c(t2-t1)is that of a stationary point charge at the origin. outside, the information that the charge has stopped has not reached Pa, but neither has the information that the charge is at the origin reached Pa. therefore, Pa will feel the field of a moving charge somewhere left of the origin and will observe it move, decelerate, and stop at the origin in the same way as it actaully did, just with a time delay. so should I apply for his job?
{"url":"http://www.physicsforums.com/showthread.php?p=3239470","timestamp":"2014-04-18T08:18:09Z","content_type":null,"content_length":"25295","record_id":"<urn:uuid:d6d86779-0362-46b6-8d59-61f0330dccee>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with volume: solid of revolution February 22nd 2010, 07:24 PM Help with volume: solid of revolution The problem asks us to find the volume of the area bounded by y=-x^2+6x-8 and y=0 rotated around the x-axis by using the shells method. I did this and got an answer of pi, to confirm the answer I used the disks method and got an answer of 16pi/15... please help by showing me how you do both methods. February 22nd 2010, 08:26 PM The problem asks us to find the volume of the area bounded by y=-x^2+6x-8 and y=0 rotated around the x-axis by using the shells method. I did this and got an answer of pi, to confirm the answer I used the disks method and got an answer of 16pi/15... please help by showing me how you do both methods. You show me your work, and I'll show you mine. February 22nd 2010, 08:44 PM the graph of the original equation rotated around the x axis kinda looks like a football... if I take shells to calculate volume, then I have shells centered around x=3 and from -y to y in height. So the height is 2y and the width is going to be 2pi*r where r=3-x if we're differentiating from 2 to 3. substitute for y and we get the integral from 2 to 3 of 2(-x^2+6x-8)*2pi(3-x)dx = using disks, the area of one disk is going to be pi*r^2 where r is going to be equal to y. A=pi(-x^2+6x-8)^2.... I set up the integral from 2 to 4 of pi(-x^2+6x-8)^2 dx = 16pi/15... What am I doing wrong? February 23rd 2010, 09:18 PM the graph of the original equation rotated around the x axis kinda looks like a football... if I take shells to calculate volume, then I have shells centered around x=3 and from -y to y in height. So the height is 2y and the width is going to be 2pi*r where r=3-x if we're differentiating from 2 to 3. substitute for y and we get the integral from 2 to 3 of 2(-x^2+6x-8)*2pi(3-x)dx = using disks, the area of one disk is going to be pi*r^2 where r is going to be equal to y. A=pi(-x^2+6x-8)^2.... I set up the integral from 2 to 4 of pi(-x^2+6x-8)^2 dx = 16pi/15... What am I doing wrong? I think any "wrongness" is in the algebra of your calculation. I did this by disks and had the same set-up you describe for this method, but got $\int_2^4 \pi (-(x-3)^2+1)\,dx\,=\,\pi (-\frac{1}{3}(x-3)^3+x)|_2^4\,=\,\frac{4 \pi }{3}$: February 23rd 2010, 11:31 PM I don't understand how you set yours up, thanks for trying to help out... I spoke to my professor and he says that I was mistaken in trying to differentiate with respect to x and that to do it via shells I'd have to differentiate with respect to y... I can visualize it now and know why the shells don't work with x.
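A quick symbolic check of the thread: the disk integral the original poster set up, pi times the integral from 2 to 4 of (-x^2+6x-8)^2 dx, does equal 16*pi/15 (the 4*pi/3 in the reply comes from integrating y rather than y^2), and shells taken with respect to y, as the professor suggested, give the same value, since y = 1-(x-3)^2 means a shell of radius y runs between x = 3 - sqrt(1-y) and x = 3 + sqrt(1-y).

import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Disks/washers: integrate pi*y(x)^2 in x
V_disks = sp.pi * sp.integrate((-x**2 + 6*x - 8) ** 2, (x, 2, 4))

# Shells: radius y, height x_right - x_left = 2*sqrt(1 - y), y from 0 to 1
V_shells = sp.integrate(2 * sp.pi * y * 2 * sp.sqrt(1 - y), (y, 0, 1))

print(V_disks, V_shells)    # both give 16*pi/15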
{"url":"http://mathhelpforum.com/calculus/130235-help-volume-solid-revolution-print.html","timestamp":"2014-04-21T04:11:13Z","content_type":null,"content_length":"9137","record_id":"<urn:uuid:aba1c1ed-fc97-4ab2-9131-728fdfb45b42>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Abington, MA Algebra 2 Tutor Find an Abington, MA Algebra 2 Tutor ...My sessions ran parallel with the school teacher's instruction, meaning the material I used was predicated on the school day's lesson plan. We would practice the same concepts and principles but with different questions, figures, and activities. This insured that the student completed homework ... 49 Subjects: including algebra 2, reading, English, writing ...Prior to my current position, I worked as an auditor at a CPA firm. I have been tutoring for approximately eight years for students in elementary school through the college level. I enjoy tutoring in math, accounting, computer skills (Microsoft Office or Excel), and study and organizational skills. 9 Subjects: including algebra 2, geometry, accounting, algebra 1 ...I got interested in tutoring after I tutored 4 days a week at Global Learning Charter Public School in New Bedford where I did my teaching practicum. I was very successful as a tutor at Global and I would like to continue tutoring on a part-time basis. I have letters of recommendation from teachers I worked with at Global who can verify my tutoring ability. 8 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I am currently a research associate in materials physics at Harvard, have completed a postdoc in geophysics at MIT, and received my doctorate in physics / quantitative biology at Brandeis University. I will travel throughout the area to meet in your home, library, or wherever is comfortable for ... 16 Subjects: including algebra 2, calculus, physics, geometry ...It proved to be true. My brothers, my sister, we all excelled in Math. Now I tutor Math while my college age son studies Math and Computer Science at one of the best colleges for these subjects, Carnegie Mellon University in Pittsburgh, Pennsylvania. 21 Subjects: including algebra 2, English, writing, ESL/ESOL Related Abington, MA Tutors Abington, MA Accounting Tutors Abington, MA ACT Tutors Abington, MA Algebra Tutors Abington, MA Algebra 2 Tutors Abington, MA Calculus Tutors Abington, MA Geometry Tutors Abington, MA Math Tutors Abington, MA Prealgebra Tutors Abington, MA Precalculus Tutors Abington, MA SAT Tutors Abington, MA SAT Math Tutors Abington, MA Science Tutors Abington, MA Statistics Tutors Abington, MA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Avon, MA algebra 2 Tutors Brockton, MA algebra 2 Tutors East Bridgewater algebra 2 Tutors East Weymouth algebra 2 Tutors Hanover, MA algebra 2 Tutors Hanson, MA algebra 2 Tutors Holbrook, MA algebra 2 Tutors Kingston, MA algebra 2 Tutors North Abington, MA algebra 2 Tutors Norwell algebra 2 Tutors Pembroke, MA algebra 2 Tutors Randolph, MA algebra 2 Tutors Rockland, MA algebra 2 Tutors South Weymouth algebra 2 Tutors Whitman, MA algebra 2 Tutors
{"url":"http://www.purplemath.com/Abington_MA_Algebra_2_tutors.php","timestamp":"2014-04-21T02:43:10Z","content_type":null,"content_length":"24179","record_id":"<urn:uuid:c8cdb352-2b24-4058-8586-4dea8140b737>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate the GST paid from the total - Math Central What you do depends on whether the price you paid also includes the PST. If the price included the PST and GST then look at my response to Janet who also lives in Ontario so the rates are the same as yours. If the PST is not charged on the item you purchase then to find the price before the 5% GST is added, divide the amount you paid by 1.05. I hope this helps,
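As a tiny illustration of the second case, a price that includes the 5% GST but no PST:

def gst_from_total(total, gst_rate=0.05):
    """Split a GST-inclusive total into the pre-tax price and the GST paid."""
    base = total / (1 + gst_rate)
    return round(base, 2), round(total - base, 2)

print(gst_from_total(105.00))   # -> (100.0, 5.0)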
{"url":"http://mathcentral.uregina.ca/QQ/database/QQ.09.08/h/claude1.html","timestamp":"2014-04-21T12:10:16Z","content_type":null,"content_length":"7062","record_id":"<urn:uuid:c17d8cd7-65a2-476b-b329-d16ff1cafd8f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
June 20 A Google research team built a machine, a neural network with a billion connections, using a computing array with 16,000 processors. (More on neural networks below.) They then trained the network on ten million digital images chosen at random from YouTube videos. They used unsupervised training , i.e., this machine was not given any feedback to evaluate its training. And, lo and behold, the neural network was able to recognize cats (among other things) - (from The NY Times) Google Fellow Dr. Jeff Dean - "It basically invented the concept of a cat". A bit about neural networks - this slide is from Prof. Yaser Abu-Mostafa's telecourse and depicts a generic neural network. Very simply, the inputs on the left - in our case, some set of numbers derived from the digital images - are multiplied by other numbers, called weights, and summed up. These are fed to the first layer of the neural network. Each "neuron" (the circles marked with ) is fed with its own weighted sum, . The neuron uses the function which is a function of the shape shown above, to convert the input into its output. The outputs of the first layer of neurons is used as inputs to the next layer and so on. The above network has a final output h(x) but you can build a network with many outputs. So, e.g., for a neural network successfully trained to recognize cats, h(x) could be +1 if the inputs correspond to a cat and to -1 if the inputs correspond to a not-cat. Training simply involves choosing the weights. The learning algorithm is some systematic way of picking weights and refining their values during the training algorithm. It is hypothesized that our brains work in very much the same way. Our brains probably undergo supervised training. In the above network, supervised training would mean that you feed it one of the images, look at the result the network gives (cat or not-cat), and then tell the learning algorithm whether the result was right or wrong. The learning algorithm correspondingly changes the weights. Eventually if the network is successful in learning the weights converge to some stable values. I say our brains underwent supervised training, because natural selection would tend to wipe out anyone who misperceived reality. The Google experiment used unsupervised learning, which to me, makes its discovery of cat all the more remarkable. [I should probably say a little more about biological neural networks. The neuron receives inputs usually from other neurons across junctions called synapses. Some inputs are excitatory and some are inhibitive. The strength of signal received depends on properties of the synapse. If the overall strength of inputs cross some threshold, the neuron fires, otherwise it remains quiescent. (The analogs of the threshold and quiescent/firing behavior is handled in our machine neural network above by the inputs from the circles containing the 1s and the θ(s) function respectively. ) I hope it is clear why the machine network above is naturally called a neural network.] Arriving at the main theme, if what the neural network (and our brains) have assembled by perception of the environment is to be termed knowledge, where did it fit on Bee's original chart? The collection of weights and connections in the neural network comprise at best an implicit model of cats. It is these implicit models, presumably derived from the Real World Out There, that we are conscious of as objects in the Real World Out There. 
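To make the weighted-sum-through-θ description concrete, here is a toy forward pass in NumPy. It is only an illustration of the arithmetic described above, with tanh standing in for the soft threshold θ(s); it is nowhere near the scale of the Google system, and the hard part, choosing the weights, is what the learning algorithm does and is not shown here.

import numpy as np

theta = np.tanh    # smooth stand-in for the theta(s) threshold function

def forward(x, layers):
    """Push an input through a list of weight matrices, one per layer.
    Each layer prepends a constant 1 (the threshold/bias input), forms the
    weighted sums, and feeds them through theta; the outputs become the
    inputs of the next layer."""
    for W in layers:
        x = theta(W @ np.concatenate(([1.0], x)))
    return x

rng = np.random.default_rng(0)
layers = [rng.normal(size=(3, 5)), rng.normal(size=(1, 4))]   # 4 inputs -> 3 hidden -> 1 output
print(forward(rng.normal(size=4), layers))   # a value in (-1, 1); its sign is the cat / not-cat guess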
Because we have been "trained" by evolution, the objects we perceive in fact, do mostly correspond to objects in the Real World Out There. Once we have Theory, we now have abstract concepts on top of which to build further, and we eventually arrive at ideas, such as atoms and molecules, that are not directly there in our perceptions, but are there in the real world. Perhaps our difficulties with quantum mechanics are that our brains have not had to handle inputs with quantum mechanical features; but perhaps it is not a limitation of neural networks, only a limitation of our perceptions. So perhaps a super-Google neural network could be trained to "understand" quantum mechanics in a way that we never can. That is, until we build additional "senses" - brain-machine interfaces - that can feed additional data into our brains, to train the neural networks in our heads. Further, our brains are definitely biased by the demands of survival. Now we have, in principle, the ability, via unsupervised learning in very large neural networks, to find out more about the Real World Out There, without that bias imposed. New means of knowledge???? This began life as my comment on Bee's blog. The question has been bothering me for a few days. Let us say that formal mathematical reasoning began around the time of Euclid, some 2500 years ago. Let us say that science as we understand it today began around the time of Newton, or some 300 years ago. Each maybe with someone else or sometime a bit earlier. The historical details are not important to the point, which is the relative recentness of the appearance of these methods of acquiring knowledge compared to the long lineage of man - even if you take humans to have the potential mental capability to acquire and use these methods even for only the last 10,000 years, instead of the hundred-fold longer period of the million+ years of human evolution. Much before the appearance of the formal mathematical method, and the scientific method, could anyone have dreamed of these methods and their effectiveness? Can we conceive of additional effective methods of acquiring knowledge? Is our inability to imagine them a proof that there can be no such methods? It would be somewhat arrogant to think that we've exhausted the possibilities so soon. Or are we at the dawn of a third method, that vaguely at the edge of our intuition? The new cognition enhancing tools we have are the computer and the network, and maybe the collaborative mathematics and science enabled by the web are just our first fumbling steps towards what we cannot yet grasp. Hydrangeas at the New York Botanical Garden. Could use some dynamic range here. Raising the shadows beyond what I did in the third version introduces banding noise in the shadows. This is a photographic situation where the Nikon D800 is expected to shine. The dome of the conservatory is overexposed, even with highlights set to 0 in LightRoom 4 (just upgraded from version 3) It must be payback for bad karma that the US is now saddled with the current Supreme Court bench. Read, gnash your teeth and weep. To me, they doesn't sound any different from an Iranian Ayatollah issuing one of his inhuman fatwas, the same clever parsing of words and tortured logic. 
Everything you need to know about Scalia is contained in the following exchange with Leslie Stahl in a 2009 60 minute interview: If someone's in custody, as in Abu Ghraib, and they are brutalized by a law enforcement person, if you listen to the expression 'cruel and unusual punishment,' doesn't that apply?" Stahl asks. "No, No," Scalia replies. "Cruel and unusual punishment?" Stahl asks. "To the contrary," Scalia says. "Has anybody ever referred to torture as punishment? I don't think so." "Well, I think if you are in custody, and you have a policeman who's taken you into custody…," Stahl says. "And you say he's punishing you?" Scalia asks. "Sure," Stahl replies. "What's he punishing you for? You punish somebody…," Scalia says. "Well because he assumes you, one, either committed a crime…or that you know something that he wants to know," Stahl says. "It's the latter. And when he's hurting you in order to get information from you…you don't say he's punishing you. What's he punishing you for? He's trying to extract…," Scalia says. "Because he thinks you are a terrorist and he's going to beat the you-know-what out of you…," Stahl replies. "Anyway, that's my view," Scalia says. "And it happens to be correct." And after going a tirade about how the sovereignty of the State of Arizona is being violated by the POTUS, Scalia joins in overturning an opinion of the State of Montana's highest court without a The story is here. In Texas, supposedly stressed by drought, a hybrid grass variety, Tifton 85, produced HCN enough to kill cows that grazed on it. This has not been observed before in the decades since its introduction. What I find interesting is this, on dailykos.com from a web-page that has since been taken down (the original link http://haysagriculture.blogspot.com/2012/06/ A little background is in order. Tifton 85 bermudagrass was released from the USDA-ARS station at Tifton, GA in 1992 by Dr. Glenn Burton, the same gentleman who gave us Coastal bermudagrass in 1943. One of the parents of Tifton 85, Tifton 68, is a stargrass. Stargrass is in the same genus as bermudagrass (Cynodon) but is a different species (nlemfuensis versus dactylon) than bermudagrass. Stargrass has a pretty high potential for prussic acid formation, depending on variety, but even with that being said, University of Florida researchers at the Ona, FL station have grazed stargrass since 1972 without a prussic acid incident. The pasture where the cattle died had been severely drought stressed from last year’s unprecedented drought, and had Prowl H2O {a herbicide} applied during the dormant season, a small amount of fertilizer applied in mid to late April, received approximately 5” of precipitation within the previous 30 days, and was at a hay harvest stage of growth. Thus, the pasture did not fit the typical young flush of growth following a drought-ending rain or young growth following a frost we typically associate with prussic acid formation. My question is - how long will it take to unearth the dangers of Genetically Modified plants and animals? "The Mark Inside : A perfect swindle, a cunning revenge, and a small history of the big con" by Amy Reading, is mostly about J. Frank Norfleet, a rancher from Texas, and his successful quest to catch the people that swindled him of a fortune. The book mentions newspaper stories in the New York Times, and so I looked up the archives. 
I found this, published April 27, 1924, A Sucker With Claws Texan, Gulled by Con' Men, Jails 40 of Them and May Be Rewarded By Congress Just because a man is born a sucker is not sign that he may not turn into a tiger before his earthly course is run. J. Frank Norfleet of Texas made the evolutionary jump almost overnight, with the result that about two score confidence men, members of a gang that fleeced him out of $45,000, are now behind prison bars and have ample time to ponder the Darwinian theory that a fish may grow claws. Balu & friends are shocked by Richard Dawkins' combination of ignorance and arrogance. I say, Welcome to the club! This Wall Street Journal blog talks about the conviction of Rajat Gupta for insider trading on Wall Street: The jury found that he was “motivated not by quick profits but rather a lifestyle where inside tips are the currency of friendships and elite business relationships,” The Wall Street Journal But that is actually what the prosecution claimed. I don't know in finding someone guilty that the jury endorses the motive that the prosecution ascribes. Rajat Gupta, once one of America’s most- respected corporate directors, was indicted on six criminal counts in an insider trading case that prosecutors said was motivated not by quick profits but rather a lifestyle where inside tips are the currency of friendships and elite business relationships. By Michael Rothfeld, Susan Pulliam and S. Mitra Kalita, The Wall Street Journal, October 27, 2011 PS: the indictment does not mention motive. From a reader of the NYTimes: Romney would have let Detroit die and Bin Laden live. But then in typical liberal fashion adds the qualifier: (if he meant what he said, when he said it.) Campaign slogans are not meant to be a place for fair play! Read about it here. The conservative government of Canada is shutting down any research that does not suit its ideological agenda. It sounds just like China or Pakistan. Regardless of whom you think of as (more) correct and whether the language problem in science is real or imagined , from this essay one would have to conclude that people project the implicit assumptions of their culture even onto the animals they study. Here is an article about Kinji Imanishi and his ground-breaking research in primatology ( Current Biology Vol 18 No 14, Tetsuro Matsuzawa and William C. McGrew ) Imanishi’s focus was to seek the evolutionary origin of human society. For him the central issue was society, and society had its own reality: it cannot be reduced to its constituent individuals nor just relationships among individuals. The society exists as a whole. This belief was the primary force for Imanishi sending the expeditions to study the society of monkeys and the society of chimpanzees in the wild. Prof. Krugman points to this post about China and wonders if it is correct. Prof. Krugman's synopsis is: Hempton basically argues that China has turned financial repression — controlled interest rates on deposits, which ensure a negative real rate of return — into a giant engine of kleptocracy. The banks extract rent from depositors, transfer those rents on to state-owned enterprises in the form of cheap loans, and then the Party elite essentially embezzles the money. Underlying the whole system is a high savings rate that Hempton attributes to the one-child policy. His readers point to these writers, here are links to some or other of their writings. Arthur Kroeber. 
Patrick Chovanec Nick Lardy Michael Pettis and a direct reply: Thomas Barnett So yeah, an accurate description, and yeah, way over-the-top in its gloom-and-doomism. Registration for the recorded version of the course will open mid June Caltech Professor Yaser Abu-Mostafa covers the basic theory of machine learning in this distance learning course. The 18 recorded lectures are here and the rest of the course material is linked from that page, or is here. Each lecture recording is an hour of lecture, followed by a half hour of recorded question & answer. In order to do the homework, you will need to write some programs - perhaps Python is best. I had Mathematica too, which I used for visualization. You will need a quadratic programming package, which with Mathematica, costs $$, but there is freeware for Python. Most of the homework is useful. There is a book, too, "Learning from Data: A Short Course", by Yaser Abu-Mostafa and others; the course covers more than is in the book. The book adds some depth to the areas that it covers, and if you're going to spend time on the course, having the book is probably worthwhile. Professor Abu-Mostafa is a very good lecturer; and overall I rate the course highly. I met my objectives, which was to get a view of the foundations. Supposedly "data science" is an emerging disciple and machine learning is one of the weapons in the data scientist's arsenal. Now I'm a bit better prepared to evaluate prospective data scientists. The most "aha!" moment in the course was with Support Vector Machines; and the most vague concept in the course was that of deterministic noise. The main drawback in distance learning is the relative isolation; however distance learning is a way the problem of the very high cost of higher education might be addressed. Now anyone with a reasonable internet connection can take a fairly substantial course from Caltech. I took the course "live", there were two lectures a week through April and May, and I just submitted my answers to the final exam. I suppose henceforth one could take it self-paced. (Added later): The above probably doesn't sound like an ringing endorsement for this course. That is more due to my nature than to the course. But if it is a subject that interests you I strongly recommend it. On Daily Kos, Hunter imagines what the diary of presidential hopeful Mitt Romney might read like. Here are links to a few, and excerpts from some (no excerpt doesn't mean it isn't good). June 9 entry. June 8 entry. Excerpt: Hello, human diary. It is I again, Mitt Romney, your better. There is not much to report today, as I have been mostly engaged in further practice sessions as to how I can better be as generic as possible when addressing issues of the day. I am doing quite well in these sessions. For example, my economic policy can now be summarized by saying: America is the best nation in the world. My foreign policy is roughly that same sentence. June 7 entry. Excerpt: I have very defined theories as to what money does and does not like, Mr. Diary, that I will perhaps expound upon at a later date. It is obvious that money gets lonely when in small quantities, and strongly prefers the company of like-minded or larger sums of money. It thus tends to shift itself from poorer individuals to richer ones quite rapidly if not blocked by cruel government policies preventing such things. Money is very shy, and will try to hide itself (perhaps, say, in other countries) if it senses tax regulation nearby. 
Money likes to create jobs, primarily in the sector of the economy dedicated to guarding and pampering itself. Other human units may be experts on foreign policy, or on energy matters, or on matters of law or the like, but my own expertise is in the various moods and preferences of money. I have based each of my various careers and each one of my own current policy prescriptions based on this knowledge; indeed, most of my campaign has been an effort to get this nation to more properly consider how deeply they can hurt the feelings of money, through current policies, and how best to reform those policies in the future. June 6 entry. June 1 entry. A few months ago, I had posted Prof. Robert Lustig's warning about sugar, or more accurately fructose. Sugar is 50% glucose and 50% fructose. The commonly used sweetener, High Fructose Corn Syrup, is said to be 55% fructose and 45% glucose, and doesn't seem much worse than sugar. But now this (and from 2010, this) Consider, for example, the most common form of HFCS - HFCS 55, which has 55% fructose compared to sucrose which is 50% fructose. Most people think this difference is negligible, but it's 10% more fructose. Yet this assumes that foods and drinks are made with HFCS 55. Our study showed that certain popular sodas and other beverages contain a fructose content approaching 65% of sugars. This works out to be 30% more fructose than if the sodas were made with natural sugar. HFCS can be made to have any proportion of fructose, as high as 90%, and added to foods without the need to disclose the specific fructose content. It is not fair for the industry association to talk about the relative harmlessness of HFCS 55 and HFCS 42 and then to feed us HFCS 65! While exploring Alexandre Borovik's math blog, I came across this: Even [when their children are] as young as 22 months, American parents draw boys’ attention to numerical concepts far more often than girls’. Indeed, parents speak to boys about number concepts twice as often as they do girls. For cardinal-numbers speech, in which a number is attached to an obvious noun reference — “Here are five raisins” or “Look at those two beds” — the difference was even larger. Mothers were three times more likely to use such formulations while talking to boys. The Avengers is an entertaining movie. It however fails the Bechdel test. Recall what the Bechdel test is: The movie has to have (1) at least two named female characters, (2) who talk to each other (3) about something other than a man. The Avengers has three named female characters. However, they do not talk to each other. A feminist critique is here. In the Land of the Free and the Home of the Brave, it is men who speak for women, by an overwhelming majority. At least in the mainstream media. Via dailykos.com
{"url":"http://arunsmusings.blogspot.com/2012_06_01_archive.html","timestamp":"2014-04-20T23:41:40Z","content_type":null,"content_length":"163779","record_id":"<urn:uuid:638ed2b5-bc4b-44b3-9d90-6dbcf225b2cb>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Tensor power - Notation question Hi everyone, I have a question about a notation that is often used in papers, but I cannot figure out what it means. Let $M$ be an $A$-module. I have seen this notation $$M^{\otimes -n}$$ I think this would mean $(Hom_A(M,A))^{\otimes n}$, but on the other hand that can be denoted by $(M^*)^{\otimes n}$. Is this a standard notation? notation noncommutative-algebra That notation is usually used to mean your first option when the module is invertible. The only way to know what the author of those papers meant is to actually tell us what papers you are reading, though. – Mariano Suárez-Alvarez♦ May 19 '11 at 13:31 (The reason one prefers the notation $M^{\otimes -n}$ to $(M^*)^{\otimes n}$ is, first and most importantly, that it is considerably less cumbersome, but it also allows you to write things like $\bigoplus_{n\in\mathbb Z}M^{\otimes n}$ which would otherwise need notational circumlocutions.) – Mariano Suárez-Alvarez♦ May 19 '11 at 14:00 I retagged this question because a discussion on meta suggested they wanted to get rid of the generic "algebra" tag: tea.mathoverflow.net/discussion/1071/… – David White Jun 28 '11 at 13:43
{"url":"http://mathoverflow.net/questions/65431/tensor-power-notation-question","timestamp":"2014-04-24T04:33:41Z","content_type":null,"content_length":"50014","record_id":"<urn:uuid:68f862c0-5056-49e8-b14c-19acec4e6a54>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Robotics Institute: A Spherical Representation for Recognition of 3-D Curved Objects

We are investigating a new approach for representing 3-D curved objects for recognition and modeling. Our approach starts by fitting a discrete mesh of points to the raw data. The fitting is based on the concept of deformable surfaces: Starting with a spherical shape, a mesh is iteratively deformed, subject to attractive forces from the data points, until it reaches the stable shape which is the best fit to the input set of points. Once a discrete set of points is fit to the surface, values such as discrete surface curvature can be computed at each of its nodes. Moreover, each node of the mesh can be mapped to a corresponding node of a reference spherical mesh with the same number of points and the same topology as the object mesh. By storing on the spherical mesh the values computed on the surface of the object, we have, in effect, created a spherical image of the object. We call this spherical image the Spherical Attribute Image (SAI).

The SAI representation has an important invariance property that makes it suitable for a number of applications in the area of 3-D object recognition and modeling: Assuming that the mesh fit to the object satisfies certain regularity constraints, the SAIs of two instances of an object which differ by a rigid transformation are identical up to a rotation of the sphere. Consequently, the problem of bringing two 3-D objects into registration is replaced by the much simpler problem of bringing spherical images into correspondence. Moreover, because of the way the mapping between object mesh and SAI is established, SAIs can be used to represent arbitrary non-convex objects and partial views of objects. We are taking advantage of these properties in three main areas, which we describe below.

Object Recognition: Given a complete object model represented by its SAI and a partial SAI extracted from a view of a scene, we can compute the best transformation between model and observed object by registering the two spherical images. The registration of the SAIs yields a set of correspondences between nodes of the model mesh and nodes of the observed mesh and a measure of similarity between the two SAIs. The correspondences are used for computing the transformation between model and scene; the similarity measure is used for deciding whether the model corresponds to the observed scene. This object recognition algorithm can be applied to general curved objects.

Object Modeling: The general problem of 3-D object modeling is to build a complete object surface model given a number of partial views of the object. This problem is usually solved under the constraint that the transformations between viewing positions are at least approximately known. Using the SAI representation eliminates this constraint. Specifically, after a different SAI is created for every view, the transformation between views is computed using the matching algorithm described above. The data from all the views can then be transformed into a single reference frame and aggregated into a single surface model of the object. This approach has the advantage that no prior knowledge of the transformations is required.

Data Fusion: Although the previous discussion of the SAI representation was based on the idea of attaching curvature at every mesh node, any value computed at a node of the mesh could be stored at that node, for example, the color. In this case, matching two SAIs involves finding the rotation that yields the smallest distance between the spherical images of both color and curvature. This gives an opportunity to use geometric information and appearance information in the same framework.
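As a rough illustration of that last idea, the sketch below scores candidate rotations of one spherical attribute image against another and keeps the one with the smallest mean squared attribute difference. This is not the Institute's implementation; the node positions, attribute values, and the candidate-rotation set are placeholder assumptions, and a real system would also exploit the mesh connectivity.

import numpy as np

def sai_distance(nodes_a, attr_a, nodes_b, attr_b, rotation):
    """Rotate SAI A onto B and compare attributes at nearest nodes (brute force)."""
    rotated = nodes_a @ rotation.T                    # rotate the unit-vector nodes of mesh A
    nearest = np.argmax(rotated @ nodes_b.T, axis=1)  # max dot product = smallest angle on the sphere
    diff = attr_a - attr_b[nearest]                   # e.g. curvature and/or color stored at each node
    return np.mean(diff ** 2)

def best_rotation(nodes_a, attr_a, nodes_b, attr_b, candidates):
    """Return the candidate rotation giving the smallest SAI distance."""
    scores = [sai_distance(nodes_a, attr_a, nodes_b, attr_b, r) for r in candidates]
    return candidates[int(np.argmin(scores))], min(scores)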
{"url":"http://www.ri.cmu.edu/research_project_detail.html?project_id=369&menu_id=261","timestamp":"2014-04-18T05:57:21Z","content_type":null,"content_length":"22772","record_id":"<urn:uuid:5923f7c9-2bbf-4a47-9cf3-c66a40d653d8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
University Park, TX Math Tutor Find an University Park, TX Math Tutor ...I am familiar with PostgresSQL, Oracle and Sybase on UNIX (I have used and administered each one both at work and at home). I am an experimental physicist and have been using and designing electronic instruments for use in physics for 40 years. I have extensive experience in high speed,low signa... 25 Subjects: including precalculus, differential equations, SAT math, statistics ...I have taken and received an A in genetics at the graduate and undergraduate level. Additionally, I am enrolled in a Ph.D. program in biomedical sciences and much of the research I do revolves around cancer genetics. I also have a degree in mathematics, so I am able to deal with the probability theory involved in genetics. 33 Subjects: including algebra 2, reading, trigonometry, differential equations ...But I've never met anyone who truly was. After determining where and why a student is struggling, whether it be fundamentals, problem solving approach or something else, I put together a plan that begins will small successes. Even a small success can do a lot for boosting confidence. 11 Subjects: including SAT math, ACT Math, algebra 1, geometry French native and certified from the French Department of Education, I have been an elementary school teacher in France for 25 years and at DIS (Dallas International School) for one year. I am currently giving French lessons to adults and children. My methods of teaching are very open and adapted ... 4 Subjects: including precalculus, French, Microsoft Word, Microsoft PowerPoint I am an experienced tutor and instructor in undergraduate physics. I tutored at the University of Texas at Dallas, where I was also a Teaching Assistant. I taught courses at Richland College and Collin County Community College. 8 Subjects: including algebra 1, algebra 2, calculus, geometry Related University Park, TX Tutors University Park, TX Accounting Tutors University Park, TX ACT Tutors University Park, TX Algebra Tutors University Park, TX Algebra 2 Tutors University Park, TX Calculus Tutors University Park, TX Geometry Tutors University Park, TX Math Tutors University Park, TX Prealgebra Tutors University Park, TX Precalculus Tutors University Park, TX SAT Tutors University Park, TX SAT Math Tutors University Park, TX Science Tutors University Park, TX Statistics Tutors University Park, TX Trigonometry Tutors
{"url":"http://www.purplemath.com/University_Park_TX_Math_tutors.php","timestamp":"2014-04-21T15:12:07Z","content_type":null,"content_length":"24128","record_id":"<urn:uuid:90399174-b33b-42b7-9b58-2fabf9d5b4c5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Where would you use prime factorization in real life? One place the notion of factorization and primes is used in everyone's life is in RSA encryption processes. The encryption method's security is based on the fact that it is VERY DIFFICULT to factor a composite number whose prime factorization is two large prime numbers. RSA encryption is used for secure internet communications.
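A toy sketch of the idea, in Python (this is an illustration added here, not part of the answer above, and the numbers are far too small to be secure; pow(e, -1, phi) needs Python 3.8+):

p, q = 61, 53                 # the two secret primes; real keys use primes hundreds of digits long
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120, kept secret
e = 17                        # public exponent, chosen coprime to phi
d = pow(e, -1, phi)           # 2753, the private exponent (modular inverse of e)

m = 65                        # a message encoded as a number smaller than n
c = pow(m, e, n)              # encryption: c = m^e mod n  -> 2790
assert pow(c, d, n) == m      # decryption recovers m

# Anyone who could factor n = 3233 back into 61 * 53 could recompute phi and d.
# For a modulus built from two large primes, no known classical method does this in practice.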
{"url":"http://mathhomeworkanswers.org/40834/where-would-you-use-prime-factorization-in-real-life","timestamp":"2014-04-24T08:25:33Z","content_type":null,"content_length":"166020","record_id":"<urn:uuid:4ca1dbd3-afde-4201-a13f-f3396617bb1e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Question about 3 X 3 matrices

What is the formula (derivative) of the inverse of a 3 X 3 matrix? If you can help, thanks...

As far as I know there's no formula (and I could be wrong there). A reasonably simple method is the Gauss elimination method; it can be used to find the inverse of a matrix of any size as long as it is square. Gauss elimination involves performing certain operations on the rows. The operations you can use are:
1) interchanging two rows
2) multiplying a row by a number (can be a fraction or a negative number)
3) adding one row to another
4) adding a multiple of one row to another
The definition of the Gauss elimination method is: if a sequence of row operations reduces a square matrix, A, down to I of the same size, then the same sequence of operations would reduce I down to the inverse of A. Now although you operate on the rows, you work from left to right along the columns, trying to get all the numbers below the leading diagonal down to zero and the leading diagonal itself down to 1's, so it looks like this:
1 x x
0 1 x
0 0 1
Once you've done this you go from right to left changing the "x's" into zeros while still keeping the rest of the matrix as it was (1's and zeros). Once you've done this it should look like this:
1 0 0
0 1 0
0 0 1
Apply the same operations you used to get A down to I (the above matrix) to I itself, and you will get the inverse of A. It's easiest to do them simultaneously.

Hi! There is one. By the way, here is the formula:
A^(-1) = (1/det A) · adj(A)
where adj(A) is the adjugate of A (the transpose of the matrix of cofactors).

ah ye now i remember.......i also remember why i never learnt it
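A small sketch of the Gauss-Jordan procedure described above (a hypothetical Python illustration added here, not anything posted in the thread; it reduces the augmented matrix [A | I] with partial pivoting and makes no attempt to handle singular matrices):

def invert(A):
    n = len(A)
    # build the augmented matrix [A | I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: swap in the row with the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # scale the pivot row so the leading entry is 1
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # eliminate this column from every other row
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]   # the right half is now A^-1

A = [[2.0, 1.0, 1.0],
     [1.0, 3.0, 2.0],
     [1.0, 0.0, 0.0]]
print(invert(A))   # rows of A^-1; multiplying back by A gives (approximately) the identity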
{"url":"http://mathhelpforum.com/algebra/943-question-about-3-x-3-matrices.html","timestamp":"2014-04-18T00:52:23Z","content_type":null,"content_length":"37605","record_id":"<urn:uuid:e624e65e-0e0b-48eb-b1b4-1a07f8186b86>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Med Ed in real time. Twitter.

Today I received a tweet from @MedCalc. I have a bit of an appCrush on MedCalc. I began using it on the Palm and continued with the Treo and iPhone. This is a question I use in my IV fluid lecture.

The lesson goes like this: It starts by me asking the students what's the highest glucose they have ever seen. I go around the room and find the highest. Sometimes they'll have a cool story of a DKA or HONK. Then I tell them that they haven't seen anything until they've had an ESRD patient in DKA. Since dialysis patients can't make urine, they escape the osmotic diuresis. The blood sugar doesn't have an escape route so the plasma levels skyrocket. Additionally, the lack of profound volume loss allows these patients to delay treatment by avoiding the life-threatening shock that typically hospitalizes the patient with DKA. I have seen blood sugars of 2,700 mg/dL.

After that lead-in, I ask the students to guess the concentration of glucose in D5W. Someone will say 50 mg/dL, but quickly they will arrive at a consensus of 500 mg/dL. If someone says 5,000 it is quickly forgotten as unreasonable. Then I pick on one student and walk through the trivial conversion of g/dL to mg/dL. I like to get a Canadian or foreigner since they are metric natives. Once the students get that the concentration of glucose in D5W is 5,000 mg/dL the question turns to why. Why would anyone want a routine, off-the-shelf fluid to have such a wildly non-physiologic glucose concentration? At least one student knows the answer or I coach a dim group to the answer without much difficulty. The glucose is needed to make D5W iso-osmotic. If we infused sterile water we would trigger hemolysis.

The last step in the lesson is showing the students that they already know how to calculate the osmolality of D5W. In the previous acid-base lesson the students learn to calculate an osmolar gap. In America one needs to convert the BUN and glucose from mg/dL to mOsm/L. This means dividing by the molecular weight and multiplying by 10. The equation looks like this:

Calculated osmolality = 2 × [Na] + glucose/18 + BUN/2.8

So all we need to do to figure out the osmolality of the glucose is divide by 180 and multiply by 10. I have someone take 5000 and divide by 18 = 278 mOsm/kg water, isosmotic to plasma. It's a great 5-minute diversion that teaches the students something important. It shows that these rules all fit together. I try to impress on them that fluids and electrolytes are internally consistent like math.

So it was rather embarrassing to find out I have been wrong. The actual osmolality of D5W is 252, not 278. Here was my erroneous reply to @medCalc. The author, Michelle Lin, chimed in with her reference. Nice work. The first thought was that the discrepancy was due to osmolarity versus osmolality. Science teacher extraordinaire Gary Abud weighed in. Michelle Lin then came up with the correct molecular weight. It's just glucose at 180 plus a water at 18 = 198. Then Bryan Hayes, a pharmacist, gives his stamp of approval.

The whole thing tumbled out from question to blog post in about 6 hours. The biggest problem is that my cool story problem and interactive lesson are off by about 10%. Ahh, the weight of a water molecule.
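A quick check of the arithmetic in the post (a small Python illustration using only the numbers quoted above; the embedded tweets are not reproduced here):

glucose_mg_dl = 5000              # D5W is 5 g/dL = 5000 mg/dL
mw_glucose = 180                  # anhydrous glucose
mw_dextrose_monohydrate = 198     # glucose (180) plus one water (18)

# mg/dL -> mOsm/L: divide by the molecular weight and multiply by 10
print(glucose_mg_dl / mw_glucose * 10)                # 277.8 -- the "278" from the lesson
print(glucose_mg_dl / mw_dextrose_monohydrate * 10)   # 252.5 -- the "252" quoted for D5W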
{"url":"http://fellowshipofthebeans.com/med-ed-in-real-time-twitter/","timestamp":"2014-04-17T03:48:15Z","content_type":null,"content_length":"68711","record_id":"<urn:uuid:7602f557-6bd3-4a20-b97c-6830dd55a9b9>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof help Originally posted by Lonewolf Try considering the function [tex]E(x) = 1/2[F(x) + F(-x)][/tex] and consider what [tex]O(x)[/tex] could be. Ok so [tex]E(x) = 1/2[F(x) + F(-x)][/tex] [tex]O(x) = 1/2[F(x) - F(-x)][/tex] [tex]F(x) = E(x) + O(x)[/tex] [tex]= 1/2[F(x) + F(-x)] + 1/2[F(x) - F(-x)][/tex] [tex]= 1/2[F(x) + F(-x) + F(x) - F(-x)][/tex] [tex]= 1/2[2F(x)][/tex] [tex]= F(x)[/tex] Ok this is what i have worked out so far. But i still don't see how it solves my problem. I must be missing something.
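(Presumably the step being missed is that the two pieces really are even and odd, which is the point of the decomposition. A quick check, added here for completeness:

[tex]E(-x) = \tfrac{1}{2}[F(-x) + F(x)] = E(x)[/tex]

[tex]O(-x) = \tfrac{1}{2}[F(-x) - F(x)] = -O(x)[/tex]

so [tex]F(x) = E(x) + O(x)[/tex] writes F as the sum of an even function and an odd function, which is presumably what the exercise asks you to show.)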
{"url":"http://www.physicsforums.com/showthread.php?p=130272","timestamp":"2014-04-18T00:21:21Z","content_type":null,"content_length":"35903","record_id":"<urn:uuid:b920d3b1-1bb7-41be-81d6-e13ac7304183>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
icfit {interval}
calculate non-parametric MLE for interval censored survival function

This function calculates the non-parametric maximum likelihood estimate for the distribution from interval censored data using the self-consistent estimator, so the associated survival distribution generalizes the Kaplan-Meier estimate to interval censored data. Formulas using Surv are allowed similar to survfit.

## S3 method for class 'formula':
icfit(formula, data, ...)
## S3 method for class 'default':
icfit(L, R, initfit = NULL, control = icfitControl(), Lin = NULL, Rin = NULL, conf.int = FALSE, ...)

L: numeric vector of left endpoints of censoring interval (equivalent to first element of Surv when type='interval2', see details)
R: numeric vector of right endpoints of censoring interval (equivalent to second element of Surv function when type='interval2', see details)
initfit: an initial estimate as an object of class icfit or icsurv, or a character vector of the name of the function used to calculate the initial estimate (see details)
control: list of arguments for controlling the algorithm (see icfitControl)
Lin: logical vector, should L be included in the interval? (see details)
Rin: logical vector, should R be included in the interval? (see details)
formula: a formula with response a numeric vector (which assumes no censoring) or Surv object; the right side of the formula may be 1 or a factor (which produces separate fits for each level).
data: an optional matrix or data frame containing the variables in the formula. By default the variables are taken from environment(formula).
conf.int: logical, estimate confidence interval? For setting conf.level, etc see icfitControl. (May take very long, see Warning)
...: values passed to other functions

The icfit function fits the nonparametric maximum likelihood estimate (NPMLE) of the distribution function for interval censored data. In the default case (when Lin=Rin=NULL) we assume there are n (n = length(L)) failure times, and the ith one is in the interval between L[i] and R[i]. The default is not to include L[i] in the interval unless L[i]=R[i], and to include R[i] in the interval unless R[i]=Inf. When Lin and Rin are not NULL they describe whether to include L and R in the associated interval. If either Lin or Rin is length 1 then it is repeated n times, otherwise they should be logicals of length n.

The algorithm is basically an EM-algorithm applied to interval censored data (see Turnbull, 1976); however first we can define a set of intervals (called the Turnbull intervals) which are the only intervals where the NPMLE may change. The Turnbull intervals are also called the innermost intervals, and are the result of the primary reduction (see Aragon and Eberly, 1992). The starting distribution for the E-M algorithm is given by initfit, which may be either (1) NULL, in which case a very simple and quick starting distribution is used (see code), (2) a character vector describing a function with inputs L, R, Lin, Rin, and A, see for example initcomputeMLE, or (3) a list giving pf and intmap values, e.g., an icfit object. If option (2) is tried and results in an error then the starting distribution reverts to the one used with option (1). Convergence is defined when the maximum reduced gradient is less than epsilon (see icfitControl) and the Kuhn-Tucker conditions are approximately met, otherwise a warning will result (see Gentleman and Geyer, 1994). There are other faster algorithms (for example see EMICM in the package Icens).
The output is of class icfit which is identical to the icsurv class of the Icens package when there is only one group for which a distribution is needed. Following that class, there is an intmap element which gives the bounds about which each drop in the NPMLE survival function can occur. Since the classes icfit and icsurv are so closely related, one can directly use of initial (and faster) fits from the Icens package as input in initfit. Note that when using a non-null initfit, the Lin and Rin values of the initial fit are ignored. Alternatively, one may give the name of the function used to calculate the initial fit. The function is assumed to input the transpose of the A matrix (called A in the Icens package). Options can be passed to initfit function as a list using the initfitOpts variable in icfitControl. The advantage of the icfit function over those in Icens package is that it allows a call similar to that used in survfit of the survival package so that different groups may be plotted at the same time with similar calls. An icfit object prints as a list (see value below). A print function prints output as a list except suppresses printing of A matrix. A summary function prints the distribution (i.e., probabilities and the intervals where those probability masses are known to reside) for each group in the icfit object. There is also a plot method, see plot.icfit. For additional references and background see Fay and Shaw (2010). The confidence interval method is a modified bootstrap. This can be very time consuming, see warning. The method uses a percentile bootstrap confidence interval with default B=200 replicates (see icfitControl), with modifications that prevent lower intervals of 1 and upper intervals of 0. Specifically, if there are n observations total, then at any time the largest value of the lower interval for survival is binom.test(n,n,conf.level=control()$conf.level)$conf.int[1] and analogously the upper interval bounds using binom.test(0,n). The output (CI element of returned list) gives confidence intervals just before and just after each assessment time (as defined by icfitControl$timeEpsilon). An object of class icfit (same as icsurv class, see details). There are 4 methods for this class: plot.icfit, print.icfit, summary.icfit, and [.icfit. The last method pulls out individual fits when the right side of the formula of the icfit call was a factor. A list with elements: this is the n by k matrix of indicator functions, NULL if more than one strata, not printed by default a named numeric vector of numbers of observations in each strata, if one strata observation named NPMLE this is max(d + u - n), see Gentleman and Geyer, 1994 number of iterations vector of estimated probabilities of the distribution 2 by k matrix, where the ith column defines an interval corresponding to the probability, pf[i] a logical, TRUE if normal convergence character text message on about convergence logical denoting whether any of the Turnbull intervals were set to zero if conf.int=TRUE included as a list of lists for each stratum, each one having elements time, lower, upper, confMethod, conf.level The confidence interval method can be very time consuming because it uses a modified bootstrap and the NPMLE is recalculated for each replication. That is why the default only uses 200 bootstrap replications. 
A message gives a crude estimate of how long the confidence interval calculation will take (it calculates a per replication value by averaging the time of the first 10 replications), but that estimate can be off by 100 percent or more because the time to calculate each bootstrap replication is quite variable. Aragon, J and Eberly, D (1992). On convergence of convex minorant algorithms for distribution estimation with interval-censored data. J. of Computational and Graphical Statistics. 1: 129-140. Fay, MP and Shaw, PA (2010). Exact and Asymptotic Weighted Logrank Tests for Interval Censored Data: The interval R package. Journal of Statistical Software. http://www.jstatsoft.org/v36/i02/. 36 Gentleman, R. and Geyer, C.J. (1994). Maximum likelihood for interval censored data:consistency and computation. Biometrika, 81, 618-623. Turnbull, B.W. (1976) The empirical distribution function with arbitrarily grouped, censored and truncated data. J. R. Statist. Soc. B 38, 290-295. icout<-icfit(Surv(left,right,type="interval2")~treatment, data=bcos) ## can pick out just one group Documentation reproduced from package interval, version 1.1-0.1. License: GPL (>= 2)
{"url":"http://www.inside-r.org/packages/cran/interval/docs/icfit.default","timestamp":"2014-04-20T00:54:17Z","content_type":null,"content_length":"26669","record_id":"<urn:uuid:32a4a090-d602-4e95-a49c-cc9b91e69544>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
An Ancient Relation between Units of Length and Volume Based on a Sphere

The modern metric system defines units of volume based on the cube. We propose that the ancient Egyptian system of measuring capacity employed a similar concept, but used the sphere instead. When considered in ancient Egyptian units, the volume of a sphere, whose circumference is one royal cubit, equals half a hekat. Using the measurements of large sets of ancient containers as a database, the article demonstrates that this formula was characteristic of Egyptian and Egyptian-related pottery vessels but not of the ceramics of Mesopotamia, which had a different system of measuring length and volume units.

Citation: Zapassky E, Gadot Y, Finkelstein I, Benenson I (2012) An Ancient Relation between Units of Length and Volume Based on a Sphere. PLoS ONE 7(3): e33895. doi:10.1371/journal.pone.0033895

Editor: Fred H. Smith, Illinois State University, United States of America

Received: September 19, 2011; Accepted: February 18, 2012; Published: March 28, 2012

Copyright: © 2012 Zapassky et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This study was supported by the European Research Council Advanced Grant n° 229418, titled Reconstructing Ancient Israel: The Exact and Life Sciences Perspective (RAIELSP), see http://erc.europa.eu/index.cfm?fuseaction=page.display&topicID=517. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Knowledge of the connection between linear dimensions and volume of containers is important, for instance, in order to achieve quick estimates of trade commodities. However, in many measuring systems, both ancient and modern, the length and volume units seem to have emerged independently, without a simple, intrinsic relation between them [1]. One of the advantages of the metric system, initiated at the time of the French Revolution, is the introduction of such a relationship: 10^3 cm^3 make 1 liter [2], meaning that the unit of volume is based on the unit of length, employing a cube as an elementary body. For the sake of simplicity, we do not distinguish in this paper between the notions of volume and capacity and use “volume” throughout. This conceptually convenient definition, however, does not help in practical measurements, as most containers are not cube-shaped. The use of the cube of a length-unit edge can be traced in antiquity in ancient Egypt. The Egyptian units of length and volume were the royal cubit and the hekat. Various pieces of evidence – papyri, inscribed vessels and monumental texts – attest to the hekat as the dominant unit in practical activities, e.g., in measuring stored grain and liquids [3], [4]. According to the evidence of ancient rods and marked vessels, the royal cubit is estimated as ~52.3 cm, and consists of 28 smaller units called fingers. The hekat is estimated as ~4.8 liters [3], [4]. Ceremonial stone cubit rods were kept in temples and were considered as possessing spiritual meaning: the inscription on the rods described in [5] says “The cubit is life, prosperity, and health, the repeller of the rebel …”.
A similar statement can be found on the wooden cubit rod in [6]: “… [Gods]… may give life, prosperity, and health, and good lifespan …” The cube of one cubit edge was used in ancient Egypt for estimating soil volumes in earthworks (see [7] for the construction account in Papyrus Reisner I, Section I), and the Egyptians knew how to convert cubits into hekats. Translating Problems 41 and 44 in the Rhind Papyrus [3], [7], [8], [9] to modern mathematical formulae, one learns that the volume of a cube of 1 cubit-edge equals 30 hekats, i.e., (1 royal cubit)^3/30 = 1 hekat. Using the value of 1 royal cubit = 52.3 cm, one indeed obtains, according to the above-mentioned Rhind Papyrus problem, an estimate of one hekat = 4.77 liters. This cube-based relation was of little use in the typically ovoid-shaped Egyptian ceramic jars [10]–[12]. Surprisingly, our measuring of the circumference of hundreds of Egyptian ovoid-shaped jars according to their drawings demonstrates preference for vessels whose maximal external horizontal circumference varies between 26–32 fingers, i.e., 1 cubit±2 fingers (see Fig. 1).

Figure 1. The circumference of 376 Egyptian New Kingdom ovoid-shaped jars presented in three recent publications [10]–[12]. According to the dip test (see methods below) the distribution is unimodal, p = 0.66.

Can the knowledge that the circumference of an ovoid-shaped container is 1 cubit assist in estimating its volume? Below we present evidence that the inherent relationship between ancient Egyptian units of length and volume measurements can be based on another elementary body – the sphere – and test our hypothesis based on the available archaeological information. We also demonstrate that the revealed relation was not relevant in Mesopotamia, where a different system of measuring length and volume units was in use.

Results and Discussion

In Egyptian units of length and volume, the volume of a spherical container of 1 cubit circumference would be 0.5 hekat. Indeed, the volume v of a sphere of circumference c equals:

v = c^3/(6π^2)

Substituting 1 royal cubit for c and employing the above-mentioned solution to Problems 41 and 44 in the Rhind Papyrus, one obtains

v = (1 royal cubit)^3/(6π^2) = (30/(6π^2)) hekat ≈ 0.5 hekat

We checked the 1 royal cubit circumference → ½ hekat relation in several available sets of Egyptian and Egyptian-related ceramic containers. First, we opted for a large set of New Kingdom Egyptian ovoid-shaped beer jars [10]–[14] (Fig. 2). Despite variation in size, the most frequent maximal external circumference of these vessels (measured by us according to their drawings) indeed varies between 27 and 31 fingers (i.e., slightly above 1 royal cubit) and their modal volume, accounting for a wall width of 0.5–1.5 cm, varies between 0.45–0.65 hekat (Fig. 3).
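A quick numerical check of the relation derived above (this small Python calculation is not part of the paper; it simply plugs in the paper's values of 1 royal cubit ≈ 52.3 cm and 1 hekat ≈ 4.8 liters):

import math

cubit_cm = 52.3
hekat_l = 4.8

volume_cm3 = cubit_cm**3 / (6 * math.pi**2)   # sphere volume from its circumference: c^3 / (6*pi^2)
print(volume_cm3 / 1000)                      # ~2.42 liters
print(volume_cm3 / 1000 / hekat_l)            # ~0.50 hekat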
Beer jars were produced in the coiling technique [15], [16] and the method of production can be related to the maximal circumference of 1 cubit: the potter started building the jar from its base, but could have prepared the longest coil of 1 cubit in advance, to be used in the middle of the vessel. Globular pottery vessels – the best to demonstrate the 1 royal cubit circumference→½ hekat relation – are not common in Egypt proper. We therefore turned to perfect sphere-shaped ceramic jugs produced in late Iron Age I (ca. 1000 BCE) Phoenicia. We think that it is legitimate to do so because of the long-lasting tradition of cultural connections between Phoenicia and Egypt [16], which commenced as early as the third millennium BCE and continued until at least the 8^th century BCE [17]–[21]. This influence can be observed in different realms such as pictorial representations on seals and seal impressions [22], art representations [23], [24] and pottery production [25]. We examined 89 Iron Age I-IIA Phoenicia-made globular jugs. Three of them we measured manually: one jug from Megiddo in the Jezreel Valley (Fig. 4) and two jugs from Tel Masos in the Beer-Sheba Valley. The other 86 jugs were measured according to their drawings; 55 come from Cyprus [26], [27], seven from Tyre [28] and 25 from various locations in Israel: Megiddo, Tel Dor, Tel Keisan, Hazor, Tell Qasile, etc. [29]–[36]. In this case, too, the distribution of the jugs' external maximal circumference has a clear mode at 25–30 fingers (Fig. 5a). Taking into consideration a wall width of 0.5–0.7 cm, they provide a modal volume of 0.5 hekat (Fig. 5b). It is possible that the Phoenician globular jugs were used in trade of valuable liquids [34]. The inherent relationship between the royal cubit and the hekat could have made a quick estimate of their capacity possible. In order to establish whether the sphere-based relation of 1 royal cubit circumference½ hekat was not just a coincidental expression of ovoid-shaped containers of that size being convenient for daily use, we turned to ovoid-shaped vessels in Mesopotamia. We analyzed the circumference and volume of 58 Late Bronze jars from Tell Sabi Abyad [37] in north Syria. Here too the analysis was performed according to their published drawings. It revealed three size groups, none featuring 52–53 cm circumference (proxy of a royal cubit), or 2.4 liters volume (proxy of 0.5 hekat) characteristic of the Egypt-related jars. To the contrary, there is a clear gap in these values in the jars' volume distribution (Fig. 6). This means that despite certain similarities in the use of volume units in Egypt and Mesopotamia, the different units in the latter did not result in a similar, straightforward relationship between units of length and volume that is based on a sphere. One could have expected that the use of such formulae would have started in the Late Bronze Age, when the Levant, including Phoenicia, fell under direct Egyptian sway [19]. However, our study of ovoid-shaped Late Bronze jugs and jars from Megiddo [38] (Fig. 7) provides the modal interval of the circumference as 22–42 fingers, that is essentially wider than the modal interval of the circumference of the Egyptian jars, 26–32 fingers (Fig. 1). Although most of the complete vessels chosen for the comparison were found in tombs, they well-represent the daily, domestic repertoire at Megiddo. Looking at the modal interval of the distribution of the Megiddo jugs and jars' circumference at higher a resolution (inner histogram in Fig. 
7) makes it possible to assume that it has more than one mode; one of the modal intervals is 25–32 fingers, the same as that of the Egyptian jars. However, the dip statistical test (p = 0.11) does not allow for a definite conclusion. From a broader perspective, it is questionable whether at that time the Phoenician cities had achieved a commercial status similar to what they had in the later Iron Age. Moreover, the very fact that globular vessels are not frequent in Phoenicia in the Late Bronze Age seems to indicate that the idea of connection between the circumference of the globular jar and its volume developed later.

The ancient Egyptian 1 royal cubit → ½ hekat relation in a sphere, detected in pottery vessels, sheds light on the practice of daily measurements of volume of liquids in the Ancient Near East. We have discovered this relation based on the analysis of the form and volume of a large number of Egyptian and Phoenician jars. Phoenician globular jars best express this relation: their circumference concentrates around the value of 1 cubit, while their volume is around ½ hekat. What is missing in order to confirm our discovery is textual evidence which would discuss the relation between circumference and volume in ovoid-shaped jars. To conclude, the ancient Egyptian 1 royal cubit → ½ hekat relation in a sphere is no less sophisticated than the modern 10^3 cm^3 → 1 liter relation expressed in a cube. This wisdom of a sphere-based relationship, which was inherent and possibly unique to Egypt and its cultural sphere of influence, was lost over the ages.

Materials and Methods

The external circumference of a jar was estimated by direct measurement or by multiplying the length of the widest horizontal cross-section of a drawing by π. In order to estimate the volume of a jar we scan the drawing, digitize its external and internal contours, and construct a 3D model by rotating its internal and external contours with Rhinoceros™ software. We can then estimate the volume of the jar, up to the neck, according to the internal contour. We estimate the wall width according to the drawings as well as by manually measuring the volume of three of the jugs and compared the result to the estimates obtained according to the digitized external profile. As we have demonstrated elsewhere [39], in the case of a symmetrical jar, this procedure provides an adequate estimate of its volume. The unimodality of the distribution was tested according to the dip test [40]. We employed the MATLAB software provided by [41], which implements the algorithm of [42] and applies bootstrapping for significance estimation.

Author Contributions

Conceived and designed the experiments: EZ IF IB. Performed the experiments: EZ YG IF IB. Analyzed the data: EZ IB. Contributed reagents/materials/analysis tools: EZ YG IF IB. Wrote the paper: EZ YG IF IB.

1. 1. Zupko RE (1990) Revolution in measurement: Western European weights and measures since the age of science. Philadelphia: The American Philosophical Society. 548 p.
2. 2. Bureau International des Poids et Mesures. Available: http://www.bipm.org/en/si/history-si/. Accessed 5 June 2011.
3. 4. Helck W (1980) Masse und Gewichte (Pharaonische Zt). In: Band , Helck W, Otto E, editors. Lexikon der Ägyptologie. Wiesbaden: Otto Harrassowitz. pp. 1199–1209.
4. 5. Hayes WC (1959) The scepter of Egypt: a background for the study of the Egyptian antiquities in the Metropolitan Museum of art, Part I. Cambridge, Massachusetts: Harvard University Press. 496 5. 6.
Bienkowski P, Tooley A (1995) Gifts of the Nile: Ancient Egyptian arts and crafts in Liverpool Museum, 31 p, pl.34, cited according to http://www.globalegyptianmuseum.org/detail.aspx?id=4424. Accessed 01.12.2011. 6. 7. Imhausen A (2007) Egyptian mathematics. In: Katz V, editor. The mathematics of Egypt, Mesopotamia, China, India, and Islam. A sourcebook. Princeton: Princeton University Press. pp. 7–56. 7. 8. Peet TE (1923) The Rhind mathematical papyrus, British Museum 10057 and 10058. Introduction, transcription, translation and commentary. London: Hodder and Stoughton. 135 p. 8. 9. Robins G, Shute C (1987) The Rhind mathematical papyrus. An ancient Egyptian text. London: British Museum. 60 p. 9. 10. Aston DA (1999) Pottery from the Late New Kingdom to the Early Ptolomaic period. Mainz am Rhein: P. von Zabern. 363 p. 10. 12. Rose P (2007) The Eighteenth Dynasty pottery corpus from Amarna. London: Egypt Exploration Society. 189 p. 11. 13. Castel G, Meeks D (1980) Deir el-Medineh, 1970: fouilles conduites par Georges Castel. Cairo: Institut français d'archéologie orientale du Caire. 105 p. 12. 15. Barta M (1996) Several remarks on beer jars found at Abusir. pp. 127–131. Cahiers de la Ceramique Egyptienne 4. 13. 16. Roux V, Panitz-Cohen N, Martin MAS (2012) Two potting communities at Beth-Shean? A technological approach to the Egyptian and Canaanite forms in the Ramesside period. In: Martin MAS, editor. Egyptian-type Pottery in the Bronze Age Southern Levant. Contributions to the chronology of the Eastern Mediterranean. Vienna: Academy of the Sciences. In press. 14. 17. Sowada KN (2009) Egypt in the Eastern Mediterranean during the Old Kingdom: An archaeological perspective. Fribourg: Academic Press. 312 p. 15. 18. de Miroschedji P (2002) The socio-political dynamics of Egyptian–Canaanite interaction in the Early Bronze Age. In: Levy TE, Van Den Brink ECM, editors. Egypt and the Levant: Interrelations from the 4th through the Early 3rd Millennium BCE. London: Leicester University Press. pp. 39–57. 16. 19. Redford D (1992) Egypt, Canaan and Israel in ancient times. Princeton: Princeton University Press. 512 p. 17. 20. Stager LE (1992) The Periodization of Palestine from Neolithic through Early Bronze times. In: Ehrich R, editor. Chronologies in Old World archaeology (3rd Edition) Vol. 1. Chicago: University of Chicago Press. pp. 22–41. 18. 21. Stager LE (2001) Port and power in the Early and the Middle Bronze Age: The organization of maritime trade and hinterland production. In: Wolff SR, editor. Studies in the archaeology of Israel and neighboring lands in memory of Douglas L. Esse. The oriental Institute of the University of Chicago Studies in Ancient Oriental Civilization 59. Chicago: University of Chicago Press. pp. 625–638. 19. 22. Keel O, Uehlinger C (1998) Gods, Goddesses, and images of God in Ancient Israel. Minneapolis: Fortress Press. 466 p. 20. 23. Winter IJ (1976) Phoencian and North Syrian ivory carving in historical context: questions of style and distribution, Iraq 38.1–22. 21. 24. Bell L (2011) Collection of Egyptian bronzes. In: Stager LE, Master DM, Schloen DJ, editors. Ashkelon 3, the seventh century B.C. Winona Lake: Eisenbrauns. pp. 397–420. 22. 25. Finkelstein I, Zapassky E, Gadot Y, Master DE, Stager LE, et al. (2012) Phoenician “torpedo” storage jars and Egypt: standardization of volume based on linear dimensions. Ägypten und Levante (Egypt and the Levant). In press. 23. 29. Balensi J (1980) Les fouilles de R.W. 
Hamilton a Tell Abu Hawam, effectuees en 1932–1933 pour le comte du Dpt. des Antiquites de la Palestine sous Mandat Britannique, niveaux IV et V: dossier sur l'histoire d'un port mediteraneen durant les Ages du Bronze et du Fer (1600-950 environ av. J.-C.) Vol. II. Strasbourg: Universite des sciences humaines. 175 p. PhD thesis. 24. 30. Briend J (1980) Les Neveaux 9–11 (Fer I). In: Briend J, Humbert J-B, editors. Tell Keisan (1971–1976), une cite phenicienne en Galilee. Fribourg: Editions universitaires. pp. 197–234. 25. 31. Arie E (2006) The Iron Age I pottery: levels K-5 and K-4 and an intra-site spatial analysis of the pottery from stratum VIA. In: Finkelstein I, Ussishkin D, Halpern B, editors. Megiddo IV: The 1998–2002 seasons. pp. 191–298. Monograph Series of the Institute of Archaeology of Tel Aviv University 24, Tel-Aviv. 26. 32. Loud G (1948) Megiddo II. Seasons of 1935–1939. Chicago: Oriental Institute Publications LXII. 437 p. 27. 33. Zarzecki-Peleg A (2005) Tel Megiddo during Iron Age and IIA-IIB. Jerusalem: The Hebrew University. PhD thesis. 28. 34. Gilboa A (1999) The dynamic of Phoenician bichrome pottery: A view from Tel Dor. Bulletin of American Schools of Oriental Research 316 1–22. 29. 36. Ben-Tor A (2005) 421 p. Yoqneam II: The Iron Age and the Persian period: Final report of the archaeological excavations (1977–1988), Qedem Reports 6, Jerusalem. 30. 37. Duistermaat K (2008) The pots and potters of Assyria: technology and organization of production, ceramic sequence and vessel function at Late Bronze Age Tell Sabi Abyad, Syria. Turnhout: Brepols. 624 p. 31. 39. Zapassky E, Rosen B (2006) 3-D computer modeling of pottery vessels: the case of level K-4. In: Finkelstein I, Ussishkin D, Halpern B, editors. Megiddo IV: The 1998–2002 seasons. pp. 601–617. Monograph Series of the Institute of Archaeology of Tel Aviv University 24, Tel-Aviv. 32. 40. Hartigan JA, Hartigan PM (1985) The dip test of unimodality. The Annals of Statistics 13(1): 70–84. 33. 41. Hartigan's Dip Statistic. Available: http://www.nicprice.net/diptest/. Accessed 01.12.2011. Israeli scientists reveal ancient measuring system Posted by bennya Media Coverage of This Article Posted by PLoS_ONE_Group Jugs Tell Secret of Ancient Merchants Posted by bennya
{"url":"http://www.plosone.org/article/info:doi/10.1371/journal.pone.0033895?imageURI=info:doi/10.1371/journal.pone.0033895.g005","timestamp":"2014-04-18T18:13:13Z","content_type":null,"content_length":"105897","record_id":"<urn:uuid:ecaf179a-2383-479e-b9e9-73825ec98403>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
[qupqknlc] Greedy leap years

Leap day every 4 years. Omit leap day every 33*4=132 years. Keep leap day every 100*33*4=13200 years. This sequence of multipliers [4,33,100] was found by the greedy method. The entire cycle has the same average year length as the Gregorian calendar = 365 + 97/400 days, which uses multipliers [4,25,4]. We alternate between keeping and omitting leap days at each larger multiplier.

import Data.Ratio -- for (%)
countDays accum (h:t) offset = countDays (accum*h+offset) t (negate offset);
countDays accum [] _ = accum;
averageYear :: [Integer] -> Rational;
averageYear leapPattern = (countDays 365 leapPattern 1) % (product leapPattern)

What is going on such that averageYear [4,33,100] == averageYear [4,25,4]?

Find a set of multipliers which minimizes the product, i.e., cycle length.

I suspect this is related to Egyptian fractions or continued fractions. I suspect that the greedy method yields a sequence which monotonically increases.

Update: these are Pierce expansions, a variant of Engel expansions.

Applying the greedy algorithm to 365 + 71/293 days (as proposed by the Symmetry454 calendar) yields multipliers [4,32,58,97,146,293] for a cycle of 11251521835008 days in 30805635584 years. (This is far less efficient than the 293 year cycle.)

When applied to adding "leap weeks" to the Gregorian calendar, we find the sequence [5,8,10] as reported in calendar based on 7. If we apply leap weeks to the 365+71/293 year-length, we get the sequence [5,8,10,97,146,293].
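For what it's worth, here is a small sketch of the greedy Pierce-expansion step described above, written in Python rather than Haskell and not part of the original post: repeatedly take a = floor(1/x) and replace x by 1 - a*x.

from fractions import Fraction

def pierce(x):
    # greedy expansion of 0 < x < 1 as x = 1/a1 - 1/(a1*a2) + 1/(a1*a2*a3) - ...
    terms = []
    while x:
        a = int(1 / x)       # floor(1/x): the largest a with 1/a >= x
        terms.append(a)
        x = 1 - a * x        # remainder, expanded with the opposite sign
    return terms

print(pierce(Fraction(97, 400)))   # [4, 33, 100]  (Gregorian leap days)
print(pierce(Fraction(71, 400)))   # [5, 8, 10]    (Gregorian leap weeks: 365.2425 days = 52 + 71/400 weeks)
print(pierce(Fraction(71, 293)))   # [4, 32, 58, 97, 146, 293]  (365 + 71/293 day year, leap days)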
{"url":"http://kenta.blogspot.com/2014/01/qupqknlc-greedy-leap-years.html","timestamp":"2014-04-21T09:35:44Z","content_type":null,"content_length":"60813","record_id":"<urn:uuid:014e1ce0-7d03-44f6-9d12-f3ef6be2acf0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Image correspondences using cross-correlation - File Exchange - MATLAB Central Please login to add a comment or rating. Comments and Ratings (22) 21 Oct farah: Yes, you can use another set of images. You'll see that the images are read from disk at the start of the demo, and you can change those lines to read different images. However, it will be better in the end to write your own code to call correlCorresp rather than relying on the demo. If you are not sure how to proceed, you may need to work through "Getting Started with 18 Oct Matlab" in the documentation. MD JAYED Hussan: The corresponding coordinates are held in the corresps property of a correlCorresp object, after a call to the findCorresps method. You can see how the vectors are computed by inspecting the code for correspDisplay. It's all in the documentation, by the way. 18 Oct The code is working great. Can point me where i need to modify it to get the length of the blue lines so that i have calculate the velocity of shifting between two slides when i have a frame 2013 rate. 17 Oct cant I use another set of Images ?If yes then what change is required in which part of the code 18 Jun katia - sorry not to get back sooner. I hope you've solved the problem by now. It's easy - just look at the code that does the display for the demo and modify it to do what you want. 12 Sep Is there any way to display the feature points in every one of the two images before showing the correspondence between them? 27 Nov 30 Jun vamsi - It is possible to get an intermediate view using these correspondences, but how successful it will be depends on the complexity of the flow field. Extending the demo in the FEX package, you can do something like this: x1 = cc.corresps(1,:); y1 = cc.corresps(2,:); x2 = cc.corresps(3,:); y2 = cc.corresps(4,:); 16 Feb input_points = [x1' y1']; 2011 base_points = [(x1+x2)' (y1+y2)']/2; % half way between matchine features % Generate transform from image1 to half-way position tform = cp2tform(input_points, base_points, 'piecewise linear'); % Apply it to image 1 interp = imtransform(image1, tform, 'Xdata', [1 size(image1,2)], 'Ydata', [1 size(image1,1)]); to get an image intermediate between the two images. I've found that for complex flow fields a piecewise linear transform can't be found, but the lwm option can work. I can't make a recommendation though - you have to experiment with your own data set. 15 Feb david-I am trying to get an intermediate view between these two images with out disparity map. So i need to know the amount by which each pixel has been moved. Can i go for image morphing 2011 with these available sparse set? In such a case, which morphing technique would you suggest.. 15 Feb vamsi - I'm glad it's helpful. I am sorry, but I don't know the best way to make a dense set - that's a difficult problem, and there's a lot of research literature to look at. Interpolation 2011 is not good because the flow field is discontinuous, and so it's necessary to go back to the original images to do segmentation. Sorry not to be more help, but it's a big question. 14 Feb Hi david. Thankyou so mcuh for this work. It really helped me a lot. After getting the correspondences, how can i make it dense set from the sparse set obtained. I tried some interpolation 2011 techniques. But they did not worked. Please help me 26 May Marco - It's necessary to convert rgb images to greyscale (using rgb2gray). 25 May David - So what was the resolution since I'm getting the same errors that Michael did. I am running 7.9.0 (R2009b). 
24 May Note to potential users: the issue identified by Michael was resolved between us - there isn't a problem with the code. 09 May That's puzzling! What does "which conv2 -all" print? Hi again, Forward matches: ??? Undefined function or method 'conv2' for input arguments of type 'double' and attributes 'full 3d real'. Error in ==> convolve2>doconv at 95 y = conv2(conv2(x, u(:,1)*s(1), shape), vp(1,:), shape); Error in ==> convolve2 at 71 y = doconv(x, m, shape, tol); Error in ==> patch_var at 29 a = convolve2(x, m, shape); Error in ==> varPeaks at 17 vars = patch_var(im, patchsize); 08 May Error in ==> correlCorresp.correlCorresp>correlCorresp.findFeatures at 466 2010 [r, c] = varPeaks(cc.im1, cc.fPS, cc.rT); Error in ==> correlCorresp.correlCorresp>correlCorresp.findCorrespsFwd at 545 cc = cc.findFeatures; Error in ==> correlCorresp.correlCorresp>correlCorresp.findCorresps at 489 cc = cc.findCorrespsFwd; Error in ==> correspDemo_1 at 38 cc = cc.findCorresps; >> conv2 ??? Error using ==> conv2 Not enough input arguments. So I see conv2 but it does not seem to like it, might you know why? 08 May Ah silly me. 2010a, right above. I will give that a go and see how if that fixes it. Hi Michael, I'm not sure what the problem is, but I wonder if it's a version problem - what version of Matlab are you using? 08 May 2010 You don't need to put the files in a special "@" directory - they just go in a folder on your path as normal. But I don't think that in itself would cause the error you observe. Hi David, So I am a little new to class definitions in Matlab, but I placed correlCorresp.m in a folder @correlCorresp and modified the demo to include 2 of my .png files. >> correspDemo_1 ??? Error: File: /mnt/qfs4/mcoughlin/IM/daily/2010_05_01/plots/crossCorr/@correl Corresp/correlCorresp.m Line: 190 Column: 28 Undefined function or variable 'private'. 08 May Error in ==> correspDemo_1 at 24 2010 cc = correlCorresp('image1', image1, 'image2', image2, 'printProgress', 100); So that means it is complaining about this line: properties (Dependent, SetAccess = private) Do you happen to know what I am doing incorrectly? Thank you for your help, Thanks, Ulrich. I don't have a publication on this code, so no real reference, though the SVD trick that speeds up the correlations is described here: 'Computer and Robot Vision' Vol I, by 29 Apr R.M. Haralick and L.G. Shapiro (Addison-Wesley 1992), pp. 298-299. Subpixel displacements: yes, I've been thinking about this, but I haven't done it yet. Wow, this is great. I'm trying to get similar results for landslide displacements using DEMs quite some time now. And this is faster, more reliably and gave me results straight ahead. Thanks 28 Apr for this. Now I have to modify it a bit for subpixel displacements. The reverse matching sort out nicely mismatches in my case. Is there any reference I can use when I include this in my 2010 thesis?
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/27269-image-correspondences-using-cross-correlation","timestamp":"2014-04-21T10:12:27Z","content_type":null,"content_length":"48587","record_id":"<urn:uuid:a21400b1-3c19-4a26-a6ab-0a02bd1f75e5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparison and Critique of Anomaly Combination Methods

Posted by Jeff Id on August 21, 2010

This post is a comparison of the first difference method with other anomaly methods and an explanation of why first difference methods don't work well for temperature series, as well as a comparison to other methods. The calculations here are relatively simple. First I built 10,000 artificial temp series from 1910-2010 and another 10,000 from 1960-2010 having the same trendless AR1 noise as the King_Sejong temperature station in the Antarctic. Why the Antarctic? It was easy for me to access. I applied a uniform trend of 0.05 C/Decade to both the short and long series with the shorter series offset -.25 C from the first to mimic the offset created by temperature anomaly calculations. Simple version: two temp stations of different lengths with known trend of 0.05 C. I calculated the same stats for each series and each method below. First, the 10,000 long series by itself, 1910-2010.

Long Series Alone
Trend: 0.0496 C/Decade
Autocorrelation confidence interval from single trend Santer style: +/-0.04 C/Decade
Monte Carlo confidence interval from 10,000 trends: +/- 0.0399

Then the 10,000 shorter series, 1960-2010. Both the long and short have the same AR1.

Short Series Alone
Trend: 0.0503 C/Decade
Autocorrelation confidence interval from single trend Santer style: +/-0.1244 C/Decade
Monte Carlo confidence interval from 10,000 trends: +/- 0.111

As expected the shorter series has wider confidence intervals. Next, the most simple anomaly combination, where the two anomalies are averaged together with no offset.

Simple Average
Trend: 0.0309 C/Decade
Autocorrelation confidence interval from single trend Santer style: +/-0.0351 C/Decade
Monte Carlo confidence interval from 10,000 trends: +/- 0.0346

Note that the trend is underestimated by this method. This is the reason that anomaly offsetting is critical to calculating a proper global trend. In climatology the offsetting problem is often handled by taking the anomaly of the series over a short window. This has advantages, which include ease of computation, and drawbacks, in that it slightly underestimates the trend. For this post I chose a window of 1950 – 1970; note the second series begins half way through the window, at 1960.

Anomaly Offset by Window (Climatology)
Trend: 0.0485 C/Decade — slightly low.
Autocorrelation confidence interval from single trend Santer style: +/-0.0311 C/Decade
Monte Carlo confidence interval from 10,000 trends: +/- 0.0469

Note that the Monte Carlo confidence has expanded. This is due to noise on the offset calculation (120 months offset) which causes a systematic error that is not included in confidence intervals for global datasets.
The Santer method only sees the noise in the final trend and doesn't pick up on this methodological error, which is an important point for the first difference method, which we'll apply next.

First Difference Method
Trend: 0.0500 C/Decade – Perfect
Autocorrelation confidence interval from single trend Santer style: +/-0.0363 C/Decade
Monte Carlo confidence interval from 10,000 trends: +/- 0.245

The trend using first differences over 10,000 series came out perfect in this run. Sometimes it was slightly high or low in other runs but we would expect any differences to center around the true trend, an advantage for FDM. The Santer confidence interval was of reasonable size being calculated from only 1 run, but the more encompassing Monte Carlo two sigma confidence interval expanded to a whopping +/-0.245 C/Decade, 7X larger than normal. This is the reason that first difference doesn't really work for temperature series; I'll demonstrate what is going on later in the post. Next though, we'll look at what Roman's seasonal offset method does.

Roman's Seasonal Offset
Trend: 0.0498 C/Decade
Autocorrelation confidence interval from single trend Santer style: +/-0.035 C/Decade
Monte Carlo confidence interval from 10,000 trends: +/- 0.0385

Of all the methods above, Roman's is the only one to replicate the trend correctly while having a reasonable confidence interval. The climatology standard comes close, but only Roman's least squares method achieves the correct answer. Note his trend is slightly higher than the climate standard anomaly window method, but is more correct.

So what is happening to make first difference so bad? Figure 1 is a plot of the two trends used in this post. Both have identical slopes of 0.05.

Figure 2 is a plot of the average of the two above slopes. Note that the trend matches the 'simple anomaly' combination above very closely – makes sense! Please ignore the confidence interval.

I created two random ARMA noise series, adding in the above trends. The trend of 0.05 C/Decade isn't visible under the noise.

The next plot is the two above series combined by first differences. It's hard to miss the big step in this plot at 1960 and the hugely significant trend. But what caused it? Figure 5 is a zoomed-in view of the point where the second series is introduced.

To explain further, the next plot is a bit different. I've taken the short series and offset it to make the first coexisting point equal to the long series value at that time. It looks extreme but that is exactly what the first difference method does. A zoomed in plot of the above with the intersection point highlighted in green is below.
Now that we can see what FDM does, consider improving the offset of the shorter anomaly above based on more than one coexisting data point! This was the topic of the following post. Area Weighted Antarctic Reconstructions The method used a window of overlapping points to determine the offset for the inserted series. The accuracy of this method falls between the climatology normal method and Roman’s seasonal offset but very much closer to Roman’s method. Figure 3 is reproduced below so you can see how much the trend from even 63 stations varied as we went from 1 month overlap ( equivalent to FDM) to 95. The proper trend for the Antarctic for the last 50 years is about 0.06C/Decade yet FDM returned negative values. Note the stability of trend once enough months of overlapping data are utilized. ## gridded global trend calculation with offset#### #### load external functions filtering used source("http://www.climateaudit.info/scripts/utilities.txt") #Steve McIntyre ######## functions ################################ # many of these functions were written by Ryan O SteveM, NicL,RomanM ### Gausssian filter filter.combine.pad(x,truncated.gauss.weights(11) )[,2]#31 ### Sum of squares ssq=function(x) {sum(x^2)} ######## roman M offset series combination ##### ##### version2.0 ### subfunction to do pseudoinverse psx.inv = function(mat,tol = NULL) if (NCOL(mat)==1) return( mat /sum(mat^2)) msvd = svd(mat) dind = msvd$d if (is.null(tol)) tol = max(NROW(mat),NCOL(mat))*max(dind)*.Machine$double.eps dind[dind>0] = 1/dind[dind>0] inv = msvd$v %*% diag(dind, length(dind)) %*% t(msvd$u) ### subfunction to do offsets calcx.offset = function(tdat,wts) ## new version nr = length(wts) delt.mat = !is.na(tdat) delt.vec = rowSums(delt.mat) row.miss= (delt.vec ==0) delt2 = delt.mat/(delt.vec+row.miss) co.mat = diag(colSums(delt.mat)) - (t(delt.mat)%*% delt2) co.vec = colSums(delt.mat*tdat,na.rm=T) - colSums(rowSums(delt.mat*tdat,na.rm=T)*delt2) co.mat[nr,] = wts temp.combine = function(tsdat, wts=NULL, all=T) ### main routine nr = nrow(tsdat) nc = ncol(tsdat) dims = dim(tsdat) if (is.null(wts)) wts = rep(1,nc) off.mat = matrix(NA,12,nc) dat.tsp = tsp(tsdat) for (i in 1:12) off.mat[i,] = calcx.offset(window(tsdat,start=c(dat.tsp[1],i), deltat=1), wts) colnames(off.mat) = colnames(tsdat) rownames(off.mat) = month.abb matoff = matrix(NA,nr,nc) for (i in 1:nc) matoff[,i] = rep(off.mat[,i],length=nr) temp = rowMeans(tsdat-matoff,na.rm=T) if (all==T) pred =c(temp) + matoff residual = tsdat-pred list(temps = ts(temp,start=c(dat.tsp[1],1),freq=12),pred =pred, residual = residual, offsets=off.mat) #pick out those series with have at least nn + 1 observations in every month #Outputs a logical vector with TRUE indicating that that sereis is OK dat.check = function(tsdat, nn=0) good = rep(NA,ncol(tsdat)) for (i in 1:ncol(tsdat)) good[i]= (min(rowSums(!is.na(matrix(tsdat[,i],nrow=12))))>nn) ##### slope plotting function with modifications for 12 month anomaly month = factor(rep(1:12,as.integer(len/12)+1)) month=month[1:len] #first month by this method is not necessarily jan #the important bit is that there are 12 steps coes = coef(lm(dat~0+month+time2)) for(i in 1:12) for(i in 1:len) anomaly = residual+time2*coes[13]-mean(time2*coes[13],na.rm=TRUE) ### plot average slope using roman's methods plt.avg2=function(dat, st=1850, en=2011, y.pos=NA, x.pos=NA, main.t="Untitled") { ### Get trend fm=get.slope(window(dat, st, en)) ### Initialize variables N=length(window(dat, st, en)) ### Calculate sum of squared errors ### Calculate 
OLS standard errors ### Calculate two-tailed 95% CI ### Get lag-1 ACF r1=acf(fm [[2]], lag.max=1, plot=FALSE,na.action=na.pass)$acf[[2]] ### Calculate CIs with Quenouille (Santer) adjustment for autocorrelation ### Plot data plot(window(fm[[3]], st, en), main=main.t, ylab="Anomaly (Deg C)") lfit=lm(window(fm[[3]], st, en)~I(time(window(fm[[3]], st, en)))) ### Add trendline and x-axis abline(a=lfit[[1]][1],b=lfit[[1]][2], col=2); abline(fm, col=4, lwd=2) ### Annotate text tx=paste("Slope: ", round(fm[[1]][13]*10, 4) ,"Deg C/Decade", "+/-", round(ciQ, 4) , "Deg C/Decade\n(corrected for AR(1) serial correlation)\nLag-1 value:" , round(r1, 3)) text(x=ifelse(is.na(x.pos), (maxx-minx)*.2+minx,x.pos), y=ifelse(is.na(y.pos), (maxy-miny)*.8+miny, y.pos), tx, cex=.8, col=4, pos=4) ######## function plots monthly data with seasonal signal intact 12 slope curves plt.seasonal.slopes=function(dat, st=1850, en=2011, y.pos=NA, x.pos=NA, main.t="Untitled") ### Get trend fm=get.slope(window(dat, st, en)) ### Initialize variables N=length(window(dat, st, en)) ### Calculate sum of squared errors ### Calculate OLS standard errors ### Calculate two-tailed 95% CI ### Get lag-1 ACF r1=acf(fm [[2]], lag.max=1, plot=FALSE,na.action=na.pass)$acf[[2]] ### Calculate CIs with Quenouille (Santer) adjustment for autocorrelation ### Plot data plot(window(dat, st, en), main=main.t, ylab="Temperature (Deg C)") ### Add 12 trendlines and x-axis for(i in 1:12) abline(a=fm[[1]][i],b=fm[[1]][13], col=2); ### Annotate text tx=paste("Slope: ", round(fm[[1]][13]*10, 4),"C/Dec" ,"+/-", round(ciQ, 4) ,"C/Dec(AR(1)Lag-1 val:" , round(r1, 3)) text(x=ifelse(is.na(x.pos), (maxx-minx)*.2+minx,x.pos), y=ifelse(is.na(y.pos), (maxy-miny)*.05+miny, y.pos), tx, cex=.8, col=4, pos=4) #temp stations King_Sejong data from BAS #You don't need to run this section because I only used the AR1 value for the #next part arm=arima(Yo[,22] ,order=c(1,0,0))#FIT ARIMA MODEL arima(x = tdif, order = c(1, 0, 0)) ar1 intercept 0.4987 0.0023 s.e. 
0.0456 0.2191 #long series for(i in 1:10000) sim=arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv))+tren sd(slp)*2*10 #-- +/- 0.0399 C/Dec mean(slp)*10 #0.0496C/Dec plt.avg(dat=sim,st=1900,en=2010,y.pos=2,x.pos=1910) #+/- 0.04 C/Dec #short series for(i in 1:10000) sim=arima.sim(n = 601,list(ar=0.4987, ma = 0,sd = sdv))+tren[1:601] sd(slp)*2*10 #-- +/- 0.111 mean(slp)*10 #0.0503C/Dec plt.avg(dat=sim,st=1900,en=2010,y.pos=2,x.pos=1970) #+/- 0.1244 C/Dec #simple anomaly combination for(i in 1:10000) sim=arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv))+tren sim2=arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv))+tren-.25 sld= cbind(sim,sim2) sd(slp)*2*10 #-- +/- 0.0351 mean(slp)*10 #0.0309C/Dec plt.avg(dat=sim3,st=1900,en=2010,y.pos=2,x.pos=1970) #+/- 0.0346 C/Dec #window anomaly combination for(i in 1:10000) sim=arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv))+tren sim2=arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv))+tren-.25 sld= ts(cbind(sim,sim2),end=2010,deltat=1/12) sd(slp)*2*10 #-- +/- 0.0469 mean(slp)*10 #0.0485C/Dec plt.avg(dat=sim3,st=1900,en=2010,y.pos=2,x.pos=1970) #+/- 0.0311 C/Dec ### first difference method for(i in 1:10000) sim=ts(arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv)),deltat=1/12,end=2010)+tren sim2=ts(arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv)),deltat=1/12,end=2010)+tren-.25 sld= cbind(sim,sim2) sd(slp)*2*10 #-- +/- 0.245 mean(slp)*10 #0.0500C/Dec plt.avg(dat=sim3,st=1900,en=2010,y.pos=2,x.pos=1970) #+/- 0.0363 C/Dec ### Roman Seasonal Offset method for(i in 1:10000) sim=ts(arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv)),deltat=1/12,end=2010)+tren sim2=ts(arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv)),deltat=1/12,end=2010)+tren-.25 sld= cbind(sim,sim2) sd(slp)*2*10 #-- +/- 0.0385 mean(slp)*10 #0.0498C/Dec plt.avg(dat=sim3,st=1900,en=2010,y.pos=2,x.pos=1970) #+/- 0.035 C/Dec # plots plot(ta, main="Artificial Trends Used Above", xlab="Year", ylab="Deg C") smartlegend(x="left",y="top",inset=0,fill=1:2,legend=c("Long Trend","Short Trend") ,cex=.7) #show slope of averaged trends tcomb=ts(rowMeans(cbind(ta,tb),na.rm=TRUE),end=2010, freq=12) plt.avg(tcomb,main.t="Averaged Trends",st=1900,en=2010,x.pos=1910) #first difference plots sim=ts(arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv)),deltat=1/12,end=2010)+tren sim2=ts(arima.sim(n = 1201,list(ar=0.4987, ma = 0,sd = sdv)),deltat=1/12,end=2010)+tren-.25 sld= cbind(sim,sim2) plot(sld[,1], main="Simulated Temperature Series With Artificial Trends", xlab="Year", ylab="Deg C") smartlegend(x="left",y="top",inset=0,fill=1:2,legend=c("Long series with trend","Short series with trend") ,cex=.7) plt.avg(sim3,main.t="First Difference Combination of Series",st=1900,en=2010,x.pos=1910,y.pos=4) plot(sim3,xlim=c(1959,1961),main="Zoomed in Interface Between Series 1 and 2") sldoffset = sld sldoffset[,2]=sldoffset[,2]-(sldoffset[601,2]-sldoffset[601,1])#offset temp series to match plot(sldoffset[,1], main="Simulated Temperature Series With Artificial Trends \nOffset by First Coexisting Point", xlab="Year", ylab="Deg C",ylim=c(-3,4)) smartlegend(x="left",y="top",inset=0,fill=1:2,legend=c("Long series with trend","Short series with trend") ,cex=.7) plot(sldoffset[,1], main="Simulated Temperature Series With Artificial Trends \nOffset by First Coexisting Point", xlab="Year", ylab="Deg C",ylim=c(-3,4),xlim=c(1959,1961)) smartlegend(x="left",y="top",inset=0,fill=1:2,legend=c("Long series with trend","Short series with trend") ,cex=.7) plt.avg(offsetmean,main.t="Combination of Series Offset by First 
Point",st=1900,en=2010,x.pos=1910,y.pos=3) plot(diff-mean(diff),ylim=c(-1,1),main="Difference Plot FDM minus Offset Anomaly by First Coexisting Point",xlab="Year", ylab="Deg C") 19 Responses to “Comparison and Critique of Anomaly Combination Methods” 1. August 21, 2010 at 3:41 pm Lesson: don’t use linear extrapolations of apparent short term trends? 2. August 21, 2010 at 5:15 pm Data Set Development These datasets were created from station data using the Anomaly Method, a method that uses station averages during a specified base period (1961-1990 in this case) from which the monthly/seasonal /annual departures can be calculated. Prior to March 25, 2005 the First Difference Method (Peterson et al. 1998) was used in producing the Mean Temperature grid, and prior to February 24, 2004, the FD Method was used in producing the Maximum and Minimum Temperature grids. Due to recent efforts to restructure the GHCN station data by removing duplicates (see Peterson et al. 1997 for a discussion of duplicates), it became possible to develop grids for all temperature data sets using the Anomaly Method. The Anomaly Method can be considered to be superior to the FD Method when calculating gridbox size averages. The FD Method has been found to introduce random errors that increase with the number of time gaps in the data and with decreasing number of stations (Free et al. 2004). While the FD Method works well for large-scale means, these issues become serious on 5X5 degree scales. 3. August 21, 2010 at 5:46 pm You’re a busy man, Jeff. Good stuff! You’ve become quite a data analyst in the past year or two. maybe you should have become an academic instead of just a practical engineer. ;) 4. August 21, 2010 at 5:54 pm Jeff’s doing good work so you want to demote him? WUWT? >:( 5. August 21, 2010 at 9:16 pm Thanks Roman, unfortunately I didn’t think of this problem until after I saw FDM not working. That’s how it goes sometimes though. 6. August 22, 2010 at 4:55 am Excellent article, Jeff. It very clearly explains why FDM won’t work satisfactorily with noisy data. One has to compare stations using a longer period of overlap, as you say. 7. August 23, 2010 at 11:11 am Jeff I was wondering if you could do the reverse and see what happens when you take out a few inputs. They have dropped a lot of stations and I am wondering if that has an effect. 8. August 24, 2010 at 11:07 am Your anomaly by climatology example is not representative: The 20 year window is too short; the 10 year overlap is too short. CRU requires 15 years in a 30 year window. Your short series would be reject in a CRU style analysis. GISTEMP (Reference Station Method) when combining records (critically, Step 3 when station records are combined to produce a gridded product) combines using as many years are in the overlap, and requires 20 years. 9. August 24, 2010 at 11:56 am #8 An odd comment. Of course if I widened the window centered on the start of the short series to 30 years, the series would pass the artificial CRU criteria and it would only affect the confidence interval so why does that matter? 10. August 24, 2010 at 12:21 pm David, what makes you think that CRU criteria are “correct” there is no CORRECT. I use 15 years, but I require the series to have ALL 12 months present. CRU only require 10 months to make a year. which is correct? neither. Each gives you an estimate with different properties. The best one can do with CAM is look at a variety of criteria and study the effect DUE TO your choice. 
Roman’s method has no such “analyst choices”; all the data is used and the best estimate is made from the data. Simply, the trend changes as you go from 15 years to 10 to 20, 10 month years, 11 month, 12 month, 10 month… or how about at least 12 full years, 5 years with 11 months and 3 years with 10 months? That variation is a variation that is due to the “model specification” that the analyst defines arbitrarily. It puts undocumented uncertainty into the estimate.

11. August 24, 2010 at 1:14 pm
@Jeff #9. Doesn’t your observation in the article “Note that the Monte Carlo confidence has expanded.” deserve a note to say that had the overlap and window been larger then the “expanded confidence” would not have expanded so much? Aren’t we comparing the confidence intervals that different methods give?
@Steven #10. Who mentioned “CORRECT”? FTR I have no problems with Roman’s method and I understand it perfectly. I don’t know why you mention it.

12. August 24, 2010 at 1:19 pm
#11 Yes, the confidence interval would tighten as I mentioned. It would still underestimate the trend though. My intent wasn’t to replicate CRU or GISTEMP (we did that elsewhere) but to demonstrate the strengths and weaknesses of the different methods.

13. August 25, 2010 at 4:31 am
Why are you regressing monthly anomalies? Didn’t Roman M tell us that we have to compute yearly anomalies first? (which is what GISTEMP does).

14. August 25, 2010 at 6:33 am
Did I miss something in my own post? That is not what I said. In fact, calculating yearly anomalies is a problem when some months are missing. Infilling may increase uncertainty, and skipping years with some missing monthly values results in the loss of information from the months which are present. The methods I describe can be applied to the original non-anomaly measurements using all of the information which is available, without risk of distortion due to seasonal starting points.

15. August 25, 2010 at 8:31 am
@RomanM: Ah yes, you’re right. You said “Several fixes are possible. The simplest is to use year rather than time as the independent variable in the regression.” Which in this toy example amounts to the same thing as computing yearly anomalies. Of course I agree with your following qualification: “this solution can still be problematical if the number of available anomalies differs from year to year” if by “problematical” you mean “introduces distortions that are much smaller than a dot of printer’s ink”. Do you agree that Jeff is introducing an error by drawing trend lines through monthly anomalies?

16. August 25, 2010 at 11:33 am
#15, In this example there is no stairstep from monthly anomaly. The data is random AR1 with a Gaussian distribution. Think of it like a correct anomaly, rather than the climate standard.

17. August 25, 2010 at 11:39 am
So the mean of your July “anomalies” isn’t 0, it’s whatever the trend line is when it hits July (presumably, at 0.05 C per decade, some small value).

18. August 25, 2010 at 11:49 am
#17, Yes. Interestingly, if you read some of the older posts on Roman’s method, it provides a system to do this exact calculation. I think of it as a true anomaly; it makes very little difference to the result though.

19. September 30, 2010 at 1:06 am
Sorry David. When you said that 20 was “too short” I thought you were implying there was a correct method. Glad to see you say that there is no correct method.
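The sensitivity to these window choices that comments 8 through 12 argue about can be poked at directly. The snippet below is my own rough sketch, not code from the post or from CRU/GISTEMP; the window lengths, AR(1) coefficient, noise level, and the 0.25 offset between the stations are arbitrary choices made only to illustrate the effect.

set.seed(2)
n  <- 1320; t0 <- 661                        # 110 years of months, short series starts half way
tren  <- 0.05/120 * (1:n)
long  <- as.numeric(arima.sim(model = list(ar = 0.5), n = n, sd = 0.5)) + tren
short <- as.numeric(arima.sim(model = list(ar = 0.5), n = n - t0 + 1, sd = 0.5)) + tren[t0:n] - 0.25

trend_for_window <- function(win) {          # win = number of coexisting months used for the offset
  offs <- mean(long[t0:(t0 + win - 1)]) - mean(short[1:win])
  comb <- rowMeans(cbind(long, c(rep(NA, t0 - 1), short + offs)), na.rm = TRUE)
  coef(lm(comb ~ I(1:n)))[2] * 120           # slope of the combined series in C/decade
}
round(sapply(c(1, 12, 60, 120, 240), trend_for_window), 3)
## a 1-month window reproduces the FDM situation (offset set by a single noisy month);
## wider windows pin the offset down more tightly, so the combined trend is less at the
## mercy of that one month

The point is not which window is "correct" but that the answer moves as the analyst moves the window, which is the undocumented uncertainty Steven is describing.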
How to rearrange this?

Topic review (newest first)

2012-12-24 15:13:01
That's a start! Once you've got that, you should divide both sides by [equation image missing]. Notice that [equation image missing] and you should be able to use that second rule I gave you.

2012-12-24 13:56:03
muxdemux wrote: I just wrote [equation image missing] then did a bunch of algebra. Some things to remember while working it out: [two exponent rules, shown as images in the original]. Good luck!
Thank you! So based on the first rule you said, then (Pi/Pt)^(1-n) = (Pi^(1-n))(Pt)^(1-n)... which isn't the answer. Can you help??

2012-12-24 06:36:56
I just wrote [equation image missing] then did a bunch of algebra. Some things to remember while working it out: [two exponent rules, shown as images in the original]. Good luck!

2012-12-24 01:05:31
Why does ((Pt)^(n-1))((Pi)^(1-n)) EQUAL the term ((Pi/Pt)^(1-n))? Apparently these two are equivalent? I am not good at rearranging / how to change the signs on powers etc, can someone explain? Thanks
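Since the rendered equations did not survive, here is the step the thread is circling around written out explicitly. It uses nothing beyond the standard exponent rules; Pi and Pt are simply the two quantities the original poster used.

\[
P_t^{\,n-1}\,P_i^{\,1-n} \;=\; \frac{P_i^{\,1-n}}{P_t^{\,1-n}} \;=\; \left(\frac{P_i}{P_t}\right)^{1-n},
\qquad\text{since } P_t^{\,n-1} = P_t^{-(1-n)} = \frac{1}{P_t^{\,1-n}}.
\]

The slip in the 13:56 post was applying the quotient rule in the wrong direction: (Pi/Pt)^(1-n) expands to Pi^(1-n) * Pt^(-(1-n)), not Pi^(1-n) * Pt^(1-n).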
Types and Subtypes 3.2 Types and Subtypes Static Semantics {type} {primitive operation [partial]} is characterized by a set of values, and a set of primitive operations which implement the fundamental aspects of its semantics. {object [partial]} of a given type is a run-time entity that contains (has) a value of the type. Glossary entry: } Each object has a type. A has an associated set of values, and a set of primitive operations which implement the fundamental aspects of its semantics. Types are grouped into categories classes Most language-defined categories of types are also classes of types The types of a given class share a set of primitive operations. {closed under derivation} Classes are closed under derivation; that is, if a type is in a class, then all of its derivatives are in that class Glossary entry: {Subtype} A subtype is a type together with a constraint or null exclusion, which constrains the values of the subtype to satisfy a certain condition. The values of a subtype are a subset of the values of its type. {category (of types)} {class (of types)} Types are grouped into categories classes of types , reflecting the similarity of their values and primitive operations {language-defined class (of types)} There exist several language-defined categories classes of types (see NOTES below) , reflecting the similarity of their values and primitive operations {language-defined category (of types)} [Most categories of types form classes of types.] {elementary type} Elementary types are those whose values are logically indivisible; {composite type} {component} composite types are those whose values are composed of {aggregate: See also composite type} The formal definition of category and class is found in 3.4. Glossary entry: Class (of types) } { closed under derivation A class is a set of types that is closed under derivation, which means that if a given type is in the class, then all types derived from that type are also in the class. The set of types of a class share common properties, such as their primitive operations. Glossary entry: {Category (of types)} A category of types is a set of types with one or more common properties, such as primitive operations. A category of types that is closed under derivation is also known as a class. Glossary entry: {Elementary type} An elementary type does not have components. Glossary entry: {Composite type} A composite type may have has components. Glossary entry: {Scalar type} A scalar type is either a discrete type or a real type. Glossary entry: {Access type} An access type has values that designate aliased objects. Access types correspond to “pointer types” or “reference types” in some other languages. Glossary entry: Discrete type } A discrete type is either an integer type or an enumeration type. Discrete types may be used, for example, in s and as array indices. Glossary entry: {Real type} A real type has values that are approximations of the real numbers. Floating point and fixed point types are real types. Glossary entry: {Integer type} Integer types comprise the signed integer types and the modular types. A signed integer type has a base range that includes both positive and negative numbers, and has operations that may raise an exception when the result is outside the base range. A modular type has a base range whose lower bound is zero, and has operations with “wraparound” semantics. Modular types subsume what are called “unsigned types” in some other languages. 
Glossary entry: {Enumeration type} An enumeration type is defined by an enumeration of its values, which may be named by identifiers or character literals. Glossary entry: {Character type} A character type is an enumeration type whose values include characters. Glossary entry: {Record type} A record type is a composite type consisting of zero or more named components, possibly of different types. Glossary entry: {Record extension} A record extension is a type that extends another type by adding additional components. Glossary entry: {Array type} An array type is a composite type whose components are all of the same type. Components are selected by indexing. Glossary entry: {Task type} A task type is a composite type used to represent whose values are tasks, which are active entities which that may execute concurrently and which can communicate via queued task entries with other tasks. The top-level task of a partition is called the environment task. Glossary entry: {Protected type} A protected type is a composite type whose components are accessible only through one of its protected operations which synchronize protected from concurrent access by multiple tasks. Glossary entry: {Private type} A private type gives a is a partial view of a type that reveals only some of its properties. The remaining properties are provided by the whose full view given elsewhere. Private types can be used for defining abstractions that hide unnecessary details is hidden from their its clients. Glossary entry: {Private extension} A private extension is a type that extends another type, with the additional properties like a record extension, except that the components of the extension part are hidden from its clients. Glossary entry: {Incomplete type} An incomplete type gives a view of a type that reveals only some of its properties. The remaining properties are provided by the full view given elsewhere. Incomplete types can be used for defining recursive data structures. {scalar type} The elementary types are the types ( ) and the types (whose values provide access to objects or subprograms). {discrete type} {enumeration type} Discrete types are either types or are defined by enumeration of their values ( {real type} Real types are either floating point types or fixed point } { } The composite types are the record extensions interface types, task types, and {private type} {private extension} A private type or private extension represents a partial view (see 7.3) of a type, providing support for data abstraction. A partial view is a composite type. This paragraph was deleted.To be honest: The set of all record types do not form a class (because tagged record types can have private extensions), though the set of untagged record types do. In any case, what record types had in common in Ada 83 (component selection) is now a property of the composite class, since all composite types (other than array types) can have discriminants. Similarly, the set of all private types do not form a class (because tagged private types can have record extensions), though the set of untagged private types do. Nevertheless, the set of untagged private types is not particularly “interesting” — more interesting is the set of all nonlimited types, since that is what a generic formal (nonlimited) private type matches. {incomplete type} {private type} {private extension} There can be multiple views of a type with varying sets of operations. 
[An incomplete type represents an incomplete view (see 3.10.1) of a type with a very restricted usage, providing support for recursive data structures. A private type or private extension represents a partial view (see 7.3) of a type, providing support for data abstraction. The full view (see 3.2.1) of a type represents its complete definition.] An incomplete or partial view is considered a composite type[, even if the full view is not]. Proof: The real definitions of the views are in the referenced clauses. Certain composite types (and views thereof) have special components called whose values affect the presence, constraints, or initialization of other components. Discriminants can be thought of as parameters of the type. The term is used in this International Standard in place of the term component to indicate either a component, or a component of another subcomponent. Where other subcomponents are excluded, the term component is used instead. {part (of an object or value)} Similarly, a of an object or value is used to mean the whole object or value, or any set of its subcomponents. The terms component, subcomponent, and part are also applied to a type meaning the component, subcomponent, or part of objects and values of the type. Discussion: The definition of “part” here is designed to simplify rules elsewhere. By design, the intuitive meaning of “part” will convey the correct result to the casual reader, while this formalistic definition will answer the concern of the compiler-writer. We use the term “part” when talking about the parent part, ancestor part, or extension part of a type extension. In contexts such as these, the part might represent an empty set of subcomponents (e.g. in a null record extension, or a nonnull extension of a null record). We also use “part” when specifying rules such as those that apply to an object with a “controlled part” meaning that it applies if the object as a whole is controlled, or any subcomponent is. {constraint [partial]} The set of possible values for an object of a given type can be subjected to a condition that is called a constraint {null constraint} (the case of a null constraint that specifies no restriction is also included)[; the rules for which values satisfy a given kind of constraint are given in s, and The set of possible values for an object of an access type can also be subjected to a condition that excludes the null value (see 3.10). } { of a given type is a combination of the type, a constraint on values of the type, and certain attributes specific to the subtype. The given type is called the type of the subtype type of the subtype {type (of a subtype)} {subtype (type of)} Similarly, the associated constraint is called the constraint of the subtype constraint of the subtype {constraint (of a subtype)} {subtype (constraint of)} The set of values of a subtype consists of the values of its type that satisfy its constraint and any exclusion of the null value {belong (to a subtype)} Such values to the subtype. {values (belonging to a subtype)} {subtype (values belonging to)} Discussion: We make a strong distinction between a type and its subtypes. In particular, a type is not a subtype of itself. There is no constraint associated with a type (not even a null one), and type-related attributes are distinct from subtype-specific attributes. Discussion: We no longer use the term "base type." All types were "base types" anyway in Ada 83, so the term was redundant, and occasionally confusing. 
In the RM95 we say simply "the type of the subtype" instead of "the base type of the subtype." Ramification: The value subset for a subtype might be empty, and need not be a proper subset. To be honest: } Any name of a category class of types (such as “discrete” , or , or), or other category of types (such as or “incomplete” ) is also used to qualify its subtypes, as well as its objects, values, declarations, and definitions, such as an “integer type declaration” or an “integer value.” In addition, if a term such as “parent subtype” or “index subtype” is defined, then the corresponding term for the type of the subtype is “parent type” or “index type.” Discussion: We use these corresponding terms without explicitly defining them, when the meaning is obvious. {constrained} {unconstrained} {constrained (subtype)} {unconstrained (subtype)} A subtype is called an subtype if its type has unknown discriminants, or if its type allows range, index, or discriminant constraints, but the subtype does not impose such a constraint; otherwise, the subtype is called a subtype (since it has no unconstrained characteristics). Discussion: In an earlier version of Ada 9X, "constrained" meant "has a non-null constraint." However, we changed to this definition since we kept having to special case composite non-array/ non-discriminated types. It also corresponds better to the (now obsolescent) attribute 'Constrained. For scalar types, “constrained” means “has a non-null constraint”. For composite types, in implementation terms, “constrained” means that the size of all objects of the subtype is the same, assuming a typical implementation model. Class-wide subtypes are always unconstrained. 2 { Any set of types can be called a “category” of types, and any Any set of types that is closed under derivation (see ) can be called a “class” of types. However, only certain categories and classes are used in the description of the rules of the language — generally those that have their own particular set of primitive operations (see ), or that correspond to a set of types that are matched by a given kind of generic formal type (see {language-defined class [partial]} The following are examples of “interesting” language-defined classes : elementary, scalar, discrete, enumeration, character, boolean, integer, signed integer, modular, real, floating point, fixed point, ordinary fixed point, decimal fixed point, numeric, access, access-to-object, access-to-subprogram, composite, array, string, (untagged) record, tagged, task, protected, nonlimited. Special syntax is provided to define types in each of these classes. In addition to these classes, the following are examples of “interesting” language-defined categories: {language-defined categories [partial]} abstract, incomplete, interface, limited, private, is a run-time entity with a given type which can be assigned to an object of an appropriate subtype of the type. { is a program entity that operates on zero or more operands to produce an effect, or yield a result, or both. } Note that a type's category (and depends on the place of the reference — a private type is composite outside and possibly elementary inside. It's really the that is elementary or composite. Note that although private types are composite, there are some properties that depend on the corresponding full view — for example, parameter passing modes, and the constraint checks that apply in various places. } { Every property of types forms a category, but not Not every property of types represents a class. 
For example, the set of all abstract types does not form a class, because this set is not closed under derivation. Similarly, the set of all interface types does not form a class. } The set of limited types does not form a class (since nonlimited types can inherit from limited interfaces), but the set of nonlimited types does. The set of tagged record types and the set of tagged private types do not form a class (because each of them can be extended to create a type of the other category); that implies that the set of record types and the set of private types also do not form a class (even though untagged record types and untagged private types do form a class). In all of these cases, we can talk about the category of the type; for instance, we can talk about the “category of limited types”. forms a class in the sense that it is closed under derivation, but the more interesting class, from the point of generic formal type matching, is the set of all types, limited and nonlimited, since that is what matches a generic formal “limited” private type. Note also that a limited type can “become nonlimited” under certain circumstances, which makes “limited” somewhat problematic as a class of types Normatively, the language-defined classes are those that are defined to be inherited on derivation by 3.4; other properties either aren't interesting or form categories, not classes. } These language-defined categories classes are organized like this: } all types other enumeration signed integer modular integer floating point fixed point ordinary fixed point decimal fixed point other array tagged (including interfaces) nonlimited tagged record limited tagged limited tagged record synchronized tagged tagged task tagged protected } { There are other categories, such as The classes “numeric” and “ discriminated nonlimited , which represent other categorization classification , but and do not fit into the above strictly hierarchical picture. } { Note that this is also true for some categories mentioned in the chart. The category “task” includes both untagged tasks and tagged tasks. Similarly for “protected”, “limited”, and “nonlimited” (note that limited and nonlimited are not shown for untagged composite types). Wording Changes from Ada 83 This clause and its subclauses now precede the clause and subclauses on objects and named numbers, to cut down on the number of forward references. We have dropped the term "base type" in favor of simply "type" (all types in Ada 83 were "base types" so it wasn't clear when it was appropriate/necessary to say "base type"). Given a subtype S of a type T, we call T the "type of the subtype S." Wording Changes from Ada 95 Added a mention of null exclusions when we're talking about constraints (these are not constraints, but they are similar). Defined an interface type to be a composite type. Revised the wording so that it is clear that an incomplete view is similar to a partial view in terms of the language. Added a definition of component of a type, subcomponent of a type, and part of a type. These are commonly used in the standard, but they were not previously defined. Reworded most of this clause to use category rather than class, since so many interesting properties are not, strictly speaking, classes. Moreover, there was no normative description of exactly which properties formed classes, and which did not. The real definition of class, along with a list of properties, is now in 3.4.
Was Einstein wrong about relativity? November 20th 2011, 01:34 AM #1 Was Einstein wrong about relativity? Scientists Report Second Sighting of Faster-Than-Light Neutrinos (This is one of the reasons I'm a mathematician and not a physicist. I won't wake up one morning to learn that 50 years of my study has been for nothing.) Re: Was Einstein wrong about relativity? I am not convinced that one can measure 58 billionths of a second accurately. I am also not convinced speed of light is constant. Everything in life fluctuates, why should light be exempt? Re: Was Einstein wrong about relativity? Re: Was Einstein wrong about relativity? Scientists Report Second Sighting of Faster-Than-Light Neutrinos (This is one of the reasons I'm a mathematician and not a physicist. I won't wake up one morning to learn that 50 years of my study has been for nothing.) I still would not bet on the results, but overturning everything we think we know is every physicists ambition. Re: Was Einstein wrong about relativity? I still want to meet an alien before I die. Re: Was Einstein wrong about relativity? my model of reality is something like a Riemann surface. as long as we are only on "part" of the surface, it looks like one thing, if we move to another region of the surface, it may look like an entirely different thing. i do not believe we will ever have an "ultimate theory of ultimate reality", but rather, several robust theories explaining large swaths of reality, but which do not appear to have a common "superset". i believe that which model of reality we choose, will prove to be scale-dependent. if these results are to be believed, physicists have a choice: ditch some portion of relativity, or the standard model. i would argue for ditching the standard model, i believe our understanding of small-scale physics is not as good as large-scale physics. there is some reason to believe that there is "some constraint" to how fast things can travel. assigning that constraint to the speed of a photon in a vacuum (if indeed a vacuum really does exist), may be unrealistic. the down-side to this, is that our classification of sub-atomic particles may not obey the symmetry rules we thought they did. but there are lots of possible symmetries, the possible combination of Lie groups is very large, so we need not, for example, abandon all hope that some other symmetry structure with perhaps more complexity, governs. i do not subscribe to the notion that mathematics is "ultimately true". there are certain reasons we pick the logical structures we do: our brains are "pre-wired" for logics of a certain sort. we can only imagine the things which make sense to us, there is no reason to believe that other ways of viewing things might not be possible. one could imagine an alien species for which a trivalent logic (for example) was as natural for them as a bivalent logic is for us. they might think our set theory is "quaint", and regard many of our logical paradoxes as non-problemmatic. we carry our intuitions as baggage, as helpful as they may be (to us). objective reality (if indeed it exists) has no such need, and is quite likely to continue to display behavior that surprises us. our guesses that infinite things DO exist (or at least "could"), or that topological completeness "DOES" occur, or that choice functions CAN be made, could all prove to be wrong, or only "partially (context-dependently)" true. 
we are, of course, hopeful that much of our mathematical knowledge will still be a fruitful way of looking at the world far into the future, but there is no guarantee of this, not even logical consistency gives a mathematical system immunity to modification (think of euclidean versus non-euclidean geometry).
dailysudoku.com :: View topic - i cannot complete easy Discussion of Daily Sudoku puzzles Author Message guest Posted: Wed Sep 07, 2005 4:53 pm Post subject: i cannot complete easy David Bryant Posted: Wed Sep 07, 2005 6:02 pm Post subject: Please be more specific Hi! I'd like to help you, but I need a little more information. Which puzzle are you talking about? dcb Joined: 29 Jul Posts: 559 Denver, Colorado Guest Posted: Thu Sep 08, 2005 2:11 pm Post subject: most of them, lol sudoku easy and medium, i cannot seem to complete it David Bryant Posted: Thu Sep 08, 2005 4:16 pm Post subject: Let's start with today's Daily Sudoku OK, since you don't care where we start, let's start with today's puzzle. 7 x x | 1 5 x | x x 8 Joined: 29 Jul x x 4 | x x 2 | x x x 2005 x x x | x x 4 | 5 6 x Posts: 559 Location: 6 x x | x x x | x 2 9 Denver, Colorado 5 x 2 | x x x | 8 x 4 3 4 x | x x x | x x 1 x 3 8 | 6 x x | x x x x x x | 2 x x | 9 x x 1 x x | x 8 7 | x x 3 Let's start by thinking about the number "5". You can see that there's a "5" in the first row, and that there's another "5" in the third row. But there's no "5" in the second row, and there's no "5" in the upper left 3x3 box. So we can be certain that a "5" must appear either in row 2 column 1 or in row 2 column 2 -- it can't go in r2c3, because there's already a "4" in that spot. But it can't go in r2c1, either, because of the "5" in r5c1. The only spot left is r2c2, which must be a "5": 7 x x | 1 5 x | x x 8 x 5 4 | x x 2 | x x x x x x | x x 4 | 5 6 x 6 x x | x x x | x 2 9 5 x 2 | x x x | 8 x 4 3 4 x | x x x | x x 1 x 3 8 | 6 x x | x x x x x x | 2 x x | 9 x x 1 x x | x 8 7 | x x 3 Now it's your turn. Can you find the next number in this puzzle? dcb alanr555 Posted: Thu Sep 08, 2005 7:56 pm Post subject: > I cannot complete easy > anybody help Joined: 01 Aug > OK, since you don't care where we start, let's start with today's puzzle. Posts: 193 7 x x | 1 5 x | x x 8 Location: x x 4 | x x 2 | x x x Bideford Devon x x x | x x 4 | 5 6 x 6 x x | x x x | x 2 9 5 x 2 | x x x | 8 x 4 3 4 x | x x x | x x 1 x 3 8 | 6 x x | x x x x x x | 2 x x | 9 x x 1 x x | x 8 7 | x x 3 > Let's start by thinking about the number "5". Why start with "5" and not some other digit? Randomly there is an 11% chance of starting with "5". There are three occurrences of "5" but FOUR occurrences of "8" and so it cannot be that the choice is the digit with the highest frequency of occurrence. I am getting to grips with the logic - but it takes so long to implement and I go to sleep part way through the process and then have to apply again (when I wake up) checks that I probably already undertook. Carol Vorderman advises that she can solve the Fiendish puzzle in just seventeen minutes and has a target time for this of ten minutes. Speed (combined with accuracy) must depend on knowing where to look (ie being able to "sense" patterns from an overall scan) - rather than applying a logical sequence of procedures like a computer would. How does one train oneself to sense such patterns quickly? > Now it's your turn. Can you find the next number in this puzzle? Is there really A (unique) NEXT number? Surely the next number to emerge will depend upon the technique selected by the user, will it not? Можете одговарати на теме у овом форуму Можете мењати ваше порукеу овом форуму Можете брисати ваше поруке у овом форуму Можете гласати у овом форуму This forum is translated in Serbian_cyrillic by: Damjanac Djordje Hvala Lepo. 
Ha zalost, ne mogu da kucam na masini cirilicem slovima zbog teknicih razloga (i jer ne znam pravilno srpski jezik!) Alan Rayner BS23 2QT Guest Posted: Thu Sep 08, 2005 11:01 pm Post subject: yes thnxs, i can get alot of numbers in the grid but then it just goes wrong , it does this to me everytime, i write 1-9 on a paper and i check them if they r in the box or line veritcal and horizontal, is this a technique David Bryant Posted: Fri Sep 09, 2005 3:48 pm Post subject: Sudoku Techniques Anonymous wrote: yes thnxs, i can get alot of numbers in the grid but then it just goes wrong , it does this to me everytime, i write 1-9 on a paper and i check them if they r in the box or line Joined: 29 Jul veritcal and horizontal, is this a technique Posts: 559 Denver, Colorado OK, I guess you'd like to learn some techniques for solving Sudoku puzzles. I suppose each solver has his own style. I'll just describe the way I usually attack a puzzle. Maybe that will work for you. Maybe not. At least it's a start. -- I print the puzzle using my computer. I like spreadsheets, so I've set up a grid that's shaded gray and white, to make it easy for me to concentrate on the 3x3 boxes. If one copy of the puzzle gets too cluttered, or if I make a mistake, I can easily print a new copy. -- I always start by scanning for simple combinations of two or three numbers that tell me exactly where a single digit has to go, like the situation with "5" I described in an earlier message. -- I do this systematically, starting with rows 1 - 3, continuing with rows 4 - 6, and ending with rows 7 - 9. Then I look at the columns the same way. -- Anytime I locate a new number I check to see if it gives me a clue about where to place another number. Particularly rewarding is the case where placing one digit leads to a chain of deductions that allow me to place all 9 instances of that digit. This happens a lot, especially on the "Easy" puzzles. -- Anytime I have as many as six numbers in a single row, column, or 3x3 box, I stop to analyze the three missing numbers. This often allows me to complete the row or column, as in the example partially illustrated below. x 2 x 4 5 6 x 8 9 x x x x x x 1 x x x x x x x x 3 x x 3 x x x x x x x x I'm sure you can see how this works -- I'm missing just three numbers in the first row, and they are {1, 3, 7}. I can't put a "1" or a "3" in the seventh position, so clearly a "7" has to go there. Now I'm just missing {1, 3}, and since "3" can't go in the first position, a "1" has to go there, leaving "3" as the only possibility in the third position. -- A situation that's a little harder for me to spot looks something like this next example: x 2 x 4 5 6 x 8 9 x x x x x x x x x 7 x x x x x x x x x x 1 x x x x x x I get to the same result as in the previous situation, but the reasoning is a liitle different -- I have to notice that, since I can't have two "7"s in the same 3x3 box, the "7" can only fit in the seventh position. -- I don't usually use an auxiliary table to keep track of unresolved possibilities -- I just make extra marks on the puzzle. Sometimes I have a situation like this: x x x x x x 1 2 3 x x x 7 x x x x 6 x x x x x x * 8 9 In this example the 3x3 box on the right is missing just 3 digits, {4, 5, 7}. Clearly the "7" has to go in the cell marked *. But I can't tell where to put the "4" or the "5". 
So I just mark two cells as 4/5, to remind myself that these two values have to appear in these cells, in some order: x x x x x x 1 2 3 x x x 7 x x 4/5 4/5 6 x x x x x x 7 8 9 I write these possibilities on the puzzle using small letters at the top of each cell where I have located a "pair" of values. On the easy puzzles, this is usually as far as I have to go. On harder puzzles I may have to mark triplets of values, like this: x x x x x x 1 2 3 x x x 7 x x 4/5/6 4/5/6 4/5/6 x x x x x x 7 8 9 -- If I can't find a row, column, or 3x3 box with six numbers filled in, I look for one with five numbers, then for one with four, etc. I try not to make too many of the auxiliary marks on the puzzle too soon -- I tend to get confused if I make too many marks. So I try to find all the pairs first, then the triplets, and so forth. -- From there, it's just a matter of being sure I don't make any mistakes, and of analyzing the ways the various possibilities interact with each other. I don't know if any of this will help, but I hope it does. dcb Guest Posted: Fri Sep 09, 2005 6:05 pm Post subject: yes thanxs very much i apreciate it You cannot post new topics in this forum You cannot reply to topics in this forum You cannot edit your posts in this forum You cannot delete your posts in this forum You cannot vote in polls in this forum
American Mathematical Monthly Contents—April 2011

Integral Apollonian Packings
Peter Sarnak
We review the construction of integral Apollonian circle packings. There are a number of Diophantine problems that arise in the context of such packings. We discuss some of them and describe some recent advances.

Solving Cubics With Creases: The Work of Beloch and Lill
Thomas C. Hull
Margherita P. Beloch was the first person, in 1936, to realize that origami (paperfolding) constructions can solve general cubic equations and thus are more powerful than straightedge and compass constructions. We present her proof. In doing this we use a delightful (and mostly forgotten?) geometric method due to Eduard Lill for finding the real roots of polynomial equations.

A Cubic Analogue of the Jacobsthal Identity
Heng Huat Chan, Ling Long, and YiFan Yang
It is well known that if p is a prime such that p ≡ 1 (mod 4), then p can be expressed as a sum of two squares. Several proofs of this fact are known and one of them, due to E. Jacobsthal, involves the identity p = x^2 + y^2, with x and y expressed explicitly in terms of sums involving the Legendre symbol. These sums are now known as the Jacobsthal sums. In this short note, we prove that if p ≡ 1 (mod 6), then 3p = u^2 + uv + v^2 for some integers u and v, using an analogue of Jacobsthal’s identity.

Artifacts for Stamping Symmetric Designs
H. M. Hilden, J. M. Montesinos, D. M. Tejada, and M. M. Toro
It is well known that there are 17 crystallographic groups that determine the possible tessellations of the Euclidean plane. We approach them from an unusual point of view. Corresponding to each crystallographic group there is an orbifold. We show how to think of the orbifolds as artifacts that serve to create tessellations.

A New Singular Function
Jaume Paradís, Pelegrí Viader, and Lluís Bibiloni
A new continuous strictly increasing singular function is described with the help of the ternary and binary systems for real number representation; in this, our function is similar to Cantor’s function, but in other aspects it is quite unusual. We are able to determine a condition to identify many points for which the derivative vanishes or is infinite; for other singular functions constructed with the help of a system of representation of real numbers, this condition depends on some metrical properties of the growth of averages of the sum of all the digits of the representation, but in the case of this new function, it depends on the frequency of occurrence of the digit 2 in the usual ternary expansion of a number.

A Remark on Euclid’s Theorem on the Infinitude of the Primes
Roger Cooke
We examine and update an 1889 application of the theory of finite abelian groups to prove that there are at least n − 1 primes between the nth prime and the product of the first n primes.

When Is a Polynomial a Composition of Other Polynomials?
James Rickards
In this note we explore when a polynomial f(x) can be expressed as a composition of other polynomials. First, we give a necessary and sufficient condition on the roots of f(x). Through a clever use of symmetric functions we then show how to determine if f(x) is expressible as a composition of polynomials without needing to know any of the roots of f(x).

A Top Hat for Moser’s Four Mathemagical Rabbits
Pieter Moree
If the equation 1^k + 2^k + ⋯ + (m−2)^k + (m−1)^k = m^k has a solution with k ≥ 2, then m > 10^(10^6).
Leo Moser showed this in 1953 by remarkably elementary methods. His proof rests on four identities he derives separately. It is shown here that Moser’s result can be derived from a von Staudt-Clausen type theorem (an easy proof of which is also presented here). In this approach the four identities can be derived uniformly. The mathematical arguments used in the proofs were already available during the lifetime of Lagrange (1736–1813). The Mathematics of Sex: How Biology and Society Conspire to Limit Talented Women and Girls. Stephen J. Ceci and Wendy M. Williams Reviewed by: Susan Jane Colley
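A quick numerical sanity check on the two representation statements in the Chan–Long–Yang abstract (my own example, not taken from the issue): for p = 13 we have 13 ≡ 1 (mod 4) and 13 ≡ 1 (mod 6), and

\[
13 = 2^2 + 3^2, \qquad 3 \cdot 13 = 39 = 5^2 + 5\cdot 2 + 2^2 .
\]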
Armuchee Algebra Tutor Find an Armuchee Algebra Tutor ...I'm currently taking Accounting II. The highest score I made on SAT math is a 780. I have also taken the SAT four times for my experience. 17 Subjects: including algebra 2, algebra 1, calculus, chemistry Hi, I am a math major as well as a soccer player at Berry College. I currently work in the math lab at school, which provides tutoring services to the Berry College community, but I would like to expand my tutoring to the rest of Rome and surrounding communities. As a rising senior, I have taken my fair share of math classes, so almost all subjects are open for me to tutor in. 23 Subjects: including algebra 2, algebra 1, reading, geometry ...My name is Tharushi and I'm a graduate student at Georgia State University. I have a Bachelor's degree in Biology from Berry College. I have been a biology tutor at Berry for the past two years and would love to tutor other students in biology and other areas if needed. 17 Subjects: including algebra 1, algebra 2, chemistry, English ...I would expect each student to take a diagnostic exam to determine exactly what they struggle with and that will be our starting point and we'll base any growth from that point. I'm a very structured individual and would like to use our time wisely. I may not be very talkative but I believe in ... 3 Subjects: including algebra 1, elementary math, prealgebra Hello, my name is Scarlet D. I am a junior at Jacksonville State University, majoring in Exercise Science. I have an associates degree in biology from Georgia Highlands College. 4 Subjects: including algebra 1, precalculus, prealgebra, differential equations Nearby Cities With algebra Tutor Adairsville algebra Tutors Aragon, GA algebra Tutors Cassville, GA algebra Tutors Cave Spring algebra Tutors Cedar Bluff, AL algebra Tutors Cedartown algebra Tutors Euharlee, GA algebra Tutors Hammondville, AL algebra Tutors La Fayette, GA algebra Tutors Lindale, GA algebra Tutors Lyerly algebra Tutors Mentone, AL algebra Tutors Oakman, GA algebra Tutors Resaca algebra Tutors Valley Head, AL algebra Tutors
2662 -- A Walk Through the Forest
Time Limit: 1000MS   Memory Limit: 65536K
Total Submissions: 2365   Accepted: 873

Description
Jimmy experiences a lot of stress at work these days, especially since his accident made working difficult. To relax after a hard day, he likes to walk home. To make things even nicer, his office is on one side of a forest, and his house is on the other. A nice walk through the forest, seeing the birds and chipmunks is quite enjoyable.
The forest is beautiful, and Jimmy wants to take a different route everyday. He also wants to get home before dark, so he always takes a path to make progress towards his house. He considers taking a path from A to B to be progress if there exists a route from B to his home that is shorter than any possible route from A. Calculate how many different routes through the forest Jimmy might take.

Input
Input contains several test cases followed by a line containing 0. Jimmy has numbered each intersection or joining of paths starting with 1. His office is numbered 1, and his house is numbered 2. The first line of each test case gives the number of intersections N, 1 < N <= 1000, and the number of paths M. The following M lines each contain a pair of intersections a b and an integer distance 1 <= d <= 1000000 indicating a path of length d between intersection a and a different intersection b. Jimmy may walk a path any direction he chooses. There is at most one path between any pair of intersections.

Output
For each test case, output a single integer indicating the number of different routes through the forest. You may assume that this number does not exceed 2147483647.

Sample Input

Sample Output
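One standard way to attack this (not given on the problem page itself, so treat it as a possible approach rather than the official solution): compute shortest distances to Jimmy's house (node 2), then count paths from the office (node 1) that only ever move to intersections strictly closer to home; that relation has no cycles, so the counts can be memoised. A rough sketch in R, the language used elsewhere in this document; all function and variable names are mine, and the O(N^2) Dijkstra is for clarity, not for the judge's time limit.

count_routes <- function(n, edges) {            # edges: matrix with columns a, b, d
  adj <- vector("list", n)
  for (k in seq_len(nrow(edges))) {
    a <- edges[k, 1]; b <- edges[k, 2]; d <- edges[k, 3]
    adj[[a]] <- rbind(adj[[a]], c(b, d))        # undirected: store both directions
    adj[[b]] <- rbind(adj[[b]], c(a, d))
  }
  ## Dijkstra from the house (node 2)
  dist <- rep(Inf, n); dist[2] <- 0; done <- rep(FALSE, n)
  for (i in seq_len(n)) {
    u <- which.min(ifelse(done, Inf, dist))
    done[u] <- TRUE
    for (r in seq_len(NROW(adj[[u]]))) {
      v <- adj[[u]][r, 1]; w <- adj[[u]][r, 2]
      if (dist[u] + w < dist[v]) dist[v] <- dist[u] + w
    }
  }
  ## count routes 1 -> 2 moving only to strictly closer intersections (memoised DFS)
  memo <- rep(NA_real_, n); memo[2] <- 1
  paths <- function(u) {
    if (!is.na(memo[u])) return(memo[u])
    total <- 0
    for (r in seq_len(NROW(adj[[u]]))) {
      v <- adj[[u]][r, 1]
      if (dist[v] < dist[u]) total <- total + paths(v)
    }
    memo[u] <<- total
    total
  }
  paths(1)
}

On a made-up toy graph, count_routes(3, rbind(c(1, 3, 1), c(3, 2, 1), c(1, 2, 3))) returns 2 (walk straight home, or go via intersection 3), matching the "progress" rule in the statement. A real submission would read cases in a loop until the terminating 0 and would normally be written in C or C++ for speed; the R here only shows the structure of the algorithm.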
Capitol College: Signals and Systems - Description Signals and Systems Mathematical models, systems, signal classifications, I/O differential and difference equations, block diagram realizations, discrete-time systems. Convolutions: discrete-time and continuous-time. The Z-transform in linear discrete-time systems, transfer functions. Trigonometric Fourier series, polar and rectangular forms, odd/even functions, response of a linear system to periodic input. Fourier transform, symmetry properties, transform theorems, linear filtering, modulation theorem. Prerequisite: MA-360. Offered during fall semester only. (3-0-3)
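Purely as an illustration of one item on this list, and not part of the course material: discrete-time convolution written out directly from its definition and checked against R's built-in convolve (R is used here only because it is the language of other code in this document; the signals x and h are arbitrary).

x <- c(1, 2, 3)          # input signal x[n]
h <- c(1, 1, 1)          # impulse response h[n] (3-point moving sum)
y <- rep(0, length(x) + length(h) - 1)
for (n in seq_along(y))
  for (k in seq_along(x))
    if (n - k + 1 >= 1 && n - k + 1 <= length(h))
      y[n] <- y[n] + x[k] * h[n - k + 1]   # y[n] = sum_k x[k] h[n-k]
y                                          # 1 3 6 5 3
convolve(x, rev(h), type = "open")         # same values, up to FFT round-off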
Lissajous Patterns

Author: J.B. Hoag

Let a sinusoidal alternating voltage be applied to the horizontal deflecting plates alone. A single horizontal line will be seen on the screen. The beam, starting at the left of the screen, moves slowly at first, then rapidly across the center, and slows down to stop at the right of the screen, after which it reverses its direction, again traveling slowly at the ends and rapidly in the center. But to the eye, only a single straight line appears. If, next, an alternating potential is applied to the vertical deflecting plates alone, then a similar vertical line will appear on the screen.

We now consider the patterns which will appear on the screen when alternating voltages are applied simultaneously to both the X and the Y deflecting plates. First of all, let us assume that the alternating voltages have precisely the same frequency and are in the same phase; in other words, they both pass through their zero values at the same moment and they both reach their crests at the same moment.

Fig. 22 P. Two sine waves of the same frequency and phase applied to the x and y plates move the spot along a straight line.

In Fig. 22 P, the horizontal wave is shown at X and the vertical wave is shown at Y. Starting at zero voltage in both cases, the spot is at the center of the screen. A moment later, marked 1 in this figure, the spot has been moved upward by voltage Y and to the right by X. After one-quarter of a cycle, it is at the upper right corner of the screen, marked 2. Then, as the voltages decrease, the spot of light retraces this path along the straight line to the central point. During the negative half-cycles the spot of light on the screen moves from o to 6 and back to o again. Thus the addition of two voltages in the same phase and of the same frequency results in a straight line on the screen. The line makes an angle of 45° with the X and Y axes when the voltages are of equal value. If the two voltages are not of equal amplitude, a straight line will still be produced, but it will occur at an angle differing from 45°.

Let us next consider the case when the two voltages have exactly the same frequency and amplitude but start a quarter of a period out of phase with respect to each other. The application of these two voltages to the X and Y plates results in a movement of the spot of light on the screen in a circular pattern, as shown in Fig. 22 Q. If the two voltages are not of equal amplitude, an ellipse will be formed on the screen.

Fig. 22 Q. A circle is obtained when the two waves have the same amplitude and frequency but are in phase quadrature (90°).

Next, let us consider the case when the voltages on the X and Y plates are of equal amounts, start in phase with each other, but differ in frequency in the ratio of 2 to 1.

Fig. 22 R. The voltages on the deflection plates are equal in amplitude and phase but the frequency applied to y is twice that on x.

Figure 22 R shows how the figure-8 pattern observed on the screen is compounded from these two "simple harmonic" motions. It is an interesting problem to work out the shapes of the pattern on the screen when alternating potentials of different amplitude ratios, frequency ratios, and phase differences are applied to the deflecting plates.

Fig. 22 S. Some Lissajous patterns.

Figure 22 S illustrates a few of the possibilities. Conversely, when a particular pattern is noted on the screen, Fig. 22 S permits one to tell the frequency, amplitude, and phase relationship of an unknown voltage with respect to a standard. The various figures shown are known as Lissajous patterns.

A phase-splitting circuit, used for obtaining elliptical or circular patterns, is shown in Fig. 22 T (see also Sec. 19.6). One set of plates is connected across resistance R. The other deflecting plates are connected across condenser C. The voltage across R is always in phase with the circuit current, whereas the voltage across C lags it by 90°, so the two deflecting voltages are in phase quadrature. If R is adjusted so that its resistance is numerically equal to the reactance of C, a circular pattern will appear on the screen. If the resistance and reactance of R and C are not equal to each other, an elliptical pattern will be obtained.

Furthermore, an alternating current of unknown frequency may be applied in series with r, as shown in Fig. 22 T.

Fig. 22 T. A phase-splitting circuit used to produce circular and elliptical patterns.

Then patterns such as those of Fig. 22 U will appear on the screen.

Fig. 22 U. An unknown voltage is applied across r of Fig. 22 T.

By counting the number of peaks in these patterns, one has a direct measure of the ratio of the unknown frequency to that of the standard input frequency. This is particularly valuable when the frequencies differ from each other by comparatively large amounts. Unless these frequencies are exact multiples of each other, the pattern will appear to rotate. The speed of rotation is a measure of the departure from an exact integral frequency ratio.

Having established a circular pattern on the screen by means of the circuit of Fig. 22 T, it now becomes possible to produce a spiral pattern. In order to do so, the sliding contact on resistance R1 is moved rapidly from one end of the rheostat to the other. This changes the radius of the circle in proportion to the amount of voltage tapped off of R1. Elaborate electronic circuits can be devised which are equivalent to this variable applied voltage.

If an unknown but comparatively small voltage is connected in series with the a.c. input voltage of Fig. 22 T, the deflecting voltages on the X and Y plates will be proportionally increased and decreased, with the result that the circular pattern becomes crenellated into a pattern such as that shown in Fig. 22 V.

Fig. 22 V. A small a.c. voltage is applied in series with the main, lower-frequency a.c. voltage of Fig. 22 T.

Also, short-time pulses, such as those from the static discharges in storms or those from man-made machines, can be similarly injected to cause momentary deviations from the circular pattern. In this way a comparatively long time axis, the circumference of the circle, can be obtained on a small-sized screen.
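As a numerical companion to the cases described above (equal frequencies in phase, equal frequencies in quadrature, and a 2:1 frequency ratio), here is a minimal Python sketch that traces the spot position. The function name, sample count, and amplitudes are arbitrary choices made for this illustration; they are not taken from the text.

```python
# Sketch of the Lissajous cases discussed above: in-phase line, quadrature circle, 2:1 figure-8.
import math

def lissajous(freq_ratio, phase_deg, samples=1000, amplitude=1.0):
    """Return (x, y) spot positions for x = A*sin(t + phase), y = A*sin(freq_ratio*t)."""
    phase = math.radians(phase_deg)
    points = []
    for k in range(samples):
        t = 2 * math.pi * k / samples
        x = amplitude * math.sin(t + phase)       # horizontal deflecting voltage
        y = amplitude * math.sin(freq_ratio * t)  # vertical deflecting voltage
        points.append((x, y))
    return points

straight_line = lissajous(freq_ratio=1, phase_deg=0)    # same frequency, in phase   -> 45° line
circle        = lissajous(freq_ratio=1, phase_deg=90)   # same frequency, quadrature -> circle
figure_eight  = lissajous(freq_ratio=2, phase_deg=0)    # y at twice the x frequency -> figure 8

# Feed the point lists to any plotting tool, or inspect a few samples:
print(straight_line[:3])
```

Varying freq_ratio and phase_deg reproduces the family of patterns the text attributes to Fig. 22 S.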
{"url":"http://www.vias.org/basicradio/basic_radio_23_09.html","timestamp":"2014-04-20T23:41:22Z","content_type":null,"content_length":"14192","record_id":"<urn:uuid:c3ab4eb2-239e-4da0-abcb-d4e12378a9c3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
Radu Laza - Homepage

Radu Laza
Assistant Professor
Department of Mathematics
Stony Brook University
Office: Math 4-121

• Fall '14: on leave to IAS
• Spring '14: Topics in Number Theory (Undergraduate Research Seminar).
• Spring '13: Topics in Algebra (Undergraduate Research Seminar); Intro to Lie groups and Lie algebras (MAT 552).
• Fall '12: Calculus with Applications (MAT 122); Commutative Algebra (Algebra III).

Research
Algebraic Geometry (AG@SBU), esp. moduli problems, degenerations, singularities, special classes of varieties (K3s, Calabi-Yau, Hyperkahler manifolds).

Recent & Selected Publications:
Expository lecture series given at: KAIST (Mar 2014), Fields Institute (Aug and Nov 2013), Vancouver (Jul 2013), Barcelona (Jun 2013).
Books edited:
My papers on arXiv. My Scholar profile.

My research is partially supported by NSF (CAREER DMS-125481, DMS-1200875). I was previously supported by a Sloan Fellowship (2010-2013). I am also part of an FRG group "Hodge theory, Moduli and Representation theory" (DMS-1361143) with P. Brosnan, M. Kerr, G. Pearlstein, and C. Robbles.

Travel 2014
• IHES, January 2014 / talk Jussieu (January 9).
• KAIST School on Algebraic Geometry, Daejeon (Korea), March 2014.
• Johns Hopkins (seminar talk), April 17, 2014.
• K3 and their moduli, Schiermonnikoog (Netherlands), May 2014.
• Thematic program on Moduli spaces of real and complex varieties, Angers (France), June 2014.
• "Classical Algebraic Geometry", Oberwolfach, early July, 2014.
• Summer 2014: Hanover, Mainz, Mittag-Leffler, IHES
• Fall 2014 - I will be on leave to IAS (for the special program on "Topology of algebraic varieties")

Travel 2013

My Students
• Patricio Gallardo: moduli of surfaces of general type (esp. quintic surfaces) - expected graduation May 2014, going to U. Georgia
• Zheng Zhang: geometric and motivic realizations of VHS
• Thao Do (undergraduate, Honors Thesis) - going to MIT

Former Associates
• Ken Ascher (undergraduate, Honors Thesis) - now at Brown.
• Ren Yi (undergraduate) - now at Brown.
• Dave Jensen (postdoc) - now at Yale and U. Kentucky.

Personal - Irina & Iuliana

Mathematics Department
Stony Brook University
Stony Brook, NY 11794-3651
Office Phone: (631) 632-4506
E-mail: rlaza@math.sunysb.edu
Last Modified: Apr 8, 2014
{"url":"http://www.math.sunysb.edu/~rlaza/","timestamp":"2014-04-17T18:23:05Z","content_type":null,"content_length":"9724","record_id":"<urn:uuid:b7b402e6-c564-43c0-aee6-b37ed4551f59>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Multi-node encryption and key delivery - Patent # 6636968 - PatentGenius Multi-node encryption and key delivery 6636968 Multi-node encryption and key delivery (5 images) Inventor: Rosner, et al. Date Issued: October 21, 2003 Application: 09/434,156 Filed: November 4, 1999 Inventors: Epstein; Michael A. (Spring Valley, NY) Pasieka; Michael S. (Thornwood, NY) Rosner; Martin (Hastings-On-Hudson, NY) Assignee: Koninklijke Philips Electronics N.V. (Eindhoven, NL) Primary Peeso; Thomas R. Attorney Or Piotrowski; Daniel J. U.S. Class: 713/178; 726/3 Field Of 713/171; 713/178; 713/179; 713/200; 713/201 International H04L 9/08 U.S Patent 5218638; 5796830 Patent 0810754; 2308282 Other Schneier "Applied Cryptography", Wiley and Sons, Inc.,second edition, sec. 3.3, 3.4, 3.5.. Abstract: The common encryption of content material is provided for decryption at a plurality of destination devices, each destination device having a unique private key of a public-private key pair. A multiple device key exchange is utilized to create a session key for encrypting the content material that is based on each of the public keys of the plurality of destination devices. The content material is encrypted using this session key. A partial key is also created for each of the intended destination devices that relies upon the private key of the destination device to form a decryption key that is suitable for decrypting the encrypted content material. The encrypted content material and the corresponding partial key are communicated to each destination device via potentially insecure means, including broadcast over a public network. Each destination device decrypts the encrypted content material using the decryption key that is formed from its private key and the received partial key. Including or excluding the public key of selected destination devices in the creation of the session key effects selective encryption. Claim: We claim: 1. A method for encrypting content material for decryption by a plurality of destination devices, each destination device of the plurality of destination devices having a private keyand a public key of a public-private key pair, the method comprising: creating a session key based on a combination of each public key corresponding to each destination device, creating a plurality of partial keys corresponding to the plurality ofdestination devices, each partial key being configured to provide a decryption key corresponding to the session key when combined with the private key of each corresponding destination device and a public group key, encrypting the content material basedon the session key to create encrypted content material, and communicating the encrypted content material to at least one destination device with at least one partial key that corresponds to the at least one destination device. 2. The method of claim 1, wherein the partial key of each destination device includes a product of each public key corresponding to each other destination device of the plurality of destination devices. 3. The method of claim 1, wherein each partial key is dependent upon a source device private key corresponding to the public group key. 4. The method of claim 3, wherein the partial key of each destination device includes a product of each public key corresponding to each other destination device of the plurality of destination devices raised to a power of the source deviceprivate key. 5. The method of claim 4, wherein creating the session key is also based on the source device private key. 6. 
The method of claim 1, further including creating one or more placeholder public keys, and wherein: creating the session key is further based on the one or more placeholder public 7. The method of claim 6, wherein creating the plurality of partial keys includes creating one or more partial keys corresponding to the one or more placeholder public keys, communicating the encrypted content material includes communicating theencrypted content material to other receiving devices, and creating the one or more placeholder public keys is dependent upon the other receiving devices. 8. A source device that is configured to encrypt content material for communication to a plurality of destination devices, each destination device of the plurality of destination devices having a private key and a public key of a public-privatekey pair, the source device comprising: a key generator that is configured to generate a plurality of keys based on the public keys of the plurality of destination devices, the plurality of keys including: a session key for encrypting the contentmaterial, and a plurality of partial keys corresponding to the plurality of destination devices, each partial key being configured to provide a decryption key corresponding to the session key when combined with the private key of each correspondingdestination device and a public group key, and an encrypter that is configured to encrypt the content material based on the session key to create encrypted content 9. The source device of claim 8, further including a transmitter that is configured to communicate the encrypted content material to at least one destination device with at least one partial key that corresponds to the at least one destinationdevice. 10. The source device of claim 9, wherein the session key is further based on a source device private key corresponding to the public group key, and the transmitter is further configured to communicate the public group key to the at least onedestination device. 11. The source device of claim 8, wherein the key generator is configured to generate each partial key of each destination device based on a product of each public key corresponding to each other destination device of the plurality ofdestination devices. 12. The source device of claim 11, wherein each partial key is dependent upon the source device private key. 13. The source device of claim 12, wherein the partial key of each destination device includes a product of each public key corresponding to each other destination device of the plurality of destination devices raised to a power of the sourcedevice private key. 14. A method for decrypting encrypted content material from a source device that is encrypted based on a plurality of public keys, the method comprising: receiving the encrypted content material, receiving a first key that corresponds to apublic key that is associated with the source device, receiving a second key that is based on a subset of the plurality of public keys, and creating a decryption key that is based upon the first key, the second key, and a private key of a public-privatekey pair whose corresponding public key is included in the plurality of public keys and is not included in the subset of the plurality of public keys, and decrypting the encrypted content material based on the decryption key. 15. The method of claim 14, further including: communicating the corresponding public key of the public-private key pair to facilitate a creation of the second key. 16. 
The method of claim 14, wherein the decryption key includes a product of the second key and the first key raised to a power of the private key. 17. A destination device comprising a receiver that is configured to receive encrypted content material, a first key, and a second key, the encrypted content material being encrypted based on a session key that is based on a plurality of publickeys, the first key corresponding to a public group key, and the second key being based on a subset of the plurality of public keys, a key generator that is configured to create a decryption key based on the first key, the second key, and a private keyof a public-private key pair whose corresponding public key is included in the plurality of public keys and is not included in the subset of the plurality of public keys, and a decrypter that is configured to decrypt the encrypted content material basedon the decryption key. 18. The destination device of claim 17, further including a transmitter that transmits the public key to facilitate a creation of the session key that is used to encrypt the encrypted content material. 19. The destination device of claim 17, wherein the decryption key includes a product of the second key and the first key raised to a power of the private key. Description: BACKGROUND OF THEINVENTION 1. Field of the Invention This invention relates to the field of communications systems, and in particular to the encryption of information for distribution to multiple recipients 2. Description of Related Art Cryptographic systems are commonly used to encrypt sensitive or confidential information, and increasingly, to encrypt copy-protected material, such as copyright audio and video material. Generally, the content information is encrypted by asource device and communicated over a communications path to a destination device, where it is decrypted to recreate the original content material. The source device encrypts the material using an encryption key, and the destination device decrypts thematerial using a decryption key. A symmetric cryptographic system uses the same key to encrypt and decrypt the material; an asymmetric cryptographic system uses one of a pair of keys for encryption, and the other of the pair for decryption. Mostcryptographic systems are based on the premise that the expected computation time, effort, and costs required to decrypt the message without a knowledge of the decryption key far exceeds the expected value that can be derived from such a decryption. Often, a key-exchange method is employed to provide a set of encryption and decryption keys between a source and destination device. One such key-exchange system is the "Diffie-Hellman" key-exchange algorithm, common in the art. FIG. 1illustrates an example flow diagram for a key-exchange and subsequent encryption of content material using the Diffie-Hellman scheme. At 110, a source device, device S, transmits a large prime n, and a number g that is primitive mod n, as a message 111to a destination device, device D, that receives n and g, at 115. Each device, at 120 and 125, generates a large random number, x and y, respectively. At 130, device S computes a number X that is equal to g.sup.x mod n; and, at 135, device D computes anumber Y that is equal to g.sup.y mod n. Device S communicates X to device D, and device D communicates Y to device S, via messages 131, 136, respectively. The numbers X and Y are termed public keys and the numbers x and y are termed private keys. 
Note that the determination of x from a knowledge of g and X, and of y from a knowledge of g and Y, is computationally infeasible; thus an eavesdropper to the exchange of g, n, and the public keys X and Y will not be able to determine the private keys x or y. Upon receipt of the public key Y, the source device S computes a key K that is equal to Y^x mod n, at 140, and the destination device D computes a key K' that is equal to X^y mod n, at 145. Note that both K and K' are equal to g^(xy) mod n, and thus both the source S and destination D devices have the same key K, while an eavesdropper to the exchange of g, n, X, and Y will not know the key K, because the eavesdropper does not know x or y. After effecting the key-exchange, the source device S encrypts the content material M 150 and communicates the encrypted material E_K(M) to destination device D, at 160, via communications path 161. Because device D's key K' is identical to the key K that is used to encrypt the content material M 150, device D uses key K' to decrypt the received encrypted material E_K(M) to create a decrypted copy 150' of the content material M 150, at 165. This encryption method is referred to as symmetric because both devices use the same key K, K' to encrypt and decrypt the content material M 150. An eavesdropper on the communications path 161, not having knowledge of the key K, is unable to decrypt the encrypted material E_K(M), and thus unable to create a copy of the content material M 150.

Note that the source device S need not communicate its public key X to the destination device D until the key X is needed by the destination device D to create the decryption key K, and therefore the public key X is often included as an attached item to the content material. In this manner, a destination device need not maintain a record of each of the source devices with which it has exchanged keys. The destination device D creates the decryption key by raising the attached public key X' to the power of its private key y, and applies it to the received encrypted material. X' represents a public key of an arbitrary source device. Provided that the material was encrypted using the destination device's public key Y and the source device's private key x' corresponding to the attached public key X', the determined decryption key, (X')^y mod n, at the destination device D will appropriately decrypt the material. The source device S can continue to encrypt other content material using the key K for communication to the destination device D, as required, without repeating the above key-exchange.

For device S to communicate encrypted information to another device, a similar key-exchange process is performed with the other device. Device S transmits its public key X, and receives a public key Z that is equal to g^z mod n, where z is the private key of the other device. The new encryption/decryption key K is then computed by device S and the other device as g^(xz) mod n, and this key is used to encrypt information from device S to the other device, and vice versa. The source device S may keep a record of the appropriate key to use for communicating with each destination device, so that a key-exchange need not be repeated for each communication. It is also common practice to re-establish a new key between the source device and destination device at regular time intervals, to improve the security of the system.
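As a concrete illustration of the background exchange just described, here is a minimal Python sketch of the pairwise Diffie-Hellman steps. The variable names g, n, x, y, X, Y, K follow the text's notation; the tiny modulus and generator are toy values chosen here purely for illustration and are far too small to be secure.

```python
# Minimal sketch of the pairwise Diffie-Hellman exchange described above (toy parameters only).
import random

n = 2087          # small prime for illustration; real systems use very large primes
g = 5             # public base value, assumed suitable as a generator for this sketch

# Each device picks a private key and publishes g^key mod n.
x = random.randrange(2, n - 1)   # source device S: private key
y = random.randrange(2, n - 1)   # destination device D: private key

X = pow(g, x, n)  # public key of S, sent to D
Y = pow(g, y, n)  # public key of D, sent to S

# Both sides derive the same shared key g^(x*y) mod n.
K_source = pow(Y, x, n)       # S computes Y^x mod n
K_destination = pow(X, y, n)  # D computes X^y mod n

assert K_source == K_destination
print("shared key:", K_source)
```

With realistically large parameters, these are the quantities the background section attributes to devices S and D; the multi-destination variant below reuses the same modular-exponentiation operation.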
If the same content material is to be communicated from source device S to two destination devices, device S encrypts the content material using the key associated with the first destination device, then encrypts the content material using the key associated with the second destination device. If the content material is intended for three destination devices, three unique copies are required, and so on. This requirement of multiple copies for multiple destinations incurs a substantial overhead in terms of processing time and memory resources to encrypt the material, and additional communication time or bandwidth to communicate the information to each destination device.

BRIEF SUMMARY OF THE INVENTION

It is an object of this invention to provide a common encryption of content material that can be decrypted by multiple devices, each device having a unique private key. It is a further object of this invention to provide a multiple device key-exchange that facilitates a common encryption of content material for decryption by each device. It is a further object of this invention to provide a multiple device key-exchange that facilitates a common encryption of content material for selective decryption by one or more of the devices. It is a further object of this invention to minimize the computation requirements at a destination node for a multiple device key exchange.

These objects and others are achieved by creating a session key for encrypting content material that is based on each of the public keys of a plurality of destination devices. A partial key is also created corresponding to each of the destination devices that relies upon a private key associated with each destination device to form a decryption key that is suitable for decrypting content material that is encrypted by the session key. The encrypted content material and the corresponding partial key are communicated to each destination device. Each destination device decrypts the encrypted content material using the decryption key that is formed from its private key and the received partial key. Including or excluding the public key of selected destination devices in the creation of the session key effects selective encryption.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is explained in further detail, and by way of example, with reference to the accompanying drawings wherein:

FIG. 1 illustrates an example prior art key-exchange between a source and destination device.
FIG. 2 illustrates an example block diagram of an encryption and decryption system in accordance with this invention.
FIG. 3 illustrates an example key-exchange between a source and multiple destination devices in accordance with this invention.
FIG. 4 illustrates an example common encryption and multiple decryption in accordance with this invention.
FIG. 5 illustrates an example selective encryption and multiple decryption in accordance with this invention.

Throughout the drawings, the same reference numerals indicate similar or corresponding features or functions.

DETAILED DESCRIPTION OF THE INVENTION

FIG. 2 illustrates an example block diagram of an encryption and decryption system 200 in accordance with this invention. A source device 210 includes a key generator 220 that generates a session key K 221 that is used by an encrypter 230 to encrypt content material 201 to form encrypted content material 231. The session key 221 is based upon a secret key x of the source device 210, and public keys 251a, 261a, 271a, etc. from destination devices 250, 260, 270, etc.
The key generator 220 also generates partial keys 225, 226, 227, etc. that facilitate the decryption of the encrypted content material 231 at each of the destination devices 250, 260, 270, etc. The partial keys 225, 226, 227, etc. are created such that a knowledge of the private key 251b, 261b, 271b, etc. of each corresponding destination device 250, 260, 270, etc. and a knowledge of a common group key X 212a facilitates a determination of a decryption key 255, 265, 275, etc. that is suitable for decrypting the encrypted content material 231. The partial keys 225, 226, 227, etc. are communicated to each corresponding destination device, and are used by each destination device to decrypt the encrypted content material 231. Commonly available techniques may be utilized to communicate this information (225-227, 212a, 231) without risk of compromising the security of this system. The information (225-227, 212a, 231) may be communicated independently or as a composite block; the key generator 220 and the encrypter 230 may each provide a transmission means, or a discrete transmitter 240 may be provided. Because the communication paths need not be secure, any number of communication techniques, common in the art, may be utilized. For ease of understanding and illustration, the other components used to effect the communication of information to and from the source and destination device, being common in the art, are not illustrated in the accompanying figures.

The key generator 292 in each decryption device 250, 260, 270, etc. combines its private key 251b, 261b, 271b with the public group key X 212a and the partial key 225, 226, 227, respectively, to produce a corresponding decryption key K1 255, K2 265, K3 275. The decrypter 294 in each device 250, 260, 270 applies the corresponding decryption key K1 255, K2 265, K3 275 to the encrypted content material E_K(M) 231 to reproduce the original content material M 201' when the decryption key K1, K2, K3 matches the original encryption key K 221. That is, in accordance with this invention, a session key is created that is based on a composite of the public keys of each of the intended destination devices, and a group key and partial keys are created that, when appropriately combined with a corresponding private key, provide a decryption key corresponding to the session key. For example, the partial key 225 and public group key 212a contain sufficient information to create a decryption key by appropriately applying the private key 251b of destination device 250. The partial key 225 and public group key 212a are suitably encoded such that a lack of knowledge of the private key 251b precludes an efficient determination of the decryption key 255. By supplying a partial key and group key that can be combined with a private key of each destination device to form a decryption key, the same encryption of content material can be distributed to multiple destination devices, each destination device receiving the appropriate partial key corresponding to its particular private key.

FIG. 3 illustrates an example key-exchange between a source and multiple destination devices that facilitates the generation of a common session key 221, a group key 212a, and multiple partial keys 225-228 in accordance with this invention.
In this example illustration, each destination device D1 250, D2 260, D3 270 and D4 280 generates a public key (Y1 251a, Y2 261a, Y3 271a and Y4 281a, respectively) using the conventional Diffie-Hellman equation g^y mod n, where y is the corresponding private key of each destination device (y1 251b, y2 261b, y3 271b, and y4 281b). As is common in the art, for improved security, g is preferably a global finite field generator, and n is a global prime in the same group as g.

The source device 210 creates a session key K 221 that is a composite of each of the public keys Y1 251a, Y2 261a, Y3 271a and Y4 281a, using a variant of the Diffie-Hellman technique: K = (Y1*Y2*Y3*Y4)^x mod n, where x is the private key 212b of the source device 210, preferably chosen at random. The session key K 221 is used to encrypt content material M 201 that is distributed to each of the destination devices D1 250, D2 260, D3 270 and D4 280. To facilitate the decryption of this common encrypted material E_K(M) 231, the source device 210 creates partial keys 225-228 and a public group key X 212a. Each partial key X1 225, X2 226, X3 227, and X4 228 in this example embodiment is of the form Xi = (product of Yj for j = 1..k, j not equal to i)^x mod n, where k is the number of destination devices; for example, X1 = (Y2*Y3*Y4)^x mod n. That is, the partial key of each destination device is a composite of each of the public keys of the other destination devices raised to the power of the private key x 212b associated with the source device, modulo n. The group key X 212a is computed by the source device 210 by raising the common and public value g to the power of the private key x 212b associated with the source device 210, modulo n, and is also referred to as the public key of the source device 210.

FIG. 4 illustrates an example common encryption and multiple decryptions in accordance with this invention. In a preferred embodiment of this invention, the commonly encrypted material E_K(M) 231, the group key X 212a of the source device 210, and each of the partial keys 225-228 are communicated to each of the destination devices 250, 260, 270, and 280. Note that these communications may occur via a public communications channel. Each destination device creates a sub-key using the conventional Diffie-Hellman form X^y mod n, where X is the public, or group, key 212a of the source device, and y is the corresponding private key of each destination device. That is, for example, the sub-key 450 of destination device D1 250 is X^y1 mod n, the sub-key 460 of destination device D2 260 is X^y2 mod n, and so on.

Each destination device 250, 260, 270, 280 forms a decryption key 255, 265, 275, 285 by forming the product of its corresponding partial key 225, 226, 227, 228 and its sub-key 450, 460, 470, 480. As illustrated in FIG. 4, because each sub-key X^y mod n is equivalent to Y^x mod n (because (g^x)^y mod n = (g^y)^x mod n), the product of each partial key with each sub-key is equivalent to the session key K 221, (Y1*Y2*Y3*Y4)^x mod n, and thus the decryption keys 255, 265, 275, 285 are each equal to the session key K 221 that was used to encrypt the content material M 201. Each destination device uses the derived decryption key 255, 265, 275, 285 to decrypt the commonly encrypted content material E_K(M) 231 to provide the content material M. Note that the session key K 221 is based upon the public key of each of the destination devices that are intended to decrypt the encrypted content material E_K(M) 231.
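To make the arithmetic of FIGS. 3 and 4 concrete, here is a minimal Python sketch of the session-key, group-key, and partial-key computations described above, together with each receiver's reconstruction of the decryption key. The parameter sizes, variable names, and overall structure are assumptions of this sketch, not values prescribed by the patent, and the toy modulus is not secure.

```python
# Sketch of the multi-destination key scheme of FIGS. 3 and 4 (toy parameters only).
import random
from functools import reduce

n = 2087   # small prime for illustration; a real deployment would use a large prime
g = 5      # public base value

# Destination devices D1..D4 each hold a private key y_i and publish Y_i = g^y_i mod n.
private_keys = [random.randrange(2, n - 1) for _ in range(4)]   # y1..y4
public_keys = [pow(g, y, n) for y in private_keys]              # Y1..Y4

# Source device: private key x, public group key X = g^x mod n.
x = random.randrange(2, n - 1)
group_key_X = pow(g, x, n)

# Session key K = (Y1*Y2*Y3*Y4)^x mod n, used to encrypt the content once.
product_all = reduce(lambda a, b: (a * b) % n, public_keys, 1)
session_key_K = pow(product_all, x, n)

# Partial key for device i: (product of the *other* devices' public keys)^x mod n.
partial_keys = []
for i in range(len(public_keys)):
    product_others = reduce(lambda a, b: (a * b) % n,
                            (Y for j, Y in enumerate(public_keys) if j != i), 1)
    partial_keys.append(pow(product_others, x, n))

# Each destination device reconstructs the decryption key from its private key,
# the public group key X, and its own partial key:
#   X^y_i * partial_i mod n  ==  (Y1*Y2*Y3*Y4)^x mod n  ==  K
for y_i, partial_i in zip(private_keys, partial_keys):
    sub_key = pow(group_key_X, y_i, n)        # X^y_i mod n, which equals Y_i^x mod n
    decryption_key = (sub_key * partial_i) % n
    assert decryption_key == session_key_K
print("all four devices recover the session key:", session_key_K)
```

A single ciphertext produced under the session key can then be broadcast once, with each receiver needing only its own partial key and the public group key, which is the saving over per-destination copies noted in the summary above.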
This provides a method for selectively including or excluding one or more of the destination devices for authorized decryption. FIG. 5 illustrates an example selective encryption and multiple decryption in accordance with this invention. The example encryption at the source device 210 utilizes the public keys Y1, Y3, and Y4 of devices D1, D3, and D4, but not the public key Y2 of device D2. In the example encryption of FIG. 5, the public key Y2 261a of FIG. 3 is replaced in the creation of the session key K' 511 and each of the partial keys 525-528 by a "dummy" or "placeholder" public key Yz 501. The content material M is encrypted by this session key K' 511, which is equal to (Y1*Yz*Y3*Y4)^x mod n, to produce an encrypted content E_K'(M) 531.

When each of the devices D1, D3, and D4 forms the product of its sub-key and its partial key 525-528, the corresponding decryption key 555, 575, 585 is computed to be equal to (Y1*Yz*Y3*Y4)^x mod n, the session key K' 511. Device D2, on the other hand, forms the product of its sub-key X^y2 mod n (which is equal to Y2^x mod n) with its partial key (Y1*Y3*Y4)^x mod n, and forms a decryption key that is equal to (Y1*Y2*Y3*Y4)^x mod n. Note that this determined key (Y1*Y2*Y3*Y4)^x mod n is not equal to the session key K' (Y1*Yz*Y3*Y4)^x mod n that was used to encrypt the content material M, and therefore device D2 260 is unable to render the content material M. This selective exclusion of destination devices can be extended to multiple destination devices by replacing each of the excluded destination devices' public keys with a placeholder key 501 in the generation of the session key and each partial key. The placeholder key 501 can be any value except zero.

The foregoing merely illustrates the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the invention and are thus within its spirit and scope. For example, different session keys can be defined by regenerating the public keys associated with each destination device by utilizing a different seed value g at each session. In like manner, the number of destination devices can be increased by adding the new destination device to the network 200 of communicating devices and regenerating a session key corresponding to the inclusion of the new destination device. Preferably, a different seed value g is used for such a new generation of keys, because if the same seed value g is used, the partial key corresponding to the new destination device may correspond to the session key of an encryption made before the new destination device was added to the network 200. That is, for example, the partial key for a fifth destination device in the example of FIG. 4 will be (Y1*Y2*Y3*Y4)^x mod n, which is the session key K 221 for the four-destination-device network of FIG. 4. However, if the public keys Y1, Y2, etc. are different for each network configuration, such a problem does not arise. Alternatively, upon network reconfiguration in association with additional destination devices, the source device can securely assign a new value to its private key x 212b. Such action will cause all subsequent session keys K, partial keys X1, X2, etc., and group keys X to be distinct from previous session, partial, and group keys. A combination of these approaches may also be employed.
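The exclusion mechanism of FIG. 5 can be seen numerically in a short, self-contained Python sketch. As before, the tiny modulus and the specific private-key and placeholder values are illustrative assumptions of this sketch, not parameters taken from the patent.

```python
# Sketch of selective exclusion with a placeholder public key (FIG. 5), toy numbers only.
n, g = 2087, 5
x = 1234                                   # source device private key (illustrative)
y = [0, 77, 101, 202, 303]                 # y[1..4] = private keys of D1..D4 (y[0] unused)
Y = [0] + [pow(g, yi, n) for yi in y[1:]]  # Y[1..4] = public keys of D1..D4
Yz = 999                                   # placeholder public key substituted for Y2

X = pow(g, x, n)                                     # public group key
K_prime = pow((Y[1] * Yz * Y[3] * Y[4]) % n, x, n)   # session key K' excludes D2

# Partial keys: the product of the other *included* public keys, raised to x.
partial = {
    1: pow((Yz * Y[3] * Y[4]) % n, x, n),
    2: pow((Y[1] * Y[3] * Y[4]) % n, x, n),   # D2 still receives a partial key
    3: pow((Y[1] * Yz * Y[4]) % n, x, n),
    4: pow((Y[1] * Yz * Y[3]) % n, x, n),
}

for i in (1, 2, 3, 4):
    key_i = (pow(X, y[i], n) * partial[i]) % n       # X^y_i * partial_i mod n
    print(f"D{i} recovers K'? {key_i == K_prime}")   # expected: True for D1, D3, D4; False for D2
```

Because D2's reconstruction folds its real public key Y2 back into the product instead of the placeholder Yz, its derived key differs from K' and the broadcast content remains unreadable to it.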
Note that other encryption techniques, common in the art, may be applied to further enhance the security of the system. For example, the "station-to-station" protocol of ISO 9798-3 is commonly used to prevent a "man-in-the-middle" attack on a Diffie-Hellman key exchange. In like manner, the station-to-station protocol of ISO 9798-3 may also be employed to prevent a man-in-the-middle attack on a key-exchange in accordance with this invention. The example embodiments of the figures are provided for illustration purposes. Alternative embodiments are also feasible. For example, each destination device need not be unique. A family of destination devices may all have the same private key, and the encryption method is structured to provide secure communications to a family of devices rather than a single device. In such an embodiment, the techniques of this invention can be utilized to distribute material to a plurality of families of devices. Similarly, the techniques presented in this invention may be combined with other security techniques as well. For example, time-dependent encryptions, limited-copy encryptions, and so on may also utilize this multiple-destination distribution technique. These and other system configuration and optimization features will be evident to one of ordinary skill in the art in view of this disclosure, and are included within the scope of the following claims.

* * * * *
{"url":"http://www.patentgenius.com/patent/6636968.html","timestamp":"2014-04-16T04:42:05Z","content_type":null,"content_length":"47259","record_id":"<urn:uuid:3b554614-f5ed-4adf-8207-f9e6276ec2c2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Number: Its Origin and Evolution--John Zerzan Number: Its Origin and Evolution John Zerzan The wrenching and demoralizing character of the crisis we find ourselves in, above all, the growing emptiness of spirit and arificiality of matter, lead us more and to question the most commonplace of "givens." Time and language begin to arouse suspicions; number, too, no longer seems "neutral." The glare of alienation in technological civilization is too painfully bright to hide its essence now, and mathematics is the schema of technology. It is also the language of science--how deep we must go, how far back to reveal the "reason" for damaged life? The tangled skein of unnecessary suffering, the strands of domination, are unavoidably being unreeled, by the pressure of an unrelenting present. When we ask, to what sorts of questions is the answer a number, and try to focus on the meaning or the reasons for the emergence of the quantitative, we are once again looking at a decisive moment of our estrangement from natural being. Number, like language, is always saying what it cannot say. As the root of a certain kind of logic or method, mathematics is not merely a tool but a goal of scientific knowledge: to be perfectly exact, perfectly self-consistent, and perfectly general. Never mind that the world is inexact, interrelated, and specific, that no one has ever seen leaves, trees, clouds,animals, that are two the same, just as no two moments are identical. As Dingle said, "All that can come from the ultimate scientific anlysis of the material world is a set of numbers," reflecting upon the primacy of the concept of identity in math and its offspring, science. A little further on I will attempt an "anthropology" of numbers and explore its social embeddedness. Horkheimer and Adorno point to the basis of the disease: "Even the deductive form of science reflects hierarchy and coercion...the whole logical order, dependency, progression, and union of [its] concepts is grounded in the corresponding conditions of social reality--that is, the division of If mathematical reality is the purely formal structure of normative or standardizing measure (and later, science), the first thing to be measured at all was time. The primal connection between time and number becomes immediately evident. Authority, first objectified as time, becomes rigidified by the gradually mathematized consciousness of time. Put slightly differently, time is a measure and exists as a reification or materiality thanks to the introduction of measure. The importance of symbolization should also be noted, in passing, for a further interrelation consists of the fact that while the basic feature of all measurement is symbolic representation, the creation of a symbolic world is the condition of the existence of time. To realize that representation begins with language, actualized in the creation of a reproducible formal structure, is already to apprehend the fundamental tie between language and number. An impoverished present renders it easy to see, as language becomes more impoverished, that math is simply the most reduced and drained language. The ultimate step in formalizing a language is to transform it into mathematics; conversely, the closer language comes to the dense concretions of reality, the less abstract and exact it can be. The symbolizing of life and meaning is at its most versatile in language, which, in Wittgenstein's later view, virtually constitutes the world. 
Further, language, based as it is on a symbolic faculty for conventional and arbitrary equivalences, finds in the symbolism of math its greatest refinement. Mathematics, as judged by Max Black, is the "grammar of all symbolic systems." The purpose of the mathematical aspect of language and concept is the more complete isolation of the concept from the senses. Math is the paradigm of abstract thought for the same reason that Levy termed pure mathematics "the method of isolation raised to a fine art." Closely related are its character of "enormous generality," as discussed by Parsons, its refusal of limitations on said generality, as formulated by Whitehead. This abstracting process and its formal, general results provide a content that seems to be completely detached from the thinking individual; the user of a mathematical system and his/her values do not enter into the system. The Hegelian idea of the autonomy of alienated activity finds a perfect application with mathematics; it has its own laws of growth, its own dialectic, and stands over the individual as a separate power. Self-existent time and the first distancing of humanity from nature, it must be preliminarily added, began to emerge when we first began to count. Domination of nature, and then, of humans is thus enabled. In abstraction is the truth of Heyting's conclusion that "the characteristic of mathematical thought is that it does not convey truth about the external world." Its essential attitude toward the whole colorful movement of life is summed up by, "Put this and that equal to that and this!" Abstraction and equivalence of identity are inseparable; the suppression of the world's richness which is paramount in identity brought Adorno to the "primal world of ideology." The untruth of identity is simply that the concept does not exhaust the thing conceived. Mathematics is reified, ritualized thought, the virtual abandonment of thinking. Foucalt found that "in the first gesture of the first mathematician one saw the constitution of an ideality that has been deployed throughout history and has questioned only to be repeated and purified." Number is the most momentous idea in the history of human nature. Numbering or counting (and measurement, the process of assigning numbers to represent qualities) gradually consolidated plurality into quantification, and thereby produced the homogenous and abstract character of number, which made mathematics possible. From its inception in elementary forms of counting (beginning with a binary division and proceeding to the use of fingers and toes as bases) to the Greek idealization of number, an increasingly abstract type of thinking developed, paralleling the maturation of the time concept. As William James put it, "the intellectual life of man consists almost wholly in his substitution of a conceptual order for the perceptual order in which his experience originally comes." Boas concluded that "counting does not become necessary until objects are considered in such generalized form that their individualities are entirely lost sight of." In the growth of civilization we have learned to use increasingly abstract signs to point at increasingly abstract referents. On the other hand, prehistoric languages had a plethora of terms for the touched and felt, while very often having no number words beyond one, two and many. 
Hunter-gatherer humanity had little if any need for numbers, which is the reason Hallpike declared that "we cannot expect to find that an operational grasp of quantification will be a cultural norm in many primitive societies." Much earlier, and more crudely, Allier referred to "the repugnance felt by uncivilized men towards any genuine intellectual effort, more particularly towards arithmetic." In fact, on the long road toward abstraction, from an intuitive sense of amount to the use of different sets of number words for counting different kinds of things, along to fully abstract number, there was an immense resistance, as if the objectification involved was somehow seen for what it was. This seems less implausible in light of the striking, unitary beauty of tools of our ancestors half a million years ago, in which the immediate artistic and technical (for want of better words) touch is so evident, and by "recent studies which have demonstrated the existence, some 300,000 years ago, of mental ability equivalent to modern man," in the words of British archeologist Clive Gamble. Based on observations of surviving tribal peoples, it is apparent, to provide another case in point, that hunter-gatherers possessed an enormous and intimate understanding of the nature and ecology of their local places, quite sufficient to have inaugurated agriculture perhaps hundreds of thousands of years before the Neolithic revolution. But a new kind of relationship to nature was involved; one that was evidently refused for so many, many generations. To us it has seemed a great advantage to abstract from the natural relationship of things, whereas in the vast Stone Age being was apprehended and valued as a whole, not in terms of separable attributes. Today, as ever, when a large family sits down to dinner and it is noticed that someone is missing, this is not accomplished by counting. Or when a hut was built in prehistoric times, the number of required posts was not specified or counted, rather they were inherent to the idea of the hut, intrinsically involved in it. (Even in early agriculture, the loss of a herd animal could be detected not by counting but by missing a particular face or characteristic features; it seems clear, however, as Bryan Morgan argues, that "man's first use for a number system" was certainly as a control of domesticated flock animals, as wild creatures became products to be harvested.) In distancing and separation lies the heart of mathematics: the discursive reduction of patterns, states and relationships which we initially perceived as wholes. In the birth of controls aimed at control of what is free and unordered, crystallized by early counting, we see a new attitude toward the world. If naming is a distancing, a mastery, so too is number, which is impoverished naming. Though numbering is a corollary of language, it is the signature of a critical breakthrough of alienation. The root meanings of number are instructive: "quick to grasp or take" and "to take, especially to steal," also "taken, seized, hence...numb." What is made an object of domination is thereby reified, becomes numb. For hundreds of thousands of years hunter-gatherers enjoyed a direct, unimpaired access to the raw materials needed for survival. Work was not divided nor did private property exist. Dorothy Lee focused on a surviving example from Oceania, finding that none of the Trobrianders' activities are fitted into a linear, divisible line. 
"There is no job, no labor, no drudgery which finds its reward outside the act." Equally important is the "prodigality," "the liberal customs for which hunters are properly famous," "their inclination to make a feast of everything on hand," according to Sahlins. Sharing and counting or exchange are, of course, relative opposites. Where articles are made, animals killed or plants collected for domestic use and not for exchange, there is no demand for standardized numbers or measurements. Measuring and weighing possessions develops later, along with the measurement and definition of property rights and duties to authority. Isaac locates a decisive shift toward standardization of tools and language in the Upper Paleolithic period, the last stage of hunter-gatherer humanity. Numbers and less abstract units of measurement derive, as noted above, from the equalization of differences. Earliest exchange, which is the same as earliest division of labor, was indeterminate and defied systematization; a table of equivalences cannot really be formulated. As the predominance of the gift gave way to the progress of exchange and division of labor, the universal interchangeability of mathematics finds its concrete expression. What comes to be fixed as a principle of equal justice--the ideology of equivalent exchange--is only the practice of the domination of division of labor. Lack of a directly-lived existence, the loss of autonomy that accompany separation from nature are the concomitants of the effective power of specialists. Mauss stated that exchange can be defined only by all the institutions of society. Decades later Belshaw grasped division of labor as not merely a segment of society but the whole of it. Likewise sweeping, but realistic, is the conclusion that a world without exchange or fractionalized endeavor would be a world without number. Clastres, and Childe among others well before him, realized that people's ability to produce a surplus, the basis of exchange, does not necessarily mean that they decide to do so. Concerning the nonetheless persistent view that only mental/cultural deficiency accounts for the absence of surplus, "nothing is more mistaken," judged Clastres. For Sahlins, "Stone Age economics" was "intrinsically an anti-surplus system," using the term system extremely loosely. For long ages humans had no desire for the dubious compensations attendant on assuming a divided life, just as they had no interest in number. Piling up a surplus of anything was unknown, apparently, before Neanderthal times passed to the Cro-Magnon; extensive trade contracts were nonexistent in the earlier period, becoming common thereafter with Cro-Magnon society. Surplus was fully developed only with agriculture, and characteristically the chief technical advancement of Neolithic life was the perfection of the container: jars, bins, granaries and the like. This development also gives concrete form to a burgeoning tendency toward spatialization, the sublimation of an increasingly autonomous dimension of time into spatial forms. Abstraction, perhaps the first spatialization, was the first compensation for the deprivation caused by the sense of time. Spatialization was greatly refined with number and geometry. Ricoeur notes that "Infinity is discovered...in the form of the idealization of magnitudes, of measures, of numbers, figures," to carry this still further. This quest for unrestricted spatiality is part and parcel of the abstract march of mathematics. 
So then is the feeling of being freed from the world, from finitude that Hannah Arendt described in mathematics. Mathematical principles and their component numbers and figures seem to exemplify a timelessness which is possibly their deepest character. Hermann Weyl, in attempting to sum up (no pun intended) the "life sum of mathematics," termed it the science of the infinite. How better to express an escape from reified time than by making it limitlessly subservient to space--in the form of math. Spatialization--like math--rests upon separation; inherent in it are division and an organization of that division. The division of time into parts (which seems to have been the earliest counting or measuring) is itself spatial. Time has always been measured in such terms as the movement of the earth or moon, or the hands of a clock. The first time indications were not numerical but concrete, as with all earliest counting. Yet, as we know, a number system, paralleling time, becomes a separate, invariable principle. The separations in social life--most fundamentally, division of labor--seem alone able to account for the growth of estranging conceptualization. In fact, two critical mathematical inventions, zero and the place system, may serve as cultural evidence of division of labor. Zero and the place system, or position, emerged independently, "against considerable psychological resistance," in the Mayan and Hindu civilizations. Mayan division of labor, accompanied by enormous social stratification (not to mention a notorious obsession with time, and large-scale human sacrifice at the hands of a powerful priest class), is a vividly documented fact, while the division of labor reflected in the Indian caste system was "the most complex that the world had seen before the Industrial Revolution." (Coon 1954) The necessity of work (Marx) and the necessity of repression (Freud) amount to the same thing: civilization. These false commandments turned humanity away from nature and account for history as a "steadily lengthening chronicle of mass neurosis." (Turner 1980) Freud credits scientific/mathematical achievement as the highest moment of civilization, and this seems valid as a function of its symbolic nature. "The neurotic process is the price we pay for our most precious human heritage, namely our ability to represent experience and communicate our thoughts by means of symbols." The triad of symbolization, work and repression finds its operating principle in division of labor. This is why so little progress was made in accepting numerical values until the huge increase in division of labor of the Neolithic revolution: from the gathering of food to its actual production. With that massive changeover mathematics became fully grounded and necessary. Indeed it became more a category of existence than a mere instrumentality. The fifth century B.C. historian Herodotus attributed the origin of mathematics to the Egyptian king Sesostris (1300 B.C.), who needed to measure land for tax purposes. Systematized math--in this case geometry, which literally means "land measuring"--did in fact arise from the requirements of political economy, though it predates Sesostris' Egypt by perhaps 2000 years. The food surplus of Neolithic civilization made possible the emergence of specialized classes of priests and administrators which by about 3200 B.C. had produced the alphabet, mathematics, writing and the calendar. 
In Sumer the first mathematical computations appeared, between 3500 and 3000 B.C., in the form of inventories, deeds of sale, contracts, and the attendant unit prices, units purchased, interest payments, etc.. As Bernal points out, "mathematics, or at least arithmetic, came even before writing." The number symbols are most probably older than any other elements of the most ancient forms of At this point domination of nature and humanity are signaled not only by math and writing, but also by the walled, grain-stocked city, along with warfare and human slavery. "Social labor" (division of labor), the coerced coordination of several workers at once, is thwarted by the old, personal measures; lengths, weights, volumes must be standardized. In this standardization, one of the hallmarks of civilization, mathematical exactitude and specialized skill go hand in hand. Math and specialization, requiring each other, developed apace and math became itself a specialty. The great trade routes, expressing the triumph of division of labor, diffused the new, sophisticated techniques of counting, measurement, and calculation. In Babylon, merchant-mathematicians contrived a comprehensive arithmetic between 3000 and 2500 B.C., which system "was fully articulated as an abstract computational science by about 2000 B.C.. (Brainerd 1979) In succeeding centuries the Babylonians even invented a symbolic algebra, though Babylonian-Egyptian math has been generally regarded as extremely trial-and-error or empiricist compared to that of the much later Greeks. To the Egyptians and Babylonians mathematical figures had concrete referents: algebra was an aid to commercial transactions, a rectangle was a piece of land of a particular shape. The Greeks, however, were explicit in asserting that geometry deals with abstractions, and this development reflects an extreme form of division of labor and social stratification. Unlike Egyptian or Babylonian society, in Greece, a large slave class performed all productive labor, technical as well as unskilled, such that the ruling class milieu that included mathematicians disdained practical pursuits or Pythagoras, more or less the founder of Greek mathematics (6th century, B.C.) expressed this rarefied, abstract bent in no uncertain terms. To him numbers were immutable and eternal. Directly anticipating Platonic idealism, he declared that numbers were the intelligible key to the universe. Usually encapsulated as "everything is number," the Pythagorean philosophy held that numbers exist in a literal sense and are quite literally all that does exist. This form of mathematical philosophy, with the extremity of its search for harmony and order, may be seen as a deep fear of contradiction or chaos, an oblique acknowledgement of the massive and perhaps unstable repression underlying Greek society. An artificial intellectual life that rested so completely on the surplus created by slaves was at pains to deny the senses, the emotions and the real world. Greek sculpture is another example, in its abstract, ideological conformations, devoid of feeling or their histories. Its figures are standardized idealizations; the parallel with a highly exaggerated cult of mathematics is manifest. The independent existence of ideas, which is Plato's fundamental premise, is directly derived from Pythagoras, just as his whole theory of ideas flows from the special character of mathematics. 
Geometry is properly an exercise of disembodied intellect, Plato taught, in character with his view that reality is a world of form from which matter, in every important respect, is banished. Philosophical idealism was thus established out of this world-denying impoverishment, based on the primacy of quantitative thinking. As C.I. Lewis observed, "from Plato to the present day, all the major epistemological theories have been dominated by, or formulated in the light of , accompanying conceptions of mathematics." It is no less accidental that Plato wrote, "Let only geometers enter," over the door to his Academy, than that his totalitarian Republic insists that years of mathematical training are necessary to correctly approach the most important political and ethical questions. Consistently, he denied that a stateless society ever existed, identifying such a concept with that of a "state of swine." Systematized by Euclid in the third century B.C., about a century after Plato, mathematics reached an apogee not to be matched for almost two millenia; the patron saint of intellect for the slave-based and feudal societies that followed was not Plato, but Aristotle, who criticized the former's Pythagorean reduction of science to mathematics. The long non-development of math, which lasted virtually until the end of Renaissance, remains something of a mystery. But growing trade began to revive the art of the quantitative by the twelfth and thirteenth centuries. The impersonal order of the counting house in the new mercantile capitalism exemplified a renewed concentration on abstract measurement. Mumford stresses the mathematical prerequisite of later mechanization and standardization; in the rising merhant world, "counting numbers began here and in the end numbers alone counted." (Mumford 1967) But the Renaissance conviction that mathematics should be applicable to all the arts (not to mention such earlier and atypical forerunners as Roger Bacon's 13th century contribution toward a strictly mathematical optics), was a mild prelude to the magnitude of number's triumph in the seventeenth century. Though they were soon eclipsed by other advances of the 1600's, Johannes Kepler and Francis Bacon revealed its two most important and closely related aspects early in the century. Kepler, who completed the Copernican transition to the heliocentric model, saw the real world as composed of quantitative differences only; its differences are strictly those of number. Bacon, in The New Atlantis (c. 1620) depicted an idealized scientific community, the main object of which was domination of nature; as Jaspers put it, "Mastery of nature...'knowledge is power,' has been the watchword since Bacon." The century of Galileo and Descartes--pre-eminent among those who deepened all the previous forms of quantitative alienation and thus sketched a technological future--began with a qualitative leap in the division of labor. Franz Borkenau provided the key as to why a profound change in the Western world-view took place in the seventeenth century, a movement to a fundamentally mathematical-mechanistic outlook. According to Borkenau, a great extension of division of labor, occurring from about 1600, introduced the novel notion of abstract work. This reification of human activity proved pivotal. Along with degradation of work, the clock is the basis of modern life, equally "scientific" in its reduction of life to a measurability, via objective, commodified units of time. 
The increasingly accurate and ubiquitous clock reached a real domination in the seventeenth century, as, correspondingly, "the champions of the new sciences manifested an avid interest in horological matters." Thus it seems fitting to introduce Galileo in terms of just this strong interest in the measurement of time; his invention of the first mechanical clock based on the principle of the pendulum was likewise a fitting capstone to his long career. As increasingly objectified or reified time reflects, at perhaps the deepest level, an increasingly alienated social world, Galileo's principal aim was the reduction of the world to an object of mathematical dissection.

Writing a few years before World War II and Auschwitz, Husserl located the roots of the contemporary crisis in this objectifying reduction and identified Galileo as its main progenitor. The life-world has been "devalued" by science precisely insofar as the "mathematization of nature" initiated by Galileo has proceeded--clearly no small indictment. (Husserl 1970)

For Galileo as with Kepler, mathematics was the "root grammar of the new philosophical discourse that constituted modern scientific method." He enunciated the principle, "to measure what is measurable and try to render measurable what is not so yet." Thus he resurrected the Pythagorean-Platonic substitution of a world of abstract mathematical relations for the real world and its method of absolute renunciation of the senses' claim to know reality. Observing this turning away from quality to quantity, this plunge into a shadow-world of abstractions, Husserl concluded that modern, mathematical science prevents us from knowing life as it is. And the rise of science has fueled ever more specialized knowledge, that stunning and imprisoning progression so well-known by now. Collingwood called Galileo "the true father of modern science" for the success of his dictum that the book of nature "is written in mathematical language" and its corollary that therefore "mathematics is the language of science." Due to this separation from nature, Gillispie evaluated, "After Galileo, science could no longer be humane."

It seems very fitting that the mathematician who synthesized geometry and algebra to form analytic geometry (1637), and who, with Pascal, is credited with inventing calculus, should have shaped Galilean mathematicism into a new system of thinking. The thesis that the world is organized in such a way that there is a total break between people and the natural world, contrived as a total and triumphant world-view, is the basis for Descartes' renown as the founder of modern philosophy. The foundation of his new system, the famous "cogito ergo sum," is the assigning of scientific certainty to separation between mind and the rest of reality. This dualism provided an alienated means for seeing only a completely objectified nature. In the Discourse on Method, Descartes declared that the aim of science is "to make us as masters and possessors of nature."

Though he was a devout Christian, Descartes renewed the distancing from life that an already fading God could no longer effectively legitimize. As Christianity weakened, a new central ideology of estrangement came forth, this one guaranteeing order and domination based on mathematical precision. To Descartes the material universe was a machine and nothing more, just as animals "indeede are nothing else but engines, or matter sent into a continual and orderly motion."
He saw the cosmos itself as a giant clockwork just when the illusion that time is a separate, autonomous process was taking hold. Also, as living, animate nature died, dead, inanimate money became endowed with life, as capital and the market assumed the attributes of organic processes and cycles. Lastly, Descartes' mathematical vision eliminated any messy, chaotic or alive elements and ushered in an attendant mechanical world-view that was coincidental with a tendency toward central government controls and concentration of power in the form of the modern nation-state. "The rationalization of administration and of the natural order were occurring simultaneously," in the words of Merchant. The total order of math and its mechanical philosophy of reality proved irresistible; by the time of Descartes' death in 1650 it had become virtually the official framework of thought throughout Europe.

Leibniz, a near-contemporary, refined and extended the work of Descartes; the "pre-established harmony" he saw in existence is likewise Pythagorean in lineage. This mathematical harmony, which Leibniz illustrated by reference to two independent clocks, recalls his dictum, "There is nothing that evades number." Leibniz, like Galileo and Descartes, was deeply interested in the design of clocks. In the binary arithmetic he devised, an image of creation was evoked; he imagined that one represented God and zero the void, that unity and zero expressed all numbers and all creation. He sought to mechanize thought by means of a formal calculus, a project which he too sanguinely expected would be completed in five years. This undertaking was to provide all the answers, including those to questions of morality and metaphysics. Despite this ill-fated effort, Leibniz was perhaps the first to base a theory of math on the fact that it is a universal symbolic language; he was certainly the "first great modern thinker to have a clear insight into the true character of mathematical symbolism."

Furthering the quantitative model of reality was the English royalist Hobbes, who reduced the human soul, will, brain, and appetites to matter in mechanical motion, thus contributing directly to the current conception of thinking as the "output" of the brain as computer.

The complete objectification of time, so much with us today, was achieved by Isaac Newton, who mapped the workings of the Galilean-Cartesian clockwork universe. Product of the severely repressed Puritan outlook, which focused on sublimating sexual energy into brutalizing labor, Newton spoke of absolute time, "flowing equably without regard to anything external." Born in 1642, the year of Galileo's death, Newton capped the Scientific Revolution of the seventeenth century by developing a complete mathematical formulation of nature as a perfect machine, a perfect clock.

Whitehead judged that "the history of seventeenth-century science reads as though it were a vivid dream of Plato or Pythagoras," noting the astonishingly refined mode of its quantitative thought. Again the correspondence with a jump in division of labor is worth pointing out; as Hill described mid-seventeenth century England, "...significant specialization began to set in. The last polymaths were dying out..." The songs and dances of the peasants slowly died, and in a rather literal mathematization, the common lands were enclosed and divided. Knowledge of nature was part of philosophy until this time; the two parted company as the concept of mastery of nature achieved its definitive modern form.
Number, which first issued from dissociation from the natural world, ended up describing and dominating it. Fontenelle's Preface on the Utility of Mathematics and Physics (1702) celebrated the centrality of quantification to the entire range of human sensibilities, thereby aiding the eighteenth century consolidation of the breakthroughs of the preceding era. And whereas Descartes had asserted that animals could not feel pain because they are soulless, and that man is not exactly a machine because he had a soul, La Mettrie, in 1747, went the whole way and made man completely mechanical in his L'Homme Machine.

Bach's immense accomplishments in the first half of the eighteenth century also throw light on the spirit of math unleashed a century earlier and helped shape culture to that spirit. In reference to the rather abstract music of Bach, it has been said that he "spoke in mathematics to God." (LeShan & Margenau 1982) At this time the individual voice lost its independence and tone was no longer understood as sung but as a mechanical conception. Bach, treating music as a sort of math, moved it out of the stage of vocal polyphony to that of instrumental harmony, based always upon a single, autonomous voice fixed by instruments, instead of somewhat variable with human voices.

Later in the century Kant stated that in any particular theory there is only as much real science as there is mathematics, and devoted a considerable part of his Critique of Pure Reason to an analysis of the ultimate principles of geometry and arithmetic. Descartes and Leibniz strove to establish a mathematical scientific method as the paradigmatic way of knowing, and saw the possibility of a singular universal language, on the model of empirical symbols, that could contain the whole of philosophy. The eighteenth century Enlightenment thinkers actually worked at realizing this latter project. Condillac, Rousseau and others were also characteristically concerned with origins--such as the origin of language; their goal of grasping human understanding by taking language to its ultimate, mathematized symbolic level made them incapable of seeing that the origin of all symbolizing is alienation.

Symmetrical plowing is almost as old as agriculture itself, a means of imposing order on an otherwise irregular world. But as the landscape of cultivation became distinguished by linear forms of an increasingly mathematical regularity--including the popularity of formal gardens--another eighteenth-century mark of math's ascendancy can be gauged.

In the early 1800s, however, the Romantic poets and artists, among others, protested the new vision of nature as a machine. Blake, Goethe and John Constable, for example, accused science of turning the world into a clockwork, with the Industrial Revolution providing ample evidence of its power to violate organic life. The debasing of work among textile workers, which caused the furious uprisings of the English Luddites during the second decade of the nineteenth century, was epitomized by such automated and cheapened products as those of the Jacquard loom. This French device not only represented the mechanization of life and work unleashed by seventeenth century shifts, but directly inspired the first attempts at the modern computer. The designs of Charles Babbage, unlike the "logic machines" of Leibniz and Descartes, involved both memory and calculating units under the control of programs via punched cards. The aims of the mathematical Babbage and the inventor-industrialist J.M.
Jacquard can be said to rest on the same rationalist reduction of human activity to the machine as was then beginning to boom with industrialism. Quite in character, then, were the emphasis in Babbage's mathematical work on the need for improved notation to further the processes of symbolization, his Principles of Economy, which contributed to the foundations of modern management--and his contemporary campaign against London "nuisances," such as street musicians!

Paralleling the full onslaught of industrial capitalism and the hugely accelerated division of labor that it brought was a marked advance in mathematical development. According to Whitehead, "During the nineteenth century pure mathematics made almost as much progress as during the preceding centuries from Pythagoras onwards." The non-Euclidean geometries of Bolyai, Lobachevski, Riemann and Klein must be mentioned, as well as the modern algebra of Boole, generally regarded as the basis of symbolic logic. Boolean algebra made possible a new level of formalized thought, as its founder pondered "the human mind...an instrument of conquest and dominion over the powers of surrounding nature," (Boole 1952) in an unthinking mirroring of the mastery mathematized capitalism was gaining in the mid-1800s. (Although the specialist is rarely faulted by the dominant culture for his "pure" creativity, Adorno adroitly observed that "The mathematician's resolute unconsciousness testifies to the connection between division of labor and 'purity.'")

If math is impoverished language, it can also be seen as the mature form of that sterile coercion known as formal logic. Bertrand Russell, in fact, determined that mathematics and logic had become one. Discarding unreliable, everyday language, Russell, Frege and others believed that in the further degradation and reduction of language lay the real hope for "progress in philosophy."

The goal of establishing logic on mathematical grounds was related to an even more ambitious effort by the end of the nineteenth century, that of establishing the foundations of math itself. As capitalism proceeded to redefine reality in its own image and became desirous of securing its foundations, the "logic" stage of math in the late 19th and early 20th centuries, fresh from new triumphs, sought the same. David Hilbert's theory of formalism, one such attempt to banish contradiction or error, explicitly aimed at safeguarding "the state power of mathematics for all time from all 'rebellions.'"

Meanwhile, number seemed to be doing quite well without the philosophical underpinnings. Lord Kelvin's late nineteenth century pronouncement that we don't really know anything unless we can measure it bespoke an exalted confidence, just as Frederick Taylor's Scientific Management was about to lead the quantification edge of industrial management further in the direction of subjugating the individual to the lifeless Newtonian categories of time and space.

Speaking of the latter, Capra has claimed that the theories of relativity and quantum physics, developed between 1905 and the late 1920s, "shattered all the principal concepts of the Cartesian world view and Newtonian mechanics." But relativity theory is certainly mathematical formulism, and Einstein sought a unified field theory by geometrizing physics, such that success would have enabled him to have said, like Descartes, that his entire physics was nothing other than geometry. That measuring time and space (or "space-time") is a relative matter hardly removes measurement as its core element.
At the heart of quantum theory, certainly, is Heisenberg's Uncertainty Principle, which does not throw out quantification but rather expresses the limitations of classical physics in sophisticated mathematical ways. As Gillispie succinctly had it, Cartesian-Newtonian physical theory "was an application of Euclidean geometry to space, general relativity a spatialization of Riemann's curvilinear geometry, and quantum mechanics a naturalization of statistical probability." More succinctly still: "Nature, before and after the quantum theory, is that which is to be comprehended mathematically."

During the first three decades of the 20th century, moreover, the great attempts by Russell & Whitehead, Hilbert, et al., to provide a completely unproblematic basis for the whole edifice of math, referred to above, went forward with considerable optimism. But in 1931 Kurt Godel dashed these bright hopes with his Incompleteness Theorem, which demonstrated that any symbolic system can be either complete or fully consistent, but not both. Godel's devastating mathematical proof of this not only showed the limits of axiomatic number systems, but rules out enclosing nature by any closed, consistent language. If there are theorems or assertions within a system of thought which can neither be proved nor disproved internally, if it is impossible to give a proof of consistency within the language used, as Godel and immediate successors like Tarski and Church convincingly argued, "any system of knowledge about the world is, and must remain, fundamentally incomplete, eternally subject to revision." (Rucker 1982) Morris Kline's Mathematics: The Loss of Certainty related the "calamities" that have befallen the once seemingly inviolable "majesty of mathematics," chiefly dating from Godel. Math, like language, used to describe the world and itself, fails in its totalizing quest, in the same way that capitalism cannot provide itself with unassailable grounding. Further, with Godel's Theorem mathematics was not only "recognized to be much more abstract and formal than had been traditionally supposed," but it also became clear that "the resources of the human mind have not been, and cannot be, fully formalized." (Nagel & Newman 1958)

But who could deny that, in practice, quantity has been mastering us, with or without definitively shoring up its theoretical basis? Human helplessness seems to be directly proportional to mathematical technology's domination over nature, or as Adorno phrased it, "the subjection of outer nature is successful only in the measure of the repression of inner nature." And certainly understanding is diminished by number's hallmark, division of labor. Raymond Firth accidentally exemplified the stupidity of advanced specialization, in a passing comment on a crucial topic: "the proposition that symbols are instruments of knowledge raises epistemological issues which anthropologists are not trained to handle."

The connection with a more common degradation is made by Singh, in the context of an ever more refined division of labor and a more and more technicised social life, noting that "automation of computation immediately paved the way for automatizing industrial operations." The heightened tedium of computerized office work is today's very visible manifestation of mathematized, mechanized labor, with its neo-Taylorist quantification via electronic display screens, announcing the "information explosion" or "information society."
Information work is now the chief economic activity and information the distinctive commodity, in large part echoing the main concept of Shannon's information theory of the late 1940s, in which "the production and the transmission of information could be defined quantitatively." (Feinstein 1958) From knowledge, to information, to data, the mathematizing trajectory moves away from meaning--paralleled exactly in the realm of "ideas" (those bereft of goals or content, that is) by the ascendancy of structuralism. The "global communications revolution" is another telling phenomenon, by which a meaningless "input" is to be instantly available everywhere among people who live, as never before, in isolation.

Into this spiritual vacuum the computer boldly steps. In 1950 Turing said, in answer to the question 'can machines think?', "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." Note that his reply had nothing to do with the state of machines but wholly with that of humans.

As pressures build for life to become more quantified and machine-like, so does the drive to make machines more life-like. By the mid-'60s, in fact, a few prominent voices already announced that the distinction between human and machine was about to be superseded--and saw this as positive. Mazlish provided an especially unequivocal commentary: "Man is on the threshold of breaking past the discontinuity between himself and machines...We cannot think any longer of man without a machine...Moreover, this change...is essential to our harmonious acceptance of an industrialized world." By the late 1980s thinking sufficiently impersonates the machine that Artificial Intelligence experts, like Minsky, can matter-of-factly speak of the symbol-manipulating brain as a "computer made of meat." Cognitive psychology, echoing Hobbes, has become almost wholly based on the computational model of thought in the decades since Turing's 1950 prediction.

Heidegger felt that there is an inherent tendency for Western thinking to merge into the mathematical sciences, and saw science as "incapable of awakening, and in fact emasculating, the spirit of genuine inquiry." We find ourselves, in an age when the fruits of science threaten to end human life altogether, when a dying capitalism seems capable of taking everything with it, more apt to want to discover the ultimate origins of the nightmare. When the world and its thought (Levi-Strauss and Chomsky come immediately to mind) reach a condition that is increasingly mathematized and empty (where computers are widely touted as capable of feelings and even of life itself), the beginnings of this bleak journey, including the origins of the number concept, demand comprehension. It may be that this inquiry is essential to save us and our
{"url":"http://www.primitivism.com/number.htm","timestamp":"2014-04-20T08:33:48Z","content_type":null,"content_length":"60717","record_id":"<urn:uuid:1aab4135-83e0-4987-bd01-f51d9d5c4740>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematica problem, nontrivial solution for matrix equation Ax=0

The equation Ax = 0 has a non-trivial solution if and only if A is not one-to-one. That is the same as saying that its determinant is 0 and that it has 0 as an eigenvalue. The standard way to find an eigenvalue, [itex]\lambda[/itex], for matrix A is to solve the equation [itex]det(A- \lambda I)= 0[/itex]. If A is an n by n matrix, that will be a polynomial equation of degree n and so has n solutions (not necessarily all distinct, not necessarily real). IF [itex]\lambda[/itex] really is an eigenvalue, then [itex]Ax= \lambda x[/itex] or [itex] Ax- \lambda x= (A- \lambda I)x= 0[/itex] has, by definition of "eigenvalue", a non-trivial solution. That is, some of the equations you get by looking at individual components will be dependent. Note that x = 0 always will be a solution, just not the only one. Perhaps if you posted a specific example, we could point out errors. The most obvious one, if you "keep getting x=0", is that what you think is an eigenvalue really isn't!
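The thread concerns Mathematica, but the same check is easy to sketch outside it. Below is a minimal Python/NumPy version (the matrix is an arbitrary singular example, not one from the thread): confirm that there is a zero eigenvalue, then read a non-trivial solution off the SVD null space.

```python
import numpy as np

# An arbitrary singular 3x3 matrix (second row is twice the first), so det(A) = 0
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

# A zero eigenvalue signals that Ax = 0 has non-trivial solutions
eigenvalues, _ = np.linalg.eig(A)
print("eigenvalues:", np.round(eigenvalues, 6))

# Null space via SVD: right singular vectors belonging to (near-)zero singular values
_, singular_values, Vt = np.linalg.svd(A)
null_vectors = Vt[singular_values < 1e-10].T   # columns span the solutions of Ax = 0
x = null_vectors[:, 0]
print("non-trivial solution x:", x)
print("residual A @ x:", A @ x)                # should be numerically close to 0
```

Using the SVD with a small tolerance is more robust in floating point than testing whether the determinant is exactly zero.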
{"url":"http://www.physicsforums.com/showthread.php?p=3493249","timestamp":"2014-04-16T13:50:55Z","content_type":null,"content_length":"26082","record_id":"<urn:uuid:468a558f-9357-4087-9cf9-6db5e5ad5fe9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Baseball on Let's Play Math!

feature photo above by USAG-Humphreys via flickr (CC BY 2.0)

During off-times, at a long stoplight or in the grocery store line, when the kids are restless and ready to argue for the sake of argument, I invite them to play the numbers game. “Can you tell me how to get to twelve?”

My five-year-old begins, “You could take two fives and add a two.”

“Take sixty and divide it into five parts,” my nearly-seven-year-old says.

“You could do two tens and then take away a five and a three,” my younger son adds.

Eventually we run out of options and they begin naming numbers. It’s a simple game that builds up computational fluency, flexible thinking and number sense. I never say, “Can you tell me the transitive properties of numbers?” However, they are understanding that they can play with numbers.

I didn’t learn the rules of baseball by filling out a packet on baseball facts. Nobody held out a flash card where, in isolation, I recited someone else’s definition of the Infield Fly Rule. I didn’t memorize the rules of balls, strikes, and how to get someone out through a catechism of recitation. Instead, I played baseball.

Conversational Math

The best way for children to build mathematical fluency is through conversation. For more ideas on discussion-based math, check out these posts:

Learning the Math Facts

For more help with learning and practicing the basic arithmetic facts, try these tips and math games:
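For readers who like to tinker, here is a tiny, purely illustrative Python sketch of the same "how many ways can you reach a number" idea. The restriction to two numbers and four operations is my simplification, not part of the game as played.

```python
from itertools import product

def ways_to_make(target, numbers=range(1, 13)):
    """List simple two-number expressions (+, -, *, /) that reach a target value."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    return [f"{a} {sym} {b}"
            for a, b in product(numbers, repeat=2)
            for sym, fn in ops.items()
            if fn(a, b) == target]

print(ways_to_make(12))   # e.g. '5 + 7', '2 * 6', '3 * 4', ...
```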
{"url":"http://letsplaymath.net/tag/baseball/","timestamp":"2014-04-16T07:14:53Z","content_type":null,"content_length":"41818","record_id":"<urn:uuid:031ac09b-ecea-4cbe-a490-a7b92bd95209>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

Integer Exponentiation Mappings

Diagrams representing mappings in which each vertex represents an integer n modulo k, joined by an edge to the vertex representing b^n modulo k, where b is the "base".

"Integer Exponentiation Mappings" from the Wolfram Demonstrations Project
Contributed by: Stephen Wolfram
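The Demonstration itself runs in Mathematica; as a rough stand-alone sketch, the same mapping can be tabulated in Python. The choice of base and modulus below, and the inclusion of the vertex 0, are illustrative assumptions rather than details taken from the Demonstration.

```python
def exponentiation_mapping(b, k):
    """Edges of the map n -> b**n (mod k) on the residues 0, 1, ..., k-1."""
    return {n: pow(b, n, k) for n in range(k)}

# Example: base b = 2 modulo k = 11
for n, image in exponentiation_mapping(2, 11).items():
    print(f"{n} -> {image}")
```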
{"url":"http://demonstrations.wolfram.com/IntegerExponentiationMappings/","timestamp":"2014-04-17T06:43:52Z","content_type":null,"content_length":"41248","record_id":"<urn:uuid:8ff859e4-dab4-491e-a292-4056e48e8f99>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
China's (Uneven) Progress Against Poverty

Martin Ravallion and Shaohua Chen (1)
Development Research Group, World Bank
1818 H Street NW, Washington DC, 20433, USA

Abstract: While the incidence of extreme poverty in China fell dramatically over 1980-2001, progress was uneven over time and across provinces. Rural areas accounted for the bulk of the gains to the poor, though migration to urban areas helped. The pattern of growth mattered; rural economic growth was far more important to national poverty reduction than urban economic growth; agriculture played a far more important role than the secondary or tertiary sources of GDP. Rising inequality within the rural sector greatly slowed poverty reduction. Provinces starting with relatively high inequality saw slower progress against poverty, due both to lower growth and a lower growth elasticity of poverty reduction. Taxation of farmers and inflation hurt the poor; external trade had little short-term impact.

Keywords: China, poverty, inequality, economic growth, policies
JEL: O15, O53, P36

World Bank Policy Research Working Paper 3408, September 2004

The Policy Research Working Paper Series disseminates the findings of work in progress to encourage the exchange of ideas about development issues. An objective of the series is to get the findings out quickly, even if the presentations are less than fully polished. The papers carry the names of the authors and should be cited accordingly. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the view of the World Bank, its Executive Directors, or the countries they represent. Policy Research Working Papers are available online at http://econ.worldbank.org.

Footnote 1: The authors are grateful to the staff of the Rural and Urban Household Survey Divisions of China's National Bureau of Statistics for their invaluable help in assembling the data base we draw on in this paper. Helpful comments were received from David Dollar, Tamar Manuelyan Atinc, Justin Lin, Will Martin, Thomas Piketty, Scott Rozelle, Dominique van de Walle, Shuilin Wang, Alan Winters and seminar/conference participants at the National Bureau of Statistics, Beijing, the McArthur Foundation Network on Inequality, the Center for Economic Research, Beijing University, the Australian National University and the World Bank. Addresses for correspondence: mravallion@worldbank.org and schen@worldbank.org.

1. Introduction

This paper aims to document and explain China's record against poverty over the two decades following Deng Xiaoping's initiation of pro-market reforms in 1978. We apply new poverty lines to newly assembled distributional data -- much of which has not previously been analyzed -- and we address some of the data problems that have clouded past estimates. We thus offer the longest and most internally consistent series of national poverty and inequality measures, spanning 1980-2001. While data are less complete at the provincial level, we can estimate trends since the mid-1980s.

Armed with these new measures, we address some long-standing questions in development economics. How much did poor people share in the gains from growth? Did the sectoral and geographic pattern of growth matter? What role was played by urbanization of the population? How did initial distribution influence subsequent rates of growth and poverty reduction? What role was played by economic policies?
Our principal findings are as follows:

Finding 1: China has made huge overall progress against poverty, but it has been uneven progress. In the 20 year period after 1981, the proportion of the population living below our new poverty lines fell from 53% to 8%. However, there were many setbacks for the poor. Poverty reduction stalled in the late 1980s and early 1990s, recovered pace in the mid-1990s, but stalled again in the late 1990s. Half of the decline in poverty came in the first few years of the 1980s. Some provinces saw far more rapid progress against poverty than others.

Finding 2: Inequality has been rising, though not continuously and more so in some periods and provinces. In marked contrast to most developing countries, relative inequality is higher in China's rural areas than in urban areas. However, there has been convergence over time with a steeper increase in inequality in urban areas. Relative inequality between urban and rural areas has not shown a trend increase once one allows for the higher rate of increase in the urban cost of living. Absolute inequality has increased appreciably over time between and within both urban and rural areas, and absolute inequality is higher in urban areas.

Finding 3: The pattern of growth matters. While migration to urban areas has helped reduce poverty nationally, the bulk of the reduction in poverty came from within rural areas. Growth in the primary sector (primarily agriculture) did more to reduce poverty and inequality than either the secondary or tertiary sectors. Starting in 1981, if the same aggregate growth rate had been balanced across sectors, it would have taken 10 years to bring the poverty rate down to 8%, rather than 20 years. The geographic composition of growth also mattered. While provinces with higher rural income growth tended to have higher poverty reduction, growth did not tend to be higher in the provinces where it would have had the most impact on poverty nationally. The pattern of growth mattered to the evolution of overall inequality. Rural and (in particular) agricultural growth brought inequality down. Rural economic growth reduced inequality in both urban and rural areas, as well as between them.

Finding 4: Inequality has emerged as a concern for both growth and poverty reduction. With the same growth rate and no rise in inequality in rural areas alone, the number of poor in China would be less than one-quarter of its actual value (a poverty rate in 2001 of 1.5% rather than 8%). This calculation would be deceptive if the rise in inequality was the "price" of high economic growth, which did help reduce poverty. However, we find no evidence of such an aggregate trade off. The periods of more rapid growth did not bring more rapid increases in inequality. Nor did provinces with more rapid rural income growth experience a steeper increase in inequality. Thus the provinces that saw a more rapid rise in inequality saw less progress against poverty, not more. Over time, poverty has also become far more responsive to the (continuing) increase in inequality. At the outset of China's current transition period, levels of poverty were so high that inequality was not an important concern. That has changed. Furthermore, even without a further rise in inequality, the historical evidence suggests that more unequal provinces will face a double handicap in future poverty reduction; they will have lower growth and poverty will respond less to that growth.
2. Data on income poverty and inequality in China

We draw on the Rural Household Surveys (RHS) and the Urban Household Surveys (UHS) of China's National Bureau of Statistics (NBS).2 NBS ceased doing surveys during the Cultural Revolution in 1966-76 and started afresh in 1980 (for rural areas) and 1981 (urban). While virtually all provinces were included from the outset, 30% had sample sizes in the early surveys that NBS considered too small for estimating distributional statistics (though still adequate for the mean). However, this does not appear to be a source of bias; we could not reject the null hypothesis that the first available estimates of our poverty measures were the same for these "small sample" provinces as the rest.3 While sample sizes for the early surveys were smaller, they are still adequate for measuring poverty; 16,000 households were interviewed for the 1980 RHS and about 9,000 for the 1981 UHS. Since 1985, the surveys have had nationally representative samples of about 70,000 in rural areas and 30-40,000 in urban areas.

Footnote 2: On the history and design of these surveys see Chen and Ravallion (1996) and Bramall (2001).

Footnote 3: Included provinces had a poverty rate by our main poverty lines that was 1.9% points higher, but this is not significantly different from zero (t-ratio=0.32). This held for all other poverty measures.

An unusual feature of these surveys is that their sample frames are based on China's registration system rather than the population census. This means that someone with rural registration who has moved to an urban area is effectively missing from the sample frame. Migrants from rural areas gain from higher earnings (the remittances back home are captured in the RHS), but are probably poorer on average than registered urban residents. Against this likely source of downward bias in poverty estimates from the UHS, the UHS income aggregates do not capture fully the value of the various entitlements and subsidies received exclusively by urban residents, though these appear to be of declining importance over time.

While NBS has selectively made the micro data (for some provinces and years) available to outside researchers, the complete micro data are not available to us for any year. Instead we use tabulations of the distribution of income. The majority of these tabulated data are unpublished and were provided by NBS.4 The income aggregates include imputed values for income from own-production, but exclude imputed rents for owner-occupied housing. (Imputation is difficult, given the thinness of housing markets.) The usual limitations of income as a welfare indicator remain. For example, our measures of inequality between urban and rural residents may not adequately reflect other inequalities, such as in access to public services (health, education, water and sanitation -- all of which tend to be better provided in urban areas).

Footnote 4: There are a number of tabulations in the NBS Statistical Yearbook, but they only provide the percentages of households in each income class; without the mean income for each income class and mean household size these tabulations are unlikely to give accurate estimates of the Lorenz curve. Some of these data are available in the Provincial Statistical Yearbooks or the Household Survey Yearbooks.
There was a change in the methods of valuation for consumption of own-farm production in the RHS in 1990, when public procurement prices were replaced by local selling prices.5 To help us correct for this problem, NBS provided tabulations of the distribution in 1990 by both methods, allowing us to estimate what the income distributions for the late 1980s would have looked like if NBS had used the new valuation method. The Appendix describes the correction method in detail. Our corrections entail lower poverty measures in the late 1980s.

Footnote 5: Past estimates have used the "old prices" for the 1980s and the "new prices" for 1990 onwards, ignoring the change. Chen and Ravallion (1996) created a consistent series for 1985-90 for the micro data for a few provinces. However, this is not feasible without the complete micro data.

In measuring poverty from these surveys, we use two poverty lines. One is the long-standing "official poverty line" for rural areas of 300 Yuan per person per year at 1990 prices. (There is no comparable urban poverty line.) It has been argued by many observers that this line is too low to properly reflect prevailing views about what constitutes "poverty" in China. It can hardly be surprising that in such a rapidly growing economy, perceptions of what income is needed to not be considered poor will rise over time.6

Footnote 6: Poverty lines across countries tend to be higher the higher the mean income of the country, though with an initially low elasticity at low income (Ravallion, 1994).

In collaboration with the authors, NBS has been developing a new set of poverty lines that appears to better reflect current conditions. Region-specific food bundles are used, with separate food bundles for urban and rural areas, valued at median unit values by province. The food bundles are based on the actual consumption bundles of those between the poorest 15th percentile and the 25th percentile nationally. These bundles are then scaled to reach 2100 calories per person per day, with 75% of the calories from foodgrains.7 Allowances for non-food consumption are based on the nonfood spending of households in a neighborhood of the point at which total spending equaled the food poverty line in each province (and separately for urban and rural areas). The methods closely follow Chen and Ravallion (1996) and Ravallion (1994). For measuring poverty nationally we have simply used the means of these regional lines. With a little rounding off, we chose poverty lines of 850 Yuan per year for rural areas and 1200 Yuan for urban areas, both in 2002 prices. (Ideally one would build up all national poverty measures by applying the regional poverty lines to the provincial distributions and then aggregating. However, this would entail a substantial loss of information given that we have only 10-12 years of rural data at province level.)

Footnote 7: Without the latter condition, the rural food bundles were deemed to be nutritionally inadequate (in terms of protein and other nutrients) while the urban bundles were considered to be preferable. The condition was binding on both urban and rural bundles.
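As a rough illustration of the cost-of-basic-needs logic described above, the following Python sketch scales a food bundle to 2100 calories per day and adds a non-food allowance. All quantities, prices and the non-food share are made up, and the fixed non-food share is a cruder shortcut than the paper's method of using the non-food spending of households near the food poverty line.

```python
# Illustrative cost-of-basic-needs poverty line; all numbers are hypothetical, not NBS data.

food_bundle = {                  # item: (kg per person per day, kcal per kg, price per kg)
    "rice":       (0.40, 3600, 2.0),
    "vegetables": (0.30,  250, 1.5),
    "pork":       (0.05, 3000, 12.0),
}

target_kcal = 2100.0

kcal = sum(q * k for q, k, _ in food_bundle.values())
scale = target_kcal / kcal                        # scale the bundle to the calorie norm
food_cost_per_day = sum(scale * q * p for q, _, p in food_bundle.values())
food_line = 365 * food_cost_per_day               # yuan per person per year

nonfood_share = 0.30                              # assumed non-food share near the food line
poverty_line = food_line / (1 - nonfood_share)

print(f"food poverty line:  {food_line:7.1f} yuan/person/year")
print(f"total poverty line: {poverty_line:7.1f} yuan/person/year")
```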
We use the 2002 differential between the urban and rural lines to calculate an urban equivalent to the 300 Yuan rural line. Finally, we convert to prices at each date using the rural and urban Consumer Price Indices produced by NBS. We also use these urban and rural poverty lines as deflators for urban-rural cost-of-living (COL) adjustments in forming aggregate inequality measures and for measuring inequality between urban and rural areas. Past work in the literature on inequality in China has ignored the COL difference between urban and rural areas, and we will see that this does matter. However, our COL adjustments are not ideal, in that a common deflator is applied to all levels of income.

We provide three poverty measures. The headcount index (H) is the percentage of the population living in households with income per person below the poverty line. The poverty gap index (PG) gives the mean distance below the poverty line as a proportion of the poverty line (where the mean is taken over the whole population, counting the non-poor as having zero poverty gaps). The third measure is the squared poverty gap index (SPG), in which the individual poverty gaps are weighted by the gaps themselves, so as to reflect inequality amongst the poor (Foster et al., 1984). For all three, the aggregate measure is the population-weighted mean of the measures found across any complete partition of the population into subgroups. Datt and Ravallion (1992) describe our methods for estimating the Lorenz curves and calculating these poverty measures from the grouped data provided by the NBS tabulations.
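For readers who want to reproduce these measures from unit-record data (the paper itself works from grouped tabulations and estimated Lorenz curves), a minimal sketch of H, PG and SPG (the Foster-Greer-Thorbecke family) is given below; the incomes are hypothetical.

```python
import numpy as np

def fgt(incomes, z, alpha):
    """Foster-Greer-Thorbecke poverty measure P_alpha for poverty line z."""
    y = np.asarray(incomes, dtype=float)
    poor = y < z
    gaps = np.where(poor, (z - y) / z, 0.0)   # non-poor get a zero gap
    if alpha == 0:
        return poor.mean()                    # headcount index
    return np.mean(gaps ** alpha)

incomes = np.array([420, 610, 800, 900, 1300, 2500, 4000])  # hypothetical yuan/person/year
z = 850.0                                                    # rural line in 2002 prices

H   = fgt(incomes, z, 0)   # headcount index
PG  = fgt(incomes, z, 1)   # poverty gap index
SPG = fgt(incomes, z, 2)   # squared poverty gap index
print(f"H = {H:.3f}, PG = {PG:.3f}, SPG = {SPG:.3f}")
```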
3. Poverty measures for China 1981-2001

It can be seen from Table 1 that the (census-based) urban population share rose from 19% in 1980 to 39% in 2002. This may be a surprisingly high pace of urbanization, given that there were governmental restrictions on migration (though less so since the mid-1990s). For example, in India (with no such restrictions) the share of the population living in urban areas increased from 23% to 28% over the same period. We do not know how much this stemmed from urban expansion into rural areas versus rural-urban migration.

The cost-of-living differential in Table 1 rises over time, from 19% to 41% in 2002. This reflects the fact that the urban inflation rate is higher than the rural rate; the index at base 1980 (=100) had risen to 438 in urban areas by 2001 versus 368 in rural areas.8 This divergence between urban and rural inflation rates started in the mid-1980s and undoubtedly reflects the rising costs of urban goods that had been subsidized in the pre-reform economy.

Footnote 8: While there is a high correlation between the urban population share and the urban-rural COL differential, this appears to be spurious; there is no correlation between the changes over time.

Table 1 also gives our estimates of mean income for rural and urban areas. The large disparities in mean incomes between urban and rural areas echo a well-known feature of the Chinese economy, though our COL adjustment narrows the differential considerably.9 We will return in section 5 to discuss the implications for urban-rural inequality.

Footnote 9: Since the latter adjustment is based on the poverty lines, it may not be appropriate for the mean (at least toward the end of the period). But it is our best available option.

Table 2 gives our rural poverty measures. Table 3 gives our estimates for urban areas, and Table 4 gives the national aggregates. Figure 1 plots the national headcount indices for both poverty lines. By the new lines, the headcount index falls from 53% in 1981 to 8% in 2001. Conservatively assuming the 1981 urban number for 1980, the national index was 62% in 1980. For all years and all measures, rural poverty incidence exceeds urban poverty, and by a wide margin. Rural poverty measures show a strong downward trend, though with some reversals, notably in the late 1980s, early 1990s and in the last two years of our series. The urban measures also show a trend decline, though with even greater volatility.

There was more progress in some periods than others. There was a dramatic decline in poverty in the first few years of the 1980s, coming from rural areas. By our new poverty line, the rural poverty rate fell from 76% in 1980 to 23% in 1985. The late 1980s and early 1990s were a difficult period for China's poor. Progress was restored around the mid-1990s, though the late 1990s saw a marked deceleration, with signs of rising poverty in rural areas.10

Footnote 10: Using different measures and data sources, Benjamin et al. (2003) also find signs of falling living standards amongst the poorest in rural China in the late 1990s.

We can decompose the change in national poverty into a "population shift effect" and a "within sector" effect.11 Letting $P_t$ denote the poverty measure for date t, while $P^i_t$ is the measure for sector i = u, r (urban, rural), with corresponding population shares $n^i_t$, we can write an exact decomposition of the change in poverty between t = 1981 and t = 2001 as:

(1)   $P_{01} - P_{81} = \underbrace{[\,n^{r}_{01}(P^{r}_{01} - P^{r}_{81}) + n^{u}_{01}(P^{u}_{01} - P^{u}_{81})\,]}_{\text{within-sector effect}} + \underbrace{[\,(P^{r}_{81} - P^{u}_{81})(n^{r}_{01} - n^{r}_{81})\,]}_{\text{population shift effect}}$

The within-sector effect is the change in poverty weighted by final year population shares while the population shift effect measures the contribution of urbanization, weighted by the initial urban-rural difference in poverty measures. The "population shift effect" should be interpreted as the partial effect of urban-rural migration, in that it does not allow for any effects of migration on poverty levels within urban and rural areas.

Footnote 11: This is one of the decompositions for poverty measures proposed by Ravallion and Huppi (1991).

Table 5 gives this decomposition. We find that the national headcount index fell by 45% points, of which 35% points were accountable to the within-sector term; within this, 33% points was due to falling poverty within rural areas while only 2% was due to urban areas. The population shift from rural to urban areas accounted for 10% points. The other poverty measures tell a very similar story, though the rural share is slightly higher for SPG than PG, and lowest for H. As can be seen from the lower panel of Table 5, the pattern is also similar for the period 1991-2001, the main difference being that the "within-urban" share falls to zero using the old poverty line, with the rural share rising to around 80%.

So we find then that 75-80% of the drop in national poverty incidence is accountable to poverty reduction within the rural sector; most of the rest is attributable to urbanization. Understanding what has driven rural poverty reduction is clearly of first-order importance to understanding the country's overall success against poverty.
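A minimal sketch of the decomposition in equation (1), using hypothetical numbers rather than the entries of Table 5:

```python
def urban_rural_decomposition(P_r0, P_u0, P_r1, P_u1, n_r0, n_r1):
    """Split the change in a national poverty measure into within-sector and
    population-shift components, as in equation (1)."""
    n_u0, n_u1 = 1 - n_r0, 1 - n_r1
    within = n_r1 * (P_r1 - P_r0) + n_u1 * (P_u1 - P_u0)
    shift = (P_r0 - P_u0) * (n_r1 - n_r0)
    total = (n_r1 * P_r1 + n_u1 * P_u1) - (n_r0 * P_r0 + n_u0 * P_u0)
    return within, shift, total

# Hypothetical inputs in the spirit of the 1981-2001 comparison, not the paper's exact figures
within, shift, total = urban_rural_decomposition(
    P_r0=0.65, P_u0=0.06, P_r1=0.125, P_u1=0.005, n_r0=0.81, n_r1=0.61)
print(f"within-sector: {within:+.3f}  population shift: {shift:+.3f}  total: {total:+.3f}")
```

By construction the two components add up exactly to the total change, which is what makes the decomposition exact.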
The regression coefficient of the log national headcount index on the log national mean is ­1.43, with a t-ratio of 15.02. However, this regression is deceptive, given that both series are nonstationary; the residuals show strong serial dependence (the Durbin-Watson statistics is 0.62). Differencing the series deals with this problem.12 Table 6 gives regressions of the log difference in each poverty measure against the log difference in mean income per capita. (All growth rates in this paper are annualized log differences.) There is a possible upward bias in the OLS estimates stemming from common measurement errors in the dependent and independent variable; when the mean is overestimated the poverty measure will be underestimated. Following Ravallion (2001) we use the GDP growth rate as the instrument for the growth rate in mean income from the surveys, under the assumption that measurement errors in the two data sources are uncorrelated. (China's national accounts have been based 12 The correlograms of the first differences of the three log poverty measures shows no significant autocorrelations. While the first difference of the log mean still shows mild positive serial correlation, the residuals of the regression of the log difference of the poverty measure on the on the log difference of the mean shows no sign of serial correlation. 10 largely on administrative data.) Both the OLS and IVE results in Table 6 confirm studies for other countries indicating that periods of higher economic growth tended to be associated with higher rates of poverty reduction.13 The implied elasticity of poverty reduction to growth is over three for the headcount index and around four for the poverty gap measures. The IVE elasticity is similar to that for OLS, suggesting that the aforementioned problem of correlated measurement errors is not a serious source of bias. Notice that the intercepts are positive and significant in Table 6. Our OLS results imply that at zero growth, the headcount index would have risen at 11% per year (16% for PG and 19% for SPG). So falling poverty in China has been the net outcome of two strong but opposing forced: rising inequality and positive growth. We also give regressions in Table 6 that include the rate of change in inequality. It is unsurprising that this has a strong positive effect on poverty. (The regression can be viewed as a log-linear approximation of the underlying mathematical relationship between a poverty measure and the mean and distribution on which that measure is based.) What is more interesting is that there is evidence of a strong time trend in the impact of inequality, as indicated by the positive interaction effect between time and the change in inequality (Table 6). Poverty in China has become more responsive to inequality over this period. Indeed, the size of the interaction effect in Table 6 suggests that the elasticity of poverty to inequality was virtually zero around 1980, but the elasticity rose to 3.7 in 2001 for the headcount index and 5-6 for the poverty gap measures. While China's economic growth has clearly played an important role in the country's long-term success against absolute poverty, the data suggest that the sectoral composition of 13 Evidence on this point for other countries can be found in Ravallion (2001). 11 growth has mattered.14 This can be seen clearly if we decompose the growth rates by income components. Consider first the urban-rural decomposition for the survey mean. 
While China's economic growth has clearly played an important role in the country's long-term success against absolute poverty, the data suggest that the sectoral composition of growth has mattered.14 This can be seen clearly if we decompose the growth rates by income components. Consider first the urban-rural decomposition for the survey mean.

Footnote 14: The literature has often emphasized the importance of the sectoral composition of growth to poverty reduction; for an overview of the arguments and evidence see Lipton and Ravallion (1995). The following analysis follows the methods introduced in Ravallion and Datt (1996), which found that the composition of growth mattered to poverty reduction in India.

The overall mean at date t is $\mu_t = n^{r}_t\mu^{r}_t + n^{u}_t\mu^{u}_t$, where $\mu^{i}_t$ is the mean for sector i = r, u for rural and urban areas. It is readily verified that the growth rate in the overall mean can be written as $\Delta\ln\mu_t = s^{r}_t\Delta\ln\mu^{r}_t + s^{u}_t\Delta\ln\mu^{u}_t + [s^{r}_t - s^{u}_t(n^{r}_t/n^{u}_t)]\Delta\ln n^{r}_t$, where $s^{i}_t = n^{i}_t\mu^{i}_t/\mu_t$ (for i = r, u) is the income share. We can thus write down the following regression for testing whether the composition of growth matters:

(2)   $\Delta\ln P_t = \pi_0 + \pi_r s^{r}_t\Delta\ln\mu^{r}_t + \pi_u s^{u}_t\Delta\ln\mu^{u}_t + \pi_n[s^{r}_t - s^{u}_t(n^{r}_t/n^{u}_t)]\Delta\ln n^{r}_t + \varepsilon_t$

where $\varepsilon_t$ is a white-noise error term. The motivation for writing the regression this way is evident when one notes that if the $\pi_i$ (i = r, u, n) parameters are the same then equation (2) collapses to a simple regression of the rate of poverty reduction on the rate of growth ($\Delta\ln\mu_t$). Testing $H_0$: $\pi_i = \pi$ for all i tells us whether the urban-rural composition of growth matters. Note that this regression decomposition is based on somewhat different assumptions to that used in equation (1). In particular, any systematic within-sector distributional effects of urbanization would now change the measured contribution to poverty.

Table 7 gives the results for all three poverty measures. The null hypothesis that $\pi_i = \pi$ for all i is convincingly rejected in all three cases. Furthermore, we cannot reject the null that only the growth rate of rural incomes matters.

A second decomposition is possible for GDP per capita, which we can divide into n sources to estimate a test equation of the following form:

(3)   $\Delta\ln P_t = \pi_0 + \sum_{i=1}^{n}\pi_i s_{it}\Delta\ln Y_{it} + \varepsilon_t$

where $Y_{it}$ is GDP per capita from source i, $s_{it} = Y_{it}/Y_t$ is the source's share, and $\varepsilon_t$ is a white-noise error term. In the special case in which $\pi_i = \pi$ for i = 1,..,n, equation (3) collapses to a simple regression of the rate of poverty reduction on the rate of GDP growth ($\Delta\ln Y_t$).

With only 21 observations over time there are limits on how far we can decompose GDP. We used a standard classification of its origins, namely "primary" (mainly agriculture), "secondary" (manufacturing and construction) and "tertiary" (services and trade). Figure 2 shows how the shares of these sectors evolved over time. The primary sector's share fell from 30% in 1980 to 15% in 2001, though not monotonically. Almost all of this decline was made up for by an increase in the tertiary-sector share. However, it should not be forgotten that these are highly aggregated GDP components; the near stationarity of the secondary sector share reflects the net effect of both contracting and expanding manufacturing sectors.

Table 8 gives the estimated test equations for H and PG, while Table 9 gives the results for SPG (for which a slightly different specification is called for, as we will see). We find that the sectoral composition of growth matters to the rate of poverty reduction. The primary sector has far higher impact (by a factor of about four) than either the secondary or tertiary sectors. The impacts of the latter two sectors are similar (and we cannot reject the null that they have the same impact). For SPG we cannot reject the null hypothesis that only the primary sector matters and Table 9 gives the restricted model for this case. Our finding that the sectoral composition of growth matters echoes the findings of Ravallion and Datt (1996) for India, though tertiary sector growth was relatively more important in India than we find for China.

These aggregate results do not tell us about the source of the poverty-reducing impact of primary sector growth. With a relatively equitable distribution of access to agricultural land and higher incidence and depth of poverty in rural areas it is plausible that agricultural growth will bring large gains to the poor. There is evidence for China that this may also involve external effects at the farm-household level. One important source of externalities in rural development is the composition of economic activity locally. In poor areas of southwest China, Ravallion (2004) finds that the composition of local economic activity has non-negligible impacts on consumption growth at the household level. There are significant positive effects of local economic activity in a given sector on income growth from that sector. And there are a number of significant cross-effects, notably from farming to certain nonfarm activities. The sector that matters most as a generator of positive externalities turns out to be agriculture (Ravallion, 2004).

A natural counterfactual for measuring the contribution of the sectoral composition of growth is the rate of poverty reduction if all three sectors had grown at the same rate. We call this "balanced growth." Then the sector shares of GDP in 1981 would have remained constant over time, with 32% of GDP originating from the primary sector. From Table 8, the expected rate of change in the headcount index, conditional on the overall GDP growth rate, would then have been $0.155 - 4.039\,\Delta\ln Y_t$ (where 4.039 = 0.32 x 7.852 + 0.68 x 2.245, based on Table 8). For the same GDP growth rate, the mean rate of poverty reduction would then have been 16.3% per year, rather than 9.5%. Instead of 20 years to bring the headcount index down from 53% to 8% it would have taken about 10 years.

This assumes that the same overall growth rate would have been possible with balanced growth. There may well be a trade off, arising from limited substitution possibilities in production and rigidities in some aggregate factor supplies; or the trade-off could stem from aggregate fiscal constraints facing the government in supplying key public infrastructure inputs to private production. It is suggestive in this respect that there is a correlation of -0.414 between the two growth components identified from Table 8, $s_{1t}\Delta\ln Y_{1t}$ and $s_{2t}\Delta\ln Y_{2t} + s_{3t}\Delta\ln Y_{3t}$. However, this correlation is only significant at the 6% level, and it is clear that there were sub-periods (1983-84, 1987-88 and 1994-96) in which both primary sector growth and combined growth in the secondary and tertiary sectors were both above average.

We have seen that growth accounts for a sizeable share of the variance in rates of poverty reduction. When measured by survey means, growth accounts for about half of the variance. When measured from the national accounts, growth accounts for one fifth of the variance, though the share of variance explained is doubled when we allow for the sectoral composition of growth, with the primary sector emerging as far more important than the secondary or tertiary sectors (though again there may well be heterogeneity within these broad sectors).
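A sketch of how the share-weighted regressors in equation (3) can be constructed and estimated: the sectoral GDP series below are simulated, the use of lagged-year shares is an assumption, and the coefficients used to generate the poverty-change series merely mimic the rough magnitudes reported from Table 8.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 22  # 22 levels give 21 annual log differences, as in the paper's sample

# Hypothetical GDP per capita by sector of origin (primary, secondary, tertiary)
Y = np.exp(np.cumsum(rng.normal(0.05, 0.03, (T, 3)), axis=0)) * np.array([30.0, 45.0, 25.0])
Ytot = Y.sum(axis=1)

shares = Y[:-1] / Ytot[:-1, None]        # sector shares s_it (lagged-year shares, an assumption)
growth = np.diff(np.log(Y), axis=0)      # sectoral growth rates, delta ln Y_it
sg = shares * growth                     # share-weighted growth components

# Simulate a poverty-change series, then re-estimate equation (3)
dlnH = 0.155 - 7.85 * sg[:, 0] - 2.25 * sg[:, 1] - 2.25 * sg[:, 2] + rng.normal(0, 0.03, T - 1)

X = np.column_stack([np.ones(T - 1), sg])
beta, *_ = np.linalg.lstsq(X, dlnH, rcond=None)
print("intercept and coefficients on primary, secondary, tertiary:", np.round(beta, 2))
```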
Inequality and growth The literature has provided numerous partial pictures of inequality in China, focusing on sub-periods (the longest we know of is for 1985-95, in Kanbur and Zhang, 1999) and/or sub- sectors or selected provinces. As we will see, these partial pictures can be deceptive. We begin by considering inequality between urban and rural sectors; then within sectors and in the aggregate. Finally we turn to the relationship between inequality and growth. Has inequality risen between urban and rural areas? Figure 3 gives the ratio of the urban mean income to the rural mean. Without our adjustment for the cost-of-living difference, there is 15 a significant positive trend in the ratio of urban to rural mean income. The regression coefficient of the ratio of means on time is 0.047, with a t-ratio of 3.12 (this is corrected for serial correlation in the error term). However, when using the COL adjusted means the coefficient drops to 0.021 and is not significantly different from zero at the 5% level (t=1.79). Notice also that there are still some relatively long sub-period trends in which the ratio of the urban to the rural mean was rising. This includes the period 1986 to 1994 studied by Yang (1999) who argued that there was a rising urban-rural disparity in mean incomes in post-reform China. However, this is clearly not a general feature of the post-reform period. Indeed, the ratio of means fell sharply in the mid-1990s, though re-bounding in the late 1990s. There is a trend increase in absolute inequality between urban and rural areas. This can be measured by the absolute difference between the urban and rural means, as given in Figure 4 (normalized by the 1990 national mean). The trend in the absolute difference (again calculated as the regression coefficient on time) is 0.044 per year, with a t-ratio of 3.40 (again corrected for serial correlation in the error term). However, here too there were periods that went against the trend, including in the early 1980s and mid-1990s. Turning to inequality within urban and rural areas, we find trend increases, though rural inequality fell in the early 1980s and again in the mid-1990s (Table 10). In marked contrast to most developing countries, relative income inequality is higher in rural areas, though the rate of increase in inequality is higher in urban areas; it looks likely that the pattern in other developing countries will emerge in China in the near future. Notice also that there appears to be a common factor in the changes in urban and rural inequality; there is a correlation of 0.69 between the first difference in the log rural Gini index and that in the log urban index. We will return to this. 16 In forming the national Gini index in Table 10 we have incorporated our urban-rural cost of living adjustment. The table also gives the unadjusted estimates (as found in past work). As one would expect, national inequality is higher than inequality within either urban or rural areas. And allowing for the higher cost-of-living in urban areas reduces measured inequality. By 2001, the COL adjustment brings the overall Gini index down by over five percentage points. While a trend increase in national inequality is evident (Figure 5), the increase is not found in all sub- periods: inequality fell in the early 1980s and the mid-1990s. The rise in inequality is even more pronounced. Figure 6 gives the absolute Gini index, in which income differences are normalized by a fixed mean (for which we use the 1990 national mean). 
(The absolute Gini calculated this way is not bounded above by unity.) It is also notable that while relative inequality is higher in rural areas than urban areas, this reverses for absolute inequality, which is higher in urban areas at all dates.

Higher inequality greatly dampened the impact of growth on poverty. On re-calculating our poverty measures using the 2001 rural mean applied to the 1981 Lorenz curve, we find that the incidence of poverty in rural areas (by our upper line) would have fallen to 2.04% in 2001, instead of 12.5%. The rural PG would have fallen to 0.70% (instead of 3.32%) while the SPG would have been 0.16 (instead of 1.21). Repeating the same calculations for urban areas, poverty would have virtually vanished. But even with the same urban poverty measures for 2001 (so letting inequality within urban areas rise as it actually did), the national incidence of poverty would have fallen to 1.5% without the rise in rural inequality.

This raises the question of whether the same growth rate would have been possible without the rise in inequality. If de-controlling China's economy inevitably put upward pressure on inequality then we would be underestimating the level of poverty in 2001 that would have been observed without the rise in rural inequality, because the lower inequality would have come with a lower mean. Inequality has certainly risen over time, in line with mean income. The regression coefficient of the Gini index on GDP per capita has a t-ratio of 9.22 (a correlation coefficient of 0.90). But this correlation is probably spurious; the Durbin-Watson statistic is 0.45, indicating strong residual auto-correlation. This is not surprising since both inequality and mean income have strong trends, though possibly associated with different causative factors. A better test is to compare the growth rates with changes in inequality over time.15 Then it becomes far less clear that higher inequality has been the price of China's growth. The correlation between the growth rate of GDP and the log difference in the Gini index is −0.05. Now the regression coefficient has a t-ratio of 0.22 (and a Durbin-Watson of 1.75). This test does not suggest that higher growth per se meant a steeper rise in inequality.

The same conclusion is reached if instead of using annual data we divide the series into four sub-periods according to whether inequality was rising or falling at the national level, as in Table 11. If there was an aggregate growth-equity trade-off then we would expect to see higher growth in the periods in which inequality was rising. This is not the case; indeed, the two periods with the highest growth were when inequality was falling. These calculations do not reveal any sign of a short-term trade-off between growth and equity. Possibly these time periods are too short to capture the effect. Another test is to see whether the provinces that had higher growth rates saw higher increases in inequality; we return to that question in section 7.

Footnote 15: There is still positive first-order serial correlation of 0.48 in the first difference of log GDP, though the regression of the first difference of log Gini on log GDP shows no sign of serial correlation in the residuals. So the differenced specification is appropriate.
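The levels-versus-differences point above is easy to reproduce in code. Below is a minimal sketch in Python, using simulated placeholder series rather than the paper's data, of the two regressions and the Durbin-Watson check that motivates the differenced specification; none of the variable names or numbers here come from the paper.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Simulated annual series standing in for GDP per capita and the Gini index.
rng = np.random.default_rng(0)
T = 21
growth = 0.08 + 0.03 * rng.standard_normal(T)      # GDP growth rates (placeholder)
d_log_gini = 0.01 + 0.02 * rng.standard_normal(T)  # changes in log Gini (placeholder)
log_gdp = np.cumsum(growth)                        # trending level series
log_gini = np.cumsum(d_log_gini)

# Levels regression: two trending series give a high R2 but a low Durbin-Watson (spurious).
levels = sm.OLS(log_gini, sm.add_constant(log_gdp)).fit()

# Differenced regression: change in log Gini on the GDP growth rate.
diffs = sm.OLS(np.diff(log_gini), sm.add_constant(np.diff(log_gdp))).fit()

print("DW, levels:     ", durbin_watson(levels.resid))
print("DW, differences:", durbin_watson(diffs.resid))
print("t-ratio on growth in the differenced regression:", diffs.tvalues[1])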
What role has the sectoral composition of growth played in the evolution of inequality?16 Repeating our test based on equation (2), but this time using changes in the log Gini index as the dependent variable, we find strong evidence that the urban-rural composition of growth matters to the evolution of the Gini index:

(4) Δln G_t = 0.020 − 0.511 s_t^r Δln μ_t^r + 0.466 s_t^u Δln μ_t^u − 0.366 [s_t^r − s_t^u (n_t^r / n_t^u)] Δln n_t^r + ε̂_t^G
    (t-ratios: 1.285; −4.399; 2.651; −0.208)    R2 = 0.622; n = 20

There is no sign of a population shift effect on aggregate inequality and the rural and urban coefficients add up to about zero. The joint restrictions that the rural and urban coefficients sum to zero and that the population-shift coefficient is zero (borrowing the notation of equation 2) pass comfortably, giving the rate of change in inequality as an increasing function of the difference in (share-weighted) growth rates between urban and rural areas:

(5) Δln G_t = 0.015 + 0.499 (s_t^u Δln μ_t^u − s_t^r Δln μ_t^r) + ε̂_t^G
    (t-ratios: 2.507; 5.405)    R2 = 0.619; n = 20

If instead one looks at the components of China's GDP by origin, one finds that primary sector growth has been associated with lower inequality overall, while there is no correlation with growth in either the secondary or tertiary sectors. This can be seen from Table 12. It is clear that an important channel through which primary sector growth has been inequality reducing is its effect on the urban-rural income disparity. There is a negative correlation between primary sector growth and the changes in the (log) ratio of urban to rural mean income; the correlation is strongest if one lags primary sector growth by one period, giving the following OLS regression for the log of the ratio of the urban mean (Y_t^u) to the rural mean (Y_t^r):

(6) Δln(Y_t^u / Y_t^r) = 0.044 − 0.969 Δln Y_{1,t−1} + ε̂_t^Y
    (t-ratios: 2.657; −3.802)    R2 = 0.437; n = 20

Footnote 16: The literature on inequality and development has emphasized the importance of the sectoral composition of growth (see, for example, Bourguignon and Morrison, 1998).

Primary sector growth has also brought lower inequality within rural areas. At the same time, secondary sector growth has been inequality increasing within rural areas:17

(7) Δln G_t^r = 0.010 − 0.219 (Δln Y_{1t} − Δln Y_{2t}) + ε̂_t^r
    (t-ratios: 1.892; −4.516)    R2 = 0.346; n = 21

Both secondary and tertiary sector growth were inequality increasing in urban areas, but there is no sign of an effect from primary sector growth (the bulk of which stems from rural areas). The secondary and tertiary effect is strongest with a one year lag, but is no different between the two sectors when share-weighted, giving a simple regression for the rate of change in urban inequality:18

(8) Δln G_t^u = −0.064 + 1.340 (s_{2,t−1} Δln Y_{2,t−1} + s_{3,t−1} Δln Y_{3,t−1}) + ε̂_t^u
    (t-ratios: −2.078; 2.989)    R2 = 0.396; n = 21

An alternative perspective on the pattern of growth is found in the survey means. Table 13 gives regressions of the log difference of the Gini index by urban and rural areas on the growth rates (log differences) of both rural and urban mean incomes. We find that growth in rural incomes is inequality reducing nationally, and this is so in both urban and rural areas. However, there is a strong and roughly offsetting lagged effect in rural areas, suggesting that it is the positive (negative) shocks to rural incomes that reduce (increase) inequality. Growth in urban incomes is inequality increasing in the aggregate and within urban areas, but not rural areas. This echoes the results of Ravallion and Datt (1996) for India.

What then is driving the co-movement of inequality between urban and rural areas? The answer appears to lie in the role of rural incomes.
As we have seen, for both urban and rural areas, the first differences in the log Gini index are negatively correlated with rural income growth. The regression residuals for the changes in rural inequality in Table 13 show no significant correlation with those for urban inequality.19

Footnote 17: The homogeneity restriction in the following regression passes comfortably; if one adds Δln Y_{2t} to this regression its coefficient has a t-ratio of 0.45.
Footnote 18: There is (negative) first-order serial correlation in the residuals of this regression. Correcting for this, the slope coefficient falls to 0.974, though the standard error falls more (giving a t-ratio of 3.942).
Footnote 19: Rural economic growth as measured from the surveys does a better job in accounting for the correlation between changes in urban and rural Gini indices than does primary sector GDP growth.

6. Economy-wide policies and income distribution

The early 1980s saw high growth in primary sector output and rapid rural poverty reduction in the wake of de-collectivization and the privatization of land-use rights under the "household responsibility system." (Agricultural land had previously been farmed by organized brigades, in which all members shared the output more-or-less equally.) The literature has pointed to the importance of these reforms in stimulating rural economic growth at the early stages of China's transition (Fan, 1991; Lin, 1992; Chow, 2002). Since this was a one-off event, we cannot test its explanatory power against alternatives. However, it would appear reasonable to attribute the bulk of rural poverty reduction between 1981 and 1985 to this set of agrarian reforms. The rural headcount index fell from 64.7% in 1981 to 22.7% in 1985 (Table 2). After weighting by the rural population shares, this accounts for 77% of the decline in the national poverty rate between 1981 and 2001. Even if other factors accounted for (say) one third of the drop in rural poverty over 1981-85, we are left with the conclusion that China's agrarian reforms in the early 1980s accounted for half of the total decline in poverty over this 20 year period.

Agricultural pricing policies have also played a role. Until recently, the government has operated a domestic foodgrain procurement policy by which farmers are obliged to sell fixed quotas to the government at prices that are typically below the local market price. For some farmers this is an infra-marginal tax, given that they produce more foodgrains than their assigned quota; for others it will affect production decisions at the margin. It has clearly been unpopular with farmers (see, for example, Kung's (1995) survey of Chinese farmers' attitudes). Reducing this tax by raising procurement prices stimulated primary sector GDP. We find a strong correlation between the growth rate of primary sector output and the real procurement price of foodgrains (nominal price deflated by the rural CPI); see Figure 7. There is both a current and a lagged effect; an OLS regression of the growth rate in primary sector GDP on the current and lagged rates of change in the real procurement price (PP) gives:

(9) Δln Y_{1t} = 0.045 + 0.210 Δln PP_t + 0.315 Δln PP_{t−1} + ε̂_t
    (t-ratios: 5.937; 2.152; 3.154)    R2 = 0.590; D-W = 2.60; n = 19

It is not then surprising that we find a strong negative correlation between the changes in the government's procurement price and changes in inequality; Figure 8 plots the two series (lagging the procurement price change by one year); the simple correlation coefficient is −0.609.
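For readers who want to reproduce the mechanics of a regression like (9), the Python sketch below shows how the current and one-year-lagged price changes would be lined up. The series are simulated placeholders, not the actual procurement price or GDP data; the coefficients used to build the simulated series are taken from (9) purely so the example has something to recover.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 20
d_log_pp = 0.02 + 0.10 * rng.standard_normal(T)   # changes in log real procurement price (placeholder)

# Simulated primary-sector growth built to mimic the form of equation (9).
d_log_y1 = 0.045 + 0.210 * d_log_pp[1:] + 0.315 * d_log_pp[:-1] + 0.02 * rng.standard_normal(T - 1)

# Regressors: current price change and its one-year lag, plus a constant.
X = sm.add_constant(np.column_stack([d_log_pp[1:], d_log_pp[:-1]]))
fit = sm.OLS(d_log_y1, X).fit()
print(fit.params)    # close to (0.045, 0.210, 0.315) by construction
print(fit.tvalues)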
Cutting this tax has thus been an effective short-term policy against poverty. The regression coefficient of Δln H_t on Δln PP_{t−1} is −1.060 (t-ratio = 3.043). The channel for this effect was through agricultural incomes, which (as we have seen) responded positively to higher procurement prices. (The regression coefficient changes little if one adds controls for secondary and tertiary sector growth.) The elasticities of national poverty to procurement price changes are even higher for the poverty gap indices; for PG the regression coefficient (of log differences on log differences) is −1.433 (t=2.929) and for SPG it is even higher at −1.708 (t=3.134).

Two other types of economy-wide policies have been identified as relevant to poverty in the literature, namely macroeconomic stabilization and trade reform. A number of studies in other developing countries have found evidence that inflation hurts the poor, including Easterly and Fischer (2001) and Dollar and Kraay (2002), both using cross-country data, and Datt and Ravallion (1998), using data for India. There were two inflationary periods in China, 1988-89 and 1994-95. Poverty rose in the former period and fell in the latter. However, when one controls for procurement price changes we find an adverse effect of lagged changes in the rate of inflation for all three poverty measures; for the headcount index:

(10) Δln H_t = −0.082 − 1.257 Δln PP_{t−1} + 1.249 Δln CPI_{t−1} + ε̂_t
     (t-ratios: −3.058; −3.688; 2.493)    R2 = 0.491; D-W = 1.86; n = 19

where CPI is the rural CPI. The regression was similar for the other poverty measures. There are also strong (pro-poor) distributional effects of procurement and inflationary shocks, as can be seen by the fact that both regressors in (10) remain significant if one controls for the log difference in overall mean income:

(11) Δln H_t = 0.060 − 1.040 Δln PP_{t−1} + 0.882 Δln CPI_{t−1} − 2.335 Δln Y_t − 0.739 ε̂_{t−1} + ε̂_t
     (t-ratios: 3.791; −8.049; 4.651; −9.843; −3.775)    R2 = 0.907; D-W = 2.28; n = 18

It has also been claimed that China's trade reforms helped reduce poverty (World Bank, 2002; Dollar, 2004). However, the timing does not suggest that they are a plausible candidate for explaining China's progress against poverty. Granted, trade reforms had started in the early 1980s as part of Deng Xiaoping's "Open-Door Policy" -- mainly entailing favorable exchange rate and tax treatment for exporters and creation of the first special-economic zone, Shenzhen, near Hong Kong. However, the bulk of the trade reforms did not occur in the early 1980s, when poverty was falling so rapidly, but came later, notably with the extension of the special-economic zone principle to the whole country (in 1986) and from the mid-1990s, in the lead up to China's accession to the World Trade Organization (WTO); Table 14 shows that mean tariff rates fell only slightly in the 1980s and non-tariff barriers actually increased. And some of the trade policies of this early period were unlikely to have been good for either equity or efficiency.20

Footnote 20: For example, a two-tier price system allowed exporters to purchase commodities at a low planning price and then export them at a profit. For this reason, oil was a huge export item until 1986.

Nor does the time series on trade volume (the ratio of exports and imports to GDP) suggest that trade was poverty reducing, at least in the short term; the correlation between changes in trade volume and changes in the log headcount index is 0.00!
Nor are changes in trade volume (current and lagged two years) significant when added to either equation (10) or (11). Trade volume may well be endogenous in this test, though it is not clear that correcting for the bias would imply that it played a more important role against poverty. This would require that trade volume is positively correlated with the omitted variables. However, one would probably be more inclined to argue that trade volume is negatively correlated with the residuals; other (omitted) growth-promoting policies simultaneously increased trade and reduced poverty.

Other evidence, using different data and methods, also suggests that trade reform has had little impact on poverty or inequality. Chen and Ravallion (2004b) studied the household level impacts of the tariff changes from 1995 onwards (in the lead up to accession to the WTO). (The induced price and wage changes were estimated by Ianchovichina and Martin, 2004, using a CGE model.) There was a positive impact of these trade reforms on mean household income, but virtually no change in aggregate inequality and only slightly lower aggregate poverty in the short term.

7. Poverty at provincial level

So far we have focused solely on the national time series. We now turn to the less complete data available at province level. We focus solely on rural poverty. (Urban poverty incidence is so low in a number of provinces that it becomes hard to measure and explain trends.) The series on mean rural incomes from NBS is complete from 1980. However, there are only 11-12 years of provincial distributions available. Table 15 gives summary statistics on the "initial" values of the mean, poverty and inequality. For the mean, the first observation is for 1980; for the distributional measures the first available year is 1983 in two-thirds of cases and 1988 for almost all the rest. There are marked differences in starting conditions. Even for inequality, the Gini index around the mid-1980s varied from 18% to 33% (Table 15).

Table 16 gives the trends based on the OLS estimates of log X_it = α_i^X + β_i^X t + ε_it^X for variable X and province i. We assume an AR(1) error term for mean income; for the (incomplete, discontinuous) distributional data we have little practical choice but to treat the error term as white noise. Trend growth rates in mean income vary from 1% per year (in Xinjiang) to almost 7% per year (in Anhui). Trends in the Gini index vary from near zero (Guangdong) to 3% (Beijing). Guangdong had an astonishing trend rate of decline in H of 29% per year. At the other extreme there are six provinces for which the trend was not significantly different from zero, namely Beijing, Tianjin, Shanghai, Yunnan, Ningxia, and Xinjiang, though the first three of these started the period with very low poverty rates (Table 15).

The literature has pointed to divergence between the coastal and inland provinces.21 This has been linked to the government's regional policies, which have favored coastal provinces through differential tax treatment and public investment. We confirm expectations that coastal provinces had significantly higher trend rates of poverty reduction.22 The mean trend rate of decline in the headcount index was 8.43% per year for inland provinces (t=4.14) versus 16.55% for the coastal provinces (t=5.02); the t-statistic for the difference in trends is 2.10.

Poverty and growth at the provincial level

The association between rural income growth and poverty reduction is confirmed in the provincial data.
Figure 9 plots the trend rate of change in the headcount index against the trend rate of growth in mean rural income across provinces. The figure also identifies the three observations with the lowest initial poverty measures, for which there was also an increase (though statistically insignificant) in poverty over time, namely Beijing, Shanghai and Tianjin. The regression coefficient of the trend in the headcount index on the trend in rural income is −1.58, which is significant at the 5% level (t = −2.05). The 95% confidence interval for the impact of a 3% growth rate on the headcount index is about (0, 9%). However, if one drops Beijing, Shanghai and Tianjin then the relationship is steeper and more precisely estimated. The regression coefficient is then −2.43 (t=4.29). The 95% confidence interval for the impact of a 3% growth rate is then about (4%, 10%).

Footnote 21: See Chen and Fleisher (1996), Jian et al. (1996), Sun and Dutta (1997), and Raiser (1998).
Footnote 22: The coastal provinces are Hebei, Liaoning, Shanghai, Jiangsu, Zhejiang, Fujian, Shandong and Guangdong; following convention, we do not classify Guangxi as "coastal" though it has a coastal area.

While higher growth meant a steeper decline in poverty, we see in Figure 9 considerable dispersion in the impact of a given rate of growth on poverty. This is also evident if we calculate the "growth elasticity of poverty reduction" as the ratio of the trend in the headcount index to the trend in the mean. This varies from −6.6 to 1.0, with a mean of −2.3. What explains these diverse impacts of a given rate of growth on poverty? If inequality did not change then the elasticity will depend on the parameters of the initial distribution, roughly interpretable as the mean and "inequality." More generally, with changing distribution, the elasticity will also depend on the trend in inequality. On imposing data-consistent parameter restrictions, the following regression is easily interpreted:23

(12) β_i^H / β_i^Y = (−5.935 + 0.0136 y80_i^R)(1 − G83_i^R) + 1.365 β_i^G + ε̂_i
     (t-ratios: −4.487; 2.560; 2.392)    R2 = 0.386; n = 29

where y80_i^R is the initial mean for province i less the national mean. At zero trend in inequality and the mean residual, the elasticity is zero at G83^R = 1 and becomes more negative as inequality falls. At G83^R = 0, the elasticity at mean income is −6, but goes toward zero as income rises. So a given rate of growth had more impact on poverty in initially less unequal and poorer provinces.

Echoing our results using the national time series data, we find no evidence of a growth-equity trade-off in the provincial data. Figure 10 plots the trends in the Gini index against the trend in the mean; the correlation coefficient is −0.188. With no evidence of an aggregate trade-off, we are drawn to conclude that rising inequality over time put a brake on the rate of poverty reduction at provincial level. Provinces with lower increases in inequality had higher rates of poverty reduction (Figure 11); the correlation coefficient is 0.517 (t=3.14).

Footnote 23: This specification is a variation on Ravallion (1997). Starting from an unrestricted regression of β_i^H / β_i^Y on G83_i^R, y80_i^R, their interaction G83_i^R × y80_i^R and β_i^G, a joint F-test does not reject the null hypothesis (prob. = 0.17) that the joint restrictions needed to obtain (12) as the restricted form hold.
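The provincial trends used above are just OLS regressions of logs on time. As a concrete illustration, here is a minimal Python sketch of how such a trend, and the "growth elasticity of poverty reduction" defined above, might be computed; the series are placeholders for one hypothetical province, not the NBS data.

import numpy as np
import statsmodels.api as sm

def log_trend(series):
    """OLS trend of log(series) on time, i.e. the slope b in log X_t = a + b*t + e."""
    y = np.log(np.asarray(series, dtype=float))
    t = np.arange(len(y))
    return sm.OLS(y, sm.add_constant(t)).fit().params[1]

# Placeholder annual observations for one hypothetical province.
mean_rural_income = [300, 320, 355, 380, 430, 470, 520]
headcount_index   = [0.42, 0.38, 0.33, 0.30, 0.25, 0.22, 0.19]

beta_Y = log_trend(mean_rural_income)   # trend growth rate of the mean
beta_H = log_trend(headcount_index)     # trend rate of change of the headcount index
print(beta_Y, beta_H, beta_H / beta_Y)  # the last ratio is the total growth elasticity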
A simple measure of the cost to the poor of rising inequality can be obtained by projecting the poverty measure in 2001 that one would have expected if the growth process had been distribution neutral, such that all levels of income grew at the same rate. Figure 12 compares this simulated poverty measure for rural areas in 2001 with the actual values.24 The distributional shifts were poverty increasing; indeed, in 23 provinces the poverty rate in 2001 was more than three times higher than one would have expected without the rise in inequality. One province stands out as an exception to this pattern of rising inequality, namely Guangdong (the hinterland of Hong Kong). Because inequality showed no upward trend, Guangdong was able to achieve the highest rate of poverty reduction with only a slightly above average rate of growth and despite relatively high initial inequality (Table 15).

Footnote 24: The simulated poverty measure was obtained using the initial Lorenz curve and the 2001 mean.

How pro-poor was the geographic pattern of growth? This can be assessed by seeing whether there was higher growth in the provinces where growth had more impact on poverty nationally. Figure 13 gives the scatter plot of growth rates against the total elasticities (ratio of trend in H to trend in mean) weighted by the 1981 shares of total poverty. The weights assure that this gives the impact on national poverty of growth in a given province. It is plain that growth has not been any higher in the provinces in which it would have had the most impact on poverty nationally. This also echoes findings for India in the 1990s (Datt and Ravallion, 2002).

Explaining the provincial trends

It is instructive to see how much of the inter-provincial variance in trend rates of poverty reduction is explicable in terms of two sets of variables: (i) initial conditions related to mean incomes and their distribution, and (ii) location, notably whether the province is coastal or not (COAST). Guangdong is treated as a special case. In accounting for initial distribution, we include both the initial Gini index of rural incomes (G83^R) and the initial ratio of urban mean income to rural mean (UR).25 We postulate that these variables mattered to both the rate of growth and the growth elasticity of poverty reduction. Combining these variables, we obtain the following regression for the trend rate of change in the headcount index:26

(13) β_i^H = −67.877 + 0.141 Y80_i + 0.463 G83_i^R + 6.797 UR_i − 9.291 COAST_i − 25.012 GDONG_i + ε̂_i
     (t-ratios: −6.239; 8.090; 3.313; 3.201; −5.292; −15.160)    R2 = 0.827; n = 28

Initially poorer (in terms of mean income) and less unequal provinces (by both measures) had higher subsequent rates of poverty reduction. The effects are large; going from the lowest initial inequality to the highest cuts 7% points off the annual rate of poverty reduction. Controlling for the initial mean and distributional variables, being on the coast increased the trend rate of poverty reduction by 9% points; being in Guangdong raised it by (a massive) 25% points.

Footnote 25: This is defined as the ratio of the urban mean in 1985 (the first available data point from the UHS) and the first available rural mean (in two-thirds of the cases 1983).
Footnote 26: We also tried re-running this regression only using the 20 provinces for which the first year is 1983. The initial Gini index and the urban-rural income differential remained highly significant.
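The distribution-neutral simulation described above (footnote 24) is straightforward to mimic. The Python sketch below works with a simulated household income vector rather than the Lorenz-curve-plus-mean construction actually used, which is not reproduced here; the incomes and the poverty line are placeholders, chosen only to show the mechanics.

import numpy as np

def headcount(incomes, z):
    """Share of the population with income below the poverty line z."""
    return float(np.mean(np.asarray(incomes, dtype=float) < z))

rng = np.random.default_rng(2)
y_1981 = rng.lognormal(mean=5.0, sigma=0.5, size=10_000)  # placeholder 1981 incomes
y_2001 = rng.lognormal(mean=5.9, sigma=0.8, size=10_000)  # placeholder 2001 incomes
z = 200.0                                                  # hypothetical poverty line

# Distribution-neutral counterfactual: keep the 1981 distribution (its Lorenz curve)
# but scale every income by the observed growth in the mean.
y_neutral = y_1981 * (y_2001.mean() / y_1981.mean())

print("actual 2001 headcount:         ", headcount(y_2001, z))
print("distribution-neutral headcount:", headcount(y_neutral, z))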
There are two ways in which initial inequality mattered. One is through growth; less unequal provinces had higher growth rates, consistent with a body of theory and evidence.27 This can be seen if we switch to the trend in mean rural income as the dependent variable for equation (13), giving:

(14) β_i^Y = 14.143 − 0.007 Y80_i − 0.149 G83_i^R − 1.632 UR_i + 0.507 COAST_i + 1.290 GDONG_i + ε̂_i
     (t-ratios: 3.759; −1.294; −2.526; −2.682; 0.913; 1.875)    R2 = 0.423; n = 28

Footnote 27: For evidence on this point at county level for China see Ravallion (1998) and at village level see Benjamin et al. (2004); on the theory and evidence see Aghion et al. (1999) and Bardhan et al. (1999).

Surprisingly, the dummy variables for coastal provinces and Guangdong are insignificant in the growth regression; their effect on poverty is largely distributional. Secondly, initial distribution matters independently of growth, as we saw in equation (12). This is consistent with the fact that if one adds the trend rate of growth to equation (13) then both inequality measures remain significant, although the coefficients drop in size (by about one third) and the initial Gini index is only significant at the 10% level (the urban-rural differential remains significant). Growth has less impact on poverty in more unequal provinces, consistent with cross-country evidence (Ravallion, 1997).

8. Conclusions

China's success against poverty since the reforms that began in 1978 is undeniable. But a closer inspection of the numbers holds some warnings for the future and some caveats on the implications for fighting poverty in the rest of the developing world.

The specifics of the situation in China at the outset of the reform period should not be forgotten in attempting to draw implications for other developing countries. The Great Leap Forward and the Cultural Revolution had clearly left a legacy of pervasive and severe rural poverty by the mid-1970s. Yet much of the rural population that had been forced into collective farming (with weak incentives for work) could still remember how to farm individually. So there were some relatively easy gains to be had by undoing these failed policies -- by de-collectivizing agriculture and shifting the responsibility for farming to households. This brought huge gains to the country's (and the world's) poorest. The halving of the national poverty rate in the first few years of the 1980s must be largely attributable to picking these "low-lying fruits" of agrarian reform. But this was a one-time reform.

An obvious, though nonetheless important, lesson for other developing countries that is well illustrated by China's experience is the need for governments to do less harm to poor people, by reducing the (explicit and implicit) taxes they face. In China's case, the government has until recently operated an extensive foodgrain procurement system that effectively taxed farmers by setting quotas and fixing procurement prices below market levels. This gave the Chinese government a powerful anti-poverty lever in the short-term, by raising the procurement price as happened in the mid-1990s, bringing both poverty and inequality down.

When so much of a country's poverty is found in its rural areas, it is not surprising that agricultural growth played such an important role in poverty reduction in China. Here too the past efficacy of agricultural growth in reducing poverty in China reflects (at least in part) an unusual historical circumstance, namely the relatively equitable land allocation that could be achieved at the time of breaking up the collectives.
However, China's experience is consistent with the view that promoting agricultural and rural development is crucial to pro-poor growth in most developing countries. We also find some support for the view that macroeconomic stability (notably by avoiding inflationary shocks) has been good for poverty reduction. The score card for trade reform is less clear. While the country's success in trade reform may well bring longer term gains to the poor -- such as by facilitating more labor intensive urban economic growth -- the experience of 1981-2001 does not provide support for the view that China's periods of expanding external trade brought more rapid poverty reduction.

Looking ahead, this study points to some reasons to think that it may well be more difficult for China to maintain its past rate of progress against poverty without addressing the problem of rising inequality. To the extent that recent history is any guide to the future, we can expect that the historically high levels of inequality found in many provinces today will inhibit future prospects for poverty reduction -- just as we have seen how the provinces that started the reform period with (relatively) high inequality had a harder time reducing poverty. At the same time, it appears that aggregate growth is increasingly coming from sources that bring limited gains to the poorest. The low-lying fruits of efficiency-enhancing pro-poor reforms are getting scarce. Inequality is continuing to rise and poverty is becoming much more responsive to rising inequality. It also appears that perceptions of what "poverty" means are evolving in China. It can hardly be surprising to find that the standards that defined poverty 20 years ago have lost relevance to an economy that quadrupled its mean income over that period. China could well be entering a stage of development in which relative poverty becomes a more important concern. Economic growth will then be a blunter instrument for fighting poverty in the future.

Appendix: Adjustments for the change in valuation methods in 1990

The change in valuation methods is clearly not a serious concern for the early 1980s when foodgrain markets had not yet been liberalized (Guo, 1992; Chow, 2002). Since virtually all foodgrain output was sold to the government, it would have been appropriate to value consumption from own-production at the government's procurement price. However, with the steps toward liberalization of foodgrain markets starting in 1985, a discrepancy emerged between procurement and market prices, with planning prices for foodgrain being substantially lower than market prices in the late 1980s (Chen and Ravallion, 1996). The change in the methods of valuation for income-in-kind in 1990 (whereby planning prices were replaced by local selling prices) creates a problem in constructing a consistent series of poverty measures for China. Table A1 gives our calculations of the key summary statistics by both methods using the rural data for 1990 provided by NBS. This entailed about a 10% upward revision to NBS estimates of mean rural income and a downward revision to inequality estimates. On both counts, measured poverty fell, as can be seen by comparing the first two rows of numbers in Table A1. To address this problem in the data for the late 1980s, we calibrated a simple "correction model" to the data for 1990. Note first that the data from the tabulations provided by NBS do not come in equal-sized fractiles. So we must first "harmonize" the data for the old and new prices.
To do this we estimated parametric Lorenz curves for each distribution separately and used these to estimate the mean income of all those below each of 100 percentiles of the distribution ranked by income per person. Having lined up the distributions in common fractiles, we estimated a flexible parametric model of the log ratio of mean income at new prices to that at the old prices. A cubic function of the percentile gave an excellent fit to the data, in the form of the following regression for the ratio of income valued at the new prices (Y(new)) to that at the old prices (Y(old)) (t-ratios in parentheses):

(A1) Y(new) / Y(old) = 1.19272 − 0.20915 p + 0.23457 p^2 − 0.12562 p^3 + ε̂
     (t-ratios: 5421.5; −111.8; 54.5; −44.9)    R2 = 0.99959

where p = cumulative proportion of the population ranked by income per person (i.e., 0 < p ≤ 1).
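To illustrate how a correction of the form (A1) would be applied in practice, here is a minimal Python sketch; the fractile means are placeholders, and only the cubic coefficients are taken from the regression reported above.

import numpy as np

def valuation_ratio(p):
    """Fitted ratio Y(new)/Y(old) at cumulative population share p, from equation (A1)."""
    return 1.19272 - 0.20915 * p + 0.23457 * p**2 - 0.12562 * p**3

p = np.arange(1, 101) / 100.0                 # 100 percentiles
old_price_means = np.full(100, 100.0)         # placeholder fractile means at old (planning) prices
new_price_means = old_price_means * valuation_ratio(p)

# The fitted ratio is largest for the poorest fractiles and smallest for the richest,
# which is why the re-valuation raises the mean while lowering measured inequality.
print(valuation_ratio(np.array([0.01, 0.50, 1.00])))
print(new_price_means.mean() / old_price_means.mean())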
{"url":"http://www-wds.worldbank.org/external/default/WDSContentServer/IW3P/IB/2004/10/08/000012009_20041008125921/Rendered/INDEX/WPS3408.txt","timestamp":"2014-04-19T02:37:19Z","content_type":null,"content_length":"126352","record_id":"<urn:uuid:73b980b4-8e50-4d5b-a1ef-e795abc361ba>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairmount Heights, MD Calculus Tutor Find a Fairmount Heights, MD Calculus Tutor ...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studying that allowed me to efficiently learn all the required materials whil... 15 Subjects: including calculus, physics, geometry, GRE ...I have taught all math subjects including Algebra 1, Algebra 2, Geometry, Trigonometry, Pre-Calculus, and AP Calculus AB. I have taught math for 44 years. I have tutored students privately over the last 30 years. 21 Subjects: including calculus, statistics, geometry, algebra 1 ...I have used linear algebra in my work as an electrical engineer for many years. As an electrical engineer for over 50 years, I have used MATLAB in my work for building and evaluating mathematical models of real systems. In addition, I am a part time professor at Johns Hopkins University where I've been teaching a course in Microwave Receiver design for over 20 years. 17 Subjects: including calculus, English, geometry, ASVAB ...Additionally, I like to stay active by playing soccer and rugby with teams in the DC area.I have taken this class when I was an undergrad at the Colorado School of Mines and got an A. I can provide a transcript of verification if necessary. I have also tutored several students on this subject in the past. 13 Subjects: including calculus, chemistry, physics, geometry ...I have taken several Praxis Tests and have done very well on all of them. My scores highly qualify me to teach all of the math and science curricula at the middle school and high school levels. My scores are as follows: Praxis 1: 550/570 MS Science: 198/200 MS Math: 195/200 Chemistry: 177/200 ... 31 Subjects: including calculus, chemistry, physics, statistics Related Fairmount Heights, MD Tutors Fairmount Heights, MD Accounting Tutors Fairmount Heights, MD ACT Tutors Fairmount Heights, MD Algebra Tutors Fairmount Heights, MD Algebra 2 Tutors Fairmount Heights, MD Calculus Tutors Fairmount Heights, MD Geometry Tutors Fairmount Heights, MD Math Tutors Fairmount Heights, MD Prealgebra Tutors Fairmount Heights, MD Precalculus Tutors Fairmount Heights, MD SAT Tutors Fairmount Heights, MD SAT Math Tutors Fairmount Heights, MD Science Tutors Fairmount Heights, MD Statistics Tutors Fairmount Heights, MD Trigonometry Tutors Nearby Cities With calculus Tutor Bladensburg, MD calculus Tutors Brentwood, MD calculus Tutors Capitol Heights calculus Tutors Cheverly, MD calculus Tutors Colmar Manor, MD calculus Tutors District Heights calculus Tutors Edmonston, MD calculus Tutors Glenarden, MD calculus Tutors Landover Hills, MD calculus Tutors Morningside, MD calculus Tutors Mount Rainier calculus Tutors North Brentwood, MD calculus Tutors North Englewood, MD calculus Tutors Seat Pleasant, MD calculus Tutors Tuxedo, MD calculus Tutors
{"url":"http://www.purplemath.com/fairmount_heights_md_calculus_tutors.php","timestamp":"2014-04-16T13:45:04Z","content_type":null,"content_length":"24673","record_id":"<urn:uuid:aed8e181-a0e9-45d7-aeb2-490897399cae>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Brookville, NY Math Tutor Find a Brookville, NY Math Tutor ...Depending on your distance and the time needed to travel, I may request an additional amount to my rate to compensate for travel time/train fare. CANCELLATION POLICY I have a 24-hour policy in which you can contact me to cancel/reschedule a session. I will charge you half your rate if you cancel within those 24 hours. 4 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...In the years since finishing my undergraduate degree and my Master's in education, as well as my teaching certification, my students have continued to receive nothing less than 100% of what I have to give every time and the results speak for themselves: my past clients will gladly tell you themse... 23 Subjects: including precalculus, differential equations, proofreading, SAT reading ...Simple things like the way you solve a rubik's cube, or even the migrating patterns of birds, can be explained by math. So to say you don't like math is to say you don't like anything-because math is in everything! Originally I was a computer science minor as well, which always left people asking me-why computer science? 11 Subjects: including ACT Math, algebra 1, algebra 2, geometry ...I demonstrated excellent communication skills and planning expertise in developing educational programs for children creating appealing (and awarded) posters and craft activities for international nights in primary and secondary schools in the US, even creating an annual treasure hunt contest in ... 4 Subjects: including prealgebra, French, ESL/ESOL, elementary (k-6th) ...I love animals and, at the moment, have two indoor cats. - Each day, I have three goals: 1) make at least two people laugh; 2) learn something; and 3) do the very best I can at all I undertake. I believe that everyone learns in a somewhat different way and at a different pace. Some people learn by listening, others by visual means, and still others by a combination of both. 37 Subjects: including SAT math, French, linear algebra, algebra 1 Related Brookville, NY Tutors Brookville, NY Accounting Tutors Brookville, NY ACT Tutors Brookville, NY Algebra Tutors Brookville, NY Algebra 2 Tutors Brookville, NY Calculus Tutors Brookville, NY Geometry Tutors Brookville, NY Math Tutors Brookville, NY Prealgebra Tutors Brookville, NY Precalculus Tutors Brookville, NY SAT Tutors Brookville, NY SAT Math Tutors Brookville, NY Science Tutors Brookville, NY Statistics Tutors Brookville, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/Brookville_NY_Math_tutors.php","timestamp":"2014-04-19T02:17:29Z","content_type":null,"content_length":"24180","record_id":"<urn:uuid:d1ca924b-3134-4adf-a2c9-6638e24b1335>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
tutorial introduction to ACL2 Major Section: ACL2 Documentation This section contains a tutorial introduction to ACL2, some examples of the use of ACL2, and pointers to additional information. You might also find CLI Technical Report 101 helpful for a high-level view of the design goals of ACL2. If you are already familiar with Nqthm, see nqthm-to-acl2 for help in making the transition from Nqthm to ACL2. If you would like more familiarity with Nqthm, we suggest CLI Technical Report 100, which works through a non-trivial example. A short version of that paper, which is entitled ``Interaction with the Boyer-Moore Theorem Prover: A Tutorial Study Using the Arithmetic-Geometric Mean Theorem,'' is to appear in the Journal of Automated Reasoning's special issue on induction, probably in 1995 or 1996. Readers may well find that this paper indirectly imparts useful information about the effective use of ACL2.
{"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/1/7/language/acl2-html-docs/ACL2-TUTORIAL.html","timestamp":"2014-04-19T19:44:40Z","content_type":null,"content_length":"2213","record_id":"<urn:uuid:52146eb5-81e4-4d96-b290-151c0217c860>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
MathStep is a symbolic pocket calculator with a complete CAS (Computer Algebra System) built-in. That means that it can solve math problems for you! It has an expression editor with a live preview and even gives you hints to solving a problem by yourself for some operations. It can even work without an internet connection! The following operations are currently supported: - Simplification - Deriving - Integration - Calculating limits - Calculating sums - Determining series expansions - Solving equations Perfect!!! I missed having a TI-89 with its CAS in College Level Physics and now I can finally stop using wolfram|alpha for everything. This seems to work just fine, albeit not as fast as the 89 from a functional standpoint, just as great practically. Great UI too!! Props for using Holo's design language. Great, but... Developer can't be contacted. Email adress is invalid. If you read here, there is a bug in the app. You can't write a minus after an equality sign: "6 = - x" for instance. Also "x = x" says no solutions could be found. "6 = 6" says invalid equation. I would expect both to be satisfied by all values of x. I don't know if this is a bug or not. Perfect! Simple, straight forward, works great, and works offline. I was surprised with the option to plot as well. Droid react I don't have an options menu no matter what i try it only evaluates. I can't change it to simplify or anything else. Nice It is nice i am not telling perfectly because i didnt start and didnt experience so when i will see then i will know how this is good app for maths Great! It has helped me out in calc class various times. I recommend this app a lot! What's New - Norwegian translation Math application that finds derivatives, computes definite integrals, arc lengths, finds Taylor series, plots graphs of functions, including polar and parametric and slope fields. Also includes integral tables and custom keyboard. This is a quadratic equation solver that shows your work! It gives the teacher friendly answer AND shows work each step of the way. Very handy for tedious homework assignments OR just for finding the accurate answers quickly. TIP: Hold "Clear" to clear all text inputs. -Shows the work each step of the way. -Dynamically updates the solution as you change the variables. -Has a custom keypad. -Handles imaginary numbers. -Simplifies square roots & fractions. -Gives approximate & exact answers. -Each variable input is a simple calculator supporting the following operators (^, *, /, +, -). -Supports whole numbers & decimals with each operation. -Totally FREE! Please contact me if you find any bugs. tags: quadratic equation formula solver solve calculator show work shows step steps math algebra calculus calc homework help free Symbolic is just another GUI for the immense powerful Reduce computer algebra system which is open source software created by many programmers during several decades. You may find lots of examples and tutorials about Reduce in the internet. This app is based on code provided by Ahmad M. Akra and Prof. Arthur C. Norman (Codemist Ltd. JLisp, precompiled Reduce engine and most of the Latex formatting feature) who published the app AndroidReduce and made available the corresponding source code via sourceforge.net (great, many thanks!). You may choose between single line and multiline input. 
In single line mode you may omit closing parentheses and/or the semicolon, but in multiline mode you have to take care of matching parentheses by yourself and explicitly type in a semicolon for telling Reduce that the expression is complete. Multiline mode is convenient for breaking down more complex expressions into several lines and always applies to reading script files. Processing starts when tapping the Go button (if visible) or the Done or Next button of the soft keyboard. In Settings you find some options to modify the look of the output (preferred width, font size, text or latex). You may process Reduce scripts stored in files (*.red or *.txt) with ease. Just enter a filename in the input field and tap menu-entry Read to execute it. Or save your precious work by tapping Save. When saving, the tex formatted output fields go to png-files with increasing numbers added to the original file name. But this only works for output fields not too big. Plain text outputs go to a text-file along with the corresponding input expressions. This may be useful for later processing the results in Latex or similar text processing tools. If you do not know the exact filename containing your desired script just leave the input field empty (e.g. tap Clr button) and tap menu-entry Read to reveal a file selector box showing the files in the actually chosen directory. Choosing a file only copies the filename to the input text field. Don\'t forget to tap Read again (or the Go button, if visible) to actually start processing the script file because you still have the option to add "in " before the filename to let Reduce process the whole file at once (what is mostly faster). Tapping Clr or Go or Done during reading a script file will terminate processing the file. But the definitions realized so far stay valid in Reduce till you Reset Reduce. The directory to read from or to save to may be defined via Settings. This global directory is also valid for in and out statements of Reduce itself. But nested input files do need complete pathnames along with "in" or "out" or "shut". You may also modify some cosmetic features like background and text colors and the availability of the Clear and Go button which may support an efficient workflow. The settings get saved once you finish Settings with OK and are available next time you start the program.\n\n So, happy calculating! Please report bugs, questions and comments to Dieter Egger (dr.egger@alice.de). Create equations and letter art on the move. Use your mobile phone to create equations. Easy and intuitive 2D navigation system. Useful for taking notes in lectures or to save equations that you can recall at a later time. You can create any sort of equations such as polynomials, tensor equations matrices or anything you can think of since you have total freedom on how to arrange the symbols. You can also create ASCII art using a variety of symbols. One use for this app is to write equations on the train or to store equations to show to your colleagues at a conference. If you think of an equation and just have to write it down you should use this app. It combines the freedom of a 2D grid system with the accuracy of using characters and symbols. Also see the equations as LaTeX or Unicode or as plain text to copy and paste. Take screenshots of the equations you make and save it to your gallery. Equation format mode to easily format equations including fractions, integrals and square roots. Equation notepad is what's known as an equation editor or formula editor. 
For a $Pi donation, you can gain access to the beta release channel for Calculus Tools. New feature updates will be pushed out more quickly, but they may not be entirely stable or fully functional. New in v1.2.1 - Better graph scale adjustments MagicCalc Classic contains the same functions present in MagicCalc, but using compact keyboards, to feel like in real calculator. MagicCalc Classic is a full functions full screen scientific and programmable graphing calculator for Phones and Tablets. - One Input Screen - Product Features : Console window for input/output, and calculus operations. 2D graphic window for 2D functions and 2D parametric functions. 3D graphic window for 3D functions and 3D parametric functions. Program editor for scripting complex operations. You can save and load your programs. - We don't assume any responsabilities on copies installed outside the appstore. Universal free, every day use calculator with scientific features. One of top. Good for simple and advanced calculations! * Math expressions calculation (developed on RPN-algorithm but no RPN-calculators' kind UI!) * Percentages (calculation discount, tax, tip and other) * Radix mode (HEX/BIN/OCT) * Time calculation (two modes) * Trigonometric functions. Radians and degrees with DMS feature (Degree-Minute-Second) * Logarithmic and other functions * Calculation history and memory * Digit grouping * Cool color themes (skins) * Large buttons * Modern, easy and very user friendly UI * Very customizable! * NO AD! * Very small apk * More features will be added. Stay in touch! :) OLD NAME is Cube Calculator. PRO-version is currently available on Google Play. KW: mobicalc, mobicalculator, mobi, calc, cubecalc, mobicalcfree, android calculator, percentage, percent, science, scientific calculator, advanced, sine, simple, best, kalkulator, algebra, basic A calculator with 10 computing modes in one application + a handy scientific reference facility - different modes allow: 1) basic arithmetic (both decimals and fractions), 2) scientific calculations, 3) hex, oct & bin format calculations, 4) graphing applications, 5) matrices, 6) complex numbers, 7) quick formulas (including the ability to create custom formulas), 8) quick conversions, 9) solving algebraic equations & 10) time calculations. 
Functions include: * General Arithmetic Functions * Trigonometric Functions - radians, degrees & gradients - including hyperbolic option * Power & Root Functions * Log Functions * Modulus Function * Random Number Functions * Permutations (nPr) & Combinations (nCr) * Highest Common Factor & Lowest Common Multiple * Statistics Functions - Statistics Summary (returns the count (n), sum, product, sum of squares, minimum, maximum, median, mean, geometric mean, variance, coefficient of variation & standard deviation of a series of numbers), Bessel Functions, Beta Function, Beta Probability Density, Binomial Distribution, Chi-Squared Distribution, Confidence Interval, Digamma Function, Error Function, Exponential Density, Fisher F Density, Gamma Function, Gamma Probability Density, Hypergeometric Distribution, Normal Distribution, Poisson Distribution, Student T-Density & Weibull Distribution * Conversion Functions - covers all common units for distance, area, volume, weight, density, speed, pressure, energy, power, frequency, magnetic flux density, dynamic viscosity, temperature, heat transfer coefficient, time, angles, data size, fuel efficiency & exchange rates * Constants - a wide range of inbuilt constants listed in 4 categories: 1) Physical & Astronomical Constants - press to include into a calculation or long press for more information on the constant and its relationship to other constants 2) Periodic Table - a full listing of the periodic table - press to input an element's atomic mass into a calculation or long press for more information on the chosen element - the app also includes a clickable, pictorial representation of the periodic table 3) Solar System - press to input a planet's orbit distance into a calculation or long press for more information on the chosen planet 4) My Constants - a set of personal constants that can be added via the History * Convert between hex, oct, bin & dec * AND, OR, XOR, NOT, NAND, NOR & XNOR Functions * Left Hand & Right Hand Shift * Plotter with a table also available together with the graph * Complex numbers in Cartesian, Polar or Euler Identity format * Fractions Mode for general arithmetic functions including use of parentheses, squares, cubes and their roots * 20 Memory Registers in each of the calculation modes * A complete record of each calculation is stored in the calculation history, the result of which can be used in future calculations An extensive help facility is available which also includes some useful scientific reference sections covering names in the metric system, useful mathematical formulas and a detailed listing of physical laws containing a brief description of each law. A default screen layout is available for each function showing all buttons on one screen or, alternatively, all the functions are also available on a range of scrollable layouts which are more suitable for small screens - output can be set to scroll either vertically (the default) or horizontally as preferred – output font size can be increased or decreased by long pressing the + or - A full range of settings allow easy customisation - move to SD for 2.2+ users Please email any questions that are not answered in the help section or any requests for bug fixes, changes or extensions regarding the functions of the calculator - glad to help wherever possible. 
This is an ad-supported app - an ad-free paid version is also available for a nominal US$ 0.99 - please search for Scientific Calculator (adfree) I'm Fraction Calculator Plus and I'm the best and easiest way to deal with everyday fraction problems. Whether you're checking homework, preparing recipes, or working on craft or even construction projects, I can help: - Wish you could find the time to check your kids' math homework? Now checking fraction math takes just seconds. - Need to adjust recipe quantities for a larger guest list? Let me adjust your cup and teaspoon quantities. - Working on a craft or home project in inches? Stop double-or-triple calculating on paper - let me do it once, accurately. I'm attractive and effective and I make great use of either a phone or tablet display: - I show your calculations in crisp, clear, elegant type that you can read at-a-glance from a distance. - My innovative triple keypad display lets you type fast! (entering three and three quarters takes just 3 taps!). - Every fraction result gets automatically reduced to its simplest form to make your job easy. - NEW! Every result is also shown in decimal to make conversion a breeze. - It couldn't be easier to add, subtract, multiply, and divide fractions. Let Fraction Calculator Plus turn your phone or tablet into an everyday helping hand. This is an ad supported version - our ad-free version is also available. Fraction Calculator Plus (C) 2013 Digitalchemy, LLC Equation Editor allows you to create and share mathematical equations. You can use a simple metalanguage to write your equations and then you can send them to your friends, share them on your social networks or save them to your device with the resolution you select. You can also store your favourite equations to work with them later. Ability to find antilog. stability fix cleared crash on clear button added ads for our survival This app is able to calculate the logarithm for a number. You can choose the base as 2 and e, which are widely used in arithmetic calculations. Very useful tool for school and college! PowerCalc is a powerful Android scientific calculator with real look. It is one of the few Android calculators with complex number equations support. Features: * Real equation view editor with brackets and operator priority support * Component or polar complex entry/view mode * Equation and result history * 7 easy to use memories * Large universal/physical/mathematical/chemical constant table * Degrees, radians and grads mode for trigonometric functions * Fixed, scientific and engineering view mode * Easy to use with real look * Advertisement free! Would you like to have multiline equation editor with equation syntax hightiting, actual bracket highlighting and trigonometric functions of complex argument support? Upgrade to PowerCalc Pro. * Multiline equation editor * Equation syntax highliting * Actual bracket highliting * Trigonometric functions with complex argument support Stay tuned! We are preparing new functionalities: * Unit conversions * Radix modes * Help Found bug? Please contact us to fix it. If you find PowerCalc useful please upgrade to PowerCalc Pro to support further development. Thank you! Aplicación sencilla para calcular derivadas paso a paso. MathAlly Graphing Calculator is quickly becoming the most comprehensive free Graphing, Symbolic, and Scientific Calculator for Android. 
Here are some of our current features: -Enter values and view results as you would write them -Swipe up, down, left, or right to quickly switch between keyboard pages. -Long click on keyboard key to bring up dialog about key. -Undo and Redo keys to easily fix mistakes. -Cut, Copy, and Paste. -User defined functions with f, g, h Symbolic Calculator: -Simplify and Factor algebra expressions. -Polynomial long division. -Solve equations for a variable. -Solve equations with inequalities such as > and < -Solve systems of equations. -Simplify trigonometric expressions using trigonometric identities. -Graph three equations at once. -View equations on graph or in table format. -Normal functions such as y=x^2 -Inverse functions such as x=y^2 -Circles such as y^2+x^2=1 -Ellipses, Hyperbola, Conic Sections. -Logarithmic scaling -Add markers to graph to view value at given point. -View delta and distance readings between markers on graph. -View roots and intercepts of traces on graph. -Definite integration. Other Features: -Complex numbers -Hyperbolic functions -nCr and nPr functions -Change numeric base between binary, octal, decimal, and hexadecimal -Bitwise operators AND, OR, XOR, and NOT -Vector dot product and norm. Q. Is there are tutorial anywhere explaining how to use the graphing calculator? A. There are three into tutorials in the app for the calculator, graph equations, and graph screens. Additional tutorials can be found on our website http://www.mathally.com/ Q. How do I get to the keys for pi, e, solve, etc? A. There are four keyboard pages. Each swipe direction across the keyboard moves you to a different page. The default page is the swipe down page. To get to the page with trig functions, swipe left. To get to the matrix keys, swipe up. To get to the last page, swipe right. No matter what page you are on, the swipe direction to move to a specific page is always the same. Q. What do you have planned for future releases? A. You can keep up to date on the latest news on our blog at http://mathally.blogspot.com/ . This news will include what is coming up in future releases. Also feel free to leave comments and let me know what you think! If you find a bug or have questions, please email me. Math Ally Small app that calculates definite integral and area under the curve of ANY function for you! Lang.: EN, GER Please rate, report bugs and email translations ;) KWs: Maths, Calculator, Calculus, definite Integral, functions, Integral Calculator Sci Calculus is a professional scientific and graphics calculator with many useful features. In addition to the classic functions of a scientific calculator this application is also able to calculate complex mathematical expressions and to design, simplify, solve and find derivatives up to N order. Sci Calculus can be used also to convert numeric base (hex<->bin<->dec). The application also uses (in addition to normal "OK" button) the accelerometer to show the result on the screen (just shake the phone). Many other useful functions will be added soon ...
{"url":"https://play.google.com/store/apps/details?id=nl.vertinode.mathstep","timestamp":"2014-04-20T13:40:23Z","content_type":null,"content_length":"188372","record_id":"<urn:uuid:b56d6c57-cd7d-4daa-a6f2-8af6727d68a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Cube Notation
Mag's Rubik's Cube Notation
Why you are here
This document describes the notation I use in describing operations on Rubik's Cube type puzzles. I use this notation in my 3x3x3 cube solution, my 4x4x4 cube solution, and my 5x5x5 cube solution. My notation bears similarities to and differences from several other somewhat well-known notations. It is very simple, yet it has advantages over some other notations:
1. It covers cubes of any size. (Actually, for 10x10x10 and beyond, it would need a modification. But these cubes don't/can't exist in real life anyway.) It is easy to describe turns of inner slices on higher-order cubes, as well as turns of multiple slices at the same time.
2. It covers re-orientations of the entire cube, such as "roll the cube toward you, so that the top face becomes the front".
3. It is succinct -- even in the 3x3x3 case, most operations can be described with equal or fewer characters than in most other notations; some operations can be described with far fewer.
What you came for
A move consists of a face, a slice, and a direction. Each part is represented by a single character.
Faces are the 6 sides of the cube. These are named Up, Down, Left, Right, Front, and Back, and are represented by their initial letters, U, D, L, R, F, and B.
Slices, sometimes called slabs, are the divisions of a cube due to planar cuts. On the 3x3x3 cube, there are three horizontal slices parallel to the U and D faces, three vertical slices parallel to the L and R faces, and three vertical slices parallel to the F and B faces. The slices are numbered 1..N relative to a face. For example, the horizontal slices on a 4x4x4 cube, from top to bottom, are U1, U2, U3, and U4. The same four slices are also called D4, D3, D2, and D1, respectively, because they are the 4th, 3rd, 2nd, and 1st from the bottom. (This is just one of many examples of redundancy in my notation. Oh well.)
The direction of a move can be +, which means a quarter turn clockwise, -, which means a quarter turn counterclockwise, or *, which means a half turn.
Cubelet (also Cubie)
An individual colored piece of the cube. There are 26 cubelets on the 3x3x3 cube, 56 on the 4x4x4, and 98 on the 5x5x5. On the 3x3x3 and 5x5x5, six cubelets remain fixed (except for rotation) on the faces of the cubes. The rest of the cubelets move about the cube freely, to our great enjoyment and consternation.
□ 3x3x3 cubelets
☆ 8 corners
☆ 12 edges
☆ 6 middles
□ 4x4x4 cubelets
☆ 8 corners
☆ 24 edge pieces
○ 2 of these make an "edge"
☆ 24 middle pieces
○ 4 of these make a "middle"
□ 5x5x5 cubelets
☆ 8 corners
☆ 36 edge pieces: 12 edge-middles, 24 edge-edges
○ 1 edge-middle and 2 edge-edges make an "edge"
☆ 54 middle pieces: 6 middle-middles, 24 middle-edges, 24 middle-corners
○ 1 middle-middle, 4 middle-edges, and 4 middle-corners make a "middle"
○ 1 middle-middle and 2 middle-edges make a "middle-3-row"
○ 1 middle-edge and 2 middle-corners make an "edge-3-row"
Corner
A cubelet at the intersection of three faces. May be referred to by the names of the three faces that meet there. For example, UFR and DBL are corners.
Direction
One of the three ways in which a face may be turned. + means a quarter-turn clockwise, - means a quarter-turn counterclockwise, and * means a half turn.
Edge
The cubelets comprising the intersection of two faces. An edge on the 3x3x3 cube consists of 3 cubelets (and so on). An edge may be denoted by the names of the two intersecting faces; for example, UF and FR are edges.
Confusingly, the term "edge" is sometimes used to refer to just the "middle" edge pieces -- all the edge cubelets except the corner cubelets on either end.
Edge cubelet
One of the cubelets on an edge, except for the ones on the corners. On the 3x3x3 cube, there is one edge cubelet per edge. On the 5x5x5 cube there are three.
On the 5x5x5 cube, the 3 cubelets on one edge other than the corner cubelets.
On the 4x4x4 cube, the 2 cubelets on one edge other than the corner cubelets.
Face
One of the six surfaces of the cube. The faces are called U (top), D (down), R (right), L (left), F (front), and B (back), relative to the orientation of the cube in your hand.
Flip
Turn a single cubelet in place. Usually used for edge cubelets. See also rotate. Do not confuse with swap.
Inverse
The operation that exactly reverses the moves involved in another operation. Take each turn backwards, and take all of the turns in reverse order. For example, the inverse of (R2- D R2 D* R2- D R2) is (R2- D- R2 D* R2- D- R2).
Location
Where a cubelet is located. Also position.
Middle
On the 3x3x3, the fixed cubelet in the center of a face. On the bigger cubes, a general term for the whole group of non-edge and non-corner cubelets on a face.
Move
A single turn of one or more slices of the cube together. The notation for a move consists of:
1. A letter representing the face.
2. An optional list of numbers representing the slices (usually just one for the 3x3x3 cube, but frequently more for bigger cubes). If omitted, it is assumed to be '1'.
3. An optional character representing the direction. If omitted, it is assumed to be '+'.
There are multiple valid notations for any one possible move. For example, the following notations (if the cube is 3x3x3) all mean "turn the front face clockwise a quarter turn": F, F+, F1, F1+.
Operation
A sequence of moves.
Orientation
The way a cubelet is situated in its location. Center face cubelets (odd-sized cubes only) have 4 different orientations. Corner cubelets have 3. Middle edge cubelets (odd-sized cubes only) have 2. All other cubelets have only one.
Position
Where a cubelet is located. Also location.
Rotate
Turn a single cubelet in place. See also flip. Do not confuse with swap.
Slice
Also layer or slab. Nine co-planar cubelets on the 3x3x3 cube; 16 on the 4x4x4; 25 on the 5x5x5. The NxNxN cube is N slices deep any way you look at it. The slices are numbered 1, 2, 3, etc., relative to a face.
Swap
Switch the locations of 2 or 3 cubelets. Do not confuse with rotate.
Tom Magliery
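The "inverse" rule above (reverse the order of the moves and flip each direction, with + and - swapping and * unchanged) is mechanical enough to script. The following is only an illustrative sketch in Python — the move-string format follows this document's notation, but the function names and parsing choices are my own:

```python
def invert_move(move: str) -> str:
    # A move is: face letter, optional slice digits, optional direction (+, -, *).
    # An omitted direction means '+', per the notation above.
    direction = move[-1] if move[-1] in "+-*" else "+"
    body = move if move[-1] not in "+-*" else move[:-1]
    flipped = {"+": "-", "-": "+", "*": "*"}[direction]   # half turns are self-inverse
    return body + flipped

def invert_operation(moves):
    # Reverse the order of the moves and invert each one.
    return [invert_move(m) for m in reversed(moves)]

print(invert_operation("R2- D R2 D* R2- D R2".split()))
# ['R2-', 'D-', 'R2+', 'D*', 'R2-', 'D-', 'R2+']
# i.e. (R2- D- R2 D* R2- D- R2), matching the example above ('R2+' and 'R2' name the same move).
```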
{"url":"http://magliery.com/Cube/CubeNotation.html","timestamp":"2014-04-19T09:53:43Z","content_type":null,"content_length":"7459","record_id":"<urn:uuid:ea26841d-a9f6-4931-85b7-afda08991b82>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Proverbs Found 83 results: Mathematics Proverbs For the execution of the voyage to the Indies, I did not make use of intelligence, mathematics or maps. Christopher Columbus It is now quite lawful for a Catholic woman to avoid pregnancy by a resort to mathematics, though she is still forbidden to resort to physics or chemistry. H. L. Mencken But there is another reason for the high repute of mathematics: it is mathematics that offers the exact natural sciences a certain measure of security which, without mathematics, they could not Albert Einstein How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? Is human reason, then, without experience, merely by taking thought, able to fathom the properties of real things? Albert Einstein Mathematics are well and good but nature keeps dragging us around by the nose. Albert Einstein Mathematics is the language with which God has written the universe. Galileo Galilei As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. Albert Einstein If I were again beginning my studies, I would follow the advice of Plato and start with mathematics. Galileo Galilei But the creative principle resides in mathematics. In a certain sense, therefore, I hold true that pure thought can grasp reality, as the ancients dreamed. Albert Einstein One Crucifixion is recorded -- only -- How many be Is not affirmed of Mathematics -- Or History -- One Calvary -- exhibited to Stranger -- As many be As persons -- or Peninsulas -- Gethsemane -- Is but a Province -- in the Being's Centre -- Judea -- For Journey -- or Crusade's Achieving -- Too near -- Our Lord -- indeed -- made Compound Witness -- And yet -- There's newer -- nearer Crucifixion Than That -- Emily Dickinson Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians. Edsger Dijkstra Mathematics allows for no hypocrisy and no vagueness. I went off to college planning to major in math or philosophy-- of course, both those ideas are really the same idea. Frank Wilczek I have hardly ever known a mathematician who was capable of reasoning.
{"url":"http://www.litera.co.uk/mathematics_proverbs/3/","timestamp":"2014-04-20T06:07:04Z","content_type":null,"content_length":"13801","record_id":"<urn:uuid:3a0cff72-b83d-4867-b5d2-21ad03e2c2a7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
symmetric group
October 25th 2009, 06:47 AM #1
Junior Member
Mar 2009
symmetric group
1. does the symmetric group S7 contain an element of order 5? of order 10? of order 15?
2. what is the largest possible order of an element of S7?
could anyone help me, please ??
Use the fact that the order of a permutation is the least common multiple of the lengths of its cycles. The 7 elements must be partitioned into cycles in one of the following ways:
7, 6+1, 5+2, 5+1+1, 4+3, 4+2+1, 4+1+1+1, 3+3+1, 3+2+2, 3+2+1+1, 3+1+1+1+1, 2+2+2+1, 2+2+1+1+1, 2+1+1+1+1+1, 1+1+1+1+1+1+1.
It's not hard to see that the least common multiple is maximized when using the partition 3+4=7, yielding a maximum order of $3 \times 4 = 12$. So the product of a three-cycle with a four-cycle has order 12 and that is maximum.
October 25th 2009, 07:04 AM #2
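A quick way to check all three parts of the question (and the maximum) is to enumerate the cycle-type partitions of 7 and take least common multiples. A small Python sketch, using only the lcm-of-cycle-lengths fact quoted above (the helper names are mine):

```python
from math import gcd
from functools import reduce

def partitions(n, largest=None):
    """Yield every partition of n as a tuple of cycle lengths."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for part in range(min(n, largest), 0, -1):
        for rest in partitions(n - part, part):
            yield (part,) + rest

def lcm(nums):
    return reduce(lambda a, b: a * b // gcd(a, b), nums, 1)

orders = {lcm(p) for p in partitions(7)}
print(sorted(orders))   # [1, 2, 3, 4, 5, 6, 7, 10, 12] -- orders 5 and 10 occur in S7, 15 does not
print(max(orders))      # 12, from the partition 4 + 3
```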
{"url":"http://mathhelpforum.com/advanced-algebra/110293-symmetric-group.html","timestamp":"2014-04-16T11:46:44Z","content_type":null,"content_length":"31824","record_id":"<urn:uuid:74b56bdd-ee15-42f5-8a71-8d1ed996db7e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
AGT conjecture Abstract: More than 10 years ago, I constructed representations of Heisenberg algebras and affine algebras on the homology groups of instanton moduli spaces on certain 4-manifolds. The proof was mathematically rigorous, but was not conceptually satisfactory. About two years ago, physicists, Alday-Gaiotto-Tachikawa (AGT) proposed a much larger framework, which is conceptually satisfactory, but lacks mathematical footing. I will explain their theory, and a mathematical approach towards a conjecture. nakajima@kurims.kyoto-u.ac.jp
{"url":"http://www.kurims.kyoto-u.ac.jp/~nakajima/Talks/20121023.html","timestamp":"2014-04-21T14:46:55Z","content_type":null,"content_length":"925","record_id":"<urn:uuid:dd0530a8-3225-457a-a08c-f143d883ce5a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Analysis: an Introduction using R/R basics R is a command-driven statistical package. At first sight, this can make it rather daunting to use. However, there are a number of reasons to learn statistics using this computer program. The two most important are: □ R is free; you can download it from http://www.r-project.org and install it onto just about any sort of computer you like. □ R allows you to do all the statistical tests you are likely to need, from simple to highly advanced ones. This means that you should always be able to perform the right analysis on your data. An additional bonus is that R has excellent graphics and programming capabilities, so can be used as an aid to teaching and learning. For example, all the illustrations in this book have been produced using R; by clicking on any illustration, you can obtain the R commands used to produce it. A final benefit, which is of more use once you have some basic knowledge of either statistics or R, is that there are many online resources to help users of R. A list is available in the appendix to this book. How to use this book with REdit The main text in this book describes the why and how of statistics, which is relevant whatever statistical package you use. However, alongside the main text, there are a large number of "R topics": exercises and examples that use R to illustrate particular points. You may find that it takes some time to get used to R, especially if you are unfamiliar with the idea of computer languages. Don't worry! The topics in this chapter and in Chapter 2 should get you going, to the point where you can understand and use R's basic functionality. This chapter is intended to get you started: once you have installed R, there are topics on how to carry out simple calculations and use functions, how to store results, how to get help, and how to quit. The few exercises in Chapter 1 mainly show the possibilities open to you when using R, then Chapter 2 introduces the nuts and bolts of R usage: in particular vectors and factors, reading data into data frames, and plotting of various sorts. From then on, the exercises become more statistical in nature. If you wish to work straight through these initial exercises before statistical discussion, they are collected here. Note that when working through R topics online, you may find it more visually appealing if you set up wikibooks to display R commands nicely. If the R topics get in the way of reading the main text, they can be hidden by clicking on the arrow at the top right of each box. Starting REdit If you don't already have R installed on your computer, download the latest version for free from http://www.r-project.org, and install the base system. You don't need to install any extra packages yet. Once you have installed it, start it up, and you should be presented with something like this: R version 2.11.1 (2010-05-31) Copyright (C) 2010 The R Foundation for Statistical Computing ISBN 3-900051-07-0 R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. Natural language support but running in an English locale R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. You are now in an R session. 
R is a command-driven program, and the ominous-looking ">" character means that R is now waiting for you to type something. Don't be daunted. You will soon get the hang of the simplest commands, and that is all you should need for the moment. And you will eventually find that the command-line driven interface gives you a degree of freedom and power^[1] that is impossible to achieve using more "user-friendly" packages. R as a calculator Text marked like this is used to discuss an R-specific point. The basics of R can be learned by reading these sections in the order they appear in the book. There will also be commands that can be entered directly into R; you should be able to copy-and-paste them directly into your R session . Try the following to see how to use R as a simple calculator In the absence of any instructions of what to do with the output of a command, R usually prints the result to the screen. For the time being, ignore the [1] before the answer: we will see that this is useful when R outputs many numbers at once. Note that R respects the standard mathematical rules of carrying out multiplication and division before addition and subtraction: it divides 2 by 3 before adding 100. R commands can sometimes be rather difficult to follow, so occasionally it can be useful to annotate them with comments. This can be done by typing a hash (#) character: any further text on the same line is ignored by R. This will be used extensively in the R examples in this wikibook, e.g. 1. #this is a comment: R will ignore it 2. (100+2)/3 #You can use round brackets to group operations so that they are carried out first 3. 5*10^2 #The symbol * means multiply, and ^ means "to the power", so this gives 5 times (10 squared), i.e. 500 4. 1/0 #R knows about infinity (and minus infinity) 5. 0/0 #undefined results take the value NaN ("not a number") 6. (0i-9)^(1/2) #for the mathematically inclined, you can force R to use complex numbers > #this is a comment: R will ignore it > (100+2)/3 #You can use round brackets to group operations so that they are carried out first [1] 34 > 5*10^2 #The symbol * means multiply, and ^ means "to the power", so this is 5 times (10 squared) [1] 500 > 1/0 #R knows about infinity (and minus infinity) [1] Inf > 0/0 #undefined results take the value NaN ("not a number") [1] NaN > (0i-9)^(1/2) #for the mathematically inclined, you can force R to use complex numbers [1] 0+3i • If you don't know anything about complex numbers, don't worry: they are not important here. • Note that you can't use curly brackets {} or square brackets [] to group operations together Storing objects R is what is known as an "object-oriented" program. Everything (including the numbers you have just typed) is a type of object. Later we will see why this concept is so useful. For the time being, you need only note that you can give a name to an object, which has the effect of storing it for later use. Names can be assigned by using the arrow-like signs as demonstrated in the exercise below. Which sign you use depends on whether you prefer putting the name first or last (it may be helpful to think of as "put into" and as "set to"). Unlike many statistical packages, R does not usually display the results of analyses you perform. Instead, analyses usually end up by producing an object which can be stored. Results can then be obtained from the object at leisure. For this reason, when doing statistics in R, you will often find yourself naming and storing objects. 
The name you choose should consist of letters, numbers, and the "." character^[3], and should not start with a number. 1. 0.001 -> small.num #Store the number 0.0001 under the name "small.num" (i.e. put 0.0001 into small.num) 2. big.num <- 10 * 100 #You can put the name first if you reverse the arrow (set big.num to 10000). 3. big.num+small.num+1 #Now you can treat big.num and small.num as numbers, and use them in calculations 4. my.result <- big.num+small.num+2 #And you can store the result of any calculation 5. my.result #To look at the stored object, just type its name 6. pi #There are some named objects that R provides for you > 0.001 -> small.num #Store the number 0.0001 under the name "small.num" (i.e. put 0.0001 into small.num) > big.num <- 10 * 100 #You can put the name first if you reverse the arrow (set big.num to 10000). > big.num+small.num+1 #Now you can treat big.num and small.num as numbers, and use them in calculations [1] 1001.001 > my.result <- big.num+small.num+2 #And you can store the result of any calculation > my.result #To look at the stored object, just type its name [1] 1002.001 > pi #There are some named objects that R provides for you [1] 3.141593 Note that when the end result of a command is to store (assign) an object, as on input lines 1, 2, and 4, R doesn't print anything to the screen. Apart from numbers, perhaps the most useful named objects in R are . Nearly everything useful that you will do in R is carried out using a function, and many are available in R by default. You can use (or "call") a function by typing its name followed by a pair of round brackets. For instance, the start up text mentions the following function, which you might find useful if you want to reference R in published work: > citation() To cite R in publications use: R Development Core Team (2008). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org. A BibTeX entry for LaTeX users is url = {http://www.R-project.org}, title = {R: A Language and Environment for Statistical Computing}, author = {{R Development Core Team}}, organization = {R Foundation for Statistical Computing}, address = {Vienna, Austria}, year = {2008}, note = {{ISBN} 3-900051-07-0}, We have invested a lot of time and effort in creating R, please cite it when using it for data analysis. See also ‘citation("pkgname")’ for citing R packages. Many R functions can produce results which differ depending on that you provide to them. Arguments are placed inside the round brackets, separated by commas. Many functions have one or more arguments: that is, you can choose whether or not to provide them. An example of this is the function. It can take an optional argument giving the name of an R add-on package . If you do not provide an optional argument, there is usually an assumed default value (in the case of , this default value is , i.e. provide the citation reference for the base package: the package which provides most of the foundations of the R language). Most arguments to a function are named. For example, the first argument of the citation function is named package. To provide extra clarity, when using a function you can provide arguments in the longer form name=value. Thus does the same as If a function can take more than one argument, using the long form also allows you to change the order of arguments, as shown in the example code below. 1. 
citation("base") #Does the same as citation(), because the default for the first argument is "base" 2. #Note: quotation marks are needed in this particular case (see discussion below) 3. citation("datasets") #Find the citation for another package (in this case, the result is very similar) 4. sqrt(25) #A different function: "sqrt" takes a single argument, returning its square root. 5. sqrt(25-9) #An argument can contain arithmetic and so forth 6. sqrt(25-9)+100 #The result of a function can be used as part of a further analysis 7. max(-10, 0.2, 4.5) #This function returns the maximum value of all its arguments 8. sqrt(2 * max(-10, 0.2, 4.5)) #You can use results of functions as arguments to other functions 9. x <- sqrt(2 * max(-10, 0.2, 4.5)) + 100 #... and you can store the results of any of these calculations 10. x 11. log(100) #This function returns the logarithm of its first argument 12. log(2.718282) #By default this is the natural logarithm (base "e") 13. log(100, base=10) #But you can change the base of the logarithm using the "base" argument 14. log(100, 10) #This does the same, because "base" is the second argument of the log function 15. log(base=10, 100) #To have the base as the first argument, you have to use the form name=value > citation("base") #Does the same as citation(), because the default for the first argument is "base" To cite R in publications use: R Development Core Team (2008). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org. A BibTeX entry for LaTeX users is title = {R: A Language and Environment for Statistical Computing}, author = {{R Development Core Team}}, organization = {R Foundation for Statistical Computing}, address = {Vienna, Austria}, year = {2008}, note = {{ISBN} 3-900051-07-0}, url = {http://www.R-project.org}, We have invested a lot of time and effort in creating R, please cite it when using it for data analysis. See also ‘citation("pkgname")’ for citing R packages. > #Note: quotation marks are needed in this particular case (see discussion below) > citation("datasets") #Find the citation for another package (in this case, the result is very similar) The 'datasets' package is part of R. To cite R in publications use: R Development Core Team (2008). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org. A BibTeX entry for LaTeX users is title = {R: A Language and Environment for Statistical Computing}, author = {{R Development Core Team}}, organization = {R Foundation for Statistical Computing}, address = {Vienna, Austria}, year = {2008}, note = {{ISBN} 3-900051-07-0}, url = {http://www.R-project.org}, We have invested a lot of time and effort in creating R, please cite it when using it for data analysis. See also ‘citation("pkgname")’ for citing R packages. > sqrt(25) #A different function: "sqrt" takes a single argument, returning its square root. [1] 5 > sqrt(25-9) #An argument can contain arithmetic and so forth [1] 4 > sqrt(25-9)+100 #The result of a function can be used as part of a further analysis [1] 104 > max(-10, 0.2, 4.5) #This function returns the maximum value of all its arguments [1] 4.5 > sqrt(2 * max(-10, 0.2, 4.5)) #You can use results of functions as arguments to other functions [1] 3 > x <- sqrt(2 * max(-10, 0.2, 4.5)) + 100 #... 
and you can store the results of any of these calculations > x [1] 103 > log(100) #This function returns the logarithm of its first argument [1] 4.60517 > log(2.718282) #By default this is the natural logarithm (base "e") [1] 1 > log(100, base=10) #But you can change the base of the logarithm using the "base" argument [1] 2 > log(100, 10) #This does the same, because "base" is the second argument of the log function [1] 2 > log(base=10, 100) #To have the base as the first argument, you have to use the form name=value [1] 2 Note that when typing normal text (as in the name of a package), it needs to be surrounded by quotation marks^[4], to avoid confusion with the names of objects. In other words, in R refers to a function, whereas is a "string" of text. This is useful, for example when providing titles for plots, etc. You will probably find that one of the trickiest aspects of getting to know R is knowing which function to use in a particular situation. Fortunately, R not only provides documentation for all its functions, but also ways of searching through the documentation, as well as other ways of getting help. Getting help There are a number of ways to get help in R, and there is also a wide variety of online information. Most installations of R come with a reasonably detailed help file called "An Introduction to R", but this can be rather technical for first-time users of a statistics package. Almost all functions and other objects that are automatically provided in R have a help page which gives intricate details about how to use them. These help pages usually also contain examples, which can be particularly helpful for new users. However, if you don't know the name of what you are looking for, then finding help may not be so easy, although it is possible to search for keywords and concepts that are associated with objects. Some versions of R give easy access to help files without having to type in commands (for example, versions which provide menu bars usually have a "help" menu, and the Macintosh interface also has a help box in the top right hand corner). However, this functionality can always be accessed by typing in the appropriate commands. You might like to type some or all of the following into an R session (no output is listed here because the result will depend on your R system). 1. help.start() #A web-based set of help pages (try the link to "An Introduction to R") 2. help(sqrt) #Show details of the "sqrt" and similar functions 3. ?sqrt #A shortcut to do the same thing 4. example(sqrt) #run the examples on the bottom of the help page for "sqrt" 5. help.search("maximum") #gives a list of functions involving the word "maximum", but oddly, "max" is not in there! 6. ### The next line is commented out to reduce internet load. To try it, remove the first # sign. 7. #RSiteSearch("maximum") #search the R web site for anything to do with "maximum". Probably overkill here! The last but one command illustrates a problem you may come across with using the R help functions. The searching facility for help files is sometimes a bit hit-and-miss. If you can't find exactly what you are looking for, it is often useful to look at the "See also" section of any help files that sound vaguely similar or relevant. In this case, you might probably eventually find the function by looking at the "See also" section of the help file for . Not ideal!. Quitting R To quit R, you can use either the function or its identical shortcut, , which do not require any arguments. 
Alternatively, if your version of R has a menu bar, you can select "quit" or "exit" with the mouse. Either way, you will be asked if you want to save the workspace image. This will save all the work you have done so far, and load it up when you next start R. Although this sounds like a good idea, if you answer "yes", you will soon find yourself loading up lots of irrelevant past analyses every time you start R. So answer "no" if you want to quit cleanly. Setting up wikibooksEdit Before you start on the main text, we recommend that you add a few specific wikibooks preferences. The first three lines will display the examples of R commands in a nicer format. The last line gives a nicer format to figures consisting of multiple plots (known as subfigures). You can do this by creating a user CSS file, as follows. • Make sure you are logged in (and create yourself an account if you do not have one already). • Visit your personal css stylesheet, at Special:MyPage/skin.css. • Click on "Edit this page". • Paste the following lines into the large edit box pre {padding:0; border: none; margin:0; line-height: 1.5em; } .code .input ol {list-style: none; font-size: 1.2em; margin-left: 0;} .code .input ol li div:before {content: "\003E \0020";} table.subfigures div.thumbinner, table.subfigures tr td, table.subfigures {border: 0;} • If you know any CSS, make any alterations you like to this stylesheet. • Finally save the page by clicking on "Save page", Enough! Let's move on to the main text. 1. ↑ These are poor attempts at a statistical jokes, as you will soon find out. 2. ↑ Depending on how you are viewing this book, may see a ">" character in front of each command. This is not part of the command to type: it is produced by R itself to prompt you to type something. This character should be automatically omitted if you are copying and pasting from the online version of this book, but if you are reading the paper or pdf version, you should omit the ">" prompt when typing into R. 3. ↑ If you are familiar with computer programming languages, you may be used to using the underscore ("_") character in names. In R, "." is usually used in its place. 4. ↑ you can use either single (') or double (") quotes to delimit text strings, as long as the start and end quotes match 5. ↑ (note that this is a temporary hack until GeSHi supports R code, in which case Statistical Analysis: an Introduction using R/R/Syntax can be changed. The css code should really read .pre {padding:0; border: none; margin:0; line-height: 1.5em; } .source-R ol {list-style: none; font-size: 1.2em; margin-left: 0;} .source_R ol li div:before {content: "\003E \0020";} Last modified on 1 February 2014, at 22:27
{"url":"https://en.m.wikibooks.org/wiki/Statistical_Analysis:_an_Introduction_using_R/R_basics","timestamp":"2014-04-16T04:33:52Z","content_type":null,"content_length":"58989","record_id":"<urn:uuid:2c9ada9f-d34a-41fe-949a-760a240aea9d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
The Cartesian Plane 3.1: The Cartesian Plane Created by: CK-12 Kaitlyn walked into Math class and saw the following image displayed from the overhead projector. Her teacher asked everyone in the class to duplicate the picture on the blank sheet of paper that she had placed on each student’s desk. When the teacher felt that the students had completed the drawing, she asked them to share their results with the class. Most of the students had difficulty reproducing the picture. Kaitlyn told the class that she could not make the picture the same size as the one shown. She also said that she had a problem locating the leaves in the same places on the stem. Her teacher said that she could offer a solution to these problems. Watch This Khan Academy The Coordinate Plane The Cartesian plane is a system of four areas or quadrants produced by the perpendicular intersection of two number lines. The two number lines intersect at right angles. The point of intersection is known as the origin. One number line is a horizontal line and this is called the $x$. The other number line is a vertical line and it is called the $y$. The two number lines are referred to as the axes of the Cartesian plane. The Cartesian plane, also known as the coordinate plane, has four quadrants that are labeled counterclockwise. The value of the origin on the $x$$x$$y$$y$ Every point that is plotted on a Cartesian plane has two values associated with it. The first value represents the $x$$y$coordinates of the point and are written as the ordered pair $(x, y)$ To plot a point on the Cartesian plane: • Start at zero (the origin) and locate the $x-$$x$ • If the $x-$$x-$ • Once the $x-$abscissa) has been located, move vertically the number of units displayed by the $y-$ordinate). If the $y-$$x-$$y-$$y-$$x-$$y-$ • The point is can now be plotted. Examine the points $A, B, C$$D$ • $A (-4, 2)$$x$$A$ • $B (-2, -1)$$x$$B$ • $C (3, -4)$$x$$C$ • $D (6, 3)$$x$$D$ Example A For each quadrant, say whether the values of $x$$y$ Solution: The graph below shows where $x$$y$ Example B On the following Cartesian plane, draw an $x-y$ $A(5,3) \quad B(-3,-2) \quad C(4,-5) \quad D(-4,1)$ Example C Determine the coordinates of each of the plotted points on the following graph. Concept Problem Revisited Now, let us return to the beginning of the lesson to find out the solution that the teacher had for the students. Now that the students can see the picture on a Cartesian plane, the reproduction process should be much easier. The abscissa is the $x-$3 is the abscissa. Cartesian Plane A Cartesian plane is a system of four areas or quadrants produced by the perpendicular intersection of two number lines. A Cartesian plane is the grid on which points are plotted. The coordinates are the ordered pair $(x, y)$ Coordinate Plane The coordinate plane is another name for the Cartesian plane. The ordinate is the $y$7 is the ordinate The origin is the point of intersection of the $x$$y$ The $x$ is the horizontal number line of the Cartesian plane. The $y$ is the vertical number line of the Cartesian plane. Guided Practice 1. Draw a Cartesian plane that displays only positive values. 
Number the $x$$y$ LINE 1 (6, 0) (8, 0) (9, 1) (10, 3) (10, 6) (9, 8) (7, 9) (5, 9) STOP LINE 2 (6, 0) (4, 0) (3, 1) (2, 3) (2, 6) (3, 8) (5, 9) STOP LINE 3 (7, 9) (6, 12) (4, 11) (5, 9) STOP LINE 4 (4, 8) (3, 6) (5, 6) (4, 8) STOP LINE 5 (8, 8) (7, 6) (9, 6) (8, 8) STOP LINE 6 (5, 5) (7, 5) (6, 3) (5, 5) STOP LINE 7 (3, 2) (4, 1) (5, 2) (6, 1) (7, 2) (8, 1) (9, 2) STOP LINE 8 (4, 1) (6, 1) (8, 1) STOP 2. In which quadrant would the following points be located? i) (3, -8) ii) (-5, 4) iii) (7, 2) iv) (-6, -9) v) (-3, 3) vi) (9, -7) 3. State the coordinates of the points plotted on the following Cartesian plane. 1. The following picture is the result of plotting the coordinates and joining them in the order in which they were plotted. Your pumpkin can be any color you like. 2. i) (3, -8) – the $x$$y-$ ii) (-5, 4) – the $x$$y-$ iii) (7, 2) – the $x$$y-$ iv) (-6, -9) – the $x$$y-$ v) (-3, 3) – the $x$$y-$ vi) (9, -7) – the $x$$y-$ 3. $A(4,4) \quad B(-10,8) \quad C(8,-1) \quad D(-6,-6) \quad E(0,5) \quad F(-3,0) \quad G(2,-5) \quad H(0,0)$ Answer the following questions with respect to the Cartesian plane: 1. What name is given to the horizontal number line on the Cartesian plane? 2. What name is given to the four areas of the Cartesian plane? 3. What are the coordinates of the origin? 4. What name is given to the vertical number line on the Cartesian plane? 5. What other name is often used to refer to the $x-$ On each of the following graphs, select three points and state the coordinates of these points. 8. With a partner, create a picture on a Cartesian plane that is numbered ten round. Using the coordinates, list the points for at least five lines necessary for a classmate to complete this same picture. (Go back to the directions for the pumpkin). You can only attach files to Modality which belong to you If you would like to associate files with this Modality, please make a copy first.
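For the quadrant questions in the guided practice, the sign pattern of the coordinates is all that matters: both positive gives the first quadrant, negative $x$ with positive $y$ the second, both negative the third, and positive $x$ with negative $y$ the fourth; a point with a zero coordinate lies on an axis rather than in a quadrant. A small Python sketch of that rule (the function name is my own), run on the six points from the exercise:

```python
def quadrant(x, y):
    # Classify a point by the signs of its coordinates.
    if x == 0 or y == 0:
        return "on an axis (no quadrant)"
    if x > 0 and y > 0:
        return "first quadrant"
    if x < 0 and y > 0:
        return "second quadrant"
    if x < 0 and y < 0:
        return "third quadrant"
    return "fourth quadrant"   # x > 0 and y < 0

for point in [(3, -8), (-5, 4), (7, 2), (-6, -9), (-3, 3), (9, -7)]:
    print(point, "->", quadrant(*point))
```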
{"url":"http://www.ck12.org/book/CK-12-Algebra-I-Concepts-Honors/r9/section/3.1/","timestamp":"2014-04-20T16:57:24Z","content_type":null,"content_length":"133330","record_id":"<urn:uuid:04c4a12d-447b-4870-90a8-178dbce62777>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Seagoville Algebra 2 Tutor Find a Seagoville Algebra 2 Tutor ...Regarding the tutoring itself, I prefer to travel at MOST around 15-16 miles. preferably I recommend to students if their tutoring spot isn't quiet enough they can most certainly always be welcome at mine, where it is. And that will be at my home, otherwise, I have no trouble whatsoever coming o... 16 Subjects: including algebra 2, reading, chemistry, geometry ...I also know many memory tricks, and tips for doing well on exams. I believe that enjoyment and encouragement can be fundamental motivators for success. These will necessarily be qualities I commit to in any tutoring relationship. 17 Subjects: including algebra 2, reading, chemistry, geometry ...Louis School of Law so I have extensive experience in legal writing and legal research as well. I have experience helping high school and college level students write essays. I have experience preparing students to take the SAT and scored a 750 in the critical reading section when I took the test in 2005. 40 Subjects: including algebra 2, Spanish, English, chemistry ...I was required to teach math essentials, reading essentials, SAT/ACT Prep, as well as, homework support for students grades K-12th. I also had a handful of college students and adults that would come to the center for homework support. At Sylvan Learning Center I acquired the necessary skills t... 23 Subjects: including algebra 2, reading, physics, chemistry I have a bachelor's degree in education from Texas Wesleyan University, and over 10 years of public school teaching experience. I have taught at the primary, secondary, and college levels. I work very hard to make learning meaningful and fun. 39 Subjects: including algebra 2, reading, English, chemistry
{"url":"http://www.purplemath.com/Seagoville_Algebra_2_tutors.php","timestamp":"2014-04-21T14:49:25Z","content_type":null,"content_length":"23838","record_id":"<urn:uuid:ee7ec8b3-48a6-4d7a-bf5f-5c9a699e2bdf>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
population growth model
Having problems with a homework problem: 2 populations (J(t) and K(t)) of microbial species are each assumed to grow according to the Euler differential equation model, with different growth parameters c and d, respectively. Suppose the populations are grown together in a beaker, and define p(t) = J(t) / (J(t) + K(t)) to be the fraction of the total population that is of species type J. Using the differential equations for J and K, show that p(t) satisfies a logistic growth equation.
Any input would be helpful.
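If the "Euler differential equation model" here means plain exponential growth — an assumption on my part, i.e. $J' = cJ$ and $K' = dK$ — then one possible route is the quotient rule (a sketch, not necessarily the intended method):

$$\frac{dp}{dt} = \frac{J'(J+K) - J\,(J'+K')}{(J+K)^2} = \frac{cJ(J+K) - J(cJ+dK)}{(J+K)^2} = \frac{(c-d)\,JK}{(J+K)^2} = (c-d)\,p\,(1-p),$$

since $J/(J+K) = p$ and $K/(J+K) = 1-p$. That is a logistic equation for $p$ with intrinsic rate $c-d$ and carrying capacity 1.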
{"url":"http://mathhelpforum.com/calculus/2361-population-growth-model.html","timestamp":"2014-04-18T14:41:00Z","content_type":null,"content_length":"37529","record_id":"<urn:uuid:ffef0867-252f-42dd-8b0d-b8dfa1561add>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Post a New Question | Current Questions College Algebra if x=a is a real zero. (x-a) is a factor. Divide f(x) by (x-a) and see what the quotient is. Maybe you can factor it, maybe not. In this case, not. Wednesday, March 27, 2013 at 10:29am College Algebra How do you use the real zeros to factor f? Wednesday, March 27, 2013 at 9:58am College Algebra Possible rational zeros: ± 1, ± 1/2 f(1) = 2 - 1 + 2 - 1 ≠0 f(-1) = -2 - 1 - 2 - 1 ≠0 f(1/2) = 1/4 - 1/4 + 1-1 = 0 , yeahh, (2x - 1) is a factor Using long division... 2x^3-x^2+2x-1 = (2x - 1)(x^2 + 1) so x = 1/2 or x^2 = -1 x = 1/2 or x = ± i ... Wednesday, March 27, 2013 at 9:35am College Algebra Use the rational zeros theorem to find all the real zeros of the polynomial function. Use the zeros to factor f over the real numbers. f(x)=2x^3-x^2+2x-1 Wednesday, March 27, 2013 at 8:15am college algebra Squaring both sides 3 x + 7 = ( 3 x + 5 ) ^ 2 3 x + 7 = 9 x ^ 2 + 2 * 3 x * 5 + 5 ^ 2 3 x + 7 = 9 x ^ 2 + 30 x + 25 0 = 9 x ^ 2 + 30 x + 25 - 3 x - 7 9 x ^ 2 + 27 x + 18 = 0 Divide both sides by 9 x ^ 2 + 3 x + 2 = 0 If you don't know how to solve this equation then in ... Wednesday, March 27, 2013 at 1:16am college algebra √3x+7=3x+5 How do I solve when the answer is -1? Wednesday, March 27, 2013 at 1:03am College Gen. Math X Qtrs. 6X Dimes. (X+6) Nickels. 25*x + 10*6X + 5*(X+6) = 570. 25x + 60x + 5x+30 = 570 90x = 570-30 = 540 X = 6 Quarters. Amount = 6 * $.25 = $1.50. Tuesday, March 26, 2013 at 10:00pm College Algebra! help! complex zeros always come in conjugate pairs so if -i is a zero , so is +i if -2+i is a zero, so is -2 - i the zero of -4 comes from the factor (x+4) the zeros of ±i come from (x^ + 1) the zeros of -2 ± i produce (x-(-2+i))(x - (-2-i) ) = (x^2 + 4x + 5) so a ... Tuesday, March 26, 2013 at 9:21pm algebra 10th grade please help with this problem Your dad is taking Biology as part of the veterinarian program at the local community college. This semester he will spend a total of 40 hours in the biology lab, and he will take notes in class 6 hours per week. Define a variable for the number ... Tuesday, March 26, 2013 at 8:41pm College Algebra! help! Form a polynomial f(x) with the real coefficients having the given degree and zeros. Degree 5; Zeros: -4; -i; -2+i f(x)=a( ) Tuesday, March 26, 2013 at 8:23pm College Algebra Form a polynomial f(x) with real coefficients having the given degree and zeros. Degree 5; Zeros: -3; -i; -6+i F(x)=a ( ) Tuesday, March 26, 2013 at 7:14pm College algebra Form a polynomial f(x) with the real coefficients having the given degree and zeros. Degree 5; Zeros: -3; -i; -6+i f(x)=a( ) Tuesday, March 26, 2013 at 6:42pm College algebra f(1) = 1-2-13-10 ≠ 0 f(-1) = -1 - 2 + 13 -10 = 0 , so x+1 is a factor by synthetic division x^3 - 2x^2 - 13x - 10 = (x+1)(x^2 - 3x - 10) = (x+1)(x-5)(x+2) = 0 for roots x = -1 or x = -2 or x = 5 Tuesday, March 26, 2013 at 6:01pm College algebra Use the rationals theorem to find all the zeros of the polynomial function. Use the zeros to factor f over the real numbers. f(x)=x^3-2x^2-13x-10 x= Tuesday, March 26, 2013 at 5:07pm I don't know what the answer is "supposed" to be, but all of those are potential consequences of plagiarism. I knew a college freshman who inadvertently plagiarized and was suspended from school for a semester. She appealed and that decision was repealed -- but ... Tuesday, March 26, 2013 at 4:27pm I don't think any of those answers are correct. 
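The rational zeros bookkeeping used in the answer above for f(x)=2x^3-x^2+2x-1 (candidates ±1 and ±1/2, each tested by substitution) can be automated. This is only an illustrative sketch — the helper names are mine, and exact Fraction arithmetic is used so that 1/2 is recognised exactly:

```python
from fractions import Fraction

def rational_zero_candidates(coeffs):
    """coeffs are integers, highest degree first; return candidate p/q values."""
    lead, const = coeffs[0], coeffs[-1]
    ps = [d for d in range(1, abs(const) + 1) if const % d == 0]
    qs = [d for d in range(1, abs(lead) + 1) if lead % d == 0]
    return sorted({Fraction(s * p, q) for p in ps for q in qs for s in (1, -1)})

def evaluate(coeffs, x):
    # Horner's method with exact arithmetic.
    result = Fraction(0)
    for c in coeffs:
        result = result * x + c
    return result

coeffs = [2, -1, 2, -1]            # 2x^3 - x^2 + 2x - 1
for cand in rational_zero_candidates(coeffs):   # candidates: -1, -1/2, 1/2, 1
    if evaluate(coeffs, cand) == 0:
        print("rational zero:", cand)           # prints 1/2
```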
In the college where I taught, the first three were used every time a student plagiarized; he/she unfortunately didn't believe plagiarism rules would be enforced. I am not sure what 'cancellation' means here, ... Tuesday, March 26, 2013 at 4:25pm College A&PII 1.plasma osmolarity distilled waters quantity of salutes is 0 and the patients quantity would be 900 at 3L and 300osm/l. making the pre- equilibrium values of plasma: volume-4 Quantity:900 Concentration(mosm/l):225 since concentraion=Quantity/volume. Tuesday, March 26, 2013 at 3:34am college physics A 30.3-kg child starting from rest slides down a water slide with a vertical height of 10.5 m. (Neglect friction.) (a) What is the child's speed halfway down the slide's vertical distance? Monday, March 25, 2013 at 10:54pm Hi, I am an 11th grade student beginning to look into some college options. I am interested in pursuing a career in proctology and have been reading some preliminary books on the topic. Can anyone recommend a college or pre-med program that would offer a strong proctology ... Monday, March 25, 2013 at 8:53pm college algebra X^4+x^3=16-8x-6x^2 x^4 + x^3 + 6x^2 + 8x - 16 = 0 you must know the factor theorem to do these look at the constant of 16 if there are factors of the type ( ?x + c) , c must be a factor of 16 so try ±1, ±2, ±4, ... that is, try factors of 16 e.g. let x = 1... Monday, March 25, 2013 at 8:50pm college algebra I'm stuck on 2 problems I know the answers but need to learn to show the work. X^4+x^3=16-8x-6x^2. Answer: -2,1,+-√8 3x^3+4x^2+6=x. (-2,1/3, i√8/3) Please help so I know how to figure these equations out.A big THANKS Monday, March 25, 2013 at 7:53pm College algebra recall that complex roots come in conjugate pairs, so the complete factorization is (x-5)(x+i)(x-i)(x-(-9+i))(x-(-9-i)) (x-5)(x^2+1)((x+9)-i)((x+9)-i) (x-5)(x^2+1)((x+9)^2+1) (x-5)(x^2+1)(x^2+18x+82) Monday, March 25, 2013 at 3:06pm It seems like you are looking to correlate sleep times to reaction times. If you will actually be doing the experiment rather than just proposing one, it might be convenient to choose college peers. (I am assuming you are in college.) You might want to do a pilot study with ... Monday, March 25, 2013 at 2:37pm College algebra Form a polynomial f(x) with real coefficents having the given degree and zeros degree 5; zeros: 5; -i; -9 +i f(x)=a Monday, March 25, 2013 at 1:49pm College Gen. Math There are 6 times as many dimes as quarters, and 6 times more nickels than quarters. If you have $5.70, how much money do you have in quarters? Monday, March 25, 2013 at 12:06pm College Physics A child's toy consists of a toy car of mass 0.100kg which is able to roll without friction on the loop-the-loop track shown. The car is accelerated from rest by pushing it with force F over a distance d, then the car slides with a constant velocity until it encounters the ... Sunday, March 24, 2013 at 10:36pm College algebra Find the vertical, horizontal, and qblique asymptotes, if any, for the following rational function. T(x)=x^2/x^4-256 Sunday, March 24, 2013 at 9:34pm college algebra b(x) = x^3/(5x^3 - x^2 - 22x) = x^2/(5x^2 - x - 22) , after dividing by x , x≠0 f'(x) = (2x(5x^2 - x - 22) - x^2(10x - 1) )/(5x^2-x-22)^2 to have a vertical tangent, the slope must be undefined, that is the denominator of the above f'(x) must be zero 5x^2 - x - ... 
Sunday, March 24, 2013 at 8:32pm college algebra determine the vertical asymptotes of the graph of function.g(x)=x^3 diveded by 5x^3-x^2-22x Sunday, March 24, 2013 at 8:22pm MA 107 college algebra (3^2)^5 = 3^10 = 59049 Sunday, March 24, 2013 at 8:14pm MA 107 college algebra Factor any difference of two squares, or state that the polynomial is prime. Assume any variable exponents represent whole numbers. 2 - 36 X Sunday, March 24, 2013 at 7:41pm MA 107 college algebra Simplify the expression using the power rule.(3to the power of 2)to the power of 5 Sunday, March 24, 2013 at 7:38pm MA 107 college?? algebra √81 = 9 9 - 2 = 7 Sunday, March 24, 2013 at 7:36pm MA 107 college algebra Each side of a square is lengthened by 2 inches. The area of this new, larger square is 81 square inches. Find the length of a side of the original square. Sunday, March 24, 2013 at 7:34pm College Algerbra You're very welcome, Eric. Sunday, March 24, 2013 at 5:16pm College Algerbra Thank You so much for your invaluable help and patience with walking me through these problems Ms. Sue, I hope you have a great day! Sunday, March 24, 2013 at 5:11pm College Algerbra Sunday, March 24, 2013 at 5:08pm college math number of ways = C(9,2) x C(39,3) = 36(9139) = .... number of committees without restrictions = C(48,5) = 1712304 Sunday, March 24, 2013 at 5:07pm College Algerbra Sunday, March 24, 2013 at 5:04pm College Algerbra I know right? I almost did not pass my ability to benefit test to get into Barstow College my math skills were so bad. So would the answer be x<_-1.4? Sunday, March 24, 2013 at 5:01pm college math How many different committees can be formed from 9 teachers and 39 students if the committee consists of 2 teachers and 3 students? In how many ways can the committee of 5 members be selected? Sunday, March 24, 2013 at 4:54pm College Algerbra Sorry Mr. Reiny, I guess I should of figured that out since you are so smart at doing the math problems. I do not have an option key on my windows 7 keyboard but I bet there is another way I can do the underline thing. Thanks Again for taking time out of your day to help us ... Sunday, March 24, 2013 at 4:54pm College Algerbra C = 5/9 (-21) I subtracted 32 from 11 (the degrees F). C = -105/9 I multiplied 5/9 times -21 C = -11.67 I divided the numerator by the denominator When the temperature is 77 degrees F, it is 25 degrees C. Work with the formula until you get 25 degrees C. Sunday, March 24, 2013 at 4:47pm College Algerbra I am sorry, but I find it extremely odd that you call these questions "College Algebra" and you don't know what 7 ÷ 5 is . What "college" are you attending ? Sunday, March 24, 2013 at 4:45pm College Algerbra I answer most of the questions on a Mac, where I hold down the "option" key as I press the < There are combinations of key like that on the PC also, I don't have the list handy right now. BTW, I am an old retired guy, not a woman. Sunday, March 24, 2013 at 4:42pm College Algerbra I don't know, -35? I stink at math and all of it's many steps. Sunday, March 24, 2013 at 4:40pm College Algerbra Thanks Reiny, you assumed correct, how did you get the line under the greater than sign? You are a very smart and kind woman to have been such a great help, Thanks! Sunday, March 24, 2013 at 4:36pm College Algerbra So I did what you said but I am not sure how you got to C= 5/9(-21) to C=105/9 So on mine C=(5/9)(77-32) I got C=5/9(-45) I am unsure how you did the math to come to your C=-105/9 answer and then the C=11.67 answer. 
Please show me those last steps if you do not mind, Thanks ... Sunday, March 24, 2013 at 4:29pm College Algerbra well, what is -7/5 ?? Sunday, March 24, 2013 at 4:23pm College Algerbra looking at all those positive numbers on the left, there is no way the inequality sign could have changed. I am assuming your _ < is ≤ ?? .1(.1x + .1) < -.8 times 10 .1x + .1 ≤ -8 times 10 again x + 1 < -80 x≤ -81 Sunday, March 24, 2013 at 4:21pm College Algerbra I got the answer x >_ -81, is that correct? Sunday, March 24, 2013 at 4:07pm College Algerbra I tried that and I came up with x< -1.4, is that correct? Sunday, March 24, 2013 at 4:03pm College Algerbra Use the same method I used, except substitute 77 for 11. Sunday, March 24, 2013 at 3:45pm College Algerbra Hint: multiply both sides by 10 and proceed like I showed you in the last one Sunday, March 24, 2013 at 3:40pm College Algerbra multiply each term by 7 to get of fractions -5x - 14 ≥ -7 -5x ≥ 7 x ≤ -7/5 Sunday, March 24, 2013 at 3:39pm College Algerbra THE TEMPERATURE ON A SUMMER DAY IS 77 DEGREES FAHRENHEIT.the formula to convert the temperature to degrees Celsius is C = 5/9(F-32) what is the corresponding temperature in degrees Celsius? Sorry Ms. Sue I cut of the top of the screenshot, it is 77 degrees Fahrenheit. Sunday, March 24, 2013 at 3:39pm College Algerbra Solve the inequality. 0.1(0.1x+0.1) _< -0.8 Sunday, March 24, 2013 at 3:24pm College Algerbra Solve the inequality. - 5x / 7 -2>-1 Sunday, March 24, 2013 at 3:20pm College Algerbra C = 5/9(F-32) C = (5/9) (11 - 32) C = 5/9 (-21) C = -105/9 C = 11.67 Sunday, March 24, 2013 at 3:19pm College Algerbra THE TEMPERATURE ON A SUMMER DAY IS // DEGREES FAHRENHEIT.the formula to convert the temperature to degrees Celsius is C = 5/9(F-32) what is the corresponding temperature in degrees Celsius? Sunday, March 24, 2013 at 3:10pm Mark averaged 60 miles per hour during the 30-mile trip to college. Because of heavy traffic he was able to average only 40 miles per hour during the return trip. What was Mark's average speed for the round trip? Sunday, March 24, 2013 at 2:37pm chemistry 2 college % by mass? 10% means 10 g solute in 100 g solution. That's 10 g solute in (10g solute+90 g solvent) 0.1 = (xsolute)/(xsolute + 50g solvent) Solve for x. Sunday, March 24, 2013 at 2:04pm chemistry 2 college explain how to find the mass of solute in the solution: 50.0g of solvent in a 10.0% NaCl solution? Sunday, March 24, 2013 at 12:25pm college algebra the width is 320-x, so a = x(320-x) Sunday, March 24, 2013 at 5:18am college algebra each side is now 10-2x, and the height is x v = x(10-2x)^2 Sunday, March 24, 2013 at 5:17am college algebra let the side of the cut-out square be x inches base is 10-2x by 10-2x, and the height is x Volume = x(10-2x)^2 , where x > 5 , or else the base is negative. Saturday, March 23, 2013 at 8:46pm college algebra Vertex is -b/2a ax^2+by+c=0 or (x2-x1)/(y2-y1) Saturday, March 23, 2013 at 7:50pm college algebra A rectangular box with no top is to be constructed from a 10 in. x 10 in square piece of cardboard by cutting equal square of side x from each corner and then bending up the sides. Write the volume of the box as a function of x. Saturday, March 23, 2013 at 6:00pm college algebra A rectangular box with no top is to be constructed from a 10 in. x 10 in square piece of cardboard by cutting equal square of side x from each corner and then bending up the sides. Write the volume of the box as a function of x. 
Saturday, March 23, 2013 at 5:31pm college algebra A roll of 640 feet of chicken wire is used to enclose a rectangular vegetable garden.Express the area A of the garden in terms of its length (x) Saturday, March 23, 2013 at 5:19pm college algebra A rectangular box with no top is to be constructed from a 10 in. x 10 in square piece of cardboard by cutting equal square of side x from each corner and then bending up the sides. Write the volume of the box as a function of x. Saturday, March 23, 2013 at 4:58pm College??? math 4.5/6 = 8/x Cross multiply and solve for x. 4.5x = 48 x = 10 2/3 cup Saturday, March 23, 2013 at 4:55pm college algebra A roll of 640 feet of chicken wire is used to enclose a rectangular vegetable garden.Express the area A of the garden in terms of its length (x) Saturday, March 23, 2013 at 4:42pm College math Rebekah's recipe for 41/2 dozen cookies call for 6 cups of sugar. How many cups of sugar are needed to make 8 dozen cookies? Saturday, March 23, 2013 at 4:23pm can some1 plz recheck my answers 1.is B. It has a specific goal to accomplish: earning college degree. C is the beginning of a goal: working on research paper, but doesn't say paper will be completed, therefore it's not specific goal. 2.I agree that it is C. 3. is C. Although, the other 3 answers are ... Saturday, March 23, 2013 at 9:48am 2. Last week, Roger suddenly quit his job. He told his family he had decided to learn carpentry. He purchased a truckload of wood and nails, which now sits in his driveway because he changed his mind and decided to enroll in a school for massage therapy. The school confirms ... Friday, March 22, 2013 at 9:21am A solid is known to be either BaCO3 or CaCO3. It is dissolved in 6 M acetic acid. A pale yellow precipitate forms when K2CrO4 solution is added. The subsequent flame test with that precipitate shows an orange-red color. Which cation is present? Briefly explain Friday, March 22, 2013 at 12:28am college stats Z = (score-mean)/SD Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportions/probabilities related to the Z scores. Thursday, March 21, 2013 at 12:55pm college algebra evaluate f(x) at x = -1 f(-1) = -1+4+2-3 = 2 it's not zero, so (x+1) is not a factor of f(x) Thursday, March 21, 2013 at 12:36pm college algebra x^3+4x^2-2x-3 ; x+1 factor theorum to decide whether or not second polynomial is a factor of the first. Thursday, March 21, 2013 at 8:01am college stats iN manoa Valley on the island of Oahu Hawaii the annual rainfall averages 43.6 inches with a standard diviation 7.5 inches. for a given year, find the probability that the annual rainfall will be.. (solve for a,b,and c) show work a)more than 53 inches b)less than 28 inches c)... Thursday, March 21, 2013 at 12:49am College Chemistry Wednesday, March 20, 2013 at 10:02pm College Math substitution is a method, not a problem (though it appears to be a problem for you! :-) taking the first equation, solve for x or y: x = (16+3y)/2 now "substitute" that into the other equation 5 (16+3y)/2 + 2y = 21 80+15y + 4y = 42 19y = -38 y = -2 so, x = (16-6)/2 = ... 
college- find domain and range: Looks good, except that the domain and range should probably be closed intervals (<= rather than <). I guess that depends on how you interpret "between". (Wednesday, March 20, 2013 at 11:46am)

College Math: I need help solving these problems by using the substitution method: 2x - 3y = 16, 5x + 2y = 21. (Wednesday, March 20, 2013 at 11:28am)

college- find domain and range: y = X^2 + X - 500/10, where y is the number of milliliters of fuel consumed per second and X is the speed of the engine in revolutions per minute. If the engine operates between 100 revolutions per minute and 400 revolutions per minute, what is the domain and range? Is this correct... (Wednesday, March 20, 2013 at 11:12am)

college physics: Would it just be 4.20 N?? (Wednesday, March 20, 2013 at 1:32am)

college physics: If the initial tension in the string is (2.10 +/- 0.02) N, what is the tension that would double the wave speed? (Wednesday, March 20, 2013 at 1:28am)

college chemistry: I bet if it didn't have the density, it had the specific gravity. You can use that to determine the density. (Tuesday, March 19, 2013 at 10:02pm)

college chemistry: For my lab I used 2.0 mL of NaOH solution and it doesn't have a density in (g/mL)... is there still a way to calculate the moles of it? Unless there is a density for NaOH. (Tuesday, March 19, 2013 at 9:33pm)

Please help with this question that has been bothering me! I take a bio college class and we have a graduate/student assistant for labs & when taking an exam. The professor is never there during those sections. Due to my low exam results, I feel I'm not being graded on fairly ... (Tuesday, March 19, 2013 at 8:36pm)

college physics: Ep = mg*h = 230*9.8*125 = 281,750 Joules. Ek = Ep = 281,750 Joules.
V^2 = Vo^2 + 2g*h
V^2 = 0 + 19.6*125 = 2450
V = 49.5 m/s. (Tuesday, March 19, 2013 at 12:27pm)

A box in a college bookstore contains books, and each book in the box is a history book, an English book or a science book. If one-third of these books are history books and one-sixth are English books, what fraction of the books are science books? A. 1/3 B. 1/2 C. 2/3 D. 3/4... (Tuesday, March 19, 2013 at 10:24am)

A box in a college bookstore contains books, and each book in the box is a history book, an English book or a science book. If one-third of these books are history books and one-sixth are English books, what fraction of the books are science books? (Tuesday, March 19, 2013 at 10:23am)

college physics: Archimedes's principle states that the upward buoyant force exerted on a body immersed in a fluid is equal to the weight of the fluid the body displaces. In this case, 2.8 cc of water is displaced and its weight is 2.8 g. Hence the net buoyant force = 2.8 gf. Now the density ... (Tuesday, March 19, 2013 at 3:53am)

college algebra: There are two equations there. In the first one, let y = x^2 and solve the quadratic: 9y^2 - 13y + 4 = 0, (y - 1)(9y - 4) = 0, y = 1 or 4/9, so x = +/-sqrt(y) = +/-1 and +/-2/3. In the second one, let y = x^3 and again solve the quadratic. (Tuesday, March 19, 2013 at 3:02am)

college algebra: I am solving the first one ..... the second one you can try yourself. So 9x^4 + 4 = 13x^2 can be written x^2*(9x^2 - 13) = -4. As x^2 is always +ve, for the product to be -ve we need (9x^2 - 13) < 0. We see that -(sqrt(13)/3) < x < (sqrt(13)/3), which comes to -1.2 < x < 1.2. Next we will see if there is some... (Tuesday, March 19, 2013 at 2:54am)

college algebra: Solve. What is the easiest way to solve this equation? 9x^4 + 4 = 13x^2 and x^6 - 19x^3 = 216. Thanks in advance.. (Tuesday, March 19, 2013 at 12:08am)
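A quick numerical check of the substitution solution and of the two "quadratic in disguise" equations above; the factoring of the second equation is my own filling-in (the thread only hints at the y = x^3 substitution), so treat it as a sketch rather than the thread's answer:

# Substitution system: 2x - 3y = 16, 5x + 2y = 21
y = -2
x = (16 + 3 * y) / 2
print(x, y, 2 * x - 3 * y, 5 * x + 2 * y)    # 5.0 -2 16.0 21.0

# 9x^4 + 4 = 13x^2 : substitute u = x^2, factor (u - 1)(9u - 4) = 0
for x in (1, -1, 2 / 3, -2 / 3):
    print(x, 9 * x**4 + 4 - 13 * x**2)       # all approximately 0

# x^6 - 19x^3 = 216 : substitute u = x^3, (u - 27)(u + 8) = 0, so x = 3 or x = -2
for x in (3, -2):
    print(x, x**6 - 19 * x**3 - 216)         # both 0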
college physics: fn = (n/2L)v. It is a direct relationship: if v increases by a factor of 100, fn increases by the same factor. (Monday, March 18, 2013 at 11:06pm)

college physics: For the resonance frequencies, fn = nv/2L. If the speed (v) doubles, will my fn also double? Why? (Monday, March 18, 2013 at 10:03pm)
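To make the proportionality concrete with illustrative numbers of my own choosing: for a string of length L = 0.5 m, the fundamental (n = 1) is f1 = v/(2L), so v = 100 m/s gives f1 = 100 Hz and v = 200 m/s gives f1 = 200 Hz; doubling v doubles every fn because v enters fn = nv/(2L) to the first power.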
Silvercreek, CO Math Tutor

Find a Silvercreek, CO Math Tutor

...I love working with students to discover where challenges lie and finding fun and engaging ways to help them learn! I have taught all types of elementary school and middle school science for 19 years in the public school classroom and as a tutor. I am happy to help your student with any science...
14 Subjects: including algebra 1, algebra 2, vocabulary, grammar

...I am organized, prompt, and hate wasting someone's time and money. I also embrace technology to make learning more effective and efficient. Send me a note and we'll get started! I use the flipped model here, where I will assign a short youtube video on the topic we are working on.
41 Subjects: including calculus, ACT Math, precalculus, probability

...I have a 36 hour cancellation policy. I worked for three years for an executive search firm. I spent countless hours reviewing resumes, reading cover letters, and interviewing job candidates. This line of work gave me a lot of insight into how people present themselves to prospective employers as well as what signals employers are looking for.
30 Subjects: including probability, geometry, precalculus, reading

...I have been a professional software engineer since 1978. I am fluent in several computer languages, including Java, Octave, Groovy and Python. I have worked in small startups and large corporations, including Apple Computer.
17 Subjects: including statistics, probability, algebra 1, algebra 2

...In addition, I instruct students in the creation of authentic tools to promote a generalized understanding of English letter-sound correspondences. I coach students in organizational techniques, note taking, and resource utilization. In addition, I provide instruction in the fundamental concepts of the target subject.
53 Subjects: including precalculus, trigonometry, ACT Math, SAT math
I need to verify part of the proof involving Green's function.

September 12th 2010, 03:58 PM #1 Junior Member (Nov 2009)

This is actually a subset of proving $G(\vec{x},\vec{x_0}) = G(\vec{x_0},\vec{x})$, where G is the Green's function. I don't want to present the whole thing, just the part I have a question about.

Let D be an open solid region with surface S. Let $P = G(\vec{x},\vec{a})$ and $Q = G(\vec{x},\vec{b})$, where both are Green's functions at points a and b respectively, inside D. This means Q is defined at point a (harmonic at point a) and P is defined at point b. Both P and Q are defined in D except at a and b respectively. Both equal zero on the surface S.

The Green's function is defined by $G(\vec{x},\vec{x_0}) = v + H$, where $v = \frac{-1}{4\pi|\vec{x}-\vec{x_0}|}$ and $H$ is a harmonic function in D and on S, chosen so that $G(\vec{x},\vec{x_0}) = 0$ on S.

In this proof, I need to make two spherical cutouts, each with radius $\epsilon$, centered at a and b. I call the spherical regions of these two spheres A and B and their surfaces $S_a$ and $S_b$ respectively. Then I let $D_{\epsilon} = D - A - B$, so both P and Q are defined and harmonic in $D_{\epsilon}$.

Now comes the step I need to verify. I want to prove:

$\lim_{\epsilon\rightarrow 0} \int\int_{S_a} \left( P\frac{\partial Q}{\partial n} - Q\frac{\partial P}{\partial n} \right) dS = \lim_{\epsilon\rightarrow 0} \int\int_{S_a} Q\,\frac{1}{4\pi\epsilon^2}\, dS$

This is my work:

$\lim_{\epsilon\rightarrow 0} \int\int_{S_a} \left( P\frac{\partial Q}{\partial n} - Q\frac{\partial P}{\partial n} \right) dS = \lim_{\epsilon\rightarrow 0} \int\int_{S_a} \left[ \left(-\frac{1}{4\pi r} + H\right) \frac{\partial Q}{\partial n} - Q\frac{\partial }{\partial n}\left(-\frac{1}{4\pi r} + H\right) \right] dS$   (1)

since, in the spherical region A, $v = \frac{-1}{4\pi |\vec{x}-\vec{a}|} = \frac{-1}{4\pi r}$, so $P = v + H = -\frac{1}{4\pi r} + H$.

From (1) I break the integral into three parts:

$\lim_{\epsilon\rightarrow 0} \left[ \int\int_{S_a} -\frac{1}{4\pi r}\frac{\partial Q}{\partial n}\, dS + \int\int_{S_a} \left(H\frac{\partial Q }{\partial n} - Q\frac{\partial H}{\partial n}\right) dS + \int\int_{S_a} Q \frac{\partial}{\partial n}\left(-\frac{1}{4\pi r}\right) dS \right]$

For the first part,

$\lim_{\epsilon\rightarrow 0} \int\int_{S_a} -\frac{1}{4\pi r}\frac{\partial Q}{\partial n}\, dS = \lim_{\epsilon\rightarrow 0} \left( -\frac{1}{4\pi \epsilon} \int\int_{S_a} \frac{\partial Q}{\partial n}\, dS \right) = 0$,

because Q is harmonic and $\int\int_{S_a} \frac{\partial Q}{\partial n}\, dS = 0$.

From the second identity,

$\int\int_{S_a} \left(H\frac{\partial Q }{\partial n} - Q\frac{\partial H}{\partial n}\right) dS = \int\int\int_A \left(H\nabla^2 Q - Q\nabla^2 H\right) dV = 0$,

because both H and Q are harmonic in A and on the surface $S_a$.

Therefore

$\lim_{\epsilon\rightarrow 0} \int\int_{S_a} \left( P\frac{\partial Q}{\partial n} - Q\frac{\partial P}{\partial n} \right) dS = \lim_{\epsilon\rightarrow 0}\int\int_{S_a} Q \frac{\partial}{\partial n}\left(-\frac{1}{4\pi r}\right) dS = \frac{1}{4\pi \epsilon^2} \int\int_{S_a} Q\, dS$

The proof in Strauss's book is very funky, to put it politely. This is the way I prove it; please bear with the long explanation and tell me whether I am correct or not.
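For what it is worth, the step that usually finishes this part of the argument (assuming the intended right-hand side really is Q rather than v, and using only the continuity of Q at $\vec{a}$, which holds because Q is harmonic away from $\vec{b}$) is the averaging limit

$\lim_{\epsilon\rightarrow 0} \frac{1}{4\pi\epsilon^2}\int\int_{S_a} Q\, dS = Q(\vec{a})$,

since $4\pi\epsilon^2$ is exactly the area of $S_a$, so the quantity on the left is the average of Q over a sphere shrinking to $\vec{a}$.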
Find Hypotenuse when given two angles and side

September 29th 2012, 12:32 PM #1 (Aug 2012, United States)

Hello. I was hoping someone would be able to help with this problem I have. I have a right triangle. I need to find the hypotenuse x. I am given the opposite side of 20 and the θ of 32 degrees. How do I find the hypotenuse with all this? If you would be able to explain how you got the answer, I would really appreciate it. I know this work isn't going anywhere anytime soon so I would really like to learn how to do this for future reference. Thank you so much guys.

September 29th 2012, 03:48 PM #2 Super Member (May 2006, Lexington, MA (USA))

Re: Find Hypotenuse when given two angles and side

Hello, DreadfulGlory!

In a right triangle, an acute angle is $32^o$ and the opposite side has length $20.$ Find the length of the hypotenuse.

[Sketch omitted: a right triangle with the side opposite the 32° angle labeled 20 and the hypotenuse labeled x.]

We know that: $\sin\theta = \frac{opp}{hyp}$

So we have: $\sin32^o = \frac{20}{x}$

Hence: $x = \frac{20}{\sin32^o} = 37.7415983$

Therefore: $x \approx 37.7$

September 29th 2012, 08:23 PM #3 (Aug 2012, United States)

Re: Find Hypotenuse when given two angles and side

Thank you so much. Great answer.
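A one-line numerical check of the answer in post #2, in Python:

from math import sin, radians
print(20 / sin(radians(32)))   # 37.7415...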
From Free Software Directory

Description of concept "Mathematics" (RDF feed)

<q> <q>[[Mathematics::+]]</q> OR <q>[[Use::mathematics]]</q> </q>

Pages of concept "Mathematics"

Showing 104 pages belonging to that concept. [Alphabetical index of page names omitted; only the A-Z column headings of the listing survived extraction.]

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the page "GNU Free Documentation License". The copyright and license notices on this page only apply to the text on this page. Any software or copyright-licenses or other similar notices described in this text has its own copyright notice and license, which can usually be found in the distribution or license text itself.
Three Buckets Blogging

1. Calculate the same store sales change given the following information: TY Sales = $5,000; LY Sales = $4,900.
2. Calculate average price given the following information (round to the nearest penny): Units sold = 2,000; Sales $ = $5,000.
3. If a product's initial retail price is $2.25, and its unit cost is $1.00, what is the initial margin?
4. If a product's initial margin is 25% and its price is $40, what is the product's cost? (Round to the nearest penny.)
5. A product category had sales of $500,000 and markdowns of $100,000. What was the category's markdown percent?
6. A product category had retail inventory turns of 7.0 and sales of $1,450,000. What was the category's average inventory level in dollars? (Round to the nearest penny.)
7. Calculate GMROI based on the following numbers: Annual profit = $300,000; Average inventory at cost = $75,000.
8. If a product has sales of $365,000 and an average retail inventory of $24,500, what is that product's retail turn?

Note that a printable version is available HERE. Also the answer key can be viewed HERE.
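For readers without the answer key, here is a sketch of the eight answers using the conventional retail formulas; the blog's own key may define one or two of these ratios differently (for example, margin on retail versus margin on cost), so treat these as an assumption-laden check rather than the official answers:

ty, ly = 5000, 4900
print(f"1. same-store sales change: {100 * (ty - ly) / ly:.1f}%")       # about 2.0%
print(f"2. average price: ${5000 / 2000:.2f}")                          # $2.50
price, cost = 2.25, 1.00
print(f"3. initial margin: {100 * (price - cost) / price:.1f}%")        # about 55.6%
print(f"4. cost at a 25% margin on a $40 price: ${40 * (1 - 0.25):.2f}")# $30.00
print(f"5. markdown percent: {100 * 100_000 / 500_000:.0f}%")           # 20%
print(f"6. average inventory: ${1_450_000 / 7.0:,.2f}")                 # $207,142.86
print(f"7. GMROI: {300_000 / 75_000:.1f}")                              # 4.0
print(f"8. retail turn: {365_000 / 24_500:.1f}")                        # about 14.9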
{"url":"http://www.threebuckets.com/category/retail-math-test/","timestamp":"2014-04-21T09:35:43Z","content_type":null,"content_length":"38215","record_id":"<urn:uuid:fe0f6527-ed33-49f5-b7c5-60c41b03c213>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Poolville JH math, science team has strong showing at Azle competition Weatherford Democrat November 17, 2013 Poolville JH math, science team has strong showing at Azle competition Weatherford Democrat — POOLVILLE – The Poolville Junior High School Math and Science Team competed in Azle on Nov. 9. Thirteen students competed against 46 other competitors. A total of eight other schools attended the competition. Most of the PJHS students placed in the top 20 at the competition, with three first-place medals awarded. “Several of our members have joined us just this year,” said Susan Garmon, one of the team’s coaches. “We are proud of how well they did and how well the whole team worked.” The team’s results were: Team – third in Calculator. Eighth grade: Dawson Harris placed 14th in Science; Christopher Tunnell placed 18th in Calculator and ninth in Number Sense. Seventh grade: Madelyn Gilmore placed 19th in Number Sense; Sarah Kelly placed 13th in Number Sense and 13th in Calculator; Logan Spikes placed 13th in Number Sense and eighth in Calculator; and Tyler Tunnell placed sixth in Calculator. Sixth grade: Elijah Batchelor placed 19th in Number Sense. Fifth grade: Emily Booth tied for fourth in Number Sense; 12th in Calculator; 11th in Mathematics and seventh in Science; Megan Burnett tied for fourth in Number Sense and placed fourth in Calculator, third in Mathematics and fourth in Science; Brooklyn Hensley placed eighth in Number Sense, third in Calculator, ninth in Mathematics and third in Science; Evan Lang placed seventh in Number Sense, 10th in Calculator, seventh in Mathematics and fifth in Science; Dalton Sprague placed second in Number Sense, 14th in Calculator, first in Mathematics and sixth in Science; and Wyatt Thomas placed first in Number Sense, first in Calculator, sixth in Mathematics and eighth in Science. The team competed Saturday at Forte Junior High School in Azle.
{"url":"http://www.weatherforddemocrat.com/education/x2136380308/Poolville-JH-math-science-team-has-strong-showing-at-Azle-competition/print","timestamp":"2014-04-18T18:58:30Z","content_type":null,"content_length":"4357","record_id":"<urn:uuid:2138ee80-926c-42f4-a252-f7389fa74761>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of perimeter perimeter (pəˈrɪmɪtə) 1. maths a. the curve or line enclosing a plane area b. the length of this curve or line 2. a. any boundary around something, such as a field b. (as modifier): a perimeter fence; a perimeter patrol 3. a medical instrument for measuring the limits of the field of vision [C16: from French périmètre, from Latin perimetros; see peri-, -meter] perimeter (pə-rĭm'ĭ-tər) Pronunciation Key 1. The sum of the lengths of the segments that form the sides of a polygon. 2. The total length of any closed curve, such as the circumference of a circle. The sum of the lengths of the segments that form the sides of a polygon. The total length of any closed curve, such as the circumference of a circle.
{"url":"http://dictionary.reference.com/browse/perimeter","timestamp":"2014-04-17T10:27:22Z","content_type":null,"content_length":"106444","record_id":"<urn:uuid:72a2ffa8-fd62-44a8-b591-87114df548b2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
C# - How to convert fractional years (say 1.25 years) to number of days C# .NET - C# - How to convert fractional years (say 1.25 years) to number of days Asked By basam nath on 13-Feb-13 02:25 PM I have a table that shows periods like 1 year to 10 years. I want to calculate number of days (approximately 365 days in a year and no need to include leap year) for each period. If it was just years, it is easy to calculate days ( like 2 years = 2*365 days). But how can convert for 1.5 years or 1.75 years into days? what is the efficient way to calculate days if the years are specified in terms of fractional years. Robbe Morris replied to basam nath on 13-Feb-13 02:27 PM Why isn't the answer 1.5 years * 365? Isn't 547.5 the number of days you are trying to get?
{"url":"http://www.nullskull.com/q/10472251/c--how-to-convert-fractional-years-say-125-years-to-number-of-days.aspx","timestamp":"2014-04-19T02:41:17Z","content_type":null,"content_length":"8205","record_id":"<urn:uuid:8d418113-1d4a-4db5-af4c-08f81a65d3fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Monte Carlo This is a simple applet showing how Monte Carlo works in different ensembles for a 2D Lennard-Jones system. If you do see the applet then you can start playing by pressing the button 'Start'. Once you press the button 'Start' you should see on the left side 100 disks moving around and on the right two diagrams (top density versus number of cycles; bottom average energy per disk versus number of cycles). The simulation starts in NVT ensemble (N: fixed number of particles, V: fixed volume (in our case is fixed area); T: fixed temperature), now if you want to change the ensemble then go to bottom right part of the applet and change the ensemble at the list-box. The available ensembles are NVT, NPT (N: fixed number of particles, P: fixed pressure; T: fixed temperature), and μVT (μ: fixed chemical potential, V: fixed volume; T: fixed temperature). Feel free to change the simulation parameters by navigating through the available panels, such as input and graphics.
{"url":"http://personal-pages.ps.ic.ac.uk/~achremos/Applet1-page.htm","timestamp":"2014-04-16T16:22:38Z","content_type":null,"content_length":"81102","record_id":"<urn:uuid:4a23139e-ada7-415c-8f33-c013a48dd759>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Narrow Search High school Higher education Sort by: Per page: Now showing results 1-10 of 76 This is a poster about radiation in space. Learners can read about the Van Allen belts and how NASA's Van Allen Probes are investigating the influence of the Sun's energy on Earth. The activity version also includes math problems, a vocabulary... (View More) matching game, a communication research challenge, and a toolbox of web resources. (View Less) This collection of activities is based on a weekly series of space science mathematics problems distributed during the 2012-2013 school year. They were intended for students looking for additional challenges in the math and physical science... (View More) curriculum in grades 5 through 12. The problems were created to be authentic glimpses of modern science and engineering issues, often involving actual research data. The problems were designed to be one-pagers with a Teacher’s Guide and Answer Key as a second page. (View Less) This book contains 24 illustrated math problem sets based on a weekly series of space science problems. Each set of problems is contained on one page. The problems were created to be authentic glimpses of modern science and engineering issues, often... (View More) involving actual research data. Learners will use mathematics to explore problems that include basic scales and proportions, fractions, scientific notation, algebra, and geometry. (View Less) This book presents 49 space-related math problems published weekly on the SpaceMath@NASA site during the 2011-2012 academic year. The problems utilize information, imagery, and data from various NASA spacecraft missions that span a variety of math... (View More) skills in pre-algebra and algebra. (View Less) This collection of 103 individual sets of math problems derives from images and data generated by NASA remote sensing technology. Whether used as a challenge activity, enrichment activity and/or a formative assessment, the problems allow students to... (View More) engage in authentic applications of math. Each set consists of one page of math problems (one to six problems per page) and an accompanying answer key. Based on complexity, the problem sets are designated for two grade level groups: 6-8 and 9-12. Also included is an introduction to remote sensing, a matrix aligning the problem sets to specific math topics, and four problems for beginners (grades 3-5). (View Less) This is a lesson about statistics in science as it applies to the measurement of dust in space. Learners will be introduced to the concepts of error analysis, including standard deviation. They will apply the knowledge of averages (means), standard... (View More) deviation from the mean, and error analysis to their own distribution of heights and then to the Student Dust Counter (SDC) data to determine the issues associated with taking data including error and noise. (View Less) This is an online set of information about astronomical alignments of ancient structures and buildings. Learners will read background information about the alignments to the Sun in such structures as the Great Pyramid, Chichen Itza, and others.... (View More) Next, the site contains 10 short problem sets that involve a variety of math skills, including determining the scale of a photo, measuring and drawing angles, plotting data on a graph, and creating an equation to match a set of data. 
Each set of problems is contained on one page and all of the sets utilize real-world problems relating to astronomical alignments of ancient structures. Each problem set is flexible and can be used on its own, together with other sets, or together with related lessons and materials selected by the educator. This was originally included as a folder insert for the 2010 Sun-Earth Day. (View Less) This math problem determines the areas of simple and complex planar figures using measurement of mass and proportional constructs. Materials are inexpensive or easily found (poster board, scissors, ruler, sharp pencil, right angle), but also... (View More) requires use of an analytical balance (suggestions are provided for working with less precise weighing tools). This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications. (View This book offers an introduction to the electromagnetic spectrum using examples of data from a variety of NASA missions and satellite technologies. The 84 problem sets included allow students to explore the concepts of waves, wavelength, frequency,... (View More) and speed; the Doppler Shift; light; and the energy carried by photons in various bands of the spectrum. Extensive background information is provided which describes the nature of electromagnetic radiation. (View Less) In this problem set, learners will analyze a table of global electricity consumption to answer a series of questions and consider the production of carbon dioxide associated with that consumption. Answer key is provided. This is part of Earth Math:... (View More) A Brief Mathematical Guide to Earth Science and Climate Change. (View Less) «Previous Page12345 8 Next Page»
{"url":"http://nasawavelength.org/resource-search?educationalLevel%5B%5D=Higher+education%3AUndergraduate%3AMajors%2C+lower+division&educationalLevel%5B%5D=High+school&facetSort=1&resourceType=Instructional+materials%3AProblem+set","timestamp":"2014-04-21T14:05:28Z","content_type":null,"content_length":"82445","record_id":"<urn:uuid:0100aa47-5668-4892-84bb-67590fb85698>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
M.I.T. Junior Lab 8.13/ 8.14 This experiment will let you perform a series of simple quantum computations on a two spin system, demonstrating one and two quantum-bit quantum logic gates, and a circuit implementing the Deutsch-Jozsa quantum algorithm. You will use NMR techniques and manipulate the state of a proton and a carbon nucleus in a chloroform molecule, measuring ensemble nuclear magnetization. WARNING: you should know Matlab well to successfully do this experiment! You will measure: the coupling constant describing the electron-mediated interaction between the proton and carbon nuclear spins of chloroform, the classical input-output truth table for a controlled-not gate the numerical output of the Deutsch-Jozsa quantum algorithm, and optionally, the output and oscillatory behavior of the Grover quantum search algorithm. Student Wiki: Quantum Information processing with NMR Download Lab Guide in PDF format (certificates required) 1. [1961] C. Landauer, "Irreversibility and Heat Generation inthe Computing Process"IBM J. Res. Dev. 5, 183 (1961) 2. [1973] C.H. Bennett, "Logical Reversibility of Computation", IBM J. Res. Dev. 17, 525 (1973) 3. [1980] P. Benioff, The Computer as a Physical System: A Microscopic Quantum Mechanical Hamiltonian Model of Comupters as Represented by Turing Machines; Journal of Statistical Physics, Vol. 22, No. 5, (1980) 4. [1982] R. P. Feynman, Simulating Physics with Computers; Int. J. Theor. Phys. 21, 467 (1982) 5. [1982] E. Fredkin and T. Toffoli, Conservative Logic; Int. J. Theor. Phys. 21, 219 (1982) 6. [1985] R. P. Feynman,Quantum Mechanical Computers;, Optics News, p. 11 (1985) 7. [1985] David Deutsch, "Quantum theory, the Church-Turing principle and the universal quantum computer", Proc. Royal Soc. London A400, p97, 1985. 8. [1989] David Deutsch, "Quantum computational networks", Proc. Royal Soc. London A425, p73, 1989. 9. [1990] H. Leff and R. Rex,"Maxwell's Demon: Entropy, Information, Computing" Princeton University Press, (1990) 10. [1992] David Deutsch and Richard Jozsa, "Rapid solution of problems by quantum computation", Proc. Royal Soc. London A439, p553, 1992
{"url":"http://web.mit.edu/8.13/www/49.shtml","timestamp":"2014-04-20T03:14:33Z","content_type":null,"content_length":"38860","record_id":"<urn:uuid:26a6fafa-8205-4d7a-a1b1-f8710e007418>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00106-ip-10-147-4-33.ec2.internal.warc.gz"}