content
stringlengths
86
994k
meta
stringlengths
288
619
Physics Tutors West Bloomfield, MI 48322 Master Certified Coach for Exam Prep, Mathematics, & Physics ...tudents and Parents, It is a pleasure to make your acquaintance, and I am elated that you have found interest in my profile. I am the recipient of a Master of Science: and Master of Arts: Applied Mathematics with a focus in computation and theory. I am... Offering 10+ subjects including physics
{"url":"http://www.wyzant.com/Novi_Physics_tutors.aspx","timestamp":"2014-04-23T23:57:33Z","content_type":null,"content_length":"60986","record_id":"<urn:uuid:7cb43b5a-4cce-427b-8198-f487bc6c42ea>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Iteration of order preserving subhomogeneous maps on a cone The Library Iteration of order preserving subhomogeneous maps on a cone Akian, Marianne, Gaubert, S. (Stephane), Lemmens, Bas and Nussbaum, Roger D., 1944-. (2006) Iteration of order preserving subhomogeneous maps on a cone. Mathematical Proceedings of the Cambridge Philosophical Society, Vol.14 (No.1). pp. 157-176. ISSN 0305-0041 WRAP_Lemmens_Iteration_subhomogeneous.pdf - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader Download (243Kb) We investigate the iterative behaviour of continuous order preserving subhomogeneous maps $f: K\,{\rightarrow}\, K$, where $K$ is a polyhedral cone in a finite dimensional vector space. We show that each bounded orbit of $f$ converges to a periodic orbit and, moreover, the period of each periodic point of $f$ is bounded by \[ \beta_N = \max_{q+r+s=N}\frac{N!}{q!r!s!}= \frac{N!}{\big\lfloor\frac {N}{3}\big\rfloor!\big\lfloor\frac{N\,{+}\,1}{3}\big\rfloor! \big\lfloor\frac{N\,{+}\,2}{3}\big\rfloor!}\sim \frac{3^{N+1}\sqrt{3}}{2\pi N}, \] where $N$ is the number of facets of the polyhedral cone. By constructing examples on the standard positive cone in $\mathbb{R}^n$, we show that the upper bound is asymptotically sharp. Item Type: Journal Article Subjects: Q Science > QA Mathematics Divisions: Faculty of Science > Mathematics Library of Congress Subject Iterative methods (Mathematics), Homogeneous spaces, Cone, Polyhedra--Mathematical models, Geometry, Solid, Geometry, Descriptive Headings (LCSH): Journal or Publication Title: Mathematical Proceedings of the Cambridge Philosophical Society Publisher: Cambridge University Press ISSN: 0305-0041 Official Date: January 2006 Volume: Vol.14 Number: No.1 Page Range: pp. 157-176 Identification Number: 10.1017/S0305004105008832 Status: Peer Reviewed Access rights to Published Open Access Funder: Nederlandse Organisatie voor Wetenschappelijk Onderzoek [Netherlands Organisation for Scientific Research] (NWO) [1] M. Akian and S. Gaubert. Spectral theorem for convex monotone homogeneous maps, and ergodic control. Nonlinear Anal. 52(2) (2003), 637–679. [2] T. Apostol. Introduction to Analytic Number Theory. Undergraduate Texts in Mathematics (Springer-Verlag, 1976). [3] H. Bauer and H. S. Bear. The part metric in convex sets. Pacific J. Math. 30(1) (1969), 15–33. [4] F. Baccelli, G. Cohen, G. J. Olsder and J. P. Quadrat. Synchronization and Linearity: An Algebra for Discrete Event Systems (Wiley, 1992). [5] A. Blokhuis and H. A. Wilbrink. Alternative proof of Sine's theorem on the size of a regular polygon in $\mathbb{R}^k$ with the U2113U221E-metric. Discrete Comput. Geom. 7(4) (1992), 433–434. [6] B. Bollobás. Combinatorics (Cambridge University Press, 1989). [7] A. D. Burbanks, R. D. Nussbaum and C. T. Sparrow. Extensions of order-preserving maps on a cone. Proc. Roy. Soc. Edinburgh Sect. A 133(1) (2003), 35–59. [8] P. Bushell. Hilbert's metric and positive contraction mappings in a Banach space. Arch. Rat. Mech. Anal. 52 (1973), 330–338. [9] M. G. Crandall and L. Tartar. Some relations between nonexpansive and order preserving mappings. Proc. Amer. Math. Soc. 78(3) (1980), 385–390. [10] H. Freudenthal and W. Hurewicz. Dehnungen, Verkürzungen und Isometrien. Fundamenta Math. 26 (1936), 120–122. [11] S. Gaubert and J. Gunawardena. The Perron–Frobenius theory for homogeneous, monotone functions. Trans. Amer. Math. Soc. 356(12) (2004), 4931–4950. [12] J. Gunawardena (ed.). Idempotency. Publ. Newton Inst., 11 (Cambridge University Press, 1998). [13] J. Gunawardena. From max-plus algebra to nonexpansive mappings: a nonlinear theory for discrete event systems. Theoret. Comput. Sci. 293(1) (2003), 141–167. [14] J. Gunawardena and M. S. Keane. On the existence of cycle times for some nonexpansive maps. Technical Report HPL-BRIMS-95-003, Hewlett-Packard Labs (1995). [15] M. W. Hirsch. Positive equilibria and convergence in subhomogeneous monotone dynamics. In Comparison Methods and Stability Theory (X. Liu and D. Siegel ed.), pp. 169–188, Lecture Notes in Pure and Appl. Math. 162 (Dekker, 1994). [16] J. F. Jiang. Sublinear discrete-time order-preserving dynamical systems. Math. Proc. Camb. Phil. Soc. 119(3) (1996), 561–574. [17] V. N. Kolokoltsov and V. P. Maslov. Idempotent Analysis and Applications (Kluwer Academic Press, 1997). References: [18] U. Krause and R. D. Nussbaum. A limit set trichotomy for self-mappings of normal cones in Banach spaces. Nonlinear Anal. 20(7) (1993), 855–870. [19] U. Krause and P. Ranft. A limit set trichotomy for monotone nonlinear dynamical systems. Nonlinear Anal. 19(4) (1992), 375–392. [20] B. Lemmens and M. Scheutzow. On the dynamics of sup-norm nonexpansive maps. Ergodic Theory Dynam. Systems 25 (3) (2005), 861–871. [21] R. Lyons and R. D. Nussbaum. On transitive and commutative finite groups of isometries. In Fixed Point Theory and Applications (K.-K. Tan ed.), pp. 189–228 (World Scientific, 1992). [22] P. Martus. Asymptotic properties of nonstationary operator sequences in the nonlinear case. PhD Thesis (Friedrich-Alexander Univ., Germany, 1989). [23] V. P. Maslov and S. N. SamborskiU0131 (ed.). Idempotent Analysis. Advances in Soviet Mathematics, 13 (American Mathematical Society, 1992). [24] A. Neyman and S. Sorin (ed.). Stochastic Games and Applications. Proceedings of the Nato Advanced Study Institute held in Stony Brook, NY, July 7–17, 1999. NATO Science Series C: Mathematical and Physical Sciences, 570 (Kluwer Academic Publishers, 2003). [25] R. D. Nussbaum. Hilbert's projective metric and iterated nonlinear maps. Mem. Amer. Math. Soc. 75(391) (1988), 1–137. [26] R. D. Nussbaum. Iterated nonlinear maps and Hilbert's projective metric II. Mem. Amer. Math. Soc. 79(401) (1989), 1–118. [27] R. D. Nussbaum. Omega limit sets of nonexpansive maps: finiteness and cardinality estimates. Differential Integral Equations 3(3) (1990), 523–540. [28] R. D. Nussbaum. Convergence of iterates of a nonlinear operator arising in statistical mechanics. Nonlinearity 4(4) (1991), 1223–1240. [29] D. Rosenberg and S. Sorin. An operator approach to zero-sum repeated games. Israel J. Math. 121 (2001), 221–246. [30] A. Schrijver. Theory of Linear and Integer Programming (John Wiley, 1986). [31] R. Sine. A nonlinear Perron–Frobenius theorem. Proc. Amer. Math. Soc. 109(2) (1990), 331–336. [32] P. Taká\c. Asymptotic behavior of discrete-time semigroups of sublinear, strongly increasing mappings with applications to biology. Nonlinear Anal. 14(1) (1990), [33] P. Taká\c. Convergence in the part metric for discrete dynamical systems in ordered topological cones. Nonlinear Anal. 26(11) (1996), 1753–1777. [34] A. C. Thompson. On certain contraction mappings in a partially ordered vector space. Proc. Amer. Math. Soc. 14 (1963), 438–443. [35] D. Weller. Hilbert's metric, part metric and self mappings of a cone. PhD Thesis (Universität Bremen, Germany, 1987). URI: http://wrap.warwick.ac.uk/id/eprint/711 Actions (login required)
{"url":"http://wrap.warwick.ac.uk/711/","timestamp":"2014-04-20T13:37:19Z","content_type":null,"content_length":"49697","record_id":"<urn:uuid:16035381-9d35-433d-ad41-27c221745f6f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Synthesis and Characterization of Lithium-Substituted Cu-Mn Ferrite Nanoparticles Indian Journal of Materials Science Volume 2013 (2013), Article ID 910762, 7 pages Research Article Synthesis and Characterization of Lithium-Substituted Cu-Mn Ferrite Nanoparticles ^1Department of Physics, Manarat Dhaka International College, Dhaka 1212, Bangladesh ^2Department of Arts and Sciences, Ahsanullah University of Science and Technology, Dhaka 1208, Bangladesh Received 5 July 2013; Accepted 29 August 2013 Academic Editors: H. Leiste and D. L. Sales Copyright © 2013 M. A. Mohshin Quraishi and M. H. R. Khan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The effect of Li substitution on the structural and magnetic properties of Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] (x = 0.00, 0.10, 0.20, 0.30, 0.40, and 0.44) ferrite nanoparticles prepared by combustion technique has been investigated. Structural and surface morphology have been studied by X-ray diffractometer (XRD) and high-resolution optical microscope, respectively. The observed particle size of various Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] is found to be in the range of 9nm to 30nm. XRD result confirms single-phase spinel structure for each composition. The lattice constant increases with increasing Li content. The bulk density shows a decreasing trend with Li substitution. The real part of initial permeability () and the grain size (D) increase with increasing Li content. It has been observed that the higher the is, the lower the resonance frequency in Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites is. 1. Introduction Ferrite nanoparticles have attracted a growing interest due to their potential applications such as magnetic recording [1], storage [2], and biotechnology [3]. In the most recent years, the interest in the use of nanoparticles in biomedical applications has greatly increased [4, 5]. The size and composition of nanoparticles influence the bio-application of the magnetic nanoparticles [6]. It is well known that the physical and chemical properties of the nanosized magnetic materials are quite different from those of the bulk ones due to their surface effect and quantum confinement effects. These nanoparticles can be obtained through precipitation of metallic salts in different media as polymers [7], organic acid or alcohol [8], sugars [9], and so forth. In particular, sol-gel, autocombustion, thermal decomposition, hydrothermal, ball milling, reverse micelle synthesis, solid-phase reaction, thermally activated solid state reaction, and pulsed laser deposition have been developed to prepare the single-domain MnFe[2]O[4 ]nanoparticles [10–23]. Manganese ferrite (MnFe[2]O[4]) nanoparticles have become very popular due to their wide range of magnetic applications, such as recording devices, drug delivery, ferrofluid, biosensors, and catalysis [10, 24–27]. Recently, Deraz and Alarifi [28] have studied structural and magnetic properties of MnFe[2]O[4] nanoparticles by combustion route. Till now, no other report has been found in the literature for Li-doped Cu-Mn ferrite. Lithium ferrites are low-cost materials which are attractive for microwave device applications. Hence, there has been a growing interest in Li-substituted Cu-Mn ferrite for microwave applications and high permeability with low magnetic loss. Therefore, this paper is devoted to study the effect of Li^+ substitution on the physical and magnetic properties of Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites prepared by combustion technique. 2. Experimental 2.1. Sample Preparation and Characterization The Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites were prepared by autocombustion technique. The analytical grade of Li(NO[3])[2,] MnCl[2]·4H[2]O, Cu(NO[3])[2]·3H[2]O, and Fe(NO[3])[3]·9H[2]O was taken as raw material and weighted according to the stoichiometric amount and then dissolved in ethanol. The mixture was placed in a magnetic heating stirrer at 80°C, followed by an ignition, the combustion takes place within a few seconds, and fine nanosized powders were precipitated. These powders were crushed and ground thoroughly. The fine powders of the composition were then calcined at 900°C for 5h for the final formation of Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites nanoparticles. Then, the fine powders were granulated using polyvinyl alcohol (PVA) as a binder and pressed uniaxially into disk-shaped (about 13mm outer diameter, 1.5mm–2.0mm thickness) and toroid-shaped (about 13mm outer diameter, about 6.5mm inner diameter and 2mm thickness) samples. The samples prepared from each composition were sintered at 1200°C for 1 hour in air. The temperature ranges for sintering was maintained at 5°C/min for heating and 10°C/min for cooling. All sintered samples were polished and thermal etching was performed. X-ray diffraction was carried out with an X-ray diffractometer (Model: D8 Advance, Bruker AXS) for each sample. For this purpose, monochromatic Cu- radiation was used. The lattice parameter for each peak of each sample was calculated by using the formula where , , and are the indices of the crystal planes. To determine the exact lattice parameter for each sample, Nelson-Riley method was used. The Nelson-Riley function is given as The values of lattice constant “” of all the peaks for a sample are plotted against . Then, using a least-square fit method exact lattice parameter “” was determined. The point where the least-square fit straight line cuts the -axis (i.e., at or ) is the actual lattice parameter of the sample. The physical or bulk densities of the samples were determined by Archimedes principle with water medium using the following expression: where is the weight of the sample in air, is the weight of the sample in the water, and is the density of water in room temperature. The theoretical density was calculated using the following expression: where is Avogadro's number (6.02 × 10^23mol^−1) and is the molecular weight. The optical micrographs for various Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites have been taken by using high-resolution optical microscope (Model: NMM-800TRF). Average grain sizes of all samples were determined from optical micrographs by linear intercept technique [29]. The frequency-dependent initial permeability for each sample was measured by using a Wayne Kerr Impedance Analyzer (Model: 6500B). The complex permeability measurement on toroid-shaped samples was carried out at room temperature in frequency range 10KHz–100MHz. Both the and of the complex permeability were calculated using the following relations: where is the self-inductance of the sample core and is derived geometrically. Here, is the inductance of the winding coil without the sample core, is the number of turns of the coil , and is the area of cross-section of the toroidal sample as follows: where , = inner diameter, = outer diameter, = Height and is the mean diameter of the toroidal sample as follows: The Loss factor, , was determined from the ratio = /. 3. Results and Discussion 3.1. X-Ray Diffraction Analysis The XRD analysis was performed to verify the formation of spinel structure of various Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites, in which Mn^2+ is replaced with Li^+ and Fe^3+. The XRD patterns of these Li^+-substituted Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] (with = 0.00, 0.10, 0.20, 0.30, 0.40, and 0.44) ferrites sintered at 1200°C in air for 1h are shown in Figure 1. The patterns indicated that these materials have a well-defined single crystalline phase and formation of cubic spinel structure for each composition. Analyzing the XRD patterns, it is observed that the positions of the peaks comply with the reported value [30] and some traces of raw materials were found for = 0.00, = 0.10 and = 0.20 and = 0.30). 3.2. Lattice Constant The values of lattice constant obtained from each plane are plotted against Nelson-Riley function [31]. The values of lattice constant were estimated from the extrapolation of these lines to or °. It is noticed from Figure 2 that increases with the increase of Li^+ content in Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] (with = 0.00, 0.10, 0.20, 0.30, 0.40, and 0.44) ferrites. Values of for various Li[x] Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites are presented in Table 2. The increase in with Li content indicates that the present system obeys Vegard’s law [32]. This increase of can be attributed to the ionic size. The ionic radius of Li^+ (0.76Å) is greater than that of Mn^2+ (0.67Å) [29, 33]. When the larger Li^+and Fe^3+ ions enter the lattice, the unit cell expands while preserving the overall cubic symmetry. 3.3. Average Particle Size The average particle size was estimated by using Debye-Scherrer [34] formula from the broadening of the highest intensity peaks (311) of XRD patterns: where is the average particle size, is the wavelength of the radiation used as the primary beam of Cu-K[α] (Å), is the angle of the incident beam in degree, and β is the full width at half maximum (FWHM) of the fundamental reflection (311) in radian of the FCC ferrites phase. Debye-Scherer formula assumes approximation and gives the average particle size if the grain size distribution is narrow and strain-induced effects are quite Figure 3 shows the XRD patterns of Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites sintered at 1200°C for 1h, where (311) peak is shown in expanded form to understand the variation of FWHM of the Bragg peaks with the Li content. From Figure 3, it is seen that the value of FWHM decreases with the increase of lithium content. The particle size of the sample is inversely proportional to FWHM according to Debye-Scherrer formula. The observed particle size is in the range from 9 to 30nm which has been listed in Table 1. 3.4. Theoretical and Bulk Density The values of and for the various Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites (with = 0.00, 0.10, 0.20, 0.30, 0.40, and 0.44) are tabulated in Table 2. It is noticed from Figure 4 that both and decrease with the increase of Li substitution in Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites for constant sintering temperature. This phenomenon could be explained in terms of the atomic weight.The atomic weight of Mn (54.94amu) is greater than that of combined atomic weight of the Li (6.941amu) and Fe (55.845 amu) [33]. 3.5. Microstructure The optical micrographs of Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites (where = 0.00, = 0.10, = 0.20, = 0.30, = 0.40, and = 0.44) are shown in Figure 5 sintered at 1200°C. The grain size is significantly dependent on Li substitution. The increases with increasing Li substitution for fixed sintering temperature which is shown in Figure 5. This is probably due to the lower melting temperature of Li (180°C) compared to Mn (1245°C). The values of for various Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites are presented in Table 2. 3.6. Complex Initial Permeability The compositional variations of complex initial permeability spectra for the various Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] samples sintered at 1200°C are shown in Figure 6. It is observed that the remains fairly constant in the frequency range up to some critical frequency which is called resonance frequency, . A sharp decrease in and increase in are observed above the . The increases with the increase of Li^+ content for various Li[x]Cu[0.12] Mn[0.88−2x]Fe[2+x]O[4]. On the other hand, was found to decrease with Li substitution. It was observed that of Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites sintered at 1200°C increases from 18 to 55. Figure 7 shows both and as a function of Li content for various Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites. According to Globus and Duplex model [35], the can be explained as , where is the saturation magnetization and is the magnetocrystalline anisotropy constant. This increase in permeability is expected, because grain size of all samples increases with Li content. It is known that the mobility of domain walls is greatly affected by the microstructure of ferrites. Therefore, in the present case, variation of the initial permeability may be influenced by its grain size. The variation of loss factor, with frequency for all samples, has been studied. The variation of initial loss with frequency for the various Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] samples sintered at 1200°C is shown in Figure 8. At lower frequencies magnetic loss is observed and remains constant up to, a certain frequency, 9MHz; this frequency limit depends upon the sintering temperatures. The lag of domain wall motion with respect to the applied magnetic field is responsible for magnetic loss and this is accredited to lattice imperfections [36]. At higher frequencies, a rapid increase in loss factor is observed. A resonance loss peak is shown in this rapid increase of magnetic loss. At the resonance, maximum energy transfer occurs from the applied field to the lattice which results in the rapid increases in loss factor. 4. Conclusion The Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ( = 0.00 to = 0.44) nanoparticles have been successfully synthesized by the combustion technique. The observed particle size is in the range from 9nm to 30nm. The XRD patterns confirm that the compositions are single phase and form cubic spinel structure. The lattice parameter increases linearly with increasing Li content and obeys Vegard’s law. The study of microstructure shows that grain size increases with increasing Li content. The bulk density decreases with increasing Li substitution in Li[x]Cu[0.12]Mn[0.88−2x]Fe[2+x]O[4] ferrites. The real part of initial permeability increases with increase of Li content for a fixed sintering temperature. This result may be explained with the help of average grain size. The highest was found 55 for = 0.44 which is three times greater than that of parent composition. It was also observed that the resonance frequency, , and real part of initial permeability, , are inversely proportional which confirms Snoek’s relation, = constant. The authors are grateful to the BUET authority for providing financial support for this research. The authors are also thankful to the authority of BCSIR for using their equipment. 1. M. Suda, M. Nakagawa, T. Iyoda, and Y. Einaga, “Reversible photoswitching of ferromagnetic FePt nanoparticles at room temperature,” Journal of the American Chemical Society, vol. 129, no. 17, pp. 5538–5543, 2007. View at Publisher · View at Google Scholar · View at Scopus 2. B. O. Regan and M. Gratzel, “A low-cost, high-efficiency solar cell based on dye-sensitized colloidal TiO[2] films,” Nature, vol. 353, pp. 737–740, 1991. View at Publisher · View at Google 3. M. Shinkai, “Functional magnetic particles for medical application,” Journal of Bioscience and Bioengineering, vol. 94, no. 6, pp. 606–613, 2002. 4. C. C. Berry and A. S. G. Curtis, “Functionalisation of magnetic nanoparticles for applications in biomedicine,” Journal of Physics D, vol. 36, no. 13, article R198, 2003. View at Publisher · View at Google Scholar 5. S. Mornet, S. Vasseur, F. Grasset, and E. Duguet, “Magnetic nanoparticle design for medical diagnosis and therapy,” Journal of Materials Chemistry, vol. 14, no. 14, pp. 2161–2175, 2004. View at Publisher · View at Google Scholar · View at Scopus 6. C. Corot, P. Robert, J. M. Ideé, and M. Port, “Recent advances in iron oxide nanocrystal technology for medical imaging,” Advanced Drug Delivery Reviews, vol. 58, no. 14, pp. 1471–1504, 2006. 7. J.-F. Berret, N. Schonbeck, F. Gazeau et al., “Controlled clustering of superparamagnetic nanoparticles using block copolymers: design of new contrast agents for magnetic resonance imaging,” Journal of the American Chemical Society, vol. 128, no. 5, pp. 1755–1761, 2006. View at Publisher · View at Google Scholar · View at Scopus 8. C. Sun, R. Size, and M. Zhang, “Folic acid-PEG conjugated superparamagnetic nanoparticles for targeted cellular uptake and detection by MRI,” Journal of Biomedical Materials Research A, vol. 78, no. 3, pp. 550–557, 2006. 9. R. Y. Hong, B. Feng, L. L. Chen, G. H. Li, Y. Zeng, and D. G. Wei, “Synthesis, characterization and MRI application of dextran-coated Fe[3]O[4] magnetic nanoparticles,” Biochemical Engineering Journal, vol. 42, no. 3, pp. 290–300, 2008. 10. N. M. Deraz and S. Shaban, “Optimization of catalytic, surface and magnetic properties of nanocrystalline manganese ferrite,” Journal of Analytical and Applied Pyrolysis, vol. 86, pp. 173–179, 11. M. A. Ahmed, N. Okasha, and M. M. El-Sayed, “Enhancement of the physical properties of rare-earth-substituted Mn-Zn ferrites prepared by flash method,” Ceramics International, vol. 33, no. 1, pp. 49–58, 2007. View at Publisher · View at Google Scholar · View at Scopus 12. Q. M. Wei, J.-B. Li, Y.-J. Chen, and Y.-S. Han, “X-ray study of cation distribution in NiMn[1−x]Fe[2−x]O[4] ferrites,” Materials Characterization, vol. 47, no. 3-4, pp. 247–252, 2001. 13. M. H. Mahmoud, H. H. Hamdeh, J. C. Ho, M. J. O'Shea, and J. C. Walker, “Moessbauer studies of manganese ferrite fine particles processed by ball-milling,” Journal of Magnetism and Magnetic Materials, vol. 220, no. 2, pp. 139–146, 2000. View at Publisher · View at Google Scholar · View at Scopus 14. M. Muroi, R. Street, P. G. McCormick, and J. Amighian, “Magnetic properties of ultrafine MnFe[2]O[4] powders prepared by mechanochemical processing,” Physical Review B, vol. 63, no. 18, Article ID 184414, 2001. View at Scopus 15. C. Li and Z. J. Zhang, “Size-dependent superparamagnetic properties of Mn spinel ferrite nanoparticles synthesized from reverse micelles,” Chemistry of Materials, vol. 13, no. 6, pp. 2092–2096, 16. M. H. Mahmoud, C. M. Williams, J. Cai, I. Siu, and J. C. Walker, “Investigation of Mn-ferrite films produced by pulsed laser deposition,” Journal of Magnetism and Magnetic Materials, vol. 261, no. 3, pp. 314–318, 2003. 17. C. Alvani, G. Ennas, A. La Barbera, G. Marongiu, F. Padella, and F. Varsano, “Synthesis and characterization of nanocrystalline MnFe[2]O [4]: advances in thermochemical water splitting,” International Journal of Hydrogen Energy, vol. 30, no. 13-14, pp. 1407–1411, 2005. View at Publisher · View at Google Scholar · View at Scopus 18. D. Carta, M. F. Casula, A. Falqui et al., “A structural and magnetic investigation of the inversion degree in ferrite nanocrystals MFe[2]O[4] (M = Mn, Co, Ni),” Journal of Physical Chemistry C, vol. 113, no. 20, pp. 8606–8615, 2009. View at Publisher · View at Google Scholar · View at Scopus 19. Y. Liu, Y. Zhang, J. D. Feng, C. F. Li, J. Shi, and R. Xiong, “Dependence of magnetic properties on crystallite size of CoFe[2]O[4] nanoparticles synthesised by auto-combustion method,” Journal of Experimental Nanoscience, vol. 4, no. 2, pp. 159–168, 2009. View at Publisher · View at Google Scholar · View at Scopus 20. C. Cannas, A. Musinu, D. Peddis, and G. Piccaluga, “Synthesis and characterization of CoFe[2]O[4] nanoparticles dispersed in a silica matrix by a sol-gel autocombustion method,” Chemistry of Materials, vol. 18, no. 16, pp. 3835–3842, 2006. View at Publisher · View at Google Scholar · View at Scopus 21. C. Cannas, A. Falqui, A. Musinu, D. Peddis, and G. Piccaluga, “CoFe[2]O[4] nanocrystalline powders prepared by citrate-gel methods: synthesis, structure and magnetic properties,” Journal of Nanoparticle Research, vol. 8, no. 2, pp. 255–267, 2006. View at Publisher · View at Google Scholar · View at Scopus 22. L.J. Zhao, H.J. Zhang, Y. Xing et al., “Studies on the magnetism of cobalt ferrite nanocrystals synthesized by hydrothermal method,” Journal of Solid State Chemistry, vol. 181, no. 2, pp. 245–252, 2008. View at Publisher · View at Google Scholar · View at Scopus 23. Q. Liu, J.H. Sun, H.R. Long, X.Q. Sun, X.J. Zhong, and Z. Xu, “Hydrothermal synthesis of CoFe[2]O[4] nanoplatelets and nanoparticles,” Materials Chemistry and Physics, vol. 108, no. 2-3, pp. 269–273, 2008. View at Publisher · View at Google Scholar · View at Scopus 24. S. R. Ahmed, S. B. Ogale, G. C. Papaefthymiou, R. Ramesh, and P. Kofinas, “Magnetic properties of CoFe[2]O[4] nanoparticles synthesized through a block copolymer nanoreactor route,” Applied Physics Letters, vol. 80, no. 9, pp. 1616–1618, 2002. View at Publisher · View at Google Scholar · View at Scopus 25. I. Brigger, C. Dubernet, and P. Couvreur, “Nanoparticles in cancer therapy and diagnosis,” Advanced Drug Delivery Reviews, vol. 54, no. 5, pp. 631–651, 2002. View at Publisher · View at Google Scholar · View at Scopus 26. R. Arulmurugan, G. Vaidyanathan, S. Sendhilnathan, and B. Jeyadevan, “Mn-Zn ferrite nanoparticles for ferrofluid preparation: study on thermal-magnetic properties,” Journal of Magnetism and Magnetic Materials, vol. 298, no. 2, pp. 83–94, 2006. View at Publisher · View at Google Scholar · View at Scopus 27. J. B. Haun, T. J. Yoon, H. Lee, and R. Weissleder, “Magnetic nanoparticle biosensors,” Wiley Interdisciplinary Reviews, vol. 2, no. 3, pp. 291–304, 2010. View at Publisher · View at Google Scholar · View at Scopus 28. N. M. Deraz and A. Alarifi, “Controlled synthesis, physicochemical and magnetic properties of nano-crystalline Mn ferrite system,” International Journal of Electrochemical Science, vol. 7, pp. 5534–5543, 2012. 29. M. I. Mendelson, “Average grain size in polycrystalline ceramics,” Journal of the American Ceramic Society, vol. 52, no. 8, pp. 443–446, 1969. View at Scopus 30. C. Rath, S. Anand, R. P. Das et al., “Dependence on cation distribution of particle size, lattice parameter, and magnetic properties in nanosize Mn-Zn ferrite,” Journal of Applied Physics, vol. 91, no. 4, article 2211, 2002. View at Publisher · View at Google Scholar · View at Scopus 31. J. B. Nelson and D. P. Riley, “An experimental investigation of extrapolation methods in the derivation of accurate unit-cell dimensions of crystals,” Proceedings of the Physical Society, vol. 57, no. 3, pp. 160–177, 1945. View at Publisher · View at Google Scholar · View at Scopus 32. L. Vegard, “The constitution of mixed crystal and the space occupied by atom,” Zeitschrift für Physik, no. 17, pp. 17–26, 1921. 33. M. J. Winter, University of Sheffield, Yorkshire, UK, 1995–2006, http://www.webelements.com/. 34. B. D. Cullity, Elements of X-Ray Diffraction, Addison-Wesley, Reading, Mass, USA, 3rd edition, 1972. 35. A. Globus, P. Duplex, and G. M. Guyot, “Determination of initial magnetization curve from crystallites size and effective anisotropy field,” IEEE Transactions on Magnetics, vol. 7, no. 3, pp. 617–622, 1971. View at Scopus 36. J. L. Snoek, “Dispersion and absorption in magnetic ferrites at frequencies above one Mc/s,” Physica, vol. 17, no. 4, pp. 207–217, 1948. View at Scopus
{"url":"http://www.hindawi.com/journals/ijms/2013/910762/","timestamp":"2014-04-16T14:24:36Z","content_type":null,"content_length":"142230","record_id":"<urn:uuid:0042dcb8-1755-4077-a665-301a578d6260>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Next Article Contents of this Issue Other Issues ELibM Journals ELibM Home EMIS Home Pick a mirror Generalized Nonlinear Superposition Principles for Polynomial Planar Vector Fields Isaac A. Garcia, Hector Giacomini, and Jaume Giné Isaac A. Garcia and Jaume Giné Departament de Matemàtica Universitat de Lleida Avda. Jaume II, 69 25001 Lleida, Spain Hector Giacomini Laboratoire de Mathématiques et Physique Théorique C.N.R.S. UMR 6083 Faculté des Sciences et Techniques Université de Tours Parc de Grandmont 37200 Tours, France Abstract: In this paper we study some aspects of the integrability problem for polynomial vector fields $\dot{x}=P(x,y)$, $\dot{y}=Q(x,y)$. We analyze the possible existence of first integrals of the form $I(x,y)=(y\!-\!g_1(x))^{\alpha_1} (y\!-\!g_2(x))^{\alpha_2} \cdots(y\!-\!g_\ell(x))^{\alpha_\ell}h(x)$, where $g_1(x), \ldots, g_{\ell}(x)$ are unknown particular solutions of $dy/dx=Q(x,y)/P (x,y)$, $\alpha_i$ are unknown constants and $h(x)$ is an unknown function. We show that for certain systems some of the particular solutions remain arbitrary and the other ones are explicitly determined or are functionally related to the arbitrary particular solutions. We obtain in this way a nonlinear superposition principle that generalize the classical nonlinear superposition principle of the Lie theory. In general, the first integral contains some arbitrary solutions of the system but also quadratures of these solutions and an explicit dependence on the independent variable. In the case when all the particular solutions are determined, they are algebraic functions and our algorithm gives an alternative method for determining such type of solutions. \endgraf { Keywords: nonlinear differential equations, polynomial planar vector fields, nonlinear superposition principle, Darboux first integral, Liouvillian first integral Full text of the article: (for faster download, first choose a mirror) • PDF file (220 kilobytes) Electronic fulltext finalized on: 26 Aug 2004. This page was last modified: 4 Jun 2010. © 2004 Heldermann Verlag © 2004–2010 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
{"url":"http://www.emis.de/journals/JLT/vol.15_no.1/7.html","timestamp":"2014-04-21T02:24:21Z","content_type":null,"content_length":"5252","record_id":"<urn:uuid:b45fccbf-2378-4677-a4d7-c3155ebf1ea1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Bulletin Item AISB event Bulletin Item CALL FOR PAPERS: "The Classical Model of Science II", 2-5 August 2011, THE NETHERLANDS The Classical Model of Science II - The Axiomatic Method, the Order of Concepts and the Hierarchy of Sciences from Leibniz to Tarski - Vrije Universiteit Amsterdam, The Netherlands Among the invited speakers are: Hourya Benis Sinaceur (IHPST, Paris) Patricia Blanchette (Notre Dame) Paola Cantù (CEPERC, Université de Provence) Paolo Mancosu (Berkeley) Paul Rusnock (Ottawa) Stewart Shapiro (Ohio State University/St. Andrews) Organising committee: Arianna Betti (chair), Hein van den Berg, Lieven Decock, Wim de Jong, Iris Loeb & Stefan Roski, VU Amsterdam This conference is devoted to the development of the axiomatic method, with particular attention for the period from Leibniz to Tarski. In particular, we aim to achieve a better historical and philosophical understanding of the way the axiomatic method in the sense of an ideal of scientific knowledge as "cognitio ex principiis" has influenced the development of modern science. The overarching framework for this will be the so-called "Classical Model of Science". The Classical Model (or Ideal) of Science consists of the following conditions for counting a system S as properly scientific (de Jong & Betti 2010: http://bit.ly/f7QKXW): (1) All propositions and all concepts (or terms) of S concern a specific set of objects or are about a certain domain of being(s). (2a) There are in S a number of so-called fundamental concepts (or terms). (2b) All other concepts (or terms) occurring in S are composed of (or are definable from) these fundamental concepts (or terms). (3a) There are in S a number of so-called fundamental propositions. (3b) All other propositions of S follow from or are grounded in (or are provable or demonstrable from) these fundamental propositions. (4) All propositions of S are true. (5) All propositions of S are universal and necessary in some sense or another. (6) All propositions of S are known to be true. A non-fundamental proposition is known to be true through its proof in S. (7) All concepts or terms of S are adequately known. A non-fundamental concept is adequately known through its composition (or definition). This systematization represents a general historical hypothesis insofar as it aims at capturing an ideal that many philosophers and scientists adhered to for more than two millennia, going back ultimately to Aristotle's "Analytica Posteriora". This cluster of conditions has been set up as a rational reconstruction of particular philosophical systems, which is also meant to serve as a fruitful interpretative framework for a comparative evaluation of the way certain concepts/ideas evolved in the history of philosophy. Call for papers The focus of this conference will be the rise of the (formal) axiomatic method in the deductive sciences from Leibniz to Tarski on the basis of the so-called Classical Model (or Ideal) of Science. Although preference will be given to contributions matching this focus, we welcome and strongly encourage submissions discussing historical developments of the ideal of scientific knowledge as "cognitio ex principiis" as sketched above concerning any epoch or longer period. The historical studies should aim at a philosophical understanding of the role and development of the seven conditions listed above in the rise of modern science. Contributed papers will be programmed in parallel sessions (30-40 minute presentations, of which about half for discussion). Topics of interest include, but are not limited to: - Leibniz's Characteristica universalis, and the ideals of "lingua characteristica" and "calculus - Analysis and proper scientific explanation in Wolff and Kant - Grounding and Logical Consequence from Bolzano to Tarski - Explanation in mathematics from Leibniz to Tarski - Epistemology and metatheory in Frege - The relation between descriptive psychology, ontology, logic and axiomatic method in Meinong - Knowing the principles and self-evidence in Husserl's conception of logic - Mereology and axiomatics in 19th century mathematics - The role of mereology as formal ontology in the system of sciences - The notion of form in 19th and 20th century logic and mathematics - Russell's conception of axiomatics - The disappearance of epistemology from 19th and 20th century geometry - Axiomatics, truth and consequence in the Lvov-Warsaw School - Logic as calculus, logic as language - Type theory, range of quantifiers and domain of discourse in the early 20th century - Interpretation, satisfaction and the history of model theory - The axiomatisation of particular disciplines such as logic, mereology, set theory, geometry and physics but also biology, chemistry and linguistics - Constitution systems - The analytic-synthetic distinction - The unity of science - Axiomatics and model theory - Axiomatics and extensionality constraints Abstracts (maximum 500 words) must be sent in electronic form to axiom.erc@gmail.com. They must contain the author's name, address, institutional affiliation and e-mail address. Deadline for submission: April 15th, 2011 Authors will be notified of the acceptance of their submission by May 1st, 2011. Please notice that we are currently trying to arrange conference child care for speakers. More information on this facility will follow. Additional information The history of the methodology systematised in the model as presented above knows three milestones: Aristotle's "Analytica Posteriora", the "Logic of Port-Royal" (1662) and Bernard Bolzano's "Wissenschaftslehre" (1837). In all generality the historical influence of this model has been enormous. In particular, it dominated the philosophy of science of the Seventeenth, and Eighteenth Century (Newton, Spinoza, Descartes, Leibniz, Wolff, Kant) but its influence is still clear in Husserl, Frege and Lesniewski. The axiomatisation of various scientific disciplines involved a strict characterisation of the 'domain' of objects and the list of primitive predicates, strict rules of composition of well-formed formulas, the determination of fundamental axioms (or axiom schemas), formal inference rules, a formalisation of the truth-concept, and a formalisation of modality. The success of the model can be seen in the formalisation of logic (Boole, Schröder, Peirce, Frege, Whitehead & Russell, Lesniewski), the axiomatisation of geometry (Hilbert, Veblen, Whitehead), the axiomatisation of set theory (Zermelo, Fraenkel, Bernays, von Neumann), the axiomatisation of physics (Vienna Circle), or in the construction of constitution systems (Carnap, Goodman). However, full and rigorous formalisation also made visible some of the intrinsic limitations of classical axiomatic methodology: problems with the determination of ontological domains (e.g. pure set theory instead of physical Ur-elements, de-interpretation and the rise of model theory), problems with the characterisation of fundamental concepts (e. g. the debate on the analytic-synthetic distinction), the separation between truth and proof, the demise of the ideal of the unity of science, etc. The first Classical Model of Science conference took place in January 2007. For more information on the Classical Model of Science, its formulation and its application as an interpretive tool from Proclus to Lesniewski and until today, see the papers in Betti & de Jong 2010 (http://bit.ly/hlB5yb by Arianna Betti, Paola Cantù, Wim de Jong, Tapio Korte, Sandra Lapointe and Marije Martijn) and in Betti, de Jong and Martijn forthcoming (http://bit.ly/hERked, by Hein van den Berg, Jaakko Hintikka, Anita Konzelmann-Ziv, F. A. Muller, Dirk Schlimm and Patrick Suppes).
{"url":"http://www.aisb.org.uk/index.php/news/82-bulletin/120-bulletin-item?event_id=2149&bulletin_type=event","timestamp":"2014-04-20T15:53:29Z","content_type":null,"content_length":"30123","record_id":"<urn:uuid:431ca519-a034-4810-a0f4-767a3d9ded31>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
How many solutions does the following system of equations have? 6x - 3y = 6 2x - y = 2 (Points : 5) an infinite number How many solutions does the following system of equations have? 6x - 3y = 6 2x - y = 2 (Points : 5) zero one two an infinite number the following system of equations have c) two solutions Not a good answer? Get an answer now. (FREE) There are no new answers.
{"url":"http://www.weegy.com/?ConversationId=8DBB8761","timestamp":"2014-04-20T13:25:35Z","content_type":null,"content_length":"35408","record_id":"<urn:uuid:0bc6bfb1-3109-4568-899d-3ac58fbb0bb3>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Next: MAXIMUM SET PACKING Up: Covering, Hitting, and Splitting Previous: Covering, Hitting, and Splitting &nbsp Index • INSTANCE: Set X, Y, and Z are disjoint. • SOLUTION: A matching for T, i.e., a subset M agree in any coordinate. • MEASURE: Cardinality of the matching, i.e., • Good News: Approximable within 3/2265]. • Bad News: APX-complete [280]. • Comment: Transformation from MAXIMUM 3-SATISFIABILITY. In the weighted case, the problem is approximable within 30]. Admits a PTAS for `planar' instances [386]. Variation in which the number of occurrences of any element in X, Y or Z is bounded by a constant B is APX-complete for 280]. The generalized Maximum k-Dimensional Matching problem is approximable within 265], and within 2(k+1)/ 3 in the weighted case. The constrained variation in which the input is extended with a subset S of T, and the problem is to find the 3-dimensional matching that contains the largest number of elements from S, is not approximable within 476]. • Garey and Johnson: SP1 Next: MAXIMUM SET PACKING Up: Covering, Hitting, and Splitting Previous: Covering, Hitting, and Splitting &nbsp Index Viggo Kann
{"url":"http://www.nada.kth.se/~viggo/wwwcompendium/node143.html","timestamp":"2014-04-16T13:06:20Z","content_type":null,"content_length":"6857","record_id":"<urn:uuid:392a2066-7aa8-45d2-8e1d-e9657d05966a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Chris Heunen EPSRC Early Career Research Fellow University of Oxford Department of Computer Science, Room 213 Wolfson Building, Parks Road, Oxford OX1 3QD Curriculum vitae I work in quantum computer science, more precisely on the mathematical foundations of physics, especially quantum mechanics, and its logical aspects. My weapons of choice are category theory, functional analysis, and order theory; specifically, monoidal categories, operator algebras, and orthomodular lattices. I suppose you could say that my ultimate goal is to really understand the category of Hilbert spaces, in particular categorical aspects of a choice of basis. For a very gentle introduction to the ideas behind my work, see "The state of quantum computer science".
{"url":"http://www.cs.ox.ac.uk/people/chris.heunen/about.html","timestamp":"2014-04-20T10:46:55Z","content_type":null,"content_length":"30864","record_id":"<urn:uuid:1e00f068-73c5-47ce-8e0e-cf576cd415a3>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Tutorial - Introduction to Software-based Rendering: Triangle Rasterization January 19, 2009 This article explains how to rasterize triangles. It contains sample C++ code and is accompanied by a demo program with full source code that uses SDL for display. Triangle Rasterization In our previous article, we implemented a function for drawing lines. This was adequate for drawing simple wireframes, such as the triangle consisting of three lines as displayed in the demo program, but now let's shift our attention to full triangle rasterization. This will enable us to draw fully shaded triangles with independent colors for each vertex. This is a bit more complicated than simple line drawing, but it will build on top of that knowledge. There are basically three steps to triangle rasterization. First, let's break triangle drawing down into a few separate stages, as shown in the figures below (each figure represents a 20x20 grid of Figure 1 Figure 2 Figure 3 The first figure shows the three points (represented by black dots) that are the three vertices of the triangle. The second figure shows the edges of the triangle; these are the three lines that connect the points and form an outline of the triangle. Last, we have the fully drawn triangle in the third figure, which is made up of horizontal lines of blue pixels that are within the boundaries of the triangle. So the first step is to use the three points to determine the edges of the triangle. Next, we loop through the y axis boundaries of the triangle to calculate the horizontal spans that the triangle consists of; these are the horizontal lines that will form the triangle. Any row of pixels within the outline of the triangle is a span. Finally, we loop through the x axis boundaries of each span to draw each individual pixel. Let's now dive into the code for doing all of this, starting with the first step: edge calculation. Edge Calculation First, we need a class to represent a single edge of a triangle. Here's the definition of the class: class Edge Color Color1, Color2; int X1, Y1, X2, Y2; Edge(const Color &color1, int x1, int y1, const Color &color2, int x2, int y2); This class contains color, x, and y values for the two points that an edge consists of. The only function is the constructor, which is shown below: Edge::Edge(const Color &color1, int x1, int y1, const Color &color2, int x2, int y2) if(y1 < y2) { Color1 = color1; X1 = x1; Y1 = y1; Color2 = color2; X2 = x2; Y2 = y2; } else { Color1 = color2; X1 = x2; Y1 = y2; Color2 = color1; X2 = x1; Y2 = y1; When we're looping through the y axis, we want to start with lower values and end with higher ones, so this constructor simply makes sure that the first point of the edge is the point with the lower y value of the two given. Let's now take a look at the DrawTriangle() function of the Rasterizer class, which creates an array of edges from the three points given: Rasterizer::DrawTriangle(const Color &color1, float x1, float y1, const Color &color2, float x2, float y2, const Color &color3, float x3, float y3) // create edges for the triangle Edge edges[3] = { Edge(color1, (int)x1, (int)y1, color2, (int)x2, (int)y2), Edge(color2, (int)x2, (int)y2, color3, (int)x3, (int)y3), Edge(color3, (int)x3, (int)y3, color1, (int)x1, (int)y1) The first edge is from point one to point two, the second edge is from point two to point three, and the third edge is from point three to point one. Let's refer back to the figures shown earlier in the article for a moment. Notice that one edge of the triangle (the vertical one on the left) spans the entire length of the triangle in the y axis, whereas the other two edges span roughly half of the length in the y axis. When we're drawing triangles, one edge will always have a length in the y axis greater than either of the other two edges (actually, it's possible that two edges have the same length in the y axis and the third has a 0 length, but our code will handle that situation as well); its length in the y axis will be the sum of the lengths of the other two edges. The following bit of code is used to find the index of the tallest edge (the one with the greatest length in the y axis) in the edges array: int maxLength = 0; int longEdge = 0; // find edge with the greatest length in the y axis for(int i = 0; i < 3; i++) { int length = edges[i].Y2 - edges[i].Y1; if(length > maxLength) { maxLength = length; longEdge = i; Next, we get the indices of the shorter edges, using the modulo operator to make sure that we stay within the bounds of the array: int shortEdge1 = (longEdge + 1) % 3; int shortEdge2 = (longEdge + 2) % 3; Next, we pass the edges to the DrawSpansBetweenEdges() function, which will calculate the horizontal spans between two edges of the triangle and send them to another function for drawing indivudal spans; we call this function twice, passing in the tall edge along with each of the short edges, and the DrawTriangle() function is done: // draw spans between edges; the long edge can be drawn // with the shorter edges to draw the full triangle DrawSpansBetweenEdges(edges[longEdge], edges[shortEdge1]); DrawSpansBetweenEdges(edges[longEdge], edges[shortEdge2]); As mentioned earlier, the tall edge's length in the y axis is the sum of the lengths in the y axis of the other two edges. Because the two short edges have different slopes, we take one short edge at a time to calculate the minimum/maximum x values of each span within the edge's boundaries. Let's now look at how we calculate the spans. Span Calculation Just like we have a class to represent edges, we have one to represent spans. Here's the definition: class Span Color Color1, Color2; int X1, X2; Span(const Color &color1, int x1, const Color &color2, int x2); This is similar to the Edge class, but it has no y values because spans are always parallel to the x axis; the function for drawing a single span (shown later in the article) will take a single y value to draw at along with a span. The constructor of the Span class makes sure that the first point stored is the one with the lower x value: Span::Span(const Color &color1, int x1, const Color &color2, int x2) if(x1 < x2) { Color1 = color1; X1 = x1; Color2 = color2; X2 = x2; } else { Color1 = color2; X1 = x2; Color2 = color1; X2 = x1; Span calculation is performed by the DrawSpansBetweenEdges() function. This function first takes the differences between the y positions of the points for the given edges. If either of them is 0, there are no spans to render and the function simply returns: Rasterizer::DrawSpansBetweenEdges(const Edge &e1, const Edge &e2) // calculate difference between the y coordinates // of the first edge and return if 0 float e1ydiff = (float)(e1.Y2 - e1.Y1); if(e1ydiff == 0.0f) // calculate difference between the y coordinates // of the second edge and return if 0 float e2ydiff = (float)(e2.Y2 - e2.Y1); if(e2ydiff == 0.0f) Next, we also calculate the differences of the x positions and colors of the points for the given edges, as this will make it a little easier to interpolate x and color values for each span: // calculate differences between the x coordinates // and colors of the points of the edges float e1xdiff = (float)(e1.X2 - e1.X1); float e2xdiff = (float)(e2.X2 - e2.X1); Color e1colordiff = (e1.Color2 - e1.Color1); Color e2colordiff = (e2.Color2 - e2.Color1); The last task before looping through each span is initializing factors for interpolating values between the two points of the given edges, and step values for increasing the factors each time the loop runs: // calculate factors to use for interpolation // with the edges and the step values to increase // them by after drawing each span float factor1 = (float)(e2.Y1 - e1.Y1) / e1ydiff; float factorStep1 = 1.0f / e1ydiff; float factor2 = 0.0f; float factorStep2 = 1.0f / e2ydiff; When this function is called, the first edge given must be the long edge and the second edge given must be one of the short ones. We're going to loop from the minimum y value of the second edge to the maximum y value of the second edge to calculate spans, since every span within the boundaries of the short edge will also be within the boundaries of the long edge. factor2 starts at 0 and is increased by factorStep2 until it reaches a value of 1 towards the end of the loop. factor1, however, may start with a value greater than 0 (if the long edge's starting y value is lower than the short edge's) or may end up at a value lower than 1 (if the long edge's ending y value is greater than the short edge's). Here's the loop that calculates spans and passes them to the DrawSpan() function to draw them: // loop through the lines between the edges and draw spans for(int y = e2.Y1; y < e2.Y2; y++) { // create and draw span Span span(e1.Color1 + (e1colordiff * factor1), e1.X1 + (int)(e1xdiff * factor1), e2.Color1 + (e2colordiff * factor2), e2.X1 + (int)(e2xdiff * factor2)); DrawSpan(span, y); // increase factors factor1 += factorStep1; factor2 += factorStep2; The x and color values for each span are interpolated from the first point of each edge to the second point, similarly to the line drawing function from our last tutorial. Once the loop finishes, the function is complete. Let's now take a look at the DrawSpan() function, which is where we draw each individual pixel of a span. Span Drawing Span drawing is basically like drawing a one-dimensional line that exists only in the x axis. We loop from the minimum x value of the span to the maximum x value, interpolating the color and setting pixels along the way. Here's how the DrawSpan() function starts. We calculate the differences between the starting/ending x/color values of the span and, if the x difference is 0, simply return because there are no pixels to draw: Rasterizer::DrawSpan(const Span &span, int y) int xdiff = span.X2 - span.X1; if(xdiff == 0) Color colordiff = span.Color2 - span.Color1; Next, we initialize a factor for interpolating the color between the beginning and end of the span, and calculate a step value for incrementing the factor each time the loop runs: float factor = 0.0f; float factorStep = 1.0f / (float)xdiff; Finally, we loop through each x position in the span and set pixels using the y value passed to this function and a calculated color value: // draw each pixel in the span for(int x = span.X1; x < span.X2; x++) { SetPixel(x, y, span.Color1 + (colordiff * factor)); factor += factorStep; The function is finished, and with that, our triangle rasterization algorithm is complete. Here's a screenshot of the demo program: The C++ source code for the demo can be found on GitHub. The demo requires SDL.
{"url":"http://joshbeam.com/articles/triangle_rasterization/","timestamp":"2014-04-19T10:26:09Z","content_type":null,"content_length":"33511","record_id":"<urn:uuid:c76a4f94-bb36-46d5-8780-eaf17fa46fa0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Maxwell speed distribution I want to know if the Maxwell speed distribution is the following. An ideal gas system of n particles, say constrained to the unit box, has the phase space ([0,1]^3 x R^3)^n. That is, [0,1]^3 for the position of a particle, R^3 for the velocity, and all to the n since there are n particles. Now in this space we can take the surface of constant energy say E=n/2, so that the average energy of a single particle is 1. This surface has finite surface area, so we can put a uniform probability distribution on it, and ask what the distribution of the first particle's velocity is. Is said distribution the Maxwell speed distribution, in the limit as n->infinity? In other words, is the Maxwell speed distribution just the distribution for the velocity of a particle found in a system chosen uniformly over all systems of the same energy E? Thanks in advance!
{"url":"http://www.physicsforums.com/showpost.php?p=4161625&postcount=1","timestamp":"2014-04-19T22:43:43Z","content_type":null,"content_length":"9344","record_id":"<urn:uuid:146a1f0c-7c38-4484-932b-23c5e225994e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Hydrostatic equilibrium 4.1 Hydrostatic equilibrium For a neutron star to be in true equilibrium, the spacetime must be stationary as discussed in Section 2.4. This means that the spacetime possesses both “temporal” and “angular” Killing vectors (cf. Ref. [35 ]). If the matter is also to be in equilibrium, then the 4-velocity of the matter 53) with the angular Killing vector in the Here, It is common to define The velocity If we assume that the matter source is a perfect fluid, then the stress-energy tensor is given by where If the fluid is barytropic, then we can define the relativistic enthalpy as and rewrite the relativistic Bernoulli equation as The constants 118) is rather easy to solve. The case of differential rotation is somewhat more complicated. An integrability condition of (116) requires that [35 ].
{"url":"http://relativity.livingreviews.org/Articles/lrr-2000-5/articlesu10.html","timestamp":"2014-04-16T04:19:11Z","content_type":null,"content_length":"10953","record_id":"<urn:uuid:62edc2ba-ed2b-4317-b062-a03df111b7cd>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Problems: Simple Regression (It is useful to calculate a regression coefficient at least once. However, no one who is doing useful work does hand calculation --they use statistical programs. Here is a simple regression coefficient calculator from the Internet: http://www.easycalculation.com/statistics/regression.php.) 1. For the data below, compute regression coefficients a and b. Y X Find the R^2 for the least squares regression line that you found. 2. A regression is run using 100 observations to determine the relationship between price and the number of pages in a book. The regression yields this equation: Price = 1.41 + 1.32(Number of pages) a) What price does this equation predict for a book with 500 pages? b) If the standard deviation of the regression coefficient for pages is .13, what is a 95% confidence interval for the true coefficient? 3. The regression equation for the numbers in the following table is Y = 8 + .5X. What is the standard error of estimate? X Y Predicted Y e^2 4. Suppose we have run a regression with five observations and we have the following results: │X│ error │ │5│ -1 │ │4│ 1 │ │1│ 0 │ │2│ ? │ │0│ ? │ What are the last two values for the residuals? (Hint: They must sum to zero, and the correlation of the error terms and the independent variables must be zero.) 5. Two researchers were interested in what relationship, if any, existed between a teacher's teaching effectiveness (measured by student evaluations) and his/her research ability (measured by the number of books or articles published over a three year period). Taking a sample of 69, they obtained this result Teaching Effectiveness = 387.22 + 3.137(Research Ability) R2 = .155; t-value for the regression coefficient = 3.51 a) What does the coefficient on Research Ability tell you? b) What does the R2 tell you? c) You are given a t-value. What does it mean? d) It is possible to find the correlation coefficient of the two variables from the information above. What is it? 6. A teacher used a series of problems in a class that came from a variety of sources. After each set of problems, the students evaluated it in terms of usefulness, with 1 meaning very helpful and 5 meaning useless. The teacher wondered if the material from a prestigious school was better than the rest. He ran a regression using as the dependent variable the average student rating of the set of problems (remember, higher numbers mean less useful) and as an independent variable whether or not the problems came from the prestigious school (0 if from an ordinary school, 1 if from the prestigious school). Below are his results. │Variable │Coefficient │std error│t-statistic │ │constant │2.285 │.034 │67.071 │ │Prestige?│.214 │.057 │3.780 │ │R^2 = .212 │ │n > 40 │ a) What was the average rating of the lessons from the ordinary schools? b) What was the average rating of the lessons from the prestigious school? c) Was the expectation of the teacher confirmed? d) Suppose the claim was that the lessons from the prestigious school were just like the other lessons and that any differences are due to random chance. Does random chance look like a good explanation of the differences in the quality of the lessons as perceived by students? What number do we use to answer this? e) How much of the variation in student evaluations did the teacher explain with this regression? Is this a lot or a little? (Comment: This is a problem of comparing whether or not two means are the same. Here it is done with regression. It can also be done without regression using a two-sample t-test, a test that some introductory texts explain but which I have not included on this site. The results will be the same regardless of which method is used.) (Use of a zero-one coding is common when we have an off-on situation. Variables with this coding are called dummy variables.) 7. Below are the results from a regression trying to predict the asking price of Cadillacs based on their mileage (measured in thousands of miles). (These data were taken from an issue of the Chicago Tribune a number of years ago.) │R Square │.603 │ │Adjusted R Square│.591 │ │ . │ │ │Variable │Regression Coefficient │Std. Error│ t │Significance│ │Constant │26303.415 │1928.098 │13.642│.000 │ │miles │-226.465 │32.478 │-6.973│.000 │ a) How successful is our attempt to explain the prices of these cars? (Hint: Use R Square.) b) If we have a Caddy that has 10,000 miles on it, what would we predict for its price? c) The level of significance for miles .000. What is the hypothesis being tested? d) There is a problem with the regression. Miles and age tend to go together, with older cars having more miles. Perhaps we are capturing some of the effects of age when we include only miles. How do you think we could fix this problem? 8. For the data below, compute compute the correlation coefficient for X and Y. Then compute the vaues of a and b in the regression equation y = a + bX. Answers here.
{"url":"http://ingrimayne.com/statistics/problems_simpleregression.htm","timestamp":"2014-04-18T21:48:47Z","content_type":null,"content_length":"17682","record_id":"<urn:uuid:b10a1f31-9610-4277-a825-53f770b3e63a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Public CourseBasic Math Skills Review 48 ( Official ) Taxonomy of Programs (TOPS) Information: TOPS Code and Course Program Title: 170200 - Mathematics Skills SAM Priority Code: Courses offered to apprentices only. Advanced Occupational Courses taken in the advanced stages of an occupational program. Each “B” level course must have a “C” level prerequisite in the same program area. Clearly Occupational Courses taken in the middle stages of an occupational program. Should provide the student with entry-level job skills. Possibly Occupational Courses taken in the beginning stages of an occupational program. Discipline Placement: State Transfer Code: C0 Not Transferable, No Degree Grading Method: Pass/No Pass Frequency Offered: On Demand Earn Credit: Non-repeatable Credit - equates to 0 repeats Course Special Designators Course Description: Math fundamentals: adding, subtracting, multiplying and dividing whole numbers and fractions. Emphasis on math learning strategies such as organization and managing math anxiety. Course Outline: - Meta-math learning strategies (Resource use, math anxiety management, test-taking strategies, textbook use, note taking) - Place-value, rounding, estimation - Whole numbers: addition, subtraction, multiplication, division - Exponents - Averages - Order of operations of whole numbers for addition, subtraction, multiplication and division - Divisibility, multiples - The fundamentals of the factoring process: prime numbers (Least Common Multiple, Greatest Common Factor) - Fractions: multiplication, division, simplification - Like fractions: addition, subtraction - Final exam Lab Outline: Course Measurable Objectives: 1. Add, subtract, multiply and divide whole numbers and basic fractions. 2. Use multiplication facts in the solution of various problems. 3. Identify and use appropriate math terms and vocabulary. 4. Select and apply strategies for application problems. 5. Identify and utilize math learning strategies related to resource use, math anxiety management, test-taking strategies, textbook use, and note taking. 6. Develop a plan for managing math anxiety. Course Methods of Evaluation: Category 1. Substantial written assignments for this course include: Half-page written reflection papers pertaining to study techniques and/or learning strategies. If the course is degree applicable, substantial written assignments in this course are inappropriate because: Not degree applicable Category 2. Computational or non-computational problem solving demonstrations: Demonstrations of problem-solving using manipulatives and visual representations Problem-based learning activities for real-world applications Presentations and projects Category 3. Skills Demonstrations: Category 4. Objective Examinations: Quizzes and exams on basic computation facts Sample Assignments: Sample Assignments for LERN 48: 1. Create and present a poster or a slide presentation which describes one prime factorization process. 2. Using rounded dollar amounts and the data provided (dinner menu, shopping list, information from Costco.com), estimate the cost of a holiday party for groups of 10, 20, 35 and 90 people 3. Utilize at least three math anxiety management strategies. Create a one-page written critique and identify the effectiveness of each strategy.
{"url":"http://webcms.mtsac.edu/webcms/Display.asp?outline_id=3343&sFormID=PUBLICCOURSE","timestamp":"2014-04-20T10:47:17Z","content_type":null,"content_length":"61751","record_id":"<urn:uuid:6d753e1f-e397-448b-a611-22307dec31d7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
An Introduction To P-adic Numbers Date: 04/29/2004 at 00:27:10 From: Scott Subject: p-adic numbers What are p-adic numbers? What are they used for? Date: 04/29/2004 at 12:18:52 From: Doctor Vogler Subject: Re: p-adic numbers Hi Scott, Thanks for writing to Dr Math. That's not an easy question! It could lead to a whole semester-long course of math. There are many ways to think about the p-adic numbers, but this is the way which seems most enlightening to me. By the way, the "p" in p-adic refers to a prime number p, and there are a completely different set of p-adic numbers for each prime (the 2-adics, the 3-adics, the 5-adics, and so on), but we often speak about them all together, because most of them are very similar to one another (the 2-adics being an exception to some of the rules). First think of the integers mod p. There are p of them, from 0 to p - 1. It is a field, so we can add, subtract, multiply, and divide by anything but the 0. Now think of the integers mod p^2. This is no longer a field, but it is a ring, and it still has some interesting properties. Now think of the elements of p^2 being written in the form a_0 + p*a_1. Then you can reduce to mod p by simply dropping the a_1 term. You can keep going, too. You can think of the integers mod p^n as numbers of the form a_0 + p*a_1 + p^2*a_2 + ... + p^(n-1)*a_(n-1) and you then reduce to a lower power of n by dropping the terms with higher powers of p. Now think of the normal integers. If you look at the integer mod p, then mod p^2, and p^3, and so on, then a certain digit a_i is going to come out at each step, until p^n gets bigger than your number, at which point the coefficients all become zero. But when we're looking at a number mod p^10, we can't tell what the next coefficient is going to be. We can't even say that it will stop there if the last 8 coefficients were all zero. We can't say where the number might stop. So what if it doesn't? What if your number is an infinite string of powers of p a_0 + p*a_1 + p^2*a_2 + ... + p^i*a_i + ... and it never ends. It's sort of like a power series in the number p. This is what p-adic integers are. Think of it in another way: When you write a number in decimal, you can only have finitely many digits on the left of the decimal, but you can have infinitely many on the right of the decimal. They might "terminate" (and become all zeros after some point) but they might not. The p-adic integers can be thought of as writing out integers in base p, but you can have infinitely many digits to the *left* of the decimal (and none on the right; but the rational p-adic numbers can have finitely many digits on the right of the decimal). These numbers are useful for two very important reasons: They are surprisingly easy to work with (once you understand them), and they can tell you things about the (normal) rational numbers. More on this Another way to think of these p-adic integers is as an infinite string of residues mod p^n (for n = 1, 2, 3, ...) such that reducing a higher residue gives you a lower residue. In other words, the n'th term in this string is the integer which is the sum of the first n terms in the power series form. They also call this "localizing" the rational number "at the prime p" whereas the real numbers are what you get if you "localize" at infinity. I won't go into that any more, because it gets pretty deep into algebraic number theory. Now, if you have a basic idea of what p-adic integers are, I will tell you how they relate to p-adic rational numbers. As you would expect, a rational p-adic number is one p-adic integer divided by another (nonzero) p-adic integer. But here's the kicker: Recall that every number mod p^n has a multiplicative inverse mod p^n unless the number is divisible by p (this is part of why the p has to be prime). Well, we can write any p-adic integer in the form p^k * r where r is a p-adic integer that is not divisible by p, just by factoring out p's. Now r has a multiplicative inverse mod p^n for every n, so 1/r is a p-adic integer! That's a very important step. So that means that if we divide p^m * s p^k * r, then we get the p-adic rational number p^(m-k) * (s * 1/r), which is a p-adic integer times a power of p. It will be another p-adic integer if the power of p is nonnegative, but p-adic rationals can have negative powers of p here, and that is why I said earlier that p-adic rational numbers can have finitely many (and not infinitely many) digits to the right of the decimal when you write them in base p. For example, a 2-adic number could be written in binary as something like which would mean 2^-7 * (1 + 2^2 + 2^4 + 2^8 + 2^9 + 2^13 + 2^15 + ...). A search on the internet for "p-adic" yields a link to MathWorld, which is always a good math reference: and a link to a surprisingly accurate article for a general encyclopedia on Wikipedia: Note their comment, "They have been used to solve several problems in number theory, many of them using Helmut Hasse's local-global principle, which roughly states that an equation can be solved over the rational numbers if and only if it can be solved over the real numbers and over the p-adic numbers for every prime p." And this is the short answer to your second question. The MathWorld article also lists several applications of the p-adics. There are many other internet articles that will tell you more about p-adics. But I find that it's always easier to learn from a book, and many modern graduate-level texts on number theory will have at least a chapter on p-adic numbers. If you have access to a university library, then I would look for a book there. You might even find a book exclusively about p-adics. I learned from Serre, "A Course in Arithmetic," but I'm sure there are many others. One nice program that does math with p-adics is GNU Pari, whose home page is You can download the program for free. (It was made for UNIX, but there is a Windows version available.) Pari writes p-adics in the form I described first, using big-oh notation for the infinite part that it doesn't calculate (much like any calculator only computes a certain number of decimal digits). For example, you can ask it log(1 + 2 + O(2^8)) which is the same as log(3 + O(2^8)) and it will tell you 2^2 + 2^4 + 2^5 + 2^6 + 2^7 + O(2^8). Or you can ask it 1/(3 + O(2^10)) and it will tell you 1 + 2 + 2^3 + 2^5 + 2^7 + 2^9 + O(2^10). Pari is also great for Taylor series, number fields, and elliptic curves, among other things. If you have any more questions about this, please write back and I will try to explain further. - Doctor Vogler, The Math Forum Date: 04/29/2004 at 13:44:33 From: Scott Subject: Thank you (p-adic numbers) Thanks for answering my question!
{"url":"http://mathforum.org/library/drmath/view/65286.html","timestamp":"2014-04-16T16:41:44Z","content_type":null,"content_length":"12123","record_id":"<urn:uuid:51a1d43a-08df-4c5c-9edc-edaf64797c13>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Field Subfields Major theories Concepts Astrophysics Cosmology, Planetology, Plasma Big Bang Cosmic inflation General Black hole Cosmic background radiation Galaxy Gravity Gravitational radiation Planet Solar system physics relativity Law of universal gravitation Star Atomic, molecular, and Atomic physics, Molecular physics Quantum optics Diffraction Electromagnetic radiation Laser Polarization Spectral line optical physics , Optics, Photonics Particle physics Accelerator physics, Nuclear Standard Model Grand unification theory Fundamental force (gravitational, electromagnetic, weak, strong) Elementary particle Antimatter physics M-theory Spin Spontaneous symmetry breaking Theory of everything Vacuum energy Condensed matter Solid state physics, Materials BCS theory Bloch wave Fermi gas Fermi Phases (gas, liquid, solid, Bose-Einstein condensate, superconductor, superfluid) Electrical physics physics, Polymer physics liquid Many-body theory conduction Magnetism Self-organization Spin Spontaneous symmetry breaking
{"url":"http://www.physicsdaily.com/","timestamp":"2014-04-18T00:43:03Z","content_type":null,"content_length":"37387","record_id":"<urn:uuid:39842f27-a4a6-4651-be4c-5b21cb40402f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Fi nd polynom roots using Newton-Raphson method December 7th 2012, 12:21 AM #1 May 2011 Fi nd polynom roots using Newton-Raphson method Hi all I try to devellop an algorithm to find roots of any polynom using the Newton Raphson method x = x - fx/f'x for ex f(x) = x^5 -3^2 + 4 x = x - (x^5 -3^2 + 4) / (5*x^4 -6*x) and repeat for each result of x until you get f(x) = 0 the problem is the method get only one solution according to the first x we choose which x to choose for xo ? and how to know if the are another solution ? Thanks in advance Re: Fi nd polynom roots using Newton-Raphson method I would probably begin here: Descartes' rule of signs - Wikipedia, the free encyclopedia The article gives links to other methods as well. Re: Fi nd polynom roots using Newton-Raphson method Usually, start by sketching a graph or, often better, graphs. For this example, you could start by splitting $f(x)=x^{5}-3x^{2}+4$ into $f(x)=g(x)-h(x),$ where $g(x)=x^{5}$ and $h(x)=3x^{2}-4.$ The graphs of $y=g(x)$ and $y=h(x)$ are both easy to sketch and the x coordinates of their points of intersection will be the the values of x for which $f(x)=0.$ (If you don't like this, your alternative is to sketch the graph of $y=f(x)$ and look for intersections with the x-axis.) A further point is that complex roots occur as conjugate pairs. That means that a fifth order equation like this will have either 1,3 or 5 real roots to go with either 4,2 or zero complex roots. The graphs should show you that there is a single intersection to the left of the origin. You might have some doubt as to intersections to the right of the origin but substituting x = 1 and 2 should convince you that there aren't any. Next is to try to come up with a suitable first approximation. Do this by substituting some (negative in this case) values for x and look for a pair of values for x where $g(x)$ and $h(x)$ cross over each other, or, oops in this case, they are equal to each other. Newton -Raphson is not needed for this example ! December 7th 2012, 12:56 AM #2 December 7th 2012, 03:32 AM #3 Super Member Jun 2009
{"url":"http://mathhelpforum.com/algebra/209256-fi-nd-polynom-roots-using-newton-raphson-method.html","timestamp":"2014-04-17T23:30:31Z","content_type":null,"content_length":"37529","record_id":"<urn:uuid:06155c89-ff2a-402c-9a61-505485a6e152>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
MythBusters tackles &quot;plane on a conveyor belt problem&quot; - Page 19 - Science Discussion &amp; News No-one has yet managed to provide a mechanism whereby a normal conveyor (albeit pretty big) is able to excerpt a force on a normal aircraft that can prevent the aircraft from moving, given that the wheels of a normal aircraft are free to spin in either direction and at any speed Let's check it. The conveyor exerts force on the PLANE (Because there is friction between the plane and the wheels). So the conveyor ~ slightly pressed brakes. So the problem becomes: "Can the plane take off the runaway if the brakes are pressed a little". And the answer depend on the concrete parameters/situation.
{"url":"http://www.neowin.net/forum/topic/616462-mythbusters-tackles-plane-on-a-conveyor-belt-problem/page-19","timestamp":"2014-04-17T18:56:14Z","content_type":null,"content_length":"129880","record_id":"<urn:uuid:90dab15b-165f-44fd-bc67-33a23ce656bd>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Whippany Trigonometry Tutor Find a Whippany Trigonometry Tutor ...But thanks to having a great teacher, I was able to understand every concept that was taught, and even better, I knew how to explain whatever problems there were to the other students. Now, I am more than ready and able to explain all facets of math to any struggling student. My goal is to make whatever is challenging you most seem easy. 15 Subjects: including trigonometry, geometry, algebra 1, statistics ...I passed the rigorous 8 hour long Fundamentals of Engineering Exam(FE) which tested every math and science topic I offer and some. I have volunteered at St.Joseph-St. Thomas and Gonzaga schools helping students in math and science. 9 Subjects: including trigonometry, chemistry, physics, calculus ...I have taught it many times in a community college setting. I was an actuary for about 7 years. I have passed the first 4 exams of the Casualty Actuarial Society. 28 Subjects: including trigonometry, physics, GRE, calculus ...I find education so important, which is why I am a tutor. I want to help people do better in school, on exams, and simply learn to enjoy learning. I have taken numerous biology (intro to upper level), math (including graduate level bio-statistics), and science classes. 29 Subjects: including trigonometry, chemistry, reading, English ...My instruction begins with a brief discussion of how this math relates to everyday life. After the student grasps the meaning, then we can work together to obtain the knowledge and review the steps necessary to successfully solve the problem. This is a process that will result in better grades. 14 Subjects: including trigonometry, calculus, statistics, geometry
{"url":"http://www.purplemath.com/whippany_trigonometry_tutors.php","timestamp":"2014-04-17T04:34:35Z","content_type":null,"content_length":"23997","record_id":"<urn:uuid:422704f8-c596-4be3-80eb-4d24b16ff342>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Calculations For a humidification application we are basically interested only in how much dry air is entering the space to be humidified. If we have a room in which we wanted to maintain 70^oF and 50% relative humidity, and the room was air tight with a vapor barrier, we would only have to introduce the proper amount of water into the air once. If we maintain a constant 70^oF we would also maintain a constant 50% RH. This would be due to the fact that no dry air could get in to mix with our conditioned air and no moisture could get out. But, even our most modern buildings are not that tight. Outside air enters through open doors, cracks, ventilation, make-up air or exhaust systems. This leakage flow is called infiltration. Ground Rules for Estimating. In estimating a humidification application we must find : 1. Indoor design condition: The desired temperature and relative humidity. For example, 70^oF and 50% RH. The psychrometric chart gives the amount of moisture in the air at these conditions as 55 gr/ 2. Outdoor design condition: The given winter temperature and relative humidity for the location. It is the temperature for which heating systems are designed. For example, it may be -10^oF and 40% RH (moisture = 2gr/lb.) in the North, or 35^oF and 60% RH (moisture = 17 gr/lb.) in the South. 3. Volume of outside air entering the space to be humidified. In a residence, outside air enters by natural infiltration, which in turn, depends on tightness of construction. Typically this varies from 1/4 to 1 air volume exchange per hour and may be more with a fireplaces or fresh air exchange devices. In a factory, warehouse or other buildings without air ducts, infiltration, exhaust fans or loading docks are the major sources of fresh air. Infiltration is difficult to calculate and is usually an “engineering estimate” based on a percentage of total volume. Example: A building with 100,000 cubic feet of space. There is no mechanical ventilation or make-up air system. Assume 1 air change per hour. The outdoor heating design temperature is 0^oF and we require 50% RH at 70^oF. The formula for H (lbs/hr) is: H = Volume X Air Changes X Grains of Moisture Required Specific Volume X 7000 Grains of Moisture Required From psychrometric chart = 56 grains of moisture per pound of air at 70^oF and 50% RH, minus 9 grains of moisture per pound already in the air (56 - 9 = 47). Specific Volume From psychrometric chart = 13.5 cu. ft./lb. of air at 70^oF, 50% RH and 7,000 = Number of grains per pound of water, a conversion constant.
{"url":"http://www.humiditysource.com/RH_101.html","timestamp":"2014-04-20T00:39:31Z","content_type":null,"content_length":"9112","record_id":"<urn:uuid:9d81df3d-69db-4b47-a031-1de21e82d548>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Functions for indexing into certain parts of an array (2d) [Numpy-discussion] Functions for indexing into certain parts of an array (2d) Bruce Southey bsouthey@gmail.... Sun Jun 7 08:19:29 CDT 2009 On Sun, Jun 7, 2009 at 3:37 AM, Fernando Perez<fperez.net@gmail.com> wrote: > On Sun, Jun 7, 2009 at 1:28 AM, Fernando Perez <fperez.net@gmail.com> wrote: >> OK. Will send it in when I know whether you'd want the fill_diagonal >> one, and where that should go. > One more question. For these *_indices() functions, would you want an > interface that accepts *either* > diag_indices(size,ndim) As I indicated above, this is unacceptable for the apparent usage. I do not understand what is expected with the ndim argument. If it is the indices of an array elements of the form: [0][0][0], [1][1][1], ... [k][k][k] where k=min(a.shape) for some array a then an ndim args is total redundant (although using shape is not correct for 1-d arrays). This is different than the diagonals of two 2-d arrays from an shape of 2 by 3 by 4 or some other expectation. > or > diag_indices(anarray) More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-June/043170.html","timestamp":"2014-04-16T10:30:12Z","content_type":null,"content_length":"4208","record_id":"<urn:uuid:2995c5ad-d102-4c94-8b93-0e4f7c167ede>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Zero-Knowledge Proofs with Efficient Provers: Lattice Problems and More Results 1 - 10 of 30 , 2007 "... We show how to construct a variety of “trapdoor ” cryptographic tools assuming the worstcase hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with preimage sampling, simple and efficient “ha ..." Cited by 104 (20 self) Add to MetaCart We show how to construct a variety of “trapdoor ” cryptographic tools assuming the worstcase hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with preimage sampling, simple and efficient “hash-and-sign ” digital signature schemes, universally composable oblivious transfer, and identity-based encryption. A core technical component of our constructions is an efficient algorithm that, given a basis of an arbitrary lattice, samples lattice points from a Gaussian-like probability distribution whose standard deviation is essentially the length of the longest vector in the basis. In particular, the crucial security property is that the output distribution of the algorithm is oblivious to the particular geometry of the given basis. ∗ Supported by the Herbert Kunzel Stanford Graduate Fellowship. † This material is based upon work supported by the National Science Foundation under Grants CNS-0716786 and CNS-0749931. Any opinions, findings, and conclusions or recommedations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. ‡ The majority of this work was performed while at SRI International. 1 1 , 2008 "... We construct public-key cryptosystems that are secure assuming the worst-case hardness of approximating the length of a shortest nonzero vector in an n-dimensional lattice to within a small poly (n) factor. Prior cryptosystems with worst-case connections were based either on the shortest vector probl ..." Cited by 84 (18 self) Add to MetaCart We construct public-key cryptosystems that are secure assuming the worst-case hardness of approximating the length of a shortest nonzero vector in an n-dimensional lattice to within a small poly(n) factor. Prior cryptosystems with worst-case connections were based either on the shortest vector problem for a special class of lattices (Ajtai and Dwork, STOC 1997; Regev, J. ACM 2004), or on the conjectured hardness of lattice problems for quantum algorithms (Regev, STOC 2005). Our main technical innovation is a reduction from certain variants of the shortest vector problem to corresponding versions of the “learning with errors” (LWE) problem; previously, only a quantum reduction of this kind was known. In addition, we construct new cryptosystems based on the search version of LWE, including a very natural chosen ciphertext-secure system that has a much simpler description and tighter underlying worst-case approximation factor than prior constructions. - In Proc. of EUROCRYPT, volume 6110 of LNCS , 2010 "... The “learning with errors ” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a pleth ..." Cited by 39 (7 self) Add to MetaCart The “learning with errors ” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE. 1 , 2009 "... We revisit the problem of generating a “hard” random lattice together with a basis of relatively short vectors. This problem has gained in importance lately due to new cryptographic schemes that use such a procedure for generating public/secret key pairs. In these applications, a shorter basis dire ..." Cited by 38 (6 self) Add to MetaCart We revisit the problem of generating a “hard” random lattice together with a basis of relatively short vectors. This problem has gained in importance lately due to new cryptographic schemes that use such a procedure for generating public/secret key pairs. In these applications, a shorter basis directly corresponds to milder underlying complexity assumptions and smaller key sizes. The contributions of this work are twofold. First, using the Hermite normal form as an organizing principle, we simplify and generalize an approach due to Ajtai (ICALP 1999). Second, we improve the construction and its analysis in several ways, most notably by tightening the length of the output basis essentially to the optimum value. , 2008 "... In this chapter we describe some of the recent progress in lattice-based cryptography. Lattice-based cryptographic constructions hold a great promise for post-quantum cryptography, as they enjoy very strong security proofs based on worst-case hardness, relatively efficient implementations, as well a ..." Cited by 36 (5 self) Add to MetaCart In this chapter we describe some of the recent progress in lattice-based cryptography. Lattice-based cryptographic constructions hold a great promise for post-quantum cryptography, as they enjoy very strong security proofs based on worst-case hardness, relatively efficient implementations, as well as great simplicity. In addition, lattice-based cryptography is believed to be secure against quantum computers. Our focus here - SIAM Journal on Computing , 2004 "... We prove a number of general theorems about ZK, the class of problems possessing (computational) zero-knowledge proofs. Our results are unconditional, in contrast to most previous works on ZK, which rely on the assumption that one-way functions exist. We establish several new characterizations of ZK ..." Cited by 27 (7 self) Add to MetaCart We prove a number of general theorems about ZK, the class of problems possessing (computational) zero-knowledge proofs. Our results are unconditional, in contrast to most previous works on ZK, which rely on the assumption that one-way functions exist. We establish several new characterizations of ZK, and use these characterizations to prove results such as: 1. Honest-verifier ZK equals general ZK. 2. Public-coin ZK equals private-coin ZK. 3. ZK is closed under union. 4. ZK with imperfect completeness equals ZK with perfect completeness. 5. Any problem in ZK ∩ NP can be proven in computational zero knowledge by a BPP NP prover. 6. ZK with black-box simulators equals ZK with general, non-black-box simulators. The above equalities refer to the resulting class of problems (and do not necessarily preserve other efficiency measures such as round complexity). Our approach is to combine the conditional techniques previously used in the study of ZK with the unconditional techniques developed in the study of SZK, the class of problems possessing statistical zero-knowledge proofs. To enable this combination, we prove that every problem in ZK can be decomposed into a problem in SZK together with a set of instances from which a one-way function can be constructed. , 2008 "... There is an inherent difficulty in building 3-move ID schemes based on combinatorial problems without much algebraic structure. A consequence of this, is that most standard ID schemes today are based on the hardness of number theory problems. Not having schemes based on alternate assumptions is a c ..." Cited by 19 (6 self) Add to MetaCart There is an inherent difficulty in building 3-move ID schemes based on combinatorial problems without much algebraic structure. A consequence of this, is that most standard ID schemes today are based on the hardness of number theory problems. Not having schemes based on alternate assumptions is a cause for concern since improved number theoretic algorithms or the realization of quantum computing would make the known schemes insecure. In this work, we examine the possibility of creating identification protocols based on the hardness of lattice problems. We construct a 3-move identification scheme whose security is based on the worst-case hardness of the shortest vector problem in all lattices, and also present a more efficient version based on the hardness of the same problem in ideal , 2009 "... We prove the equivalence, up to a small polynomial approximation factor p n / log n, of the lattice problems uSVP (unique Shortest Vector Problem), BDD (Bounded Distance Decoding) and GapSVP (the decision version of the Shortest Vector Problem). This resolves a long-standing open problem about the r ..." Cited by 17 (4 self) Add to MetaCart We prove the equivalence, up to a small polynomial approximation factor p n / log n, of the lattice problems uSVP (unique Shortest Vector Problem), BDD (Bounded Distance Decoding) and GapSVP (the decision version of the Shortest Vector Problem). This resolves a long-standing open problem about the relationship between uSVP and the more standard GapSVP, as well the BDD problem commonly used in coding theory. The main cryptographic application of our work is the proof that the Ajtai-Dwork ([AD97]) and the Regev ([Reg04a]) cryptosystems, which were previously only known to be based on the hardness of uSVP, can be equivalently based on the hardness of worst-case GapSVP O(n 2.5) and GapSVP O(n 2), respectively. Also, in the case of uSVP and BDD, our connection is very tight, establishing the equivalence (within a small constant approximation factor) between the two most central problems used in lattice based public key cryptography and coding theory. 1 , 2007 "... Lattice problems are known to be hard to approximate to within sub-polynomial factors. For larger approximation factors, such as √ n, lattice problems are known to be in complexity classes such as NP ∩ coNP and are hence unlikely to be NP-hard. Here we survey known results in this area. We also disc ..." Cited by 12 (1 self) Add to MetaCart Lattice problems are known to be hard to approximate to within sub-polynomial factors. For larger approximation factors, such as √ n, lattice problems are known to be in complexity classes such as NP ∩ coNP and are hence unlikely to be NP-hard. Here we survey known results in this area. We also discuss some related zero-knowledge protocols for lattice problems. , 2008 "... In this paper, we show that two variants of Stern’s identification scheme [IEEE Transaction on Information Theory ’96] are provably secure against concurrent attack under the assumptions on the worst-case hardness of lattice problems. These assumptions are weaker than those for the previous lattice- ..." Cited by 11 (0 self) Add to MetaCart In this paper, we show that two variants of Stern’s identification scheme [IEEE Transaction on Information Theory ’96] are provably secure against concurrent attack under the assumptions on the worst-case hardness of lattice problems. These assumptions are weaker than those for the previous lattice-based identification schemes of Micciancio and Vadhan [CRYPTO ’03] and of Lyubashevsky [PKC ’08]. We also construct efficient ad hoc anonymous identification schemes based on the lattice problems by modifying the variants.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1103537","timestamp":"2014-04-18T06:47:22Z","content_type":null,"content_length":"38550","record_id":"<urn:uuid:696739ec-0424-40b9-bd61-de5c423a0233>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
ormula for Power Calculations for One-Sample Z Test Consider the case where the null hypothesis is that the population mean, μ[0,] is 100 with a population standard deviation, σ, of 10. If the true population, μ[1], mean is 105 and we use a sample of size n=25, how likely is it that we will be able to reject the null hypothesis (one-tailed alpha = .05)? That is, how much statistical power do we have? The WISE Power Applet shown below indicates that the power is .804. The calculations below show how we could compute the power ourselves. Derivation of a formula for power computation If we observe a sample mean greater than the critical value indicated by the red dashed line, we will reject the null hypothesis. Let us refer to this critical value as C. The critical value C is defined on the blue null distribution as the value that cuts off the upper .05 (i.e., alpha) of the blue distribution. We can find the Z score for this value by using a standard Z table or the WISE p-z converter applet. Let us call this score Z[α] because it is defined on the null distribution by alpha. In our example, Z[α] = 1.645. Thus, the critical value C is 1.645 standard errors greater than the mean of the null distribution, μ[0]. In our example, Similarly, we can generate a formula for the value of C on the red sampling distribution for the alternate hypothesis, μ[1] = 105. Beta error corresponds to the portion of the red curve that falls below C, while statistical power corresponds to the portion of the red curve that falls above C. Let us use Z[β] as the label for the standardized score on the red distribution corresponding to C because the beta error is defined as the portion of the red curve that falls below C. Z[β] is a negative number in our example. We can set the two equations for C equal to each other, giving Rearranging terms gives where d is a common measure of effect size: d = (μ[1] – μ[0]) / σ. The result is this elegant formula: This formula expresses the relationship between four concepts. If we know any three, we can compute the fourth. An easy way to remember these four concepts is with the mnemonic BEAN: • B = beta error rate, represented by Z[β]. Power is (1 – beta error rate). • E = effect size, represented by d • A = alpha error rate, represented by Z[α] • N = sample size, represented by n Questions, comments, difficulties? See our technical support page or contact us: wise@cgu.edu.
{"url":"http://wise.cgu.edu/powermod/computing.asp","timestamp":"2014-04-21T10:51:28Z","content_type":null,"content_length":"25146","record_id":"<urn:uuid:82fc871b-e480-4173-8da6-a8e446243e78>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Central Falls Trigonometry Tutor Find a Central Falls Trigonometry Tutor ...I scored an 800 on the SAT Math section in high school. I generally work well with my students. I have a Bachelors and Masters degree in Computer Science. 14 Subjects: including trigonometry, calculus, geometry, statistics ...The other two major tools of calculus - differentiation and integration - are simply application of the limit to different kinds of problems. Mastering calculus requires a strong foundation of algebra and trigonometry, followed by an in-depth understanding of limits. I love the foundational cal... 19 Subjects: including trigonometry, chemistry, statistics, calculus ...To help you learn to do this better, I would bring in sorting and matching activities. That way, you can get a lot of practice identifying the different kinds of equations, describing their graphs, and explaining how to solve them. I think some of the big ideas in algebra are solving and graphing different kinds of equations. 14 Subjects: including trigonometry, chemistry, calculus, geometry ...I received both my bachelors and masters degree from Rhode Island College. I have taught students with all level of abilities. I have taught grades 6 through 12, this included general math, pre-algebra, algebra, geometry, trigonometry and analysis. 10 Subjects: including trigonometry, geometry, ASVAB, algebra 1 ...I also worked at Framingham State University, in their CASA department, which provides walk-in tutoring for FSU students. I did this from 1998-2000. At CASA I tutored algebra through calculus.I am a homeschooling mom who has currently been homeschooling for 8 years. 25 Subjects: including trigonometry, English, reading, calculus
{"url":"http://www.purplemath.com/Central_Falls_Trigonometry_tutors.php","timestamp":"2014-04-20T03:58:11Z","content_type":null,"content_length":"24231","record_id":"<urn:uuid:1bbf3b66-3dd1-4edd-b003-cdc22e577291>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone explain me why $n^2+n+1$ isn't dividable with 125? Quote: Originally Posted by DenMac21 Consider $n^2+n+1$ modulo $10$, for $n=10k+r$ with $r\in \{0,1,2,3,4,5,6,7,8,9\}$. You will find $n^2+n+1$ is never divisible by $5$ and so cannot be divisible by $125$. RonL Quote: Originally Posted by DenMac21 If, $n^2+n+1\equiv 0\mod 5$ Noticing that 5 is a prime, look at its discrimanant ( $b^2-4ac)$. The above congruence has a solution if and only if its discrimant is a quadradic residue. Since $b^2-4ac=-3\equiv 2$. Apply Euler's Criterion to determine if something is a quadradic residue we get that, $2^{\frac{5-1}{2}}ot \equiv 1\mod 5$ Thus, $n^2+n+1$ is never divisible by 5, thus how can it be divisible by $125=5^3$? Thus, we have shown it is impossible. Q.E.D. To remind you Euler's Criterion states, $a$ is a quadradic residue of $p$ if and only if, $a^{\ frac{p-1}{2}}\equiv 1\mod p$
{"url":"http://mathhelpforum.com/number-theory/1830-explanation-print.html","timestamp":"2014-04-19T09:54:59Z","content_type":null,"content_length":"8187","record_id":"<urn:uuid:d1690cf5-37a6-4415-b170-f5b09651dc22>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Discrete nodal domain theorems Results 1 - 10 of 18 - Rev. E , 2001 "... Spectral properties of Coupled Map Lattices are described. Conditions for the stability of spatially homogeneous chaotic solutions are derived using linear stability analysis. Global stability analysis results are also presented. The analytical results are supplemented with numerical examples. The q ..." Cited by 35 (13 self) Add to MetaCart Spectral properties of Coupled Map Lattices are described. Conditions for the stability of spatially homogeneous chaotic solutions are derived using linear stability analysis. Global stability analysis results are also presented. The analytical results are supplemented with numerical examples. The quadratic map is used for the site dynamics with different coupling schemes such as global coupling, nearest neighbour coupling, intermediate range coupling, random coupling, small world coupling and scale free coupling. PACS Numbers: 05.45.Ra, 05.45.Xt, 89.75.Hc - SIAM REVIEW , 2002 "... Fitness landscapes have proven to be a valuable concept in evolutionary biology, combinatorial optimization, and the physics of disordered systems. A fitness landscape is a mapping from a configuration space into the real numbers. The configuration space is equipped with some notion of adjacency, ne ..." Cited by 33 (2 self) Add to MetaCart Fitness landscapes have proven to be a valuable concept in evolutionary biology, combinatorial optimization, and the physics of disordered systems. A fitness landscape is a mapping from a configuration space into the real numbers. The configuration space is equipped with some notion of adjacency, nearness, distance or accessibility. Landscape theory has emerged as an attempt to devise suitable mathematical structures for describing the "static" properties of landscapes as well as their influence on the dynamics of adaptation. In this review we focus on the connections of landscape theory with algebraic combinatorics and random graph theory, where exact results are available. - EUROGRAPHICS 2007 , 2007 "... Spectral methods for mesh processing and analysis rely on the eigenvalues, eigenvectors, or eigenspace projections derived from appropriately defined mesh operators to carry out desired tasks. Early works in this area can be traced back to the seminal paper by Taubin in 1995, where spectral analysis ..." Cited by 17 (0 self) Add to MetaCart Spectral methods for mesh processing and analysis rely on the eigenvalues, eigenvectors, or eigenspace projections derived from appropriately defined mesh operators to carry out desired tasks. Early works in this area can be traced back to the seminal paper by Taubin in 1995, where spectral analysis of mesh geometry based on a combinatorial Laplacian aids our understanding of the low-pass filtering approach to mesh smoothing. Over the past ten years or so, the list of applications in the area of geometry processing which utilize the eigenstructures of a variety of mesh operators in different manners have been growing steadily. Many works presented so far draw parallels from developments in fields such as graph theory, computer vision, machine learning, graph drawing, numerical linear algebra, and high-performance computing. This state-of-the-art report aims to provide a comprehensive survey on the spectral approach, focusing on its power and versatility in solving geometry processing problems and attempting to bridge the gap between relevant research in computer graphics and other fields. Necessary theoretical background will be provided and existing works will be classified according to different criteria — the operators or eigenstructures employed, application domains, or the dimensionality of the spectral embeddings used — and described in adequate length. Finally, despite much empirical success, there still remain many open questions pertaining to the spectral approach, which we will discuss in the report as well. , 2003 "... The concept of a fitness landscape arose in theoretical biology, while that of effective fitness has its origin in evolutionary computation. Both have emerged as useful conceptual tools with which to understand the dynamics of evolutionary processes, especially in the presence of complex genotype-ph ..." Cited by 11 (2 self) Add to MetaCart The concept of a fitness landscape arose in theoretical biology, while that of effective fitness has its origin in evolutionary computation. Both have emerged as useful conceptual tools with which to understand the dynamics of evolutionary processes, especially in the presence of complex genotype-phenotype relations. In this contribution we attempt to provide a unified discussion of these two approaches, discussing both their advantages and disadvantages in the context of some simple models. We also discuss how fitness and effective fitness change under various transformations of the configuration space of the underlying genetic model, concentrating on coarse graining transformations and on a particular coordinate transformation that provides an appropriate basis for illuminating the structure and consequences of recombination. "... Spectral methods for mesh processing and analysis rely on the eigenvalues, eigenvectors, or eigenspace projections derived from appropriately defined mesh operators to carry out desired tasks. Early work in this area can be traced back to the seminal paper by Taubin in 1995, where spectral analysis ..." Cited by 9 (1 self) Add to MetaCart Spectral methods for mesh processing and analysis rely on the eigenvalues, eigenvectors, or eigenspace projections derived from appropriately defined mesh operators to carry out desired tasks. Early work in this area can be traced back to the seminal paper by Taubin in 1995, where spectral analysis of mesh geometry based on a combinatorial Laplacian aids our understanding of the low-pass filtering approach to mesh smoothing. Over the past fifteen years, the list of applications in the area of geometry processing which utilize the eigenstructures of a variety of mesh operators in different manners have been growing steadily. Many works presented so far draw parallels from developments in fields such as graph theory, computer vision, machine learning, graph drawing, numerical linear algebra, and high-performance computing. This paper aims to provide a comprehensive survey on the spectral approach, focusing on its power and versatility in solving geometry processing problems and attempting to bridge the gap between relevant research in computer graphics and other fields. Necessary theoretical background is provided. Existing works covered are classified according to different criteria: the operators or eigenstructures employed, application domains, or the dimensionality of the spectral embeddings used. Despite much empirical success, there still remain many open questions pertaining to the spectral approach. These are discussed as we conclude the survey and provide our perspective on possible future research. , 2006 "... Abstract. We study the number of nodal domains (maximal connected regions on which a function has constant sign) of the eigenfunctions of Schrödinger operators on graphs. Under certain genericity condition, we show that the number of nodal domains of the n-th eigenfunction is bounded below by n − ℓ, ..." Cited by 7 (1 self) Add to MetaCart Abstract. We study the number of nodal domains (maximal connected regions on which a function has constant sign) of the eigenfunctions of Schrödinger operators on graphs. Under certain genericity condition, we show that the number of nodal domains of the n-th eigenfunction is bounded below by n − ℓ, where ℓ is the number of links that distinguish the graph from a tree. Our results apply to operators on both discrete (combinatorial) and metric (quantum) graphs. They complement already known analogues of a result by Courant who proved the upper bound n for the number of nodal domains. To illustrate that the genericity condition is essential we show that if it is dropped, the nodal count can fall arbitrarily far below the number of the corresponding eigenfunction. In the appendix we review the proof of the case ℓ = 0 on metric trees which has been obtained by other authors. 1. , 2001 "... Combinatorial optimization problems defined on sets of phylogenetic trees are an important issue in computational biology, for instance the problem of reconstruction a phylogeny using maximum likelihood or parsimony approaches. The collection of possible phylogenetic trees is arranged as a so-called ..." Cited by 6 (1 self) Add to MetaCart Combinatorial optimization problems defined on sets of phylogenetic trees are an important issue in computational biology, for instance the problem of reconstruction a phylogeny using maximum likelihood or parsimony approaches. The collection of possible phylogenetic trees is arranged as a so-called Robinson graph by means of the nearest neighborhood interchange move. The coherent algebra and spectra of Robinson graphs are discussed in some detail as their knowledge is important for an understanding of the landscape structure. We consider simple model landscapes as well as landscapes arising from the maximum parsimony problem, focusing on two complementary measures of ruggedness: the amplitude spectrum arising from projecting the cost functions onto the eigenspaces of the underlying graph and the topology of local minima and their connecting saddle points. , 2005 "... The Discrete Nodal Domain Theorem states that an eigenfunction of the k-th largest eigenvalue of a generalized graph Laplacian has at most k (weak) nodal domains. We show that the number of strong nodal domains cannot exceed the size of a maximal induced bipartite subgraph and that this bound is sha ..." Cited by 3 (0 self) Add to MetaCart The Discrete Nodal Domain Theorem states that an eigenfunction of the k-th largest eigenvalue of a generalized graph Laplacian has at most k (weak) nodal domains. We show that the number of strong nodal domains cannot exceed the size of a maximal induced bipartite subgraph and that this bound is sharp for generalized graph Laplacians. Similarly, the number of weak nodal domains is bounded by the size of a maximal bipartite minor. - LINEAR ALGEBRA APPL , 2002 "... Eigenvectors of the Laplacian of a graph G have received increasing attention in the recent past. Here we investigate their so-called nodal domains, i.e., the connected components of the maximal induced subgraphs of G on which an eigenvector psi does not change sign. An analogue of Courant's nodal d ..." Cited by 2 (2 self) Add to MetaCart Eigenvectors of the Laplacian of a graph G have received increasing attention in the recent past. Here we investigate their so-called nodal domains, i.e., the connected components of the maximal induced subgraphs of G on which an eigenvector psi does not change sign. An analogue of Courant's nodal domain theorem provides upper bounds on the number of nodal domains depending on the location of psi in the spectrum. This bound, however, is not sharp in general. In this contribution we consider the problem of computing minimal and maximal numbers of nodal domains for a particular graph. The class of Boolean Hypercubes is discussed in detail. We find that, despite the simplicity of this graph class, for which complete spectral information is available, the computations are still non-trivial. Nevertheless, we obtained some new results and a number of conjectures.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=730646","timestamp":"2014-04-20T06:57:34Z","content_type":null,"content_length":"37714","record_id":"<urn:uuid:e6c50c43-f8f6-44fb-b4e3-f13d0ede0812>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Introduction2. Experimental Section3. Results3.1. Experiments on Model Individuals3.2. Population Studies3.2.1. Comparison with NHANES Data3.2.2. Plasma Hcy and 5mTHF3.2.3. Methylation Capacity3.2.4. Interaction among Mutations3.2.5. Correlations with Extreme Plasma Hcy Values3.2.6. Nonlinear Relationships4. Discussion5. ConclusionsAcknowledgmentsConflict of InterestReferencesSupplementary Files nutrients Nutrients Nutrients Nutrients 2072-6643 MDPI 10.3390/nu5072457 nutrients-05-02457 Article A Population Model of Folate-Mediated One-Carbon Metabolism Duncan Tanya M. 1 Reed Michael C. 2 Nijhout H. Frederik 1 * Department of Biology, Duke University, Durham, NC 27708, USA; E-Mail: tmk5@duke.edu Department of Mathematics, Duke University, Durham, NC 27708, USA; E-Mail: reed@math.duke.edu Author to whom correspondence should be addressed; E-Mail: hfn@duke.edu; Tel.: +1-919-684-2793; Fax: +1-919-660-7293. 05 07 2013 07 2013 5 7 2457 2474 12 04 2013 29 05 2013 04 06 2013 © 2013 by the authors; licensee MDPI, Basel, Switzerland. 2013 This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). Background: Previous mathematical models for hepatic and tissue one-carbon metabolism have been combined and extended to include a blood plasma compartment. We use this model to study how the concentrations of metabolites that can be measured in the plasma are related to their respective intracellular concentrations. Methods: The model consists of a set of ordinary differential equations, one for each metabolite in each compartment, and kinetic equations for metabolism and for transport between compartments. The model was validated by comparison to a variety of experimental data such as the methionine load test and variation in folate intake. We further extended this model by introducing random and systematic variation in enzyme activity. Outcomes and Conclusions: A database of 10,000 virtual individuals was generated, each with a quantitatively different one-carbon metabolism. Our population has distributions of folate and homocysteine in the plasma and tissues that are similar to those found in the NHANES data. The model reproduces many other sets of clinical data. We show that tissue and plasma folate is highly correlated, but liver and plasma folate much less so. Oxidative stress increases the plasma S-adenosylmethionine/S-adenosylhomocysteine (SAM/SAH) ratio. We show that many relationships among variables are nonlinear and in many cases we provide explanations. Sampling of subpopulations produces dramatically different apparent associations among variables. The model can be used to simulate populations with polymorphisms in genes for folate metabolism and variations in dietary input. folate cycle methionine cycle mathematical model population model NHANES The folate and methionine cycles play critical roles in cell metabolism with profound consequences for health and disease. Defects in the enzymes or insufficiencies in the B vitamins that act as cofactors and carbon carriers in these metabolic cycles have been associated with birth defects, certain cancers, cardiovascular disease and neurological disorders [1,2,3,4,5,6]. Among the critical enzymes in the folate and methionine cycles are thymidylate synthase, which provides the rate-limiting step for DNA synthesis [7] and the DNA methyl transferases (DNMTs) that regulate epigenetic control of gene transcription. In the methionine cycle, S-adenosylmethionine (SAM) is the universal methyl donor and is a substrate for over 150 methyl transferases [8]. The level of SAM and the S-adenosylmethionine/S-adenosylhomocysteine (SAM/SAH) ratio are thought to be good indicators of methylation capacity [9]. The methionine cycle, via cystathionine-β-synthase (CBS), also provides the first step in the synthesis of glutathione, the principal endogenous antioxidant [10,11]. Lowered levels of folate and elevated levels of homocysteine (Hcy) in the plasma are commonly used as indicators of insufficiency in these metabolic cycles and are used as biomarkers for oxidative stress, and for the risk of neurodegenerative disease, atherosclerosis, coronary heart disease, and birth defects [5,9,12]. The status of the folate and methionine cycles is typically assessed by measuring plasma levels of folate, homocysteine and the SAM/SAH ratio. Although plasma metabolites are well established as biomarkers of disease, an understanding of the mechanism of disease requires knowing metabolite levels in the cells, how these are affected by variation in genetics and nutrition, and exactly how that variation is reflected in their plasma values. Previously we developed a mathematical model of the methionine cycle in the liver and peripheral tissues, and of transport mechanisms between these compartments and the plasma, and we showed that under some circumstances the plasma levels of metabolites are not good indicators of their intracellular levels [13]. More importantly, by in silico experimentation, that model made it possible to investigate and explain the causes of those deviations and to explore specifically how differences in folate status, methionine input, and enzyme polymorphisms, affect the levels of intracellular and plasma metabolites. In the present paper we expand our mathematical model in two ways. First, we incorporate the folate cycle into the liver and tissue compartments of the methionine cycle model, and account for the uptake of folate and its transport, as 5-methyltetrahydrofolate (5mTHF), between liver, tissue, and plasma compartments and its elimination in the urine. Second, we develop a version of the model that allows us to introduce stochastic variation in the activities of enzymes, transporters, and nutrient inputs, as well as systematic variation due to polymorphisms in the genes in this system. With this model we create large populations of virtual individuals with specific patterns of polymorphisms and variation in nutrition. We show that the plasma and tissue distributions of folate and homocysteine in this virtual population closely resemble those distributions in the NHANES studies. The population model also accurately reproduces distributions of plasma metabolites measured in populations with different genetic makeups. We use the population data to study and explain the associations between plasma and intracellular levels of metabolites. We show that the relationships among metabolite concentrations and reaction fluxes are often highly non-linear, which may help explain the variability and context-dependency of empirical data on the associations of biomarkers with disease outcomes. We previously developed a mathematical model for intracellular methionine cycle kinetics in the liver and peripheral tissue compartments that contained input of substrates into the plasma, transport of methionine and folate between plasma and liver, plasma and tissues, and removal of metabolites by catabolism and excretion [13]. The tissue compartment refers to non-liver tissues and includes erythrocytes. In the present study we expand this model in two ways. First, we incorporate the folate cycle into the liver and peripheral tissue compartments, as outlined in [14,15,16], and add transport of SAM, SAH, and 5mTHF between the plasma, tissue, liver and urine. The metabolic reaction diagrams and transport directions are shown in Figure 1. Full names of all acronyms are in the Supplementary Materials. Structure of the metabolic system described by the model. Boxes are metabolites, ellipses contain the acronyms for enzymes, and arrows indicate the directions of flux. Full names of the metabolites and enzymes are in the Supplementary Materials. The kinetics of the liver and tissue methionine and folate cycles are derived from [13,14,15,16,17]. The model consists of 26 differential equations that express the rates of change of the metabolites in Figure 1. Each of the differential equations is a mass balance equation expressing the time rate of change of the concentration of the particular metabolite as the sum of the rates at which it is being made minus the rates at which it is being consumed in biochemical reactions, plus or minus the net transport rates from or to other compartments. The differential equations, rate equations, transporter kinetics and justifications are given in the Supplementary Materials together with all parameter values and steady-state values. The model was implemented in MATLAB (Mathworks, Natick, MA, USA). Second, we used this expanded model to create virtual populations by introducing stochastic variation in the activities of enzymes and input rates as follows. To simulate stochastic variation in enzyme activity, the V[max] of each enzyme was multiplied by a number close to one taken from a log-normal distribution (using the location parameter μ = −0.09, and the scale parameter σ = 0.427, giving a mean of 1 and a standard deviation of 0.2). The log-normal distribution is widespread and probably the most common frequency distribution in biological systems [18]. In addition, we simulated genetic polymorphisms (for instance in thymidylate synthase (TS) and methylene-tetrahydrofolate reductase (MTHFR)) by using a random number generator to select two “alleles” from the known frequency distributions, using for each allele a V[max] value that corresponds to the enzyme activity of that mutant (as given in [19]). In all cases the two alleles were assumed to act additively. For each set of random choices, the model was then run to equilibrium giving the metabolite concentrations and reaction fluxes for that virtual individual. We then followed this procedure many (1000–10,000) times to generate a population of virtual individuals, in which each “individual” had a different randomly generated set of enzyme activities. Statistical analyses were done using JMP Pro 9.0 (SAS Institute Inc., Cary, NC, USA). In order to be sure that the mathematical model represents the underlying physiology well, we conducted numerous in silico experiments, where the results could be compared to clinical studies in the literature. We discuss three of those experiments here. First, we conducted a methionine load test on the model. We added a pulse to the methionine input that raised the methionine in the plasma by a factor of 3 with a peak about 3 h after the dose. The exact formula for the methionine pulse is given in the Supplementary Materials. The model then computed the time courses of all the concentrations in the model and all the fluxes. In Figure 2 we show the model computations of plasma Hcy and plasma SAM, two variables that are commonly measured in the clinic. These curves correspond very well with those measured in [20]. Model simulation of a methionine load test. The curves show the response of plasma homocysteine (Hcy) (filled circles) and S-adenosylmethionine (SAM) (open circles) to a pulse of methionine. Effect of increasing oxidative stress. Model calculations show that increasing oxidative stress from normal (0.01 μM hydrogen peroxide (H[2]O[2])) causes plasma Hcy to decrease, the plasma S-adenosylmethionine/S-adenosylhomocysteine (SAM/SAH) ratio to increase, and the tissue and liver concentrations of 5-methyltetrahydrofolate (5mTHF) to increase considerably. Thus oxidative stress creates a methyl trap. Plasma 5mTHF is unchanged and therefore is not a good indicator of oxidative stress. Next, we examined the effects of oxidative stress on folate and methionine metabolism by raising the level of H[2]O[2] in our model from 0.01 μM (normal) to 0.02, 0.03, and 0.04. The results can be seen in Figure 3. The most striking consequence is that 5mTHF in the liver and in the peripheral tissues increases about threefold at the highest level of oxidative stress. Thus, oxidative stress in our model creates a methyl trap wherein 5mTHF increases considerably and the substrates for the TS and AICART reactions decrease considerably, thereby inhibiting cell division. We note that although intracellular levels of 5mTHF increase dramatically, the plasma level changes only slightly, and thus plasma levels of 5mTHF are not a good biomarker of oxidative stress. Jill James [9,21] studied autistic children and found that they have high oxidative stress (higher reduced glutathione (GSSG) and lower glutathione (GSH)). She found, as we do, that Hcy decreases in the plasma compared to normal controls. Experiments with this model and our GSH model [22] provide the reason why intracellular levels of 5mTHF increase substantially. Oxidative stress increases dramatically the intracellular concentration of GSSG, a substrate that inhibits MAT-I and MAT-III [23]. This lowers the flux around the methionine cycle. In addition oxidative stress stimulates the enzyme CBS, so more of the lowered flux is sent down the transsulfuration pathway. This lowers intracellular Hcy, which greatly decreases the flux in the MS reaction, thereby causing 5mTHF to build up. Finally, we examined the effect on the model of an extra pulse of folate (5mTHF) input to the plasma such as one would get by taking a folate supplement. We did not attempt to model the difficult and controversial questions surrounding folate bioavailability [24,25,26], which involve breakdown in the gut and transfer to the plasma, but instead simply increased the input of 5mTHF to the plasma in pulsatile fashion. Most of the extra folate enters the plasma in the first two hours after the dose; the exact formula is given in the Supplementary Materials. Figure 4 shows that plasma folate increases rapidly, peaking at about two hours and then declines slowly so that it is almost back to normal after 10 h. The time course of plasma folate computed by the model is very similar to the clinical curves in [25,27]. Notice that a single supplemental dose of folate has no noticeable effect on liver or tissue folate (Figure 4). This corresponds with many clinical observations that plasma folate is quite variable, but liver and tissue folate change slowly on a much longer time scale. We also conducted experiments with the model (simulations not shown) that showed that liver and tissue folate have a half-life of about 90 days, corresponding to clinical observations [1]. Response to a folate pulse. The curves show model computations of 5mTHF in the plasma, liver and tissue after a pulse of 5mTHF. Intracellular values do not change but plasma values rise almost 3-fold and take more than 8 h to return to steady-state. We generated a virtual population of 10,000 individuals as outlined in Methods. In addition to stochastic variation in each enzyme, this virtual population contains polymorphisms for the genes for two enzymes: MTHFR (modeling the 677C > T polymorphism; frequency of the T allele = 0.31; activity of enzyme CC = 100%, CT = 60%, TT = 30%) and TS (modeling the 1494del6 polymorphism; frequency of −6bp = 0.32; activity of enzyme +6bp/+6bp = 100%, +6bp/−6bp = 48%, −6bp/−6bp = 24%). A spreadsheet with all metabolite concentrations and reaction fluxes for these 10,000 individuals is deposited in the Supplementary Materials as DuncanPopulationData.xls. Data for the genotypic values of each enzyme for each individual in this virtual population are also given. We used this virtual population to generate several of the figures and tables discussed below. The mean plasma folate, tissue folate, and plasma Hcy concentrations for the individuals in the population model are shown in Table 1, together with NHANES postfortification data for 1999–2000 and 2001–2002 [28]. The model predicts folate and homocysteine levels within the ranges found by the NHANES studies. nutrients-05-02457-t001_Table 1 Comparison of model results to NHANES data. Metabolite N Mean Plasma folate * Model ^1 (post-fortification) 10,000 29.7 ± 0.2 NHANES ^2,3 (1999–2000) 3223 30.2 ± 0.7 NHANES ^2,3 (2001–2002) 3931 27.8 ± 0.5 Tissue folate * Model ^1 (post-fortification) 10,000 602 ± 2.15 NHANES ^2,3 (1999–2000) 3249 618 ± 0.11 NHANES ^2,3 (2001–2002) 3977 611 ± 0.9 Plasma Hcy ** Model ^1 (post-fortification) 10,000 7.1 ± 0.03 NHANES ^2,3 (1999–2000) 3246 7.0 ± 0.01 NHANES ^2,3 (2001–2002) 3976 7.3 ± 0.01 * nmol/L; ^1 values are averages ± SE; ^2 NHANES analysis from [28]; ^3 values are geometric means ± SE. ** μmol/L. The frequency distributions of individual tissue folate, plasma folate, and plasma Hcy levels for our virtual population model are shown in Figure 5, where they are compared to the distributions found in the NHANES studies. The tissue folate and plasma Hcy distributions are nearly identical to the NHANES data. In our model a higher frequency of individuals was found to have low plasma folate levels than reported in the NHANES studies. One possible explanation for this deviation is that individuals may have other sources of folate, such a gut microbiota, that are not included in our model. Another is that plasma folate levels are very sensitive to recent folate intake [29], making them quite variable (e.g., Figure 4), and this may account for the discrepancy. Real populations of course differ in accuracy of self-reporting dietary intake and in many uncontrolled and unknowable factors that might affect folate levels. Recently, it has been shown that the several different methods used in the NHANES studies to measure plasma folate levels generated large variation in the levels reported [30]. The NHANES data are thus subject to inaccuracies in measurement and incomplete knowledge of recent folate intake, yet the fit of the model results to these empirical data are very good. Frequency distributions of (A) tissue folate (B) plasma folate and (C) plasma Hcy found in two NHANES data sets are compared to distributions in the model population for In the model, population plasma folate is represented by 5mTHF and tissue folate is the sum of all folate derivatives. The model results show that level of plasma Hcy is negatively correlated with plasma folate level (Figure 6), although the relationship is not linear. This corresponds well with the relationship found by Selhub et al. [31]. At high plasma folate levels, these authors found somewhat higher plasma Hcy than we see in our simulations, which may be due to the fact that we are simulating a post-fortification population. The correlations between 5mTHF and Hcy levels in the three compartments are shown in Figure 7. Plasma Hcy is strongly correlated with tissue Hcy but not with liver Hcy. This indicates that variation in plasma values is determined almost entirely by variation in tissue levels of Hcy. This is because the volume of the tissue compartment is much larger than that of the plasma and the liver, and because the tissue is a net exporter of Hcy whereas the liver imports Hcy from the plasma and remethylates it to methionine. We found that the levels of liver homocysteine and liver 5mTHF are well correlated. The relationship between tissue Hcy and tissue 5mTHF is highly nonlinear (Figure 7), indicating that one is not a good predictor of the other. Scattergram of the relationship between plasma 5mTHF and plasma Hcy. Each point is an individual in a simulated population of 1000 virtual individuals. The SAM/SAH ratio is commonly used as an index of cellular methylation capacity [12,32]. A decrease in the SAM/SAH ratio can be due to a decrease in SAM, the universal methyl donor, or an increase in SAH, which is a general inhibitor of methyl transfer reactions [33,34]. Melynk et al. [12] found a positive relationship between plasma SAM/SAH and lymphocyte SAM/SAH, suggesting that plasma values are a good indicator of intracellular values. Our model results show a positive relationship between plasma SAM/ SAH and tissue SAM/SAH (Figure 8A), similar to that found by Melnyk et al. [12], as well as a negative association between plasma SAM/ SAH and plasma Hcy (Figure 8B), likewise similar to the findings of Melnyk et al. [12]. Scattergrams of relationships between 5mTHF and Hcy in plasma (P-), tissue (T-) and liver (L-). The diagonal gives the names of the metabolites; for each metabolite the column represents variation of the variable along the x-axis and the y-axes in the column show the variation of the different row-variables. Each graph contains 10,000 data points, one for each virtual individual. The red ellipses contain approximately 95% of the data points. Units of the axes are μM, except for P-5mTHF where they are nM. Model scattergram for plasma versus tissue SAM/SAH ratio (A) and for plasma Hcy versus the plasma SAM/SAH ratio (B). Plasma and tissue SAM/SAH are well-correlated. Plasma Hcy is not a good predictor of plasma SAM/SAH. In our model we have two methyltransferases, DNMT and GNMT, and assume that the sum of the flux through these two enzymes is a measure of the total methylation rate. The model reveals that the maximum flux through the DNMT reactions is associated with low to moderate plasma SAM/SAH ratios and saturates at medium to high ratios (Figure 9). Flux through the GNMT reactions, by contrast, is positively correlated with the plasma SAM/SAH ratio at low to medium values, but decreases when the SAM/SAH ratio becomes very high (Figure 9). The reason for the decline in methylation capacity at a very high SAM/SAH ratio is that high concentrations of SAM inhibit MTHFR, which reduces the availability of 5mTHF, and this slows down the methionine synthase reaction and flux around the methionine Scattergrams of the relationships between plasma (P-) SAM/SAH ratio and the DNA-methyltransferase (DNMT) and glycine N-methyltransferase (GNMT) reaction fluxes in liver (L-) and tissues (T-). Each graph contains 10,000 data points; the red ellipses contain approximately 95% of the data. Units of axes are μM/h. Our virtual population has polymorphisms in MTHFR and TS. The activities of the enzymes for homozygotes and heterozygotes for these mutations and the allele frequencies are described in Section 3.2. The C677T polymorphism of MTHFR has been shown to be associated with an increased plasma Hcy, whereas the 1494del6 polymorphism of TS is associated with a decreased plasma Hcy [19,35,36]. In a sizable population many individuals will carry both mutations, either in the heterozygous or homozygous condition. Our virtual population allowed us to examine the population-wide interactions among these mutations. Their joint effects on plasma Hcy are shown in Figure 10, which illustrates the mean deviations from the mean wild-type phenotype for each genotype. It is clear that the effect of gene dosage is not always monotonic, and the interactions among the two genes are not additive. The reduced function allele of TS appears to have a stronger effect on plasma Hcy than that of MTHFR. Mean values of plasma Hcy as a function of 5,10-methylenetetrahydrofolate reductase (MTHFR) and thymidylate synthase (TS) genotypes in a population of 10,000 virtual individuals. The basis for comparison is the double homozygous wild-type (2,2). The TS polymorphism decreases plasma Hcy and the MTHFR polymorphism increases plasma Hcy. However their joint effects are not additive. From our database population of 10,000 we selected the 1000 individuals with the highest plasma Hcy concentrations, and the 1000 individuals with the lowest plasma Hcy concentrations in order to study which internal variables (metabolite concentrations, enzyme fluxes) were best associated with these extremes. Scattergrams of selected pairwise combinations are shown in Figure 11. In general, the values of tissue metabolites and enzyme fluxes are better correlated with plasma Hcy than are those in the liver. This is because the peripheral tissue is a much larger compartment. The strongest correlations with plasma Hcy were the tissue CBS and DNMT reactions (Figure 11). The correlation with CBS activity is positive, as one would expect, since higher levels of Hcy will drive that reaction faster. There is no relationship, however, with tissue MS activity, presumably because MS also requires 5mTHF as a co-substrate; higher concentrations of Hcy would drive the MS reaction faster and thus draw down 5mTHF. There is a negative relationship between plasma Hcy and tissue DNMT rate (Figure 11), which is due to the negative association of Hcy and tissue SAM, which is the substrate for the DNMT reaction. Most other enzymes and metabolites show a weak correlation with plasma Hcy, at least in these simple pairwise comparisons. Scattergrams for various metabolite concentrations and reaction rates in virtual individuals with high plasma Hcy (gray) and low plasma Hcy (black). The 1000 individuals with the highest plasma Hcy and the 1000 individuals with the lowest plasma Hcy were selected from a population of 10,000 virtual individuals. The scattergrams allow one to assess whether the relationship between other variables differ in the two sub-populations. Units of the axes are μM for concentrations and μM/h for reaction rates. The results illustrated in Figure 6, Figure 7, Figure 8, Figure 9, Figure 11 show that the relationships among metabolites and reaction fluxes in this system can be very nonlinear. Indeed, a global analysis of relationships throughout the system modeled here shows that almost half the pairwise relationships among metabolites and reaction fluxes are nonlinear (see also Figures S2 and S3 in Supplementary Materials). This finding implies that observed correlations among variables in the system will be sensitive to the values of other (possibly unmeasured) variables. Sampling a sub-population with particular characteristics, for instance, can produce dramatically different apparent associations among variables. Figure 12 is an example of the relationship between fluxes through the CBS and MTHFR reactions for four different subsamples of the population. Each subpopulation has a different relationship even though the underlying mechanism is exactly the same for all. Estimated relationships between CBS and MTHFR fluxes based on selective data from the same population of 10,000 virtual individuals. Panel (A) shows 100 individuals with the highest (black) and 100 with the lowest (gray) plasma folate. Panel (B) shows 100 individuals with the highest (black) and 100 with the lowest (gray) plasma Hcy. For each sub-population the slope of the regression estimating the relationship between CBS and MTHFR fluxes is quite different. Lines are least-square regressions. We have developed a mathematical model for whole-body one-carbon metabolism and the transport of metabolites between the plasma, liver and peripheral tissues. We developed two versions of the model: an individual model in which we studied the time-dependence of variation in cellular metabolism due to variation in nutrient and vitamin input, and a population model in which we studied the effects of variation in enzyme activities, nutrient input and genetic polymorphisms on the distribution of steady-state values of metabolite concentrations and metabolic fluxes. A methionine load test with the individual model produces plasma profiles of Hcy and SAM that are closely similar to those found empirically. The individual model also shows that a pulse of folate input changes the plasma folate significantly but has little effect on the tissue and liver folate levels. Prolonged exposure to altered folate input levels is required to substantially change the liver and tissue values [13]. We also used the individual model to examine the effects of increased oxidative stress on metabolite levels and reaction fluxes. Oxidative stress causes substantial changes in tissue and liver 5mTHF, which are not reflected in plasma values. The model allowed us to explain the causal pathway by which oxidative stress produces this effect. We used the model to produce a population of 10,000 virtual individuals with stochastic variation in enzyme activities and polymorphisms in MTHFR and TS. The frequency distribution of tissue folate and of plasma folate and homocysteine was nearly identical to that found in the NHANES studies, which suggests that the amount and distribution of stochastic variation in our virtual population fairly resembled that of a US population. The model population distributions of plasma and tissue SAM/SAH ratios, and the relationship between plasma Hcy, folate and SAM/SAH are likewise similar to those found in human population studies. For obvious reasons, most biomarkers are measured in the blood or in the urine and then one assumes that these values are a good representation of what is happening in the liver or the tissues. Our model gives a way of assessing this assumption. P-5mTHF is not very correlated to T-5mTHF or L-5mTHF (Figure 7). P-Hcy is tightly correlated with T-Hcy (even though the relationship is nonlinear), but P-Hcy is quite uncorrelated with L-Hcy (Figure 7). P-SAM/SAH is reasonably well correlated with T-SAM/SAH (Figure 8A). P-SAM is tightly correlated to T-SAM but only moderately coupled to L-SAM (scatterplots not shown). In general, we have found that tissue values and plasma values are often well correlated but plasma values and liver values are not. This is not surprising since the volume of the tissues in our model is much larger than the volume of the liver. Nevertheless, these results show that plasma-tissue and plasma-liver correlations may differ considerably for different variables and that it is always risky to assume that the plasma measurements represent well either tissue or liver values. The population model allows us to partition a population by genotype and/or phenotype and investigate the associations of intracellular metabolites and reaction fluxes and of plasma metabolites with each different condition. Partitioning a population by individuals with the highest and lowest plasma Hcy values allowed us to determine which other pairwise relationships are different for the two populations. For example, in all the pairwise scatterplots of CBS with other variables in Figure 11, the high and low Hcy populations fall in two separate clouds. The reason is easy to understand. Plasma Hcy is highly correlated with tissue Hcy (see Figure 7). Thus, if plasma Hcy is high (low), then tissue Hcy will be high (low) and will drive the CBS flux at a high (low) rate. On the other hand, some of the scatter plots of tissue MS flux versus other variables show separate populations (T-MS vs. T-CBS), some show partially overlapping populations (T-MS vs. L-MS), and some show almost completely overlapping populations (L-MS vs. L-CBS). In each case, one can use the reaction diagram (Figure 1) and model experiments to figure out why these scatterplot results are true. Thus, the virtual population allows one to investigate the behavior of all concentrations and fluxes, and correlations between them in selected subpopulations. When one does such an investigation, one can sometimes find surprising results. In Figure 12A we saw that MTHFR flux was positively correlated with CBS flux in the high folate populations but negatively correlated with CBS flux in the low folate population. And in Figure 12B we saw that the positive relationship of MTHFR flux to CBS flux has a very different slope in the high Hcy population as compared to the low Hcy population. This is a good reminder that one has to be very careful to choose a “random” population if one expects to infer universal relationships between Perhaps the most striking result obtained from our population studies is that many of the scatterplots are highly nonlinear; that is, there is no linear relationship between the variables that captures the essential features of the data. Consider, for example, the scatterplots of T-5mTHF vs. P-Hcy in Figure 7, of P-SAM/SAH vs. L-DNMT in Figure 9, or T-GNMT vs. T-DNMT in Figure 9. In one sense such outcomes are not surprising since almost all the velocities in the reaction diagram (Figure 1) are (highly) nonlinear functions of the current values of various concentrations. On the other hand, when one computes correlations, it is tempting to assume that the linear regression line expresses a real relationship between the variables. These scatterplots are strong evidence that caution should be exercised in making such simplifying assumptions. 1. The deterministic model is a useful tool for studying the casual relationships between changes in different metabolite and biomarker levels. 2. The virtual population model allows one to study interesting relationships suggested by the NHANES studies or other population studies, because in our model one can study the correlations among all the variables in the system. 3. One needs to take great care before assuming that changes in plasma levels of metabolites will be simple reflections of changes in liver levels of the same metabolites. 4. Selecting random populations is crucial to population studies, because correlations between variables may be very different in different non-random subpopulations. This research was supported by NSF grant EF-1038593 to HFN and MCR, and a subcontract on NIH grant R01 ES019876 (Duncan Thomas, PI). The authors declare they have no conflicts of interest. Bailey L.B. CRC Press Boca Raton, FL, USA 2009 Carmel R. Jacobsen D. Cambridge University Press Cambridge, UK 2001 Duthie S.J. Folic acid deficiency and cancer: Mechanisms of DNA instability 1999 55 578 592 10.1258/0007142991902646 Potter J.D. Colorectal cancer: Molecules and populations 1999 91 916 932 10.1093/jnci/91.11.91610359544 Eskes T.K.A.B. Homocysteine in Human Reproduction Carmel R. Jacobsen D.W. Cambridge University Press Cambridge, UK 2001 451 465 Refsum H. Ueland P.M. Nygard O. Vollset S.E. Homocysteine and cardiovascular disease 1998 49 31 62 10.1146/annurev.med.49.1.31 Liu Y. Xia Q. Jia Y. Guo H. Wei B. Hua Y. Yang S. Significance of differential expression of thymidylate synthase in normal and primary tumor tissues from patients with colorectal cancer 2011 4 33 10.1186/1756-8722-4-33 Lennard L. Elsevier Kidlington, UK 2010 James S.J. Cutler P. Melnyk S. Jernigan S. Janak L. Gaylor D.W. Neubrander J.A. Metabolic biomarkers of increased oxidative stress and impaired methylation capacity in children with autism 2004 80 1611 1617 15585776 Wu G. Fang Y.-Z. Yang S. Lupton J.R. Turner N.D. Glutathione metabolism and its implications for health 2004 134 489 492 14988435 Ishibashi M. Akazawa S. Sakamaki H. Matsumoto K. Yamasaki H. Yamaguchi Y. Goto S. Urata Y. Kondo T. Nagataki S. Oxygen-induced embryopathy and the significance of glutathione-dependent antioxidant system in the rat embryo during early organogenesis 1997 22 447 454 10.1016/S0891-5849(96)00338-3 Melnyk S. Pogribna M. Pogribny I. Yi P. James S. Measurement of plasma and intracellular S-adenosylmethionine and S-adenosylhomocysteine utilizing coulometric electrochemical detection: Alterations with plasma homocysteine and pyridoxal 5′-phosphate concentrations 2001 47 612 Duncan T. Reed M. Nijhout H. The relationship between intracellular and plasma levels of folate and metabolites in the methionine cycle: A model 2013 57 626 638 Nijhout H. Reed M. Anderson D. Mattingly J. James J. Ulrich C. Long-range allosteric interactions between the folate and methionine cycles stabilize DNA methylation reaction rate 2006 1 81 87 10.4161/epi.1.2.267717998813 Nijhout H.F. Reed M.C. Budu P. Ulrich C.M. A mathematical model of the folate cycle––New insights into folate homeostasis 2004 279 55008 55016 10.1074/jbc.M410818200 Reed M.C. Nijhout H.F. Neuhouser M.L. Gregory J.F. Shane B. James S.J. Boynton A. Ulrich C.M. A mathematical model gives insights into nutritional and genetic aspects of folate-mediated one-carbon metabolism 2006 136 2653 2661 16988141 Reed M. Nijhout H. Sparks R. Ulrich M. A mathematical model of the methionine cycle 2004 226 33 43 10.1016/j.jtbi.2003.08.001 Limpert E. Stahel W. Abbt M. Log-normal distributions across the sciences: Keys and clues 2001 51 341 352 10.1641/0006-3568(2001)051[0341:LNDATS]2.0.CO;2 Ulrich C.M. Neuhouser M. Liu A.Y. Boynton A. Gregory J.F. III Shane B. James S.J. Reed M.C. Nijhout H.F. Mathematical modeling of folate metabolism: Predicted effects of genetic polymorphisms on mechanisms and biomarkers relevant to carcinogenesis 2008 17 1822 1831 10.1158/1055-9965.EPI-07-2937 Ueland P.M. Refsum H. Plasma homocysteine, a risk factor for vascular disease: Plasma levels in health, disease, and drug therapy 1989 114 473 501 2681479 James S.J. Melnyk S. Jernigan S. Cleves M.A. Halsted C.H. Wong D.H. Cutler P. Bock K. Boris M. Bradstreet J.J. Metabolic endophenotype and related genotypes are associated with oxidative stress in children with autism 2006 141B 947 956 10.1002/ajmg.b.30366 Reed M. Thomas R. Pavisic J. James S. Ulrich C. Nijhout H. A mathematical model of glutathione metabolism 2008 5 10.1186/1742-4682-5-8 Pajares M. Duran C. Corrales F. Pliego M. Mato J. Modulation of rat liver S-adenosylmethionine synthetase activity by glutathione 1992 267 17598 17605 1517209 Gregory J.F. Case study: Folate bioavailability 2001 131 1376S 1382S 11285357 Wright A.J.A. Finglas P.M. Dainty J.R. Hart D.J. Wolfe C.A. Southon S. Gregory J.F. Single oral doses of 13c forms of pteroylmonoglutamic acid and 5-formyltetrahydrofolic acid elicit differences in short-term kinetics of labelled and unlabelled folates in plasma: Potential problems in interpretation of folate bioavailability studies 2003 90 363 371 10.1079/BJN200390812908897 Gregory J.F. Quinlivan E.P. Davis S.R. Integrating the issues of bioavailability, intake and metabolism in the era of fortification 2005 16 229 240 10.1016/j.tifs.2005.03.010 McKillop D.J. McNulty H. Scott J.M. McPartlin J.M. Strain J. Bradbury I. Girvan J. Hoey L. McCreedy R. Alexander J. The rate of intestinal absorption of natural food folates is not related to the extent of folate conjugation 2006 84 167 173 16825692 Ganji V. Kafai M. Trends in serum folate, rbc folate, and circulating total homocysteine concentrations in the united states: Analysis of data from national health and nutrition examination surveys, 1988–1994, 1999–2000, and 2001–2002 2006 136 153 158 16365075 Lindenbaum J. Allen R. Clincal Spectrum and Diagnosis of Folate Deficiency Bailey L.B. Marcel Dekker, Inc. New York, NY, USA 1995 43 74 Yetley E.A. Johnson C.L. Folate and vitamin B-12 biomarkers in nhanes: History of their measurement and use 2011 94 10.3945/ajcn.111.013300 Selhub J. Jacques P.F. Wilson P.W.F. Rush D. Rosenberg I.H. Vitamin status and intake as primary determinants of homocysteinemia in an elderly population 1993 270 2693 2698 10.1001/jama.1993.03510220049033 Williams K.T. Schalinske K.L. New insights into the regulation of methyl group and homocysteine metabolism 2007 137 311 314 17237303 Caudill M.A. Wang J.C. Melnyk S. Pogribny I.P. Jernigan S. Collins M.D. Santos-Guzman J. Swendseid M.E. Cogger E.A. James S.J. Intracellular S-adenosylhomocysteine concentrations predict global DNA hypomethylation in tissues of methyl-deficient cystathionine beta-synthase heterozygous mice 2001 131 2811 2818 11694601 Stam F. van Guldener C. ter Wee P.M. Kulik W. Smith D.E.C. Jakobs C. Stehouwer C.D.A. de Meer K. Homocysteine clearance and methylation flux rates in health and end-stage renal disease: Association with S-adenosylhomocysteine 2004 287 F215 F223 10.1152/ajprenal.00376.2003 Kealey C. Brown K. Woodside J. Young I. Murray L. Boreham C. McNulty H. Strain J.J. McPartlin J. Scott J. A common insertion/deletion polymorphism of the thymidylate synthase (TYMS) gene is a determinant of red blood cell folate and homocysteine concentrations 2005 116 347 353 10.1007/s00439-004-1243-2 De Bree A. Verschuren W. Bjørke-Monsen A. van der Put N. Heil S. Trijbels F. Blom H. Effect of the methylenetetrahydrofolate reductase 677c→T mutation on the relations among folate intake and plasma folate and homocysteine concentrations in a general population sample 2003 77 687 693 12600862 Supplementary Material (DOCX, 4552 KB) Compatibility mode (XLS, 20042 KB)
{"url":"http://www.mdpi.com/2072-6643/5/7/2457/xml","timestamp":"2014-04-18T00:18:47Z","content_type":null,"content_length":"96691","record_id":"<urn:uuid:cca61e50-8524-4898-9490-d5a1fe1589db>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
New Method for Obtaining Emission Coefficients from Emitted Spectral Intensities. Part II—Asymmetrical Sources The method given in Part I for obtaining emission coefficients from emitted spectral intensities is generalized here to include asymmetrical sources as well. In this method the emission coefficient is expanded in terms of a complete set of functions which are invariant in form to a rotation of axes and the integral equation relating the emission coefficient to the emitted spectral intensity is used to determine the unknown expansion coefficients. The method has been checked by means of a hypothetical example corresponding to a source whose emission coefficient is a displaced gaussian. This same hypothetical example is used to test the numerical method which was developed for summing the series representation for the emission coefficient in situations where the emitted spectral intensity is given in the form of experimental data. Finally the numerical method is used to obtain the spatial distribution of the emission coefficient corresponding to an atomic spectral line of argon emitted by a free-burning argon arc which is distorted by an external magnetic field. C. D. MALDONADO and H. N. OLSEN, "New Method for Obtaining Emission Coefficients from Emitted Spectral Intensities. Part II—Asymmetrical Sources," J. Opt. Soc. Am. 56, 1305-1313 (1966) Sort: Year | Journal | Reset 1. C. D. Maldonado, A. P. Caron, and H. N. Olsen, J. Opt. Soc. Am. 55, 1247 (1965); this paper is referred to as I, and any of its equations will be identified by the prefix I. 2. S. I. Herlitz, Arkiv Fys. 23, 571 (1963). 3. C. D. Maldonado, J. Math. Phys. 6, 1935 (1965). 4. A. Erdelyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi, Higher Transcendental Functions (McGraw-Hill Book Co., Inc., New York, 1963), Vols. I and II. 5. M. H. Stone, Ann. Math. (2) 29, 1 (1927); G. Sansone, Orthogonal Functions (Interscience Publishers, Inc., New York, 1939), Chap. IV. 6. H. N. Olsen, C. D. Maldonado, G. D. Duckworth, and A. P. Caron, "Investigation of the Interaction of an External Magnetic Field with an Electric Arc," Final Report under Contract AF33 (615)-1105, ARL 66-0016 (January 1966). 7. H. N. Olsen, J. Quant. Spectry. Radiative Transfer 3, 305 (1963). 8. A. Erdelyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi, Tables of Integral Transforms (McGraw-Hill Book Co., Inc., New York, 1963), Vol. I. 9. W. Gröbner and N. Hofreiter, Integraltafel erster teil Unbestimmte Integrale (Springer-Verlag, Vienna, 1961). OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/josa/abstract.cfm?uri=josa-56-10-1305","timestamp":"2014-04-21T12:30:19Z","content_type":null,"content_length":"64154","record_id":"<urn:uuid:7b425f4f-d448-4f12-a283-e5f7a1dcf29b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Physicists propose solution to constraint satisfaction problems (PhysOrg.com) -- Maria Ercsey-Ravasz, a postdoctoral associate and Zoltan Toroczkai, professor of physics at the University of Notre Dame, have proposed an alternative approach to solving difficult constraint satisfaction problems. Their paper, "Optimization hardness as transient chaos in an analog approach to constraint satisfaction," was published this week in the journal Nature Physics. The approach proposed by Ercsey-Ravasz and Toroczkai involves Boolean satisfiability (k-SAT), one of the most studied optimization problems. It applies to a vast range of decision-making, scheduling and operations research problems from drawing delivery routes to the placement of circuitry elements in microchip design. The formal proof for the existence or non-existence of fast and efficient (polynomial-time) algorithms to solve such problems constitutes the famous P vs. NP problem. It is one of the six greatest unsolved problems of mathematics, called Millennium Prize Problems, with $1 million allocated to the solution of each problem, awarded by the Clay Institute of Mathematics. The paper proposes a mapping of k-SAT into a deterministic continuous-time (that is, analog) dynamical system with a unique correspondence between its attractors and the k-SAT solution clusters. It shows that as the constraints are increased, i.e., as the problems become harder, the analog trajectories of the system become transiently chaotic, signaling the appearance of optimization hardness. The proposed dynamical system always finds solutions if they exist, including for problems considered among the hardest algorithmic benchmarks. It finds these solutions in polynomial continuous-time, however, at the cost of trading search times for unbounded fluctuations in the system s energy function. The authors establish a fundamental link between optimization hardness and chaotic behavior, suggesting new ways to approach hard optimization problems both theoretically, using nonlinear dynamical systems methods, and practically, via special-purpose built analog devices. More information: www.nature.com/nphys/journal/vaop/ncurrent/full/nphys2105.html 5 / 5 (1) Oct 10, 2011 Most of the fight with the P=NP question was made through unsuccessful discrete approach. Translating it into continuous problem moves it to completely different kingdom of analysis, which is practically unexplored. I see above article sees the problem with huge amount of constrains such approach is believed to required, like in this paper: But it occurred that constrains are not required: k-SAT can be translated into just optimization of nonnegative polynomial, for example (x OR y) can be changed into optimizing ((x-1)^2 plus y^2)((x-1)^2 plus (y-1)^2)(x^2 plus (y-1)^2) and correspondingly 14 degree polynomial for alternatives of 3 variables. Zeros of sum of such polynomials for 3-SAT terms corresponds to the solution. Its gradient flow is dynamical system like it the article, but without constrains. ( http://www.usenet...&p=0 ) not rated yet Oct 10, 2011 For those who don't like the jargon: The exact(most efficient) solution to the traveling salesman problem is an example of what the authors want. They just word this differently. I don't know a description of Nature that avoids our use of constructs, especially our constructs of measure. "Constraints" for the salesman are the points he must visit. not rated yet Oct 10, 2011 "We show that beyond a constraint density threshold, the analog trajectories become transiently chaotic and the boundaries between the basins of attraction of the solution clusters become fractal..." - authors Let's translate this for the salesman and tell him what the above means for him: When the salesman has so many points to visit - too many points to visit - his movements (trajectories) appear to become chaotic and indeterminate, i.e., random. The salesman remains undaunted by the description the authors have bestowed upon him. He knows his trajectories are simply misleading the researchers and that his travels are optimal. The salesman can't wait to meet the researchers and tell them the optimal solution to his problem - regardless of how many points he must visit. 1 / 5 (1) Oct 10, 2011 Kind of fuzzy logic? not rated yet Oct 10, 2011 fuzzy logic is where you take a classic problem with two solutions say is the light on -- yes / no - or 1/0 -- and the fuzzy equivalent is kind of like making logic understand terms like mostly, kind of, and slightly. so is the light on is no longer yes - it is slightly on - no longer 1 for yes but .25 Fuzzy logic allows a computer to evaluate the question - is the car going fast -- well the discrete measure of speed is 55mph and that equates to say .60 for fast but a discrete measure of 95 mph would translate into .95. The equation that translates a discrete measure to its fuzzy answer does not need to be a linear equation - but it must satisfy the condidtion of an answer between 0 - 1. not rated yet Oct 10, 2011 The overall effectiveness of this algorithm will in the end be determined by how long it takes to compute the solutions to practical problems. After all, Quick-Sort is O(n*n) while Merge- Sort is O (n*logn), but for most problems (even very large ones) qsort with a random pivot is faster. I solve the kinds of problems this new algorithm addresses in my every day work. I look forward to seeing what (if any) results are published on practical chip design layout optimization (for not rated yet Oct 10, 2011 Then solve exactly the traveling salesman problem. And line your pocketbook with $1 million allocated to the solution. I recommend you follow Grigori Perelman example. Once you are chosen to accept the prize, turn the prize down. not rated yet Oct 10, 2011 Can anyone access the article and share the PDF please? My department doesn't subscribe to Nature Physics. Hush1: That would set a nice tradition :) not rated yet Oct 10, 2011 Then solve exactly the traveling salesman problem. And line your pocketbook with $1 million allocated to the solution. I recommend you follow Grigori Perelman example. Once you are chosen to accept the prize, turn the prize down. I never said that we solve the problems optimally. There are a large number of companies out there that solve these and other NP-complete type problems using heuristics. We can never be as optimal as we like of course. Every printed circuit board, chip layout, and other design solution requires (as close as possible) solutions to these problems. What do you think we do? Throw our hands in the air and pronounce them not rated yet Oct 10, 2011 Endorsing the share request too. http://docs.googl..._web.pdf Optimization hardness as transient chaos in an analog approach to constraint satisfaction&hl=en&gl=us&pid=bl&srcid= He is touching as many bases as possible. Unusual in theoretical physics. not rated yet Oct 10, 2011 I encouraged you. You know exactly how close you are to the optimal in your solutions. Even if the tools - the heuristic means - don't take you to the optimal, you are close. Never liking an exact solution for optimization? Don't we both have to laugh at this point? lol My words were not meant to entice, lure, or trigger a defensive rhetorical question about what you do. Intuitively, we both know, for the salesman, there exist an exact solution. No can get this close and "Throw their(our) hands in the air and pronounce them (him) unsolvable?" That rhetorical question answers itself. An answer we both share.
{"url":"http://phys.org/news/2011-10-physicists-solution-constraint-satisfaction-problems.html","timestamp":"2014-04-18T11:35:45Z","content_type":null,"content_length":"83106","record_id":"<urn:uuid:366674cf-fd11-49ba-8bfd-cd15683f6552>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Emmy Noether Noether, Emmy (Amalie Emmy Noether)ämälˈyə ĕmˈē nöˈtər, 1882–1935, German mathematician, b. Erlangen, Germany, grad. Univ. of Erlangen (Ph.D. 1908). She made important contributions to the development of abstract algebra, which studies the formal properties, e.g., associative law, commutative law, and distributive law, of algebraic operations. In 1915 she joined David Hilbert and C. F. Klein at Göttingen Univ. at their invitation, and finally secured an official appointment there in 1919 (although without a salary until after 1922). At Göttingen, Noether developed the theories of ideals and of noncommutative algebras; she also proved two theorems concerning the connection between symmetries and conservation laws, the first of which has been particularly important to the development of modern physics. When the Nazis dismissed her and other Jewish professors in 1933, she immigrated to the United States, briefly teaching at Bryn Mawr College and at the Institute for Advanced Study, Princeton, before she died. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
{"url":"http://www.factmonster.com/encyclopedia/people/noether-emmy.html","timestamp":"2014-04-17T04:40:14Z","content_type":null,"content_length":"20489","record_id":"<urn:uuid:dbe16bec-a7bc-41e8-a836-49daf02140d6>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Electron. J. Diff. Eqns., Vol. 2007(2007), No. 110, pp. 1-10. Positive periodic solutions of neutral logistic difference equations with multiple delays Yongkun Li, Qian Chen Abstract: Using a fixed point theorem of strict-set-contraction, we some established the existence of positive periodic solutions for the neutral logistic difference equation, with multiple delays, Submitted May 22, 2007. Published August 14, 2007. Math Subject Classifications: 34K13, 34K40, 92B05. Key Words: Positive periodic solution; neutral functional difference equation; strict-set-contraction. Show me the PDF file (217 KB), TEX file, and other files for this article. │ Yongkun Li │ │ Department of Mathematics, Yunnan University │ │ Kunming, Yunnan 650091, China │ │ email: yklie@ynu.edu.cn │ │ Qian Chen │ │ Department of Mathematics, Yunnan University │ │ Kunming, Yunnan 650091, China │ Return to the EJDE web page
{"url":"http://ejde.math.txstate.edu/Volumes/2007/110/abstr.html","timestamp":"2014-04-16T10:31:55Z","content_type":null,"content_length":"1725","record_id":"<urn:uuid:e1566eac-cbe2-4c55-97e5-941a8ddb378b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
prove function is one-to-one February 8th 2010, 07:56 AM #1 Junior Member Oct 2009 prove function is one-to-one I am asked to prove that the following function is one-to-one. $f(x)=\sqrt(x), x \geq 0$ I understand that the proof template for this is: Suppose f(x) = f(y) ... then x = y. So, is this sufficient? Suppose $f(x) = f(y)$. Then $\sqrt(x) = \sqrt(y)$. Square both sides, and we have that $x = y$. Therefore, $f(x)$ is one-to-one. Well that works. Does it not? BTW: In LaTeX [tex]\sqrt{(x)} = \sqrt{(y)}[/tex] gives $\sqrt{(x)} = \sqrt{(y)}$. Note the extra braces. Thanks Plato. It just seemed so trivial, I wasn't sure if I was missing something. I've been making an effort to do all my homework in latex - I thought that square root looked funny. thanks for the tip! I am asked to prove that the following function is one-to-one. $f(x)=\sqrt(x), x \geq 0$<---This is not a function. I understand that the proof template for this is: Suppose f(x) = f(y) ... then x = y. So, is this sufficient? Suppose $f(x) = f(y)$. Then $\sqrt(x) = \sqrt(y)$. Square both sides, and we have that $x = y$. Therefore, $f(x)$ is one-to-one. You do not have a function. By definition, every element in the domain of a function must have only one image in the codomain. A square root of a number has two values, the positive and negative. No, that's not correct. The Function square root always gives you a positive value. $\sqrt{4}=2eq -2=-\sqrt{4}$. Another thing, is to solve a cuadratic equation $x^2=4$ that effectively has 2 That's incorrect. The correct function should be $f(x)= |\sqrt{x}|$, for $x >0$. $f(x)= \sqrt{x}$, for $x >0$ is not a function. By definition, the square root of a positive number has a positive and a negative values, and the square root of a negative number has the real and the imaginary parts. That's incorrect. The correct function should be $f(x)= |\sqrt{x}|$, for $x >0$. $f(x)= \sqrt{x}$, for $x >0$ is not a function. By definition, the square root of a positive number has a positive and a negative values, and the square root of a negative number has the real and the imaginary parts. novice you are wrong, sadly wrong on this point. $f(x)=\sqrt{x}$ is function with domain $[0,\infty )$. You make a common mistake made by many novices. Here it is $\sqrt{4}=2$ and it is absolutely true that $\sqrt{4} ~{\color{blue}ot=}-2$. It is true that $4$ has two square roots. They are $\sqrt{4}~\&~-\sqrt{4}$. Glad to hear from you. I thought you were ignoring us. That's what I learned in highschool. I thought it was an over simplification. February 8th 2010, 08:23 AM #2 February 8th 2010, 09:05 AM #3 Junior Member Oct 2009 February 8th 2010, 10:56 AM #4 Sep 2009 February 8th 2010, 10:59 AM #5 Feb 2009 February 8th 2010, 12:15 PM #6 Sep 2009 February 8th 2010, 12:32 PM #7 February 8th 2010, 12:45 PM #8 Sep 2009
{"url":"http://mathhelpforum.com/discrete-math/127778-prove-function-one-one.html","timestamp":"2014-04-23T23:02:23Z","content_type":null,"content_length":"60204","record_id":"<urn:uuid:ac9f813f-fc35-40a4-8740-68a184c6b831>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Manassas, VA Algebra 1 Tutor Find a Manassas, VA Algebra 1 Tutor I am a current student at George Mason University studying Biology which allows me to connect to other students struggling with certain subjects. I tutor students in reading, chemistry, anatomy, and math on a high school level and lower. I hope to help students understand the subject they are working with by repetition, memorization, and individualized instruction. 9 Subjects: including algebra 1, English, reading, anatomy ...I have always felt that Mathematics is interesting and can be fun. I can break down difficult concepts, thereby helping students understand the concepts in all types and aspects of math problems and the step-by-step process of solving them. My primary focus is building the student’s confidence, through thorough explanation of the basic concepts involved, by interactive practice and feedback. 7 Subjects: including algebra 1, geometry, algebra 2, trigonometry ...The GED Math test includes number computations, Algebra, and Geometry. You probably remember more than you think you do. Together we can review concepts and then practice in test-like conditions so you feel completely confident walking into your test. 10 Subjects: including algebra 1, geometry, algebra 2, GED ...After that, I got my master's degree from George Mason University's School of Public Policy in 2010. I have been tutoring since high school. I know French, Spanish and Portuguese. 19 Subjects: including algebra 1, Spanish, English, French Hi! My name is Paul, and I'm here to help students improve their math and science scores. I've always been passionate about teaching, and WyzAnt provides a flexible platform that brings the most qualified tutor directly to the student they can best help. 29 Subjects: including algebra 1, reading, calculus, geometry Related Manassas, VA Tutors Manassas, VA Accounting Tutors Manassas, VA ACT Tutors Manassas, VA Algebra Tutors Manassas, VA Algebra 2 Tutors Manassas, VA Calculus Tutors Manassas, VA Geometry Tutors Manassas, VA Math Tutors Manassas, VA Prealgebra Tutors Manassas, VA Precalculus Tutors Manassas, VA SAT Tutors Manassas, VA SAT Math Tutors Manassas, VA Science Tutors Manassas, VA Statistics Tutors Manassas, VA Trigonometry Tutors Nearby Cities With algebra 1 Tutor Annandale, VA algebra 1 Tutors Burke, VA algebra 1 Tutors Centreville, VA algebra 1 Tutors Chantilly algebra 1 Tutors Fairfax, VA algebra 1 Tutors Falls Church algebra 1 Tutors Herndon, VA algebra 1 Tutors Manassas Park, VA algebra 1 Tutors Mc Lean, VA algebra 1 Tutors Reston algebra 1 Tutors Springfield, VA algebra 1 Tutors Stafford, VA algebra 1 Tutors Sterling, VA algebra 1 Tutors Vienna, VA algebra 1 Tutors Woodbridge, VA algebra 1 Tutors
{"url":"http://www.purplemath.com/manassas_va_algebra_1_tutors.php","timestamp":"2014-04-21T14:44:34Z","content_type":null,"content_length":"24094","record_id":"<urn:uuid:7b59fc34-aafa-4c49-963a-04dfded73ef2>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from June 2008 on The Gauge Connection We have seen here that the gluon propagator can be fitted with a Yukawa form that gives a mass gap $m_0=1.25 \times 440 MeV = 550 MeV$. The pure number 1.25 fixes the ground state of the theory. Higher excited states in the spectrum will be characterized by a greater pure number multiplying the QCD constant of 440 MeV. The question we ask here is: Are there other lattice computations supporting this view? We show here that this is indeed the case. As already pointed out, the spectrum in D=3+1 for Yang-Mills theory, also claimed to be a glueball spectrum, has been computed by Teper et al. and Morningstar et al. But these computations generally give the first few states for each kind of resonance. So to say, one has just $0^{++}$ and $0^{++*}$ and this may not be too convincing. But there is a paper on arxiv due to Harvey Meyer, that worked with Teper’s Group in Cambridge, that extends the computations further (see here). This is unpublished material being a PhD thesis. We can sum up the pure numbers obtained by Meyer for the $0^{++}$ case with the relative error: Now we assume that this spectrum has the simple form of an harmonic oscillator, that is we take $m_n=(2n+1)m_0$. Our aim will be to obtain $m_0$ in close agreement with computations for the gluon propagator. We obtain the following table There is a lot to say about this result. It is striking indeed! We get $m_0$ practically the same as the one obtained from lattice computations of the gluon propagator that, as said above was 1.25, if we consider that the error increases with $n$. But what is more, one can extrapolate the above results to $n=0$ obtaining the proper ground state of the theory again recognizing the $\sigma$ resonance. We just note that if we use the improved values 3.55(7) and 5.69(10) by Teper et al. (see here) we get a still better agreement obtaining 1.183 with an error of 0.023 and 1.14 and error of 0.05, making $m_0$ very near the value 1.19 that we claimed would give an improved spectrum in AdS/CFT computations (see here). This computation makes clear a point. Present lattice computations of the glueball spectrum are not fully satisfactory for D=3+1 as they do not seem to hit the right ground state of the theory as is done by lattice computations for the propagators. I think some improvement is needed here for a better understanding of the situation. We will have more to say about Yang-Mills spectrum in future posts. How far are we from a proof of existence of the mass gap? A well acquired fact is that there are seven problems that will be awarded with a rich prize by the Clay Institute (see here). One of these problems touch us that work on QCD very near. This is the question of the existence of a mass gap for a Yang-Mills theory (see here). I would like to emphasize that Clay Institute is a mathematical institution and, as such, it is acquired that a proof given by a physicist could not be enough to satisfy the criteria of professional mathematicians to be called a proof. Anyhow, one can always suppose that a sound mathematical idea due to some physicists can be made rigorous enough by the proper intervention of a mathematician. But this last passage is generally neither that simple nor obtainable in a short time. As we have discussed here, present lattice results are already enough to have given to physicists a sound proof of the existence of a mass gap for a Yang-Mills theory in D=3+1. In this post I will avoid to discuss about theoretical work in D=3+1 but rather I would like to point out some relevant work appeared for the case D=2+1. In this case there are two categories of papers to be considered. The first category corresponds to the works of Bruce McKellar and Jesse Carlsson. These works are largely pioneering. In these papers the authors consider the theory on the lattice and try to solve it through analytical means (here, here and here appeared in archival journals). They reached a relevant conclusion: The spectrum of the Yang-Mills theory in D=2+1 is that of an harmonic oscillator. This conclusion should be compared with the results in D=3+1 on the lattice. We have seen here that looking in a straightforward way to these computations one arrives easily to the conclusion that the theory is trivial. Yes, it has a mass gap but is trivial. The only missing block is the spectrum. So, Carlsson and McKellar give us the missing step. Also the spectrum is consistent with the view that the theory is trivial. Then we look at the second category of papers. These papers arose from an ingenious idea due to Kim, Karabali and Nair (e.g. see here) that introduced the right variables to manage the theory. In this way one reduces the problem to the one of diagonalizing a Hamiltonian obtaining eigenstates and eigenvalues. Building on this work, Leigh, Minic and Yelnikov were able to postulate a new nontrivial form of the ground state wavefunctional producing the spectrum of the theory in D=2+1 in closed analytical form (see here and here). The spectrum was given as the zeros of Bessel functions that in some approximation can be written as that of an harmonic oscillator. The open problem with this latter approach relies on the proof of existence of the postulated wavefunctional. This may not be easy. The conclusion to be drawn from this is that we have already sound evidences that a mass gap for Yang-Mills theory exists. These proofs could not be satisfactory for a mathematician but surely for us physicists give a solid ground to work on. A note about AdS/CFT and Yang-Mills spectrum There is a lot of activity about using this AdS/CFT symmetry devised in the string theory context to obtain the spectrum of a pure Yang-Mills theory in D=3+1. To have an idea about one can read here. The essential point about this approach is that it does not permit the computation of the mass gap. Rather, when given the ground state value as an input, it should be able to obtain all the spectrum. I should say that current results are indeed satisfactory and this approach is worthing further pursuing. Apart from this, we have seen here that the ground state, as derived using the lattice computations of the gluon propagator, is quite different from current results for the spectrum as given here and here. Indeed, from the gluon propagator, fixing the QCD constant at 440 MeV, we get the ground state at 1.25. Teper et al. get 3.55(7) and Morningstar et al (that fix the constant at 410 MeV) get about 4.16(11). Morningstar et al use an anisotropic lattice and this approach has been problematic in computations for the gluon propagator ( see the works by Oliveira and Silva about) producing disagreement with all others research groups or, at best, a lot of ambiguities. In any case we note a large difference between propagator and spectrum computations for the ground state. This is a crucial point. Experiments see f0(600) or $\sigma$. Propagator computations see f0(600) and $\sigma$ but spectrum computations do not. Here there is a point to be clarified between these different computations about Yang-Mills theory. Is it a problem with lattice spacing? In any case we have a discrepancy to be understood. Waiting for an answer to this dilemma, it would be interesting for people working on AdS/CFT to lower the input ground state assuming the one at 1.25 (better would be 1.19 but we will see this in future posts) and then checking what kind of understanding is obtained. E.g. is the state at 3.55 or 4.16 recovered? Besides, it would be nice to verify if some kind of regularity is recovered from the spectrum computed in this way. A nice paper Today in arxiv a new paper appeared by Dudal, Gracey, Sorella, Vandersickel and Verschelde here. This is a cooperation between Universities in Belgium, UK and Brazil. These authors have found a way to keep alive the Gribov-Zwanziger scenario by working out an extended Yang-Mills Lagrangian. Indeed, by taking into account the recent lattice results, these authors modify Yang-Mills Lagrangian to be consistent with them by adding meaningful terms that maintain renormalizability and prove that in this way the Gribov-Zwanziger scenario survives. This work is a continuation of other interesting works by the same authors. The only point to be emphasized is that in this case we lose a minimality criterion that would imply the formation of a mass gap through the dynamics of the theory itself. This latter point seems the one supported by lattice computations as we discussed here. Meaning of lattice results for the gluon propagator We have shown the results of the lattice computations of the gluon propagator here. It would be important to get an understanding of what is going on in the low momentum limit. Indeed, this is not impossible but rather very easy. The reason for this relies on the fact that we assume that a mass gap forms in the limit of low momenta explaining the short range proper to nuclear forces. But there is more to say. This mass gap must represent an observable resonance in the spectrum of the light scalar mesons. We expect this as Yang-Mills theory is the one describing nuclear forces and if this theory is the true one, apart mixing with quarks states, its spectrum must be observed in nature. In order to pursue our aims we consider the data coming from Attilio Cucchieri and Teresa Mendes being those for the largest volume $(27fm)^4$here. If there is a mass gap the simplest form of propagator to use (in Euclidean metric) is the Yukawa one being $A$ a constant and $m$ the mass gap entering into the fit. We obtain the following figure for $A=1.1441$ and $m=550 MeV$. This result is really shocking. The reason is that a resonance with this mass has been indeed observed. This is f0(600) or $\sigma$ and recent analysis by S. Narison, W. Ochs and others ( see here and here) and chiral perturbation theory as well (see here or here) have shown that this resonance has indeed a large gluonic content. But this is not enough. If we assume that the computation by Cucchieri and Mendes used a QCD constant of about 440 MeV we can take the ratio with the mass gap to obtain an adimensional number of about 1.25. But for the QCD constant, that should be fixed in any computation on the lattice, there is no general agreement about and this constant should be obtained from experimental data. When one is able to solve Yang-Mills theory by analytical means this constant is an integration constant arising from the conformal invariance of the theory. The other value that is generally chosen by lattice groups is 410 MeV. This would give a mass gap of about 512 Mev that is very near the value obtained from experimental data ( e.g. see here). So, the conclusions to be drawn from lattice computations are really striking. We have seen that the ghost behaves like a free particle, i.e. decouples from the gluon field. On the same ground it is also seen that lowering momenta makes the running coupling going to zero. Together with a fit to a Yukawa propagator we recognize here all the chrisms of a trivial theory! The same that is believed to happen to scalar field theory in the same limit in four dimensions. Indeed, this is not all the story. The spectrum of a Yang-Mills theory is more complex and higher excitations than $\sigma$ are expected. We will discuss this in the future and we will see how to recover the spectrum of light scalar mesons, at least for the gluonic part, without losing the property of triviality proper to the theory. Strong perturbation theory In another post we have defined what a strongly perturbed physical system is. This implied that we know how to do strong perturbation theory, that is, one takes her preferred differential equation and get a solution series for the case of a large parameter. We show here that is indeed the case. So, let us consider as an example the following differential equation: $\dot y(t)+y(t)+\lambda y(t)^2=0$. We know the exact solution of this equation being $y(t) = \frac{1}{\lambda(e^t-1)+e^t}$ to be compared to our approximated ones. The weak perturbation case, $\lambda\rightarrow 0$, asking for a solution in the form $y(t)=y_0(t)+\lambda y_1(t) + \lambda^2 y_2(t) + O(\lambda^3)$ gives the set of equations $\dot y_0+y_0=0$ $\dot y_1 + y_1=-y_0^2$ $\dot y_2 + y_2=-2y_0y_1$ and finally $y(t)=e^{-t}+\lambda e^{-2t}+\lambda^2 e^{-3t}+O(\lambda^3)$ and from the numerical comparison for $\lambda=0.005$ we get the curves that is really satisfactory. We now look for a solution series in the form but a direct substitution into the original equation gives nonsense. There is one more step to do and this is a rescaling in time, that is we use instead of $t$ the scaled variable $\tau=\lambda t$. After this substitution is accomplished we can put the strong perturbation series into the equation and obtain the meaningful set of differential equations $\dot z_0+z_0^2=0$ $\dot z_1+z_0+2z_0z_1=0$ $\dot z_2+z_1+z_1^2+2z_2z_0=0$ where now “dot” means derivation with respect to $\tau$. We have finally the solution series, undoing the rescaling in time, $y(t)=\frac{1}{\lambda t+1}-\frac{1}{\lambda}\frac{\frac{\lambda^2 t^2}{2}+\lambda t}{(\lambda t+1)^2}+\frac{1}{\lambda^2}\frac{\frac{\lambda^3t^3}{12}+\frac{\lambda^2t^2}{4}+\frac{\lambda t}{4}+\ frac{1}{4(\lambda t +1)}-\frac{1}{4}}{(\lambda t + 1)^2}+O\left(\frac{1}{\lambda^3}\right)$ Numerical comparison with $\lambda=100$ gives that is almost perfect in the coincidence between the exact and the approximate case. The method indeed works! We note the following: • From the set of equations of the strong perturbation case we note that we have just interchanged the perturbation with respect to the weak perturbation case to obtain the series with inverted expansion parameter. This serves just as a bookeeper but it is not needed. This is duality in perturbation theory (look here and here). • The strong perturbation series is a series in small times. Indeed the time scale is set by $\lambda$ that decides how far can we go into the time scale for the comparison. It is just curious that no mathematician in the history was able to get such a method out understanding that it was just a rescaling of the independent variable away. I was lucky as this did not So, as said at the start, you can take your preferred differential equation and sort out a solution series in a regime you have never seen before. Have fun! Gluon and ghost propagators on the lattice For a true understanding of QCDat low energies we need to know in a clear way the behavior of the gluon and ghost propagators in this limit. This should permit to elucidate the general confinement mechanism of a Yang-Mills theory in the strong coupling limit. Things are made still more interesting if this comprehension should imply a dynamical mechanism for the generation of mass. The appearance of a mass in a otherwise massless theory gives rise to the so-called “mass gap” that should appear in a Yang-Mills theory to explain the short range behavior of the strong force. A lot of computational effort has been devoted to this task mainly due to three research groups in Brazil, Germany and Australia. These authors aimed to reach larger volumes on the lattice as some theoreticians claimed loudly that finite volume effects was entering in these lattice results. This because all the previous theoretical research using the so called “functional methods”, meaning in this way the solution of Dyson-Schwinger equations in some approximation, pointed toward a gluon propagator going to zero at low momenta, a ghost propagator going to infinity faster than the free case and the running coupling (defined in some way) reaching a fixed point in the same limit. These theoretical results support views about confinement named Gribov-Zwanziger and Kugo-Ojima scenarios after the authors that proposed them. Two points should be emphasized. The idea that the running coupling in the infrared should go to a fixed point is a recurrent prejudice in literature that has no theoretical support. The introduction of a new approach to solve the Dyson-Schwinger equations by Alkofer and von Smekal supported this view and the confinement scenarios that were implied in this way. These scenarios need the gluon propagator going to zero. But these authors are unable to derive any mass gap in their theory and so we have no glueball spectrum to compare with experiment and other lattice computations. Anyhow, this view has been behind all the research on the lattice that was accomplished in the last ten years and whatever computation was done on the lattice, the gluon propagator refused to go to zero and the running coupling never reached a fixed point. It should be said that some voices out of the chorus indeed there are. These are mainly the group of Philippe Boucaud in France and some numerical works on Dyson-Schwinger equations due to Natale and Aguilar in Brazil. These authors were able to describe the proper physical situation as appears today on lattice but they met difficulties due to skepticism and prejudices about their work due to the mainstream view described above. Since 2005 I proposed a theoretical approach that fully accounts for these results and so I just entered into the club of some heretical view! I will talk of this in my future posts. After the Lattice 2007 Conference (for the proceedings see here) people working on lattice decided to not pursue further increasing volumes as now an agreement is reached on the behavior of the propagators. This view is at odds with functional methods and no precise understanding exists about why things are so. Some results in two dimensions agree with functionalmethods but the theory is known to be trivial in this case (there is a classical paper by ‘t Hooft about). To give you an idea of the situation I have put the following lattice results at $(13.2fm)^4$here, $(19.2fm)^4$here and $(27fm)^4$here respectively: and finally Then we can sum up the understanding obtained from the lattice about Yang-Mills theory: • Gluon propagator reaches to finite value at low momenta. • Ghost propagator is that of a free particle. • Running coupling, using the definition of Alkofer and von Smekal, goes to zero at low momenta. These results agree perfectly with the ones obtained by Boucaud et al., Aguilar and Natale and myself. In a future post we will discuss the way these lattice results seem to give a consistent view with current phenomenology of light scalar mesons. When is a physical system strongly perturbed? Generally speaking, a physical system is described by a set of differential equations whose solution is given by a state variable $u$. So, one has $L_0(u)=0$. The solution of this set of differential equations gives all one needs to obtain observables, that is numbers, to be compared with experiments. When the physical system undergoes the effect of a perturbation $L_1(u)$ the problem to be solved is $L_0(u)+\lambda L_1(u) = 0$ and we say that perturbation is small (or weak) as we consider the limit $\lambda\rightarrow 0$. In this case we look for a solution series in the form $u=u_0+\lambda u_1+O(\lambda^2)$. The case of a strong perturbation is then easily obtained. We say that a physical system is strongly perturbed when the limit $\lambda\rightarrow\infty$ is considered and the solution series we look for has the form We will see that this solution series can generally be built and can be obtained by a “duality principle” in perturbation theory arising from the freedom that exists in the choice of what is the perturbation and what is left unperturbed. A World of Physics and a lot more This is the start-up post of this new physics blog. I will talk about a lot of questions I care about, mostly current research open problems that need some tribune to be heard. But I will also discuss some other arguments that could be of interest to a larger audience. Most of the questions on physics that pass to the media are about string theory, unification, quantum gravity and all other aspects of fundamental research that can hit the interest of common people. But one should consider that most of the breakthroughs that happened in science have come from unexpected directions. And last but not least, there is generally an average time, that is not short, before an important finding becomes a recognized and acquired truth. This means that is important to make knowledge to flow as fastly as possible and blogs are an excellent way to achieve such a goal. I am working mostly on QCD and for this particular field there are different alleys that are currently pursued and just now something is going to converge for experiments, lattice and theory and is worthwhile to let everybody know the work of these people that someday could impact in a more generally way to all the community. The reason to expect this relies on the fact that QCD is a theory eminently not perturbative and a lot of innovative views should be expected here. Indeed, one of the most serious problems to face in physics is the treatment of problems where a large parameter enters that makes the problem non perturbative, that is, impossible to manage with perturbation theory. Till now, perturbation theory appears as the only serious approach to get analytical solutions to equations and needs a small parameter to work. Despite several important attempts as the renormalization group and a few of exact solutions, in the end of the day one has to rely on this venerable approach to extract physically meaningful results from equations. This approach has become such a part of physics that a lot of prejudices invaded some fields where it applies. A clear example of this is quantum field theory. Quantum field theory works only through small perturbation theory and so, renormalization that was born with it is believed to be inherent to any quantum field theory independently on the method one use to treat it. But this latter point of view is to be proved! So, let us begin. Happy blogging!
{"url":"http://marcofrasca.wordpress.com/2008/06/","timestamp":"2014-04-18T18:16:10Z","content_type":null,"content_length":"135251","record_id":"<urn:uuid:dd48bf22-a20b-4c51-ac3b-f44a42cedab7>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
fun with "necessarily" abelian October 25th 2010, 02:35 PM #1 Junior Member Sep 2010 fun with "necessarily" abelian I have this problem: Suppose that $X$ is a group with $x^2=1$ for all $x \in X$. Show that $X$ is necessarily abelian. Prove that if $X$ is finite, then $|X|=2^n$ for some $n \geq 0$ and $X$ needs at least $n$ generators. For the first part I did: $\forall x \in X$ , $x(1)=x=(1)x$, so: $e_X=1$ and: $x^2=e_X \, \forall x \in X \Rightarrow x = x^{-1} \, \forall x \in X$ $\forall x,y \in X:$ So $X$ is abelian. Is that the same as necessarily abelian? Not sure how to go about the second part... Last edited by MichaelMath; October 25th 2010 at 03:12 PM. What you proved is that a group X with the given property must be Abelian. This means, there can not exist a group with the same property that is not Abelian. (that's what they mean with necessarily Abelian) suppose there's a prime $peq 2$ such that $p$ divides $|X|$ then there's an element of order p in X....so? If it X had less then n generators....then $|X|=$...? For the first part, could I just have said : $\forall x,y \in X$ $x^2=1$ and $y^2=1$ and $(xy)^2=1$ so: $(xy)^2=x^2y^2$ and: $xy=(xy)^{-1}=y^{-1}x^{-1}=yx$ You could have, but the only important part is: $(\forall x)(x=x^{-1})$ This observation is made by multiplying $x^2=1$ with $x^{-1}$ on both sides... Hence $(\forall x)(\forall y)(xy = (xy)^{-1}= y^{-1}x^{-1}=yx)$ If $peq 2$ divides $|X|$ then there's an element of order $p$ in $X$....wich is a contradiction (why?) So $p$ can not divide $|X|$ and consequently, only 2 divides $|X|$ so $|X|=2^n$ If $X$ has $n-k$ generators then $|X|=2^{n-k}...$ (the generators must also be of order 2...) This is perfect, but you're using a fairly advanced theorem (that there exists an element of order $p$ for a prime $p$ dividing $|G|$). Another way to see it is that $G$ can be considered as a vector space over the field with two elements. Switching to additive notation, the above identities read as $2(x+y)=2x+2y$... since every vector space has a basis, we have an isomorphism $G \cong (Z_2)^n$ for some $n$, hence $|G|=2^n$. October 25th 2010, 02:53 PM #2 October 25th 2010, 03:04 PM #3 Junior Member Sep 2010 October 25th 2010, 03:10 PM #4 October 25th 2010, 04:12 PM #5 Junior Member Sep 2010 October 25th 2010, 04:25 PM #6 October 26th 2010, 09:49 AM #7
{"url":"http://mathhelpforum.com/advanced-algebra/160967-fun-necessarily-abelian.html","timestamp":"2014-04-17T19:15:49Z","content_type":null,"content_length":"58821","record_id":"<urn:uuid:3e236545-966b-4274-abc6-6f95db85ff8f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
papers on symbolic data dependence analysis Graham Jones <gxj@dcs.ed.ac.uk> Tue, 30 Aug 1994 09:39:26 GMT From comp.compilers | List of all articles for this month | Newsgroups: comp.compilers From: Graham Jones <gxj@dcs.ed.ac.uk> Keywords: analysis, summary, bibliography Organization: Department of Computer Science, University of Edinburgh Date: Tue, 30 Aug 1994 09:39:26 GMT In answer to my question In M. Haghighat and C. Polychronopoulos paper "Symbolic Dependence Analysis for High Performance Parallelising Compilers, Advances in Languages and Compilers for Parallel Processing, 1991" the authors indicate the large number of arrays with symbolic subscripts. Classical dependency tests fail in the presence of symbolic terms but can be assisted by global constant propagation, forward and induction variable substitution. Does anyone know of the number of cases that still remain unanalysable after these optimisations have taken place ? Given that some subsripts can still not be analysed, we need to develop symbolic dependency tests to handle these cases. Does anyone know of any current work and useful papers in this area ? I have received the following replies from Jim Davies jrbd@craycos.com Alain Darte darte@acri.fr Be'atrice Creusillet creusillet@cri.ensmp.fr Lode Nachtergaele nachterg@imec.be Many thanks for your help. I have included their replies for anyone interested in this area. If you know of any other works I would be glad to hear from you Graham Jones -------------------- jrbd@craycos.com ----------------------- I believe there was a study of different types of subscripts and their frequency in Fortran programs done by Zhiyuan Li and Pen-Chung Yew at CSRD a few years ago. It might have been in the ICPP proceedings from 1989 or 1990 -- I'm not sure. The authors are apparently both going to be at the University of Minnesota this fall; the last email addresses I have for them are li@cs.umn.edu and yew@csrd.uiuc.edu respectively. gxj > I looked this up, I believe it is gxj > @InProceedings{shen:89, gxj > author = " Z. Shen and Z. Li and P-C. Yew ", gxj > title = "An {E}mpirical {S}tudy on {A}rray {S}ubscripts and gxj > {D}ata {D}ependences", gxj > booktitle = "1989 International Conference on Parallel Processing", gxj > year = "1989", gxj > pages = "II-145--II-152" -------------------- darte@acri.fr ------------------------- The best references seem to be the work by Paul Feautrier: AUTHOR = {Paul Feautrier}, JOURNAL = {RAIRO Recherche Op\'{e}rationnelle}, MONTH = sep, PAGES = {243--268}, TITLE = {Parametric Integer Programming}, VOLUME = {22}, YEAR = {1988} AUTHOR = {Paul Feautrier}, ADDRESS = {North Holland}, BOOKTITLE = {Parallel and distributed algorithms}, EDITOR = {M. Cosnard and al.}, PAGES = {309-320}, PUBLISHER = {Elsevier Science Publishers B.V.}, TITLE = {Semantical analysis and mathematical programming}, YEAR = {1989} AUTHOR = {Paul Feautrier}, JOURNAL = {Int. J. Parallel Programming}, NUMBER = {1}, PAGES = {23-51}, TITLE = {Dataflow analysis of array and scalar references}, VOLUME = {20}, YEAR = {1991} The last reference explains the dependence analysis technique for flow dependences. This technique can be easily extended to other kind of dependences. The first reference describes the linear programming algorithm that is needed. It is of course a parametric algorithm. These algorithms have been implemented and I think are available. Ask directly to Paul Feautrier: Paul.Feautrier@prism.uvsq.fr. A lot of people now use this kind of technique as a base for their personal work: you may be interested by work by Vincent Van Dongen in Montreal, Patrice Quinton in Rennes, Yves Robert and myself in Lyon, etc.. If you are interested, I may send you some papers. --------------------- creusillet@cri.ensmp.fr ------------------ In PIPS, the Interprocedural Parallelizer of Scientific Programs developed at Ecole des Mines de Paris, interprocedural symbolic semantics analysis is performed in order to parallelize large Fortran programs. Several kind of information are propagated through the flow graph of each procedure or function and through the call graph. Briefly, the 'preconditions ' are predicates over scalar integer variable values, which hold true just before a statement is executed. It is an interprocedural version of Cousot and Halbwachs' algorithm. For instance, the predicate 'P(I,K,N,M) {1<=I,I<=N, K==I+N}' shows that variables I, K, N and M have already been assigned a value, and we know that the current value of I is between 1 and N and that the values of K, I, and N are constrained by K==I+N. The 'regions' are sets of array elements defined by affine equalities and inequalities. For instance the region <A(PHI1)-{I<=PHI1, PHI1<=I+1}> represents the set of the two elements A(I) and A(I+1) (PHI1 represents the elements of the first dimension). Regions are also propagated interprocedurally. They include the relations given by the repconditions. Preconditions and regions can be used to improve dependence analsyis. Experiments performed on codes from the Perfect Club yield encouraging & nb of array & independences found & pairs & prec. only & regions & & & arc2d & 6843 & 2311 & 2557 flo52 & 3717 & 1216 & 1446 mdg & 2971 & 200 & 342 track & 2028 & 107 & 200 trfd & 1335 & 2 & 6 You will find more precise information in the following papers: AUTHOR = "Irigoin, Franc,ois and Jouvelot, Pierre and Triolet, Re'mi ", TITLE = {Semantical Interprocedural Parallelization : An Overview of the {PIPS} project}, BOOKTITLE = {Supercomputing'91}, YEAR = 1991, MONTH = jun, AUTHOR = "Irigoin, Franc,ois", TITLE = {Interprocedural Analyses for Programming Environments}, BOOKTITLE = {Workshop on Environments and Tools for Parallel Scientific Programming, Saint-Hilaire du Touvier, France}, YEAR = 1992, MONTH = sep, AUTHOR = {Yang, Yi-Qing}, TITLE ={Tests des De'pendances et Transformations de Programme}, SCHOOL = {Universite Paris VI, France}, YEAR = 1993, NOTE = {in French}, MONTH = nov, ----------------------- nachterg@imec.be ------------------------ please find included a list of reference to sources that deal with dependence testing. Some of them only deal with static dependences. key = {Anc91}, author = {C. Ancourt and F. Irigoin}, title = {Scanning polyhedra with DO loops}, booktitle = {Third ACM Symposium on Princliples and Practice of parallel year = {1991}, month = {April}, pages = {pp. 39-50}, note = {no paper copy} key = {Ban93}, author = {Uptjal Banarjee}, title = {Loop transformations for restructuring compilers : The foundation}, year = {1993}, publisher = {Kluwer Academic Publisher} key = {Bar87}, author = {I. B{\'a}r{\'a}ny and Z. F{\"u}redi}, title = {Computing the volume is difficult}, journal = {Discrete Comput. Geom.}, year = 1987 , volume = 2 , pages = {319--326}, keywords = {approximation, volume, convex} key = {Bu88}, author = {Jichun Bu and Ed F. Deprettere}, title = {Converting Sequential Iterative Algorithms to Recurrent Equations for Automatic Design of Systolic Arrays}, booktitle = {ICASSP '88}, year = {1988}, pages = {pp. 2025-2028} key = {Bu88}, author = {Jichun Bu and Ed F. Deprettere}, title = {Analysis and Modeling of Sequential Iterative Algorithms for parallel and pipeline Implementations}, booktitle = {ISCAS'88}, organization = {IEEE}, year = {1988}, pages = {pp. 1961-1965} key = {Col93}, author = {Jean-Francois Collard and Paul Feautrier and Tanguy Risset}, title = {Construction of DO loops from systems of affine constraints}, institution = {Laboratoire de l'informatique du Parallelisme, Ecolo Normale Superieure de Lyon, Institut IMAG}, year = {1993}, month = {May}, note = {ftp://lip.ens-lyon.fr:pub/LIP/Rapports/RR/RR93/RR93-15.ps.Z} key = {Col93a}, author = {Jean-Francois Collard}, title = {Code generation in automatic parallelizers}, institution = {Laboratoire de l'informatique du Parallelisme, Ecolo Normal Superieure de Lyon, Institut IMAG}, year = {1993}, month = {July}, note = {ftp://lip.ens-lyon.fr:pub/LIP/Rapports/RR93/RR93-21.ps.Z} key = {Eis92}, author = {Christine Eisenbeis and Jean-Claude Sogno}, title = {A general algorithm for data dependence analysis}, year = {1992}, month = {May}, category = {ldde}, note = {Have to find the number yet} key = {Eis92}, author = {Christine Eisenbeis and Oliver Temam and Harry Wijshoff}, title = {On efficiently characterizing solutions of linear Diophantine equations and its application to data dependence analysis}, institution = {INRIA}, year = {1992}, month = {July}, address = {Domaine de Voluceau, 78153 Le Chersnay CEDEX, France} key = {Hav94}, author = {Paul Havlak}, title = {Interprocedural Symbolic Analysis}, year = {1994}, month = {May}, address = {Houston, Texas}, school = {Rice University} key = {Jeg93}, author = {Yvon Jegou}, title = {Characterization of program dependencies by integer programming techniques}, institution = {INRIA}, year = {1993}, month = {November}, address = {IRISA, Campus universitaire de Beaulieu, 35042 RENNES Cedex (France)}, number = {Nr. 2138} key = {Kul94}, author = {Dattatraya Kulkarni and Michael Stumm}, title = {Computational alignment: A new, unified program transformation for local and global optimization}, institution = {Computer Systems Research Institute, Depearment of computer science}, year = {1994}, month = {january}, address = {Toronto, Canada M5S 1A4}, note = {ftp://ftp.csri.toronto.edu:csri-technical-reports/292/292.ps.Z} key = {Mas94}, author = {Vadim Maslov}, title = {Lazy Array Data-Flow Dependence Analysis}, booktitle = {Proceedings of the 21st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages}, year = {1994}, month = {January}, pages = {pp. 311-325} key = {Mas94}, author = {Vadim Maslow and William Pugh}, title = {Simplifying Polynomial Constraints over integers to make dependence analysis more precise}, institution = {Dept. of computer Science, University of Maryland}, year = {1994}, month = {February}, address = {College Park, MD 20742}, number = {CS-TR-3109.1} key = {May92}, author = {Dror Eliezer Maydan}, title = {Accurate analysis of array references}, institution = {Computer systems laboratory, Stanford University, Stanford}, year = {1992}, month = {September}, number = {CSL-TR-92-547} key = {Mvs94}, author = {Michael F.X.B. van Swaaij and Frank H.M. Franssen and Francky V.M. Catthoor and Hugo J. De Man}, title = {Modeling Data Flow and Control Flow for DSP System Synthesis}, booktitle = {VLSI Design Methodologies for Digital Signal Processing Architectures}, year = {1994}, editor = {Magdy A. Bayoumi}, publisher = {Kluwer Academic Publishers}, pages = {pp. 219-259} key = {Psa93}, author = {Kleanthis Psarris and Xiangyun Kong and David Klappholz}, title = {The Direction Vector I test}, journal = {IEEE Transactions on Parallel and Distributed Systems}, year = {1993}, month = {November}, volume = {Vol. 4}, number = {No. 11}, pages = {pp. 1280} key = {Pug92}, author = {William Pugh and David Wonnacott}, title = {Static Analysis of Upper Bounds on Parallelism}, institution = {Institute for Advanced Computer Studies, Dept. of computer science}, year = {1992}, month = {November}, number = {CS-TR-2994}, category = {ldde} key = {Pug92}, author = {William Pugh and David Wonnacott}, title = {Eliminating False Data Dependences using the Omage Test}, booktitle = {SIGPLAN PLDI'92}, year = {1992} key = {Pug92}, author = {William Pugh and David Wonnacott}, title = {Going beyond Integer Programming with the Omega test to Eliminate False Data Dependences}, institution = {Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland}, year = {1992}, month = {December}, address = {College Park, MD 20742} key = {Puh93}, author = {William Pugh and David Wonnacott}, title = {An exact method for analysis of value-based array data institution = {Institute for advanced computer studies}, year = {1993}, month = {December}, address = {Univ. of Maryland, College Park, MD 20742}, number = {CS-TR-3196} key = {Qui94}, author = {Patrice Quinton and Sanjay Rajopadhye and Doran Wilde}, title = {Using static analysis to derive imperative code from ALPHA}, institution = {Irisa}, year = {1994}, month = {May} key = {Ram94}, author = {L.Ramachandran and D.Gajski and V.Chaiyakul}, title = {An Algorithm for Array Variable Clustering}, booktitle = {Proc. 5th ACM/IEEE Europ. Design and Test Conf.}, year = {1994}, month = {Feb.}, address = {Paris, France}, pages = {262-266} key = {Ver94}, author = {Herv\'{e} Le Verge and Vincent Van Dongen and Doran K. Wilde}, title = {Loop nest synthesis using the polyhedral library}, institution = {IRISA}, year = {1994}, month = {May} key = {VSw92}, author = {Michael F.X.B. van Swaaij}, title = {Data Flow Geometry: Exploiting Regularity in System-level year = {1992}, school = {Katholieke Universiteit Leuven} key = {Wal94}, author = {Edward Walker}, title = {Extracting dataflow information for parallelizing FORTRAN nested loop kernels}, year = {1994}, month = {April}, address = {Advanced Computer Architecture Group, The Department of Computer Scinece}, school = {The University of York}, note = {ftp://minster.york.ac.uk/pub/edward} key = {Wil93}, author = {Wilde, D.}, title = {A library for Doing Polyhedral Operations}, institution = {IRISA}, year = {1993}, month = {December}, address = {Rennes, France}, number = {Internal Publication 785} key = {Wol87}, author = {Michael Wolfe and Uptjal Banarjee}, title = {Data Dependence and its Application to Parallel Processing}, journal = {International Journal of Parallel Programming}, year = {1987}, volume = {Vol. 16}, number = {No. 2}, pages = {pp. 137-178} key = {Xin93}, author = {Zhaoyun Xing and Weijia Shang}, title = {Accurate Data Dependence Test}, year = {1993}, month = {May}, note = {Submitted to ASAP112A} Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/94-09-009","timestamp":"2014-04-19T09:24:15Z","content_type":null,"content_length":"33418","record_id":"<urn:uuid:f707f2c4-6fa7-4679-b8a5-e15f93f34468>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Question about the use of ratio variables Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Question about the use of ratio variables From Fernando Rios Avila <f.rios.a@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Question about the use of ratio variables Date Mon, 9 Jul 2012 22:43:43 -0400 I think your problem is similar to the division bias issue in Labor economics. I suggest you to look for the following paper: The Relationship between Wages and Weekly Hours of Work: The Role of Division Bias George J. Borjas The Journal of Human Resources Vol. 15, No. 3 (Summer, 1980), pp. 409-423 It discusses the problem, consequences and some solutions. On Mon, Jul 9, 2012 at 10:20 PM, hsini <hsini92@gmail.com> wrote: > Dear all, > I have a question about the use of ratio variables. > Is it problematic to have a regression equation like : (Yi/population) = b0+ b1*(Xi/Yi) +b2X2 + error? > Let's say Yi is the number of innovation per region, and Xi is the number of small firm innovation. The ratio variable is a very important measure since in my theory it represents the organizational ecology of the region. I excluded regions with zero innovation. However, I am afraid of any function problem since I have Yi on the both side of the equation. I have read Bradshaw and Radbill (1987) paper discussing the use of ratio variables. It seems to me they were dealing with the ratio is generated by a third variable (Z), which is a bit different from my situation. Any suggestion? > Sincerely, > Hsini Huang > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-07/msg00298.html","timestamp":"2014-04-20T21:09:34Z","content_type":null,"content_length":"9198","record_id":"<urn:uuid:07014d62-9634-4a41-a29c-284191ce51dd>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: A group of 3 boys and 2 girls are seated in a row of 5 chairs find the probability that they will be seated alternately..plz help.. • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51209c14e4b03d9dd0c54189","timestamp":"2014-04-20T06:35:43Z","content_type":null,"content_length":"63019","record_id":"<urn:uuid:7845e9a9-759c-4662-b055-a3b2f49f179e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Solve for w in P = 2 w + 2 l ,P = 38 and l= 12 Best Response You've already chosen the best response. So basically: 38=2*W +2*12 Best Response You've already chosen the best response. ummm is it not that you need to find w? then move the equation around to find w Best Response You've already chosen the best response. Best Response You've already chosen the best response. How do I do that? Best Response You've already chosen the best response. I'm not very good at doing that... Best Response You've already chosen the best response. well you have 38=2w+24 so move the 24 to the left by eliminating it on the right by -24 meaning also reducing -24 from the left => so 38-24=2w+24-24 that is 14=2w now divide both by two to obtain a 1 before w since 2/2=1 and 1w=w so 14/2=2/2w=w => w=7 Best Response You've already chosen the best response. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4db022d5d6938b0b067eab4d","timestamp":"2014-04-16T07:43:13Z","content_type":null,"content_length":"41921","record_id":"<urn:uuid:f1a329fc-ce4f-46cc-96a7-569ffe78bacc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Can you recommend a MATHS PRIMER type book? I ordered that book from Amazon that you recommended… “Mathematics of 3D Game Programming and Computer Graphics”… So um do you have a book you can recommend that sorta explains this one? I need some kind of Maths Primer that can give me though more of a introduction to maths in general that would be helpful in programming. I haven’t been at school for like over 20 years and haven’t looked at a math book since then… so like I said I need a book (with exercises would be great) to bush up… as I read that book and every thing was cool in chapter 1.. then when the maths came I was like.. I do not even know what these symbols they are using mean. So do you have any suggestions for a more beginning type person? There is a person, called Donald Knuth, that writes nice books about computer science in general. He’ve written a nice book about mathematics too, you can take a look at. Just Google for Don Knuth &
{"url":"http://devmaster.net/posts/15939/can-you-recommend-a-maths-primer-type-book","timestamp":"2014-04-16T23:12:40Z","content_type":null,"content_length":"13050","record_id":"<urn:uuid:ec462c97-245c-44bb-8521-74096cefd74b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
1. If f(x)=3x-4 and g(x)=x+2: find f-g 2. If f(x)=3x-4 and g(x) = x+2: find f-g Will someone help me solve these 2... - Homework Help - eNotes.com 1. If f(x)=3x-4 and g(x)=x+2: find f-g 2. If f(x)=3x-4 and g(x) = x+2: find f-g Will someone help me solve these 2 problems and get the correct answers, my answers were incorrect. my answers were: 1. x+2 2. 2x+2 f(x)=3x-4 g(x)=x+2 Gathering the terms in x together and the free ones together, the result will be: f(x)-g(x)= (3x-x)+(-4-2)=2x-6 *f(x)=3x-4 *g(x)=x+2 *you have put x and the free ones together, my result is>>> f(x)-g(x)= (3x-x)+(-4-2)=2x-6 that is my answer but 3x-x if you want not put () ok...bye.. if f(x)=2xsquare-x-3 and g(x)=x+1 find f=g i came up with 2xsuare-2 1) Given that f(x)= 3x-4 and g(x)= x+2 To find f-g: f(x)-g(x)= 3x-4-(x+2) = 3x-4-x-2 = 2x-6, so your answer is wrong. I think maybe you type wrongly for the 2nd problem, it is the same as the first one. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/1-f-x-3x-4-g-x-x-2-find-f-g2-f-x-3x-4-g-x-x-2-find-58759","timestamp":"2014-04-17T21:32:12Z","content_type":null,"content_length":"31315","record_id":"<urn:uuid:0d56934b-93ed-4499-bb69-22fd02ff4798>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Devlin's Angle This month’s column comes in lecture format. It’s a narrated videostream of the presentation file that accompanied the featured address I made recently at the MidSchoolMath National Conference , held in Santa Fe, NM, on March 27-29. It lasts just under 30 minutes, including two embedded videos. In the talk, I step back from the (now largely metaphorical) blackboard and take a broader look at why we and our students are there is the first place. Download the video here Because this blog post covers both mountain biking and proving theorems, it is being simultaneously published in Devlin’s more wide ranging blog profkeithdevlin.org. In my post last month, I described my efforts to ride a particularly difficult stretch of a local mountain bike trail in the hills just west of Palo Alto. As promised, I will now draw a number of conclusions for solving difficult mathematical problems. Most of them will be familiar to anyone who has read George Polya’s classic book How to Solve It. But my main conclusion may come as a surprise unless you have watched movies such as Top Gun or Field of Dreams, or if you follow professional sports at the Olympic level. Here goes, step-by-step, or rather pedal-stroke-by-pedal-stroke. (I am assuming you have recently read my last post.) BIKE: Though bikers with extremely strong leg muscles can make the Alpine Road ByPass Trail ascent by brute force, I can't. So my first step, spread over several rides, was to break the main problem—get up an insanely steep, root strewn, loose-dirt climb—into smaller, simpler problems, and solve those one at a time. MATH: Breaking a large problem into a series of smaller ones is a technique all mathematicians learn early in their careers. Those subproblems may still be hard and require considerable effort and several attempts, but in many cases you find you can make progress on at least some of them. The trick is to make each subproblem sufficiently small that it requires just one idea or one technique to solve it. In particular, when you break the overall problem down sufficiently, you usually find that each smaller subproblem resembles another problem you, or someone else, has already solved. When you have managed to solve the subproblems, you are left with the task of assembling all those subproblem solutions into a single whole. This is frequently not easy, and in many cases turns out to be a much harder challenge in its own right than any of the subproblem solutions, perhaps requiring modification to the subproblems or to the method you used to solve them. BIKE: Sometimes there are several different lines you can follow to overcome a particular obstacle, starting and ending at the same positions but requiring different combinations of skills, strengths, and agility. (See my description last month of how I managed to negotiate the steepest section and avoid being thrown off course—or off the bike—by that troublesome tree-root nipple.) MATH: Each subproblem takes you from a particular starting point to a particular end-point, but there may be several different approaches to accomplish that subtask. In many cases, other mathematicians have solved similar problems and you can copy their approach. BIKE: Sometimes, the approach you adopt to get you past one obstacle leaves you unable to negotiate the next, and you have to find a different way to handle the first one. MATH: Ditto. BIKE: Eventually, perhaps after many attempts, you figure out how to negotiate each individual segment of the climb. Getting to this stage is, I think, a bit harder in mountain biking than in math. With a math problem, you usually can work on each subproblem one at a time, in any order. In mountain biking, because of the need to maintain forward (i.e., upward) momentum, you have to build your overall solution up in a cumulative fashion—vertically! But the distinction is not as great as might first appear. In both cases, the step from having solved each individual subproblem in isolation to finding a solution for the overall problem, is a mysterious one that perhaps cannot be appreciated by someone who has not experienced it. This is where things get interesting. Having had the experience of solving difficult (for me) problems in both mathematics and mountain biking, I see considerable similarities between the two. In both cases, the subconscious mind plays a major role—which is, I presume, why they seem mysterious. This is where this two-part blog post is heading. BIKE: I ended my previous post by promising to "look at the role of the subconscious in being able to put together a series of mastered steps in order to solve a big problem. For a very curious thing happened after I took the photos to illustrate this post. I walked back down to collect my bike from…where I'd left it, and rode up to continue my ride. It took me four attempts to complete that initial climb! And therein lies one of the biggest secrets of being able to solve a difficult math problem." BOTH: How does the human mind make a breakthrough? How are we able to do something that we have not only never done before, but failed many times in attempts to do so? And why does the breakthrough always seem to occur when we are not consciously trying to solve the problem? The first thing to note is that we never experience the process of making that breakthrough. Rather, what we experience, i.e., what we are conscious of, is having just made the breakthrough! The sensation we have is a combined one of both elation and surprise. Followed almost immediately by a feeling that it wasn’t so difficult after all! What are we to make of this strange process? Clearly, I cannot provide a definitive, concrete answer to that question. No one can. It’s a mystery. But it is possible to make a number of relevant observations, together with some reasonable, informed speculations. (What follows is a continuation of sorts of the thread I developed in my 2000 book The Math Gene.) The first observation is that the human brain is a result of millions of years of survival-driven, natural selection. That made it supremely efficient at (rapidly) solving problems that threaten survival. Most of that survival activity is handled by a small, walnut-shaped area of the brain called the amygdala, working in close conjunction with the body’s nervous system and motor control In contrast to the speed at which our amydala operates, the much more recently developed neo-cortex that supports our conscious thought, our speech, and our “rational reasoning,” functions at what is comparatively glacial speed, following well developed channels of mental activity—channels that can be built up by repetitive training. Because we have conscious access to our neo-cortical thought processes, we tend to regard them as “logical,” often dismissing the actions of the amygdala as providing (“mere,” “animal-like”) “instinctive reactions.” But that misses the point that, because that “instinctive reaction organ” has evolved to ensure its owner’s survival in a highly complex and ever changing environment, it does in fact operate in an extremely logical fashion, honed by generations of natural selection pressure to be in sync with its owner’s environment. Which leads me to this. Do you want to identify that part of the brain that makes major scientific (and mountain biking) breakthroughs? I nominate the amygdala—the “reptilian brain” as it is sometimes called to reflect its evolutionary origin. I should acknowledge that I am not the first person to make this suggestion. Well, for mathematical breakthroughs, maybe I am. But in sports and the creative arts, it has long been recognized that the key to truly great performance is to essentially shut down the neo-cortex and let the subconscious activities of the amygdala take over. Taking this as a working hypothesis for mathematical (or mountain biking) problem solving, we can readily see why those moments of great breakthrough come only after a long period of preparation, where we keep working away—in conscious fashion—at trying to solve the problem or perform the action, seemingly without making any progress. We can see too why, when the breakthrough (or the great performance) comes, it does so instantly and surprisingly, when we are not actively trying to achieve the goal, leaving our conscious selves as mere after-the-fact observers of the outcome. For what that long period of struggle does is build a cognitive environment in which our reptilian brain—living inside and being connected to all of that deliberate, conscious activity the whole time—can make the key connections required to put everything together. In other words, investing all of that time and effort in that initial struggle raises the internal, cognitive stakes to a level where the amygdala can do its stuff. Okay, I’ve been playing fast and loose with the metaphors and the anthropomorphization here. We’re really talking about biological systems, simply operating the way natural selection equipped them. But my goal is not to put together a scientific analysis, rather to try to figure out how to improve our ability to solve novel problems. My primary aim is not to be “right” (though knowledge and insight are always nice to have), but to be able to improve performance. Let’s return to that tricky stretch of the ByPass section on the Alpine Road trail. What am I consciously focusing on when I make a successful ascent? BIKE: If you have read my earlier account, you will know that the difficult section comes in three parts. What I do is this. As I approach each segment, I consciously think about, and fix my eyes on, the end-point of that segment—where I will be after I have negotiated the difficulties on the way. And I keep my eyes and attention focused on that goal-point until I reach it. For the whole of the maneuver, I have no conscious awareness of the actual ground I am cycling over, or of my bike. It’s total focus on where I want to end up, and nothing else. So who—or what—is controlling the bike? The mathematical control problem involved in getting a person-on-a-bike up a steep, irregular, dirt trail is far greater than that required to auto-fly a jet fighter. The calculations and the speed with which they would have to be performed are orders of magnitude beyond the capability of the relatively slow neuronal firings in the neocortex. There is only one organ we know of that could perform this task. And that’s the amygdala, working in conjunction with the nervous system and the body’s motor control mechanism in a super-fast constant feedback loop. All the neo-cortex and its conscious thought has to do is avoid getting in the way! These days, in the case of Alpine Road, now I have “solved” the problem, the only things my conscious neo-cortex has to do on each occasion are switching my focus from the goal of one segment to the goal of the next. If anything interferes with my attention at one of those key transition moments, my climb is over—and I stop or fall. What used to be the hard parts are now “done for me” by unconscious circuits in my brain. MATH: In my case at least, what I just wrote about mountain biking accords perfectly with my experiences in making (personal) mathematical problem-solving breakthroughs. It is by stepping back from trying to solve the problem by putting together everything I know and have learned in my attempts, and instead simply focusing on the problem itself—what it is I am trying to show—that I suddenly find that I have the solution. It’s not that I arrive at the solution when I am not thinking about the problem. Some mathematicians have expressed their breakthrough moments that way, but I strongly suspect that is not totally true. When a mathematician has been trying to solve a problem for some months or years, that problem is always with them. It becomes part of their existence. There is not a single waking moment when that problem is not “on their mind.” What they mean, I believe, and what I am sure is the case for me, is that the breakthrough comes when the problem is not the focus of our thoughts. We really are thinking about something else, often some mundane detail of life, or enjoying a marvelous view. (Google “Stephen Smale beaches of Rio” for a famous example.) This thesis does, of course, explain why the process of walking up the ByPass Trail and taking photographs of all the tricky points made it impossible for me to complete the climb. True, I did succeed at the fourth attempt. But I am sure that was not because the first three were “practice.” Heavens, I’d long ago mastered the maneuvers required. It was because it took three failed attempts before I managed to erase the effects of focusing on the details to capture those images. The same is true, I suggest, for solving a difficult mathematical problem. All of those techniques Polya describes in his book, some of which I list above, are essential to prepare the way for solving the problem. But the solution will come only when you forget about all those details, and just focus on the prize. This may seem a wild suggestion, but in some respects it may not be entirely new. There is much in common between what I described above and the highly successful teaching method of R. L. Moore. For sure you have to do a fair amount of translation from his language to mine, but Moore used to demand that his students not clutter their minds by learning stuff, rather took each problem as it came and then try to solve it by pure reasoning, not giving up until they found the solution. In terms of training future mathematicians, what these considerations imply, of course, is that there is mileage to be had from adopting some of the techniques used by coaches and instructors to produce great performances in sports, in the arts, in the military, and in chess. Sweating the small stuff will make you good. But if you want to be great, you have to go beyond that—you have to forget the small stuff and keep your eye on the prize. And if you are successful, be sure to give full credit for that Fields Medal or that AMS Prize where it is rightly due: dedicate it to your amygdala. It will deserve it. Because this blogpost covers both mountain biking and proving theorems, it is being simultaneously published in Devlin’s more wide ranging blog profkeithdevlin.org. Mountain biking is big in the San Francisco Bay Area, where I live. (In its present day form, using specially built bicycles with suspension, the sport/pastime was invented a few miles north in Marin County in the late 1970s.) Though there are hundreds of trails in the open space preserves that spread over the hills to the west of Stanford, there are just a handful of access trails that allow you to start and finish your ride in Palo Alto. Of those, by far the most popular is Alpine Road. My mountain biking buddies and I ascend Alpine Road roughly once a week in the mountain biking season (which in California is usually around nine or ten months long). In this post, I'll describe my own long struggle, stretching over many months, to master one particularly difficult stretch of the climb, where many riders get off and walk their bikes. [SPOILER: If your interest in mathematics is not matched by an obsession with bike riding, bear with me. My entire account is actually about how to set about solving a difficult math problem, particularly proving a theorem. I'll draw the two threads together in a subsequent post, since it will take me into consideration of how the brain works when it does mathematics. For now, I'll leave the drawing of those conclusions as an exercise for the reader! So when you read mountain biking, think math.] Alpine Road used to take cars all the way from Palo Alto to Skyline Boulevard at the summit of the Coastal Range, but the upper part fell into disrepair in the late 1960s, and the two-and-a-half-mile stretch from just west of Portola Valley to where it meets the paved Page Mill Road just short of Skyline is now a dirt trail, much frequented by hikers and mountain bikers. Alpine Road. The trail is washed out just round the bend A few years ago, a storm washed out a short section of the trail about half a mile up, and the local authority constructed a bypass trail. About a quarter of a mile long, it is steep, narrow, twisted, and a constant staircase of tree roots protruding from the dirt floor. A brutal climb going up and a thrilling (beginners might say terrifying) descent on the way back. Mountain bike heaven. There is one particularly tricky section right at the start. This is where you can develop the key abilities you need to be able to prove mathematical theorems. So you have a choice. Read Polya's classic book, or get a mountain bike and find your own version of the Alpine Road ByPass Trail. (Better still: do both!) When I first encountered Alpine Road Dirt a few years ago, it took me many rides before I managed to get up the first short, steep section of the ByPass Trail. It starts innocently enough—because you cannot see what awaits just around that sharp left-hand turn. After you have made the turn, you are greeted with a short narrow downhill. You will need it to gain as much momentum as you can for what follows. The short, narrow descent I've seen bikers with extremely strong leg muscles who can plod their way up the wall that comes next, but I can't do it that way. I learned how to get up it by using my problem-solving/ theorem-proving skills. The first thing was to break the main problem—get up the insanely steep, root strewn, loose-dirt climb—into smaller, simpler problems, and solve those one at a time. Classic Polya. But it's Polya with a twist—and by "twist" I am not referring to the sharp triple-S bend in the climb. The twist in this case is that the penalty for failure is physical, not emotional as in mathematics. I fell off my bike a lot. The climb is insanely steep. So steep that unless you bend really low, with your chin almost touching your handlebar, your front wheel will lift off the ground. That gives rise to an unpleasant feeling of panic that is perhaps not unlike the one that many students encounter when faced with having to prove a theorem for the first time. If you are not careful, your front wheel will lift off the ground. The photo above shows the first difficult stretch. Though this first sub-problem is steep, there is a fairly clear line to follow to the right that misses those roots, though at the very least its steepness will slow you down, and on many occasions will result in an ungainly, rapid dismount. And losing momentum is the last thing you want, since the really hard part is further up ahead, near the top in the picture. Also, do you see that rain- and tire-worn groove that curves round to the right just over half way up—just beyond that big root coming in from the left? It is actually deeper and narrower than it looks in the photo, so unless you stay right in the middle of the groove you will be thrown off line, and your ascent will be over. (Click on the photo to enlarge it and you should be able to make out what I mean about the groove. Staying in the groove can be tricky at times.) Still, despite difficulties in the execution, eventually, with repeated practice, I got to the point of being able to negotiate this initial stretch and still have some forward momentum. I could get up on muscle memory. What was once a series of challenging problems, each dependent on the previous ones, was now a single mastered skill. [Remember, I don't have super-strong leg muscles. I am primarily a road bike rider. I can ride for six hours at a 16-18 mph pace, covering up to 100 miles or more. But to climb a steep hill I have to get off the saddle and stand on the pedals, using my body weight, not leg power. Unfortunately, if you take your weight off the saddle on a mountain bike on a steep dirt climb, your rear wheel will start to spin and you come to a stop - which on a steep hill means jump off quick or fall. So I have to use a problem solving approach.] Once I'd mastered the first sub-problem, I could address the next. This one was much harder. See that area at the top of the photo above where the trail curves right and then left? Here is what it looks like up close. The crux of the climb/problem. Now it is really steep. (Again, click on the photo to get a good look. This is the mountain bike equivalent of being asked to solve a complex math problem with many variables.) Though the tire tracks might suggest following a line to the left, I suspect they are left by riders coming down. Coming out of that narrow, right-curving groove I pointed out earlier, it would take an extremely strong rider to follow the left-hand line. No one I know does it that way. An average rider (which I am) has to follow a zig-zag line that cuts down the slope a bit. Like most riders I have seen—and for a while I did watch my more experienced buddies negotiate this slope to get some clues—I start this part of the climb by aiming my bike between the two roots, over at the right-hand side of the trail. (Bottom right of picture.) The next question is, do you go left of that little tree root nipple, sticking up all on its own, or do you skirt it to the right? (If you enlarge the photo you will see that you most definitely do not want either wheel to hit it.) The wear-marks in the dirt show that many riders make a sharp left after passing between those two roots at the start, and steer left of the root protrusion. That's very tempting, as the slope is notably less (initially). I tried that at first, but with infrequent success. Most often, my left-bearing momentum carried me into that obstacle course of tree roots over to the left, and though I sometimes managed to recover and swing out to skirt to the left of that really big root, more often than not I was not able to swing back right and avoid running into that tree! The underlying problem with that line was that thin looking root at the base of the tree. Even with the above photo blown up to full size, you can't really tell how tricky an obstacle it presents at that stage in the climb. Here is a closer view. The obstacle course of tree roots that awaits the rider who bears left If you enlarge this photo, you can probably appreciate how that final, thin root can be a problem if you are out of strength and momentum. Though the slope eases considerably at that point, I—like many riders I have seen—was on many occasions simply unable to make it either over the root or circumventing it on one side—though all three options would clearly be possible with fresh legs. And on the few occasions I did make it, I felt I just got lucky—I had not mastered it. I had got the right answer, but I had not really solved the problem. So close, so often. But, as in mathematics, close is not good enough. After realizing I did not have the leg strength to master the left-of-the-nipple path, I switched to taking the right-hand line. Though the slope was considerable steeper (that is very clear from the blown-up photo), the tire-worn dirt showed that many riders chose that option. Several failed attempts and one or two lucky successes convinced me that the trick was to steer to the right of the nipple and then bear left around it, but keep as close to it as possible without the rear wheel hitting it, and then head for the gap between the tree roots over at the right. After that, a fairly clear left-bearing line on very gently sloping terrain takes you round to the right to what appears to be a crest. (It turns out to be an inflection point rather than a maximum, but let's bask for a while in the success we have had so far.) Here is our brief basking point. The inflection point. One more detail to resolve. As we oh-so-briefly catch our breath and "coast" round the final, right-hand bend and see the summit ahead, we come—very suddenly—to one final obstacle. The summit of the climb At the root of the problem (sorry!) is the fact that the right-hand turn is actually sharper than the previous photo indicates, almost a switchback. Moreover, the slope kicks up as you enter the turn. So you might not be able to gain sufficient momentum to carry you over one or both of those tree roots on the left that you find your bike heading towards. And in my case, I found I often did not have any muscle strength left to carry me over them by brute force. What worked for me is making an even tighter turn that takes me to the right of the roots, with my right shoulder narrowly missing that protruding tree trunk. A fine-tuned approach that replaces one problem (power up and get over those roots) with another one initially more difficult (slow down and make the tight turn even tighter). And there we are. That final little root poking up near the summit is easily skirted. The problem is solved. To be sure, the rest of the ByPass Trail still presents several other difficult challenges, a number of which took me several attempts before I achieved mastery. Taken as a whole, the entire ByPass is a hard climb, and many riders walk the entire quarter mile. But nothing is as difficult as that initial stretch. I was able to ride the rest long before I solved the problem of the first 100 feet. Which made it all the sweeter when I finally did really crack that wall. Now I (usually) breeze up it, wondering why I found it so difficult for so long. Usually? In my next post, I'll use this story to talk about strategies for solving difficult mathematical problems. In particular, I'll look at the role of the subconscious in being able to put together a series of mastered steps in order to solve a big problem. For a very curious thing happened after I took the photos to illustrate this post. I walked back down to collect my bike from the ByPass sign where I'd left it, and rode up to continue my ride. It took me four attempts to complete that initial climb! And therein lies one of the biggest secrets of being able to solve a difficult math problem. To be continued ... See "How Mountain Biking Can Provide the Key to the Eureka Moment" It’s one of the most famous lines from one of the most famous movies of all time, Casablanca. Except it’s not what Ilsa, played by Ingrid Bergman, actually said, which was “Play it once, Sam, for old times' sake . . . [NO RESPONSE] . . . Play it, Sam. Play 'As Time Goes By.'” This month’s column is in response to the emails I receive from time to time asking for a reference to articles I have written for the MAA since I began on that mathemaliterary journey back in 1991. (Yes, I just made that word up. Google returns nothing. But it soon will.) During that time, in addition to moving from print to online, the MAA website went through two overhauls, leaving the archives spread over three volumes: January 1996 – December 2003 January 2004 – July 2011 August 2011 – present Throughout those 23 years, I’ve wandered far and wide across the mathematical and mathematics education landscape. But three ongoing themes emerged. None of them was planned. In each case, I simply wrote something that generated interest – and for one theme considerable controversy – and as a result I kept coming back to it. I continue to receive emails asking about articles I wrote on the first two of those three themes, and the third is still very active. So I am devoting this month’s column to providing an index to those three themes. I’ll start with the most controversial: what is multiplication? This began innocently enough, with a throw-away final remark to a piece I wrote back in 2007. I little knew the firestorm I was about to unleash. September 2007, What is conceptual understanding? June 2008, It Ain't No Repeated Addition July-August 2008, It's Still Not Repeated Addition September 2008, Multiplication and Those Pesky British Spellings December 2008, How Do We Learn Math? January 2009, Should Children Learn Math by Starting with Counting? January 2010, Repeated Addition - One More Spin January 2011, What Exactly is Multiplication? November 2011, How multiplication is really defined in Peano arithmetic I first started making the distinction between mathematics and mathematical thinking in the early 1990s, when an extended foray into mathematical linguistics and then sociolinguistics led to an interest in mathematical cognition that continues to this day. April 1996, Are Mathematicians Turning Soft? October 1996, Wanted: A New Mix September 1999, What Can Mathematics Do For The Businessperson? January 2008, American Mathematics in a Flat World February 2008, Mathematics for the President and Congress October 2009, Soft Mathematics July 2010, Wanted: Innovative Mathematical Thinking No introduction necessary. MOOCs are constantly in the news. Though I was one of the early pioneers in developing the Stanford MOOCs that generated all the media interest in 2012, and I believe the first person to offer a mathematics MOOC (Introduction to Mathematical Thinking), the idea goes back to a course given at Athabasca University in Canada, back in 2008. May 2012, Math MOOC – Coming this fall. Let’s Teach the World November 2012, MOOC Lessons December 2012, The Darwinization of Higher Education January 2013, R.I.P. Mathematics? Maybe. February 2013, The Problem with Instructional Videos March 2013, Can we make constructive use of machine-graded, multiple-choice questions in university mathematics education? September 2013, Two Startups in One Week An irregular series of posts starting on May 5, 2012 December 2013, MOQR, Anyone? Learning by Evaluating March 2, 2013, MOOCs and the Myths of Dropout Rates and Certification March 27, 2013, Can Massive Open Online Courses Make Up for an Outdated K-12 Education System? August 19, 2013, MOOC Mania Meets the Sober Reality of Education November 18, 2013, Why MOOCs May Still Be Silicon Valley's Next Grand Challenge Many colleges and universities have a mathematics or quantitative reasoning requirement that ensures that no student graduates without completing at least one sufficiently mathematical course. Recognizing that taking a regular first-year mathematics course—designed for students majoring in mathematics, science, or engineering—to satisfy a QR requirement is not educationally optimal (and sometimes a distraction for the instructor and the TAs who have to deal with students who are neither motivated nor well prepared for the full rigors and pace of a mathematics course), many institutions offer special QR courses. I’ve always enjoyed giving such courses, since they offer the freedom to cover a wide swathe of mathematics—often new or topical parts of mathematics. Admittedly they do so at a much more shallow depth than in other courses, but a depth that was always a challenge for most students who signed up. Having been one of the pioneers of so-called “transition courses” for incoming mathematics majors back in the 1970s, and having given such courses many times in the intervening years, I never doubted that a lot of the material was well suited to the student in search of meeting a QR requirement. The problem with classifying a transition course as a QR option is that the goal of preparing an incoming student for the rigors of college algebra and real analysis is at odds with the intent of a QR requirement. So I never did that. Enter MOOCs. A lot of the stuff that is written about these relatively new entrants to the higher education landscape is unsubstantiated hype and breathless (if not fearful) speculation. The plain fact is that right now no one really knows what MOOCs will end up looking like, what part or parts of the population they will eventually serve, or exactly how and where they will fit in with the rest of higher education. Like most others I know who are experimenting with this new medium, I am treating it very much as just that: an experiment. The first version of my MOOC Introduction to Mathematical Thinking, offered in the fall of 2012, was essentially the first three-quarters of my regular transition course, modified to make initial entry much easier, delivered as a MOOC. Since then, as I have experimented with different aspects of online education, I have been slowly modifying it to function as a QR-course, since improved quantitative reasoning is surely a natural (and laudable) goal for online courses with global reach—that “free education for the world” goal is still the main MOOC-motivator for me. I am certainly not viewing my MOOC as an online course to satisfy a college QR requirement. That may happen, but, as I noted above, no one has any real idea what role(s) MOOCs will end up fulfilling. Remember, in just twelve months, the Stanford MOOC startup Udacity, which initiated all the media hype, went from “teach the entire world for free” to “offer corporate training for a fee.” (For my (upbeat) commentary on this rapid progression, see my article in the Huffington Post.) Rather, I am taking advantage of the fact that free, no-credential MOOCs currently provide a superb vehicle to experiment with ideas both for classroom teaching and for online education. Those of us at the teaching end not only learn what the medium can offer, we also discover ways to improve our classroom teaching; while those who register as students get a totally free learning opportunity. (Roughly three-quarters of them already have a college degree, but MOOC enrollees also include thousands of first-time higher education students from parts of the world that offer limited or no higher education opportunities.) The biggest challenge facing anyone who wants to offer a MOOC in higher mathematics is how to handle the fact that many of the students will never receive expert feedback on their work. This is particularly acute when it comes to learning how to prove things. That’s already a difficult challenge in a regular class, as made clear in this great blog post by “mathbabe” Cathy O’Neil. In a MOOC, my current view is it would be unethical to try. The last thing the world needs are (more) people who think they know what a proof is, but have never put that knowledge to the test. But when you think about it, the idea behind QR is not that people become mathematicians who can prove things, rather that they have a base level of quantitative literacy that is necessary to live a fulfilled, rewarding life and be a productive member of society. Being able to prove something mathematically is a specialist skill. The important general ability in today’s world is to have a good understanding of the nature of the various kinds of arguments, the special nature of mathematical argument and its role among them, and an ability to judge the soundness and limitations of any particular argument. In the case of mathematical argument, acquiring that “consumer’s understanding” surely involves having some experience in trying to construct very simple mathematical arguments, but far more what is required is being able to evaluate mathematical arguments. And that can be handled in a MOOC. Just present students with various mathematical arguments, some correct, others not, and machine-check if, and how well, they can determine their validity. Well, that leading modifier “just” in that last sentence was perhaps too cavalier. There clearly is more to it than that. As always, the devil is in the details. But once you make the shift from viewing the course (or the proofs part of the course) as being about constructing proofs to being about understanding and evaluating proofs, then what previously seemed hopeless suddenly becomes rife with possibilities. I started to make this shift with the last session of my MOOC this fall, and though there were significant teething troubles, I saw enough to be encouraged to try it again—with modifications—to an even greater extent next year. Of course, many QR courses focus on appreciation of mathematics, spiced up with enough “doing math” content to make the course defensibly eligible for QR fulfillment. What I think is far less common—and certainly new to me—is using the evaluation of proofs as a major learning vehicle. What makes this possible is that the Coursera platform on which my MOOC runs has developed a peer review module to support peer grading of student papers and exams. The first times I offered my MOOC, I used peer evaluation to grade a Final Exam. Though the process worked tolerably well for grading student mathematics exams—a lot better than I initially feared—to my eyes it still fell well short of providing the meaningful grade and expert feedback a professional mathematician would give. On the other hand, the benefit to the students that came from seeing, and trying to evaluate, the proof attempts of other students, and to provide feedback, was significant—both in terms of their gaining much deeper insight into the concepts and issues involved, and in bolstering their confidence. When the course runs again in a few week's time, the Final Exam will be gone, replaced by a new course culmination activity I am calling Test Flight. How will it go? I have no idea. That’s what makes it so interesting. Based on my previous experiments, I think the main challenges will be largely those of implementation. In particular, years of educational high-stakes testing robs many students of the one ingredient essential to real learning: being willing to take risks and to fail. As young children we have it. Schools typically drive it out of us. Those of us lucky enough to end up at graduate school reacquire it—we have to. I believe MOOCs, which offer community interaction through the semi-anonymity of the Internet, offer real potential to provide others with a similar opportunity to re-learn the power of failure. Test Flight will show if this belief is sufficiently grounded, or a hopelessly idealistic dream! (Test flights do sometimes crash and burn.) The more people learn to view failure as an essential constituent of good learning, the better life will become for all. As a world society, we need to relearn that innate childhood willingness to try and to fail. A society that does not celebrate the many individual and local failures that are an inevitable consequence of trying something new, is one destined to fail globally in the long For those interested, I’ll be describing Test Flight, and reporting on my progress (including the inevitable failures), in my blog MOOCtalk.org as the experiment continues. (The next session starts on February 3.) The trouble with writing about, or quoting, Liping Ma, is that everyone interprets her words through their own frame, influenced by their own experiences and beliefs. “Well, yes, but isn’t that true for anyone reading anything?” you may ask. True enough. But in Ma’s case, readers often arrive at diametrically opposed readings. Both sides in the US Math Wars quote from her in support of their positions. That happened with the book that brought her to most people’s attention, Knowing and Teaching Elementary Mathematics: Teachers' Understanding of Fundamental Mathematics in China and the United States , first published in 1999. And I fear the same will occur with her recent article "A Critique of the Structure of U.S. Elementary School Mathematics," published in the November issue of the American Mathematical Society Notices. Still, if I stopped and worried about readers completely misreading or misinterpreting things I write, Devlin’s Angle would likely appear maybe once or twice a year at most. So you can be sure I am about to press ahead and refer to her recent article regardless. My reason for doing so is that I am largely in agreement with what I believe she is saying. Her thesis (i.e., what I understand her thesis to be) is what lay behind the design of my MOOC and my recently released video game. (More on both later.) Broadly speaking, I think most of the furor about K-12 mathematics curricula that seems to bedevil every western country except Finland is totally misplaced. It is misplaced for the simple, radical (except in Finland) reason that curriculum doesn’t really matter. What matter are teachers. (That last sentence is, by the way, the much sought after “Finnish secret” to good education.) To put it I am very familiar with the Finnish education system. The Stanford H-STAR institute I co-founded and direct has been collaborating with Finnish education researchers for over a decade, we host education scholars from Finland regularly, I travel to Finland several times a year to work with colleagues there, I am on the Advisory Board of CICERO Learning, one of their leading educational research organizations, I’ve spoken with members of the Finnish government whose focus is education, and I’ve sat in on classes in Finnish schools. So I know from firsthand experience in the western country that has got it right that teachers are everything and curriculum is at most (if you let it be) a distracting side-issue. The only people for whom curriculum really matters are politicians and the politically motivated (who can make political capital out of curriculum) and publishers (who make a lot of financial capital out of it). But I digress: Finland merely serves to provide an existence proof that providing good mathematics education in a free, open, western society is possible and has nothing to do with curriculum. Let’s get back to Liping Ma’s recent Notices article. For she provides a recipe for how to do it right in the curriculum-obsessed, teacher-denigrating US. Behind Ma’s suggestion, as well as behind my MOOC and my video game (both of which I have invested a lot of effort and resources into) is the simple (but so often overlooked) observation that, at its heart, mathematics is not a body of facts or procedures but a way of thinking. Once a person has learned to think that way, it becomes possible to learn and use pretty well any mathematics you need or want to know about, when you need or want it. In principle, many areas of mathematics can be used to master that way of thinking, but some areas are better suited to the task, since their learning curve is much more forgiving to the human brain. For my MOOC, which is aimed at beginning mathematics students at college or university, or high school students about to become such, I take formalizing the use of language and the basic rules of logical reasoning (in everyday life) as the subject matter, but the focus is as described in the last two words of the course’s title: Introduction to Mathematical Thinking. Apart from the final two weeks of the course, where we look at elementary number theory and beginning real analysis, there is really no mathematics in my course in the usual sense of the word. We use everyday reasoning and communication as the vehicle to develop mathematical thinking. [SAMPLE PROBLEM: Take the famous (alleged) Abraham Lincoln quote, “You can fool all of the people some of the time and some of the people all of the time, but you cannot fool all the people all the time.” What is the simplest and clearest positive expression you can find that states the negation of that statement? Of course, you first have to decide what “clearest”, “simplest”, and “positive” Ma’s focus in her article is beginning school mathematics. She contrasts the approach used in China until 2001 with that of the USA. The former concentrated on “school arithmetic” whereas, since the 1960s, the US has adopted various instantiations of a “strands” approach. (As Ma points out, since 2001, China has been moving towards a strands approach. By my read of her words, she thinks that is not a wise move.) As instantiated in the NCTM’s 2001 Standards document, elementary school mathematics should cover ten separate strands: number and operations, problem solving, algebra, reasoning and proof, geometry, communication, measurement, connections, data analysis and probability, and representation. In principle, I find it hard to argue against any of these—provided they are viewed as different facets of a single whole. The trouble is, as soon as you provide a list, it is almost inevitable that the first system administrator whose desk it lands on will turn it into a tick-the-boxes spreadsheet, and in turn the textbook publishers will then produce massive (hence expensive) textbooks with (at least) ten chapters, one for each column of the spreadsheet. The result is the justifiably maligned “Mile wide, inch deep” US elementary school curriculum. It’s not that the idea is wrong in principle. The problem lies in the implementation. It’s a long path from a highly knowledgeable group of educators drawing up a curriculum to what finds its way into the classroom—often to be implemented by teachers woefully unprepared (through no fault of their own) for the task, answerable to administrators who serve political leaders, and forced to use textbooks that reinforce the separation into strands rather than present them as variations on a single whole. Ma’s suggestion is to go back to using arithmetic as the primary focus, as was the case in Western Europe and the United States in the years of yore and China until the turn of the Millennium, and use that to develop all of the mathematical thinking skills the child will require, both for later study and for life in the twenty-first century. I think she has a point. A good point. She is certainly not talking about drill-based mastery of the classical Hindu-Arabic algorithms for adding, subtracting, multiplying, and dividing, nor is she suggesting that the goal should be for small human beings to spend hours forcing their analogically powerful, pattern-recognizing brains to become poor imitations of a ten-dollar calculator. What was important about arithmetic in past eras is not necessarily relevant today. Arithmetic can be used to trade chickens or build spacecraft. No, if you read what she says, and you absolutely should, she is talking about the rich, powerful structure of the two basic number systems, the whole numbers and the rational numbers. Will that study of elementary arithmetic involve lots of practice for the students? Of course it will. A child’s life is full of practice. We are adaptive creatures, not cognitive sponges. But the goal—the motivation for and purpose of that practice—is developing arithmetic thinking, and moreover doing so in a manner that provides the foundation for, and the beginning of, the more general mathematical thinking so important in today’s world, and hence so empowering for today’s citizens. The whole numbers and the rational numbers are perfectly adequate for achieving that goal. You will find pretty well every core feature of mathematics in those two systems. Moreover, they provide an entry point that everyone is familiar with, teacher, administrator, and beginning elementary school student alike. In particular, a well trained teacher can build the necessary thinking skills and the mathematical sophistication —and cover whatever strands are in current favor—without having to bring in any other mathematical structure. When you adopt the strands approach (pick your favorite flavor), it’s very easy to skip over school arithmetic as a low-level skill set to be “covered” as quickly as possible in order to move on to the “real stuff” of mathematics. But Ma is absolutely right in arguing that this is to overlook the rich potential still offered today by what are arguably (I would so argue) the most important mathematical structures ever developed: the whole and the rational numbers and their associated elementary arithmetics. For what is often not realized is that there is absolutely nothing elementary about elementary arithmetic. Incidentally, for my video game, Wuzzit Trouble, I took whole number arithmetic and built a game around it. If you play it through, finding optimal solutions to all 75 puzzles, you will find that you have to make use of increasingly sophisticated arithmetical reasoning. (Integer partitions, Diophantine equations, algorithmic thinking, and optimization.) I doubt Ma had video game instantiations of her proposal in mind, but when I first read her article, almost exactly when my game was released in the App Store (the Android version came a few weeks later) that’s exactly what I saw. Other games my colleagues and I have designed but not yet built are based on different parts of mathematics. We started with one built around elementary arithmetic because arithmetic provides all the richness you need to develop mathematical thinking, and we wanted our first game to demonstrate the potential of game-based learning in thinking-focused mathematical education (as opposed to the more common basic-skills focus of most mathematics-educational games). In starting with an arithmetic-based game, we were (at the time unknowingly) endorsing the very point Ma was to make in her article. In last month’s column, I reflected on how modern technology enables one person—in my case an academic—to launch enterprises with (potential) global reach without (i) money and (ii) giving up his day job. That is true, but technology does not replace expertise and its feeder, experience. In the case of my MOOC, now well into its third offering, I’ve been teaching transition courses on mathematical thinking since the late 1970s, and am able to draw on a lot of experience as to the difficulties most students have with what for most of them is a completely new side to mathematics. Right now, as we get into elementary, discrete number theory, the class (the 9,000 of 53,000 registrants still active) is struggling to distinguish between division—a binary operation on rationals that yields a rational number for a given pair of integers or rationals—and divisibility—a relation between pairs of integers that is either true or false for any given pair of integers. Unused to distinguishing between different number systems, they suddenly find themselves lost with what they felt they knew well, namely elementary arithmetic. Anyone who has taught a transition course will be familiar with this problematic rite of passage. I suspect I am not alone in having vivid memories of when I myself went through it, even though it was many decades ago! As a result of all those years teaching this kind of material, I pretty well know what to expect in terms of student difficulties and responses, so can focus my attention on figuring out how to make it work in a MOOC. I know how to filter and interpret the comments on the discussion forum, having watched up close many generations of students go through it. As a result, doing it in a MOOC format with a class spread across the globe is a fascinating experiment, when it could so easily have been a disaster. My one fear is that, because the course pedagogy is based on Inquiry-Based Learning, it may be more successful with experienced professionals (of whom I have many in the class), rather than the course’s original target audience of recent high school graduates. In particular, I suspect it is the latter who constantly request that I show them how to solve a problem before expecting them to do so. If all students have been exposed to is instructional teaching, and they have never experienced having to solve a novel problem—to figure it out for themselves—it is probably unrealistic to expect them to make that leap in a Web-based course. But maybe it can be made to work. Time will tell. The other startup I wrote about was my video game company. That is a very different experience, since almost everything about this is new to me. Sure, I’ve been studying and writing about video game learning for many years, and have been playing video games for the same length of time. But designing and producing a video game, and founding a company to do it, are all new. Although we describe InnerTube Games as “Dr. Keith Devlin’s video game company,” and most of the reviews of our first release referred to Wuzzit Trouble as “Keith Devlin’s mathematics video game,” that was like referring to The Rolling Stones as “Mick Jagger’s rock group.” Sure he was out in front, but it was the entire band that gave us all those great performances. In reality, I brought just three new things to our video game design. The first is our strong focus on mathematical thinking (the topic of my MOOC) rather than the mastery of symbolic skills (which is what 99% of current math ed video games provide). The second is that the game should embed at least one piece of deep, conceptual mathematics. (Not because I wanted the players to learn that particular piece of mathematics. Rather that its presence would ensure a genuine mathematical experience.) The third is the design principle that the video game should be thought of as an instrument on which you “play math,” analogous to the piano, an instrument on which you play music. In fact, I was not alone among the company co-founders in favoring the mathematical thinking approach. One of us, Pamela, is a former middle-school mathematics teacher and an award winning producer of educational television shows, and she too was not interested in producing the 500^th animated-flash-card, skills-mastery app. (Nothing wrong with that approach, by the way. It’s just that the skills-mastery sector is already well served, and we wanted to go instead for something that is woefully under-served.) I may know a fair amount about mathematics and education, and I use technology, but that does not mean I'm an expert in the use of various media in education. But Pamela is. And this is what this month’s column is really about: the need for an experienced and talented team to undertake anything as challenging as designing and creating a good educational learning app. Though I use my own case as an example, the message I want to get across is that if, like me, you think it is worthwhile adding learning apps and video games to the arsenal of media that can be used to provide good mathematics learning, then you need to realize that one smart person with a good idea is not going to be anything like enough. We need to work in teams with people who bring different I’ve written extensively in my blog profkeithdevlin.org about the problems that must be overcome to build good learning apps. In fact, because of the history behind my company, we set our bar even higher. We decided to create video games that had all the features of good commercial games developed for entertainment. Games like Angry Birds or Cut the Rope, to name two of my favorites. Okay, we knew that, with a mathematics-based game, we are unlikely to achieve the dizzying download figures of those industry-leading titles. But they provided excellent exemplars in game structure, game mechanics, graphics, sounds, game characters, etc. In the end, it all comes down to engagement, whether the goal is entertainment and making money or providing good learning. In other words, we saw (and see) ourselves not as an “educational video game company” but as a “video game company.” But one that creates video games built around important mathematical concepts. (In the case of Wuzzit Trouble, those concepts are integer arithmetic, integer partitions, and Diophantine equations.) Going after that goal requires many different talents. I’ve already mentioned Pamela, our Chief Learning Officer. I met her, together with my other two co-founders, when I worked with them for several years on an educational video game project at a large commercial studio. That project never led to a released product, but it provided all four of us with the opportunity to learn a great deal about the various crucial components of good video game design that embeds good learning. Enough to realize, first, that we all needed one another, and second that we could work well together. (Don’t underestimate that last condition.) By working alongside video game legend John Romero, I learned a lot about what it takes to create a game that players will want to play. Not enough to do so myself. But enough to be able to work with a good game developer to inject good mathematics into such a game. That’s Anthony, the guy on our team who takes a mathematical concept and turns it into a compelling game activity. (The guy who can give me three good reasons why my “really cool idea” really won’t work in a game!) Pamela, Anthony, and I work closely together to produce fun game activities that embed solid mathematical learning, each bringing different perspectives. Take any one of us out of the picture, and the resulting game would not come close to getting those great release reviews we did. And without Randy, there would not even be a game to get reviewed! Video games are, after all, a business. (At some point, we will have to bring in revenue to continue!) The only way to create and distribute quality games is to create a company. And yes, that company has to create and market a product—something that’s notoriously difficult. (Google “why video game companies fail.”) Randy (also a former teacher) was the overall production manager of the project we all worked on together, having already spent many years in the educational technology world. He’s the one who keeps everything Like it or not, the world around us is changing rapidly, and with so many things pulling on our students’ time, it’s no longer adequate to sit back on our institutional reputations and expect students to come to us and switch off the other things in their lives while they take our courses. One case: I cannot see MOOCs replacing physical classes with real professors, but they sure are already changing the balance. And you don’t have to spend long in a MOOC to see the similarities with MMOs (massively multiplayer online games). We math professoriate long ago recognized we needed to acquire the skills to prepare documents using word processing packages and LaTeX, and to prepare Keynote or PowerPoint slides. Now we are having to learn the rudiments of learning management systems (LMSs), video editing, the creation of applets, and the use of online learning platforms. Creating video games is perhaps more unusual, since it requires so many different kinds of expertise, and I am only doing that because a particular professional history brought me into contact with the gaming industry. But plenty of mathematical types have created engaging math learning apps, and some of them are really very good. Technology not only makes all of these developments possible, it makes it imperative that, as a community, we get involved. But in the end, it’s people, not the technology, that make it happen. And to be successful, those people may have to work in collaborative teams.
{"url":"http://devlinsangle.blogspot.com/","timestamp":"2014-04-21T07:19:11Z","content_type":null,"content_length":"227948","record_id":"<urn:uuid:3eeabd2a-0de5-4ead-8306-203154647301>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Revision and exam help Please Select one of the links below to browse the information available in this section: Doing calculations • Some are simple one-step calculations and are worth perhaps a couple of marks. Give the formula you are using, show the calculation and give the answer clearly. Ensure your workings are all • Some calculations are longer and require an intermediate step. Ensure that you clearly give intermediate results. For example calculating a cross-sectional area is often a necessary stage before finding out stress (the force per unit area.) Even if you make a mistake at the end, you could still gain marks if you have done the cross-sectional area correctly. Obviously these marks cannot be given to you if you didn’t show that stage clearly! • If you make a mistake cross it simply and redo it, but never cross out work unless you replace it by something else in case you lose possible marks contained in the calculations. • If you are giving a numerical answer, remember to give the appropriate S.I. unit. • Write the answer to a reasonable number of significant figures rather than everything from your calculator display. A reasonable number if significant figures is the same as the number of significant figures in the information provided. Two points of caution – do calculations to at least one extra significant figure in order to avoid rounding errors, and just because an answer seems to come out to a nice number don’t forget the significant figures. For instance, if you work out velocity from information given to 3 significant figures and the answer is 6 on your calculator, don’t write down 6 ms-1. The answer should be 6.00 ms-1. • Don’t give up a whole question just because you get stuck early on. If you can’t do an early part, write down a reasonable guess at an answer and write it down for that first part. Then clearly use that answer in later calculations. The examiner can still award you full marks for “error carried forward” (e.c.f. on some mark schemes) where you use your answer correctly. It is really important to write the guessed answer down where it should have first been given, not just later when you need to use it. Also you must show your calculations exactly as advised so all the method is clear.
{"url":"http://www.physics.org/4landing.asp?contentid=442&pid=413&hsub=1","timestamp":"2014-04-17T07:15:21Z","content_type":null,"content_length":"12279","record_id":"<urn:uuid:bfacf3b3-e1e5-4dd5-89ff-946973fe0396>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Why is using single-precision slower than using double-precision hebert@prism.uvsq.fr (Renaud HEBERT) Thu, 24 Nov 1994 08:50:17 GMT From comp.compilers | List of all articles for this month | Newsgroups: comp.parallel,comp.arch,comp.compilers From: hebert@prism.uvsq.fr (Renaud HEBERT) Keywords: arithmetic, C Organization: Laboratoire PRiSM - Universite de Versailles-St Quentin - France References: <3aqv5k$e27@monalisa.usc.edu> <3b07cs$mdv@hubcap.clemson.edu> Date: Thu, 24 Nov 1994 08:50:17 GMT Did you used float constant when you were calculating in single precision? Remenber that floating points constants are double and that if you have float x,y; y = 5.0 * x; here you have (double) * (float) -> (float) so in C you have the y = (float)(5.0 * (double)x) so you have two conversions.. On the other hand if you have float x,y; y = 5.0f * x; ^a single-precision constant Here you have an operation between two float, I thought that the compiler would generate the following (float) * (float) -> (float). But everyone here is telling that the compiler will generate y = (float)((double)5.0f * (double)x) ^^^of course this isn't a true conversion here. Well if you want to know what's going on, the only way is to look at the assembly I've compiled on a sun y = 5.0f * x; (y,x float) with gcc -O2 the multiply instruction is fmuls %f0,%f2,%f0 (single precision multiply) and it seems that there isn't any conversion (I don't know sparc assembly...). So why all these people talked about an automatic conversion ? Because they are too lazy to insert a "f" after their constants :-). But it may depends on the compiler though. Renaud HEBERT Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/94-11-171","timestamp":"2014-04-18T23:15:49Z","content_type":null,"content_length":"6818","record_id":"<urn:uuid:5cc396ba-b00a-4d00-97eb-266201a3351f>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Gull Wing Lead Example Gull Wing home page Next Gull Wing page Gull Wing Lead Example gullwing-1 for the Surface Evolver [Click for the gullwing-1.fe datafile in a second window.] This first file does just the pad and lead geometry, without any solder. As a result, there is no energy to optimize. The geometry set up in this file will serve just as display in the later files, so one can see the pad and lead. This is the time to get various geometry decisions made before things get too complicated. Design steps: • General: □ The first decision is the orientation of the lead. I choose to center the pad and lead lengthwise along the y axis, so I see a side view when it loads in the Evolver. On initial loading, the x axis is out of the screen, and the y axis is to the right. The horizontal section of the lead is down the negative y axis (leftwards), and the join to the circular part is at y = 0. "Toe" will refer to the negative x end of pad and lead, "heel" will refer to the positive y end (chip end). □ If the geometry were always to stay symmetric about the x = 0 plane, then we could save about half in storage and execution time by doing just half the lead and pad. But we will later want to break the symmetry to calculate the force in the x direction, so we do a full geometry. □ The geometric dimensions are defined as Evolver parameters. Most of them could be defined as macros, since they are mostly used as initial values and changing them at run time would not have the desired effect. Moving things around will be done by explicitly moving vertices, rather than just changing parameter values. The advantage of having them as parameters is that their values are accessible by name at run time. • The pad: • The lead: □ The curved portion of the lead is modeled with circular arcs. Other curves, such as parabolas or sine waves, could have been used, but it is simplest to get constant thickness with circular □ There are six parameters defining the shape of the lead: parameter thick = 6 //Thickness of lead in the z direction. parameter width = 10 // Width of lead in y direction parameter leadtoe = -30 // Toe of lead, relative to heel parameter bend = 90*pi/180 // bending angle of a curved section, radians parameter rad = 10 // inside radius of bends parameter orad = rad+thick // outside radius of bends Note that the bend angle is expressed as degrees converted to radians, since Evolver uses radians in trig functions. Leadtoe is actually the negative length of the horizontal part of the □ The position of the lead is defined by three parameters: parameter gap = 5 // Height of the bottom of the gull foot above the pad parameter leadheel = 0 // y coord of flat-curve junction of lead parameter leadshift = 0 // x offset of centerline of lead These are the only parameters that should be changed at runtime to move the lead, and even then the changing should only be done by the carefully defined movement commands discussed below. □ The curved surfaces of the lead are defined by four level-set constraints: constraint 1 // lower side of lower curve formula: (z-(gap+orad))^2 + (y-leadheel)^2 = orad^2 constraint 2 // upper side of lower curve formula: (z-(gap+orad))^2 + (y-leadheel)^2 = rad^2 constraint 3 // lower side of upper curve formula: (z-(gap+(rad+orad)*(1-cos(bend))-rad))^2 + (y-leadheel-(rad+orad)*sin(bend))^2 = rad^2 constraint 4 // upper side of upper curve formula: (z-(gap+(rad+orad)*(1-cos(bend))-rad))^2 + (y-leadheel-(rad+orad)*sin(bend))^2 = orad^2 Constraints 1 and 3 could have been combined into one constraint using conditional expressions, as could 2 and 4, but there is no real point in getting that fancy here. □ These constraints will be used only for the display surface. Constraints with the same formulas will be defined later for the solder. It is generally wise to keep separate constraints for display surfaces and energy surfaces. □ No constraints are defined here for the flat surfaces of the lead, since refining keeps them flat, and the movement commands are guaranteed to keep them flat. □ I made a hand sketch to number all the vertices, edges, and facets, and typed them all in. The faces are all oriented with outward normal. This is not strictly necessary, but I felt like doing it that way. □ I did not make the lead edges and faces no_refine since at least the curved surfaces must refine in order to follow the circular arcs. □ The lead faces are all colored yellow, suggestive of gold, and distinct from the white that the solder surface will be. • The read section: . □ This section of the datafile gets executed after the surface is set up, just as if the user had typed it in at the command prompt. Two types of things happen here: commands that execute immediately to initially massage the surface, and definitions of commands the user can run later. □ The first massaging done is to fix up the triangulation on the sides of the curved parts of the lead: // Subdivide some curve edges for better display refine edge where (id >= 17 and id <= 20) or (id >= 23 and id <= 26) // Get rid of pesky vertices in middle of curved sides. delete edge ee where sum(ee.facet,color==yellow)==2 and ee.length < thick/3 foreach facet ff where color==yellow do { fix ff.edge; fix ff.vertex; fix ff} The problem is that the original triangulation upon loading can put a vertex too near the inside arc, and later refining creates facets that cut across the arc. Try commenting out the refine line and see what refining does. The use of hardwired edge numbers might be considered poor programming style, but it is the easiest way to identify these particular edges, and the numbers are not likely to change in the later datafiles of this series. The delete command gets rid of a couple of short edges. Since delete won't delete fixed edges, the side faces were left unfixed in the Faces section. The last command takes care of that. □ Next come some commands to move the lead in three directions. For example, translation in x is done by: delta_x := 0 move_x := { leadshift := leadshift + delta_x; // do parameters first set vertex x x+delta_x where z > 0; The movement is relative, of magnitude delta_x. Note that delta_x is used in an immediately executed assignment statement so the parser has a declaration of it before its use in move_x and the user can set delta_x before invoking move_x. In move_x, the leadshift parameter is changed first, because setting a vertex coordinate value causes immediate projection to any constraints it is on, and we want those constraints to be in the new position. • Frills: □ To reduce the display to just a wire-frame outline, the file outline.cmd contains a command outline: outline := { dissolve facet where color==yellow or color==green; foreach edge ee where original == -1 and valence == 0 do { unfix ee; dissolve ee } The dissolve command simply eliminates objects without collapsing them (as delete does). It will not dissolve vertices still used by edges, or edges still used by facets. It will (as of version 2.14) dissolve facets used by bodies. An original number of -1 is assigned to edges that are produced by subdividing facets; the condition is used to prevent dissolving the edges descended from those originally in the datafile. The unfix is necessary since fixed edges will not be dissolved. □ For later use in totally eliminating all display elements, outline.cmd also contains the command vanish: vanish := { outline; dissolve edges; dissolve vertices; □ To refine the lead to smoothness with the minimal number of elements, the file slats.cmd contains the command slats. This command refines the curved edges and faces in such a way as to create a lot of thin, horizontal triangles, and then fixes the display facets so refining won't affect them. □ Both of these files are automatically read in at the end of all the datafiles in this series. Gull Wing home page Next gullwing page Surface Evolver home page More Surface Evolver examples Ken Brakke's home page This page last modified August 7, 1999
{"url":"http://www.susqu.edu/brakke/evolver/gullwing/gullwing-1.html","timestamp":"2014-04-20T09:00:48Z","content_type":null,"content_length":"12947","record_id":"<urn:uuid:78e25939-8f25-45f1-a6d2-19b505d92881>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: In probabilityland, one third of population is senior. Every winter 20% of population gets the flu. on average, two out of five seniors get the flu. what is the probability that a person with the flu is senior? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50671e6be4b06e5324216fe8","timestamp":"2014-04-19T13:07:51Z","content_type":null,"content_length":"77757","record_id":"<urn:uuid:cdc6da40-d2c4-49cd-bc35-2ea4325347fc>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
13 Feb 23:06 2013 How to quantify the overhead? Daryoush Mehrtash <dmehrtash <at> gmail.com> 2013-02-13 22:06:33 GMT Before I write the code I like to be able to quantify my expected result. But I am having hard time quantifying my expected result for alternative approaches in Haskell. I would appreciate any comments from experienced Haskell-ers on this problem: Suppose I have a big list of integers and I like to find the first two that add up to a number, say 10. One approach is to put the numbers in the map as I read them and each step look for the 10-x of the number before going on to the next value in the list. Alternatively, I am trying to see if I it make sense to covert the list to a series of computations that each is looking for the 10-x in the rest of the list (running them in breath first search). I like to find out sanity of using the Applicative/Alternative class's (<|>) operator to set up the computation. So each element of the list (say x) is recursively converted to a computation to SearchFor (10 - x) that is (<|>)-ed with all previous computation until one of the computations returns the pair that add up to 10 or all computation reach end of the list. Does the approach make any sense? What kind of overhead should I expect if I convert the numbers to a computation on the list? In one case I would have a number in a map and in the other I would have a number in a computation over a list. How do I quantify the overheads? Haskell-Cafe mailing list Haskell-Cafe <at> haskell.org
{"url":"http://comments.gmane.org/gmane.comp.lang.haskell.cafe/103393","timestamp":"2014-04-20T23:29:47Z","content_type":null,"content_length":"10133","record_id":"<urn:uuid:41d1f26a-548c-4794-835e-692d4ce41cf0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
single phase ac induction motor ppt Some Information About single phase ac induction motor ppt is hidden..!! Click Here to show single phase ac induction motor ppt's more details.. Do You Want To See More Details About "single phase ac induction motor ppt" ? Then .Ask Here..! with your need/request , We will collect and show specific information of single phase ac induction motor ppt's within short time.......So hurry to Ask now (No Registration , No fees ...its a free service from our side).....Our experts are ready to help you... .Ask Here..! In this page you may see single phase ac induction motor ppt related pages link And You're currently viewing a stripped down version of content. open "Show Contents" to see content in proper format with attachments Page / Author tags Posted by: Created at: Thursday ppt for single phase induction motor speed control through gsm , a c motor control, induction regulator ppt , single phase induction motor ppt seminar dawnload, speed contol 25th of October 2012 of single phase induction motor ppt , seminar on single phase induction motor, single phase induction motor ppt , example single phase induction motorppt, single phase 01:54:33 AM induction motor speed control projects , ppt speed control of 1phase im, single phase ac induction motor ppt , speed control ac induction motors ppt, ppt of speed control of Last Edited Or Replied inverter fed single phase induction machine , touch screen based speed control of single phase induction motor, single phase motor power point , at :Thursday 25th of October 2012 01:54:33 speed control of single phase induction motor by using ac regulator ..................[:=> Show Contents <=:] Posted by: tmplsureshkumar ac voltage controller using scr, ac voltage controller using triac , ac voltage controller using triac and diac, single phase ac voltage controller with inductive rl load , Created at: Saturday ac voltage controller scr, ac voltage controller theory , ac voltage controller tutorial, ac voltage controller notes , ac voltage controller note, ac voltage controller ppt 06th of March 2010 , ac voltage controller lecture notes, ac voltage controller ic , ac voltage controller for resistance heating, ac voltage controller application , ac voltage controller circ 11:27:27 PM , Last Edited Or Replied at :Monday 08th of March 2010 04:46:08 AM [/f..................[:=> Show Contents <=:] Posted by: project brushless doubly fed generator , doubly fed induction generator model, doubly fed induction generator systems , doubly fed asynchronous generator, doubly fed induction report tiger generator basics , doubly fed induction generator principle, doubly fed induction generator systems for wind turbines , doubly fed induction generator, doubly fed induction Created at: Saturday motor pdf , doubly fed induction motor ppt, doubly fed induction motor , report, full , motor, induction , doubly, doubly fed induction motor , synchronous motor and 20th of February 2010 induction motor difference uses in wind turbine in ppt, report on induction motors , ppt on type of induction motor, doubly fed induction motors , the project induction moor, 09:40:13 PM singly fed and doubly fed motors ppt , double fed induction machine, doubly fed induction generator basics ppt , doubly fed induction generator basics, doubly fed electric Last Edited Or Replied machines , doubly feed induction generator basics ppt, induction motor seminar , doubl e fed induction motor, doubly fed induction machine , ppt on double fed electric at :Saturday 20th of machine, doubley feed induction machine , dobly fed ndction moto, seminar topics on doubly , February 2010 09:40:13 less cost but instability is more. Field can be from rotor or stator or from both. Both Active Power (for torque) and Reactive Power (for Flux) has to be fed to ROTOR. Multi-phase supply with frequency f given to stator. Frequency converter converts power from supply frequency to slip frequency. Theoretically system cost is half of other machines with same rating. HIGHER efficiency due to less LOSS. Rotor core is effectively utilized hence Power density is large. Active & Reactive Power to GRID c..................[:=> Show Contents <=:] Posted by: speed control of ACDCInduction motor using MATLAB , speed, control , ACDCInduction, induction motor theorymotor , induction motor theorythree phase induction motor, induction nitz456@gmail.com motor speed control , principle of induction motor, speed control of induction motor , induction motor pdf, induction motor animation , induction motor ppt, 3 phase induction Created at: Tuesday motor , three phase induction motor, using , MATLAB, dc motor matlab , matlab code for speed control of induction motor, ac motor speed control using matlab and , induction 16th of February 2010 motor matlab project report, induction machine matlab , induction motor with matlab, speed control of induction motor using matlab , induction motor with matla, dc motor 02:33:42 AM project based on matlab , induction motor speed control using matlab, induction motor speed control matlab , induction motor speed control matlab simulation, project on Last Edited Or Replied induction motor using matlab programming , project on induction motor using matlab, matlab projects on induction motors , induction motor matlab projects, matlab projects for at :Monday 16th of dc motors , matlab project motor ac, August 2010 12:44:16 project details about speed control of AC,DC,Induction motor using MATLAB. Thank you. Please send me i..................[:=> Show Contents <=:] Posted by: speed control of dc shunt motor, speed control of dc machines , speed control of dc servo motor, speed control of dc motor using pwm , speed control of dc drives, speed nitz456@gmail.com control of dc motor using fuzzy logic , speed control of dc motor ppt, speed control of dc motor using matlab , speed control of dc motor using scr, Induction Motor Using PID Created at: Monday , induction motor theory, induction motor speed control , induction motor pdf, induction motor ppt , induction motor drives, induction motor principle , induction motor 15th of February 2010 starter, ind , speed control of an induction motor using a pi controller, compound pid controllers ppt , speed control of induction motor by fuzzy logic controller matlab 12:53:19 AM programing, speed control of induction motor ppt , matlab code for speed control of induction motor, fuzzy logic control of induction motor matlab model file , dc motor speed Last Edited Or Replied pid control sample code, design of fuzzy logic controller using matlab , design of dc servo motor speed control by using thyristors seminar topics, pid controlller , speed at :Monday 30th of control of dc induction motor using pid fuzzy controller, matlab coding for speed control of an induction motor using fuzzy logic controller , fuzzy pid matlab, January 2012 12:20:43 i would like to have project report or some details regarding motor control using fuzzy logic ..................[:=> Show Contents <=:] Posted by: vspeace induction motors england, induction motors ev , induction motors examples, induction motors equivalent circuits , induction motors equations, induction motors explained , Created at: Wednesday induction motors disadvantages, single phase induction motors circuit , induction motors construction, induction motors calculations , induction motors basics, using 27th of January 2010 induction motors as generators , induction motors animation, induction motors advantages , induction motors applications, induction motors and universal motors , induction 07:11:09 AM motors as gener, 73 rf control of induction motors and other industrial , rf control of induction motors and other industrial loads pdf, rf control of induction motors , rf Last Edited Or Replied control of induction motors ppt, rf control of induction motors and other industrial loads , rf control to induction motors ppt, r f control of induction motor ppt , rf at :Wednesday 27th of control of induction motors and other industrial, induction motor control using rf , radio frequency control and indution motor, rf controlled induction motor pdf , function January 2010 07:44:10 of industrial loads, rf controlled using of inuction motors , rf control of induction motors and other industrial load, what is rf control of induction motors , # can I know m..................[:=> Show Contents <=:] Posted by: computer science technology Study , RegulatorsA, phase , Phase Angle Control in Triac based Single phase AC Regulators, Single , based, Triac , Control, Angle , Phase, application of the single phase ac Created at: Monday voltage controller using triac , ppt on triac, ac motor speed control using triac , single phase ac voltage controller using triac, triac rc phase control formula , single 18th of January 2010 phase ac voltage controller with triac, thyristor control of ac circuits , thyristor based, thyristor triac controlled charger circuit , phase angle regulator, diode based 11:43:53 AM thyristor based , single phase a c dimmer, single phase halfwave r l load regelator , triac phase contol, mcu based triac motor phase regulator , triac based charging Last Edited Or Replied controller, period starting motor single phase using triac , triac based dc dc circuit, phase angle control c by thyristor circuit , at :Thursday 17th of March 2011 01:13:37 AM onverters, are described. Two basic circuits - star- connected and delta-connected, are first taken up. The operation of the two circuits with three-phase balanced resistive (R) load, along with the waveforms, is then discussed. Lastly, the important points of comparison of the performance with different types of circuits, including the above two, are presented. In this case, the load is balanced inductive (R-L) one. In this lesson - the third and final one in the first half, firstly, the circuit used for the phase angle control in triac-based single- phase ac re..................[:=> Show Contents <=:] Posted by: electronics seminars ieee power electronics projects, power electronics project topics , power electronics projects pdf, power electronics projects , power system projects company psp, power Created at: Friday system projects , projects, system , power, modelling of shunt converter using matlab , direct torque control of induction machines utilizing 3 level cascaded h bridge 15th of January 2010 multilevel inverter and fuzzy logic, single phase ac to dc system 25kv composite system ppt , power system based projects, matlab simulink on power application , photovoltaic 01:27:41 PM effect related projects for eee students, search control and loss model based controller of power factor , power system projects, projects on power system , fuzzy rules in Last Edited Or Replied simulink battery, power system , pv bess ppt, modelling power quality enhancement induction motor synchronous reference frame pdf , projects on power systems, power systems , at :Monday 11th of eee tpp topics, February 2013 11:39:10 tion of Three-Phase Unbalanced Radial Distribution System ttttt. A Three-Phase Power Flow Method For Real-Time Distribution System Analysis uuuuu. novel Method of Load Compensation under Unbalanced and Distorted Voltages vvvvv. Development of Three-Phase Unbalanced Power Flow Using PV and PQ Models for Distributed Generation and Study of the Impact of DG Models wwwww. Position Control Design Project (PD Controller “ Root “Locus Design) xxxxx. Application of Voltage- and Current- Controlled Voltage Source Inverters for Distributed Generation Systems yyyyy. Multi-Input In..................[:=> Show Contents <=:] Cloud Plugin by Remshad Medappil
{"url":"http://seminarprojects.net/c/single-phase-ac-induction-motor-ppt","timestamp":"2014-04-20T00:39:00Z","content_type":null,"content_length":"43756","record_id":"<urn:uuid:bb794247-bdc1-4163-b070-32f7f7dd3883>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2006 [00218] [Date Index] [Thread Index] [Author Index] RE: how-to show a contourplot together with a point grid • To: mathgroup at smc.vnet.net • Subject: [mg72055] RE: [mg72006] how-to show a contourplot together with a point grid • From: "David Annetts" <davidannetts at aapt.net.au> • Date: Sun, 10 Dec 2006 04:49:02 -0500 (EST) > i have a matix of dimension {m,n} and make a > listcontourplot out of it. and i have another matrix of the > same dimension and make a plot by > : Show[Map[Point,grid]]. > i want to overlap these two plots together. And I have > tried using the second one as the Epilog of the first one, > but it did not work. any ideas? > generally, how can i overlap two plots in one figure? > thanks for your attentions and reply It is very difficult to give specific help since you heven't given a specific example. In future, I suggest you post the code that you are having problems with. Generate some data, then plot it data = Table[Sin[x y], {x, -Pi, Pi, Pi/10}, {y, -Pi, Pi, Pi/10}]; cplt = ListContourPlot[data, ColorFunction -> Rainbow, MeshRange -> {{-Pi, Pi}, {-Pi, Pi}}, FrameTicks -> {PiScale, PiScale, None, None}]; For the colour function, I define Rainbow[z_] := Hue[.8 z]; You will also need to load Graphics`Graphics for PiScale to work. Next, generate the points and plot them pnts = Table[{x, y}, {x, -Pi, Pi, Pi/10}, {y, -Pi, Pi, Pi/10}]; pplt = Show[Graphics[Point[#] & /@ Partition[Flatten[pnts], 2]]]; Finally, display the two plots Show[cplt, pplt]; Show[] is usually how you'd display two (or more) plots together.
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Dec/msg00218.html","timestamp":"2014-04-16T19:20:50Z","content_type":null,"content_length":"35422","record_id":"<urn:uuid:fe993773-ea44-4df3-8bb7-95e8233a2086>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
Power Code System Effect of using different codes for Power reduction This paper presents the investigative study of power reduction technique using different coding schemes while keeping the low probability of bit error. Hamming, extended Golay and BCH (Bos-chdhuri-Hocquenghem) are selected to illustrate the purpose of power reduction. For the simulation purpose different rates along with different coding techniques are selected. The results show that an efficient and rightful selection of code can improve the performance of any communication system with lower Eb/No (Ratio of the signal energy per bit to noise power density) and bit error 1. Introduction Recently there is a tremendous growth in digital communications sector especially in the fields of wireless and computer communication. In these communication systems, the information is represented as a sequence of binary bits. These binary bits are transformed into analog wave forms using different modulation techniques. The communication channel introduces noise and interference that corrupts the transmitted signal. So at the time of reception corrupted signal is received. Bit errors may result due to the transmission and the number of bit errors depends on the amount of noise and interference in the communication channel. Channel coding is often used in digital communication systems to protect the digital information from noise and interference and reduce the number of bit Use of channel coding as shown in figure 1 to design low bit error rate communication system has recently remained an active research area [1][2][3]. The ability of different codes to detect and correct data at receiving side improves the quality of communication and also minimizes the chances of re-transmissions. The importance of error rate is realized in [1] [2] [4] [5]. Apart from the ability of using codes to minimize the error rate the possibility of reducing peak power by using specific codes is now under consideration [1][2][3]. Use of codes to reduce power will help in building robust and stable systems with better quality of voice and data. With the preference of wireless devices for the communication purpose the focus of research has now mainly shifted to wireless communication system design and also takes the channel coding aspect with it. The battery power is a big constraint for the long duration communication and adjustment of power during calls cause a lot of power usage. This problem gives a clear motivation for investigating such codes that can reduce the power transmission while keeping the same or low bit error rate [1]. This paper presents an investigative study of using popular codes to minimize the transmission power. Effect of using codes on transmitted power is investigated in detail and observations are noted about the effect on bit error rate, which remains the main quality standard. This paper is divided into 5 sections. Section 2 provides details about the well know codes like Hamming, Golay and BCH that are used in simulations. Section 3 presents details about simulation setup. Section 4 provides results and discussion on them. Finally section 5 gives conclusion and future directives. 2. Types of Codes Channel coding is mostly used in digital communication systems to protect the digital information from noise and interference and reduce the number of bit errors. Channel coding is mostly accomplished by selectively introducing redundant bits into the transmitted information stream. Addition of these bits will allow detection and correction of bit errors in the received data bit stream, and provide more reliable information transmission. There are two main types of channel codes, Block codes & Convolution codes. Block codes are based rigorously on finite field arithmetic and abstract algebra. They can be used to either detect or correct errors. Block codes accept a block of k information bits and produce a block of n coded bits. By predetermined rules, n-k redundant bits are added to the k information bits to form the[] coded bits. Commonly, these codes are referred to as[] block codes. Some of the commonly used block codes are Hamming codes, Golay codes, Extended Golay, BCH codes, and Reed Solomon codes [6]. This research work has utilized three of the well know block code namely Hamming, Extended Golay and BCH. Brief description is given as below. 2.1 Hamming Extension of hamming codes that can correct one and detect more than one error is widely used in different applications. The main principal behind working of Hamming codes is parity. Parity is used to detect and correct errors. These parity bits are the resultant of applying parity check on different combination of data bits. Structural representation of Hamming codes can be given as [6] Where[] .. For hamming codes Syndrome decoding is well suited. It is possible to use syndrome to act as a binary pointer to identify location of error. If hard decision decoding is assumed then the probability of bit error[]can be given as Where p is the channel symbol error probability [6]. An identical equation can be written as [6]. 2.2 Extended Golay The extended Golay code uses 12 bits of data and coded it in 24-bit word. This (24,12) extended Golay is derived by adding parity bit to (23, 12) Golay code. This added parity bit increases the minimum distance[]from 7 to 8 and produces a rate code, which is easier to implement than the rate[] that is original Golay code [6]. Though the advantages of using extended Golay is much more to that of ordinary Golay but at the same time the complexity of decoder increases and with the increase in code word size the bandwidth is also utilized more. Extended Golay is also considered more reliable and powerful as compared to Hamming code. If probability of bit error is given by Pb and dmin is 8 with the assumption of hard decision then error probability is given by [6] 2.3 BCH Codes BCH belongs to powerful class of cyclic codes. BCH codes are powerful enough to detect multiple errors. The most commonly used BCH codes employs a binary alphabet and a codeword block length of n= 2^ m-1, where m= 3, 4,...... [6]. 3. Simulation In this simulation we used fixed data rate for e.g 9600 with different code rates of hamming, golay and BCH.We have also assumed the modulation to be BPSK (binary phase shift keying). Matlab is used as a model simulating tool. Eb/No is taken as a power comparison parameter for coded and uncoded signal. The[] parameter is calculated using formula [6] Where R is the data rate in bits per second. Pr/No is ratio the received power to the noise. Apart from[]comparison, the value of bit error rate is also compaired for coded and uncoded bit stream which is calculated by the formula given in eq 6 & eq respectively [6] Where Q(x) is called complementary error function or co-error function, it is commonly used symbol for probability under the tail of Gaussian pdf. Where Pu is probability of error in un-coded bit sequence and Pc is the probability of error in coded bit sequence. Ec/No is ratio of energy per bit to noise spectrum density of coded bit sequence. Finally the most important parameter that shows the edge of using codes with the data is calculated which is probability of bit detected correctly for coded and uncoded bit sequence which is given by equations [6] []and []are probability of un-coded message block received in error and probability of coded block received in error respectively. 4. Simulation results This section presents graphs that are obtained through simulation. The parameter of investigation are []of coded and uncoded sequences, Correlation Coefficient (coded power, code rate and error) and bit error propabiliy at receiving end. Graph of figure 2 represents ratio of energy per bit to noise power density[]on X-axis and error probability on the Y-axis. Graph shows that (error right curve) using code we have aquired reduction in power from 4 to 2 dB with same low probability of error i.e 3x10^-2. . Thus with the help of codes it is shown that more reliable transmission with the reduced power is posible. Thus the power can be efficiently used and will help to improve the up time for the mobile devices with better quality of data transaction. Graph of figure 3 shows correlation coefficient (Coded Power & Coded rate). This graph is based of three attributes of signal. One is Coded power, second is coded rate and third is Error. Interresting part to be noted about the graph is that from 12 dB(coded power) onwards we have almost '0' probability of error. Finally graph of Figure 4 represents the most important comparision of different codes performances. This graph is obtained by keeping the same data rate and varying code rate and code types. For example as decsribed in ealier section three well know block code like Hamming, Extended Golay and BCH are taken for study. For hamming three different rate that is (7,4), (15,11) and (31,26) are taken. For Extended Golay (24,12) and for BCH (127,64) and (127,36) rates are used. The graph show the performance of all the codes with Eb/No on X-axis and Error probability on the Y-axis. It can be eassily infered by the given graph that Golay and BCH shows better performance and these codes give the optimal power of 3.6 dB. 5. Conclusions In this research we inferred that using codes we have low probaility of error with reduced power. Golay and BCH gave optimized power. For future work, codes may be used to implement on multi-carrier systems such as CDMA and OFDM. [1]. Kenneth G. Paterson, Member, IEEE On Codes with Low Peak-to-Average Power Ratio for Multi-Code CDMA IEEE Transactions on Information Theory, VOL 50 (3), 2004, 550-559 [2]. Tony Ottosson Precoding in Multicode DS-CDMA Systems 1997 IEEE [3]. R. Neil Braithwaite Using Walsh Code Selection to Reduce the Power Variance of Band-Limited Forward-Link CDMA Waveforms IEEE Journal on Selected Areas on Communications, VOL. 18, NO. 11 NOVEMBER [4]. Samuel C. Yang: CDMA RF SYSTEM ENGINEERING, Artech House, INC, 685 Canton Street Norwood, MA 02062, 1998 [5]. Henrik Schulze and Christen Luders Theory and application of OFDM and CDMA Wideband Wireless Communication, John Wiley and Sons, Ltd, 2005 [6]. Bernard Sklar, Digital Communications: Fundamentals and Applications, 2nd Edition, Pearson Education, 2006 Share This Essay Did you find this essay useful? Share this essay with your friends and you could win £20 worth of Amazon vouchers. One winner chosen at random each month. Request Removal If you are the original writer of this essay and no longer wish to have the essay published on the UK Essays website then please click on the link below to request removal: Request the removal of this essay. More from UK Essays Need help with your essay? We offer a bespoke essay writing service and can produce an essay to your exact requirements, written by one of our expert academic writing team. Simply click on the button below to order your essay, you will see an instant price based on your specific needs before the order is processed:
{"url":"http://www.ukessays.com/essays/engineering/power-code-system.php","timestamp":"2014-04-16T10:47:55Z","content_type":null,"content_length":"32294","record_id":"<urn:uuid:3a5fa29f-736f-4aa6-a3f5-0f6b332dfd93>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/espex/answered","timestamp":"2014-04-19T22:52:42Z","content_type":null,"content_length":"133329","record_id":"<urn:uuid:fa105473-1527-4d1d-b2b6-b56019edbbc9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Web Sites with Mathematical Content A Collection of Web Sites with Mathematical Content The following is a list of web sites with mathematical content. The list is not intended to be exhaustive. Rather, it is designed to give a feel for the range of materials available. You certainly won't have time to look at all the material on all the sites, but it will give you an idea of what is out there -- and what you might be able to use as part of your project. 1. MAA's Mathematical Sciences Digital Library (MathDL) · JOMA: The Journal of Online Mathematics and its Applications, the journal of MathDL · Digital Classroom Resources, the general collection of MathDL. This is just now starting to put up content. Much more will come soon. 2. Duke Material · The Duke Connected Curriculum Project (CCP): Interactive materials for courses from precalculus to linear algebra, differential equations, and engineering mathematics. Most modules have discussion and instructions in web pages with downloadable computer algebra system worksheets for student exploration and reports. The following modules have filled-in Maple worksheets available temporarily. Click on the workshop icon to download these. (This will enable you to see the intent of the module without having to work through the Maple.) Radioactive Decay (in Differential Calculus) Equiangular Spirals (in Multivariable Calculus) Logistic Growth Model (in Differential Equations) Introduction to the One-Dimensional Heat Equation (in Engineering Mathematics -- there is no Maple file for this module) · The Post CALC Project: These materials are designed for high school students who have finished a yearlong course in calculus, but still have time left in their high school career. The format is similar to the CCP materials, but these modules are considerably longer. 3. MERLOT: This extensive project, part of the emerging National Science Digital Library (NSDL), contains materials that range across many disciplines besides mathematics. 4. iLumina: Another collection in the NSDL. This site has extensive metadata available on their entries. The material available covers biology, chemistry, computer science, mathematics, and physics. Use Internet Explorer to search this site. 5. NCTM E-Examples site: These materials are designed to illuminate the NCTM Principles and Standards. 6. Virtual Laboratories in Probability and Statistics: This site was created by Kyle Siegrist at the University of Alabama at Huntsville. For a quick look, scroll down to the bottom of the homepage and click on Applets. Check out the Interactive Histogram applet and the Dice Experiment Applet. 7. Demos with Positive Impact: This site was created by Dave Hill at Temple and Lila Roberts at Georgia Southern. It is a collection of demos that use various technologies and can be used for a variety of courses. 8. Interactive Mathematics: This site is at Utah State.In the 9-12 Geometry section, check out The Pythagorean Theorem and the Platonic solids 9. Math Forum: One of the oldest web sites featuring mathematics, this site focuses on materials and services for K-12. 10. Eisenhower National Clearinghouse (ENC): A large site covering many disciplines with emphasis on K-12. 11. MathWorld: An encyclopedic math site created by Eric Weisstein at Wolfram. 12. The MacTutor History of Mathematics archive: A rich site. Click on the Famous Curves Index and check out a couple of the curves.
{"url":"https://www.math.duke.edu/education/webfeatsII/websites.html","timestamp":"2014-04-17T03:50:11Z","content_type":null,"content_length":"13761","record_id":"<urn:uuid:c936af94-f899-4f4d-8110-baab451793f8>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
Chicopee Math Tutor Find a Chicopee Math Tutor ...I have successfully tutored calculus over a period of 16 years. I have successfully tutored Geometry over a period of 16 years' I have tutored an entire course in geometry. I have a master's degree in mathematics. 5 Subjects: including algebra 1, algebra 2, calculus, geometry ...For more advanced students in math and chemistry, I have experience teaching undergraduate freshmen through seniors and have even tutored fellow graduate students on everything from General Chemistry, Organic, Analytical & Inorganic Chemistry to Physical Chemistry. I am also available for advanced math help. My experience includes Calculus I & II, Linear Algebra, & Differential 28 Subjects: including algebra 1, ACT Math, algebra 2, geometry ...I have assisted students with basic subject learning as well as MCAS testing and high school preparation. I fully expect parental involvement for the duration of our tutoring experience, not with the session itself, but with our pre/post tutoring correspondence; a little bit of review time with ... 10 Subjects: including prealgebra, algebra 1, reading, writing ...Though I have had a good deal of experience in working with youth, I also thoroughly enjoy working with students in high school and consider high school students my peers. I strongly believe in equality when it comes to teaching and that the tutor is neither inferior or superior to the student. ... 15 Subjects: including calculus, physics, study skills, writing I have been teaching Chemistry for 12 years total, the last 5 of which have been in Longmeadow, MA. In that time I have taught every level of Chemistry from Advanced Placement to a remedial level designed for students who struggle with Reading and Math. I actually designed the remedial class to maximize the learning of the concepts while teaching the required math during the lessons. 2 Subjects: including SAT math, chemistry
{"url":"http://www.purplemath.com/chicopee_ma_math_tutors.php","timestamp":"2014-04-19T05:34:18Z","content_type":null,"content_length":"23754","record_id":"<urn:uuid:f0c6f0d1-4d64-4466-8ba2-bc5c625f0952>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Hermann Minkowski "From henceforth, space by itself, and time by itself, have vanished into the merest s and only a kind of blend of the two exists in its own right." Hermann Minkowski was born on June 22, 1864 in the town of Alexotas in what was once the Russian Empire (now Kaunas, Lithuania). He studied at the Universities of Berlin and Königsberg, recieving his doctorate from Königsberg in 1885. He taught at several universities, in Bonn, Königsberg and Zurich. In Zurich, Albert Einstein was a student in several of his courses. Minkowski once said of Einstein, "The mathematical education of the young physicist (Einstein) was not very solid, which I am in a good position to evaluate since he obtained it from me in Zurich some time ago." Minkowski was also a teacher of Max Born. He settled down in 1902 after accepting a chair in the mathematics department at the University of Göttingen in Germany, where he stayed for the rest of his life. It was in Göttingen that Minkowski entered into the field that would be his life's work. He learned mathematical physics from David Hilbert, and participated in a seminar on electron theory in 1905, diving into the field of electrodynamics. By 1907, Minkowski had realized that the work of Hendrik A. Lorentz and his former pupil Albert Einstein would be best understood in a non-euclidean space - three flat dimensions were simply too rigid. Minkoski was the first to consider the abstract concepts of space and time to be linked in a four-dimensional 'space-time continuum'. This 4-D treatment of electrodyinamics was published in his work Raum und Zeit (Time and Space) that same year, and provied the framework for later mathematical work in relativity. Einstein used this "Minkowski space" in developing his paper Relativity: The Special and General Theory (see Einstein's General Theory of Relativity. Hermann Minkowski died suddenly from a ruptured appendix in 1909, in Göttingen. He was 44. Source: The University of St. Andrews at http://www-groups.dcs.st-andrews.ac.uk/~history/Mathematicians/Minkowski.html
{"url":"http://everything2.com/title/Hermann+Minkowski","timestamp":"2014-04-21T13:32:38Z","content_type":null,"content_length":"22249","record_id":"<urn:uuid:a7c1895d-25a3-419f-9683-7318c6e2d484>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
The Farsight Institute | A Brief Description of SAM - Test Three: Correlation and Correspondence Farsight's Session Analysis Machine (SAM) TEST THREE: Correspondence and Correlation All targets have a variety of descriptive characteristics (that is, SAM target attributes). When comparing one target with another, both similarities and differences will be found between the two. The correspondence numbers are one measure of the degree of similarity between any two sets of SAM data, and these numbers can be used to compare one target with another target, or a remote-viewing session with a target. The correspondence numbers are calculated as per Test One. Proportion A is the total matches between the session and the target as a proportion of the total number of target attributes. Proportion B is the total matches between the session and the target as a proportion of the total number of session entries (not target attributes as with proportion A). The average of proportions A and B is called the "correspondence number" for the session, and it is a general measure of the correspondence between the observed remote-viewing data and the actual target attributes. Again, correspondence numbers can also be calculated between any two targets to measure their degree of similarity. Test three evaluates the correspondence numbers for each session. The better a remote-viewing session describes all of a target's characteristics, the higher will be the correspondence number between the session and the target. Used in this way, the correspondence number is called the "session/target" correspondence number. When correspondence numbers are calculated that compare one target with another, such numbers are called "target/target" correspondence numbers. We want to do two things with these correspondence numbers. First we want to note the relative ranking of the session/target correspondence number for the remote-viewing session and its real target as compared with the session/target numbers for the session and other (bogus) targets in a pool of targets. If a session describes the actual target relatively well, then its correspondence number should be high relative to alternative correspondence numbers for bogus targets selected from a pool. Second, we want to compare the variation of both the session/target numbers and the target/ target numbers with regard to the pool of targets. Since a pool of targets normally contains targets with a great variety of descriptive characteristics, comparing any given real target with other bogus targets will result in finding various collections of similarities across the comparisons. For example, the real target may have a mountain and a structure. Comparing this target with another target that has only a mountain will find the similarity in terms of the mountain but not in terms of the structure. Comparing the same real target with another target that has only a structure will find the similarity with respect to the structure, but not with respect to the mountain. Using a number of comparisons in this way across a pool of targets allows us to account for all or most of the real target's important characteristics. This returns us to wanting to compare the variation between the two sets of session/target and target/target correspondence numbers across the pool of targets as a means of evaluating the overall success of a remote-viewing session in capturing its real target's total set of attributes. When compared with other targets which in the aggregate contain many different attribute sets, both the remote-viewing session and its real target should have correspondence numbers that vary similarly. The correlation coefficient summarizes this relationship. The correlation coefficient can vary between -1 and 1. The closer its value is to 1, the more closely the remote-viewing session describes all of its real target's various To begin this comparison in test three, correspondence numbers between the remote-viewing data and all 13 targets that were chosen for the public demonstration of remote viewing are calculated and presented in a table. This allows for a direct comparison of correspondence numbers between the remote-viewing session and the real target as compared with those numbers involving the other targets in this small pool. An accurate session should have a correspondence number for the real target that has a relatively high ranking as compared with the correspondence numbers involving the other targets. The correlation coefficient for the session/target and target/target correspondence numbers is also calculated. A high correlation between the two sets of numbers indicates that the session data and the target attributes for the real target for the experiment are similar when compared with target attributes for other targets in the public demonstration pool. In Part II of this test, correspondence numbers for the given remote-viewing session and all targets in a diverse pool of 240 SAM targets are calculated. Additionally, correspondence numbers calculated using the real target for the remote viewing experiment and all targets in the SAM pool are also calculated. If the remote-viewing session describes the real target well, then the two sets of correspondence numbers (that is, one comparing the session with the SAM pool, and the other comparing the real target with the SAM pool) should vary similarly. Since it is impractical to examine and compare each pair of correspondence numbers using this larger pool of targets as is done in Part I for this test, only the correlation coefficient for the two sets of correspondence numbers is calculated and presented.
{"url":"http://farsight.org/demo/Demo1999/SAMtestThreeDescription.html","timestamp":"2014-04-21T01:59:47Z","content_type":null,"content_length":"16700","record_id":"<urn:uuid:73582625-2c8b-48e9-83d7-1db6dea90e1c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Attleboro Trigonometry Tutor Find an Attleboro Trigonometry Tutor ...Some of my Algebra innovations developed for the SAT will also help students in Algebra I and Algebra II classes. I scored an 800 on the SAT Math section in high school. I generally work well with my students. 14 Subjects: including trigonometry, calculus, geometry, statistics ...I took calculus in high school and several levels of calculus in college. I also took 3D calculus at MIT. While tutoring in my junior and senior year of college, I tutored freshman in calculus. 10 Subjects: including trigonometry, calculus, physics, algebra 2 ...I usually tutor students at their homes, but I am willing to work at another location (public library, coffee shop, etc.), if preferred. Please contact me for more details, past results, and references. Looking forward to working with you! 41 Subjects: including trigonometry, English, reading, chemistry ...My BA involved reading and interpreting writing of all sorts. I enjoy helping others to reason through their ideas about a given text. I received a perfect score on the GRE general test verbal portion, and have extensive experience tutoring and working one-on-one with students. 29 Subjects: including trigonometry, English, reading, writing ...Algebra 2 skills, including factoring, finding roots, solving sets of equations and classifying functions by their properties, are a necessary foundation for trigonometry, pre-calculus, calculus and linear algebra. Particularly important are operations with exponents and an understanding of the ... 7 Subjects: including trigonometry, calculus, physics, algebra 1
{"url":"http://www.purplemath.com/attleboro_trigonometry_tutors.php","timestamp":"2014-04-18T05:34:01Z","content_type":null,"content_length":"23994","record_id":"<urn:uuid:0840329a-1bd9-4925-ba29-39c58e4c6c44>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Revista de Saúde Pública Services on Demand Related links Print version ISSN 0034-8910 Rev. Saúde Pública vol.39 n.1 São Paulo Jan. 2005 ORIGINAL ARTICLES Impact of influenza vaccination on mortality by respiratory diseases among Brazilian elderly persons Priscila Maria Stolses Bergamo Francisco^I; Maria Rita de Camargo Donalisio^I; Maria do Rosário Dias de Oliveira Lattorre^II ^IDepartamento de Medicina Preventiva e Social. Faculdade de Ciências Médicas. Universidade Estadual de Campinas. Campinas, SP, Brasil ^IIDepartamento de Epidemiologia. Faculdade de Saúde Pública. Universidade de São Paulo. São Paulo, SP, Brasil OBJECTIVE: Respiratory diseases, especially infectious ones, are becoming increasingly representative in the morbidity and mortality patterns of elderly persons. The aim of the present study was to analyze trends in the mortality by respiratory diseases and to observe the impact of influenza vaccination on mortality rates. METHODS: The study was carried out between 1980 and 2000. Subjects were elderly persons living in the State of São Paulo, and mortality data were obtained from the Mortality Information System of the Brazilian Ministry of Health. This is an ecological time-series study. We analyzed the time trends of standardized mortality rates by infectious diseases, according to age group (60-64, 65-69, 70-74, 75-79, and 80+ years) and sex, using polynomial regression. We estimated confidence intervals for the mean expected response in the years following the intervention. RESULTS: Rates increased for both sexes among the elderly population. After the intervention, we observed a declining trend with respect to mortality indicators. For older males, the mean rate in the 1980-1998 period was 5.08 deaths per thousand men, with a linear, non-constant increase of 0.13 per year; in 2000, the rate observed was 4.72 deaths per thousand men. The mean annual rate among women over 60 years was 3.18 deaths per thousand women, with a non-constant increase of 0.08 per year; in 2000, the rate observed was 2.99 deaths per thousand women. There was also a significant reduction in mortality rates in all age groups. CONCLUSIONS: Data indicate the importance of respiratory diseases among the elderly population and suggest that specific protection against influenza has a positive effect on the prevention of mortality due to these diseases. Keywords: Aged. Aging health. Respiratory tract diseases, mortality. Influenza vaccine. Mortality, trends. Respiratory tract diseases, infectious ones especially, are an important cause of morbidity and mortality among the elderly population worldwide.^4,7,10 In Brazil, data from the Sistema de Informações sobre Mortalidade (Mortality Information System) indicates the growing importance of hospital admissions and deaths due to respiratory diseases among the elderly, even considering the ageing of the population.^6 In 1995, in São Paulo State, the proportional rate of mortality due to pneumonia among persons older than 70 years was 9%, with a specific mortality of 594.03 per 100,000 population. In the 60-70 years age group, 1,676 deaths were registered, with a proportional mortality of 4.75% and a specific rate of 101.39 per 100,000 Influenza epidemics happen more frequently during winter. Such epidemics account for a mean 20,000 yearly deaths in the United States.^21 Influenza outbreaks are associated with increases in hospital admissions and deaths, mostly due to complications of the disease and to chronic subjacent ilnesses.^2,3 Vaccination has been the major method for preventing influenza and its more severe complications. When vaccine composition coincides with circulating viral strains, vaccine efficacy can be as high as 70-90% in healthy adults. Among persons older than 60 years, however, efficacy falls to the 30-40% range.^2,8,10 Even considering the greater physiological and immunological susceptibility of elderly persons to infection, influenza vaccination has a positive effect on the prevention of severe influenza, pneumonia, and mortality in this risk group.^4,8-10,17 The aim of the present study is to analyze trends in mortality by respiratory diseases among elderly persons and to observe the impact of influenza vaccination on mortality indicators. This is an ecological time-series study, based on the mortality records of the Sistema de Informações sobre Mortalidade do SUS (Mortality Information System of the Brazilian Unified Healthcare System - SIM/SUS) for São Paulo State, between the years 1980 and 2000. Estimates of the elderly population living in the State were obtained from the Instituto Brasileiro de Geografia e Estatística (Brazilian Institute for Geography and Statistics - IBGE).^* Elderly persons were divided into five age groups: 60 to 64 years, 65 to 69 years, 70 to 74 years, 75 to 79 years, and 80 years and older. We analyzed diagnoses referring to pneumonias and influenza (until 1997, ICD-9 classifications 480-483 and 485-487 were used), bronchitis (490 and 491) and chronic airway obstruction (496). For 1998, ICD-10 classifications were used (J10 to J15, J18, J22, J40 to J42, and J44). These classifications have been used by a number of authors attempting to measure the impact of influenza on the community.^7,17,22 We chose to include chronic obstructive pulmonary disease (COPD) in light of its intimate relationship with pulmonary infection among the elderly.^22 We calculated standardized mortality rates using the harmonic mean of the populations in the 1980-1998 period as a standard population.^13 We calculated the annual ratio between standardized male/female rates and evaluated changes in this relationship throughout the years using simple linear regression models. We considered p-values above 0.05 as indicative of an absence of change in this ratio within the studied period.^11 Initially, we built scatter plots opposing mortality rates and calendar years in order to better visualize the function that might better express the relationship between these variables. Based on the functional relationship observed, we estimated polynomial regression models, which, in addition to being statistically powerful, are easy to elaborate and interpret.^12,15 During the modeling process, we considered the rates of mortality due to selected diagnoses as the dependent variable (Y) and calendar years as the independent variable (X). The transformation of variable year into variable centralized year (year minus the midpoint of the time series) was required, since, in polynomial regression models, the terms in the equation are often self-correlated.^15 As a measure of the model's precision, we used the coefficient of determination (r^2). We verified adherence to normal distribution using the Kolmogorov-Smirnov test; all series were normally distributed. Residual analysis confirmed the assumed homocedasticity of the model.^11,15 We tested the simple linear regression model (U=b[0]+b[1]C) and then second degree (U=b[0]+b[1]C+b[2]C^2), third degree (U=b[0]+b[1]C+b[2]C^2+ b[3]C^3), and exponential (U=e^b0+b1C) models. In light of the statistical similarity of two of the models, we chose that of lower degree. We considered as significant trends whose estimated models obtained p-values below 0.05.^11 In these models, b[0] is the mean yearly rate, b[1] is the linear effect coefficient (speed), and b[2] is the quadratic effect coefficient (acceleration). We considered 1989 as the midpoint of the time series. For some of the age groups, variations in the series were smoothed using a moving average centered on three terms. In this process, the smoothed rate of year i (Y[ai]) corresponds to the arithmetic mean of the coefficients of the previous year (i-1), of the year itself (i) and of the following year (i+1): Based upon the models we estimated using data from the 1980-98 period, we calculated confidence intervals for the mean expected response, i.e., the mortality rates referent to the two subsequent years (1999 and 2000). In addition to models and confidence intervals, we also present the rates obtained after vaccination. We calculated mortality rates and generated graphs using Microsoft Excel (Version 7.0 for Windows 95) spreadsheets. We performed trend analyses using SAS (Statistical Analysis System) Version 8.0 Standardized mortality rates by selected respiratory diseases increased among the population 60 years and older in São Paulo State between 1980 and 1998. This was true for both men and women (Figure 1). Among men, the mean rate in the period was 5.08 deaths per thousand men, with a linear non-constant increase of 0.13 per year. Among women, the mean annual rate was 3.18 deaths per thousand women, with a non-constant increase of 0.08 per year. The male to female standardized mortality rate ratio did not change with time (p=0.338). This ratio remained in average 1.55 men for each woman, showing the greater importance of respiratory diseases among men. An analysis of trends in separate age groups showed that, for men and women alike (Table 1, Figure 2), the older the age group, the greater the magnitude of the annual increment. The annual linear non-constant increase (b[1]) was greater among older males. In the 75-79 years age group, this increase was as much as three times that of the female population in the same age group. In both sexes, the population 80 years and older is distinguished by the magnitude of the mean annual rate (b[0]) (Table 1). An analysis of mortality rates after the vaccine intervention shows that, in 1999, there was a significant reduction in these rates among women in the 70-74 years age group. In 2000, there was a steep decline in rates among men in the 70-74 and 80+ years groups, along with a significant reduction in rates among the female population in all age groups (Table 2). For the population over 60 years as a whole, without distinction of age group, standardized mean mortality rates by selected respiratory diseases fell within the expected interval for both male and female populations in 1999. In 2000, however, there was a significant reduction in rates, i.e., rates fell below the lower limit of the confidence interval (Table 2, Figure 3). Among the male population, the absence of impact of vaccination in the years following the intervention was observed only in the 60-64 years group, whereas the remaining groups showed a decreasing trend. In the female population, the decreasing trend is clearer, especially among the 70-74 and 75-79 age groups (Table 2). Mortality information registries cover the vast majority (95%) of deaths in São Paulo State, and thus have sufficient explanatory power for the construction of satisfactorily reliable mortality Ecological studies of time series are capable of showing the evolution of disease rates in a given geographically defined population, as well as of evaluating the impact of healthcare interventions, being, therefore, an adequate design for examining trends in mortality rates with time.^5,11,14 In the present study, we found that the trends in mortality by selected respiratory diseases showed a real increase between 1980 and 1998 in São Paulo State, even after controlling for the age composition in the period by using standardized rates. However, the increase in such indicators was asymmetric with respect to sex and age groups, with a greater annual increase among males and older groups. Respiratory diseases were confirmed as an important cause of death among the elderly, corroborating the findings of other authors.^6,7,19 Generally speaking, mortality rates by selected respiratory diseases usually differ between the sexes in the age groups studied, but behave similarly in terms of trends throughout the period. The peaks in mortality in 1988, 1990, and 1994/95, seen in Figures 1 and 2, could not be explained by the present study. Some hypotheses can be offered for later investigation, including the greater circulation of virulent strains, the circulation of other etiological agents, and climactic factors. The reduction in mortality rates following vaccine implementation may be due to reductions in the number of cases or in the incidence of more severe cases after vaccination, to the greater sensitivity of healthcare services in the early diagnosis of severe pulmonary conditions, or to an improvement in the specific treatments administered. When evaluating the impact of influenza vaccination, one must consider also that virus and bacteria of different etiologies may be involved in respiratory conditions leading to the hospitalization and death of elderly persons,^2,3 especially during cold and dry seasons, this being a worldwide phenomenon.^1 Our data show that mortality rates in São Paulo State were lower in 2000 for both sexes. Repeated yearly vaccination is associated with greater levels of immunological protection and with reduced mortality compared to the first immunization.^1,2 Even considering the weak immune response of elderly persons to vaccination, a perspective study conducted in the Netherlands in 1994^8 showed that vaccination can reduce clinical and serological influenza by one-half in non-institutionalized elderly persons. Nichol et al^16 (1994), in a cohort study carried out in the United States between 1990 and 1993 with 25 thousand subjects aged 65 years and older, found an impact on the prevention of hospital admissions due to pneumonia and influenza (48% to 57%) and to all acute and chronic respiratory conditions (27% to 39%). In a meta-analysis study, Gross et al^9 (1995) confirmed the reduction in respiratory diseases, hospital admissions, and death among institutionalized elderly persons following vaccination. The results of the present study also indicate a reduction in mortality by selected respiratory diseases, varying according to sex and between different age groups. Nevertheless, a number of factors must be considered when evaluating the protective effect of influenza vaccination, including vaccine immunogenicity, the agreement between the vaccine's antigen content and circulating viral strains, ^8,9 the prevalence of chronic diseases in the community, and previous exposure to the Influenza virus. These factors vary between the seasons, as well as between different regions. Apart from these considerations, it is likely that the investments in healthcare directed towards specific anti-influenza immunization of elderly persons in the State of São Paulo are having a positive effect on this populational segment. In the present study, we sought to draw a general picture of the behavior of mortality due to respiratory tract diseases among elderly persons in the last decades. Continuous evaluation of this trend in years to come may provide more consistent evidence of the impact of successive wide-coverage vaccination campaigns on the Brazilian elderly population. 1. Ahmed AH, Nicholson KG, Nguyen-Van-Tam JS. Reduction in mortality associated with influenza vaccine during 1989-90 epidemic. Lancet 1995;346:591-5. [ Links ] 2. Centers for Disease Control and Prevention [CDC]. Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Morb Mortal Wkly Rep 2000;49 (RR-3):1-38. [ Links ] 3. Chien JW, Johnson JL. Viral pneumonias. Epidemic respiratory viruses. Postgrad Med 2000;107:41-52. [ Links ] 4. Dodet B. Immunity in the elderly. Vaccine 2000;18:1565. [ Links ] 5. França Júnior I, Monteiro CA. Estudo da tendência secular de indicadores de saúde como estratégia de investigação epidemiológica. Rev Saúde Pública 2000;34 Supl 6:57. [ Links ] 6. Francisco PMSB, Donalisio MRC, Latorre MRDO. Tendência da mortalidade por doenças respiratórias em idosos do Estado de São Paulo, 1980 a 1998. Rev Saúde Pública 2003;37:191-6. [ Links ] 7. Glezen WP, Greenberg SB, Atmar RL, Pietra PA, Couch RB. Impact of respiratory virus infections on persons with chronic underlying conditions. JAMA 2000;283:499-505. [ Links ] 8. Govaert TME, Thijs CTMCN, Masurel N, Sprenger MJW, Dinant GJ, Knottnerus J A. The efficacy of influenza vaccination in elderly individuals - A randomized double-blind placebo-controlled trial. JAMA 1994;272:1661-5. [ Links ] 9. Gross PA, Hermogenes AW, Sacks HS, Lau J, Levandowski RA. The efficacy of influenza vaccine in elderly persons. A meta-analysis and review if the literature. Ann Intern Med 1995;123:517-28. [ Links ] 10. Kaiser L, Couch RB, Galasso GJ, Glezen WP, Webster RG, WriIght PF, Hayden FG. First international symposium on influenza and other respiratory viruses: summary and overview. Antiviral Res 1999; 42:149-76. [ Links ] 11. Latorre MRDO. A mortalidade por câncer de estômago no Brasil: análise do período de 1977 a 1989. Cad Saúde Pública 1997;13 Supl 1:67-78. [ Links ] 12. Latorre MRDO, Cardoso MRA. Análise de séries temporais em epidemiologia: uma introdução sobre os aspectos metodológicos. Rev Bras Epidemiol 2001;4:145-52. [ Links ] 13. Laurenti R, Mello Jorge MHP, Lebrão ML, Gotlieb, SL. Estatísticas de saúde. São Paulo: EPU; 1987. [ Links ] 14. Morgenstern, H. Ecologic studies. In: Rothman KJ, Greenland S. 2^nd ed. Modern epidemiology. Philadelphia: Lippincott-Raven Publishers; 1998. [ Links ] 15. Neter J, Wasserman W, Kutner MH. Polynomial regression. In: Neter J, Wasserman W, Kutner MH. Applied linear statistical models. Boston: Irwin; 1990. p. 315-41. [ Links ] 16. Nichol KL, Margolis KL, Wuorenma J, Von Sternberg T. The efficacy and cost effectiveness of vaccination against influenza among elderly persons in the community. N Engl J Med 1994;331:778-84. [ Links ] 17. Nichol KL, Baken L, Nelson A. Relation between influenza vaccination and outpatient visits, hospitalization, and mortality in elderly persons with chronic lung disease. Ann Intern Med 1999; 130:397-403. [ Links ] 18. Paes NA, Albuquerque MEE. Avaliação da qualidade dos dados populacionais e cobertura de registros de óbitos para as regiões brasileiras. Rev Saúde Pública 1999;33:33-43. [ Links ] 19. Ruiz T. Estudo da mortalidade e dos seus preditores na população idosa do município de Botucatu, SP [tese de doutorado]. Campinas: Faculdade de Ciências Médicas da Unicamp; 1996. [ Links ] 20. Secretaria da Saúde. Centro de Vigilância Sanitária. Campanha Nacional de Vacinação para o Idoso. São Paulo; 1999. [ [ Links ]Informe Técnico] 21. Simonsen L, Schonberger LB, Stroup DF, Arden NH, Cox, NJ. The impact of influenza on mortality in the USA. In: Brown LE, Hampson AW, Webster RG. Options for the control of influenza III. Amsterdam: Elsevier Science; 1996. [ Links ] 22. Upshur REG, Knight K, Goel V. Time-series analysis of the relation between influenza virus and hospital admissions of the elderly in Ontario, Canada, for pneumonia, chronic lung disease, and congestive heart failure. Am J Epidemiol 1999;149:85-92. [ Links ] Correspondece to Priscila Maria Stolses Bergamo Francisco Caixa Postal 6111 13083-970 Campinas, SP, Brasil E-mail: priscila@nepo.unicamp.br Received on 3/2/2004. Approved on 8/6/2004 Based on the Master's dissertation presented at the Faculdade de Ciências Médicas da Universidade Estadual de Campinas in 2002. * Information obtained online from URL: < http://www.datasus.gov.br/cgi/ibge/popmao.htm>
{"url":"http://www.scielosp.org/scielo.php?script=sci_arttext&pid=S0034-89102005000100010&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-19T22:30:08Z","content_type":null,"content_length":"50982","record_id":"<urn:uuid:19a6723b-370b-400e-bdfa-b3eb8042902b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: arXiv:0812.5098v3[math.GT]18Aug2009 Abstract. It is known that every exotic smooth structure on a simply con- nected closed 4-manifold is determined by a codimention zero compact con- tractible Stein submanifold and an involution on its boundary. Such a pair is called a cork. In this paper, we construct infinitely many knotted imbeddings of corks in 4-manifolds such that they induce infinitely many different exotic smooth structures. We also show that we can imbed an arbitrary finite num- ber of corks disjointly into 4-manifolds, so that the corresponding involutions on the boundary of the contractible 4-manifolds give mutually different exotic structures. Furthermore, we construct similar examples for plugs. 1. Introduction In [1] the first author proved that E(2)#CP2 changes its diffeomorphism type if we remove an imbedded copy of a Mazur manifold inside and reglue it by a natural involution on its bounday. This was later generalized to E(n)#CP2 (n 2) by Bizaca-Gompf [8]. Here E(n) denotes the relatively minimal elliptic surface with no multiple fibers and with Euler characteristic 12n. Recently, the authors [6] and the first author [4] constructed many such examples for other 4-manifolds. The following general theorem was first proved independently by Matveyev [16],
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/598/0206742.html","timestamp":"2014-04-19T00:42:15Z","content_type":null,"content_length":"8416","record_id":"<urn:uuid:6f47d602-3d26-4a9d-bbd4-d567fafedab0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Re: Using (s)printf() Actually, this isn't quite the whole story. Most everyone follows the IEEE convention of "round towards nearest or even." Examples (rounding all of these to the one's place): 2.51 becomes 3 (this is the 'nearest' rule, which always comes first) 2.49 becomes 2 (again, 'nearest') However, what happens if you have 2.50 ? Which way do you round it... 'tis no nearer to 2 than to 3. The IEEE standard says if there is a tie, round to the even number. 2.50 becomes 2 3.50 becomes 4 You have to pick up or down... this method is consistent and thus tends to make your errors (statistically) smaller.
{"url":"http://www.perlmonks.org/index.pl?node_id=109363","timestamp":"2014-04-21T01:49:41Z","content_type":null,"content_length":"20983","record_id":"<urn:uuid:29e973d5-99b4-4fb8-aa12-9fc7bb9316f4>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
problem solving with linear functions November 7th 2007, 05:35 PM #1 Nov 2007 problem solving with linear functions Allied Airlines charges $90 for a ticket to fly between two cities 260 mi apart and $150 for a ticket to fly between to cities 500 mi apart. At this rate, what would it cost for a trip between two cities 1000 mi apart? use coordinates to fit a line through the two relations. Let $x$ be how many miles apart the cities are Let $y$ be the cost of the trip. then for the first trip, we have $(x,y) = (260,90)$ and for the second trip, we have $(x,y) = (500, 150)$ Now let $(x_1,y_1) = (260, 90)$ and $(x_2,y_2) = (500,150)$ then the slope of the line connecting these two points is given by: $m = \frac {y_2 - y_1}{x_2 - x_1}$ now, by the point slope form, we have the equation of the line given by: $y - y_1 = m(x - x_1)$ plug in the values when you find $m$ and solve for $y$. after that, you have the relationship. to find how much a trip between cities 1000 miles apart costs, just plug in $x = 1000$ into the equation of the line and solve for $y$ try it thanks for the help but how do i find the cost of the trip between the two cities 1000 mi apart? November 7th 2007, 05:47 PM #2 November 7th 2007, 06:12 PM #3 Nov 2007 November 7th 2007, 06:28 PM #4
{"url":"http://mathhelpforum.com/pre-calculus/22248-problem-solving-linear-functions.html","timestamp":"2014-04-17T05:43:10Z","content_type":null,"content_length":"43401","record_id":"<urn:uuid:30c4f028-2ec2-400f-b6e4-c8b75f4c880d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
A Certain Parallel Circuit Coornsists Of Onrely ... | Chegg.com A certain parallel circuit coornsists of onrely 1/2 W resistors each having the same resistance. The total resistance is 1 k ohm and the total current is 50mA. If each resistor is operating at one-half its maximum power level, determine the following: (a) the number of resistors (b) the value of each resistor (c) the current in each branch (d) the applied voltage Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/certain-parallel-circuit-coornsists-onrely-1-2-w-resistors-resistance-total-resistance-1-k-q880084","timestamp":"2014-04-21T00:57:26Z","content_type":null,"content_length":"21004","record_id":"<urn:uuid:9f7633f9-36d1-4b6a-980c-0241662393cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Introduction2. Measuring Health Inequality3. Key Attributes of Health Inequality Measures3.1. Reference Point for Comparisons3.2. Relative versus Absolute Inequality3.3. Ordinal versus Nominal Social Groups3.4. Explicit Value Judgments3.5. Key Inequality Measures and Their Attributes4. Selecting Health Inequality Measures for Environmental Justice Analyses4.1. Selected Decomposable Inequality Measures4.1.1. Variance4.1.2. Measures of Entropy: Theil Index and Mean Log Deviation4.1.3. Atkinson Index4.1.4. Concentration Index4.2. Selection and Application of Inequality Indicators for Environmental Justice Analyses5. Summary and ConclusionsAcknowledgmentsConflicts of InterestDisclaimerReferences ijerph International Journal of Environmental Research and Public Health Int. J. Environ. Res. Public Health International journal of environmental research and public health 1660-4601 MDPI 10.3390/ ijerph10094039 ijerph-10-04039 Review Using Inequality Measures to Incorporate Environmental Justice into Regulatory Analyses Harper Sam 1 † * Ruder Eric 2 Roman Henry A. 2 Geggel Amelia 2 Nweke Onyemaechi 3 Payne-Sturges Devon 4 Levy Jonathan I. 5 † Department of Epidemiology, Biostatistics & Occupational Health, McGill University, Montreal, QC H3A 1A2, Canada Industrial Economics, Inc., Cambridge, MA 02140, USA; E-Mails: eruder@indecon.com (E.R.); hroman@indecon.com (H.A.R.); ageggel@indecon.com (A.G.) Office of Environmental Justice, US Environmental Protection Agency, Washington, DC 20460, USA; E-Mail: onyemaechi.nweke@hhs.gov National Center for Environmental Research, US Environmental Protection Agency, Washington, DC 20460, USA; E-Mail: payne-sturges.devon@epa.gov Department of Environmental Health, Boston University School of Public Health, Boston, MA 02118, USA; E-Mail: jonlevy@bu.edu These authors contributed equally to this work. Author to whom correspondence should be addressed; E-Mail: sam.harper@mcgill.ca; Tel.: +1-514-398-2856; Fax: +1-514-398-4503. 30 08 2013 09 2013 10 9 4039 4059 06 06 2013 01 08 2013 19 08 2013 © 2013 by the authors; licensee MDPI, Basel, Switzerland. 2013 This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). Formally evaluating how specific policy measures influence environmental justice is challenging, especially in the context of regulatory analyses in which quantitative comparisons are the norm. However, there is a large literature on developing and applying quantitative measures of health inequality in other settings, and these measures may be applicable to environmental regulatory analyses. In this paper, we provide information to assist policy decision makers in determining the viability of using measures of health inequality in the context of environmental regulatory analyses. We conclude that quantification of the distribution of inequalities in health outcomes across social groups of concern, considering both within-group and between-group comparisons, would be consistent with both the structure of regulatory analysis and the core definition of environmental justice. Appropriate application of inequality indicators requires thorough characterization of the baseline distribution of exposures and risks, leveraging data generally available within regulatory analyses. Multiple inequality indicators may be applicable to regulatory analyses, and the choice among indicators should be based on explicit value judgments regarding the dimensions of environmental justice of greatest interest. regulatory analysis health inequalities environmental justice Regulatory analyses, which focus on quantifying the health and environmental benefits of alternative policy measures, are required for major environmental regulations in the United States and elsewhere. While regulatory analyses often include some discussion about environmental justice implications, they rarely engage in formal quantitative analyses to compare how alternative policy measures could differentially influence environmental justice. Whether it is viable to conduct such analyses depends on a number of factors, including whether “justice” is something that can be analyzed or quantified. As discussed elsewhere [1], it is important to have clarity about core concepts and language to answer this question, in part because there are a number of similar terms used in the context of studying inequalities in health or environmental exposures (e.g., difference, disparity, inequality, inequity, disproportionality, justice). The key conceptual distinction for our purpose is between the concept of inequality and that of inequity. Inequality is a relative (i.e., relational) concept that contains both qualitative and quantitative elements [2]. Inequality plays a central role in the context of philosophical discussions of justice, but primarily as a qualitative concept that involves comparisons between a group of different objects, persons, processes or circumstances (e.g., the opportunity for well-being, equality before the law). Such comparisons may also be quantitative, in which case the concept of inequality may refer specifically to the measurement of differences in the distribution of goods such as income or wealth [3]. In contrast to the concept of inequality, the term “inequity” refers specifically to a subset of measured inequalities that are judged to be unfair or unjust. Judgments concerning inequity rely on social, political and ethical discourse about what a society believes is unfair, and are thus considerably more difficult to quantify [4,5]. Determining whether, or how much of, observed inequalities are inequitable requires consideration of important issues such as whether the inequality is avoidable, unfair, or remediable [6]. The quantification of inequality in health or exposure to environmental hazards or benefits is therefore necessary, but not sufficient, for determining whether or not such a distribution is indeed inequitable. The implication is that quantitative metrics can be used to measure inequality in health outcomes, paralleling the structure of the regulatory analyses, but that a determination of inequity or injustice would be beyond the scope of such analyses. In other words, the tools of regulatory analysis are not well suited for determining which inequalities are unjust and unfair, or whether the processes that led to the status quo situation occurred fairly and reasonably. Nevertheless, given the core definition of environmental justice used by the federal government in the United States [7], which identifies minority and low-income groups as the target populations, addressing environmental justice within regulatory analyses first requires an understanding of inequality in risks between defined population groups. For some applications, it may also be valuable to consider inequality in regulatory costs (e.g., if the cost of the regulation is passed through to consumers in a manner that disproportionately affects certain population groups), but we focus herein on health related to environmental exposures. The question is therefore whether between-group inequality in environmental health risk could be reasonably characterized within regulatory analyses and, if so, what the most logical approaches are for doing so. In this paper, we provide an overview intended to help environmental regulatory analysts understand how health inequality can be measured and how inequality measures can be applied in a new context. We first review the literature on income and health inequality indicators to determine the viability of quantitative measures of environmental health inequality within regulatory analyses. Given these insights, we propose four fundamental attributes of health inequality measures that should be explicitly evaluated before selecting an indicator. We then focus on health inequality measures that provide between-group comparisons consistent with environmental justice concepts, and we conclude by providing a logical approach by which policy decision makers could select among candidate indicators. We are primarily concerned with characterizing the degree of inequality across social groups in defined health outcomes and how that inequality changes as a function of regulatory measures targeting environmental exposures. While this has not been done to date within regulatory analyses, similar questions have been addressed in the realm of income inequality through the development of numerous indicators with various attributes over the last century [3,8]. Indicators such as the Gini coefficient, Theil’s index, and the Atkinson index have been regularly applied, with rigorous debate about the strengths and weaknesses of alternate measures and their interpretations. Each of these indicators employs a slightly different construct, and decision-makers may prefer certain indicators over others, but most of them adhere to basic established principles that facilitate their interpretation. Income is different from health in some fundamental ways, raising the question of whether and how the income inequality indicator literature can be leveraged to inform understanding of health inequality. As has been argued elsewhere [9], mortality or disease risk is strictly bounded between 0 and 1, while income is effectively unbounded (and may even be negative); income can be directly transferred between individuals to remedy existing inequality, while risk cannot be transferred in this manner; and health risk is a multi-dimensional construct with a complex time component. More fundamentally, income is a “good” and health risk is a “bad,” so quantitative indicators of the distribution of income will be interpreted differently from the same indicators as applied to health risk. Moreover, dichotomous health states may be defined in either “positive” (absence of disease) or “negative” (presence of disease) terms. This has important consequences for the comparative analysis of health inequalities [10,11,12,13]. Although these are important observations, the differences are not as stark as they may appear [14]. For example, health policy choices fundamentally deal with distributions and tradeoffs of health risk factors, so risk can be redistributed across the population even if it cannot literally be handed from one person to another. Some attributes of health inequality also make it less challenging to characterize than income inequality [15]. For example, there are fewer unit conversion issues. In addition, mortality is easier to define than income across numerous countries, although measurement issues can be challenging for morbidity outcomes and risk factors. Health measures can also be characterized in a variety of ways, including as “goods” (e.g., life expectancy) or “bads” (e.g., mortality risk), though consistency in how health states are defined for bounded health variables is important for comparative purposes. Given this, health inequality has been characterized in the peer-reviewed literature and in policy analyses for decades, and there are numerous examples of quantitative metrics of inequality being applied to health outcomes. Relatively simple summary metrics have been used to characterize health inequality, comparing levels of health across different pre-defined groups [15]. The Gini coefficient and Atkinson index have been used to characterize health inequality between countries [15,16], as well as to evaluate changes in inequality resulting from environmental policy measures [17,18,19]. Numerous publications have applied the Concentration Index [20,21] to characterize health inequality [22,23]. Thus, the question is not whether health inequality can be meaningfully characterized, but rather how an indicator of health inequality should be constructed in the context of regulatory analyses. First, because the goal of these measures would be to address environmental justice concerns, they should be able to provide comparisons between socioeconomic or racial/ethnic groups of concern. However, this does not necessarily imply that only between-group comparisons are germane. For example, if a pollutant displays significant spatial variability with local “hot spots,” it could be important to target the high-risk individuals within a minority or low-income population to improve environmental justice, rather than reducing risks uniformly across the minority or low-income population, even if the between-group differences are reduced identically. More generally, understanding whether differences in risk are more strongly driven by geography, demographics, or other factors (e.g., behaviors, co-exposures) is important in designing optimal interventions. Second, because regulatory analyses focus on characterizing health benefits/harms from regulatory measures, any indicators of environmental inequality should be based on the corresponding distribution of health outcomes. Or, if data are lacking to characterize the distribution of health benefits corresponding to a regulatory measure, indicators should be based on the distribution of exposures to health-relevant pollutants, determining how that distribution changes as a result of the regulatory measure. The general point is that the outcomes used in the inequality indicator should aim to be consistent with those outcomes used to characterize aggregate benefits within the regulatory analysis [1]. As a corollary, because of the interest in environmental justice and comparisons among defined population groups, inequality analysis should take account of differences in baseline disease rates or key effect modifiers across such groups. Incorporating differences in baseline disease rates is important not only for appropriate application of inequality indicators, but also for identifying high-risk populations. Similarly, the same incremental exposure change could have a greater effect on some individuals than others, and a number of these modifying factors could be socioeconomically or racially patterned [24]. Third, inequality measures themselves often have little meaning absent a context for interpretation, but are useful for comparative purposes. This aligns well with the structure of regulatory analyses, which involve comparing a defined set of policy options with the status quo or baseline, to determine the benefits of the regulation (often in comparison with the costs). Inequality measures will therefore be most meaningful when multiple policy options are under consideration and analyses consider the degree to which inequalities change as a result of the policy options. Finally, inequality measures will also be interpretable only when they take account of baseline inequality and are evaluated in conjunction with health benefits. To illustrate the importance of baseline values, suppose that two different low-income populations could be targeted for risk-reduction measures. The magnitude of risk reduction would be the same for both measures, but one group has elevated baseline health risks in comparison to higher-income populations, while the second group does not differ in its baseline health risks. Clearly, the option targeting the first population would be preferable from an environmental justice perspective, all else being equal, but this difference would be masked if the baseline distribution were not incorporated into the analysis. Including health benefits data is critical because, without measures of the magnitude of health benefits, inequality metrics could be used to argue for “leveling down,” in which environmental justice concerns could be met by increasing exposures among high-income or non-minority populations [25,26]. Whether within the inequality indicator itself or as a separate measure used in multi-attribute decision-making, the magnitude of health benefits must be considered at the same time as the distribution of health benefits. In summary, health inequality has been characterized in the peer-reviewed literature and in policy contexts for decades. Approaches to characterize health inequality have ranged from simple summary measures to more complex statistical formulations, but there is a strong consensus in the literature that it is appropriate to develop and implement health inequality measures. Although such measures have had limited application in the context of environmental regulatory analysis, the prior applications in the health literature suggest inequality analyses are feasible for assessments of environmental justice. While inequality measures can be described by a number of attributes, including adherence to various mathematical axioms common within the income inequality literature, we focus in this paper on four choices that we consider to be fundamental for developing interpretable measures of health inequality—reference points, scale, social group ordering, and explicit value judgments [27]. In this section, we introduce some well-established inequality indicators and consider their attributes with respect to these four choices. We note that some of these topics have been reviewed extensively in the literature [1,28,29,30,31,32,33], and we focus herein on information that would help environmental regulatory analysts understand the implications of choosing a specific inequality measure. Any inequality measure reflects a comparison between a reference group and other members of the population (or, in the case of between-group comparisons, members of another population group). For example, each individual might be considered relative to the average member of the population, where the degree of inequality is an aggregation of the differences between each individual and the average. As discussed elsewhere [2,33], this is a reasonably intuitive comparison that is common to many inequality measures. However, it does not directly capture some of the philosophical constructs relevant to inequality. For example, some philosophers consider the status of the worst-off to be most relevant for considering the degree of inequality [34]. One could also consider each individual relative to the best-off person or social group in society [35,36,37]. This has some theoretical appeal, as it reflects the idea that the best-off person or social group is at an attainable level that others could achieve with improvements to the physical and social environment [1,33]. However, there are potential issues, as attaining the well-being of the best-off person or group may not be a realistic goal. More practically, it may be difficult to characterize or quantify the level of health risk for the best-off person or group in a regulatory analysis [1], and there could be statistical instability if this group is relatively small in size [33]. A variant of this comparison would consider the best-off person whose condition is not anomalous, which may provide a more attainable goal but can be hard to define and quantify [2]. A third formulation involves comparing health risks of an individual or group to all those who are better off, rather than just the single best-off group or person. This provides a greater characterization of the full range of health risks across the population in relation to one another, and is less dependent on the experience of the best-off individual or group [1,33]. This may be appealing, since it reflects the logical idea that the number of people who are better or worse off than an individual should matter. However, it can only be incorporated within a subset of statistical indicators, because it requires a number of pairwise comparisons to be calculated. While these reference points are the most common by far, multiple variants could be considered. For example, various points along the distribution could be selected rather than the average (e.g., the median), although this is rarely done because of the challenges in constructing a single equation that could be clearly presented, as would be needed for interpretable inequality measures. For health risks (as opposed to positive health states), it could be argued that comparison with the worst-off or worst-off person/group whose condition is not anomalous would have value, but it is more typical to consider inequality in the context of the positive steps that could be taken to move individuals to a preferred state. Regardless, it is important to recognize that each statistical formulation has an implicit or explicit reference group defined, and that the choice of reference group needs to be consistent with the priorities and beliefs of decision makers. It is also important to recognize that this choice has some significant implications. Consider a simple example, in which there are 4 people in the world, with initial health status of 10, 8, 4, and 2, respectively (on a scale from 1 to 10, where 10 is perfect health). Suppose that a policy measure would lead the distribution of health status to change to be 9, 9, 4, and 2—effectively, a one-unit transfer from the healthiest to the second-healthiest. If each individual is compared to the best off, the situation is unequivocally better with respect to inequality—the gaps have changed from (2, 6, 8) to (0, 5, 7), so each person is closer to the ideal. However, if each individual is compared to the average, it is no longer the case that the situation is unequivocally better with respect to inequality—if all differences were equally weighted, inequality would be unchanged. If the individual with health status of 10 is considered “anomalous”, then the policy measure would increase inequality by widening the gap between the two worst off (4, 2) and the second-best-off individual (9). The choice of reference group and form of statistical comparison should be consistent with how decision makers would perceive alternative scenarios. Another one of the fundamental questions for any inequality measure is whether it is capturing relative or absolute comparisons among the population [38]. Some measures are based on differences between groups or individuals and the reference point, while others are based on ratios or are constructed in a manner that is scale invariant. In other words, if health risks for all members of the population increased by a factor of two, measures based on absolute inequality would change, while measures based on relative inequality would not. Similarly, if health risks for all members of the population increased by an additive constant, measures based on absolute inequality would not change, while measures based on relative inequality would change. Further complicating this issue is the fact that for all-or-none health states (e.g., presence or absence of disease) the magnitude of relative inequality will depend on whether one considers inequality in the presence or absence of disease [10,11,12,13,39,40]. For this reason, in comparing the magnitude of relative inequality between two counterfactual situations, decision-makers should be consistent in how the health state is defined. Whether relative or absolute inequality measures are more appropriate for health inequality in the context of regulatory analysis is not immediately obvious. On the one hand, environmental regulatory analyses typically apply results from epidemiological studies that generally calculate and report uncertainty on a relative scale. It is appealing to have an inequality indicator be insensitive to these relative uncertainties [1]. In addition, if the inequality indicators are applied to environmental exposures, it is beneficial for the results not to depend on whether exposures are reported in parts per billion or parts per million. However, the dominant use of relative risks in the epidemiological literature may obscure important differences in baseline risk across different contexts, although the historical justifications for the use of relative risks (e.g., transportability across studies or environmental contexts) may also apply in the context of absolute risk differences [41]. Measures of absolute inequality are expressed in the same units as the health outcome or exposure under consideration, which can facilitate closer links between inequality and average health. Given concern about the amount of societal resources required to remedy an existing inequality, the absolute difference in health risk may be an important consideration. More fundamentally, measures of relative inequality and absolute inequality may sometimes produce conflicting findings with respect to how health inequalities change after a policy or intervention [27]. When making this decision, it is important to recognize that the inequality measure in a regulatory analysis is not being used in a vacuum, and does not need to both capture environmental justice and overall health issues. In other words, a situation in which health status in a 4-person world changed from (10, 10, 8, 8) to (5, 5, 4, 4) is a much worse situation all things considered, even if relative health inequality had not changed. An absolute inequality measure may most appropriately reflect the priorities and perspectives of decision makers, but it should not be selected solely because the amount of health risk matters as a separate decision parameter. However, some decision makers might determine that (10, 10, 8, 8) is a more unequal situation independent of the risk level, because more societal resources are required to attain equality (transferring two “units” rather than one “unit” of health). Others focusing only on relative inequality would be indifferent with respect to these two choices, and those more concerned with inequality among those below a certain level of baseline health might find (5, 5, 4, 4) to be a less desirable situation. As above, the choice of the scale of the inequality measure has important implications for evidence and policy on health inequalities. Another potential criterion for choosing a measure of health inequality is the type of the social groups under consideration [29]. Irrespective of their health or exposure status, some social groups have an inherent ordering (“ordinal” groups). For example, there is a clear ordering of social groups defined by income or education, indicators often used to measure socioeconomic position. On the other hand, nominal social groups such as race/ethnicity, or geographic areas, do not have any inherent ordering, though they may obviously be ranked by health or exposure status. Investigating ordinal social groups allows for the quantification of health gradients, meaning situations where measures of health or exposure status either increase or decrease with increasing social group status. Distinguishing ordinal from nominal social groups is useful because certain inequality measures are able to reflect either positive or negative social gradients. In some cases one could, for example, create an ordinal-type measure using nominal characteristics across geographic units. For example, by ordering neighborhoods or census tracts by the proportion of minority population, it becomes possible to utilize measures of inequality designed for ordinal comparisons. However, it should be noted that doing so makes an important assumption that the ranking of areas by proportion of minority population is unambiguously associated with increasing disadvantage. Such an assumption may not be tenable if, for example, there are well-off areas with large proportions of minority populations. In addition, assigning the same value to all residents of the neighborhood may mask important within-neighborhood patterns for analyses with geographically resolved exposure information. Such assumptions could be tested or overcome in cases where individual-level data are available on both exposures and social status. But keeping such assumptions and data limitations in mind in the context of group-level data is crucial for a thorough and detailed analysis of inequality. Any inequality measure involves an implicit or explicit weighting scheme that considers transfers/changes in some parts of the distribution more or less significant than transfers/changes in other parts of the distribution. Even those measures without explicit weights involve an implicit decision about weights (i.e., that all populations should be considered identically, rather than considering high-risk or low-risk individuals differently). Because such decisions are inevitable in the context of measuring inequality [2,28], the choice of a specific inequality measure may be more readily made in the context of explicit value judgments about the significance of changes across different parts of health or exposure distributions. One way of addressing this concern is to use multiple inequality measures deemed suitable, and to determine if the policy choices are sensitive to the measure selected. Some inequality measures have an explicit weighting parameter, where the value of this parameter influences the relative weights across the distribution. Typically, these parameters can be considered as reflecting the degree of societal aversion toward inequality, or more formally, the amount of weight placed on differences at various points in the distribution [33]. The advantage of these measures is that preferences can be explicitly and quantitatively expressed, and policy choices can be evaluated with respect to the weighting parameter. Even if all decision makers considering environmental justice have some degree of concern about inequality, there may not be consensus regarding how much more weight to place on a change in health risk at the 95th percentile of health risk relative to the median. In general, it is important for any analyst using inequality measures to carefully describe how the measure treats transfers in different parts of the distribution. Numerous inequality measures have been developed and applied to characterize inequality in health, income, or other attributes. Harper and Lynch [33] listed and described the attributes of 22 inequality measures, and Levy et al. [1] identified 19 inequality measures in a literature search and focused on five that were most commonly used or discussed. In Table 1 below, we present a modified version of a table generated by Harper and Lynch [33], focusing on a subset of 20 measures that include some dimension of social group inequality. We characterize these measures by their reference point for comparisons, whether they reflect absolute or relative inequality, whether they have an explicit parameter for inequality aversion, and whether they involve social groups that are ordered (i.e., ordinal) vs. unordered (i.e., nominal). Description of all of these candidate inequality measures is beyond the scope of this paper, but in the following section of the paper we define a subset of them (in bold text) that have differing interpretations and offer both within-group and between-group variability, as discussed below. ijerph-10-04039-t001_Table 1 Candidate inequality measures and their key attributes. Derived from Harper and Lynch [33]. Inequality measure Reference group Absolute or relative inequality Explicit inequality aversion parameter Ordered social groups Absolute Difference Best off Absolute No Yes Relative Difference Best off Relative No Yes Regression-Based Relative Effect Best off Relative No Yes Regression-Based Absolute Effect Best off Absolute No Yes Slope Index of Inequality Average Absolute No Yes Relative Index of Inequality Average Relative No Yes Index of Disparity Best off Relative No No Population Attributable Risk Best off Absolute No No Population Attributable Risk% Best off Relative No No Index of Dissimilarity Average Absolute No No Index of Dissimilarity% Average Relative No No Relative Concentration Index Average Relative Yes Yes Absolute Concentration Index Average Absolute Yes Yes Between-Group Variance Average Absolute No No Squared Coefficient of Variation Average Relative No No Atkinson Index Average Relative Yes No Gini Coefficient Average/All those better off Relative No No Theil Index Average Relative No No Mean Log Deviation Average Relative No No Variance of Logarithms Average Relative No No As discussed above, between-group comparisons are fundamental to being able to interpret measures of health inequality in the context of environmental justice. Such comparisons can be conducted using straightforward comparisons of distributions between population groups, both before and after a potential policy change. This could involve simple statistical comparisons of mean levels of pollutants or health outcomes, or the fraction of the population above a certain threshold of exposure or risk (e.g., exceeding the 95th percentile of a distribution). While these simple comparisons have the benefit of generally being more transparent and familiar to analysts and policy makers, they have some serious limitations that should be recognized and considered in the context of regulatory analysis. Pairwise comparisons are of diminishing utility as the number of groups, outcomes, or comparisons increases, particularly in the context of evaluating counterfactual exposure scenarios. With a large number of pairwise comparisons, the information may be difficult to present in a straightforward manner, and decision makers may not be able to readily answer the overarching question about whether or not a policy decision will affect broadly defined health inequalities. And in some cases policy decisions may identify social inequalities in health, broadly defined, as the outcome of interest. For example, US policy targets for health inequalities are framed in overall terms (i.e., reducing racial inequalities in health) rather than in terms of specific social group comparisons (e.g., African Americans) [42,43]. Moreover, simple summary metrics may also lead analysts to rely on arbitrary classifications of the population into a small number of groups of interest (e.g., >50% non-Hispanic black, >50% minority) to classify units into exposure categories, which may be unlikely to capture meaningful differences in risk across such thresholds. Finally, the use of simple summary metrics rather than measures that account for the full range of social group distributions may lead to very different conclusions about the magnitude of baseline inequalities, trends over time, or the potential impact of policy changes on inequalities [44]. We therefore focus on quantitative measures of inequality that can provide insight about between-group differences while also characterizing overall (or within-group) inequality. Because overall health inequality and social inequalities in health may measure different aspects of distributions of health [45,46], it is useful to explore measures that may quantify each of these components. In particular, we focus on measures of inequality that are additively decomposable, defined as those that can be expressed as the sum of: (1) the inequality between groups; and (2) a weighted sum of inequality within groups [28,30,47]. The main benefit of decomposing inequality into constituent parts is that it can shed light on whether most of the health inequality in a population may be explained by differences in health across social groups [46,48]. This can help to contextualize between-group inequality and can potentially direct analysts toward the groups that have the greatest inequality in exposures or risks. It may also reveal different determinants of the overall distribution of health vs. social group differences in health, including aspects of sub-group susceptibility that are important for risk assessment [1]. Moreover, it may be possible that changes over time in policy could affect between-group inequality and have very little impact on overall inequality, or vice versa. The primary advantage of using additively decomposable inequality measures is that it allows one to determine not just whether between-group inequality is increasing, but whether the share of total inequality that is due to inequality between groups is increasing or decreasing. In any case, both dimensions of inequality should be kept in mind in the context of any analysis of inequality change. Below we describe in detail some selected measures of inequality that may be used to measure both overall and between-group inequality. More exhaustive reviews and technical details of other measures can be found elsewhere [28,30,49]. Multiple inequality measures in Table 1 are derived from the Variance. For example, one can take the logarithm of the health/risk measure, in which case it is called the Variance of Logarithms (VarLog), or one can normalize the health/risk measure by the mean, in which case it is called the squared Coefficient of Variation (CV^2). The Variance is also widely recognized and easy to communicate to decision makers and others familiar with basic statistics. We therefore briefly discuss key aspects of the Variance. The generic formula for the total variance of a distribution is: where y[i] is a measure of health/exposure status for individual i, y is the mean health/exposure of the population, and n is the number of individuals in the population. The Variance is therefore a measure of absolute inequality with the average member of the population as the reference point for comparisons. The Variance can also be easily decomposed into between-group and within-group components. For a simple two-group decomposition (e.g., for rich and poor), the total variance can be written as a function of two parts. The between-group part is calculated by assigning rich and poor individuals the average health of their respective groups, and taking the variance of that distribution of groups (this is essentially equivalent to what the variance would be were there no inequality within social groups); and the within-group part is calculated by calculating the variance separately for rich and poor and taking a weighted average of those two variances, with the weights equal to the share of total observations in each group [28,30]: where V[B] and V[W] are, respectively, the between-group and within-group variance, y[j] is the mean health of the jth group, V[rich] and V[poor] are, respectively, the variance estimated separately among the rich and among the poor, and n [rich] and n[poor] are the numbers of rich and poor individuals in the population. The third equation simply shows that the within-group inequality component may be extended to include J groups, as most environmental justice analyses will consider more than one population group (e.g., multiple racial/ethnic groups, multiple income strata, multiple geographic areas). In the context of analyses focused only on estimating the magnitude of inequality between groups, the first bracketed term on the right-hand side of Equation (2) may be used to measure inequality between groups and is sometimes called the Between Group Variance [33]. Because it is applicable in the context of any social group comparisons, the Between Group Variance may be a useful indicator of absolute inequality for social groups that do not have any inherent ordering (e.g., across geographic units, or across racial/ethnic groups). The Variance does not have an explicit inequality aversion parameter, but it does incorporate an implicit weighting by squaring differences and therefore placing a greater weight on large differences from the average. Thus, any decision maker using this measure should be comfortable with the interpretation that a large difference for a small number of people could outweigh a small difference for a large number of people. Some of the modified forms of the Variance, such as VarLog and CV^2, share similar attributes as the Variance but function as measures of relative inequality. Both VarLog and CV^2 are also additively decomposable inequality measures, but require adjustments to the weighting scheme for the within-group inequality component [1,28,30]. Measures of general entropy may also be used in the context of measuring within- and between-group inequality in health [50]. This family of measures may be less familiar to decision makers and regulatory analysts than the family of measures derived from the Variance, but they offer some significant advantages. Generalized measures of entropy incorporate a parameter that allows for differential sensitivity of the resulting index to different parts of the health distribution [30,49]. The parameter value leads to the choice of a specific index within the family of measures of general entropy. Two common indices of inequality that are part of the class of entropy-based measures are the Theil index (T) and the Mean Log Deviation (MLD) [51]. For individual-level data, total inequality in health/exposure y measured by the Theil index can be written [52] as: where p[i] is an individual or group’s population share (which in the case of individual data will be 1/n, so that ∑ p[i]=1) and y[i]/y is the ratio of the individual or group i's health to the average health of the population. When the population of individuals is arranged into J groups, the equation is the exact sum of two parts: between-group inequality and a weighted average of within-group inequality: where T[B] is the between-group Theil index, y[i] is the average health in group j, T[W] is the total within-group Theil index, and T[j] is the inequality in health within group j. The within-group component [the second term on the right side of Equation (4)] is effectively weighted by group j’s share of total health, since p[j] × y[i]/y = y[j] (where y[j] is the share of total health in group j). The above decomposition of T[T] also makes it clear that it is possible to calculate between-group inequalities in health without having data on each individual's health status. The only data needed are the proportions of the population in each social group (p[j]) and the ratio of the group’s health to that of total population (y[i]/y). Given the structure of most regulatory analyses and the reliance on Census data that are characterized at spatially aggregate levels, this is an appealing feature. As with many other common inequality measures, the Theil index involves a comparison with the average and is a relative rather than an absolute measure (Table 1). Using the Theil index involves a choice among the various generalized measures of entropy. Thus, an explicit inequality aversion parameter is involved in the selection process, with the statistical formulation of the Theil index placing greater weight at the upper end of the distribution. However, there is no explicit inequality aversion parameter within the Theil index itself, so one cannot characterize differential sensitivity to inequality without also considering other measures (within the generalized entropy family or otherwise). While this measure has very attractive qualities, the between-group/ within-group decomposition requires continuous outcome data estimated for individuals, so it is not clear whether this can be applied for some binary health outcomes (e.g., incidence, mortality or screening). But even for non-continuous outcomes entropy indices can easily be used to calculate between-group inequality in the absence of individual-level data. For example, suppose that absent a continuous indicator of risk we wanted to measure the between-group disparity in cancer mortality rates. This could be accomplished by calculating the first term on the right side of the above equation (T[B]) using the data on each group’s proportion in the population (p[j]) and their rate of mortality relative to the overall population rate (y[i]/y)—data that may be readily available. Another entropy-based measure that is additively decomposable is the Mean Log Deviation (MLD), sometimes called Theil’s second measure. One way [52] of writing the formula for the total MLD is: where the quantities p[i] and y are defined as above for the Theil index. To decompose MLD[T] into between-group and within-group components the following formula [52] may be used: where MLD[B] is the between-group MLD, MLD[W] is the total within-group MLD, and MLD[j] is the inequality in health within group j. Again, it is straightforward to see that the total within-group component is a weighted average of the within-group inequalities, with weights equal to the population size of each social group. The main difference between T and MLD is differential sensitivity to different parts of the health distribution, with the former being more sensitive to the upper part of the health distribution, and the latter the lower part of the distribution. Additionally, T is weighted by shares of health in each social group, whereas MLD is weighted by shares of population. Thus, in the context of regulatory analyses, the T will be somewhat more influenced by groups with larger health ratios (y[i]/y), whereas MLD will be somewhat more influenced by groups with large population shares (p[j]). It should also be noted that a ‘symmetrized’ entropy index has been proposed [53] to measure between-group inequality that is effectively a weighted average of the T and MLD. The Atkinson index has been used in a number of income and health inequality applications, in part because it has many desirable features, including sub-group decomposability and an explicit inequality aversion parameter. The overall index may be written as: where y[i] represents the health status of the ith individual, n represents the number of individuals, and ε represents an inequality aversion parameter. In contrast to the class of general entropy measures above, the Atkinson index is not strictly additively decomposable. However, it may be usefully decomposed into a between-group component, a within-group component, and a residual term that is minus the product of the between and within components [1,30]. By replacing each individual’s health/exposure with the average of the value for their social group, one can use the Atkinson index to measure between-group inequality: where now y[j] represents the average health of group j. The formula for the within-group component is somewhat more complicated than for the entropy-based measures above, and is given by Cowell [49]. The explicit inequality aversion parameter is an appealing feature of the Atkinson index in the context of an environmental regulatory analysis where it is important to make transparent any assumptions about how different populations have been weighted. However, one concern that has been raised is the fact that an increase in the inequality aversion parameter places increasing weight at the bottom of the distribution, whereas one would prefer increasing weight at the top of the distribution if characterizing adverse health outcomes. This can be addressed by characterizing health as a “good” when theoretically appropriate, by applying basic transformations to the health measure (i.e., using the inverse of risk), or by working with a narrow range of inequality aversion parameters that are empirically justified and avoid more extreme interpretations. The Concentration Index (CI) has been used extensively in the health inequality literature. It involves ordering the population first based on an ordinal social grouping, and then plotting the cumulative percentage of the population against a cumulative measure of health [33]. It therefore can be displayed graphically, which could help decision makers to better understand and interpret the results. The CI can either be relative, if calculated as a percent of the total amount of health, or absolute, if calculated as the cumulative amount of health. It can also be defined with respect to a positive health status measure or an indicator of adverse health outcomes, and can be constructed for defined social groups or at an individual level. The CI has a number of statistical formulations, but a common version [20] of the relative CI is: where p[j] is the social group’s population share, μ[j] is the group’s mean health, and R[j] is the relative rank of the jth socioeconomic group, which is defined as: where p[γ] is the cumulative share of the population up to and including group j and p[j] is the share of the population in group j. R[j] essentially indicates the cumulative share of the population up to the midpoint of each group interval. The absolute Concentration Index (ACI) is obtained by multiplying the RCI by average health [33]. While the formula for the RCI above does not have an explicit parameter for inequality aversion, there is an extended version of the RCI that offers this capability [21,54]. The aversion parameter changes the weight attached to the health of different socioeconomic groups in a manner similar to the Atkinson index described above. The formula for this extended version of the RCI for grouped data is: where ν is the “aversion parameter” and the other quantities are defined as in Equation (9) above. Generally, the weight attached to the health of lower socioeconomic groups increases and the weight attached to the health of higher socioeconomic groups decreases as ν increases. For the “standard” RCI the value of the parameter (ν) is 2, which leads to respective weights of 2, 1.5, 1, 0.5, and 0 for the health of individuals at the 0th, 25th, 50th, 75th, and 100th percentile of the cumulative distribution according to socioeconomic position [21]. The health of the poorest person in the population is thus weighted by 2 and the weights decline as socioeconomic rank increases. Since this inequality aversion parameter may be adjusted, decision makers could potentially specify exactly how much weight to give to each social group in the context of a regulatory analysis. It should also be noted that the issue of sensitivity of the RCI to how binary health indicators are considered (presence vs. absence of disease remains a concern for incorporating the aversion parameter [54]. The CI offers population group decomposition, although not in a strictly additive sense [55]—it is equal to the sum of a between-group component (comparing the mean levels of health across population groups), a within-group component (a weighted sum of the population group concentration indices, where the weights are the product of the health share and the population share), and a residual term that is present if the population groups overlap in their income ranges or other social group ordering. However, it requires an explicit social group ordering, so it may not be suitable for situations in which there are not obvious rankings of social groups. Income or other measures of socioeconomic status have a strict ordering, but race/ethnicity and other demographic characteristics do not. As noted above, it may be possible to rank geographic areas with respect to nominal characteristics, though doing so involves making assumptions that should be carefully considered. We would consider as plausible candidate indicators to incorporate environmental justice into regulatory analyses any quantitative measures that adhere to basic rules for inequality indicators, and that allow for decomposition of inequality into between-group and within-group components. All of the indicators listed in Section 4.1 meet those criteria, as do others in the literature. While a number of questions and factors could be considered before arriving at the subset of candidate indicators, we consider three to be most significant: (1) Are relative or absolute measures of inequality of greater concern?; (2) Is there a desire to have an explicit inequality aversion parameter within a single selected indicator?; and (3) Are the population groups to be compared in the environmental justice analysis inherently ordered (e.g., an income gradient)? The first two of these questions are policy decisions to be made at the level of regulatory decision makers. As discussed above, there are compelling arguments to be made for both relative and absolute concepts of health inequality. Decision makers could conclude that one construct is more suitable given their understanding of environmental justice, or could determine that either concept is reasonable and evaluate the sensitivity of policy conclusions to this choice. The desire for an explicit inequality aversion parameter would be a preference that decision makers might express given the objective to minimize implicit policy decisions within the inequality indicator calculations, although this same preference could be met by using multiple alternative indicators (e.g., multiple generalized entropy measures). The third question may be influenced by the application and subset of environmental justice questions under consideration – socioeconomic status may be the more pertinent measure for some policies, while race/ethnicity may be the more pertinent measure for others. After choosing a decomposable measure of inequality and assembling the requisite health/exposure data, analysis and decomposition of inequalities is relatively straightforward. All of the equations listed above can be readily implemented in spreadsheets or statistical analysis software [56,57,58]. Measures of uncertainty exist for most inequality measures (or may be estimated using bootstrapping or other resampling techniques) [56,57,59], and should also be reported alongside point estimates. However, appropriate interpretation of point estimates may be less straightforward. In a regulatory analysis context, the crucial questions will often involve comparing inequality measures before and after proposed policy measures. In situations where only a single policy option is considered, it is challenging to determine whether the magnitude of any change in inequality (positive or negative) is important relative to other decision-relevant metrics, though the direction of changes is clearly interpretable. For regulatory analyses in which multiple options are under consideration, the changes in inequality measures can be used along with other metrics to determine policy options that best meet multi-attribute decision criteria. For example, one analysis building on an EPA case study [19] compared two emissions control strategies and showed that a multi-pollutant/risk-based approach both led to greater health benefits and more reductions in health inequality (as measured by the Atkinson index and Gini coefficient). This analysis also showed that overall inequality was dominated by differences in risk between vulnerable/susceptible individuals and the rest of the population, reinforcing an environmental justice framework. Another study examined many hypothetical emission control strategies for power plants and determined the subset of policies that were optimal with respect to both total health benefits and reductions in health inequality [17]. At times, regulatory analyses (or environmental justice analyses in other contexts) will involve examining baseline trends over time to determine whether circumstances have been improving or getting worse. While this analysis is computationally straightforward, interpreting changes over time in between-group inequality may be complicated, especially over longer periods. One potential complication is that changes in relative and absolute inequality may diverge, leading to potentially opposing conclusions about the effect of the policy on health inequalities. A second complication occurs because changes in the value of between-group inequality are a function of two quantities: changing social group proportions and changing health status among social groups. Differentiating between these two components of change may be important from an environmental justice perspective. If between-group inequality is increasing but the main reason for the observed change is that the share of the population among groups at the tails of the health distribution has increased, it simply demonstrates that the inequality increase is primarily due to the movement into and out-of different social groups and may not be the result of differential changes in health within those groups. This explanation would not necessarily imply that between-group inequality would not be a growing concern, but would emphasize that demographic patterns and other societal factors explain the trends better than changing environmental exposures. On the other hand, if we find that population shares have remained relatively constant over time (likely in the case of shorter periods of observation) but between-group inequality has increased because of changes in the health status of social groups, this implicates differential sources of changes in health status and may imply a need to address the causes of differential health change. In this paper, we have provided both theoretical and empirical arguments that measurement of health inequality is feasible in the context of environmental justice analyses conducted for evaluating regulatory policy. Health inequality has been characterized in numerous prior investigations following well-established approaches. The questions from the perspective of environmental justice relate to the context in which an inequality measure would be used, the data required for a meaningful measure of health inequality, the criteria for selecting inequality measures to apply in regulatory analyses, and the ultimate application and interpretability of the results. The regulatory analysis application implies an orientation around health outcomes and how they are distributed, both at present and after a potential policy change. In addition, conceptions of environmental justice suggest that pre-defined social groups are to be given direct consideration. These two contexts emphasize how inequality measures should be applied within regulatory analyses—with characterization of both baseline inequality and how inequality would change given a policy change, and utilizing between-group comparisons while also considering within-group inequality. To make this characterization meaningful, the health risk models must have sufficient resolution to allow for between-group and within-group variation in exposure and susceptibility. The geographic resolution would need to be consistent with both available demographic data and the anticipated spatial contrasts of the exposure—the resolution required to characterize a near-roadway environment would be different from the resolution required for regional air pollution. Ideally, data on differential baseline disease rates or effect modification by demographics relevant to environmental justice analyses would also be available. Absent information of this sort, even the most theoretically desirable inequality measure will not yield meaningful insights. That said, most regulatory analyses involve spatially resolved characterizations of exposure and/or health risk, at baseline and after proposed regulations. So, the analytical foundation is generally in place for health inequality assessments, with the need to ensure that sociodemographic information is considered wherever possible. Given sufficient information to characterize health risks at baseline and after proposed regulations, decision makers and analysts must choose among candidate inequality measures. In this paper, we do not recommend a specific inequality measure, largely because this is a policy choice. However, we do outline the questions that decision makers would need to ask and answer in order to focus on the subset of indicators best representing their values. Specifically, a decision is necessary regarding whether relative or absolute concepts of inequality are more appropriate; whether a selected indicator must have an explicit inequality aversion parameter (and, if so, the degree of aversion); and whether any environmental justice analyses would involve comparisons only among inherently ordered population groups (i.e., socioeconomic gradients). These are not simple questions to answer, and it is likely that multiple views exist on these questions. Therefore a default to a suite of inequality measures representing a range of viewpoints would seem a reasonable choice. In some situations in which multiple policy options were under consideration, a single option will emerge as preferable across all candidate inequality measures. In this case, the choice among policy options is clear, at least from an environmental justice perspective. When the choice among policy options differs across inequality measures, analysts will need to articulate the basis for this difference and in particular, lay out the concepts of inequality that will inform the choice of one policy over another. If decision makers chose a default inequality measure and perspective on inequality, this decision would clarify the implications of that choice. If no default were developed, differences across inequality measures could mean that there is no ideal measure with respect to environmental justice (in which case policy choices could be based on other criteria), or that new policy options could be developed to better reduce exposures among high-risk population groups and therefore more clearly improve environmental justice. Quantitative measures of inequality cannot represent all dimensions of environmental justice, and analysts should be clear about this point. That said, inequality measures provide important insight into how patterns of health risks are changing over time and space, and if selected and presented appropriately, can make meaningful contributions to regulatory analyses of environmental justice. Support for this work was provided by the U.S. Environmental Protection Agency (EPA) under contract EP-W-10-002 to Industrial Economics, Inc. We declare that we have no conflict of interest. The views expressed in this article are those of the authors and do not necessarily represent those of the U.S. Environmental Protection Agency. No official Agency endorsement should be inferred. Levy J.I. Chemerynski S.M. Tuchmann J.L. Incorporating concepts of inequality and inequity into health benefits analysis 2006 5 10.1186/1475-9276-5-2 Temkin L.S. Oxford University Press New York, NY, USA 1993 Sen A.K. Clarendon Press Oxford, UK 1973 Whitehead M. The concepts and principles of equity and health 1992 22 429 445 10.2190/986L-LHQ6-2VTE-YRRN Peter F. Evans T. Ethical Dimensions of Health Equity Whitehead M. Diderichsen F. Bhuiya A. Wirth M. Oxford University Press New Work, NY, USA 2001 25 33 Braveman P. Gruskin S. Defining equity in health 2003 57 254 258 10.1136/ jech.57.4.254 Clinton W.J. Executive Order 12898 of February 11, 1994: Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations 1994 59 February 16 Allison P.D. Measures of inequality 1978 43 865 880 10.2307/2094626 Cox L.A. Why income inequality indexes do not apply to health risks 2012 32 192 196 10.1111/j.1539-6924.2011.01756.x Clarke P.M. Gerdtham U.G. Johannesson M. Bingefors K. Smith L. On the measurement of relative and absolute income-related health inequality 2002 55 1923 1928 10.1016/S0277-9536(01)00321-5 Keppel K. Pamuk E. Lynch J. Carter-Pokras O. Kim I. Mays V. Pearcy J. Schoenbach V. Weissman J.S. Methodological issues in measuring health disparities 2005 141 1 16 16032956 Erreygers G. Correcting the concentration index 2009 28 504 515 10.1016/j.jhealeco.2008.02.003 Erreygers G. van Ourti T. Measuring socioeconomic inequality in health, health care and health financing by means of rank-dependent indices: a recipe for good practice 2011 30 685 694 10.1016/j.jhealeco.2011.04.004 Fann N. Roman H.A. Fulcher C.M. Gentile M.A. Hubbell B.J. Wesson K. Levy J.I. Why income inequality indexes do not apply to health risks response 2012 32 197 199 10.1111/j.1539-6924.2011.01758.x Legrand J. Inequalities in health—Some international comparisons 1987 31 182 191 10.1016/0014-2921(87)90030-4 Gakidou E.E. Murray C.J. Frenk J. Defining and measuring health inequality: An approach based on the distribution of health expectancy 2000 78 42 54 10686732 Levy J.I. Wilson A.M. Zwack L.M. Quantifying the efficiency and equity implications of power plant air pollution control strategies in the United States 2007 115 743 750 10.1289/ehp.9712 Levy J.I. Greco S.L. Melly S.J. Mukhi N. Evaluating efficiency-equality tradeoffs for mobile source control strategies in an urban area 2009 29 34 47 10.1111/j.1539-6924.2008.01119.x Fann N. Roman H.A. Fulcher C.M. Gentile M.A. Hubbell B.J. Wesson K. Levy J.I. Maximizing health benefits and minimizing inequality: Incorporating local-scale data in the design and evaluation of air quality policies 2011 31 908 922 10.1111/j.1539-6924.2011.01629.x Kakwani N. Wagstaff A. van Doorslaer E. Socioeconomic inequalities in health: Measurement, computation, and statistical inference 1997 77 87 103 10.1016/S0304-4076(96)01807-6 Wagstaff A. Inequality aversion, health inequalities and health achievement 2002 21 627 641 10.1016/S0167-6296(02)00006-112146594 Font J.C. Hernandez-Quevedo C. McGuire A. Persistence despite action? Measuring the patterns of health inequality in England (1997–2007) 2011 103 149 159 10.1016/j.healthpol.2011.07.002 Arokiasamy P. Pradhan J. Measuring wealth-based health inequality among Indian children: The importance of equity vs. efficiency 2011 26 429 440 10.1093/heapol/czq075 O'Neill M.S. Jerrett M. Kawachi I. Levy J.I. Cohen A.J. Gouveia N. Wilkinson P. Fletcher T. Cifuentes L. Schwartz J. Health, wealth, and air pollution: Advancing theory and methods 2003 111 1861 1870 10.1289/ehp.6334 Whitehead M. Dahlgren G. World Health Organization Copenhagen, Denmark 2006 Temkin L.S. Equality, priority or what? 2003 19 61 87 10.1017/S0266267103001020 Harper S. King N.B. Meersman S.C. Reichman M.E. Breen N. Lynch J. Implicit value judgements in the measurement of health inequalities 2010 88 4 29 10.1111/ j.1468-0009.2010.00587.x Sen A.K. Foster J.E. Clarendon Press Oxford, UK 1997 Wagstaff A. Paci P. van Doorslaer E. On the measurement of inequalities in health 1991 33 545 557 10.1016/0277-9536(91) 90212-U Hao L. Naiman D.Q. SAGE Los Angeles, CA, USA 2010 Firebaugh G. Empirics of world income inequality 1999 104 1597 1630 10.1086/210218 Milanovic B. Princeton University Press Princeton, NJ, USA 2005 Harper S. Lynch J. National Cancer Institute Bethesda, MD, USA 2005 Rawls J. Harvard University Press Cambridge, MA, USA 1971 Sen A. Harvard University Press Cambridge, MA, USA 1992 Pearcy J.N. Keppel K.G. A summary measure of health disparity 2002 117 273 280 10.1016/S0033-3549(04)50161-9 Erreygers G. Can a single indicator measure both attainment and shortfall inequality? 2009 28 885 893 10.1016/j.jhealeco.2009.03.005 Ravallion M. Thorbecke E. Pritchett L. Brookings Institution Press Washington, DC, USA 2004 1 38 Lasso de la Vega C. Aristondo O. Proposing indicators to measure achievement and shortfall inequality consistently 2012 31 578 583 10.1016/j.jhealeco.2012.02.006 Allanson P. Petrie D. On the choice of health inequality measure for the longitudinal analysis of income-related health inequalities 2013 22 353 365 10.1002/hec.2803 Poole C. On the origin of risk relativism 2009 21 3 9 10.1097/EDE.0b013e3181c30eba Koh H.K. A 2020 vision for healthy people 2010 362 1653 1656 10.1056/NEJMp1001601 U.S. Department of Health and Human Services U.S. Department of Health and Human Services Washington, DC, USA 2000 Harper S. Lynch J. Meersman S.C. Breen N. Davis W.W. Reichman M.E. An overview of methods for monitoring social disparities in cancer with an example using trends in lung cancer incidence by area-socioeconomic position and race-ethnicity, 1992–2004 2008 167 889 899 10.1093/aje/kwn016 Braveman P. Krieger N. Lynch J. Health inequalities and social inequalities in health 2000 78 232 234 10743295 Houweling T.A.J. Kunst A.E. Mackenbach J.P. World health report 2000: Inequality index and socioeconomic inequalities in mortality 2001 357 1671 1672 10.1016/S0140-6736(00)04829-7 Shorrocks A.F. The class of additively decomposable inequality measures 1980 48 613 626 10.2307/1913126 Asada Y. Hedemann T. A problem with the individual approach in the WHO health inequality measurement 2002 1 10.1186/1475-9276-1-2 Cowell F.A. 3rd ed. Oxford University Press Oxford, UK 2011 Theil H. North-Holland Amsterdam, Netherlands 1967 Chakravarty S.R. The variance as a subgroup decomposable measure of inequality 2001 53 79 95 10.1023/ A:1007100330501 Firebaugh G. Harvard University Press Cambridge, MA, USA 2003 Borrell L.N. Talih M. A symmetrized Theil index measure of health disparities: An example using dental caries in U.S. children and adolescents 2011 30 277 290 10.1002/sim.4114 Erreygers G. Clarke P. van Ourti T. Mirror, mirror, on the wall, who in this land is fairest of all?—Distributional sensitivity in the measurement of socioeconomic inequality of health 2012 31 257 270 10.1016/j.jhealeco.2011.10.009 Clarke P.M. Gerdtham U.G. Connelly L.B. A note on the decomposition of the health concentration index 2003 12 511 516 10.1002/hec.767 O’Donnell O. van Doorslaer E. Wagstaff A. Lindelow M. World Bank Washington, DC, USA 2007 Araar A. Duclos J.-Y. 2.2 Université Laval, PEP, CIRPÉE and World Bank Quebec, Canada 2012 National Cancer Institute Surveillance Research Program 1.1.0 National Cancer Institute Bethesda, MD, USA 2010 Biewen M. Jenkins S.P. Variance estimation for generalized entropy and atkinson inequality indices: The complex survey data case 2006 68 371 383 10.1111/j.1468-0084.2006.00166.x
{"url":"http://www.mdpi.com/1660-4601/10/9/4039/xml","timestamp":"2014-04-19T02:06:38Z","content_type":null,"content_length":"137432","record_id":"<urn:uuid:6df9804a-111a-4444-82d7-29b4663cbee1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Real Analysis Proof November 16th 2008, 10:35 AM #1 Nov 2008 Real Analysis Proof Please can someone help me, I'm hopeless with proofs and this is driving me crazy. If $lim_{x \to a} g(x) = m$ with $m eq 0$ then $lim_{x \to a} \frac{1}{g(x)} = \frac{1}{m}$ There are two cases here is the first. Suppose m > 0 Since the limit exits $\delta_1 > 0$ such that when $|x-a|< \delta_1 \implies |g(x)-m|< \frac{m}{2} \implies$ $-\frac{m}{2}< g(x)-m < \frac{m}{2} \implies \frac{m}{2} < g(x)$ you can show something similar when m <0 together this implies that Now choose a $\delta_2$ such that when $|x-a|< \delta_2 \implies |g(x)-m|< \frac{m^2 \epsilon}{2}$ let $\delta =min\{\delta_1,\delta_2 \}$ if $|x-a|< \delta$ $|\frac{1}{g(x)}-\frac{1}{m}|=\frac{|m-g(x)|}{|m\cdot g(x)|}<\frac{\delta}{m\frac{m}{2}}< \epsilon$ November 28th 2008, 07:29 PM #2
{"url":"http://mathhelpforum.com/calculus/59865-real-analysis-proof.html","timestamp":"2014-04-20T07:09:58Z","content_type":null,"content_length":"35989","record_id":"<urn:uuid:5d3b30a1-7c84-4763-8041-71aa60bdbf8a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 306-04 Tuesday, March 18, 2008 Wednesday, February 27, 2008 So...the test on Thursday covers the following topics Independent/Dependent variables Graphing situations Making rules (equations) from a table or word problem or a pair of points Naming the type of relation-direct,partial,constant Completing tables Finding the solution (intersection) of two lines using a graph, a table and the algebraic method Practice some problems! Friday, January 25, 2008
{"url":"http://math306-04.blogspot.com/","timestamp":"2014-04-21T07:04:28Z","content_type":null,"content_length":"65091","record_id":"<urn:uuid:5be41d6a-130a-4eb0-88a9-587be72b2f92>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 142 - Formal Methods in System Design , 1996 "... McMillan has recently proposed a new technique to avoid the state explosion problem in the verification of systems modelled with finite-state Petri nets. The technique requires to construct a finite initial part of the unfolding of the net. McMillan's algorithm for this task may yield initial parts ..." Cited by 180 (9 self) Add to MetaCart McMillan has recently proposed a new technique to avoid the state explosion problem in the verification of systems modelled with finite-state Petri nets. The technique requires to construct a finite initial part of the unfolding of the net. McMillan's algorithm for this task may yield initial parts that are larger than necessary (exponentially larger in the worst case). We present a refinement of the algorithm which overcomes this problem. 1 Introduction In a seminal paper [10], McMillan has proposed a new technique to avoid the state explosion problem in the verification of systems modelled with finite-state Petri nets. The technique is based on the concept of net unfolding, a well known partial order semantics of Petri nets introduced in [12], and later described in more detail in [4] under the name of branching processes. The unfolding of a net is another net, usually infinite but with a simpler structure. McMillan proposes an algorithm for the construction of a finite initial part... , 1998 "... Message sequence charts (MSC) are commonly used in designing communication systems. They allow describing the communication skeleton of a system and can be used for finding design errors. First, a specification formalism that is based on MSC graphs, combining finite message sequence charts, is p ..." Cited by 52 (9 self) Add to MetaCart Message sequence charts (MSC) are commonly used in designing communication systems. They allow describing the communication skeleton of a system and can be used for finding design errors. First, a specification formalism that is based on MSC graphs, combining finite message sequence charts, is presented. We present then an automatic validation algorithm for systems described using the message sequence charts notation. The validation problem is tightly related to a natural language-theoretic problem over semi-traces (a generalization of Mazurkiewicz traces, which represent partially ordered executions). We show that a similar and natural decision problem is undecidable. 1 , 1997 "... We show that safe timed Petri nets can be represented by special automata over the (max,+) semiring, which compute the height of heaps of pieces. This extends to the timed case the classical representation a la Mazurkievicz of the behavior of safe Petri nets by trace monoids and trace languages. Fo ..." Cited by 44 (15 self) Add to MetaCart We show that safe timed Petri nets can be represented by special automata over the (max,+) semiring, which compute the height of heaps of pieces. This extends to the timed case the classical representation a la Mazurkievicz of the behavior of safe Petri nets by trace monoids and trace languages. For a subclass including all safe Free Choice Petri nets, we obtain reduced heap realizations using structural properties of the net (covering by safe state machine components). We illustrate the heap-based modeling by the typical case of safe jobshops. For a periodic schedule, we obtain a heap-based throughput formula, which is simpler to compute than its traditional timed event graph version, particularly if one is interested in the successive evaluation of a large number of possible schedules. Keywords Timed Petri nets, automata with multiplicities, heaps of pieces, (max,+) semiring, scheduling. I. Introduction The purpose of this paper 1 is to prove the following result: Timed safe Pe... , 1997 "... A basic result concerning LTL, the propositional temporal logic of linear time, is that it is expressively complete; it is equal in expressive power to the first order theory of sequences. We present here a smooth extension of this result to the class of partial orders known as Mazurkiewicz traces. ..." Cited by 42 (5 self) Add to MetaCart A basic result concerning LTL, the propositional temporal logic of linear time, is that it is expressively complete; it is equal in expressive power to the first order theory of sequences. We present here a smooth extension of this result to the class of partial orders known as Mazurkiewicz traces. These partial orders arise in a variety of contexts in concurrency theory and they provide the conceptual basis for many of the partial order reduction methods that have been developed in connection with LTL-specifications. We show that LTrL, our linear time temporal logic, is equal in expressive power to the first order theory of traces when interpreted over (finite and) infinite traces. This result fills a prominent gap in the existing logical theory of infinite traces. LTrL also constitutes a characterisation of the so called trace consistent (robust) LTL-specifications. These are specifications expressed as LTL formulas that do not distinguish between different linearisations of the same trace and hence are amenable to partial order reduction methods. - In Proc. of MFCS'99, LNCS 1672 , 1999 "... Message sequence charts (MSC) are a graphical specification language widely used for designing communication protocols. Our starting point are two decision problems concerning the correctness and the consistency of a design based by MSC graphs. Both problems are shown to be undecidable, in gener ..." Cited by 41 (11 self) Add to MetaCart Message sequence charts (MSC) are a graphical specification language widely used for designing communication protocols. Our starting point are two decision problems concerning the correctness and the consistency of a design based by MSC graphs. Both problems are shown to be undecidable, in general. Using a natural connectivity assumption from Mazurkiewicz trace theory we show both problems to be EXPSPACE-complete for locally synchronized graphs. The results are based on new complexity results for star-connected rational trace languages. - Theoretical Computer Science , 1993 "... The main results of the present paper are the equivalence of definability by monadic second-order logic and recognizability for real trace languages, and that first-order definable, star-free, and aperiodic real trace languages form the same class of languages. This generalizes results on infinite w ..." Cited by 31 (4 self) Add to MetaCart The main results of the present paper are the equivalence of definability by monadic second-order logic and recognizability for real trace languages, and that first-order definable, star-free, and aperiodic real trace languages form the same class of languages. This generalizes results on infinite words and on finite traces to infinite traces. It closes an important gap in the different characterizations of recognizable languages of infinite traces. 1 Introduction In the late 70's, A. Mazurkiewicz introduced the notion of trace as a suitable mathematical model for concurrent systems [16] (for surveys on this topic see also [1, 6, 10, 17]). In this framework, a concurrent system is seen as a set \Sigma of atomic actions together with a fixed irreflexive and symmetric independence relation I ` \Sigma \Theta \Sigma. The relation I specifies pairs of actions which can be carried out in parallel. It generates an equivalence relation on the set of sequential observations of the system. As ... - STACS 2002, LNCS 2030 , 2002 "... Abstract. High-level Message Sequence Charts are a well-established formalism to specify scenarios of communications in telecommunication protocols. In order to deal with possibly unbounded specifications, we focus on star-connected HMSCs. We relate this subclass with recognizability and MSO-definab ..." Cited by 27 (4 self) Add to MetaCart Abstract. High-level Message Sequence Charts are a well-established formalism to specify scenarios of communications in telecommunication protocols. In order to deal with possibly unbounded specifications, we focus on star-connected HMSCs. We relate this subclass with recognizability and MSO-definability by means of a new connection with Mazurkiewicz traces. Our main result is that we can check effectively whether a star-connected HMSC is realizable by a finite system of communicating automata with possibly unbounded channels. Message Sequence Charts (MSCs) are a popular model often used for the documentation of telecommunication protocols. They profit by a standardized visual and textual presentation (ITU-T recommendation Z.120 [11]) and are related to other formalisms such as sequence diagrams of UML. An MSC gives a graphical description of communications between processes. It usually abstracts away from the values of variables and the actual contents of messages. However, this formalism can be used at a very early stage of design to detect errors in the specification - In ICALP 2002, volume 2380 of LNCS , 2002 "... Abstract. We consider three natural classes of infinite-state HMSCs: ..." - In CAV, LNCS 4144 , 2006 "... Abstract. Atomicity is an important generic specification that assures that a programmer can pretend blocks occur sequentially in any execution. We define a notion of atomicity based on causality. We model the control flow of a program with threads using a Petri net that naturally abstracts data, an ..." Cited by 22 (4 self) Add to MetaCart Abstract. Atomicity is an important generic specification that assures that a programmer can pretend blocks occur sequentially in any execution. We define a notion of atomicity based on causality. We model the control flow of a program with threads using a Petri net that naturally abstracts data, and faithfully captures the independence and interaction between threads. The causality between events in the partially ordered executions of the Petri net is used to define the notion of causal atomicity. We show that causal atomicity is a robust notion that many correct programs adopt, and show how we can effectively check causal atomicity using Petri net tools based on unfoldings, which exploit the concurrency in the net to yield automatic partial-order reduction in the state-space. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=104388","timestamp":"2014-04-23T09:36:07Z","content_type":null,"content_length":"36701","record_id":"<urn:uuid:2d340032-3e58-4b16-bc6e-b32df5989083>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US6920246 - Method for segmentation-based recognizing handwritten touching numeral strings 1. Field of the Invention The present invention relates to a method for segmentation-based recognizing handwritten touching numeral strings, and more particularly, to a method of segmenting touching numeral strings contained in handwritten touching numeral strings, and recognizing the numeral strings by use of feature information and recognized results provided by inherent structure of digits. 2. Background of the Related Art Recognition of handwritten numeral strings is one of pattern recognizing fields which have been most actively researched, because of having various application field such as zip codes recognition, check recognition, format document recognition or the like. A typical method of recognizing handwritten touching numeral strings is executed by a following process. Firstly, after the handwritten numerals are scanned, candidate segmentation points are determined. Strokes are obtained from the candidate segmentation points. After the obtained stroke are aggregated and recognized, the aggregation of the strokes with the highest recognition result value is set as the results of recognizing numeral string. It is difficult to segment the handwritten numeral strings by use of a character width used in the typical print character segmenting method, because of having the variety of writing forms and writing paraphernalia contrary to the print character. In addition, the segmented separate numerals in the touching numeral strings may exhibit a structural feature having a different stroke width due to the segmentation of the overlapped numeral string, contrary to the independent separate numerals contained in the numeral strings, so that it is difficult to normally segment the touching numeral strings based on the only recognized results. However, the touching numeral string contained in the handwritten numeral strings is a major factor of the error recognition in the recognition of the handwritten numeral string. Furthermore, in case of no having preliminary knowledge on the length of the touching numeral string, it is more difficult to recognize the touching numeral string. Accordingly, it is very difficult to segment and recognize the touching numeral string from the handwritten numeral strings. In addition, it is appeared that the recognized results are low relative to the recognized results of numeral strings consisting of only independent separate numerals. In order to overcome the above drawbacks, several methods have been proposed. According to one method, candidate segmentation points are obtained from the touching numeral string, and the strokes extracted from the segmentation points are aggregated, thereby regarding the strokes with the excellent recognized results. Meanwhile, according to another method, the touching numeral strings are not segmented, but global numeral strings are recognized. The former prior art proposes an off-line recognition system for recognizing the handwritten numeral strings contained in the touching numerals and separate numerals. The system is consisting of four major modules of pre-segmentation, digit detection, segmentation-free, and global decision. The pre-segmentation module divides the input numeral strings into independent groups of numerals. The digit detection module recognizes the numeral groups containing separate numerals. The segmentation-free module segments and recognizes the touching numeral groups containing arbitrary numerals. The global decision module integrates the results of all modules, and determines the acceptance or rejection of the results. The touching numeral strings are recognized through a next step. Potential splitting points are obtained to segment the touching numeral strings. The segmentation point is obtained from the session image, and the potential splitting points comprise a singular point, an end point, a T-joint, and a crossing point. Firstly, the singular point is searched in the session image of the touching numeral strings, and then is eliminated. Very small connecting components which are resulted from after eliminating the singular point are eliminated. After labeling the remaining connecting components, the session image is extended by a stroke width of the original touching numeral string image. The strokes obtained by the above method are aggregated, and the aggregated strokes are recognized. The aggregations of the strokes with the largest width are accepted as the recognized results. The method extracts the strokes from the touching numeral strings by use of feature segmentation points to recognize the touching numeral strings, and aggregates the strokes depending upon the recognized results. The more a length of the numeral strings is long, the more the number of the strings to be aggregated is increased. Therefore, in order to obtain the final recognized results, the more calculating amount is required. Error recognition may be happened in the aggregation of the strings depending upon the highest recognition result value among the recognized results of the aggregated strings. The above method has a drawback in that the more a length of the numeral strings is long, the more the error recognizing rate is increased. According to another prior art, a method for segmenting one character in print character strings is proposed. The method for segmenting the character by use of a character width in the print character strings is unsuitable for the handwritten forms provided by various writing paraphernalia. Accordingly, the present invention is directed to a method for segmentation-based recognizing handwritten touching numeral strings that substantially obviates one or more problems due to limitations and disadvantages of the related art. An object of the present invention is to reduce an error recognizing rate due to error segmentation in case of segmenting the numerals based on only recognized results of the prior segmentation-based recognition method. Another object of the present invention is to obtain stable recognized results regardless of a length of the numeral strings. To achieve the object and other advantages, according to one aspect of the present invention, there is provided a method for segmentation-based recognizing handwritten touching numeral strings, the method comprising the steps of: a) receiving a handwritten numeral string extracted from a pattern document; b) smoothing a curved numeral image of the handwritten numeral string, and searching connecting components in the numeral image; c) determining whether or not the numeral string is a touching numeral string; d) if it is determined that the numeral string is the touching numeral string, searching a contour of the touching numeral string image; e) searching candidate segmentation points in the contour, and segmenting sub-images; f) computing a segmentation confidence value on each segmented sub-image by use of a segmentation error function to select the sub-image with the highest segmentation confidence value as a segmented numeral image in the touching numeral string image; g) if it is determined in the step c that the numeral string is not the touching numeral string, extracting a feature to recognize the segmented numeral image; h) segmenting the numeral image selected from the touching numeral string in the highest segmenting confidence value; and i) obtaining remaining numeral string image. In the step a, samples of handwritten numeral strings extracted from a NIST SD19 database are used to obtain samples of numeral strings handwritten in various forms. In the step e, the candidate segmentation points comprise local minimum and maximum points, and Large-to-Small or Small-to-Large transition points. The the step e comprises the steps of: e-1) if a distance difference between contours of neighboring pixels is more than a critical value, selecting the pixel as the candidate segmentation point; e-2) obtaining a region in which the candidate segmentation points are existed, and selecting the local minimum and maximum points as the candidate segmentation point existed in the region; e-3) analyzing the candidate segmentation points, and removing all of candidate segmentation points damaging a portion of a stroke, among the analyzed candidate segmentation points; and e-4) segmenting the image from a left of a minimum boundary rectangle to the candidate segmentation point in the numeral string image to create sub-images. The step f comprises the steps of: f-1) defining a segmentation error function by use of structural feature information and recognized results of the digit; f-2) computing a critical value of the structural features and a rejection value on the recognized result by use of numeral image samples used in the study; f-3) computing each constructional component value of the error function on each sub-image; f-4) computing a segmentation confidence value by use of the pre-calculated critical value and recognition rejection value; f-5) computing a recognition probability value r[j ]of a sub-image l^th-segmented by the candidate segmentation point, a horizontal transition value t[l ]of a pixel on a partial region, and an aspect ratio a[l ]of the numeral image; f-6) computing three component values of the l^th-segmented sub-image on each component of segmentation error function; f-7) computing a segmentation error value of the l^th-segmented sub-image by use of the error values; and f-8) computing a segmentation confidence value of the l^th-segmented sub-image. In the step f-2, an average value of the aspect ratio of the \numeral image every numeral classes 0 to 9, an average horizontal pixel transition value, and an average recognition probability value are computed to be used as an critical value, thereby computing the segmentation confidence value of the segmented sub-image. The step f-2 comprises the steps of: f-2-1) computing a minimum boundary rectangle on the numeral image; f-2-2) computing an average value of the aspect ratio of the digit; f-2-3) computing a horizontal transition average value of the pixel; and f-2-4) computing an average recognition probability value. The the step f-2-2 comprises the steps of: f-2-2-l) computing the aspect ratio of the digits corresponding to digit classes 0 to 9 used in the study; f-2-2-2) accumulating the aspect ratio computed in the step f-2-2-1; and f-2-2-3) computing the average value of the aspect ratio on each of digit classes 0 to 9. In the step f-2-2, the average value of the aspect ratio of the digit is computed in accordance with: $T a ⁢ ( i ) = 1 N i ⁢ ∑ j = 0 N i ⁢ a ij ⁢ ⁢ i = 0 , 1 , 2 , … ⁢ , 9$ wherein, T[a](i) is an average value of an aspect ratio of a numeral image computed on a digit class i, a[ij ]is the aspect ratio of the image of the j^th sample contained in the digit class i, and N [i ]is the number of samples contained in each class. The step f-2-3 comprises the steps of: f-2-3-1) normalizing the numeral image in a 50×50 size; f-2-3-2) accumulating the horizontal transition value which is transited from the background pixel to a digit region pixel at 5 pixel intervals, i.e., 5, 10, 15, . . . , 50^th row; and f-2-3-3) computing the horizontal pixel transition average value on each digit class. In the step f-2-3, horizontal transition average value of the pixel is computed in accordance with: $T t ⁢ ( i ) = 1 N i ⁢ ⁢ ∑ j = 0 N i ⁢ t ij ⁢ ⁢ i = 0 , 1 , 2 , … ⁢ , 9$ wherein, T[t](i) is a horizontal transition average value of a pixel on a partial region computed on a digit class i, t[ij ]is the horizontal transition average value of the j^th sample contained in the digit class i, and N[i ]is the number of samples contained in each class. The step f-2-4 comprises the steps of: f-2-4-1) accumulating the recognized results every digit class relative to the independent separate numerals used in the study; and f-2-4-2) dividing the accumulated recognition result value with the number of digit classes to compute an average value. In the step f-2-4, the average recognition probability value is computed in accordance with: $T t ⁢ ( i ) = 1 N i ⁢ ⁢ ∑ j = 0 N i ⁢ t ij ⁢ ⁢ i = 0 , 1 , 2 , … ⁢ , 9$ wherein, T[t](i) is a horizontal transition average value of a pixel on a partial region computed on a digit class i, t[ij ]is the horizontal transition average value of the j^th sample contained in the digit class i, and N[i ]is the number of samples contained in each class. In the step f-6, the segmentation error value is calculated in accordance with: $err a ⁡ ( l ) = a l - T a ⁡ ( i ) max ⁢ a l err t ⁡ ( l ) = t l - T t ⁡ ( i ) max ⁢ t l err r ⁡ ( l ) = r l - T r ⁡ ( i )$ wherein, i is a recognized digit class, S is the number of segmented sub-images, l is a sub-image l^th-segmented from 1 to S, a[l ]is an aspect ratio of the numeral image, t[l ]is a horizontal transition value of the pixel relative to the partial region, r[l ]is a recognition probability value of the sub-image l^th-segmented by the candidate segmentation point, T[a](i) is an average value of an aspect ratio of a numeral image computed on a digit class i, T[t](i) is a horizontal transition average value of a pixel relative to a partial region computed on a digit class i, and T[r](i) is an average recognition probability value each computed on a digit class i. In the step f-7, the segmented error value of the l^th-segmented sub-image is calculated in accordance with: E(l)=Γ(err[a](l), err[t](l), err[r](l)), wherein Γ(a,b,c)=(a ^2 +b ^2 +c ^2) In the step f-8, the segmentation confidence value of the l^th-segmented sub-image is calculated in accordance with: R(l)=1−E(l) l=1, 2, 3, . . . , S In the step h, a leftmost digit of touching digits is selected as the sub-image with the highest confidence value among the computed segmented confidence value on each sub-image. The method further comprises a step j) of segmenting the numeral image in the touching image, and if a next numeral string image is existed, proceeding to the step c. It is to be understood that both the foregoing general description and the following detailed description of the present invention are exemplary and explanatory and are intended to provide further explanation of the invention as claimed. The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principle of the invention. In the drawings: FIG. 1 is a flow chart of a method for segmentation-based recognizing handwritten touching numeral strings according to one preferred embodiment of the present invention; FIGS. 2A to 2E show samples of handwritten numeral strings contained in touching numeral strings; FIGS. 3A to 3G show transition candidate segmentation points used for segmenting touching numeral strings; FIGS. 4A to 4C show focal minimum and maximum candidate segmentation points used for segmenting touching numeral strings; FIGS. 5A and 5D show candidate segmentation points searched in the touching numeral strings; FIGS. 6A to 6E show sub-images segmented by candidate segmentation points; FIGS. 7A to 7C show three construction components of the segmentation error function defined to segment touching numeral strings; and FIGS. 8A to 8E are views showing a process of segmenting digits with the highest confidence value in the touching numeral string by computing a segmentation confidence value from a partial error function value. A method for segmentation-based recognizing handwritten touching numeral strings according to one preferred embodiment of the present invention will now be explained with reference to the accompanying drawings. An embodiment of recognizing numeral strings will be explained according to the method of recognizing handwritten numeral strings shown in FIG. 1. In steps S1 and S2, handwritten numeral strings extracted from a pattern document prepared in various handwritten forms. Specifically, samples of handwritten numeral strings extracted from a NIST SD19 database are used to obtain samples of numeral strings handwritten in various forms. FIGS. 2A to 2E show the samples of handwritten numeral strings contained in touching numeral strings. In order to recognize the numeral strings handwritten in various forms, it uses a numeral string database obtained from the environment in which writing paraphernalia and written region are not restricted. The samples of numeral strings handwritten in various forms may be extracted from the handwritten forms of NIST SD19, and separate numerals and touching numeral string are contained in the numeral strings, respectively. In step S2, a smoothing process of image of curved digit is implemented, and connecting components are found. Specifically, a smoothing algorism is employed to the input image of numeral strings, to extract the connecting components. The extraction of the connecting components is to classify a pattern of the digits in the handwriting numeric strings. FIG. 2A shows that a numeral string has 7 connecting components, a length of the numeral string being ten. The length of the numeral string indicates the number of digits contained in the numeral string. FIGS. 2B and 2C show a sample of touching numeral string having three touching digits and two touching digits in the numeral string, the length of the numeral string being four. FIGS. 2D and 2E show the numeral string having five digits. FIGS. 2A to 2E show various images of numeral strings, wherein the forms of touching digit are various. In order to smooth the curved image happened at an inputting process by using of unrestricted writing paraphernalia and the image obtaining process, the smoothing algorism is employed. The smoothing-processed image of numeral strings prevents the selection of multiple candidate segmentation points at the process of searching the candidate segmentation point of touching numeral string. In addition, the smoothing-processed image is a factor effecting on the recognizing result of segment digit. In steps S4 and S5, it determines whether the input numeral string is a touching numeral string. If it is the touching numeral string, it searches a contour of the touching numeral string image. In other words, the candidate segmentation points for segmenting the touching numeral string are obtained on the contour of the numeral string image. The candidate segment point may be obtained from structural feature information shown in the touching numeral region on the contour of the touching numeral string. In step S6, it searches four kinds of candidate segmentation points. In other words, the candidate segmentation points for segmenting the touching numeral strings are searched. FIGS. 3A to 3G show transition candidate segmentation points used for segmenting the touching numeral string. FIG. 3A shows Large-to-Small transition points (LS transition point) or Small-to-Large transition points (SL transition point) in which a ratio of vertical differences d[1 ]and d[2 ]between upper and lower contours of the neighboring points. For instance, when a person writes 20-digits, it supposes that 2-digit and 0-digit are touched to each other such as the shapes shown in FIG. 3B. At that time, acquiring the image of the 20-digit written by the person, an aggregate of points having a minimum value y relative to the same x-axis on the upper contour may be obtained, such as the picture of FIG. 3C. Similarly, a vertical difference between points each having a minimum value y relative to the same x-axis on the upper contour and a maximum value y relative to the same x-axis on the lower contour may be obtained, as shown in FIG. 3D. As shown in FIGS. 3E and 3F, it searches the LS transition points and the SL transition points, in which its level is more than a critical value, among the ratio of d[1 ]and d[2 ]by computing the normalized vertical difference. Finally, two LS transition points and one SL transition point are found. The LS transition points are a: (x[1], y[1]) and b: (x[2], y[2]), while the SL transition point is c: (x[3], y[3]). FIGS. 4A to 4C show local minimum and maximum candidate segmentation points used for segmenting the touching numeral strings. The local minimum point is a point in that the value y is to be minimized relative to the same x-axis as shown in FIG. 4A, while the local maximum point is a point in that the value y is to be maximized relative to the same x-axis as shown in FIG. 4B. The regions in that the local point is in the obtained image are called as a valley region, while the region having the local maximum point is called as a slope region. In other words, as will be seen from FIG. 4C, the local minimum points in the valley regions are d: (x[4], y[4]) and e: (x[5], y[5]), while the local maximum points in the slope regions are f: (x[6], y[6]) and g: (x[7], y[7]). FIGS. 5A and 5D show the candidate segmentation points searched in the touching numeral strings. Firstly, it searches the local maximum points in the valley region, in which points 21, 22 and 23 are the local minimum points in FIG. 5B. In addition, it searches the segmentation point due to a distance difference between the upper and lower contours, in which a point 31 is the segmentation point due to the distance difference between the upper and lower contours in FIG. 5C. Finally, the candidate segmentation points analyzed as FIG. 5D are obtained. With reference to FIGS. 3A to 3G, 4A to 4C, and 5A to 5D, the process of computing the candidate segmentation points which may shown when digits are touched to each other will be again explained. Firstly, as shown in FIGS. 3A to 3G, if the distance difference between the upper and lower contours of neighboring pixel is more than the critical value, it is selected as the candidate segmentation Secondarily, as shown in FIGS. 4A to 4C and 5A to 5D, the region in which the candidate segmentation points may be existed is computed, and the local and minimum and maximum points existed in the region are selected as the candidate segmentation points. In step S7, sub-images are segmented by use of the candidate segmentation points. In other words, the sub-images are segmented from the numeral strings by use of the candidate segmentation points. FIGS. 6A to 6E show the sub-images segmented by the candidate segmentation points, after the candidate segmentation points are acquired from the touching numeral string image. In FIG. 6A, the local minimum and maximum points, and the SL and LS transition points are indicated. The sub-images segmented by the local minimum point are shown in FIG. 6B, while the sub-images segmented by the local maximum point are shown in FIG. 6C. The sub-images segmented by the SL transition point are shown in FIG. 6D, while the sub-images segmented by the LS transition point are shown in FIG. 6E. After all of the candidate segmentation points damaging a portion of the stroke are removed by analyzing the candidate segmentation points obtained in step S6, the images are segmented from the leftmost of the numeral string image to the separate candidate segmentation point to create the sub-images. In steps S8 and S9, a segmentation confidence value is computed from individual sub-images by use of a defined segmentation error function, and the sub-image with the highest segmentation confidence value is selected as the numeral image segmented from the touching numeral string image. The segmentation confidence value is computed from the critical value on three structural components of the segmentation error function calculated by use of the numeral images samples used in study and three constructional components of segmented sub-image. FIGS. 7A to 7C show three construction components of the segmentation error function defined to segment the touching numeral strings. FIG. 7A shows a recognition probability value of the recognition result according to the feature input, FIG. 7B shows an aspect ratio of the image, and FIG. 7C shows a transition value from a background region to a numeral region in a horizontal direction. The critical value and a recognition rejection value are computed every numeral classes 0 to 9 on each component from independent separate numeral samples used in the study. Firstly, a minimum boundary rectangle (MBR) of the numeral image is computed. Secondarily, an average value of the aspect ratio of the digit is computed. After the aspect ratio of the digits corresponding to digit classes 0 to 9 used in the study is computed and accumulated, the average value of the aspect ratio on each of digit classes 0 to 9. Its mathematically defining equation is as following: $T a ⁡ ( i ) = 1 N i ⁢ ∑ j = 0 N i ⁢ a ij ⁢ ⁢ i = 0 , 1 , 2 , … ⁢ , 9 Equation ⁢ ⁢ 1$ wherein, T[a](i) is an average value of an aspect ratio of a numeral image computed on a digit class i, a[ij ]is the aspect ratio of the image of the j[th ]sample contained in the digit class i, and N[i ]is the number of samples contained in each class. Thirdly, a horizontal transition average value of the pixel is computed. After the numeral image is normalized in a 50×50 size, and the horizontal transition value which is transited from the background pixel to a digit region pixel at 5 pixel intervals, i.e., 5, 10, 15, . . . , 50^th row is accumulated, the horizontal pixel transition average value is computed on each digit class. $T t ⁡ ( i ) = 1 N i ⁢ ⁢ ∑ j = 0 N i ⁢ t ij ⁢ ⁢ i = 0 , 1 , 2 , … ⁢ , 9 Equation ⁢ ⁢ 2$ wherein, T[t](i) is a horizontal transition average value of a pixel relative to a partial region computed on a digit class i, t[ij ]is the horizontal transition average value of the j^th sample contained in the digit class i, and N[i ]is the number of samples contained in each class. Fourthly, an average recognition probability value is computed. The recognized results every each digit class of the independent separate digits used in the study are accumulated to obtain an average value. Its mathematically defining equation is as following: $T r ⁡ ( i ) = 1 N i ⁢ ⁢ ∑ j = 0 N i ⁢ r ij ⁢ ⁢ i = 0 , 1 , 2 , … ⁢ , 9 Equation ⁢ ⁢ 3$ wherein, T[r](i) is an average recognition probability value each computed on a digit class i, r[ij ]is the recognition probability value of the j^th sample contained in the digit class i, and N[i ] is the number of samples contained in each class. FIGS. 8A to 8E are views showing a process of segmenting digits with the highest confidence value in the touching numeral string by computing the segmentation confidence value from the partial error function value. In order to select normalized sub-image of the sub-images segmented by the candidate segmentation point as shown in FIGS. 6A to 6E, the segmentation error function is defined by use of the structural feature information and recognized results of the digit, as shown in FIGS. 7A to 7C. FIG. 8A shows the candidate segmentation points of the touching numeral strings, the sub-images segmented by the segmentation points, and the segmentation error function values E(0) to E(7) on each sub-image. In addition, FIG. 8B shows the candidate segmentation points of the remaining touching numeral strings after the first digit is segmented, the sub-images segmented by the segmentation points, and the segmentation error function values E(0) to E(4) on each sub-image. FIGS. 8C to 8E are images of the digits segmented in the highest confidence value. FIG. 8C is the image of the firstly segmented digit, FIG. 8D is the image of the secondarily segmented digit, and FIG. 8E is the image of the thirdly segmented digit. It computes the critical value of the structural features and the recognized results with the rejection value by use of the numeral image samples used in the study. After computing each constructional component value of the error function on each sub-image as shown in FIGS. 8A to 8E, the segmentation confidence value is computed by use of the pre-calculated critical value and the confidence rejection value, thereby obtaining an aspect ratio a[l ]of the numeral image, a horizontal transition value t[l ]of the pixel relative to the partial region, and a recognition probability value r[l ]on the sub-image l^th-segmented by the candidate segmentation point. A segmentation error of the l^th-segmented sub-image on each component of the segmentation error function is computed as following: $err a ⁡ ( l ) = a l - T a ⁡ ( i ) max ⁢ a l | i = recognized ⁢ ⁢ class , ⁢ wherein ⁢ ⁢ l = 1 , 2 , 3 , … , ⁢ S Equation ⁢ ⁢ 4 err t ⁡ ( l ) = t l - T t ⁡ ( i ) max ⁢ t l | i = recognized ⁢ ⁢ class , wherein ⁢ ⁢ l = 1 , 2 , 3 , … , ⁢ S Equation ⁢ ⁢ 5$ err[r](l)=r [l] −T [r](i)t [l] |i=recognized class, wherein l=1, 2, 3, . . . , SEquation wherein, i is a digit class, l is the l^th-segmented sub-image, and S is the number of segmented sub-images. By use of error values of three components obtained according to the equations 4 to 6, the segmented error value on the l^th-segmented sub-image may be computed as following: E(l)=Γ(err[a](l), err[t](l), err[r](l)), wherein Γ(a,b,c)=(a ^2 +b ^2 +c ^2)Equation 7 73 The segmented confidence value of the l^th-segmented sub-image may be computed as following: R(l)=1−E(l) l=1, 2, 3, . . . , SEquation 8 In step S10, the feature is extracted to recognize the segmented numeral image. In order to recognize the segmented digit, a mesh, a horizontal transition point, a directional component of a chain code, the number of holes, an aspect ratio of the digit, distance features and the like are extracted to constitute the feature vector. In step S11, the segmented numeral image is recognized. In step S12, the numeral image selected from the touching numeral string in the highest segmentation confidence value is segmented. In other words, the leftmost digit of the touching digits is selected as the sub-image with the highest confidence value. As shown in FIGS. 8C to 8E, the sub-image having the highest confidence value among the segmentation confidence values computed on each sub-image in step S10 is selected ass the segmented results of the segmented numeral image. After the numeral image is segmented in the touching numeral strings, if a next numeral string image is existed, the process proceeds to step S4. If it is the touching numeral string, the processes S 5 to S9 are repeated to segment the digit. Specifically, if the image which is left after segmenting the digit selected from the touching numeral strings is the touching numeral string image, the processes S5 to S9 are repeated. After analyzing that it is determined whether the touching numeral string image which is left after segmenting the numeral image selected at step S9 is the separate numeral image or touching numeral string image, the process of segmenting the separate numeral image is repeated until there is no any touching numeral string. The present invention suggests the method of segmenting touching numeral strings contained in handwritten touching numeral strings, and recognizing the numeral strings by use of characteristic information and recognized results provided by inherent structure of digits. In order to improve the accuracy of the segmentation, the segmentation error function is defined, and the sub-images are segmented by use of the candidate segmentation points found from the touching numeral strings. The sub-image with the highest confidence value is selected as a final segmentation numeral image. With the method described above, the present invention employs the structural feature information of the digit and the recognized result value to segment the touching numeral string into the separate numerals and recognize the digit, and selects the segmented image with the highest confidence value as the finally segmented results by defining the segmentation error function, thereby improving a recognizing rate of the numeral strings by reducing an error recognizing rate according to the error segmentation of the typical segmentation-based recognition method. The present segmentation method segments and recognizes the separate numerals from the numeral strings, without having pre-knowledge on the length of the numeral strings, thereby no depending upon the length of the numeral strings and thus obtaining the stable recognized results. This can improve the recognizing rate of the touching numeral string which is a major factor of the error recognition in the recognition of the handwritten numeral string, so that the present invention can be employed in the application system in which the handwritten numeral string recognition is applied to the environment not restricting the handwriting condition. The present invention relates to reduce the error recognizing rate happened due to the error segmentation by normally segmenting the touching numeral string which is a major factor of the error recognition in the recognition of the handwritten numeral string. Based on the feature information (the aspect ratio of the digit and the transition value of the horizontal pixel relative to the partial region) and the recognized information, the method of segmenting and recognizing the numeral strings contained the touching numeral strings computes the segmentation error value of each sub-image segmented by the candidate segmentation points, and segments the sub-image with the highest confidence value into the numeral image. The forgoing embodiment is merely exemplary and is not to be construed as limiting the present invention. The present teachings can be readily applied to other types of apparatuses. The description of the present invention is intended to be illustrative, and not to limit the scope of the claims. Many alternatives, modifications, and variations will be apparent to those skilled in the art.
{"url":"http://www.google.com/patents/US6920246?dq=6356708","timestamp":"2014-04-18T20:35:49Z","content_type":null,"content_length":"118977","record_id":"<urn:uuid:bb2108cd-81e0-4504-96bf-c8168da49577>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Peirce's Logic Stanford Encyclopedia of Philosophy A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z Peirce's Logic Peirce's contributions to logical theory are numerous and profound. His work on relations building on ideas of De Morgan influenced Schroder, and through Schroder Peano, Russell, Lowenheim and much of contemporary logical theory. Although Frege anticipated much of Peirce's work on relations and quantification theory, and to some extent developed it to a greater extent, Frege's work remained out of the mainstream until the twentieth century. Thus it is plausible that Peirce's influence on the development of logic has been of the same order as Frege's. Further discussion of Peirce's influence can be found in Dipert (1995). In contrast to Frege's highly systematic and thoroughly developed work in logic, Peirce's work remains fragmentary and extensive, rich with profound ideas but most of them left in a rough and incomplete form. Three of the Peirce's contributions to logic that are not as well-known as others are described below: Among Peirce's other contributions to logic are: (i) quantification theory (see Peirce (1885) and Berry (1952)), (ii) propositional logic (see Berry (1952)), (iii) Boolean algebra (see Lewis (1918)), and (iv) "Peirce's Remarkable Theorem" (see Herzberger (1981) and Berry (1952)). In three unnumbered pages from his unpublished notes written before 1910, Peirce developed what amounts to a semantics for three-valued logic. This is at least ten years before Emil Post's dissertation, which is usually cited as the origin of three-valued logic. A good source of information about these three pages is Fisch and Turquette (1966), which also includes reproductions of the three pages from Peirce's notes. In his notes, Peirce experiments with three symbols representing truth values: V, L, and F. He associates V with "1" and "T", indicating truth. He associates F with "0" and "F", indicating falsehood. He associates L with "1/2" and "N", indicating perhaps an intermediate or unknown value. Peirce defines a large number of unary and binary operators on these three truth values. The semantics for the operators is indicated by truth tables. Two examples are given here. First, the bar operator (indicated here by a minus sign) is defined as follows: x V L F -x F L V Applied to truth the bar operator yields falsehood, applied to unknown it yields unknown and applied to falsehood it yields truth. The Z operator is a binary operator which Peirce defines as follows: V L F V | V L F L | L L F F | F F F Thus, the Z operator applied to a falsehood and anything else yields a falsehood. The Z operator applied to an unknown and anything but a falsehood yields an unknown. And the Z operator applied to a truth and some other value yields the other value. The bar operator and the Z operator provide the essentials of a truth-functionally complete strong Kleene semantics for three-valued logic. In addition to these two strong Kleene operators, Peirce defines several other forms of negation, conjunction, and disjunction. The notes also provide some basic properties of some of the operators, such as being symmetric and being associative. Building on ideas of De Morgan, Peirce fruitfully applied the concepts of Boolean algebra to relations. Boolean algebra is concerned with operations on general or class terms. Peirce applied the same idea to what he called "relatives" or "relative terms." While his ideas evolved continually over time on this subject, fairly definitive presentations are found in Peirce (1870) and Peirce (1883). The calculus of relatives is developed further in Tarski (1941). A history of work on the subject is Maddux (1990). Given relative terms such as "friend of" and "enemy of" (more briefly "f" and "e"), Peirce studied various operations on these terms such as the following: (union) friend of or enemy of A pair <a,b> stands in this relation if and only if if stands in one or both of the relations. In symbols "f + e". (intersection) friend of and enemy of A pair <a,b> stands in this relation if and only if if stands in both of the relations. In symbols "f . e". (relative product) friend of an enemy of A pair <a,b> stands in this relation if and only if there is a c such a is a friend of c and c is an enemy of b. In symbols "f ; e". (relative sum) friend of every enemy of A pair <a,b> stands in this relation if and only if a is the friend of every object c that is the enemy of b. In symbols "f , e" (Peirce uses a dagger rather than a comma) (complement) is not a friend of A pair <a,b> stands in this relation if and only if <a,b> does not stand in the friend-of relation. In symbols "-f" (Peirce places a bar over the relative term). (converse) is one to whom the other is friend A pair <a,b> stands in this relation if and only if b is a friend of a. In symbols "~f" (Peirce places an upwards facing semi-circle over the relative term). Peirce presented numerous theorems involving his operations on relative terms. Examples of the numerous such laws identified by Peirce are: ~(r + s) = ~r + ~s -(r ; s) = -r , -s (r . s) , t = (r , s) . (r , t) Peirce's calculus of relations has been criticized for remaining unnecessarily tied to previous work on Boolean algebra and the equational paradigm in mathematics. It has been frequently claimed that real progress in logic was only realized in the work of Frege and later work of Peirce in which the equational paradigm was dropped and the powerful expressive ability of quantification theory was Nevertheless, Peirce's calculus of relations has remained a topic of interest to this day as an alternative, algebraic approach to the logic of relations. It has been studied by Lowenheim, Tarski and others. Lowenheim's famous theorem was originally a result about the calculus of relations rather than quantification theory, as it is usually presented today. Some of the subsequent work on the calculus of relations is outlined in Maddux (1990). Following his development of quantification theory, Peirce developed a graphical system for analyzing logical reasoning that he felt was superior in analytical power to his algebraic and quantificational notations. A large portion of this material is reprinted as volume 4, book 2 of Peirce (1933) and is discussed, for example, in Roberts (1964), Roberts (1973), Zeman (1964) and Hammer (1996). This system of "existential graphs" encompassed propositional logic, first-order logic with identity, higher-order logic, and modal logic. The "alpha" portion of the system of existential graphs is concerned with propositional logic. Conjunction is indicated by juxtaposing graphs next to one another. Negation is indicated by enclosing a graph within an enclosed circle or other closed figure, which Peirce calls a "cut". Here (and occasionally in Peirce's writings) cuts will be indicated by matching parentheses. So is equivalent to "not P", and is equivalent to "if P then Q". Observe that this is the same graph as because order is irrelevant. Juxtaposition and enclosure are the only relevant logical operations. Peirce provides five elegant rules of inference that form a complete set. The rules are Insertion in Odd, Erasure in Even, Iteration, Deiteration, and Double Cut. Insertion in Odd Any graph can be added to an area enclosed within an odd number of cuts. The following table gives some examples of this rule: ((B)) (A (B (C))) (A) ((B)A) (A (B (C D))) ((B)A) In the first case from "not not B", "If A then B" is obtained. In the second case from "If A, then if B then C", "If A, then if B then both C and D" is obtained. In the third case from "not A", "If A then B" is obtained. Erasure in Even Any graph can be erased that occurs within an even number of cuts. The following table gives some examples of this rule: (A(B)) (A (B (C))) B(A) (A()) (A ( (C))) B In the first case from "if A then B", "if A then true" is obtained. In the second case from "If A, then if B then C", "if A, then not C" is obtained. In the third case from "not A and B", "B" is Iteration Any graph can be recopied to any other area that occurs within all the cuts the original occurs within. Here are some examples of Iteration: (A(B)) ( (A) (B) ) B(A) (A(AB)) ( (A) (B(A)) ) B(A)B(A) In the first case from "if A then B", "if A then both A and B" is obtained. In the second case from "If not A then B", "if not A then both B and not A" is obtained. In the third case from "B and not A", "B and not A and B and not A" is obtained. Deiteration Any graph that could have been obtained by iteration can be erased. Here are some examples of Deiteration: (A(AB)) ( (A) (B(A)) ) B(A)B(A) (A(B)) ( (A) (B) ) B(A) These are just the exact converses of the examples of Iteration. Double cut Two cuts can be put immediately around any graph, and two cuts immediately around any graph can be erased. Here are some examples of Deiteration: (A(B)) (A) ((B))(A) (((A))(B)) (((A))) B(A) From "if A then B", "if not not A, then B" is obtained. From "not A", "not not not A" is obtained. From "not not B and not A", "B and not A" is obtained. A proof of modus ponens: P (P(Q)) premises: (i) if P then Q. (ii) P P ( (Q)) Deiteration P Q Double Cut Q Erasure in Even A proof of "if A, then if B then A": (()) Double Cut (A()) Insertion in Odd (A( A )) Iteration (A( ((A)) )) Double Cut (A( (B(A)) )) Insertion in Odd A proof of "if not B then not A" from "if A then B": (A(B)) premise ( ((A)) (B)) Double Cut Finally, a proof of "if A then C" from "if A then B" and "if B then C": (A(B)) (B(C)) premises (A( B (B(C)) )) (B(C)) Iteration (A( B (B(C)) )) Erasure in Even (A( B ( (C)) )) Deiteration (A( B C )) Double Cut (A(C)) Erasure in Even The "beta" portion of Peirce's system of existential graphs is equivalent to first-order logic with identity. It does not use variables to fill the argument places of predicates. Instead, the argument places are filled by drawn lines. Two or more such argument places can be identified (analogous to filling them with the same variable) by connecting them with a drawn line. These "lines of identity" play the role of quantifiers as well as variables. The order of interpretation of the various lines of identity and cuts of a beta graph is determined by the portions of lines of identity that are enclosed within the fewest cuts. Elements enclosed by fewest cuts are interpreted before more deeply embedded elements. Rules of inference for the beta portion are generalizations of the rules for the alpha portion. They allow lines of identity to be manipulated in various ways, such as erasing portions of lines connecting loose ends, extending loose ends, and retracting loose ends. More information about the beta portion of the system of existential graphs can be found in Roberts (1973). • Berry, George D. W. (1952) "Peirce's Contributions to the Logic of Statements and Quantifiers." In P. Wiener and F. Young (Eds.) Studies in the Philosophy of Charles S. Peirce Cambridge: Harvard University Press. • Burch, Robert W. (1991) A Peircean Reduction Thesis. Texas Tech University Press. • Dipert, Randall (1995) "Peirce's Underestimated Role in the History of Logic." In Kenneth Ketner (Ed.) Peirce and Contemporary Thought. New York: Fordham University Press. • Fisch, Max and Atwell Turquette (1966) "Peirce's Triadic Logic." Transactions of the Charles S. Peirce Society 11: 71 - 85. • Hammer, Eric M. (1996) "Semantics for Existential Graphs." Journal of Philosophical Logic (to appear). • Herzberger, Hans (1981) "Peirce's Remarkable Theorem." In L. W. Sumner, J. G. Slater, and F. Wilson (Eds.) Pragmatism and Purpose: Essays Presented to Thomas A. Goudge Toronto: University of Toronto Press. • Lewis, C. I. (1918) A Survey of Symbolic Logic. Berkeley: University of California Press. • Maddux, Roger D. (1990) "The Origin of Relation Algebras in the Development and Axiomatization of the Calculus of Relations." Studia Logica 3: 421 - 55. • Peirce, Charles S. (1870) "Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic." Memoirs of the American Academy of Sciences 9: 317 - 78. Reprinted in Peirce (1933). • Peirce, Charles S. (1883) "Note B: The Logic of Relatives." In Studies in Logic by Members of the Johns Hopkins University Boston: Little Brown and Co. Reprinted in Peirce (1933). • Peirce, Charles S. (1885) "On the Algebra of Logic; A Contribution to the Philosophy of Notation." American Journal of Mathematics 7: 180 - 202. Reprinted in Peirce (1933). • Peirce, Charles S. (1933) "Collected Papers." Edited by Charles Hartshorne and Paul Weiss. Cambridge: Harvard University Press. In Edward Moore and Richard Robin (Eds.) Studies in the Philosophy of Charles S. Peirce., Amherst, University of Massachusetts Press. • Roberts, Don (1964) "The Existential Graphs and Natural Deduction." In Edward Moore and Richard Robin (Eds.) Studies in the Philosophy of Charles S. Peirce, Amherst, University of Massachusetts • Roberts, Don (1973) The Existential Graphs of Charles S. Peirce. Mouton and Co. • Tarski, Alfred (1941) "On the Calculus of Relations." Journal of Symbolic Logic 6: 73 - 89. • Zeman, J. Jay (1964) The Graphical Logic of C. S. Peirce. Ph.D. Diss., University of Chicago. Copyright © 1995, 1996 by Eric M. Hammer A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z First published: December 15, 1995 Content last modified: January 2, 1996
{"url":"http://www.science.uva.nl/~seop/archives/fall1997/entries/peirce-logic/","timestamp":"2014-04-20T23:31:17Z","content_type":null,"content_length":"18034","record_id":"<urn:uuid:3bc659a7-5fd3-4abc-ae27-2c4a8f774ec4>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about philosophy of science on Matt Dickenson What do fighter pilots, casinos, and streetlights all have in common? These three disparate topics are all the subject of statistical thinking that led to (and benefitted from) the development of modern computing. This process is described in Turing’s Cathedral by George Dyson, from which most of the quotes below are drawn. Dyson’s book focuses on Alan Turing far less than the title would suggest, in favor of John von Neumann’s work at the Institute for Advanced Studies (IAS). Von Neumann and the IAS computing team are well-known for building the foundation of the digital world, but before Turing’s Cathedral I was unaware of the deep connection with statistics. Statistical thinking first pops up in the book with Julian Bigelow’s list of fourteen “Maxims for Ideal Prognosticators” for predicting aircraft flight paths on December 2, 1941. Here is a subset (p. 7. Never estimate what may be accurately computed. 8. Never guess what may be estimated. 9. Never guess blindly. This early focus on estimation will reappear in a moment, but for now let’s focus on the aircraft prediction problem. With the advent of radar it became possible for sorties at night or in weather with poor visibility. In a dark French sky or over a foggy Belgian city it could be tough to tell who was who until, otherwise adversarial forces agreed on a system of coded signals identifying their aircraft as friend or foe. In contrast to the work of wartime cryptographers, whose job was to design codes that were as difficult to understand as possible, the goal of IFF [Identification Friend or Foe] was to develop codes that were as difficult to misunderstand as possible…. We owe the existence of high-speed digital computer to pilots who preferred to be shot down intentionally by their enemies rather than accidentally by their friends. (p. 116) In statistics this is known as the distinction between Type I and Type II errors, which we have discussed before. Pilots flying near their own lines likely figured there was a greater probability that their own forces would make a mistake than that the enemy would detect them–and going down as a result of friendly fire is no one’s idea of fun. This emergence of a cooperative norm in the midst of combat is consistent with stories from other conflicts in which the idea of fairness is used to compensate for the rapid progress of weapons technology. Chapter 10 of the book (one of my two favorites along with Chapter 9, Cyclogenesis) is entitled Monte Carlo. Statistical practitioners today use this method to simulate statistical distributions that are analytically intractable. Dyson weaves the development of Monte Carlo in with a recounting how von Neumann and his second wife Klari fell in love in the city of the same name. A full description of this method is beyond the scope of this post, but here is a useful bit: Monte Carlo originated as a form of emergency first aid, in answer to the question: What to do until the mathematician arrives? “The idea was to try out thousand of such possibilities and, at each stage, to select by chance, by means of a ‘random number’ with suitable probability, the fate or kind of event, to follow it in a line, so to speak, instead of considering all branches,” [Stan] Ulam explained. “After examining the possible histories of only a few thousand, one will have a good sample and an approximate answer to the problem.” For a more comprehensive overview of this development in the context of Bayesian statistics, check out The Theory That Would Not Die. The third and final piece of the puzzle for our post today is the well-known but not sufficiently appreciated distinction between correlation and causation. Philip Thompson, a meteorologist who joined the IAS group in 1946, learned this lesson at the age of 4 and counted it as the beginning of his “scientific education”: [H]is father, a geneticist at the University of Illinois, sent him to post a letter in a mailbox down the street. “It was dark, and the streetlights were just turning on,” he remembers. “I tried to put the letter in the slot, and it wouldn’t go in. I noticed simultaneously that there was a streetlight that was flickering in a very peculiar, rather scary, way.” He ran home and announced that he had been unable to mail the letter “because the streetlight was making funny lights.” Thompson’s father seized upon this teachable moment, walked his son back to the mailbox and “pointed out in no uncertain terms that because two unusual events occurred at the same time and at the same place it did not mean that there was any real connection between them.” Thus the four-year-old learned a lesson that many practicing scientists still have not. This is also the topic of Chapter 8 of How to Lie with Statistics and a recent graph shared by Cory Doctorow. The fact that these three lessons on statistical thinking coincided with the advent of digital computing, along with a number of other anecdotes in the book, impressed upon me the deep connection between these two fields of thought. Most contemporary Bayesian work would be impossible without computers. It is also possible that digital computing would have come about much differently without an understanding of probability and the scientific method. Micro-Institutions Everywhere: Species and Regime Types blogs this paragraph from a paper Ian Lustick: One might naively imagine that Darwin’s theory of the “origin of species” to be “only” about animals and plants, not human affairs, and therefore presume its irrelevance for politics. But what are species? The reason Darwin’s classic is entitled Origin of Species and not Origin of the Species is because his argument contradicted the essentialist belief that a specific, finite, and unchanging set of categories of kinds had been primordially established. Instead, the theory contends, “species” are analytic categories invented by observers to correspond with stabilized patterns of exhibited characteristics. They are no different in ontological status than “varieties” within them, which are always candidates for being reclassified as species. These categories are, in essence, institutionalized ways of imagining the world. They are institutionalizations of difference that, although neither primordial nor permanent, exert influence on the futures the world can take—both the world of science and the world science seeks to understand. In other words, “species” are “institutions”: crystallized boundaries among “kinds”, constructed as boundaries that interrupt fields of vast and complex patterns of variation. These institutionalized distinctions then operate with consequences beyond the arbitrariness of their location and history to shape, via rules (constraints on interactions), prospects for future kinds of change. Jay follows this up with an interesting analogy to political regime types–the “species” that political scientists study: Political regime types are the species of comparative politics. They are “analytic categories invented by observers to correspond with stabilized patterns of exhibited characteristics.” In short, they are institutionalized ways of thinking about political institutions. The patterns they describe may be real, but they are not essential. They’re not the natural contours of the moon’s surface; they’re the faces we sometimes see in them. I have no comment other than that I think Jay is right, and it reminds me of a Robert Sapolsky lecture on the dangers of categorical thinking. And yes, Sapolsky is a biologist. We’ll go right to the best part (19:40-22:05) but the whole lecture is worth watching: What is the Future of Publishing? Today’s journal publishing system is the best possible. If you limit yourself to 17th century technology, that is. Quips like these were sprinkled throughout Jason Priem’s presentation on altmetrics at Duke on Monday. Altmetrics is short for “alternative metrics,” or ways of measuring the impact of a particular author or article rather than the canonical impact factor of journals (which, it turns out, was initially resisted; Thomas Kuhn FTW). Priem is a doctoral candidate at UNC, and recently started a site called ImpactStory. According to the LSE blog: ImpactStory is a relaunched version of total-impact. It’s a free, open-source webapp we’ve built (thanks to a generous grant by the Sloan Foundation and others) to help researchers tell these data-driven stories about their broader impacts. To use ImpactStory, start by pointing it to the scholarly products you’ve made: articles from Google Scholar Profiles, software on GitHub, presentations on SlideShare, and datasets on Dryad (and we’ve got more importers on the way). Then we search over a dozen Web APIs to learn where your stuff is making an impact. Instead of the Wall Of Numbers, we categorize your impacts along two dimensions: audience (scholars or the public) and type of engagement with research (view, discuss, save, cite, and recommend). Priem’s presentation was informative and engaging. He has clearly spent a good deal of time thinking about academic publishing, and about the scientific undertaking more generally. I particularly liked how he responded to some tough audience questions about potential for gaming the system by re-iterating that we do not want a “Don Draper among the test tubes,” but for better or worse the way that we communicate our ideas makes a difference in how they are received. If you are interested in hearing more of Jason’s ideas, here is a video of a similar talk he gave at Purdue earlier this year. The altmetrics portion starts around the 25-minute mark. My Ten Favorite Posts from the Past Year As promised yesterday, here are my top ten favorite posts from the first year of YSPR. They are arranged chronologically. Addiction in The English Opium Eater Iraq Casualties and Public Opinion, 2003 Problems with [DEL:Science:DEL] Statistics Aren’t New Wednesday Nerd Fun: The Game of 99 Micro-Institutions Everywhere: Crime Bosses Do you have a favorite post? Is there something you would like to see on YSPR that you haven’t yet? Put that comments button to good use. Profile of a Conflict Statistician BALL IS 46, STOCKY, SHORT, and bearded, with glasses and reddish-brown hair, which he used to wear in a ponytail. His manner is mostly endearing geek. But he is also an evangelist, a true believer in the need to get history right, to tell the truth about suffering and death. Like all evangelists, he can be impatient with people who do not share his priorities; his difficulty suffering fools (a large group, apparently) does not always help his cause…. He first applied statistics to human rights in 1991 in El Salvador. The U.N. Commission on the Truth for El Salvador arose at an auspicious moment — the new practice of collecting comprehensive information about human rights abuses coincided with advances in computing that allowed people with ordinary personal computers to organize and use the data. Statisticians had long done work on human rights — people like William Seltzer, the former head of statistics for the United Nations, and Herb Spirer, a professor and mentor to almost everyone in the field today, had helped organizations choose the right unit of analysis, developed ways to rank countries on various indices, and figured out how to measure compliance with international treaties. But the problem of counting and classifying mass testimony was new. Ball, working for a Salvadoran human rights group, had started producing statistical summaries of the data the group had collected. The truth commission took notice and ended up using Ball’s model. One of its analyses plotted killings by time and military unit. Killings could then be compared with a list of commanders, making it possible to identify the military officers responsible for the most brutality. “El Salvador was a transformational moment, from cases, even lots of them, to mass evidence,” Ball says. “The Salvadoran commission was the first one to understand that many, many voices could speak together. After that, every commission had to do something along these lines.” That’s an excerpt Foreign Policy’s “The Body Counter,” and it’s worth reading in full, especially if you enjoyed this post. Visualization Basics: Japanese Multiplication Data visualization became very popular in 2011, as evidenced by NYT pieces like this one and the release of Nathan Yau‘s book Visualize This. It seems to me that the upper limit of the amount of information a dataviz/infographic/pick-your-term can convey is bounded by three things: the creativity of the designer, technology available to him/her, and the perceptibility of the viewer. But is there an optimal point where simplicity of design and information conveyed are both maximized? For one answer to this question, consider multiplication. In most (all?) American schools that I know of, multiplication is taught in terms of area (two terms) or volume (three terms). Harvard’s Stats 110 begins by teaching probability as area. This concept is simple enough, and is particularly handy because often what we care about in practical terms can be expressed as an area/volume: how much wallpaper do I need? how much water will fit in that bucket? But in terms of just manipulating the numbers, the area/volume interpretation can be a bit clunky–it doesn’t really save any steps, and once you have more digits than you can hold in your head, most people will reach straight for a calculator. There’s nothing wrong with that, except that there are many applications of multiplication beyond area or volume (take total cost of large order as an example). The Japanese have a different method, as the video below shows. Two characteristics readily recommend this method. First, it is a very basic visualization that, if practiced, seems like it could make multiplication of large numbers in your head simpler. Second, there is no strict interpretive paradigm imposed on the answer. The practical meaning of the answer need not be an area, or anything geometric at all. However, it is not clear how this would extend beyond three terms either. It may be that readers more experienced than myself with linear algebra or geometry will sense intuitively how this works. If you have a straightforward explanation, please share it in the comments. (h/t @brainpicker) Einstein and Reality David Duff had some thoughtful comments on my post about Stephen Jay Gould’s arguments in The Mismeasure of Man about whether IQ is a “real thing” or just the result of measurement. I will provide further illustration of what I meant in that post, and then share some thoughts from a biography of Einstein that speak to David’s last sentence. (“This is not reification, this is normal science.”) To show that creating a numeric index does not necessitate an underlying reality, suppose I created a “health-per-wealth” index to determine whether someone ate sufficiently healthy for a member of their social class. To create my index, we count the number of different types of vegetables in someone’s refrigerator and divide that by the number of walls in their home. My own number right now is an embarrassingly low 0.16. Now, has this measurement told me anything about how healthy I am relative to other members of the population? Not really. Nor would adding more data on other individuals help to do so. The measure itself is flawed, relying on two (very) imperfect proxies for real underlying characteristics. Most people would agree that health and wealth are real concept, but we learn very little about them from shopping habits and architecture. Likewise, intelligence seems to me a real enough “thing,” but I am not convinced that IQ tests are an accurate measure, nor that any one-dimensional measure would suffice. Speaking more directly to David’s point about whether reification inheres in normal science, I would concede that it does. This is the essence of inductive or causal reasoning: to take disparate facts and reason that there is some logic underlying them, and hopefully a relatively simple logic at that. But we cannot be convinced that it actually exists without a great deal of either experimentation or faith, or both. Walter Isaacson’s excellent Einstein has a helpful example from the debate between Einstein and Planck over quantum theory: For Planck, a reluctant revolutionary, the quantum was a mathematical contrivance that explained how energy was emitted and absorbed when it interacted with matter. But he did not see that it related to a physical reality that was inherent in the nature of light and the electromagnetic field itself…. Einstein, on the other hand, considered the light quantum to be a feature of reality: a perplexing, pesky, mysterious, and sometimes maddening quirk of the cosmos. For him, these quanta of energy (which in 1926 were named photons) existed even when light was moving through a vacuum. (p. 99) At stake here is exactly the same issue as in the IQ case–whether a theoretical concept (in this case, quanta) was a feature of reality or a mere incident of measurement. Modern physical theory has generally accepted the reality of quanta, but the acceptance was by no means automatic. Isaacson’s book makes clear how the scientific process can be affected by personal politics. Later on, Einstein takes Planck’s side in a debate over relativity and the principle of least action. Planck was pleased. “As long as the proponents of the principle of relativity constitute such a modest little band as is now the case,” he replied to Einstein, “it is doubly important that they agree among themselves.” (p. 141) In another instance, Isaacson describes how increasing anti-Semitism spurred Einstein into being more conscience of his Jewish identity. (Some might ascribe this to a social form of Newton’s third law.) The biography is interesting throughout, and highly recommended. Are Casualty Statistics Reliable? The question posed in the title is obviously too broad to be addressed in a single post, but the short answer is “no.”* This has been an unfortunate awakening for me, since I got into the study of political violence for the simple reason that measurement seemed straightforward. “Public opinion polls are so fuzzy,” I naively thought, “but a dead body is a dead body.” I have become aware of several problems with this view, a few of which I will share in this post. Was the death due to conflict? This one is more complex than it first seems. A bullet in the head is pretty directly attributable to conflict. But what about someone who dies from treatable illness because the road to the hospital was blocked by fighters? Health care is increasingly integrated into research about conflict. The boundary line between what is or is not conflict-related, however, remains blurry. Who is responsible for the death? I encountered this issue in my recent work on Mexico. The dataset that I relied on was one of three that counted “drug-related” murders. Since I was arguing that a certain policy had increased violence, I went with the smallest numbers to try to prevent false positives. The fact that there were three different datasets that attributed different body counts to the same cause reveals that there is still work to be done in this area. What is the counterfactual? The first two are questions of causality, whereas this one addresses policy implications. Would the person with the illness above still have died in the absence of conflict? Would violence have become much worse without X or Y happening? Definitive answers to these questions may never be possible, but trying to answer them is at the heart of scientific research on violence. These problems become even more exaggerated when looking at historical conflicts and trying to put them in context. Readers may recognize that Steven Pinker faced just that challenge in his recent book, The Better Angels of Our Nature, which argues that violence has declined over time. I am sympathetic to the basic point of “things aren’t as bad as you think,” but it turns out that there are some problems with his method. Michael Flynn points out two major issues, the first being the quality of the casualty data and the second being Pinker’s efforts to treat them as percentages of contemporary world population. One egregious error is attributing a large portion of the decrease between two consecutive Chinese censuses to the An Lushan revolt of the eighth century. I do not mean to pick on Pinker, since I have yet to read his book, but his errors do show that someone with the capacity to write an entire book on this subject and get lots of press can still make basic mistakes while raising very few critical reviews. Doing good science is hard, even with body counts. Further reading: Statistics of Deadly Quarrels, review by Brian Hayes (via Michael Flynn) *Note: Short answers being what they are, this leaves a lot to be desired. I must say that modern militaries are quite good at maintaining records of their own casualties. Most of the problems I mention here pertain primarily to non-state fighters or civilian casualties. “You Are Not So Smart” That’s the title of a new book by David McRaney. Here’s part of an excerpt from The Atlantic: You rarely watch films in a social vacuum with no input at all from critics and peers and advertisements. Your expectations are the horse, and your experience is the cart. You get this backwards all the time because you are not so smart. I would call this something like “the socialization of priors,” meaning that beliefs are informed by social group before they are informed by the (non-social) world around us. This is a topic that I am just beginning to consider, so I am far from having strong beliefs on it. So feel free to socialize my beliefs in the comments. In particular, how does socialization impact the scientific process? Does it have any bearing on the implications that Michael Nielsen discusses here of the “new era of networked science”? Nature and Politics In the last post I discussed how nature has come to be regarded as a synonym for good, and suggested that that has not always been the case. Indeed, I am indebted to William Cronon for making the same point much better.* Allow me to quote from him before I move on to the main point of this post: But the most troubling cultural baggage that accompanies the celebration of wilderness has less to do with remote rain forests and peoples than with the ways we think about ourselves—we American environmentalists who quite rightly worry about the future of the earth and the threats we pose to the natural world. Idealizing a distant wilderness too often means not idealizing the environment in which we actually live, the landscape that for better or worse we call home…. Indeed, my principal objection to wilderness is that it may teach us to be dismissive or even contemptuous of such humble places and experiences. Without our quite realizing it, wilderness tends to privilege some parts of nature at the expense of others. The trouble that Cronon mentions–figuring out where to focus our definition of nature–at first seems tangential to politics, until we remember that several of the first great modern political philosophers were greatly concerned with answering the question, “What is the state of nature?” The answer that one gives to that question is extremely consequential to everything that follows in his argument about how to best structure a society. For Thomas Hobbes, famously, life in the state of nature was “nasty, poor, brutish and short.” Thus, anyone powerful enough to protect men from such a miserable life and quick, violent death could be regarded as a legitimate ruler. Jean-Jacques Rousseau, on the other hand, regarded nature as a land of peace and plenty. His idea for society, then, was that it should be as unrestrictive (“natural”) as possible, with some accommodations made to induce social cooperation.** In this day and age, we have the ability to learn more about how nature does things and design our shoes and supermarkets accordingly. Can the same be done for human nature and society? Indeed, psychology and many of the social sciences are already attempting to answer this question. But they are doing so in ways that we would consider outmoded in other areas: we are at the point of making clogs, not barefoot running shoes; we talk with you about the social equivalent of a hydroponic system, but not organic vegetables. Getting there will be the next great challenge for the social sciences, and in my view it is going to require a paradigm shift away from unsatisfactory models that rest on excessively artificial assumptions. Nevertheless our new approaches, whatever they may eventually become, will still be simplifications of reality. Let’s not confuse them with an overly simplistic definition of nature that is exclusively good or bad. We live in a complex world, and that is enough. *And to Eric Higgins, for encouraging me to explore Cronon’s work as part of our teaching/facilitating of the spring 2011 ENGL 1304 course at the University of Houston. **Apologies to the great thinkers of ages past for doing such violence to their philosophies by summarizing them so briefly.
{"url":"http://mattdickenson.com/tag/philosophy-of-science/","timestamp":"2014-04-19T11:56:12Z","content_type":null,"content_length":"122230","record_id":"<urn:uuid:8beee24e-acfb-4b29-af17-c80e7b1b3e64>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Data design Explanations > Social Research > Design > Data design Frequency counts vs. scores | Data types | Independence of variables | Study or experiment | Repeated or independent measures | Parametric and non-parametric data | See also When designing social research, one of the biggest questions that will affect your research is what data you will collect. There is generally a return on investment -- the more effort you put into collecting data, the more chance you have of creating useful results. There is also a law of diminishing returns. 'More' does not always mean better and 'enough' to meet your goals is often the best Frequency counts vs. scores There are two basic ways of calculating with data: counting how many of a particular item you have (eg. 'How many people prefer Sudso soap'), or doing more complex analysis. Counting frequencies is used with nominal categories and is relatively limiting as it constrains you the use of the Chi-square test in analysis and consequently may limit your conclusions. With scores, you have more choices, as in the sections below. With frequency counts the choice stops here. Data type There are four basic types of data you can use: nominal, ordinal, interval and ratio. Depending on the experiment, these can be increasingly difficult to collect, but they give increasing rewards in what may be concluded. Interval and ratio data may allow for parametric analysis, whilst nominal and ordinal data limit the analysis to non-parametric forms. Independence of variables Independent variables are those that you control, whilst dependent ones are those which change as a result of this. When considering multiple variables, you are often looking to understand the relationship between these. Variable dependence depends on what you want to discover or prove. To show cause, you may have one independent variable and one or more dependent variables, with which you may seek correlation (and later infer cause). Sometimes you can measure two or more independent variables and look for relationships between these (and perhaps, later, discover a common cause). Generally speaking, the more variables you have, the more complex the analysis may be, but then you may also draw interesting conclusions. Study or experiment In a study, you look at what is there, seeking to discover from simple observation. You can look at people in different contexts and measure them in different ways, though you usually want to avoid change what you are watching and so may take a carefully non-invasive position. In an experiment, you have people, measures and treatments. If you change any of these or change sequences, you have different type of experiment. Repeated or independent measures Repeated measures are used in experiments where you apply the same treatment to the same group, for example in a before-and-after test. Independent measures are used where the same measure is applied to a range of different groups, for example where an ability test is applied to separate groups of men and women. You can use a combination of repeated and independent measures, for example where you measure men and women for intelligence before and after a brain-stimulating treatment. Parametric and non-parametric data Parametric data follows particular rules and mathematical algorithms. As a result detailed conclusions may be drawn about the data. Experiments are thus often designed to use parametric data. Research that does not create parametric data is non-parametric. Much research data is of this form and useful information can often be gained. There are very different parametric and non-parametric tests used in analysis, depending on the type of data you chose during the design. See also Types of data, Parametric vs. non-parametric tests
{"url":"http://changingminds.org/explanations/research/design/data_design.htm","timestamp":"2014-04-19T09:26:42Z","content_type":null,"content_length":"34396","record_id":"<urn:uuid:2021fb3b-7467-4247-a04f-0b5bc60da565>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Trine Trophy Guide Players: 1-3 Online Trophies: No Cheat Codes Affect Trophies: N/A Estimated Time to Platinum: 8-15 hours Enemy EXP - Just kill enemies until there are none, or they stop dropping EXP. Not every enemy drops EXP. You will normally get all of the enemy EXP without going back to the level and getting it. Campaign - Start you first run on easy mode, getting as much EXP as possible, and trying to get all the secrets (Treasure Chests). Then, when you beat the game, start over on very hard mode and collect any left over EXP. This will make it easier, as your characters will be leveled very well and you will have items that will improve strength, stamina, energy, etc. Way Out of the Trine Earn all Trophies in Trine Simple! Just earn all of the trophies in this game! Astral Introduction Complete Astral Academy This is the tutorial level. Just complete it and BAM! Trophy! Academy Master Collect all experience in Astral Academy Hallways Master Collect all experience in Academy Hallways Wolvercote Master Collect all experience in Wolvercote Catacombs Graveyard Master Collect all experience in Dragon Graveyard Caverns Master Collect all experience in Crystal Caverns Crypt Master Collect all experience in Crypt of the Damned Dungeon Master Collect all experience in Forsaken Dungeons Castle Master Collect all experience in Throne of the Lost Forest Master Collect all experience in Fangle Forest Thicket Master Collect all experience in Shadowthorn Thicket Ruins Master Collect all experience in Ruins of the Perished Mines Master Collect all experience in Heartland Mines Village Master Collect all experience in Bramblestoke Village Forge Master Collect all experience in Iron Forge Tower Master Collect all experience in Tower of Sarek Master Collector Collect all experience in the game Easy. Collect all EXP in every level. Refer to each individual EXP trophy for a complete walkthrough. Complete the entire game Complete your first run on easy, or see the "Campaign" section at the top. What's next? Complete the entire game on Very Hard difficulty Try the game on very hard after you beat it on easy. See "Campaign" section at the top. Treasure Hunter Find all secret items in every level Survive a level other than Astral Academy without any deaths Best done on Crystal Caverns, as you can do this and the Flower Power trophy at the same time. No bosses, no forceful fighting, NO WORRIES! I Did It My Way Complete a level with 15 or more character deaths Best done when there is a checkpoint right before a huge fall, or some acid. Just keep on dying with the wizard, as he has the least health and shall go faster, and he will respawn when you get to the checkpoint! Hit at least three different monsters with a single bow shot In the level, Ruins of the Perished, there is a part where the doors close and you are ambushed, with a fully upgraded theif, and the Blue Master stone, let them gether together and just lob one shot up, releasing 4 arrows that will get you your trophy Kill at least 5 monsters in 3 seconds Best done in a level where bats swarm in packs of 6-7. Use the Thief's triple shot to quickly demolish them. What a View! Build a tower with at least 12 Wizard-created objects and stand on top of the tower without collapsing it Must have a fully upgraded wizard, and you need the Blue Master stone which increases the number of boxes by 1, the Red Master stone which increases the number of planks by 1, and Wolfgang's Music box which allows the wizard to make 2 more boxes. When all of the above is completed, start to build your tower! Master Ninja Complete 5 swings in a row to the same direction with the grappling hook Best done in Ruins of the Perished. Instead of walking on the bridge, just swing under it. BUT DON'T PRESS Kill 3 monsters with a single physical object drop or throw Just get a lot of enemies in one spot, and quickly draw a plank over all of them All Boxes And No Play... Create 500 objects in a single level Best done at any checkpoint, as you will have unlimited magic, and magic boosting items will help too. Draw short planks as they are faster to draw. Better Than Developers! Complete Tower of Sarek without any deaths on Very Hard difficulty Use the Thief most of the time. Her fire arrows break all the objects the necromancer can make, and when you get to the top, stay at the checkpoint and just pick off the enemies until they all die. Then just swing up and you are done! Footskull Fan Kill at least 50 monsters in a single level with the Knight's Charge ability Once you unlock the knights charge ability, find an area where lots of enemies spawn and tap Flower Power Complete a level without any monster getting killed Best done in the Crystal Caverns, as there are no bosses and it is the fastest level to complete. The Cool Way In a single level, kill one monster by jumping on it with the Knight, one with the Wizard's abilities and one with the Thief's grappling hook kick Best done in Academy Hallways. First, draw a box over an enemy's head, high enough so when it appears you kill it. Then with the thief, use the grappling hook, then while holding onto it, just swing into an enemy to kill it. Then in this level, after you activate the first floor switch, there should be a hole, and you should be able to see some enemies down there. Just use the knight to jump down on top of them and kill Created by , 09-26-2011 at 08:39 PM Last comment by on 03-18-2013 at 10:06 AM 1 Comments, 9,108 Views All times are GMT -5. The time now is 03:18 AM. Powered by vBulletin® Version 4.1.10 Copyright © 2014 vBulletin Solutions, Inc. All rights reserved. "Wiki" powered by VaultWiki v3.0.20 PL 1. Search Engine Optimization by vBSEO
{"url":"http://ps3trophies.com/forums/showwiki.php?title=PS3-Trophy-Guides:Trine-Trophy-Guide","timestamp":"2014-04-20T08:18:29Z","content_type":null,"content_length":"156054","record_id":"<urn:uuid:a0d93b87-e6d1-4af2-b295-5ac489e3f6ca>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: September 2005 [00556] [Date Index] [Thread Index] [Author Index] Re: More strange behavior by ComplexExpand • To: mathgroup at smc.vnet.net • Subject: [mg60622] Re: [mg60603] More strange behavior by ComplexExpand • From: Pratik Desai <pdesai1 at umbc.edu> • Date: Thu, 22 Sep 2005 02:08:15 -0400 (EDT) • References: <200509210720.DAA08138@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com Raul Martinez wrote: >To Mathgroup, >I use Mathematica 5.2 with Mac OS X (Tiger). >Add the following to a recent thread on the sometimes strange >behavior of ComplexExpand. >I used ComplexExpand with an argument in which all the variables in >the argument of the function are real. Since ComplexExpand is >supposed to assume that all variables are real by default, one would >expect ComplexExpand to return the expression without change, but it This is not exactly true, in mathematica all the variables are assumed complex (alteast what I have experienced so far) In my opinion not only doyou have to specifically assign your variable to be real by using Assuming or $Assumptions +Simplify, but also you have specifiy the values of "a" on the real line and hence your expression will change based on the complex expand algorithm $Assumptions = {-â?? < a < 0, t ϵ Reals} s1=ComplexExpand[(a/Ï?)^(1/4) *Exp[-(a t^2)/2]] // Simplify >>((1 + I)*(a^2)^(1/8)*Sqrt[E^(a*t^2)])/(Sqrt[2]*E^(a*t^2)*Pi^(1/4)) $Assumptions = {0 < a < â??, t ϵ Reals} s2=ComplexExpand[(a/Ï?)^(1/4) *Exp[-(a t^2)/2]] // Simplify $Assumptions = {a ϵ Reals, t ϵ Reals} s3=ComplexExpand[(a/Ï?)^(1/4) *Exp[-(a t^2)/2]] // Simplify >>((a^2)^(1/8)*Sqrt[E^(a*t^2)]*(Cos[Arg[a]/4] + s4=Simplify[(a/Ï?)^(1/4) *Exp[-(a t^2)/2]] fundamentally you are asking by, using complexexpand, to expand your given function in a complex way.... >Instead, here is what it does: > ComplexExpand[ (a / Pi)^(1/4) Exp[ (-(a t^2)/2 ] ] > (Exp[-a t^2] Sqrt[Exp[a t^2]] (a^2)^(1/8) Cos[Arg[a] / 4]) / Pi^ >(1/4) + i (Exp[-a t^2] Sqrt[Exp[a t^2]] (a^2)^(1/8) Sin[Arg[a] / >4]) / Pi^(1/4). >I have inserted parentheses in a few places to improve the legibility >of the expressions. >ComplexExpand treats the variable "a" as complex, but "t" as real. >This is puzzling to say the least. Moreover, it renders a^(1/4) as >(a^2)^(1/8), which seems bizarre. If you look closely at your expression, the only issue of complexity occurs with your "a" variable because appears as a radical which may have complex nature based on where it is defined on the Real number line >My interest is not in obtaining the correct result, which is easy to >do. Rather, I bring this up as yet another example of the >unreliability of ComplexExpand. In case anyone is wondering why I >would use ComplexExpand on an expression I know to be real, the >reason is that the expression in question is a factor in a larger >expression that contains complex variables. Applied to the larger >expression, ComplexExpand returned an obviously incorrect expansion >that I traced to the treatment of the example shown above. >I welcome comments and suggestions. >Thanks in advance, >Raul Martinez Hope this helps Pratik Desai Pratik Desai Graduate Student Department of Mechanical Engineering Phone: 410 455 8134 • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Sep/msg00556.html","timestamp":"2014-04-16T19:02:24Z","content_type":null,"content_length":"37918","record_id":"<urn:uuid:cad2bcf8-71c1-458f-b79d-7334f0bac4d7>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Tds Equation and Entropy I was wondering, with regards to the Tds equation Tds = de + pdv: 1. All of my textbooks state that integrating this equation, although derived for a reversible process, will give the entropy change regardless of the process or whether or not the process is reversible. However, I don't understand this concept because if the Tds equation was re-derived from first law for a general process, I thought that there would be a Tσ (entropy generation term, zero for reversible) in the equation since ds = δQ/T + σ and entropy generation would be path dependent? 2. With regards to the reversible Tds equation, I have trouble seeing how this equation is path independent since, for two fixed states, I always thought there was more than one possible work path or pdv expression in which state 1 can be used to move to state 2 and if this were true it would end up giving different s2-s1 values? 3. I have only found proofs for entropy as a state property for reversible process and argued that by extension it must be a state property for any process but is there a proof that directly show that it is a state property for any process? Thanks very much
{"url":"http://www.physicsforums.com/showpost.php?p=4224600&postcount=1","timestamp":"2014-04-18T03:03:23Z","content_type":null,"content_length":"9538","record_id":"<urn:uuid:0f7b0bbd-a2dc-4e76-a790-8328a353a179>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 4 The General Optical System The General Matrix Figure 10. In the preceding discussion we were able to describe the passage of a ray from one point to another in a simple optical system by means of two translation matrices and one refraction matrix. A more complicated optical system may contain many refracting surfaces of different indices of refraction. Let us for example consider the system of Fig. 10. A ray passes the plane PP at height y, angle These nine matrices may be multiplied together to give one two by two matrix relating y ' and y O ' then we have the matrix, which represents the effect on a ray in going from the plane through the first vertex O of the system through to the plane through the final vertex O ' of the system. It is useful in describing the general optical system to base our discussion on this matrix which we will designate as which in this example equals Obtaining the elements of the matrix may involve a lengthy series of matrix multiplications as in our example above, involving 7 matrices. However, all the necessary information about the optical system is included in this resultant two by two matrix and from it the cardinal points and planes of an optical system are readily obtainable. From these cardinal points we may completely describe the image-forming characteristics of the system. Figure 11. In general then, we can assume the situation of Fig. 11. We know (or can compute) the matrix relating the rays going from the plane through O to the plane through O ' namely The matrix equation relating the emergent ray (y ' at plane QQ a distance D ' from O ' to the incident ray (y In Fig. 10 we had D ' = 2 cm and D = 3 cm. We are now in a position to discuss the first two cardinal points (and planes), the first and second focal points (and planes).
{"url":"http://www.bama.ua.edu/~ddesmet/id/c4ax.html","timestamp":"2014-04-17T18:22:58Z","content_type":null,"content_length":"4302","record_id":"<urn:uuid:523a5230-8824-4e5f-8891-3cb3fadec528>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] 461: Reflections on Vienna Meeting Vaughan Pratt pratt at cs.stanford.edu Thu Jun 16 03:07:40 EDT 2011 On 6/14/2011 11:48 PM, Alex Simpson wrote: > I don't read Angus here as saying that he believes the consistency of PA > to be a "legitimate mathematical problem". For example, what Angus says > is wholly compatible with the view, expressed by William Tait in his > posting of 22nd May, that "the search for nontrivial consistency proofs > is off the board" (since, roughly, any consistency proof is at least as > doubtful as the system whose consistency it establishes). Also, given the > tenor of the rest of his talk, I imagine that Angus would very > much not view consistency investigations as the concern of legitimate > mathematics. I'd be interested in seeing how this view is reconciled with Harvey's point that Con(PA) is provable in WKL_0. It seems to me that Harvey's offline correspondence with Voevodsky, as reported here on May 26 by Harvey, was going well up until the following exchange, which appears to have brought the correspondence to an untimely end. VV: Is this question [about a 1/n Cauchy convergent subsequence of rationals] related to the compactness of the unit interval in R ? HF: No. There are two differences. 1. I went out of my way to avoid bringing in real numbers, for a number of reasons. 2. Also, the compactness of the unit interval in R is logically substantially weaker, and will not suffice to interpret PA. The "no" response appears to have led VV to respond that the correspondence has at the moment "reached the discussion of areas which are not well known to me." Surely the correct answer to VV's question would have been "yes." The subtext of this exchange as I understood it was not directly about logical strength but simply whether VV would accept the existence of a 1/n Cauchy convergent subsequence of rationals. Assuming he's ok with Koenig's Lemma for binary trees, he should be ok with that existence. since the original proof of the Heine-Borel theorem based on Koenig's Lemma, which must be well known to VV, does not raise any obvious (to me) concerns for the unit interval in Q. The point that its application to Q raises arithmetic issues that do not arise when applied to R (if that's the correct interpretation of Harvey's "no"), which may well be unknown to VV, seems a distraction in this context. The interesting question here would be, does VV have any problem with Koenig's Lemma for binary trees? If so it would be instructive to understand his objection to it. If not, what other concern could he have about WKL_0? Vaughan Pratt More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2011-June/015577.html","timestamp":"2014-04-16T17:31:48Z","content_type":null,"content_length":"5318","record_id":"<urn:uuid:972536cc-81a7-4c2c-8f41-264321d71be4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Marina Dl Rey, CA Math Tutor Find a Marina Dl Rey, CA Math Tutor I am a credentialed ten-year veteran teacher. I have taught and tutored students with all levels of math skills, and have developed a variety of strategies that address the needs of all learners. I work online and also tutor at public libraries near me.I am available for help on specific types of ... 10 Subjects: including precalculus, algebra 1, algebra 2, geometry ...I have been trained at the University of Chicago, in English literature by one of the most renowned English professors in the world. I teach anyone English literature from school age students to adults. Professionally, I have extensively studied English at both the University of Chicago and the University of Illinois with two professors from Harvard University. 109 Subjects: including algebra 2, calculus, chemistry, logic ...I also took courses in solid state, and Master's level quantum mechanics in the Physics department. My later 2 modern physics courses included both Newtonian mechanics, as well as quantum mechanics, with emphasis on the theory behind the basic particle physics experiments in the early 20th century. I received an 'A' letter grade in these courses. 33 Subjects: including algebra 1, differential equations, ACT Math, probability ...As a teacher I implemented various ways of instruction focusing on facilitating learning through exploration and activities. As a tutor I am able to do the same by finding the best way to deliver information to each student, so that they can comprehend the information to the best of their abilit... 10 Subjects: including algebra 1, vocabulary, geometry, phonics ...Most recently, I taught and tutored English as a second language to children in Taiwan.I have studied and spoken French for over ten years. I have lived in France, completely immersing myself in the language and reading, writing, and speaking daily only in French. I have a strong background in French grammar and also have experience with colloquial expressions. 24 Subjects: including algebra 2, geometry, prealgebra, precalculus Related Marina Dl Rey, CA Tutors Marina Dl Rey, CA Accounting Tutors Marina Dl Rey, CA ACT Tutors Marina Dl Rey, CA Algebra Tutors Marina Dl Rey, CA Algebra 2 Tutors Marina Dl Rey, CA Calculus Tutors Marina Dl Rey, CA Geometry Tutors Marina Dl Rey, CA Math Tutors Marina Dl Rey, CA Prealgebra Tutors Marina Dl Rey, CA Precalculus Tutors Marina Dl Rey, CA SAT Tutors Marina Dl Rey, CA SAT Math Tutors Marina Dl Rey, CA Science Tutors Marina Dl Rey, CA Statistics Tutors Marina Dl Rey, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Marina_Dl_Rey_CA_Math_tutors.php","timestamp":"2014-04-19T12:13:42Z","content_type":null,"content_length":"24431","record_id":"<urn:uuid:20c4ebd2-6e0c-45b6-bec9-585b214eaecf>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
I don't want to go into extensive covering of basic horn theory, Witold Waldman's Site has a very good overview of what is available in terms of general loudspeaker theory and documentation thereof, so please go there to find the way to more specific stuff. Briefly however all acoustic plane wave horns share a common formula, the various horn shapes are obtained by tweaking or omitting variables. The Duelund Horn is simply a chosen version of that formula that comes with the advantage of the most linear loading of the unit, and with the slight disadvantage compared to an exponential horn that it will theoretically pay for that linearity in its lowest octave with a slighty lower maximum SPL if distortion parameters are to be applied. This because it has a somewhat slower initial expansion rate. In horn "family name terms" it is a hyperbolic horn, exactly the intermediate between a catenoidal horn and an exponential horn, i. e. it is exponential-catenoidal. Here is the formula describing the linear area expansion of all planar wave horns, tractrix horns (a.k.a. known as "Kurvenwellentrichter") have their own formula with a few extra hieroglyphs added. [1] Sx = St(cosh mx + Tsinh mx)2'ed Where S is the cross-sectional area, Sx the area at the distance x, St is the area at the throat, x is the distance from the start of the horn, and m is a constant derived from the intended cut-off frequency FcutoffHz determined by: [2] FcutoffHz = --- 2 pi i. e. (if I didn't get it wrong) 2 pi FcutoffHz [3] m = -------------- Where c is the speed of sound. The Duelund Horn is characterized by the paramater T in equation [1] being chosen as 0.6. Solving Equation [1] with a start area of zero shows why it is such a wonderful advantage to use a large beginning area, i. e. that the way to make a horn shorter for the use in any given band is to use a larger initial area, which implies a larger loudspeaker unit. There are some additional concerns in horn design, it is beyond this basic description of the formula chosen for The Duelund Horn to go into them. But if you want to construct a different implementation of The Duelund Horn, you may have to so do in case you want to minimize distortion and maximize efficiency. There is a simple and basic difference between rear loaded horns and front loaded horns that you must understand. A rear loaded horn has a chamber between the unit and the start of the horn. This chamber functions as a cross-over, sound below the Fs (resonance frequency) of the loudspeaker in it gets passed on via the horn throat into the horn. A front loaded horn should have the smallest possible volume of air between the loudspeaker membrane and the horn so as to have the widest possible frequency range. To also have maximum efficiency and minimum distortion it is required that the unit has a rear chamber, a conventional closed cabinet, with a size that results in a compliance that is equal to the compliance of the horn mouth, otherwise the units membrane will move in an asymmetric way and thus create second order distortion. This usually means that the rear chamber should be perhaps surprisingly small, and usually filled up with suitable damping material to avoid midrange resonances - it can with advantage also be irregular. You may also want to look for litterature on matching the loudspeaker unit compliance to the horn compliance, they should match for maximum efficiency, the Qt may a good parameter to look for and at. Loudspeaker units that are designed for horn use usually have very low Qt's, i. e. in the range below Qt = 0.3. If the maximum linear bass reflex cabinet gets ridiculously small in a bass reflex modelling, then the unit is more likely to give good results when horn loaded. Compression is usually the name of the game, i. e. that the area of the horn throat must he [the compression factor] smaller than area of the unit. Compression factors usually end up being in the range between 2 and 3, so a good "dumb guess" probably is a compression factor of 2.5. However, while compression improves coupling to the horn, and thus improves efficiency, then it comes with a penalty: air is not linear, and will exhibit significant harmonic and intermodulation distortion at levels in the 150 dB SPL range and beyond. It follows from this that a horns maximum SPL at the mouth (!) when distortion parameters are considered gets smaller with increased throat
{"url":"http://muyiovatki.dk/duelund/theory.htm","timestamp":"2014-04-21T04:31:37Z","content_type":null,"content_length":"6398","record_id":"<urn:uuid:46dfd73d-550f-46a9-b4ea-2d1a8aab405c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
1620 Data Processing System General information The IBM 1620 was a general-purpose, stored-program data processing system for small businesses, research and engineering departments of large companies, and schools requiring solutions to complex problems in the areas of engineering, research, and management science. • Punched card, paper tape and keyboard input; card, paper tape and printed output. • Simultaneous read, compute and punch when using card input-output. • Large-capacity core storage - up to 60,000 digits. • High internal processing speeds. Access time - 20 microseconds. • Compatibility with other IBM equipment through FORTRAN. • Real-time input and output features enabled the 1620 to be expanded into a closed-loop process control system (IBM 1710). • Available in a wide variety of systems configurations with many special features available upon request. • Low installation and operating costs. • A comprehensive but simplified operator's console. • Simple but powerful instruction set. • Decimal and alphameric notation eased program writing and clarified report writing. • Variable field and record length for optimum use of storage. • Extensive library of programming systems. • Simplified two-address instruction format. One instruction could perform any one of the following combinations: Find, add and store the sum of two numbers. Designate the data to be operated upon, specify the return address from a subroutine, and branch to a subroutine. Transmit from location to location an entire record regardless of length. Contain a constant for use in a problem in addition to the operation performed. • Automatic validity checking of all data transfers, arithmetic functions, and input-output operations. • Automatic sign control. • Additional instructions available. • Automatic floating point feature available. Physical characteristics Solid-state components. High reliability with low maintenance. Internal data representation Self-checking, six-bit, binary-coded decimal.Four-bit numerical value (1-2-4-8) Flag-bit for field and sign designation. Check-bit to give odd parity check. Direct conversion from card code to two-digit alphameric coding. Processing speeds Fixed point operations Basic machine cycle was 21 microseconds. Time included the fetching of two factors and was the complete interval elapsed from one instruction to the next. Addition or subtraction (5 digits) - 560 microseconds. A rate of 1,780 per second. Multiplication (5 digits by 5 digits) - 4.96 milliseconds. A rate of 200 per second. Division (5-digit quotient) with automatic divide feature - 16.86 milliseconds. A rate of 56 per second. Logical decisions - 200 microseconds. A rate of 5,000 per second. Data transmission of 5-digit fields - 360 microseconds. A rate of 2,800 per second. Optional automatic floating point operations When using this hardware feature, floating point numbers consisted of a variable length mantissa with a two digit exponent. So that the required degree of precision could be specified, the mantissa could vary from 2 to 100 digits in length and the exponent field could range from -99 to + 99. The times listed are based on a two-digit exponent and an eight-digit mantissa. They include normalizing and access to two floating point fields. Floating add or subtract — l.2 milliseconds. Floating multiply — 12.5 milliseconds. Floating divide — 41.7 milliseconds. IBM 1620 Central Processing Unit Contains console, arithmetic and logical unit, and core storage. • Visual display of machine check indicators, program registers and storage locations. • Control keys and switches for manual and semiautomatic control of computer operations. • Typewriter and typewriter release/start key for simultaneous release and start were included as part of the console. It functioned as a direct input-output device. Arithmetic and logical unit • Two-address instruction format, 12 digits. • 32 powerful commands — could be expanded to 47 with optional features. • Addition, subtraction and multiplication accomplished by automatic table lookup in core storage. • Division accomplished by available subroutine or by optional automatic divide feature. • Console switches and machine check indicators could be interrogated by the program. Core storage • A basic system contained 20,000 digits of core storage. • Each digit position individually addressable by a five-digit address. • 300 positions permanently assigned for use in arithmetic operations. IBM 1623 Storage Unit Expanded core storage to 40,000 or 60,000 positions. Model 1 contained an additional 20,000 positions. Model 2 contained an additional 40,000 positions. IBM 1622 Card Read Punch Read 250 cards per minute, maximum. Punched 125 cards per minute, maximum. Synchronizer storage for input and output. Overlap of reading, computing and punching. Automatic conversion when reading or punching alphameric data. Large-capacity radial, nonstop stackers. Automatic checking of reading and punching. IBM 1621 Paper Tape Reader Read 150 characters per second. Eight-channel paper tape. Self-checking code insured accuracy of reading. Accommodated both numerical and alphabetic information in single-character coding. IBM 1624 Tape Punch Punched 15 characters per second. Eight-channel tape with self-checking code. Accommodated both numerical and alphabetic information in single-character coding. Optional features Automatic division. Automatic floating point operations. Multilevel indirect addressing. Additional instructions. Checking features Odd-bit parity check on internal data transmission. Odd-bit parity check on tape input-output. Automatic checking of card reading or punching. Overflow check on addition, subtraction and compare. Table-lookup arithmetic fully checked. Expansion to a 1710 Control System A 1620 was field-convertible to a 1710 Control System. A 1711 Data Converter model 1 connected to a 1620 Data Processing System simplified collection and analysis of analog data without off-line conversion units. Data from analog measuring devices was transferred through the 1711 directly to the 1620. The system's versatility made it ideal for quality control applications, process studies, and process optimization. With the 1711 model 2 and 1712 Multiplexer and Terminal Unit connected to the 1620, the computer not only received data from analog measuring devices, but fed results through the 1711/1712 to control processes by closing contacts which completed circuits to the instrumentation for closed-loop process control. Programming languages and systems Symbolic Programming System 1620 symbolic language allowed the programmer to refer to instructions and data in the program by name or other meaningful designation without regard to their location in the machine to facilitate relocating sections of programs, incorporating subroutines, and inserting or deleting instructions. Programming was further simplified through the use of macroinstructions which generated linkages and incorporated subroutines into the object program. Subroutines available for floating point operations included add, subtract, multiply, divide, square root, sine, cosine, arc tangent, and (for natural and base ten) exponential and logarithm. A subroutine for single precision division was also available. SPS would make use of variable lengths subroutines, as well as automatic floating point, on an optional basis. One-Pass SPS A subset of the 1620 SPS, this system required only one pass of the source program tape to assemble and punch out an object program. A programming system which permitted users to write their programs in a language closely resembling that of mathematics. A source program written in the FORTRAN language was processed by the 1620 FORTRAN compiler to produce a 1620 machine-language program. A separate subroutine package using the automatic divide feature was included. Automatic floating point operations were used on an optional basis. FORTRAN operated on a 20,000, 40,000 or 60,000 digit system. FORTRAN Pre-Compiler Used to edit source programs written in the 1620 FORTRAN language, FORTRAN Pre-Compiler eliminated many common errors in the source program. Console switch control permitted many input-output options in checking a program prior to compilation. Designed for a 1620 system equipped with optional automatic divide, indirect addressing, additional core storage (1623 Model 1) and a 1622 Card Read Punch, FORTRAN II was an extension of the basic 1620 FORTRAN system. Additional FORTRAN language statements were included, and high degrees of precision in computation could be achieved by specifying the length of number fields in excess of the fixed length normally allowed. A simple "load-and-go" program whose language is a subset of, and compatible with, the 1620 FORTRAN, 1620 GOTRAN eliminated the compilation phase and went directly to problem solution. Upon execution of one program, the system was ready to accept another GOTRAN program. Program Library This included programs for mathematical functions, for utility (commonly used small programs), and for engineering applications. IBM Services Executive and programming schools. Systems engineering representatives in local IBM offices. Special representatives in specific industries. Tested applied programming routines. An extensive library of technical publications. A complete line of data processing machines and systems. Prompt, efficient equipment maintenance and service. Over 200 branch offices to serve customer needs.
{"url":"http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP1620.html","timestamp":"2014-04-21T04:44:59Z","content_type":null,"content_length":"21002","record_id":"<urn:uuid:2dab59b3-e0cd-4c28-8639-f8a0cb79b8dd>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
quintic equation and constants hello! so, I've came accross a problem I can't solve at all :c. The problem is as it follows: The constant term of is equal -270, then a=? I searched how to solve it and found only one solution, but I didn't understand what was done... they turned into and somehow found that k = 2. also, they found "", which give us a = -3. That's actually the right answer, but I can't figure out what was done to solve this problem... :c Can you guys explain it to me? Thanks c:
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=276555","timestamp":"2014-04-20T00:55:30Z","content_type":null,"content_length":"16449","record_id":"<urn:uuid:6a645109-fa46-40a3-8a58-51d0879fe974>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help September 27th 2010, 07:25 AM #1 Sep 2010 P and q are two distinct primes. H ia s proper subset of integers and also it is a group under addition. another set A={p,p+q,pq,p^q,q^p}. now H has three elements and those three elements are from A. which are those three elements? please reply in detail I presume you mean $H \cap A$ contains 3 elements, not $H$ contains three elements (as subgroups of $\mathbb{Z}$ are either trivial or contain infinitely many elements). Now, subgroups of $\mathbb{Z}$ are all of the form $n\mathbb{Z} = \{ni; i \in \mathbb{Z}\}$ (why?). Thus, there exists $n \in \mathbb{Z}$ such that $H=<n>$ ( $H$ is the subgroup generated by $n$ Basically, this all translates as "the elements you are looking for are elements $a, b, c \in A$ such that $gcd(a, b, c) eq 1$" (not equal to 1 as H is a proper subgroup). Can you find three such elements? first of all thank you for replying. but question in the book goes like this only..that's why i am unable to get the solution the solution given in book is {p,p+q,p^q}. but how is it possible that sum of any two of these three elements also lies in the group? September 27th 2010, 07:46 AM #2 September 27th 2010, 08:54 AM #3 Sep 2010 September 27th 2010, 12:13 PM #4 September 27th 2010, 12:27 PM #5 Sep 2010 September 27th 2010, 11:16 PM #6
{"url":"http://mathhelpforum.com/advanced-algebra/157587-question.html","timestamp":"2014-04-19T08:51:35Z","content_type":null,"content_length":"42203","record_id":"<urn:uuid:9a9c9ec4-bf2b-4da5-bfcf-9a791b53f2ff>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Eaton, CO Precalculus Tutor Find an Eaton, CO Precalculus Tutor ...I had 30+ hours of training to prepare me for my Biology MCAT classes. In order to teach for this agency I had to score at least a 13 on the Biology section, which I did. I took a human nutrition class as an undergraduate at UC San Diego and received an A in the class. 26 Subjects: including precalculus, reading, SAT reading, SAT writing ...I am fluent in various levels of Mathematics (I have a Ph.D. in experimental particle physics). Statistics is a knowledge base used quite often in this scientific field. As someone who works with Cloud Software, Java has become the mainstay application development language for internet communica... 47 Subjects: including precalculus, chemistry, calculus, physics ...I feel very comfortable with the material and effective ways to explain the algorithms and practical applications of all precalculus topics. Those topics include: logarithmic and exponential functions, polynomial and rational functions, vectors, conic sections, complex numbers, trigonometry and ... 14 Subjects: including precalculus, reading, geometry, algebra 1 ...I also hold a National Strength and Conditioning Association personal trainers certificate and have done so since 2009. Aside from my fitness background I have compete and coached in Track and Field at the collegiate and secondary levels. My major during my bachelors degree was Physical Education and I have taught PE for 4 years. 11 Subjects: including precalculus, geometry, algebra 1, algebra 2 ...Subjects ranged from remedial algebra, geometry, trigonometry, to calculus, calculus-based physics, and chemistry. In graduate school, I instructed undergraduate courses in Weather and Climate, and 'The Habitable Planet' in a classroom setting, teaching students about introductory earth science.... 16 Subjects: including precalculus, chemistry, calculus, physics Related Eaton, CO Tutors Eaton, CO Accounting Tutors Eaton, CO ACT Tutors Eaton, CO Algebra Tutors Eaton, CO Algebra 2 Tutors Eaton, CO Calculus Tutors Eaton, CO Geometry Tutors Eaton, CO Math Tutors Eaton, CO Prealgebra Tutors Eaton, CO Precalculus Tutors Eaton, CO SAT Tutors Eaton, CO SAT Math Tutors Eaton, CO Science Tutors Eaton, CO Statistics Tutors Eaton, CO Trigonometry Tutors
{"url":"http://www.purplemath.com/eaton_co_precalculus_tutors.php","timestamp":"2014-04-17T07:39:43Z","content_type":null,"content_length":"24028","record_id":"<urn:uuid:1d21e5df-b35d-427c-9acf-7861b53bfd16>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Lemon Grove Prealgebra Tutor Find a Lemon Grove Prealgebra Tutor ...As a tutor, I have led 87% of my SAT students to score increases of at least 100 total points and 40% to score increases of more than 200 total points. According to the CollegeBoard website, only about 4% of students improve their SAT scores by at least 100 points. After graduating from college... 54 Subjects: including prealgebra, Spanish, English, writing ...I am a certified elementary teacher. I have a bachelor's degree in education. I believe these two things greatly qualify me for tutoring in study skills. 18 Subjects: including prealgebra, reading, geometry, GED ...This is very important, because math is a series of building blocks. Once the basics are learned, a student can then develop the skills to tackle any math problem. I believe anyone can learn to understand math and science with just a little bit of time and help from a tutor that cares. 35 Subjects: including prealgebra, reading, writing, English I am a recent graduate with a Master of Science degree in Traditional Oriental Medicine Candidate, who relocated from Orlando to San Diego to complete my studies. I currently also hold a Bachelor of Science degree in Electronic Engineering Technology. My career path has been varied, having worked ... 10 Subjects: including prealgebra, English, reading, biology ...Today I have hundreds of hours of experience, with the majority in Algebra and Statistics, and I would be comfortable well into college math. During the learning process, small knowledge gaps from past courses tend to reappear as roadblocks down the line. By identifying and correcting these problems, I help students become effective independent learners for both current and future 14 Subjects: including prealgebra, calculus, physics, geometry
{"url":"http://www.purplemath.com/lemon_grove_ca_prealgebra_tutors.php","timestamp":"2014-04-19T05:01:28Z","content_type":null,"content_length":"24105","record_id":"<urn:uuid:5acacedd-39b4-41c9-aeec-9841e02941f2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Error Modeling and Analysis for InSAR Spatial Baseline Determination of Satellite Formation Flying Mathematical Problems in Engineering Volume 2012 (2012), Article ID 140301, 23 pages Research Article Error Modeling and Analysis for InSAR Spatial Baseline Determination of Satellite Formation Flying Department of Mathematics and Systems Science, College of Science, National University of Defense Technology, Changsha 410073, China Received 30 September 2011; Revised 9 December 2011; Accepted 12 December 2011 Academic Editor: Silvia Maria Giuliatti Winter Copyright © 2012 Jia Tu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Spatial baseline determination is a key technology for interferometric synthetic aperture radar (InSAR) missions. Based on the intersatellite baseline measurement using dual-frequency GPS, errors induced by InSAR spatial baseline measurement are studied in detail. The classifications and characters of errors are analyzed, and models for errors are set up. The simulations of single factor and total error sources are selected to evaluate the impacts of errors on spatial baseline measurement. Single factor simulations are used to analyze the impact of the error of a single type, while total error sources simulations are used to analyze the impacts of error sources induced by GPS measurement, baseline transformation, and the entire spatial baseline measurement, respectively. Simulation results show that errors related to GPS measurement are the main error sources for the spatial baseline determination, and carrier phase noise of GPS observation and fixing error of GPS receiver antenna are main factors of errors related to GPS measurement. In addition, according to the error values listed in this paper, 1mm level InSAR spatial baseline determination should be realized. 1. Introduction Close formation flying satellites equipped with synthetic aperture radar (SAR) antenna could provide advanced science opportunities, such as generating highly accurate digital elevation models (DEMs) from Interferometric SAR (InSAR) [1, 2]. Compared to a single SAR satellite system, the performance of two SAR satellites flying in close formation can be greatly enhanced. Nowadays, close satellite formation flying has become the focus of space technology and geodetic surveying. In order to realize the advanced space mission goal of InSAR mission, the high-precision determination of inter-satellite interferometric baseline [3] is a fundamental issue. Take the TanDEM-X mission for instance. TanDEM-X mission is the first bistatic single-pass SAR satellite formation, which is formed by adding a second TanDEM-X, almost identical spacecraft, to TerraSAR-X and flying the two satellites in a closely controlled formation. The primary mission goal is the derivation of a high-precision global DEM according to high-resolution terrain information (HRTI) level 3 accuracy [4–6]. The generation of accurate InSAR-derived DEMs requires a precise knowledge of the interferometric baseline with an accuracy of 1mm (1D, RMS) [7]. Therefore high-precision determination of inter-satellite interferometric baseline is a prerequisite for InSAR mission. The interferometric baseline is defined as the separation between two SAR antennas that receive echoes of the same ground area [8]. Based on this definition, the interferometric baseline can be denoted as the resultant vector of temporal baseline and spatial baseline, that is, where , are epochs that two SAR antennas receive echoes of the same ground area, , represent the positions of SAR antenna phase centers of satellite 1 and satellite 2 at epoch in International Terrestrial Reference Frame (ITRF), respectively, is the spatial baseline, is the temporal baseline which is the velocity integral of satellite 1. For close formation flying (1km-2km) with single-pass bistatic acquisitions, the deviation of epochs that two SAR antennas receive echoes of the same ground area is typically on the millisecond level. When the velocity is determined on the mm/s level, its influence in the temporal baseline can be neglected. Therefore, the accuracy of interferometric baseline is mainly determined by the accuracy of spatial baseline. Note that only spatial baseline is considered in this paper. The spaceborne dual-frequency GPS measurement scheme [9–11] is widely used for inter-satellite baseline determination currently. This scheme for spatial baseline determination consists of two steps. Firstly, the relative position of two formation satellites is determined by dual-frequency GPS measurement, and then spatial baseline is transformed from inter-satellite relative position. The relative position here is the vector that links the mass centers of two formation satellites. In our research, impacts of the errors introduced by spatial baseline measurement are analyzed. This paper starts with a description of spatial baseline measurement using dual-frequency GPS. The baseline transformation from the relative position to spatial baseline is given. In a second step, errors are classified into two groups: errors related to GPS measurement and errors related to baseline transformation. The error characters are studied, and the impact of each error on spatial baseline determination is analyzed from theoretical aspect. Then the impacts of each error and total errors on spatial baseline determination are analyzed by single factor simulations and total error sources simulations. At last, conclusions are shown. 2. Generation of Spatial Baseline In preparation for latter description some coordinate systems are introduced at first, which are illustrated in Figure 1. Coordinate systems employed in this paper contain Conventional Inertial Reference Frame (CIRF), ITRF, satellite body coordinate system, and satellite orbit coordinate system. CIRF used here is J2000.0 inertial system and ITRF is ITRF2000 system. The definitions of these coordinate systems can be found in [12]. As the spatial baseline is determined by spaceborne dual-frequency GPS measurement scheme, the entire process of spatial baseline determination consists of relative positioning and baseline transformation. Figure 2 is the geometric relation for spatial baseline determination. Relative positioning is the determination of by dual-frequency GPS observation data. As the real position of signal reception is the phase center of GPS receiver antenna, GPS observation data has to be revised to the mass center of satellite using the phase center data of GPS receiver antenna during relative positioning. From Figure 2, baseline transformation can be described as follows: where is the spatial baseline in ITRF, is the relative position of two satellites in ITRF, is a vector that links SAR antenna phase center to mass center of satellite in body coordinate system of Satellite , is a transformation matrix of Satellite from satellite body coordinate system to ITRF. The flow chart of spatial baseline determination is shown in Figure 3. 3. Errors of Spatial Baseline Measurement According to the generation of spatial baseline in Section 2, the errors of spatial baseline measurement can be classified into two groups: errors related to GPS measurement, which are introduced by relative positioning using dual-frequency GPS measurement, and errors related to baseline transformation, which are generated by the transformation from relative position to spatial baseline. 3.1. Errors Related to GPS Measurement The relative positions of two satellites are determined by the reduced dynamic carrier phase differential GPS approach. In this approach, the absolute orbits of one reference satellite (Satellite 1) are fixed, which are determined by the zero-difference reduced dynamic batch least squares approach based on GPS measurements of single satellite. Only the relative positions are estimated by reduced dynamic batch least-squares approach based on differential GPS measurements. The integer double difference ambiguities for relative positioning are resolved by estimating wide-lane and narrow-lane combinations [13]. The well-known Least-Squares Ambiguity Decorrelation Adjustment (LAMBDA) method [14, 15] is implemented for the integer estimate. By differenced GPS observation, common errors can be eliminated or reduced. International GNSS Service (IGS) final GPS ephemeris product (orbit product and clock product) [16] is often adopted for orbit determination based on GPS observation. The accuracy of GPS final orbit product is presently on the order of 2.5cm. For 2km separation of satellite formation, the impact of GPS ephemeris error on single-difference GPS observation is about 0.0025mm [17], which can be neglected. The impact of GPS clock error can be well cancelled out by differential GPS observation. Due to the close separation (1km-2km) and similar materials, configuration, and in-flight environment of formation satellites, near-field multipath, thermal distortions of satellites, and other external perturbations can also be effectively reduced by differential GPS observation. In addition, the influence of differential ionospheric path delay is mainly from the first order, which can be eliminated by constructing ionosphere free differential GPS observation. Therefore, the errors related to GPS measurement that have to be considered consist of noise of GPS carrier phase measurement, ground calibration error of GPS receiver antenna phase center, error of satellite attitude measurement, and fixing error of GPS receiver antenna. 3.1.1. Noise of GPS Carrier Phase Measurement The quality of GPS carrier phase observation data used is of utmost importance for relative positioning. The noise of GPS carrier phase measurement belongs to random error, which cannot directly be eliminated by GPS differential observation. Take the BlackJack receiver and its commercial Integrated GPS and Occultation Receiver (IGOR) version, for example, which are widely used for geodetic grade space missions and exhibit a representative noise level of 1mm for L1 and L2 carrier phase measurements [18]. The reduced dynamic relative positioning approach makes use of dynamical models of the spacecraft motion to constrain the resulting relative position estimates, which allows an averaging of measurements from different epochs. The influence of GPS carrier phase noise can be effectively reduced by reduced dynamic relative positioning approach. 3.1.2. Ground Calibration Error of GPS Receiver Antenna Phase Center The phase center location accuracy of the GPS receiver antenna will directly affect the veracity of GPS observation modeling. GPS receiver antenna phase center is the instantaneous location of the GPS receiver antenna where the GPS signal is actually received. It depends on intensity, frequency, azimuth, and elevation of GPS receiving signal. The phase center locations can be described by the mechanical antenna reference point (ARP), a phase center offset (PCO) vector, and phase center variations (PCVs). The PCO vector describes the difference between the mean center of the wave front and the ARP. PCVs represent direction-dependent distortions of the wave front, which can be modeled as a consistent function that depends on azimuth and elevation of the observation from the position indicated by the PCO vector. The position of GPS receiver antenna phase center can be measured by ground calibration, such as using an anechoic chamber and using field calibration techniques [18, 19]. Take the SEN67-1575-14+CRG antenna system for instance. It is a dual-frequency GPS receiver antenna and has been used for TanDEM-X mission. Its phase center has been measured by automated absolute field calibration [20]. The mean value of calibration result is shown in Figure 4 that the pattern of PCVs has obvious character of systematic deviation. The maximum value for the mean PCVs on ionosphere-free combination can reach to 1.5cm. In addition, there also exist random errors in the same direction of different receptions. The random errors are similar to the noise of GPS carrier phase measurement and can also be smoothed by reduced dynamic relative positioning approach. As there is a slim difference between the line of sight (LOS) vectors of two satellites during close formation flying, the common systematic errors of GPS receiver antenna phase center and near-field multipath can be eliminated by differential GPS observation. Therefore, the same type of GPS receiver antenna has to be selected for both formation satellites in order to reduce the impact of these 3.1.3. Error of Satellite Attitude Measurement Satellite attitude data are obtained from star camera observations and provided as quaternion. The error of satellite attitude measurement consists of a slowly varying bias and a random error. Its impact on GPS relative positioning appears on the correction for GPS observation data of single satellite, that is, the reference point of GPS observation data has to be corrected from GPS receiver antenna phase center to the mass center of satellite by satellite attitude data and GPS receiver antenna phase center data. Take Satellite 1 for instance. The correction in direction of LOS vector (in CIRF) is given by where is GPS receiver antenna phase center location in body coordinate system of Satellite 1, is the transformation matrix from body coordinate system of Satellite 1 to CIRF and can be obtained by attitude quaternion data, is the transformation matrix from body coordinate system to orbit coordinate system of Satellite 1, and is the transformation matrix from orbit coordinate system of Satellite 1 to CIRF. Assuming that the Euler angles are , , and respectively, we can get where , , are rotation matrices around roll axis, pitch axis, and yaw axis, respectively. Assuming that the errors of Euler angle measurements are , and , respectively, and the corresponding error matrix of is , the relation between and can be expressed as Furthermore, the impact of Euler angle errors on in (3.1) can be obtained as is a three-dimensional random vector and its magnitude can be described as the mean value of space radius, that is where denotes the magnitude of a vector, denotes the expectation of a random Assuming Euler angle errors of different axes are independent, we can get where denotes the variation of a random variable. As , , , and are orthogonal matrices, for any , we can get Taking (3.7) into (3.6), we can get Hence, For differential GPS observation, the impact of attitude determination error on two satellites can be given as follows According to the TanDEM-X missions, the attitude determination accuracy has a slowly varying bias of in the yaw, pitch, and roll components plus a 0.003° sigma random error [21]. From (3.8), (3.9), and (3.10), we can get Take the GPS receiver antenna ARP location of TanDEM-X mission for instance, that is, we can get 3.1.4. Fixing Error of GPS Receiver Antenna The fixing error of GPS receiver antenna is caused by the inaccuracy of the fixed position of antenna onboard the satellite. This error is a random error for multiple repeated satellite missions. But for a single launch, it is considered to be a fixed bias vector in satellite body coordinate system during satellite flying. The fixing errors of GPS receiver antenna in body coordinate system of two satellites are assumed as follows: For a mutually observed GPS satellite , the LOS vectors of two formation satellites are assumed to be and . The impact of fixing errors of GPS receiver antenna for both formation satellites on GPS observation data can be denoted as The impact of fixing error of GPS receiver antenna on differential GPS observation is Due to the close separation of two satellites, we can assume From (3.16) and (3.17), we can get As the magnitudes of and are small (generally less than 0.5mm) and the difference between and is insignificant; therefore, the impact of in (3.18) can be neglected and the main influence is from . If the magnitude of GPS receiver antenna fixing error is 0.5mm for each formation satellite, the maximum 3-dimensional impact on relative positioning can reach to 1mm. In addition, we can also draw a conclusion from the aforementioned analysis that the GPS receiver antenna bias caused by thermal distortions of satellites can be cancelled out by differential GPS 3.2. Errors Related to Baseline Transformation From (2.1), errors related to baseline transformation consist of two parts: one part is introduced by transformation matrices and , which is mainly caused by the satellite attitude measurement error; the other part is introduced by S[1]O[1] and S[2]O[2], which is caused by the inconsistency of two SAR antenna phase centers. 3.2.1. Error of Satellite Attitude Measurement Take for instance, where is a transformation matrix from CIRF to ITRF, has been defined in (3.2). Note that the transformation from CIRF to ITRF is in accordance with IERS 1996 conventions [22] and this transformation error can be neglected. The errors of and are also introduced by satellite attitude measurement errors. Similar to the analysis of satellite attitude measurement error related to GPS measurement, from (3.8), we can obtain Hence, the impact of attitude determination errors on baseline transformation is given as follows: Take the attitude determination accuracy of TanDEM-X mission for instance and select the magnitudes of and as follows we can get 3.2.2. Consistency Error of SAR Antenna Phase Center Unlike GPS receiver antenna, active phased array antenna is selected for SAR antenna. The phase center of the SAR antenna describes the variation of the phase curve within the coverage region against a defined origin, here the origin of the antenna coordinate system [18]. For two formation satellites of InSAR mission, the same type of SAR antenna should be selected. As the identical processes of the scheme designing, manufacturing, and testing are selected for SAR antennas of the same type, theoretically the consistency in configuration and electric performance of SAR antennas should be well achieved. But factually there exist the errors during manufacturing, fixing, and deploying of SAR antenna, therefore, the consistency error of the SAR antenna phase center corresponding to the same beam occurs. It is mainly caused by two factors:(1)The inconsistency between receiver channels, which is introduced by manufacturing process, such as the instrument difference, machining art level, module assembling level and the work temperature difference, et al.(2)The inconsistency between the locations of apertures, which is mainly caused by the fixing flatness difference, relative dislocation difference, the deployment inconsistency of SAR antennas and the configuration distortions caused by different thermal circumstances, and others. According to current ability of engineering, the phase inconsistency between T/R modules at X-band can be constrained to and the inconsistency between the locations of apertures can be constrained to [23] that equals to of phase inconsistency. Assuming that the number of T/R modules of an SAR antenna is , the synthetic phase consistency error can be constrained to . Hence, the consistency error of two SAR antenna phase center locations can be constrained to Take the TanDEM-X mission, for example. Setting , m, the consistency error of SAR antenna phase center location can be constrained to 0.25mm (). 4. Simulations for InSAR Spatial Baseline Determination 4.1. Simulation Settings The HELIX satellite formation is selected for the simulations and the orbit elements of two satellites are shown in Table 1. The spaceborne SAR is assumed to work at X-band with a wavelength of 0.032m and consist of 384 T/R modules. The entire simulation consists of GPS measurement simulation and baseline transformation simulation. The flow chart of GPS observation data simulation is shown in Figure 5. The International Reference Ionosphere 2007 (IRI2007) model is used to simulate ionospheric delay, Allan variation is used to simulate the clock offset of GPS receiver, and the ARP data, PCO data [18] and PCVs data of GPS receiver antenna system SEN67-1575-14+CRG are used to simulate the GPS receiver antenna phase center locations. The PCVs data contains the mean values and RMS values corresponding to frequency, azimuth, and elevation of received signal. The attitude data of formation satellite is generated as follows: at first, a transformation matrix from CIRF to satellite orbit coordinate system is obtained from orbit data of a formation satellite in CIRF; second, assuming the real Euler angles are 0°, that is, satellite orbit coordinate system and satellite body coordinate system are the same, the simulating data of Euler angles are generated by attitude measurement error model list in Table 2; third, the transformation matrix from satellite orbit coordinate system to satellite body coordinate system can be obtained by the simulating data of Euler angles; at last, the attitude quaternion is obtained by the transformation matrix from CIRF to satellite body coordinate system. Baseline transformation simulation is the process that the spatial baseline in ITRF is obtained by mass center data of formation satellites in ITRF, attitude simulation data, and SAR antenna phase center simulation locations in satellite body coordinate system. The real SAR antenna phase center simulation location in satellite body coordinate system is (1.2278m, 1.5876m, 0.0223m). The error accuracies and models in the simulations are shown in Table 2. 4.2. Simulations of Errors Related to GPS Measurement Each error related to GPS measurement is analyzed by single factor simulation, which is intended to obtain its impact on relative positioning based on dual-frequency GPS. The impact of each error is drawn by the comparison residuals between the relative position solutions determined by GPS observation data and relative positions obtained by standard orbits of formation satellites. The relative position solutions are implemented in the separate software tools as part of the NUDT Orbit Determination Software 1.0. The GPS observation data processing consists of GPS observation data preprocessing [24], reduced dynamic precise orbit determination for single satellite [25], GPS observation data editing [17, 24], and reduced dynamic precise relative positioning. The RMS values of KBR comparison residuals of GRACE relative position solution are about 1-2mm implemented by this software. 4.2.1. Simulations for GPS Carrier Phase Measurement Noise The noises of GPS carrier phase (L1 and L2) measurements are separately simulated by second-order autoregressive model (AR(2)) as follows where is the noise of carrier phase measurement for GPS satellite at epoch , is the Gaussian white noise. From the following formula where denotes the standard deviation of a random variable, we can get One instance of carrier phase noise simulation is shown in Figure 6. 50 groups of 24h GPS observation data (interval of 30s) for two formation satellites are simulated by only adding the noises of GPS carrier phase (L1 and L2) measurements. By the processing of relative positioning, the mean RMS values of comparison residuals of relative position solutions in ITRF (Figure 7) are 0.340mm of -axis, 0.333mm of -axis, 0.288mm of -axis, and 0.560mm of 3 dimensions. It is shown by simulation results that the GPS carrier phase noise can be well smoothed by reduced dynamic relative positioning approach. 4.2.2. Simulations for Ground Calibration Error of GPS Receiver Antenna Phase Center Ground calibration error of GPS receiver antenna phase center is mainly caused by PCVs. The PCVs values are described by the mean value and RMS value corresponding to the direction of received signal. The PCV value corresponding to the direction of received signal is simulated by Gaussian white noise with mean value and RMS value obtained from ground calibration result of GPS receiver antenna system SEN67-1575-14+CRG. The GPS observation data are simulated only considered ground calibration error of GPS receiver antenna phase center. By the precise orbit determination for single satellite, the mean RMS values of comparison residuals of orbit solutions in ITRF are 4.018mm of -axis, 4.154mm of -axis, 2.427mm of -axis, and 6.269mm of 3 dimensions. The impacts of PCVs on single satellite orbit solutions are mainly made by the mean value part of PCVs, while the impacts of RMS part in ITRF are only 0.119mm of -axis, 0.094mm of -axis, 0.116mm of -axis, and 0.191mm of 3 dimensions, and the RMS part of PCVs can nearly be smoothed. By the processing of relative positioning, the mean RMS values of comparison residuals of relative position solutions in ITRF are 0.067mm of -axis, 0.070mm of -axis, 0.056mm of -axis, and 0.112mm of 3-dimensions. As the nearly equal models of ground calibration errors of GPS receiver antenna phase centers for two formation satellites are selected and the LOS vectors are nearly the same for close satellite formation, the impacts of mean value part of PCVs can nearly be cancelled out by differential GPS observation and impacts of RMS part can be well smoothed by the constraints of orbit dynamical models. It is shown by the results of single satellite orbit solutions and relative position solutions that the characters of GPS receiver antenna phase centers onboard two formation satellites must have great consistency. 4.2.3. Simulations of Satellite Attitude Measurement Error for GPS Relative Positioning The Euler angle errors are simulated by Gaussian white noise with 0.005° of mean value and 0.003° of standard deviation, and 50 groups of 24h GPS observation data for two formation satellites are simulated by only adding the attitude measurement errors. The mean RMS values of comparison residuals of relative position solutions in ITRF (Figure 8) are 0.069mm of -axis, 0.075mm of -axis, 0.081mm of -axis, and 0.128mm of 3 dimensions. The 3 dimensional maximum of comparison residuals in these 50 simulations is 0.219mm, which is less than 0.47mm and is well consistent with aforementioned analysis in Section 3.1.3. 4.2.4. Simulations for Fixing Error of GPS Receiver Antenna The fixing error of GPS receiver antenna belongs to systematic error and it is a fixed bias vector in satellite body coordinate system. At first, four representatively “extreme” circumstances of fixing errors of GPS receiver antennas onboard two formation satellites are simulated. The so-called “extreme” circumstance is that the directions of two fixed bias vectors are opposite. Four representatively “extreme” circumstances of fixing errors of GPS receiver antennas here are directions along -axis, -axis, -axis, and diagonal of -axis, -axis, -axis in satellite body coordinate system, respectively. All the magnitudes of fixed bias vectors are selected 0.5mm. 24h GPS observation data for two formation satellites are simulated by only considering the four representatively “extreme” circumstances of fixing errors of GPS receiver antennas. The results of relative positioning are shown in Table 3. From Table 3, it is shown that the fixing errors of GPS receiver antenna along -axis and -axis will mainly be absorbed by relative position solutions and the impact can reach to 1mm, but the error along -axis can be smoothed by the constraints of orbit dynamical models. In practice, the occurrence of “extreme” circumstances is extremely low and they are just analyzed as the ultimate circumstances. For multiple repeated satellite missions, the fixing error of GPS receiver antenna is a random error. So this error can be simulated as a fixed vector with direction randomly drawn from unit ball and magnitude of 0.5mm in each satellite body coordinate system. 50 groups of 24h GPS observation data for two formation satellites are simulated by only adding the simulations of fixing error of GPS receiver antenna. The mean RMS values of comparison residuals of relative position solutions in ITRF (Figure 9) are 0.295mm of -axis, 0.294mm of -axis, 0.249mm of -axis, and 0.495mm of 3 dimensions. From aforementioned simulations of each error related to GPS measurement, it is shown that the impacts of GPS carrier phase measurement noise and fixing error of GPS receiver antenna on GPS relative positioning are much bigger than other errors related to GPS measurement and these two errors are the main factors of errors related to GPS measurement. 4.3. Simulations of Errors Related to Baseline Transformation In this section, the impact of each error on baseline transformation is obtained by single factor simulation. Each impact is given by the comparison between the spatial baseline solutions obtained with and without errors. 4.3.1. Simulations of Satellite Attitude Measurement Error for Baseline Transformation The satellite attitude simulation data used here are the same as Section 4.2.3. By baseline transformation with attitude simulation data, the mean RMS values of comparison residuals of spatial baseline solutions in ITRF (Figure 10) are 0.115mm of -axis, 0.115mm of -axis, 0.133mm of -axis, and 0.210mm of 3 dimensions. The 3 dimensional maximum of comparison residuals in these 50 simulations is 0.213mm, which is less than 0.50mm and is consistent with aforementioned analysis in Section 3.2.1. 4.3.2. Simulations for Consistency Error of SAR Antenna Phase Center It is shown by the analysis in Section 3.2.2 that the accuracy of consistency error of SAR antenna phase center is better than 0.25mm (3σ) in current simulation circumstances. This error is only added to the SAR antenna phase center of Satellite 1 and can be simulated as a fixed vector with direction randomly drawn from unit ball and magnitude of 0.25mm in body coordinate system of satellite 1. By 50 groups of simulations, the mean RMS values of comparison residuals of spatial baseline solutions in ITRF (Figure 11) are 0.142mm of -axis, 0.142mm of -axis, and 0.153mm of 4.4. Simulations of Total Error Sources In this section, all the errors are added to the flow of spatial baseline determination simulations according to the error models listed in Table 2. By 50 groups of total error sources simulations, the mean RMS values of comparison residuals of spatial baseline solutions in ITRF (Figure 12) are 0.500mm of -axis, 0.500mm of -axis, 0.452mm of -axis, and 0.845mm of 3 dimensions. In addition, the impact of total errors related to GPS measurement on GPS relative positioning in ITRF (Figure 13) is 0.454mm of -axis, 0.452mm of -axis, 0.388mm of -axis, and 0.755mm of 3 dimensions, and the impact of total errors related to baseline transformation in ITRF (Figure 14) is 0.185mm of -axis, 0.185mm of -axis, 0.206mm of -axis, and 0.334mm of 3 dimensions. It is shown by the simulations of total error sources that errors related to GPS measurement are the main error sources for the spatial baseline determination and 1mm level InSAR spatial baseline determination can be realized according to current simulation conditions. 5. Conclusions In this paper, the errors introduced by spatial baseline measurement for InSAR mission are deeply studied. The impacts of errors on spatial baseline determination are analyzed by single factor simulations and total error sources simulations. The main conclusions are drawn as follows.(1)The spatial baseline measurement errors can be classified into two groups: errors related to GPS measurement and errors related to baseline transformation. By simulations, the three-dimensional impacts of these errors on spatial baseline determination in ITRF are 0.755mm and 0.334mm, respectively. It is shown that the errors related to GPS measurement are the main influence on spatial baseline determination.(2)By the results of single factor simulations, the three dimensional impacts of GPS carrier phase measurement noise and the fixing error of GPS receiver antenna on GPS relative positioning in ITRF are 0.560mm and 0.495mm, respectively. These two errors are the main factors of errors related to GPS measurement.(3)It is shown by total error sources simulations that the impact of all the errors on spatial baseline determination in ITRF is 0.500mm of -axis, 0.500mm of -axis, 0.452mm of -axis, and 0.845mm of 3 dimensions. Therefore, 1mm level InSAR spatial baseline determination can be realized. The mean antenna phase center description for the Sensor Systems SEN67157514 antenna has been contributed by the German Space Operations Center (GSOC), Deutsches Zentrum für Luft- und Raumfahrt (DLR), Wessling, to enable the simulation of antenna phase center data of dual-frequency GPS receiver. Precise GPS ephemerides for use within this study have been obtained from the Center for Orbit Determination in Europe at the Astronomical Institute of the University of Bern (AIUB). The authors extend special thanks to the support of the above institutions. This paper is supported by the National Natural Science Foundation of China (Grant no. 61002033 and no. 60902089) and Open Research Fund of State Key Laboratory of Astronautic Dynamics of China (Grant no. 2011ADL-DW0103). 1. G. Krieger, I. Hajnsek, K. P. Papathanassiou, M. Younis, and A. Moreira, “Interferometric synthetic aperture radar (SAR) missions employing formation flying,” Proceedings of the IEEE, vol. 98, no. 5, pp. 816–843, 2010. View at Publisher · View at Google Scholar · View at Scopus 2. M. L. Jiao, “A review on latest Interferometric Synthetic Aperture Radar researches,” in WRI World Congress on Software Engineering (WCSE '09), pp. 387–390, May 2009. View at Publisher · View at Google Scholar · View at Scopus 3. W. Wang, “Optimal baseline design and error compensation for bistatic spaceborne InSAR,” in Proceedings of Fringe Workshop, November-December 2005. 4. G. Krieger, A. Moreira, H. Fiedler et al., “TanDEM-X: a satellite formation for high-resolution SAR interferometry,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 11, pp. 3317–3340, 2007. View at Publisher · View at Google Scholar · View at Scopus 5. M. Zink, H. Fiedler, I. Hajnsek, G. Krieger, A. Moreira, and M. Werner, “The TanDEM-X mission concept,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS '06), pp. 1938–1941, August 2006. View at Publisher · View at Google Scholar · View at Scopus 6. R. Werninghaus and S. Buckreuss, “The TerraSAR-X mission and system design,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 2, pp. 606–614, 2010. View at Publisher · View at Google Scholar · View at Scopus 7. M. Wermuth, O. Montenbruck, and A. Wendleder, “Relative navigation for the TanDEM-X mission and evaluation with DEM calibration results,” in the 22nd International Symposium on Space Flight Dynamics, Sao Jose dos Campos, Brazil, 2011. 8. H. Xu, Y. Zhou, and C. Li, “Analysis and simulation of spaceborne SAR interferometric baseline,” in Proceedings of the CIE International Conference on Radar, pp. 639–643, Beijing, China, October 2001. View at Scopus 9. R. Kroes, O. Montenbruck, W. Bertiger, and P. Visser, “Precise GRACE baseline determination using GPS,” GPS Solutions, vol. 9, no. 1, pp. 21–31, 2005. View at Publisher · View at Google Scholar · View at Scopus 10. O. Montenbruck, P. W. L. van Barneveld, Y. Yoon, and P. N. A. M. Visser, “GPS-based precision baseline reconstruction for the TanDEM-X SAR-formation,” in the 20th International Symposium on Space Flight Dynamics, pp. 24–28, 2007. 11. S. D’Amico and O. Montenbruck, “Differential GPS: an enabling technology for formation flying satellites,” in the 7th IAA Symposium on Small Satellites for Earth Observation, pp. 457–464, 2009. 12. O. Montenbruck and E. Gill, Satellite Orbits: Models, Methods and Applications, Springer, Heidelberg, Germany, 2000. 13. P. W. Binning, Absolute and relative satellite to satellite navigation using GPS, Ph.D. dissertation, University of Colorado, 1997. 14. P. J. G. Teunissen, “The least-squares ambiguity decorrelation adjustment: a method for fast GPS integer ambiguity estimation,” Journal of Geodesy, vol. 70, no. 1-2, pp. 65–82, 1995. View at Publisher · View at Google Scholar · View at Scopus 15. P. J. G. Teunissen, P. J. De Jonge, and C. C. J. M. Tiberius, “The least-squares ambiguity decorrelation adjustment: its performance on short GPS baselines and short observation spans,” Journal of Geodesy, vol. 71, no. 10, pp. 589–602, 1997. View at Scopus 16. J. Kouba, A guide using International GPS Service (IGS) products, Jet Propulsion Laboratory, Pasadena, Calif, USA, 2009. 17. R. Kroes, Precise relative positioning of formation flying spacecraft using GPS, Ph.D. dissertation, Delft University of Technology, The Netherlands, 2006. 18. O. Montenbruck, M. Garcia-Fernandez, Y. Yoon, S. Schön, and A. Jäggi, “Antenna phase center calibration for precise positioning of LEO satellites,” GPS Solutions, vol. 13, no. 1, pp. 23–34, 2009. View at Publisher · View at Google Scholar · View at Scopus 19. A. Jäggi, R. Dach, O. Montenbruck, U. Hugentobler, H. Bock, and G. Beutler, “Phase center modeling for LEO GPS receiver antennas and its impact on precise orbit determination,” Journal of Geodesy , vol. 83, no. 12, pp. 1145–1162, 2009. View at Publisher · View at Google Scholar · View at Scopus 20. M. Garcia and O. Montenbruck, “TerraSAR-X/TanDEM-X GPS antenna phase center analysis and results,” German Space Operations Center, Germany, 2007. 21. J. H. Gonzalez, M. Bachmann, G. Krieger, and H. Fiedler, “Development of the TanDEM-X calibration concept: analysis of systematic errors,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 2, pp. 716–726, 2010. View at Publisher · View at Google Scholar · View at Scopus 22. D. D. McCarthy, “IERS conventions,” IERS Technical Note 21, Observatoire de Paris, Paris, France, pp. 20–39, 1996. 23. W. T. Wang and Y. H. Qi, “A new technique to compensate for error in SAR antenna power pattern,” Chinese Space Science and Technology, vol. 17, no. 3, pp. 65–70, 1997. 24. D. F. Gu, The spatial states measurement and estimation of distributed InSAR satellite system, Ph.D. dissertation, National University of Defense Technology, China, 2009. 25. O. Montenbruck, T. Van Helleputte, R. Kroes, and E. Gill, “Reduced dynamic orbit determination using GPS code and carrier measurements,” Aerospace Science and Technology, vol. 9, no. 3, pp. 261–271, 2005. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/mpe/2012/140301/","timestamp":"2014-04-20T00:57:02Z","content_type":null,"content_length":"342477","record_id":"<urn:uuid:d9a9daf2-a69d-4982-b766-70f1c7d23225>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
CGTalk - How to figure out points that are close to each other in an arbitrary set of points 04-28-2005, 01:55 AM Ok, I'm sure I'm not the first with this idea... I'm working on a plug-in to align points that are "close enough" to each other points from an arbitrary set of points. For example, you're working on a model that consists of multiple mesh objects and you want to make sure that the edges of the various meshes line up exactly. You select the points on the matching edges, and if they're within a given tolerence the postion of those two points will be averaged. In order to avoid having to compare the location of every point to every other point in the set I want to group them so I only have to compare the locations of those that are close to each other. My cunning plan is to create a sparse matrix and the location of each point will be specified by the original coordinate divided by the tolerence. And then traverse through the sparse matrix looking for clusters of points that might be close enough and then average them. Does this make sense?
{"url":"http://forums.cgsociety.org/archive/index.php/t-234950.html","timestamp":"2014-04-19T10:06:19Z","content_type":null,"content_length":"6406","record_id":"<urn:uuid:a38f971a-6434-4fd6-8e99-19b2a66d45f9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
The pion's electromagnetic form factor at small momentum transfer in full lattice QCD Boyle, P.A., Flynn, J.M., Jüttner, A., Kelly, C., de Lima, H.Pedroso, Maynard, C.M., Sachrajda, C.T. and Zanotti, J.M. (2008) The pion's electromagnetic form factor at small momentum transfer in full lattice QCD. Journal of High Energy Physics, 2008, (7), 112-[21pp]. (doi:10.1088/1126-6708/2008/07/112). Full text not available from this repository. We compute the electromagnetic form factor of a ``pion" with mass mπ = 330 MeV at low values of Q2≡−q2, where q is the momentum transfer. The computations are performed in a lattice simulation using an ensemble of the RBC/UKQCD collaboration's gauge configurations with Domain Wall Fermions and the Iwasaki gauge action with an inverse lattice spacing of 1.73(3) GeV. In order to be able to reach low momentum transfers we use partially twisted boundary conditions using the techniques we have developed and tested earlier. For the pion of mass 330 MeV we find a charge radius given by rπ2330 MeV = 0.354(31) fm2 which, using NLO SU(2) chiral perturbation theory, translates to a value of rπ2 = 0.418(31) fm2 for a physical pion, in agreement with the experimentally determined result. We confirm that there is a significant reduction in computational cost when using propagators computed from a single time-slice stochastic source compared to using those with a point source; for mπ = 330 MeV and volume (2.74 fm)3 we find the reduction is approximately a factor of 12 Actions (login required)
{"url":"http://eprints.soton.ac.uk/143313/","timestamp":"2014-04-17T06:52:43Z","content_type":null,"content_length":"30539","record_id":"<urn:uuid:1c2df76c-aa18-4b52-ae79-c3365773230a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Molar Absorptivity I am having difficulty with these problems and am unable to figure it out. help would be appreciated. A yellow dye, FD&C yellow 3 is used in candy coatings. A 1.50 x 10^-5 M solution of this dye has absorbance of 0.209 lambda max. Calc. molar absorptivity,epsilon (e), of the dye at this wavelength, assuming sample cell of 1.0cm pathlength. The yellow dye from one piece of candy is extracted into 10 mL of water and diluted to 50 mL with water. The absorbance of the diluted solution is 0.496 at its lambda max. Calc. the concentration of the dye in the diluted solution. Calc the number of grams of this dye in the coating of one piece of candy (MW of dye = 271 g/mol). *i kind of understand the concept of these questions but im having difficulty solving it out. Tue, 2008-03-04 19:14 Is there equations in your book to calculate molar absorptivity, epsilon (e) and also what unit is lambda max, is there a somekind of conversion factors related to it? Chemistry Tutor Tue, 2008-03-04 19:28 just use Beer's law A = e bc where A is absorbance e is molar abosorbtivity b is path length and c is concentration
{"url":"http://www.mychemistrytutor.com/questions/molar-absorptivity","timestamp":"2014-04-19T02:11:16Z","content_type":null,"content_length":"18082","record_id":"<urn:uuid:ff3b54aa-a073-4a87-b427-c434295c0759>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] simplification problem October 24th 2009, 09:32 AM [SOLVED] simplification problem I ran into this issue with a trig identity, but the part I'm curious about is really just algebra, so I hope this is the right place for it. At any rate, after doing the appropriate calculations I ended up with a formula of the book, however, lists the answer as The two are equivalent, I'm just curious as to the mechanism of converting from one to the other. Can someone explain it to me, if possible? October 24th 2009, 09:43 AM I ran into this issue with a trig identity, but the part I'm curious about is really just algebra, so I hope this is the right place for it. At any rate, after doing the appropriate calculations I ended up with a formula of the book, however, lists the answer as The two are equivalent, I'm just curious as to the mechanism of converting from one to the other. Can someone explain it to me, if possible? $-\sqrt{3 - 2\sqrt{2}}$ $-\sqrt{2 - 2\sqrt{2} + 1}$ $-\sqrt{(\sqrt{2} - 1)^2}$ $1 - \sqrt{2}$ October 24th 2009, 09:50 AM ah, it's just a matter of factoring it. Thank you, totally missed that.
{"url":"http://mathhelpforum.com/algebra/110121-solved-simplification-problem-print.html","timestamp":"2014-04-21T07:23:49Z","content_type":null,"content_length":"6722","record_id":"<urn:uuid:b4555687-bcb4-41d9-bd87-19337a81a913>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Blow-up Theory for Elliptic PDEs in Riemannian Geometry (MN-45): More About This Book Elliptic equations of critical Sobolev growth have been the target of investigation for decades because they have proved to be of great importance in analysis, geometry, and physics. The equations studied here are of the well-known Yamabe type. They involve Schrödinger operators on the left hand side and a critical nonlinearity on the right hand side. A significant development in the study of such equations occurred in the 1980s. It was discovered that the sequence splits into a solution of the limit equation--a finite sum of bubbles--and a rest that converges strongly to zero in the Sobolev space consisting of square integrable functions whose gradient is also square integrable. This splitting is known as the integral theory for blow-up. In this book, the authors develop the pointwise theory for blow-up. They introduce new ideas and methods that lead to sharp pointwise estimates. These estimates have important applications when dealing with sharp constant problems (a case where the energy is minimal) and compactness results (a case where the energy is arbitrarily large). The authors carefully and thoroughly describe pointwise behavior when the energy is arbitrary. Intended to be as self-contained as possible, this accessible book will interest graduate students and researchers in a range of mathematical fields. Read More Show Less What People Are Saying Elliptic equations of critical Sobolev growth have been the target of investigation for decades because they have proved to be of great importance in analysis, geometry, and physics. The equations studied here are of the well-known Yamabe type. They involve Schrödinger operators on the left hand side and a critical nonlinearity on the right hand side. A significant development in the study of such equations occurred in the 1980s. It was discovered that the sequence splits into a... See more details below William Beckner This is an important and original work. It develops critical new ideas and methods for the analysis of elliptic PDEs on compact manifolds, especially in the framework of the Yamabe equation, critical Sobolev embedding, and blow-up techniques. This volume will have an important influence on current research. — William Beckner, University of Texas at Austin Read More Show Less Product Details • ISBN-13: 9781400826162 • Publisher: Princeton University Press • Publication date: 1/10/2009 • Series: Mathematical Notes • Sold by: Barnes & Noble • Format: eBook • Edition description: Course Book • Pages: 224 • File size: 16 MB • Note: This product may take a few minutes to download. Read an Excerpt Blow-up Theory for Elliptic PDEs in Riemannian Geometry (MN-45) By Olivier Druet Emmanuel Hebey Frédéric Robert Princeton University Press Copyright © 2004 Princeton University Press All right reserved. ISBN: 978-0-691-11953-3 Chapter One Background Material We recall in this chapter basic facts concerning Riemannian geometry and nonlinear analysis on manifolds. For reasons of length, we are obliged to be succinct and partial. Possible references are Chavel, do Carmo, Gallot-Hulin-Lafontaine, Hebey, Jost, Kobayashi-Nomizu, Sakai, and Spivak. As a general remark, we mention that Einstein's summation convention is adopted: an index occurring twice in a product is to be summed. This also holds for the rest of this book. 1.1 RIEMANNIAN GEOMETRY We start with a few notions in differential geometry. Let M be a Hausdorff topological space. We say that M is a topological manifold of dimension n if each point of M possesses an open neighborhood that is homeomorphic to some open subset of the Euclidean space [[??].sup.n]. A chart of M is then a couple ([OMEGA], [phi]) where [OMEGA] is an open subset of M, and [phi] is a homeomorphism of [OMEGA] onto some open subset of [[??].sup.n]. For y [member of] [OMEGA], the coordinates of [phi](y) in [[??].sup.n] are said to be the coordinates of y in ([OMEGA], [phi]). Anatlas of M is a collection of charts [([[OMEGA].sub.i], [[phi].sub.i]), i [member of] I, such that M = [U.sub.i]member of]I] [[OMEGA].sub.i]. Given an atlas [([[OMEGA].sub.i], [[phi].sub.i]).sub.i]member of]I, the transition functions are with the obvious convention that we consider [[phi].sub.j] [omicron] [[phi].sup.-1.sub.i] if and only if [[OMEGA].sub.i] [intersection] [[OMEGA].sub.j] [not equal to] 0. The atlas is then said to be of class [C.sup.k] if the transition functions are of class [C.sup.k], and it is said to be [C.sup.]k-complete if it is not contained in a (strictly) larger atlas of class [C.sup.k]. As one can easily check, every atlas of class [C.sup.k] is contained in a unique [C.sup.k]-complete atlas. For our purpose, we will always assume in what follows that k = + [infinity] and that M is connected. One then gets the following definition of a smooth manifold: A smooth manifold M of dimension n is a connected topological manifold M of dimension n together with a [ITLITL.sup.[infinity]]-complete atlas. Classical examples of smooth manifolds are the Euclidean space [[??].sup.n] itself, the torus [T.sup.n], the unit sphere [S.sup.n] of [[??].sup.n+1], and the real projective space [[??].sup.n] Given two smooth manifolds, M and N, and a smooth map f : M [right arrow] N from M to N, we say that f is differentiable (or of class [C.sup.k]) if for any charts ([OMEGA], [phi]) and ([??], [??]) of M and N such that f([OMEGA]) [subset] [??], the map is differentiable (or of class [C.sup.k]). In particular, this allows us to define the notion of diffeomorphism and the notion of diffeomorphic manifolds. We refer to the above definition of a manifold as the abstract definition of a smooth manifold. As a surface gives the idea of a two-dimensional manifold, a more concrete approach would have been to define manifolds as submanifolds of Euclidean space. According to a well-known result of Whitney, any paracompact (abstract) manifold of dimension n can be seen as a submanifold of some Euclidean Let us now say some words about the tangent space of a manifold. Given M a smooth manifold and x [member of] M, let [F.sub.x] be the vector space of functions f : M [right arrow] [??] which are differentiable at x. For f [member of] [F.sub.x], we say that f is flat at x if for some chart ([OMEGA], [phi]) of M at x, D[(f [omicron] [[phi].sup.-1]).sub.[??]](x)] = 0. Let [N.sub.x] be the vector space of such functions. A linear form X on [F.sub.x] is then said to be a tangent vector of M at x if [N.sub.x] [subset] KerX. We let [T.sub.x](M) be the vector space of such tangent vectors. Given ([OMEGA], [phi]) some chart at x, of associated coordinates [x.sup.i], we define [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] by, for any f [member of] F.sub.x, As a simple remark, one gets that the [([partial derivative]/[partial derivative][x.sub.i]).sub.x]'s form a basis of [T.sub.x](M). Now, one defines the tangent bundle of M as the disjoint union of the [T.sub.x](M)'s, x [member of] M. If M is n-dimensional, one can show that T(M) possesses a natural structure of a 2n-dimensional smooth manifold. Given a chart ([OMEGA], [??]) of M, is a chart of T(M), where for X [member of] [T.sub.x](M), x [member of] [OMEGA], [the coordinates of x in ([OMEGA], [phi]) and the components of X in ([OMEGA], [phi]), that is, the coordinates of X in the basis of [T.sub.x](M) associated with ([OMEGA], [phi]) by the process described above]. By definition, a vector field on M is a map X : M [right arrow] T(M) such that for any x [member of] M, X(x) [member of] [T.sub.x](M). Since M and T(M) are smooth manifolds, the notion of a vector field of class [C.sup.k] makes sense. A manifold M of dimension n is said to be parallelizable if there exist n smooth vector fields [X.sub.i], i = 1,..., n, such that for any x [member of] M, the [X.sub.i](x)'s, i = 1,..., n, define a basis of [T.sub.x](M). Given two smooth manifolds, M and N, a point x in M, and a differentiable map f : M [right arrow] N at x, the tangent linear map of f at x (or the differential map of f at x), denoted by [f.sub.*] (x ), is the linear map from [T.sub.x](M) to [T.sub.f(x)](N) defined, for X [member of] [T.sub.x](M) and g : N [right arrow] [??] differentiable at f(x), by ([f.sub.*](x) · (X) · (g) = X(g [omicron] f). By extension, if f is differentiable on M, one gets the tangent linear map of f, denoted by [f.sub.*]. That is the map [f.sub.*] : T(M) [right arrow] T(N) defined, for X [member of] [T.sub.x](M), by [f.sub.*](X) = [f.sub.*](x).(X). As one can easily check, [f.sub.*] is [C.sup.k-1] if f is [C.sup.k]. Similar to the construction of the tangent bundle, one can define the cotangent bundle of a smooth manifold M as the disjoint union of the [T.sub.x][(M).sup.*]'s, x [member of] M. In a more general way, one can define [T.sup.q.sub.p](M) as the disjoint union of the [T.sup.q.sub.p]([T.sub.x] (M))'s, where [T.sup.q.sub.p]([T.sub.x](M)) is the space of (p, q)-tensors on [T.sub.x](M). Then [T.sup.q.sub.p](M) possesses a natural structure of a smooth manifold of dimension n(1 + [n.sup.p+q -1]). A map T : M [right arrow] T.sup.q.sub.p](M) is then said to be a (p, q)-tensor field on M if for any x [member of] M, T(x) [member of] [T.sup.q.sub.p]([T.sub.x](M)). It is said to be of class [C.sup.k] if it is of class [C.sup.k] from the manifold M to the manifold [T.sup.q.sub.p](M). Given two manifolds M and N, a map f : M [right arrow] N of class [C.sup.k+1], and a (p, 0)-tensor field T of class [C.sup.k] on N, one can define the pullback [f.sup.*]T of T by f, that is, the (p, 0)-tensor field of class [C.sup.k] on M defined for x [member of] M and [X.sub.1],..., [X.sub.p] [member of] [T.sub.x](M), by We now define the notion of a linear connection. Denote by [GAMMA](M) the space of differentiable vector fields on M. A linear connection D on M is a map D : T(M) × [GAMMA](M) [right arrow] T(M) which satisfies a certain number of propositions. In local coordinates, given a chart ([OMEGA], [??]), this is equivalent to having [n.sup.3] smooth functions [[GAMMA].sup.k.sub.ij] : [OMEGA] [right arrow] [??], that we refer to as the Christoffel symbols of the connection in ([OMEGA], [phi]). They characterize the connection in the sense that for X [member of] [T.sub.x](M), x [member of] [OMEGA], and Y [member of] [GAMMA](M), where the [X.sup.i]'s and [Y.sup.i]'s denote the components of X and Y in the chart ([OMEGA], [phi]), and for f : M [right arrow] [??] differentiable at x, As one can easily check, the [[GAMMA].sup.k.sub.ij]'s are not the components of a (2, 1)-tensor field. An important remark is that linear connections have natural extensions to differentiable tensor fields. Given a differentiable (p, q)-tensor field, T, a point x in M, X [member of] [T.sub.x](M), and a chart ([OMEGA], [phi]) of M at x, [D.sub.X](T) is the (p, q)-tensor on [T.sub.x](M) defined by [D.sub.X](T) = [X.sup.i]([[nabla].sub.I]T)(x), where The covariant derivative commutes with the contraction in the sense that where [C.sup.[k.sub.2].sub.[k.sub.1]] T stands for the contraction of T of order ([k.sub.1], [k.sub.2]). Given a (p, q)-tensor field of class [C.sup.k+1], T, we let [nabla]T be the (p + 1, q)-tensor field of class [C.sup.k] whose components in a chart are given by By extension, one can then define [[nabla].sup.2]T, [[nabla].sup.3]T, and so on. For f : M [right arrow] [??] a smooth function, one has that [nabla] f = df and, in any chart ([OMEGA], [phi]) of M, In the Riemannian context, [[nabla].sup.2] f is called the Hessian of f and is sometimes denoted by Hess(f). The torsion T of a linear connection D can be seen as the smooth (2, 1)-tensor field on M whose components in any chart are given by the relation [T.sup.k.sub.ij] = [[GAMMA].sup.k.sub.ij]- [[GAMMA].sup.k.wub.ji]. One says that the connection is torsion-free if T [equivalent to] 0. The curvature R of D can be seen as the smooth (3, 1)-tensor field on M whose components in any chart are given by the relation As one can easily check, R.sup.l.sub.ijk] = -[R.sup.l.sub.ikj]. Moreover, when the connection is torsion-free, one has that Such relations are referred to as the first Bianchi and second Bianchi identities. We now discuss Riemannian geometry. Let M be a smooth manifold. A Riemannian metric g on M is a smooth (2, 0)-tensor field on M such that for any x [member of] M, g(x) is a scalar product on [T.sub.x](M). A smooth Riemannian manifold is a pair (M, g) where M is a smooth manifold and g a Riemannian metric on M. According to Whitney, for any paracompact smooth n-manifold there exists a smooth embedding f : M [right arrow] [[??].sup.2n+1]. One then gets that any smooth paracompact manifold possesses a Riemannian metric. Just think of g = [f.sup.*][xi], where [xi] is the Euclidean metric. Two Riemannian manifolds ([M.sub.1], [g.sub.1]) and ([M.sub.2], [g.sub.2]) are said to be isometric if there exists a diffeomorphism f : [M.sub.1] [right arrow] [M.sub.2] such that [f.sup.*][ g.sub.2] = [g.sub.1]. Given a smooth Riemannian manifold (M, g), and [gamma] : [a, b] [right arrow] M a curve of class [ITLITL.sup.1], the length of [gamma] is where [(d]gamma]/dt).sub.t] [member of] [T.sub.[gamma](t)](M) is such that [(d]gamma]/dt).sub.t] · f = (f [omicron] [gamma])'(t) for any f : M [right arrow] [??] differentiable at [gamma](t). If [gamma] is piecewise [ITLITL.sup.1], the length of [gamma] may be defined as the sum of the lengths of its [ITLITL.sup.1] pieces. For x and y in M, let [C.sub.xy] be the space of piecewise [ITLITL.sub.1] curves [gamma] : [a, b] [right arrow] M such that [gamma](a) = x and [gamma](b) = y. Then defines a distance on M whose topology coincides with the original one of M. In particular, by Stone's theorem, a smooth Riemannian manifold is paracompact. By definition, [d.sub.g] is the distance associated with g. Let (M, g) be a smooth Riemannian manifold. There exists a unique torsion-free connection on M having the property that [nabla]g = 0. Such a connection is the Levi-Civita connection of g. In any chart ([OMEGA], [??]) of M, of associated coordinates [x.sup.i], and for any x [member of] [OMEGA], its Christoffel symbols are given by the relations where the [g.sup.ij]'s are such that [g.sub.im][g.sup.mj] = [[delta].sup.j.sub.i]]. Let R be the curvature of the Levi-Civita connection as introduced above. One defines 1. the Riemann curvature [R.sub.mg] of g as the smooth (4, 0)-tensor field on M whose components in a chart are [R.sub.ijkl] = [g.sub.i[alpha]][R.sup.[alpha].sub.jkl, 2. the Ricci curvature R[c.sub.g] of g as the smooth (2, 0)-tensor field on M whose components in a chart are [R.sub.ij] = [R.sub.[alpha]I[beta]j][g.sup.[alpha][beta]], and 3. the scalar curvature [S.sub.g] of g as the smooth real-valued function on M whose expression in a chart is [S.sub.g] = [R.sub.ij][g.sup.ij]. As one can check, in any chart, and the two Bianchi identities are In particular, the Ricci curvature is symmetric, so that in any chart [R.sub.ij] = [R.sub.ji]. Given a smooth Riemannian manifold (M, g), and its Levi-Civita connection D, a smooth curve [gamma] : [a, b] [right arrow] M is said to be a geodesic, if for all t, This means again that in any chart, and for all k, Excerpted from Blow-up Theory for Elliptic PDEs in Riemannian Geometry (MN-45) by Olivier Druet Emmanuel Hebey Frédéric Robert Copyright © 2004 by Princeton University Press. Excerpted by All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher. Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site. Read More Show Less Table of Contents Preface vii Chapter 1. Background Material 1 1.1 Riemannian Geometry 1 1.2 Basics in Nonlinear Analysis 7 Chapter 2. The Model Equations 13 2.1 Palais-Smale Sequences 14 2.2 Strong Solutions of Minimal Energy 17 2.3 Strong Solutions of High Energies 19 2.4 The Case of the Sphere 23 Chapter 3. Blow-up Theory in Sobolev Spaces 25 3.1 The H 2/1-Decomposition for Palais-Smale Sequences 26 3.2 Subtracting a Bubble and Nonnegative Solutions 32 3.3 The De Giorgi-Nash-Moser Iterative Scheme for Strong Solutions 45 Chapter 4. Exhaustion and Weak Pointwise Estimates 51 4.1 Weak Pointwise Estimates 52 4.2 Exhaustion of Blow-up Points 54 Chapter 5. Asymptotics When the Energy Is of Minimal Type 67 5.1 Strong Convergence and Blow-up 68 5.2 Sharp Pointwise Estimates 72 Chapter 6. Asymptotics When the Energy Is Arbitrary 83 6.1 A Fundamental Estimate: 1 88 6.2 A Fundamental Estimate: 2 143 6.3 Asymptotic Behavior 182 Appendix A. The Green's Function on Compact Manifolds 201 Appendix B. Coercivity Is a Necessary Condition 209 Bibliography 213 Read More Show Less
{"url":"http://www.barnesandnoble.com/w/blow-up-theory-for-elliptic-pdes-in-riemannian-geometry-olivier-druet/1100017769?ean=9781400826162","timestamp":"2014-04-20T02:57:04Z","content_type":null,"content_length":"143234","record_id":"<urn:uuid:faffe405-4e4f-4c1e-a26c-1d04999f689e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Testing Einstein's famous equation E=mc<sup>2</sup> in outer space University of Arizona physicist Andrei Lebed has stirred the physics community with an intriguing idea yet to be tested experimentally: The world's most iconic equation, Albert Einstein's E=mc^2, may be correct or not depending on where you are in space. With the first explosions of atomic bombs, the world became witness to one of the most important and consequential principles in physics: Energy and mass, fundamentally speaking, are the same thing and can, in fact, be converted into each other. This was first demonstrated by Albert Einstein's Theory of Special Relativity and famously expressed in his iconic equation, E=mc^2, where E stands for energy, m for mass and c for the speed of light Although physicists have since validated Einstein's equation in countless experiments and calculations, and many technologies including mobile phones and GPS navigation depend on it, University of Arizona physics professor Andrei Lebed has stirred the physics community by suggesting that E=mc^2 may not hold up in certain circumstances. The key to Lebed's argument lies in the very concept of mass itself. According to accepted paradigm, there is no difference between the mass of a moving object that can be defined in terms of its inertia, and the mass bestowed on that object by a gravitational field. In simple terms, the former, also called inertial mass, is what causes a car's fender to bend upon impact of another vehicle, while the latter, called gravitational mass, is commonly referred to as "weight." This equivalence principle between the inertial and gravitational masses, introduced in classical physics by Galileo Galilei and in modern physics by Albert Einstein, has been confirmed with a very high level of accuracy. "But my calculations show that beyond a certain probability, there is a very small but real chance the equation breaks down for a gravitational mass," Lebed said. If one measures the weight of quantum objects, such as a hydrogen atom, often enough, the result will be the same in the vast majority of cases, but a tiny portion of those measurements give a different reading, in apparent violation of E=mc^2. This has physicists puzzled, but it could be explained if gravitational mass was not the same as inertial mass, which is a paradigm in physics. "Most physicists disagree with this because they believe that gravitational mass exactly equals inertial mass," Lebed said. "But my point is that gravitational mass may not be equal to inertial mass due to some quantum effects in General Relativity, which is Einstein's theory of gravitation. To the best of my knowledge, nobody has ever proposed this before." Lebed presented his calculations and their ramifications at the Marcel Grossmann Meeting in Stockholm last summer, where the community greeted them with equal amounts of skepticism and curiosity. Held every three years and attended by about 1,000 scientists from around the world, the conference focuses on theoretical and experimental General Relativity, astrophysics and relativistic field theories. Lebed's results will be published in the conference proceedings in February. In the meantime, Lebed has invited his peers to evaluate his calculations and suggested an experiment to test his conclusions, which he published in the world's largest collection of preprints at Cornell University Library (see Extra Info). "The most important problem in physics is the Unifying Theory of Everything -- a theory that can describe all forces observed in nature," said Lebed. "The main problem toward such a theory is how to unite relativistic quantum mechanics and gravity. I try to make a connection between quantum objects and General Relativity." The key to understand Lebed's reasoning is gravitation. On paper at least, he showed that while E=mc^2 always holds true for inertial mass, it doesn't always for gravitational mass. "What this probably means is that gravitational mass is not the same as inertial," he said. According to Einstein, gravitation is a result of a curvature in space itself. Think of a mattress on which several objects have been laid out, say, a ping pong ball, a baseball and a bowling ball. The ping pong ball will make no visible dent, the baseball will make a very small one and the bowling ball will sink into the foam. Stars and planets do the same thing to space. The larger an object's mass, the larger of a dent it will make into the fabric of space. In other words, the more mass, the stronger the gravitational pull. In this conceptual model of gravitation, it is easy to see how a small object, like an asteroid wandering through space, eventually would get caught in the depression of a planet, trapped in its gravitational field. "Space has a curvature," Lebed said, "and when you move a mass in space, this curvature disturbs this motion." According to the UA physicist, the curvature of space is what makes gravitational mass different from inertial mass. Lebed suggested to test his idea by measuring the weight of the simplest quantum object: a single hydrogen atom, which only consists of a nucleus, a single proton and a lone electron orbiting the Because he expects the effect to be extremely small, lots of hydrogen atoms would be needed. Here is the idea: On a rare occasion, the electron whizzing around the atom's nucleus jumps to a higher energy level, which can roughly be thought of as a wider orbit. Within a short time, the electron falls back onto its previous energy level. According to E=mc^2, the hydrogen atom's mass will change along with the change in energy level. So far, so good. But what would happen if we moved that same atom away from Earth, where space is no longer curved, but flat? You guessed it: The electron could not jump to higher energy levels because in flat space it would be confined to its primary energy level. There is no jumping around in flat space. "In this case, the electron can occupy only the first level of the hydrogen atom," Lebed explained. "It doesn't feel the curvature of gravitation." "Then we move it close to Earth's gravitational field, and because of the curvature of space, there is a probability of that electron jumping from the first level to the second. And now the mass will be different." "People have done calculations of energy levels here on Earth, but that gives you nothing because the curvature stays the same, so there is no perturbation," Lebed said. "But what they didn't take into account before that opportunity of that electron to jump from the first to the second level because the curvature disturbs the atom." "Instead of measuring weight directly, we would detect these energy switching events, which would make themselves known as emitted photons -- essentially, light," he explained. Lebed suggested the following experiment to test his hypothesis: Send a small spacecraft with a tank of hydrogen and a sensitive photo detector onto a journey into space. In outer space, the relationship between mass and energy is the same for the atom, but only because the flat space doesn't permit the electron to change energy levels. "When we're close to Earth, the curvature of space disturbs the atom, and there is a probability for the electron to jump, thereby emitting a photon that is registered by the detector," he said. Depending on the energy level, the relationship between mass and energy is no longer fixed under the influence of a gravitational field. Lebed said the spacecraft would not have to go very far. "We'd have to send the probe out two or three times the radius of Earth, and it will work." According to Lebed, his work is the first proposition to test the combination of quantum mechanics and Einstein's theory of gravity in the solar system. "There are no direct tests on the marriage of those two theories," he said. " It is important not only from the point of view that gravitational mass is not equal to inertial mass, but also because many see this marriage as some kind of monster. I would like to test this marriage. I want to see whether it works or not." Story Source: The above story is based on materials provided by University of Arizona. The original article was written by Daniel Stolte. Note: Materials may be edited for content and length. Cite This Page: University of Arizona. "Testing Einstein's famous equation E=mc^2 in outer space." ScienceDaily. ScienceDaily, 8 January 2013. <www.sciencedaily.com/releases/2013/01/130108162227.htm>. University of Arizona. (2013, January 8). Testing Einstein's famous equation E=mc^2 in outer space. ScienceDaily. Retrieved April 16, 2014 from www.sciencedaily.com/releases/2013/01/130108162227.htm University of Arizona. "Testing Einstein's famous equation E=mc^2 in outer space." ScienceDaily. www.sciencedaily.com/releases/2013/01/130108162227.htm (accessed April 16, 2014).
{"url":"http://www.sciencedaily.com/releases/2013/01/130108162227.htm","timestamp":"2014-04-16T13:30:49Z","content_type":null,"content_length":"91638","record_id":"<urn:uuid:ae794652-19e6-4cc9-a50d-afd28f75d2c9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Reason and its Limitations Wanted: Answers to some Big Questions &nbsp Contents |&nbsp rgb Home | Philosophy Home | Axioms | Other Books by rgb: | The Book of Lilith | Subsections Wanted: Answers to some Big Questions &nbsp Contents |&nbsp rgb Home | Philosophy Home | Axioms | Other Books by rgb: | The Book of Lilith | Copyright © 2010-01-21 Duke Physics Department Box 90305 Durham, NC 27708-0305
{"url":"http://www.phy.duke.edu/~rgb/Philosophy/axioms/axioms/Reason_its_Limitations.html","timestamp":"2014-04-19T14:31:45Z","content_type":null,"content_length":"6186","record_id":"<urn:uuid:84cf8a93-b94c-483c-ac52-9ce69dbe356d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
A specific projection and compactness on the Bargmann-Fock space up vote 2 down vote favorite Let $F_2$ be the Bargmann Fock space defined as the space of entire functions $f$ on $\mathbb{C}$ such that \begin{align*} \int_{\mathbb{C}} |f(z)|^2 e^{- |z|^2} dA(z) \end{align*} ($dA$ is just ordinary area measure, and I'm restricting to the one dimensional case for simplicity.) The normalized reproducing kernel $k_z$ is the function $k_z(w) = e^{w \overline{z} - \frac{|z|^2}{2}}$. I'm interested in an example of a bounded operator $S$ on $F_2$ where \begin{align*} \underset{|z| \rightarrow \infty}{\lim} \|S k_z\|_{F_2} = 0 \end{align*} and $S$ is non-compact on $F_2$. The obvious try is the well known example that works for the Bergman space (see Axler/Zheng "Compact Operators via the Berezin Transform"): \begin{align*} S (\sum_{k = 0} ^\infty a_k z^k) = \sum_{k = 0} ^\infty a_{2^k} z^{2^k}. \end{align*} Trivially $S$ is non-compact as a self adjoint projection with infinite dimensional range. Computing the above limit for this $S$ gives the limit \begin{align*} \underset{|z| \rightarrow \infty}{\ lim} e^{-|z|^2} \sum_{k = 0}^\infty \frac{|z|^{2^{k + 1}}}{(2^k)!} \end{align*} I've tried a few things but I can't seem to show this limit is $0$ (and admittedly I'd like to use this example in a talk of mine very soon). Does anyone know if this is true, or if this is in the literature anywhere? Are there other examples out there ($S$ needs not necessarily be self adjoint or a projection, $S$ just needs to satisfy the norm limit above being $0$ and $S$ is non-compact). fa.functional-analysis operator-theory cv.complex-variables 1 The limit is zero because the Taylor series of $e^{R^2}$ is smeared uniformly over about $R$ terms around the maximal one and your lacunary series picks up just one of those. Just note that the maximal term corresponds to the index $n=R^2$ (give or take one to get an integer) and that $R$ adjacent terms on both sides have comparable sizes after which you have Gaussian decay like $e^{- (m-n)^2/R}$. Basically, that's the Laplace asymptotic formula in the discrete setting. – fedja Jan 7 '12 at 22:09 Sorry, it should be $e^{-c(m-n)^2/R^2}$. – fedja Jan 7 '12 at 22:10 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged fa.functional-analysis operator-theory cv.complex-variables or ask your own question.
{"url":"http://mathoverflow.net/questions/85150/a-specific-projection-and-compactness-on-the-bargmann-fock-space","timestamp":"2014-04-18T13:44:51Z","content_type":null,"content_length":"49706","record_id":"<urn:uuid:48c5cf27-f88b-4a5f-bf08-a96ba4979e1c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Equal partial derivatives Do you mean [itex]\partial f/\partial x= C[/itex] and [itex]\partial f/\partial y= C[/itex]? The same constant or different constants? From [itex]\partial f/\partial x= C[/itex], we get [itex]f(x,y)= Cx+ g(y)[/itex] where g can be any function of y. Differentiating that with respect to y, [itex]\partial f/\partial y= g'(y)= C[/itex] which tells us that g(y)= Cy+ C' where C' is an arbitrary constant of integration. That is, f(x,y)= Cx+ Cy+ C'.
{"url":"http://www.physicsforums.com/showthread.php?t=547035","timestamp":"2014-04-18T10:47:09Z","content_type":null,"content_length":"23041","record_id":"<urn:uuid:a41eddc2-99ce-45b8-b870-cb3f6169b0c3>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
crossover component - diyAudio Yes, give us your speaker impedance, and your desired crossover frequency. Vac = Volts Alternating Current, meaning that the applied voltage will go both positive and negative. Vdc = Volts Direct Current, meaning the applied volatage will be on a continuous single (either positive or negative) polarity. Notice that Volts DC is typically higher than Volts AC. You want to use Volts AC as your standard, and you want to rate your Caps at twice the anticipated voltage. To determine the maximum voltage of your amp, use this formula - Voltage (E) is the square root of the Power(P) times the Resistance(R). E = SqRt(P x R) Assuming 100 watt amp and 8 ohm speakers E = SqRt(100watts x 8ohms) E = SqRt(800) E = 28.3 volts Meaning you need at least a 60 Vac capacitor. Using the Capacitor calculator I linked to, and using 3 sample crossover points, assuming 8 ohm speakers, and assuming a Butterworth crossover design, I come up with - 1,000 hz = 20uF (micro-Farads) 2,000 hz = 10uF 3,000 hz = 6.6uF That seems far off from the 82uF you estimated. The values above are consistent with the formula - C= 1 / [2(pi)fR] Where C = capacitance, (pi) = 3.14159, f = the crossover frequency, and R = the rated impedance of one speaker.
{"url":"http://www.diyaudio.com/forums/multi-way/116685-crossover-component.html","timestamp":"2014-04-17T11:40:31Z","content_type":null,"content_length":"55476","record_id":"<urn:uuid:50ea8df7-8e34-4f4a-90dd-ad239a9671af>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Bicentennial, CA Statistics Tutor Find a Bicentennial, CA Statistics Tutor ...I am a certified Montessori Teacher PreK-8 from UCSD. I taught high school for over 25 years so I know how to prepare students. I know what's coming so I know what to emphasize. 72 Subjects: including statistics, English, reading, writing ...I was an economics major at USC and got an A in econometrics. I also tutored the class at USC since the second semester of my sophomore year, often to great effect. Additionally, I still use regression analysis regularly for my job as a management consultant. 16 Subjects: including statistics, economics, SAT math, LSAT ...Often, when a student is having challenges in math, it is because one or two concepts slipped by, or weren’t taught properly. This makes the next concept more difficult, and pretty soon a student can be overwhelmed. I will quickly help you get caught up, and then help you get ahead, so that you... 63 Subjects: including statistics, reading, English, chemistry ...I took five years of Spanish culminating in my junior year of high school when I scored a 5 on the AP Spanish exam. I took AP Statistics in high school and earned a 5 on the AP exam. I also took a statistical inference course in college and earned an A. 18 Subjects: including statistics, chemistry, Spanish, biology ...I'm confident that I can help almost any student with my methods. I tutor Korean and Chinese students who are learning English as a second language regularly. I have several years experience tutoring these students one-on-one at after-school prep academies. 73 Subjects: including statistics, reading, English, chemistry Related Bicentennial, CA Tutors Bicentennial, CA Accounting Tutors Bicentennial, CA ACT Tutors Bicentennial, CA Algebra Tutors Bicentennial, CA Algebra 2 Tutors Bicentennial, CA Calculus Tutors Bicentennial, CA Geometry Tutors Bicentennial, CA Math Tutors Bicentennial, CA Prealgebra Tutors Bicentennial, CA Precalculus Tutors Bicentennial, CA SAT Tutors Bicentennial, CA SAT Math Tutors Bicentennial, CA Science Tutors Bicentennial, CA Statistics Tutors Bicentennial, CA Trigonometry Tutors Nearby Cities With statistics Tutor Baldwin Hills, CA statistics Tutors Briggs, CA statistics Tutors Dockweiler, CA statistics Tutors Farmer Market, CA statistics Tutors Lafayette Square, LA statistics Tutors Miracle Mile, CA statistics Tutors Oakwood, CA statistics Tutors Pico Heights, CA statistics Tutors Preuss, CA statistics Tutors Rancho Park, CA statistics Tutors Rimpau, CA statistics Tutors Sanford, CA statistics Tutors Vermont, CA statistics Tutors Wilcox, CA statistics Tutors Wilshire Park, LA statistics Tutors
{"url":"http://www.purplemath.com/bicentennial_ca_statistics_tutors.php","timestamp":"2014-04-18T18:37:47Z","content_type":null,"content_length":"24278","record_id":"<urn:uuid:ce4de62e-5658-453b-91f1-c72a34a0290a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Too many MC’s not enough MIC’s, or What principles should govern attempts to summarize bivariate associations in large multivariate datasets? February 4, 2013 By Andrew (This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.) Justin Kinney writes: Since your blog has discussed the “maximal information coefficient” (MIC) of Reshef et al., I figured you might want to see the critique that Gurinder Atwal and I have posted. In short, Reshef et al.’s central claim that MIC is “equitable” is incorrect. We [Kinney and Atwal] offer mathematical proof that the definition of “equitability” Reshef et al. propose is unsatisfiable—no nontrivial dependence measure, including MIC, has this property. Replicating the simulations in their paper with modestly larger data sets validates this finding. The heuristic notion of equitability, however, can be formalized instead as a self-consistency condition closely related to the Data Processing Inequality. Mutual information satisfies this new definition of equitability but MIC does not. We therefore propose that simply estimating mutual information will, in many cases, provide the sort of dependence measure Reshef et al. seek. For background, here are my two posts (Dec 2011 and Mar 2012) on this method for detecting novel associations in large data sets. I never read the paper in detail but on quick skim it looked really cool to me. As I saw it, the clever idea of the paper is that, instead of going for an absolute measure (which, as we’ve seen, will be scale-dependent), they focus on the problem of summarizing the grid of pairwise dependences in a large set of variables. Thus, Reshef et al. provide a relative rather than absolute measure of association, suitable for comparing pairs of variables within a single dataset even if the interpretation is not so clear between datasets. At the time, I was left with two questions: 1. What is the value of their association measure if applied to data that are on a circle? For example, suppose you generate these 1000 points in R: n <- 1000 theta <- runif (n, 0, 2*pi) x <- cos (theta) y <- sin (theta) Simulated in this way, x and y have an R-squared of 0. And, indeed, knowing x tells you little (on average) about y (and vice-versa). But, from the description of the method in the paper, it seems that their R-squared-like measure might be very close to 1. I can’t really tell. This is an interesting to me because it’s not immediately clear what the right answer “should” be. If you can capture a bivariate distribution by a simple curve, that’s great; on the other hand if you can’t predict x from y or y from x, then I don’t know that I’d want a R-squared-like summary to be close to 1. No measure can be all things to all datasets, so let me emphasize that the above is not a criticism of the idea of Reshef et al. but rather an exploration. 2. I wonder if they’d do even better by log-transforming any variables that are all-positive. (I thought about this after looking at the graphs in Figure 4.) A more general approach would be for their grid boxes to be adaptive. My second post reported some criticisms of the method. Reshef et al. responded in a comment. In any case, all these methods (including the method discussed in the paper by Simon and Tibshirani) seem like a step forward from what we typically use in statistics. So this all seems like a great discussion to be having. I like how Kinney and Atwal are going back to first principles. P.S. There was one little thing I’m skeptical of, not at all central to Kinney and Atwal’s main points. Near the bottom of page 11 they suggest that inference about joint distributions (in their case, with the goal of estimating mutual information) is not a real concern now that we are in such a large-data world. But, as we get more data, we also gain the ability and inclination to subdivide our data into smaller pieces. For example, sure, “consumer research companies routinely analyze data sets containing information on ∼ 10^5 shoppers,” but it would be helpful to break up the data and learn about different people, times, and locations, rather than computing aggregate measures of association. So I think “little-data” issues such as statistical significance and efficiency are not going away. Again, this is only a small aside in their paper but I wanted to mention the point. Please comment on the article here: Statistical Modeling, Causal Inference, and Social Science
{"url":"http://www.statsblogs.com/2013/02/04/too-many-mcs-not-enough-mics-or-what-principles-should-govern-attempts-to-summarize-bivariate-associations-in-large-multivariate-datasets-2/","timestamp":"2014-04-19T17:02:06Z","content_type":null,"content_length":"39617","record_id":"<urn:uuid:7c2dd086-1c82-498b-903b-7aede4526590>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f1e10e5e4b04992dd24862b","timestamp":"2014-04-18T16:26:55Z","content_type":null,"content_length":"124692","record_id":"<urn:uuid:862d5273-769e-4270-8b91-c965764578dc>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help November 19th 2009, 03:30 PM #1 Nov 2009 Help please!! TRIANGLE DEF has vertices D(4,5) E(-4,3) F(0,-5). Determine a)An equation of EB, the median from E to DF b)an equation of FA, the altitude from f to DE c)an equation of PQ, the right bisector of EF This question doesnt really make sense to me so if u can help me i really appreciate it. : D a median of triangle is a line drawn from a vertex ot the triangle to the midpoint of the opposite side of the triangle. first find the slopes of sides of the triangle then use point slope form to get the equations the midpoint is just distance between vertexes divided by 2 use the distance formula can you get it from here.... yea but i learned that already But i never learn how to do B and C an altitude of a triangle is a line drawn from a vertex to the triangle perpendicular to the opposite side of the triangle. c) is just a perpendicular line going thru the midpoint of EF.... at least assuming that what they mean by "right bisector" ussually its called "perpendicular bisector" will show the math if you want... let me go home first i am at work.... unless others help... i would appreciate if you show me the calculation thanks you November 19th 2009, 03:50 PM #2 November 19th 2009, 03:52 PM #3 Nov 2009 November 19th 2009, 04:14 PM #4 November 19th 2009, 04:19 PM #5 Nov 2009
{"url":"http://mathhelpforum.com/geometry/115649-help-please.html","timestamp":"2014-04-20T16:04:45Z","content_type":null,"content_length":"34468","record_id":"<urn:uuid:7cd56f22-1f10-4dd9-9d26-7c133b85c686>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Find The Total Charge Passing Through An Element ... | Chegg.com Find the total charge passing through an element at one cross section over the time interval of 0 to 5 seconds if the current through the same cross section is given by: i(t)= 2te^-3t when t 0, t I'm having trouble taking the antiderivative of 2te^-3t Please explain how to do this!!! Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/find-total-charge-passing-element-one-cross-section-time-interval-0-5-seconds-current-cros-q1096849","timestamp":"2014-04-21T15:52:51Z","content_type":null,"content_length":"21771","record_id":"<urn:uuid:cfa80ffd-b49b-492b-acbc-219ff6ac77d5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Inequality Worksheet - DOC Applications of Math 11 Linear Programming Unit Outline Chapter 4 General Prescribed Learning Outcomes: Represent and analyze situations that involve expressions, equations and Use linear programming to solve optimization problems. Specific Prescribed Learning Outcomes: Graph linear inequalities, in two variables. Students were introduced to inequalities with one variable in Grade 9 mathematics, where they solved inequalities and graphed inequalities on a number Coordinate geometry skills are essential for this specific outcome. The concepts of plotting points and intercepts, line graphing, and the use of calculators are the more important concepts to review. Ax + By + C = 0 can be sketched using intercepts. Conversion from the Ax + By + C = 0 form to any y = form is a necessary preliminary step to the use of a graphing calculator. Window settings, on the graphing calculator, are useful to replace the plotting of horizontal and vertical lines. For example, x < 5 could be entered into window settings as an x-max of 5. Students should first graph manually—to solve for and sketch the solution region—and then graph using the graphing calculator. An even balance between both approaches is recommended Acceptable Standard: graph the boundary line between two half planes use a test point, usually (0, 0), to determine the solution region that satisfies the inequality, given a boundary line graph a linear inequality expressed in the form y = mx + b, using <, >, £, ³ rewrite any inequality expressed in the Ax + By = C form in the y = mx + b form, where A, B, C are integral and B > 0 Standard of Excellence: distinguish between the use of solid and broken lines in solution regions graph any linear inequality in two variables rewrite any inequality expressed in the Ax + By = C form in the y = mx + b form, and graph explain why the shaded half plane represents the solution region of the inequality Mark Healy mbhealy@telus.net Page 1 West Vancouver Secondary School Solve systems of linear inequalities in two variables using technology. Many constraint inequalities are of the form Ax + By = C, rather than Ax + By + C = 0. It is important for students to first sketch the system of linear inequalities, because the graphing calculator may need adjustments in window settings or use of the zoom function to show the intersection point or points of the inequalities. With multiple inequalities, students may find it easier to see the intersection by graphing the opposite inequalities (reverse shading), so the intersection region is Examples should include both open solution regions and closed polygon solution Acceptable Standard: graph the boundary line between two half planes use a test point, usually (0, 0), to determine the solution region that satisfies the inequality, given a boundary line graph a linear inequality expressed in the form y = mx + b, using <, >, ≤, or ≥ rewrite any inequality expressed in the Ax + By = C form in the y = mx + b form, where A, B, C are integral and B > 0 explain why the solution region represents the solution to the problem Standard of Excellence: distinguish between the use of solid and broken lines in solution regions graph any linear inequality in two variables rewrite any inequality expressed in the Ax + By = C form in the y = mx + b form, and graph explain why the shaded half plane represents the solution region of the inequality explain how the solution region is a combination of different half planes Design and solve linear and nonlinear systems, in two variables, to model problem situations. Technology is a key mathematical process for this outcome. Use window settings for horizontal and vertical lines. For problems where the intersection of the inequalities is difficult to distinguish on the graphing calculator, students could reverse the inequality signs and view the intersection as the unshaded region. Most problem situations have solutions in the first quadrant. Price and profit problems can lead to quadratic inequalities. Area and perimeter problems often lead to non-linear inequalities. Intersection points are often found using technology. All nonlinear inequalities should be solved using technology Students may find this outcome challenging as a whole; it may be useful to guide students into structured solutions. Mark Healy mbhealy@telus.net Page 2 West Vancouver Secondary School Apply linear programming to find optimal solutions to decision-making The optimal solution is found by substituting the coordinates of the vertices into the expression for the objective function. The optimal solution must lie within the domain of the relation; e.g., if x represents the number of people, then x must be a whole number. In addition to the stated restrictions, other inequalities may be implicit and must be considered to find the solution; e.g., if x represents the length of a rectangle, then x > 0 The optimal solution may involve either real numbers or natural numbers, depending on the context of the problem. Vertices should be part of the domain for all assessment problems. Acceptable Standard: Write the system of inequalities corresponding to a problem context Graph inequalities Find vertices Write an expression for the objective function Substitute vertices into an expression for the objective function to find the optimal Standard of Excellence: Integrate the elements of the solution process to find the optimal solution to a decision-making problem Summarize and explain why the solution is correct Mark Healy mbhealy@telus.net Page 3 West Vancouver Secondary School Lesson Sequence Day 1 – Tutorial 4.1 – The Graph of a Linear Inequality Discuss preamble at beginning of tutorial Use attached handout to discuss lesson or go through Investigation 1 and 2 as a Go through ―Discussing the Ideas‖ #3 – 6 Assignment: Attached Worksheet Day 2 – Tutorial 4.2 – Graphing a Linear Inequality in Two Variables http://faculty.stcc.mass.edu/zee/newpage128.htm provides good examples for this Discuss steps outlined at beginning of Tutorial. Utility 21 on page 417 discusses use of graphing calculator to graph inequalities. Use steps to complete examples 1 and 2 using both graphing by hand and graphing using technology. Go through ―Discussing the Ideas‖ #1 – 4 Assignment: Page 178 #1, 2, 3aceg, 4ac, 6ac, 7, 9 Day 3 and 4 – Tutorial 4.3 – The Solution of a System of Linear Inequalities Day 3: Discuss the steps to graphing a system of linear inequalities by hand o http://faculty.stcc.mass.edu/zee/newpage210.htm provides good examples for this lesson o Complete example 1 (need different color pencil crayons) o Assignment: Page 187 #2, 3, 4ab, 6 Day 4: Discuss the steps to graphing a system of linear inequalities using technology (reverse shading) o Discuss using window settings for horizontal and vertical lines. o Use reverse shading so solution area becomes the only part non-shaded. o Go through examples 2 and 3 o Go through ―Discussing the Ideas‖ #1-5 o Assignment: Page 187 #1, 3(using TI-83), 4c, 5, 7(either way), 8 (either Day 5 – Review and Quiz Day 6 and 7 – Tutorial 4.4 – Modelling a Problem Situation Day 6: Modelling linear system situations o Go through Examples 1 and 2 o Go through Discussing the Ideas #1-4 o Assignment: Page 195 #1 – 3 Day 7: Modelling nonlinear system situations o Go through Example 3 o Assignment: Page 196 #4 – 7 Mark Healy mbhealy@telus.net Page 4 West Vancouver Secondary School Day 8 – Project & Introduction to Tutorial 4.5 – Optimization Problems Read through Project Introduction on page 168 Complete #1 – 3 of project on page 198 (~50 minutes) using the attached handout. Introduce Optimization problems by completing Investigation on Page 200. Day 9 – Tutorial 4.5 continued Go through attached lesson to discuss optimization. Go through Examples 1 and 2 for more examples. Assignment: Page 207 #1 – 5 ***If time, add an extra day to complete Tutorial 4.5!!! Use attached handout for more Day 10 – Review and Project Completion Complete project requirements on page 210 Review Unit using Unit Review on Pages 213 – 218 Day 11 – Unit Exam Mark Healy mbhealy@telus.net Page 5 West Vancouver Secondary School
{"url":"http://www.docstoc.com/docs/82973052/Linear-Inequality-Worksheet---DOC","timestamp":"2014-04-18T05:15:50Z","content_type":null,"content_length":"63476","record_id":"<urn:uuid:2a0947cb-3583-49ef-b089-ce8f577c693f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: October 1997 [00228] [Date Index] [Thread Index] [Author Index] Re: Help with findroot • To: mathgroup at smc.vnet.net • Subject: [mg9173] Re: [mg9169] Help with findroot • From: David Withoff <withoff> • Date: Tue, 21 Oct 1997 02:02:47 -0400 • Sender: owner-wri-mathgroup at wolfram.com > I'm having a problem using findroot to solve an equation. Perhaps > someone > could shed some light on what's wrong. > FindRoot[Sqrt[x/(1.2 10^ -4)]==-0.1*Tan[Sqrt[x/(1.2*10^ > -4)]],{x,0.1,0.1,2}] > Mathematica 3. returns a value of -0.07 for x which is not anywhere > close to correct. > Further, I've tried several different starting values and min/max > limits, but > a negative answer is always returned. Eventually I'de like to compile > a list > of all the roots of the equation up to, say, x=1000, but I can't even > get one > right now. > Thanks, > Karl Kevala One of the more useful ways to understand this type of example is to look at plots of the functions, such as Plot[{Sqrt[x/(1.2 10^-4)], -0.1 Tan[Sqrt[x/(1.2 10^-4)]]}, {x, 0, .01}, PlotRange -> {-5, 15}] which correctly suggests that this equation has an infinite number of solutions (as well as some awkward singularities that are likely to make the problem computationally challenging). There is a solution near .0003 In[12]:= FindRoot[Sqrt[x/(1.2 10^ -4)]==-0.1*Tan[Sqrt[x/(1.2*10^-4)]], Out[12]= {x -> 0.000319609} and another one near .0027 In[22]:= FindRoot[Sqrt[x/(1.2 10^ -4)]==-0.1*Tan[Sqrt[x/(1.2*10^-4)]], Out[22]= {x -> 0.00268874} and so forth. Because of the singularities, you may need to choose starting values that are rather close to the solution in order for the algorithm to converge to the solution that you want. The plots can be very useful in choosing good starting values. Dave Withoff Wolfram Research
{"url":"http://forums.wolfram.com/mathgroup/archive/1997/Oct/msg00228.html","timestamp":"2014-04-17T21:44:10Z","content_type":null,"content_length":"35564","record_id":"<urn:uuid:c0f32f92-1ab9-4ed6-8fef-1800eaad3d5d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Westwood Area 2, OH SAT Math Tutor Find a Westwood Area 2, OH SAT Math Tutor ...My strong points are patience and building confidence. I can tailor my lessons around your child's homework or upcoming tests and stay synchronized. Your child's skills will be improved in a few sessions. 14 Subjects: including SAT math, Spanish, reading, ESL/ESOL ...I have worked at Lindamood-Bell Learning Processes with students in this age range. I have also been privately tutoring for the past 4 years in the Los Angeles area at several schools including Campbell Hall Episcopal, Los Feliz Charter, and Wonderland Elementary School. I was trained in Phonetics at Lindamood-Bell Learning Processes. 24 Subjects: including SAT math, reading, English, writing ...I studied privately for 10 years, under Sheila Reinhold, who was mentored by violin-great Jascha Heifetz. During my college years, I took on a handful of beginner violin students. I love teaching the violin! 42 Subjects: including SAT math, reading, English, elementary (k-6th) ...I have extensive experience tutoring for test preparation, and writing. I have an SAT manual, and I can also recommend additional materials for more practice. I was on swim team in high school and college. 35 Subjects: including SAT math, English, Spanish, elementary math ...I have also worked with the Georgian Heights Elementary after school program to help prepare 5th grade students for the math portion of the Ohio Achievement Assessment test. I have helped develop study skills for elementary school children as a mentor. I have also worked with adults returning to school years later. 10 Subjects: including SAT math, geometry, algebra 1, algebra 2 Related Westwood Area 2, OH Tutors Westwood Area 2, OH Accounting Tutors Westwood Area 2, OH ACT Tutors Westwood Area 2, OH Algebra Tutors Westwood Area 2, OH Algebra 2 Tutors Westwood Area 2, OH Calculus Tutors Westwood Area 2, OH Geometry Tutors Westwood Area 2, OH Math Tutors Westwood Area 2, OH Prealgebra Tutors Westwood Area 2, OH Precalculus Tutors Westwood Area 2, OH SAT Tutors Westwood Area 2, OH SAT Math Tutors Westwood Area 2, OH Science Tutors Westwood Area 2, OH Statistics Tutors Westwood Area 2, OH Trigonometry Tutors Nearby Cities With SAT math Tutor Baldwin Hills, CA SAT math Tutors Briggs, CA SAT math Tutors Century City, CA SAT math Tutors Farmer Market, CA SAT math Tutors Green, CA SAT math Tutors Holly Park, CA SAT math Tutors La Costa, CA SAT math Tutors Miracle Mile, CA SAT math Tutors Paseo De La Fuente, PR SAT math Tutors Playa, CA SAT math Tutors Preuss, CA SAT math Tutors Rancho La Costa, CA SAT math Tutors Rancho Park, CA SAT math Tutors Westwood, LA SAT math Tutors Wilcox, CA SAT math Tutors
{"url":"http://www.purplemath.com/Westwood_Area_2_OH_SAT_Math_tutors.php","timestamp":"2014-04-17T00:55:59Z","content_type":null,"content_length":"24390","record_id":"<urn:uuid:620ec920-1d72-4b73-abc9-acc145d9b14b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
KnittingHelp.com Forum - View Single Post - Is there a way to use a different size needle than the pattern calls for and still get the same size hat? Yes, you'd have to adjust the number of stitches. Figure out how many inches around the hat should be by dividing the pattern gauge (stitches per inch) into the number of stitches. Then see how many stitches you get with the other needles and multiply by the measurement you got before. You may have to adjust the number a little to work any ribbing or other stitch repeat - for example, k2 p2 ribbing won't work if the stitch number is a multiple of 5, you have to have a multiple of 4 sts. Then you may have to adjust the decrease ratio a little. It helps if you can link a pattern, or tell us what it is, so we can look at it to help you in more detail. sue- knitting heretic Last edited by suzeeq : 10-22-2012 at 12:48 PM. Reason: spelling
{"url":"http://www.knittinghelp.com/forum/showpost.php?p=1358921&postcount=2","timestamp":"2014-04-19T21:28:36Z","content_type":null,"content_length":"12819","record_id":"<urn:uuid:971de878-0efd-486d-a51e-07e2bf39c136>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00225-ip-10-147-4-33.ec2.internal.warc.gz"}