content
stringlengths
86
994k
meta
stringlengths
288
619
Electromagnetic scattering by a reciprocal uniaxial bianisotropic cylinder with arbitrary cross section: extended mode-matching method Based on the eigenfunction expansion of electromagnetic waves in a reciprocal uniaxial bianisotropic medium, an extended mode-matching method is proposed to study the scattering of an infinitely long reciprocal uniaxial bianisotropic cylinder with arbitrary cross section. Excellent convergence of the echo width is numerically verified, which establishes the reliability and applicability of the present extended mode-matching method for the two-dimensional single-body problem in which the reciprocal uniaxial bianisotropic medium has a curved surface. © 1999 Optical Society of America OCIS Codes (160.1190) Materials : Anisotropic optical materials (290.0290) Scattering : Scattering Dajun Cheng and Yahia M. M. Antar, "Electromagnetic scattering by a reciprocal uniaxial bianisotropic cylinder with arbitrary cross section: extended mode-matching method," J. Opt. Soc. Am. A 16, 831-836 (1999) Sort: Year | Journal | Reset 1. W. W. Hansen, “A new type of expansion in radiation problems,” Phys. Rev. 47, 139–143 (1935). 2. J. A. Stratton, Electromagnetic Theory (McGraw-Hill, New York, 1941). 3. P. M. Morse and H. Feshbach, Methods of Theoretical Physics (McGraw-Hill, New York, 1953). 4. C. T. Tai, Dyadic Green’s Functions in Electromagnetic Theory (Intertext, New York, 1971). 5. R. D. Graglia, P. L. E. Uslenghi, and R. S. Zich, “Moment method with isoparameteric elements for three-dimensional anisotropic scatterers,” Proc. IEEE 77, 750–760 (1989). 6. V. V. Varadan, A. Lakhtakia, and V. K. Varadan, “Scattering by a three dimensional anisotropic scatterers,” IEEE Trans. Antennas Propag. 37, 800–802 (1989). 7. S. N. Papadakis, N. K. Uzunoglu, and N. Christos, “Scattering of a plane wave by a general anisotropic dielectric ellipsoid,” J. Opt. Soc. Am. A 7, 991–997 (1990). 8. J. A. Kong, Theory of Electromagnetic Waves (Wiley, New York, 1975). 9. C. Yeh, “Perturbation method in the diffraction of electromagnetic waves by arbitrarily shaped penetrable obstacles,” J. Math. Phys. 6, 2008–2013 (1965). 10. V. K. Varadan and V. V. Varadan, eds., Acoustic, Electromagnetic and Elastic Wave Scattering—Focus on the T-Matrix Approach (Pergamon, New York, 1980). 11. E. J. Rothwell and L. L. Frasch, “Propagation characteristics of dielectric-rod-loaded waveguides,” IEEE Trans. Microwave Theory Tech. 36, 594–600 (1988). 12. C. Hafner, The Generalized Multipole Technique for Computational Electromagnetics (Artech House, London, 1990). 13. I. V. Lindell, A. H. Sihvola, S. A. Tretyakov, and A. J. Viitanen, Electromagnetic Waves in Chiral and Bianisotropic Media (Artech House, Norwood, 1994). 14. A. Priou, ed., Bianisotropic and Biisotropic Media and Applications (EMW, Boston, 1994). 15. N. Engheta, ed., Special issue, “Wave interaction with chiral and complex media,” J. Electromagn. Waves Appl. 6, 537–793 (1992). 16. S. A. Tretyakov and A. A. Sochava, “Plane electromagnetic waves in uniaxial bianisotropic media,” in Proceedings of the Second International Conference and Workshop on Electromagnetics of Complex Media (Helsinki University of Technology, Helsinki, 1993), pp. 46–49. 17. D. Cheng, “Field representations in reciprocal uniaxial bianisotropic media by cylindrical vector wave functions,” Microwave Opt. Technol. Lett. 13, 358–363 (1996). 18. D. Cheng and W. Ren, “Green dyadics in reciprocal uniaxial bianisotropic medium by cylindrical vector wave functions,” Phys. Rev. E 54, 2917–2924 (1996). 19. I. V. Lindell, A. J. Viitanen, and P. K. Koivisto, “Plane-wave propagation in a transversely bianisotropic uniaxial medium,” Microwave Opt. Technol. Lett. 6, 478–481 (1993). 20. S. A. Tretyakov and A. A. Sochava, “Proposed composite material for non-reflecting shields and antenna redomes,” Electron. Lett. 29, 1048–1049 (1993). 21. A. J. Viitanen and I. V. Lindell, “Uniaxial chiral quarter-wave polarization transformer,” Electron. Lett. 29, 1074–1075 (1993). 22. I. V. Lindell, S. A. Tretyakov, and A. J. Viitanen, “Plane-wave propagation in a uniaxial chiro-omega medium,” Microwave Opt. Technol. Lett. 6, 517–520 (1993). 23. N. K. Uzunoglu, P. G. Cottis, and J. G. Fikioris, “Excitation of electromagnetic wave in a greoelectric cylinder,” IEEE Trans. Antennas Propag. 33, 90–98 (1985). 24. W. H. Eggiman, “Scattering of a plane wave on a ferrite cylinder at normal incidence,” IEEE Trans. Microwave Theory Tech. 8, 440–445 (1960). 25. N. Okamoto, “Matrix formulation of scattering by a homogeneous gyrotropic cylinder,” IEEE Trans. Antennas Propag. 18, 642–649 (1970). 26. R. D. Graglia and P. L. E. Uslenghi, “Electromagnetic scattering from anisotropic materials, part II: computer code and numerical results in two dimensions,” IEEE Trans. Antennas Propag. 35, 225–232 (1987). 27. L. Tsang, J. A. Kong, and R. T. Shin, Theory of Microwave Remote Sensing (Wiley, New York, 1985). 28. R. J. Pogorzelski and E. Lun, “On the expansion of cylindrical vector waves in terms of spherical vector waves,” Radio Sci. 11, 753–761 (1976). OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-16-4-831","timestamp":"2014-04-16T11:04:56Z","content_type":null,"content_length":"110229","record_id":"<urn:uuid:fd8b9939-0e55-4afa-aae6-feff6e51a5bb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Mechanical & Aerospace Engineering Wyckham, C. and Smits, A. J., “Aero-optic distortion in turbulent compressible boundary layers.” Submitted Journal of Fluid Mechanics. Sahoo, D. and Smits, A. J., “PIV Experiments on a Rough Wall Hypersonic Turbulent Boundary Layer,” AIAA Paper 2010-4471, 4th AIAA Fluid Dynamics Meeting, Chicago, Illinois, June 2010. Sahoo, D., Desai, P. and Smits, A. J., “Experimental Investigation of Helium Injection in a Hypersonic Turbulent Boundary Layer,” AIAA Paper 2010-1559, 48th AIAA Aerospace Sciences Meeting, Orlando, Florida, January 2010 Smits, A. J., Martin, P. M. and Girimaji, S. “Current status of basic research in hypersonic turbulence,” AIAA Paper 2009-0151, 47th AIAA Aerospace Sciences Meeting, Orlando, Florida, January 2009. Sahoo, D., Schultze, M. and Smits, A. J., “Effects of Roughness on a Turbulent Boundary Layer in Hypersonic Flow,” Paper 2009-3678, 39th AIAA Fluid Dynamics Conference, San Antonio, Texas, 22 - 25 June 2009. Sahoo, D., Ringuette, M. J. and Smits, A. J., “Experimental Investigation of Hypersonic Turbulent Boundary Layer,” AIAA Paper 2009-0780, 47th AIAA Aerospace Sciences Meeting, Orlando, Florida, January 2009. Ringuette, M. J. and Smits, A. J., “Wall-Pressure Measurements in a Mach 3 Shock-Wave Turbulent Boundary Layer Interaction at a DNS Accessible Reynolds Number,” AIAA Paper 2007-4113, 37th AIAA Fluid Dynamics Meeting, Tampa, Florida, June 2007. Ringuette, M. J., Martin, M. P. and Smits, A. J., “Variation in the Turbulence Structure of Supersonic Boundary Layers with Wall Temperature using DNS Data,” AIAA Paper 2007-1130, 45th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 2007. Smits, A. J. and Martin, P., “Turbulence in supersonic and hypersonic boundary layers,” One Hundred Years of Boundary Layer Research, eds. G. E. A. Meier and K. R. Sreenivasan, Springer Series in Solid Mechanics and Its Applications, 2006. Wyckham, C. and Smits, A. J., “Comparison of aero-optic distortion in hypersonic and transonic turbulent boundary layers with gas injection,” AIAA Paper 2006-3067, 36th AIAA Plasmadynamics and Lasers Conference, San Francisco, CA, June 2006. Ringuette, M. J., Martin, M. P. and Smits, A. J., “Characterization of the Turbulence Structure in Supersonic Boundary Layers using DNS Data,” AIAA Paper 2006-3539, 36th AIAA Fluid Dynamics Conference, San Francisco, CA, June 2006. Martin, M. P., Smits, A. J., Wu, M. and Ringuette, M., “The Turbulence Structure of Shockwave and Boundary Layer Interaction in a Compression Corner,” AIAA Paper 2006-0497, 44th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 2006 A.J. Smits and J.P. Dussauge “Turbulent Shear Layers in Supersonic Flow.” AIP Press/Springer Verlag, 1996, 357 pages. Second Edition, Springer Verlag, December 2005, 424 pages. ISBN (1^st Ed.) 1-56396-260-8. ISBN (2^nd Ed.) 0387261400. Dussauge, J. P., Dupont, P. and Smits, A. J. “Analogie entre champs thermiques et dynamiques appliquée aux écoulements libres en régime turulent supersonique,” 12th Journées Internationales de Thermique, Tangier, Morocco, November 15-17, 2005, T.1, pp. 165-168. Wyckham, C. M., Zaidi, S. H., Miles, R. B. and Smits, A. J., “Measurement of Aero-Optic Distortion in Transonic and Hypersonic, Turbulent Boundary Layers with Gas Injection” AIAA Paper 2005-4775, 35th AIAA Plasmadynamics and Lasers Conference, 6-9 June 2005, Toronto, Ontario, Canada. Bookey, P. Wyckham, C. M. and Smits A. J., “Experimental Investigations of Mach 3 Shock-Wave Turbulent Boundary Layer Interactions,” AIAA Paper 2005-4899, 35th AIAA Fluid Dynamics Conference, 6-9 June 2005, Toronto, Ontario, Canada. Taylor, E. M., Martin, M. P. and Smits A. J., “Preliminary Study of the Turbulence Structure in Supersonic Boundary Layers using DNS Data,” AIAA Paper 2005-5290, 35th AIAA Fluid Dynamics Conference, 6-9 June 2005, Toronto, Ontario, Canada. Bookey, P., Wyckham, C., Smits, A. J. and Martin, P., “New Experimental Data of STBLI at DNS/LES Accessible Reynolds Numbers,” AIAA Paper 2005-0309, 43rd AIAA Aerospace Sciences Meeting, Reno, Nevada, January 10-13, 2005. Wu, M., Bookey, P., Martin, P. and Smits, A. J., “Analysis of Shockwave/Turbulent Boundary Layer Interaction Using DNS and Experiment,” AIAA Paper 2005-0310, 43rd AIAA Aerospace Sciences Meeting, Reno, Nevada, January 10-13, 2005. Poggie, J., Erbland, P. J., Smits, A.J. and Miles, R. B., “Quantitative visualization of compressible turbulent shear flows using condensate-enhanced Rayleigh scattering.” Experiments in Fluids, Vol. 37, no. 3, pp. 438-454, 2004. Poggie, J. and Smits, A.J., “Experimental Evidence for the Plotkin Model of Shock Unsteadiness in Separated Flow.” Physics of Fluids, Vol. 17, 018107, 2004. Smits, A. J. and Martin, P., “Turbulent boundary layers at supersonic speeds” IUTAM Symposium on One Hundred Years of Boundary Layer Research, Aug. 12-14, 2004; DLR-Göttingen, Germany. Poggie, J. and Smits, A. J., “Large-scale structures in a compressible mixing layer over a cavity.” AIAA Journal. Vol. 41, No. 12, December 2003. Zaidi, S. H., Wyckham, C. M., Miles, R. B. and Smits A. J. “Characterization of Optical Wavefront Distortions due to a Boundary Layer at Hypersonic Speeds,” AIAA Paper 2003-4308, 34th AIAA Plasmadynamics and Lasers Conference, 23-26 June 2003, Orlando, Florida. Poggie, J. and Smits, A.J., “Shock Unsteadiness in a Reattaching Shear Layer.” Journal of Fluid Mechanics, Vol. 429, pp. 155-185, 2001. (This is a revised and extended version of Paper #96). Auvity, B., Etz, M.R. and Smits, A.J. “Effects of Transverse Helium Injection on Hypersonic Boundary Layers.” Physics of Fluids, Vol. 13(10), pp. 3025-32, 2001. (This is a revised and extended version of Paper #100). Konrad, W., Smits, A.J. and Knight, D., “Mean Flow Field Scaling of Supersonic Shock-Free Three-Dimensional Turbulent Boundary Layer.” AIAA Journal, Vol. 38, pp. 2120-2126, 2000. Huntley, M. and Smits, A.J. “Transition Studies on an Elliptic Cone in Mach 8 Flow Using Filtered Rayleigh Scattering,” European Journal of Mechanics B–Fluids, Vol. 19, No. 5, pp. 695-706, 2000. Wright, M.J., Sinha, K. Olejniczak, J., Candler, G.V., Magruder, T.D. and Smits, A.J., “Numerical and Experimental Investigation of Double-Cone Shock Interactions.” AIAA Journal, Vol. 38, pp. 2268-2276, 2000. Auvity, B., Etz, M., Huntley, M., Pingfan Wu and Smits, A.J. “Control of Hypersonic Boundary Layers by Helium Injection,” AIAA Paper 2000-2322, Fluids 2000 Conference, Denver, Colorado, June 19-22, Huntley, M. and Smits, A.J. “MHz Rate Imaging of Boundary Layer Transition on Elliptic Cones at Mach 8,” AIAA paper #00-0379, 38th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 10-13, 2000. Poggie, J. and Smits, A.J., “Shock Unsteadiness in a Reattaching Shear Layer.” AIAA paper #2000-0140, 38th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 10-13, 2000. Huntley, M. and Smits, A.J. “Transition Studies on Elliptic Cones in Mach 8 Flow Using Filtered Rayleigh Scattering,” First International Symposium on Turbulence and Shear Flow Phenomena, Sept. 12-15, 1999, Santa Barbara, CA. Wright, M.J., Sinha, K. Olejniczak, J., Candler, G.V., Magruder, T.D. and Smits, A.J., “Numerical and Experimental Investigation of Double-Cone Shock Interactions.” Army High Performance Computing Research Center AHPCRC Preprint 99-079. Konrad, W. and Smits, A.J., “Three-Dimensional Supersonic Turbulent Boundary Layer Generated by an Isentropic Compression.” Journal of Fluid Mechanics, Vol. 372, pp.1-23, 1998. Erbland, P.J., Etz, M.R., Huntley, M., Smits, A.J. and Miles, R.B., “Imaging the Evolution of Turbulent Structures in a Hypersonic Boundary Layer,” AIAA paper #98-2510, 20th AIAA Advanced Measurement and Ground Testing Conference, Albuquerque, NM, June 15-June 18, 1998. Erbland, P.J., Etz, M.R., Lempert, W.R., Smits, A.J. and Miles, R.B., “Optical Refraction from High Mach Number Turbulent Boundary Layer Structures,” AIAA paper #98-0399, 36th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 12 - 15, 1998. Debiève, J.F., Dupont, P., Smits, A.J. and Dussauge, J.P., “Balance of kinetic energy in a supersonic mixing layer compared to subsonic mixing layer and subsonic jets with variable density,” Proceedings of the IUTAM Symposium on Variable Density Low-Speed Turbulent Flows, Marseille, France, July 8-10, 1996. Published by Kluwer, Fulachier, L., Lumley, J.L., and Anselmet, F. (eds), 1997. Dussauge, J.P. and Smits, A.J., “Characteristic Scales for Energetic Eddies in Turbulent Supersonic Boundary Layers.” Experimental Thermal and Fluid Science, Vol. 14, No. 1, 1997. Smith, D.R. and Smits, A.J., “The Effects of Successive Distortions on the Behavior of a Turbulent Boundary Layer in a Supersonic Flow.” Journal of Fluid Mechanics, Vol. 351, pp. 253-288, 1997. Debiève, J.F., Dupont, P., Smith, D.R. and Smits, A.J., “The Response of a Supersonic Turbulent Boundary Layer to a Step Change in Wall Temperature.” AIAA Journal, Vol. 35, pp. 51-57, 1997. Poggie, J. and Smits, A.J., “Wavelet Analysis of Wall-Pressure Fluctuations in a Supersonic Blunt Fin Flow.” AIAA Journal, Vol. 35, pp. 1597-1603, 1997. Smits, A.J. “Compressible Turbulent Boundary Layers,” Chapter 1, AGARD Report 819 “Turbulence in Compressible Flows,” NATO, June 1997. Fielding, J., Yetter, R.A., Dryer, F.L. and Smits, A.J., “Reaction of Hydrgen-Oxygen Mixtures in a Laminar Supersonic Wind Tunnel,” Paper No. 97S-061, Spring Technical Meeting of the Western States Section of the Combustion Institute, Sandia National Laboratories, Livermore, CA, April 14-15, 1997. Wright, M.J., Olejniczak, J., Candler, G.V., Magruder, T.D. and Smits, A.J., “Numerical and Experimental Investigation of Double-Cone Shock Interactions,” AIAA paper #97-0063, 35th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 6 - 9, 1997. Also, University of Minnesota Supercomputer Institute Research Report 97/187, October 1997. Baumgartner, M.L., Erbland, P.J., Etz, M.R., Yalin, A., Muzas, B., Smits, A.J., Lempert, W. and Miles, R.B., “Structure of a Mach 8 Turbulent Boundary Layer,” AIAA paper #97-0765, 35th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 6 - 9, 1997. Dussauge, J.-P., Fernholz, H.H., Finley, P.J., Smith, R.W., Smits, A.J. and Spina, E.F., “Turbulent Boundary Layers in Subsonic and Supersonic Flows,” NATO-Advisory Group for Aerospace Research and Development AGARDograph, #335, 1996. Debiéve, J.F., Dupont, P., Dussauge, J.P., and Smits, A.J., “Compressibility versus Density Variations and the Structure of Turbulence: A Viewpoint from Experiments” IUTAM Symposium on Variable Density Low Speed Turbulent Flows, IRPHE, UMR CNRS/Universités d’Aix-Marseille I et II, July 8-10, 1996. Poggie, J. and Smits, A.J., “Large-Scale Coherent Turbulence Structures in a Compressible Mixing Layer,” AIAA paper #96-0440, 34th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 15 - 18, Poggie, J. and Smits, A.J., “Quantitative Visualization of Supersonic Flow Using Rayleigh Scattering,” AIAA paper #96-0436, 34th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 15 - 18, 1996. Smith, M.W. and Smits, A.J., “Visualization of the Structure of Supersonic Turbulent Boundary Layers.” Experiments in Fluids, Vol. 18, pp. 288-302, 1995. Dussauge, J.P. and Smits, A.J., “Characteristic Scales for Energetic Eddies in Turbulent Supersonic Boundary Layers.” Tenth Symposium on Turbulent Shear Flows, Pennsylvania State University, University Park, PA, August 14-16, 1995. Smits, A.J., “Mach and Reynolds Number Effects on Turbulent Boundary Layers.” AIAA Paper #95-0578, 33rd AIAA Aerospace Sciences Meeting, Reno, Nevada, January 9 - 12, 1995. Spina, E.F., Smits, A.J. and Robinson, S.K., “The Physics of Supersonic Turbulent Boundary Layers.” Annual Review of Fluid Mechanics, Vol. 26, pp. 287-319, 1994. Donovan, J.F., Spina, E.F. and Smits, A.J., “The Structure of Supersonic Turbulent Boundary Layers Subjected to Concave Surface Curvature.” Journal of Fluid Mechanics, Vol. 259, pp. 1 -24, 1994. Konrad, W., Smits, A.J. and Knight, D., “A Combined Experimental and Numerical Study of a Three-Dimensional Supersonic Turbulent Boundary Layer” Experimental Thermal and Fluid Science, Vol. 9, No. 2, pp. 156-164, 1994. Smith, D.R. and Smits, A.J., “The Effects of Streamline Curvature and Pressure Gradient on the Behavior of Turbulent Boundary Layers in Supersonic Flow,” AIAA Paper #94-2227, 25th AIAA Fluid Dynamics Conference, June 20-23, 1994. Smith, D.R. and Smits, A.J., “Multiple Distortion of a Supersonic Turbulent Boundary Layer.” Proceedings Fourth European Turbulence Conference, Delft, The Netherlands, July 1992. Applied Scientific Research, Vol. 51, pp. 223-229, 1993 (This is a revised and extended version of Paper #49). Shen, Z.-H., Smith, D.R. and Smits, A.J., “Wall Pressure Fluctuations in the Reattachment Region of a Supersonic Free Shear Layer.” Experiments in Fluids, Vol. 14, pp. 10-16, 1993. Smith, D.R. and Smits, A.J., “Simultaneous Measurement of Velocity and Temperature Fluctuations in the Boundary Layer of a Supersonic Flow.” Experimental Thermal and Fluid Science, Vol. 7, pp. 221-229, 1993. Poggie, J. and Smits, A.J., “Control of Pressure Fluctuations in the Reattachment Region of a Supersonic Shear Layer,” AIAA paper #93-3248, AIAA 3rd Shear Flow Control Conference, Orlando, Florida, July 7-9, 1993. Cogne, S., Forkey, J., Miles, R.B. and Smits, A.J., “The Evolution of Large-Scale Structures in a Supersonic Turbulent Boundary Layer,” Proceedings Symposium on Transitional and Turbulent Compressible Flows, ASME Fluids Engineering Conference, Washington, D.C., June 20-24, 1993. Konrad, W. and Smits, A.J., “Reynolds Stress Measurements in a Three-Dimensional Supersonic Turbulent Boundary Layer,” Proceedings Third International Symposium on Thermal Anemometry, ASME Fluids Engineering Conference, Washington, D.C., June 20-24, 1993. Konrad, W., Smits, A.J. and Knight, D., “A Combined Experimental and Numerical Study of a Three-Dimensional Supersonic Turbulent Boundary Layer” Proceedings Second International Symposium on Engineering Turbulence Modelling and Measurements, Wolfgang Rodi and F. Martelli (eds.), Elsevier Science, 1993. Poggie, J. and Smits, A.J., “Control of Pressure Fluctuations in the Reattachment Region of a Supersonic Shear Layer,” AIAA Paper #93-0385, AIAA 30th Aerospace Sciences Meeting, Reno, Nevada, January Konrad, W., Smits, A.J. and Knight, D., “Mean Flowfield Structure of a Three-Dimensional Supersonic Turbulent Boundary Layer,” AIAA Paper #93-0661, AIAA 30th Aerospace Sciences Meeting, Reno, Nevada, January 1993. Smits, A.J., Introduction to Chapter 3 “Compressibility Effects”, Proceedings of Fifth Symposium on Turbulent Shear Flows, Munich, Germany, September 1991, pp. 225-228. Springer-Verlag, 1992. Smith, D.R. and Smits, A.J., “Multiple Distortion of a Supersonic Turbulent Boundary Layer.” Proceeding Fourth European Turbulence Conference, Delft, The Netherlands, July 1992. Smith, D.R. and Smits, A.J., “The Effect of Multiple Distortions on the Boundary Layer in a Supersonic Flow,” AIAA Paper #92-0309, AIAA 30th Aerospace Sciences Meeting, Reno, Nevada, January 1992. Konrad, W., Smits, A.J. and Knight, D., “A Three-Dimensional Supersonic Turbulent Boundary Layer Generated by an Isentropic Compression,” AIAA Paper #92-0310, AIAA 30th Aerospace Sciences Meeting, Reno, Nevada, January 1992. Poggie, J., Smits, A.J. and Glezer, A., “The Dynamics and Control of Fluctuating Pressure Loads in the Reattachment region of a Supersonic Free Shear Layer,” AIAA Paper #92-0178, AIAA 30th Aerospace Sciences Meeting, Reno, Nevada, January 1992. Spina, E.F., Donovan, J.F. and Smits, A.J., “On the Structure of High-Reynolds- Number Supersonic Turbulent Boundary Layers,” Journal of Fluid Mechanics, Vol. 222, pp.293-327, 1991. Smith, D.R., Poggie, J. and Smits, A.J., “Application of Rayleigh Scattering to Supersonic Turbulent Flows”, Proc. Fifth International Symposium on Applications of Laser Techniques to Fluid Mechanics, July 9-12 1990, Lisbon, Portugal. Springer-Verlag, 1991. Spina, E.F., Donovan, J.F. and Smits, A.J., “Convection Velocity in a Supersonic Turbulent Boundary Layer,” Physics of Fluids A, Vol. 3, No.12, December 1991. Selig, M.S. and Smits, A.J., “Effect of Periodic Blowing on Attached and Separated Supersonic Turbulent Boundary Layers”. AIAA Journal, Vol. 29, pp. 1651-1658, 1991. Smits, A.J., “Turbulent Boundary Layer Structure in Supersonic Flow.” Philosophical Transactions of the Royal Society, A, Vol. 336, pp. 81-93, 1991. Smith, D.R. and Smits, A.J., “The Rapid Expansion of a Turbulent Boundary Layer in a Supersonic Flow,” Abstract published in Studies in Turbulence, Eds. T.B. Gatski, S. Sarkar and C.G. Speziale, Springer-Verlag NY, 1991. Full paper appeared in Theoretical and Computational Fluid Dynamics, Vol. 1, pp. 319-328, 1991. Smith, R.W., Poddar, K. and Smits A.J., “Application of the Wavelet Transform to the Analysis of Turbulent Flows,” USA-French Workshop Wavelets and Turbulence, Princeton, NJ, June 3-7, 1991. Smith, D.R., Poggie, J., Konrad, W. and Smits, A.J., “Visualization of the Structure of Shock Wave Turbulent Boundary Layer Interactions Using Rayleigh Scattering”, AIAA Paper #91-0651, AIAA 29th Aerospace Sciences Meeting, Reno, Nevada, January 1991. Selig, M.S. and Smits, A.J., “The Dynamic Behavior of Shock-Wave/Turbulent Boundary Layer Interactions”, Proceedings of the Zoran Zaric Memorial Meeting on Near-Wall Turbulence, Dubrovnik, Yugoslavia, May 16-20, 1988. Springer-Verlag, 1990. Fernando, E.M. and Smits, A.J., “A Supersonic Turbulent Boundary Layer in an Adverse Pressure Gradient,” Journal of Fluid Mechanics, Vol.211, pp.285-307, 1990. Smits, A.J., “New Developments in the Understanding of Supersonic Turbulent Boundary Layer Structure”, Proc. Twelfth Symposium on Turbulence, University of Missouri-Rolla, Rolla, Missouri, October 17-19, 1990. Spina, E.F., Donovan, J.F. and Smits, A.J., “Convection Velocity in a Supersonic Turbulent Boundary Layer,” Proc. Twelfth Symposium on Turbulence, University of Missouri-Rolla, Rolla, Missouri, October 17-19, 1990. Shen, Z.H., Smith D.R. and Smits, A.J., “Wall Pressure Fluctuations in the Reattachment Region of a Supersonic Free Shear Layer”, AIAA Paper #90-1461, AIAA 21st Fluid and Plasmadynamics Conference, Seattle, Washington, June 18-20 1990. Donovan, J.F. and Smits, A.J., “Large-Scale Motions in a Supersonic Turbulent Boundary Layer on a Curved Surface”, AIAA Paper 90-0019, AIAA 28th Aerospace Sciences Meeting, Reno, Nevada, January Fernholz, H.H., Smits, A.J., Dussauge, J.-P. and P.J. Finley, (Eds.), “A Survey of Measurements and Measuring Techniques in Rapidly Distorted Compressible Turbulent Boundary Layers,” NATO-Advisory Group for Aerospace Research and Development AGARDograph, #315, 1989. Dussauge, J.P., Debieve, J.F. and Smits, A.J., “Rapidly Distorted Compressible Boundary Layers”, Chapter 2, NATO AGARDograph #315, 1989 Smits, A.J. and Watmuff, J.H., “Large-Scale Motions in Supersonic Turbulent Boundary Layers”, Chapter 3, NATO AGARDograph, #315, 1989. Degani, D. and Smits, A.J., “Numerical Study of the Response of a Compressible Turbulent Boundary Layer to a Short Region of Surface Curvature,” AIAA Journal, Vol.27, pp.23-26, 1989. This article is a revised and expanded version of AIAA Paper 85-1677. Selig, M.S., Andreopoulos, J., Muck, K.C., J.P. Dussauge and Smits, A.J., “Turbulence Structure in a Shock-Wave/Turbulent Boundary Layer Interaction”, AIAA Journal, Vol.27, pp.862-869, 1989. This article is a revised and expanded version of AIAA Paper 87-0550. Jayaram, M., Donovan, J.F., Dussauge, J.-P. and Smits, A.J., “Analysis of a Rapidly Distorted, Supersonic, Turbulent Boundary Layer,” The Physics of Fluids A, Vol.1(11), pp.1855-1864, Nov. 1989. Smits, A.J., “The Structure of Supersonic Turbulent Boundary Layers: What We Know and What We Think We Know”, Proceedings of the International Workshop on the Physics of Compressible Turbulent Mixing, Princeton, N.J., October 24-27, 1988. Springer-Verlag, 1989. Selig, M.S., Fernando, E.M. and Smits, A.J., “Periodic Slot Blowing of a Supersonic Turbulent Boundary Layer,” Proceedings of the International Workshop on the Physics of Compressible Turbulent Mixing, Princeton, N.J., October 24-27, 1988. Springer-Verlag, 1989. Smith, M.W., Smits, A.J. and Miles, R.B., “Compressible Boundary Layer Density Cross Sections by UV Rayleigh Scattering,” Optics Letters, Vol.14(17), pp.916-918, 1989. Degani, D. and Smits, A.J., “The Effect of Short Regions of Surface Curvature on Compressible Turbulent Boundary Layers,” AIAA Journal, Vol.28, pp.113-119, 1990. This paper replaces an incorrect version published through editorial error in the AIAA Journal, Vol.27, pp.23-28, 1989. Smith, M.W., Kumar, V., Smits, A.J. and Miles, R.B., “The Structure of the Instantaneous Density Field in Supersonic Turbulent Boundary Layers”, Paper 2A-2, Tenth Australasian Fluid Mechanics Conference, Univ. of Melbourne, Melbourne, Australia, December 11-15, 1989. Smith, M.W., Kumar, V., Smits, A.J. and Miles, R.B., “The Structure of Supersonic Turbulent Boundary Layers as Revealed by Line Profiles and Density Cross Sections”, Paper 19-1, Proc. Seventh Symp. on Turbulent Shear Flows, Stanford Univ., Stanford, California, August 1989. Smits, A.J., Spina, E.F., Alving, A.E., Smith, R.W., Fernando, E.M. and Donovan, J.F., “A Comparison of the Turbulence Structure of Subsonic and Supersonic Boundary Layers”, The Physics of Fluids A, Vol.1(11), pp.1865-1875, Nov. 1989. Smith, M.W. and Smits, A.J., “Cinematic Visualization of Coherent Density Structures in a Supersonic Turbulent Boundary Layer”, AIAA Paper 88-0500, AIAA 26th Aerospace Sciences Meeting, Reno, Nevada, January 1988. Smits, A.J., Alving, A.E., Smith, R.W., Spina, E.F., Fernando, E.M. and Donovan, J.F., “A Comparison of the Turbulence Structure of Subsonic and Supersonic Boundary Layers”, Proc. Eleventh Symposium on Turbulence, University of Missouri-Rolla, Rolla, Missouri, October 17-19, 1988. 10Carvin, C., Debieve, J.F. and Smits, A.J., “The Near-Wall Temperature Profile of Turbulent Boundary Layers”, AIAA Paper 88-0136, AIAA 26th Aerospace Sciences Meeting, Reno, Nevada, January 1988. Jayaram, M., Taylor, M.W., and Smits, A.J., “The Response of a Compressible Turbulent Boundary Layer to Short Regions of Concave Surface Curvature”, Journal of Fluid Mechanics, Vol.175, pp.343-362, Spina, E.F. and Smits, A.J., “Organized Structures in a Compressible Turbulent Boundary Layer”, Journal of Fluid Mechanics, Vol.182, pp.85-109, 1987. Smits, A.J. and Muck, K.C., “Experimental Study of Three Shock Wave Turbulent Boundary-Layer Interactions”, Journal of Fluid Mechanics, Vol.182, pp.291-314, 1987. Fernando, E.M., Spina, E.F., Donovan, J.F. and Smits, A.J., “Detection of Large-Scale Organized Motions in a Turbulent Boundary Layer”, Proc. Sixth Symp. on Turbulent Shear Flows, Toulouse, France, Sept. 7-9, 1987, pp.16.8.1-16.8.6. Donovan, J.F. and Smits, A.J., “ A Preliminary Investigation of Large-Scale Organized Motions in a Supersonic Turbulent Boundary Layer on a Curved Surface”, AIAA Paper 87-1285, AIAA 19th Fluid Dynamics, Plasma Dynamics and Laser Conference, Honolulu, Hawaii, June 1987. Fernando, E.M. and Smits, A.J., “The Effects of an Adverse Pressure Gradient on the Behavior of a Flat Plate Supersonic Turbulent Boundary Layer”, AIAA Paper 87-1286, AIAA 19th Fluid Dynamics, Plasma Dynamics and Laser Conference, Honolulu, Hawaii, June 1987. Selig, M.S., Andreopoulos, J., Muck, K.C., Dussauge, J.P. and Smits, A.J., “Simultaneous Wall-Pressure and Mass-Flow Measurements Downstream of a Shock Wave/Turbulent Boundary Layer Interaction,” AIAA Paper 87-0550, 25th Aerospace Sciences Meeting, Reno, Nevada, January 1987. Spina, E.F. and Smits, A.J., “The Effect of Compressibility on the Large-Scale Structure of a Turbulent Boundary Layer”, AIAA Paper 87-0195, 25th Aerospace Sciences Meeting, Reno, Nevada, January Spina, E.F. and Smits, A.J., “Organized Structures in a Supersonic, Turbulent Boundary Layer”, Proc. of the Ninth Australasian Fluid Mechanics Conference, Univ. of Auckland, Auckland, N.Z., December Smits, A.J. and Bogdonoff, S.M., “A `Preview’ of Three-Dimensional Shock-Wave/Turbulent Boundary Layer Interactions,” Proceedings of the IUTAM Symposium on Turbulent Shear Layer/Shock Wave Interactions, Palaiseau, France, September 9-12, 1985. Published by Springer Verlag, 1986. Jayaram, M., Dussauge, J.-P. and Smits, A.J., “Analysis of a Rapidly Distorted, Supersonic, Turbulent Boundary Layer,” Proc. Fifth Symposium on Turbulent Shear Flows, Cornell University, Ithaca, NY, August 7-9, 1985. Degani, D. and Smits, A.J., “Numerical Study of the Response of a Compressible Turbulent Boundary Layer to a Short Region of Surface Curvature,” AIAA Paper 85-1677, AIAA 18th Fluid Dynamics, Plasma Dynamics and Lasers Conference, Cincinnati, Ohio, July 1985. Jayaram, M. and Smits, A.J., “The Distortion of a Supersonic Turbulent Boundary Layer by Bulk Compression and Streamline Curvature,” AIAA Paper 85-0299, AIAA 03rd Aerospace Sciences Meeting, Reno, Nevada, January 1985. Muck, K.C. and Smits, A.J., “The Behavior of a Compressible Turbulent Boundary Layer Under Incipient Separation Conditions”, Proceedings of the Fourth Symposium on Turbulent Shear Flows, Karlsruhe, September, 1983, Published by Springer-Verlag, 1984. Hayakawa, K., Smits, A.J. and Bogdonoff, S.M., “Hot-Wire Investigation of an Unseparated Shock-Wave/Turbulent Boundary Layer Interaction”, AIAA Journal, Vol.22, pp.579-585, 1984. This article is a revised and expanded version of AIAA Paper 82-0985 (paper #7). Hayakawa, K., Smits, A.J. and Bogdonoff, S.M., “Turbulence Measurements in a Compressible Reattaching Shear Layer”, AIAA Journal, Vol.22, pp.889-895, 1984. This article is a revised and expanded version of AIAA Paper 83-0299. Taylor , M.W. and Smits, A.J., “The Effect of a Short Region of Concave Curvature on a Supersonic Turbulent Boundary Layer,” AIAA Paper 84-0169, AIAA 22nd Aerospace Sciences Meeting, Reno, Nevada, January 1984. Muck, K.C., and Smits, A.J., “Behavior of a Turbulent Boundary Layer Subjected to a Shock-Induced Separation,” AIAA Paper 84-0097, AIAA 22nd Aerospace Sciences Meeting, Reno, Nevada, January 1984. Hayakawa, K., Smits, A.J. and Bogdonoff, S.M., “Turbulence Measurements in Two Shock-Wave/Shear Layer Interactions”, Proceedings of the I.U.T.A.M. Symposium on the Structure of Complex Turbulent Shear Flow, Marseille, August-September, 1982. Published by Springer-Verlag, 1983. Hayakawa, K., Muck, K.C., Smits, A.J. and Bogdonoff, S.M., “The Evolution of Turbulence in Shock-Wave/Boundary Layer Interactions,” Proc. of the Eighth Australasian Fluid Mechanics Conference, Newcastle, Australia, December, 1983, pp.9B 7-9. Hayakawa, K., Smits, A.J. and Bogdonoff, S.M., “Turbulence Measurements in a Compressible Reattaching Shear Layer,” AIAA Paper 83-0299, AIAA 21st Aerospace Sciences Meeting, Reno, Nevada, January 10-13, 1983. Hayakawa, K., Smits, A.J., and Bogdonoff, S.M., “Hot-Wire Investigation of an Unseparated Shock-Wave/Turbulent Boundary Layer Interaction,” AIAA Paper No. 82-0985, Third AIAA/ASME Joint Thermophysics, Fluids, Plasma and Heat Transfer Conference, St. Louis, Missouri, June 1982.
{"url":"http://www.princeton.edu/mae/people/faculty/smits/homepage/facilities/compressible-flow-and-sho/","timestamp":"2014-04-17T16:17:54Z","content_type":null,"content_length":"40822","record_id":"<urn:uuid:a38bada4-6840-49ed-b985-e2ae9c41d4f8>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Last of the Careless Men Euler Project #17 : "If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total. "If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?" Their notes say something about "and" in numbers British-style, but I have ignored this as I have no idea where the ands would go. This is a slow, stupid way to solve this problem. (Though I'm reasonably happy with it as an example of Perl 6 programming.) On the other hand, this way of doing it makes it much easier to check for correctness. (Though I discovered I'd misspelled one of the numbers in the process of posting this. Sigh.) This runs in just over 3 minutes on my MBP. I will be very disappointed if I can't get that time below 20 seconds. But that is a matter for a future post. PS I would really like to use Yuval Kogman's system for cleaning up / speeding up Gist embedding in blog posts. Unfortunately, I can't quite make sense of what he is actually doing. Afraid I have fallen a bit behind the technological curve here... 2 comments: 1. If using Perl 5, this problem just begs for some CPAN leverage: use 5.10.0; use Lingua::EN::Numbers qw/ num2en /; use List::Util qw/ sum /; say sum map { y/a-z// } map { num2en( $_ ) } 1..1000; 2. Ha! Yes, as always, you can get rather drastic improvements by using CPAN.
{"url":"http://lastofthecarelessmen.blogspot.com/2009/08/project-euler-17.html?showComment=1249306756243","timestamp":"2014-04-18T18:11:23Z","content_type":null,"content_length":"51179","record_id":"<urn:uuid:227d348c-1c35-4b9b-8f7c-f074c762254a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
What is importance of frequency domain analysis over time ? 1. 9th February 2009, 13:30 #1 Newbie level 6 Join Date Feb 2009 0 / 0 What is importance of frequency domain analysis over time ? Why signals are mostly studied in frequency domain? 2. 9th February 2009, 15:30 #2 What is importance of frequency domain analysis over time ? I think, because if you know frequency response of the system, you can easily predict its response in time domain for any type of input signal. (And vice-versa as well). Another reason could be well known technique of determining system stability based on frequency response. 3. 10th February 2009, 11:49 #3 Newbie level 6 Join Date Feb 2009 0 / 0 What is importance of frequency domain analysis over time ? □ 10th February 2009, 11:49 4. 10th February 2009, 18:16 #4 Re: What is importance of frequency domain analysis over ti if you know frequency response of the system, you can easily predict its response in time domain for any type of input signal. This must be not a simplest explanation, but anyway... Every system has a transfer function in frequency domain: T(ω). Very often it is done not as a function of frequency ω (ω=2πf), but Laplace parameter s=jω: T(s). If you apply signal f(t) to the system and want to know output g(t), there is a simple formula in frequency domain: G(s) = T(s)*F(s), where G(s) and F(s) are Laplace transforms of g(t) and f(t). So, you have f(t), then found F(s), multiply by T(s), get G(s), performe inverse Laplace transform to obtain g(t). Another reason could be well known technique of determining system stability based on frequency response. This is part of control theory. Simple example is using gain and phase margins on Bode plot to analyse stability of the closed loop system. Here is a link. Thank you. 5. 11th February 2009, 08:46 #5 Advanced Member level 1 Join Date May 2008 51 / 51 What is importance of frequency domain analysis over time ? we can see some poles n zeros of system easily to check abt system stability
{"url":"http://www.edaboard.com/thread142660.html","timestamp":"2014-04-18T20:44:34Z","content_type":null,"content_length":"71382","record_id":"<urn:uuid:62b3ca33-7b16-448c-af97-0786498dbb00>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Section 25-46 Intersections. Section 25-46 Intersections. (a) Streets shall intersect one another at as near a ninety-degree angle as possible. No intersection of streets at angles less than sixty (60) degrees shall be approved. (b) When streets intersect at a ninety-degree angle or when a street intersects with a cul-de-sac terminal bulb, the intersection right-of-way lines shall be rounded by a curve with a radius of not less than twenty (20) feet for residential streets and not less than thirty (30) feet for nonresidential streets. (c) When streets intersect at an angle of less than ninety (90) degrees, the director of public works may require the intersecting right-of-way lines to be rounded by a curve with a radius greater than, required for streets intersecting, at a ninety-degree angle. (d) The intersection of more than two (2) streets at, any one (1) point shall be avoided except where necessary; to secure a proper street system. (e) Intersecting streets shall have center lines as nearly straight as possible. Streets with center line offsets at intersections shall be offset by less than five (5) feet or more than one hundred twenty-five (125) feet. (Code 1964, § 19.544(2)(D); Ord. No. 10099, § 1, 3-5-84)
{"url":"http://gocolumbiamo.com/Council/Columbia_Code_of_Ordinances/Chapter_25/46.html","timestamp":"2014-04-20T08:15:05Z","content_type":null,"content_length":"2172","record_id":"<urn:uuid:c1e69951-ab69-43f5-9f34-c6723db4fe2f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Dr. C. Scott Ananian In a previous post I introduced the "Baker's" square dance concept. By analogy to "Baker's dozen", it was proposed the "Baker's" meant to do 13/12 of the call it modified. But let's reconsider: a "baker's half dozen" isn't 6 1/2 eggs, it's 7. Let's redefine the "Baker's" concept to mean "one more part" of the call. (Hence the title of this post: "one more square dance So a "Baker's Square Thru" is a square thru 5. A "Baker's Eight Chain 3" is an eight chain 4. A "Baker's Right and Left Grand" goes 5 hands around. Note that this is different from "do the last part twice"; we continue the alternation of parts in the base call. Let's push this a little further to include calls with arithmetic sequences. "Baker's Remake the Wave" would end with 4 quarters by the left. "Baker's Quarter Thru" would be a remake the wave. "Baker's Three Quarter Thru" is a reverse order remake. Slightly more controversial: "Baker's Swing the Fractions" would end with zero quarters by the left (would that still set a roll direction?). "Baker's Touch By 1/4 By 1/2" from a single file column of 6 would have the very outsides touch 3/4. Sue Curtis suggests that "Baker's Reverse Echo As Couples Trade" (from a tidal one-faced line) would be "trade, as couples trade, as couples as couples trade", or equivalently "trade, couples trade, couples of 4 trade". This definition for "Baker's" seems a lot more fun. Do any other square dance callers have concept-worthy last names?
{"url":"http://cananian.livejournal.com/?skip=35","timestamp":"2014-04-25T07:34:06Z","content_type":null,"content_length":"199556","record_id":"<urn:uuid:a6efba30-8c13-40c6-9612-0e4a0b5b0020>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Work out the resulant force 7 Answers I have to figure out the resultant force of this system, and I haven't a clue where to start I have added an attachment picture which I created in paint with the diagram. I have literally just signed up to this site minutes ago in desperation as this question has my head scrambled! Thank you for any help received!
{"url":"http://www.askmehelpdesk.com/physics/work-out-resulant-force-398398.html","timestamp":"2014-04-16T10:19:47Z","content_type":null,"content_length":"78313","record_id":"<urn:uuid:65fb62fa-26b6-487e-b6e7-847e6d26fc88>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
X.0 vs. X.9 Rear Shifter 01-25-2010 #1 mtbr member Join Date Oct 2008 X.0 vs. X.9 Rear Shifter Is it worth paying a little bit extra to get the X.0 over the X.9? Both have zero loss travel. I'm upgrading from X.7 and it will be shifting an X.0 RD. I do like the bling factor of the X.0 and would love a red one to match my cassette and RD but not sure its worth the extra cash. If you ever see a turtle on a telephone pole, remember he had help getting there. Is there anything beer can't do? more worth it to spend $ on XO shifters. Rimmer - "There's an old human saying - if you talk garbage, expect pain" I have owned both x7 and x9, noticed no differences between those two. I have ridden others' bikes equipped with x0 shifters and the feel is definitely better. Definitely more of a quicker click with the x0's, it also shifts with a shorter lever throw. 2006 Turner 5 Spot 2007 Trek 2100 ZR Do it!!! then get the shifter upgrade. You can then have spares, see very logical. Matt 891 29 inch wheels are a crutch for riders with sub-par skills. Is it worth paying a little bit extra to get the X.0 over the X.9? Both have zero loss travel. I'm upgrading from X.7 and it will be shifting an X.0 RD. I do like the bling factor of the X.0 and would love a red one to match my cassette and RD but not sure its worth the extra cash. Functionally they're both the same. Whether you want to pay more for the bling or not is up to you. Twist? Trigger? What shifter are you asking a question about? I assume trigger. ... And I Am You, And What I See Is Me! Twist? Trigger? What shifter are you asking a question about? . I am amazed anyone gave advice without asking this question first. Twist? Trigger? What shifter are you asking a question about? I assume trigger. Sorry, didn't think about that. Trigger shifters. If you ever see a turtle on a telephone pole, remember he had help getting there. Is there anything beer can't do? Do it!!! then get the shifter upgrade. You can then have spares, see very logical. What's the shifter upgrade you are talking about? X.7 to X.0? I didn't clarify I guess but I already have an X.0 RD powered by X.7 triggers. I'm wanting to get X.9 or X.0 triggers as an upgrade. If you ever see a turtle on a telephone pole, remember he had help getting there. Is there anything beer can't do? 01-25-2010 #2 mtbr member Join Date Apr 2009 01-25-2010 #3 mtbr member Join Date Apr 2008 01-26-2010 #4 Rip Van Winkle Join Date Feb 2009 01-26-2010 #5 mtbr member Join Date Jan 2004 01-26-2010 #6 mtbr member Join Date Jun 2007 01-26-2010 #7 Join Date Jan 2004 01-26-2010 #8 mtbr member Join Date Oct 2008 01-26-2010 #9 mtbr member Join Date Oct 2008
{"url":"http://forums.mtbr.com/drivetrain-shifters-derailleurs-cranks/x-0-vs-x-9-rear-shifter-588281.html","timestamp":"2014-04-17T08:13:01Z","content_type":null,"content_length":"94685","record_id":"<urn:uuid:cbc1ff4f-bec0-4917-b93e-dab83b19189c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Collatz Sequence Computed by a Turing Machine The Collatz sequence is formed by an iteration of numbers produced by the following rule: If is even, then replace by ; if is odd then replace by . This Demonstration implements an 8-state 3-color Turing machine that computes this sequence. The machine works like this: Write an initial number in base 2 on the blank tape. The head of the machine is initially located over the least significant byte of the number. If is even, the head erases the digit 0 and moves to the left, so that it lies over the less significant byte of . If is odd, the machine performs the sum by adding the number with itself shifted one position to the left plus one. To do this, the head writes 0, saves the digit 0 and the residue 1 and then moves to the left. Now, the head repeatedly adds the current digit plus the saved digit plus the residue saved in the head and saves the residue of the operation and the current digit in the head, then moves to the left. When the addition is done, the head returns to the position of the least significant byte of . The computation does not stop at . It keeps computing the periodic sequence 1, 2, 4, 1, …, which can be interpreted as stopping.
{"url":"http://demonstrations.wolfram.com/CollatzSequenceComputedByATuringMachine/","timestamp":"2014-04-21T04:36:12Z","content_type":null,"content_length":"44308","record_id":"<urn:uuid:eaa01981-c7d7-4f5e-9af8-1aa124d9b214>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry and the imagination You are currently browsing the tag archive for the ‘slice’ tag. Last week, Michael Brandenbursky from the Technion gave a talk at Caltech on an interesting connection between knot theory and quasimorphisms. Michael’s paper on this subject may be obtained from the arXiv. Recall that given a group $G$, a quasimorphism is a function $\phi:G \to \mathbb{R}$ for which there is some least real number $D(\phi) \ge 0$ (called the defect) such that for all pairs of elements $g,h \in G$ there is an inequality $|\phi(gh) - \phi(g) - \phi(h)| \le D(\phi)$. Bounded functions are quasimorphisms, although in an uninteresting way, so one is usually only interested in quasimorphisms up to the equivalence relation that $\phi \sim \psi$ if the difference $|\phi - \psi|$ is bounded. It turns out that each equivalence class of quasimorphism contains a unique representative which has the extra property that $\phi(g^n) = n\phi(g)$ for all $g\in G$ and $n \in \mathbb{Z}$. Such quasimorphisms are said to be homogeneous. Any quasimorphism may be homogenized by defining $\overline{\phi}(g) = \lim_{n \to \infty} \phi(g^n)/n$ (see e.g. this post for more about quasimorphisms, and their relation to stable commutator length). Many groups that do not admit many homomorphisms to $\mathbb{R}$ nevertheless admit rich families of homogeneous quasimorphisms. For example, groups that act weakly properly discontinuously on word-hyperbolic spaces admit infinite dimensional families of homogeneous quasimorphisms; see e.g. Bestvina-Fujiwara. This includes hyperbolic groups, but also mapping class groups and braid groups, which act on the complex of curves. Michael discussed another source of quasimorphisms on braid groups, those coming from knot theory. Let $I$ be a knot invariant. Then one can extend $I$ to an invariant of pure braids on $n$ strands by $I(\alpha) = I(\widehat{\alpha \Delta})$ where $\Delta = \sigma_1 \cdots \sigma_{n-1}$, and the “hat” denotes plat closure. It is an interesting question to ask: under what conditions on $I$ is the resulting function on braid groups a quasimorphism? In the abstract, such a question is probably very hard to answer, so one should narrow the question by concentrating on knot invariants of a certain kind. Since one wants the resulting invariants to have some relation to the algebraic structure of braid groups, it is natural to look for functions which factor through certain algebraic structures on knots; Michael was interested in certain homomorphisms from the knot concordance group to $\mathbb{R}$. We briefly describe this group, and a natural class of homomorphisms. Two oriented knots $K_1,K_2$ in the $3$-sphere are said to be concordant if there is a (locally flat) properly embedded annulus $A$ in $S^3 \times [0,1]$ with $A \cap S^3 \times 0 = K_1$ and $A \cap S^3 \times 1 = K_2$. Concordance is an equivalence relation, and the equivalence classes form a group, with connect sum as the group operation, and orientation-reversed mirror image as inverse. The only subtle aspect of this is the existence of inverses, which we briefly explain. Let $K$ be an arbitrary knot, and let $K^!$ denote the mirror image of $K$ with the opposite orientation. Arrange $K \cup K^!$ in space so that they are symmetric with respect to reflection in a dividing plane. There is an immersed annulus $A$ in $S^3$ which connects each point on $K$ to its mirror image on $K^!$, and the self-intersections of this annulus are all disjoint embedded arcs, corresponding to the crossings of $K$ in the projection to the mirror. This annulus is an example of what is called a ribbon surface. Connect summing $K$ to $K^!$ by pushing out a finger of each into an arc in the mirror connects the ribbon annulus to a ribbon disk spanning $K \# K^!$. A ribbon surface (in particular, a ribbon disk) can be pushed into a (smoothly) embedded surface in a $4$-ball bounding $S^3$. Puncturing the $4$-ball at some point on this smooth surface, one obtains a concordance from $K\#K^!$ to the unknot, as claimed. The resulting group is known as the concordance group $\mathcal{C}$ of knots. Since connect sum is commutative, this group is abelian. Notice as above that a slice knot — i.e. a knot bounding a locally flat disk in the $4$-ball — is concordant to the unknot. Ribbon knots (those bounding ribbon disks) are smoothly slice, and therefore slice, and therefore concordant to the trivial knot. Concordance makes sense for codimension two knots in any dimension. In higher even dimensions, knots are always slice, and in higher odd dimensions, Levine found an algebraic description of the concordance groups in terms of (Witt) equivalence classes of linking pairings on a Seifert surface; (some of) this information is contained in the signature of a knot. Let $K$ be a knot (in $S^3$ for simplicity) with Seifert surface $\Sigma$ of genus $g$. If $\alpha,\beta$ are loops in $\Sigma$, define $f(\alpha,\beta)$ to be the linking number of $\alpha$ with $\ beta^+$, which is obtained from $\beta$ by pushing it to the positive side of $\Sigma$. The function $f$ is a bilinear form on $H_1(\Sigma)$, and after choosing generators, it can be expressed in terms of a matrix $V$ (called the Seifert matrix of $K$). The signature of $K$, denoted $\sigma(K)$, is the signature (in the usual sense) of the symmetric matrix $V + V^T$. Changing the orientation of a knot does not affect the signature, whereas taking mirror image multiplies it by $-1$. Moreover, if $\Sigma_1,\Sigma_2$ are Seifert surfaces for $K_1,K_2$, one can form a Seifert surface $\ Sigma$ for $K_1 \# K_2$ for which there is some sphere $S^2 \in S^3$ that intersects $\Sigma$ in a separating arc, so that the pieces on either side of the sphere are isotopic to the $\Sigma_i$, and therefore the Seifert matrix of $K_1 \# K_2$ can be chosen to be block diagonal, with one block for each of the Seifert matrices of the $K_i$; it follows that $\sigma(K_1 \# K_2) = \sigma(K_1) + \ sigma(K_2)$. In fact it turns out that $\sigma$ is a homomorphism from $\mathcal{C}$ to $\mathbb{Z}$; equivalently (by the arguments above), it is zero on knots which are topologically slice. To see this, suppose $K$ bounds a locally flat disk $\Delta$ in the $4$-ball. The union $\Sigma':=\Sigma \cup \Delta$ is an embedded bicollared surface in the $4$-ball, which bounds a $3$-dimensional Seifert “surface” $W$ whose interior may be taken to be disjoint from $S^3$. Now, it is a well-known fact that for any oriented $3$-manifold $W$, the inclusion $\partial W \to W$ induces a map $H_1(\ partial W) \to H_1(W)$ whose kernel is Lagrangian (with respect to the usual symplectic pairing on $H_1$ of an oriented surface). Geometrically, this means we can find a basis for the homology of $\ Sigma'$ (which is equal to the homology of $\Sigma$) for which half of the basis elements bound $2$-chains in $W$. Let $W^+$ be obtained by pushing off $W$ in the positive direction. Then chains in $W$ and chains in $W^+$ are disjoint (since $W$ and $W^+$ are disjoint) and therefore the Seifert matrix $V$ of $K$ has a block form for which the lower right $g \times g$ block is identically zero. It follows that $V+V^T$ also has a zero $g\times g$ lower right block, and therefore its signature is zero. The Seifert matrix (and therefore the signature), like the Alexander polynomial, is sensitive to the structure of the first homology of the universal abelian cover of $S^3 - K$; equivalently, to the structure of the maximal metabelian quotient of $\pi_1(S^3 - K)$. More sophisticated “twisted” and $L^2$ signatures can be obtained by studying further derived subgroups of $\pi_1(S^3 - K)$ as modules over group rings of certain solvable groups with torsion-free abelian factors (the so-called poly-torsion-free-abelian groups). This was accomplished by Cochran-Orr-Teichner, who used these methods to construct infinitely many new concordance invariants. The end result of this discussion is the existence of many, many interesting homomorphisms from the knot concordance group to the reals, and by plat closure, many interesting invariants of braids. The connection with quasimorphisms is the following: Theorem(Brandenbursky): A homomorphism $I:\mathcal{C} \to \mathbb{R}$ gives rise to a quasimorphism on braid groups if there is a constant $C$ so that $|I([K])| \le C\cdot\|K\|_g$, where $\|\cdot\| _g$ denotes $4$-ball genus. The proof is roughly the following: given pure braids $\alpha,\beta$ one forms the knots $\widehat{\alpha\Delta}$, $\widehat{\beta\Delta}$ and $\widehat{\alpha\beta\Delta}$. It is shown that the connect sum $L:= \widehat{\alpha \Delta} \# \widehat{\beta\Delta} \# \widehat{\alpha\beta\Delta}^!$ bounds a Seifert surface whose genus may be universally bounded in terms of the number of strands in the braid group. Pushing this Seifert surface into the $4$-ball, the hypothesis of the theorem says that $I$ is uniformly bounded on $L$. Properties of $I$ then give an estimate for the defect; It would be interesting to connect these observations up to other “natural” chiral, homogeneous invariants on mapping class groups. For example, associated to a braid or mapping class $\phi \in \text {MCG}(S)$ one can (usually) form a hyperbolic $3$-manifold $M_\phi$ which fibers over the circle, with fiber $S$ and monodromy $\phi$. The $\eta$-invariant of $M_\phi$ is the signature defect $\eta (M_\phi) = \int_Y p_1/3 - \text{sign}(Y)$ where $Y$ is a $4$-manifold with $\partial Y = M_\phi$ with a product metric near the boundary, and $p_1$ is the first Pontriagin form on $Y$ (expressed in terms of the curvature of the metric). Is $\eta$ a quasimorphism on some subgroup of $\text{MCG}(S)$ (eg on a subgroup consisting entirely of pseudo-Anosov elements)? Recent Comments Ian Agol on Cube complexes, Reidemeister 3… Danny Calegari on kleinian, a tool for visualizi… Quod est Absurdum |… on kleinian, a tool for visualizi… dipankar on kleinian, a tool for visualizi… Ludwig Bach on Liouville illiouminated
{"url":"http://lamington.wordpress.com/tag/slice/","timestamp":"2014-04-17T07:22:45Z","content_type":null,"content_length":"62519","record_id":"<urn:uuid:f4051062-5d94-4a67-b7bc-58c61132c306>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
snippet lin0 <p>A [http://en.wikipedia.org/wiki/Covariance_matrix|Covariance Matrix] is a matrix of [http://en.wikipedia.org/wiki/Covariance|covariances] (the measure of how much two random variables vary together) between elements of a vector.</p> <p>In this snippet, I present [http://en.wikipedia.org/wiki/Estimation_of_covariance_matrices|how to compute a covariance matrix] using the [http:// pdl.perl.org/|Perl Data Language]. The input is a piddle (see comment below for a definition) in which each row represents an input vector and each column represents a dimension of the input vector. The output is a piddle that holds the covariance matrix.</p> <p>What are Piddles?</p> <p>They are a new data structure defined in the [http://pdl.perl.org/|Perl Data Language]. As indicated in [id:// 598007]:</p> <blockquote><i>Piddles are numerical arrays stored in column major order (meaning that the fastest varying dimension represent the columns following computational convention rather than the rows as mathematicians prefer). Even though, piddles look like Perl arrays, they are not. Unlike Perl arrays, piddles are stored in consecutive memory locations facilitating the passing of piddles to the C and FORTRAN code that handles the element by element arithmetic. One more thing to note about piddles is that they are referenced with a leading $</i></blockquote> <p>Cheers,</p> <p> [lin0]</p> <CODE> #!/usr/bin/perl use warnings; use strict; use PDL; # ================================ # covariance: # # $Sigma = covariance( $X ) # # computes the Sample Covariance Matrix of # a sample X1...Xn of p-dimensional vectors # ================================ sub covariance { my ( $X ) = @_; my $Diff = $X - average( $X->xchg(0,1) ); my $Sigma = ( 1 / ( $X->getdim(1) - 1 ) ) * transpose( $Diff ) x $Diff; return $Sigma; } </CODE>
{"url":"http://www.perlmonks.org/index.pl?displaytype=xml;node_id=625532","timestamp":"2014-04-17T22:14:56Z","content_type":null,"content_length":"2529","record_id":"<urn:uuid:0341bdb8-c8b1-4d76-a0f5-36d411a9c8b6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Trig May 6th 2008, 04:05 PM #1 can some one please show me how to show that this equation is true? cot^2x-cos^2x= cot^2x * cos^2x and i have one more question: in solving: i get +- sqrt. 3/2 this means that there are 4 answers, but in the book, it gives me 2 : pi/6+npi and 5pi/6+npi Last edited by >_<SHY_GUY>_<; May 6th 2008 at 04:07 PM. Reason: missing info Consider the $LHS$. $\cot^2x - \cos^2 x = \frac {\cos^2 x}{\sin^2 x} - \cos^2 x$ $= \cos^2 x \left( \frac 1{\sin^2 x} - 1 \right)$ $= \cos^2 x (\csc^2 x - 1)$ $= \cos^2 x \cot^2 x$ and i have one more question: in solving: i get +- sqrt. 3/2 this means that there are 4 answers, but in the book, it gives me 2 : pi/6+npi and 5pi/6+npi no, there are infinitely many answers! you were not given any bounds. the answers given are the general formulas for ALL solutions so that would include the solutions in quads 3 and 4? what is LHS? so that would explain that it would alternate to quad 1 to quad 3...solution wise? how do you get that equation to look like that? May 6th 2008, 04:13 PM #2 May 6th 2008, 04:15 PM #3 May 6th 2008, 04:19 PM #4 May 6th 2008, 04:21 PM #5 MHF Contributor Apr 2008 May 6th 2008, 04:23 PM #6
{"url":"http://mathhelpforum.com/trigonometry/37447-solved-trig.html","timestamp":"2014-04-19T02:33:50Z","content_type":null,"content_length":"48647","record_id":"<urn:uuid:6b0ebda9-b4ae-4e88-b9c8-19f033a30943>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Re: mvsumm with missing obs [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: Re: mvsumm with missing obs From Christopher F Baum <baum@bc.edu> To statalist@hsphsun2.harvard.edu Subject st: Re: mvsumm with missing obs Date Thu, 12 Aug 2004 08:45:22 -0400 On Aug 12, 2004, at 2:33 AM, Jan wrote: thanks you for your prompt responses. I understand your concern. However, I am applying -mvsumm- in a finance setting: I need to estimate the volatility of stock returns using the preceding 60 months of returns. Sometimes, I have just 1 observation missing... and as long as this observation is part of the preceding 60 months, I get no estimate of the volatility for that company's stock (i.e. I lose 60 observations by insisting on a complete history). I am willing to make the tradeoff between gaining 60 observations at the cost of a slightly less reliable measure of volatility. In your answer you suggested to clone and reprogram your original -mvsumm-. I am not a programming whiz - do you have any pointers? This is a common procedure in finance - I think you may underestimate the demand for such a feature in -mvsumm- ;-) Thanks a lot!! Jan A reasonable approach would be to fill in the series with some form of interpolation where you have missing observations in the middle of the series. I don't quite understand how one can have one missing return observation, since a return is a log price relative, and the prices needed to estimate the missing observation will be used to compute the prior and subsequent observations. But if for some reason you do not have the original data available, then interpolation might be a sensible option. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2004-08/msg00393.html","timestamp":"2014-04-16T19:55:38Z","content_type":null,"content_length":"6243","record_id":"<urn:uuid:02557d38-633d-4815-ab9a-25c2e1f4be7b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
Drawing planar graphs using the lmc-ordering (extended abstract - COMPUT. GEOM. THEORY APPL , 1998 "... An orthogonal drawing of a graph is an embedding in the plane such that all edges are drawn as sequences of horizontal and vertical segments. We present a linear time and space algorithm to draw any connected graph orthogonally on a grid of size n \Theta n with at most 2n + 2 bends. Each edge is ben ..." Cited by 61 (6 self) Add to MetaCart An orthogonal drawing of a graph is an embedding in the plane such that all edges are drawn as sequences of horizontal and vertical segments. We present a linear time and space algorithm to draw any connected graph orthogonally on a grid of size n \Theta n with at most 2n + 2 bends. Each edge is bent at most twice. In particular for non-planar and non-biconnected planar graphs, this is a big improvement. The algorithm is very simple, easy to implement, and it handles both planar and non-planar graphs at the same time. , 1998 "... . We consider the problem of coding planar graphs by binary strings. Depending on whether O(1)-time queries for adjacency and degree are supported, we present three sets of coding schemes which all take linear time for encoding and decoding. The encoding lengths are significantly shorter than th ..." Cited by 50 (11 self) Add to MetaCart . We consider the problem of coding planar graphs by binary strings. Depending on whether O(1)-time queries for adjacency and degree are supported, we present three sets of coding schemes which all take linear time for encoding and decoding. The encoding lengths are significantly shorter than the previously known results in each case. 1 Introduction This paper investigates the problem of encoding a graph G with n nodes and m edges into a binary string S. This problem has been extensively studied with three objectives: (1) minimizing the length of S, (2) minimizing the time needed to compute and decode S, and (3) supporting queries efficiently. A number of coding schemes with different trade-offs have been proposed. The adjacency-list encoding of a graph is widely useful but requires 2mdlog ne bits. (All logarithms are of base 2.) A folklore scheme uses 2n bits to encode a rooted n-node tree into a string of n pairs of balanced parentheses. Since the total number of such trees is...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1815614","timestamp":"2014-04-21T13:40:34Z","content_type":null,"content_length":"15348","record_id":"<urn:uuid:6223a3b2-e1e4-4fc9-8e71-86283c60317c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] generic expectation operator nicky van foreest vanforeest@gmail.... Thu Aug 23 12:11:58 CDT 2012 I noticed that rv_frozen does not have an expect attribute. Is there a design reason for this? It feels somewhat unnatural to me that frozen distributions cannot be used directly to compute expectations. On 23 August 2012 18:36, nicky van foreest <vanforeest@gmail.com> wrote: > Hi Josef, > Thanks for your answers. >>>> return scipy.integrate.quad(lambda x: x*g(x), X.dist.a, X.dist.b) >>>> X = stats.norm(3,sqrt(3)) >>>> print E(sqrt, X) >> I don't know why this works, sqrt of a normal distributed random >> variable has lots of nans (or complex numbers) > You are right, it shouldn't work. The point is that my code above is > this: lambda x: x*g(x), while it should have been lambda x: f(x)*g(x). > I'll give expect a try. > Nicky More information about the SciPy-User mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2012-August/032857.html","timestamp":"2014-04-16T18:59:37Z","content_type":null,"content_length":"3598","record_id":"<urn:uuid:13a7c2bd-085b-4e57-b893-daf298e054f9>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Independent Uniform Random Variables November 23rd 2009, 10:01 AM #1 Mar 2009 Independent Uniform Random Variables If X1,X2, . . . ,Xn are independent U(0, 1) random variables and Y=min (X1,X2, . . . ,Xn) has PDF given by fY(x) = { $n(1-x)^{n-1}$, 0<=x<=1}, 0, otherwise Identify this distribution, and hence state E[Y] and Var(Y). Im thinkin it looks similar to a binomial distribution? Am i on the right lines? Last edited by sirellwood; November 24th 2009 at 04:34 AM. If X1,X2, . . . ,Xn are independent U(0, 1) random variables and Y=min (X1,X2, . . . ,Xn) has PDF given by fY(x) = { $n(1-x)^n-1)$, 0<=x<=1}, 0, otherwise Mr F says: What you have posted here is wrong. It's not even a pdf .... Identify this distribution, and hence state E[Y] and Var(Y). Im thinkin it looks similar to a binomial distribution? Am i on the right lines? There is a (simple) mistake in your pdf which I'll leave you to figure out and fix. The correct pdf has a beta distribution (and you will need to determine what value each parameters has). Ah thank you mr fantastic! its edited now! So i first need to find the sample mean and variance right? Because it is uniformally distributed, is the sample mean just (x1+xn)/2? Im getting a little confused with the word "min"(x1,x2....) in the Y term. minimum of X1,...,Xn is the smallest of these rvs It's also known as the smallest order stat. It couldn't be a binomial since the underlying distribution here is continuous (uniform) $F_Y(a)=P(Y\le a)=1-P(Y>a)$ for any 0<a<1 $=1- P(X_1>a)P(X_2>a)\cdots P(X_n>a)$ $=1- (1-a)^n$ So $f_Y(a)={d\over da}F_Y(a)=n(1-a)^{n-1}$ THIS is a Beta with parameters 1 and n. Look that up for your mean and variance. The min business is irrelevant (unless you want to derive the given pdf). The bottom line is that you're given a pdf and asked to identify it and hence get it's mean and variance. Note that you can check your answers by calculating the mean and variance directly from the pdf. November 23rd 2009, 06:22 PM #2 November 24th 2009, 04:42 AM #3 Mar 2009 November 24th 2009, 10:57 PM #4 November 25th 2009, 04:03 AM #5
{"url":"http://mathhelpforum.com/advanced-statistics/116311-independent-uniform-random-variables.html","timestamp":"2014-04-20T16:33:30Z","content_type":null,"content_length":"47614","record_id":"<urn:uuid:36a7fcb8-0bbc-466e-9ba9-77e14df86b98>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
From Wholes to Parts From Wholes to Parts: Operating with Factors, Multiples, and Fractions Students use the number sense they have developed with whole numbers to begin investigating fractions. Building on whole number concepts of prime factorization and multiples, students develop an understanding of equivalent fractions and common denominators. The unit ends with the exploration and practice of the four basic operations with fractions. NCTM Standard: Number and Operations • Understand numbers, ways of representing numbers, relationships among numbers, and number systems • Understand meanings of operations and how they relate to one another • Compute fluently and make reasonable estimates Primary Mathematical Goals • Determine GCF and LCM of given numbers • Use order of operations to evaluate numerical expressions • Use order of operations to solve problems • Write equivalent fractions • Compare and order fractions • Find and apply a variety of methods for adding and subtracting fractions • Find and apply a variety of methods for multiplying and dividing fractions
{"url":"http://www2.edc.org/Mathscape/6th/fwtp.asp","timestamp":"2014-04-19T19:35:55Z","content_type":null,"content_length":"5144","record_id":"<urn:uuid:f7b09a2a-bbff-431a-9973-451dadb14f40>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
The Golden Mean is a ratio which has fascinated generation after generation, and culture after culture. It can be expressed succinctly in the ratio of the number "1" to the irrational "l.618034... ", but it has meant so many things to so many people, that a basic investigation of what might is the "Golden Mean Phenomenon" seems in order. So much has been written over the centuries on the Mean, both fanciful imaginings and recondite mathematicizations, that a review of the literature on the subject would be oversize, and probably lose the focus of the problem. This purpose of this paper is to state in the simplest form problems which relate to the Golden Mean, and pursue a variety of directions which aim to explain the origin of this remarkable ratio and its ultimate meaning in the world of mind and matter. In modern times there has been much interest in the Golden Proportion, Section or Mean. Since the Renaissance it has been used extensively in art and architecture, it figures in the Venetian Church of St. Mark built early in the 16th century, and has become a standard proportion for width in relation to height as used in facades of buildings, in window sizing, in first story to second story proportion, at times in the dimensions of paintings and picture frames. There is something "satisfactory" about the relationships of the Greek "divided lines" proportion, which some have felt to be modern acculturation since the Renaissance. In the l930's the Pratt Institute of New York did a study on various rectangular proportions laid out as vertical frames, and asked several hundred art students to comment on which seemed the most pleasing. The ratio of 1 : 2 was least liked, while the Golden Ratio was favored by a very large margin, which seemed to point to the actual dimensions as generating a pleasing response by their size. The French architect LeCorbusier noted that the human body when measured from foot to navel and then again from navel to top of head, showed average numbers very near to the Golden Ratio. He extended this to height compared with arm-span, and designed doorways consonant with these numbers. But of course much of this was based in averages rather than exact numbers, and so falls into the general area of esthetic design, rather than mathematical proportion. However studies have shown that the patterns of tree- branching adhere to the GM proportion, although again not exactly, while the dendritic cracking in certain metallic alloys which occurs as very low temperatures is basically GM based. In an entirely different area, Duckworth at Princeton found in the early l940's a GM relationship in the length of paragraphs in Vergil's Aeneid, with the figures becoming ever more accurate as larger samples were taken. Lendvai has demonstrated that Bartok used the GM ratio extensively in composing music, the question remaining whether an artist as an educated person uses the GM ratio consciously as a framework for his work, or unconsciously because of its ubiquitous appearance in the world around us, something we sense by living in a GM proportioned world. The Algebraic Approach FIRST let us examine the Golden Section from a algebraic direction : The Golden Section is the division of a given unit of length into two parts such that the ratio of the shorter to the longer equals the ratio of the longer part to the whole. Calling the longer part x and accordingly the shorter part 1-x, this condition reads 1-x is to x as x is to 1 (1-x)/x = x/1 This is solved by multiplying both sides by x, to get 1-x = x^2 x^2 + x - 1 = 0 The Quadratic Formula (x = (-b +/- sq.r.(b^2 - 4ac))/2a) applies here with a=1, b=1, c=-1, and yields the answer x = (-1 + sqr(5))/2 =. 618, nearly. 2) SECOND I point to the circular method given in standard algebra textbooks, which I can not reproduce here since it demands a diagram and I am using a text-only format for this material. It follows Euclidean procedure in working with a circular display. Briefly, as far back as about 500 BC it was observed that in the regular decagon (figure of 10 equal sides inscribed in a circle), the triangle formed by one of the exterior segments and two radii will show the Golden Proportion in the ratio of short to long leg of that isosceles triangle. Incidentally, its base angles (72 degr.) are just twice its apex angle (36deg). A traditional description of this process in formal terms can be seen in E P Vance's Modern Algebra and Trigonometry, l962. or in any algebra textbook. This is especially interesting in that it involves the construction of a pentagon and the 10 fold division of a circle, with dimensions which evolve from the 1 : 2 rectangle. The common denominator to both procedures is of course the sqr 5! Perhaps it is better to see all this in diagram and follow the derivation as given there. This, compared with the previous section, is a somewhat different, non-quadratic way of finding the GM ratio, it is geometric and more in the spirit of the early Greek investigators than the algebraic methods given above. An Approximative Approach Here is a method of my own, proceeding by a series of approximations, which I present with enthusiasm, since I have seen no parallel to it elsewhere. Starting with the number one (1), I want to find any number larger than it, the inverse of which is smaller by the difference of one (1) while retaining the same digits. If I try random numbers, I find the difference either too large or too small, so by a rather exhausting session with the Method of Exhaustions, I find my numbers converging on the GM figures: .618034 and 1 and 1.618034. (In order to check accuracy I try it with 10/9 places:: 1.6180339887 and. 618033989, with rounding off on my calculator, so we have a continuing series.) By this crude and curious method I have avoided engaging , in true classical Greek fashion, with the irrational square root of 5, which the algebraic solutions brings up. I suspect that this method can only be done with numbers, that it has no analog with stick or string which a Greek architechtural workman could have used. A mathematical friend inspected this last method, and commented that I might point out to the general reader to the fact that the way. .618 is characterized in the exhaustions paragraph, stems from the first equation 1) above: (1-x)/x = x and write the left-hand side a 1/x - x/x, so you get 1/x - 1 = x or 1/x = x + 1 This says that when you add 1 to the number you want (.618), you get the reciprocal of that number. One way to home in on it, aside from the random approximations you mention, is as follows: Start with any convenient number, e.g.. 5 Add 1 --- getting 1.5 in this case. Form the reciprocal --- getting 1/1.5 or 0.667........ Add 1 --- getting 1.6 Form the reciprocal --- getting .625 Add 1 Form reciprocal..." Soon you see convergence. You can start with any other number (between 0 and 1) in the place of. 5, and get the same. .618 ultimately. Irrationals and the Greeks Now we come to another approach, which I believe was the one the Greeks used. First let me set the stage with some background material which bears on my solution: a) Plato had described in the Meno common knowledge about the squaring of the square, by constructing a larger square based on the hypotenuse of the original diagrammed square. He doubled the area, and neatly avoided having to deal with the square root of 2 by simply squaring it and returning it the realm of usable numbers. What he had been dealing with was of course 1.414213562.... b) With such interesting returns from the experiment with the square, a next natural trial might well be dealing with a rectangle with an adjacent side twice the length of its partner, hence a 1 : 2 rectangle. Now by Pythagorean theorem the hypotenuse will be the square root of 5 which the Greek cannot deal with, nor will it give an interesting return if handled like the 1 : 1 square. (A larger square based on this diagonal will have an area of 25, not consonant with the original rectangular area of 2, hence not interesting to a Greek. Dead-end in this direction.) c) There is a note in Herodotus, speaking of the Egyptians and Egyptian mathematical knowledge.. H W Turnbull, the distinguished algebraist of the l940's, remarks in an essay in The Great Mathematicians, on Herodotus' passage: "A certain obscure passage in Herodotus can, by the slightest literal emendation, be made to yield excellent sense. It would imply that the area of each triangular face of the Pyramid is equal to the square of the vertical height. and this accords well with the actual facts. If this be so, the ratios of height, slope and base can be expressed in terms of the Golden Section, of the radius of a circle to the side of the inscribed decagon. In short there was already a wealth of geometrical and arithmetical results treasured by the priests of Egypt, before the early Greek travelers became acquainted with mathematics..." d) Herodotus also mentions that the Egyptians had a regular class of mathematical technicians, whom he calls "rope measurers", who were used not only to measure out linear distances for surveying, but to establish complex geometric figures. The convenient whole numbers associated with the Pythagorean theorem, 3, 4 and 5 and any multiples of these, were well known to the Egyptians as basic information. Considering the complexities involved in the dimensioning of the Pyramids, it is clear that the rope measurers were the standard way of converting mathematical data to actual, architectural measurements. I mention this as especially important in relation to the following: section: Measurements and the Parthenon It has always been a problem to undertand how the Greek architect and his consruction workers managed to incorporate into the design of large-scale temples like the Parthenon the "irrational" measurements which the Golden Mean requires. The Greeks had no system for handling irrational numbers in a theoretical manner, let alone applying irrational measurements to an actual conctruction project. Extending the numbers of the GM proportion from one place to another on a building in the process of construction would seem to have been impossible. But the proportions are clearly there in fact. So at this point I want to introduce a method, which I take to be an independent discovery on my part, and the key to the use of the GM ratio in large scale applications in architecture, for example in Iktinos' GM based designs for the Parthenon.. a) I construct a 1 : 2 rectangle of any size, depending on what scale I am working with. b) I fix a non-elastic string or tape to the lower left hand corner of this rectangle, and run it around a point (a nail) at the upper right hand corner then draw it down to the lower right corner. This adds the short side of the rectangle (1) to the diagonal (sqrt5). c) I then take my string, hold the ends together, and stretching it out double, I halve its length. This is now (sqrt 5+1)/2 or numerically 1.618....., the number have been seeking for comparison to one (1). d) I can take this string/number and use it as short side of a new larger rectangle, and construct a new larger rectangular figure with the same proportions preserved. e) But I may want to get smaller, that is find the inverse (1/x) of 1.618 (which is. 618), I can do this by the string method too. I draw my line from left lower to right upper corner, bring down the line to lower right corner, and folding that back along the diagonal, I mark that point, which represents the subtraction of one (1) from sqrt5. If I take that remaining length of my line from the start to the mark, and fold it double, I get. 618 or the inverse of 1.618 (1/x). To us in a day of exact measurements with electronic drafting equipment, it may seem inconvenient to make architectural measurements with a string, but recall that all measurements of objects in the real world are still made with linear devices. In the West we now use a plastic tape of high strength with low expansion, before that it was a woven marked tape laced with copper wires, earlier the "chain" and "rod" which were standards set to a given length. Surveyors still use tapes, usually of steel with a calculation for drop under a specific tension, rather than available optical distance measuring equipment, since they are more accurate for standard civil engineering work. The rolling pi-wheel is used on highways for initial measurement, but never replaces the tape. (It is only in recent years that we have adopted use of wave lengths from certain elements (cadmium) at a fixed temperature, as a standard of length, but that is in laboratory situations only for establishing The Greeks as carpenters, masons and architects, used direct measurement, that is, measurements transferred from one block or board to another which was to be made identical as possible Their constant mention of "congruence" points to this simple matching of direct size, which "congrue" if they match up in all their dimensions. Although the Greek mathematical intellectuals had a tendency to mask their connections with the handicrafts and trades as functioning on a lower plane, architecture and manufacturing were everywhere present in their society. The problem in this case is bringing together the Greek mathematical knowledge of the geometers with the practical transfer of mathematical proportions to actual buildings. In short, I believe the Greeks first explored the possibilities inherent in the rectangle of 1 : 2 ratio, and found that this satisfied in realizable dimensions the ideal proportion which Plato had discussed in his projection of the Divided Line.. The next step was devising a (string) method which would permit transfer of proportional measurements to real objects under construction. Plato had said that a line so divided into two unequal segments so that the smaller bore the same relationship to the larger, and the larger to the whole line, would represent a special kind of proportional relationship with important properties. Euclid discussed this relationship in his book on proportional in geometric terms, naturally stopping short of identifying exact numbers, which would have been inconceivable with the primitive Greek numerical system. That this was a commonly understood and accepted ratio can be inferred from its extensive use in the work of the 5th century architect Iktinos, who designed the Parthenon with the Golden Mean ratio throughout.. A "Standard" for the Proportions Let me describe an important area which should be investigated by someone who has an interest in the proportional measurements of the Parthenon. If we take several of the larger proportions which are well documented with fair agreement of the numbers, and if we scale these down to a size which should represent a working "Standard Scale", as the official basic measurement from which the rope calculators would start their numerical expansions, we would expect to find a "Parthenon-Standard" from which the enlarged proportions used in the construction of the Parthenon were developed. It would not be surprising to find this standard in the range of a human cubit. Since the Greeks were smaller then that Western persons at the present time, I would expect this Standard to be somewhat under the nineteen inches common for a well developed American male. But if no figure in this range emerged, then we would look for another standard for the basic measurement, and it could be one drawn from some other human or animal figure, or it could be entirely arbitrary. What is the value of this Standard? It would provide the link between a physical measurement, which could be a master-measurement on a strip of wood or metal, used in conjunction with the non-numerical calculations of the "Rope Measurers" whose work was entirely proportional. If the architectural measurements taken from a finished building are consistent, then they must go back to some ultimate Standard from which the RHs proportional calculations were drawn. A Summary How could the Greeks, who had a very poor number system which used letters of the alphabet without a zero, who furthermore were confused by "irrationals", as numbers which they could not calculate arithmetically ----. how could they have determined with exactitude the numbers which are discussing, and used them in architectural design? They used the above methods, establishing a rectangle of size consonant with the work to be done, ran a string or copper wire around the points as I have described, and could thus transfer a Golden Section ratio dimension to a column, to the spacing between columns, to a metope, to a plan layout. With knowledge of the properties of the 1 : 2 rectangle, and a mechanical linear method of transfer of measurement, they were able to devote themselves to subtle elements of design. And this without having to construct a numerical interface they way we have done. Our way is easier for us, with fine calculations, CAD layout with lines of no dimension, and plotters printing out to scale. But for this we have had to provide a great deal of physical equipment, and a great deal of intellectual training and preparation for any operation we undertake. The Greek were direct, their architecture is amazingly subtle and persuasive, and I think part of their artistry comes from their use of complex mental processes, coupled with very direct and simple ways of transferring ideas into wood and stone structures. We often speak of the golden proportions of the Parthenon in artistic and aesthetic terms, forgetting that behind all architectural art there must be a firm foundation in ultimate numbers. As Pythagoras had clearly said: FIRST OF ALL IS NUMBER. Return to main index William Harris Prof. Em. Middlebury College
{"url":"http://community.middlebury.edu/~harris/Humanities/TheGoldenMean.html","timestamp":"2014-04-18T23:55:01Z","content_type":null,"content_length":"20633","record_id":"<urn:uuid:00164825-2b39-435f-ae94-0a2f5bbe94fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 2010 [00099] [Date Index] [Thread Index] [Author Index] Re: Assertions in Mathematica? • To: mathgroup at smc.vnet.net • Subject: [mg113603] Re: Assertions in Mathematica? • From: David Bailey <dave at removedbailey.co.uk> • Date: Thu, 4 Nov 2010 04:00:04 -0500 (EST) • References: <iaj4ob$n70$1@smc.vnet.net> <iajftk$qas$1@smc.vnet.net> <iaonnh$jmh$1@smc.vnet.net> On 02/11/10 10:04, kj wrote: > In<iajftk$qas$1 at smc.vnet.net> "Sjoerd C. de Vries"<sjoerd.c.devries at gmail.com> writes: >> The mixture of overview >> pages, tutorials, function doc pages, crosslinks, and the fact that >> doc pages are 'active' is very effective in my humble opinion. Access >> to the docs in function browser, book, and doc centre form is nice as >> well. Navigating around is extremely well done. > I guess you did not read where I wrote >>> Whatever the reason, the fact remains >>> that, when it comes to the *content* of its documentation (as >>> opposed to its presentation), Mathematica is third-rate at best, >>> which is scandalous for software as expensive as it is. > All the things you list as pluses fall under "presentation", not > content. > Do you understand what I mean by "formal specification"? Don't you > see that Mathematica's documentation avoids formal specification > as much as possible? Don't you see that this documentation forces > readers to *guess* at the formal specification on the basis of a > few examples, and unbounded amounts of trial-and-error? If you > don't understand these points, then of course my criticism of > Mathematica's documentation won't make any sense to you. > In response to the attack on Unix man pages in a different response, > I disagree. The Unix man pages are, on the whole, incredibly useful > to the experienced programmer. In fact, that's exactly what I > think Mathematica is missing, a proper *detailed* reference manual, > with full, explicit specification of what every function does, as > opposed to what it has now, which is, at best, a lightweight user's > guide (trying to pass itself off as a reference manual). There's > nothing wrong with a user's guide; it definitely has a place in > the whole documentation library. But it is not even close to being > enough. As far as I know there has never been an attempt to make > a Mathematica reference manual available to the public (although > I'm sure Wolfram's developers have access to that level of > documentation internally; it would be impossible for them to do > their work without it). > Software developers need formal specifications to do their work. > That's why formal APIs and formal protocols (HTTP, SMTP, etc.) > exist. It would be impossible to implement a web browser or a mail > reader without such *exhaustive* formal specifications, where > nothing is left for the reader to guess over. By the same token, > it is impossible to write solid code in Mathematica without formal > specifications that say *exactly* what each Mathematica function > does. This lack of formal specification is particularly egregious > in the area of research-oriented programming, where the wrong > "educated guess" on the part of the programmer over what the > Mathematica code is doing can eventually lead to the publication > of erroneous results. > ~kj I have been a software developer for over 30 years, and I must say, I use formal documentation as a last resort! Here are some of the reasons why: 1) It is rare to wish to use a function or command in its full generality. A few well chosen examples may cover 95% of the cases, and are obviously much easier to access. The Mathematica documentation always gives access to the syntax of the rarer cases, so nothing is lost 2) Formal documentation seems to squeeze out facts that don't fit the schema. For example, FullSimplify uses some form of search process that is terminated by a time constraint - a formal spec for such a function would be very hard to write. 3) Many functions, such as Plot, call other functions that could, in principle have side effects that could introduce all types of complications. It is far better to leave the question of just how many function calls are made, as indeterminate. 4) Some functions - such as Integrate - do as best they can. Any formal specification as to what Integrate could or could not do, would be pretty mindless. 5) A lot of formal documentation only works because those who read it, have access to a lot of informal information and/or example programs. In addition, it is important to remember that Mathematica is sold to many people who have only a limited experience of programming - the kind of documentation you prefer would be very intimidating. David Bailey
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Nov/msg00099.html","timestamp":"2014-04-19T14:33:25Z","content_type":null,"content_length":"29965","record_id":"<urn:uuid:aff12728-4615-4893-b31c-24fd9a1fd491>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
What is an Ordinal Number? An ordinal number is a number that indicates where something is in sequence related to another number or object. In English, an ordinal number is different than other types of numbers in that there are usually a couple of letters added to the root word to produce the ordinal number. However, most ordinal numbers are very similar to their cardinal number counterparts. For example, cardinal numbers are one, two, three and so on. Ordinal numbers are first, second, third and so on. Ordinal numbers were invented by Georg Cantor in 1897, a German mathematician who was actually born in Russia. He is probably best known for devising the set theory. The set theory basically explains that numbers can work as a set, and there may be numbers common to both sets. For example, if there is a set {1,2,3} and a set {2,3,4}, the common numbers between them would be {2,3}. The common numbers are called the intersection of the sets. There are a number of other operations that go along with the set theory as well. The set theory also makes it possible to include the number zero as a natural number. The number zero is the only natural number that cannot be an ordinal number. An ordinal number is commonly used in English when describing the relationship of natural numbers. Natural numbers are counting numbers, or the traditional numbers we think of in mathematics. They are also called counting numbers. An ordinal number can be treated the same as a cardinal number, and thus is subject to any mathematical computations. However, an ordinal number is not commonly used in mathematic computations, except perhaps at the end of the computation. Ordinal numbers are also very similar to integers, which include both natural numbers and their negative counterparts. However, an ordinal number is never used in the negative form. Therefore, as there are no ordinal numbers representing negative numbers or zero, it is logical to conclude that ordinal numbers represent only positive, whole numbers. In modern usage, ordinal numbers are mainly used to count places. For example, if a group finished a race, the top three we would say finished first, second and third. The next three would finish fourth, fifth and sixth. In school, this is a common way to refer to grade levels.
{"url":"http://www.wisegeek.com/what-is-an-ordinal-number.htm","timestamp":"2014-04-17T15:37:07Z","content_type":null,"content_length":"65365","record_id":"<urn:uuid:49712262-944d-4585-9433-19e8e3d5f22e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Is there any webpage or math program that can write fracitons,<br> numbers into bijective enumeration? Date: Mar 20, 2013 9:17 AM Author: JT Subject: Re: Is there any webpage or math program that can write fracitons,<br> numbers into bijective enumeration? On 20 mar, 11:25, JT <jonas.thornv...@gmail.com> wrote: > On 20 mar, 10:46, 1treePetrifiedForestLane <Space...@hotmail.com> > wrote: > > what is the canonical digital representation > > for base-one accounting? > > (inductive proof .-) > > > I don't think anyone is interested (I'm certainly not). > Accounting you mean counting? You ask me what counting is? It is a > collection of discrete entities ranging from first to last member (inf > is not member of any set). > The first member in counting numbers in is generally one or 1, unless > you do not adhere to some headless infinity working collective. > Below you can see sets? of discrete natural items and the summation of > members that make up a set of countable naturals, as you see they > range from first to last since their countable and they are the reason > numbers have comparable magnitudes, 1 is the base unit of math it does > have a comparable magnitude, you can cut it to make fractions, count > it to make sets with comparable magnitudes. The whole idea of > numberline is wrong since 1 do not have any geometric properties/ > attributes. It does have a magnitude though since it is divisible into > fractions, the cuts from fractions also have magnitudes that > comparable to 1. Partitioning into bases is a principle with geometric > properties, but base one have no other projection than counting from > the first to the last discrete member making up a natural number. > 1={1} > 2={1,1} > 3={1,1,1} > 4={1,1,1,1} > 5={1,1,1,1,1} > 6={1,1,1,1,1,1} > 7={1,1,1,1,1,1,1} > 8={1,1,1,1,1,1,1,1} > 9={1,1,1,1,1,1,1,1,1} > A={1,1,1,1,1,1,1,1,1,1} So in this hypothese 1 is discrete, and there is only a single natural number the rest is groups, sets or labels if you so want that depicts how many discrete items in the set. This hypthese when it is extended leads to that 1 must be a continuum of a certain magnitude, since it is a continuum it has no granularity so try to partition it would just be foolish, so without granularity we can recursely cut and cut into any number of cuts that furter can be cut into any number of cuts. This means that a single discrete item 1 could be expressed as a sum of fractions. But it does not mean that any fraction can be expressed into a partitioned base because they can't there is not 1/3 in decimal base only an approximation and a stipulation regarding a neverending decimal expansion.
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8694440","timestamp":"2014-04-19T01:52:30Z","content_type":null,"content_length":"4030","record_id":"<urn:uuid:6e154dcd-dfe1-40ff-9bab-03587b0b7e5d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Providence students take part in project News and Tribune May 1, 2013 Providence students take part in project Honors Algebra II students at Our Lady of Providence Jr.-Sr. High School are discovering that the problem-solving process can be even more important than the solution itself. These math students are taking part in Collaborative Mathematics and the Video Challenge Project, a web-based mathematical problem-solving group for middle- and high-school students. The Collaborative Mathematics site and project is designed to present students with challenging problems so that they must spend a day or more thinking about the solution. The students then create a video to describe the process and solution. They must submit the video to their teacher, and they have the option of posting the video on the Collaborative Mathematics site where other contributors can view it. For example, one challenge asked students on which finger would they end if they counted on their fingers to 1,000. The students each created a video that presents the problem and narrates how they solved it. Sometimes the answers are incorrect or the steps to achieve the correct answer contain mistakes, but the video allows the students to demonstrate and explain the process, said math teacher Jason Mullis. Mullis said the Collaborative Mathematics project has helped his students develop problem-solving skills that go beyond rushing through their homework. “It’s definitely made them think a little more,” he said. “It gets them interested in problem solving in general. Previously, they were just interested in the fastest way to an outcome and didn’t care if it was right. Now that other people are seeing their stuff, they’re putting more thought into it.” Kelsey Rodgers, a junior from Louisville, said she has gotten a lot out of the project, especially being able to hear the perspective of the Collaborative Mathematics site’s founder, Jason Ermer, a mathematician from Texas who is doing research overseas in Olsen, Norway. She and her classmates recently participated in a class Facetime session with Ermer, who discussed some of the students’ posted solutions as well as how he approaches problem solving. “It’s been interesting to see how to solve different types of problems,” Kelsey said. The students’ latest challenge is to come up with a problem that Ermer won’t be able to solve.
{"url":"http://www.newsandtribune.com/clarkcounty/x319974746/Providence-students-take-part-in-project/print","timestamp":"2014-04-18T15:49:07Z","content_type":null,"content_length":"4983","record_id":"<urn:uuid:7b58435d-3a8c-45ae-96ad-7b49d1a47f94>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Fluid force on a submerged vertical plate May 4th 2008, 08:51 AM #1 Mar 2008 Fluid force on a submerged vertical plate "Find the total force against the rectangular plate submerged vertically in a tank of water. The plate is 2' (vertical) by 5' (horizontal) and is touching the bottom of a tank of water that is 6 feet deep." The formula for this is the integral from a to b of (strip depth)(strip length)dy, and for water multiply this by 62.4 lb/ft^3. I put the strip depth is -y and the limits of integration are from -6 to -4 for the integral of (-y)(5)dy. This is incorrect. What am I doing wrong? That should be correct except you get a negative answer. $62.4\int_{4}^{6}5xdx=3120 \;\ lbs$ $62.4\int_{4}^{6}5xdx=3120 \;\ lbs$ is the same as $62.4\int_{-6}^{-4}-5xdx=3120 \;\ lbs$, both positive. I missed this question on a test, but the question only asked to set up the integral. Maybe my teacher set up the integral a different way, but both ways give the same answer. I'll ask her about it the next time I see her. May 4th 2008, 08:56 AM #2 May 4th 2008, 09:17 AM #3 Mar 2008
{"url":"http://mathhelpforum.com/advanced-applied-math/37102-fluid-force-submerged-vertical-plate.html","timestamp":"2014-04-19T09:40:01Z","content_type":null,"content_length":"36368","record_id":"<urn:uuid:f2bf5cfc-91b4-4eb3-b046-553c7d4968ab>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Moufang loop Moufang loop I) The following conditions are equivalent. $\displaystyle(x(yz))x$ $\displaystyle=$ $\displaystyle(xy)(zx)\qquad\text{for all }x,y,z\in Q$ (1) $\displaystyle((xy)z)y$ $\displaystyle=$ $\displaystyle x(y(zy))\qquad\text{for all }x,y,z\in Q$ (2) $\displaystyle(xz)(yx)$ $\displaystyle=$ $\displaystyle x((zy)x)\qquad\text{for all }x,y,z\in Q$ (3) $\displaystyle((yz)y)x$ $\displaystyle=$ $\displaystyle y(z(yx))\qquad\text{for all }x,y,z\in Q$ (4) For a proof, we refer the reader to the two references. Kunen in [1] shows that that any of the four conditions implies the existence of an identity element. And Bol and Bruck [2] show that the four conditions are equivalent for loops. Definition: A nonempty quasigroup satisfying the conditions (1)–(4) is called a Moufang quasigroup or, equivalently, a Moufang loop (after Ruth Moufang, 1905–1977). [1] Kenneth Kunen, Moufang Quasigroups, J. Algebra 83 (1996) 231–234. (A preprint in PostScript format is available from Kunen’s website: Moufang Quasigroups.) [2] R. H. Bruck, A Survey of Binary Systems, Springer-Verlag, 1958. Mathematics Subject Classification no label found Added: 2003-08-12 - 04:54
{"url":"http://planetmath.org/moufangloop","timestamp":"2014-04-19T01:52:42Z","content_type":null,"content_length":"56860","record_id":"<urn:uuid:0d933efb-c86b-4016-acf1-1ed1a408caec>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Scholarships and Awards Department of Mathematics Scholarships and Awards The Department of Mathematics offers several scholarships and awards each year to deserving students. There are two scholarships given to incoming freshmen each year who wish to major in Mathematics. Seven scholarships and awards are given to upperclassmen each year based on academic excellence in the field of Mathematics. Incoming Freshman Scholarships Marjorie Watson Mathematics Scholarship Dorothy Dean Shelton Mathematics Scholarship Scholarships and Awards Given to Current Students Actuarial Science Award Established in 2003 by the Mathematics Department in recognition of students passing the Actuarial examinations. Freshman Mathematics Award The recipient must be a freshman as of the Spring semester when the test is administered or the previous Fall semester. This student scored the highest of all participants on the Freshman Mathematics Award Test. James G. Ware Mathematics Education Award Established in 1994 by Dr. James G. Ware, faculty member of the Department of Mathematics for 30 years, 22 of which were as head of the department, upon the occasion of his retirement. The ward goes to the outstanding student planning to teach Mathematics at the high school level. John W. Jayne Memorial Mathematics Award Established in 1994 by family and colleagues in memory of Dr. John W. Jayne, member of the Department of Mathematics for 22 years, who died in 1993. The award is given each year to an outstanding Mathematics student. Outstanding Graduate Student Award This award is presented to an outstanding student in the Mathematics Graduate Program. Ruth Clark Perry Memorial Mathematics Award Established in 1969 by Mrs. Leonora Miller Seids of Perry, Oklahoma, in memory of her friend, UC Dean of Women from 1924 to 1943 and Professor of Mathematics from 1922 until her death in 1955, to be awarded to an outstanding upper class woman majoring in Mathematics. Winston L. Massey Memorial Mathematics Award Established in 1973 by the University of Chattanooga Foundation in honor of W. L. Massey, Guerry Professor of Mathematics, on the occasion of his retirement after 40 years of service to his alma mater, for an outstanding upper class man majoring in Mathematics.
{"url":"http://www.utc.edu/mathematics/students/awards.php","timestamp":"2014-04-17T04:03:16Z","content_type":null,"content_length":"24504","record_id":"<urn:uuid:d6bcaa3b-16f2-4dad-a144-e969239c7c00>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Memory use when factoring big polynomials Karim BELABAS on Fri, 30 Oct 1998 13:07:37 +0100 (MET) [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] Re: Memory use when factoring big polynomials [Roland Dreier:] > I recently asked gp to factor a polynomial of degree 3425 over F_5, and > even with 40M of memory, gp still ran out of stack space. The culprit > appears to be very profligate use of memory with polynomial GCDs; even > taking the GCD of two polynomials of this degree can use up all of gp's > memory. > I wouldn't exactly describe this as a bug but I would say that gp needs > improvement here. For comparison, Shoup's NTL can factor my polynomial in > a few minutes using maybe 4M of memory, so gp could do a lot better. > I got lost looking at ggcd in polarit2.c (admittedly, I didn't spend much > time poking around). Perhaps someone who understands the code better can > explain where the problem is. I've rewritten this part for version 2.0.12 (due to the initial message of Igor Schein on this list, complaining about factor(x^120-1) taking ages). The real culprit is "generic computation" here, which causes millions of unnecessary copies and divisions whenever you're working in a finite field. 2.0.11 (and all previous versions) was completely unable to deal with polynomials of degree bigger than, say, 1000 in a satisfactory way. You'd immediately get a SEGV when trying to factor such a beast over Z for instance (fixed-size buffers, targeted for degree less than 700...). I'll let Igor comment on benches about the situation in version 2.0.12 if he feels like it (he has done a tremendous amount of testing). The 2.0.12 update is long overdue (sorry about that, too much work, too many things included). I'll try to release it next week. I had initially decided to make a stable release first (2.1, say), but I've changed so many things in the meantime (to really fix problems and not simply patch the obvious bug) that I don't think it's a good idea anymore. I'm even hesitating to still call it "beta" ... Karim Belabas email: Karim.Belabas@math.u-psud.fr Dep. de Mathematiques, Bat. 425 Universite Paris-Sud Tel: (00 33) 1 69 15 57 48 F-91405 Orsay (France) Fax: (00 33) 1 69 15 60 19 PARI/GP Home Page: http://pari.home.ml.org
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-9810/msg00030.html","timestamp":"2014-04-18T13:10:25Z","content_type":null,"content_length":"6227","record_id":"<urn:uuid:020697e1-ef9a-43eb-bd6a-5633f1583eed>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Probabilistic Gödel Machine Hardware Next: Limitations of the Gödel Up: Discussion Previous: Example Applications Probabilistic Gödel Machine Hardware Above we have focused on an example deterministic machine. It is straight-forward to extend this to computers whose actions are computed in probabilistic fashion, given the current state. Then the expectation calculus used for probabilistic aspects of the environment simply has to be extended to the hardware itself, and the mechanism for verifying proofs has to take into account that there is no such thing as a certain theorem--at best there are formal statements which are true with such and such probability. In fact, this may be the most realistic approach as any physical hardware is error-prone, which should be taken into account by realistic probabilistic Gödel machines. Juergen Schmidhuber 2003-09-29 Back to Goedel machine home page
{"url":"http://www.idsia.ch/~juergen/gmweb1/node14.html","timestamp":"2014-04-16T18:56:37Z","content_type":null,"content_length":"3103","record_id":"<urn:uuid:4e2b1c06-cd8f-4c31-a0d5-a6da1bc24847>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
Permanents, Transportation Polytopes and Positive Definite Kernels on Histograms Marco Cuturi For two integral histograms r=(r[1],...,r[d]) and c=(c[1],...,c[d]) of equal sum N, the Monge-Kantorovich distance d[MK](r,c) between r and c parameterized by a d-times-d distance matrix T is the minimum of all costs <F,T> taken over matrices F of the transportation polytope U(r,c). Recent results suggest that this distance is not negative definite, and hence, through Schoenberg's well-known result, exp(-t d[MK]) may not be a positive definite kernel for all t > 0. Rather than using directly d[MK] to define a similarity between r and c, we propose in this paper to investigate kernels on r and c based on the whole transportation polytope U(r,c). We prove that when r and c have binary counts, which is equivalent to stating that r and c represent clouds of points of equal size, the permanent of an adequate Gram matrix induced by the distance matrix T is a positive definite kernel under favorable conditions on T. We also show that the volume of the polytope U(r,c), that is the number of integral transportation plans, is a positive definite quantity in r and c through the Robinson-Schensted-Knuth correspondence between transportation matrices and Young Tableaux. We follow by proposing a family of positive definite kernels related to the generating function of the polytope through recent results obtained separately by A. Barvinok on the one hand, and C.Berg and A.J. Duran on the other hand. We finally present preliminary results led on a subset of the MNIST database to compare clouds of points through the permanent kernel. Subjects: 12. Machine Learning and Discovery Submitted: Oct 16, 2006 This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy.
{"url":"http://www.aaai.org/Library/IJCAI/2007/ijcai07-117.php","timestamp":"2014-04-18T05:37:25Z","content_type":null,"content_length":"3824","record_id":"<urn:uuid:707a92da-542b-4720-ac06-39b4a130adce>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Natick Algebra 1 Tutor Find a Natick Algebra 1 Tutor ...I am currently teaching Honors Algebra 2, Senior Math Analysis, and MCAS prep courses, as well as 7-8 grade math, and SAT Prep courses. I have taught courses in Algebra 1, Geometry, Trigonometry, and Pre-calculus as well. I am a licensed, certified teacher for the state of Massachusetts. 12 Subjects: including algebra 1, geometry, GED, algebra 2 ...I work with each student and their family to assess student needs, set goals, and develop a plan for success. I support students on their road to knowledge through private tutoring, test preparation, and academic workshops. I work with students in the core academic subjects: math, science, social studies, and English. 31 Subjects: including algebra 1, chemistry, English, writing I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment... 14 Subjects: including algebra 1, calculus, geometry, statistics ...I also assisted adults in a career center as the disabilities specialist. I aided in tutoring adults in GED prep working with those who could not understand due to the teaching methods being utilized. I would identify what was needed to help the student see the concepts. 45 Subjects: including algebra 1, chemistry, reading, physics ...I am very patient and respectful to each person, because I want to have an ongoing relationship with each student over time. When I tutor, I learn from the student as well, which improves my ability to teach. I am looking forward to the opportunity in meeting new students, and pursing his or her own success. 25 Subjects: including algebra 1, calculus, statistics, geometry
{"url":"http://www.purplemath.com/Natick_algebra_1_tutors.php","timestamp":"2014-04-18T21:57:18Z","content_type":null,"content_length":"23890","record_id":"<urn:uuid:76e44412-e9e3-4bd1-8044-b08844581410>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Characteristic classes on complex manifolds and Chern-number inequalities on compact Kähler surfaces Abstract (Summary) (Uncorrected OCR) Abstract of thesis entitled CHARACTERISTIC CLASSES ON COMPLEX MANIFOLDS AND CHERN-NUMBER INEQUALITIES ?ON COMPACT KAHLER SURFACES submitted by Yang Chen for the degree of Master of Philosophy at The University of Hong Kong in August 2004 The Euler characteristic is a fundamental topological invariant of a compact oriented differentiable manifold. Hopf in his thesis calculated the Euler characteristic by the index of a generic vector field on the differentiable manifold, which is the Poincar?-Hopf theorem. The Euler-Poincar? characteristic is a topological invariant for vector bundles over compact differentiable manifolds. The representation of the Euler-Poincar? characteristic by characteristic classes can be viewed as a generalization of the Poincar?-Hopf theorem, which is the Hirzebruch? Riemann-Roch theorem. Definitions of characteristic classes were given in three forms, namely the ? singular cohomology, the Cech cohomology, and the de Rham cohomology. The form of Chern classes represented by curvature tensors of vector bundles was used to calculate a number of interesting Chern-number inequalities. After representing the proof of Riemann-Roch theorem in [Hir], deformations of complex structures of [Kodaira1] were studied. We used Kuranishi? theory to represent the deformation by Euler-Poincar? characteristics and the Riemann-Roch theorem to calculate the Euler-Poincar? characteristics. Bibliographical Information: School:The University of Hong Kong School Location:China - Hong Kong SAR Source Type:Master's Thesis Keywords:euler characteristic complex manifolds inequalities mathematics kahlerian Date of Publication:01/01/2005
{"url":"http://www.openthesis.org/documents/Characteristic-classes-complex-manifolds-Chern-512168.html","timestamp":"2014-04-19T15:19:38Z","content_type":null,"content_length":"9405","record_id":"<urn:uuid:297e5e37-979d-4994-acd3-d2598da40be4>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
[next] [prev] [prev-tail] [tail] [up] The Undecidability of Post's Correspondence Problem Applications of Post's Correspondence Problem The Post's Correspondence Problem, or PCP for short, consists of the following domain and question. { <(x[1], y[1]), . . . , (x[k], y[k])> | k [1], . . . , x[k], y[1], . . . , y[k] are strings over some alphabet. } Are there an integer n [1], . . . , i[n] for the given instance <(x[1], y[1]), . . . , (x[k], y[k])> such that x[i[1]] [i[n]] = y[i[1]] [i[n]]? Each sequence i[1], . . . , i[n] that provides a yes answer is said to be a witness for a positive solution to the given instance of PCP. The problem can be formulated also as a "domino" problem of the following form. { [i] on its top and the string y[i] on its bottom, 1 Given k with infinitely many cards in each pile, can one draw a sequence of n from these piles, so that the string x[i[1]] [i[n]] formed on the top of the cards will equal the string y[i[1]] [i[n]] formed on the bottom? Example 4.7.1 PCP has the solution of yes for the instance <(01, 0), (110010, 0), (1, 1111), (11, 01)> or, equivalently, for the following instance in the case of the domino problem. The tuple (i[1], i[2], i[3], i[4], i[5], i[6]) = (1, 3, 2, 4, 4, 3) is a witness for a positive solution because x[1]x[3]x[2]x[4]x[4]x[3] = y[1]y[3]y[2]y[4]y[4]y[3] = 01111001011111. The positive solution has also the witnesses (1, 3, 2, 4, 4, 3, 1, 3, 2, 4, 4, 3), (1, 3, 2, 4, 4,3,1,3,2,4,4,3,1,3,2, 4, 4, 3), etc. On the other hand, the PCP has the solution no for <(0, 10), (01, 1)>. The Undecidability of Post's Correspondence Problem Post's correspondence problem is very useful for showing the undecidability of many other problems by means of reducibility. Its undecidability follows from its capacity for simulating the computations of Turing machines, as exhibited indirectly in the following proof through derivations in Type 0 grammars. Theorem 4.7.1 The PCP is an undecidable problem. Proof By Corollary 4.6.1 the membership problem is undecidable for Type 0 grammars. Thus, it is sufficient to show how from each instance (G, w) of the membership problem for Type 0 grammars, an instance I can be constructed, such that the PCP has a positive solution at I if and only if w is in L(G). For the purpose of the proof consider any Type 0 grammar G = <N, [1], y[1]), . . . , (x[k], y[k])> of PCP be of the following form. PCP has a positive solution at I if and only if I can trace a derivation that starts at S and ends at w. For each derivation in G of the form S [1] [m] [1], . . . , i[n]) of a positive solution such that either depending on whether m is even or odd, respectively. On the other hand, each witness (i[1], . . . , i[n]) of a positive solution for PCP at I has a smallest integer t [i[1]] [i[t]] = y[i[1]] [i[t]]. In such a case, x[i[1]] [i[t]] = y[i[1]] [i[t]] = ¢S [2][4] [m][1] [2] [m] The instance I consists of pairs of the following form The underlined symbols are introduced to allow only (¢S The other pairs are used to force the tracing to go from each given sentential form [i], y[i]) is defined so that y[i] provides a "window" into [i] provides an appropriate replacement for y[i] in The pairs of the form (X, The window is provided because for each 1 [1], . . . , i[j] [i[1]] [i[j]] and y = y[i[1]] [i[j]] satisfy the following properties. a. If x is a prefix of y, then x = y. Otherwise there would have been a least l such that x[i[1]] [i[l]] is a proper prefix of y[i[1]] [i[l]]. In which case (x[i[l]], y[i[l]]) would be equal to (v, uvv') for some nonempty strings v and v'. However, by definition, no pair of such a form exists in I. b. If y is a proper prefix of x, then the sum of the number of appearances of the symbols # and [i[1]] [i[l]] and y[i[1]] [i[l]] do not satisfy the property. In such a case, because of the minimality of l, x[i[l]] and y[i[l]] would have to differ in the number of # and [i[l]], y[i[l]]) would have to equal either ($, [i[1]] [i[l]] = y[i[1]] [i[l]], and (¢[i[1]] [i[l-1]] = y[i[1]] [i [l-1]] (and hence that the property holds). The correctness of the construction can be shown by induction on the number of production rules used in the derivation under consideration or, equivalently, on the number of pairs of type (d) and (e) used in the given witness for a positive solution. Example 4.7.2 If G is a grammar whose set of production rules is {S 4.7.1, is <(¢S The instance has a positive solution with a witness that corresponds to the arrangement in Figure 4.7.1. Figure 4.7.1 An arrangement of PCP cards for describing a derivation for The witness also corresponds to the derivation S Applications of Post's Correspondence Problem The following corollary exhibits how Post's correspondence problem can be used to show the undecidability of some other problems by means of reducibility. Corollary 4.7.1 The equivalence problem is undecidable for finite-state transducers. Proof Consider any instance <(x[1], y[1]), . . . , (x[k], y[k])> of PCP. Let [1], . . . , x[k], y[1], . . . , y[k] are all in Let M[1] = <Q[1], [1], q[0], F[1]> be a finite-state transducer that computes the relation Let M[2] = <Q[2], [2], q[0], F[2]> be a finite-state transducer that on input i[1] [n] outputs some w such that either w[i[1]] [i[n]] or w[i[1]] [i[n]]. Thus, M[2] on input i[1] [n] can output any string in [i[1]] [i[n]][i[1]] [i[n]]. On the other hand, if x[i[1]] [i[n]] = y[i[1]] [i[n]], then M[2] on such an input i[1] [n] can output any string in [i[1]] [i[n]]. It follows that M[1] is equivalent to M[2] if and only if the PCP has a negative answer at the given instance <(x[1], y[1]), . . . , (x[k], y[k])>. Example 4.7.3 Consider the instance <(x[1], y[1]), (x[2], y[2])> = <(0, 10), (01, 1)> of PCP. Using the terminology in the proof of Corollary 4.7.1, [1] can be as in Figure 4.7.2(a), Figure 4.7.2 The finite-state transducer in (a) is equivalent to the finite-state transducer in (b) if and only if the PCP has a positive solution at <(0, 10), (01, 1)>. and the finite-state transducer M[2] can be as in Figure 4.7.2(b). M[2] on a given input i[1] [n] nondeterministically chooses between its components M[x] and M[y]. In M[x] it outputs a prefix of x[i[1]] [i[n]], and in M[y] it outputs a prefix of y[i[1]] [i[n]]. Then M[2] nondeterministically switches to M[>], M[<], or M[]. M[2] switches from M[x] to M[>] to obtain an output that has x[i[1]] [i[n]] as a proper prefix. M[2] switches from M[x] to M[<] to obtain an output that is proper prefix of x[i[1]] [i[n]]. M[2] switches from M[x] to M[] to obtain an output that differs from x[i[1]] [i[n]] within the first |x[i[1]] [i[n]]| symbols. M[2] switches from M[y] to M[>], M[<], M[] for similar reasons, respectively. The following corollary has a proof similar to that given for the previous one. Corollary 4.7.2 The equivalence problem is undecidable for pushdown automata. Proof Consider any instance <(x[1], y[1]), . . . , (x[k], y[k])> of PCP. Let [1] be the minimal alphabet such that x[1], . . . , x[k], y[1], . . . , y[k] are all in [1]*. With no loss of generality assume that [2] = {1, . . . , k} is an alphabet, that [1] and [2] are mutually disjoint, and that Z[0] is a new symbol not in [1]. Let M[1] = <Q[1], [1] [2], [1] [0], [1], q[0], Z[0], F[1]> be a pushdown automaton that accepts all the strings in ([1] [2])*. (In fact, M[1] can also be a finite-state automaton.) Let M[2] = <Q[2], [1] [2], [1] [0], [2], q[0], Z[0], F[2]> be a pushdown automaton that accepts an input w if and only if it is of the form i[n] [1]u, for some i[1] [n] in [1]* and some u in [2]*, such that either u[i[1]] [i[n]] or u[i[1]] [i[n]]. It follows that M[1] and M[2] are equivalent if and only if the PCP has a negative answer at the given instance. The pushdown automaton M[2] in the proof of Corollary 4.7.2 can be constructed to halt on a given input if and only if it accepts the input. The constructed pushdown automaton halts on all inputs if and only if the PCP has a negative solution at the given instance. Hence, the following corollary is also implied from the undecidability of PCP. Corollary 4.7.3 The uniform halting problem is undecidable for pushdown automata. PCP is a partially decidable problem because given an instance <(x[1], y[1]), . . . , (x[k], y[k])> of the problem one can search exhaustively for a witness of a positive solution, for example, in {1, . . . , k}* in canonical order. With such an algorithm a witness will eventually be found if the instance has a positive solution. Alternatively, if the instance has a negative solution, then the search will never terminate. [next] [prev] [prev-tail] [front] [up]
{"url":"http://www.cse.ohio-state.edu/~gurari/theory-bk/theory-bk-fourse7.html","timestamp":"2014-04-16T16:00:00Z","content_type":null,"content_length":"30955","record_id":"<urn:uuid:41ac6f05-546d-435c-8778-9aa6f7cbbdc5>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Characteristics Of A Good Average Help for Central - Transtutors Characteristics of a Good Average (i) It should be rigidly defined. If an average is left to the estimation of an observer and if it is not a definite and fixed value it cannot be representative of a series. The bias of the investigator in such cases would considerably affect the value of the average. If the average is rigidly defined; this instability in its value would be no more, and it would always be a definite (ii) It should be based on all the observations of the series. If some of the items of the series are not taken into account in its Calculation the average cannot be said to be a representative one. As we shall see later on there are some averages which do not take into account all the values of a group and to this extent they are not satisfactory averages. (iii) It should be capable of further algebraic treatment. If an average dose not possess this quality, its use is bound to be very limited. It will not be possible to calculate, say, the combined average of two or more series from their individual averages; further it will not be possible to study the average relationship of various parts of a variable if it is expressed as the sum of two or more variables. Many other similar studies would not be possible if the average is not capable of further algebraic treatment. (iv) It should be easy to calculate and simple to follow. If the calculation of the average involves tedious mathematical processes it will not be readily understood and its use will be confined only to a limited number of persons. It can never be a popular average. As such, one of the qualities of a good average is that it should not be too abstract or mathematical and there should be no difficulty in its calculation. Further, the properties of the average should be such that they can be easily understood by persons of ordinary intelligence. (v) It should not be affected by fluctuations of sampling. If two independent sample studies are made in any particular field, the averages thus obtained, should not materially differ from each other. No doubt, when two separate enquires are made, there is bound to be a difference, in the average values calculated but in some cases this difference would be great while in others comparatively less. These averages in which this difference, which is technically called "fluctuation of sampling" is less, are considered better than those in which its difference is more. One more thing to be remembered about averages is that the items whose average is being calculated should form a homogenous group. It is absurd to talk about the average of a man's height and his weight. If the data from which an average is being calculated are not homogeneous, misleading conclusions are likely to be drawn. To find out the average production of cotton cloth per mill, if big and small mills are not separated the average would be unrepresentative. Similarly, to study wage level in cotton mill industry of India, separate averages should be calculated for the male and female workers. Again, adult workers should be separately studied from the juvenile group. Thus we see that as far as possible, the data from which an average is calculated should be a homogeneous lot. Homogeneity can be achieved either by selecting only like items or by dividing the heterogeneous data into a number of homogeneous groups. Email Based, Online Homework Assignment Help in Characteristics of a Good Average Transtutors is the best place to get answers to all your doubts regarding characteristics of a good average. Transtutors has a vast panel of experienced statistics tutorswho can explain the different concepts to you effectively. You can submit your school, college or university level homework or assignment to us and we will make sure that you get the answers related to characteristics of a good Related Questions • Let a random process be given as X (t) = Acos(2p fct) + Bsin(2p fct) where... 16 hrs ago Let a random process be given as X (t) = Acos(2p fct) + Bsin(2p fct) where A and B are independent, zero-mean, unit variance random variables. Is the process X (t) (wide-sense) stationary? Tags : Statistics, Descriptive Statistics, Others, College ask similar question • Administrators at a university are planning to offer a summer seminar 18 hrs ago Administrators at a university are planning to offer a summer seminar Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question • Suppose you have a box with a very large number of orange and blue beads.... 19 hrs ago Suppose you have a box with a very large number of orange and blue beads. You want to estimate the proportion p of orange beads in the box and you want to be 92% confident that your point estimate, which is the sample... Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question • One measure of the risk or volatility of an individual stock is the... 1 day ago One measure of the risk or volatility of an individual stock is the standard deviation of thetotal return (capital appreciation plus dividends) over several periods of time. Althoughthe standard deviation is easy to compute,... Tags : Statistics, Regression, Correlation, Regression, University ask similar question • selecting a test 1 day ago selecting a test Tags : Statistics, Hypothesis Testing, Others, University ask similar question • Suppose the probability density function of the length of computer cables... 1 day ago Suppose the probability density function of the length of computer cables is <span style="color: rgb(0, 0, 0); font Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question • Hypothesis testing What do we need to consider when we try to select a... 1 day ago Hypothesis testing<span style='color: rgb(68, 68, 68); line-height: 200%; font-family: "Arial","sans-serif"; font-size: 12pt; mso-fareast-font-family: "Times New Roman"; mso-font-kerning:... Tags : Statistics, Hypothesis Testing, Others, University ask similar question • The following data represent the running times of films produced 1 day ago The following data represent the running times of films produced by 2 motion-picture companies. Test the hypothesis that the average running time of films produced by company 2 exceeds the average running time of films... Tags : Statistics, Hypothesis Testing, t,F,Z distibutions, College ask similar question • Use Excel to compute the descriptive statistics for the following data set:... 1 day ago Use Excel to compute the descriptive statistics for the following data set:25, 45, 73, 16, 34, 98, 34, 45, 26, 2, 56, 97, 12, 445, 23, 63, 110, 12, 17, and 41. Tags : Statistics, Descriptive Statistics, Standard Deviation, College ask similar question • The time it takes a ski patroller to respond to an accident call has an... 2 days ago The time it takes a ski patroller to respond to an accident call has an exponential distribution with an average response time of 5 minutes. As part of ski patrollers’ commitment to prevent an accident related fatalities, the... Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question more assignments »
{"url":"http://www.transtutors.com/homework-help/statistics/central-tendency/good-average-characterstics.aspx","timestamp":"2014-04-17T21:25:30Z","content_type":null,"content_length":"92688","record_id":"<urn:uuid:0627ca2c-1b58-4a11-a011-81feb35c44a7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
-- Blogmeister Blog Entries My Nickname 09/03/08 Lyra -- pollygons Article posted November 17, 2008 at 09:38 PM GMT • comment • Reads 76 Hi World Lyra here I'm going to tell you about my ne wsubject in math pollygons. we are laerning about pollygons in math but not only that we are also learning about the angles and degrees first ill tell you about polly gons we are finding out about if it is a polly gon and angles we kind of just started so i cant tell you much about that but we have also been looking at cordinate grids and there is this fun car game on the front of my dragon blog that you can look at try it=http://bye! Article posted November 17, 2008 at 09:38 PM GMT • comment • Reads 76 Article posted November 17, 2008 at 09:27 PM GMT • comment • Reads 45 In math we learned about coordinate grids. You must start at the X axes, then the Y axes. On the computer, we would enter the coordinates correctly to create a polygon. Example: (-50, 40) to (-30, 20) We would use setxy to create the lines that connect to the vertices. And jumpto to tell it where to start. Polygons can not have 2 or less sids. It must have only straight sides. None curved. It can't have any openings. There are many types of triangles. There is scalene, right, equilateral, isosceles, obtuse, and acute. A right triangle, has one right angle. An obtuse has one angle larger than 90 degrees. A scalene has no equal sides. An isosceles has two equal sides. equilateral has three equal sides. An acute has one angle smaller than 90 degrees. Did I cover them all? Bye! Article posted November 17, 2008 at 09:27 PM GMT • comment • Reads 45 Alex -- cordanites Article posted November 17, 2008 at 09:25 PM GMT • comment (2) • Reads 44 i know that when you turn your turtle 90* you turn you turtle straght up and we are learing about polygons in class and we are also learning about cordanites and useing cordaites to build polygons. i don't really get the *'s past 90*? Article posted November 17, 2008 at 09:25 PM GMT • comment (2) • Reads 44 Article posted November 17, 2008 at 09:34 PM GMT • comment • Reads 45 in class we are learning about polygons. polygons are shapes that only have staight lines and has to have 3 of more sides. a hexagon is a polygon. we are learning how to put polygons on grids. we go on Geo-Leo. this is a computer promgram that teachs us about polygons and coordinate grids. different angles that make different Triangles or squares. now i will talk about The different kinds of triangles. There are many different kinds on triangles that are different because of there sides and angles. if onw of the angles is 90 degrees than the traingle is called a right triangle. if all the sides are the same and all the angles are the same size than the triangle is called a Equilateral Triangle. if two of the sides are the same than it is called a isosceles triagngle. and the last kind is a scalene triangle which is a triangle that none of the sides are the same and none of the angles are the same. and last i will tell you about coordinate grids. coordinate grids are grids that have four different spaces. to tell the differents of the spaces is two lines one line is called the y. that is the line that runs up and down. the other line is called the x. this is the line that runs side to side. to know where things are there are different #'s run down the sides of the x and y lines. to find a place on the grid you have to read a set of #'s that look like this. [40,60]. the #'s are all different. Article posted November 17, 2008 at 09:34 PM GMT • comment • Reads 45 Article posted November 11, 2008 at 12:52 AM GMT • comment • Reads 43 As soon as Ralts got the egg, he spent every second with it! One day when Ralts was waiting for his sister to come to their house for babysitting. But. That's when it happened! The egg started moving! Ralts was shocked to find little tiny crack's in the egg! Ralts screamed for Kirlia! And ten miles away, Kirlia was walking towards her house when her censor sensed Ralts's cry. That was when she saw a firetruck go the path she would take to get to her house! She thought Ralts was in deep trouble! She had to beat the firetruck to the house! She just had to! What if Ralts was burned on the floor? Then she headed in the house. She found a sad little Ralts holding something. Ralts opened his eye's wide, tear's bawling out of them! It was when she got out of the fire that she saw a little pink Poke'mon holding on tightly to Ralts's arm tightly, eyes closed shut in fear. It was a Mesprit. When Ralts awoke by his mother's side he blurted out " I DIDN'T DO IT! IT WAS THE MESPRIT. I-I-I-I-I-I-I- DIDNT DO IT!" Then he fainted again. That night Mesprit slept happily beside Ralts. Kirlia was jealous. She wanted a legendary Poke'mon to be her friend too! Her mom; Gardevoir appeared by her side. " How's he doing?" She said worried, but in a quiet voice. " He hasn't coughed once, and I can see his chest moving." " Good." Gardevoir took with a sigh of relief. Then a cool breeze came from a window in Ralts's room. " I thought I shut that window?" Said Gardevoir in an unusual expression. Then a light filled the room, and two odd shape's appeared in front of the window. It was Uxie and Azelf. TO BE CONTINUED........................... Article posted November 11, 2008 at 12:52 AM GMT • comment • Reads 43 Pikachu -- Polygons Article posted November 22, 2008 at 07:44 PM GMT • comment • Reads 42 POLYGONS have at least three sides. A triangle, square, and many other shapes are POLYGONS. POLYGONS don't have rounded edges. All POLYGONS have at least one type of ANGLE there are acute angles, obtuse angles, and right angles. I know ths is a late one! SORRY! Article posted November 22, 2008 at 07:44 PM GMT • comment • Reads 42 Pichu -- math Article posted November 17, 2008 at 09:41 PM GMT • comment • Reads 43 math for math we've been learning about degrees and angles its actualy sorta fun! and we go on the computer and go on this website thats called g.o. logo its sorta like a puzzle but with degrees and pionts and there is a little turtle that goes were ever you want it to like... [-50 70] or [40 -20] and [-10 90] but some times its really hard. where ever you go the turtle follows and its like a little marker so you can make polygons & triangles Article posted November 17, 2008 at 09:41 PM GMT • comment • Reads 43 Article posted November 17, 2008 at 10:09 PM GMT • comment • Reads 43 What we have lately have been doing in math is.... COORDINATE GRIDS! Sometimes we went to Geo logo and did these game whatevers. It's kinda fun. This one is polygons and coordinate grids. It was sorta ironic beacuse before we were doing poly gons in math. We recently started coordinate grids. Any way, we had this paper grid in ourblue math folders, we had to draw a polygons picture in them. Then we went to Geo logo and did the grid on the computer. It's kinda harder than it looks. Ok, so your this cutsie little turtle, and when the turtle moves, line come from behind it like tracks. That's what makes the polygons. now here's the hard part. We had to look at our picture and see where it starts. Heres an example of what you type in the command center: jumpto [-30 10] Then, you have to do setxy INSTEAD of jumpto. Then the next numbers. I didn't even finish my doggie polygon! it was hard enough drawing it! Think for a moment. What does a polygon have to have, and what can't it? The ear was hard. i hand to kinda go down and make a rectangle on the face with out a top. Then to day we did another verison. It was WAY harder. No coordinate grid. So we had to do left right and stuff. We started out with making a suare the na weird triangle. I was on my way to the last line of my square, i ran out of room in the command center. i made room, then some how screwed it up and i had to make the next to lines again! And it's so hard to get the lines staright! I once again did not finish. At least I still think battle ship is still fun. Your tree climbing and swimming friend, squirtle Article posted November 17, 2008 at 10:09 PM GMT • comment • Reads 43 Article posted November 18, 2008 at 09:55 PM GMT • comment • Reads 53 Hey! Do you guys remember when you got a one of those boxes with a lid on top and you have to drop the shape in the right hole? Playing Battle ship? Playing with pattern blocks? Or when you first learned what a basic triangle is? Well, That's a good start, but theres more to it than that. Well, for starters, There isn't just the basic triangle, theres, a lot of diffrent kinds. yeah. Like scalene. my favourite kind is Acute. Your tree climbing and swimming friend, Squirtle Article posted November 18, 2008 at 09:55 PM GMT • comment • Reads 53 Article posted November 17, 2008 at 10:13 PM GMT • comment (1) • Reads 46 Hey there I am here to tell you about cordinate grids. Grids help you locate things such as on a map they use grids to locate things such as mountains. There are two axis's they are the x and y axis first comes x then y. We use a program called geo logo on geo logo we made polygons. We had to use crdinates to make polygons such as a square, triangle and rectangle. A grid has 4 sections there are nagitive numbers and normal numbers. Well I hope this helped you with grids! Next polygons are shapes such as squares, triangles, rectangle there is even one called a dodeckahedron. polygons are complicated they have things call vertices, and angles. Also triangles have 3 points and each point has an angle each triangle has 180 degrees total. If the triangle doesn't well... it isn't a triangle. And thats all I know about coordinate grids polygons and triangles. Article posted November 17, 2008 at 10:13 PM GMT • comment (1) • Reads 46 Tweety -- ppolygons Article posted November 17, 2008 at 10:08 PM GMT • comment • Reads 48 in class we are learning about polygons. we have been drawing shapes that are polygons and we have been making polygons on a gride and finding the corndanets.in computer lab we have been playing a game called geo logo.here are a few lessons about polygons. one they have to have stright lines. two they can not be curved. and three they have to be conected.well that is all we have learned in class so far. Article posted November 17, 2008 at 10:08 PM GMT • comment • Reads 48 Tweety -- triangles Article posted November 18, 2008 at 09:49 PM GMT • comment (1) • Reads 42 in class we are learning about triangles.right now we are doings games like drawing triangles on the a coordinate gride.and making our own triangles.there are scalene triangles,isoseles triangles,equilateral triangles,and much more!! we are also learning about what angles they have.well that is all we have learned about triangles in class. Article posted November 18, 2008 at 09:49 PM GMT • comment (1) • Reads 42 Brian -- math Article posted November 17, 2008 at 09:36 PM GMT • comment • Reads 47 We are doing lots of fun things in math. We are learning about polygon. On the internet I like to play the turtle game.You type in cornets to make the turtle move to cerate a polygon. you tipe in codes to tell the turtle were to go to make it go forwerd you tipe in fd. ten the number you wont the turtle to move. to make the turtle tern rite you tipe in rt. to make the turtle tern left you tipe in lt. jrunals. In those polygons are a shape that has 3 or more sides. triangles are polygons because thay have 3 sides. so that raps up my article about math and polygons. Article posted November 17, 2008 at 09:36 PM GMT • comment • Reads 47 Article posted November 6, 2008 at 08:16 PM GMT • comment • Reads 41 Number coordinates can make a shape by connecting lines, each of those lines have small dots connected to them. These dots show the coordinates like [-40 30] It must connect to another line or else it won't work. If it dosen't connect, it wont become a shape, and your trying to find hidden shapes. TRIANGLES There are many types of triangles, Like the Scalene triangle or the Isoscoles triangle. (wow, big names huh?) Also, to be a triangle it does not have to have equal sides ( The Right Triangle does not have ALL equal sides, just TWO). But the triangle shape has to close, because it's a Polygon. Polygons are shapes that have........ • straight lines (HAVE TO BE STRAIGHT) • They are closed shapes ( If they weren't closed, they wouldn't be polygons!) • They are bot round like circles (Circles aren't polygons!) Article posted November 6, 2008 at 08:16 PM GMT • comment • Reads 41 ShadowKing -- math Article posted November 17, 2008 at 10:11 PM GMT • comment • Reads 48 We have been learning many different polygons. And how to make a polygon and what polygon's need. We made are polygon's on the computer. There is a turtle that can be moved to move it you have to type stuff to make your polygon. My polygon was a house it took a long time to make. And we have been also making triangles on the computer to. It is not as easy as it sounds because if you want go to the left you have to type lt 90 and it is the same if you want to go right. Article posted November 17, 2008 at 10:11 PM GMT • comment • Reads 48 Article posted November 17, 2008 at 09:36 PM GMT • comment • Reads 42 Here is an article about coordinate grids and polygons. Polygons must have three sides or more to be a polygon. They also must be a closed figure. A few examples of polygons are: square, triangle, trapezoid, rhombus, rectangle octagon, and hexagon. There are a lot more polygons than I just explained. Poly gons are a very common shape. They are everywhere! Coordinate grids can show where something is. If you draw a polygon on a coordinate grid, you could locate it by looking at it's position. A coordinate grid is kind of like a map. A coordinate grid is great! Next are different types of triangles. The isoceles triangle has two equal sides. The equilateral triangle has all even sides. A right triangle has one right angle. That's all about polygons and coordinate grids! Article posted November 17, 2008 at 09:36 PM GMT • comment • Reads 42 Alice -- polygons Article posted April 14, 2009 at 09:30 PM GMT • comment • Reads 41 A polygon can have any amount of sides but they have to be strait not curved and they also have to be closed.that is my article about polygons. Article posted April 14, 2009 at 09:30 PM GMT • comment • Reads 41 About the Blogger
{"url":"http://classblogmeister.com/blog.php?blogger_id=199869&assignmentid=4694","timestamp":"2014-04-18T00:17:36Z","content_type":null,"content_length":"54852","record_id":"<urn:uuid:898423c3-f647-42cd-86c1-304183d9d5f3>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Find X-Intercept | The Classroom | Synonym Cartesian planes, the graphs on them and the equations that define those graphs are a basic element of algebra and more advanced mathematics. The standard form for those equations is y=mx+b; this is called the slope-intercept form. A first operation that is necessary for many more complex problems is to find the intercepts of the function. The intercepts are simply the points on the Cartesian plane where the graph crosses the axes. Algebraically, the intercepts are defined by values of the function where x=0 or y=0. Step 1 Determine the format of the equation you are dealing with. If you have the standard slope-intercept form, then the X-intercept is listed already. For example, in the equation y=3x-4, the X-intercept is -4. Step 2 Set the x value to zero and solve for y. This way, no matter what form the equation is written in, you will find the y value when x=0, which is the X-intercept. Step 3 Set x=0 in point slope form as well, when the function is defined in the form (y-Y)=m(x-X). Step 4 Solve for y. In the equation (y-2)=4(x-1), when we substitute 0 for x, we are left with y-2=4(0-1). Therefore y-2=4(-1); therefore y-2=-4; therefore y=-4+2; therefore y=-2; the X-intercept is -2. Style Your World With Color
{"url":"http://classroom.synonym.com/xintercept-2484.html","timestamp":"2014-04-19T14:30:32Z","content_type":null,"content_length":"29113","record_id":"<urn:uuid:4e55ee18-691e-4cf1-9f6d-1718a1cfb8c4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] matrix indexing question Bill Baxter wbaxter@gmail.... Tue Mar 27 01:14:04 CDT 2007 On 3/27/07, Alan Isaac <aisaac@american.edu> wrote: > > On 3/27/07, Alan Isaac <aisaac@american.edu> wrote: > >> May I see a use case where the desired > >> return when iterating through a matrix > >> is rows as matrices? That has never > >> been what I wanted. > On Tue, 27 Mar 2007, Bill Baxter wrote: > > AllMyPoints = mat(rand(100,2)) # 100 two-d points > > for pt in AllMyPoints: > > xformedPt = pt * TransformMatrix > > # do something to transformed point > This seems artificial to me for several reasons, > but here is one reason: > AllxformedPts = AllMyPoints * TransformMatrix Yeh, I was afraid you'd say that. :-) But maybe I've got a lot of points, and I don't feel like making a copy of the whole set. Or maybe it's not a linear transform, but instead xformedPt = someComplicatedNonLinearThing(pt) I do stuff like the above quite frequently in my code, although with arrays rather than matrices. :-) For instance in finite elements there's assembling the global stiffness matrix step where for each node (point) in your mesh you set some entries in the big matrix K. Something like for i,pt in enumerate(points): K[shape_fn_indices(i)] = stiffness_fn(pt) That's cartoon code, but I think you get the idea. I don't think there's any good way to make that into one vectorized expression. The indices of K that get set depend on the connectivity of your mesh. > Note that I am no longer disputing the convention, > just trying to understand its usefulness. More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-March/026835.html","timestamp":"2014-04-18T08:40:15Z","content_type":null,"content_length":"4470","record_id":"<urn:uuid:353d5f47-7b4e-4d69-8e95-b7f11c0b5c37>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
vector analysis (mathematics) Update or expand this article! In Edit mode, you will be able to click anywhere in the article to modify text, insert images, or add new information. Once you are finished, your modifications will be sent to our editors for review. You will be notified if your changes are approved and become part of the published article! Update or expand this article! In Edit mode, you will be able to click anywhere in the article to modify text, insert images, or add new information. Once you are finished, your modifications will be sent to our editors for review. You will be notified if your changes are approved and become part of the published article! vector analysis Article Free Pass vector analysis, a branch of mathematics that deals with quantities that have both magnitude and direction. Some physical and geometric quantities, called scalars, can be fully defined by specifying their magnitude in suitable units of measure. Thus, mass can be expressed in grams, temperature in degrees on some scale, and time in seconds. Scalars can be represented graphically by points on some numerical scale such as a clock or thermometer. There also are quantities, called vectors, that require the specification of direction as well as magnitude. Velocity, force, and displacement are examples of vectors. A vector quantity can be represented graphically by a directed line segment, symbolized by an arrow pointing in the direction of the vector quantity, with the length of the segment representing the magnitude of the vector. Vector algebra. A prototype of a vector is a directed line segment AB (see Figure 1) that can be thought to represent the displacement of a particle from its initial position A to a new position B. To distinguish vectors from scalars it is customary to denote vectors by boldface letters. Thus the vector AB in Figure 1 can be denoted by a and its length (or magnitude) by |a|. In many problems the location of the initial point of a vector is immaterial, so that two vectors are regarded as equal if they have the same length and the same direction. The equality of two vectors a and b is denoted by the usual symbolic notation a = b, and useful definitions of the elementary algebraic operations on vectors are suggested by geometry. Thus, if AB = a in Figure 1 represents a displacement of a particle from A to B and subsequently the particle is moved to a position C, so that BC = b, it is clear that the displacement from A to C can be accomplished by a single displacement AC = c. Thus, it is logical to write a + b = c. This construction of the sum, c, of a and b yields the same result as the parallelogram law in which the resultant c is given by the diagonal AC of the parallelogram constructed on vectors AB and AD as sides. Since the location of the initial point B of the vector BC = b is immaterial, it follows that B C = AD. Figure 1 shows that AD + DC = AC, so that the commutative law holds for vector addition. Also, it is easy to show that the associative law is valid, and hence the parentheses in (2) can be omitted without any ambiguities. If s is a scalar, sa or as is defined to be a vector whose length is |s||a| and whose direction is that of a when s is positive and opposite to that of a if s is negative. Thus, a and -a are vectors equal in magnitude but opposite in direction. The foregoing definitions and the well-known properties of scalar numbers (represented by s and t) show that Inasmuch as the laws (1), (2), and (3) are identical with those encountered in ordinary algebra, it is quite proper to use familiar algebraic rules to solve systems of linear equations containing vectors. This fact makes it possible to deduce by purely algebraic means many theorems of synthetic Euclidean geometry that require complicated geometric constructions. Products of vectors. The multiplication of vectors leads to two types of products, the dot product and the cross product. The dot or scalar product of two vectors a and b, written a·b, is a real number |a||b| cos (a,b), where (a,b) denotes the angle between the directions of a and b. Geometrically, If a and b are at right angles then a·b = 0, and if neither a nor b is a zero vector then the vanishing of the dot product shows the vectors to be perpendicular. If a = b then cos (a,b) = 1, and a·a = |a|^2 gives the square of the length of a. The associative, commutative, and distributive laws of elementary algebra are valid for the dot multiplication of vectors. The cross or vector product of two vectors a and b, written a × b, is the vector where n is a vector of unit length perpendicular to the plane of a and b and so directed that a right-handed screw rotated from a toward b will advance in the direction of n (see Figure 2). If a and b are parallel, a × b = 0. The magnitude of a × b can be represented by the area of the parallelogram having a and b as adjacent sides. Also, since rotation from b to a is opposite to that from a to This shows that the cross product is not commutative, but the associative law (sa) × b = s(a × b) and the distributive law are valid for cross products. Do you know anything more about this topic that you’d like to share?
{"url":"http://www.britannica.com/EBchecked/topic/624327/vector-analysis","timestamp":"2014-04-16T05:48:34Z","content_type":null,"content_length":"84878","record_id":"<urn:uuid:e3bca9d4-a98d-4e25-9afc-9446394de475>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Classifying Triangles Classifying Triangles Study Guide LearningExpress Editors Updated on Oct 3, 2011 Introduction to Classifying Triangles The only angle from which to approach a problem is the TRY-Angle. —Author Unknown The triangle is the fundamental figure in geometry. This lesson will expose the different types of triangles and how to use the Pythagorean theorem. A Triangle is a figure with three equal sides. The sum of the measure of the angles in a triangle is equal to 180°. A triangle can be classified by its angles: acute, right, or obtuse. Acute triangles have all angles less than 90°. Equilateral triangles have all angles equal to 60°. All sides of equilateral triangles are congruent (equal). Obtuse triangles have one angle that is greater than 90°. Right triangles have one right (90°) angle. │Tip: │ │ │ │Be careful when you classify triangles by angle measure. Notice that even though right triangles and obtuse triangles each have two acute angles, their classification is not affected by these │ │angles. Acute triangles have all three acute angles. │ A triangle can also be classified by its sides: scalene, isosceles, or equilateral. Isosceles triangles have two congruent sides (and the angles opposite these equal sides are equal as well). Scalene triangles have no sides that are congruent. Congruent Triangles Triangles with the same size and shape are congruent triangles. The matching parts of congruent triangles are called congruent parts. You can determine that two triangles are congruent if the following corresponding parts are congruent: 3 sides or side-side-side (SSS) 2 angles and the included side or angle-side-angle (ASA) 2 sides and the included angle or side-angle-side (SAS) Similar Triangles Triangles with the same shape, but different sizes, are similar triangles. The angles are equal, but the sides vary in length. Similarity is indicated by the ~ symbol. Right Triangles and The Pythagorean Theorem Right triangles have one right angle. They are special because you can use the Pythagorean theorem: a^2 + b^2 = c^2 Here, a and b are the legs of the triangle and c is the hypotenuse. The hypotenuse is the side opposite the right angle. The hypotenuse is equal to the sum of the squares of the lengths of the legs. Let's look at an example. What is the hypotenuse for the following triangle? Using the Pythagorean theorem, substitute the values that you know: a^2 + b^2 = c^2 6^2 + 8^2 = c^2 36 + 64 = c^2 100 = c^2 √100 = c^2 10 = c So, the hypotenuse length is 10. This also means that the lengths 6, 8, and 10 work in a right triangle. Three numbers that prove the Pythagorean theorem are called Pythagorean triples. Three other Pythagorean triples include 3-4-5, 5-12-13, and 8-15-17. Find practice problems and solutions for these concepts at Classifying Triangles Practice Questions. From Basic Math in 15 Minutes A Day. Copyright © 2008 by LearningExpress, LLC. All Rights Reserved. Ask a Question Have questions about this article or topic? Ask 150 Characters allowed Popular Articles Wondering what others found interesting? Check out our most popular articles.
{"url":"http://www.education.com/study-help/article/classifying-triangles/","timestamp":"2014-04-18T09:10:39Z","content_type":null,"content_length":"90973","record_id":"<urn:uuid:2d6ed376-89a8-4437-9aa1-aae89818eff7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 649 social economics politics What are the various methods of maintaining peace and unity in binational, multinational, and multiethnic states? What are the strengths and weaknesses of each method? Chemistry - Urgent ahhh, I get it now. thanks a lot, I actually did learn how to do it now, believe it or not. Chemistry - Urgent Is there anyway you could work this one out for me and explain what you did? I would probably learn a lot better than all of this reasoning and proportions. It's greatly appreciated. Chemistry - Urgent Totally lost. Chemistry - Urgent Ok, I got that, but what do I do with the 3.16 ×10^23? Chemistry - Urgent Determine the mass of 3.16 ×1023 atoms Cl. Answer in units of g. How do you do this? Need to get it done really fast. Thanks for help. so it would be 6.02 x 10^23 x 50? then whattt? social economics politics what are the various methods of maintaining peace and unity in binational, multinational, and multiethnic states? what are the strengths and weaknesses of each method? I can be three different states of matter, but not at the same time. I am necessary for life, but I can also destroy life. I can be under your feet, around your body, or over your haed. What am I? Im a Danish student, and I've written this summary which I would like someone to correct. Thanks in advance! In the short story Beyond the pale , published in 1888, author Rudyard Kipling states via his narrator that one should stick to its caste. Trejago, who... how do you write the math expression for the product of 82 and g 6/7 because 7/8 and 6/7 Nikki lives near the deepest ocean trench in the world.What country does she live in . Name one land and ocean animal that migrate. plot it on a graph in your calculator. then see the number where the line continues to stay on for a y value. for example, lets say it will go up and down but when it hits x=4 the y value is 2 and every y value after that is 2, then the limit for infinity would be 2. also...fo... I can travel where are no molecules,but never through a brick.I can travel through space,but never around a corner.What am I? Anatomy and Physiology Hi, i have to right a research paper about the origins and history of a cancer. I do not know where to find this information. We can choose any type of cancer, but i do not know where to find this information(who first diagnosed it, when was it first diagnosed) Scientist do experimental tests more than once so they can reduce the chance of errors. This is called conducting what? In the story Alexander Who Used To Be Rich Last Sunday what did Nicky tell him to do with his dollar? if x equals 0,1,2,3 and y = 5,10,15,20 what is the pattern written as an equation--please help can't figure out this one social studies why were european nation competing for land Consider the set of northwestern states or provinces {Montana, Washington, Idaho, Oregon, Alaska, British Columbia, Alberta}. If a person chooses one element, show that in three yes or no questions, we can determine the element. Your Solution Hints I want you to c... Resolve a displacement of 700 cm into two components along direction lines that lie on opposite sides of the displacement and each of which makes an angle of 30° with the displacement. 404 cm and 478 cm 404 cm each 383 cm and 415 cm 415 cm each math...number sequence i need help with this partter n 1,2,5,7,11 give me numbers that follow the parttern Physical Science Our teacher told us that the fourth partial sum approximates the sum of the sequence 7n/(n^3+1). The fourth partial sum is also the error of the sequence and our teacher asks us to find a number X such that the error is less than X. Find a number X such that the error is less than X (The error is 14593/2340 for the sequence, 7n/(n^3+1). How would you start to solve this problem? 12th grade Are you sure you wrote the question correctly? I do not understand how g which is a function of x is written as a function of t, and especially with the dt. A gun that is spring-loaded shoots an object horizontally. The initial height of the gun is h=5 cm and the object lands 20 cm away. What is the gun's muzzle velocity? Do i use something like v=sqr(k) /m*x. am getting something like 3.2, is this somewhat correct at all. A 32-u oxygen molecule (O2) moving in the +x direction at 580 m/s collides with an oxygen atom (mass 16 u) moving at 870 m/s at 27 degrees to the x axis. The particles stick together to form an ozone molecule. Find the velocity of the ozone. Express your answer in terms of the... A 150g fircracker is thrown at 66km/h . It explodes in the air into two pieces, with a 42g piece continuing ahead at 93km/h.How much energy is gained in the explosion by the two pieces. I know you use k=1/2mv^2 though, do you use m1v1+m2v2=(m1+m2)vf I am getting something like... Object A crashes into object B. A is more massive than B. Which is a true statement? a. A experiences an impulse b. B experiences an impulse c. Both A and B experience an impulse of equal magnitude d. Both A and B experience an impulse, but the impulse on B is greater I say C. Two trucks approach at the same speed and undergo an inelastic collision. Their total momentum? a. becomes zero b. remains unchanged c. doubles d. I need to know the masses of the two cars I say B. Two rollerbladers throw a frisbie back and forth on a frictionless surface. Which does not change? a. Momentum of an individual rollerblader b. Momentum of the frisbie c. Momentum of the system consisting of the frisbie and rollerbladers throwing the ball d. Momentum of the sy... Find the force on a hollow dome that is 1000 feet in diameter, that is a mile below the surface. HOw large can the dome be expanded and not surpass 600 billion pounds. I know the height will be 500, and i think that the intergral is 5280 to 4780 int 64,000pih, I am not sure th... 12th grade Area of the house to be painted = A Bill pains "B" area per hour John pains "J" area per hour So, A/B = 4 hours, and A/J = 7 hours => B = A/4 and J = a/7 B and J says how much area they can paint per hour. When both of them work together, assume it takes... wHAT DO I DO WITH THESE VALUES THEN... A 69 kg roller blader, is on a frictionless surface at rest, he then tosses a 4.0 kg rock that has a velocity given by v=3i+4jm/s, the axes are both in the horizontal plane. Find the x component and y component of the subsequent velocity of the roller blader? I think tis is a ... 11th grade-Calculas dD/dt = A(2pi/365)*cos [(2pi/365)*(t-80)] =0 for maximum or minimum (2pi/365)*(t-80) = n*pi/2 when n=1 gives maximum , then (t-80) = 365/4, for minimum n = 3 and t-80 = 3*365/4 (3) = dD/dt = +2/60 (4) = dD/dt = -2/60 5th grade math problem write each number in standard form thirtytwo million,tenthousand one thanks i didnt really get molarity either but my question is for molality. i dont really understand molality. I know that it is a big concept but could someone explain it a little. the question is how would you prepare a molality solution. A. putting more money into circulation. The reason I know the answer is correct is because I just took a test and got that same exact question right!!! ;) If x is a natural number then,(x+5)/5=x+1 If x is a natural number, then (x +4)^2 =x +16 i got 14 Algebra II How do I evaluate log base 8 of 4? 4.5 × 10−12 m English:writing memo Good example I need facts about coral reefs Poem: A Voice by Pat Mora what does the daughter feel about her dad fear, amusment, and love or what? etc...... Poem: Hanging fire by Audre Lorde i think she has breast canswer Poem: Hanging fire by Audre Lorde i think she has breast canswer crt 205 Over time, non-specialists are usually able to assimilate radically new scientific ideas, even though these ideas may seem strange when they are initially introduced. Such was the case with Newtonian physics; when Newton proposed his ideas regarding motion and gravitation in t... For my final exam I must build a paper tower that stands at least 60+ inches. These are what I can use. -One sheet of paper (8.5x11) -Two feet of tape -Pencil -Ruler I can't seem to find the correct way to do this anywhere on the internet, so can I get some help here? thanks but how would i show my work using dimensional analysis. like.. 11 L x ?/? = gal in that form what is the easiest way to convert 11 liters to gallons social studies what is a government in which citizens rule through elected representatives? because vibration causes noises and in an earthquake, there is a lot of vibraion hmm...i need help on this problem, anyone know how to o this? (if u can, put a step-by-step solution) i would really be grateful. the coordinates of two points are A(-2,6) and B(9,3). find the coordinates of the point C on the X-axis so that AC=BC Pigskin Geography A game between two of the competing teams from bordering states might well be called the "Kudzu Klash". The game between which teams would deserve this title? how could you identify a pure metal if you have a balance, a graduated cylinder and a table of densities for metals? language arts can you please help me unscramble these words. nnreital cicnoflt reexatln ciclontf yucataid enruta what does abs mean thank you no it cant be the answer were suppose to have 2 answers Algebra 1 were solving absolute value equations \can you help me out in 1 8-|3x-2|=3 i dont know if we distribute the negative or make it into -1 or its a no value something like that theirs no possible answer 7th grade Jasmine's quarters total $2.80 more than her nickels, of which she has half as many as she has dimes, which total 80 cents more than her quarters. what is velocity Social Studies Caused the isrealites to leave Canaan and go to egypt Eric Sherm began working as a part-time waiter on April 1st, 2008 at Yard restuarant. The cash tips of 475 that he received during April were reported on form 4070, which he submitted to his employer on May 1st. During May, he was paid wages of 630.00 by the restuarant. Comput... Find the equation of the straight line joining a)(4, 4) and ( 2, 0) b)(3, -1), (5, 4) i also need to know how you went about solving this problem so i can understand.. many thanks Math - Factors i need help Math - Factors What are improper factors? How do we find improper factors for any given numbers? suppose h i equals the number of points lindsey scored in the ieth basketball game so far this season, and h1=14, h2=12, h3=18, h4=14, h5=18, h6=19, h8=16 which expression represents the mean number of points scored per game. a)7 E hi i=1/7 b)8 E hi i=1/8 c) 1 E hi i=8/8 d) 8 ... what does technological development have to do with the development of time zones Name a tool that you can use to measure the volume of an object. computer education false, has to be connected to the internet Name a tool that you can use to measure the circumference of an object. Name a tool that is used to measure the circumference of an object and Name a tool that you can use to measure the volume of an object. why is the electron the part of the atom that moves when an electric current flows? Computer Science I have defined a microscopic verson of scheme that utilizes amb. I am currently using Dr. Scheme. This is my definition: (define (amb-eval s environment succeed fail) (cond ((not (pair? s)) (succeed (cond ((eq? s '#t) '#t) ((eq? s '()) '()) ((eq? s '#f) ... Find all angles of a right triangle RST given that r = 18cm and s=20 cm If a triangle has angles 35 and 65 degrees, what is the measurment of the third angle? Is it a right triangle? at what speed did a plane fly if it traveled 1760 meters in 8 seconds I do not see how Reiny went from: cosx(2sin^2x-1)/sinx TO (cosx/sinx)(-cos2x) if you break it up you get (cosx/sinx)((2sin^2x-1)/sinx) help im doing a take home exam lol What is a maerial that was recently alive? How much iron (II) sulfide would be needed to prepare 15L of hydrogen sulfide? My balanced equation is 2HCl + FeS --> H2S + FeCl2. I have converted 15L H2S to mols, but I have no idea where to go from here. what evidence would you look for in a chemical change 5 plus 5/10 ok thank you for all of your help 564t=744(t-.5) 564t=744t-372 564t-564t=744t-372-564t 0=180t-372 0+372=180t-372+372 372=180t 372/180=180t/180 2.07=t Is that correct? no the second one left a half an hour later sorry i forgot to include that A jet leaves the airport traveling at an average rate of 564km/h. Another jet leaves the same airport traveling at a rate of 744km/h in the same direction. How long will it take for the second jet to reach the first jet what is 3(x-4)-2(8-x Wha are the roots of x to the second power divided by x minus six equals Zero? Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Jacob&page=6","timestamp":"2014-04-20T19:10:05Z","content_type":null,"content_length":"25457","record_id":"<urn:uuid:7234f814-597e-487f-9fbc-605f5783ce21>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Is spiritual progress worth it? logank9 wrote:I just want to know beyond a shadow of a doubt it is. Because it just doesn't make sense to me why everyone wouldn't follow it. If it is the end all be all to highest happiness why isn't everyone spiritually improving themselves all the time? There is only one way to find out: that is to try it and succeed. If you do, then it was worth the effort. If you try but don't succeed, it was not worth the effort. If you don't try, you'll never know. That is why not everyone makes an effort. There is a reason why religions are referred to as "faiths". You have to have faith that the effort will be worthwhile in order to make the effort. Not everyone has it. Om mani padme hum Re: Is spiritual progress worth it? I don't know if I've made any spiritual progress at all. I can say: as a direct result of my attempts at Buddhist practice, I've become a lot easier for others to deal with, I tend to create fewer problems, and I'm having a lot more fun than I used to. Really, I have a great time most days just doing ordinary routine things. YMMV. So yeah, I'm quite happy to continue on this path. Re: Is spiritual progress worth it? Jikan wrote:I don't know if I've made any spiritual progress at all. I can say: as a direct result of my attempts at Buddhist practice, I've become a lot easier for others to deal with, I tend to create fewer problems, and I'm having a lot more fun than I used to. Really, I have a great time most days just doing ordinary routine things. YMMV. So yeah, I'm quite happy to continue on this path. "Just as a lotus does not grow out of a well-levelled soil but from the mire, in the same way the awakening mind is not born in the hearts of disciples in whom the moisture of attachment has dried up. It grows instead in the hearts of ordinary sentient beings who possess in full the fetters of bondage." -Se Chilbu Choki Gyaltsen Re: Is spiritual progress worth it? Are infinite rebirths full of suffering (samsara) with brief respites here and there worth it? Coz that's the other option. "When one is not in accord with the true view Meditation and conduct become delusion, One will not attain the real result One will be like a blind man who has no eyes." Naropa - Summary of the View from The Eight Doha Treasures Re: Is spiritual progress worth it? logank9 wrote:I always get caught up on whether spiritual progression is worth it....Thoughts anyone? I'm lost today The alternative is wandering further and further in spiritual darkness. That's not acceptable to me. Kirt's Tibetan Translation Notes “All beings are Buddhas, but obscured by incidental stains. When those have been removed, there is Buddhahood.” Hevajra Tantra Re: Is spiritual progress worth it? Johnny Dangerous wrote: Jikan wrote:I don't know if I've made any spiritual progress at all. I can say: as a direct result of my attempts at Buddhist practice, I've become a lot easier for others to deal with, I tend to create fewer problems, and I'm having a lot more fun than I used to. Really, I have a great time most days just doing ordinary routine things. YMMV. So yeah, I'm quite happy to continue on this path. + 1 That's the carrot. I tend not to dwell on the stick - the "infinite rebirths full of suffering (samsara) with brief respites here and there" etc - because all I really know is the here-and-now. Re: Is spiritual progress worth it? Pascal, call your office! What would you estimate to be the likelihood (that is, the prior probability) of Buddhism being true? Let this be (b). If Buddhism is true, then you get a chance to obtain infinite jollies (metric units of happiness, j). What is that chance? Now you have to estimate the odds of Mahayana vs. Theravada. (Let's say 50-50.) If Mahayana, then all sentient beings eventually get saved, so 100 % chance of infinite jollies. If Theravada, infinitesimal chance and infinite jollies, so 1. (Let's also assume that it makes no difference to your fate which one you choose.) If Buddhism is not true, then perhaps some other religion is true. What are the odds? Let this be (a). If so, then what are the odds that Buddhists may be saved, and thereby obtain infinite jollies? Let's say 20 %. If no religion is true, then as an atheist, you get...let's say 1000 jollies over the course of a life spent partying. If no religion is true, but you become a devout Buddhist, you miss out on some of those jollies. Let's say you get 500 jollies. So: [(b) x (.5) x infinity] + [(b) x (.5) x (1) - 1000 vs. [(a) x (.2) x infinity] + [(1-<a+b>) x (.8) x (negative infinity)] + 500 If Buddhism is 10 percent likely, and there is a 40 percent chance of some other religion being true, then (.1 x .5 x infinity) + (.1 x .5 x 1) -1000 vs. (.4 x .2 x infinity) + (.5 x .8 x <negative infinity>) + 500 All clear? Edit: no, it is not. This is a mess. Now you've got *me* confused. Re: Is spiritual progress worth it? Alfredo wrote:All clear? Edit: no, it is not. This is a mess. Now you've got *me* confused. Re: Is spiritual progress worth it? Kim O'Hara wrote: + 1 That's the carrot. I tend not to dwell on the stick - the "infinite rebirths full of suffering (samsara) with brief respites here and there" etc - because all I really know is the here-and-now. I've had more than enough of the "stick" to go back to my old ways, even without considering lifetimes apart from this one. It seems to me that this is the case for many of us--we come to Buddhism because we're hurting, and tired of hurting ourselves and others. Re: Is spiritual progress worth it? But wait! If the Mahayana is true, then all sentient beings (including you and me) will eventually get saved anyway, regardless of what religion (or lack thereof) we pick. So you could enjoy your 1000 metric jollies now (i.e. the fruits of a life of libertine materialism), AND get a chance (if Mahayana is true) of infinite jollies in the future. Taking other religions into account, assuming equal prior probabilities, you should probably choose the most intolerant ones (no sense in joining the Unitarians if you would have gotten into heaven anyway if they turn out to be right), and the ones with the best heaven and the worst hell. The catch, of course, is that there is no non-arbitrary way to assign prior probabilities. Re: Is spiritual progress worth it? Alfredo wrote:But wait! If the Mahayana is true, then all sentient beings (including you and me) will eventually get saved anyway, regardless of what religion (or lack thereof) we pick. So you could enjoy your 1000 metric jollies now (i.e. the fruits of a life of libertine materialism), AND get a chance (if Mahayana is true) of infinite jollies in the future. Taking other religions into account, assuming equal prior probabilities, you should probably choose the most intolerant ones (no sense in joining the Unitarians if you would have gotten into heaven anyway if they turn out to be right), and the ones with the best heaven and the worst hell. The catch, of course, is that there is no non-arbitrary way to assign prior probabilities. ... or benefits. Is the Islamic heaven more heavenly than the Christian heaven? How do they both compare with nibbana? Is rebirth as an animal worse than a time in Purgatory? If we can't know what will happen later, our best strategy is to decide on the basis of the here-and-now - as I said a little while ago. Re: Is spiritual progress worth it? Kim O'Hara wrote: + 1 That's the carrot. I tend not to dwell on the stick - the "infinite rebirths full of suffering (samsara) with brief respites here and there" etc - because all I really know is the here-and-now. Here-and-now? Liberian cannibal warlords. Mexican drug cartels. Refugees drowning in their attempt to enter Europe. Industrial slaughterhouses. HIV prevalence in Swaziland (21% of all adults). Sisa addiction in Brasil. Etc... The infinite rebirths in realms of suffering IS here and now. We just happen to have achieved a precious human rebirth. Extraordinarily precious. If we do not take advantage of it here-and-now, well, you only have to look at the other 95% of the worlds population to see what is in store. "When one is not in accord with the true view Meditation and conduct become delusion, One will not attain the real result One will be like a blind man who has no eyes." Naropa - Summary of the View from The Eight Doha Treasures Re: Is spiritual progress worth it? Alfredo wrote:But wait! If the Mahayana is true, then all sentient beings (including you and me) will eventually get saved anyway, regardless of what religion (or lack thereof) we pick. So you could enjoy your 1000 metric jollies now (i.e. the fruits of a life of libertine materialism), AND get a chance (if Mahayana is true) of infinite jollies in the future. Sure, if you want to wait around for an infinite period of time. "When one is not in accord with the true view Meditation and conduct become delusion, One will not attain the real result One will be like a blind man who has no eyes." Naropa - Summary of the View from The Eight Doha Treasures Re: Is spiritual progress worth it? If 'progress' is defined as 'attaining something' or 'getting somewhere' I would suggest you drop it. If that is really what's driving you, then the answer is 'no, definitely not'. The important point is to sit without any idea of personal gain. If you can't do that, do something else. He that knows it, knows it not. Re: Is spiritual progress worth it? logank9 wrote: Is it really worth it or any more worth it than living as an ordinary person? Nobody is saying you have to do spiritual practice. If you're not sure about it, then why not have a complete break and see how it feels? Re: Is spiritual progress worth it? Sherab Dorje wrote:Here-and-now? Liberian cannibal warlords. Mexican drug cartels. Refugees drowning in their attempt to enter Europe. Industrial slaughterhouses. HIV prevalence in Swaziland (21% of all adults). Sisa addiction in Brasil. Etc... The infinite rebirths in realms of suffering IS here and now. We just happen to have achieved a precious human rebirth. Extraordinarily precious. If we do not take advantage of it here-and-now, well, you only have to look at the other 95% of the worlds population to see what is in store. It pays to heed this reminder. Thanks Greg.
{"url":"http://www.dharmawheel.net/viewtopic.php?f=77&t=14575&start=20","timestamp":"2014-04-17T05:40:39Z","content_type":null,"content_length":"54573","record_id":"<urn:uuid:85688315-de7c-453e-93a1-62490d6dc8f0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
ordered of pairs December 7th 2011, 10:41 AM ordered of pairs In 1953, L.J.Mordell said that there were only four ordered triples of integers (x, y, z) for which x^3 +y^3 +z^3 = 3 .one of these is (1, 1, 1). What are the other three ordered triples? December 7th 2011, 11:03 AM Re: ordered of pairs (4, 4, -5) (4, -5, 4) (-5, 4, 4) December 7th 2011, 11:33 AM Re: ordered of pairs can you give me explanation December 7th 2011, 11:39 AM Re: ordered of pairs I saw your question. I thought that I could answer it. I assumed, since the claim was that there were exactly three more solutions, and since the equation is symmetric in x/y/z, that the solutions would be symmetric. I assumed that two of the variables were equal. I tried a few smaller numbers, and considered powers of small numbers. I came across the given solution. I typed in into the box, and hit "Post quick reply". December 7th 2011, 12:02 PM Re: ordered of pairs Good Explanation! December 7th 2011, 12:05 PM Re: ordered of pairs
{"url":"http://mathhelpforum.com/algebra/193681-ordered-pairs-print.html","timestamp":"2014-04-21T06:47:45Z","content_type":null,"content_length":"5836","record_id":"<urn:uuid:7c661915-c24c-4128-89e4-46d3d0b7362f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Visualizing Limits of Functions of 2 Variables The basic idea for limits of functions of one variable is that given any ε we can find a δ such that if we are within δ of x = a in the domain then f(x) lies between the lines y = L - ε and y = L + ε . Formally | f(x) - L| < ε whenever | x-a | < δ. But what about a function of 2 variables? I'll formalize the language later but for now we'll consider the following 3 animations which illustrate qualitatively what limits of functions of 2 variables are all about. The first difference when we talk about functions of 2 variables is that instead of considering an interval containing x = a in the domain of the form a - δ < x < a + δ We talk about a circle centered at a point (x[0],y[0]) of radius δ in the domain of the form (x-x[0])^2 + (y-y[0])^2 < δ^2 . Then the definition of limit takes on a strikingly similar form: The limit of f(x,y) as (x,y) approaches (x[0],y[0]) = L means that Given any ε there is a δ such that the value of z = f(x,y) lies between the planes z = L - ε and z = L + ε Whenever (x,y) is in the circle (x-x[0])^2 + (y-y[0])^2 < δ^2 . See Animation 1 -- Notice as we shrink the circle (let δ go to 0) centered at (x[0],y[0]) the surface is confined between planes which are converging to z = 1.
{"url":"http://calculus7.com/id46.html","timestamp":"2014-04-21T04:32:34Z","content_type":null,"content_length":"37881","record_id":"<urn:uuid:5847bab7-538b-492c-9f2e-5cae757e675c>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
User Paul Christiano bio website rationalaltruist.com visits member for 1 year, 2 months seen Apr 7 at 23:57 stats profile views 73 24 awarded Benefactor 19 accepted Are gaussians with different moments far in total variation distance? Mar Are gaussians with different moments far in total variation distance? 19 comment Thanks! I'll give this the bounty unless a significantly cleaner solution crops up (unless there is some etiquette about this I don't know). Minor (I think) issue: the covariance matrices can have eigenvalues more than 1. But you only need to apply this bound for sparse vectors, so it looks like you are good. Mar Are gaussians with different moments far in total variation distance? 18 comment Suvrit: it's a big integral, of the absolute value of the difference between the Gaussian densities. I don't really see how to simplify it usefully, and writing it out is a mess. Mar Are gaussians with different moments far in total variation distance? 18 revised Added remark about 2 dimensional case. Mar Are gaussians with different moments far in total variation distance? 18 comment I certainly should have made that observation... I don't see how to do this nicely even in the case of 2 dimensional Gaussians, but upon consideration it does seem like that should be much easier. 18 awarded Nice Question 17 awarded Promoter Mar Are gaussians with different moments far in total variation distance? 15 comment No; together with the strong concavity of the entropy it might give an alternative and conceptually clearer proof of this claim, in which I am once again interested. What suggests this might be a homework problem? Mar Are gaussians with different moments far in total variation distance? 14 revised edited body; edited title 14 asked Are gaussians with different moments far in total variation distance? 8 awarded Self-Learner Oct An approximate infinite-dimensional fixed point theorem 30 comment This is surely too much to ask, but do you know what happens if every coordinate of $f$ is continuous in the product topology, except for one of them? Intuitively there are two kinds of obstructions from infinite dimension, and it seems like this eliminates one of them (we no longer have to intersect infinitely many sets in a non-compact space). I don't understand the counterexamples to the approximate fixed point property well enough to see whether they work in this setting. (For my purposes, this case would be almost as good as the whole thing.) 29 accepted An approximate infinite-dimensional fixed point theorem 28 awarded Teacher 27 answered An approximate infinite-dimensional fixed point theorem 27 accepted Infinite-dimensional hex Oct Infinite-dimensional hex 27 comment Yes, if the players alternate turns, then everyone only plays once. Calling it a "game" is a bit of a stretch. 26 answered Infinite-dimensional hex Oct revised Infinite-dimensional hex 25 Added the case $k = 3$.
{"url":"http://mathoverflow.net/users/31437/paul-christiano?tab=activity","timestamp":"2014-04-16T10:37:53Z","content_type":null,"content_length":"45708","record_id":"<urn:uuid:02b6e65a-8cb7-4219-953d-e03998576460>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Electromagnetic cosine-Gaussian Schell-model beams in free space and atmospheric turbulence « journal navigation Electromagnetic cosine-Gaussian Schell-model beams in free space and atmospheric turbulence Optics Express, Vol. 21, Issue 22, pp. 27246-27259 (2013) A recently introduced class of scalar cosine-Gaussian Schell-Model [CGSM] beams is generalized to electromagnetic theory. The realizability conditions and the beam conditions on the source parameters are derived. Analytical formulas for the cross-spectral density matrix elements of the electromagnetic cosine-Gaussian Schell-model [EM CGSM] beams propagating in isotropic random medium are derived. It is found that the EM CGSM beams possess single-ring or double-ring intensity profiles, depending of source parameters. As two examples, the statistical characteristics of the EM CGSM beams propagating in free space and non-Kolmogorov turbulent atmosphere are studied numerically. The effects of the fractal constant of the atmospheric spectrum and the refractive-index structure constant on such characteristics are analyzed in detail. © 2013 Optical Society of America 1. Introduction On the other hand, in recent years dark-hollow beams (DHB) have attracted a wealth of attention because of their wide applications in atomic optics. Also, partially coherent DHB have some advantages over the completely coherent DHB because of their low sensitivity to speckle. Therefore the partially coherent DHB may be more useful in atomic optics experiments involving atomic lenses, atom switches and optical tweezers [ 17. G. Gbur and T. D. Visser, “Can spatial coherence effects produce a local minimum of intensity at focus,” Opt. Lett. 28(18), 1627–1629 (2003). [CrossRef] [PubMed] ]. Some theoretical models have been proposed to describe partially coherent DHB and analyze their propagation characteristics [ 18. Y. Cai and F. Wang, “Partially coherent anomalous hollow beam and its paraxial propagation,” Phys. Lett. A 372(25), 4654–4660 (2008). [CrossRef] 19. M. Alavinejad, G. Taherabadi, N. Hadilou, and B. Ghafary, “Changes in the coherence properties of partially coherent dark hollow beam propagating through atmospheric turbulence,” Opt. Commun. 288 , 1–6 (2013). [CrossRef] ]. However, in these models the dark-hollow intensity profiles only remain invariant for short propagation distances. It has been shown that the transverse hollow cross-section of the beam disappears gradually in propagation and becomes Gaussian in the far field. Recently, we have introduced the scalar random sources to generate DHB [ 20. Z. Mei and O. Korotkova, “Cosine-Gaussian Schell-model sources,” Opt. Lett. 38(14), 2578–2580 (2013). [CrossRef] [PubMed] ]. Unlike the deterministic DHB models, the dark-hollow intensity profiles of new random sources are formed not at the source plane but in the far field, where they remain shape-invariant. This feature makes them particularly suitable for applications involving particle trapping in cases when the presence of propagation path between the source and the particle cannot be avoided. In this paper, we extend the results of Refs [ 20. Z. Mei and O. Korotkova, “Cosine-Gaussian Schell-model sources,” Opt. Lett. 38(14), 2578–2580 (2013). [CrossRef] [PubMed] 21. Z. Mei, E. Shchepakina, and O. Korotkova, “Propagation of cosine-Gaussian-correlated Schell-model beams in atmospheric turbulence,” Opt. Express 21(15), 17512–17519 (2013). [CrossRef] [PubMed] ]. to electromagnetic domain, terming the novel class of beams the Electromagnetic cosine-Gaussian Schell-model (EM CGSM) beams, in which all the correlations are prescribed with the help of the scalar CGSM distributions. We stress that unlike the existing partially coherent cosine Gaussian beams [ 22. H. T. Eyyuboğlu and Y. Baykal, “Transmittance of partially coherent cosh-Gaussian, cos-Gaussian and annular beams in turbulence,” Opt. Commun. 278(1), 17–22 (2007). [CrossRef] 23. G. Zhou and X. Chu, “Propagation of a partially coherent cosine-Gaussian beam through an ABCD optical system in turbulent atmosphere,” Opt. Express 17(13), 10529–10534 (2009). [CrossRef] [PubMed] ], the cosine function is now employed for modeling of the source correlations, rather than for the intensity distribution. That is why the CGSM source leads to the qualitatively distinct evolution in the beam’s spectral density on free-space propagation acquiring a robust dark-hollow intensity profile (see also [ 20. Z. Mei and O. Korotkova, “Cosine-Gaussian Schell-model sources,” Opt. Lett. 38(14), 2578–2580 (2013). [CrossRef] [PubMed] 21. Z. Mei, E. Shchepakina, and O. Korotkova, “Propagation of cosine-Gaussian-correlated Schell-model beams in atmospheric turbulence,” Opt. Express 21(15), 17512–17519 (2013). [CrossRef] [PubMed] ]). As we illustrate by numerical examples, similar conclusion can be made regarding the EM CGSM beam’s intensity, coherence and polarization properties, with the only difference that for EM counterpart formation of two rings is also a possibility. The double ring intensity profile may be of use in applications dealing with particle manipulation. Studies of the propagation characteristics of stochastic electromagnetic beam-like fields in a turbulent atmosphere are of importance because of their direct applications in communication and sensing systems [ 24. A. Zilberman, E. Golbraikh, and N. S. Kopeika, “Some limitations on optical communication reliability through Kolmogorov and non-Kolmogorov turbulence,” Opt. Commun. 283(7), 1229–1235 (2010). 29. I. Toselli, L. C. Andrews, R. L. Phillips, and V. Ferrero, “Free-space optical system performance for laser beam propagation through non-Kolmogorov turbulence,” Opt. Eng. 47(2), 026003 (2008). ]. After introducing the EM CGSM sources and deriving their realizability and beam conditions our task in this study is to explore the behavior of the major second-order properties of the generated beams which propagate in free-space and in the non-Kolmogorov’s atmospheric turbulence with different fractal constant of the atmospheric spectrum and refractive-index structure constant 2. Electromagnetic cosine-Gaussian Schell-model source cross-spectral density (CSD) matrix of a statistically stationary electromagnetic field in the source plane, at points specified by two-dimensional position vectors and angular frequency , is defined by the expression [ ]The matrix elements are scalar correlation functions of the formwhere denote the components of the electric field in two mutually orthogonal directions perpendicular to the -axis, and the angular brackets denote the ensemble average. In what follows the angular frequency dependence of all the quantities of interest will be omitted but implied. A genuine CSD matrix for any electromagnetic stochastic beam must be non-negative definite. This condition is fulfilled if the elements of the CSD matrix it can be written as an integral of the form [ 10. F. Gori, V. Ramírez-Sánchez, M. Santarsiero, and T. Shirai, “On genuine cross-spectral density matrices,” J. Opt. A, Pure Appl. Opt. 11(8), 085706 (2009). [CrossRef] 15. Z. Tong and O. Korotkova, “Electromagnetic nonuniformly correlated beams,” J. Opt. Soc. Am. A 29(10), 2154–2158 (2012). [CrossRef] [PubMed] is an arbitrary non-negative weight function, is an arbitrary kernel. A simple and significant class of the CSD matrices, leading to the vectorial Schell-model sources, can be obtained by assigning to functions a Fourier-like structure. More explicitly, we set:where is the amplitude of the field component, is a profile function. The choice of defines a family of sources with different correlation functions. We now consider a modulation to the conventional Gaussian Schell-model (GSM) coherence functions, by choosing (see also [ 20. Z. Mei and O. Korotkova, “Cosine-Gaussian Schell-model sources,” Opt. Lett. 38(14), 2578–2580 (2013). [CrossRef] [PubMed] 21. Z. Mei, E. Shchepakina, and O. Korotkova, “Propagation of cosine-Gaussian-correlated Schell-model beams in atmospheric turbulence,” Opt. Express 21(15), 17512–17519 (2013). [CrossRef] [PubMed] are positive real constants, is the hyperbolic cosine function, is the single-point correlation coefficient and are the characteristic source correlations. On substituting from Eqs. (4) Eq. (5) and setting the Gaussian profile, for function , one finds the explicit form of the CSD matrix elements: Equation (7) represents a new family of sources with cosine-Gaussian correlation function that may be named Electromagnetic cosine-Gaussian Schell-model (EM CGSM) sources. Our next step is to establish the restrictions for the source parameters guaranteeing that the mathematical model (7) describes a physically realizable field. From the condition that the correlation matrix must be quasi-Hermitian [ ], i.e. that , it follows at once thatFurther, the function ) must be non-negative definite [ 10. F. Gori, V. Ramírez-Sánchez, M. Santarsiero, and T. Shirai, “On genuine cross-spectral density matrices,” J. Opt. A, Pure Appl. Opt. 11(8), 085706 (2009). [CrossRef] ], i.e., ) ≥ 0 andfor any . From Eq. (6) , it follows that ) is surely nonnegative, and substituting it into Eq. (9) implies that it is satisfied if Since function is monotonically increasing but function is monotonically decreasing functions of their argument the product of two functions is not monotonic. Therefore, it is difficult to obtain an analytical formula for the choice of parameters, but numerical solutions can be readily found. Applying inequality (10), we find several values of for different values of and summarize the results in Table 1 . One can see that the minimum values of are almost independent of the values of , being approximately equal to . This is due to the fact that the dependence of the exponential function on its argument is quadratic, and, hence for , inequality (10) implies that . However, the maximum values of depend on the difference between , as well as on the values of . That is, the dynamic range of values is larger for a smaller difference between , and for smaller values of We will now derive the conditions that the parameters of the EM CGSM source should satisfy to generate a beam-like field. Recall that the spectral density at a point specified by a position vector is a unit vector in its direction) in the far zone is given by the expression [ is the wave number of the field, is the projection of onto the source plane, is the angle that unit vector makes with a positive direction, andis the four-dimensional Fourier transform of . On substituting from Eq. (7) first into Eq. (12) and then into Eq. (11) , one find that . In order for the matrix to generate a beam propagating close to a axis, the spectral density in Eq. (13) must be negligible except when unit vector lies in a narrow solid angle about the axis. Since for any values of Eq. (13) implies that this will be the case ifleading to restrictionsor in terms of source parametersWe note that the beam conditions expressed by Eq. (16) are the same as that for the classic EM GSM sources [ 8. O. Korotkova, M. Salem, and E. Wolf, “Beam conditions for radiation generated by an electromagnetic Gaussian Schell-model source,” Opt. Lett. 29(11), 1173–1175 (2004). [CrossRef] [PubMed] 3. EM CGSM beam propagating in free space and in linear random medium We will investigate the major properties of the EM CGSM beams by discussing numerical examples involving their evolution in free space and in the isotropic, homogeneous turbulent atmosphere governed by statistics described by a model [ 29. I. Toselli, L. C. Andrews, R. L. Phillips, and V. Ferrero, “Free-space optical system performance for laser beam propagation through non-Kolmogorov turbulence,” Opt. Eng. 47(2), 026003 (2008). ] for the power spectrum Ф( ), in which the slope 11/3 of the conventional van Karman spectrum is generalized to an arbitrary parameter , i.e.where being the outer and the inner scale of turbulence, andwith being the Gamma function. The term Eq. (19) is a generalized refractive-index structure parameter with units . With the power spectrum in Eq. (19) the integral in Eq. (18) denotes the incomplete Gamma function. After substituting Eqs. (7) Eq. (17) and calculating the integral we obtain the formula:where From the components of the cross-spectral density matrix (23), the spectral density , the spectral degree of coherence , and the spectral degree of polarization in the turbulent atmosphere are calculated by the expressions [ ]where Det and Tr stand for the determinant and the trace of the matrix. Further, the state of polarization of the polarized portion of the beam may be described in terms of the parameters of the polarization ellipse specified by the orientation angle and the degree of ellipticity [ 12. O. Korotkova and E. Wolf, “Changes in the state of polarization of a random electromagnetic beam on propagation,” Opt. Commun. 246(1–3), 35–43 (2005). [CrossRef] denotes the real part of a complex number. 4. Examples: EM CGSM beams in free space We will first consider the evolution of the spectral density of the EM CGSM beam in free space. Since the spectral density does not depend on the off-diagonal components of W-matrix we will discuss the behavior of unpolarized sourceAx=Ay=1. We also set values of the other source parameters as follows: σ=1cm, λ=632.8nm, δxx=1mm. Figure 1 illustrates typical evolution of the spectral density of the EM GSM beam normalized by its on-axis value in the source plane, in the transverse beam cross-sections, at several distances from the source plane on propagation in free space with . One clearly sees that in general the double-ring profile is gradually generated. Two rings correspond to the spectral densities’ distributions of x and y components of the electric field. In order to demonstrate the dependence of the spectral density behavior on index and on the r.m.s. correlation widths we plot in Fig. 2 its evolution in the transverse beam cross-sections, at several distances from the source plane, for = 0, 1, 2 and for . Note that for the case the distribution is the same as for the scalar CGSM beam [ 20. Z. Mei and O. Korotkova, “Cosine-Gaussian Schell-model sources,” Opt. Lett. 38(14), 2578–2580 (2013). [CrossRef] [PubMed] 21. Z. Mei, E. Shchepakina, and O. Korotkova, “Propagation of cosine-Gaussian-correlated Schell-model beams in atmospheric turbulence,” Opt. Express 21(15), 17512–17519 (2013). [CrossRef] [PubMed] ] for all values of . However, for the genuine electromagnetic beam is radiated, resulting in the partial of full appearance of the second ring. Also, while mode = 0 does not lead to substantial deviation from Gaussian profile, starting from = 1 single or double ring profiles are generated. For values of > 2 (not shown) still only two rings are generated with maxima occurring at larger radial positions for larger values of Figure 3 shows the behavior of the absolute value of the degree of coherence of the EM CGSM beam as a function of the separation half-distance where the two points are chosen at locations symmetric with respect to the optical axis, i.e. The degree of coherence starts as a modulated by cosine function Gaussian distribution but gradually converts to wider Gaussian profile. As figure shows, for low modes substantial quantitative changes in the degree of coherence start to occur at distances much shorter (< 1 m) than those for the spectral density (>10 m). 5. Examples: EM CGSM beams in atmospheric turbulence Figure 5 shows the transverse distribution of the spectral density of an EM CGSM beam with the same parameters as in Figs. 1 at propagation distance z = 10 km in the non-Kolmogorov turbulence for different values of parameters . The first four parts Figs. 5(a) indicate the changes in the Kolmogorov turbulence as the values of increase. For the case Fig. 5(a) of the weakest atmosphere the intensity in the inner ring dominates that in the outer ring, while for next two cases Figs. 5(b) the inner ring is being suppressed first. Finally, in Fig. 5(d) for substantially strong turbulence the inner and the outer rings are suppressed completely and the beams’ intensity resembles Gaussian profile. As is evident from parts Figs. 5(c) , provided the value of is kept fixed, the beam’s intensity distribution is the most robust for Kolmogorov case Fig. 5(c) and is destroyed the most when is in the region of 3.1. This value was also shown to be critical for other beams [ 6. O. Korotkova, S. Sahin, and E. Shchepakina, “Multi-Gaussian Schell-model beams,” J. Opt. Soc. Am. A 29(10), 2159–2164 (2012). [CrossRef] [PubMed] 28. E. Shchepakina and O. Korotkova, “Second-order statistics of stochastic electromagnetic beams propagating through non-Kolmogorov turbulence,” Opt. Express 18(10), 10650–10658 (2010). [CrossRef] We will now turn to the analysis of the spectral degree of coherence of the EM CGSM beam (with the same parameters as in Figs. 1 ) as it travels in the atmosphere. In Fig. 6 the evolution of the modulus of the degree of coherence, as a function of modulus of difference vector of two points symmetric with respect to the optical axis, at several propagation distances in the atmosphere for different parameters is shown. One sees that there are two effects occurring with the degree of coherence as either the propagation distance or the strength of turbulence increases. At first, the cosine modulation is being suppressed and the degree of coherence converts to Gaussian-like. Then the width of the Gaussian profile decreases under the influence of atmosphere. 6. Concluding remarks In this article we have introduced a novel class of stochastic electromagnetic sources, in which the correlations are prescribed with the help of the cosine-Gaussian Schell-model coherence functions. The sufficient realizability conditions and the beam conditions for such sources are derived and analyzed. The analytical formula for the cross-spectral density matrix of the EM CGSM beam on propagation in free space and in linear random medium is derived and used to explore the evolution of its second-order characteristics. We have found that the novel source which can initially have any intensity distribution, say, Gaussian, as in our examples, can produce a robust double-ring intensity distribution in the far field in free place as well as at short distances in the turbulent atmosphere, depending on the values of the refractive-index structure parameter C˜n2 and the slope α of the turbulence power spectrum. For free-space propagation the double ring profile is preserved for any propagation distances but it is destroyed by the atmosphere. For sufficiently large C˜n2 and for α in the vicinity of value 3.1, the influence of the atmosphere to light field is the strongest: the inner and outer rings gradually disappear with the increase in the propagation distance and the Gaussian-like distribution is produced. The initial cosine modulation of the spectral degree of coherence is shown to be suppressed to Gaussian profile both in free space and in the atmosphere. The resulting Gaussian profile broadens in width in free space but shrinks in turbulence, just like for other beam classes. The EM CGSM beams can be produced with the help of the interferometric technique involving two spatial light modulators described in Ref [ 30. T. Shirai, O. Korotkova, and E. Wolf, “A method of generating electromagnetic Gaussian Schell-model beams,” J. Opt. A, Pure Appl. Opt. 7(5), 232–237 (2005). [CrossRef] ]. For the EM CGSM the phase correlation functions of the modulators should take forms of Gaussian functions, modulated by cosine functions, i.e.,instead of being purely Gaussian, as suggested in [ 30. T. Shirai, O. Korotkova, and E. Wolf, “A method of generating electromagnetic Gaussian Schell-model beams,” J. Opt. A, Pure Appl. Opt. 7(5), 232–237 (2005). [CrossRef] ] for generation of EM Gaussian Schell-model beams. Z. Mei’s research is supported by the National Natural Science Foundation of China (NSFC) (11247004) and Zhejiang Provincial Natural Science Foundation of China (Y6100605). O. Korotkova’s research is supported by US ONR (N00189-12-T-0136) and US AFOSR (FA9550-12-1-0449). References and links 1. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 1995). 2. F. Gori, G. Guattari, and C. Padovani, “Modal expansion for J[0]-correlated Schell-model sources,” Opt. Commun. 64(4), 311–316 (1987). [CrossRef] 3. F. Gori, M. Santarsiero, and R. Borghi, “Modal expansion for J[0]-correlated electromagnetic sources,” Opt. Lett. 33(16), 1857–1859 (2008). [CrossRef] [PubMed] 4. H. Lajunen and T. Saastamoinen, “Propagation characteristics of partially coherent beams with spatially varying correlations,” Opt. Lett. 36(20), 4104–4106 (2011). [CrossRef] [PubMed] 5. S. Sahin and O. Korotkova, “Light sources generating far fields with tunable flat profiles,” Opt. Lett. 37(14), 2970–2972 (2012). [CrossRef] [PubMed] 6. O. Korotkova, S. Sahin, and E. Shchepakina, “Multi-Gaussian Schell-model beams,” J. Opt. Soc. Am. A 29(10), 2159–2164 (2012). [CrossRef] [PubMed] 7. Z. Mei and O. Korotkova, “Random sources generating ring-shaped beams,” Opt. Lett. 38(2), 91–93 (2013). [CrossRef] [PubMed] 8. O. Korotkova, M. Salem, and E. Wolf, “Beam conditions for radiation generated by an electromagnetic Gaussian Schell-model source,” Opt. Lett. 29(11), 1173–1175 (2004). [CrossRef] [PubMed] 9. H. Roychowdhury and O. Korotkova, “Realizability conditions for electromagnetic Gaussian Schell-model sources,” Opt. Commun. 249(4–6), 379–385 (2005). [CrossRef] 10. F. Gori, V. Ramírez-Sánchez, M. Santarsiero, and T. Shirai, “On genuine cross-spectral density matrices,” J. Opt. A, Pure Appl. Opt. 11(8), 085706 (2009). [CrossRef] 11. F. Gori, M. Santarsiero, G. Piquero, R. Borghi, A. Mondello, and R. Simon, “Partially polarized Gaussian Schell-model beams,” J. Opt. A, Pure Appl. Opt. 3(1), 1–9 (2001). [CrossRef] 12. O. Korotkova and E. Wolf, “Changes in the state of polarization of a random electromagnetic beam on propagation,” Opt. Commun. 246(1–3), 35–43 (2005). [CrossRef] 13. X. Du, D. Zhao, and O. Korotkova, “Changes in the statistical properties of stochastic anisotropic electromagnetic beams on propagation in the turbulent atmosphere,” Opt. Express 15(25), 16909–16915 (2007). [CrossRef] [PubMed] 14. Z. Mei, O. Korotkova, and E. Shchepakina, “Electromagnetic multi-Gaussian Schell-model beams,” J. Opt. 15(2), 025705 (2013). [CrossRef] 15. Z. Tong and O. Korotkova, “Electromagnetic nonuniformly correlated beams,” J. Opt. Soc. Am. A 29(10), 2154–2158 (2012). [CrossRef] [PubMed] 16. Z. Mei, Z. Tong, and O. Korotkova, “Electromagnetic non-uniformly correlated beams in turbulent atmosphere,” Opt. Express 20(24), 26458–26463 (2012). [CrossRef] [PubMed] 17. G. Gbur and T. D. Visser, “Can spatial coherence effects produce a local minimum of intensity at focus,” Opt. Lett. 28(18), 1627–1629 (2003). [CrossRef] [PubMed] 18. Y. Cai and F. Wang, “Partially coherent anomalous hollow beam and its paraxial propagation,” Phys. Lett. A 372(25), 4654–4660 (2008). [CrossRef] 19. M. Alavinejad, G. Taherabadi, N. Hadilou, and B. Ghafary, “Changes in the coherence properties of partially coherent dark hollow beam propagating through atmospheric turbulence,” Opt. Commun. 288 , 1–6 (2013). [CrossRef] 20. Z. Mei and O. Korotkova, “Cosine-Gaussian Schell-model sources,” Opt. Lett. 38(14), 2578–2580 (2013). [CrossRef] [PubMed] 21. Z. Mei, E. Shchepakina, and O. Korotkova, “Propagation of cosine-Gaussian-correlated Schell-model beams in atmospheric turbulence,” Opt. Express 21(15), 17512–17519 (2013). [CrossRef] [PubMed] 22. H. T. Eyyuboğlu and Y. Baykal, “Transmittance of partially coherent cosh-Gaussian, cos-Gaussian and annular beams in turbulence,” Opt. Commun. 278(1), 17–22 (2007). [CrossRef] 23. G. Zhou and X. Chu, “Propagation of a partially coherent cosine-Gaussian beam through an ABCD optical system in turbulent atmosphere,” Opt. Express 17(13), 10529–10534 (2009). [CrossRef] [PubMed] 24. A. Zilberman, E. Golbraikh, and N. S. Kopeika, “Some limitations on optical communication reliability through Kolmogorov and non-Kolmogorov turbulence,” Opt. Commun. 283(7), 1229–1235 (2010). 25. E. Wolf, Introduction to the Theories of Coherence and Polarization of Light (Cambridge University, 2007). 26. X. Du and D. Zhao, “Polarization modulation of stochastic electromagnetic beams on propagation through the turbulent atmosphere,” Opt. Express 17(6), 4257–4262 (2009). [CrossRef] [PubMed] 27. I. Toselli, L. C. Andrews, R. L. Phillips, and V. Ferrero, “Angle of arrival fluctuations for free space laser beam propagation through non Kolmogorov turbulence,” Proc. SPIE 6551(65510E), 65510E (2007). [CrossRef] 28. E. Shchepakina and O. Korotkova, “Second-order statistics of stochastic electromagnetic beams propagating through non-Kolmogorov turbulence,” Opt. Express 18(10), 10650–10658 (2010). [CrossRef] 29. I. Toselli, L. C. Andrews, R. L. Phillips, and V. Ferrero, “Free-space optical system performance for laser beam propagation through non-Kolmogorov turbulence,” Opt. Eng. 47(2), 026003 (2008). 30. T. Shirai, O. Korotkova, and E. Wolf, “A method of generating electromagnetic Gaussian Schell-model beams,” J. Opt. A, Pure Appl. Opt. 7(5), 232–237 (2005). [CrossRef] OCIS Codes (010.1300) Atmospheric and oceanic optics : Atmospheric propagation (010.1330) Atmospheric and oceanic optics : Atmospheric turbulence (030.1640) Coherence and statistical optics : Coherence (260.5430) Physical optics : Polarization ToC Category: Atmospheric and Oceanic Optics Original Manuscript: September 6, 2013 Revised Manuscript: October 23, 2013 Manuscript Accepted: October 25, 2013 Published: November 1, 2013 Zhangrong Mei and Olga Korotkova, "Electromagnetic cosine-Gaussian Schell-model beams in free space and atmospheric turbulence," Opt. Express 21, 27246-27259 (2013) Sort: Year | Journal | Reset 1. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 1995). 2. F. Gori, G. Guattari, and C. Padovani, “Modal expansion for J0-correlated Schell-model sources,” Opt. Commun.64(4), 311–316 (1987). [CrossRef] 3. F. Gori, M. Santarsiero, and R. Borghi, “Modal expansion for J0-correlated electromagnetic sources,” Opt. Lett.33(16), 1857–1859 (2008). [CrossRef] [PubMed] 4. H. Lajunen and T. Saastamoinen, “Propagation characteristics of partially coherent beams with spatially varying correlations,” Opt. Lett.36(20), 4104–4106 (2011). [CrossRef] [PubMed] 5. S. Sahin and O. Korotkova, “Light sources generating far fields with tunable flat profiles,” Opt. Lett.37(14), 2970–2972 (2012). [CrossRef] [PubMed] 6. O. Korotkova, S. Sahin, and E. Shchepakina, “Multi-Gaussian Schell-model beams,” J. Opt. Soc. Am. A29(10), 2159–2164 (2012). [CrossRef] [PubMed] 7. Z. Mei and O. Korotkova, “Random sources generating ring-shaped beams,” Opt. Lett.38(2), 91–93 (2013). [CrossRef] [PubMed] 8. O. Korotkova, M. Salem, and E. Wolf, “Beam conditions for radiation generated by an electromagnetic Gaussian Schell-model source,” Opt. Lett.29(11), 1173–1175 (2004). [CrossRef] [PubMed] 9. H. Roychowdhury and O. Korotkova, “Realizability conditions for electromagnetic Gaussian Schell-model sources,” Opt. Commun.249(4–6), 379–385 (2005). [CrossRef] 10. F. Gori, V. Ramírez-Sánchez, M. Santarsiero, and T. Shirai, “On genuine cross-spectral density matrices,” J. Opt. A, Pure Appl. Opt.11(8), 085706 (2009). [CrossRef] 11. F. Gori, M. Santarsiero, G. Piquero, R. Borghi, A. Mondello, and R. Simon, “Partially polarized Gaussian Schell-model beams,” J. Opt. A, Pure Appl. Opt.3(1), 1–9 (2001). [CrossRef] 12. O. Korotkova and E. Wolf, “Changes in the state of polarization of a random electromagnetic beam on propagation,” Opt. Commun.246(1–3), 35–43 (2005). [CrossRef] 13. X. Du, D. Zhao, and O. Korotkova, “Changes in the statistical properties of stochastic anisotropic electromagnetic beams on propagation in the turbulent atmosphere,” Opt. Express15(25), 16909–16915 (2007). [CrossRef] [PubMed] 14. Z. Mei, O. Korotkova, and E. Shchepakina, “Electromagnetic multi-Gaussian Schell-model beams,” J. Opt.15(2), 025705 (2013). [CrossRef] 15. Z. Tong and O. Korotkova, “Electromagnetic nonuniformly correlated beams,” J. Opt. Soc. Am. A29(10), 2154–2158 (2012). [CrossRef] [PubMed] 16. Z. Mei, Z. Tong, and O. Korotkova, “Electromagnetic non-uniformly correlated beams in turbulent atmosphere,” Opt. Express20(24), 26458–26463 (2012). [CrossRef] [PubMed] 17. G. Gbur and T. D. Visser, “Can spatial coherence effects produce a local minimum of intensity at focus,” Opt. Lett.28(18), 1627–1629 (2003). [CrossRef] [PubMed] 18. Y. Cai and F. Wang, “Partially coherent anomalous hollow beam and its paraxial propagation,” Phys. Lett. A372(25), 4654–4660 (2008). [CrossRef] 19. M. Alavinejad, G. Taherabadi, N. Hadilou, and B. Ghafary, “Changes in the coherence properties of partially coherent dark hollow beam propagating through atmospheric turbulence,” Opt. Commun.288, 1–6 (2013). [CrossRef] 20. Z. Mei and O. Korotkova, “Cosine-Gaussian Schell-model sources,” Opt. Lett.38(14), 2578–2580 (2013). [CrossRef] [PubMed] 21. Z. Mei, E. Shchepakina, and O. Korotkova, “Propagation of cosine-Gaussian-correlated Schell-model beams in atmospheric turbulence,” Opt. Express21(15), 17512–17519 (2013). [CrossRef] [PubMed] 22. H. T. Eyyuboğlu and Y. Baykal, “Transmittance of partially coherent cosh-Gaussian, cos-Gaussian and annular beams in turbulence,” Opt. Commun.278(1), 17–22 (2007). [CrossRef] 23. G. Zhou and X. Chu, “Propagation of a partially coherent cosine-Gaussian beam through an ABCD optical system in turbulent atmosphere,” Opt. Express17(13), 10529–10534 (2009). [CrossRef] [PubMed] 24. A. Zilberman, E. Golbraikh, and N. S. Kopeika, “Some limitations on optical communication reliability through Kolmogorov and non-Kolmogorov turbulence,” Opt. Commun.283(7), 1229–1235 (2010). 25. E. Wolf, Introduction to the Theories of Coherence and Polarization of Light (Cambridge University, 2007). 26. X. Du and D. Zhao, “Polarization modulation of stochastic electromagnetic beams on propagation through the turbulent atmosphere,” Opt. Express17(6), 4257–4262 (2009). [CrossRef] [PubMed] 27. I. Toselli, L. C. Andrews, R. L. Phillips, and V. Ferrero, “Angle of arrival fluctuations for free space laser beam propagation through non Kolmogorov turbulence,” Proc. SPIE6551(65510E), 65510E (2007). [CrossRef] 28. E. Shchepakina and O. Korotkova, “Second-order statistics of stochastic electromagnetic beams propagating through non-Kolmogorov turbulence,” Opt. Express18(10), 10650–10658 (2010). [CrossRef] 29. I. Toselli, L. C. Andrews, R. L. Phillips, and V. Ferrero, “Free-space optical system performance for laser beam propagation through non-Kolmogorov turbulence,” Opt. Eng.47(2), 026003 (2008). 30. T. Shirai, O. Korotkova, and E. Wolf, “A method of generating electromagnetic Gaussian Schell-model beams,” J. Opt. A, Pure Appl. Opt.7(5), 232–237 (2005). [CrossRef] OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/oe/fulltext.cfm?uri=oe-21-22-27246&id=274061","timestamp":"2014-04-18T08:03:18Z","content_type":null,"content_length":"378031","record_id":"<urn:uuid:3bb9ae66-d379-4196-b5ee-733111301361>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Markov Chains to Generate Test Input September 15, 2009 — Eric Melski One challenge that we’ve faced at Electric Cloud is how to verify that our makefile parser correctly emulates GNU Make. We started by generating test cases based on a close reading of the gmake manual. Then we turned to real-world examples: makefiles from dozens of open source projects and from our customers. After several years of this we’ve accumulated nearly two thousand individual tests of our gmake emulation, and yet we still sometimes find incompatibilities. We’re always looking for new ways to test our parser. One idea is to generate random text and use that as a “makefile”. Unfortunately, truly random text is almost useless in this regard, because it doesn’t look anything like a real makefile. Instead, we can use Markov chains to generate random text that is very much like a real makefile. When we first introduced this technique, we uncovered 13 previously unknown incompatibilities — at the time that represented 10% of the total defects reported against the parser! Read on to learn more about Markov chains and how we applied them in practice. Markov chains A Markov chain is simply a sequence of random values in which the next value is in some way dependent on the current value, rather than being completely random. Consider the case of generating random text one letter at a time, from the set of uppercase English letters (A-Z). If the sequence is completely random, then for each character generated, any letter is equally probable. Regardless of what characters you’ve generated up to this point, you are just as likely to get a D as an X next. Think of it as if you have a bag of tiles, one for each letter. With truly random text, you pick one tile and write down the letter on that tile. Then you return the tile to the bag and pick again. Lather, rinse, repeat until you’ve generated as much text as you like. But suppose we said that the probability of the next character is dependent on the last character we generated. For example, in English text if you run across the letter Q you can be pretty sure that the next character is going to be U. It’s almost certainly not going to be X, or Z, etc. We can build a table that tells us the probabilty that any given letter will be followed by any other letter. For the letter Q, the table might look like this: Letter Probability of appearing after Q A 1% E 1% I 1% O 1% U 96% All others 0% We can do a much better job of generating text that looks like English if we use these tables to guide us. Imagine that instead of one bag of tiles with one tile for each letter, we have one bag for each letter, and we fill each bag with tiles according to the probabilities in our table. For example, the bag for the letter Q would contain 96 U tiles, and one tile each for A, E, I, and O. Each time we want to generate a new letter, we look at the last letter we generated, find the bag of tiles corresponding to that letter, and pick out one tile. After writing down the letter, we return the tile to the bag and repeat the process. The sequence of letters that we generate in this manner is a simple Markov chain. How do you build the probability tables? One way is to generate them from some sample input. If we have a sufficiently large example of text in the target language, we can scan it and count the number of times each letter occurs, and the number of times it is followed by each letter. Note that it is critical to have a large and varied input. If the sample is too small, the resulting probability tables won’t accurately represent the target language. The generator will only be able to generate the input text itself. Markov chains of order m Of course, we don’t have to limit ourselves to considering just the single previous letter. For example, if the previous two characters are ST, then the next character is probably a vowel, or maybe an R. It’s probably not going to be D, or Q, etc. Just as before, we can make a table that tells us the probability that any given pair of letters will be followed by any other letter. And we can keep going, adding more and more of the preceding letters to our formula. The number of previous characters we use is called the order of the chain, so if we use the previous 4 characters, we would say we have a Markov chain of order 4. The greater the order of the chain, the “smarter” our generator becomes, because it is considering more context when choosing letters. As you can see, the generated text looks more and more like the target language as you increase the order (although after a some point, you get little additional benefit from further increases): Order Result 0 (purely random) ehnee.Alr noer ealcra edctn eIi 1 Pige foule.ce d futht wrion e mara 2 Prookiname arg-tm aread on achivedging 3 Yes, and no usinession be 4 Project that last it make you first, moderneath. Using Markov chains in testing In order to use this technique effectively in testing, you need a couple of things besides the generator itself: 1. A large sample input to seed the generator. As noted, the bigger your sample input text, the higher the quality of the probability tables, and therefore the more varied your generated text will be. Since we are trying to generate makefiles, we used several megabytes of makefiles from a variety of open-source projects as the sample text. 2. An automated evaluation mechanism. You have to be able to determine quickly and automatically if a given generated file is processed correctly or not. Of course, correct can mean many different things here. It might be as simple as “does not cause a program crash”. In our case, it means “emake parses this makefile the same way that gmake does”, so we use gmake as a reference implementation. Note that it doesn’t matter if the generated text truly is a completely valid makefile. In fact most of the time it will not be. What matters is those cases is that emake and gmake both report the same error. We wrote a simple shell script to drive the testing process. First, it uses the generator to produce several random makefiles. Then it runs each makefile through both gmake and emake, and compares the results. Any differences are reported for further investigation. Verifying the implementation of an emulator for a complex system is hard, especially when the original system has no formal specification. Using randomly generated input is a useful way to extend the breadth of your testing, and Markov chains make it possible to generate even more useful random input. Our original implementation of this technique uncovered several previously unknown defects, and it continues to pay dividends both by uncovering new defects and by providing a confidence measure for our emulation. If you want to play with Markov chains yourself, you can download the source for the generator used in this article. NB: the program has only been compiled and used on Linux; on other platforms your mileage may vary. For more information about Markov chains, I recommend Section 15.3 of the excellent book Programming Pearls by Jon Bentley. If you enjoyed this article about testing techniques, you may enjoy this related article: 8 Comments 1. After learning more about stochastic modeling and trying my hand at it, this was one of the areas I read up on and this is a good demonstration of why Markov chains help so much. Good job! 2. Very clear and concise overview of a topic that is usually obfuscated by mathematical notation. Great job. 3. Bentley built his Markov chain using his Bell labs colleagues Pike & Kernighan’s Markov algorithm from The Practice of Programming Pike had used this technique earlier when coding up http://en.wikipedia.org/wiki/Mark_V_Shaney with Bruce Ellis I’m sure they were not the first 4. Your explanation of Markov chain’s is crystal clear. Thanks. Im curious to know how you built the probability tables using makefiles as input. How did you tokenize the makefile ? Did you use the same parser in make ? □ That’s a great question. For this implementation, I did not bother trying to tokenize the makefile, but rather built the probability tables based on the character-by-character analysis of the makefiles. This is sometimes called “letter-level” Markov chains, as opposed to “word-level”. It has the advantage of being very simple to implement, and the disadvantage of producing somewhat less makefile-like output. In theory, word-level chains would give me random makefiles that were even more like real makefiles. I explored changing the generator to do word-level chains, but that proved to be more work than I could reasonably justify investing in this side project. 5. Thanks for the reply Eric. 6. Probably the best explanation of Markov Chains on the web. consider updating Wikipedia with this information. thanks! 7. Like everyone else has said, excellent explanation of Markov chains. I’ve had a somewhat passing interest in learning them, but each time didn’t grasp them right away. This was crystal clear. 6 Trackbacks
{"url":"http://www.electric-cloud.com/blog/2009/09/15/using-markov-chains-to-generate-test-input/","timestamp":"2014-04-20T00:47:11Z","content_type":null,"content_length":"42018","record_id":"<urn:uuid:201b5135-cc19-4afa-9c45-3e87594760cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
formula to average how many units per minute formula to average how many units per minute I'm an trying to enter a formula so I can determine how many production units are completed per minute. for example: # of Courses Start Finish Min to Comp Visit publisher's web-site: formula to average how many units per minute Related Tutorials I have data per 10 sec (Huge COLUMNS) and would like average it per 1 minute and per 10 minutes afterwards. I found some info about 1 hour data averaging, tried to modify it but have to do averaging per each set of data which is way too long. Is there any way to set this averaging per minute ? I am not sure I am doing it correctly. I am working w/ a table. Lets say 25 rows/3 columns Col 1 is product name Col 2 is region name Col 3 is number of units sold I would like to get the average number of units for the sum of the products. So if there are 2 products in the US that sold a combined 15 units, the average would be 7.5. I would like to do this for each region. I need a formula to convert 15 minute time step data to hourly average (an average of the four 15 minute data points for the hour). My spreadsheet looks like this: date time in first cell data in second cell etc. I have about hundreds of rows of this type of data. At times there may be missing data, but the correct time is there, there is just no data in the cell. I would like a third column of data that would have the date/time at the top of the hour and the fourth column to have the hourly average. Can you please help me with figuring out what formula I need for below. 0-85,00 units = $0 85,001 -100,00 units = $2 100,00-150,00 units = $3 if I had 90,00 units in a cell, then my result would be 900-8500 =5,00 x $2 if I had 105,00 units in a cell,then my result should be 100,00 -85,00 =15,00 x $2 if I had 70,00 units in a cell, then my result should be 0. I need all 3 if statements in my formula in one cell. Is that possible? I think once I get this then I can go onto the next tier of the $3 in the next cell, I hope?! I am trying write an Excel 2004 formula that will calculate the cost based on a number of units where the cost changes depending upon how many units are involved. 61 and above $58.00 So if we have 30 units, that would be 20 units at $65 per and 10 units at $63 per. the costs for the first 20 units will always be $65 per, the next 20 at $63 per, so on and so forth. My sumproduct formula comes close but upon testing it gives incorrect info. Can someone help correct my formula or come up with something better? Related Applications & Scripts This jQuery plugin parses a given element for measurements (Eg: 5 cups of flour) and shows a tooltip with conversions to different units. jConvert is very simple to use, just call one function and all your measurements will have a tooltip showing conversions to other units. Use jConvert on your recipe website, DIY blog or anywhere you want to offer conversions for your measurements. jConvert is so simple and will make your site more accessible for foreign audiences which use different metrics. This class makes unit conversion verry simple. You only have to call one function to convert a value from one unit to another. It is also verry simple to add your own units to the class using xml. Currently the class supports following units: Angle, Area, Data, Density, Energy, Length, Pressure, Temperature, Time, Velocity, Volume, Weight Auction Module allows to start your own professional online auction web site based on Esvon Classifieds software - universal classified ads solution. This is real time auction software package designed to be easy to operate allowing for flexibility and growth as your business expands. The item must be sold to the highest bidder, regardless of the closing price if reserve price wasn't specified. If a bid is placed during the last 1 minute of an auction, the auction will be automatically extended for an additional 1 minute from the time of the latest bid. The auction will close once all bidding activity has stopped for a period of 1 minute. You manage some units against an AI enemy. There's no unit superior to all others. In facts the game is based on the Rock-Paper-Scissors game logic. Units are moving in a map, filled with sectors to conquer, factories that produce more units, defense towers and magnetic fields. The game is done completely in javascript using the element. Works best on Chrome. Features : automatic resizing of the game with the client window, scrolling map, music (switched off by default), a complete interactive tutorial to learn the basics of the game, a complete AI demo of a battle. The code is completely downloadable (the download version is NOT minimized, so it's more clear to understand and modify the code). Use this practical script to easily figure out how many megabytes is 134,643 kilobytes, or 1024,000,000 bytes is gigabytes. You get the idea. Confused about the various units used to calculate bandwidth, and their equivalent in another unit? Use this bandwidth calculator to instantly figure it out! Just enter a number, select the unit of input, and this script shows its equivalent in the four standard units of bandwidth (Bytes, Kilobytes, Megabytes, and Gigabytes). Very handy script.
{"url":"http://tutorialsources.com/excel/tutorials/formula-to-average-how-many-units-per-minute.htm","timestamp":"2014-04-19T11:56:44Z","content_type":null,"content_length":"20152","record_id":"<urn:uuid:ac4578a2-ff1b-4fd9-8b7f-2ee8cb8d78d9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Oak Ridge North, TX Calculus Tutor Find an Oak Ridge North, TX Calculus Tutor my name is Kevin, and I can tutor on a variety of subjects. I am a research scientist and Yale educated with a post graduate degree. I have tutored in the past. 85 Subjects: including calculus, English, Spanish, reading ...I'm a very motivated and driven individual with a positive outlook on life. I love teaching others subjects that I have an in depth understanding in. I think that the best way to learn math is to do lots of example problems. 9 Subjects: including calculus, physics, geometry, algebra 1 ...Writing clearly is a necessity for almost all scientific, technical and business careers. I have published a number of studies in scientific journals and have written successful grant proposals totaling over $60M. I can teach you how to write in technical format, including standard reference and graphic formats. 20 Subjects: including calculus, writing, algebra 1, algebra 2 Most math problems can be solved in 3-5 steps! As a certified math teacher in Cy-Fair and as a tutor, I believe that anyone can learn and understand math. Those problems that appear complicated are just a series of simple concepts woven together. 16 Subjects: including calculus, reading, GRE, algebra 1 I am a certified High School math teacher who enjoys working one on one or with a few students, challenging them to overcome their fears and struggles with math. My tutoring style is much like a coach who encourages and supports his players but demands hard work and good thinking. I am most concer... 11 Subjects: including calculus, statistics, geometry, algebra 2 Related Oak Ridge North, TX Tutors Oak Ridge North, TX Accounting Tutors Oak Ridge North, TX ACT Tutors Oak Ridge North, TX Algebra Tutors Oak Ridge North, TX Algebra 2 Tutors Oak Ridge North, TX Calculus Tutors Oak Ridge North, TX Geometry Tutors Oak Ridge North, TX Math Tutors Oak Ridge North, TX Prealgebra Tutors Oak Ridge North, TX Precalculus Tutors Oak Ridge North, TX SAT Tutors Oak Ridge North, TX SAT Math Tutors Oak Ridge North, TX Science Tutors Oak Ridge North, TX Statistics Tutors Oak Ridge North, TX Trigonometry Tutors Nearby Cities With calculus Tutor Cypress, TX calculus Tutors Hufsmith calculus Tutors New Caney calculus Tutors Oak Ridge N, TX calculus Tutors Patton Village, TX calculus Tutors Patton Vlg, TX calculus Tutors Pinehurst, TX calculus Tutors Porter, TX calculus Tutors Rayford, TX calculus Tutors Roman Forest, TX calculus Tutors Shenandoah, TX calculus Tutors Stagecoach, TX calculus Tutors Tamina, TX calculus Tutors The Woodlands, TX calculus Tutors Woodbranch, TX calculus Tutors
{"url":"http://www.purplemath.com/Oak_Ridge_North_TX_Calculus_tutors.php","timestamp":"2014-04-20T13:59:28Z","content_type":null,"content_length":"24175","record_id":"<urn:uuid:51f497f3-a7f5-4c48-81e7-f2f40d861a23>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Rolling mill management system - Patent # 4745556 - PatentGenius Rolling mill management system 4745556 Rolling mill management system (5 images) Inventor: Turley Date Issued: May 17, 1988 Application: 06/880,801 Filed: July 1, 1986 Inventors: Turley; John W. (Oxford, CT) Assignee: T. Sendzimir, Inc. (Waterbury, CT) Primary Ruggiero; Joseph Attorney Or Frost & Jacobs U.S. Class: 700/149; 72/12.1; 72/229 Field Of 364/472; 364/148; 72/8; 72/11; 72/16 U.S Patent 4494205; 4520642; 4521859; 4598377; 4633693 Abstract: A new method of operating reversing rolling mills whereby rate of production is maximized and strip or sheet flatness is improved.A digital computer which is provided with information describing the rolling mill equipment, the material to be rolled, and the starting dimensions and required finished dimensions of this material, is used to direct the adjustment of the mill settings before every pass, as limited by the load capacity of the mill and the power and speed of its drive, and to equalize the roll separating force on the last several passes in order to achieve the optimum product flatness.The system is designed to allow for full operator intervention at any stage, and will redirect the adjustment of mill settings on all passes succeeding the pass(es) in which the operator intervened, to maximize production and optimize product flatness for the remaining passes. Claim: What is claimed is: 1. A method of optimizing the operation of a rolling mill having a mill structure, a pair of work rolls rotatably supported in the mill structure for reducing the dimensionsof a workpiece being rolled, means for varying the separation force between the work rolls, drive means for rotating the work rolls, and control means for controlling the operation of the rolling mill, said method comprising the steps of: (a) storing, in the control means, values representative of the separation force capacity of the mill structure for the work rolls, and values representative of the drive torque capacity of the drive means; (b) storing, in the control means, values representative of properties of the material from which the workpiece to be rolled is to be formed; (c) storing, in the control means, values representative of the dimensions of the workpiece to be rolled and the desired dimensions to be produced by the rolling mill; (d) storing, in the control means, values representative of the maximum permissible pass reduction for first pass, intermediate passes and final pass of the workpiece through the work (e) using the values stored in the control means to calculate a pass reduction schedule to reduce the workpiece to the desired dimensions by multiple passes through the work rolls, and for each pass making an iterative calculation to determinethe minimum workpiece dimension the mill can achieve as limited by the separating force capacity of the mill structure, the drive torque capacity of the drive means, the skidding of the rolls relative to the workpiece material, the maximum permissiblepass reduction, and the desired workpiece dimension; (f) using the calculated pass schedule to operate the control means to optimize operation of the rolling mill by controlling the separating force between the working rolls and the speed of the drive means. 2. A method as recited in claim 1 wherein the calculated pass schedule is adjusted for selective ones of the last several passes to equalize the separating force on those selective passes for optimizing the flatness of the rolled workpiece. 3. A method as recited in claim 1 wherein the calculated pass schedule includes a particular calculated reduction for the workpiece for each pass, and further including the step of measuring the reduction for each pass and recalculating the passschedule for each subsequent pass if the particular calculated reduction for a pass is not achieved. 4. A method as recited in claim 1 wherein the rolling mill includes a coiler on each side of the working rolls and coiler drives for rotating the coilers, and further including the steps of storing in the control means values representative ofthe maximum capacity of the coiler drives, and calculating for every pass the maximum entry and exit tensions that can be applied to the workpiece as determined by the capacity of the coiler drives and the strength of the workpiece, said iterativecalculation including said maximum entry and exit 5. A method of optimizing the operation of a rolling mill where the roll separating force levels on the final few passes through the work rolls are equalized to optimize the flatness of a rolled strip, including the steps of: (a) storing in a digital computer the values of physical parameters defining the mill structure, the mill drive and the coiler drives; (b) storing in said digital computer the values of physical parameters defining the property of materials to be rolled on the rolling mill; (c) storing in said digital computer the values of the physical parameters defining the workpiece material to be rolled, and the desired workpiece dimensions to be produced by the operation of the rolling mill; (d) storing in said digital computer the values of maximum permissible pass reduction for the first pass, intermediate passes, and final pass; (e) calculating a pass schedule from the values stored in the digital computer, making an iterative calculation for each pass to determine the minimum exit gauge the mill can achieve as limited by the mill's separating force capacity, drivetorque capacity, roll skidding, maximum permissible pass reduction, and final desired gauge for the workpiece, and determining the maximum rolling speed as determined by the power of the mill; (f) calculating the maximum entry and exit tensions that can be applied to the rolled workpiece for each pass as determined by the capacity of the coiler drives and the strength of the rolled workpiece; (g) adjusting the pass reductions on selected of the last few passes to equalize the roll separating force on those passes, and recalculating the pass schedule after such adjusting. (h) storing in the memory of the digital computer the optimum values of exit gauge, rolling speed, entry tension and exit tension for each pass of the calculated pass schedule; and (i) displaying the optimum values of exit gauge, rolling speed, entry tension and exit tension before each pass of the pass schedule to enable the mill operator to set up the mill to achieve the calculated values. 6. A method according to claim 5 wherein the optimum values of exit gauge, rolling speed, entry tension and exit tension before each pass are transferred from the digital computer memory to the rolling mill control systems for automatic controlof the mill without operator intervention. 7. A method according to claim 5 further including the steps of: (a) providing a prompt to the operator after each non-final pass to enable the operator to indicate whether a particular exit gauge was achieved on the previous pass; and (b) in each case where the difference between the calculated exit gauge and a measured exit gauge for a particular pass exceeds a predetermined amount, repeating steps 5(e) through 5 8. A method according to claim 6 wherein provision for operator intervention is made by the steps of: (a) connecting outputs of the digital computer through suitable interface circuits to preset inputs of the rolling mill while enabling manual settings on the mill to remain in use during automatic operation; (b) providing a prompt to the operator after each non-final pass to enable the operator to input information to the digital computer indicating whether or not the exit gauge achieved by the rolling mill on a particular pass is different from thegauge calculated for that pass in the pass schedule; (c) whenever the exit gauge of a particular pass differs from the calculated gauge for that particular pass, repeating steps 5(e) through 5(h) for the remaining passes in the reduction pass schedule. Description: TECHNICAL FIELD The invention relates generally to rolling mills and more particularly to a control system for optimizing the operation for a rolling mill. The invention will be specifically disclosed in connection with a reversing rolling mill control systemfor calculating and adaptively modifying a multi-pass reduction schedule. BACKGROUND OF THE INVENTION Generally, an experienced operator of a reversing rolling mill will adjust his mill settings according to his prior experience with the same mill on a previous occasion. It will be readily appreciated, however, that such a method is almosttotally dependent upon the skill of the operator and is replete with inefficiencies. There are several reasons why the method of managing the operation of the rolling mill is inefficient. First, the operator may not have previously rolled the same material, or, if he has, he may not have worked with the same starting andfinishing gauges. Alternatively, he may not have experience with the particular material being rolled on the rolling mill in question. In such cases, he cannot rely upon his experience and is relegated to trial and error estimates on every pass. It isthen almost impossible to roll efficiently. When the operator is not very experienced, the problem is accentuated further. Moreover, if the rolling mill is operated on a shift basis, as is normal, then each mill operator will set up the mill differently, according to his own previous experience. As a consequence, there are normally large variations in rate ofproduction and product quality achieved from shift to shift. Further, if a plant has several different rolling mills, and there is a need to transfer an operator from one mill to another, the operator's previous experience is of limited value. If the second mill (including its drive) is not identical inall respects to the first, the permissible pass reductions may be greater or less than those for the first mill. Even when a skilled operator has machine specific experience, it is common for inefficiencies to arise. When the strip thickness is approaching the finished gauge, for example, an operator frequently has great difficulty in determiningintermediate gauges. He might, for example, have to decide at a certain point whether to make another 2 or another 3 passes. Even if he chooses the most efficient number of passes, he then has to guess at the appropriate intermediate gauge(s). One prior art method of rolling mill management, which, to some extent overcomes some of the problems outlined above, is the so-called "programmed pass schedule" method. With this method a rolling schedule for a given material, width, startingand finishing gauges, (and a given rolling mill) is stored in a computer memory. When it is desired to repeat that schedule with a fresh coil, the mill settings for each pass are recalled from the memory and the operator sets the mill (or the mill isset automatically) to these settings. This programmed pass schedule method may be satisfactory when the range of materials, starting and finish gauges, and widths is very small. However, if the range is large, the amount of memory needed, and the amount of labor needed to determineall possible schedules and store them in the memory become prohibitive. Even when the range of materials, gauges, etc. is small, the following problems still remain: (1) Any particular schedule stored may not utilize the mill load capacity and mill drive capacity fully. (2) The schedule will, in general, be only good for the one mill. (3) The schedule does not allow for variations in work roll size (as these rolls wear), nor does it allow for the fact that mills can frequently operate at higher power levels (and thus be more productive) in winter than in summer. (4) The schedule cannot allow for operator intervention. Since the operator may have to change an intermediate gauge for a number of reasons, he would then be obliged to reschedule all remaining passes, since the programmed pass schedule will nolonger apply. (5) There will still be some coils to be rolled having combinations of material type, width and gauges which will not be stored in the memory. For any such coils, the operator must determine mill settings by trial and error. BRIEF SUMMARY OF THE INVENTION In accordance with the invention, a method is provided for optimizing the operation of a rolling mill of the type having a mill structure, a pair of work rolls rotatably supported in the mill structure for reducing the dimensions of the workpiecebeing rolled, means for varying the separation force between the work rolls, drive means for rotating the work rolls, and control means for controlling the operation of the rolling mill. The method includes the steps of storing informationrepresentative of the parameters of the rolling mill and the workpiece in the control means. The stored values are then used to calculate a pass reduction schedule to reduce the workpiece to the desired dimensions by multiple passes through the workrolls. An iterative calculation is then performed to determine the maximum workpiece dimension the mill can achieve, as limited by the separating force capacity of the mill structure, the drive torque capacity of the drive means, the skidding of therolls related to the workpiece material, the maximum permissible pass reduction, the entry and exit tensions, if any, on the workpiece, and the desired workpiece dimension. The calculated pass information is then used to operate the control means tooptimize operation of the rolling mill by controlling the separating force between the working rolls and the speed of the drive means. In accordance with one particularly advantageous aspect of the invention, the calculated pass schedule is adjusted for selective of the last several passes to equalize the separating force on those selected passes for optimizing the flatness ofthe rolled workpiece. In a still further aspect of the invention, the actual gauge of the workpiece after rolling a pass is compared to the calculated gauge for that particular pass. If the measured workpiece gauge deviates from the calculated gauge, a new passschedule for all subsequent passes is recalculated. In yet another aspect of the invention, the calculated pass reduction values are transferred from the control means to the rolling mill for automatic control of the mill. In still another aspect of the invention, the control means provides a prompt to the operator after each non-final pass to enable the operator to indicate whether a particular exit gauge was achieved on the previous pass. If the calculated exitgauge and the measured exit gauge for a particular pass differ by a predetermined amount, the pass reduction schedule is In still another aspect of the invention, the control means operates the rolling mill automatically, while still enabling manual settings on the mill to remain in use during automatic BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings incorporated in and forming a part of the specification illustrate several aspects of the present invention, and together with the description serve to explain the principles of the invention. In the drawings: FIG. 1 is a schematic diagram showing how the roll separating force and power are calculated for a pass; FIG. 2 is a logic diagram showing how the system maximizes the reduction to achieve maximum power and/or roll separating force for a pass; FIG. 3 is a logic diagram showing how the system maximizes reductions on a multi-pass schedule, and also equalizes the roll separating force on the last several passes; FIG. 4 is a logic diagram showing how the system can accept operator intervention at any stage and reoptimize the remaining passes; and FIG. 5 is a schematic diagram showing how the system is integrated with a typical prior art mill and its control systems to provide the management function. Reference will now be made in detail to the present preferred embodiment of theinvention, an example of which is illustrated in the accompanying drawings. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS FIG. 1 shows in diagrammatic form the basic calculation method adopted by most theories of cold rolling. Such well-known theories as that of Bland and Ford, and that of Stone, all adopt such methods. It is generally understood by those skilled in the art that the roll separating force can be calculated using an equation of the general form given in step 5 of FIG. 1, regardless of which particular theory is adopted. The differences between the several theories usually lie in the assumptions made, and the methods of calculating the effect of the flattening of the work rolls, and the methods of calculating the pressure multiplication factor (PMF). Roll flattening occurs due to the very high pressures that occur in the cold rolling of metals. It can be particularly severe if strip thickness is small relative to the work roll diameter, and if the material being rolled is very hard. Because the circumferential speed of the rolls is uniform as it passes through the roll bite, but the speed of the strip increases as it reduces in thickness through the bite, the strip is normally slipping backwards relative to the rolls at theentry side of the roll bite, and slipping forwards relative to the rolls at the exit side of the bite. At one point in the bite, the neutral point, the strip will be traveling at the same speed as the rolls. These phenomena of backward slip, forwardslip and neutral point are well known in the art, and are described in any textbook on rolling. In order to overcome the effect of friction between rolls and strip, which tends to resist the forward and backward slip, and hence to resist the elongation of the strip, additional roll separating force (RSF) is required. The factor of increaseof RSF due to friction is known as the pressure multiplication factor (PMF). A common feature of most rolling theories is the necessity to guess or estimate the roll flattening and PMF values at the start of the calculation, then to use an iterative procedure to calculate RSF, the iterative procedure being completed whenthe RSF calculated from the assumed values of roll flattening and PMF (step 9) when used in steps 7 and 8, gives the same values of roll flattening and PMF that were assumed. In order to proceed with the basic calculations, the mill data, coil data and material data as listed in FIG. 1 must be known. The calculation proceeds by calculating roll bite friction coeffecient (step 1) for which rolling speed and coolanttype must be known, and constrained yield stress of material at start (Y1) middle (Y) and end (Y2) of pass (step 2) for which material of strip, gauge at which strip was last annealed, entry and exit gauges must be known. The third and fourth stepsestimate the pressure multiplication factor, and the flattened radius of the work roll. The fifth step calculates entry and exit tension, the entry tension being the actual payoff or uncoiler tension, or the maximum tension as limited by the strength ofthe material (it is usually limited to one-third of Y1) whichever is less, and the exit tension being limited by the coiler tension, or the maximum tension as limited by the strength of the material (usually limited to Y2/3) whichever is less. The roll separating force (RSF) is next calculated (step 6) then the flattened radius (R') (step 7) and the pressure multiplication factor (PMF) (step 8) are calculated using the RSF value from step 6. Finally, the RSF is recalculated (step 9)using the values of R' and PMF from steps 7 and 8. Steps 7, 8 and 9 are repeated until the convergence is obtained, that is, until the value of RSF obtained from step 9, when inserted in step 7, results in the same values of R' and PMF used to calculatethe RSF value. This basic calculation is incorporated at the heart of my mill management system. It is to be understood that the precise theory used and whether it is iterative or noniterative is not important, provided that it is a well tried theory that hasbeen shown to give reasonably close agreement with practice. FIG. 2 is a logic diagram showing how the pass reduction is maximized for any pass. Limiting factors are: (1) Available mill torque (i.e., power at base speed of mill drive); (2) Allowable roll separating force (mechanical limit of mill structure); (3) Skidding limit (if too high a reduction is attempted for a given work roll size, the rolls will skid on the strip, and rolling is impossible); (4) The percentage reduction must not exceed the maximum permissible pass reduction set by the operator (experience or special requirements limit). It is known by experience that pass reductions must be limited with some strip materials, and onsome mills which do not have very high tensions, in order to produce a flat strip. At light gauges, such limits are often achieved before the power limit or RSF limits are reached. For example. on Sendzimir mills rolling light gauge stainless steels,pass reductions of over 60% can be achieved typically. However, in practice, pass reductions greater than 20-25% are rarely taken because of flatness difficulties. Also, special requirements sometimes dictate pass reductions. This is discussed later. (5) Final gauge--the pass reduction cannot be so high that the exit gauge is less than the final (target) gauge. The first step (step 1) is to perform the basic calculation for a nominal pass reduction (say 20%) using the prior art method of FIG. 1. The next step (step 2) is to check if the material is hard enough for the maximum RSF to be developed. (For example, if a material such as lead is rolled in a Sendzimir mill, having very small work rolls, the rolls will cut through the stripbefore maximum RSF is developed). The third step is to compare the RSF with the maximum RSF of the mill, and, if it is not equal to RSF max., then to increase or decrease the exit gauge accordingly, and repeat the basic calculation. This procedure (iteration) is repeated untilthe RSF reaches the maximum value. The fourth step is to check that the roll flattening factor is not too high. If it is too high, then the exit gauge is increased a small step at a time, and basic calculation repeated until roll flattening factor becomes acceptable. The fifth step is to check that the exit gauge is no less than the final desired gauge. If it is, then the exit gauge is made equal to the final gauge, and the basic calculation made once more. The sixth step is to check that the exit gauge is not less than the allowable gauge, as dictated by the skidding limit and experience limit. If it is less than the experience limit or the skidding limit, then the exit gauge is set to theskidding limit or the experience limit (whichever is greater) and the basic calculation repeated. The seventh step is to compare the mill power with the available power from the mill motor at the rolling speed. (Mill power up to the base speed is proportional to speed. Above the base speed mill power is constant.) If the mill power isgreater than the available power, then step eight will be made. If not, then step nine will be made. The eighth step (mill power too high) is to compare the mill speed with the base speed. If the speed is less than or equal to the base speed, then the exit gauge is increased (to give draft H1-H2 reduced in proportion to desirable reduction inmill power) and basic calculation is repeated. If the speed is greater than the base speed, then the speed is reduced, and the exit gauge may be increased, and the basic calculation is repeated. The ninth step (power OK or too low) is to compare the mill speed with the base speed. If the speed is less than the base speed, it means that the speed is limited by the speed of the payoff line, and cannot be increased. In this case, thecalculation is complete. If the speed is greater than or equal to the base speed, then the speed is increased in proportion to the desired increase in mill power, or to the top speed (whichever is lower) and the basic calculation is repeated. If thespeed is equal to the top speed, it cannot be increased and the calculation is complete. Note that each time the basic calculation is repeated the computation returns to step 1 and repeats all the successive steps and must satisfy the conditions of each step again before proceeding, with one exception. The exception is that, if thespeed has not been changed after the last RSF maximization (step 3) then step 3 is omitted. The reason for this is that the pass reduction for maximum RSF does not change (unless the speed is changed) and, therefore, since all steps after step 3 onlyreduce the pass reduction (i.e., increase H2), then the condition of step 3 is automatically satisfied provided that the speed is not changed. Eventually all the conditions will be satisfied and the final gauge and rolling speed achieved will ensure that at least one of the above limits (2)-(5) will be reached for the pass, and that either limit (1) will be reached, or the mill will berolling at the top speed (except for the first pass--known as the payoff pass--where speed is restricted by the speed of the payoff line. However, even in this case the full available mill power at the payoff line speed will be developed). FIG. 3 is an example showing how a multi-pass rolling schedule is developed using the optimizing calculation of FIG. 2. For each pass the calculation of FIG. 2 is used to establish the minimum gauge that can be achieved. This gauge is taken asthe starting gauge for the next pass. Then the procedure is repeated for succeeding passes until the final gauge is achieved. After each pass calculation the results of the calculation are stored. Usually for the last few passes the RSF values will be fairly close to each other except for the final pass which could have a value of RSF anywhere between slightly more than zero and the maximum value (depending how close the exit gauge on thepenultimate pass is to the final gauge). Since it is desirable to have reasonably closely matched RSF values on the last several passes (enabling the same mill profile settings to be used without producing drastic changes in strip profile from pass topass) as is well known in the art, then the system compares the RSF values on the final two passes, and, if they are not equal (within, say, a 10% tolerance band), then the last few passes are repeated, with the RSF limit set to the average value forthese passes. This procedure is repeated until the RSF values on the last few passes are equal (within the allowable tolerance band). In the example of FIG. 3, the system repeats all the passes, or the last four passes, whichever is fewer, with the RSF limit set accordingly. The result of this procedure is that the RSF values for all the passes, or for the last four passes,are equalized, thus giving the best rolling conditions for strip flatness, while the total number of passes is exactly the same as before, so the total time to complete rolling of the coil will be the same as it was for the schedule before the RSFequalizing procedures was followed. In fact, calculations show that the time will be a little shorter, since the exit gauge on all the equalized passes (except the last) will be a little higher than before RSF equalizing, hence the total length of thestrip shorter. Furthermore, as the pass reduction on all the equalized passes (except the last) is small than before RSF equalizing, the rolling speed is usually higher, (at the same mill power level) and this shortens the pass time even more. Table 1 shows a typical display on the monitor of our system after pass reduction optimization, but before RSF equalization. Table 1 shows a 7 pass schedule for rolling stainless steel 50 inches wide from 0.15 inch down to 0.035 inch. The millmotor power is 2500 HP and base speed 500 FPM, and it can be seen that the power limit is reached on passes 2-5. Also, the rolling load (RSF) limit is reached on pass 6. On pass 6 the system increases the mill speed to 558 FPM in order to use up allthe available mill power. Since the gauge after 6 passes (0.037 in.) is so close to the desired final gauge, the final pass reduction is only 5.5%, giving a RSF of only 51%. Table 2 shows how the system updates the monitor display after performing the RSF equalization procedure. It can be seen that the last four passes are repeated and RSF values of about 85% are developed on the last four passes. The rolling speedis increased on all these passes above 500 FPM to utilize the available mill power. Ir can be seen that the total pass time for the last four passes is less after RSF equalization (18.8 minutes) than it was before (21.8 minutes). FIG. 4 shows how my mill management system can accommodate deviations from the planned roll pass schedule and still provide the operator with optimized pass reductions from the point of deviation onward. This feature is of great value, since themill operator may have to adjust the pass reduction on a given pass for a variety of reasons. Possible reasons are (a) the strip flatness at the optimized reduction is unsatisfactory--this could be caused by unusual profile of incoming strip, incorrectmill settings or incorrect roll crowns; (b) the strip is harder (or softer) than was assumed by the computer, due to variations in the proportions of the elements in the alloy being rolled or perhaps to the rolled material being improperly annealedbefore delivery to the mill; (c) roll skidding occurring due to the mill rolls being smoother than usual, or because of a change in the mill coolant, and so on. As shown in FIG. 4 and in Table 2, when the complete optimized and equalized schedule has been presented by our system to the operator, the system asks the operator (using the computer monitor) "START PASS?". When the operator is ready, heresponds (Y [ENTER]" (i.e., the operator presses the "Y" key, then the "ENTER" key) and the system then displays the values of variables for the first pass (i.e., exit gauge, speed, entry tension, and exit tension) so that the operator can set the millto these values. The system also displays "Gauge achieved, if different?" as shown in Table 3. When the operator has completed the pass, if the gauge achieved by the rolling mill was equal to the exit gauge specified by the system for that pass, then the operator simply presses the "ENTER" key on the computer keyboard, and the systemdisplays the variables for the next pass, and "Gauge achieved, if different?". As long as the operator achieves the specified exit gauge on any pass, he presses the "ENTER" key after completing the pass, and the variables for the next pass aredisplayed. This process continues until the final guage is achieved, i.e., rolling of the coil is completed. If the operator does not achieve the specified gauge on a given pass, he types in the gauge achieved in response to the "Gauge achieved, if different?" prompt. For example, as shown in Table 3, if gauge achieved was 0.118 in., he would type"0.118 [ENTER]". The system then performs the basic calculation for the pass just completed (i.e., the calculation given by FIG. 1) and then peforms the optimization procedure for the remaining passes and equalizes the last few passes (i.e., theprocedure given by FIG. 3 is performed for the remaining passes). The system then displays the values of the variables for the first of the remaining passes, as shown in Table 4, enabling the operator to set the mill to these values. Thus, if the operator rolls to a different exit gauge from that specified by the system, then, provided he tells the system (via the keyboard) what gauge the mill actually achieved, the system will reoptimize and reequalize the remaining passesbased upon the actual gauge achieved. It can be seen that the system is highly adaptive to the needs of the mill operator in this respect. This reoptimizing and reequalizing procedure can be performed on every pass, if necessary, (except the lastpass). It is envisaged that the system can also be interfaced to a reversing rolling mill and its drive system, in order to provide the optimum mill settings automatically, without operator intervention being required to set the correct values of themill variables manually. In FIG. 5, I show one example of how the system can be interfaced to a typical prior art rolling mill and its drives and its four main control systems (speed control, entry tension control, exit tension control and gauge control). The rollingmill 11 with view taken from the rear shown in FIG. 5 is a reversing mill, and is provided with tension reels (coilers) 12 and 13 on left and right sides of the mill. The mill and the tension reels are each driven by direct current electric motors 14,15 and 16 through gear sets 17, 18 and 19. The mill incorporates a screwdown 20 to adjust the gap between the work rolls 21 (and hence the thickness of the material 22 rolled by the mill) the screwdown itself incorporating a drive and position controlsystem 23. Thickness gauges 24 and 25 are provided on left and right sides of the mill to measure the thickness or gauge of the strip entering and leaving the mill stand on every pass. The strip is wound into coils 26 and 27 on the tension reels 12 and13. Deflector rolls 28 and 29 are mounted on left and right sides of the mill to provide a constant pass line for the strip 22 passing between the mill rolls 21. The strip wraps around these rolls as it travels between each tension reel and the millrolls. A speed sensing transducer 30 or 31 (tachogenerator or rotary optical incremental encoder) is coupled to each deflector roll. These transducers measure the speed of the deflector rolls (and hence the strip) on left and right sides of the mill. This prior art mill and its drive are controlled as follows: The speed of the mill and coilers is determined by the speed of the mill motor which is controlled by a simple speed control loop, with stable operating conditions being achieved whenthe feedback signal from the exit side strip speed sensing transducer is equal to the speed command or reference Each tension reel motor is controlled to provide constant tension in the strip between reel and the mill stand. In the example shown, tension is effectively sensed by measuring armature current in the reel motor, and this current is suitablescaled to an equivalent tension value and compared with a tension reference signal. Stable operation is achieved when the scaled armature current value is equal to the tension reference signal. The automatic gauge control system operates by comparing the strip exit gauge (measured by a continuous thickness gauge) with the exit gauge reference signal, and sending a commend signal to the mill screwdown drive according to any error (i.e.,difference between exit gauge command and feedback signals) to increase or decrease the roll gap accordingly. As is well known in the art, the gauge control system must allow for the transport lag between the mill rolls and the exit side thicknessgauge, and, so is provided with speed signals from the speed transducers to evaluate this lag. To enable the reversing operation to be controlled, the operator has a switch (not shown) to select mill direction (left to right (R); and right to left (L)). This switch is coupled to an electrical relay known as the mill direction relay (MDR). The MDR is provided with contacts (not shown) to reverse the rotation of the mill motor, and with contacts 32, 33 which provide for correct routing of entry and exit tension command signals, contacts 34, 35 which provide for correct tachometer signal forexit side strip speed sensing, and contacts 36, 37 which provide correct exit side thickness feedback signal, according to the mill direction. In FIG. 5, I show how, using a mode switch, the reference signals for the four main mill control systems can either be set up manually by the mill operator, according to the displayed optimized values given by the mill management system, or theycan be set up directly by the mill management system. In the former case, the mode switch is set to manual, and in the latter case it is set to automatic. FIG. 5 also shows how, even if the mode switch is set to automatic, the operator still retains his power to intervene. When the automatic mode is selected, the computer presets all the manual references (command signals) to the optimized valuesat the beginning of every pass, by presetting units to the computer optimized reference values. After rolling commences, the operator can increase or decrease settings exactly as he would in manual mode because the setting units remain in use inautomatic, as well as in manual mode. When the mode switch 38 is set to either mode, the operator can adjust settings of strip thickness using push buttons 43 and 44 to increase or decrease the value of the reference signal generated by setting unit 39. Similarly, he can adjustrolling speed using push buttons 45 and 46 which control speed setting unit 40, entry tension using push buttons 47 and 48 which control entry tension setting unit 41, and exit tension using push buttons 49 and 50. The design of the setting units is prior art. Typically, they could consist of a voltage to frequency converter, (to convert the operator input signals to a digital rate signal) and a bidirectional counter (to cound the pulses from theconverter, counting up if the increase push button is pressed, and counting down if the decrease push button is pressed). The output of the counter will represent the reference value of the controlled variable. For example, on the thickness settingunit, a count of 1754 could represent 0.1754 inches. The bidirectional counters can be preset to any value using the preset inupts, upon the operator depressing "preset enable" push buttons 60-63. For simplicity I have shown "preset enable" pushbuttons 60-63 on setting units 39-42. In practice, this function would be achieved more conveniently by a single "preset enable" push button with relay connection to the four units, or by relay connection from the mill direction relay (MDR) to actuatethe "preset enable" function on the four setting units whenever the operator changes the mill direction. Such setting units would also be suitable for use on such prior art mill management systems as the "programmed pass schedule" method described abover, where the preprogrammed values of gauges, tensions and speeds could be preset using the presetunits. In the case of our invention, the digital computer 50 is provided with digital output interfaces 52-55. These are commercially available interfaces which can be operated under the control of the computer, and which contain a memory in which thereference value calculated by the computer is stored while the computer proceeds with other tasks. Before the start of every pass, the computer takes the values of exit tension, entry tension, speed and exit gauge from the store for the pass (see FIG. 3and FIG. 4) and transfers these values to the interfaces 52, 53, 54 and 55, respectively. These values remain stored in the respective interfaces until just before the start of the next pass when the computer takes the values of the same variables fromthe store for the next pass and transfers these values to the interfaces. The exact time that the transfer of new values of the variables from the pass store (internal to digital computer 50) to the output interfaces is when the operator types "Y" in response to the system prompt "START PASS?" (see FIG. 4) when theremaining passes have been reoptimized, or when the operator presses the [ENTER] key in response to the system prompt "GAUGE ACHIEVED, IF DIFFERENT?" (in this case, the mill has achieved the exit gauge specified by the computer, and reoptimization is notrequired). At the same time that these new values of the variable are transferred, the same values are displayed on the monitor. The example of my method shown in FIG. 3, which achieves substantially equal roll separating force on the last few passes, serves the requirements of most applications, where the prime requirement is of good strip flatness, with minimum time lostby changing mill settings. In some cases, however, the requirements may be different. For example, if high surface brightness or lustre is to be achieved, best results are obtained if freshly ground or polished work rolls are inserted into the mill just before the finalpass, and if the pass reduction taken on the final pass is very light. In such cases it is a simple matter to change the mill profile settings (and even the work roll crowns) while the work rolls are being changed before the last pass, with noadditional lost time. Also, sometimes the metallurgy of the strip requires a predetermined reduction on the first pass, or on the last pass. In these cases, my method is still valid but is modified in that the method of FIG. 3 is applied only to those passes whose reductions are not predetermined. For example, rolling from 0.35 in. to 0.1 in. thickness, with 10% reduction specifiedfor the final pass, the management system will follow the procedure of FIG. 3 for a starting gauge (Ho) of 0.35 in. and a finish gauge (Hn) of 0.111 in. It will then follow the calculation procedure of FIG. 2 using 10% as the operator's pass reductionlimit to determine the values of variables for the predetermined final pass rolling from 0.111 to 0.1 in. In this way, the program also checks that the 10% reduction is within the capability of the mill and its drives. As another example, when rolling from 0.2 in. thickness to 0.05 in., with 15% reduction specified for the first pass, the management system will follow the procedure of FIG. 3 for all passes, with 15% reduction set as the operator's limit for thefirst pass. This procedure again checks that the predetermined 15% reduction is feasible. The value of the operator's pass reduction limit control can be seen from the above. The operator can impose separate limits for the first pass, the intermediate passes, and for the final pass, so that the above special cases can be handled withease. TABLE 1 __________________________________________________________________________ # VARIABLE OLD VALUE NEW VALUE __________________________________________________________________________ 1 WORK ROLL DIAMETER (IN) = 5.000 2 MATERIAL NUMBER= 13 304 STAINLESS STEEL 3 TENSION STRESS (LB/IN 2) = 50000.000 4 ANNEAL GAUGE (IN) = 0.150 5 STRIP WIDTH (IN) = 50.000 6 COIL WEIGHT (LB) = 20000.000 7 STARTING GAUGE (IN) = 0.150 8 FINAL GAUGE (IN) = 0.035 __________________________________________________________________________ EXIT % TOTAL PASS ENTRY ENTRY EXIT EXIT MILL MILL ROLLG PASS PASS GAUGE RED. RED. SPEED TENS. TENS. TENS. TENS. PWR. PWR. LOAD TIME NO. IN % % FPM LB AMP LBAMP HP AMP % MIN __________________________________________________________________________ 1 0.1155 23.0 23.0 300 5000 152 50000 1522 1527 1899 70.2 4.4 2 0.0942 18.5 37.2 500 37500 1142 50000 1523 2542 3160 78.2 3.5 3 0.0773 18.0 48.5 500 50000 1522 50000 1522 2549 3169 84.3 4.0 4 0.0622 19.5 58.5 500 50000 1523 50000 1522 2528 3144 91.0 4.8 5 0.0485 22.0 67.7 500 50000 1523 50000 1523 2530 3145 98.3 5.9 6 0.0370 23.6 75.3 558 50000 1522 50000 1522 2534 3151 99.9 6.7 7 0.0350 5.5 76.7 1000 40000 1218 40000 1218 849 1056 51.0 4.4 __________________________________________________________________________ TABLE 2 __________________________________________________________________________ # VARIABLE OLD VALUE NEW VALUE __________________________________________________________________________ 1 WORK ROLL DIAMETER (IN) = 5.000 2 MATERIAL NUMBER= 13 304 STAINLESS STEEL 3 TENSION STRESS (LB/IN 2) = 50000.000 4 ANNEAL GAUGE (IN) = 0.150 5 STRIP WIDTH (IN) = 50.000 6 COIL WEIGHT (LB) = 20000.000 7 STARTING GAUGE (IN) = 0.150 8 FINAL GAUGE (IN) = 0.035 __________________________________________________________________________ EXIT % TOTAL PASS ENTRY ENTRY EXIT EXIT MILL MILL ROLLG PASS PASS GAUGE RED. RED. SPEED TENS. TENS. TENS. TENS. PWR. PWR. LOAD TIME NO. IN % % FPM LB AMP LBAMP HP AMP % MIN __________________________________________________________________________ 1 0.1155 23.0 23.0 300 5000 152 50000 1522 1527 1899 70.2 4.4 2 0.0942 18.5 37.2 500 37500 1142 50000 1523 2542 3160 78.2 3.5 3 0.0773 18.0 48.5 500 50000 1522 50000 1522 2549 3169 84.3 4.0 4 0.0638 17.4 57.5 564 50000 1523 50000 1523 2511 3121 84.9 4.3 5 0.0526 17.5 64.9 641 50000 1523 50000 1523 2537 3154 85.0 4.5 6 0.0429 18.4 71.4 703 50000 1523 50000 1523 2533 3149 85.2 4.9 7 0.0350 18.5 76.7 816 49032 1493 49032 1493 2489 3095 81.7 5.1 START PASS ? __________________________________________________________________________ TABLE 3 __________________________________________________________________________ # VARIABLE OLD VALUE NEW VALUE __________________________________________________________________________ 1 WORK ROLL DIAMETER (IN) = 5.000 2 MATERIAL NUMBER= 13 304 STAINLESS STEEL 3 TENSION STRESS (LB/IN 2) = 50000.000 4 ANNEAL GAUGE (IN) = 0.150 5 STRIP WIDTH (IN) = 50.000 6 COIL WEIGHT (LB) = 20000.000 7 STARTING GAUGE (IN) = 0.150 8 FINAL GAUGE (IN) = 0.035 __________________________________________________________________________ EXIT % TOTAL PASS ENTRY ENTRY EXIT EXIT MILL MILL ROLLG PASS PASS GAUGE RED. RED. SPEED TENS. TENS. TENS. TENS. PWR. PWR. LOAD TIME NO. IN % % FPM LB AMP LBAMP HP AMP % MIN __________________________________________________________________________ 1 0.1155 23.0 23.0 300 5000 152 50000 1522 1527 1899 70.2 4.4 GAUGE ACHIEVED (IN), IF DIFFERENT ? .118 __________________________________________________________________________ TABLE 4 __________________________________________________________________________ # VARIABLE OLD VALUE NEW VALUE __________________________________________________________________________ 1 WORK ROLL DIAMETER (IN) = 5.000 2 MATERIAL NUMBER= 13 304 STAINLESS STEEL 3 TENSION STRESS (LB/IN 2) = 50000.000 4 ANNEAL GAUGE (IN) = 0.150 5 STRIP WIDTH (IN) = 50.000 6 COIL WEIGHT (LB) = 20000.000 7 STARTING GAUGE (IN) = 0.150 8 FINAL GAUGE (IN) = 0.035 __________________________________________________________________________ EXIT % TOTAL PASS ENTRY ENTRY EXIT EXIT MILL MILL ROLLG PASS PASS GAUGE RED. RED. SPEED TENS. TENS. TENS. TENS. PWR. PWR. LOAD TIME NO. IN % % FPM LB AMP LBAMP HP AMP % MIN __________________________________________________________________________ 1 0.1180 21.3 21.3 300 5000 152 50000 1523 1346 1674 66.1 4.3 2 0.0962 18.5 35.9 500 37500 1142 50000 1523 2541 3159 77.1 3.4 3 0.0792 17.7 47.2 500 50000 1523 50000 1523 2525 3139 83.2 4.0 4 0.0649 18.0 56.7 532 50000 1522 50000 1522 2498 3106 86.6 4.4 5 0.0533 18.0 64.5 617 50000 1523 50000 1522 2541 3160 86.5 4.6 6 0.0432 18.9 71.2 674 50000 1522 50000 1523 2527 3141 86.8 5.0 7 0.0350 19.0 76.7 785 50000 1523 50000 1523 2484 3088 83.2 5.3 START PASS ? __________________________________________________________________________ * * * * * Randomly Featured Patents
{"url":"http://www.patentgenius.com/patent/4745556.html","timestamp":"2014-04-21T15:29:11Z","content_type":null,"content_length":"66003","record_id":"<urn:uuid:1bfa52fc-47c2-42ec-a4c9-0d2a86845ceb>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
A nonlinear scalar model of extreme mass ratio inspirals in effective field theory II. Scalar perturbations and a master source A nonlinear scalar model of extreme mass ratio inspirals in effective field theory II. Scalar perturbations and a master source by Galley, Chad R. For Part 1 of this series, see arXiv:1012.4488. 20 pages, 7 figures The motion of a small compact object (SCO) in a background spacetime is investigated further in a class of model nonlinear scalar field theories having a perturbative structure analogous to the General Relativistic description of extreme mass ratio inspirals (EMRIs). We derive regular expressions for the scalar perturbations generated by the SCO’s motion valid through third order in $latex \epsilon$, the size of the SCO to the background curvature length scale. Our expressions are compared to those calculated through second order in $latex \epsilon$ by Rosenthal in [E. Rosenthal, CQG 22, S859 (2005)] and found to agree but our procedure for regularizing the scalar perturbations is considerably simpler. Following the Detweiler-Whiting (DW) scheme, we use our regular expressions for the field and derive the regular self-force corrections through third order. We find agreement with our previous derivation based on a variational principle of an effective action for the worldline associated with the SCO thus demonstrating the internal consistency of our formalism. This also explicitly demonstrates that the DW decomposition of Green’s functions is a valid and practical method of self force computation at higher orders in perturbation theory and, as we show in an appendix, at all orders in perturbation theory. Finally, we identify a master source from which all other physically relevant quantities are derivable. Knowing the master source perturbatively allows one to construct the waveform measured by an observer, the regular part of the field on the worldline, the regular part of the self force, and orbital quantities such as shifts of the innermost stable circular orbit, etc. The existence of a master source together with the regularization methods implemented in this series should be indispensable for derivations of higher-order gravitational self force corrections.
{"url":"http://brownbag.lisascience.org/arxiv1107-0766/","timestamp":"2014-04-17T09:36:23Z","content_type":null,"content_length":"31352","record_id":"<urn:uuid:9dbd7ff3-f9b6-4323-a93e-4ddb9716f422>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
A function `y = log_12 x` would be the inverse function of what exponential function? - Homework Help - eNotes.com A function `y = log_12 x` would be the inverse function of what exponential function? In order to get the inverse of the function `f(x)` which we denote as `f^(-1)(x)`, we simply interchange x and y, and then solve for y: `y = log_12 x` Interchanging x and y: `x = log_12 y` To solve for y, we note that: `y = log_a b` is equivalent to `a^y = b` , using definitions of log and exponents. Hence, we can solve for y` `as: `y = 12^x` This means that: `f^(-1)(x) = 12^x` To check, note that `f(f^(-1)(x)) = f^(-1)(f(x)) = x` `f^(-1)(f(x)) = 12^(log_12 x) = x` also, `f(f^(-1)x)) = log_12 (12^x) = x.` The inverse function of the function f(x) = log(base 12)x is f^-1(x) = 12^x. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/function-y-log-base-12-x-would-inverse-function-449494","timestamp":"2014-04-18T14:29:11Z","content_type":null,"content_length":"25711","record_id":"<urn:uuid:20561945-0dc2-4744-8369-ac13a5eae0a9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
e S The Science of Ice Sheets: the Mathematical Modeling and Computational Simulation of Ice Flows Seminar Room 1, Newton Institute As a complement to the ongoing Newton Institute program "Multiscale Numerics for the Atmosphere and Ocean", we consider another component of climate systems, namely land ice. The melting of ice in Greenland and Antarctica would, of course, be by far the major contributor to sea level rise. Thus, to make science-based predictions about sea- level rise, it is crucial that the ice sheets covering those land masses be accurately mathematically modeled and computationally simulated. In fact, the 2007 IPCC report on the state of the climate did not include predictions about sea level rise because it was concluded there that the science of ice sheets was not developed to a sufficient degree so that such predictions could not be rationally and confidently made. In recent years, there has been much activity in trying to improve the state-of-the-art of ice sheet modeling and simulation. In this lecture, we review a hierarchy of mathematical models for the flow of ice, pointing out the relative merits and demerits of each, showing how they are coupled to other climate system components, and discussing where further modeling work is needed. We then discuss algorithmic approaches for the approximate solution of ice sheet flow models and present and compare results obtained from simulations using the different mathematical models. The video for this talk should appear here if JavaScript is enabled. If it doesn't, something may have gone wrong with our embedded player. We'll get it fixed as soon as possible.
{"url":"http://www.newton.ac.uk/programmes/AMM/seminars/2012100117001.html","timestamp":"2014-04-17T00:50:15Z","content_type":null,"content_length":"7094","record_id":"<urn:uuid:fa4d04b0-d753-4594-8a65-17b29f421cfc>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Brookeville Math Tutor Find a Brookeville Math Tutor ...It is from these individual sessions that I find the students learn the most. My tutoring style is strongly influenced by the positive experience that I gained from these personalized instruction with students. Each student has a different way of learning a subject. 16 Subjects: including algebra 1, algebra 2, calculus, geometry ...I truly enjoy working with students and believe that each one has specific learning styles. My job is to successfully work one-on-one with those styles to help them achieve success, and the proof is in the fact that many students rise from a C/D level grade to A/B+. I have several letters of re... 17 Subjects: including trigonometry, algebra 1, algebra 2, calculus ...I have experience tutoring at all grade levels and at a collegiate level. My schedule is extremely flexible, and I can tutor during the day, as well as on evenings and weekends. I can't wait to get started working with you!I have been tutoring the SAT for over 7 years now. 31 Subjects: including algebra 2, MCAT, algebra 1, ACT Math ...Also, I have voluntarily taught an SAT math prep course at the same church between 2004 and 2009. My subject expertise includes Pre-Calculus, Geometry, Trigonometry, Algebra I and II, College Algebra, Finite Math, and standardized test preparation. Students regularly request my services when preparing for the math sections of the ASVAB, SAT, PSAT, ACT, GED, GRE, GMAT, HSA, and MSA 33 Subjects: including algebra 1, algebra 2, grammar, geometry ...I have tutored students in elementary, middle, and high grades. I have developed fun activities for students to actually have fun while they are learning. I look forward to helping your child become a huge success. 18 Subjects: including calculus, elementary (k-6th), phonics, reading
{"url":"http://www.purplemath.com/brookeville_md_math_tutors.php","timestamp":"2014-04-21T12:38:36Z","content_type":null,"content_length":"23838","record_id":"<urn:uuid:6ee6e881-f8a0-4640-a8db-9bb7c3585e21>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 204 What is the coupon rate of a two-year, $10,000 bond with semiannual coupons and a price of $9,543.45, if it has yield to maturity of 6.8%? What must be the price of a $10,000 bond with a 6.5% coupon rate, semi-annual coupons, and two years to maturity if it has a yield to maturity of 8% APR? If $12,000 is invested in a certain business at the start of the year, the investor will receive $3,600 at the end of the next four years. What is the present value of this business opportunity if the interest rate is 7% per year? The first term of an AP is 8. The ratio of d 11th term to the 7th term is 5:8 calculate the common difference Differential Equations A college professor contributes $5,000 per year into her retirement fund by making many small deposits throughout the year. The fund grows at a rate of 7% per year compounded continuously. After 30 years, she retires and begins withdrawing from her fund at a rate of $3,000 per... Because they want student know about their career Acetylene gas c2h2 reaction cac2+2h2o. C2h2+Ca(oh)2 how many moles of cac2 needed to prepare 10.0 g c2h2 please show work and answer soon final question on big test review Ten members of a fraternity take a statistics course. Here are their scores on the first exam in the course. 61,74,47,60,62,63,65,79,55,85. In all 135 students took the exam. The third quartile for all 135 scores was 69. How many students had scores higher than 69? What is the equation go the line that is parallel to the graph of y=1/2x+6, and who's y-intercept is -2 13feet ladder is placed 5feet away from the wall . the distance from the ground straight up to the top of the mall is 13 feet. will the ladder touch the top of the mall? Quantum Physics thnx frnds, all of you :-) Quantum physics answer of q 2 is 3/(5*sqrt(2)) |00> (3*i)/(5*sqrt(2)) |01> 4/(5*sqrt(2)) |10> (4*i)/(5*sqrt(2)) |11> Quantum physics answer of q 3. is 3rd option Quantum physics answer of q 3. is 3rd option Exercise anwers Grade 12 LIFE ORIENTATION A uniform beam of length L whose mass is m, rest with its ends on two digital scales ( Figure 13.2). A block whose mass is M rests on the beam, its center is one-fourth away from the beam's left end. What do the scales read ? PQ is a vertical tower 95m high, the points R and S are on the same horizontal plane as Q, the angle of elevation of P from R is 35 if QS =155m and Q from RQS=48 1.Calculate the distance of QR 2.Calculate the distance of RS 3.Calculate the are of QRS The Goal Every cult has a stated, vague and metaphorical goal. Because this goal must serve as the "illuminated eye" of the pyramid, it cannot be attainable. Rather, it is expressed as an abstract idea - like "salvation" - which the cult members will enjoy ... world civilization Chinese civilizations survived, intact, until the twentieth century you have the formula. Plug in 1000 for N and evaluate. Grade 12 LIFE ORIENTATION Quantum Physics thnx, all of you for your help How do you get the big number on a runaway math puzzle worksheet south al For a hypothetical normal distribution of test scores, approximately 95% fall between 38 and 62, 2.5% are below 38, and 2.5% are above 62. Given this information, (a) the mode=_____ and (b) the standard deviation=_____. 4. In applying cladistic analysis to construct a phylogeny of the animal kingdom, why would the presence of flagella be a poor choice of a characteristic for the grouping the phyla into clades? Which of the following is an example of active reading? A. Skimming over information and taking notes. B. Reviewing a chapter, question as you read, and review notes. C. Speed reading and taking notes in your own words. D. Both B & C. I get 0.041meters. this answer does not seem plausible? A block with mass = 5.0 rests on a frictionless table and is attached by a horizontal spring ( = 130 ) to a wall. A second block, of mass = 1.35 , rests on top of . The coefficient of static friction between the two blocks is 0.40. What is the maximum possible amplitude of osc... Archimedes purportedly used his principle to verify that the king's crown was pure gold, by weighing the crown while it was submerged in water. Suppose the crown's actual weight was 14.0 N. The densities of gold, silver, and water are 19.3 g/cm^3, 10.5 g/cm^3, and 1.00... A child bounces a 54 g superball on the side- walk. The velocity change of the superball is from 23 m/s downward to 19 m/s upward. If the contact time with the sidewalk is 1 800 s, what is the magnitude of the average force exerted on the superball by the sidewalk? i need help with my genisis essay and it cant involve religion a puppy runs a distance of 20m to its mother ina time 5s.what is the puppy's speed? Physics 2A A position vector has an x component of -2.5 m and a y component of 4.2. (a) What is the vector's magnitude and directions? b) suppose a spider took 10 minters to first walk -2.5m in the x direction and then 17 mintures 4.2 m in the y direction? Find the spider's avera... English 1101 composition & rhetoric Put these sentences in order 1-6, based on whether it is the TS(1), GD1(2), SD1(3), GD2(4), SD2(5), CS(6), of a body paragraph in a five paragraph essay. A) Benson claims that the typical family eats together once a week now; a family having dinner together every night is no l... English 1101 composition & rhetoric Put these sentences in order 1-6, based on whether it is the TS(1), GD1(2), SD1(3), GD2(4), SD2(5), CS(6), of a body paragraph in a five paragraph essay. A) Benson claims that the typical family eats together once a week now; a family having dinner together every night is no l... The number of dogs and chickens on a farm add up to 14. The number of legs between them is 36. How many dogs and how many chickens are on the farm if there are at least twice as many chickens as dogs Help Solving This? 1. √x+4 = √x-1+1 2.2√n + 3 = n 5 Rational Exponet Equations: 1.3x^5/2 - 9 = 0 2.(2x+3)^1/4=4 its a pic of a train made of squares and the red train is made up of 6 red squares and the green train is made up of four green squares Prime numbers basic science i do not no the anwer basic science if a force of 140 neaten is applied by a car over a distance of 10 metre within 10 second, calculate. (i) work done by the car. (ii) power of the car. Life orientation Alcoholism is a human factor that may cause ill health, accident, crises or disaster Cellular Phone Sales The probability that a cellular phone company kiosk sells X number of new phone contracts per day is shown below. Find the mean, variance, and standard deviation for this probability distribution. X 4 5 6 8 10 P (X) 0.4 0.3 0.1 0.15 0.05 What is the probab... college math boat travels 160 miles downstream in the same time that it takes to go 96 upstream. the pead of the stream is 6 mph. what is the speed of still water thank you, that helps me alot, good night. sorry, my lap top is being stupid, no, i can use the calculator, but i need to show how i got my answer. what is fraction or simpliest form of 38% and 35% what is fraction or simpliest form of 38% and 35% ok thank you both, 6 and 4/5 times 8 and 1/7 = ok, now i am only in 6th grade, this math is really hard, i need 7/8 times 39. showing work. it is 3 and 2/3 times 5 and 1/9 what is the estimated product of 3 2/3 x 5 1/9 Astronomy 1010 The ISS is being pulled by everything in the universe. Very little, but its being pulled. Because Gm1m2/r^2 And you can't divide something and get 0, you can only get close to 0. mth 209 what formula do i use to figure our the usage (in million)of the drug claritin. From the year 2008-2011 physical science posttest i need help because i have this posttest like a homework and i never take this class and I have to finish the posttest and i don't know what to do. Can you help me.? a plane is flying at 250 km hr an hit a 50 km hr headwind what is its resultantt speed relative to the ground During each shift as a bus driver, Haley drives between 80 and 140 miles. She started the month with 4,500 miles driven. What is a reasonable estimate for the number of total miles she has driven the bus, after 20 more shifts? Please help me out... The pressure of a container of fluorine gas with a volume of 1.5L is 678mmHg at 55.0*C. a. If the volume of the fluorine gas increases to 5.0L, what would be the pressure of the gas in atm? The amount and tempature of gas are consistant. b.How many grams ... mercy land the 2nd, 3rd, and 4th term of an arithemetic progression are x-2, 5 and x+2 respectively, calculate the value of x and hence find the 20th term.. 4. A large company insists that all job applicants who are invited for an interview take a psychometric test. The results of these tests follow a normal distribution with a mean of 61% and a standard deviation of 7.2%. a) What proportion of applicants would be expected to scor... According to a bank, the time taken by its customers to use cash dispensing machines is normally distributed with a mean of 18 seconds and a standard deviation of 3 seconds. I) What is the probability that a customer selected at random takes less than 13 seconds? II) Find the ... Directions: Revise the following sentences to eliminate unnecessary shifts in tense, mood, voice, or person and number and between direct and indirect discourse. Most of the items can e revised in more than one way. Examples: When a person goes to college, you face many... Revise the following sentences to eliminate unnecessary shifts in tense, mood, voice, or person and number and between direct and indirect discourse. Most of the items can e revised in more than one way. 1. The greed of the 1980s gave way to the occupational insecurity of the ... Spanish-Please recheck-SraJMcGin please check So I could say " Escalar montañas es el más divertido" or I could say "Escalar montañas es la divertida de las tres activades" Is that correct? Or I still confused? Spanish-SraJMcGin please check I had asked a question the other day but I'm not sure if I'm understanding this- it was a comparison-You said this as a comment: Spanish-8th grade - SraJMcGin, Thursday, October 20, 2011 at 4:02pm 3. also = de las tres actividades Also, #3 is different because it is no... Spanish-8th grade Thank you very much for explaining this-I appreciate it alot-I was pretty confused before Spanish-8th grade We're doing superlatives and we had to have three groups of sentences. Did I compare these correctly?I numbered each group as 1, 2 or 3, not each sentence in the group. 1.Tocar el piano es interesante. Tocar la guitarra es más interesante que tocar el piano. Tocar e... A company produces steel rods. The lengths of the steel rods are normally distributed with a mean of 121.2-cm and a standard deviation of 1.8-cm. Suppose a rod is chosen at random from all the rods produced by the company. There is a 23% probability that the rod is longer than... help please:) Reading titles and headings, and trying to get an overview of what lies ahead in a reading passage is called: concentrated sulfuric acid has a density of 1.84 g/ml. calculate the mass in grams of 1.00 liter of this acid. what is the volume (in ml)of a liquid(density=2.07 g/ml)weighing 130.grams? A cross country team race is about to begin and team Delta decides to use their one-dimensional kinematics knowledge that they learned from their first year physics course. Each team consists of two people and one horse. The clock is started when the team crosses the starting ... is 10/6 + 65/6 + 120/6 + 175/6 + 230/6 equal to 100 Find the equation of the parabola with the vertex at the origin and directrix y =5 Cultural Antrhopology How does the identification of cultural universals impact our understanding of what it means to be human? You are given a vector in the xy plane that has a magnitude of 94.0 units and a y component of -44.0 units. A. What are the two possibilities for its component? For this part I got x = 83.1 and -83.1 B. Assuming the x component is known to be positive, specify the vector V whi... Pelicans tuck their wings and free-fall straight down when diving for fish. Suppose a pelican starts its dive from a height of 15.0 m and cannot change its path once committed. If it takes a fish 0.25 s to perform evasive action, at what minimum height must it spot the pelican... An unmarked police car traveling a constant 90 km/h is passed by a speeder. Precisely 2.00 s after the speeder passes, the police officer steps on the accelerator. If the police car accelerates uniformly at 2.00 m/s^2 and overtakes the speeder after accelerating for 7.00 s, wh... Determine the stopping distances for an automobile with an initial speed of 95km/h and human reaction time of 1.0s: (a) for an acceleration a= -5.2m/s^2 (b) for a= -6.7m/s^2 A space vehicle accelerates uniformly from 50m/s at t=0 to 182m/s at t=1 0.0s. How far did it move between t=2.0s and t=6.0s? What is one specific diagnosis for Anxiety Disorder? A faucet leaks at the rate of four drops each second.If air resistance is neglected, determine the vertical separation between two consecutive drops after the lower one has fallen 20m Find the unknown dimension width 5 1/2 ft 3 1/4 ft volume 143 ft3 an equilateral triangle of side 20cm is inscribed in a circle.calculate the distance of a side of the triangle from th centre of the circle. 3. You are going to plant trees in your hilly backyard. Tree A is located at coordinates (1,4) and Tree B is located at (5,12). What is the slope of the hill between the two trees? Math 10+ Write an expression with an exponent that is equvalent to 1/8 Math 10+ Write an expression with an exponent that is equvalent to 1/8 Math 10+ Write an expression with an exponent that is equvalent to 1/8 A 1L flask is filled with 1.000atm of H2 and 2.000atm of I2 at 448*C. The following equilibrium is established: H2(g)+I2(g)= 2HI(g). The value of K for this equilibri- um is 50.5. What are equilibrium partial pressures of H2,I2,and HI? pls help! discuss the effect of acids and alkalis on DNA & RNA. Thanks. You are driving in your car and you apply a constant force to your brake pedal. Which of the following are changing? (Ignore air resistance) a. Speed b. Velocity c. Acceleration d. Speed and Velocity e. All of the above number expressions Which numerical expression show 4 quarts more than 12 quarts? If a sphere has radius 5 m. what is the surface area of the sphere? factor trinomial 9x(2) + 9x + 2 i do not know how to type the 9x and the little x over the 2 ( to the x power ) thank you for your help. Pages: 1 | 2 | 3 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=victor","timestamp":"2014-04-20T14:03:21Z","content_type":null,"content_length":"27039","record_id":"<urn:uuid:9c291262-0a5c-48e0-8df7-ff9feeb3e94e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
"But a powerful new type of computer that is about to be commercially deployed by a major American military contractor is taking computing into the strange, subatomic realm of quantum mechanics. In that infinitesimal neighborhood, common sense logic no longer seems to apply. A one can be a one, or it can be a one and a zero and everything in between - all at the same time. [...] Now, Lockheed Martin - which bought an early version of such a computer from the Canadian company D-Wave Systems two years ago - is confident enough in the technology to upgrade it to commercial scale, becoming the first company to use quantum computing as part of its business." I always get a bit skeptical whenever I hear the words 'quantum computing', but according to NewScientist, this is pretty legit To me it still sounds like propaganda and advertisement, because every article about D-Wave's work doesn't really tell you anything more than the usual "maybe", "it's possible", "in the future" -- sounds like D-Wave consists of only marketing people! I always get a bit sceptical when I hear "NewScientist." I always get a bit sceptical when I hear "NewScientist." This. There isn't a single periodical guaranteed to have a more negative reaction from scientists than New Scientist. It's been utter garbage for about twenty years. [/q]It's been utter garbage for about twenty years. [/q] NS was a decent generalist magazine until the late 1980s. Then it became a massively biased socialist propaganda forum. But can it crunch SETI work units? Even more important can it be used to make certain encryption algorithms useless. That is the thing I always care about. And it is always the government agencies or government contractors that gets these kinds of systems first. From an other article linked in the comments: "There was a further limitation. Theoretically, the quantum computer should operate at a temperature of 0 kelvin, but such extreme cooling is impossible in practice, so D-Wave repeatedly ran the system at slightly above zero in the hope of reaching the lowest-energy state. Due to these higher temperatures the calculation got the right answer only 13 times after 10,000 attempts." So eventually you might end up with something like a couple of thousand guesses to decrypt certain data. If that is true, that could be bad. Edited 2013-03-22 12:37 UTC Said improvements imperil current cryptography systems. However, it is not the end-all of cryptography -- the newest replacement in SSH and GPG security, for examples, include elliptic curves and another algorithm. These newer algorithms are not known to be attacked by quantum computation. I know some are considered "quantum computing safe", but many commonly deployed implementations don't support that crypto yet. And even if the implementations support it, it doesn't mean they'll choose to use it when they talk to each other. If you're using an SSH version that supports it, use an ECDSA key. ECDSA is quantumproof. That's fine, no great, but the general public doesn't use it. They use HTTPS, maybe POP3S or IMAPS. Or even an IPSEC- or SSL VPN. Over 25% of the top 200.000 'secure websites' still have SSL2 support enabled, even though it has been know to be insecure since 1995 or 1996: I doubt any software comes installed with SSL2 enabled by default. So that means that 25% of the people who configure the top 200.000 'secure websites' don't know what they are doing. That is pretty I also doubt you can find anyone deploying IPSEC VPN in a corporate environment that even knows what Perfect forward secrecy or elliptic curve is and is using an IPSEC gateway that actually supports EDIT/UPDATE correction: at least Windows 2008 servers have SSLv2 enabled by default. So maybe others too. Do we still need it for compatibility, I hope not. Edited 2013-03-23 10:24 UTC "However, it is not the end-all of cryptography -- the newest replacement in SSH and GPG security, for examples, include elliptic curves and another algorithm. These newer algorithms are not known to be attacked by quantum computation." Do you have a source for this? This very much interests me and I'd like to read more about it. It's a bit unintuitive as to why a quantum algorithm wouldn't work. Wikipedia says this: "Quantum computing attacks Elliptic curve cryptography is vulnerable to a modified Shor's algorithm for solving the discrete logarithm problem on elliptic curves." But it doesn't provide an online source. "However, it is not the end-all of cryptography -- the newest replacement in SSH and GPG security, for examples, include elliptic curves and another algorithm. These newer algorithms are not known to be attacked by quantum computation." Do you have a source for this? This very much interests me and I'd like to read more about it. It's a bit unintuitive as to why a quantum algorithm wouldn't work. Wikipedia says this: "Quantum computing attacks Elliptic curve cryptography is vulnerable to a modified Shor's algorithm for solving the discrete logarithm problem on elliptic curves." But it doesn't provide an online source. Just found this paper while doing a quick web search with the "elliptic curve quantum" keywords, would it help ? Sounds like a more detailed explanation of what Wikipedia mentioned... http://arxiv.org/abs/quant-ph/0301141 Edited 2013-03-24 22:00 UTC Thanks, that's exactly what I was looking for. I've bookmarked it for now because it will take me some time to parse it Colonel: "Jesus H. Murphy, Lieutenant! What in the sam-hell made the firing system launch a missile at our own command outpost?!?!" Lieutenant: "Well, sir, according to the computer it was detected as both a friendly AND a hostile AT THE SAME TIME!" This is from the article in the New York Times, so I can hardly blame Thom for repeating it verbatim. But this is so freaking wrong it makes my head explode. A one can be a one, or it can be a one and a zero and everything in between — all at the same time. A one can be one or a zero or both. Not everything in between. It can't be a floating point 0.48294302. Quantum means a small discrete value. A one can be one or a zero or both. Not everything in between. It can't be a floating point 0.48294302. Quantum means a small discrete value. It's been artlessly stated, but there's more than a grain of accuracy in that line. A qubit can have a range of possible values; the basic values it can assume are zero or one. Physicists would say that the qubit (let's call it |q>) can be in either the state |0> or the state |1>. However, it can also be in a linear superposition of these states. Given any two complex numbers c and d, the qubit can be scaled to become: |q> = c|0> + d|1>. There are normalization requirements to make the probabilities sum to unity but this is just really basic linear algebra on a complex vector space. It's in this sense, the sense of a continuous range of possibilities for the superposition over basis vectors that I take the quote you mention to refer. In which case he's entirely accurate, if admittedly a little unclear. Edited 2013-03-22 15:55 UTC Ok, I understand what you mean, and that's probably what they started from, but I disagree that the statement as written is accurate. It is only a one AND zero until you test for it, at which case it becomes a one OR zero, and you only get a probabilistic value of it being a one or zero. That's how it works, right? Farking QM. Completely unintuitive. The sentence could very well be correct, actually. Qubits, the basic unit of "information" in quantum computing, are quaternary in nature. As opposed to "traditional" digital bits, which are binary. So a Qubit can be indeed, 0, or 1, or 0 and 1 simultaneously, or numerical coefficients representing the probability of each state. The sentence could very well be correct, actually. Qubits, the basic unit of "information" in quantum computing, are quaternary in nature. As opposed to "traditional" digital bits, which are binary. Qubits are not quaternions (or indeed "quaternary"). There exists an interpretation of quantum information theory using a quaternion formalism that eventually leads to something called density operator theory, but this is obscure even for the field. So a Qubit can be indeed, 0, or 1, or 0 and 1 simultaneously, or numerical coefficients representing the probability of each state. No. A qubit is a linear superposition of basis vectors in some two-dimensional complex vector space. Numerical factors representing probabilities occur only when one performs an operation on qubits (specifically the inner product on the vector space). I liked how you say "no" but your response implies "yes." ;-) PS. I did not say qubits were quaternions, but that they were "quaternary in nature." If we're going to do the whole anal thing. And yes, I should have perhaps said that "from a logic design standpoint Qubits could be viewed as being quaternary in nature." With that interpretation being mainly correlated with logic design expedience, since qubits could be interpreted to be ternary for example (as initial qubit interpretations tended to be...). But then "traditional" binary logic functions, which form the basis of the majority of our logic/computing designs, is not easily translated or represented using a ternary base. Etc, etc, etc. Even if this is a quantum computer it can't do any of the more interesting things a real quantum computer can do in linear time. And people much smarter than me don't want to call it a quantum computer even if the manufacturers description of it is correct... Quantum as in quantum computer and quantum physics doesn't mean a small range of values BTW. My understanding was that quantum is just an adjective - allowing us to refer to the smallest possible discrete unit of something. So the smallest possible unit of weight isn't 1 kilogram, because you can divide that into grams. Any number you define can be subdivided so you have to just bail on that scene, and just refer to quanta instead. It's like the old child's game where you state the largest number, and finally someone says infinity, and then someone says infinity+1 - no dude, infinity is defined as the largest number, and quanta as the smallest discrete unit - as the infinitesimally small. After this things get a bit murky, but apparently Einsteins theory of relativity and Quantum theory when taken together, can describe everything we know about matter and energy. And you have other oddities like, Einsteins theory of relativity, Quantum Theory, and Selena Gomez, taken together somehow predicts the popularity of the 1990's hit Melrose place. A seemingly unrelated event. Look, I don't want to be 'that guy' that guy that poo poo's everything he doesn't understand, and predicts everything to fail. because he doesn't get it. But I don't get it, and when I read all the people saying this is hocus pocus - what can I say, it seems like 'it kinda is' - they really need to make a bit more progress, then I'll take another looksy. It's like the old child's game where you state the largest number, and finally someone says infinity, and then someone says infinity+1 - no dude, infinity is defined as the largest number, and quanta as the smallest discrete unit - as the infinitesimally small. Well, Infinity + 1 is larger. Infinity isn't "The largest number" because there is no "largest." (That implies that it stops and there is nothing larger). Infinity + 1 is larger than Infinity in the same sense as "The set of real numbers" is larger than "the set of whole numbers." Both are infinite in size, but Set of Real Numbers contains elements that aren't in the Set of Whole Numbers. Well, Infinity + 1 is larger. Infinity isn't "The largest number" because there is no "largest." (That implies that it stops and there is nothing larger). Infinity + 1 is larger than Infinity in the same sense as "The set of real numbers" is larger than "the set of whole numbers." Both are infinite in size, but Set of Real Numbers contains elements that aren't in the Set of Whole Numbers. ===>I can't help myself, while you are correct in that my statement was wrong - I admit defeat on that. If I were to say to you, that in that I had this infinite set of whole numbers, plus I had the number 1. I would still only have the infinite set of whole numbers, the number 1 is already included. I still fail to understand, how in the context of a child's game where you are attempting to say the highest number possibile and someone says infinity, how the number infinity plus 1 is larger. Probably the correct answer is infinity is not a number, but a set of numbers. Therefore infinity plus 1 - if taken as infinity plus the number 1 - is just wrong, because infinity already included the number 1. But if it means infinity but some number outside this set, then that is a larger set. A larger set - but not a larger number. In reality, I'm going back and stating that in this game the word 'infinity' really did represent the largest number possible. It's a misuse of the word infinity - I get that now. However it is absolutely unreasonable to me, to suggest the number 1 in the context of stating the largest number, is outside the set of numbers. Edited 2013-03-22 20:13 UTC If I were to say to you, that in that I had this infinite set of whole numbers, plus I had the number 1. I would still only have the infinite set of whole numbers, the number 1 is already included. I still fail to understand, how in the context of a child's game where you are attempting to say the highest number possibile and someone says infinity, how the number infinity plus 1 is larger. Well, if you find it difficult to understand how this could be so in the context of the natural numbers, try thinking about it using a different set. Think of the largest possible set you can (the number of elements in your set is something that a mathematician would call the of that set). Each time you come up with a set containing a huge number of elements, I can counter you by constructing a set containing all of your elements, plus any other element that isn't already in your set . Thus I can construct a set containing an arbitrarily large number of elements; this is, in essence, one type of infinity. Probably the correct answer is infinity is not a number, but a set of numbers. No. Infinity is not a number; it's a "If I were to say to you, that in that I had this infinite set of whole numbers, plus I had the number 1. I would still only have the infinite set of whole numbers, the number 1 is already included. I still fail to understand, how in the context of a child's game where you are attempting to say the highest number possibile and someone says infinity, how the number infinity plus 1 is larger. Well, if you find it difficult to understand how this could be so in the context of the natural numbers, try thinking about it using a different set. Think of the largest possible set you can (the number of elements in your set is something that a mathematician would call the of that set). Each time you come up with a set containing a huge number of elements, I can counter you by constructing a set containing all of your elements, plus any other element that isn't already in your set . Thus I can construct a set containing an arbitrarily large number of elements; this is, in essence, one type of infinity. Probably the correct answer is infinity is not a number, but a set of numbers. No. Infinity is not a number; it's a . " I actually think you suffer from an understanding of language. I can define a set as all possible numbers. And then you cannot tell me there is an additional number outside the set. And infinity is a concept - I said that. You said that. You can pretend we are in disagreement, but we are not. I actually think you suffer from an understanding of language. I can define a set as all possible numbers. And then you cannot tell me there is an additional number outside the set. Perhaps (I'd actually dispute that by asking you precisely what type of numbers are in your set, but I digress). Regardless, let me be kind and assume that you've got a set of numbers that is uncountable; perhaps you're thinking of, say, the set of all real numbers. You're now quite pleased with yourself because you've got a set with an infinite number of elements. However, I come along and claim that I can define a set with even more elements in it than yours. I can even be kind to you and say that I'll restrict myself to working with a set containing real numbers. The question therefore is: do you believe that I can construct a set of real numbers that contains more elements than your - already infinitely large - set of real numbers? Because I can do so quite simply by taking my set to consist of all possible subsets of the real numbers. Both of our sets have infinitely many elements, but mine has more than yours. BeamishBoy is arguing from the standard construction of numbers and set in modern mathematics which, for somewhat subtle reasons, disallows defining the set of all possible numbers. Due to the simplifying assumption that everything is a set (otherwise you could run out of numbers for talking about the size of sets which is similar to the argument BeamishBoy makes in his sibling post), numbers are defined in such a way that the set of all possible numbers must contain itself and sets containing themselves is disallowed due to https://en.wikipedia.org/wiki/Russell%27s_paradox . This means that there is obvious way to define the largest infinity and leads to two separate kinds of numbers which act rather differently for infinities: https://en.wikipedia.org/wiki/Ordinal_numbers for counting things where n != n+1 even n is infinite and https://en.wikipedia.org/wiki/Cardinal_numbers for measuring the size of sets where n == n+1 if n is infinite. There are always risks in attempting to pass one's own ignorance on a subject as somehow being authoritative.. There are always risks in attempting to pass one's own ignorance on a subject as somehow being authoritative.. Oh please do explain what those risks are - or would that be risky? Seriously dude I was not representing myself as an expert - my words stating the opposite of that - did in fact represent what I mean, and weren't some clever ploy. My explanatory style is meant to lay bare to the world what I might or may not understand on the subject, both to clarify my own thoughts and to invite commentary, and my experience has been that talking advances the subject more than being silent. I appreciate the advice both the counterpoints on what quantum mechanics is about, and the little banter on the child's game of stating the largest number. Peace, friend! Edited 2013-03-22 20:56 UTC I didn't imply you were an expert. I was just letting you know that if you do not understand a concept, or field of study, and all you have to go with is some odd (and perhaps equally uninformed hearsay) assumptions... It is then perhaps kind of silly to put a set of requirements about what those who are actually working on the concept, or field, should or should not do before they can hope to achieve your magnificent seal of approval ;-) I didn't imply you were an expert. I was just letting you know that if you do not understand a concept, or field of study, and all you have to go with is some odd (and perhaps equally uninformed hearsay) assumptions... It is then perhaps kind of silly to put a set of requirements about what those who are actually working on the concept, or field, should or should not do before they can hope to achieve your magnificent seal of approval ;-) Oh in that case - just get bent. thanks for proving my point about ignorance... Let me try to help you a bit understand what QM is and how it works. One of the core tasks of physics is to describe what makes things move and how, whether the things in questions are tiny electrons or huge nebulas. Newton's laws of motion are a simple description of this which works pretty well at our scale, whereas Einstein's special and general theories of relativity are more accurate at high speed and large scales, and quantum mechanics (or QM) is the best at small scales. QM and relativity can both be seen as extensions of Newton's laws to extreme scales, since if we try to apply them to "regular" objects, we'll get similar results as with Newton's laws. However, merging them into one single unified theory has proved to be extremely difficult for theoretical physicists. At this point, finding a satisfactory quantum description of gravitation, as described by the theory of general relativity, remains an open problem. Now, how does the quantum description of the world differs from the one offered by classical mechanics? In a nutshell, it allows for some physical situations which are perfectly impossible in a Newtonian world, and simultaneously forbids some things which Newton's laws are perfectly fine with. Here are a few examples: -Quantum objects' properties are not assumed to be perfectly known. What QM describes instead is the probabilities of finding something in a given state: at a given position, with a given speed... -All these possible states of a system are not just statistical odds. Unless a measurement procedure leads the system to collapse in a single state, multiple "possible" states of a system exist simultaneously, and may thus interact with each other. -QM's laws of motion are based on this latter effect, describing the probabilities of finding a system in a given state as changing in space and time in a manner similar to that a sound or like wave. -All properties of a system are not independent, and it's impossible to simultaneously know all of them at once. This stem from the mathematics used to describe quantum states, which happen to be matching some real-world behaviors. The reason why we're dealing with a theory that violates common sense in such a brutal way is that we have failed to find a saner description of the world which matches experiment so far. As an example, if the microscopic world followed Newton's laws of motion or their relativistic equivalents, electrons would crash into atom nuclei, many magnetic materials would have totally different properties, and we would still have to argue upon whether light is a wave or a stream of particles instead of having a maths that unify both descriptions. Edited 2013-03-22 21:07 UTC @Neolander, thanks for the explanation, although if I could request of you a more specific explanation of quantum computing, that would be great. From my understanding regular computing would have to be based on the same laws of mechanics and properties of our world as everything else, its not as if we can step outside of it, simply because we are ignorant of it. So, there must be something special about quantum computing that separates it from merely a theoretical description of how things work - to a practical engineering difference. I'm not going to pretend that I didn't already go to wikipedia and try to make some sense of entanglement and superposition. The problem is - as much as they might try to be simple and easy to understand, they aren't quite using the right words for me. Regular computers use transistors and attempt to represent information in binary, 1' and 0's. My understand is quantum computers attempt to use quantum properties to represent and manipulate data. Ok, but then that's not enough to get it for me, and frankly I was joking earlier, but since you all are trying to help - what's the missing piece here? It's not enough to just state the states can be 1, 0, or inbetween, what are the states, how do you manipulate them? I can program in assembler just a little bit - so at least in my head, I get how regular computers work - shouldn't I be able to understand quantum computers. Lets say I want to program a quantum computer - what are my steps to do so? @Neolander, thanks for the explanation, although if I could request of you a more specific explanation of quantum computing, that would be great. From my understanding regular computing would have to be based on the same laws of mechanics and properties of our world as everything else, its not as if we can step outside of it, simply because we are ignorant of it. So, there must be something special about quantum computing that separates it from merely a theoretical description of how things work - to a practical engineering difference. As I mentioned above, it takes rather small systems for quantum phenomena to become significant. More precisely, what you actually need is something that has as little physical interactions as possible with its surroundings. That is because when quantum systems are coupled to a macroscopic environment, it causes many quantum effects to vanish over time, in a process known as decoherence. As an example, if you put a system in a "half-zero, half-one" state, it will decay either to the zero or one state. The stronger the coupling, the faster the decay. One way to describe it from the point of view of physics is as information or energy being diffused into the environment and never coming back. Thus, people who try to build quantum computers have the difficult task of building quantum chips whose state may easily be read on demand, but which are still too weakly linked to the outside world for the aforementioned decoherence effects to become problematic. I'm not going to pretend that I didn't already go to wikipedia and try to make some sense of entanglement and superposition. The problem is - as much as they might try to be simple and easy to understand, they aren't quite using the right words for me. Regular computers use transistors and attempt to represent information in binary, 1' and 0's. My understand is quantum computers attempt to use quantum properties to represent and manipulate data. Ok, but then that's not enough to get it for me, and frankly I was joking earlier, but since you all are trying to help - what's the missing piece here? It's not enough to just state the states can be 1, 0, or inbetween, what are the states, how do you manipulate them? A state, in a quantum mechanical sense, is a set of physical parameters that fully describe a system. As an example, electrons in solids can be described by their momentum, energy, and magnetic "spin", while a photon of light can be described by its momentum and polarization. Often, these parameters are only allowed to take a discrete range of value, which as you can probably guess is particularly useful for information processing purpose : just make somehow sure that only two parameter values are accessible, and you got yourself your "zero" and "one" physical states. And since these are quantum states which we are talking about, nothing prevents a system from being in both states at the same time, with a certain probability of finding it in each during a measurement. Effectively, such two-level systems have been built in a wide range of physical systems, ranging from atoms in optically reflective cavities to bunch of ions trapped using electrostatic fields, with superconducting micro-circuits somewhere inbetween. The two states in question are often chosen of different energy, so that the system cannot trivially switch between both without exchanging energy with its environment. But at this point, I have to point out that such a quantum memory or "qubit" is useless by itself. Sure, your qubit may be simultaneously in its "zero" and "one" states, but since you need to perform lots of measurements to know for sure, nothing has really been gained. Besides, as mentioned above, quantum information storage tends to be short-lived due to decoherence effects, with data remaining reliably stored for at most a few miliseconds in the best experiments if memory serves me well. So the next difficulty in building a quantum computer is to make quantum memories interact with each other, so as to perform computations. And just like with classical computing, the simplest thing which one can think of building this way is logic gates. Where things become interesting here is that a quantum logic gate is able to act on all possible states of its input at the same time, and return a statistical superposition of all the results at the same time. So if we take, as an example, an unsorted database search algorithm of the form "examine every database entry, return a pointer to the entry that possesses a given key and NULL otherwise", then the only nonzero result is the pointer to our database entry, and we may naively imagine a quantum circuit which finds the right database entry in constant time. Things sadly aren't quite as simple in practice because all logic gates which destroy information (such as, say, AND and OR) are forbidden in the quantum world. One thus has to deal with other logic gates like the CNOT and Hadamard gates, which arguably reorder information more than they do computation, and thus make programmers' lives more difficult. Due to this, an actual quantum database search algorithm, as derived by Grover et al., ends up with an algorithmic complexity of O(sqrt(N)), which is still a quadratic speedup over classical ones. Other kinds of search-oriented tasks can benefit from a quantum device's ability to act on multiple inputs at the same time. Perhaps the most well-known one is Shor's algorithm, which factorizes numbers in O((log N)^3) time and is the worst nightmare of everyone who ever implemented RSA-based cryptographic systems. That being said, not all of cryptography is based on the difficulty of factorizing prime numbers, and so far some algorithms have proven to be pretty robust against the minds of quantum computing theoreticians. Did that answer some of your questions ? I can program in assembler just a little bit - so at least in my head, I get how regular computers work - shouldn't I be able to understand quantum computers. Lets say I want to program a quantum computer - what are my steps to do so? At this point, I don't think anyone has actually built a truly programmable quantum computer, so it's hard to answer this question. For now, most works I've heard of are based on special-purpose circuitry which aims at implementing a specific algorithm, so programming a quantum computer would be kind of like "writing" code using a bunch of logic gates wired as you wish and synced against a common clock signal. I think that implementing a programmable quantum ALU is a dream of many quantum computing experimentalists, but is still out of reach for now. Theoreticians tend to be ahead in this domain, so I'm sure multiple designs have already been published in scientific literature. Lots of "I think" in this last part, I know, but I haven't followed this realm closely enough to be perfectly sure about the state of current research. My current field of research (superconductivity) sure involves QM, but it's not quantum computing itself Edited 2013-03-24 21:49 UTC You know what - I just was thinking, why am I placing this on a pedestal. If it were anything else, I'd download the emulator and start tinkering around - so what do you all recommend? I found one called jQuantum. Is that a good one? Wait guys, it isn't even 1st of April yet. New scientist published the joke way too early. For those of you really interested in learning this subject (or teaching it) I downloaded the Quantum Fog emulator for Mac. It's possible there is a better emulator out there - but until someone with more knowledge weighs in on the subject, I'm just on my own...but I thought I'd share, this quantum fog program, while it is clearly an older program - it's nice. I don't know how well it would track with what else is available, however the guy that wrote it clearly went to considerable lengths to document how the simulator is supposed to work. It has tons of information, and some example programs. Anyway - I've put out the request for additional info from anyone who wants to share. Folks its entirely reasonable for anyone to wait before jumping into QC programming. You are not going to be writing your qc program, and people running that on their quantum computing based cellphone - with its near absolute zero cooling, any time soon. however, as I'm pecking away at writing my first quantum computing program for the simulator I have to admit, I've discovered something - this is freakin' awesome. I have zero clue why some of you - well that one dude, seems to dislike me. But look, I'm not a great programmer - I have no illusions about it. Most of you will be better quantum computing programmers than I will ever dream to be. But, I am writing my first quantum computing program as we speak. So - I will be the first in my family to do quantum computing and that's something right. hahaha, lol.I'm joking I'd be the first in my family to do that, even if I waited 30 years to do it. OK, anyway - it is freakin awesome. I had no idea how freakin awesome this was just this morning. Now, yeah - it's cool. Now I admit, I'm having to dust off some old textbooks, because my math is a little rusty, but I'd say dont be put off by it. It's just a bit of shall we say - overhead. Now anyway, I am out...because of the danger that I will like to argue, and to what extent I like to argue or contributed to the argument, I will now remove myself from the entire discussion. I'm out, so until the next time - good day. Two 10 year old boys are playing 'name the largest number' In the game of name the largest number, the child that names the largest number wins. The largest 'set' has absolutely no relevance in this game and never has. Child 1: says 100 Child 2: says 1,000 Child 3: says 1 million Child 1: says 1 billion Child 2: says infinity Child 3: says infinity+1 In this game, child 1 has won. They said 1 billion, the largest number stated in the game. infinity is accepted as an answer in the real world, but is not in actuality a number, but a concept. As stated earlier infinity+1 has no importance, because the game was never about having the largest set. However, infinity+1 is not, necessarily a larger set. An example of a set: The unique numbers in (1,2,3,4,5) Now lets add the "number" 1 to this set: The unique numbers in: (1,2,3,4,5,1) The set size has not changed. Lets do a more creative set: I define this set as: all numbers in the world, real or imagined, that can exist, or cannot exist. I specifically define as already being included in this set, any numbers you attempt to add to it later, and it contains all the numbers in any set you compare to it. That's my definition, and just add salt to the wound, in this intellectual exercise, this set is labeled "infinity" doesn't matter if you like it or not. Now what??? Bring it on bitch. Look, I don't get to decide the parameters - the children's game of name the highest number - infinity is accepted as an answer - it's accepted as being 'whatever the highest number is in theory' - So infinity +1 has no additional meaning. The last paragraph is controversial, you may argue against it. Please do. One thing we can all agree on, the very fact that I wrote this nonsense at 3:34am in the morning, is a sign of mental - well lets be polite, of being mental. So, yeah, I lose. Later guys! “So infinity +1 has no additional meaning.” If the infinities in both cases are equal, then yes it does. Different size infinites are one of the fundamental principals of calculus. It's also one of the most difficult to understand. Don't think in terms of sets where each number can only be used once. That's a purely artificial restriction that has absolutely no meaning whatsoever in this context. If you have a set of all reals, for example, you have a set with infinite members, none of which has a value of infinity. We are actually talking about infinite series. Not sets. Lets say we have the following summation, S1 = 1+2+3+4 + · · · We can all agree that the above goes to infinity. Now lets compare it with the following series, S2 = 2+4+6+8 + · · · so that each number in the series is 2*n. Both are infinite. However, S2/S1 is exactly 2. Because each term in S2 is divisible by 2, it is absolutely correct to write, S2 = 2*(1+2+3+4 + · · ·) In S2/S1 the series cancel out, their value (infinite or otherwise) is completely irrelevant. S2, despite being infinite, is *exactly* twice S1, which is also infinite. Whether a number has already been used is irrelevant. It's a sum. I can add the same value any number of times. 1+1+2+3+4 + · · · > 1+2+3+4 + · · · I think that's tripping some people up is treating "infinity" as though it were a discrete number that can be compared. In discrete calculus, we were always careful to say a sequence could "approach" infinity faster than another sequence, which is both valid and fairly easy to understand. The moment you treat "infinity" like a discrete number and manipulate it with discrete operators like comparison, you break the concept of infinity. Nothing is bigger than infinity. Infinity plus one isn't a discrete number, neither is infinity minus one. Sequences do not "equal" infinity because the transient property of equality would imply that all sequences approaching infinity are equal, which they're not. One might be tempted to say infinity minus infinity is zero, but that's not semantically valid because infinity isn't a discrete number. Both sequences are infinite, but neither are equal, nor do they "equal infinity". S2-S1 doesn't equal zero, it equals S1. The moment you treat "infinity" like a discrete number and manipulate it with discrete operators like comparison, you break the concept of infinity. This is completely incorrect. Infinity is a well-defined concept. It's so well defined that one can talk about (i) different kinds of infinity, and (ii) how one version of infinity is "larger" than another. I addressed this in another post by pointing out that the number of elements contained in the set of real numbers is one kind of infinity, and how this is smaller than the number of elements contained in the set of all subsets of real numbers. I know this stuff is confusing but it's been understood by mathematicians for over a century. "Infinity is a well-defined concept. It's so well defined that one can talk about (i) different kinds of infinity, and (ii) how one version of infinity is 'larger' than another." I didn't say it wasn't a well defined concept. It's just that the concept is being abused when we treat infinity as a discrete number as though it could be compared. "I know this stuff is confusing but it's been understood by mathematicians for over a century." It's really not that confusing, if you attempt to solve a discrete number for "infinity", anyone else can conceive of a number a number which is factually higher, therefor the concept of infinity rules out the possibility of any discrete number equaling infinity. What we can do is compare sums of series which approach infinity at different rates, like you did earlier. Discrete calculus (which as you noted is well understood) allows us to solve the rate for any given iteration of the sequence and confirm via inductive proofs that it continues infinitely. There are an infinite number of unique sequences who's sums add up towards infinity given an infinite number of iterations. Because of the transitive properties of mathematical equality, one cannot claim any of these infinite sequence sums "equal infinity", for the simple reason that they are not equal to each other. We might be careless and say a given sum "equals infinity" and still understand one another, but we don't actually mean it in the true mathematical sense. This is what my earlier example was trying to illustrate. If infinity could be treated as a discrete number, then mathematically this would be sound: B-A = 0 But in fact the earlier example showed that B-A=A. To be sure, I am arguing semantics in the conversation as a whole and not just you. I don't want us to talk over each other, and I don't think we have all that much to disagree on, despite your bolded statement that "This is completely incorrect." Edited 2013-03-24 20:57 UTC Well, first of all, there are two different concepts of infinity that had been discussed in some of this threads. One is the idea of infinity as a number that is larger than all the real numbers. The other is the idea of a set having an infinite number of elements. They are not the same concept. Let me give a brief description of both to see the difference: 1.- For certain applications, it is convenient to extend the real numbers to allow the values infinity and -infinity. (For example, this is a very common thing to do in measure theory and integration.) In this case infinity is (in a sense) a quantity that is bigger than all others quantities. The drawback of this approach is that infinity doesn't behave well under the usual arithmetic operations. For example infinity+1=infinity, while infinity-infinity is always undefined. In this setting you can't have anything bigger than infinity. Trying to manipulate divergent sequences as if they were the quantity infinity and doing algebraic manipulations with them is incorrect, and can quickly get you to absurd statements. For example, Riemann showed that if you have a conditionally convergent sequence, then you can rearrange its terms to make it have any limit you want. 2.- In set theory there is the notion of the "cardinality" of a set, which is a measure of how big it is. Two sets are said to be of the same cardinality if you can give a bijection between them. Of course, given any set you can always add one element to it to make it bigger. (This is based on the fact that there is no such a thing as "the set of all sets".) But having a set with more elements doesn't automatically means that it has a higher cardinality. For example the set of positive integers has the same cardinality as the set of even positive integers. (You just enumerate your positive even integers to get the bijection.) However, Cantor showed that the set of real numbers does have a bigger cardinality than the set of integers. This is very shocking. It means that it doesn't matter how you try to enumerate the real numbers, at the end there is always an infinite number of real numbers that you left out. Given any set, the set of all its subsets always has a higher cardinality, so there is no biggest cardinal number. (Again this is related to the fact that there is no "set of all sets".) Here are some notes of my thesis advisor on quantum computing. They focus more on the algorithms than in the quantum mechanics, so if you are willing to take some things on faith you can find some cool material. They even have a description of the prime factorization algorithm that would make classical cryptography obsolete with a quantum computer.
{"url":"http://www.osnews.com/comments/26883","timestamp":"2014-04-17T10:17:27Z","content_type":null,"content_length":"134807","record_id":"<urn:uuid:f344d1e3-03d4-4a11-8800-7576d6e68f1c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] Sparse Random Variables Tom Johnson tjhnson@gmail.... Wed Oct 17 14:31:31 CDT 2007 I have some stats questions. 1) Please excuse my ignorance here...but how does one use rv_discrete without initializing with the 'values' keyword? For example, if I use values=((1,2),(.3,.7)), then xk and pk will both be defined....and it seems strange that pmf() should even bother using the cdf to compute the probability (and slower?). 2) Suppose I want to store a log distribution, is this easily achievable? 3) I didn't do extensive tests, but it seemed like _entropy() was usually faster than entropy() even when the distribution was 1e6 possible values. Is there a reason that default calls to entropy use the vectorized function? It seems like most usage cases will be random variables with much less than 1e6 values...but perhaps not. 4) Also, for some reason entropy() doesn't always work on the first try... >>> from scipy import * >>> x = 1e3 >>> v = rand(x) >>> v = v/sum(x) >>> a = stats.rv_discrete(name='test', values=(range(x), v)) >>> a.entropy() >>> a.entropy() The first entropy raises an error. The second works. The problem seems to be with: /home/me/lib/python/scipy/stats/distributions.py in entropy(self, *args, **kwds) -> 3794 place(output,cond0,self.vecentropy(*goodargs)) /home/me/lib/python/numpy/lib/function_base.py in __call__(self, *args) 941 if self.nout == 1: --> 942 _res = 943 else: 944 _res = tuple([array(x,copy=False).astype(c) \ <type 'exceptions.TypeError'>: function not supported for these types, and can't coerce safely to supported types 5) I really need to have random variables where the xk are tuples of the same type (integers xor floats xor strings ...) p( (0,0) ) = .25 p( (0,1) ) = .25 p( (1,0) ) = .25 p( (1,1) ) = .25 a = stats.rv_discrete(name='test', values=(((0,0),(0,1),(1,0),(1,1)), [.25]*4)) /home/me/lib/python/numpy/core/fromnumeric.py in take(a, indices, axis, out, mode) 79 except AttributeError: 80 return _wrapit(a, 'take', indices, axis, out, mode) ---> 81 return take(indices, axis, out, mode) <type 'exceptions.IndexError'>: index out of range for array My initial thought would be that the xk could be anything that is hashable. For dictionary-based discrete distributions, I do use tuples...but I would like to start using scipy.stats. I am fishing for too much or in the wrong lake? More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-October/014132.html","timestamp":"2014-04-18T13:38:54Z","content_type":null,"content_length":"4996","record_id":"<urn:uuid:a37366a3-6b23-4036-8506-72ebbafc1bc7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Graphing Transformations :( March 24th 2012, 02:28 PM #1 Mar 2012 Santa Cruz, CA Graphing Transformations :( I'm going insane. I can't figure out how the book is getting one answer and me a completely different. So here's the problem: Graph using transformations: f(x)=1/4x^2-2 My process: y = x^2 0 |0 1 |1 2 |2 so next would be this right? y=1/4(x^2) so y * 1/4, right? according to the book, WRONG! The book says that the next step would be a vertical compression of 2. What? huh? Where the hell is there a *2 in this? I'm sure it's some algebra that I'm just over looking, but any help at all would be appreciated! Re: Graphing Transformations :( You are right that the next step is vertical compression by 4 (i.e., multiplying y by 1/4). Maybe it's horizontal expansion by 2? If $f(x) = x^2$, then $x^2/4=f(x/2)$, i.e., the graph of $x^2/4$ is obtained from the graph of f(x) by changing each (x, y) into (2x, y). In other words, compressing the parabola $y=x^2$ vertically by 4 and expanding it horizontally by 2 give the same result. Re: Graphing Transformations :( Re: Graphing Transformations :( Yes, it must be a mistake in the textbook because of the difference between #19 and #21. I personally think that compressing vertically by a factor of $\alpha$ means changing every point $(x, y)$ into $(x, y/\alpha)$. In contrast, stretching vertically by a factor of $\alpha$ means changing every point $(x, y)$ into $(x, \alpha y)$. Thus, compressing by a factor of $\alpha$ is stretching by a factor of $1/\alpha$, so it is sufficient to always use the term "stretching." With this convention, both #19 and #21 involve stretching of y = x^2 by a factor of 1/4, or compressing it by a factor of 4. One does not have to follow this convention; for example, one may always use both "stretching by $\alpha$" and "compressing by $\alpha$" for the transformation $(x,y)\mapsto (x,\alpha y)$, but use the word "stretching" for factors > 1 and "compressing" for factors < 1. However, above all, terminology has to be consistent, which it is not in the textbook. Edit: LaTeX. Edit2: Clarified the alternative convention. Last edited by emakarov; March 24th 2012 at 06:54 PM. March 24th 2012, 05:17 PM #2 MHF Contributor Oct 2009 March 24th 2012, 06:18 PM #3 Mar 2012 Santa Cruz, CA March 24th 2012, 06:46 PM #4 MHF Contributor Oct 2009
{"url":"http://mathhelpforum.com/pre-calculus/196353-graphing-transformations.html","timestamp":"2014-04-17T02:36:31Z","content_type":null,"content_length":"42954","record_id":"<urn:uuid:1e5e5255-4c2f-4c09-b5a4-3ec673f210e0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
More Magic Formula Analysis As a professor by day, I spend a lot of time doing research and studying financial data. Recently, my research assistant and I have been digging deeper into the magic formula. We simply can’t figure out why the results posted in the Little Book are so extraordinary compared to what we have found. Granted, there are certainly likely to be differences at the margin, but any robust strategy should be fairly impervious to slight changes in technique, time period, and methods. Here are the returns presented in the Little Book that Beats the Market: 30.8% is breathtaking and will make you a multi-billionaire very quickly. Nonetheless, we can’t replicate the results under a variety of methods. We’ve hacked and slashed the data, dealt with survivor bias, point-in-time bias, erroneous data, and all the other standard techniques used in academic empirical asset pricing analysis–still no dice. In the preliminary results presented below, we analyze a stock universe consisting of large-caps (defined as being larger than 80 percentile on the NYSE in a given year). We test a portfolio that is annually rebalanced on June 30th, equal-weight invested across 30 stocks on July 1st, and held until June 30th of the following year. We show major differences between our results and the magic formula results (30.75% CAGR vs. 13.80% CAGR). As a robustness check, we also analyse the performance of the Profit & value strategy, which is the “academic equivalent” to the Magic Formula strategy. Both the Magic Formula and Profit & Value strategy outperform the market, but none of them comes close to DOUBLING market returns. Perhaps this is a size effect? As an additional check, we looked at the results when the universe consists of small, mid, and large caps (defined as being larger than 20 percentile on the NYSE in a given year): Same story here: definitely some serious outperformance on behalf of the special formulas, but nowhere near the 31% CAGR outlined in the book. So what gives? There are a list of possible conclusions that we can draw from this analysis: 1. We screwed something up in our analysis. 2. Greenblatt & Co. screwed something up in their analysis. 3. The strategy is highly unstable (i.e., small backtesting procedure changes have large effects). I am fairly confident that we did a careful job in our analysis, and I am confident that Greenblatt & Co. did a solid analytical job. My guess is the magic formula backtest performance is simply highly unstable, and small changes in assumptions/analysis can have dramatic effects on the performance. Interestingly enough, I visited the Formulainvesting.com website and clicked on the live results of the Magic Formula: http://www.formulainvesting.com/actualperformance_MFT.htm I then compared the 2009 (partial) and 2010 results (full year) against our backtested 2009 (partial year) and 2010 results (using a 80 percentile NYSE cutoff for size): From May 1 -Dec 31 2009 our backtest results of the magic formula lagged the live performance of the Magic Formula. In 2010, our backtest shows a 13.74% return, whereas the live Magic Formula earned 12.64% after fees. But here are the results when we extend the universe market cap down to the 20 percentile market cap cutoff: 2009 absolutely kills it, as does 2010–it is obvious that ‘magic small-caps’ are driving the backtested performance here. As one can see, the live magic formula dramatically underperforms the backtested performance (likely because they had limited small/mid cap exposure). Although anecdotal in nature, we can see from a very limited out of sample test (2009 and 2010) that the backtest returns to a Magic Formula strategy is VERY unstable and results should be analyzed with a skeptical eye. I’m a huge believer that there are slack in asset prices and the market is never perfectly efficient; however, I also believe that markets are highly competitive and prices tend to stay reasonably close to efficient. Applying this thought to the Magic Formula results, I would conclude that the strategy probably works at the margin, but expecting massive outperformance, after controlling for risk, is foolhardy. Related posts: 1. How “Magic” is the Magic Formula? The underlying concept of the magic formula is a genius... 2. Technical Analysis may actually work! Note: This is a guest post by Tom Cleveland, market... 16 Responses to More Magic Formula Analysis 1. Thanks for the post. Yes size does matter. Actually I think it’s liquidity that matters. I have a fundamental ranking system that uses similar approach with Magic Formula. Roughly speaking, Magic Formula consists of two components: valuation and return on capital. My ranking system has one more component: financial condition. According to my own research (http://trustamind.blogspot.com/2011/04/ liquidity-and-performance-of.html), there is an inverse relation between liquidity and performance. Annualized return drops if I require more liquidity. Because large market cap generally fetch better liquidity, what I observed in my research generally echoes what you have observed here. I think 30% annualized return is doable if we loosen the requirement on liquidity. However, investors may or may not be able to get rich with that because they couldn’t invest too much money on a thinly traded stock. At least it is the case with my ranking system as I do weekly rebalance. But it is still arguable that if the holding period is 6 months to 1 year, investors can spend weeks or even months to accumulate a position, so liquidity may not matter. 2. Great post. I respect your non-combative, analytical stance. I’m constantly coaching my clients to not invest blindly just because someone publishes something that appears statistically solid on the surface. The devil is in the details and small differences can compound into large differences. There are so many ways to slice the data with assumptions that can make or break validity in real time. Thanks for sharing these insights. 3. Great analysis! I am a little perplexed by the wide range of results and the inconsistencies with little black book but I am not economist and thus have only a very limited understanding of how you / greenbalt&co came to these conclusions! Let’s hope the magical formula is a little more stable than you believe! Thanks for the educational post! 4. It is so obvious why liquidity matters. I’m a little embarrassed not being able to articulate it before. Liquidity generally reflects popularity of a stock. Popularity means that a lot of investors pay attention to the stock. There is an saying in the world of computer geeks that “Given enough eyeballs, all bugs are shallow”. Similarly, all pricing errors are shallow given a lot investors are watching it. The less pricing error, the less profit left for value investors. Thus the inverse relation between liquidity and performance. 5. I thought Greenblatt ruled out a size effect. So liquidity shouldn’t matter. If it does, then isn’t it possible that a lot of outperformance would be consumed by the spread? I must say, I am rather disillusioned with the results you presented, as it is beginning to look like the magic formula isn’t so magic after all. Anecdotally, it seems that others have trouble producing the stellar returns that Greenblatt cites. Maybe he should open up his data to more scrutiny. Based on your results, it doesn’t seem that the Magic Formula can do any better than simple value strategies like low PBV, PE, etc.. 6. very nice idea so thanks 7. I’ve independently come to the same rough results and conclusions. I’ve backtested the Magic Formula using two different tools. You can also look at the AAII website and see that their version of the model doesn’t approach the lofty 31% returns. As one of the posters earlier wrote, roughly Greenblatt’s model is composed of a value screen and a return to capital. Low Value and High Momentum are widely documented as generating excess returns. Return on Capital not so much. I’ve isolated and backtested various measurements of return on capital without a great deal of success. 8. The reason you are coming up with numbers that are dramatically lower than the Magic Formula backtest in the book is because you are looking at large-caps and larger mid-caps instead of small-caps and mid-caps as they did in the book. The book never said that someone should screen by whether or not a company is in the NYSE. The average market cap of companies in the NYSE is 8.8 billion and the lowest market cap of any company in the NYSE is 3.2 billion. The back-test in the book that came to a 30.8% return looked at US traded companies with a market-cap of $50 million or greater. There is clearly a huge difference between $50 million market cap companies and $8.8 billion market cap companies. Even if you look at companies in the bottom 20% of size on the NYSE you are looking at companies that are nearly 100X larger in market-cap than the back-test in the Little Book That Beats The Market. If you run the back-test with the same criteria as the book you will come to very similar results as shown in the book. 9. I echo David’s hunch regarding the likely source of the discrepancy. The book does break out the results of both the 50M+ group, with about 30% returns, as well as the results of the top 1000 companies by market cap, with a more modest 23% average return. In other words, Greenblatt does acknowledge that there is a large size effect at work, which is consistent with what you are showing. It’s interesting to note that the Formula Investing mutual funds restrict themselves to the top 1400 US-listed companies by market cap, I guess to keep expenses reasonable. There are some non-affiliated micro-cap oriented magic formula funds (e.g. Catalyst Value), with concomitantly higher expenses than the official funds. □ We are working on some results specific to $50mm cutoffs. We will post analysis in the next week or so. 10. Maybe , you need to read about the backtest studies, done on the Euro markets (by MFIE Capital) from 13/06/1999 until 13/06/2010 11. As an avid follower of the Magic Formula, I was very interested in this article. I was also surprised to see the results did not live up to the backtest that was in the Little Book. So I dug it out and reread it from cover to cover. I believe there are 3 factors that could explain why the test results are different from what Greenblatt did in the little book. First, as already pointed out by others, the size effect. Greenblatt uses $50 million as one cut-off and $1 billion as the second cut-off for larger cap. Second, the cutoff for data is Dec 31st, but the investment date is not until July 1st. In Greenblatt’s tests, he used the most recently available quarter. I follow the Magic Formula monthly and companies fall off the list regularly, so a six month lag seems much too long. Third, and most importantly, in the book he states that to duplicate the results shown in the book, you have to buy 5-7 companies every 2-3 months. This prevents any seasonal effects. If anyone remembers the Foolish Four (check Motley Fool’s website for details), it was a market crushing method, but it only worked if you bought the stocks around year end. If you did it any other time of the year, it failed miserably. As a point of reference, if you look at the AAII website from 1998 to 2004, which are the only overlapping years, the Magic Formula version they track returned about 18% compounded vs. 5% for the S&P (13% annualized since 1998. The AAII version re-balances monthly and their methodology is slightly different than the Little Book, but it is directionally correct. I would really like to see a back tested comparison that takes these factors into account. 12. I am amateur, but I can’t make sense of the data in the two tables presented. (The first is for large cap and the second is for “~all” cap company sizes.) To me it looks like the Magic formula data stays exactly the same (to four significant figures) in the two tables while other strategies, like Profit and Value, change significantly. Is this a typo or am I missing something? (Apologies in advance if I am “missing the obvious”.) This entry was posted in backtests, Trading Strategy Paper. Bookmark the permalink.
{"url":"http://blog.empiricalfinancellc.com/2011/06/909/","timestamp":"2014-04-17T18:23:15Z","content_type":null,"content_length":"45808","record_id":"<urn:uuid:257c7697-04db-41ac-9755-d2214a99d8e1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Sometimes precession is unwanted so two counter rotating gyros on the same axis are used. Also a gimbal can be used. THE GIMBALED GYROSCOPE The property of Precession represents a natural movement for rotating bodies, where the rotating body doesn’t have a confined axis in any plane. A more interesting example of gyroscopic effect is when the axis is confined in one plane by a gimbal. Gyroscopes, when gimbaled, only resist a tilting change in their axis. The axis does move a certain amount with a given force. A quick explanation of how a gimbaled gyro functions Figure 4 shows a simplified gyro that is gimbaled in a plane perpendicular to the tilting force. As the rim rotates through the gimbaled plane all the energy transferred to the rim by the tilting force is mechanically stopped. The rim then rotates back into the tilting force plane where it will be accelerated once more. Each time the rim is accelerated the axis moves in an arc in the tilting force plane. There is no change in the RPM of the rim around the axis. The gyro is a device that causes a smooth transition of momentum from one plane to another plane, where the two planes intersect along the axis. A more detailed explanation of how a gimbaled gyro functions Here I attempt to show how much the axis will rotate around a gimbaled axis. That is to say, how fast it rotates in the direction of a tilting force. In figure 4, the precession plane in the gimbaled example functions differently than in the above example of figures 1-3, and I have renamed it "stop the tilting force plane". The point masses at the rim are the only mass of the gyro system that is considered. The mass and gyroscope effect of the axis is ignored. At first consider only ½ of the rim, the left half. The point masses inside the "stop the tilting force plane" share half their mass on either side of the plane, and add their combined, 1/4kg, mass to point mass A of 1/2kg. So then the total mass on the left side is ½ the total mass of all 4 point masses, or 1kg. The tilting force will change the position of point mass B and D very little and change the position of point mass A the most. So we must use the average distance from the axis of all the mass on the left-hand side. The mass on the left side is 1kg. The average distance the mass is from the "stop the tilting force" plane is 1/2 meter. Figure 5 shows a profile of the average mass in the tilting plane and the average distance from the axis that the mass is situated. We are concerned at how far the mass at the average distance will rotate within the tilting plane when a given force is applied to the axis in the direction indicated. Point mass A is rotating at 5 revolutions per second. This means that it is exposed to the tilting force for only .1 seconds. The tilting force of 1 Newton, if applied for .1 second, will cause the mass at the average distance to move .005 meter in an arc, in the tilting force plane. Since the length of the axis is twice as long as the average distance of the rim’s mass, the axis will move .01 meter in an arc. At the end of .1 second the point mass will be in the "stop the tilting force plane" and all the energy transferred to point mass A is lost in the physical restraint of the gimbal bearings. The same thing happens when point mass A is on the right side of figure 4. Only now, the tilting force will move point mass A down, and the axis will advance another .01meter. .01 meter every .1 second is not the whole story because the mass on the right side of the gyro hasn’t been considered. The right side has the same mass as the left and has the same effect on the axis as the left side does. So the axis will advance half as much, half of .01 meter, or .005meters. Both halves of the rim mass will pass through the stop the tilting force plane 10 times in one second. Each time a half of the rim passes though the "stop the tilting force plane", it losses all its momentum that was added by the tilting force. The mass has to undergo acceleration again so we continually calculate the effect that 1 Newton has for .1 second on the rim mass at the average distance, 10 times a second. So then; at the point that the 1 Newton force is applied, the axis will move 5cm per second along an arc. The gyro will rotate at .48 RPM within the tilting force plane. What considerations does the rim speed have on the distance that the axis will rotate along an arc in the tilting force plane? The gyro will rotate in the tilting force plane, half as fast if the rim speed is doubled. What happens when the mass of the rim is doubled? The gyro will rotate in the tilting force plane, half as fast if the rim mass is doubled How does the rim diameter effect rotation in the tilting force plane? The gyro will rotate in the tilting force plane, half as fast if the rim diameter is doubled The Math of a gimbaled gyro 1 Newton = 1kilogram 1 meter sec.^2 d=1/2 X (a X t^2 ) 1 Newton acting on 1kg will accelerate the mass at a rate of 1 meter sec^2 the time that ½ the mass of the rim is exposed to the tilting force at 5 revolutions a second is 10 times a second or 1/10; .1 sec The distance, d, the mass will go in .1 sec d = ½ X 1m / sec^2 X (.1sec)^2 ; = ½ X 1m/sec^2 X .01 sec^2 = .005 meter The axis is twice as long as the distance from the average distance that the rim mass is calculated from .005 X 2 = .01 meters Now consider the other side of the gyro as acted on by the same 1 Newton force. .01m / 2 = .005 The force will have ten times a second to accelerate the rim mass from a relative velocity of 0m /sec. 10 X .005m = .05m; or 5 centimeters Years ago there was a news story about a man that used a gyro to produce more energy than was needed to keep the gyro spinning. He used a surplus ship's directional gyro. I think what he did was use the property of precession to run a generator. If left undisturbed, a gyro on the surface of the Earth would turn 360 degrees once every 24 hours. The top of the gyro would normally go westward. But if the top axis were held so that it could not rotate from east to west, due to precession, the gyro will rotate in the north and south direction depending on the direction the rim is rotating. The gyro would turn due to precession until it reaches 90 degrees with it's axis pointing north and south. Then it would be in the same plane as the rotation of the Earth and gyroscopic precession would stop. To get the gyro out of the Earth's rotational plain a small force could be applied to the gyro axis and precession would put the axis back in the original position. The 90 degree precession rotation would be much faster than the once per 24 hours opposing forces rotation, but some gearing would probably still be needed to run a generator. The generator would be mechanically linked to the precession back and forth motion in one direction only so it will turn the same direction all the time. The amount of energy needed to keep the gyro's rim spinning and the energy needed to turn the gimbals back 90 degrees would determine the overall efficiency. This is NOT a free energy thing. The energy comes from the rotation of the Earth and therefore the Earth rotational speed is slowed as energy is tapped from a gyro-generator type machine. If this method of generating energy is used to a great extent, days and nights would become longer. If this should happen. let me be the first credited to use the term "rotation pollution" or "motion Other experiments with a gyro There might be a way to accelerate the rotational speed of the rim of a gyro by using a short duration tilting force on the axis. The force's duration would be for much less then the length of time that is required for the rim to rotate 90 degrees. When the rim has rotated 90 degrees from the time the tilting force was first applied, The tilting force would be purposely reversed. The direction that the rim is rotating and the direction the rim would have moved due to precession are now close to the same. The two motions might combine and result in an increase in the rotational speed of the rim. T To download an Acrobat version (41K): click here
{"url":"http://www.gyroscopes.org/how.asp","timestamp":"2014-04-20T18:22:27Z","content_type":null,"content_length":"20256","record_id":"<urn:uuid:608a0091-1caa-4304-a7ed-32516ee1d145>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] help interpreting univariate spline josef.pktd@gmai... josef.pktd@gmai... Fri Apr 30 15:40:33 CDT 2010 On Fri, Apr 30, 2010 at 4:14 PM, Elliot Hallmark <permafacture@gmail.com> wrote: >> (I'll point out that you lose accuracy - possibly a lot of it - >> by converting polynomials from one representation to another.) > error between evaluation of the interpolation and eval of polynomial > is consistently 10^-16. I don't know if this is huge in computer > world, but it is > quite reasonable to me. 10^-16 is about zero in my view. > >If all >> you want to do is solve the polynomials, though, scipy already >> provides root-finding functionality in its splines > But I will be using c code through cython to solve for roots, which is > why I want a transparent way to solve the interpolation. >> If you really want the polynomials, though, the tck representation >> scipy uses is semi-standard for representing splines; you may find the >> non-object-oriented interface (splrep, splev, etc.) somewhat less >> opaque in this respect. If you do decide to decipher the results, keep >> in mind that with the knots held fixed, it's a linear representation >> of the space of piecewise-cubic functions, so if you can find >> representations for your basis functions (e.g. 1, x, x**2, x**3) you >> can easily work out the conversion. And since the interpolating spline >> for each of those functions is itself, all you need to do is four >> interpolations on a fixed set of knots. > um, I think this is what I already have done? But the "semi standard" > spline representation in tck is completely undocumented as far as I > can tell. The only way to get the polynomial coefficents I can tell > is through evaluating the derivatives. > Do you know what is the meaning of the coefficents splrep generates? > How could such a small set of coefficents represent all the > information of a cubic function? just a guess: has a representation of the curve as linear combination of spline basis functions which requires only m-2 coefficients. The fortran and f2py code is not informative about this, maybe the original documentation by Dierckx shows the exact specification. But I still wouldn't see from the wikipedia representation, how to get the coefficients of the piecewise polynomial back. > _______________________________________________ > SciPy-User mailing list > SciPy-User@scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user More information about the SciPy-User mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-April/025188.html","timestamp":"2014-04-16T04:30:45Z","content_type":null,"content_length":"5789","record_id":"<urn:uuid:eca04a47-26f6-42cd-8180-d356eae3a0ab>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
percentages forumla - method to solve? September 16th 2010, 04:15 PM #1 Sep 2010 percentages forumla - method to solve? I've forgotten how to do math! It's been so long... this is embarassing. I have an amount of money, $7000 for example, that is the total amount spent for an advertising campaign. The campaign total cost includes the cost of the ads, and then the agency fee which is 20% of the cost of the ads. X = total cost of campaign B = cost of ads C = 20% of B B + ( .2 x B) = 7000 how would I use a forumla to calculate this? Would anyone be able to send me some links or point me in the right direction? I've forgotten how to do math! It's been so long... this is embarassing. I have an amount of money, $7000 for example, that is the total amount spent for an advertising campaign. The campaign total cost includes the cost of the ads, and then the agency fee which is 20% of the cost of the ads. X = total cost of campaign B = cost of ads C = 20% of B B + ( .2 x B) = 7000 how would I use a forumla to calculate this? Would anyone be able to send me some links or point me in the right direction? That's how much you have at most to spend on the adverts. $C=\frac{20}{100}\ \frac{10(7000)}{12}=\frac{2(7000)}{12}$ That's the agency's fee. Great! I really appreciate the help. I remember now, that the way to solve these is to substitute a number (1) for one of the unknowns. September 16th 2010, 04:26 PM #2 MHF Contributor Dec 2009 September 19th 2010, 11:56 PM #3 Sep 2010
{"url":"http://mathhelpforum.com/algebra/156451-percentages-forumla-method-solve.html","timestamp":"2014-04-17T07:03:11Z","content_type":null,"content_length":"37190","record_id":"<urn:uuid:f8b95736-51f7-42af-820d-d9eae3acf489>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by LUZ Total # Posts: 71 I'm a odd number, actually both my numbers are odd. If you substract my ones digit from my tens digit the diffrence is four. And I am less than sixty. What number am I? i need help with my essay ? is about the great debater essay Amanuensis Essay Please i need help, how to start a essay using Amanuensis about my teacher but dont know how to begin. Sam's sister lives 100 miles away. Sam drove 50 mph for 1/2 hr toward his sister's house. How much farther must Sam travel to reach her house? multiply and simplify. 3b^2-27/12b^2-300 x 48b+240/4b-12 divide and simplify -16+2w/3 121w-968/9 divide and simplify a-b/2a divide a^2-b^2/8a^3 help please!!!!! divide and simplify a/a-b b/a-b solve by factoring and using principle of zero product 2k^2=27k-81 factor completely if the polinomial is prime state this x^2 + 6xy - 135y^2 the height of a triangle is 5 cm more than the length of the base. if the area of a triangle is 133 cm^2 find the hight nad length of the base factor completely if the polynomial is prime. state this 6a^6-24a^3b determine whether the following is a difference of squares . 12^2 - y^2 determine whether the following is a perfect-square trinomial. x^2- 16xy + 64y^2 solve by factoring and using the principle of zero product.? 9k^2 = 27k -81 sammy bought a pair of shoes with 30% off sale price was 58.45 what was the regular price determine whether the following is a difference of squares? -4x^2 + 4y^2 factor completely. if the polynomial is prime, state this? -5b-66 + b^2 The perimeter of a basketball court is 114 meter and the lenght is 6 meter longer than twice the width. What are the lenght and Width. i need help please i am stuck How many grams of methanol could be vaporized by the addition of 22.45kJ of heat? how to start a paragraph using sports as a topic sentence. i need ideas please? Improving Vocabulary Skills sentence check 2 and final check check my answer 5^6-2y + 3 degrees are 6,1,0 coefficient 5,2,,3 determine the coefficient and the degree of each term in the polynomial 5y^6-2y+3 the national debt of a small country is 6,680,000,000 and the population is 2,774,000. what is the amount of debt for person x^2+2y^2-3xy for x=2 and y=3 x^2+2y^2-3xy for x=2 and y=3 math need to make sure what number is 20% of 75 the length of the rectangle is 43 cm. what widths will make the perimeter greater than 120 cm? Three resistors are connected to a 20 V battery as shown. The internal resistance of the battery is negligible. (a) What is the current through the 15 resistance? (b)What is the voltage difference across the 20 resistance? social studies never mind i got the answer !!!!! social studies Explain whether the following statement is a fact or an opinion: "Capitalism is a better system of government than communism." Give reasons for your answer. Identify the solute with the highest van't Hoff factor: Non-electrolyte, NaCl, MgCl2, FeCl3, MgSO4 improving vocabulary skills fourth edition i need chapter 13 Mr lopez is putting a fence around his vegetable garden the garden is shaped like a rectangle. the larger sides are 14 feet long.and the shorter sides are 9 1/2 feet long how much fencing should mr lopez buy? zinc displaces gold in seawater present int he form of gold (III) hydroxide. What mass of gold can be recovered if 20 moles of zinc are available? Western Culture western company besides Wal Mart that has impose western culture on the country in which the business is launched show me how to find a slope? A mechanic pushes a 2,500 kg car from rest to a speed v, with 5000J of work. During this process the car moves 25m. Neglecting friction, calculate: a) the final velocity of car b) The horizontal force exerted on the car A block moves up an incline of 45 inches at constant speed, because the action of a force of 15 N parallel to the incline. If the coefficient of kinetic friction is 0.3. Determine the weight of the Rewrite each sentence, placing commas where they are needed. Uderline each compound subject once compound predicate twice. 1. The bat and glove(C.S) are in the bench.(C.P.) 2. The ballplayer swimmer, and runner, come(C.S.) from the same town.(C.P.) 3. The girl can run fast jum... 1. The bat and glove(C.S) are in the bench.(C.P.) 2. The ballplayer swimmer, and runner, come(C.S.) from the same town.(C.P.) 3. The girl can run fast jump far,(C.S.) and, throw hard.(C.P.) Rewrite each sentence, placing commas where they are needed. Uderline each compound subject once compound predicate twice. 1.Your bat ball and glove,_ are on the bench.= 2. The ballplayer swimmer and runner, come_ from the same town.= 3. The girl can run fast jump far_ and, th... thank you. THANK YOU A LOT! GOD BLESS YOU ALWAYS ;). can you help me im confuse with this ones The stripes help to protect the zebra. 5.Subject- the stripes help 6.Predicate- to protect A zebra is a grazing animal. 7.subject- A zebra 8. Predicate- is a grazing animal. Grazing animals eat mostly grass and plants. 9. Subject- Graz... Zebra will be a subject and the predicate is live in Africa Read each sentence. Write the subject on the subject line and the predicate on the predicate line. Zebra live in Africa 1. Subject_______________ 2. Predicate_______________ DO YOU HAVE A EMAIL? THANK YOU SOOO MUCH! GOD BLESS YOU ALWAYS :). THANK U SO MUCH. THE LAST BLANK WILL BE PRETENDED RIHT Aimed- Captain- Monitor- Pretended- Professional- Familiar 1. If you've played a game many times it is a __familiar___ game to you. 2. If you acted as if you were a famous basketball played, you_pretended _. 3. If you took care of a playground, you were probably the park__... Aimed- Captain- Monitor- Pretended- Professional- Familiar 1. If you've played a game many times it is a __familiar___ game to you. 2. If you acted as if you were a famous basketball played, you_pretended _. 3. If you took care of a playground, you were probably the park__... Aimed- Captain- Monitor- Pretended- Professional- Familiar 1. If you've played a game many times it is a ________________ game to you. 2. If you acted as if you were a famous basketball played, you______________. 3. If you took care of a playground, you were probably the p... GRUMBLE-EXPLOTE-LANGUAGES-MUMBLED-STREAK-STUBBORN-DARTED Jack was so upset he could not stand still. He____________ from room to room looking for his spanish textbook. GRUMBLE-EXPLODED-LANGUAGES-MUMBLE-STREAK-STUBBORN 2."Whoever hid that textbook did good job." Jack ________ to himself. 3.Jack's sister Gloria heard a low, annoyed sound. She knew the ________ Came from Jack when she saw his face. 4.Gloria also knew Jack would no... GRUMBLE-EXPLODED-LANGUAGES-MUMBLE-STREAK-STUBBORN 2."Whoever hid that textbook did good job." Jack ________ to himself. 3.Jack's sister Gloria heard a low, annoyed sound. She knew the ________ Came from Jack when she saw his face. 4.Gloria also knew Jack would no... GRUMBLE-EXPLODED-LANGUAGES-MUMBLE-STREAK-STUBBORN 2."Whoever hid that textbook did good job." Jack ________ to himself. 3.Jack's sister Gloria heard a low, annoyed sound. She knew the ________ Came from Jack when she saw his face. 4.Gloria also knew Jack would no... GRUMBLE-EXPLODED-LANGUAGES-MUMBLE-STREAK-STUBBORN 2."Whoever hid that textbook did good job." Jack ________ to himself. 3.Jack's sister Gloria heard a low, annoyed sound. She knew the ________ Came from Jack when she saw his face. 4.Gloria also knew Jack would no... GRUMBLE-EXPLODED-LANGUAGES-MUMBLE-STREAK-STUBBORN 2."Whoever hid that textbook did good job." Jack ________ to himself. 3.Jack's sister Gloria heard a low, annoyed sound. She knew the ________ Came from Jack when she saw his face. 4.Gloria also knew Jack would no... GRUMBLE-EXPLOTE-LANGUAGES-MUMBLED-STREAK-STUBBORN-DARTED Jack was so upset he could not stand still. He____________ from room to room looking for his spanish textbook. com 220 explains how using statistics, graphs, and illustrations can strengthen our arguments in many different ways. In this case, the author used many different stats. He backed up claims with facts, which makes his argument more convincing! One major problem for the elderly is the lack of coverage in covering all their expenses for long-term care. In many traditional cultures, the elderly are taken care of by their children. However, because that is not the case here, an alternative method should be employed. Cla... Medicaid covers much of the cost of long-term care. A lot of patients spend all their money for nursing home coverage until they qualify for Medicaid. This is called "spend down" and is quite common among the elderly. Class, can you think of more efficient ways to co... , can you think of more efficient ways to cover the costs of long-term care algebra 1 flip flop the little mermaid was written by? the resisting force exerted on anairplane is called? what is sometimes reffed to as a river of grass?
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=LUZ","timestamp":"2014-04-20T05:24:14Z","content_type":null,"content_length":"19258","record_id":"<urn:uuid:bd5fc7bc-f0c3-417a-9b65-042f8ac7526c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help for 2 interesting questions - rolle's and continuity November 23rd 2009, 07:06 PM Need help for 2 interesting questions - rolle's and continuity 1. Let f and g be functions such that f'' and g'' exist everywhere on R. For a < b, suppose that f(a)=f(b)=g(a)=g(b)=0, and $g''(x) ot= 0$ for every x $\in$ (a,b). (i) Prove that $g(x) ot= 0$ for every x $\in$ (a,b). *I know how to do this, if I'm not wrong, use Rolle's Theorem a few times.* (ii) Show that there exists a number c $\in$ (a,b) for which $\frac{f(c)}{g(c)} = \frac{f''(c)}{g''(c)}$ *No idea. Does it have something to do with the Mean Value Theorem?* 2. Let f be a continuous function on R such that $\lim_{x\to0}\frac{f(x)}{x}$ exists. Define the function g(x) = $\int_0^1 f(xt) dt, x \in R$ Determine whether g' is continuous at x = 0. Justify your answer. November 24th 2009, 11:20 AM 1. Let f and g be functions such that f'' and g'' exist everywhere on R. For a < b, suppose that f(a)=f(b)=g(a)=g(b)=0, and $g''(x) ot= 0$ for every x $\in$ (a,b). (i) Prove that $g(x) ot= 0$ for every x $\in$ (a,b). *I know how to do this, if I'm not wrong, use Rolle's Theorem a few times.* (ii) Show that there exists a number c $\in$ (a,b) for which $\frac{f(c)}{g(c)} = \frac{f''(c)}{g''(c)}$ The function $f(x)g'(x) - f'(x)g(x)$ vanishes at both ends of the interval, so ..... (Rolle).
{"url":"http://mathhelpforum.com/calculus/116407-need-help-2-interesting-questions-rolles-continuity-print.html","timestamp":"2014-04-18T06:26:23Z","content_type":null,"content_length":"7713","record_id":"<urn:uuid:0089e546-2274-4b66-9797-f79be6f4a4d7>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
what is phase constant what is the harm if we just take [itex]\omega t[/itex] as the argument. you can eliminate the phase constant by changing the starting time … if you replace t by t + φ/ω, then the phase constant is zero If [itex]\phi[/itex] is the initial angle then I think the sinusoidal graph will show different starting points(t=0) for different values of [itex]\phi[/itex] of the same amplitude and frequency of vibration. Is that so??? not following you
{"url":"http://www.physicsforums.com/showthread.php?t=576430","timestamp":"2014-04-20T14:21:04Z","content_type":null,"content_length":"35040","record_id":"<urn:uuid:abe37378-6467-4c53-bb94-c7e2e5672697>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Exact learning curves for Gaussian process regression on community random graphs Matthew Urry and Peter Sollich In: Networks Across Disciplines: Theory and Applications, 11 Dec 2010, Vancouver, Canada. We study learning curves for Gaussian process regression which characterise performance in terms of the Bayes error averaged over datasets of a given size. Whilst learning curves are in general very difficult to calculate we show that for discrete input domains, where similarity between input points is characterized in terms nodes on a graph, accurate predictions can be obtained. These should in fact become exact for large graphs drawn from appropriate random graph ensembles. We focus on two types of ensemble. One is obtained by specifying (arbitrarily) the degree distribution and leads to sparse graphs, where each node is connected only to a finite number of others. The other is a community graph ensemble where we assume communities joined by a similar sparse superstructure. The calculation of the learning curves is based on translating the appropriate belief propagation equations to the graph ensemble. We demonstrate the accuracy of the predictions for Poisson (Erdos-Renyi) graphs and give some numerical results showing the need for a community orientated derivation of the learning curve. PDF - Requires Adobe Acrobat Reader or other PDF viewer.
{"url":"http://eprints.pascal-network.org/archive/00007979/","timestamp":"2014-04-20T10:52:17Z","content_type":null,"content_length":"8076","record_id":"<urn:uuid:8b2553d9-d7f1-49ae-a7e7-b4f4d6527e22>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
When will this stop? Ok, now it’s just getting annoying. Odd day? Give me a break. My thoughts on this irritating trend can be found here, here, and here. How can you possibly be growing tired of this? I’m all set for tomorrow’s “Sum of Squares Day” (5=4+1, 8=4+4, 9=9+0), which only happens like 56,324 times a century! Or Saturday’s “61st through 63rd digits of the decimal expansion of arccos(11/16) day”! I’ll be giving away arccos(11/16) cents to the person who celebrates that day the most zealously! These “math holidays” make me want to shoot myself in the face … well, except “Pi Day”, which is saved only by it’s association with eating pie. That one makes me want to put a pie in my face. Someone should tell that guy that 1905 is not the next odd number after 3. It’s true, these holidays are stupid and annoying. I’ll grant exceptions to Pi Day and 61st through 63rd digits of the decimal expansion of arccos(11/16) day, because everyone knows those days rule. But as for everything else – I’m over it.
{"url":"http://www.mathgoespop.com/2009/05/when-will-this-stop.html/comment-page-1","timestamp":"2014-04-17T04:15:52Z","content_type":null,"content_length":"74905","record_id":"<urn:uuid:fd154624-28b2-4be6-a654-6ffc90c09fa0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Modelling of Solute Transport Through Stratum Corneum Mollee, Thomas (2005). Mathematical Modelling of Solute Transport Through Stratum Corneum PhD Thesis, School of Physical Sciences, The University of Queensland. Attached Files (Some files may be inaccessible until you login with your UQ eSpace credentials) Name Description MIMEType Size Downloads THE21091.pdf Full text application/pdf 4.52MB 1 Author Mollee, Thomas Thesis Mathematical Modelling of Solute Transport Through Stratum Corneum Centre or School of Physical Sciences Institution The University of Queensland Publication 2005 Thesis type PhD Thesis Supervisor Tony Bracken Yuri Anissimov Total pages 119 Collection 2005 Language eng Subjects 230101 Mathematical Logic, Set Theory, Lattices And Combinatorics 780101 Mathematical sciences The principle barrier to the penetration of chemicals through human skin is the outermost layer, termed the stratum corneum. The stratum corneum consists of approximately 20 layers of essentially dead, flattened cells called corneocytes that are embedded in a lipid bilayer domain. This lipid domain occupies only a small fraction of the volume of the stratum corneum but for small, neutral solutes is generally regarded as providing the primary transport pathway through it. A theoretical understanding of the relative impermeability of the stratum corneum is of practical importance to the application of transdermal drug delivery. Existing mathematical models of this transport process are discussed in Chapter 2, following a summary of the physiological background in Chapter 1. The majority of these models use the relatively simple one-dimensional diffusion equation, treating the diffusional path length and diffusion coefficient of the solute molecules in the lipid bilayer medium as unknowns. This is a useful approach for the description of experimental data, but it does not describe the transport process in terms of fundamental physical parameters of the membrane. Models that do try to incorporate such physical parameters are also discussed in Chapter 2. Chapter 3 discusses existing one-dimensional diffusion models in which capture and release of solute molecules throughout the lipid domain are incorporated. In these models the solute molecules are divided into two species; one in which solute molecules are bound to fixed sites in the lipid medium, and one for molecules that diffuse freely. The capture and release mechanism appears in the equation expressing conservation of mass through a source/sink term. Two cases are considered in the literature. The first involves instantaneous equilibrium between "bound'' and "free" solute molecules [21] and in the second this process is time-dependent [9]. In the former case the relation between bound and free solute is nonlinear, resulting in the equation for conservation of mass being nonlinear. Despite this, exact expressions for the steady-state flux and time lag of the system have been obtained. In the latter case the capture (sink) component of the source/sink term, at a given position in the lipid medium, is assumed to be related linearly to the concentration of free solute while the release (source) of previously captured solute is proportional to the concentration of bound solute. Conservation of mass is then expressed as a system of coupled linear partial differential equations. These equations have been analyzed in the Laplace domain, and expressions for the steady-state flux and time lag of the system have been given. We complete these earlier analyses by giving analytic expressions for the concentration and flux of free solute along the diffusional pathway, as well as the cumulative amount of solute having left the membrane. The capture and release process discussed by Anissimov & Roberts is related to a more general linear capture and release process discussed by Bass Bracken & Hilden. The model of Bass et al. treats the release of previously captured solute by introducing a probability density function that governs release times. As shown by Bass et al., if the probability density function is a decaying exponential function of time the equation representing conservation of mass reduces to a system of coupled, linear partial differential equations that are equivalent to those studied by Anissimov & Roberts for solute transport through skin and which have been treated in a general context by Aifantis & Hill, and later by Hill & McNabb. For the system discussed by Bass et al. we obtain the Laplace transform of concentration, flux and cumulative amount of solute having passed through the membrane for the case when the rate constants for capture and release are independent of position. From these, expressions for the steady-state flux and time lag are given for the case of a general probability density function governing captured solute release times. We then consider as an example a density function of the form t^ne^-kt /k^n+1. When n is a positive integer the equation representing conservation of mass Formatted is expressible as a system of n + 1 coupled, linear partial differential equations. Chapters 4 and 5 contain the main new results of this thesis. Chapter 4 presents a new model of solute transport through the intercellular domain of the stratum corneum that is time dependent and that accounts for some structural features of the membrane. Here transport is viewed as one-dimensional in the "vertical" space between "horizontally') adjacent corneocytes. Once solute molecules reach the thin lipid layers separating vertically adjacent corneocytes (these regions we refer to as trapping layers) they diffuse horizontally until they reach a subsequent vertical channel, after which they continue to diffuse vertically. This process is repeated until the boundary of the membrane in contact with the receptor is reached. The model consists of section-wise one-dimensional diffusion with concentration and flux matching conditions on either side of infinitesimally thin trapping layers. Capture and release of solute molecules is used to obtain the matching relation for solute flux on opposite sides of trapping layers. Two explicit probability density functions governing solute molecule residence times are found by considering the "horizontal" diffusion of a solute molecule in the trapping layers, assuming simple geometrical shapes of corneocytes. However, we were unable to establish on physical grounds the appropriate relation for matching solute concentration on opposite sides of a trapping layer. Instead a linear relation identical to the matching condition for flux is assumed, although it is recognised that this may not always be appropriate. Under the assumed concentration and flux matching conditions, expressions for the Laplace transform of solute concentration, flux and cumulative amount having passed through the membrane are found. From these, estimates of steady-state flux and time lag of the system are found in terms of the structural parameters of the system and the results are related to those given in Section 2.2. It is found that the estimated steady-state flux is approximately one order of magnitude greater than those given in Section 2.2, while the time lag is at worst a factor of six less than those of Section 2.2. Such discrepancies are not surprising, especially given the assumed form of the matching condition for solute concentration. Unfortunately, existing experimental data are not sufficient to discriminate between this and earlier models. Obviously further investigations are needed to clarify the effects that different forms of concentration matching conditions have on steady-state flux and time lag, and also investigations into more appropriate concentration-matching conditions. Transdermal iontophoresis is the transport of ions across the skin due to the application of an external electric field. Constant direct currents or voltages are most commonly used for this purpose and can be regulated to enable control of the rate and duration of the drug delivery. However, a number of side effects are associated with its use such as erythema and electrode burns of the skin. To overcome these problems, low frequency AC iontophoresis has been proposed as an alternative means to transport ions across the skin. Chapter 5 of this thesis examines the mean flux of a charged tracer across a homogeneous membrane subject to alternating, symmetric voltage waveforms. The analysis is based on the Nernst-Planck flux equation, with electric field varying with time only, and is integrated numerically for four different voltage waveforms. Approximations for small and large frequencies are obtained and an approximation formula for all frequencies, due to Anissimov, is discussed. A brief discussion in Chapter 6 of possible future research concludes the thesis. Keyword Skin -- Permeability -- Mathematical models Skin absorption -- Mathematical models
{"url":"http://espace.library.uq.edu.au/view/UQ:107296","timestamp":"2014-04-20T01:35:06Z","content_type":null,"content_length":"33947","record_id":"<urn:uuid:9834d342-4bfd-47d7-aadf-105150a53417>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
From ErfWiki [edit] Force Multiplier "Bonuses are probably multiplicative^Erf-b1-p125 - bonuses of various types and even of the same type add up^Erf-b1-p113 ." No. A Force Multiplier is (Wikipedia): Force multiplication, in military usage, refers to a combination of attributes or advantages which make a given force more effective than another force of comparable size. A force multiplier refers to a factor that dramatically increases (hence "multiplies") the effectiveness of an item or group. All it has to do is dramatically increase power, [u]not[/u] multiply it. In fact, in our world where Parson sources the term from, measuring "force" is virtually impossible on the battlefield in the first place. There are theories and analysis that can estimate this, but how much "force" it takes simply requires too many factors one cannot judge at the moment of the attack. How much does high Morale multiply a unit's effectiveness, if it's got a weapon prone to jamming in its hand, vs. how much is the defender enhanced if his gun is a 7.62 vs. 5.56? Good leadership can turn the tide, but how does it multiply force? What I'm saying is that Force Multiplier in our world doesn't mean a "force" is "multiplied", so don't extend that to Erfworld's rules. Further, not all game systems are linear or multiplicative. They can be exponential, logarithmic, tangential, or any of a thousand possibilities dictated by probabilities dictated by random number generators or Tables of results. Force Mulitplier is a military term, and that is how Parson uses it. It does not indicate anything about the mathematics of Combat Bonuses in Erfworld. --Kreistor 04:45, 28 June 2009 (UTC) "Stack bonus maxes at 8" - assuming the kind of bonuses currently in the speculation are roughly accurate, and that stack bonus is solely determined by number in stack... 8=2^3. 1 unit in stack=no bonus, 2 in stack=+1, 4 in stack = +2, 8 in stack=+3? perhaps?? alternatively, it is a simple +1 for each additional unit in stack, maxing at +7, but from a game design standpoint, given that this bonus is added to all units in stack, making each successive level of bonus more expensive makes sense(especially as you also get the extra units attacks) --thoughts? Eserchie also, assuming the speculation calculation is right, +12 for stack bonus and basic attack, with maxed stacks, the stack bonus can't be larger than 6 (no unit had less than 6 base attack) My assumption had always been that a stack bonus only applied to the first 8 in the stack, not that it was only generated by them.
{"url":"http://www.erfworld.com/wiki/index.php?title=Talk:Bonus&oldid=26046","timestamp":"2014-04-21T07:25:44Z","content_type":null,"content_length":"16710","record_id":"<urn:uuid:1e43d34a-7406-4d1a-9cc6-c092f4b4822d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Contemporary Mathematics 2003; 372 pp; softcover Volume: 329 ISBN-10: 0-8218-3261-1 ISBN-13: 978-0-8218-3261-5 List Price: US$98 Member Price: US$78.40 Order Code: CONM/329 This volume contains 36 research papers written by prominent researchers. The papers are based on a large satellite conference on scientific computing held at the International Congress of Mathematicians (ICM) in Xi'an, China. Topics covered include a variety of subjects in modern scientific computing and its applications, such as numerical discretization methods, linear solvers, parallel computing, high performance computing, and applications to solid and fluid mechanics, energy, environment, and semiconductors. The book will serve as an excellent reference work for graduate students and researchers working with scientific computing for problems in science and engineering. Graduate students and research mathematicians interested in scientific computing for engineering and science. • R. Abgrall, M. Papin, and L. Hallo -- A scheme for compressible two-phase flows and interface problems • S. Amini and A. T. J. Profit -- Multi-level fast multipole Galerkin method for the boundary integral solution of the exterior Helmholtz equation • T. Arbogast -- An overview of subgrid upscaling for elliptic problems in mixed form • D. N. Arnold and R. Winther -- Mixed finite elements for elasticity in the stress-displacement formulation • W. Auzinger, O. Koch, and E. Weinmüller -- New variants of defect correction for boundary value problems in ordinary differential equations • Z. Chen, R. Glowinski, and J. He -- Scientific computing in energy and environment • S.-S. Chow and G. F. Carey -- Approximate analysis of extended Williamson fluids for Powell-Sabin-Heindl elements • P. Cummings and X. Feng -- Frequency domain method for the scalar wave equation with second order absorbing boundary condition • Z. Dostál and D. Horák -- Scalable FETI with optimal dual penalty for semicoercive variational inequalities • C. C. Douglas and G. Haase -- Algebraic multigrid and Schur complement strategies within a multilayer spectral element ocean model • J. Douglas, Jr., S. Kim, and H. Lim -- An improved alternating-direction method for a viscous wave equation • Q. Du -- Diverse vortex dynamics in superfluids • R. E. Ewing, J. Liu, and H. Wang -- Adaptive wavelet methods for advection-reaction equations • J. L. Guermond and P. D. Minev -- Approximation of an MHD problem using Lagrange finite elements • B. Guo -- Best approximation for the \(p\)-version of the finite element method in three dimensions in the framework of the Jacobi-weighted Besov spaces • B.-Y. Guo and L.-L. Wang -- Non-isotropic Jacobi spectral method • N. Herrmann -- Improved method for solving the heat equation with BEM and collocation • M. Hokr, J. Maryška, and J. Šembera -- Modelling of transport with non-equilibrium effects in dual-porosity media • Y. Hou and K. Li -- Error estimate for a two-level scheme of Newton type for the Navier-Stokes equations • Y. Kuznetsov, K. Lipnikov, S. Lyons, and S. Maliassov -- Mathematical modeling and numerical algorithms for poroelastic problems • M.-C. Lai -- Fast Poisson solver in a three-dimensional ellipsoid • B. Li, Z. Chen, and G. Huan -- Modeling horizontal wells with the CVFA method in black oil reservoir simulations • J. Li and Y. Chen -- Radial basis function based meshless method for groundwater modeling • D. Liang -- Upwinding finite covolume methods for unsteady convection-diffusion problems • J. Liu, Z. Chen, R. E. Ewing, G. Huan, B. Li, and Z. Wang -- Parallel computing in the black oil model • J. Maryška, J. Novák, P. Rálek, and J. Šembera -- Finite element model of piezoelectric resonator • B. Rivière and M. F. Wheeler -- Discontinuous finite element methods for acoustic and elastic wave problems • L. Schaefer and P. Wang -- Heuristics for developing variations on future air traffic schedule characteristics for air traffic simulation • J. Šembera, J. Maryška, and J. Novák -- FEM/FVM modelling of processes in a combustion engine • J. Shi, D. L. S. McElwain, and E. Donskoi -- A finite control volume method for the reduction of an iron ore-coal composite pellet in an axisymmetric temperature field • J. Tausch -- The fast multipole method for arbitrary Green's functions • R. Wang and Z. Chen -- A mathematical model for ESP simulation • M. Vohralík, J. Maryška, and O. Severýn -- Mixed-hybrid discrete fracture network model • H. Xu, C. Zhang, and R. M. Barron -- A new numerical algorithm for treatment of convective terms and its applications to PDEs • J. Xu, S. Dong, M. R. Maxey, and G. E. Karniadakis -- Direct numerical simulation of turbulent channel flow with bubbles • X. Yu and Q. Dai -- RKDG finite element schemes combined with a gas-kinetic method for one-dimensional compressible Euler equations
{"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-329","timestamp":"2014-04-21T07:32:01Z","content_type":null,"content_length":"19186","record_id":"<urn:uuid:4e1902ab-378e-4bbf-9969-e698f262ef09>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Buoyancy: Upward Force by Fluids Buoyancy: Upward Force by Fluids (page 2) Science Fair Survival Guide The paper cup floats in the water. The water pushed out by the paper cup spills over into the measuring cup. You used this volume of water to calculate the buoyancy of the paper cup. For the example, an object that displaces 0.015 L of water has a buoyancy of 0.147 N. The paper cup floats in the water, with about half of the paper cup below the surface of the water. The paper cup displaced (pushed aside) the amount of water equal to the volume of the paper cup below the water's surface. Buoyancy is the upward force of a fluid (a substance, a gas, or liquid, that flows and offers little resistance to a change in its shape when under pressure) on an object placed in it. The buoyancy of the water on the paper cup equals the weight of the water displaced by the paper cup. The Greek mathematician Archimedes (298-212 B.C.) is credited with discovering the law of buoyancy, which states that any object submerged or floating in a fluid is buoyed (lifted) by a force equal to the weight of the fluid displaced. Once you know the amount of water displaced, you can calculate the weight of the water displaced, which is equal to the buoyancy in newtons. Try New Approaches 1. An object will float as long as its weight is equal to the weight of water it displaces. Compare the weight of the cup and coins with the weight of the water it displaces determined from the original experiment. Determine the weight of the cup and coins by using a food scale to measure the mass of the cup and the coins in grams. Then convert the gram measure to kilograms and use the equation F [wt] = m × g to determine the weight of the cup and coins in newtons. 2. What is the maximum weight at which the paper cup and its contents can remain afloat? Repeat the original experiment placing the paper cup in the water. As you add one coin at a time, make note of any change of position of the cup in the water. Continue to add coins until one more coin makes the paper cup sink. Remove one coin, then determine the weight of the dry cup and coins as before. Determine the weight of the water displaced by the cup. How do the two weights compare? Design Your Own Experiment a. If Archimedes' principle is correct, buoyancy on an object in water causes the object to have an apparent weight (F[A]) equal to the actual weight (F[wt]) of the object as measured in air minus the weight of the water displaced by the object (F[B]), which is equal to buoyancy. An equation that expresses this relationship is: F[A] = (F[wt]) – (F[B]). Design a way to test this. One way is to determine the actual weight of a rock in air (F[wt]) in newtons. Do this by using the previous method of measuring the mass of the rock on a food scale in kg, then calculate the weight using the equation F [wt] = m × g. Record the rock's weight (F[w]) in a Buoyancy Data table like Table 12.1. Repeat the original experiment to determine the weight of the displaced water (F[B])when the rock is placed in water. Record the weight (F[s]) in the data table. Calculate the apparent weight (F[A])of the rock using the equation F[A] = (F[wt]) – F[B]) and record the results in the data table. b. If two items of identical volume but different weights are submerged in water, would the buoyancy on each be the same? Design a way to determine this, such as by using a container that can be closed. Fill the container with different contents to change its weight. c. How does the density of the fluid affect its buoyancy? Repeat the investigation using fluids with different densities, such as different concentrations of salt water. Since the density differences between the fluid concentrations may be slight, determine the densities by asking your teacher or maybe a pharmacist to measure the mass of a certain volume of each fluid on a scale accurate to at least 0.01 g. For more information about how things float, see Robert L. Lehrman, Physics the Easy Way (Hauppauge, N.Y.: Barron's, 1998), pp. 158–159. Get the Facts 1. Most fish are able to remain suspended at depths beneath the surface of the water. They remain stable due to a condition known as neutral buoyancy. What forces create neutral buoyancy? How is a fish able to maintain neutral buoyancy? For information, see Mary and Geoff Jones, Physics (New York: Cambridge University Press, 1997), p. 56. 2. Air is a very light fluid with a density of only 1.25 × 10^-3 glml (1.25 kg/m^3). A few things, such as a balloon filled with helium or hot air, are light enough to float in air. Why can a balloon filled with hot air float in air? What is the flight ceiling for a hot-air balloon? For information, see Louis A. Bloomfield, How Things Work: The Physics of Everyday Life (New York: Wiley, 1997), pp. 128–134. Disclaimer and Safety Precautions Education.com provides the Science Fair Project Ideas for informational purposes only. Education.com does not make any guarantee or representation regarding the Science Fair Project Ideas and is not responsible or liable for any loss or damage, directly or indirectly, caused by your use of such information. By accessing the Science Fair Project Ideas, you waive and renounce any claims against Education.com that arise thereof. In addition, your access to Education.com’s website and Science Fair Project Ideas is covered by Education.com’s Privacy Policy and site Terms of Use, which include limitations on Education.com’s liability. Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state’s handbook of Science Safety.
{"url":"http://www.education.com/science-fair/article/buoyancy-upward-force-fluids/?page=2","timestamp":"2014-04-20T09:39:48Z","content_type":null,"content_length":"87962","record_id":"<urn:uuid:aafc5eec-3c06-4abe-b213-dd940c16a6be>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Resources on the empirical foundations of mathematics hendrik@topoi.pooq.com hendrik at topoi.pooq.com Tue Jan 10 22:37:04 EST 2006 On Tue, Jan 10, 2006 at 03:02:12PM -0800, Richard Haney wrote: > I am interested in studying the *empirical* foundations of mathematics. > In particular, I am interested in exploring the *empirical* > foundations of logic, number systems, and geometry. I am interested in > exploring the possibility that conventional logic(s) may fail to yield > empirically true conclusions when applied to *real-world* phenomena > even though the premises may seem to be perfectly true. I am also > interested in exploring the possibility, for example, that the rules of > arithmetic may regularly (or irregularly) fail for, say, sufficiently > large numbers when applied to *real-world* phenomena. I am interested, > for example, in considering the possibility that assumptions of > constructivists may be too strong. And, for example, I am interested > in exploring how much interesting, useful mathematics can be done by > allowing oneself, when talking about natural numbers, to talk only > about natural numbers less than or equal to some unspecified, rather > large natural number. Yessenin Volpin was working along these lines, but I never fully understood his methods. I believe David Isles has been trying to explicate some of this material. > I have also read that some theoretical physicists have theorized that > space-time may be discrete (i.e., not continuous) at a sufficiently > small, sub-atomic scale. Based on numbers I found in a recent Scientific American article on loop quantum gravity, I estimated that the sugar I put in my tea (two teaspoons) contains about a googol such grains of space. At last a practical use for that number name. See the last few chapters of MotionMountain, the free on-line physics textbook (http://www.motionmountain.net/), where the author presents absolte limits on the large scale -- it turns out there are maxima on force, power, distance, time, and so forth. This seems to suggest that there is a finite limit N on the number of distinguishable points of space in the entire universe; thus a limit on the number of countable, observable, real objects. This need not be a bound on the size of numbers, of course. If one were to define a number as a sequence of zeros and ones, one write numbers up to something like 2^N (probably a different but similarly large N, of course). If one were to use Church numerals written as binary-encoded landa-expressions, one could get larger numbers still, but at the cost of huge gaps in the representable numbers. Is this reminiscent of ordinal notations? I suspect one of the reasons infinity is a useful thing to reason about is that is provides an analogue to the behaviour of the inconceivable large, and in the limit, we can round off a lot of tedious detail. > This sort of thing, too, would be of interest > as to the sort of mathematics this might give rise to, say, in place of > the real numbers, especially considering that physical space on a > straight line was the origin of the ancient Greek concept of numbers. > Insisting that every side of every idealized triangle had to have > exactly one length (number) may have led the world down one historical > path in mathematics, whereas alternatives might have been quite > different. The reasons we have considered these things so reasonable and obvious for so many millenia is, I believe, that they are approximately valid at the scales at which we have had to use our brains to interact with the environment -- we have simply evolved brains with all these assumptions built in. We have been thinking for a long time, too, even longer, if one is to believe Pinker (How the Mind Works, the Language Instinct), than we have had spoken, learned languages. So certain rules of logic are built in too. As we applied them to things beyond the environment they evolved in, they have been found lacking. Most notably, the evolution of logic in the last few thousand years have involved the application of logic to linguistically created entities. and the formulation and formalisation of logic has been linguistic (leading to the widely held belief that one cannot think without language). This recent evolution has been cultural, not linguistic, and has involved recurring failures and recoveries. The rules of logic have been debugged against paradoxes, such as the liar paradox, Zeno's paradoxes of motion, Newton's fascination with the ghosts of departed quantities (Berkeley's description of differentials, if I recall correctly), the paradoxes involved in infinite series approxomations, and so forth. The interesting thing is that each generation of mathematicians appears to believe the mathematics they grew up on, and trust it as being *true*, without a real understanding of what that means, if anything. This makes it remarkably difficult to strike out in a different direction. Physicicts have the advantage of experiments that falsify their theories. > I don't pretend to be an expert on these things, but I > would like to learn more about them. As far as I can tell, these > issues, with a focus on empirical relevance, seem to have gone > unexplored at the level of serious research. > I imagine that there may be a great many other interesting, relevant > issues and questions in the empirical foundations of mathematics but > that I am simply unable to imagine or formulate them at the moment. If. for example we actually had experience with actual infinities, we might find them behaving distressingly different from the way we now expect, based merly on extrapolation from the finite. > So what I would like to do is to find some really good resources for > research in this area. > Can anyone help me out with this? Perhaps a good place to start would be to try to come up with more of the questions. -- hendrik More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2006-January/009551.html","timestamp":"2014-04-19T19:37:43Z","content_type":null,"content_length":"8904","record_id":"<urn:uuid:f5da0f84-53de-437b-81d7-514d50704f32>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
cranvas is an R package for interactive statistical graphics. It is based on the Qt framework. Currently the package is available from github and will come to CRAN some time in 2012. MANET is for exploring data, whether raw data, transformed data or model residuals. MANET provides a range of graphical tools specially designed for studying multivariate features. Anyone involved in analysing data will find MANET useful for gaining insights into the structure and relationships of their data sets. GGobi is software for data exploration using highly interactive statistical graphics. My main contribution is a first implementation of area charts, such as barcharts and histograms. ASA Data Expo 2011."A Graphical Exploration of the BP Oil Spill" with Lendie Follett and Ulrike Genschel; our poster won the first prize. Design for America." Recovery and Re-investment Act" with the Statistical Graphics working group; our contribution won the first prize in its category. ASA Data Expo 2009."Flying over the USA" with the Statistical Graphics working group; our poster won the second prize. Info Vis Contest 2005:"Boom and Bust of Technology Companies at the Turn of the 21st Century" with Hadley Wickham, Dianne Cook, Junjie Sun, Christian Röttger; our submission received a first prize and is now part of the Information Visualization Benchmarks Repository. ASA Data Expo 2006."Glaciers Melt as Mountains Warm" with Hadley Wickham, Jonathan Hobbs, Dianne Cook; our poster won the second prize. LAS Award for Early Excellence in Research/Creative Activity 2006 Graphics of Large Datasets, with Antony Unwin and Martin Theus, Springer, New York, 2006. in amazon.com Graphical Tools for the Exploration of Multivariate Categorical Data, ISBN 3831116601, 2001. in Google Books Published Papers Buja A., Swayne D.F., Littman M., Dean N., Hofmann H.: Interactive Data Visualization with Multidimensional Scaling, Journal of Computational and Graphical Statistics, 2008, Vol 17 (2): pp. 444–472. Cook D., Hofmann H., Nikolau B., Wurtele E., Lee Eun-kyung, Yang H.: Exploring gene expressions using plots. Journal of Data Science 5, pp.151-182, 2007. Wickham H., Lawrence M., Cook D., Buja A., Hofmann H., Swayne D.: The plumbing of interactive graphics}, Computational Statistics, May, 2008 (online). Buja A., Swayne D.F., Littman M., Dean N., Hofmann H., Chen L.: Interactive Data Visualization with Multidimensional Scaling. Journal of Computational and Graphical Statistics, 17, 2, 444--472, 2008. Yan A, Kloczkowski A, Hofmann H, Jernigan RL.: Prediction of side chain orientations in proteins by statistical machine learning methods. Journal of biomolecular structure and dynamics, 3, 275--288, J. Hobbs, H. Wickham, H. Hofmann, and D. Cook: Glaciers melt as mountains warm: A graphical case study, Computational Statistics, Invited submission. Special issue for ASA Statistical Computing and Graphics Data Expo 2007. Hofmann H.: Interview with a Centennial Chart. CHANCE, 20 (3), pp. 26--35, 2007. Cook D., Hofmann H., Nikolau B., Wurtele E., Lee Eun-kyung, Yang H.: Exploring gene expressions using plots Journal of Data Science, 5(2), 2007. Hofmann H.: Mosaic Plots. Encyclopedia of Measurement and Statistics, SAGE, Neil J. Salkind (ed.), 2006. Hofmann H.: Parallel Coordinate Plots. Encyclopedia of Measurement and Statistics, SAGE, Neil J. Salkind (ed.), 2006. Hofmann H.: Graphical Statistical Methods. Encyclopedia of Measurement and Statistics, SAGE, Neil J. Salkind (ed.), 2006. Ahn J.S., Cook D., Hofmann H.: A Projection Pursuit Method on the multidimensional squared Contingency Table Computational Statistics, Vol 18 (4), pp. 605­626, 2003 Wurtele E.S., Dickerson J.D., Cook D., Hofmann H., Li J., Diao L.:MetNet: software to build and model the biogenetic lattice of Arabidopsis. Comparative and Functional Genomics. Vol 4, pp.239­245, Hofmann H.: Constructing and reading mosaicplots Computational Statistics and Data Analysis, Vol 43, No. 4, pp. 565-580, 2003. Unwin A., Hofmann H., Wilhelm A.: Direct Manipulation Graphics for Data Mining. International Journal of Image and Graphics, Vol. 2, No. 1, pp. 49-65, 2002. Hofmann H.: Generalised Odds Ratios for Visual Modelling. In Journal of Computational and Graphical Statistics, 10, pp 628-640, 2002. Hofmann H., Unwin A. Wilhelm A.: Data Mining and Statistics - Introduction. in Computational Statistics, 16(3),pp. 317-321, 2001. Hofmann H., Wilhelm A.: Visual Comparisons of Association Rules. In Computational Statistics, 16(3), pp 399-416, 2001. Wilhelm A., Hofmann H.: Graphics for Categorical Data and their Applications in Data Mining. In C. Provasi (Ed.), Modelli Complessi e Metodi Computazionali Intensivi per la Stima e la Previsione, pp. 51-56, 2001. Hofmann H.: Exploring Categorical Data: Interactive Mosaic Plots. In Metrika, 51(1), 11-26, 2000. Hofmann H., Theus M.: Selection Sequences in MANET. Computational Statistics, 13(1), 77-87, 1998. Unwin A., Hawkins G., Hofmann H., Siegl B.: Interactive Graphics for Data Sets with Missing Values - MANET. Journal of Computational and Graphical Statistics, 5(2), pp. 113-122, 1996.
{"url":"http://www.public.iastate.edu/~hofmann/research/research.html","timestamp":"2014-04-19T19:37:20Z","content_type":null,"content_length":"10498","record_id":"<urn:uuid:ffb0f909-80d5-4fdf-8f45-5903f32be9ab>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Assignment Help Find the maximum positive and negative shear force Assignment Help Influence Line Diagram For Shearing Force - Find the maximum positive and negative shear force Find the maximum positive and negative shear force: Find the maximum positive and negative shear force at point P in beam of Figure which is crossed by two connected wheel loads 3 m aparts moving from right to left. The front wheel carries a load of 20 kN and the rear wheel 10 kN. Maximum Negative SF For maximum negative shear force at P, the heavier wheel (20 kN) should be placed just to the left of P, the other wheel (10 kN) will then lie at Q which is 3 m to left of P (Figure 8(a)). The ordinate of IL diagram at P is - 0.4 and in which at Q is - 0.1 (by same triangles). Hence, maximum negative shear force at P, V[P] = 20 × (- 0.4) + 10 (- 0.1) = - 9 kN. Maximum Positive SF Here, two cases need to be examined (a) When the heavier wheel 20 kN is just crossed to right of P, and the lighter wheel (10 kN) is at Q 3 m behind it (Figure 8(b)). Hence the shear force V[P] = 20 × (+ 0.6) + 10 × (- 0.1) = 11 kN. (b) When the lighter wheel (10 kN) has just crossed to right of P and the front wheel (20 kN) is 3 m to right of it at R as in Figure 8 (c). The ordinate of IL diagram at P is + 0.6 and at R it is + 0.3. Hence, shear force V[P] = 20 × (+ 0.3) + 10 × (+ 0.6) = 12 kN. The second position gives the higher value; hence, the maximum positive SF will be 12 kN. Expertsmind’s world class education services We at www.expertsmind.com offer email based assignment help – homework help and projects assistance from k-12 academic level to college and university level and management and engineering studies. Our experts are helping students in their studies and they offer instant tutoring assistance giving their best practiced knowledge and spreading their world class education services through e-Learning program. - Quality assignment help assistance 24x7 hrs - Best qualified tutor’s network - Time on delivery - Quality assurance before delivery - 100% originality and fresh work
{"url":"http://www.expertsmind.com/topic/influence-line-diagram-for-shearing-force/find-the-maximum-positive-and-negative-shear-force-914273.aspx","timestamp":"2014-04-21T04:32:12Z","content_type":null,"content_length":"21236","record_id":"<urn:uuid:3ac61873-8f57-4e1c-b248-ad65d62a6e8a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
DesParO - A Design Parameter Optimisation Toolbox using an Iterative Kriging Algorithm by Christof Bäuerle, Clemens-August Thole and Ulrich Trottenberg DesParO is an optimisation toolbox designed for industrial simulation. The toolbox contains a collection of efficient algorithms for computer-based optimisation and can easily be adapted to any industrial simulation code. DesParO was especially designed for computationally expensive simulation codes and supports parallel computation. Nowadays simulation programs are widely used in the car industry to optimise production processes or to test the behaviour of a car before a real prototype is built or real tests are performed. Simulation helps to reduce the number of real prototypes and the time between model changes compared to real tests. Crash simulation is one of the most time-consuming simulation tasks and is therefore computationally expensive. Many simulation runs with different parameter values are required to optimise construction parameters, and the results have to be analysed by an engineer. In this case the use of a numerical optimisation toolbox, which analyses different parameter constellations, can support the development process significantly. The optimisation toolbox uses the PAM CRASH simulation program to calculate the crash properties of a car model. For each given set of parameters the simulation results have to be evaluated by an objective function. This article focuses on a box beam test example which demonstrates the applicability of the toolbox to crash simulation problems. The definition of an objective function which evaluates the crash simulations can be difficult for technical applications. In the case of the box beam example the objective function is constructed as a comparison between a reference crash and a crash of a model with modified material parameters. The aim of this procedure is to define an optimisation problem with a well-known solution to test the toolbox and the optimisation algorithms. In the reference case the acceleration of a box beam with given Youngs modulus and Poisson ratio is calculated and the objective function is defined as the integral of the quadratic differences to the acceleration of a model with modified parameters over the time. This integral can be approximated by a sum over discrete time steps: Thus the minimum of this objective function is reached if the material parameters of the modified model reach the values of the reference case. Figure 2 shows a plot of the objective function versus change of the Youngs modulus. The minimum of this function is at 210 kN/mm^2 and is identical with the reference case. Close to its minimum the function is smooth but on the left side the behaviour is rather chaotic. Under realistic conditions, almost all properties of crash experiments are subject to scatter, although it is desirable to avoid the scatter of a model as much as possible. Numerical properties of the simulation codes as well as certain features of the crash model may be responsible for these in-stabilities. Typical sources of scatter which are caused by the model are buckling and contact under an angle of 90°. Further investigation of this effect with the box beam example indicated that the scatter results from an instability between two different modes. Figure 3 shows one such mode. In this case the upper edge on the right side is shifted outward and the lower edge is shifted inward. In the second mode of the model both edges were displaced outside the box beam. Figure 1: Crash example box beam. Figure 2: Scan of the objective function for the test problem. Figure 3: One possible mode of the crash simulation. Figure 4: Convergence of the different optimisation algorithms. Scatter of a model is a difficult condition for optimisation and an engineer tries to avoid instabilities of a construction. In the test case of the box beam example it turned out to be possible to stabilize the model by a small modification of the construction. In particular, if both edges of the original model are bent to one side it is possible to suppress the asymmetric mode during the crash simulation and the scatter of the objective function can be reduced significantly. This procedure now leads to a well defined optimisation problem. For the solution of nonlinear problems a wide literature on optimisation strategies is available. Popular strategies are conjugate gradient, downhill simplex, and surrogate models. An important characteristic of the optimisation of crash simulations is that it is usually very time-consuming for a realistic car model. Optimisation strategies based on surrogate models are especially suited for optimisation under such conditions. In the first step design of experiment methods are used to choose parameter values, for which a simulation has to be executed. The values of the objective function for these parameters can now be used to construct a surrogate function as an interpolation of the real problem. The evaluation of the surrogate function is very fast and it is much easier to search for the minimum of this interpolation. During the next steps the design of experiment points can be refined iteratively nearby the minimum of the surrogate function. In Kriging the surrogate model consists of a sum of basis functions, very often of Gauss type. Figure 4 shows the relative convergence of different optimisation algorithms at the stabilized box beam example which was described in this article. Although the Kriging algorithm converges very fast during the first iterations at this problem, the optimisation strategy does not work so reliably if the objective function is not smooth enough. Future work will concentrate on the aspect of improving the robustness of the surrogate models by replacing the design of experiment methods by pattern search strategies. Furthermore it is planned to extend the surrogate models to unstable crash constructions. In this case surrogate functions are used for approximation (instead of interpolation) of the objective function. Please contact: Christof Bäuerle Fraunhofer SCAI Tel: +49 2241 14 27 94 E-mail: baeuerle
{"url":"http://www.ercim.eu/publication/Ercim_News/enw56/baeuerle.html","timestamp":"2014-04-20T23:36:40Z","content_type":null,"content_length":"12026","record_id":"<urn:uuid:bbea68dc-e755-4252-a2e5-e1d135827a9a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Guide Entry 80.04.05 Yale-New Haven Teachers Institute Home Practicing Precision: Lessons from Mathematical Language and Writing, by Helen Sayward Guide Entry to 80.04.05: This curriculum unit is designed to prepare students for their year’s work in Mathematics. One of the major goals of the unit is, therefore, the review of basic skills and functions which should be known to the student but may have been neglected over the summer. Two other goals are: 1) teaching students to develop an interest in mathematical concepts in their own right, and 2) teaching them good work habits and pride in their accomplishments. Mathematics taught in a structured fashion lends itself beautifully to these two goals. Classroom activities should center more on playing games with numbers, learning logical thinking, and working on exercises that emphasize developing alternate ways of thinking. These are important precursors to developing problem solving skills which can be presented later in the year. (Recommended for 9th and 10th grade Applied Math I, 9th and 10th grade Applied Math II, and 11th and 12th grade Consumer Math.) Key Words Basic Skills Mathematics Language Contents of 1980 Volume IV | Directory of Volumes | Index | Yale-New Haven Teachers Institute
{"url":"http://www.yale.edu/ynhti/curriculum/guides/1980/4/80.04.05.x.html","timestamp":"2014-04-16T04:27:00Z","content_type":null,"content_length":"4351","record_id":"<urn:uuid:aeca5646-18a5-4cfc-a314-f50d88612990>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
7 2 is NOT THE WORST HAND IN POKER (kind of) hey guys the statistical odds of winning with an offsuit 2 3 (pre-flop) are number of opponents chance % 1 29.24 2 21.61 3 15.39 4 12.73 5 10.07 6 8.87 7 7.67 8 7.01 9 6.35 the stats for a off suit 7 2 winning pre flop are number of opponents chance % 1 32.71 2 22.44 3 15.83 4 12.98 5 10.12 6 8.70 7 7.28 8 6.57 9 5.86 With bold being the better chance out of the 2 hands. So the number of opponents directly affects which poker hand has the better odds of winning. Problem solved. What you guys think? Well, you see, 7-2 has the nicknames "WORST HAND IN POKER" and "BEER HAND". Meanwhile, 2-3 is "AIR JORDAN". Meaning Micheal Jordan. Who rocks beyond comprehension. Therefore, in my infallible logic, I conclude that 7-2 is indeed, the worst of the two. Please guys i'm urging you to stop this petty bickering. This started off as an intelligent post however it then turned into a forum playground where insults start. This is supposed to be a fun site so please keep it that way (including the forums) Lighten up and have fun. ----///-\\\----Put This ---|||---|||---On Your ---|||---|||---account If ---|||---|||---You Know ----\\\- ///----Someone -----\\///-----Who Died ----///--\\\---Or whom maybe suffering from it It's true, though. When people say that 7/2 is the worst hand, it's assuming the table is full. Heads-up, or short-handed, 3/2 is actually a worse hand. And yes, thanks for the education on the subject yank, but your name calling wasn't, ahem, called for. that nickname of worst hand in poker was made by people who analyzed the potential of cards not the actual stats hence why 7 2 is the "worst hand in poker" 23 has the potential of a straight therefore it has some form of value. 72 can only hit other 7's or 2's suffice to say i wouldnt want either hand ;) i think it should be 9 2 that is the worst hand in poker you can't really get anything with that hand poker5495 wrote:I agree with every thing about what they say Your poker player poker5495 who are you talking too but how can you agree with everybody when there are like 5 people saying opposites of each other, so you can't agree with everybody The votes seem to be 50/50... how can you agree with most? If somebody says 5 its cannot be50%. Meaning only 60% I can agree on. So it's not 50%. Your poker player poker5495 Last edited by 1YRTJPOKER5495 on Tue Feb 21, 2006 8:27 pm, edited 1 time in total.
{"url":"http://triplejack.com/forum/viewtopic.php?f=2&t=506","timestamp":"2014-04-17T00:50:16Z","content_type":null,"content_length":"42318","record_id":"<urn:uuid:340815d9-fd4f-4aa3-91c5-977a954a765d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/superstardiva11/asked","timestamp":"2014-04-16T10:29:14Z","content_type":null,"content_length":"106586","record_id":"<urn:uuid:ba13d104-fd27-4477-9b34-1b81420110e9>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: The null is almost surely false . . . Replies: 0 Luis A. Afonso The null is almost surely false . . . Posted: Nov 6, 2012 6:17 PM Posts: 4,518 From: LIsbon The null is almost surely false . . . Registered: 2/16/ Intuition is repeatedly used everyday life. An example: when we wish to know what someone exactly want to say, but not wanting to be clear, because of shyness, discretion, not be heard by the surrounding people, the presence of a police officer, or whatever. A statement currently believed by the anti-NHST people is: The Null Hypotheses, H0, is almost surely false; to be concerned if is true or not true, worthless. Using intuition does uncover two errors the guy thinks about NHST. Firstly he thinks that the objective is to decide the H0 veracity. And read (testing a parameter) H0: t=0 as it was an algebraic equation, so the t nullity. Quite simple to answer properly: __a) The Significance tests have only one purpose; to reject or not the Null, __b) the pseudo equation t = 0 means a very different thing: is data so imprecise that t could not be say the test statistics (modeling t) is different from 0? Or not? Luis A. Afonso
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2413642","timestamp":"2014-04-20T10:49:38Z","content_type":null,"content_length":"14626","record_id":"<urn:uuid:4233935e-e37a-46d9-89f0-59b0f1a7556b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
optimization - Find an expression for the intensity I(x) at the point P. April 6th 2013, 11:29 PM #1 Junior Member Oct 2010 optimization - Find an expression for the intensity I(x) at the point P. Please look at the attachment for the queation part a), i provided the answer which i thought is correct and it says its wrong. Can someone please tell me whats wrong with expression i provided, and how it should look. Re: optimization - Find an expression for the intensity I(x) at the point P. The intensity due to 1 source is directly proportional to the strength of the source and inversely proportional to the square of the distance from the source. The constant of proportionality is 1, so the equation must be of the form: Where S is the strength of the source and D is the distance. This is the intensity due to 1 source, so the intensity due to the other source is The expression you're looking for is simply the sum of those 2 intensities. Now, let's calculate the distance between one of the sources and P. Assuming that P is at a distance x, then the distance between the source and P is: $I_{1}(x)= \frac{S}{D_{1}^2}=\frac{S}{\sqrt{d^2+x^2}^2}=\frac {S}{d^2+x^2}$ Now you need to find the intensity due to the other source. April 7th 2013, 03:11 PM #2 Junior Member Nov 2012
{"url":"http://mathhelpforum.com/pre-calculus/216889-optimization-find-expression-intensity-i-x-point-p.html","timestamp":"2014-04-19T02:06:37Z","content_type":null,"content_length":"34389","record_id":"<urn:uuid:c4cdd2f9-7657-4d1c-9a88-fbee5b5616c1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Problem 4 Midterm scores in a class have an average of 75 and an SD of 8. Final exam scores in the class have an average of 50 and an SD of 10. The correlation between the two variables is 0.63 and the scatter diagram is football shaped. Students complain that the final was too hard, so the instructor gives each student 15 more points on the final. 4b The correlation between the new final scores and the old final scores is: 4c The correlation between the new final scores and the midterm scores is: • one year ago • one year ago Best Response You've already chosen the best response. 4b= 1 4c= 0.63 Best Response You've already chosen the best response. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5151ce9fe4b0d02faf5c14d6","timestamp":"2014-04-20T23:52:00Z","content_type":null,"content_length":"30314","record_id":"<urn:uuid:729ded16-1369-4703-800e-059b12230596>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 635 thanks : ) You'll be alert, engrosed in the (14)material.And concentrated 100 percent. a) no change b) material, and c) material; and d) material and After reading it over and over, I think b) a comma after material would be the correct answer. I think this is a compound sentence, si... oh, thanks! Before class, you should look over your notes from the last(9)lecture and take a minute to speculate about what your instructor is going to talk about today. (9) a) No change b) lecture, and c) lecture; and I think it need a comma after lecture is that correct? thanks Yes, I am guessing a)according to the comma rule 1: Use a comma to separate the elements in a series (three or more things). English is my 2nd language so I read and guess my best according to the rules,still I'm never too sure. am I right? choose proper punctuation George, leave the room, shut the door, and be quiet b)George;leave the room, shut the door and be quiet c)George leave the room, shut the door and be quiet d)b)George leave the room, shut the door, and be quiet My first guess is a) but I'm tempted... Wow!! Thanks, I really appreciate it! Having been discovered, Fifi looked up at his owner with puppy-dog eyes. First sentence is right. Second sentence 'the owner' didn't have puppy dog eyes. So, I say remove c) his owner and it will read Having been discovered, Fifi looked up with puppy-dog eyes. ? It could also be a) with puppy-dog eyes? So it would read: Having been discovered, Fifi looked up at his owner Kind of doens't make sense to me. I think the answer is. b) been discovered Which group of words is the sentence below misplaced? Having been discovered, Fifi looked up at his owner with puppy-dog eyes. a)with puppy-dog eyes b)been discovered c)his owner d)Fifi looked up Thank you very much! When I read it outloud c) sounds better but according to rules you just sent me, it would be a). I'm confused when it comes to this sentence. thanks My fist guess was a), but then c) seems like a better answer Choose the correct comma placemnent for the sentence below> a. plump, old, white cat b. plump, old, white, cat c. plump, old white cat d. plump old white cat world religions Thank you it looks like it will be very helpful. world religions I am conducting a research paper and am having trouble finding comparisons and contrasts between Judaism and Buddhism with credital authors, where could I go to find this information? veterinary medicine A pet owner accidentally applies 2.5 ml of a dog flea product containing 45% permethrin to a 15 lb cat. What is the dosage this cat received? yes, it is a because you factor... x^2-81= x-9(x+9) x^2+18x-81= (x+9)(x+9) check: 9x+9x= 18x x*x= x^2 -9*9= -81 ((x-9)(x+9))/((x+9)(x+9))-> cross out x+9 ANS:(x-9)/(x+9) Find all critical points of the function. f(x) = xe^3x so x= ? Show steps or tell me how I find them? an organic sbstance has an enthalpy of vaporization of 29.0 kl/mol. the compound has a vapor pressure of524 mm HG at 25 C. at what tempurture is the vapor pressure equal to 115mm HG? why and how could you apply critical thinking when evaluating an article? A car travels around a circular track at 185 miles per hour. Is this a linear velocity or an angular velocity? Explain. what is a fallacy for a list of numbers entered by the user and terminated by 0, find the sum of the positive numbers and the negative numbers a circle of radius 1 rolls around the outside of a circle of radius 2 without slipping. the curve traced by a point on the circumfarence of the smaller circle is callled an epicycloid. use the angle theta to find a set of parametric equations for this curve. thank you.. a toy wagon is pulled by exerting a force of 15 pounds on a handle that makes a 30 degree angle with the horizontal. find the work done in pulling the wagon 50 feet. a 100 pound collar slides on a frictionless vertical rod. find the distance y for which the system is in equilibrium if the counterweight weights 120 pounds. a circle of radius 1 rolls around the outside of a circle of radius 2 without slipping. the curve traced by a point on the circumfarence of the smaller circle is callled an epicycloid. use the angle theta to find a set of parametric equations for this curve. Find the Magnitude and direction of the resultant force. three forces with magnitude of 50,20, and 40 points acting on an object at angle 60 degres, 30 degrees, and -90degrees,k respectevly with the positive x-axis. A golf ball is hit off the ground at an angle of pie/6 and it travels 400 ft. How long was the golf ball in the air? How many pounds of Acid Per inch? If you have a tank that s 51ft 0 in. long by 4ft 7 in. wide, and need to fill it to the two inch mark with hydrochloric acid, how many pounds are you putting in it? (Hint 1 gallon of acid weights 9.8 lbs. A bicycle wheel has a diameter of 26 inches. About how far will it travel in one revolution? its a 75 thanks, after multipling i got 2855.72 lbs would that be correct.. How Many pounds of acid per inch? If you have a tank of 51ft long by 4ft 7in wide, and need to fill it to the two inch mark with hydrochloric acid, how many pounds are you putting in it? (Hint 1 gallon of acid weights 9.8 lbs. Nancy 2:28 A 4.50e-2 M solution of unknown monoprotic weak base has a pH of 10.200 what is the value of kb for the base What is the pH of a .10M (CH3)3N solution? Kb=6.40e-5 If you have a tank of 51ft long by 4.7ft wide, and need to fill it to the two inch mark with hydrochloric acid, how many pounds are you putting in it? (Hint 1 gallon of acid weights 9.8 lbs. If you have a tank of 51ft long by 4.7ft wide, and need to fill it to the two inch mark with hydrochloric acid, how many pounds are you putting in it? (Hint 1 gallon of acid weights 9.8 lbs. Q. 2. Describe a rocket propelled in 30 degrees north of east with a velocity of 5m/s? The thermite reaction is a very exothermic reaction; it has been used to produce liquid iron for welding. A mixture of 2 mol of powdered aluminum metal and 1 mol of iron(III) oxide yields liquid iron and solid aluminum oxide. How many grams of the mixture are needed to produce... john bunyan's the pilgrim's progress is an allegory, that is, he used names to present abstract qualities. explain how vanity fair, obstinate, pliable, help and faithful demonstrate the traits for which they are named, and how they affect christian's journey. I have to do a paper on the railroad strike of 1877, but I find that there is too many websites. My teacher does not want us to use Wikipedia since the information changes too much. I have looked at all of them, but cannot find a good one. We have to include the struggles it c... LaJolla Securities, Inc. specializes in the underwriting of small companies The terms of a recent offering were as follows: Number of shares 2 million Offering price $25 per share Net proceeds $45 million LaJolla Securities expenses, associated withthe offering, were $500,000.... LaJolla Securities Inc., specializes in the underwriting of small companies. The terms of a recent offering were as follows: Number of shares 2 million Offering price The Norman Company needs to raise $50 million of new equity capital. Its common stock is currently selling for $50 per share. The investment bankers require an underwriting spread of 3 percent of the offering price. The company's legal, accounting and printing expenses, as... Financial Accounting Here is the problem: if you could just help me get started in the right direction that would be great, I don't even know where to begin with this question ellar Company was established to manufacture components for the auto industry. The components are shipped the same day... Jem's description is so exaggerated that it could not have possibly come from an adult. As I recall, Jem says that Boo goes around scaring people at night, and eats dead cats. No mature adult would say these things about a socially awkward individual. -6(x+3)=-6x+5 first, distribute the -6, resulting in: -6x-18=-6x+5 as you can see, these two equations are not the same. if you simplify it further, you get: -6x=-6x+23 for the second equation, after distributing the 8 and simplifying both sides, you get x=x, so x can be infin... it is the same as 9x10^-4, or .0009 since G is on the perpendicular bisector of AH, then AG=BG. set the two equations as equal to each other and then solve 4gr math one yard equals 3 feet or 36 inches -3(x^2+2)=102 x^2+2=102/-3 x^2=-34-2 x=sqrt32 its sqrt 3+ sqrt5. not 3+ sqrt5 Find a polynomial with integer coefficients such that (sqrt3 + sqrt5) is a root of the polynomial Find a polynomial with integer coefficients such that (sqrt3 + sqrt5) is a root of the polynomial find a polynomial with integer coefficients such that square root of 3 + square root of 5 is a root of the polynomial how many zeros is in 2010!(2010 factorial)? w=d/t let x=80, so (80+80)/(8hs+10hrs) =8.8889 mi/hour find a formula that gives the maximum number of enclosed regions formed by n lines 2.34x10^7=23400000 1.7x10^12=1700000000000 Hydrochloric acid can be prepared using the reaction described by the chemical equation: 2 NaCl(s) + H2SO4(l) ----> 2 HCl(g) + Na2 SO4(s). How many grams of HCl can be prepared from 393 g of H2SO4 and 4.00 moles of NaCl? find a polynomial with integer coefficients such that square root of 3 + square root of 5 is a root of the polynomial Randy deposits $7,540 in a bank account that pays 4.5 percent simple interest. If he doesn't change the principle, what will his balance be in 10 years? Pregnancy obviously results in an increased need for vitamins and minerals. Deficiency or excess of any of a number of nutrients can lead to birth defects and/or complications during pregnancy Global History ok never mind sorry thank you so much :] Global History all men had to do what during military expansion during the Meiji Restoration? Global History Japan fights an imperialistic war with Russia Called the? Global History 4 improvements during Meiji Restoration Global History How were European nations able to dominate non-European areas? what is the first term of an arithmetic sequence if the 7th term is 21 and the 10th term is 126 compose a list identifying the major components of health communication. who is involved in each component? how does each component promote health communincation? in not utilized, how would it reduce health communincation? provide examples. find the area of the region enclosed by y= square root of x, y=x-2, and y=0. i use f(x) - g(x) and got the answer to be 2, but my classmate got 8/3 do i'm really confuse dont know whats the answer. Find the surface area formed by revolving the graph of f(x)=81-x^2 on the interval [0,9] about the y-axis. (express the answer in terms of pie.) find the volume of the solid formed by revolving the region bounded by the graph of y=2x^2+4x and y=0 about the y- axis ( express the answernin terms of pie) find the volume of the solid formed by revolving the region bounded by the graphs of y=x^3,y=1, and x=2 about the x-axis using the disk method. 9 express the answer in terms of pie) i got the answer to be 3/5 pie but im not sure if is right. i used the formula V=(pie)(r)^2W The question is: Compute the recent two years cash flow on total assets ratios for this company. This is the info given: Operating cash flow for current year (in millions): $1,762 1 year prior - $1,740 2 years prior - $1,981 Total Assets for current year - $13,570 1 year... 2nd grade The first set of letters has all straight letters and the second set of letters are all curved in some sort!!! I seriously sat there four hours trying to figure out what the heck they were talking about, I actually tried sounding them all out thinking that was what they were t... I am trying to work a case problem, I have to take a group of numbers and using the data by product line, there is three different products lines, I need to compile a breakdown of sales by product then give a total of all three. then I need to find the expected sales for each ... wheels bicycle shop advertised a bicycle for 15% off for a savings of $36. The bicycle did not sell so it was offered at a new 20% discount off the sale price. What did the bicycle sell for regularly? What is the amount of the new discount? three cuases for the increasing cost of healthcare and what affect does each have on society a bomb is dropped drom an airplane at an altitude of 14,400 feet how long would it take to reach the ground? (because the motion of the plane, the fall will not be vertical, but hte time will be the same as that for a vertical fall.) the plane is moving at 600miles per hour. h... the grand canyon is 1600 meters deep at its deeoest point. a rock is dropped fromthe rim above this point. Express the height of the rock as a function T in seconds. how long will it take the rock to hit the canyon floor? H= 1600-1/2t^2 thats what i got as a function is that c... ok, i know what it is had forgotten we use that but its in my notes thanks!!!! find the derivative of 3x^4-5x+3/x^4+1 i know that the derivative of the numeratoor would be 12x^3-5 but i'm not sure if its right since it has a denominator i Know i need to do something to it but not sure if to simplify. A fully plane has a constant acceleration while moving down a runaway. the plane requires .7 mile of runaway and a speed of 160 miles per hour in order to lift off. what is the planes accelearation? i need the formula cause the one i have doesnt work cause i need time I have the same question and do not understand it, well i do. Just need help to set it up. Well the punnett square Please reply soon! If we take a point of reference that is a certain distance away from where the force is applied and the force is acting on the same axis as the distance axis i.e. the force vector and the distance vector are on the same axis, only in opposite directions, would the torque of th... social studies if the issue is whether a person is civil rights were violated in court desicion through what levels of courts might that person appeal? Anatomy & Physiology Excercising muscle will receive more oxygen from hemoglobin than will relaxed muscle because: a)it's PO2 is low b)it's PCO2 is high c)both a and b d)both a and b and its temp is higher than usual Anatomy & Physiology In the fetus, the organ that is important for the normal development of the immune system is the? liver? spleen? thymus gland? lymph node? How many moles of ions are released when these samples are dissolved in water? a. 0.37 mol of NH4Cl b. 3.55 * 10^18 formula units of Ba(OH)2 * 8H2O I need to know what steps to take to get to the answer. Thanks! Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Nancy&page=5","timestamp":"2014-04-21T00:01:41Z","content_type":null,"content_length":"27541","record_id":"<urn:uuid:b2ff4cc5-2032-47e8-9e1c-fad8db6f5e3d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Topological Sorting Jonathan is the chief technical officer and one of the founders of Intrinsa Corp. Jerry is a member of the technical staff at Intrinsa and a veteran of the ANSI C++ Standards Committee. They can be reached at jon@intrinsa.com and jerry@intrinsa.com, respectively. An old adage claims that "Anyone can make a bridge that stands, but it takes an engineer to make one that just barely stands." This is as true of software as bridges. It takes considerable skill to design efficient software that only does as much work as necessary. For example, traditional sorting may be overkill if you really only have a few conditions you need to satisfy. This month, Jonathan and Jerry show how topological sorting does only enough work to satisfy a few constraints. They then extend the basic algorithm to provide good results even when those constraints are contradictory. -- Tim Kientzle In developing our PREfix software component simulator, we realized that to get the best results, we should analyze functions in a particular order. For example, if Foo() calls Bar(), it's best to analyze Bar() before analyzing Foo(). Given an entire program, we needed to sort the various functions so that a called function was analyzed before the function that called it. Note that there are many different orders that might work; we just needed to find one. This is an example of a "topological sort." Most sorting techniques require a "total order," where you have to be able to compare any two elements. A topological sort uses a "partial order" -- you may know that A precedes both B and C, but not know (or care) whether B precedes C or C precedes B. Topological sorting is a useful technique in many different domains, including software tools, dependency analysis, constraint analysis, and CAD. Topological sorting works well in certain situations. If there are very few relations (the partial order is "sparse"), then a topological sort is likely to be faster than a standard sort. Given n objects and m relations, a topological sort's complexity is O(n+m) rather than the O(n log n) of a standard sort. In our case, most functions typically call a handful of other functions, meaning the total number of relations (caller/callee pairs) is relatively small, so topological sorting makes sense. Topological sorts can also deal gracefully with cycles. For example, imagine that two functions X and Y are mutually recursive: X calls Y and Y calls X. In this case, it is useful to detect the cycle and the specific relations that cause the cycle. Standard sorting algorithms, however, will simply fail in this situation. We developed an extension to topological sorting that can produce a "best" order, even in the presence of cycles. In this article, we present a basic topological sorting algorithm and implementation, then extend the algorithm and implementation to deal with cycles. The implementations presented here make heavy use of, the C++ Standard Template Library (STL). Much of the work involves manipulating vectors and queues, and STL's primitives make it easier. Furthermore, packaging the various functions as STL algorithms allows them to be reused on different types of data. The Basic Algorithm For simplicity, we'll describe the algorithm in terms of a generic "precedes" relation: If X precedes Y, we say that X is the predecessor and Y is the successor. Our terminology and discussion follows Donald Knuth's presentation in The Art of Computer Programming, Volume 1 (Addison-Wesley, 1997). For example, Figure 1 illustrates the following set of relations on the input set { A, B, C, D }: A precedes B, C precedes D, and B precedes D. For these relations, there are several valid orderings: { A, B, C, D } { A, C, B, D } { C, A, B, D } However, { C, D, A, B } would not be valid, because it violates the relation that B precedes D. One way of ensuring a correct order is to simply copy the data, making sure that whenever we copy one item, we know that all of its predecessors have already been copied. As a shorthand for this notion of "already been copied," we'll define a predecessor X as an "active" predecessor of Y if X precedes Y and X has not (yet) been copied. Then this basic algorithm can be summed up quite While there is a member that has no active predecessors, repeat: 1. Pick a member X with no active predecessors. 2. Copy X to the output. Clearly, the output will indeed be ordered. What is not so obvious is that -- assuming there are no cycles in the input -- the while loop will not terminate until every item has been output. The key to an efficient implementation of this algorithm is to be able to pick members with no active predecessors without having to rescan the entire list at each iteration. One way of accomplishing this is to associate a count of predecessors and a list of successors to each input item, and use an auxiliary queue Q as interim storage of nodes that have no active predecessors but have not yet been output. In fact, Knuth uses the topological sorting algorithm as an example of using a linked list as a queue. Assume we have an array, count, which stores the predecessor count for each member. Using the basic queue operations add (at the end of the queue) and pop (from the front of the queue), we can refine the algorithm as in Figure 2. The while loop terminates when there are no items left on the queue, either because everything in the input list has been processed, or because there is a cycle. The overall time complexity of this basic algorithm is O(n+m). The O(n) comes from the number of times that the while loop (and initial for loop) is executed, and the O(m) from the nested for loop. Although there is no way to calculate how many times the inner loop will be executed on any one iteration of the outer loop, it will only be executed once for each successor of each member, which means that the total number of times that it will be executed is the total number of successors of all the members -- or the total number of relations. Space complexity is also O(n+m). The O(n) component comes from the predecessor count information stored for each member, and the maximum length of the auxiliary queue. The O(m) comes from storing the successors for each member; once again, the total number of successors is the number of relations, so O(m). The basic algorithm's usefulness may be limited because of its inability to give detailed information about cycles. In practice, there are often one or more cycles in real-world problems. In program analysis, mutually recursive functions cause cycles. For example, Figure 3 illustrates the set of relations on the set { A, B, C, D, E, F, G }: A precedes B, B precedes C, C precedes D, C precedes B, D precedes B, D precedes E, D precedes F, F precedes G, and G precedes F. Note that there is a cycle involving B, C, and D; and another cycle involving F and G. There is also a cycle involving just B and C, but we can ignore that because of the B, C, D cycle; we are only interested in the maximal cycles. In The Design and Analysis of Computer Algorithms (Addison Wesley, 1974), Aho, Hopcroft, and Ullman present a complete -- and efficient -- algorithm for identifying strongly connected components in a graph, which we can use for finding cycles. The basic approach is to iterate through all the members of the set and perform a depth-first search of the relations from each. As we visit each member for the first time, we number it (Aho et al. refer to this as the "depth-first number") and push it onto a stack. For each member, we also keep a record of the lowest-numbered potential root of a cycle involving that member. Initially, this is simply the member itself; when we are done with the depth-first search from a member, if any of its successors has a lower-numbered potential root, we update the member's potential root. A member is potentially the base of a cycle if its lowest-numbered potential root is itself. Because the search is depth-first, any cycle will end up at the end of the stack, so we can find the cycle by popping the stack repeatedly until we get to the base node. (Unlike the Aho et al. algorithm, we ignore members that do not have any members following them on the stack; these members are not in any cycle.) Since each member is popped off the stack only once, it is in at most one cycle. The file topsort.h (available electronically; see "Availability," page 3) implements this algorithm. Dealing with Cycles In a dependency-analysis tool, it might be enough just to identify the cycles so that you know which dependencies overconstrain the problem. In other cases, however, what you want is a "best" ordering -- an ordering that preserves as much of the information as possible. In our case, we want to warn the user of the approximation caused by mutually recursive functions, but still proceed with bottom-up analysis. In Figure 3, there is no definitive ordering between B, C, and D, or between F and G. However, the first cycle precedes the second cycle because D precedes F (but there is no member of the second cycle preceding any member of the first cycle). Similarly, A precedes the first cycle, and the first cycle precedes E. Possible "best" orderings for Figure 3 include { A, B, C, D, E, F, G } and { A, D, C, B, F, G, E }, but not { A, E, B, C, D, F, G }. You can extend the topological sorting algorithm to deal with cycles by first finding the cycles of the set, then creating a set where all members of a cycle are replaced by a single placeholder. Next, topologically sort this smaller set. Finally, replace each placeholder with all the members of the corresponding cycle. Figure 4 shows this extended topological sorting algorithm. The key point is that the topological sort in step 5 will not encounter any cycles, since we have replaced them all in step 2. The Aho, Hopcroft, and Ullman cycle-finding algorithm is O(n+m). Since the worst-case number of cycles is O(n), and the number of relations between cycles is m (or fewer, if relations between the same cycle can be weeded out efficiently), the extended algorithm is also O(n+m). Oddly, the worst case for the extended algorithm is the situation where there are no cycles, and it performs a topological sort on all n members. If this is the typical case for some data, it may be more efficient to first perform a topological sort with the basic algorithm, and only fall back on the advanced algorithm if cycles actually appear. In some real-world problems, cycles of length 2 may be much more common than longer cycles. Mutual recursion is a good example of this; pairs of mutually recursive functions are not infrequent, but larger combinations are rarer (although they certainly occur in practice). In this case, you might optimize the basic topological sorting algorithm by making an O(m) initial pass through the members. Simply look for mutually recursive pairs; when both members in a mutually recursive pair's predecessor counts are reduced to 1, that pair is eligible to be put on the output queue. While this computation remains O(m), the constant factor may be high enough that it is not worth doing. We've tested the implementation on Windows 95/NT with Microsoft Visual C++ 4.2, and on Solaris with the Sparcworks compiler and ObjectSpace's STL library. The file topsort.h (available electronically; see "Availability," page 3) is the complete implementation of the topsort, cycles, and topsortWithCycles algorithms presented here. The file driver.cpp (also available electronically) illustrates different ways of calling the functions. Example 1 is driver.cpp's output for the example illustrated in Figure 3. Packaging these algorithms completely in a header file is not ideal, since they define auxiliary types, which include some noninline functions; however, it makes the clearest explanation. The topsort algorithm produces a linear order for its input obeying the partial order specified by the relations. It returns true if there is a cycle, and false otherwise (but does not give any information about the cycle). The cycles algorithm produces a list of cycles in the input. The topsortWithCycles algorithm extends topsort to return an order whether or not cycles are present. The functions are templatized to allow them to operate on arbitrary types. Example 2 shows the declaration of topsort. For efficiency, the underlying GraphInfo class operates on integer indices, rather than the iterators that are used in the external interface. The functions topsort, cycles, and topsortWithCycles first convert the relations to relations between integer indices, and use GraphInfo to operate on the integers, then use the result to order their output. While this may appear unnecessary, it is actually important to ensure the time complexity of algorithm. The aforementioned complexity analysis assumed that the time to index into the count array and get the list of successors of a member is O(1), and that the time to assign a result is also O(1). If strings were being used as indices, for example, array lookups would be O(log n) complexity, rather than O(1), and so the overall complexity would be O((n+m) · log n). For integers, both assignments and lookups are indeed O(1). One additional implementation note: topsort checks for relations between an integer and itself, and simply ignores them. Strictly speaking, such a relation is a cycle of length 1, but such a cycle does not affect the linear ordering. It is easy and efficient to deal with this as a special case. topsortWithCycles illustrates the combination of cycle-finding and topological sorting to produce an approximate topological sort in the presence of cycles. For simplicity, this implementation always searches for cycles; if cycles are relatively rare, it may be more efficient to first perform a topological sort, and then find and expand cycles from the remaining relations only if there is at least one cycle. Copyright © 1997, Dr. Dobb's Journal
{"url":"http://www.drdobbs.com/database/topological-sorting/184410262","timestamp":"2014-04-16T14:22:19Z","content_type":null,"content_length":"106516","record_id":"<urn:uuid:1f4ec77b-5236-4a82-a961-f0012a277f30>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
APL programming language A Programming Language , or sometimes Array Processing Language ) is a programming language invented in Kenneth E. Iverson while at Harvard University . Iverson received the Turing Award for his work. "APL, in which you can write a program to simulate shuffling a deck of cards and then dealing them out to several players in four characters, none of which appear on a standard keyboard." — David Given Within its chosen domain, APL is an extremely powerful, expressive and concise programming language. It was originally created as a way to express mathematical notation in a rigorous way that could be interpreted by a computer. It is easy to learn but APL programs can take some time to understand. Unlike traditional structured programming languages, code in APL is typically structured as chains of monadic or dyadic operators acting on arrays. Because APL has so many nonstandard operators, APL does not have operator precedence. The original APL did not have control structuress (loops, if-then-else), but the array operations it included could simulate structured programming constructs. For example, the iota operator (which yields an array from 1 to N) can simulate for-loop . APL systems are typically interactive. The APL environment is called a workspace. In a workspace the user can define programs and data, i.e. the data values exists also outside the programs, and the user can manipulate the data without the necessity to define a program, for example: N .leftarrow 4 5 6 7 Assign the values 4 5 6 7 to N. Print the values 8 9 10 11 Print the sum of N, i.e. 22 The user can save the workspace with all values and programs. In any case, the programs are not compiled but interpreted. APL is notorious for its use of a set of non-ASCII symbols that are an extension of traditional arithmetic and algebraic notation. These cryptic symbols, some have joked, make it possible to construct an entire air traffic control system in two lines of code. Because of its condensed nature and non-standard characters, APL has sometimes been termed a "write-only language", and reading an APL program can feel like decoding an alien tongue. Because of the unusual character set, many programmers used special APL keyboards in the production of APL code. Nowadays there are various ways to write APL code using only ASCII characters. Iverson designed a successor to APL called J which uses ASCII "natively". So far there is a sole single source of J implementations: http://www.jsoftware.com/ Other programming languages offer functionality similar to APL. A+ is an open source programming language with many commands identical to APL. Here's a "Hello World" program in APL: 'Hello World' Here's how you would write a program that would a word list stored in vector X according to word length: X[X+.¬' ';] Here's a program that find all prime numbers from 1 to N: (.tilde N .contains N .circle . .product N)/N .leftarrow 1 .downarrow .iota N Here's how to read it, from right to left: 1. .iota N creates a vector containing integers from 1 to N (if N = 6 at the beginning of the program, .iota N is {1, 2, 3, 4, 5, 6} 2. Drop first element of this vector (.downarrow function), i.e. 1. So 1 .downarrow .iota N is {2, 3, 4, 5, 6} 3. Set N to the vector (.leftarrow, assignment operator) 4. Generate outer product of R multiplied by R, i.e. a matrix which is the multiplication table of R by R (.circle . .product function) 5. Build a vector the same length as N with 1 in each place where the corresponding number in N is in the outer product matrix (.contains, set inclusion function), i.e. {0, 0, 1, 0, 1} 6. Logically negate the values in the vector (change zeros to ones and ones to zeros) (.tilde, negation function), i.e. {1, 1, 0, 1, 0} 7. Select the items in N for which the corresponding element is 1 (slash function), i.e. {2, 3, 5} Here's the equivalent in (another "write-only" language): perl -le '$_ = 1; (1 x $_) !~ /^(11+)\\1+$/ && print while $_++' • A Programming Language (1962), by Kenneth E. Iverson • History of Programming Languages, chapter 14 External links: [http://www.bath.ac.uk/~ma1flfs/cm20168/ - APL - A Programming Language], Free APL Compiler
{"url":"http://www.fact-index.com/a/ap/apl_programming_language.html","timestamp":"2014-04-16T04:33:08Z","content_type":null,"content_length":"8856","record_id":"<urn:uuid:962c4f92-d991-4bbc-8b90-3fbb50d8bf83>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
The Relativistic Big Bang. There are many reasons why Relativity holds true. The absence of transverse contraction according to Lorentz's discovery is definitely the most important one. Such a result, which is the main characteristic of the Relativistic Doppler effect, implies a slower pulsation rate according to Lorentz's time equation. The well-known Lorentz-FitzGerald longitudinal contraction and Lorentz's "local time" are also of the utmost importance when it comes to explaining Relativity. The Alpha Transformations. The problem is that the Lorentz Transformations are misleading. They must be applied to a moving system because the x' and t' variables refer to stationary one. Instead of space and time units, the variables should rather be given in wavelength and phase units. In addition, Voigt, Lorentz, Poincare, and Larmor had to elaborate complicated demonstrations using Maxwell's equations. As a matter of fact, Lorentz himself wrote in 1920 that less than 10 physicists in the whole world were able to explain Relativity. Today, it is even worse, and that is why most astronomers no longer rely on elementary Relativity in order to understand the Big Bang and the expansion of the Universe. Fortunately, I could elaborate a more practical equation set which applies to Ivanov's standing waves. Because it may be applied to sound waves with similar results, the use of Maxwell's equations is no longer relevant. I called this equation set the Alpha transformations because it is the very basis of matter mechanics, which is known to be the Wave Mechanics since Louis de Broglie. Although they primarily reflect the behavior of standing waves, their reversed form is surprisingly very similar to the original Lorentz transformations, which apply to matter. The Alpha Transformations. The Alpha Transformations do reproduce Ivanov's standing waves in a moving environment. But they may also be reproduced using the usual wave addition method because, at least theoretically, it is the superimposition of two waves trains traveling in opposite direction and whose wavelength differ. Ivanov's waves may also be reproduced using my Time Scanner, the Delmotte-Marcotte virtual wave medium, or an acoustic device made out of a microphone and two distant loudspeakers in the presence of wind. Because all five methods yield the same results, the behavior of this fascinating phenomenon is not disputable. Ivanov's waves. At first glance, this phenomenon might not look that important. However, it can be shown that it is closely related to the Lorentz Transformations. The node and antinode structure is moving at the "alpha" speed in the direction of the shorter waves so that its intrinsic energy is also moving at the alpha speed. Especially, as it was explained above, this system exhibits a slower pulsation rate and a longitudinal contraction with respect to the wavelength geometrical mean. What's more, because of the presence of a stunning phase wave, a series of clocks regulated according to the local phase would obviously display Lorentz's "local time". Such effects are indeed identical to those of the Lorentz Transformations. The Lorentz Transformations apply to all galaxies in the Universe. The point is that today's astronomers do not believe that remote galaxies are undergoing the Lorentz transformations. In their picture, because the Universe is expanding, galaxies are practically stationary with respect to the local "space fabric". This is known as the "raisin pudding model". Thus, even though we are surely not in the center of the Universe, we are still observing that all galaxies around us are receding according to Hubble's law. This interpretation of the Big Bang is incorrect, though, because Relativity always holds true. It does not tolerate any exception. Astronomers are aware that today's instruments and methods are amazingly accurate, so that relativistic speeds are no longer necessary in order to verify it. Especially, the speed of a moving space ship as measured from another one is verifiable using a Doppler radar. Hence, they must realize that some new crucial experiences are already possible. They should examine more carefully the Relativistic Big Bang hypothesis, simply because it will soon be The expanding Universe is relativistic. That is, very distant and fast galaxies are undergoing the Lorentz transformations. They are emitting radio waves, light, X-rays and gamma rays according to a slower rate of time, and that is why the resulting Doppler effect is relativistic. Rearward relativistic Doppler: lambda' = lambda * (1 + beta) / g Relativistic redshift: R = (1 + beta) / g According to Relativity, even the most distant galaxies cannot reach the speed of light. Hence, a redshift of 2 or more cannot indicate that the speed of a galaxy is faster than the speed of light. For this reason, it is not acceptable to deal with a so-called "z" redshift, which is given by the wavelength ratio minus 1 in order to indicate the beta (or v / c) speed. Using the more acceptable redshift R shown above, the galaxy normalized speed is given by: beta = 2 / ((1 / R)^2 + 1) – 1 The most distant galaxies are also severely contracted according to the Lorentz-Fitzgerald contraction. This includes distances between them. As a result, the cosmic sphere seems to contain far more galaxies near its limits than in its central area. In the graphics below, seven galaxies (A to G) are placed and transformed according to the Lorentz transformations. This way, any of them may be considered to be stationary in the center of the universe, and the two neighboring galaxies seem to move away at the same distance and at the same alpha speed. The Cosmic Sphere. In this example, the beta speed is 0.5 for C and the alpha intermediate speed is 0.2679 for B. As seen from the center A, the more the galaxies are distant and fast, the more they are contracted. But surprisingly, observer B also observes that he is stationary and that all galaxies are moving away from him. This happens because he is moving towards the waves incoming from the right. The result of this is that his perception of the time (his "local time") is distorted. The Time Scanner is capable of reproducing the equivalent distortion. The FreeBasic program: Big_Bang_02_Doppler_Lorentz_Scan.bas The Alpha suite. Most of the time, the Doppler effect is the cause of the forward vs. rearward wavelength difference. Christian Doppler himself pointed out in 1842 that this difference is unnoticeable if the observer is moving along with the transmitter. In this case, Ivanov's waves are moving at the same speed. The video below shows that this fundamental result is consistent with both the acoustic and relativistic Doppler effect. However, in the case of the relativistic Doppler effect, the slower frequency must be taken into account, and the result of this is that the Doppler effect seems to be relative. It is no longer possible to deduce one's absolute speed from it because of the amazing symmetry. This may easily be demonstrated using an "Alpha Suite", that is, any suite based on a constant alpha reference speed. The suite must be calculated according to Poincare's law of speed addition: beta = (alpha + alpha) / (1 + alpha * alpha) beta' = (alpha + beta) / (1 + alpha * beta) beta'' = (alpha + beta') / (1 + alpha * beta') For example, let's suppose that observer A below is stationary and that observer C is moving at beta = 0.5 times the speed of light. According to Poincare, the observer B must move at an intermediate alpha speed in order to see both A and C moving away from him at the same alpha speed. The alpha speed is given by: Or more simply: alpha = (1 – g) / beta alpha = (1 – 0.866025) / 0.5 = 0.267949 And inversely: beta = (alpha + alpha) / (1 + alpha * alpha) = 0.5 Thus, in the graphics below, A is stationary, B is moving at 0.267949 c and C is moving at 0.5 c. This situation is remarkable because B may consider that A and C are moving away from him at the alpha speed. But the situation of D is even more remarkable because his observations are exactly identical to those of B. This is the most stunning effect of Relativity: any speed seems to be relative so that the absolute speed cannot be recorded any more. This "theorem" is based on Ivanov's waves. It shows that the situation of observers B and D is equivalent. Their absolute motion is undetectable because they are recording the same data. The redshift from their two neighbors seems identical: R = 1.316074 times the regular wavelength. The measured redshift apparently indicates the alpha speed = 2 / ((1 / 1.316074)^2 + 1) – 1 = 0.267949 This occurs because the measures of D are far more distorted as a result of the Lorentz-FitzGerald contraction. The animation below proves that, using the Hertz test, observer B measures identical wavelengths from A and C. The Lorentz Tri-Dimensional Transformations. On a transverse axis, the expansion of the Universe produces a contraction which is incompatible with Lorentz's y' = y; z' = z equations. It is negligible at any scale smaller than that of our galaxy, yet it is unavoidable here because the goal is to show how the Universe is expanding. At the scale of the Universe, the Lorentz Transformations produce a transverse contraction. On April 30, 2010, I released a tri-dimensional version of the Lorentz Transformations (see below). Using this equation set, it is now possible to show the "Big Bang" and the expansion of the Universe: The program: Big_Bang_01_Relativistic.bas It turns out that the x axis is a preferred one, where Lorentz's t' phase (or time) applies and where the Lorentz-FitzGerald contraction takes place using Lorentz's original contraction factor. The y axis is involving a secondary level of transformation as seen by the observer moving on the x axis. Similarly, the transformation on the z axis is a tertiary one as seen by an observer moving on both x and y axes. Thus, the all-azimuth transformations shown below are asymmetric. Fortunately, it is possible to bypass this constraint by accelerating the speed according to beta[y] / g[x] or beta[z] / g[y] in order to obtain symmetrical results, for example if the goal is to obtain a 45° direction. Of course, it is also possible to elaborate a different equation set in order to obtain symmetrical results, but this option proves to be far more complicated. It also leads to speeds faster than light, hence inconsistent with Relativity. On the positive side, this equation set is consistent with Poincare's law of speed addition. For instance, a beta speed of 0.9999 on all three x, y and z axes still produces a resulting beta[xyz] speed slower than the speed of light. This leads to a new law of speed composition: g[xyz] = g[x] * g[y] * g[z] . Below are the Lorentz all-Azimuth Transformations: The Lorentz all-Azimuth tri-Dimensional Transformations. The formula: g[xyz] = g[x] * g[y] * g[z] introduces a new law of relativistic speed composition. Henri Poincare is the author of a similar law on relativistic speed addition. In May and June 2009, I worked hard in order to re-arrange and simplify this equation set. Now, it is quite nice. Even the de Broglie's phase wave is reproducible using it. Using the Delmotte-Marcotte wave medium, I could check that those transformations work perfectly, especially when it comes to reproducing the relativistic Doppler effect. I also succeeded in transforming Lorentz himself. Now, that's what I call the Lorentz "transformations"! The contraction especially is well visible. The goal was actually to show that the phase wave is linked to the contraction (an ellipse is needed instead of a circle). This way, the phase wave and the relativistic Doppler effect are perfectly superimposed. The all-azimuth transformations are a must in order to transform simultaneously several structures or wave emitters whose direction is not the same. In the animations suggested here, one may check that the transformations are perfectly achieved. What's even better is their effect on a transmitter. In addition to the contraction, the pulsation phase is slowed down and it is modified in order to mach the phase wave. It should be emphasized that the Time Scanner reproduces the same effects using only the phase wave and the alpha speed. According to the animations below, the phase wave may be interpreted in different ways. I very carefully reproduced the relativistic Doppler effect near to the center of the transmitter. After all, it is the main purpose of the Lorentz Transformations. Similarly, below is the most recent representation of my moving electron, which was obtained by means of the all-azimuth transformations. It was firstly shown in my 2002 book: "Matter is made of waves". Here, it is moving along a diagonal. Mr. Jocelyn Marcotte pointed out that its structure may be given by the sinus cardinalis: y = sin(x) / x. All this indicates that any moving material system is undergoing three fundamental transformations: 1 – The system experiences a contraction on the displacement axis according to Lorentz's factor. 2 – Events in this system are occurring at a slower rate of time according to Lorentz's factor. 3 – Events at the rear of this system are occurring sooner according to Lorentz's time equation. The Universe is likely to be expanding regularly. It was shown above that any expansion phenomenon must be relativistic. It is not possible to measure some consistent data in the presence of three A, B, C space ships regularly spaced without respecting the alpha speed of the central one. For example, speeds such as 0.0000001 c for A, 0.0000002 c for B and 0.0000003 c for C would definitely lead to an asymmetry because their initial equal distance will soon become unequal and measurable as such by both A and C. In short, the relativistic expansion is verifiable and measurable. What's more, it is consistent with Relativity. Thus, I am of an opinion that rejecting the relativistic hypothesis is highly imprudent, if not illogic. That is why Mr. Saul Perlmutter cannot deduce from his discovery that the Universe is expanding in an accelerated manner. It was even more imprudent to put forward the "dark energy" hypothesis in order to explain it. I do recognize the importance of his discovery, but it must definitely be interpreted as a confirmation that the expansion of the Universe is relativistic. The curves below are correct on condition that the Hubble constant is really constant, which is still to be demonstrated. So, let's suppose it is constant. In addition, one must realize that, if the expansion of the Universe is relativistic, the speed of light is unattainable. This is why the purple curve strongly deviates from the blue one when it approaches c. But surprisingly, although the well-known regular brightness (green curve) of a supernova differs if it is relativistic (red curve), the difference is rather small. The important point is that the difference matches Mr. Perlmutter's observations. This result indicates that the expansion of the Universe is relativistic. It doesn't indicate that the expansion is accelerated. I am quite sure that this trend will be confirmed in many years from now, as more and more observations on very distant and fast supernovae will be available. The Expansion of the Aether. The aether itself may be expanding as a result of its elasticity. In this case, one should admit that very distant galaxies may be receding much faster than the speed of light. What's more, considering that they should be stationary with respect to the local aether, the Lorentz Transformations would not apply. And finally, because all galaxies would be accelerated thanks to the aether elasticity, this property would definitely account for a so-called "dark energy". This picture is strangely similar to what astronomers believe today. However, such an expansion must be still compatible with Relativity. Otherwise, it would be measurable in an absolute way so that the center of the Universe would become verifiable. The answer to this ultimate question is well beyond our reach today because there are so many variables and hypotheses. Especially, a severe redshift is difficult to measure because the radiation energy is dimming proportionally. Considering the distance, even X-rays are becoming very faint if they are shifted into infrared. Considering that they are moving at nearly the speed of light, fastest galaxies are supposed to be seen about 13.5 billion light-years away. But today, because the light had to travel during 13.5 billion years, they may be actually about 27 billion light-years away. One should nevertheless be aware that such a distortion as a result of the light traveling delay must imperatively be perceived as a space contraction which partially accounts for the Lorentz-FitzGerald contraction. In addition, though, because the aether is expanding at the speed of light, the light emitted toward us by those galaxies is no longer capable of reaching our telescopes. On the one hand, galaxies receding faster than the speed of light would become totally invisible. On the other hand, galaxies receding at nearly the speed of light would exhibit a severe redshift because, in this area, the speed of light with respect to us is very slow. Hence, their actual distance would be far greater than 27 billion light-years away. That is why the result is finally comparable to that of a pure relativistic Big Bang, albeit it is not perfectly identical. All this is highly hypothetical. However, we are still capable of building larger telescopes. I strongly think that, in the future, and whatever the direction, we will observe more and more galaxies very near to the limits of the Cosmic Sphere. This will definitely indicate that the center of the Universe is unverifiable and that the expansion of the Universe is – or seems – relativistic.
{"url":"http://www.rhythmodynamics.com/Gabriel_LaFreniere/sa_Relativistic_Big_Bang.htm","timestamp":"2014-04-20T03:09:41Z","content_type":null,"content_length":"61836","record_id":"<urn:uuid:a49f5efa-ac3f-4f9f-87c1-7d1e49cc3073>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
[Physics FAQ] - [Copyright] Updated by Terence Tao 1997. Original by Philip Gibbs 1996. How Do You Add Velocities in Special Relativity? Suppose an object A is moving with a velocity v relative to an object B, and B is moving with a velocity u (in the same direction) relative to an object C. What is the velocity of A relative to C? u -------> A -------> B C w In non-relativistic mechanics the velocities are simply added and the answer is that A is moving with a velocity w = u+v relative to C. But in special relativity the velocities must be combined using the formula u + v w = --------- 1 + uv/c^2 If u and v are both small compared to the speed of light c, then the answer is approximately the same as the non-relativistic theory. In the limit where u is equal to c (because C is a massless particle moving to the left at the speed of light), the sum gives c. This confirms that anything going at the speed of light does so in all reference frames. This change in the velocity addition formula is not due to making measurements without taking into account time it takes light to travel or the Doppler effect. It is what is observed after such effects have been accounted for and is an effect of special relativity which cannot be accounted for with newtonian mechanics. The formula can also be applied to velocities in opposite directions by simply changing signs of velocity values or by rearranging the formula and solving for v. In other words, If B is moving with velocity u relative to C and A is moving with velocity w relative to C then the velocity of A relative to B is given by w - u v = --------- 1 - wu/c^2 Notice that the only case with velocities less than or equal to c which is singular is w = u = c which gives the indeterminate zero divided by zero. In other words it is meaningless to ask the relative velocity of two photons going in the same direction. How can that be right? Naively the relativistic formula for adding velocities does not seem to make sense. This is due to a misunderstanding of the question which can easily be confused with the following one: Suppose the object B above is an experimenter who has set up a reference frame consisting of a marked ruler with clocks positioned at measured intervals along it. He has synchronised the clocks carefully by sending light signals along the line taking into account the time taken for the signals to travel the measured distances. He now observes the objects A and C which he sees coming towards him from opposite directions. By watching the times they pass the clocks at measured distances he can calculate the speeds they are moving towards him. Sure enough he finds that A is moving at a speed v and C is moving at speed u. What will B observe as the speed at which the two objects are coming together? It is not difficult to see that the answer must be u+v whether or not the problem is treated relativistically. In this sense velocities add according to ordinary vector addition. But that was a different question from the one asked before. Originally we wanted to know the speed of C as measured relative to A not the speed at which B observes them moving together. This is different because the rulers and clocks set up by B do not measure distances and times correctly in the reference from of A where the clocks do not even show the same time. To go from the reference frame of A to the reference frame of B you need to apply a Lorentz transformation on co-ordinates as follows (taking the x-axis parallel to the direction of travel): x[B] = γ(v)( x[A] - v t[A] ) t[B] = γ(v)( t[A] - v/c^2 x[A] ) γ(v) = 1/sqrt(1-v^2/c^2) To go from the frame of B to the frame of C you must apply a similar transformation x[C] = γ(u)( x[B] - u t[B] ) t[C] = γ(u)( t[B] - u/c^2 x[B] ) These two transformations can be combined to give a transformation which simplifies to x[C] = γ(w)( x[A] - w t[A] ) t[C] = γ(w)( t[A] - w/c^2 x[A]) u + v w = --------- 1 + uv/c^2 This gives the correct formula for combining parallel velocities in special relativity. A feature of the formula is that if you combine two velocities less than the speed of light you always get a result which is still less than the speed of light. Therefore no amount of combining velocities can take you beyond light speed. Sometimes physicists find it more convenient to talk about the rapidity r, which is defined by the relation v = c tanh(r/c) The hyperbolic tangent function tanh maps the real line from minus infinity to plus infinity onto the interval −1 to +1. So while velocity v can only vary between -c and c, the rapidity r varies over all real values. At small speeds rapidity and velocity are approximately equal. If s is also the rapidity corresponding to velocity u then the combined rapidity t is given by simple addition t = r + s This follows from the identity of hyperbolic tangents tanh x + tanh y tanh(x+y) = ------------------- 1 + tanh x tanh y Rapidity is therefore useful when dealing with combined velocities in the same direction and also for problems of linear acceleration For example, if we combine the speed v n times, the result is w = c tanh( n tanh^-1(v/c) ) The velocity addition formula for non-parallel velocities The previous discussion only concerned itself with the case when both velocities v and u were aligned along the x-axis; the y and z directions were ignored. Consider now a more general case, where B is moving with velocity v = (v[x],0,0) in A's reference frame, and C is moving with velocity u = (u[x], u[y], u[z]) in B's reference frame. The question is to find the velocity w = (w[x], w[y], w[z]) of C in A's reference frame. This is still not quite the most general situation, since we are assuming B to be moving in the direction of A's x-axis, but it is a decent compromise, since the most general formula is somewhat messy. In any event, one can always orient A's frame using Euclidean rotations so that B's direction of motion lies along the x There is one additional assumption we will need to make before we can give the formula. Unlike the case of one spatial dimension, the relative orientations of B's frame of reference and A's frame of reference is now important. What B perceives as motion in the x-direction (or y-direction, or z-direction) may not agree with what A perceives as motion in the x-direction (etc.), if B is facing in a different direction from A. We will thus make the simplifying assumption that B is oriented in the standard way with respect to A, which means that the spatial co-ordinates of their respective frames agree in all directions orthogonal to their relative motion. In other words, we are assuming that y[B] = y[A] z[B] = z[A] In the technical jargon, we are requiring B's frame of reference to be obtained from A's frame by a standard Lorentz transformation (also known as a Lorentz boost). In practice, this assumption is not a major obstacle, because if B is not initially oriented in the standard way with respect to A, it can be made to be so oriented by a purely spatial rotation of axes. However, it should be warned that if B is oriented in the standard way with respect to A, and C is oriented in the standard way with respect to B, then it is not necessarily true that C is oriented in the standard way with respect to A! This phenomenon is known as precession. It's roughly analogous to the three-dimensional fact that, if one rotates an object around one horizontal axis and then about a second horizontal axis, the net effect would be a rotation around an axis which is not purely horizontal, but which will contain some vertical components. If B is oriented in the standard way with respect to A, the Lorentz transformations are given by x[B] = γ(v[x])( x[A] - v[x] t[A] ) y[B] = y[A] z[B] = z[A] t[B] = γ(v[x])( t[A] - v[x]/c^2 x[A] ) Since C is moving along the line (x[B],y[B],z[B],t[B]) = (u[x] t, u[y] t, u[z] t, t) (t real), we see, after some computation, that in A's frame of reference C is moving along the line (x[A],y[A],z[A],t[A]) = (w[x] s, w[y] s, w[z] s, s) (s real), u[x] + v[x] w[x] = ------------ 1 + u[x]v[x]/c^2 w[y] = ------------------- (1 + u[x]v[x]/c^2) γ(v[x]) w[z] = ------------------- (1 + u[x]v[x]/c^2) γ(v[x]) γ(v[x]) = 1/sqrt(1 - v[x]^2/c^2) Thus the velocity w = (w[x], w[y], w[z]) of C with respect to A is given by the above three formulae, assuming that B is orientated in the standard way with respect to A. Note that if u[y]=u[z]=0 then this reduces to the simpler velocity addition formula given before. References: "Essential Relativity", W. Rindler, Second Edition. Springer 1977. Relative speeds If an observer A measures two objects B and C to be travelling at velocities u = (u[x], u[y], u[z]) and v = (v[x], v[y], v[z]) respectively, one may ask the question of what the relative speed between B and C are, or in other words at what speed w B would measure C to be travelling at, or vice versa. In galileian relativity the relative speed would be given by w^2 = (u-v).(u-v) = (u[x] - v[x])^2 + (u[y] - v[y])^2 + (u[z] - v[z])^2. However, in special relativity the relative speed is instead given by the formula (u-v).(u-v) - (u × v)^2/c^2 w^2 = ------------------------- (1 - (u.v)/c^2)^2 where u-v = (u[x] - v[x], u[y] - v[y], u[z] - v[z]) is the vector difference of u and v, u.v = u[x] v[x] + u[y] v[y] + u[z] v[z] is the inner product of u and v, and u×v is the vector product for which (u×v)^2 = (u.u)(v.v) - (u.v)^2. When u[y] = u[z] = v[y] = v[z] = 0, the formula reduces to the more familiar |u[x] - v[x]| w = ------------- 1 - u[x] v[x]/c^2 N.M.J. Woodhouse, "Special Relativity", Lecture Notes in Physics (m: 6), Springer Verlag, 1992. J.D. Jackson, "Classical Electrodynamics", 2nd ed., 1975, ch 11. P. Lounesto, "Clifford Algebras and Spinors", CUP, 1997.
{"url":"http://math.ucr.edu/home/baez/physics/Relativity/SR/velocity.html","timestamp":"2014-04-19T12:00:18Z","content_type":null,"content_length":"14221","record_id":"<urn:uuid:bfd6da41-7f27-4730-b3c3-299d0ce476c6>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Bronx Algebra 1 Tutor Find a Bronx Algebra 1 Tutor ...I am currently tutoring two students who will be taking this test in November. I feel my experience with tutoring for the math regents, the GED, and the 10th grade International Exam have prepared me in all areas required for the COOP/HSPT exam. I have been tutoring since January 2013 with WyzAnt. 16 Subjects: including algebra 1, reading, GED, GRE ...I specialize in SAT/ACT Math. I teach students how to look at problems, how to break them down, which methods, strategies, and techniques to apply, and how to derive the quickest solution. I go through problems step-by-step and show students what to look for and what tools are necessary. 30 Subjects: including algebra 1, reading, English, grammar ...I can tutor almost any Math subject but my favorite discipline to tutor is Physics. I am very enthusiastic about learning HOW things work. I graduated Rensselaer Polytechnic Institute in 2010 and obtained a dual major in Physics and Applied Mathematics. 15 Subjects: including algebra 1, chemistry, calculus, algebra 2 ...Before that I student taught at Scarsdale High School and the Byram Hills middle school (H. C. Crittenden). I hold NYS certification in math education, grades 7 to 12, and students with 7 Subjects: including algebra 1, geometry, algebra 2, trigonometry ...So using different methods and techniques are important in the process of learning. I remember as a student in math class, other classmates and I always had problems learning via the teacher. Therefore, I began to ignore the teacher, went by the examples in books and was able to not only learn the material but also teach classmates as well. 11 Subjects: including algebra 1, geometry, trigonometry, elementary (k-6th)
{"url":"http://www.purplemath.com/bronx_ny_algebra_1_tutors.php","timestamp":"2014-04-16T07:29:18Z","content_type":null,"content_length":"23790","record_id":"<urn:uuid:e5fa031d-48a0-4b3b-8fca-afefd949955f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Integers are the combination of positive and negative numbers. They can be represented as - There are lot of calculators associated with integers please go through the list below. Also, there is a simple calculator here which takes an integer and tells whether its even or odd. Even numbers are the numbers that is divisible by 2 and odd numbers are the numbers that cannot be divided exactly divided by 2. Try our other Integers Calculator and get your problems solved instantly.
{"url":"http://calculator.tutorvista.com/integer-calculator.html","timestamp":"2014-04-19T17:32:20Z","content_type":null,"content_length":"30889","record_id":"<urn:uuid:83ec3208-e4c6-421e-9419-b2a75b35bada>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
move mails which marked as SPAM into Junk folder Dont work [solved] Topic: move mails which marked as SPAM into Junk folder Dont work [solved] I read this faq: http://www.iredmail.org/forum/topic365- … older.html I followed the tutorial, most emails are not sent to the junk folder, which could be happening? Re: move mails which marked as SPAM into Junk folder Dont work [solved] I added another item in the FAQ moment ago, please try it: • Make sure your domain name is listed in /etc/amavisd.conf (RHEL/CentOS) or /etc/amavis/conf.d/50-user (Debian/Ubuntu). e.g. @local_domains_maps = ['demo.iredmail.org', 'a.cn']; Re: move mails which marked as SPAM into Junk folder Dont work [solved] Also make sure that the user does not have sieve rules of his own (like "out of office", forward, filters and so on), because in this case, the server global default sieve rule is skipped. Re: move mails which marked as SPAM into Junk folder Dont work [solved] ZhangHuangbin wrote: I added another item in the FAQ moment ago, please try it: □ Make sure your domain name is listed in /etc/amavisd.conf (RHEL/CentOS) or /etc/amavis/conf.d/50-user (Debian/Ubuntu). e.g. @local_domains_maps = ['demo.iredmail.org', 'a.cn']; My configuration: @local_domains_maps = ['email-server.fama.br', 'fama.br']; Still does not work ... Re: move mails which marked as SPAM into Junk folder Dont work [solved] So look in i /var/log/sieve.log see: deliver(user@fama.br): Nov 22 11:13:19 Error: fopen(/var/vmail/sieve/dovecot.sieve) failed: Permission denied Permission probleman in global sieve file ? Re: move mails which marked as SPAM into Junk folder Dont work [solved] webmail:~# ls -al /var/vmail/sieve/ total 24 drwx------ 3 vmail vmail 4096 Out 21 19:38 . drwx------ 4 vmail vmail 4096 Out 6 21:00 .. -rw-r--r-- 1 root root 111 Out 21 19:38 .dovecot.sieve -rwx------ 1 root root 2023 Nov 26 02:43 dovecot.sieve webmail:~# chown vmail.vmail /var/vmail/sieve/dovecot.sieve webmail:~# ls -al /var/vmail/sieve/ total 28 drwx------ 3 vmail vmail 4096 Nov 28 17:19 . drwx------ 4 vmail vmail 4096 Out 6 21:00 .. -rw-r--r-- 1 root root 111 Out 21 19:38 .dovecot.sieve -rwx------ 1 vmail vmail 2023 Nov 26 02:43 dovecot.sieve -rw------- 1 vmail vmail 112 Nov 28 17:19 dovecot.sievec -r-x------ 1 vmail vmail 1138 Out 6 21:01 dovecot.sieve.sample drwx------ 20 vmail vmail 4096 Nov 27 08:19 fama.br I hope this work now rsrs
{"url":"http://www.iredmail.org/forum/post1944.html","timestamp":"2014-04-20T14:29:39Z","content_type":null,"content_length":"18464","record_id":"<urn:uuid:7e5b6ef8-f815-40d5-9f1f-6862513a4c58>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Quick counting problem. Find the number of solutions to x1 x2 + x3 + x4 = 12 if 0 =< x1 =< 2 (i.e. x1 = 0, 1 or... - Homework Help - eNotes.com Quick counting problem. Find the number of solutions to x1 x2 + x3 + x4 = 12 if 0 =< x1 =< 2 (i.e. x1 = 0, 1 or 2) Any help would be appreciated! Are you missing a plus sign between x1 and x2? If so, we can try solving it this way: Think of the problem in three parts. How many ways can we write 12 as the sum of three integers (x1=0), how many ways can we write 11 as the sum of three integers (x1=1), and how many ways can we write 10 as the sum of three integers (x1=2)? Can x2, x3, and x4 be any integer? Only positive? Anything non-negative? Assuming that x2, x3, and x4 must be positive numbers, then we need to find the number of partitions of 10, 11, and 12 into exactly 3 parts. (Where p(n,k) is the number of ways to write n as the sum of exactly k numbers) So there are 8+10+12 = 30 solutions to x1+x2+x3+x4=12 with x2,x3,x4 all positive and 0<=x1<=2. Assuming x2,x3, and x4, can be 0 or positive, then we have more solutions. We need to find the number of partitions of 10,11, and 12 into at most 3 parts. (Where p'(n,k) is the number of ways to write n as the sum of at most k numbers) So there are 14+16+19=49 solutions to x1+x2+x3+x4=12 with x2,x3,x4 all 0 or positive, and 0<=x1<=2 Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/quick-counting-problem-hep-please-338560","timestamp":"2014-04-17T10:39:35Z","content_type":null,"content_length":"26979","record_id":"<urn:uuid:a51ff983-c9ff-4ac2-b218-3a9515bef005>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
L-MATH is a library for performing simple linear algebra. Vector and matrix classes are available, as are simple linear interpolation functions and various operations related to creating rotation Vectors can be constructed using the VECTOR and TO-VECTOR functions. VECTOR accepts a list of elements, like so: (lm:vector 1 2 3) => #<L-MATH:VECTOR 1.000 2.000 3.000 > The VECTOR's dimension is defined by the number of elements in the VECTOR function's lambda list. (lm:dimension (lm:vector 1 2 3 4)) => 4 TO-VECTOR is intended to transform other types into the VECTOR type. At the moment it supports transforming lists and, trivially, other vector objects: (lm:to-vector (list 1 2 3)) => #<L-MATH:VECTOR 1.000 2.000 3.000 > Importantly, TO-VECTOR allows the vector's length to be modified: (lm:to-vector (list 1 2 3) :dimension 2) => #<L-MATH:VECTOR 1.000 2.000 > (lm:to-vector (lm:vector 1 2) :dimension 3) => #<L-MATH:VECTOR 1.000 2.000 0.000 > Vectors can typically be represented as lists. For instance: (lm:dot-product (lm:vector 1 0 1) (list 0 1 0)) => 0 Various operations (listed below) are available for VECTOR objects. Many of these functions will also accept lists as VECTOR (lm:dimension VECTOR) Returns the VECTOR's dimension. (lm:length VECTOR) Synonym for DIMENSION. (lm:norm VECTOR) Returns the VECTOR's length. (lm:vector= LHS RHS) Returns T iff the two vectors are equal. Internally, the VECTOR class stores the data as an array of double-floats. Because of rounding errors it is not advisable to compare floating point values exactly. VECTOR= uses the special variable *equivalence-tolerance* to define the tolerance within which two vectors are considered equal. *equivalence-tolerance* defaults to 0.0001, which should be reasonable for most applications. For example: (lm:vector= (lm:vector 1 2 3) (lm:vector 1 2 3.1)) => NIL (lm:vector= (lm:vector 1 2 3) (lm:vector 1 2 3.00001)) => T (lm:elt VECTOR index) Returns the element at the given index. This is also a SETFable place. VECTORs are zero based. (lm:x VECTOR) (lm:y VECTOR) (lm:z VECTOR) (lm:w VECTOR) Returns the elements at indices 0, 1, 2 and 3 respectively. These are all SETFable places. (lm:dot-product VECTOR) Returns the VECTOR's dot product. (lm:cross-product LHS RHS) Calculates the cross product between two 3-vectors. (lm:angle-between FROM-VECTOR TO-VECTOR) Returns the angle, in radians, needed to align the FROM-VECTOR with the TO-VECTOR. The angle is signed, and is a left-handed rotation. Example: (lm:to-degrees (lm:angle-between (lm:vector 1 0) (lm:vector 0 1))) => 90.0d0 (lm:to-degrees (lm:angle-between (lm:vector 1 0) (lm:vector 0 -1))) => -90.0d0 (lm:euclidean-distance LHS RHS) Calculates the Euclidean distance between two vectors or two numbers. Matrices can be constructed using (lm:MAKE-MATRIX row col &key initial-elements) (lm:make-matrix 2 3 :initial-elements '(1 0 0 0 1 0)) #<L-MATH:MATRIX 2 x 3 1.000 0.000 0.000 0.000 1.000 0.000 > If :initial-elements isn't specified, the matrix elements are initialised to zero. (lm:matrix= LHS RHS) Ensures that two matrices are numerically equivalent. All the real-valued components must be within *equivalence-tolerance* of each (lm:matrix-rows MATRIX)(lm:matrix-cols MATRIX) Returns the number of rows and columns in the matrix. (lm:matrix-elt MATRIX row col) Returns the element at the given row and column. MATRIX objects are zero based. This is a SETFable place. (lm:make-identity SIZE) Returns a SIZE×SIZE identity matrix. (lm:roll-matrix SIZE ANGLE) Returns SIZE×SIZE matrix which will rotate a post multiplied vector around the z-axis. It is a left-handed rotation. The ANGLE is given in radians. SIZE should be either 3 or 4. (lm:yaw-matrix SIZE ANGLE) Returns SIZE×SIZE matrix which will rotate a post multiplied vector around the y-axis. It is a left-handed rotation. The ANGLE is given in radians. SIZE should be either 3 or 4. (lm:pitch-matrix SIZE ANGLE) Returns SIZE×SIZE matrix which will rotate a post multiplied vector around the x-axis. It is a left-handed rotation. The ANGLE is given in radians. SIZE should be either 3 or 4. (lm:create-rotation-matrix VIEW RIGHT UP &optional (SIZE 3)) Creates a rotation matrix from three vectors. VIEW is the direction that the resulting vector should be pointing along, UP is the direction upwards. RIGHT is the vector orthogonal to this. Will return a left-handed rotation matrix. SIZE is the size of the matrix, and should be either 3 or 4. (lm:create-rotation-from-view VIEW WORLD-UP &optional (SIZE (length SIZE))) Given a direction to look in (VIEW), and the direction that is 'upwards' in a given coordinate system, this function creates a rotation matrix to translate into that coordinate system. This rotation is left-handed. SIZE should be either 3 or 4. (lm:create-rotation-from-view-to-view FROM-VIEW TO-VIEW WORLD-UP) Creates a rotation matrix that will rotate the vector FROM-VIEW on to the vector TO-VIEW, using WORLD-UP as the coordinate system's 'upward' direction. This is a left-handed rotation. Example: (let ((rotation (lm:create-rotation-from-view-to-view (lm:vector 1 0 0) (lm:vector 0 1 0) (lm:vector 0 0 1)))) (lm:* rotation (lm:vector 1 0 0))) => #<L-MATH:VECTOR 0.000 1.000 0.000 > (lm:linear-interpolation START END T-VAL) Given two vectors (START and END), and a real valued parameter (T-VAL), this returns a vector between START and END. When T-VAL is zero, this returns START. When T-VAL is 1, this returns END. Values between 0 and 1 return vectors between START and END; values below zero return vectors "before" START; values above 1 return vectors "after" END. The value 0.5 returns the vector exactly between START and END. Example: (lm:linear-interpolation (lm:vector -1 0) (lm:vector 1 0) 0.5) => #<L-MATH:VECTOR 0.000 0.000 > (lm:linear-interpolation (lm:vector 0 0 0) (lm:vector 10 10 10) 2) => #<L-MATH:VECTOR 20.000 20.000 20.000 > (lm:between START END) Returns the vector exactly between the VECTORs START and END. General Operations (lm:equivalent LHS RHS) Returns t iff the two objects are numerically equivalent. Numbers are tested using =. Real-valued objects (REAL types, VECTORs and MATRIXs) are compared to each other using a tolerance *equivalence-tolerance*. VECTORs and MATRIX objects are compared using VECTOR= and MATRIX=. (lm:copy OBJECT) Returns a copy of the given VECTOR, MATRIX or list. (lm:negate OBJECT)(lm:negate! OBJECT) Returns the arithmetic inverse of the given object. NEGATE! does so destructively. Example: (lm:negate (list 1 -2 3)) => (-1 2 -3) (lm:to-radians ANGLE) (lm:to-degrees ANGLE) Converts from radians to degrees, and vice versa. (lm:test-dimensions LHS RHS) Ensures that the two items have the same dimensions. The items may be lists, vectors or matrices in most sensible combinations. This function is useful when implementing your own operations between vectors and matrices to ensure that their dimensions agree. If they do not, a DIMENSION-ERROR condition is signalled. Arithmetic operations All the general arithmetic operations are defined: * (lm:+ LHS RHS) * (lm:- LHS RHS) * (lm:- OBJECT) * (lm:* LHS RHS) * (lm:/ LHS RHS) L-MATH-ERROR: A general condition from which all error conditions for the package inherit. DIMENSION-ERROR: This is signalled when an operation is requested on objects whose dimensions are inappropriate. ZERO-NORM-ERROR: This is signalled on operations which do not make sense for vectors with zero norm. OPERATION-NOT-SUPPORTED: This is signalled when an arithmetic operation is requested on two objects for which the operation is not supported. This should usually not occur, and probably should be considered a bug if it does. General Comments Both VECTOR and MATRIX classes have load forms (MAKE-LOAD-FORM). Internally, the data is stored as arrays of DOUBLE-FLOAT values. For those operations which deal with rotations, note that rotation matrices should be post-multiplied by the vectors. The coordinate system is left-handed, as are the rotations. Supported Compilers L-MATH is known to work on SBCL 1.0.29. While it should work on other compilers, this so far has not been tested. Please feel free to send in reports of which compilers you've successfully run this with, or to file bug reports where L-MATH is having problems. Getting and Installing L-MATH is available from http://launchpad.net/l-math. L-MATH is ASDF (require 'asdf-install) (asdf-install:install 'l-math) Reporting Bugs and Getting Help Bugs can be reported to http://bugs.launchpad.net/l-math. If you have any questions concerning how to use the library, you can ask them at See the file LICENSE for the licensing details. In brief, L-MATH is licensed under the GPL, with additional permissions giving link exceptions (aka the Classpath exception). Importantly for a Common Lisp library, this exception allows you to use this GPLed library in your application regardless of the licenses of the compiler and the other libraries you are using (as long, of course, as you satisfy those licenses). Note that this does not remove the obligation that the rest of the GPL places on you, such as supplying the source code of this library.
{"url":"http://www.quicklisp.org/beta/UNOFFICIAL/docs/l-math/readme.html","timestamp":"2014-04-17T00:50:25Z","content_type":null,"content_length":"10003","record_id":"<urn:uuid:733a877a-d929-4c3e-a3b6-e6470dc0aaa6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Avoidable Patterns in Partial Words A partial word is a sequence of symbols from a finite alphabet that may have some undefined positions, called holes that match every letter of the alphabet. Previously, Blanchet-Sadri, Mercas, Simmons, and Weissenstein completed the classification of binary patterns with respect to partial word avoidability. In this paper, we pose the problem of avoiding patterns in very partial words, that is, partial words very dense with holes. We define the concept of hole sparsity, a measure of the frequency of holes in a partial word. We also present two algorithms that can be used to show that a pattern is avoidable over an alphabet of a given size, allowing for partial words. Finally, we determine the minimum hole sparsity for all unary and some binary patterns in the context of trivial and nontrivial avoidability.
{"url":"http://www.uncg.edu/cmp/research/patterns/index.html","timestamp":"2014-04-18T06:18:22Z","content_type":null,"content_length":"6377","record_id":"<urn:uuid:8c9026a8-d441-446d-9084-df0cccdf331c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Markov chain April 7th 2010, 06:08 AM #1 Apr 2010 Is it possible to have a markov chain with an infinite number of transient states and an infinite number of positive recurrent states? Let $\mathbb{Z}$ (integers) be the space of sets. Perhaps you can define the probability transitions : $p_{2n,2n\pm 2}=\frac 12$ and $p_{2n+1,2n}=1$ ? Thanks for the help. I can see where the infinite transient states come from but how do you know it gives an infiinte number of positive recurrent states? I don't really understand the definition I have of positive a recurrent state i is said to be positive recurrent if the expected time of 1st return (starting from i) is finite (1) If i is Transient this must hold for some j P(i-->j)>0 (its possible to get to j) P(j-->i)=0 (its impossible to get back) (2) i is positive recurrent if for all j P(i-->j)>0 (its possible to get to j) P(j-->i)>0 (its ALWAYS possible to get back) (3) If its always possible to get back, then E(time of first return to i)< oo. Don't get caught up with expectation, just think of it in terms of (2). (1) If i is Transient this must hold for some j P(i-->j)>0 (its possible to get to j) P(j-->i)=0 (its impossible to get back) (2) i is positive recurrent if for all j P(i-->j)>0 (its possible to get to j) P(j-->i)>0 (its ALWAYS possible to get back) (3) If its always possible to get back, then E(time of first return to i)< oo. Don't get caught up with expectation, just think of it in terms of (2). Thanks! I now understand positive recurrence. How about null recurrence? The definition I have is.... state i is said to be null recurrent if the expected time of 1st return (starting from i) is infinite Getting back in a finite time a.s. does not guarantee that the expectation is finite. If you suppose that the first return time T is distributed something like $\mathbb{P}(T=n)=c \frac{1}{n^2}$ where c is some constant, then when you take the expectation you get which is not finite. However here you will get back in finite time a.s. The intuition is that you will get there eventually, just in an arbitrarily large time. I'm having a similar sort of problem in understanding null-recurrence. What I want is a finite, irreducible Markov chain whereby ALL states are null-recurrent. But isn't this just a simple (symmetric) random walk with reflecting barriers? If anyone can help I would be really grateful! For a finite Markov chain no state can be null-recurrent. thanks nyc, makes sense! April 7th 2010, 01:05 PM #2 April 8th 2010, 01:51 AM #3 Apr 2010 April 8th 2010, 05:10 AM #4 April 12th 2010, 08:40 AM #5 Apr 2010 April 12th 2010, 11:06 AM #6 April 13th 2010, 01:49 PM #7 April 14th 2010, 06:01 AM #8 Apr 2010 April 14th 2010, 06:19 AM #9
{"url":"http://mathhelpforum.com/advanced-statistics/137724-markov-chain.html","timestamp":"2014-04-20T18:07:33Z","content_type":null,"content_length":"55232","record_id":"<urn:uuid:3706ac9b-82f8-4140-b176-1c285eb4d862>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Volker Tolls @ Center for Astrophysics Labeyrie Multi-step Speckle Reduction Technique 1. Introduction The currently proposed concept for the TPF-C coronagraph favors a classical Lyot coronagraph consisting of the telescope system with wavefront correction optics, an occulter mask in the first focal plane followed by more optics, the Lyot stop, and a camera. This concept requires a near perfect wavefront throughout most of the optics, setting a very challenging requirement of much better than λ/1000 on the primary mirror surface accuracy and the wavefront control system to correct for residual errors. The concept of the Labeyrie multi-step speckle reduction method is a potential technique to reduce these challenging surface requirements to more achievable accuracies of λ/100 to λ/1000 by correcting and removing speckle light after the initial coronagraphic step. In the following paragraphs, we will introduce the multi-step speckle reduction technique, present computer simulation, and will discuss its advantages, requirements, and potential 2. Labeyrie’s Multi-Step Speckle Correction Method The optical layout of Labeyrie’s Multi-Stage Speckle Correction Method is shown in Figure 1. Instead of correcting the telescope wavefront before the coronagraph, Labeyrie suggested (Labeyrie 2002) that measurements are made of “the phase of the speckle pattern” in the image plane of a Lyot coronagraph. The speckle pattern is the result of phase errors introduced by the telescope system and the occulter mask in the coronagraph (all following optical elements contribute only negligible amounts to the speckle light since the bright stellar light has been mostly suppressed at this point). An adaptive element (e.g. a deformable mirror) is then applied with a conjugate of the phase in the speckle pattern, setting the net phase to approximately zero. This results in the speckle having a plane wave component emanating from the coronagraph focal plane. A lens following the corrector re-images the telescope pupil. The plane wave of scattered light will be mostly focused into a small spot in the center of that re-imaged pupil plane while the planet light is distributed over the entire aperture. A second occulter blocks the central spot and the passing planetary light is re-imaged to another focal plane with greatly reduce speckle background. Measurement of the phase should be made with a field that does not contain a planet or the other faint object under investigation, e.g. by imaging another star. This correction will then improve the contrast between the faint object and the background by as much as a factor of 10. Several additional stages of correction continue to improve the contrast ratio. The pupil plane stop can be fairly large (e.g. up to 10 % of the pupil diameter) without attenuating much of the signal of the off-axis planet. Imperfect correction of the focal plane phase just causes the spot to spread out but it is still blocked by the stop. Initial numerical simulations showed good results with a wavefront error of 1/1000 wave of the initial wavefront and a laboratory demonstration is under way. Figure 1: Schematic setup for Labeyrie’s multi-step speckle correction approach. The first part of this schematic shows a standard Lyot coronagraph (Focal Plane 1 and Pupil Plane 1). However, instead of a camera, a phase corrector is placed near Focal Plane 2. It aligns the phases of the speckle light such that it can be focused onto a second occulter near Pupil Plane 2 where it is removed. Left is the planet light which is then focused onto a camera (or enters a second speckle reduction step). 3. Simulations Before designing the testbed to demonstrate the multi-step speckle correction technique, we simulated the approach using Fourier optics. The steps in our simulation are: (1) simulating the telescope, (2) simulating the coronagraph, (3) simulating the multistage speckle reduction, and (4) applying possible post-processing to the acquired images. The imperfections in the primary mirror are simulated using the specification of the power spectral distribution (PSD) for the TPF demonstration mirror. The resulting phase screen is then rescaled to the desired wavefront error. We assumed a wavefront error of λ/1000; however, this number can be changed to any wavefront error. The type of occulter in the coronagraph can also be selected. The current choices are a sinc^2 or Gaussian profile absorbing mask, but adding other types is straight forward. The mask parameters, e.g. diameter of the mask, can be adjusted to optimize the system. Also, a rest-transmission can be specified to better simulate real occulters. For the presented simulations, we assumed a value of 10^-10. The phase corrector did correct the phase on the same pixel raster as the simulation was performed. In our testbed, the phase corrector will not have such a high resolution. Thus, future improvements of the simulation will take the limited resolution of the phase corrector into account. Figure 2: Result of simulations of Labeyrie’s multi-step speckle reduction technique. The top left panel shows the point spread function and the top middle panel the speckle background after suppressing the star light. The next three panels show the speckle reduction after: 3 steps (bottom left), 5 steps (bottom middle), and 7 steps (bottom right). The top right panel shows further improvements of the 7-step image by subtracting the same image after rotating it by 180^o. Figure 2 shows typical results of our simulations. The resolution is 512 x 512 pixels showing a FOV of 25 l/D. The top left panel shows the ideal point spread function and the top middle panel the speckle background after the star light has been suppressed by a Gaussian occulter and the exit pupil has been reduced to 90%. The next three panels show the improvement of the speckle reduction after: 3 steps (bottom left), 5 steps (bottom middle), and 7 steps (bottom right). The simulated planets at (4, 0)*λ/D and at (-2,-2)*λ/D can be identified after 5 to 7 steps. The top right panel shows a post-measurement data processing step by subtracting the even part of the image. The planets still show up as circular objects with a point-symmetric negative counterpart. Since the shape of the planets seems to be invariant when shifting the image, but the “shadows” of the speckles (the negative component) move around the speckles, planets should be easily identifiable. An important result of this simulation is that a phase only correction would need more than 3 correction steps which might be doable, but would increase the complexity of a coronagraphic camera tremendously. However, the number of speckle reduction steps can be reduced to one, if both, the phase and the amplitude, can be corrected. If we apply not only a phase correction but also an amplitude correction, all the speckle energy should be concentrated at DC. Thus, in theory, with a simple DC block all of the sun’s energy could be removed. However, this cannot be achieved in a real system with wildly changing amplitudes across the image. A more realistic solution is to introduce a mask with the inverse of the speckle amplitude averaged over smaller areas. The only disadvantage would be a reduction in through-put. The attenuation of such a mask can be defined as: m(x)=(f[min]/f(x)) &lt 1.0 where f(x) is the square root of the image measured at the output of the coronagraph and fmin determines a threshold value for f(x). Figure 3 shows the results for a number of values of fmin which become more aggressive from the left to the right. Compared to the results of a phase only correction, the phase and amplitude correction also attenuates the planet light. The simulation shows that sufficient speckle suppression can be achieved with mask transmissivities of 0.5 to 0.2. For this case, the planets are clearly identifiable. Considering that such a planetary system would have to be revisited over several orbital epochs, planets can be identified without doubt. Figure 3: Top row: Amplitude modulation mask. Bottom row: Output of a single stage with amplitude modulation mask and phase correction element. Image subtraction as describe above was also applied for the final result. Reference: Tolls, Volker; Aziz, Michael; Gonsalves, Robert A.; Korzennik, Sylvain; Labeyrie, Antoine; Lyon, Richard; Melnick, Gary; Schlitz, Ruth; Somerstein, Steve; Vasudevan, Gopal; Woodruff, Robert "Study of coronagraphic techniques;" Space Telescopes and Instrumentation I: Optical, Infrared, and Millimeter. Edited by Mather, John C.; MacEwen, Howard A.; de Graauw, Mattheus W. M.. Proceedings of the SPIE, Volume 6265, pp. 62653K (2006) Tolls, Volker; Aziz, Michael; Gonsalves, Robert A.; Korzennik, Sylvain; Labeyrie, Antoine; Lyon, Richard; Melnick, Gary; Somerstein, Steve; Vasudevan, Gopal; and Woodruff, Robert; "Study of Coronagraphic Techniques;" in Direct Imaging of Exoplanets: Science & Techniques. Proceedings of the IAU Colloquium #200, Edited by C. Aime and F. Vakili. Cambridge, UK: Cambridge University Press, 2006., pp.457-460 Gonsalves, Robert A.; Tolls, Volker, "Phase diversity in an exo-planet imager," SPIE, Volume 5905, pp. 322-329, 2005
{"url":"https://www.cfa.harvard.edu/~tolls/labeyrie.html","timestamp":"2014-04-20T00:39:35Z","content_type":null,"content_length":"21618","record_id":"<urn:uuid:a3e39dde-e459-4f60-900b-4b7fa9b89976>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Cesaro means for $\alpha<1$ and Banach limits up vote 1 down vote favorite I am interested in conditions in terms of standard scales of summation methods that guarantee the existence of an averaged limit for all almost convergent sequences. For the Cesaro summation method $ (C, 1)$ this fails; is this true, e.g., for the Cesaro methods $(C, \alpha)$ with $\alpha<1$? sequences-and-series fa.functional-analysis add comment 1 Answer active oldest votes The paper G.G. Lorentz: A contribution to the theory of divergent sequences; Acta mathematica, Volume 80, Number 1, 1960, 167-190; DOI: 10.1007/BF02393648, contains several interesting results related to your questions on Banach limits. A characterization of matrix methods that sum all almost convergent sequences is given (Theorem 7). In particular, each $C_\alpha$ sums all almost convergent sequences. However, it is shown that almost convergence cannot be represented by a regular matrix method. Also the following stronger result about a class of matrix methods is shown. up vote 3 down vote Theorem 11. For every sequence $\{A_k\}$ of methods of the class $\mathfrak A$ there is a bounded sequence $x = \{x_n\}$ which is not almost convergent but is summable to the accepted value zero by every one of the methods $A_k$. The class $\mathfrak A$ in Lorentz's paper is the class of matrices fulfilling $$\lim\limits_{m\to\infty} \max\limits_n |a_{mn}|=0.$$ I think it's not that hard to show that each matrix $C_n$ belongs to $\mathfrak A$. Some further references for almost convergence are mentioned e.g in the book Boos: Classical and Modern Methods in Summability. @Martin: I have already got the paper, thank you. Also I realized that what I meant is not what I asked, but you have already answered my real question :) – kap44 Apr 10 '12 at I saw in Lorentz' paper that $(C, \alpha)$-convergence for any $\alpha>0$ is equivalent to $(C, 1)$ convergence (apparently, for bounded sequences); thus my question is equivalent to the previous one. Is there any reference to the proof? – kap44 Apr 10 '12 at 18:09 @vanja Sorry, I do not know a reference for that result. – Martin Sleziak Apr 11 '12 at 11:22 add comment Not the answer you're looking for? Browse other questions tagged sequences-and-series fa.functional-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/93620/cesaro-means-for-alpha1-and-banach-limits","timestamp":"2014-04-21T04:54:46Z","content_type":null,"content_length":"54336","record_id":"<urn:uuid:8009723d-c76e-44c6-b9bd-3881cc676ff5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Defining functional distances over Gene Ontology • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information BMC Bioinformatics. 2008; 9: 50. Defining functional distances over Gene Ontology A fundamental problem when trying to define the functional relationships between proteins is the difficulty in quantifying functional similarities, even when well-structured ontologies exist regarding the activity of proteins (i.e. 'gene ontology' -GO-). However, functional metrics can overcome the problems in the comparing and evaluating functional assignments and predictions. As a reference of proximity, previous approaches to compare GO terms considered linkage in terms of ontology weighted by a probability distribution that balances the non-uniform 'richness' of different parts of the Direct Acyclic Graph. Here, we have followed a different approach to quantify functional similarities between GO terms. We propose a new method to derive 'functional distances' between GO terms that is based on the simultaneous occurrence of terms in the same set of Interpro entries, instead of relying on the structure of the GO. The coincidence of GO terms reveals natural biological links between the GO functions and defines a distance model D[f ]which fulfils the properties of a Metric Space. The distances obtained in this way can be represented as a hierarchical 'Functional Tree'. The method proposed provides a new definition of distance that enables the similarity between GO terms to be quantified. Additionally, the 'Functional Tree' defines groups with biological meaning enhancing its utility for protein function comparison and prediction. Finally, this approach could be for function-based protein searches in databases, and for analysing the gene clusters produced by DNA array experiments. Current genome sequencing projects are producing a wealth of data in the form of sequences of biological polymers. For this data to be useful, it has to be interpreted in functional terms. Thus, efficient systems to describe and classify protein function are needed, as well as tools to predict the function of the huge number of new sequences. There is much evidence for the need of well-defined and structured functional descriptions [1-4]. However, the main difficulty encountered is that 'function' is not a well defined concept and it is not as un-equivocal as 'sequence' or 'structure'. Indeed, protein function is a very complex and multidimensional phenomenon. In many cases, functional descriptors are based on the available experimental techniques or are due to historical reasons. However, they do not necessarily have any meaning in biological terms (evolution, molecular mechanism). The methods we use to study biological systems require conceptualization and categorization, which are sometimes taken beyond their role as mere tools of the scientific method and are 'imposed' on the cell. One example is the artificial distinction between processes such as 'transmission of information' (for example DNA/RNA processing), 'metabolism' (of small compounds) and 'transport' (communication with the environment). Such disjointed classifications, as used in the first schemes to describe protein function, clearly do not extend to the molecular or evolutionary level. These schemes have been used in the past for classifying proteins into functional classes and for developing systems to assign newly sequenced proteins to them [5,6]. The current tendency is to use vocabularies and ontologies that allow complex functional descriptions beyond disjointed classes. Among these, the important effort of the Open Biomedical Ontologies (OBO) [7] in developing controlled vocabularies for a wide scope of applications in a biological and medical context must be recognised. The OBO ontologies are designed as graphic architectures formed by univocal concepts (terms) that are linked together by relationships that satisfy some prefixed and formal rules [8]. The Gene Ontology (GO) project [9] has become the 'de-facto' standard in biomedical ontologies. Formally, GO is designed as a Direct Acyclic Graph (DAG) based on two unconstrained relationships ('is-a' and 'part-of') that link a vocabulary of functional terms [2]. This graph structure, together with the simple conceptualization, permits comparisons between any two GO terms to assess their functional similarity. However, certain problems, such as the function-based search for potential genes/proteins of interest across multiple annotated databases and the analysis of high throughput microarray data, have led to the in depth exploration of ontology in order to propose models and criteria to measure the functional relationships between the terms. In recent years, many studies have addressed this matter [10-14], although Lord was the first to establish a semantic distance for any two terms in GO [10], adjusting the ideas of Resnik [15] for general taxonomies. In the model proposed by Lord, the similarity of any two GO terms is determined as a function of the information content of common ancestors that are calculated from corpus statistics. Recently, further efforts to identify functionally related gene products in annotated databases based on the distances calculated by Lord [11] have been shown to produce a good agreement with homology searches [12]. Nevertheless, using the more informative common ancestors as a proximity reference presents some restrictions. First, the depth of the shared parent nodes is not a suitable criteria for some limited cases in which the terms to be compared are close to the root. Furthermore, the information content (i.e. probability) of a node is highly dependent on the annotated database selected and its release version. Models have been developed to overcome these limitations that take into account other aspects of the ontology structure. For example, the distance between two terms may also integrate the density of the terms and the path that links them [13]. Alternatively, a new definition has been used that considers the local relationships in the subgraph generated by the terms, rather than their global positions in the DAG [14]. A common feature of these different approaches is that they rely mainly on the semantic links of the DAG. Unfortunately, there are inherent problems in this approach due to the non-homogeneity and the uneven distribution of the biological knowledge. As a result some regions of the DAG are more densely populated than others, so that the connections between terms are not comparable. In addition, the depth of a node (which is related to its specificity) can not be assigned in an unequivocal way. This type of problem is especially relevant for nodes that are profusely connected to the root by various paths of different lengths. In this work, we propose a novel method that associates the Molecular Function GO (MF-GO) terms based on their co-occurrences in a 'curated' set of proteins and enriched by the semantic relationships from the ontology. Interpro is used as a curated database as it integrates protein information from other databases that describe protein families, domains and functional sites, such as PROSITE, PRINTS, Pfam, ProDom, SMART and TIGRFAMs [16]. Conceptually, the method is, to some extend, similar to the way in which similarities between aminoacids are 'learnt' from examples (structural curated alignments) rather than obtained from the raw chemical properties of the aminoacids. Methodologically, it shares aspects of the algorithm used in the DAVID tool [17] for clustering heterogeneous annotation contents from different resources into annotation groups based on the co-association of the annotated genes in the databases. The method analyses the mutual occurrences of the MF-GO terms across the Interpro entries. The occurrences are used as the basis of the comparison of the terms on the assumption that the persistent coincidence of two terms describes its 'relation' in the general functional space. The analysis of the occurrences provides a useful mathematical tool to quantify the functional similarity between terms. A hierarchical tree linking the MF-GO terms is built from the similarity matrix. We termed this tree the 'Functional Tree' and it formally constitutes a Distance Model since it satisfies the ultrametric triangle inequality. In this context, the Functional Distance for a pair of terms, D[f], is defined as the height of their least common ancestor in the 'Functional Tree'. In addition, the tree allows the GO terms to be clustered into compact and homogeneous groups with biological meaning. We describe here how the Functional Tree was built, how the tree is clustered and the groups generated are analyzed in terms of the functions they describe. The Functional Distance D[f ]derived from the Functional Tree was used to calculate the distances between pairs of yeast proteins to assess the reliability of the tree. We also compare this new metric with another based on semantic The steps followed to obtain the Metric Model are schematically represented in Figure Figure11. Scheme of the method used for obtaining the Metric Model based on Gene Ontology annotations. (1) Profile vectors are built by retrieving the Molecular Function Gene Ontology annotations (MF-GO terms) of Interpro domains from the file interpro2go. (2) ... First, for each Molecular Function term we create a profile vector that represents its presence/absence in different Intepro entries (Figure (Figure1,1, box 1). These vectors resemble the 'phylogenetic profiles' used to encode the proteins present in different organisms and to detect protein relationships [18,19]. Initially, we started with 1532 MF-GO terms present in 5535 Intepro entries. Additionally, we included the semantic relationships represented by the Gene Ontology DAG by assigning the same Interpro domain to the parent(s) of a given GO term. The profiles were checked to detect the terms that were associated exclusively to one Interpro entry and to ensure that this entry was not annotated with any other term. Any such profiles were removed because they do not help to extract relationships between the terms. After filtering, we obtained a matrix of 1778 Interpro entries with 1392 MF-GO terms. In a second step (Figure (Figure1,1, box 2), we built a matrix of co-occurrences of GO terms in Interpro entries. The occurrences were accumulated through all the profiles and we obtained the total mutual occurrences in the universe of the 1392 terms. Each co-occurrence vector describes a MF-GO term in relation to the rest of the MF-GO terms, which enables it to be used as a feature vector in the application of statistical learning techniques. Third, the similarity between the terms was calculated using the cosine distance between their corresponding co-occurrence vectors (Figure (Figure1,1, box 3). The similarity matrix S was obtained by crossing the vectors all-against-all (as graphically represented in Figure Figure2A)2A) and the functional groups were obtained by the clustering of S. Full details of the Similarity matrix calculus are available in the Methods section. Finally we applied a Spectral Clustering algorithm [20,21] as it performs a dimensional reduction of the data (Figure (Figure1,1, box 4). The general ideas behind Spectral Clustering methods are introduced in the 'Spectral Clustering' subsection from the appendix. This approach improved the search for functional groups in the MF-GO terms space. (A) Initial Similarity matrix of 1329 × 1329 dimensions. The similarity colour scale is shown at the right of the matrix. S is obtained from the set of co-occurrence vectors. Note that S is symmetric, positive, and its values are ranked between ... Spectral Clustering considers S as the Adjacency Matrix of a normalized weighted graph G, where the nodes stand for the MF-GO terms linked by the similarity values. Thus, the clustering problem is transformed into a partitioning graph problem. We only considered the graph comprised of terms that were connected with significant relationships, that is those connected by a pairwise similarity greater than a manually selected threshold value (see Methods, 'Similarity Matrix' subsection). After imposing this constraint, we obtained 995 MF-GO terms from the total of approximately 7500 terms integrated in the released version of this work. We have also considered the NJW adaptation of Spectral Clustering (NJM-SC) by Ng, Jordan and Weiss [21], which is summarized in the general scheme in Figure Figure33 (see the 'NJW Spectral Clustering Algorithm' subsection from the appendix). The algorithm calculates a Transition Probability Matrix, P, from a N × N Similarity matrix, S, that represents the probability of transit from one node to another in the graph. P is diagonalized and its K first eigenvectors are stacked and normalized in a new K × K matrix, Y. The rows of Y can be treated as N vectors K dimensional. Therefore, NJM-SC projects the MF-GO terms (nodes of G) onto points in a K dimensional space. Subsequently, the terms can be grouped with any standard clustering technique. K was thus selected as the number of clusters in the optimum partition of G. The optimization procedure is presented in detail in the Methods section. The resulting number of optimal groups was 93. Scheme of the spectral clustering methodology. Spectral clustering techniques aim to find the best partition of a weighted graph. A graph is constructed where the nodes are MF-GO terms linked by similarity values s[ij ]derived by calculating the cosine ... Finally, from the vector projections of the MF-GO terms, we built a dendrogram with an Agglomerative Hierarchical Clustering algorithm [22] (Figure (Figure1,1, box 5). The tree obtained ('Functional Tree') defines a distance D[f ]between any two MF-GO terms from the set of 995 (see Additional file 1). The distance for two terms was the minimum height of their common nodes. From a mathematical point of view, D[f ]satisfies the topological properties that induces a metric space (see 'Properties of a Metric Space' from the appendix). So, the metric generated by the Functional Tree establishes a 'distance scheme' that provides a measure of the closeness of any two MF-GO terms within the tree. Functional Groups The nodes of the Functional Tree are divided into groups imposing the number of clusters obtained in the optimization step. The 93 groups of Molecular Function terms are inspected and 20 groups with highly homogeneous biological function are detected. In the Functional Tree (Figure (Figure4,4, see Additional file 1), the functionally homogeneous groups are coloured and ranked, and the labels assigned are shown with their rank number. Functional Tree representation. The tree is divided into 93 groups. The groups for which a functional 'homogeneity' was qualitatively assessed are labelled and coloured over the tree. The functional labels are specified. The tree was generated with iTol ... Some of these groups were very specific, like the group containing the 21 amino acyl-tRNA ligase activities. This group includes all the tRNA ligases and no other GO term, and hence the automatic clustering algorithm achieved a perfect segregation of this functional group (group 2). Another big group mostly composed by activities related to hydrolysis (hydrolases, peptidases, nucleases, lipases) was labelled as group 1. Although this group was homogeneous for this activity, the coverage was not perfect since other hydrolases lay outside of this group. For example, group 7, which was far from group 2 in the tree, was mainly comprised of peptidase activities. Interestingly, many different activities associated with DNA processing tended to cluster together despite the fact that they were apparently unrelated (i.e.: transcription factors and enzymes involved in DNA metabolism, DNA ligases, topoisomerases, etc... – group 3). As for the hydrolases case commented above, although this group contained only DNA-related activities and other DNA-related functional terms were not included in this group. Most of the kinases of small metabolic compounds were clustered in a large group (group 15), while protein kinases were more widespread even though some of them clustered together in group 6. Many membrane transporters of apparently different nature (transporters for inorganic ions, drugs, proteins, etc...) were also clustered together in homogeneous groups. All the 'protein inhibitor' activities within the dataset were clustered together in a homogeneous group (group 20), which is interesting given that the proteins they inhibit are of a very different nature (phos-phatases, ribonucleases, proteases, etc...). Functional clustering was also evident for many other GO terms: methyl-transferases, phosphorybo-syltransferases, peptidases, some peptidic hormones, neurotransmitter receptors, phosphate-hydrolases, hy-dratases/dehydrateses, adenylyltransferases, etc.... For other clusters, this functional 'homogeneity' was not so evident. For example, oxidoreductases were spread across many groups even though some groups contained oxidoreductase activities only (group 19). This dispersion could be explained by the fact that this function is present in proteins with a very different evolutionary origin. Similarly, activities related to RNA metabolism were spread among the different clusters, except the tRNA-ligases discussed above. In general, the clustering represented by the tree makes sense, meaning that GO molecular functions that are intuitively 'similar' were close in the tree and vice versa. This emphasise that the metric represented by the tree can be used to quantify functional similarities. The Functional Tree as Metric Model The functional distance D[f ]defined from the Functional Tree allows a new quantitative analysis of the functional relationship between gene products. To assess D[f ]as a functional similarity measure we correlated sequence and annotated function similarity over a set of aligned pairs of yeast proteins. The benchmark set has been selected by applying a very restrictive criterion to obtain a high reliable set of annotated proteins. The selection process (see Methods section) takes as quality assay the evidence codes in GOA. In this work, we picked only those sequences that had been functionally characterized either by experimental assay (IDA evidence code) or by traceable published works (TAS evidence code) and whose GO terms were included in our functional tree. The functional distance between proteins (through their sets of annotated terms) is calculated using the hausdorff definition. The details are exposed in the 'Functional Comparison between Gene Products' subsection from Methods. The distance values are represented against the sequence similarity (Figure (Figure5A).5A). Lord semantic similarity D[s ]was also implemented and represented in Figure Figure5B.5B. Note that the Lord distance values are normalised in order to analize the metric derived in this work with respect to Lord's model. Comparison between functional distance and sequence similarity for pairs of Yeast proteins annotated with TAS and IDA evidence codes. The alignments covers most of the range of sequence similarities, whose distribution is shown in panel D. (A) Hausdorff ... To compare the models the mean distance values for each bin of sequence identity are superposed in figure figure5C.5C. In average, both approaches correlate well with sequence similarity and exhibit a similar trend for homologous pairs. This is partly due to the homology-based mechanism of annotation that transfers directly a source set of MF-GO terms to many homologous sequences. In consequence, more than 84% of the alignments with sequence similarity values greater than 80% share the same annotations. This lack of richness in the annotations limits further analysis of the methods. However, we can observe that D[f ]and D[s ]show a different behavior. The distance space is discretized into three well-defined groups (Figure (Figure5A)5A) whereas the semantic similarity values produced a great spread. These natural 'cut-offs' allow classifying the pairs into three categories with biological meaning that can be roughly labelled as 'closely functionally related' (distances less than 0.1), 'not related at all' (more than 0.9) and 'divergence in functionality' (in the intermediate interval with distances between 0.5 and 0.7). This partition results from the structure of the clusters (Figure (Figure6B)6B) showing small intra-group and large intergroup distances. This is in part due to 'biological' reasons but is also affected by the function transfer by sequence homology. (A) Similarity Matrix in spectral space. The rows of the matrix represent the MF-GO terms in the reduced space of dimension 93. The terms are stacked in the same order that the Functional Tree (B) Ordered Similarity Matrix. The matrix was packed according ... The repetitive and persistent presence of the same MF-GO terms in the Intepro domains indicates clear functional associations of the terms but it is also originated by the usage of a reduced set of annotations producing redundancy in the functional information of the sequences and low coverage with respect to the total number of terms in the ontology (1532 from a total of 7417 MF-GO terms). In addition, the Functional Distance Model D[f ]becomes very useful from the perspective of recovering proteins functionally similar to a query, as it provides new associations between the terms inferred from the homology information in the database entries. These new links enrich the ontology relationships among the terms. This is the case of group 3 (analized in the 'Functional Groups' from Testing section) whose MF-GO terms are spread across different lineages of the ontology involving DNA-related activities. These associations are very visible in many Pfam domains (Hormone receptor, Sigma-70 factor, Ets-domain, HSF-DNA binding, GATA-type transcription activator etc ...) but are not detected with a criteria based on the semantic proximity of the terms. Group 3 is partially represented in Figure Figure77 showing the relations of some terms in the ontology. Some terms of the group, such as 'DNA binding' and 'specific transcriptional repressor activity', are very distant in the DAG and share the root as common ancestor. This produces a semantic distance of 1. Other terms like 'transcription factor activity' and 'DNA replication origin binding' share the node 'DNA binding' that is three levels apart from to the root. Graph representation of the ontology relations of a subset of MF-GO terms belonging to 'group 3' of the Functional Tree (orange nodes). The nodes in blue (GO:0016566 and GO:0003700) correspond to members of the 'group 3' that are also annotations of the ... The benchmark set includes some example pairs in which D[f ]assigns close distances to functionally related pairs while D[s ]does not. One is the pair formed by [Uniprot: {"type":"entrez-protein","attrs":{"text":"P20134","term_id":"347595781"}}P20134] and [Uniprot:{"type":"entrez-protein","attrs":{"text":"P10961","term_id":"123687"}}P10961] that shares 50% sequence identity. The first is a transcriptional repressor and activator annotated with the term GO:0016566 ('specific transcriptional repressor activity'). The second is a trimeric heat shock transcription factor annotated with GO:0003700 ('transcription factor activity'). Both are characterized by HSF-type DNA-binding Pfam domain. The relative posititions of their annotated terms in the DAG can be checked in Figure Figure7.7. D[s ]is 0.76 indicating a weak relation between the proteins. However, D[f ]situates the pair into the 'closely functionally related' region because the terms belong to the cluster 3 described before. Other similar example is the pair formed by the protein kinases [Uniprot:{"type":"entrez-protein","attrs":{"text":"P32801","term_id":"544240"}}P32801] and [Uniprot:{"type":"entrez-protein","attrs": {"text":"P41808","term_id":"1173459"}}P41808]. The proteins are characterized by protein kinase pfam domain and are annotated respectively with GO:0004674 ('protein serine/threonine kinase activity') and GO:0004707 ('MAP kinase activity'). Both terms belong to group 6 (Functional Groups subsection). So, as in the example before, the distance D[f ]is 0 whereas D[s ]is 0.6. Although these terms are close in the ontology ('protein serine/threonine kinase activity' is ancestor of 'MAP kinase activity' and separated only by two depth levels), the Lord model assigns such a value distance because the shared parent ('protein serine/threonine kinase activity') is referred many times in the gene association.goa human file. According to Lord definition, high probable terms carry low information content producing high distance values in the comparison of terms. In the case of the kinases pair, the probability introduces a bias that shifts the semantic distance value to a region that indicates, as the example before, a weak relation between the proteins. Finally, the Functional Distance model is sensitive enough to detect subtle differences in the pairs [Uniprot:{"type":"entrez-protein","attrs":{"text":"P15700","term_id":"137024"}}P15700]/[Uniprot: {"type":"entrez-protein","attrs":{"text":"P07170","term_id":"125153"}}P07170] and [Uniprot:{"type":"entrez-protein","attrs":{"text":"P15700","term_id":"137024"}}P15700]/[Uniprot: {"type":"entrez-protein","attrs":{"text":"P26364","term_id":"125155"}}P26364] that are explained by sequence analyses. In both cases, the members of the pair were annotated with GO:0004849 ('uridine kinase activity') and GO:0004017 ('adenylate kinase activity') respectively. Lord's Semantic Model produced a distance of 0.37 between these proteins, indicating semantical relation. In fact, the aforementioned terms are close in the GO hierarchy, and the deepest common parent shared by both GO terms, two levels above, is 'nucleobase, nucleoside, nucleotide kinase activity' (GO:0019205). However, our Functional Distance located that pair of GO terms within the intermediate interval at a distance of 0.65. A thorough analysis of the sequences revealed that [Uniprot: {"type":"entrez-protein","attrs":{"text":"P15700","term_id":"137024"}}P15700] has the 'ADK' Pfam domain. Adenylate kinases are phosphotransferases with well conserved ADK domains that include an important arginine which inactivates the enzyme if mutated, and an aspartate that is located in the catalytic cleft and that forms a crucial salt bridge. However, in the particular case of [Uniprot: {"type":"entrez-protein","attrs":{"text":"P07170","term_id":"125153"}}P07170] and [Uniprot:{"type":"entrez-protein","attrs":{"text":"P26364","term_id":"125155"}}P26364], the putative ADK domain is interrupted by another PFAM domain, the ADK lid. Looking at the sequence of this particular region, the ADK domain boundaries were not clearly delineated due to a high degree of divergence in the active site. So, in this example our metric is able to capture the 'functional difference' between these two proteins due to the inserted domain. The Spectral Clustering algorithm is implemented in Matlab 7.4.0 using the clustering functions available in the Statistics Toolbox. Lord's model is implemented in Python 2.5, and Python was also used to calculate the functional and semantic distances. Discussion and Conclusion Here, we propose a new method to derive 'functional distances' between GO terms based on the co-occurrence of them in the same set of proteins. The simultaneous occurrence of terms in Interpro entries provides a natural biological link between the GO functions. The relationship between terms in the GO structure provides additional semantic information that helps to refine the metric model. In this method, an initial profile is constructed for each GO term representing its association with a set of Interpro domains (after expanding the Interpro annotations with the parenthood relationships of the GO terms). These profiles are used to generate a matrix of co-occurrence between GO terms. A graph is constructed where the nodes are the GO terms and the edges are weighted according to the distances extracted from this co-occurrence matrix. Spectral clustering is applied to this graph in order to obtain an optimal number of groups of functionally similar GO terms. The distances derived in this way provide a hierarchical clustering of GO terms (functional tree) where the groups of terms with similar biological meaning tend to be close. Additionally, this 'Functional Tree' represents a metric model D[f ]whereby the distances between the terms fulfil the mathematical properties of a metric space. The main difference of this method from previous approaches [10-14] is that D[f ]is learned from examples. Hence, in contrast to other proposed methods that derive the distances from the semantic relationships within the GO ontology, our method provides new associations between the terms that enables a different way to compare proteins in functional terms. We have selected some cases to illustrate this point, for which the functional similarity between related proteins is better estimated with this metric model than with currently available algorithms, such as Lord's 'Semantic Similarity Model' [10]. Moreover, we also tried to qualitatively assess some of the groups automatically extracted from the distances by a clustering algorithm. Over and above the comparison with these examples and the qualitative assessment of the functional tree, it is actually difficult to assess the general quality of any functional metric. The overall representation of the functional distances in the Functional Tree originates very compact groups of terms separated by well defined intervals as it is shown in Figure Figure6B.6B. This structure of the clusters produces a not uniform distribution of distances because the values tend to concentrate in three regions (low, intermediate and high), which is obviously a problem since it makes the metric to some extent 'qualitative'. On the other hand, this categorisation produces natural 'cut-offs' and functional similarities that are naturally classified by the method in three categories with biological meaning in a totally unsupervised way as been discussed in Results section. This discretization is in part due to 'biological' reasons but is also affected by the homology-based transfer that causes that only a reduced set of terms is used for annotation purposes. In consequence, the coverage with respect to the total number of terms in the Molecular Function ontology is low (around 20%). In addition, only clear relations are selected resulting a set of 995 MF-GO terms that are considered in the Functional Distance Model. It is important to note that the relationships between the GO terms obtained and the relationships in the GO ontology represent different elements. The GO DAG represents qualitative semantic relationships ('is-a' and 'part-of') while our relationships represent quantitative 'functional distances'. Thus, the metric proposed here provides a way of quantifying how similar two functional annotations are. This can be very useful for training systems for function prediction, for function-based protein searches in databases, or to assess the accuracy of a functional prediction (comparing the predicted set of annotations with the real one). This metric could also be useful for analysing the gene clusters produced by DNA array experiments. We think it may also provide insights into how functions evolved and the relationships between sequence, structure and functional spaces. Similarity Matrix The GO annotations for a given Interpro entry are retrieved from the mapping of Interpro to Gene Ontology [23] (interpro2go file, release May 2006). Only the GO terms belonging to the MF-GO are For each MF-GO term a profile vector is created that describes the presence/absence of the terms throughout the database. The profiles are constructed to analyze the simultaneous occurrence of pair MF-GO terms in the Interpro entries and filter the cases that do not contribute to the extraction of the relationship between the terms. The similarity between two terms is calculated by the cosine distance between their co-occurrence vectors: Note that the cosine distance generates values ranking between 0 and 1. The similarity value can be considered as a description of the functional relationship between these terms, whereby similarities equal to 0 stand for unrelated terms and 1 stands for strongly related. The similarity matrix S is plotted in Figure Figure2A2A together with its histogram (Figure (Figure2B2B). The distribution of the similarity values shows that almost 90 per cent of the pairs of terms are only weakly related. This structure of the relationships reflects the presence of well-defined groups of terms. However, the search space has been limited as the cosine distance assigns a non-zero value even to pairs of terms that rarely share the same Interpro entry. Thus, based on the inspection of the histogram, we set a threshold of 0.8 to select strong functional links. In total there are 995 MF-GO terms that are significantly connected. Additionally, we reduced the dimension of S by applying a NJW Spectral Clustering (NJW-SC) algorithm [21]. The similarity matrix in spectral space is shown in 6A, while the details of the algorithm are outlined in the 'Spectral Clustering Algorithm' subsection from the appendix. Optimization Approach As the number of clusters K is not initially known, an optimization approach is used, such as the multiway normalized cut value MNCut, (see 'Multiway Cuts' subsection from the appendix). The MNCut was calculated from the normalized matrix P (Transition Probability matrix). P represents the total probability of transit between any two clusters C[i ]and C[j ]for a given partition C = {C[1 ]... C [K]} of the graph and MNCut and represents the total sum of the transition probabilities between the clusters. The goal of optimization is to find the eigenvalue cut-off that generates a partition C* (K) that minimizes the MNCut value. In particular, we addressed optimization by exploiting the minimization of the gap value over the spectra of S. The optimization curve is shown in Figure Figure8A.8A. Note that a wide range of eigenvalues minimizes the gap (from 4 to 93). Thus, we applied an additional criteria to select the optimal cut-off, the correlation coefficient between the S matrix packed according to C* (K) and the ideal block diagonal matrix for this partition. The correlation calculation is used as a measure of the 'compactness' of the each partition. The correlation coefficient values are shown in 8B where the partition for the 93rd eigenvalue maximizes the procedure (correlation of 0.86). In Figure Figure6B 6B the S matrix packed for the optimal clustering is represented. The whole spectra of the P matrix [λ[i](P)][iU ]is analyzed selecting the first K eigenvalues and for each selection obtaining a partition of the MF-GO terms C[K]. In panel A, the values of the gap measure calculated for C[K ]are represented ... Benchmark Dataset of aligned proteins To compare our metric with others developed previously and to evaluate its relationship with sequence similarity, we took a set of proteins with reliable annotations taken from the Saccharomyces Genome Database (SGD). As an annotation source we used the file gene association.sgd (release October 2006). The confidence of the Gene Ontology Annotations (GOA) is represented by the Evidence Code (EVC). Although there is no consensus rule that establishes a standard order of annotations based on the EVCs, the Gene Ontology Consortium has outlined a rank of EVCs as a guide [24]. The hierarchy of confidence establishes that the TAS (Traceable Author Statement) and IDA (Inferred from Direct Assay) tagged annotations offer the highest confidence. Despite the efforts of the GOA project to improve the general reliability of its databases, the bulk of GO assignments are still made by automatic techniques with no expert curation. This homology-based transfer generates highly redundant sets of GO annotations. Moreover, as our method to derive the similarity between GO terms is to some extent affected by sequence similarity (given that it uses Interpro domains), we decided to exclude GO annotations derived from sequence relationships. In gene association.sgd, there were 1264 yeast proteins annotated with GO terms with EVCs unrelated by homology (TAS and IDA) that also appeared in our functional tree. After filtering this set for sequence redundancy with CD-hit [25] at 95%, we obtained a final set of 1193 yeast proteins. We then perform fast alignments of all-against-all using BLAST, having chosen a permissive e-value (0.1) to permit alignments between distant sequences. Nevertheless, alignments covering less than 50 residues and/or with less than 10% similarity were excluded. The final set comprised 1426 protein pairs and the distribution of sequence similarity for these pairs is shown in Figure Figure5D5D. Functional Comparison between Gene Products To calculate the functional similarity between two proteins from their set of GO terms and the metric relating these terms, we applied the Hausdorff Distance. The Hausdorff Distance is defined as the maximum value between any point within one set and the nearest point in the other set. Formally, from set A to B is: As the Hausdorff Distance is not symmetrical, a symmetrical measure was formulated as: Usually the Hausdorff distance is evaluated over the Euclidean space, although in this work we applied equation 3 using two distances: (A) the distance D[f ]obtained from our Functional Tree and (B) the distance proposed by Lord et al. [10] We implemented Lord's semantic similarity using as a reference the annotated database gene associa-tion.goa human [26] (released version 45.0), and we normalised the values between 0 and 1 to compare it with the metric derived in this work. Authors' contributions All authors contributed to the development of the methodology. AP implemented the algorithm and evaluated the method for the alignment set. FP analyzed and interpreted the functional groups. AP and FP contributed to writing the manuscript, and all the authors read and approved the final manuscript. Spectral Clustering Spectral clustering has its origin in spectral graph partitioning [27] and is intended to efficiently identify good discrete partitions of a graph based on the eigenvalues and eigenvectors of the Laplacian matrix of the graph. Spectral clustering belongs to a collection of techniques that are designed to overcome the problems of previous approaches by using new ideas such as the eigenvectors of the generalised/normalized Laplacian or the multi-way spectral cut. A systematic comparison between the existing published algorithms can be found in the work of Verma and Meila [20]. Therein, the authors present a clear description of the basic steps of the algorithms and their general classification based on three different strategies: (I) recursive spectral; (II) multi-way spectral; and (III) non-spectral. In this section we will introduce the notation and the basic steps for the NJW spectral clustering algorithm [21] and the ideas behind multi-way spectral cuts as a criterion of optimisation to find the best partition of the data. Here we implemented a modified version of NJW algorithm suggested in [20]. NJW Spectral Clustering Algorithm Consider a dataset U formed by N points to be clustered. For each pair of points within U, a similarity value can be defined as s[ij ]= s[ji ]≥ 0 by any similarity measure. U can be represented by a weighted directed graph G = (V, E) where the S = [s[ij]] matrix plays the role of the adjacency matrix of the graph. A clustering C = C[1], C[2],..., C[K ]is a partitioning of U into non-empty disjointed subsets C[1], C[2],..., C[K]. The out-degree of a node j is defined as $di=∑j=1Nsij$. We represent D for a diagonal matrix of out-degrees as: D = diag(d[1],..., d[N]). The nodes can be grouped by following the steps: 1. Compute the transition probability matrix P = D^-1 S. where P defines the probability to navigate from node i to node j in a random walk over G. By construction, the eigenvalues of P are delimited as [-1, 1], 1 = λ[1 ]≥ λ[2 ]≥ λ [N ]≥ -1. The corresponding eigenvectors are v^1,..., v^N. 2. Select the first K eigenvalues of P and form the X matrix by stacking the eigenvectors in columns: 3. Normalize each row of X to unit length to form the Y matrix: 4. Treat each row of Y as a point in K dimensions. The points can be grouped by any standard clustering technique. In this work, the points are organized in a hierarchical tree. Multiway Cuts The majority of approaches in spectral clustering deal with partitioning the graph in two optimal parts by using one eigenvector at a time and applying this approach reiteratively until K clusters are found. A way to use the K first eigenvectors simultaneously to find the optimum partition of the graph has been proposed, minimizing the cut of two partitions over all possible partitions in U [28]. Most of the approaches assume that the number of clusters K is known in advance, but in many problems related to clustering there is not indirect evidence that reveals the optimal number of groups. Here we expose the basics of the multi-way normalized cut (MNCut) concept that has been applied to find the optimum number K: The volume of node i is defined as the out-degree of the node: D denotes the diagonal matrix formed by D[i]. The volume of a subset A U is $VolA=∑i∈ADi$, (we assume that no node has volume 0). Given two disjoint subsets (A, B) U, the set of edges between the subsets is the cut between A and B: and the probability of transit from set A to set B is: Given the partition C = {C[1],...C[K]}, over U the multi-way normalized cut clustering criteria introduced in [28] is defined as: The multi-way normalized cut represents the total sum of the transition probabilities between the clusters of C. If MNCut(C) is small then the probability of evading C[k ]in a random walk is also It has been shown that for any clustering, the MNCut(C) is low-bounded by a function of the number of clusters K and the eigenvalues of P [29]: The non-negative difference between the MNCut(C) and its lower bound is the gap(C): It has been also shown that gap(C) is 0 if P has piecewise constant eigenvectors, that is if P is an ideal block stochastic matrix [29]. Therefore, from a set of M different partition solutions of the data [C[i]][iM], the optimal C* is that which minimizes the gap measure. Properties of a Metric Space By construction, the hierarchical clustering procedure over the Gene Ontology terms defines a generalized distance D between any two terms go[a ]and go[b ]that satisfies the mathematical properties of a metric space [22]. That is, any set of elements (terms) of the space x[1], x[2 ]and x[3 ]fulfil: • Nonnegativity: D(x[1], x[2]) ≥ 0 • Reflexivity: D(x[1], x[2]) = 0 if and only if x[1 ]= x[2] • Symmetry: D(x[1], x[2]) = D(x[2], x[1]) • Triangle Inequality: D(x[1], x[2]) + D(x[2], x[3]) ≥ D(x[1], x[3]) Supplementary Material Additional file 1: Functional Tree. The data provided represent the 'Functional Tree' joining the Molecular Function Gene Ontology terms. The authors want to acknowledge the members of the Structural Bioinformatics Group (CNIO), especially Dr. Ana Rojas for her help analysing the biological examples and Dr. Michael Tress for critical reading of the manuscript. This work has been partially funded by the GeneFun EU project (LSG-CT-2004-503567). • Friedberg I. Automated protein function prediction-the genomic challenge. Brief Bioinform. 2006;7:225–242. [PubMed] [PubMed] • Smith B, Kumar A. Controlled vocabularies in bioinformatics: a case study in the gene ontology. DDT: BIOSILICO. 2004;2:246–252. • Rison S, Hodgman T, Thornton J. Comparison of functional annotation schemes for genomes. Funct Integr Genomics. 2000;1:56–69. [PubMed] • Valencia A. Automatic annotation of protein function. Current Opinion in Structural Biology. 2005;15:267–74. [PubMed] • Riley M. Functions of the gene products of Escherichia coli. Microbiol Rev. 1993;57:862–952. [PMC free article] [PubMed] • Tamames J, Casari G, Ouzounis C, Valencia A. Conserved Clusters of Functionally Related Genes in Two Bacterial Genomes. Journal of Molecular Evolution. 1997;V44:66–73. [PubMed] • Mungall C. Obol: integrating language and meaning in bio-ontologies: Conference Papers. Comp Funct Genomics. 2004;5:509–520. [PMC free article] [PubMed] • Smith B, Ceusters W, Klagges B, Kohler J, Kumar A, Lomax J, Mungall C, Neuhaus F, Rector A, Rosse C. Relations in biomedical ontologies. Genome Biology. 2005;6:R46. [PMC free article] [PubMed] • Harris MA, Clark J, Ireland A, Lomax J, Ashburner M, Foulger R, Eilbeck K, Lewis S, Marshall B, Mungall C, Richter J, Rubin GM, Blake JA, Bult C, Dolan M, Drabkin H, Eppig JT, Hill DP, Ni L, Ringwald M, Balakrishnan R, Cherry JM, Christie KR, Costanzo MC, Dwight SS, Engel S, Fisk DG, Hirschman JE, Hong EL, Nash RS, Sethuraman A, Theesfeld CL, Botstein D, Dolinski K, Feierbach B, Berardini T, Mundodi S, Rhee SY, Apweiler R, Barrell D, Camon E, Dimmer E, Lee V, Chisholm R, Gaudet P, Kibbe W, Kishore R, Schwarz EM, Sternberg P, Gwinn M, Hannick L, Wortman J, Berriman M, Wood V, de la Cruz N, Tonellato P, Jaiswal P, Seigfried T, White R. The Gene Ontology (GO) database and informatics resource. Nucleic Acids Res. 2004:D258–61. [PMC free article] [PubMed] • Lord P, Stevens R, Brass A, Goble C. Investigating semantic similarity measures across the Gene Ontology: the relationship between sequence and annotation. Bioinformatics. 2003;19:1275–83. [ • Zhang P, Zhang J, Sheng H, Russo J, Osborne B, Buetow K. Gene functional similarity search tool (GF-SST) BMC Bioinformatics. 2006;7:135. [PMC free article] [PubMed] • Schlicker A, Domingues F, Rahnenfuhrer J, Lengauer T. A new measure for functional similarity of gene products based on Gene Ontology. BMC Bioinformatics. 2006;7:302. [PMC free article] [PubMed] • Couto F, Silva M, Coutinho P. In CIKM '05: Proceedings of the 14th ACM international conference on Information and knowledge management. New York, NY, USA: ACM Press; 2005. Semantic similarity over the gene ontology: family correlation and selecting disjunctive ancestors; pp. 343–344. • Wang JZ, Du Z, Payattakool R, Yu PS, Chen CF. A new method to measure the semantic similarity of GO term. Bioinformatics. 2007;23:1274–1281. [PubMed] • Resnik P. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial Intelligence Research. 1999;11 • Mulder NJ, Apweiler R, Attwood TK, Bairoch A, Bateman A, Binns D, Bradley P, Bork P, Bucher P, Cerutti L, Copley R, Courcelle E, Das U, Durbin R, Fleischmann W, Gough J, Haft D, Harte N, Hulo N, Kahn D, Kanapin A, Krestyaninova M, Lonsdale D, Lopez R, Letunic I, Madera M, Maslen J, McDowall J, Mitchell A, Nikolskaya AN, Orchard S, Pagni M, Ponting CP, Quevillon E, Selengut J, Sigrist CJ, Silventoinen V, Studholme DJ, Vaughan R, Wu CH. InterPro, progress and status in 2005. Nucleic Acids Res. 2005;33:D201–5. [PMC free article] [PubMed] • Huang D, Sherman B, Tan Q, Kir J, Liu D, Bryant D, Guo Y, Stephens R, Baseler M, Lane H, Lempicki R. DAVID Bioinformatics Resources: expanded annotation database and novel algorithms to better extract biology from large gene lists. Nucleic Acids Res. 2007:W169–175. [PMC free article] [PubMed] • Pellegrini M, Marcotte E, Thompson M, Eisenberg D, Yeates T. Assigning Protein Functions by Comparative Genome Analysis: Protein Phylogenetic Profiles. Proc Natl Acad Sci USA. 1999;96:4285–4288. [PMC free article] [PubMed] • Korber BT, Farber R, Wolpert D, Lapedes A. Covariation of Mutations in the V3 Loop of Human Immunodeficiency Virus Type 1 Envelope Protein: An Information Theoretic Analysis. PNAS. 1993;90 :7176–7180. [PMC free article] [PubMed] • Verma D, Meilă M. Tech Rep 03-05-01. University of Washington Department of Computer Science; 2003. A Comparison of Spectral Clustering Algorithms. • Ng A, Jordan M, Weiss Y. On Spectral Clustering: Analysis and an algorithm. In: Dietterich TG, Becker S, Ghahramani Z, editor. In Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press; 2002. pp. 849–856. • Duda R, Hart P, Stork D. Pattern Classification. Wiley; 2001. • Camon E, Magrane M, Barrell D, Binns D, Fleischmann W, Kersey P, Mulder N, Oinn T, Jand Cox MaslenA, Apweiler R. The Gene Ontology Annotation (GOA) project: implementation of GO in SWISS-PROT, TrEMBL, and InterPro. Genome Research. 2003;13:662–672. [PMC free article] [PubMed] • Guide to GO Evidence Codes http://www.geneontology.org/GO.evidence.shtml • Li W, Godzik A. Cd-hit: a fast program for clustering and comparing large sets of protein or nucleotide sequences. Bioinformatics. 2006;22:1658–1659. [PubMed] • Gene Ontology Annotation (GOA) Database http://www.ebi.ac.uk/GOA • Chung F. Spectral Graph Theory. Number 92 in CBMS Regional Conference Series in Mathematics, American Mathematical Society. 1997. • Meilă M, Shi J. Learning Segmentation by Random Walks. NIPS2000. 2000. • Meilă M. Tech Rep 451. University of Washington Statistics; 2004. The Multicut Lemma. • Letunic M, Bork P. Interactive Tree Of Life (iTOL): an online tool for phylogenetic tree display and annotation. Bioinformatics. 2007;23:127–128. [PubMed] Articles from BMC Bioinformatics are provided here courtesy of BioMed Central Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2375122/?tool=pubmed","timestamp":"2014-04-17T13:23:42Z","content_type":null,"content_length":"132042","record_id":"<urn:uuid:fe139904-07df-479b-9a12-a530c37a3dd2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework assignments posted here are subject to correction in class or through other means. Problems as assigned here are for your convenience but are not a substitute for obtaining assignments in Assignments as issued in class supercede these assignments unless otherwise noted. Homework Assignment: 1 2 3 4 5 6 7 8 9 10 11 Separate Yourself From the Crowd Problems for Class Discussion: Chapter 1: A3, A4, B2, F1 (Due 8/18) Chapter 2: A3, A11 (Due 8/23) Problems for Submission (Due 8/25): Chapter 2: D6, D13, D21, D29 (Use ASPEN PLUS) Assignment Learning Objectives: · Perform analyses of flash and simple distillation systems, including equipment sizing · Apply various thermodynamic equilibrium models to process systems Reading Assignments: Tuesday (8/16): Ch.1 pp. 1-10 Thursday (8/18): Ch.2 pp. 13-30 Tuesday (8/23): Ch.2 pp. 30-54 Thursday (8/25): Ch.3 pp. 79-95 Go Ahead, Make This Separation Problems for Class Discussion: Chapter 3: A3, A6, A10, F1 (Due 8/30) Chapter 4: A3, A6, A7, (Due 9/01) Problems for Submission (Due 9/01): Chapter 3: D8, E2 Assignment Learning Objectives: Analyze columns and cascades based on external measurable properties Apply conservation principles to distillation columns Reading Assignments: Tuesday (8/30): Ch.4 pp. 101-127 Thursday (9/01): Ch.4 pp. 127-158 “Graphical Analysis Can Mess You Up” Problems for Class Discussion: Chapter 4: A13 (Due 9/06) Chapter 5: A1 (Due 9/08) Problems for Submission (Due 9/08): Chapter 4: D6, D13, D28, D29, D32 You should use ASPEN PLUS to generate your xy-diagram for 4D13. Do not forget the step of comparing your diagram with the experimental data given in your book. Simulate the system of D13 in ASPEN PLUS and compare the performance of the column you design with McCabe-Thiele with the column simulated using the RADFRAC rigorous distillation model. Assignment Learning Objectives: · Perform a design estimate for distillation columns, strippers, and absorbers using the McCabe-Thiele method with a range of flow configurations and process specifications · Use ASPEN PLUS to simulate distillation columns in rating mode Reading Assignments: Tuesday (9/06): Ch.5 pp. 183-203 Thursday (9/08): Ch.6 pp. 215-230 Unlock these problems with your key components Problems for Class Discussion (Due 9/13): 5.A.15, 6.A.2 Problems for Submission (Due 9/16): 5.D.4, 5.D.9, 5.D.13, 6.G.5 Assignment Learning Objectives: Perform a design estimate for multicomponent distillation columns using key assumptions Describe concentration profiles in multicomponent distillation columns Use ASPEN PLUS to simulate multicomponent distillation columns Reading Assignments: Tuesday (9/13): Ch.7 pp. 243-257 Thursday (9/15): No Class Meeting, Review Chapter 6 Simulation Appendices Tuesday (9/20): Exam 1 Chapter 1-6 Thursday (9/22): Ch. 8 pp. 265-304 It's Only a Start Problems for Class Discussion: Chapter 7: A4 (Due 9/27) Chapter 6: A1, A7, A10 (Due 9/29) Problems for Submission (Due 9/29): Chapter 7: D7, D18 Chapter 8: D12, D13, G3 Assignment Learning Objectives: · Apply the Fenske/Underwood/Gilliland method for preliminary design of distillation columns · Use McCabe-Thiele to design columns for complex binary system · Use ASPEN PLUS to simulate complex multicomponent distillation systems Reading Assignments: Thursday (9/22): Ch. 7 pp. 243-257 Tuesday (9/27): Ch. 8 pp. 265-304 Thursday (9/29): Ch. 9 pp. 329-347 Homework is usually completed batchwise Problems for Class Discussion (Due 10/11): Chapter 9: A4 , A5 Chapter 10: A1, A5, A7, A16 Problems for Submission (Due 10/13): Chapter 9: D5, D15, D22 Chapter 10: D4, D9, D17, G3 Assignment Learning Objectives: Evaluate performance of batch distillations units, including single-stage and multi-stage, using analytical, numerical, and graphical techniques · Perform sizing calculations based on hydraulic considerations for distillation columns · Use ASPEN PLUS to mechanically size distillation columns Reading Assignments: Tuesday (10/4): Ch. 10 pp. 357-388 (No class meeting) Thursday (10/6): Fall Break Tuesday (10/11): Ch. 11 pp. 419-445 Thursday (10/13): Ch. 12 pp. 455-484 Making Money Engineering Problems for Class Discussion (Due 10/25): Chapter 11: A1, A3, A5, A9, A11 Problems for Submission (Due 10/20): Chapter 11: B4, D3, D8, G3 Assignment Learning Objectives: Estimate operating costs and equipment costs for specified column designs Sequence columns based on heuristic approaches Optimize column design using simulation Reading Assignments: Tuesday (10/11): Ch. 11 pp. 419-445 Thursday (10/13): Ch. 12 pp. 455-484 Tuesday (10/18): No class meeting, AIChE Annual Meeting MB, EB, EQ Problems for Class Discussion (Due 10/27): Chapter 12: A1, A5 Problems for Submission (Due 11/01): Chapter 12: D4, D13, D21, Lab 11 You should use the Kremser method for at least one problem and the graphical method for at least one problem Assignment Learning Objectives: Design and rate absorbers and strippers under dilute and non-dilute conditions using graphical and approximate analytical methods Simulate absorbers and strippers in a commercial process simulator Reading Assignments: Thursday (10/20): Exam 2, Chapters 7-11 (To be rescheduled) Tuesday (10/25): Ch. 13 pp. 499-531 (Liquid-liquid extraction) Thursday (10/27): Ch. 13 pp. 531-559 Tuesday (11/01): Ch. 14 pp.575-590 (Washing, leaching, supercritical extraction) MB, EB, LLE Problems for Class Discussion (Due 11/03): Chapter 13: A1, A5, A9, A11 Problems for Submission (Due 11/08): Chapter 13: D1, D13, D19, D43, Lab12 Assignment Learning Objectives: Design and rate liquid-liquid extractors under dilute and non-dilute conditions using graphical and approximate analytical methods Simulate liquid-liquid extractors in a commercial process simulator Reading Assignments: Tuesday (11/01): Ch. 14 pp.575-590 (Washing, leaching, supercritical extraction) Thursday (11/03): Ch. 14 Tuesday (11/08): Ch. 15 pp. 599-616 (Introduction to mass transfer) Thursday (11/10): Ch. 17 pp. 725-788 (Membrane separations) Tuesday (11/15): Ch. 17 Thursday (11/17): Ch. 18 pp. 805-892 (Adsorption, Chromatography, Ion Exchange) MB, EB, SLE Problems for Class Discussion: Chapter 14: A3, A9 (Due 11/10) Chapter 17: A1, A2 (Due 11/15) Problems for Submission (Due 11/15): Chapter 14: D4, D8, D20 Assignment Learning Objectives: Design and rate washing and leaching systems under dilute and non-dilute conditions using graphical and approximate analytical methods Reading Assignments: Tuesday (11/08): Ch. 15 pp. 599-616 (Introduction to mass transfer) Thursday (11/10): Ch. 17 pp. 725-788 (Membrane separations) Tuesday (11/15): Ch. 17 Thursday (11/17): Ch. 18 pp. 805-892 (Adsorption, Chromatography, Ion Exchange) Tuesday (11/22): Exam 3 Ch. 12-17 Thursday (11/24): Thanksgiving (no class meeting) Tuesday (11/29): Ch. 18 Thursday (12/01): Review Thursday (12/08): Final Exam (10:45-12:45) Let this assignment permeate your cranial membrane Problems for Class Discussion: Chapter 18: A3, A4 (Due 11/17) Problems for Submission (Due 11/29): Chapter 17: D2, D4, D7 Chapter 18: D2, D6 Assignment Learning Objectives: Perform fundamental analyses of the performance of membrane separation systems in RO and UF applications Apply adsorption isotherms to simple SLE separations Perform breakthrough analysis for chromatography systems Reading Assignments: Thursday (11/17): Ch. 18 pp. 805-892 (Adsorption, Chromatography, Ion Exchange) Tuesday (11/22): Exam 3 Ch. 12-17 Thursday (11/24): Thanksgiving (no class meeting) Tuesday (11/29): Ch. 18 Thursday (12/01): Review Thursday (12/08): Final Exam (10:45-12:45)
{"url":"http://www.engr.uky.edu/~silverdl/CME415/assignments.htm","timestamp":"2014-04-20T13:22:54Z","content_type":null,"content_length":"48523","record_id":"<urn:uuid:cc1833bf-29f2-4326-a521-303da7df5285>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Human Ovarian Reserve from Conception to the Menopause The human ovary contains a fixed number of non-growing follicles (NGFs) established before birth that decline with increasing age culminating in the menopause at 50–51 years. The objective of this study is to model the age-related population of NGFs in the human ovary from conception to menopause. Data were taken from eight separate quantitative histological studies (n = 325) in which NGF populations at known ages from seven weeks post conception to 51 years (median 32 years) were calculated. The data set was fitted to 20 peak function models, with the results ranked by obtained correlation coefficient. The highest ranked model was chosen. Our model matches the log-adjusted NGF population from conception to menopause to a five-parameter asymmetric double Gaussian cumulative (ADC) curve ( = 0.81). When restricted to ages up to 25 years, the ADC curve has = 0.95. We estimate that for 95% of women by the age of 30 years only 12% of their maximum pre-birth NGF population is present and by the age of 40 years only 3% remains. Furthermore, we found that the rate of NGF recruitment towards maturation for most women increases from birth until approximately age 14 years then decreases towards the menopause. To our knowledge, this is the first model of ovarian reserve from conception to menopause. This model allows us to estimate the number of NGFs present in the ovary at any given age, suggests that 81% of the variance in NGF populations is due to age alone, and shows for the first time, to our knowledge, that the rate of NGF recruitment increases from birth to age 14 years then declines with age until menopause. An increased understanding of the dynamics of human ovarian reserve will provide a more scientific basis for fertility counselling for both healthy women and those who have survived gonadotoxic cancer treatments. Citation: Wallace WHB, Kelsey TW (2010) Human Ovarian Reserve from Conception to the Menopause. PLoS ONE 5(1): e8772. doi:10.1371/journal.pone.0008772 Editor: Virginia J. Vitzthum, Indiana University, United States of America Received: October 13, 2009; Accepted: December 31, 2009; Published: January 27, 2010 Copyright: © 2010 Wallace, Kelsey. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: TWK is supported by United Kingdom Engineering and Physical Sciences Research Council (EPSRC) grants EP/CS23229/1 and EP/H004092/1. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. Our current understanding of human ovarian reserve presumes that the ovary establishes several million non growing follicles (NGFs) at around five months of gestational age which is followed by a decline to the menopause when approximately 1,000 remain at an average age of 50–51 years [1], [2]. With approximately 450 ovulatory monthly cycles in the normal human reproductive lifespan, this progressive decline in NGF numbers is attributed to follicle death by apoptosis. A number of recent reports have challenged this long held understanding of mammalian reproductive biology by reporting the presence of mitotically-active germ stem cells in juvenile and adult mouse ovaries [3]–[5]. While the presence of germ stem cells within the mammalian ovary that are capable of neo-oogenesis remains controversial [6], [7], a better understanding of the establishment and decline of the NGF population will be important in determining if neo-oogenesis occurs as part of normal human physiological ageing. Several studies have reported the number of NGFs at different ages in humans [8]–[15] and constructed mathematical models of NGF decline [1], [2], [13]. Some of these studies have suggested that the instantaneous rate of temporal change increases around the age of 37 years, when approximately 25,000 follicles remain, followed by exhaustion of the NGF pool and menopause 12–14 years later [11], [12]. These studies have addressed the decline from birth of the NGF population, but none has included the crucial establishment phase from conception to a peak at 18–22 weeks gestation. Our study uses more histological data than any previous study, and includes data from prenatal ovaries, allowing us to analyse complete models involving both population establishment and decline. The objective of our study is to identify the most robust mathematical model to describe the establishment and decline of NGFs in the human ovary from conception to menopause. This will allow us to estimate the number of NGFs present at a given age and describe the rate of recruitment of NGFs towards maturation or apoptosis with increasing age. The Best Fitting Peak Model The highest ranked model ( = 0.81) returned by the search for the best fitting model to the 325 data-points was a 5-parameter asymmetric double-Gaussian cumulative (ADC) curve (Equation (1)). The first 3 parameters, (, and ) define the scale and amplitude of the curve; the remaining parameters ( and ) define the rates of population establishment and decline.(1) The values for the parameters that maximise the correlation coefficient for our dataset are given in Equation (2). The model is asymmetric, since rapid establishment is followed by a long period of decline ( is small and is large); it is double-Gaussian cumulative since it is the product of two Gauss-error functions.(2) This model (illustrated graphically in Figure 1) demonstrates that 81% of the variation in individual NGF populations is due to age alone. Full statistical analysis and derivation details for the model are given in additional file 1. Interestingly, if we confine our analysis to the histological data from conception to age 25 years we discover that the ADC model remains the best fit ( = 0.95) and that 95% of the variation in NGF numbers is due to age alone (Figure 2). Figure 1. The model that best fits the histological data. The best model for the establishment of the NGF population after conception, and the subsequent decline until age at menopause is described by an ADC model with parameters = 5.56 (95% CI 5.38–5.74), = 25.6 (95% CI 24.9–26.4), = 52.7 (95% CI 51.1–54.2), = 0.074 (95% CI 0.062–0.085), and = 24.5 (95% CI 20.4–28.6). Our model has correlation coefficient = 0.81, fit standard error = 0.46 and F-value = 364. The figure shows the dataset (n = 325), the model, the 95% prediction limits of the model, and the 95% confidence interval for the model. The horizontal axis denotes age in months up to birth at age zero, and age in years from birth to 51 years. Figure 2. The model that best fits the histological data for ages up to 25 years. The best model for the establishment of the NGF population after conception, and the subsequent decline until 25 years of age is described by an ADC model with parameters = 5.79 (95% CI 5.03–6.55), = 28.0 (95% CI 15.8–40.2), = 57.4 (95% CI 33.1–81.8), = 0.074 (95% CI 0.067–0.081), and = 34.3 (95% CI −4.2–72.8). This model has correlation coefficient = 0.95, fit standard error = 0.29 and F-value = 585. This figure shows the dataset (n = 126), the model, the 95% prediction limits of the model, and the 95% confidence interval for the model. The horizontal axis denotes age in months up to birth at age zero, and age in years from birth to 25 years. To guard against model selection bias and to test the robustness of the model with respect to the data, we randomly removed 50 data points 61 times and re-fitted the models, with the ADC model being, on average, the best fitting model (double-sided t-test for difference of means, ). Additional file 2 gives details of the statistical tests performed. Models That Allow Neo-Oogenesis To examine whether a model that permits neo-oogenesis would provide a better fit to the data, we further analysed the data by fitting models that need be neither asymmetric nor single peak. We found that any mathematical model that permits an increase in NGF population after the peak at 18–22 weeks has a markedly inferior fit compared to the best-fitting ADC model (Figure 3). Figure 3. The highest ranked model that allows growth after the initial peak. The highest-ranked non-peak model returned by TableCurve is a polynomial given by . Compared to the ADC model for the same data, the model has lower correlation coefficient, higher fit standard error, and lower F-statistic. All other TableCurve models that allow multiple peaks have an inferior fit to the data. Estimated NGF Population by Age If menopause is defined as a population of less than one thousand (in line with Faddy & Gosden [2]), the model predicts age of menopause as 49.6 (95% CI 47.9–51.2) years, with a 95% prediction interval of 38.7–60.0 years (Figure 4). Our model gives a maximum mean NGF population of 300,000 per ovary (95% CI 225,000–390,000), occurring at 18–22 weeks post-conception, with a 95% prediction interval (PI) of 35,000–2,534,000 NGFs. Figure 4 gives values for NGF populations at illustrative ages, together with the corresponding 95% prediction intervals. Women with an average age of menopause will have around 295,000 NGFs present at birth per ovary, with women destined to have an earlier menopause having around 35,000 NGFs and late menopause women having over 2.5 million NGFs per ovary at birth. Figure 4. Illustrative examples. This figure gives illustrative examples of NGF populations predicted by our model. At ages 20 weeks, birth, 13 years, 25 years and 35 years the average NGF population is given, together with the respective 95% prediction intervals. The predicted average age at menopause (49.6 years) is also shown, together with the 95% prediction interval. We describe the percentage of the NGF population remaining for a given age for women whose ovarian reserve is established and declines in line with our model (Figure 5). We estimate that for 95% of women by the age of 30 years only 12% of their maximum pre-birth NGF population is present and by the age of 40 years only 3% remains. The hypothesis that early (respectively late) menopause is related to low (respectively high) peak population at 18–22 weeks post conception is illustrated in Figure 6. Figure 5. Percentage of ovarian reserve related to increasing age. The curve describes the percentage of ovarian reserve remaining at ages from birth to 55 years, based on the ADC model. 100% is taken to be the maximum ovarian reserve, occurring at 18–22 weeks post-conception. The percentages apply to all women whose ovarian reserve declines in line with our model (i.e. late and early menopause are associated with high and low peak NGF populations, respectively). We estimate that for 95% of women by the age of 30 years only 12% of their maximum pre-birth NGF population is present and by the age of 40 years only 3% remains. Figure 6. A hypothetical link between ovarian reserve and age at menopause. This figure describes the hypothesis that individual age at menopause is determined by the peak NGF population established at around 20 weeks post-conception. The central curve is the ADC model described in Figures 1 and 4. Above and below are the hypothetical curves for an ovary having log-adjusted peak population varying from the average case by one half, one, one and a half, and two standard deviations. Under this hypothesis, a variation by, for example, one standard deviation in the initial peak population results in a one standard deviation from the average age at menopause. Rates of NGF Recruitment towards Maturation To investigate the number of NGFs recruited towards maturation and ovulation or apoptosis each month we have solved our model to show (Figure 7a) that the maximum recruitment of 880 NGFs per month occurs at 14 years 2 months for the average age at menopause woman. While the maximum rate of recruitment varies hugely, from around 100 NGFs per month (Figure 7b) to over 7,500 NGFs per month ( Figure 7c) for women with an early or late menopause respectively, the rate of NGF recruitment increases to a plateau at just over 14 years and then decreases for women in general irrespective of how many NGFs were established by birth. Figure 7. Rates of NGF recruitment towards maturation. Each sub-figure describes the absolute number of NGFs recruited per month, for ages from birth to 55 years, based on population decline predicted by the ADC model. Figure 7 (a) - red curve - denotes recruitment for individuals whose decline is in line with the average age at menopause; maximum recruitment of 880 follicles per month occurs at 14 years 2 months. Figure 7 (b) - green curve - denotes recruitment for individuals whose decline is in line with early age at menopause (the lower 95% prediction limit of the model); maximum recruitment of 104 follicles per month occurs at 14 years 2 months. Figure 7 (c) - yellow curve - denote recruitment in line with late age at menopause (the upper 95% prediction limit of the model); maximum recruitment of 7,520 follicles per month occurs at 14 years 2 months. In this study we have identified the first model of human ovarian reserve from conception to menopause that best fits the combined histological evidence. This model allows us to estimate the number of NGFs present in the ovary at any given age, suggests that 81% of the variance in NGF populations is due to age alone, and shows that the rate of NGF recruitment increases from birth to age 14 years then declines with age until menopause. Further analysis demonstrated that 95% of the NGF population variation is due to age alone for ages up to 25 years. The remaining 5% is due to factors other than age e.g. smoking, BMI, parity and stress. We can speculate that as chronological age increases, factors other than age become more important in determining the rate at which NGFs are lost through apoptosis. We have made two major assumptions in our study. Firstly, that the results of the eight histological studies that have estimated the total number of NGFs per human ovary are comparable. The definition of a NGF is identical in six of the studies and similar in the remaining two studies. The counting techniques all used a variation of the technique first described by Block [8]. Our assumption is in line with that of Faddy and Gosden who also assumed histological studies to be comparable when deriving a model for ovarian reserve from birth that also took average age at menopause into account [2]. The differences between their 1996 study and our study are that we have used more histological data–including for the first time prenatal data–and that we use known ranges of age at menopause as a check on the validity of our model, rather than a contributing factor. In the eight reported studies, the majority of younger samples were from autopsy and many of the older subjects had undergone surgical oophorectomy. It is possible that this difference in the source of the ovarian samples influences our finding that factors other than age become more important in older women. Other studies, and previously reported models, of ovarian reserve have not made a distinction in the reported source of the material; in particular the Hansen et al. study combined 77 autopsy subjects and 45 elective surgical subjects into a single dataset. Our second assumption is that the peak number of NGFs at 18–22 weeks gestation defines age at menopause for the individual woman, with early menopause women having low peak populations and late menopause women having high peak populations. The data on the number of NGFs in the ovary is cross-sectional: there is no longitudinal data available and in the absence of a non-invasive test to count NGFs in the individual woman this data is likely to remain unobtainable. Considered together the wide variation at age at menopause and wide variation of peak population of NGFs are suggestive but not conclusive evidence for this assumption to be tenable. Since the publications by Johnson et al. [3], [4] there has been lively scientific debate around the widely held concept that a non-renewing oocyte reserve is laid down in the ovaries at birth, and that neo-oogenesis does not occur in adult life [6]. Johnson and Tilly have argued that their experiments in the adult female mouse have demonstrated conclusively that neo-oogenesis continues in adulthood. They have proposed that the source of postnatal oocyte production is from germline stem cells in the bone marrow, which are transported in the peripheral circulation as germline progenitor cells to arrive in the adult ovary [4]. The recent report showing isolation and culture of germline stem cells from adult mouse ovaries [5], which restored fertility after injection into infertile mice, provides further evidence to support the presence of germ line stem cells in mammalian ovaries. Our analysis of the available histological data demonstrates that any mathematical model that permits an increase in NGF population after the peak at 18–22 weeks has a markedly inferior fit compared to the best-fitting asymmetric peak functions. While the emerging evidence strongly supports the existence of germ stem cells within adult mouse ovaries [7], our model provides no supporting evidence of neo-oogenesis in normal human physiological ageing. We have described the percentage of the NGF population remaining for a given age for all women whose ovarian reserve is established and declines in line with our model (Figure 5). If we assume that a high initial NGF population is associated with late menopause, and that a low peak NGF population is associated with early menopause, then these percentages apply to 95% of all women. It is important to note that we have shown that by the age of 30 years the percentage NGF population is already 12% of the initial reserve and only 3% of the reserve remains at 40 years of age. A recent study has shown that most women underestimate the extent to which age affects their ability to conceive naturally [16]. Our finding that the rate of NGF recruitment increases to a plateau at just over 14 years and then decreases in all women irrespective of how many NGFs were established by birth is highly unlikely to be explained by coincidence. From the first comprehensive model of NGF decline from birth [2] we can calculate that the maximum NGF recruitment occurs at birth (data not shown). However, this model was not only based on goodness of fit to histological data, but it also included adjustments to take known distribution of ages at menopause into account. A more recent model of decline from birth is based entirely on fitting to histological data [13]. For this model we calculate that the maximum recruitment of NGFs to maturation occurs at 18 years 11 months (data not shown). In western society the average age of menarche is around 13 years [17], with early breast development appearing around age 11 years. Our data suggests that the onset of oestrogenisation and ovulation heralds a slowing in the rate of NGF recruitment. Our findings suggest that both endocrine and paracrine factors may be important in the slowing and subsequent decline in the rate of NGF recruitment. An important candidate is anti-Müllerian hormone (AMH), a member of the transforming growth factor-beta (TGF-) superfamily of growth factors [18]. They are produced by ovarian granulosa cells and oocytes in a developmental, stage-related manner and function as intra-ovarian regulators of folliculogenesis. There is good evidence that AMH from granulosa cells of pre-antral or antral follicles exerts a negative inhibitory influence on the primordial to primary follicle transition [19]. Furthermore AMH has been proposed as an indirect marker of ovarian reserve in post-pubertal women [19]. Until the onset of puberty (characterised by the switching on of the hypothalamic-pituitary axis and the pulsatile secretion of the gonadotophins FSH and LH) follicular maturation rarely progresses beyond the pre-antral stage. The presence of the pulsatile secretion of FSH and LH at puberty promotes follicular maturation to the antral stage and beyond. There is however incomplete data on AMH levels in pre-pubertal girls in the literature: AMH is undetectable before birth [20] and is detectable at low levels in infants [21]. The explanation for our finding that the rate of NGF recruitment increases until the onset of puberty, levels off at around 14 years of age, and then declines to the menopause remains unclear. It is interesting to speculate that AMH levels which are undetectable at birth may rise at puberty with the establishment of regular ovulatory cycles and be responsible for the slowing of the rate of NGF recruitment that occurs at puberty. Can a more complete understanding of the establishment and decline of the non-renewing pool of NGFs help us to assess ovarian reserve for the individual woman? Several candidate markers for the assessment of ovarian reserve in the individual woman have been suggested including FSH, Inhibin B, AMH, and antral follicle counts and ovarian volume by transvaginal ultrasound [22], [23]. We have previously reported a striking correlation between ovarian volume and NGF population using an earlier model [24]. However the measurement of ovarian volume by transvaginal ultrasound is imprecise, particularly at the lower end of the range [25]. It is likely that a better understanding of NGF establishment and decline will improve our ability to assess ovarian reserve for the individual woman. One immediate application of our model is to better understand the effect of chemotherapy and radiotherapy on the human ovary. Using a model based on less complete histological data, we estimated the radiosensitivity of the human oocyte [26] and were subsequently able to estimate the effective sterilising dose of radiotherapy at a given age for the individual woman [27]. Knowledge of the dose of radiotherapy and age at which it is delivered provides an important opportunity for accurate counselling of women receiving cancer treatment and will help us to predict which women are at high risk of premature menopause and who may therefore benefit from ovarian cryopreservation [28]. We have described and illustrated a model of human ovarian reserve from conception to menopause that best fits the combined histological evidence. Our model matches the log-adjusted NGF population to a five-parameter asymmetric double Gaussian cumulative (ADC) curve ( = 0.81). When restricted to ages below 26 years, the ADC curve has = 0.95. We estimate that for 95% of women by the age of 30 years only 12% of their maximum pre-birth NGF population is present and by the age of 40 years only 3% remains. Furthermore, we found that the rate of NGF recruitment towards maturation for most women increases from birth until approximately age 14 years then decreases towards the menopause. An increased understanding of the dynamics of human ovarian reserve will provide a more scientific basis for fertility counselling for both healthy women and those who have survived gonadotoxic cancer treatments. Histological Studies of NGF Populations Data were taken from all eight known published quantitative histological studies of the human ovary (Tables 1 and 2). In all eight studies the definition of a NGF is similar. For studies 2,4,5,6,7 & 8 the definition of a NGF is according to the morphological criteria of Block and Gougeon [8], [11]: an NGF was counted when a clearly defined oocyte nucleolus was present within the optical dissector counting frame. In studies 1 & 3, fetal ovary studies, an NGF was defined as being an oocyte surrounded by a single layer of granulosa cells. Each of these studies used a variation on the technique developed by Block to estimate the NGF population of an ovary [8]. Ovaries are sectioned and some of the sections stained and photographed. A mean NGF volume is calculated from a sample image and used throughout all subsequent calculations. The photographs are analysed by hand, with the number of NGFs appearing in the photograph being counted. By assuming an even distribution throughout the ovary, the population of the samples is integrated into an estimated population for the entire ovary. The studies differ in the stain used, the number of samples chosen, the method of counting, and the mathematical formula used to obtain the estimated population from the sample populations. Some ovaries were obtained at autopsy and some after elective surgical oophorectomy. Older subjects were more likely to have undergone surgical oophorectomy. No standard error has been calculated for such studies, as the exact number of NGFs in a specific ovary has not been calculated. We combined these data into a single dataset (n = 325) and enforced a zero population at conception. Table 1. The eight quantitative histological studies forming the combined dataset. Table 2. The eight quantitative histological studies of NGF population summarised by ovarian age. Mathematical Models We fitted 20 asymmetric peak models to the data set, using TableCurve-2D (Systat Software Inc., San Jose, California, USA), and ranked by correlation coefficient. Each model defines a generic type of curve and has parameters, which, when instantiated gives a specific curve of that type. For each type of curve, we calculated values for the parameters that maximise the for that model. The 20 models supplied by TableCurve are those that are commonly reported in the scientific literature as models of datasets that rise and fall such as pharmacodymanics, cell populations and electromagnetic signals. The Levenberg-Marquardt non-linear curve fitting algorithm was used, with convergence to 15 significant figures in after a maximum of 10,000 iterations. We performed the same analysis on the datapoints associated with an age of 25 years or under. File S1 contains the dataset, the model ranking, the output used to prepare Figures 1 and 2, and the statistics associated with the highest-ranked model. To avoid selection bias, we randomly removed 50 datapoints 61 times and re-fitted the models, calculating the mean and standard deviations of the coefficients obtained for each model. We then compared the mean obtained for the two highest ranked models for a statistically significant difference. File S2 contains statistics regarding the correlation coefficient for the models obtained in this way, together with output for the test for a statistically significant difference of the two highest means. To further avoid selection bias from our initial choice of models, and to allow the possibility of more than one peak (i.e. to allow models involving regeneration of ovarian reserve), we fitted all 266 models supplied by TableCurve, again ranking by . The highest ranked model was used as the basis for further calculations. Under the modelling assumption that, in general, a high (versus low) established population results in a late (versus early) menopause, we calculated the percentage of NGF pool at given ages, and the absolute monthly loss of germ cells from birth until age 55. Calculation of Recruitment Rates To calculate the rates of recruitment of NGFs towards maturation we solved the equations describing the early, late and average menopause models for all months from birth to menopause. The absolute numbers of NGFs recruited were then given by the differences in successive monthly totals. Supporting Information This file contains (1) a plot of the ADC model that best fits the histological data, (2) full statistics for the model, (3) a plot of the residuals, (4) the dataset with values for the model, the residuals, the 95% CI for the model and the 95% prediction interval, (5) details of the iterations taken to obtain the model, and (6) the ranking of the peak functions as fitted to the dataset. (0.18 MB PDF) It may have been a chance result that the ADC performed well on the 325 datapoints selected, and that another model would perform better, in general, on similar datasets. The additional file contains details of the testing of this hypothesis, and shows that the ADC mean r^2 was statistically significantly higher than any other peak model (p = 0:0065). (0.05 MB PDF) We would like to acknowledge the critical discussions we have had with Prof. Richard Anderson, Prof. Ian Gent, Dr. Jacob Howe, Dr. Dror Meirow and Dr. Evelyn Telfer. Author Contributions Conceived and designed the experiments: WHW TWK. Performed the experiments: TWK. Analyzed the data: WHW TWK. Wrote the paper: WHW TWK. Media Coverage of this Article Posted by PLoS_ONE_Group Risk factors Posted by forestellm
{"url":"http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0008772","timestamp":"2014-04-16T22:52:52Z","content_type":null,"content_length":"156689","record_id":"<urn:uuid:abb0348c-b1f0-4292-9ace-184ca41d187b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Does the image of an p-adic Galois representation always lie in a finite extension? up vote 17 down vote favorite I have been looking at Serre's conjecture and noticed that there are two conventions in the literature for a p-adic representation $\rho:\mbox{Gal}(\bar{\mathbb Q}/\mathbb Q)\to \mbox{GL}(n,V).$ In some references (eg Serre's book on $\ell$-adic representations), $V$ is a vector space over a finite extension of $\mathbb Q_p$. However, in more recent papers (eg Buzzard, Diamond, Jarvis) $V$ is a vector space over $\bar{\mathbb Q_p}$. It is easy to show that the former definition is a special case of the latter, but I suspect, and would like to prove that they are actually the same. That is, I would like to show that the image of any any continuous Galois representation over $\bar{\mathbb{Q}_p}$ actually lies in a finite extension of $\mathbb Q_p$. Is this the case? I think that a proof should use the fact that $G_{\mathbb Q}$ is compact and that $\bar{\mathbb Q}_p$ is the union of finite extensions. I have tried to mimic the proof that $\bar{\mathbb Q}_p$ is not complete, but have not been able to find an appropriate Cauchy sequence in an arbitrary compact subgroup of GL($n,V$). (This is my first question, so please feel free to edit if appropriate. Thanks!) nt.number-theory galois-representations Jen, I have a one-page note in my files of a proof that every compact subgroup of GL_n(Q_p-bar) which is different from the proof in Skinner's paper. I'll .tex it up and post it shortly. – KConrad Mar 10 '10 at 23:41 @KConrad: I know one too that uses Baire Cat Theorem and I think is in a paper of Dickinson? Somehow I suspect that the "bottom line" is that it's the same as Skinner's... – Kevin Buzzard Mar 11 '10 at 0:04 Keith, that would be great...Thanks! – Johnson-Leung Mar 11 '10 at 2:00 OK, see a link I set up below. – KConrad Mar 11 '10 at 2:21 I just want to make a cultural remark that, although this is not written down in many places, it is well-known to everyone working in the field. People switch between the two settings (finite over $\mathbb Q_p$ or $\overline{\mathbb Q}_p$) depending on which is convenient. – Emerton Mar 11 '10 at 15:26 show 1 more comment 2 Answers active oldest votes A proof of the result you're after is contained at the beginning of section two of a recent paper of Skinner here. Skinner mentions that references for this fact seem to be rare. up vote 11 down vote accepted 2 Another reference: it's Lemme 2.2.1.1 of Breuil-Mezard's 2002 Duke paper (which you can find on Breuil's website), they say they learned the proof from J.-B. Bost. It's a Baire Category argument, so maybe this is the reference that Kevin was thinking of? – D. Savitt Mar 11 '10 at 0:27 add comment I tried to cut and paste here an argument from a.tex file, but it came out looking like a complete mess, so here is a webpage link I set up: http://www.math.uconn.edu/~kconrad/blurbs/ Concerning the comments by Kevin and David about proofs using the Baire category theorem, I think the proof I posted above (due to Warren Sinnott) should be viewed in a different light. Consider the theorem that the alg. closure of $\mathbf Q_p$ is not complete. There are a couple of different proofs of it. (Note Jen said a proof of that noncompleteness theorem is what she was trying to adapt to prove the compactness theorem for the matrix groups, so I suspect the proof in the link above is the direction she was trying to go in, whether or not other proofs of the compactness theorem may be considered more slick.) I'll briefly describe two such proofs. 1. In the $p$-adic book by Koblitz, he explicitly constructs an infinite series $\sum c_ip^i$ with $c_i$ in $\overline{\mathbf Q}_p$ of absolute value 1 and increasing degree over $\mathbf Q_p$, and then use the increasing-degree condition on the coefficients to show the series can't converge in $\overline{\mathbf Q}_p$, although it's Cauchy since the general term tends to 0. (This is essentially what takes place in the compactness proof at the link I posted above, but in a multiplicative setting: form a product of matrices tending to the identity whose up vote entries have higher and higher degree over $\mathbf Q_p$. The compactness hypotheses imply the product converges in $GL_n(\overline{\mathbf Q}_p)$ and then we get a contradiction. The 12 down same argument shows any compact additive subgroup of $\overline{\mathbf Q}_p$ is inside a finite extension of $\mathbf Q_p$.) 2. In the ultrametric analysis book by Schikhof, there is a proof that $\overline{\mathbf Q}_p$ is not complete which uses the Baire category theorem: the elements of $\overline{\mathbf Q} _p$ with degree up to $n$, as $n$ varies, provide a countable cover of $\overline{\mathbf Q}_p$ by closed subsets which each turn out to have no interior point, while of course their union $\overline{\mathbf Q}_p$ has many interior points. The closed set formulation of the Baire category theorem is that a countable union of closed subsets which each have no interior does not have an interior either. Thus we have a contradiction, so $\overline{\mathbf Q}_p$ is not complete. I don't think these two strategies for proving a space is incomplete are the same, at least psychologically: in the first one you explicitly construct a non-convergent Cauchy sequence and in the second one you show a general property of complete spaces doesn't hold. For the same reason, I think the Baire and non-Baire proofs of this compactness theorem are pretty different This is the line of argument that I was attempting to make, but I accepted jnewton's answer because it came in first. I wanted to let you know that in your write-up you switch from $K_r$ to $G_r$ midway through the proof. Thanks again! – Johnson-Leung Mar 11 '10 at 23:51 The switch to the congruence subgroups $G_r$ was actually intentional. At the point in the proof where they start being used, what matters is the way these subgroups shrink nicely to the identity, and for that purpose I thought it was better to emphasize the containment of the terms $g_i$ in the various subgroups $G_{d_i}$ instead of the finer information that they are in the subgroups $K_{d_i}$. However, since the switch admittedly can look like a typo, I added some text to the argument to point out the switch was being (deliberately) made. – KConrad Mar 12 '10 at 0:23 Just a comment: I think the Schikhof proof is very natural from a functional analytic perspective. Indeed, if K is any complete normed field, and V is a normed linear K-space, then it's a basic fact that any finite-dimensional subspace of V is closed (and an obvious fact that any proper subspace has empty interior). It now follows immediately from Baire Category that there is no complete normed K-linear space of countably infinite dimension. Note that, more generally, a normed field extension L/K is complete iff [L:K] is finite. This is proved in Bosch-Guntzer-Remmert. – Pete L. Clark Mar 12 '10 at 0:43 1 Something is missing at the end: C_p/Q_p is a normed field extension that is complete but [C_p:Q_p] is not finite (or I could use Q_p/Q instead). Lost a countability hypothesis in the second to last sentence? – KConrad Mar 12 '10 at 0:58 @KConrad: I lost the word "algebraic": an algebraic normed field extension $L/K$ is complete iff $[L:K]$ is finite. – Pete L. Clark Mar 12 '10 at 4:22 show 1 more comment Not the answer you're looking for? Browse other questions tagged nt.number-theory galois-representations or ask your own question.
{"url":"http://mathoverflow.net/questions/17774/does-the-image-of-an-p-adic-galois-representation-always-lie-in-a-finite-extensi","timestamp":"2014-04-20T13:41:57Z","content_type":null,"content_length":"71952","record_id":"<urn:uuid:2021525d-b9c6-43e2-8a5c-abd0ca71beae>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Concord, CA Algebra Tutor Find a Concord, CA Algebra Tutor Hi, I'm a native-born Japanese, who grew up in Tokyo, and has lived in California for more than 20 years. For the past 12 years, I have been teaching Japanese to students of a variety of ages from preschoolers to corporate executives, and skill levels from the entry level to the advanced level. A... 3 Subjects: including algebra 1, Japanese, prealgebra ...I received the departmental teaching award for my work in that class. I continued to tutor during graduate school. I have nearly forty scientific publications and book chapters from applying my chemistry knowledge to challenging problems in biochemistry and material science. 19 Subjects: including algebra 2, algebra 1, chemistry, calculus Hello! I have been a professional tutor since 2003, specializing in math (pre-algebra through AP calculus), AP statistics, and standardized test preparation. I am very effective in helping students to not just get a better grade, but to really understand the subject matter and the reasons why things work the way they do. 14 Subjects: including algebra 2, algebra 1, calculus, geometry ...Learning science and math can be difficult at times but with a little help anyone can master the principles and discover a vast, exciting, and ever expanding body of knowledge! I would be honored to help you in your quest for this knowledge. I have a bachelor's degree in Physics from U.C. 12 Subjects: including algebra 2, algebra 1, chemistry, physics ...Both require knowledge of basic writing rules (such as correct modifier placement, subject/verb agreement, correct pronoun usage, etc.), but the ACT has a LOT more questions on punctuation, for which I have developed a handout that is clear, thorough, and on point. Motivated students have increa... 53 Subjects: including algebra 1, algebra 2, English, reading Related Concord, CA Tutors Concord, CA Accounting Tutors Concord, CA ACT Tutors Concord, CA Algebra Tutors Concord, CA Algebra 2 Tutors Concord, CA Calculus Tutors Concord, CA Geometry Tutors Concord, CA Math Tutors Concord, CA Prealgebra Tutors Concord, CA Precalculus Tutors Concord, CA SAT Tutors Concord, CA SAT Math Tutors Concord, CA Science Tutors Concord, CA Statistics Tutors Concord, CA Trigonometry Tutors Nearby Cities With algebra Tutor Alameda algebra Tutors Antioch, CA algebra Tutors Berkeley, CA algebra Tutors Danville, CA algebra Tutors Hayward, CA algebra Tutors Lafayette, CA algebra Tutors Martinez algebra Tutors Oakland, CA algebra Tutors Piedmont, CA algebra Tutors Pittsburg, CA algebra Tutors Pleasant Hill, CA algebra Tutors Richmond, CA algebra Tutors San Francisco algebra Tutors Vallejo algebra Tutors Walnut Creek, CA algebra Tutors
{"url":"http://www.purplemath.com/concord_ca_algebra_tutors.php","timestamp":"2014-04-17T07:54:44Z","content_type":null,"content_length":"23955","record_id":"<urn:uuid:a3608011-45ff-4109-916c-b4255923db60>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
xxmr2d:out of memory when using pzheev I just recently installed ScaLapack on a cluster of Opterons using the Portland Group compilers (pgf90). Everything seems to have compiled fine, but when I write a test code using pzheev to find the eigenvalues/vectors of a complex*16 distributed matrix I get an "xxmr2d:out of memory" error. The same code works fine with pdsyev routine (providing you change complex (8) to real (8) as appropriate). Does anybody have any idea what's going on? I've attached the test code below, if that makes a diference. Code: Select all ! TEST CODE TO COMPARE PDSYEV AND PZHEEV program test use mpi implicit none integer, parameter :: float=selected_real_kind(6,70) integer :: i1,i2 integer, parameter :: N = 40 real (float) ::W(N) !%%% Uncomment for PZHEEV COMPLEX (float) :: A(N,N),Z(N,N) COMPLEX (float), allocatable :: a0(:,:),z0(:,:) COMPLEX (float), allocatable :: work(:) !%%% Uncomment for PCHEEV !REAL (float) :: A(N,N),Z(N,N) !REAL (float), allocatable :: a0(:,:),z0(:,:) !REAL (float), allocatable :: work(:) real (float), allocatable :: w0(:) integer :: lwork,lrwork real (float), allocatable :: rwork(:) ! blacs integer :: ctxt,MyRow,MyCol,nproc integer, parameter :: Prow=2,Pcol=1 integer, parameter :: Brow=16,Bcol=16 integer, parameter :: dlen_ = 9 integer :: desca0(dlen_),descz0(dlen_),descw0(dlen_),LDA,LDB,tmp integer :: numroc ! network topology character (1) :: TOP = ' ' ! mpi integer :: ierr,myrank external numroc call SL_init(ctxt,Prow,Pcol) call mpi_comm_size(mpi_comm_world,nproc,ierr) if (nproc /= Prow*Pcol) then write(6,*) "Error! Must have ",Prow*Pcol," processors" end if call mpi_comm_rank(mpi_comm_world,myrank,ierr) call blacs_gridinfo(ctxt,Prow,Pcol,MyRow,MyCol) LDA = numroc(N,Brow,myrow,0,Prow) LDB = numroc(N,Bcol,mycol,0,Pcol) write(6,*) MyRow,MyCol,myrank,LDA,LDB call descinit(desca0,N,N,Brow,Bcol,0,0,ctxt,LDA,ierr ) call descinit(descz0,N,N,Brow,Bcol,0,0,ctxt,LDA,ierr ) if (myrow+mycol==0) then ! generate a symmetric matrix with random elements call random_seed() do i1 = 1,N do i2 = i1,N call random_number(r1) A(i2,i1) = r1 end do end do ! distribute the matrix !%%% Uncomment for PDSYEV: !call DGEBS2D(ctxt,'A',TOP,N,N,A,N) !%%% Uncomment for PZHEEV: call ZGEBS2D(ctxt,'A',TOP,N,N,A,N) ! receive the matrix !%%% Uncomment for PDSYEV: !call DGEBR2D(ctxt,'A',TOP,N,N,A,N,0,0) !%%% Uncomment for PZHEEV: call ZGEBR2D(ctxt,'A',TOP,N,N,A,N,0,0) end if do i1 = 1,N do i2 = 1,N !%%% Uncomment for PDSYEV: !call pdelset(a0,i1,i2,desca0,A(i1,i2)) !%%% Uncomment for PZHEEV: call pzelset(a0,i1,i2,desca0,A(i1,i2)) end do end do lwork = -1 lrwork = -1 !%%% Uncomment for PDSYEV: !call pdsyev('V','U',N,a0,1,1,desca0,w0,z0,1,1,descz0,WORK,LWORK,& ! rwork,lrwork,ierr) !%%% Uncomment for PZHEEV: call pzheev('V','U',N,a0,1,1,desca0,w0,z0,1,1,descz0,WORK,LWORK,& write(6,*) "done pzheev 1" lwork = int(abs(work(1))) lrwork = int(rwork(1)+1) !call pdsyev('V','U',N,a0,1,1,desca0,w0,z0,1,1,descz0,WORK,LWORK,& ! rwork,lrwork,ierr) !%%% Uncomment for PZHEEV: call pzheev('V','U',N,a0,1,1,desca0,w0,z0,1,1,descz0,WORK,LWORK,& write(6,*) "done pzheev 2" call blacs_gridexit(ctxt) call blacs_exit(0) end program test In the text of your message you say that the matrix type is complex*16 but in the code you use complex*8. I changed that and the size of lrwork to 158 (4*N-2), and the test completes without problems. I guess it is a bug that rwork(1) didn't give the minimum size required for the array rwork. hold on Stan Tomov wrote:In the text of your message you say that the matrix type is complex*16 but in the code you use complex*8. I changed that and the size of lrwork to 158 (4*N-2), and the test completes without problems. I guess it is a bug that rwork(1) didn't give the minimum size required for the array rwork. Hold on. I need to make sure I'm not doing something stupid here. If I check Code: Select all program testit implicit none complex (8) :: c1 complex*16 :: c2 write(6,*) kind(c1) write(6,*) kind(c2) end program testit then I get Code: Select all If I am not mistaken, then complex*16 is the same as complex (8). I think the kind of error you describe with rwork is the same one I got when I spent a (very frustrating) day using the wrong precision subroutine. To summarise, I think the test code I submitted is fine, except that I forgot to define real (8) :: r1. You are right, the precision is O.K. I didn't see your definition of float and thought this is single precision specific for your compiler. I also had to define r1. With these changes I still need to put lrwork at least 158, otherwise I get segmentation fault (pzheev returns 82). I used the GNU compilers (g77/gfortran) to run the test. Remarkable. It executes! Ok so there seems to be something wrong with the workspace query in pzheev. At least I can be somewhat confident that the problem doesn't lie in how I built ScaLapack. Just out of curiosity, did you try running with pdsyev instead of pzheev? That works just fine for me. Thanks for the help.
{"url":"http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=31&p=87","timestamp":"2014-04-17T13:28:08Z","content_type":null,"content_length":"28886","record_id":"<urn:uuid:c668a7af-4421-4db5-a026-7b713e67939c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Inquiry Strategies for the Journey North Teacher Inquiry Strategies for the Journey North Teacher Representing Data: Charts and Graphs In order to make sense of data gathered during investigations and Journey North migration studies, it helps to get a visual overview of the information. Tables are used to organize amounts of numerical data (e.g., the number of different species of frogs spotted in a day.) Graphs visually show comparisons or relationships. As students pull and organize information from Journey North maps and migration data tables, they should think about their driving question and what they'd like to depict. By exposing them to different types of graphs, helping them understand when it's most appropriate to use each one, and modeling how to create each type, you will prepare them for making appropriate choices as classroom scientists. Here are some tips on when to use different graph types: • Circle (Pie) Graphs - Use these to depict parts of a whole, such as the fraction (percentage) of Journey North classrooms that are tracking just 1 species, 2 species, 3, species, and 4 or more • Bar Graphs - Use these to show comparisons of data with discreet categories, such as the number of miles traveled by each of 6 eagles. • Area Graphs - Use these to depict how something changes over time. It applies to data that may have peaks and valleys, such as the average number of monarchs spotted each week outside your • Line Graphs - Upper elementary students can use these for continuous data (e.g., height, time, temperature, volume) to show how things relate to one another. (Time is typically depicted on the X axis.) For instance, students might use a line graph to depict how the average daily temperature (or isotherm) changes over time. • Scatter Plots - These are like line graphs, but are used to represent trends; individual data points are not connected. Once students have plotted points on the graph from data tables, they can draw a "line of best fit" between or near the points that offers a visual image of the correlation between variables. From that, they should be able to write a sentence or two that summarizes the data (e.g., As the temperature warmed, the number of robins sighted increased). sense of the data they collected. Gathering Data Links Copyright 1997-2014 Journey North. All Rights Reserved. Contact Us Search
{"url":"http://www.learner.org/jnorth/tm/inquiry/datab.html","timestamp":"2014-04-17T18:34:39Z","content_type":null,"content_length":"11900","record_id":"<urn:uuid:e6a20856-7f5f-4a0e-90fe-d5d6bc7c4095>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph characteristics from the heat kernel trace Bai Xiao, Richard Wilson and Edwin Hancock Pattern Recognition Volume 42, Number 11, , 2009. ISSN 0031-3203 Graph structures have been proved important in high level-vision since they can be used to represent structural and relational arrangements of objects in a scene. One of the problems that arises in the analysis of structural abstractions of objects is graph clustering. In this paper, we explore how permutation invariants computed from the trace of the heat kernel can be used to characterize graphs for the purposes of measuring similarity and clustering. The heat kernel is the solution of the heat equation and is a compact representation of the path-length distribution on a graph. The trace of the heat kernel is given by the sum of the Laplacian eigenvalues exponentiated with time. We explore three different approaches to characterizing the heat kernel trace as a function of time. Our first characterization is based on the zeta function, which from the Mellin transform is the moment generating function of the heat kernel trace. Our second characterization is unary and is found by computing the derivative of the zeta function at the origin. The third characterization is derived from the heat content, i.e. the sum of the elements of the heat kernel. We show how the heat content can be expanded as a power series in time, and the coefficients of the series can be computed using the Laplacian spectrum. We explore the use of these characterizations as a means of representing graph structure for the purposes of clustering, and compare them with the use of the Laplacian spectrum. Experiments with the synthetic and real-world databases reveal that each of the three proposed invariants is effective and outperforms the traditional Laplacian spectrum. Moreover, the heat-content invariants appear to consistently give the best results in both synthetic sensitivity studies and on real-world object recognition problems.
{"url":"http://eprints.pascal-network.org/archive/00006839/","timestamp":"2014-04-20T03:13:50Z","content_type":null,"content_length":"9561","record_id":"<urn:uuid:82ff63ad-5e3e-4bf5-af65-dd2ec575d4f6>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00330-ip-10-147-4-33.ec2.internal.warc.gz"}