https://www.quantumstudy.com/two-point-charges-4q-and-q-are-fixed-on-the-x-axis-at-x-d-2-and-x-d-2-respectively/
# Two point charges 4q and -q are fixed on the x-axis at x = -d/2 and x = +d/2 respectively

Q: Two point charges 4q and -q are fixed on the x-axis at x = -d/2 and x = +d/2 respectively. If a third point charge q is taken from the origin to x = d along the semicircle shown in the figure, the energy of the charge will

(a) increase by $\displaystyle \frac{3 q^2}{4\pi\epsilon_0 d}$

(b) decrease by $\displaystyle \frac{q^2}{4\pi\epsilon_0 d}$

(c) decrease by $\displaystyle \frac{4 q^2}{3\pi\epsilon_0 d}$

(d) increase by $\displaystyle \frac{2 q^2}{3\pi\epsilon_0 d}$

Ans: (c)

Sol: Electrostatic potential energy when charge q is at the origin O, a distance d/2 from each fixed charge:

$\displaystyle U_1 = \frac{1}{4\pi\epsilon_0} \left[ \frac{4q \times q}{d/2} + \frac{(-q) \times q}{d/2} \right] = \frac{1}{4\pi\epsilon_0} \frac{6q^2}{d}$

Electrostatic potential energy when charge q has moved to x = d, a distance 3d/2 from 4q and d/2 from -q:

$\displaystyle U_2 = \frac{1}{4\pi\epsilon_0} \left[ \frac{4q \times q}{3d/2} + \frac{(-q) \times q}{d/2} \right] = \frac{1}{4\pi\epsilon_0} \frac{2q^2}{3d}$

The change in potential energy:

$\displaystyle \Delta U = U_2 - U_1 = \frac{1}{4\pi\epsilon_0} \frac{q^2}{d} \left( \frac{2}{3} - 6 \right) = -\frac{4 q^2}{3\pi\epsilon_0 d}$

Since the electrostatic force is conservative, only the endpoints matter and the semicircular path is irrelevant: the energy decreases by $\displaystyle \frac{4 q^2}{3\pi\epsilon_0 d}$.
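As a quick numeric check of the solution above, here is a minimal sketch in R, working in units where $q^2/(4\pi\epsilon_0 d) = 1$:

```r
# Potential energy of the moving charge q against the two fixed charges,
# in units of q^2/(4*pi*eps0*d): U = sum(q_i / r_i), distances in units of d.
U1 <- 4 / 0.5 + (-1) / 0.5   # at the origin: d/2 from both 4q and -q  ->  6
U2 <- 4 / 1.5 + (-1) / 0.5   # at x = d: 3d/2 from 4q, d/2 from -q     ->  2/3
U2 - U1                      # -16/3, i.e. a decrease of (4/3) q^2/(pi*eps0*d)
```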
https://zbmath.org/authors/?q=ai%3Aorlik.peter.1
## Orlik, Peter Compute Distance To: Author ID: orlik.peter.1 Published as: Orlik, Peter; Orlik, P.; Orlick, Peter more...less Further Spellings: Orlik, Peter Paul Nikolas External Links: MGP · Wikidata · dblp · GND · IdRef Documents Indexed: 69 Publications since 1967, including 5 Books 1 Contribution as Editor Biographic References: 1 Publication Co-Authors: 20 Co-Authors with 50 Joint Publications 490 Co-Co-Authors all top 5 ### Co-Authors 20 single-authored 14 Solomon, Louis 11 Terao, Hiroaki 8 Cohen, Daniel C. 5 Wagreich, Philip D. 4 Raymond, Frank Albert 3 Randell, Richard C. 2 Jewell, Ken 1 Aomoto, Kazuhiko 1 Dimca, Alexandru 1 Kamiya, Hidehiko 1 Kita, Michitake 1 Milnor, John Willard 1 Reiner, Victor 1 Rourke, Colin P. 1 Shapiro, Boris Zalmanovich 1 Shepler, Anne V. 1 Silvotti, Roberto 1 Takemura, Akimichi 1 Vogt, Elmar 1 Zieschang, Heiner all top 5 ### Serials 5 Inventiones Mathematicae 5 Mathematische Annalen 4 Topology 3 Advances in Mathematics 3 Nagoya Mathematical Journal 3 Topology and its Applications 2 Transactions of the American Mathematical Society 2 MSJ Memoirs 1 Arkiv för Matematik 1 Acta Mathematica 1 American Journal of Mathematics 1 Annales de l’Institut Fourier 1 Canadian Journal of Mathematics 1 Commentarii Mathematici Helvetici 1 Compositio Mathematica 1 Illinois Journal of Mathematics 1 Journal of Algebra 1 Journal of Combinatorial Theory. Series A 1 Journal of Computational and Applied Mathematics 1 Manuscripta Mathematica 1 Mathematica Scandinavica 1 Michigan Mathematical Journal 1 The Quarterly Journal of Mathematics. Oxford Second Series 1 Tohoku Mathematical Journal. Second Series 1 Bulletin of the American Mathematical Society. New Series 1 Journal of Algebraic Geometry 1 Mathematical Research Letters 1 Annals of Combinatorics 1 Annals of Mathematics. Second Series 1 Bulletin of the American Mathematical Society 1 Pure and Applied Mathematics Quarterly 1 Grundlehren der Mathematischen Wissenschaften 1 Lecture Notes in Mathematics 1 Proceedings of Symposia in Pure Mathematics 1 Regional Conference Series in Mathematics all top 5 ### Fields 23 Algebraic geometry (14-XX) 23 Manifolds and cell complexes (57-XX) 22 Several complex variables and analytic spaces (32-XX) 16 Geometry (51-XX) 14 Convex and discrete geometry (52-XX) 12 Group theory and generalizations (20-XX) 12 Algebraic topology (55-XX) 9 Combinatorics (05-XX) 4 Linear and multilinear algebra; matrix theory (15-XX) 4 Special functions (33-XX) 2 Order, lattices, ordered algebraic structures (06-XX) 2 Commutative algebra (13-XX) 2 Associative rings and algebras (16-XX) 2 Differential geometry (53-XX) 2 Global analysis, analysis on manifolds (58-XX) 1 General and overarching topics; collections (00-XX) 1 Number theory (11-XX) 1 Functions of a complex variable (30-XX) 1 Probability theory and stochastic processes (60-XX) ### Citations contained in zbMATH Open 62 Publications have been cited 1,807 times in 1,315 Documents Cited by Year Arrangements of hyperplanes. Zbl 0757.55001 Orlik, Peter; Terao, Hiroaki 1992 Combinatorics and topology of complements of hyperplanes. Zbl 0432.14016 Orlik, Peter; Solomon, Louis 1980 Seifert manifolds. Zbl 0263.57001 Orlik, Peter 1972 Isolated singularities defined by weighted homogeneous polynomials. Zbl 0204.56503 Milnor, John W.; Orlik, P. 1970 Isolated singularities of algebraic surfaces with $$\mathbb{C}^{*}$$ action. Zbl 0212.53702 Orlik, P.; Wagreich, P. 1971 Actions of the torus on 4-manifolds. I. Zbl 0216.20202 Orlik, P.; Raymond, F. 
1970 Unitary reflection groups and cohomology. Zbl 0452.20050 Orlik, Peter; Solomon, Louis 1980 Zur Topologie gefaserter dreidimensionaler Mannigfaltigkeiten. Zbl 0147.23503 Orlik, P.; Vogt, E.; Zieschang, Heiner 1967 Algebraic surfaces with k*-action. Zbl 0352.14016 Orlik, P.; Wagreich, P. 1977 Coxeter arrangements. Zbl 0516.05019 Orlik, Peter; Solomon, Louis 1983 Arrangements defined by unitary reflection groups. Zbl 0491.51018 Orlik, Peter; Solomon, Louis 1982 Introduction to arrangements. Zbl 0722.51003 Orlik, Peter 1989 Actions of the torus on 4-manifolds. II. Zbl 0287.57017 Orlik, Peter; Raymond, Frank 1974 Commutative algebras for arrangements. Zbl 0801.05019 Orlik, Peter; Terao, Hiroaki 1994 Arrangements and hypergeometric integrals. Zbl 0980.32010 Orlik, Peter; Terao, Hiroaki 2001 Discriminants in the invariant theory of reflection groups. Zbl 0614.20032 Orlik, Peter; Solomon, Louis 1988 Nonresonance conditions for arrangements. Zbl 1054.32016 Cohen, Daniel C.; Dimca, Alexandru; Orlik, Peter 2003 Arrangements and local systems. Zbl 0971.32012 Cohen, Daniel C.; Orlik, Peter 2000 Singularities. II: Automorphisms of forms. Zbl 0352.14002 Orlik, Peter; Solomon, Louis 1978 Arrangements in unitary and orthogonal geometry over finite fields. Zbl 0579.51005 Orlik, Peter; Solomon, Louis 1985 Actions of $$\mathrm{SO}(2)$$ on 3-manifolds. Zbl 0172.25402 Orlik, Peter; Raymond, Frank 1968 Coxeter arrangements are hereditarily free. Zbl 0798.51011 Orlik, Peter; Terao, Hiroaki 1993 Arrangements and Milnor fibers. Zbl 0813.32033 Orlik, Peter; Terao, Hiroaki 1995 Singularities of algebraic surfaces with $$\mathbb C^ *$$ action. Zbl 0206.24003 Orlik, Peter; Wagreich, Philip 1971 Twisted de Rham cohomology groups of logarithmic forms. Zbl 0905.14010 Aomoto, Kazuhiko; Kita, Michitake; Orlik, Peter; Terao, Hiroaki 1997 The monodromy of weighted homogeneous singularities. Zbl 0341.14001 Orlik, P.; Randell, R. 1977 On Coxeter arrangements and the Coxeter number. Zbl 0628.51010 Orlik, Peter; Solomon, Louis; Terao, Hiroaki 1987 On 3-manifolds with local SO(2) action. Zbl 0176.21304 Orlik, P.; Raymond, F. 1969 The number of critical points of a product of powers of linear functions. Zbl 0934.32020 Orlik, Peter; Terao, Hiroaki 1995 On the homology of weighted homogeneous manifolds. Zbl 0249.57029 Orlik, Peter 1972 The sign representation for Shephard groups. Zbl 1058.20034 Orlik, Peter; Reiner, Victor; Shepler, Anne V. 2002 Milnor fiber complexes for Shephard groups. Zbl 0737.51017 Orlik, Peter 1990 Smooth homotopy lens spaces. Zbl 0182.57504 Orlik, P. 1969 The Milnor fiber of a generic arrangement. Zbl 0807.32029 Orlik, Peter; Randell, Richard 1993 Weighted homogeneous polynomials and fundamental groups. Zbl 0198.28303 Orlik, P. 1970 On the complements of affine subspace arrangements. Zbl 0797.55016 Jewell, Ken; Orlik, Peter; Shapiro, Boris Z. 1994 Complements of subspace arrangements. Zbl 0795.52003 Orlik, Peter 1992 The Hessian map in the invariant theory of reflection groups. Zbl 0614.20033 Orlik, Peter; Solomon, Louis 1988 Arrangements and ranking patterns. Zbl 1125.52017 Kamiya, Hidehiko; Orlik, Peter; Takemura, Akimichi; Terao, Hiroaki 2006 Arrangements and hypergeometric integrals. 2nd ed. Zbl 1119.32014 Orlik, Peter; Terao, Hiroaki 2007 Some cyclic covers of complements of arrangements. Zbl 0994.32023 Cohen, Daniel C.; Orlik, Peter 2002 Arrangements of hyperplanes and differential forms. Zbl 0559.05020 Orlik, Peter; Solomon, Louis; Terao, Hiroaki 1984 Gauss-Manin connections for arrangements. 
I: Eigenvalues. Zbl 1046.32002 Cohen, Daniel C.; Orlik, Peter 2003 Equivariant resolution of singularities with C$$^*$$ action. Zbl 0249.14011 Orlik, Peter; Wagreich, Philip 1972 The multiplicity of a holomorphic map at an isolated critical point. Zbl 0382.57015 Orlik, Peter 1977 Geometric relationship between cohomology of the complement of real and complexified arrangements. Zbl 1010.52017 Jewell, Ken; Orlik, Peter 2002 Complexes for reflection groups. Zbl 0469.20023 Orlik, Peter; Solomon, Louis 1981 A character formula for the unitary group over a finite field. Zbl 0521.20023 Orlik, Peter; Solomon, Louis 1983 Gauss-Manin connections for arrangements. II: Nonresonant weights. Zbl 1078.32018 Cohen, Daniel C.; Orlik, Peter 2005 Seifert n-manifolds. Zbl 0295.57021 Orlik, Peter; Wagreich, Philip 1975 The structures of weighted homogeneous polynomials. Zbl 0356.57030 Orlik, P.; Randell, R. 1977 Singularities. I: Hypersurfaces with an isolated singularity. Zbl 0352.14001 Orlik, Peter; Solomon, Louis 1978 Singularities and group actions. Zbl 0418.32009 Orlik, Peter 1979 Stratification of the discriminant in reflection groups. Zbl 0674.20025 Orlik, Peter 1989 Hypergeometric integrals and arrangements. Zbl 0976.33012 Orlik, Peter 1999 Free involutions on homotopy $$(4k+3)$$-spheres. Zbl 0159.53901 Orlik, P.; Rourke, C. P. 1968 On the Arf invariant of an involution. Zbl 0198.28502 Orlik, P. 1970 Local system homology of arrangement complements. Zbl 0974.32021 Orlik, Peter; Silvotti, Roberto 2000 Gauss-Manin connections for arrangements. IV: Nonresonant eigenvalues. Zbl 1103.32014 Cohen, Daniel C.; Orlik, Peter 2006 Stratified Morse theory in arrangements. Zbl 1119.32013 Cohen, Daniel C.; Orlik, Peter 2006 Gauss-Manin connections for arrangements. III: Formal connections. Zbl 1087.32014 Cohen, Daniel C.; Orlik, Peter 2005 Homotopy 4-spheres have little symmetry. Zbl 0281.57029 Orlik, Peter 1974 Arrangements and hypergeometric integrals. 2nd ed. Zbl 1119.32014 Orlik, Peter; Terao, Hiroaki 2007 Arrangements and ranking patterns. Zbl 1125.52017 Kamiya, Hidehiko; Orlik, Peter; Takemura, Akimichi; Terao, Hiroaki 2006 Gauss-Manin connections for arrangements. IV: Nonresonant eigenvalues. Zbl 1103.32014 Cohen, Daniel C.; Orlik, Peter 2006 Stratified Morse theory in arrangements. Zbl 1119.32013 Cohen, Daniel C.; Orlik, Peter 2006 Gauss-Manin connections for arrangements. II: Nonresonant weights. Zbl 1078.32018 Cohen, Daniel C.; Orlik, Peter 2005 Gauss-Manin connections for arrangements. III: Formal connections. Zbl 1087.32014 Cohen, Daniel C.; Orlik, Peter 2005 Nonresonance conditions for arrangements. Zbl 1054.32016 Cohen, Daniel C.; Dimca, Alexandru; Orlik, Peter 2003 Gauss-Manin connections for arrangements. I: Eigenvalues. Zbl 1046.32002 Cohen, Daniel C.; Orlik, Peter 2003 The sign representation for Shephard groups. Zbl 1058.20034 Orlik, Peter; Reiner, Victor; Shepler, Anne V. 2002 Some cyclic covers of complements of arrangements. Zbl 0994.32023 Cohen, Daniel C.; Orlik, Peter 2002 Geometric relationship between cohomology of the complement of real and complexified arrangements. Zbl 1010.52017 Jewell, Ken; Orlik, Peter 2002 Arrangements and hypergeometric integrals. Zbl 0980.32010 Orlik, Peter; Terao, Hiroaki 2001 Arrangements and local systems. Zbl 0971.32012 Cohen, Daniel C.; Orlik, Peter 2000 Local system homology of arrangement complements. Zbl 0974.32021 Orlik, Peter; Silvotti, Roberto 2000 Hypergeometric integrals and arrangements. 
Zbl 0976.33012 Orlik, Peter 1999 Twisted de Rham cohomology groups of logarithmic forms. Zbl 0905.14010 Aomoto, Kazuhiko; Kita, Michitake; Orlik, Peter; Terao, Hiroaki 1997 Arrangements and Milnor fibers. Zbl 0813.32033 Orlik, Peter; Terao, Hiroaki 1995 The number of critical points of a product of powers of linear functions. Zbl 0934.32020 Orlik, Peter; Terao, Hiroaki 1995 Commutative algebras for arrangements. Zbl 0801.05019 Orlik, Peter; Terao, Hiroaki 1994 On the complements of affine subspace arrangements. Zbl 0797.55016 Jewell, Ken; Orlik, Peter; Shapiro, Boris Z. 1994 Coxeter arrangements are hereditarily free. Zbl 0798.51011 Orlik, Peter; Terao, Hiroaki 1993 The Milnor fiber of a generic arrangement. Zbl 0807.32029 Orlik, Peter; Randell, Richard 1993 Arrangements of hyperplanes. Zbl 0757.55001 Orlik, Peter; Terao, Hiroaki 1992 Complements of subspace arrangements. Zbl 0795.52003 Orlik, Peter 1992 Milnor fiber complexes for Shephard groups. Zbl 0737.51017 Orlik, Peter 1990 Introduction to arrangements. Zbl 0722.51003 Orlik, Peter 1989 Stratification of the discriminant in reflection groups. Zbl 0674.20025 Orlik, Peter 1989 Discriminants in the invariant theory of reflection groups. Zbl 0614.20032 Orlik, Peter; Solomon, Louis 1988 The Hessian map in the invariant theory of reflection groups. Zbl 0614.20033 Orlik, Peter; Solomon, Louis 1988 On Coxeter arrangements and the Coxeter number. Zbl 0628.51010 Orlik, Peter; Solomon, Louis; Terao, Hiroaki 1987 Arrangements in unitary and orthogonal geometry over finite fields. Zbl 0579.51005 Orlik, Peter; Solomon, Louis 1985 Arrangements of hyperplanes and differential forms. Zbl 0559.05020 Orlik, Peter; Solomon, Louis; Terao, Hiroaki 1984 Coxeter arrangements. Zbl 0516.05019 Orlik, Peter; Solomon, Louis 1983 A character formula for the unitary group over a finite field. Zbl 0521.20023 Orlik, Peter; Solomon, Louis 1983 Arrangements defined by unitary reflection groups. Zbl 0491.51018 Orlik, Peter; Solomon, Louis 1982 Complexes for reflection groups. Zbl 0469.20023 Orlik, Peter; Solomon, Louis 1981 Combinatorics and topology of complements of hyperplanes. Zbl 0432.14016 Orlik, Peter; Solomon, Louis 1980 Unitary reflection groups and cohomology. Zbl 0452.20050 Orlik, Peter; Solomon, Louis 1980 Singularities and group actions. Zbl 0418.32009 Orlik, Peter 1979 Singularities. II: Automorphisms of forms. Zbl 0352.14002 Orlik, Peter; Solomon, Louis 1978 Singularities. I: Hypersurfaces with an isolated singularity. Zbl 0352.14001 Orlik, Peter; Solomon, Louis 1978 Algebraic surfaces with k*-action. Zbl 0352.14016 Orlik, P.; Wagreich, P. 1977 The monodromy of weighted homogeneous singularities. Zbl 0341.14001 Orlik, P.; Randell, R. 1977 The multiplicity of a holomorphic map at an isolated critical point. Zbl 0382.57015 Orlik, Peter 1977 The structures of weighted homogeneous polynomials. Zbl 0356.57030 Orlik, P.; Randell, R. 1977 Seifert n-manifolds. Zbl 0295.57021 Orlik, Peter; Wagreich, Philip 1975 Actions of the torus on 4-manifolds. II. Zbl 0287.57017 Orlik, Peter; Raymond, Frank 1974 Homotopy 4-spheres have little symmetry. Zbl 0281.57029 Orlik, Peter 1974 Seifert manifolds. Zbl 0263.57001 Orlik, Peter 1972 On the homology of weighted homogeneous manifolds. Zbl 0249.57029 Orlik, Peter 1972 Equivariant resolution of singularities with C$$^*$$ action. Zbl 0249.14011 Orlik, Peter; Wagreich, Philip 1972 Isolated singularities of algebraic surfaces with $$\mathbb{C}^{*}$$ action. Zbl 0212.53702 Orlik, P.; Wagreich, P. 
1971 Singularities of algebraic surfaces with $$\mathbb C^ *$$ action. Zbl 0206.24003 Orlik, Peter; Wagreich, Philip 1971 Isolated singularities defined by weighted homogeneous polynomials. Zbl 0204.56503 Milnor, John W.; Orlik, P. 1970 Actions of the torus on 4-manifolds. I. Zbl 0216.20202 Orlik, P.; Raymond, F. 1970 Weighted homogeneous polynomials and fundamental groups. Zbl 0198.28303 Orlik, P. 1970 On the Arf invariant of an involution. Zbl 0198.28502 Orlik, P. 1970 On 3-manifolds with local SO(2) action. Zbl 0176.21304 Orlik, P.; Raymond, F. 1969 Smooth homotopy lens spaces. Zbl 0182.57504 Orlik, P. 1969 Actions of $$\mathrm{SO}(2)$$ on 3-manifolds. Zbl 0172.25402 Orlik, Peter; Raymond, Frank 1968 Free involutions on homotopy $$(4k+3)$$-spheres. Zbl 0159.53901 Orlik, P.; Rourke, C. P. 1968 Zur Topologie gefaserter dreidimensionaler Mannigfaltigkeiten. Zbl 0147.23503 Orlik, P.; Vogt, E.; Zieschang, Heiner 1967 all top 5 ### Cited by 1,202 Authors 42 Terao, Hiroaki 33 Abe, Takuro 28 Dimca, Alexandru 27 Röhrle, Gerhard E. 27 Yoshinaga, Masahiko 26 Orlik, Peter 24 Yau, Stephen Shing-Toung 16 Torielli, Michele 15 Denham, Graham 15 Suciu, Alexander I. 13 Cohen, Daniel C. 13 Cuntz, Michael Joachim 13 Falk, Michael J. 12 Hoge, Torsten 12 Settepanella, Simona 11 Proudfoot, Nicholas J. 11 Randell, Richard C. 11 Reiner, Victor 11 Yuzvinsky, Sergey 10 Galaz-Garcia, Fernando 10 Papadima, Ștefan 9 Douglass, J. Matthew 9 Solomon, Louis 9 Tohǎneanu, Ştefan O. 9 Varchenko, Alexander Nikolaevich 9 Zieschang, Heiner 8 Cavicchioli, Alberto 8 Salvetti, Mario 8 Zuo, Huaiqing 7 Aomoto, Kazuhiko 7 Boileau, Michel Charles 7 Guerville-Ballé, Benoît 7 Marin, Ivan 7 Sagan, Bruce Eli 7 Schenck, Hal 7 Shepler, Anne V. 7 Sticlaru, Gabriel 7 Tran, Tan Nhat 7 Tsujie, Shuhei 7 Ziegler, Günter Matthias 7 Zimmermann, Bruno P. 6 Boyer, Charles P. 6 Delucchi, Emanuele 6 Guo, Jun 6 Hausen, Jürgen 6 Jambu, Michel 6 Lehrer, Gustav Isaac 6 Lisca, Paolo 6 Nakashima, Norihiro 6 Palezzato, Elisa 6 Paris, Luis 6 Schulze, Mathias 6 Sommers, Eric N. 6 Takemura, Akimichi 6 Wakefield, Max D. 5 Amend, Nils 5 Artal Bartolo, Enrique 5 Barcelo, Hélène 5 Budur, Nero 5 Chen, Beifang 5 Cordovil, Raul 5 Guo, Weili 5 Huh, June 5 Jiang, Guangfeng 5 Kamiya, Hidehiko 5 Kohno, Toshitake 5 Lenz, Matthias 5 Marco-Buzunáriz, Miguel A. 5 Massey, David Bradley 5 Möller, Tilman 5 Núñez-Zimbrón, Jesús 5 Rubinstein, J. Hyam 5 Searle, Catherine 5 Stanley, Richard Peter 5 Suyama, Daisuke 5 Wagreich, Philip D. 5 Weber, Claude 5 Wiemeler, Michael 4 Bailet, Pauline 4 Beck, Vincent 4 Buchweitz, Ragnar-Olaf 4 Callegaro, Filippo 4 Dolgachev, Igor’ Vladimirovich 4 Everitt, Brent 4 Gaiffi, Giovanni 4 Galicki, Krzysztof 4 Garber, David 4 Garrousian, Mehdi 4 Golubeva, Valentina Alekseevna 4 Jewell, Ken 4 Jiang, Tan 4 Kawahara, Yukihito 4 Kirillov, Anatol N. 4 Kwasik, Sławomir 4 La Luz, José 4 Libgober, Anatoly S. 4 Machida, Yoshinori 4 Măcinic, Daniela Anca 4 Michel, Jean 4 Miller, Alexander R. ...and 1,102 more Authors all top 5 ### Cited in 226 Serials 81 Topology and its Applications 58 Advances in Mathematics 55 Proceedings of the American Mathematical Society 54 Journal of Algebra 54 Transactions of the American Mathematical Society 39 Mathematische Annalen 34 Inventiones Mathematicae 31 Journal of Algebraic Combinatorics 30 Mathematische Zeitschrift 29 Journal of Combinatorial Theory. 
Series A 27 Algebraic & Geometric Topology 25 Advances in Applied Mathematics 23 Annales de l’Institut Fourier 22 Discrete Mathematics 21 European Journal of Combinatorics 19 Duke Mathematical Journal 18 Manuscripta Mathematica 17 Journal of Pure and Applied Algebra 16 Compositio Mathematica 15 Communications in Algebra 14 Geometriae Dedicata 14 Discrete & Computational Geometry 13 Mathematical Proceedings of the Cambridge Philosophical Society 13 Tohoku Mathematical Journal. Second Series 11 Bulletin of the American Mathematical Society. New Series 11 Annals of Combinatorics 10 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 9 Journal of Geometry and Physics 9 Proceedings of the Japan Academy. Series A 9 Geometry & Topology 9 Comptes Rendus. Mathématique. Académie des Sciences, Paris 9 European Journal of Mathematics 8 Functional Analysis and its Applications 8 Journal of the Mathematical Society of Japan 8 Linear Algebra and its Applications 8 Journal of Knot Theory and its Ramifications 8 Experimental Mathematics 8 The Electronic Journal of Combinatorics 8 Journal of High Energy Physics 8 Journal of Singularities 7 Israel Journal of Mathematics 7 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 7 Journal of Soviet Mathematics 7 Kodai Mathematical Journal 7 Michigan Mathematical Journal 7 Osaka Journal of Mathematics 7 Tokyo Journal of Mathematics 7 International Journal of Mathematics 7 Séminaire Lotharingien de Combinatoire 6 Archiv der Mathematik 6 Publications of the Research Institute for Mathematical Sciences, Kyoto University 6 Graphs and Combinatorics 6 Transformation Groups 6 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 6 Proceedings of the Steklov Institute of Mathematics 5 Communications in Mathematical Physics 5 Letters in Mathematical Physics 5 Journal of Number Theory 5 Nagoya Mathematical Journal 5 Order 5 Journal of Mathematical Sciences (New York) 5 Revista Matemática Complutense 5 Arnold Mathematical Journal 4 Bulletin of the Australian Mathematical Society 4 Journal für die Reine und Angewandte Mathematik 4 Memoirs of the American Mathematical Society 4 Revista Matemática Iberoamericana 4 Science in China. Series A 4 Atti della Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali. Serie IX. Rendiconti Lincei. Matematica e Applicazioni 4 Indagationes Mathematicae. New Series 4 Journal of Algebraic Geometry 4 Selecta Mathematica. New Series 4 Documenta Mathematica 4 Representation Theory 4 Journal of the European Mathematical Society (JEMS) 4 Bulletin of the American Mathematical Society 4 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 3 Journal of Mathematical Physics 3 Mathematical Notes 3 Rocky Mountain Journal of Mathematics 3 Journal of Combinatorial Theory. Series B 3 Journal of the Korean Mathematical Society 3 Journal of the London Mathematical Society. Second Series 3 Chinese Annals of Mathematics. Series B 3 The Journal of Geometric Analysis 3 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 3 Finite Fields and their Applications 3 Annals of Mathematics. Second Series 3 Bulletin of the Brazilian Mathematical Society. 
New Series 3 Mediterranean Journal of Mathematics 3 Journal of Topology 3 Kyoto Journal of Mathematics 3 Journal de l’École Polytechnique – Mathématiques 2 Discrete Applied Mathematics 2 General Relativity and Gravitation 2 Arkiv för Matematik 2 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 2 Acta Mathematica 2 Acta Mathematica Vietnamica 2 Annali di Matematica Pura ed Applicata. Serie Quarta ...and 126 more Serials all top 5 ### Cited in 48 Fields 450 Algebraic geometry (14-XX) 372 Convex and discrete geometry (52-XX) 363 Several complex variables and analytic spaces (32-XX) 311 Manifolds and cell complexes (57-XX) 258 Group theory and generalizations (20-XX) 253 Combinatorics (05-XX) 146 Algebraic topology (55-XX) 110 Commutative algebra (13-XX) 100 Differential geometry (53-XX) 70 Geometry (51-XX) 53 Nonassociative rings and algebras (17-XX) 46 Order, lattices, ordered algebraic structures (06-XX) 46 Associative rings and algebras (16-XX) 44 Number theory (11-XX) 40 Global analysis, analysis on manifolds (58-XX) 31 Quantum theory (81-XX) 24 Special functions (33-XX) 22 Topological groups, Lie groups (22-XX) 20 Dynamical systems and ergodic theory (37-XX) 18 Linear and multilinear algebra; matrix theory (15-XX) 18 Relativity and gravitational theory (83-XX) 16 Category theory; homological algebra (18-XX) 14 Computer science (68-XX) 12 Probability theory and stochastic processes (60-XX) 10 $$K$$-theory (19-XX) 9 Functions of a complex variable (30-XX) 7 Ordinary differential equations (34-XX) 7 General topology (54-XX) 6 Field theory and polynomials (12-XX) 6 Statistics (62-XX) 5 Approximations and expansions (41-XX) 5 Operator theory (47-XX) 5 Operations research, mathematical programming (90-XX) 5 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 5 Information and communication theory, circuits (94-XX) 3 Partial differential equations (35-XX) 3 Systems theory; control (93-XX) 2 General and overarching topics; collections (00-XX) 2 History and biography (01-XX) 2 General algebraic systems (08-XX) 2 Difference and functional equations (39-XX) 2 Numerical analysis (65-XX) 2 Statistical mechanics, structure of matter (82-XX) 1 Mathematical logic and foundations (03-XX) 1 Measure and integration (28-XX) 1 Functional analysis (46-XX) 1 Mechanics of particles and systems (70-XX) 1 Geophysics (86-XX) ### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
https://www.biostars.org/p/422406/
menon_ankita:

Hi, I am conducting a survival analysis using the gene expression values (converted to z-scores) from a certain gene and event-free survival time for 249 patients. I am a newbie to this, so please excuse my lack of expertise (this is also my second question on here, so sorry for any formatting errors, I'm still figuring this out). Here is my code:

finaldata$SurvObj <- with(finaldata, Surv(efst, status == 2))
finaldata$SurvObj
res.cox1 <- coxph(SurvObj ~ Zscore, data = finaldata)
summary(res.cox1)

And here is the output:

Call:
coxph(formula = SurvObj ~ Zscore, data = finaldata)

  n= 247, number of events= 140
   (2 observations deleted due to missingness)

         coef exp(coef) se(coef)     z Pr(>|z|)
Zscore 0.5878    1.8001   0.2118 2.776   0.0055 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

       exp(coef) exp(-coef) lower .95 upper .95
Zscore       1.8     0.5555     1.189     2.726

Concordance= 0.561  (se = 0.025 )
Likelihood ratio test= 7.92  on 1 df,   p=0.005
Wald test            = 7.71  on 1 df,   p=0.006
Score (logrank) test = 7.74  on 1 df,   p=0.005

efst is the event-free survival time for patients, the status is whether they are dead or alive (1 is alive, 2 is dead), and Zscore is the z-score value for the expression of the gene in question for each of the patients (to understand whether the expression is relatively high or low). However, I find it highly improbable that I am receiving such p-values, and that leads me to think that I have definitely done something incorrectly, since this is my first time trying this out. Please let me know whether you are able to find any errors I have made/how to fix them, or any help at all with how to better interpret this data. I appreciate it!

Tags: R, cox-regression, gene-expression, survival-analysis

Kevin Blighe:

Nothing seems out of the ordinary to me. The upper and lower confidence intervals of the hazard ratio are even both above 1.0; however, the difference between them is somewhat large, which may indicate heteroskedasticity, outliers, or some other minor problem. With no other information about your data source and processing steps, there's not much else that we can say.

menon_ankita:

Thank you so much for your response, Dr. Blighe! I had a question -- so originally, the data that I had was RMA (robust multiarray average) transformed data that was from an Affymetrix microarray for only the tumor tissue samples for 249 neuroblastoma patients (no normal tissue). I then calculated the z-scores for each patient using the expression value of the gene in question (NEK2) for that patient, subtracting the mean (from all the genes from all the patients), and dividing by the standard deviation (once again, from all genes of all patients). Is there perhaps a flaw with how I calculated my z-score? I then used those z-score values, event-free survival time, and state in my regression method. This is for a school project, and I am still learning as I go along, so please excuse me for all the questions (I am feeling a bit lost). If the above data is right, then that means my hazard ratio is 1.8. How would I interpret this relative to the expression level of the gene? Thank you for your time!

Kevin Blighe:

Calculating global Z-scores, like you have done, is no problem. Sometimes they are calculated independently on a per-gene basis, though. With any type of regression, though, you need to be mindful of outliers, which can give misleading results.
How do the first few rows of your input data, finaldata, actually appear? How is state encoded?

As you are using a continuous scale for the Cox model predictors, the interpretation of an HR = 1.8 is not readily intuitive. The beta coefficient, 0.5878, from which the HR is calculated, relates to the difference between your groups / strata based on a unit change [i.e. a value of 1] in your input expression level. Put another way: if we increase the Z-score of the expression by 1, how does group x change with respect to group y? The HR is then the exponential of the beta coefficient.

menon_ankita:

Hi Dr Blighe, That makes some more sense now. I have re-done my regression model using just the RMA transformed data (not z-scores) because RMA data is already normalized and log transformed (https://support.bioconductor.org/p/50480/), so I don't think I need to calculate z-scores (please correct me if I'm wrong), but I was wondering how I would be able to test all the genes for all 249 patients using Cox at once and filter them for p-value < 0.05 and HR > 1. I saw your extremely helpful tutorial (https://www.biostars.org/p/344233/), but I am not sure how I would follow this and use your RegParallel package, since my data is in 249 separate txt files (like this: ftp://caftpd.nci.nih.gov/pub/OCG-DCC/TARGET/NBL/gene_expression_array/L3/gene/Full/chla.org_NBL.HumanExon.Level-3.BER.full_gene.TARGET-30-PAAPFA-01A-01R.txt) and not a GEO dataset like the one that you used. I would appreciate any tips or steps that I could take! Thanks, Ankita

Kevin Blighe:

Fortunately or unfortunately, learning how to get data into the correct format is one of the most common things that you will do as a bioinformatician. In your situation, you should first obtain a vector of all files (list.files()) and then read in the information via read.table(), fread(), or something else. You can loop through each element of the file listing via a for loop, or do it better / quicker via lapply(), something like:

lapply(files, function(x) read.table(x, header = TRUE, stringsAsFactors = FALSE, dec = '.', sep = '\t'))

menon_ankita:

Ok, thank you! I had one last question, sorry -- since I am now using just the RMA transformed data (http://www.molmine.com/magma/loading/rma.htm) instead of z-scores, how would I interpret the HR value and generate Kaplan-Meier curves, since I do not know the relative expression levels (what constitutes high or low expression)? I was thinking that I would have to set a cut-off value or a range based on the median or the mean, but I was not sure exactly how I would go about this, or whether this would be "conventional", and would really appreciate some suggestions. Here is my new output using the RMA transformed values:

         coef     exp(coef)  se(coef)  z         Pr(>|z|)
nek2     0.393838 1.48266    0.141876  2.775931  0.005504394

I see that my beta coefficient value is 0.393 and the HR is 1.482, but like I said earlier, since I am not using z-scores anymore and I am using quantile-normalized data that has been pre-processed, I am not really sure how to interpret this. Thank you so much!

Kevin Blighe:

The idea is still the same for using just the normalised expression levels, i.e., the beta coefficient represents the change between cases / controls based on a 1-unit increase in your expression levels. The interpretation of the output would indeed be easier if you converted your expression levels into, e.g., high|mid|low.
For example, you could convert the data to Z-scores again, and then bin the data based on:

- Z > 1.96 = high
- Z < -1.96 = low (in code, write as Z < (1.96 * -1))

Z = 1.96 is the cut-off for statistical significance at 5% alpha in a two-tailed distribution.

menon_ankita:

Ok, thank you! I am working with just tumor data (so no control or normal group to compare expression levels to), so I think I will set the ranges/cut-offs based on the median and standard deviation of the whole dataset (hopefully that works). Once again, thanks for all the help, it has been very useful.
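Putting the thread's advice together, here is a minimal sketch in R. The file layout (columns `gene` and `value`) and the `clinical` data frame are assumptions for illustration, since the actual TARGET file structure is not shown above; the RegParallel package mentioned in the thread would replace the plain apply() loop for speed.

```r
library(survival)

# Read all per-patient expression files into one matrix (genes x patients).
# ASSUMPTION: each tab-delimited file has columns 'gene' and 'value';
# adjust to the real TARGET column names as needed.
files <- list.files(path = "expression", pattern = "\\.txt$", full.names = TRUE)
tabs  <- lapply(files, function(x)
  read.table(x, header = TRUE, stringsAsFactors = FALSE, sep = "\t"))
expr  <- sapply(tabs, function(t) t$value)
rownames(expr) <- tabs[[1]]$gene

# Global z-scores, as discussed above: centre and scale by the whole matrix.
z <- (expr - mean(expr)) / sd(expr)

# One Cox model per gene; collect beta, HR, and p-value.
# ASSUMPTION: 'clinical' is a data.frame with efst and status (1=alive, 2=dead)
# in the same patient order as the columns of 'expr'.
res <- t(apply(z, 1, function(g) {
  fit <- summary(coxph(Surv(clinical$efst, clinical$status == 2) ~ g))
  c(beta = fit$coefficients[1, "coef"],
    HR   = fit$coefficients[1, "exp(coef)"],
    p    = fit$coefficients[1, "Pr(>|z|)"])
}))

# Filter as proposed in the thread: p < 0.05 and HR > 1.
hits <- res[res[, "p"] < 0.05 & res[, "HR"] > 1, , drop = FALSE]

# Dichotomise one gene at |Z| > 1.96 for a Kaplan-Meier plot.
grp <- cut(z["NEK2", ], c(-Inf, -1.96, 1.96, Inf),
           labels = c("low", "mid", "high"))
```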
http://math.stackexchange.com/questions?page=1&sort=newest&pagesize=30
# All Questions

### Find the Subgroup of $\mathbb Z_4 \times \mathbb Z_2$ (Joseph A. Gallian - Exercise - 8.22)

Find the subgroup of $\mathbb Z_4 \times \mathbb Z_2$ that is not of the form $H \times K$, where $H$ is a subgroup of $\mathbb Z_4$ and $K$ is a subgroup of $\mathbb Z_2$. Order elements of ...

### Converging subsequences of two metrics

If $d$ and $d'$ are two metrics on a space $X$, is it true that they induce the same topology if and only if they have the same converging sequences?

### Why is there a subsequence of $(x_n)$ that converges to some point $y$ in $\mathbb R^p$?

A subset $A\subseteq\mathbb R^p$ is compact iff for every sequence $(x_n)$ in $A$ there is a subsequence $(x_{n_k})$ which converges to a point of $A$. I understand the whole proof of the above ...

### Rotation matrix

I'm finding different results for the 3D rotation matrix in the XY plane from different sources and I was hoping for someone to help clarify. In my "applications of vector calculus" book, the matrix ...

### Greek School Exams - Calculus problem

Ok, so this problem was posed yesterday - along with 3 others of lesser difficulty - on the Greek national exams for the 3rd grade of Lyceum, the final class, which determines University success. The reaction ...

### Calculate the fifth root of the matrix

I have got the following matrix. How to start?

### Show every chain has an upper bound?

Sometimes I feel like proofs like this are pointless. I mean, if we have a partially ordered subset, it seems automatically true that you have a max element. 1) Either you have an infinite sequence ...

### How to avoid rote learning and perform deep learning?

I saw this question on Brilliant's facebook and I didn't even think of/figure out to use difference of squares to solve this question. All the while, I have been a C student for Maths and barely ...

### Definition of normal sets and compactness

I am struggling a little bit with this notion. In Conway's Functions of One Complex Variable, he offers the definition: A set $\mathscr F \subset C(G,\Omega)$ is "normal" if each sequence in ...

### Characteristics and additional conditions for differential equation

I need to solve such a DE: $$(1+x^2)u_x+u_y=0$$ And then I need to draw its characteristics. The second part of the task says: Write three additional conditions such that this equation: Has one ...

### Example for the benefit from monotone convergence

I want to see a (preferably simple) example where I can apply monotone convergence to a sequence of functions $f_n$ but where I can't exchange limit and integration in terms of the Riemann ...

### Extension Lemma for Functions on Submanifolds

The following lemma is my question. (cf GTM218, Introduction to Smooth Manifolds) I can prove (b) using partition of unity as follows: $Proof$ for any $p \in S$ choose a slice chart $W_p$ centered at ...

### An equality with inverse trigonometric functions

I've stumbled on the equality $$\tan ^{-1}\left(\frac{3}{4}\right) \left(\pi -\tan ^{-1}\left(4 \sqrt{3}\right)\right)=4 \tan ^{-1}\left(\frac{2}{\sqrt{3}}\right) \cot ^{-1}(3).$$ Out of ...

### Why does $\frac{1}{{\left\| {\left| {{A^{ - 1}}} \right|} \right\|}} \le \left\| {\left| B \right|} \right\|$?

Let $A,B \in {M_n}$ and suppose that the following statements are true: $A$ is nonsingular, $A+B$ is singular, $\left\| {\left| . \right|} \right\|$ is a matrix norm. Why is it true that: ...
According to my book, the logarithmic function $$\log_{a}x=y$$ is defined if both $x$ and $a$ are positive and $x\neq 0$ and $a\neq 1$. So are these not correct? $$\log_{-3}9=2$$ $$\log_{-2}-8=3$$ ...
https://fluidpower.pro/hs-certification-mistakes-in-study-manual/
# HS Certification – Mistakes in Study Manual

IFPS has issued a very good Study Manual for preparing for the HS Certification. I have read the manual, tried to solve all the reviews, and found a couple of mistakes in formulas and review answers. I am referring to the 03/29/17 edition of the Study Manual. I just want to share all the mistakes I found and ask anybody who is preparing for this certification exam to keep the information below in mind and check whether I am right. Of course, I have already notified IFPS about the findings, but I have not received any confirmation of whether I am right or not.

### Review 3.5.2.1

The answer b is correct, but there is a mistake in the answer solution. First of all, the wet area is calculated wrongly. The correct calculation is:

4 ft x 2 ft + 3/4 * (4 ft * 2 ft * 2 pcs. + 2 ft * 2 ft * 2 pcs.) = 26 ft^2

because the wet area of the bottom has to be counted in full, not as 3/4 of the bottom. Next, we determine the power:

P = 0.001 * 100 * 26 = 2.6 hp

Then, when the solution converts hp to Btu/hr, it multiplies by 2454. It should multiply by 2545:

2.6 hp * 2545 Btu/hr = 6617 Btu/hr

Only this way do you get the correct answer, 6617 Btu/hr.

### Review 3.8.1.2

The normal practice on a schematic is to give cylinder parameters in the format:

[Bore Diam.] x [Rod Diam.] x [Stroke]

In the picture for review 3.8.1.2 these parameters are mixed up, which is confusing and, as a result, leads to a wrong answer for the review. The task should be stated more clearly, as is done, for example, in review 4.1.1.3, where the format of the cylinder's parameters is correct.

### Review 3.8.2.1

The answer in the Study Manual is d. 5227 psi. This is a wrong answer, because Eq. 3.28 gives the bursting pressure, while the review asks for the working pressure. To get it, you additionally have to apply Eq. 3.27 with the safety factor 4:1:

$p_w=\frac{p_B}{SF}=\frac{5226.67}{4}=1307 \; psi$

So, the correct answer should be c.

### Review 3.6.1.1 and the °C to K formula

The solution method and the answer are correct, but the formula for converting from Celsius to Kelvin (on page 3-45 of the manual) is wrong. Instead of:

°C to K: K = °C + 273.7

it should be:

°C to K: K = °C + 273.15

So, the correct solution is:

$V_2=\frac{(6.9+0.1) \cdot 4 \cdot (65 + 273.15)}{(12+0.1) \cdot (27+273.15)}=2.61 \; liters$

Only in this case do we get the answer of 2.61 liters for the volume V2.

### Formula Eq. 3.35

There are mistakes in formula 3.35:

$Q=\frac{V\cdot A}{K}$

1. On page 147 of the HS Certification Study Manual, the conversion coefficient K for metric units should be K = 16.667 (instead of the wrong 0.06).

2. On page 26 of the Fluid Power Math for Certification handbook, the conversion coefficient K should be:

– for metric units: K = 16.667 (instead of the wrong 0.06);

– for imperial units: K = 3.85 for in./sec. (instead of the wrong 0.3208) or K = 0.3208 for ft./sec. (instead of the wrong 3.85).

The same issue with metric units appears in formula Eq. N.9 of the Fluid Power Math for Certification handbook (on page 25).

Moreover, at the end of page 167 of the HS Certification Study Manual you can also find an "Eq. 3.35". But the correct number of this equation is 3.25, and its conversion coefficient K for imperial units should be K = 0.204 for in./sec. (instead of the wrong 3.85) or K = 2.45 for ft./sec. (instead of the wrong 0.3208).

So, be careful. Please check and let me know if I'm wrong.
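As a quick sanity check of the corrected numbers above, here is a minimal sketch in R; the inputs are exactly the figures quoted from the manual's reviews:

```r
# Review 3.5.2.1: wet area (full bottom + 3/4 of the four sides), power, Btu/hr
wet_area <- 4*2 + 3/4 * (4*2*2 + 2*2*2)   # 26 ft^2
power_hp <- 0.001 * 100 * wet_area        # 2.6 hp
power_hp * 2545                           # 6617 Btu/hr (1 hp = 2545 Btu/hr)

# Review 3.8.2.1: working pressure = bursting pressure / safety factor
5226.67 / 4                               # ~1307 psi, i.e. answer c

# Review 3.6.1.1: gas volume with the correct K = degC + 273.15 conversion
(6.9 + 0.1) * 4 * (65 + 273.15) /
  ((12 + 0.1) * (27 + 273.15))            # ~2.61 liters
```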
https://math.stackexchange.com/questions/823693/prove-that-binom-nk-frac-n-n-kk-viewed-as-a-function-of-k
# Prove that $\binom {n}{k} = \frac {n!} {(n-k)!k!}$, viewed as a function of $k$, has maximum at $k=\lfloor n/2 \rfloor, \lceil n/2 \rceil$. [duplicate]

Prove that the binomial coefficient $\binom {n}{k} = \frac {n!} {(n-k)!k!}$, viewed as a function of $k$, has maximum at $k=\lfloor n/2 \rfloor, \lceil n/2 \rceil$ if $n$ is odd and maximum at $k=n/2$ if $n$ is even. Also, how do I see that $\binom {n}{k} = \frac {n!} {(n-k)!k!}$ is increasing on $[0;n/2]$ and decreasing on $[n/2;n]$?

I see that $\binom {n}{k} = \frac {n!} {(n-k)!k!} = \frac {n!} {(n-(n-k))!(n-k)!} = \binom {n}{n-k}$, so clearly the function is symmetric around $n/2$. If it is possible, I would like an answer not depending on the derivative, but more on algebra.

HINT: Keeping the integer $n(>0)$ constant,

$$\frac{\binom nk}{\binom n{k-1}}=\frac{n-k+1}k$$

Now this ratio will be less than, equal to, or greater than $1$ according as $\displaystyle\frac{n-k+1}k$ is less than, equal to, or greater than $1$.

So, $\displaystyle\binom nk>\binom n{k-1}\iff \frac{n-k+1}k>1\iff k<\frac{n+1}2$

• Thank you, I really like your answer, because it explains the question not only in words but also in numbers. Would the "traditional" way of proving the argument be to look at the derivative? – Shuzheng Jun 7 '14 at 9:57
• @user111854, My pleasure. Please explain your last statement. – lab bhattacharjee Jun 7 '14 at 11:36
• I mean, would the intuitive and easiest way of proving the statement be to look at the derivative and find a point where the slope is $0$? I guess we would have slope $= 0$ at $n/2$? – Shuzheng Jun 8 '14 at 7:31

Since you realize that $\binom{n}{k} = \binom{n}{n-k}$, we just need to prove that the function is increasing from $k=0$ to $\frac{n}{2}$, or, equivalently, decreasing on the other half. Also, $\binom{n}{k}$ is the number of ways of choosing (rejecting) $k$ objects from $n$ objects, or of rejecting (choosing) $n-k$ objects. This count is increasing at first: you can choose 1 object in $n$ ways; then for two objects, the first in $n$ ways and the second in $(n-1)$ ways, but since order doesn't matter we get $\frac{n(n-1)}{2}$ ways. Now the third object can be chosen in $n-2$ ways, and again ignoring order we get $f(n,2)\cdot\frac{(n-2)}{3}$ ways, etc. As is clear, up to the halfway point we are multiplying the previous number by a factor greater than 1, so from 0 upwards it is increasing. But once we reach $\frac{n}{2}$, we see that choosing $\frac{n}{2}+k$ objects $(0\le k\le \frac{n}{2})$ is the same as rejecting $\frac{n}{2}-k$ objects, so since the function was increasing until this point, and from here the values are being reflected, we can conclude that the function is decreasing beyond this point.
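A quick numeric illustration of the hint (a minimal sketch in R): the ratio $\binom nk / \binom n{k-1} = (n-k+1)/k$ stays above 1 exactly while $k < (n+1)/2$, so the sequence rises to the middle and falls afterwards.

```r
n <- 9                                        # odd n: maxima expected at k = 4 and 5
k <- 1:n
ratio <- choose(n, k) / choose(n, k - 1)      # successive ratios of binomials
all.equal(ratio, (n - k + 1) / k)             # TRUE: the identity in the hint
ratio > 1                                     # TRUE for k = 1..4; ratio = 1 at k = 5
which(choose(n, 0:n) == max(choose(n, 0:n))) - 1   # 4 5 = floor(n/2), ceiling(n/2)
```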
http://mathhelpforum.com/algebra/96822-practice-paper-questions.html
# Math Help - Practice Paper Questions

1. ## Practice Paper Questions

1. Sally bought identical red files. Each red file costs $1.40. When Sally bought 1 more blue file at $4.60, it increased the average cost of the files to $1.80. How many files did Sally buy in total?

2. There are a total of 200 blue, green and red balls. There are as many red balls as blue balls. There are twice as many red balls as blue balls. There are fewer green balls than red balls. The number of blue balls and red balls in each group is less than 100 and divisible by 3 and 4. How many green balls are there?

3. Bella, Cody and Deborah had a collection of stickers. Cody and Deborah collected 7/10 of the stickers. Bella and Deborah collected 6/7 of the stickers. Bella and Cody collected 620 stickers. How many more stickers did Deborah collect than Cody?

These are questions from a math practice paper I did today. I am going to continue on it tomorrow, so I need the solutions by today.

2. 1. Let t = the total amount spent and x = the number of red files bought.

$\frac{t}{x}=1.40$

After we buy the $4.60 file, we add 4.60 to the total and add 1 to the number bought. This gives us a new average of 1.80:

$\frac{t+4.60}{x+1}=1.80$

So we just solve for x: the first equation gives t = 1.4x; plug this into the second equation and solve for x. (x = 7, not including the one she bought last, so 8 in total.)

3. Originally Posted by AnnaRydell
1. Sally bought identical red files. Each red file costs $1.40. When Sally bought 1 more blue file at $4.60, it increased the average cost of the files to $1.80. How many files did Sally buy in total?
Think about how "average" is computed. Then: If she bought "f" red folders at $1.40 each, how much did she spend on red folders? What then was (the expression for) her total cost? How many items did she buy, in total? What expression then stands for the average cost? Set this expression equal to the given average, and solve the resulting equation.

Originally Posted by AnnaRydell
2. There are a total of 200 blue, green and red balls. There are as many red balls as blue balls. There are twice as many red balls as blue balls. There are fewer green balls than red balls. The number of blue balls and red balls in each group is less than 100 and divisible by 3 and 4. How many green balls are there?
Which is correct: "as many", or "twice as many"?

Originally Posted by AnnaRydell
3. Bella, Cody and Deborah had a collection of stickers. Cody and Deborah collected 7/10 of the stickers. Bella and Deborah collected 6/7 of the stickers. Bella and Cody collected 620 stickers. How many more stickers did Deborah collect than Cody?
If Cody and Deborah collected 7/10 together, and Bella and Deborah collected 6/7 together, then (by subtracting), what fraction did Cody and Bella collect together? Use this information, along with the given total for Bella and Cody, to find the total number collected by all three. Then work backwards to find the individual amounts.

4. 3. Let x be the total number of stickers. These are the equations:

(a)-----> $B+D=\frac{6x}{7}$

(b)-----> $B+C=620$

(c)-----> $C+D=\frac{7x}{10}$

Equation (a) is like that because it's 6/7 of the total amount. From (a) we have $C=\frac{x}{7}$, since C, B, and D must add up to the total number of stickers. From (c) we have $B=\frac{3x}{10}$. From (b):

$B+C=\frac{3x}{10}+\frac{x}{7}=\frac{31x}{70}=620$

so this solves to x = 1400. We need

$(a)-(b)=D-C=\frac{6x}{7}-620=\frac{6(1400)}{7}-620=580$

which is the difference between D and C. I know you don't like algebra, Anna, but I hope you understand this.
If I can think of a non-algebraic method I'll let you know.

5. Originally Posted by stapel
Which is correct: "as many", or "twice as many"?
The question said something about 3 groups, so I'm guessing those are 3 different groups. Still, I get multiple working answers.

6. Originally Posted by Krahl
The question said something about 3 groups, so I'm guessing those are 3 different groups.
Yes, there are three colors. And two of the colors are related in two conflicting ways: "as many" versus "twice as many". Unless there are "zero" of each (which conflicts with other parts of the exercise), then we need corrections of the original post.

7. Edit: yeah, you're right stapel

8. Originally Posted by AnnaRydell
2. There are a total of 200 blue, green and red balls. There are twice as many red balls as blue balls. There are fewer green balls than red balls. The number of blue balls and red balls in each group is less than 100 and divisible by 3 and 4. How many green balls are there?
Sorry! Muddle-headed me... Above is the edited question ^^^

9. Originally Posted by AnnaRydell
2. There are a total of 200 blue, green and red balls. There are twice as many red balls as blue balls. There are fewer green balls than red balls. The number of blue balls and red balls in each group is less than 100 and divisible by 3 and 4. How many green balls are there?
Well, the number of red balls is a multiple of 12 that is also less than 100, so I'll pick 96. There are twice as many red balls as blue balls, so there would be 48 blue balls. That means that there must be 56 green balls, which is less than 96. It looks like this is not the only answer, though.

10. Originally Posted by AnnaRydell
2. There are a total of 200 blue, green and red balls. There are twice as many red balls as blue balls. There are fewer green balls than red balls. The number of blue balls and red balls in each group is less than 100 and divisible by 3 and 4. How many green balls are there?
The number of blues is a multiple of twelve, such that twice this multiple (the number of reds) is still less than 100. So list out multiples:

blues: 12, 24, 36, 48, 60, 72, ...
reds: 24, 48, 72, 96, 120, 144, ...

Clearly, the only plausible options are: (blues, reds): (12, 24), (24, 48), (36, 72), (48, 96)

The total is 200, and the number of greens is less than the number of reds. Then the number of greens is:

(12, 24): 200 - 12 - 24 = ...?
(24, 48): 200 - 24 - 48 = ...?
(36, 72): 200 - 36 - 72 = ...?
(48, 96): 200 - 48 - 96 = ...?

Which option(s) work(s)?

11. Hello, Anna!

2. There is a total of 200 blue, green and red balls. There are twice as many red balls as blue balls. There are fewer green balls than red balls. The number of blue balls and red balls is less than 100 and divisible by 3 and 4. How many green balls are there?

Let: . $\begin{array}{ccc}R &=& \text{no. of red balls} \\ B &=& \text{no. of blue balls} \\ G &=& \text{no. of green balls}\end{array}$

There are fewer green balls than red balls: . $G \:<\:R$ .[1]

The number of blue balls is divisible by 12: . $B \:=\:12k$ for some positive integer $k.$

The number of red balls is twice the number of blue balls: . $R \:=\:2B \:=\:24k$

The number of red balls is less than 100: . $24k \:<\:100 \quad\Rightarrow\quad k \:\leq \:4$ .[2]

There is a total of 200 balls: . $R + B + G \:=\:200 \quad\Rightarrow\quad 24k + 12k + G \:=\:200$

. . We have: . $36k + G \:=\:200 \quad\Rightarrow\quad G \:=\:200 - 36k$

From [2], there are four cases to consider: . $k \:=\:4,3,2,1$

. . If $k = 4$, then: .
$R = 96,\;B = 48,\;G \:=\:200 - 36(4) \quad\Rightarrow\quad G\:=\:56$

. . If $k = 3$, then: . $R = 72,\;B = 36,\;G \:=\:92$ . . . which contradicts [1]. The same happens for $k \:=\:2\text{ or }1.$

Therefore, the only solution is: . $\begin{Bmatrix} R &=& 96 \\ B &=& 48 \\ G &=& 56 \end{Bmatrix}\quad\hdots \text{ There are 56 green balls.}$
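A brute-force check of the three answers above (a minimal sketch in R, independent of the algebra):

```r
# Q1: number of red files x whose $1.40 average rises to $1.80 after one $4.60 file
avg <- (1.4 * (1:50) + 4.6) / ((1:50) + 1)
which(abs(avg - 1.8) < 1e-9)        # 7 red files, hence 8 files in total

# Q2: blue divisible by 12, red = 2*blue < 100, green = remainder, green < red
for (b in seq(12, 96, by = 12)) {
  r <- 2 * b; g <- 200 - b - r
  if (r < 100 && g > 0 && g < r) cat("blue =", b, " red =", r, " green =", g, "\n")
}                                   # prints only: blue = 48  red = 96  green = 56

# Q3: B + D = 6x/7, C + D = 7x/10, B + C = 620  =>  C = x/7 and B = 3x/10
x <- 620 * 70 / 31                  # 1400 stickers in total
6 * x / 7 - 620                     # D - C = 580
```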
https://physics.stackexchange.com/questions/640292/phase-shifting-the-one-dimensional-wave-function
Phase-shifting the one-dimensional wave function Let $$\Psi(x)$$ be the wave function for a one-dimensional quantum-mechanical system, $$i=\sqrt{-1}$$, and let $$p_x$$ be the momentum operator in one dimension. Show that for any real number $$l$$ one has $$e^{ilp_{x}}\Psi(x)=\Psi(x+l)$$ In the answer sheet it mentions a method with the Fourier transform and another method using power series. I'm struggling to solve it either way. For the power series method I guess that I should expand as follows: $$e^{ilp_{x}}=\sum_{n=0}^{\infty} a_n (?-?_0)^n$$ however I am unsure what to write instead of "$$?$$" and "$$?_0$$", since the left-hand side has no variable. I'm mostly interested in the power series method, but understanding the Fourier method would also be nice. P.S. I'm unsure if one could call this "phase-shifting"; please feel free to change the title if it is not accurate terminology. • What definition of the operator $p_x$ do you know? – J.G. May 31 at 10:12 • $p_x=-i\hbar \frac{\partial}{\partial x}$. The momentum operator $\vec p$ is always $\vec p=-i\hbar\nabla$? – ludz May 31 at 10:15 • You're on the right track now, although the question works with the nondimensionalization $\hbar=1$. Can you expand $e^{d/dx}$ as a power series in $d/dx$, then apply it to $\Psi$ by Taylor's theorem? – J.G. May 31 at 10:22 • $e^{\frac{d}{dx}} \Psi(x)=\Sigma \frac{(\frac{d}{dx})^n}{n!} \Psi(x)$ ? – ludz May 31 at 10:28 • Sorry, that should have been $e^{l\tfrac{d}{dx}}$. – J.G. May 31 at 10:29 This is clearly a homework-style question, so I'll only give you some hints as to how to do it. First, the quantity $$\hat{T}(l) = e^{il p_x}$$ is an operator. More precisely, it is a function of the momentum operator $$\hat{p}_x$$. Functions of operators are precisely defined by their power series expansions. In other words, suppose some function $$f(u)$$ has a power series expansion $$f(u) = \sum_{n=0}^\infty a_n u^n,$$ then the operator $$f(\hat{p}_x)$$ (for example) is defined as $$f(\hat{p}_x) \equiv \sum_{n=0}^\infty a_n \hat{p}_x^n,\tag{1}\label{1}$$ where the coefficients $$a_n$$ are the same, and $$\hat{p}_x^n$$ represents the $$\hat{p}_x$$ operator acting $$n$$ times in succession. In your case, your function is $$f(u) = e^{ilu}$$, whose series expansion is very well known. If you write the series expansion of this function using the form of the momentum operator $$\hat{p}_x = -i\hbar \partial_x$$, the answer should be obvious. (If it isn't, go back and revise Taylor series.) The power series "proof" is popular, appealing, but fallacious. It requires the wavefunction to be analytic -- i.e. to have a Taylor series that actually converges to the function. There is no reason for the wavefunction to have that property. If the wavefunction vanishes outside some closed interval, there is no way that $$e^{i\hat p_x a}\psi$$, with $$e^{i\hat p_x a}$$ defined as $$e^{i\hat p_x a}\stackrel{?}=1+ia\hat p_x+\frac{a^2}{2}(i\hat p_x)^2+\dots =1+a \partial_x+\frac {a^2}{2} \partial^2_{xx}+\ldots,$$ can ever make $$\psi$$ nonzero where it was originally zero. On the other hand, if $$\psi(x)= \langle x|\psi\rangle$$ and $$\tilde \psi(p) =\langle p|\psi\rangle$$, with $$\hat p|p\rangle=p|p\rangle$$, then $$\psi(x)= \int \frac{dk}{2\pi} e^{ikx}\tilde \psi(k)$$ and $$\psi(x+a)=\int \frac{dk}{2\pi} e^{ika} e^{ikx} \tilde \psi(k),$$ and this can be turned into a valid proof of the claim.
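To see the Fourier-method argument in action numerically, one can multiply the discrete Fourier coefficients by $e^{ikl}$ and compare with the directly translated wave function (a sketch in Python with NumPy; the grid, test wave packet, and shift are my own choices):

```python
import numpy as np

# Periodic grid and a smooth test wave function (a Gaussian wave packet)
N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
psi = np.exp(-x**2) * np.exp(2j * x)

l = 1.5  # translation distance
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)  # momentum grid (hbar = 1)

# e^{i l p_x} acts as multiplication by e^{i k l} in Fourier space
psi_shifted = np.fft.ifft(np.exp(1j * k * l) * np.fft.fft(psi))

# Compare with the directly translated wave function Psi(x + l)
psi_direct = np.exp(-(x + l)**2) * np.exp(2j * (x + l))
print(np.max(np.abs(psi_shifted - psi_direct)))  # should be near machine precision
```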
2021-08-05 19:59:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 28, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9340906143188477, "perplexity": 165.563707097325}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046157039.99/warc/CC-MAIN-20210805193327-20210805223327-00718.warc.gz"}
http://www.mapleprimes.com/tags/equations?page=45
# Items tagged with equations

### Maple 11, Tensors in Physics Package: Possible Bug... August 10 2007 Here are some possible bugs or limitations that I have come across while working with tensors in the new Physics package. I have done my best looking into the documentation, but it is still possible that the bugs I am listing are not bugs at all but the outcome of a lack of my knowledge in using Maple. My intention in creating this blog is not to criticize but to help the Physics package development team in making updates. I appreciate their efforts in developing a much-needed package for areas like fluid mechanics, continuum mechanics, the theory of relativity, etc. Platform I am using: Maple Ver 11.01 on Mac OS X 10.4.10

### how to find why maple doesn't complete the command... August 06 2007 We have a Maple 10 site license and I recently started to use Maple for some challenging symbolic manipulation. Specifically, I wanted to find the 3-dimensional eigenvectors A(u,w) and eigenvalues \lambda for a system of 8 partial differential equations. I was disappointed when Maple completed most of the problem, finding all 8 eigenvalues but only 6 of the 8 eigenvectors. In fact, explicitly issuing the Eigenvectors(A) command didn't seem to work, as it ran for a long time without returning, so I ended up just issuing a series of LinearSolve(A-\lambda I|0) commands. This worked for the first 6 eigenvalues, but for the last pair of eigenvalues Maple just didn't do anything, simply printing the command I issued. Maple didn't print any diagnostic information to let me know what went wrong. My question is: when Maple doesn't evaluate a command, how does one obtain information about what went wrong?

### Elliptic Integrals... August 05 2007 I am trying to recreate a transcendental equation from a paper on buckling analysis. I have not been able to successfully manipulate equation (10), given on page 692 of the attached files, into the final forms of equations (12) and (13), using the boundary conditions given in (11). Could someone take a look at my Maple worksheet and offer some suggestions that would yield an elliptic integral? Download 4865_Lo_p692.pdf View file details

### Iterating an Equilibrium - fsolve... August 01 2007 Hi, I am new to Maple and am having trouble with what should be a simple task. I have a series of simultaneous equations of the form a*x+b*y=c etc. I have found an initial equilibrium by choosing values of x and y which give me reasonable values of a and b (using the Solve command I have output of a,b,c,x,y which satisfy my equations). Now, I need to use the parameters I have, my a,b,c, to try and retrieve my x and y (my real aim in this is to see the effect on the x,y when I change the a,b,c). When I try to put the equations plus parameter values into Solve, it takes a very long time to solve (I stop after 20 mins). What I want to do is use the equilibrium values I obtained before to give Maple a starting point to search for a solution.

### PDSolve, Error: duplicated elements (ranking)... July 24 2007 Hello! I have some trouble with getting an explicit solution to a PDE: f=f(u,v) in IC^4 (IC=complex numbers). So I have a system of 8 equations, first order, integrable (i.e. a solution exists by theory). I'm getting the following when calling pdsolve(PDE, fcns), where PDE is the set of the PDEs and fcns are the 4 complex functions f[i], i=1,2,3,4: "Error, (in pdsolve/sys) duplicated elements (ranking)". What could it be? Thx, yadaddy. 
### Plotting a function which contains a (difficult) i... July 13 2007 Hello, This is related to my recent posts at http://www.mapleprimes.com/forum/integral-equations#comment-8339 I have a function of three variables, one of which is inside a (difficult) integral. I want to calculate & store the array of points which satisfy the function, then use transform to change those points and then plot them. The function looks like this: P:=(x,m)->sqrt(x)/(sqrt((x+m)^2-1)*(x+m+sqrt((x+m)^2-1))); IntegralP:=(m)->Int(P(x,m),x=0..infinity); f4A:=(beta,Omega,m)->beta^2+(3*(beta*Omega)^(3/2)*evalf(IntegralP(m)))/(4*2^(3/4))-1; In my earlier case, m=1 (and there was another variable, but it wasn't under the integral) so the integral was not a problem. I tried the same method;

### Change coordinate systems for substitutions... July 13 2007 Still being a newbie to Maple, I am stuck on this one. I am trying to create a general system for changing my equations in an [x,y] coordinate system to a [u,v] system. As a specific example: I have two substitution equations, u=2x-3y and v=-x+y. I have four equations: x=0, x=-3, y=x, y=x+1. I have tried MapToBasis with both static and procedure statements and I am not having any luck. The equations do not completely change to [u,v]. Here is one example: > proc (u, v) u = 2*x-3*y end proc; > proc (u, v) v = -x+y end proc; > with*VectorCalculus; > SetCoordinates('cartesian', [u, v]);

### Plotting functions of several variables... July 12 2007 I am trying to plot the equations shown in Figures 5 and 6, which are solutions to Equations (14), (15), and (16). These equations are developed in the Lo paper (attached). I am working through the symbolic solution by hand and would greatly appreciate any suggestions on how to set up these equations and the graphs of the same in a worksheet. Thank you for the assistance anyone may offer. Wayne Bell Download 4865_Lo_p691.pdf View file details

### cartesian to polar equations... July 09 2007 I am trying to convert equations from cartesian to polar and spherical, expecting Maple to change the variables from x, y and z to r, theta, etc. I presume I am using the wrong commands. convert(exp(sqrt(x^2+y^2)),polar) returns the same expression in x and y. What is the correct command? Thanks

### why won't this plot?... July 09 2007 I seem to keep running into things in Maple that I just can't get to plot. I have been using Mathcad and am trying to learn to do everything in Maple, but keep finding myself having to go to Mathcad to do even the simplest things. Could someone please help and explain to me what I'm doing wrong? I have with(plots) and restart() at the top of my page. I then do this: g1 := implicitplot(3x^2,x=0..10,y=0..10); display( g1 ); This results in it just printing the display line again; no graph is shown. I can get this to work using the plot command, but since it's not very easy in Maple to print multiple equations on a single graph, I have started trying to do them this way. If there is a better way to do graphs, then please tell me, as I'm trying to learn Maple as best I can.

### Plot a projection... July 07 2007 I would like to plot a projection using parametric equations: x=la*cos(fi) y=fi fi=

### Maple T.A. 3.0 is now shipping... July 06 2007 This is to inform you that we are now shipping the newest version of Maple T.A. – Maple T.A. 3.0. Maple T.A. is an easy-to-use web-based system for creating tests and assignments, automatically assessing student responses and performance. 
It supports complex, free-form entry of mathematical equations and intelligent evaluation of responses, making it ideal for mathematics, science, or any course that requires mathematics. The new edition – Maple T.A. 3.0 – comes with increased flexibility in content creation, an enhanced user interface and improved grading and assessment capabilities.

### Trying to do an MISO... June 28 2007 Hello, I am doing some research with solar panels right now and I need to try and come up with equations for a multiple input single output system. What I am doing is taking numbers from a website and comparing them to the actual numbers from solar panels I have set up in a field. But the head of the project wants to make some unique equations. I can easily do a single input single output system by curve fitting or splining the information, but I have never tried a multiple input single output system. Is there a way to do this? Chris

### using maple to solve equations that have no algrai... June 28 2007 I'm completely new to Maple, and I'm trying to solve some equations that have no algebraic solution but should be straightforward to solve with numerical methods. I'd be most grateful if anyone could offer some advice on how to do this. For example, I'd like to solve the following two equations for p: 2^(1-p)+(1-x)^(1-p)/(1-p) and (x^(2-p)-y^(2-p))/(x-y) = (a^(2-p)-b^(2-p))/(a-b) The attached worksheet contains my unsuccessful attempts. View 4985_equations.mw on MapleNet
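For the equilibrium-iteration question above, the general recipe is to hand the numerical solver the previous equilibrium as a starting guess. A minimal sketch of that pattern in Python with SciPy rather than Maple (the system, coefficients, and starting values here are placeholders of my own, purely to illustrate supplying an initial point to a root finder):

```python
from scipy.optimize import fsolve

# Hypothetical 2x2 system in the a*x + b*y = c style; coefficients are placeholders.
a, b, c = 2.0, -1.0, 0.5

def equations(vars):
    x, y = vars
    return [a * x + b * y - c,   # first equilibrium condition
            x**2 + y - 1.0]      # second (nonlinear) condition

x0 = (0.4, 0.8)  # the previously found equilibrium, used as the starting guess
x, y = fsolve(equations, x0)
print(x, y)
```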
2015-10-10 07:01:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4594392776489258, "perplexity": 711.789915497548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737942301.73/warc/CC-MAIN-20151001221902-00216-ip-10-137-6-227.ec2.internal.warc.gz"}
https://dhruveshp.com/blog/2021/signal-propagation-on-slurm/
## Issue with signal propagation to inner script on slurm.

When one runs scancel or when the job reaches its time limit, slurm will send SIGTERM to the job and wait for a certain amount of time before it sends the final SIGKILL. During this window between SIGTERM and SIGKILL, the job can do some cleanup/saving, etc., to exit gracefully. This is all good. However, suppose we run a python script from sbatch like this:

see_signal.py:

```python
import signal
import time

print("start script")

def print_signal(sig, frame):
    print("Script received signal:", sig)
    if sig == 15:
        print("SIGTERM received, raising SIGINT")
        raise KeyboardInterrupt

signal.signal(signal.SIGTERM, print_signal)
signal.signal(signal.SIGCONT, print_signal)

try:
    print("script started")
    for i in range(100000):
        print("working...")
        time.sleep(0.1)
except KeyboardInterrupt as e:
    print("SIGINT received in script. We will exit gracefully")
    time.sleep(10)
```

```bash
#!/bin/bash
#SBATCH --output=t.log
python see_signal.py
```

The python script never receives the SIGTERM, but dies a painful and sudden death when the job receives the SIGKILL. Also, changing the execution of the python script to a proper job step by using srun python see_signal.py instead of python see_signal.py does not help either.

## Solutions:

1. Start the process in the background and use its PID to send the relevant signal:

```bash
#!/bin/bash
#SBATCH --output=t.log
#SBATCH --signal=B:TERM@60
# The directive above tells the controller to send SIGTERM to the job 60 secs
# before its time ends, to give it a chance for better cleanup.

# Install a trap for the signals INT and TERM in the main BATCH script here.
# Send SIGTERM using kill to the internal script's process and wait for it
# to close gracefully.
# Note: Most python scripts don't install a handler for SIGTERM and hence
# might die a quick painful death on receiving SIGTERM (kill -15).
# To avoid this, you can send SIGINT, i.e., KeyboardInterrupt, using (kill -2).
trap 'echo signal received in BATCH!; kill -15 "${PID}"; wait "${PID}";' SIGINT SIGTERM

# Start the work in a background process and get its PID
python see_signal.py &

# Set the PID var so that the trap can use it
PID="$!"
wait "${PID}"
```

If you cancel the job manually, make sure that you specify the signal as TERM, like so: scancel --signal=TERM <jobid>.

2. If you only have one job step, a much cleaner solution is to use exec to start that step in the main BATCH process (solution courtesy of Michael Boratko):

```bash
#!/bin/bash
#SBATCH --output=t.log
#SBATCH --signal=B:TERM@60
# SIGTERM is sent 60 secs before the time limit, as above.

exec python see_signal.py
```

3. By default all the signals to a job are only sent to the main BATCH script. If the job steps inside this script use srun, then the signals are propagated to the job steps. However, if the main BATCH script does not handle the signal, it will not wait for the job steps to handle the propagated signals. Hence, ultimately, the job steps will still not get a chance to end gracefully. So, the recommended way for such a case is to install a trap for the signal in the main BATCH script and, in it, ask the job to wait for all the subprocesses/job steps to end:

```bash
#!/bin/bash
#SBATCH --output=t.log
#SBATCH --signal=B:TERM@60
# SIGTERM is sent 60 secs before the time limit, as above.

# Trap the signal in the main BATCH script here.
sig_handler() {
    echo "BATCH interrupted"
    wait # wait for all children, this is important!
}
trap 'sig_handler' SIGINT SIGTERM SIGCONT

srun python see_signal.py
```
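To sanity-check a handler like the one in see_signal.py without going through slurm at all, you can deliver the signal to the process yourself (a minimal sketch of my own, assuming a POSIX system; it is not part of the original post):

```python
import os
import signal
import time

def handler(sig, frame):
    print("received signal", sig)
    raise KeyboardInterrupt

signal.signal(signal.SIGTERM, handler)

try:
    # Deliver SIGTERM to ourselves, mimicking what scancel --signal=TERM does
    os.kill(os.getpid(), signal.SIGTERM)
    time.sleep(1)
except KeyboardInterrupt:
    print("cleaning up gracefully")
```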
2021-10-23 11:13:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3485736548900604, "perplexity": 10044.245176356419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585671.36/warc/CC-MAIN-20211023095849-20211023125849-00428.warc.gz"}
https://www.physicsforums.com/threads/find-the-smallest-positive-integer.828771/
# Find the smallest positive integer

#### youngstudent16

1. The problem statement, all variables and given/known data For a positive integer $n$, let $a_n=\frac{1}{n} \sqrt[3]{n^{3}+n^{2}-n-1}$ Find the smallest positive integer $k \geq2$ such that $a_2a_3\cdots a_k>4$ 2. Relevant equations The restrictions are the only relevant thing I can think of 3. The attempt at a solution I have just tried plugging in numbers so far. When $n=2$ I got $\frac{3^{\frac{2}{3}}}{2}$ When $n=3$ I got $\frac{2 \times 2^{\frac{2}{3}}}{3}$ When $n=4$ I got $\frac{1}{4} 3^{\frac{1}{3}} \hspace{1mm} 5^{\frac{2}{3}}$ Now this is growing really slowly, so this is obviously not the correct approach.

#### Mark44 Mentor

1. The problem statement, all variables and given/known data For a positive integer $n$, let $a_n=\frac{1}{n} \sqrt[3]{n^{3}+n^{2}-n-1}$ Find the smallest positive integer $k \geq2$ such that $a_2a_3\cdots a_k>4$ 2. Relevant equations The restrictions are the only relevant thing I can think of 3. The attempt at a solution I have just tried plugging in numbers so far. When $n=2$ I got $\frac{3^{\frac{2}{3}}}{2}$ When $n=3$ I got $\frac{2 \times 2^{\frac{2}{3}}}{3}$ When $n=4$ I got $\frac{1}{4} 3^{\frac{1}{3}} \hspace{1mm} 5^{\frac{2}{3}}$ Now this is growing really slowly, so this is obviously not the correct approach. I would advise you to just keep going. Sometimes, a brute force approach is the easiest way to go. For your results for n = 4, is the expression you show less than 4? Writing the result as a decimal approximation would be helpful.

#### Student100 Gold Member

1. The problem statement, all variables and given/known data For a positive integer $n$, let $a_n=\frac{1}{n} \sqrt[3]{n^{3}+n^{2}-n-1}$ Find the smallest positive integer $k \geq2$ such that $a_2a_3\cdots a_k>4$ 2. Relevant equations The restrictions are the only relevant thing I can think of 3. The attempt at a solution I have just tried plugging in numbers so far. When $n=2$ I got $\frac{3^{\frac{2}{3}}}{2}$ When $n=3$ I got $\frac{2 \times 2^{\frac{2}{3}}}{3}$ When $n=4$ I got $\frac{1}{4} 3^{\frac{1}{3}} \hspace{1mm} 5^{\frac{2}{3}}$ Now this is growing really slowly, so this is obviously not the correct approach. Set up an inequality.

#### Student100 Gold Member

I would advise you to just keep going. Sometimes, a brute force approach is the easiest way to go. For your results for n = 4, is the expression you show less than 4? Writing the result as a decimal approximation would be helpful. Never mind, I'm an idiot. :)

#### youngstudent16

Set up an inequality. OK, trying to just work with the variables, no numbers, I'm getting this pattern $\frac{\sqrt[3]{(k-1)(k+1)^2}(k-1)}{k}$ Now I simplify and set up the inequality $\frac{\sqrt[3]{(2k^2)(k+1)^2}}{2k}>4$ Now what?

#### youngstudent16

I would advise you to just keep going. Sometimes, a brute force approach is the easiest way to go. For your results for n = 4, is the expression you show less than 4? Writing the result as a decimal approximation would be helpful. I tried putting them in decimal form using WolframAlpha; I went to 10 numbers and still it was tiny.

#### Student100 Gold Member

OK, trying to just work with the variables, no numbers, I'm getting this pattern $\frac{\sqrt[3]{(k-1)(k+1)^2}(k-1)}{k}$ Now I simplify and set up the inequality $\frac{\sqrt[3]{(2k^2)(k+1)^2}}{2k}>4$ Now what? $a_2+a_3+\dots a_k$ What's the sum of your brute force method? You are summing them, correct?

#### youngstudent16
$a_2+a_3+\dots a_k$ What's the sum of your brute force method? You are summing them, correct? It's multiplying, not summing, and yes, I tried brute force now using WolframAlpha for several numbers, and it's going up very slowly, like 1.01, 1..., 1..., 1..., etc.

#### Student100 Gold Member

It's multiplying, not summing, and yes, I tried brute force now using WolframAlpha for several numbers, and it's going up very slowly, like 1.01, 1..., 1..., 1..., etc. Okay, well then, as long as you set up your inequality right, have you tried punching it into a calculator/Wolfram and seeing if a solution exists? Picking various k's and working it out?

#### haruspex Homework Helper Gold Member 2018 Award

$\frac{\sqrt[3]{(2k^2)(k+1)^2}}{2k}>4$ Now what? You've done the hard work. Just cube both sides, multiply out and simplify.

#### Mark44 Mentor

As it turns out, brute force and direct calculation with paper and pencil don't work very well for this problem. You can, however, use brute force and a computer to find the answer. I put together an Excel spreadsheet that shows that when n is about 250, the product finally gets to 4. The first two rows of my spreadsheet look like this: Code: 2 | 1/A1 * (A1^3 + A1^2 - A1 - 1)^(1/3) | =B1 =A1 + 1 | 1/A2 * (A2^3 + A2^2 - A2 - 1)^(1/3) | =B2 * C1 I just copied the second row (all three columns) down a bunch of rows.

#### haruspex Homework Helper Gold Member 2018 Award

As it turns out, brute force and direct calculation with paper and pencil don't work very well for this problem. You can, however, use brute force and a computer to find the answer. I put together an Excel spreadsheet that shows that when n is about 250, the product finally gets to 4. The first two rows of my spreadsheet look like this: Code: 2 | 1/A1 * (A1^3 + A1^2 - A1 - 1)^(1/3) | =B1 =A1 + 1 | 1/A2 * (A2^3 + A2^2 - A2 - 1)^(1/3) | =B2 * C1 I just copied the second row (all three columns) down a bunch of rows. It's not at all difficult algebraically. Look at where youngstudent got to in post #5.

#### andrewkirk Homework Helper Gold Member

OK, trying to just work with the variables, no numbers, I'm getting this pattern $\frac{\sqrt[3]{(k-1)(k+1)^2}(k-1)}{k}$ Now I simplify and set up the inequality $\frac{\sqrt[3]{(2k^2)(k+1)^2}}{2k}>4$ Now what? I'm not sure that's quite right. Go back to the expression inside the cube root in $a_n$. Notice that it factorises to $(n^2-1)(n+1)=(n-1)(n+1)^2$. Next, bring the $n$ in the denominator inside the cube root, by cubing it. That gives us $${a_n}^3=\frac{(n-1)(n+1)^2}{n^3}$$ Hence the inequality you want to prove is: $$4^3=64< \prod_{k=2}^n \frac{(k-1)(k+1)^2}{k^3}$$ Look carefully at the inside of the product. Notice how the power in the denominator is 3 and the numerator has the 1st power of the previous factor and the 2nd power of the next factor. Does that give you an idea about some really nice simplifying cancellation that is going to happen between adjacent factors? Try writing out a few factors in a row like this and you'll get an idea of the cancellation, which will lead you towards a simple guessed expression for the product, in which everything cancels out except a few bits from the first and last factors. You can then use mathematical induction to prove that that expression is the correct one for each product. Once you have that expression, solving the inequality is easy. It should be an inequality involving only polynomials in $n$, none with degree greater than 2, so it's just solving a quadratic. 
Last edited:

#### haruspex Homework Helper Gold Member 2018 Award

I'm not sure that's quite right. It's what I get. To get that, youngstudent has already done most of what you describe. Reducing it to a quadratic is all that's left.

#### youngstudent16

You've done the hard work. Just cube both sides, multiply out and simplify. Ah, perfect, thank you! I got 254 as the correct solution; that was a lot of computation.

#### andrewkirk Homework Helper Gold Member

It's what I get. To get that, youngstudent has already done most of what you describe. Reducing it to a quadratic is all that's left. It's the first of the two formulas that I think is not right. The second is the same as mine, minus some cancelling ($\frac{(n+1)^2}{4n}>4^3$), but is not equivalent to the first. It's possible that the difference between the two is just an error in LaTeX coding. The purpose of my post was to indicate that one can do this problem deductively rather than just inductively.
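A quick numerical confirmation of the k = 254 answer (a sketch in Python; the loop bound is my own choice):

```python
# Multiply the a_k terms until the running product first exceeds 4.
prod = 1.0
for k in range(2, 300):
    prod *= (k**3 + k**2 - k - 1) ** (1.0 / 3.0) / k
    if prod > 4:
        print(k, prod)  # prints 254 and a value just above 4
        break

# Telescoped closed form from the thread: product^3 = (k+1)^2 / (4k)
k = 254
print((k + 1) ** 2 / (4 * k))  # 64.0009..., just above 4^3 = 64
```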
2019-05-23 12:02:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.711998462677002, "perplexity": 695.2783762377195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257243.19/warc/CC-MAIN-20190523103802-20190523125802-00294.warc.gz"}
https://www.gamedev.net/forums/topic/470499-not-able-to-write-to-a-file-after-you-read-from-it/
# Not able to write to a file after you read from it?

## Recommended Posts

I am trying to read down to a certain line in a file, see what it says, and then write out a new value based on that. I am using the fstream class, and setting the fstream::in | fstream::out option when I open the file, but it seems it won't let me write to the file after reading (if I write first, it works fine). My plan is to just read the whole file into memory, then write the whole file back out, but I was wondering if there was a different way to do it. Thanks

You can't insert into the middle of a file, nor can you overwrite using fstream. Plan accordingly.

That's the usual basic approach, but you don't have to hold on to all the data at once. Become familiar with the concept of a stream. Usually you end up with something more like:

Open a blank output file in addition to the input file.
For each line in the input file:
    If it's the one we're interested in:
        Do awesome calculations
        Output the new value to the output file
    Else:
        Output the line to the output file
Use OS functionality to copy the output file over the input (in C++, <cstdlib> has the stuff you need)

This has the added benefit of making it easier to recover, in general, from problems that come up during the processing.
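The same stream-and-swap pattern, sketched in Python for brevity rather than C++ (the file name and the matching rule are placeholders of my own; in C++ the same flow uses ifstream/ofstream plus a rename at the end):

```python
import os

# Stream the input, rewrite the one line we care about, then atomically
# swap the output over the input.
with open("data.txt") as src, open("data.txt.tmp", "w") as dst:
    for line in src:
        if line.startswith("score="):      # the line we're interested in
            dst.write("score=42\n")        # output the new value
        else:
            dst.write(line)                # copy every other line through
os.replace("data.txt.tmp", "data.txt")     # swap the output over the input
```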
2018-07-20 19:07:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3855939507484436, "perplexity": 814.5878818430955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591719.4/warc/CC-MAIN-20180720174340-20180720194340-00530.warc.gz"}
https://sites.google.com/site/shitaoliu/analysis-seminar
### Analysis Seminar

Clemson Analysis Seminar. Fridays, 3:30--4:30pm, Martin M-102. Fall 2017

September 8: Walton Green (Clemson University)
September 15: Jeong-rock Yoon (Clemson University)
September 22: Oleg Yordanov (Clemson University & Bulgarian Academy of Sciences)
September 26 (Tuesday): James Melbourne (University of Minnesota)
October 6
October 13
October 20
October 27
November 3
November 10
November 17
December 1
December 8

Titles & Abstracts

September 26, James Melbourne. Title: A Rényi Entropy Trilogy. Abstract: As part of an effort to properly axiomatically characterize Shannon entropy, Alfréd Rényi put forth a family of "information measures", parameterized by $r \in [0,\infty]$. The Shannon entropy corresponds to $r=1$, and his famed entropy power inequality (EPI), fully proved by Stam some years later, can be written $N_1(X+Y) \geq N_1(X)+N_1(Y)$ for independent random variables $X,Y$. This provides an archetype for exploring further convolution inequalities under other Rényi entropy parameters. In particular, when $r=0$ one can interpret the Brunn-Minkowski inequality of convex geometry as a Rényi EPI of a nearly identical form, while setting $r=\infty$ allows one to cleanly formulate some new projection inequalities important in random matrix theory. We will properly define the terminology and notation used above in order to discuss this background and motivation before describing some recent progress in the understanding of the $r=\infty$ case. Time permitting, some general superadditivity properties will be explained as well.

September 22, Oleg Yordanov. Title: Approximate, Saturated and Blurred Scaling of Random Fields: Applications. Abstract: Scaling (homogeneous, power-law) functions are empirically identified in a variety of natural phenomena and structures. An important class of irregular structures and processes, modeled as random fields, exhibit scaling of their second-order, two-point correlation functions. Among these, also referred to as random fractals, are the morphology of rough surfaces, fully developed turbulence, star and galaxy clusters, and many others. In all these cases, the scaling is accounted for by using power-law functions, which are singular and have a limited range of validity. In this talk, I present examples of random fields whose correlation functions are defined over the entire real line and are analytic; yet they exhibit scaling properties, albeit not exact. The fields are constructed over a finite band of wavenumbers/frequencies in Fourier space. The scaling arises as an asymptotic behavior and is therefore only approximate. I also present applications of the above fields and discuss certain technical subtleties involved in these applications.

September 15, Jeong-rock Yoon. Title: Various models of viscoelasticity including the fractional derivative model.

September 8, Walton Green. Title: Brownian Rotation of Magnetized Particles in Magnetic Particle Imaging. Abstract: Magnetic Particle Imaging (MPI) is a medical imaging technique which is implemented by measuring the voltage emitted by magnetized particles in a domain. I will derive the current model (equilibrium model), propose a more sophisticated one (relaxation model), and compare the two in both simulation and reconstruction.
2018-02-25 14:49:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5726035237312317, "perplexity": 2961.4553292559945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816462.95/warc/CC-MAIN-20180225130337-20180225150337-00380.warc.gz"}
https://www.jiskha.com/questions/1721304/The-length-and-breadth-of-a-rectangular-paper-were-measured-to-be-the-nearest-centimeter
# math

The length and breadth of a rectangular paper were measured to the nearest centimeter and found to be 18 cm and 12 cm respectively. Find the percentage error in its perimeter.

1. The maximum and minimum dimensions are 18.5 x 12.5 and 17.5 x 11.5. Find the difference from the nominal perimeter P = 2(12+18) and divide it by P (a worked version of this is given after the similar-questions list below).

posted by Steve

## Similar Questions

1. ### Math The length and breadth of a rectangular paper were measured to the nearest centimeter and found to be 18 cm and 12 cm respectively. Find the percentage error in its perimeter.
2. ### math The length and breadth of a rectangular paper were measured to the nearest centimeter and found to be 18 cm and 12 cm respectively. Find the percentage error in its perimeter.
3. ### math The length and breadth of a rectangular paper were measured to the nearest centimeter and found to be 18 cm and 12 cm respectively. Find the percentage error in its perimeter.
4. ### math The length and breadth of a rectangular paper were measured to the nearest centimeter and found to be 18 cm and 12 cm respectively. Find the percentage error in its perimeter.
5. ### Math The perimeter of a rectangular field is 140m. If the length is increased by 15m and the breadth is decreased by 5m, the length will become 3 times the breadth. Find the length and breadth of the field.
6. ### Mathematics The perimeter of a rectangular field is 140m. If the length is increased by 15m and the breadth is decreased by 5m, the length will become 3 times the breadth. Find the length and breadth of the field.
7. ### geometry The rectangular label will completely cover the lateral surface of the can using as little paper as possible. If the can has a height of 8 cm and a diameter of 7 cm, then what are the width and length (to the nearest centimeter)
8. ### geometry The rectangular label will completely cover the lateral surface of the can using as little paper as possible. If the can has a height of 12 cm and a diameter of 5 cm, then what are the width and length (to the nearest centimeter)
9. ### Maths The breadth of a rectangular plot of land is 85% of its length. The difference between the length and breadth is 18m. A) What is the length? B) What is the area?
10. ### Maths A rectangular piece of paper has length = 14cm and breadth = 12cm. A square piece of paper of perimeter 24 cm is cut off from it. What is the area of the piece of paper left?

More Similar Questions
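Carrying out Steve's hint on the original question (my arithmetic, for illustration): each measurement is off by at most 0.5 cm, so the perimeter can differ from the nominal 60 cm by at most

$$2(0.5 + 0.5) = 2\ \text{cm}, \qquad \frac{2}{2(18+12)} \times 100\% = \frac{2}{60} \times 100\% \approx 3.33\%.$$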
2018-09-21 18:44:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8262961506843567, "perplexity": 488.25811655108123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157351.3/warc/CC-MAIN-20180921170920-20180921191320-00138.warc.gz"}
https://astronomy.stackexchange.com/tags/quasars/new
# Tag Info The jump occurs at the redshifted wavelength of the Lyman-$\alpha$ line, so this is the Gunn-Peterson trough, which is caused by neutral hydrogen in the intergalactic medium suppressing any radiation with shorter wavelengths for sufficiently distant quasars. This is a limiting case of the Lyman-$\alpha$ forest formed by the absorption lines caused by all the ...
2022-01-26 22:33:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7470874786376953, "perplexity": 777.5557699246369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00346.warc.gz"}
https://power.larc.nasa.gov/docs/methodology/energy-fluxes/geometry/
# Solar Geometry

Multi-year monthly averaged solar geometry parameters are available for any latitude/longitude via the "Data Tables for a particular location" web application. The call-out below lists the solar geometry parameters provided to assist users in setting up solar panels. In the sections below the equations are provided for calculating each of the parameters, and the methodology for calculating the multi-year monthly averages is described.

Solar Geometry Parameters • Solar Noon • Daylight Hours • Daylight Average of Hourly Cosine Solar Zenith Angles • Cosine Solar Zenith Angle at Mid-Time Between Sunrise and Solar Noon • Declination • Sunset Hour Angle • Maximum Solar Angle Relative to The Horizon • Hourly Solar Angles Relative to The Horizon • Hourly Solar Azimuth Angles

Monthly Average Declination Table: The solar geometry parameters are calculated for the "monthly average day"; consequently each parameter is the monthly "averaged" value for the respective parameter for the given month. The "monthly average day" is the day in the month whose solar declination (δ) is closest to the average declination for that month (Klein, 1977). The table below lists the date and average declination, δ, for each month.

Month Day δ (°) | Month Day δ (°)
January, 17 -20.9 | July, 17 21.2
February, 16 -13.0 | August, 16 13.5
March, 16 -2.4 | September, 15 2.2
April, 15 9.4 | October, 15 -9.6
May, 15 18.8 | November, 14 -18.9
June, 11 23.1 | December, 10 -23.0

## Monthly Averaged Solar Noon (UTC time)

Equation: Monthly averaged solar noon
\begin{align} SN = 12.0 - \frac{\lambda}{15}+\frac{EoT*4}{60} \end{align}
\begin{align} Where: \\ SN: & \text{ Monthly averaged solar noon in decimal UTC hours. } \\ \lambda: & \text{ Local longitude (user input) in degrees } \\ & \text{ (positive east of the Prime Meridian; negative west of the Prime Meridian). } \\ EoT: & \text{ Equation of Time in degrees, calculated for the monthly average} \\ & \text{ day (Klein, 1977) of the given month. } \\ \end{align}

## Monthly Averaged Daylight Hours (hours)

The Monthly Averaged Daylight Hours equation is from Solar Engineering of Thermal Processes, 3rd Edition. Please see the reference box beneath the Sunset Hour Angle section below.

Equation: Monthly averaged daylight hours
\begin{align} D = \frac{2\omega_{s}}{2\pi} 24 \end{align}
\begin{align} Where: \\ D: & \text{ Monthly averaged daylight hours, in decimal form. } \\ \omega_{s}: & \text{ The sunset hour angle in radians on the monthly average day: } \\ & \omega_{s} = \cos^{-1}(-\tan\phi\,\tan\delta), \quad (-1\leq \tan\phi\,\tan\delta\leq1) \\ & \omega_{s} = 0, \quad (\tan\phi\,\tan\delta < -1) \\ & \omega_{s} = \pi, \quad (\tan\phi\,\tan\delta > 1) \\ \phi: & \text{ Latitude. } \\ \delta: & \text{ Declination of the Sun on the monthly average day of the given month. } \\ \end{align}

## Monthly Averaged Cosine Solar Zenith Angle

The Cosine Solar Zenith Angle is the average cosine of the angle between the Sun and directly overhead during daylight hours. The determination of the monthly averaged hourly cosine solar zenith angle quantities for each month is based on the monthly average day (i.e. they are calculated for the monthly average day).

The following equations may need the angles expressed in radians for the trigonometric functions. Depending on the expected input of the calculation system, to convert angles in degrees to radians multiply by rpd. This includes the result of the solar declination function. 
\begin{align} rpd = \frac{\pi}{180} = \frac{\cos^{-1}(-1.0)}{180} \end{align}

Equation: Monthly Average of Daily Average of the Cosine Solar Zenith Angle
\begin{align} CSZA_{Mdly} &= \frac{F\cos^{-1}(-F/G)+G\sqrt{1.0-(F/G)^2}}{\pi}, \quad (-1 \leq F/G \leq 1) \\ F &= \sin(\phi)\,\sin(\delta) \\ G &= \cos(\phi)\,\cos(\delta) \end{align}
\begin{align} Where&: \\ &CSZA_{Mdly}: \text{ Monthly average of the daily average of the cosine of the solar zenith angle. } \\ &\phi: \text{ Latitude. } \\ &\delta: \text{ Sun declination. } \\ \end{align}

Equation: Monthly Average of Daylight Average of the Cosine Solar Zenith Angle
\begin{align} CSZA_{Mda} &= \frac{F\cos^{-1}(-F/G)+G\sqrt{1.0-(F/G)^2}}{\cos^{-1}(-F/G)}, \quad (-1 \leq F/G \leq 1)\\ F &= \sin(\phi)\,\sin(\delta) \\ G &= \cos(\phi)\,\cos(\delta) \end{align}
\begin{align} Where&: \\ & CSZA_{Mda}: \text{ Monthly average of the daylight average of the cosine of the solar zenith angle. } \\ &\phi: \text{ Latitude. } \\ &\delta: \text{ Sun declination. } \\ \end{align}

Equation: Monthly Averaged Cosine Solar Zenith Angle at Mid-Time Between Sunrise and Solar Noon
\begin{align} CSZA_{ZMT} &= F + G\sqrt{\frac{G-F}{2G}}\\ F &= \sin(\phi)\,\sin(\delta) \\ G &= \cos(\phi)\,\cos(\delta) \end{align}
\begin{align} Where&: \\ &CSZA_{ZMT}: \text{ Cosine of the solar zenith angle at mid-time between sunrise and solar noon } \\ & \text{ on the monthly average day of the given month. } \\ & \phi: \text{ Latitude. } \\ & \delta: \text{ Sun declination. } \\ \end{align}

## Monthly Averaged Declination

Declination is the angular distance of the Sun north (positive) or south (negative) of the equator. Declination varies through the year from 23.45° N to 23.45° S and reaches its minimum/maximum at the southern/northern summer solstices. The determination of the monthly averaged declination for each month is based on the monthly average day.

The following equations may need the angles expressed in radians for the trigonometric functions. Depending on the expected input of the calculation system, to convert angles in degrees to radians multiply by rpd. This includes the result of the solar declination function.
\begin{align} rpd = \frac{\pi}{180} = \frac{\cos^{-1}(-1.0)}{180} \end{align}

Equations for computation of declination
\begin{align} \delta&=\sin^{-1}(\sin(\epsilon*rpd)*\sin(\lambda*rpd))/rpd \\ \epsilon&=23.439-0.0000004*n \\ L&= \operatorname{modulo}(280.460+0.9856474*n,\ 360.0) \\ g&= \operatorname{modulo}(357.528+0.9856003*n,\ 360.0) \\ \lambda&=\operatorname{modulo}(L+1.915*\sin(g*rpd)+0.020*\sin(2*g*rpd),\ 360.0) \end{align}
\begin{align} Where: \\ \delta: & \text{ Declination angle of the Sun, in degrees. } \\ n: & \text{ Number of days from Julian 2000.0.} \\ L: & \text{ Mean longitude of the Sun, corrected for aberration, in degrees.} \\ g: & \text{ Mean anomaly, in degrees.} \\ \lambda: & \text{ Ecliptic longitude, in degrees.} \\ \epsilon: & \text{ Obliquity of the ecliptic, in degrees.} \end{align}

## Sunset Hour Angle

The Sunset Hour Angle equation is from Solar Engineering of Thermal Processes, 3rd Edition.

Equation: Sunset Hour Angle
\begin{align} \omega_{s} &= \cos^{-1}(-\tan\phi\,\tan\delta), \quad (-1\leq \tan\phi\,\tan\delta\leq1) \\ \omega_{s} &= 0, \quad (\tan\phi\,\tan\delta < -1) \\ \omega_{s} &= \pi, \quad (\tan\phi\,\tan\delta > 1) \\ \end{align}
\begin{align} Where: \\ \omega_{s}: & \text{ Sunset hour angle. } \\ \phi: & \text{ Latitude. } \\ \delta: & \text{ Declination of the Sun on the monthly average day of the given month. } \\ \end{align}

Reference John A. Duffie and William A. Beckman, 2006. 
Solar Engineering of Thermal Processes, 3rd edition, Wiley-Interscience Publication.

## Maximum Solar Angle Relative to The Horizon

The maximum solar angle relative to the horizon occurs at local solar noon.

Equation: Maximum Solar Angle Relative to The Horizon
\begin{align} & \alpha_{max} = 90 - |\phi - \delta| \\ \end{align}
\begin{align} Where: \\ \alpha_{max}: & \text{ Maximum solar angle relative to the horizon, in degrees. } \\ \phi: & \text{ Latitude, in degrees. } \\ \delta: & \text{ Declination of the Sun on the monthly average day of the given month. } \\ \end{align}

## Hourly Based Equations

The methodologies outlined in the papers below are used to compute the hourly solar angles relative to the horizon and the hourly solar azimuth angles.

Reference Seidelmann, P.K. (Ed.), 1992. Explanatory Supplement to the Astronomical Almanac. A revision to the Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac. University Science Books, Mill Valley, CA (USA), 1992, 780 p., ISBN 0-935702-68-7. Zhang, Taiping; Stackhouse, Paul W.; Macpherson, Bradley; Mikovitz, J. Colleen (2021). A solar azimuth formula that renders circumstantial treatment unnecessary without compromising mathematical rigor: Mathematical setup, application and extension of a formula based on the subsolar point and atan2 function. Renewable Energy. Elsevier BV. 172: 1333–1340. doi:10.1016/j.renene.2021.03.047
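A direct transcription of the declination and daylight-hours equations above (a sketch in Python; the sample day number and latitude are my own choices):

```python
import math

rpd = math.pi / 180.0  # radians per degree, as defined above

def declination(n):
    """Sun declination in degrees, n days from Julian 2000.0, per the equations above."""
    eps = 23.439 - 0.0000004 * n                       # obliquity of the ecliptic
    L = (280.460 + 0.9856474 * n) % 360.0              # mean longitude
    g = (357.528 + 0.9856003 * n) % 360.0              # mean anomaly
    lam = (L + 1.915 * math.sin(g * rpd)
           + 0.020 * math.sin(2 * g * rpd)) % 360.0    # ecliptic longitude
    return math.asin(math.sin(eps * rpd) * math.sin(lam * rpd)) / rpd

def daylight_hours(phi, delta):
    """Daylight hours from the sunset hour angle formula; angles in degrees."""
    t = -math.tan(phi * rpd) * math.tan(delta * rpd)
    t = min(1.0, max(-1.0, t))       # clamp to cover the polar limiting cases
    omega_s = math.acos(t)           # sunset hour angle, in radians
    return 2.0 * omega_s / (2.0 * math.pi) * 24.0

delta = declination(171)                       # n ~ 171 lands near the June solstice
print(round(delta, 1))                         # ~23.4 degrees
print(round(daylight_hours(45.0, delta), 1))   # ~15.4 hours at 45 N
```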
2022-05-17 21:13:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 36, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 10812.908652013144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520817.27/warc/CC-MAIN-20220517194243-20220517224243-00393.warc.gz"}
http://math.stackexchange.com/questions/86214/derivative-and-second-derivative-question?answertab=votes
# Derivative and second derivative question

Suppose the derivative of a function $f$ is given below. On what interval is $f$ increasing? $$f'(x) = (x+1)^4(x-5)^3(x-7)^6$$ - Where does the second derivative come in? – Arturo Magidin Nov 28 '11 at 2:10 - Why use the Second Derivative Test (en.wikipedia.org/wiki/Second_derivative_test)? The hint/answer provided below should answer your questions. – JavaMan Nov 28 '11 at 2:14 $f(x)$ is increasing wherever $f'(x) > 0$. Thus, it follows that $f(x)$ is increasing wherever: $$(x+1)^4 (x-5)^3 (x-7)^6 > 0$$ Since $(x+1)^4$ and $(x-7)^6$ are never negative, the sign of $f'(x)$ is governed by $(x-5)^3$, so $f$ is increasing on $(5, \infty)$.
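A quick numerical sign check of $f'$ (a sketch in Python; the sample points are arbitrary):

```python
def fprime(x):
    return (x + 1)**4 * (x - 5)**3 * (x - 7)**6

for x in (-2, 0, 4, 6, 8):
    print(x, fprime(x) > 0)  # True only for the points with x > 5
```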
2014-11-27 10:00:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.793914258480072, "perplexity": 1145.2529811922527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008218.28/warc/CC-MAIN-20141125155648-00187-ip-10-235-23-156.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Talk:Riemann_sum
# Talk:Riemann sum

WikiProject Mathematics (Rated C-class, High-importance). This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. Mathematics rating: C-Class, High-Importance. Field: Applied mathematics

## Merge?

I think the Riemann sum and the Riemann integral have too much in common. I suggest merging information from them into Riemann integral, and making Riemann sum a redirect. Please see the discussion at talk:Riemann integral. (Igny 21:51, 5 December 2005 (UTC)) Yes, I was looking for this article but found that one instead. Very confusing. If you knew the difference between a Riemann sum and an integral, why would you need to look it up? Circuitboardsushi (talk) 21:56, 7 April 2012 (UTC)

## Clarification

I think it would be worth clarifying that the distances between the points (x1, x2, xi-1, xi) have to be uniform, and that this is done for simplicity's sake. The graphs would give that idea as well, which isn't really true of Riemann's original "Riemann sums." --AstoVidatu 04:47, 7 December 2006 (UTC)

## Error Estimation

Are the error estimation formulas correct for the "middle sum" and "trapezoidal sum" methods? The "middle sum" error estimate is currently quoted as $\left\vert \int_{a}^{b}f(x)\,dx-A_{\mathrm{mid}}\right\vert \leq \frac{M_{2}(b-a)^{3}}{24n^{2}},$ ...and the "trapezoidal sum" error estimate is $\left\vert \int_{a}^{b}f(x)\,dx-A_{\mathrm{trap}}\right\vert \leq \frac{M_{2}(b-a)^{3}}{12n^{2}}.$ I don't have a calculus book handy, but it doesn't make intuitive sense that the "trapezoidal sum" error could be twice the size of the "middle sum" error. Is this right? Is there a handy reference online where these formulas are derived? --Imperpay 22:30, 26 March 2007 (UTC) The formulae are correct. http://people.hofstra.edu/stefan_Waner/realworld/integral/numint.html Accuracy of Trapezoid and Simpson Approximations lists the formula for the trapezoidal error as $\left\vert \int_{a}^{b}f(x)\,dx-A_{\mathrm{trap}}\right\vert \leq \frac{M_{2}(b-a)^{3}}{12n^{2}}.$ It may not be intuitive, but oddly enough, the consideration of multiple derivatives and the fact that the middle sum method overlaps in both directions makes the error bound for the middle sum method smaller. -- Icedemon —Preceding unsigned comment added by 98.226.21.88 (talk) 09:35, 25 December 2007 (UTC)

## Examples

How have you guys not added the limit of the Riemann sum, which solves the integral with no error? Can someone please add this vital information? --69.125.25.190 (talk) 23:49, 1 November 2008 (UTC) I've added an example but I need some help with the formatting. If someone could clean it up for me, that would be great. Thanks. Dwees (talk) 04:53, 26 November 2008 (UTC) Would you be able to include sigma notation of the series in the written examples? Stevescott517 (talk) 03:09, 30 July 2015 (UTC)

## Subsets or elements?

Regarding: "Because P is a partition with n elements of I, the Riemann sum of f over I with the partition P is defined as" Should that be "n subsets of I" instead of "n elements of I"? 76.175.72.51 (talk) 17:06, 14 October 2009 (UTC)

## Simpson's Rule is Blank!

There are examples of using Simpson's Rule, but there is no theory under the blue heading like the other Riemann sum methods, just examples. 
—Preceding unsigned comment added by 134.114.119.6 (talk) 03:14, 14 March 2010 (UTC) The article located here http://en.wikipedia.org/wiki/Archimedes_Palimpsest states, "When rigorously proving theorems, Archimedes often used what are now called Riemann sums." I think Archimedes' use of this method should be noted in this article so as to make it clear that Riemann did not originate Riemann sums. — Preceding unsigned comment added by 50.46.144.101 (talk) 11:34, 24 December 2011 (UTC)

## Merge in Rectangle method

The Rectangle method and Riemann sum are the same; I propose that they should be merged. Klbrain (talk) 20:55, 26 April 2016 (UTC) The Riemann sum is a more technical and complicated way of explaining integrals. I do believe, however, that Rectangle method should be merged instead with Trapezoidal rule. Both these quadrature methods are very similar and at the same level. ReallyFat B. 11:46, 17 May 2016 (UTC) Support merge with Riemann sum. Oppose merge with Trapezoidal rule. The "rectangle method" (a name I've never heard used in seriousness) is, as the proposer suggests, exactly the same as Riemann sums. The Trapezoidal rule, however, is the first in a series of methods that attempt to improve on the accuracy of Riemann sums (with Simpson's rule etc. following as the order of the function at the "top of the box" gets higher). The trapezoidal rule is literally not a rectangular sum and therefore it should not be merged there. Jason Quinn (talk) 17:38, 11 June 2016 (UTC) I still support the merge of Rectangle method to Riemann sum (in agreement with Jason Quinn). The argument that the Riemann sum is a "more technical and complicated way of explaining integrals" is, in some ways, an argument for merging Rectangle method into Riemann sum, making the final article satisfy WP:ACCESSIBILITY. In general, pages covering the same topic with different levels of complexity are better discussed together. I can't see that merging Rectangle method and Trapezoidal rule would work as they are different methods, unless they were both merged to Numerical integration (which is the relevant target from the Quadrature dab page) (and it was then argued that all such methods should be on one page). Klbrain (talk) 13:03, 27 September 2016 (UTC) Done. I've gone ahead and performed this merge. I saw no text immediately incorporable, so I merged no text. Some of the error stuff might be useful but would need to be customized for this article. Jason Quinn (talk) 21:52, 27 July 2017 (UTC)
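For what it's worth, Imperpay's factor-of-two question is easy to probe numerically (a sketch in Python; the test integrand sin on [0, π] and n = 50 are my own choices):

```python
import math

f, a, b, n = math.sin, 0.0, math.pi, 50
exact = 2.0                       # integral of sin on [0, pi]
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]

mid = h * sum(f(a + (i + 0.5) * h) for i in range(n))
trap = h * (0.5 * f(a) + sum(f(x) for x in xs[1:-1]) + 0.5 * f(b))

# The midpoint error comes out roughly half the trapezoid error, opposite sign,
# consistent with the 1/24 vs 1/12 bounds quoted above.
print(exact - mid, exact - trap)
```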
2017-09-22 20:25:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.809016227722168, "perplexity": 1485.9204736764648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689102.37/warc/CC-MAIN-20170922183303-20170922203303-00024.warc.gz"}
http://mathhelpforum.com/calculus/38223-how-would-i-integrate.html
# Thread: how would I integrate this

1. ## how would I integrate this

Would I just integrate as normal? The double integral is putting me off. Could someone show me how I'd do: $\int\limits_0^2 \int\limits_{x^{2}-2}^x x^{2}\, dy\, dx$ Thanks

2. The integrand of the inside integral is constant with respect to $y$, so $$\int_0^2 \left( \int_{x^{2}-2}^x x^2\, dy \right) dx = \int_0^2 \left( \left. x^2 y \right|_{x^2-2}^{x} \right) dx = \int_0^2 x^2\big(x - (x^2 - 2)\big)\, dx.$$ I'm sure you can finish from here. -Dan $$= \int_0^2 (x^3 - x^4 + 2x^2)\, dx$$

3. Topsquark is right; just remember that the differential ($dx$, $dy$, $d\theta$, or whatever) dictates the variable of integration: any variable in the integrand that is not the variable of integration is treated as a constant.

4. What are you talking about?
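As a quick check of the computation above, here is a short SymPy sketch (my own addition, not part of the thread) that evaluates the double integral directly.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Inner integral first: integrate x**2 in y from x**2 - 2 up to x.
inner = sp.integrate(x**2, (y, x**2 - 2, x))
outer = sp.integrate(inner, (x, 0, 2))

print(sp.expand(inner))  # -x**4 + x**3 + 2*x**2
print(outer)             # 44/15
```

The exact value is 44/15, consistent with integrating $x^3 - x^4 + 2x^2$ over $[0, 2]$.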
2013-12-21 02:39:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9727622866630554, "perplexity": 880.4980182812762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345774525/warc/CC-MAIN-20131218054934-00036-ip-10-33-133-15.ec2.internal.warc.gz"}
http://clay6.com/qa/12091/the-potential-energy-of-a-1-kg-mass-free-to-move-along-the-x-axis-is-given-
# The potential energy of a 1 kg mass free to move along the x-axis is given by $V(x)=\left[\dfrac{x^4}{4}-\dfrac{x^2}{2}\right]$ J. The total mechanical energy of the particle is 2 J; what is the maximum speed?

$(a)\;\frac{3}{\sqrt 2}\; \text{m/s} \quad (b)\;\sqrt 2 \;\text{m/s} \quad (c)\; \frac{1}{\sqrt 2}\; \text{m/s} \quad (d)\;2\; \text{m/s}$

## 1 Answer

Velocity is maximum when the kinetic energy of the particle is maximum, i.e. when its potential energy is minimum.

$\frac{dV}{dx}=0 \Rightarrow x^3-x=0 \Rightarrow x=0,\,\pm 1$

The minimum PE occurs at $x=\pm 1$: $PE_{\min}=\frac{1}{4}-\frac{1}{2}=-\frac{1}{4}$ J.

Given $KE_{\max}+PE_{\min}=2$ J, we get $KE_{\max}=2+\frac{1}{4}=\frac{9}{4}$ J.

$\frac{9}{4}=\frac{1}{2}mv_{\max}^2 \qquad (m=1\ \text{kg})$

$v_{\max}^2=\frac{9}{2}$, so $v_{\max}=\frac{3}{\sqrt 2}\;\text{m/s}$.

Hence (a) is the correct answer. edited Feb 17, 2014 by meena.p
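A quick numeric sanity check of the answer (my own sketch, not part of the original page):

```python
import numpy as np

x = np.linspace(-2, 2, 100001)
V = x**4 / 4 - x**2 / 2            # potential energy in joules
KE_max = 2.0 - V.min()             # total energy minus the minimum PE
v_max = np.sqrt(2 * KE_max / 1.0)  # m = 1 kg
print(v_max, 3 / np.sqrt(2))       # both ~2.1213 m/s
```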
2017-12-12 16:12:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8461143374443054, "perplexity": 1898.9508431898419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517350.12/warc/CC-MAIN-20171212153808-20171212173808-00312.warc.gz"}
http://mathhelpforum.com/algebra/46448-conversion.html
# Math Help - Conversion:

1. ## Conversion:

Thanks for the help -qbkr21

2. No, you have to convert inches to centimeters before you carry out your calculations. 1 inch = 2.54 cm, so your volume is $[4(2.54)]^3$. I don't get what you did for the last part. What formula are you using? Should you not use weight = mass*gravity?

3. (length)^3 = Volume. Is there a rule that I am missing? Must the length always be in cm before it's cubed? Thanks, qbkr21

4. That's because the density is given in g/cm^3, so you have to use either cm or inches, but not the two at the same time!

5. I converted the inches to cm and found it to equal 10.16, so... (10.16)^3 = 1048.77 (this is now my volume, in cm^3?). So Density = mass/volume: 19.3 = x/1048.77, which gives x = 20241.3 grams. 1 lb = 453.59 g, so (20241.3 grams)(1 lb / 453.59 grams) = 44.6247 lbs. Correct?

6. Looks good. So you didn't use weight = mass*gravity; you used a conversion factor from grams to pounds. OK.

7. Suppose I did want to use the formula above, weight = mass*gravity... Would I not only have to convert the given inches to meters but also the density of gold, 19.3 g/cm^3, to g/m^3?

8. Yeah... too much work, huh? Your way is fine.
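The thread's arithmetic, collected into a short Python sketch (my own addition; the 4-inch gold cube is inferred from the numbers quoted above):

```python
INCH_TO_CM = 2.54
GRAMS_PER_POUND = 453.59
GOLD_DENSITY = 19.3                 # g/cm^3

side_cm = 4 * INCH_TO_CM            # 10.16 cm
volume_cm3 = side_cm ** 3           # ~1048.77 cm^3
mass_g = GOLD_DENSITY * volume_cm3  # ~20241.3 g
print(volume_cm3, mass_g, mass_g / GRAMS_PER_POUND)  # ~44.62 lb
```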
2014-04-16 07:02:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8238863348960876, "perplexity": 2799.405704541092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/2142/linear-regression-effect-sizes-when-using-transformed-variables
# Linear regression effect sizes when using transformed variables

When performing linear regression, it is often useful to apply a transformation such as a log transformation to the dependent variable, to achieve better conformance to a normal distribution. Often it is also useful to inspect the betas from the regression to better assess the effect size and real-world relevance of the results. This raises the problem that when using, e.g., a log transformation, the effect sizes will be on the log scale, and I've been told that because that scale is non-linear, back-transforming these betas produces values that have no meaningful real-world interpretation. Thus far we have usually performed linear regression with transformed variables to inspect significance, and then linear regression with the original non-transformed variables to determine the effect size. Is there a right/better way of doing this? For the most part we work with clinical data, so a real-life example would be to determine how a certain exposure affects continuous variables such as height, weight or some laboratory measurement, and we would like to conclude something like "exposure A had the effect of increasing weight by 2 kg".

I would suggest that transformations aren't important for getting a normal distribution for your errors. Normality isn't a necessary assumption. If you have "enough" data, the central limit theorem kicks in and your standard estimates become asymptotically normal. Alternatively, you can use bootstrapping as a non-parametric means of estimating the standard errors. (Homoskedasticity, a common variance for the observations across units, is required for your standard errors to be right; robust options permit heteroskedasticity.) Instead, transformations help to ensure that a linear model is appropriate. To give a sense of this, let's consider how we can interpret the coefficients in transformed models:

• outcome in units, predictor in units: a one-unit change in the predictor leads to a beta-unit change in the outcome.
• outcome in units, predictor in log units: a one percent change in the predictor leads to a beta/100-unit change in the outcome.
• outcome in log units, predictor in units: a one-unit change in the predictor leads to a beta x 100% change in the outcome.
• outcome in log units, predictor in log units: a one percent change in the predictor leads to a beta percent change in the outcome.

If transformations are necessary for your model to make sense (i.e., for linearity to hold), then the estimate from this model should be used for inference. An estimate from a model that you don't believe isn't very helpful. The interpretations above can be quite useful in understanding the estimates from a transformed model and can often be more relevant to the question at hand. For example, economists like the log-log formulation because the interpretation of beta is an elasticity, an important measure in economics. I'd add that the back-transformation doesn't work because the expectation of a function is not the function of the expectation; the log of the expected value of beta is not the expected value of the log of beta. Hence, your estimator is not unbiased. This throws off standard errors, too.

The question is about marginal effects (of X on Y), I think, not so much about interpreting individual coefficients. As folk have usefully noted, these are only sometimes identifiable with an effect size, e.g. when there are linear and additive relationships.
If that's the focus, then the (conceptually, if not practically) simplest way to think about the problem would seem to be this: to get the marginal effect of X on Y in a linear normal regression model with no interactions, you can just look at the coefficient on X. But that's not quite enough, since it is estimated, not known. In any case, what one really wants for marginal effects is some kind of plot or summary that provides a prediction about Y for a range of values of X, together with a measure of uncertainty. Typically one might want the predicted mean Y and a confidence interval, but one might also want predictions for the complete conditional distribution of Y given an X. That distribution is wider than the fitted model's sigma estimate because it takes into account uncertainty about the model coefficients.

There are various closed-form solutions for simple models like this one. For current purposes we can ignore them and think instead more generally about how to get that marginal-effects graph by simulation, in a way that deals with arbitrarily complex models. Assume you want the effect of varying X on the mean of Y, and you're happy to fix all the other variables at some meaningful values. For each new value of X, take a sample of size B from the distribution of the model coefficients. An easy way to do so in R is to assume that it is Normal with mean coef(model) and covariance matrix vcov(model). Compute a new expected Y for each set of coefficients and summarize the lot with an interval. Then move on to the next value of X.

It seems to me that this method should be unaffected by any fancy transformations applied to any of the variables, provided you also apply them (or their inverses) in each sampling step. So, if the fitted model has log(X) as a predictor, then log your new X before multiplying it by the sampled coefficient. If the fitted model has sqrt(Y) as a dependent variable, then square each predicted mean in the sample before summarizing them as an interval. In short, more programming but less probability calculation, and clinically comprehensible marginal effects as a result. This 'method' is sometimes referred to as CLARIFY in the political science literature, but it is quite general.

SHORT ANSWER: Absolutely correct, the back-transformation of the beta value is meaningless. However, you can report the non-linearity with something like: "If you weigh 100 kg then eating two pieces of cake a day will increase your weight by approximately 2 kg in one week. However, if you weigh 200 kg your weight would increase by 2.5 kg. See figure 1 for a depiction of this non-linear relationship (figure 1 being a fit of the curve over the raw data)."
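A minimal Python sketch of the simulation approach just described (my own illustration; it uses statsmodels in place of the R calls coef(model) and vcov(model), and the data and names are made up):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy data with a log-transformed outcome: log(Y) = 1 + 0.5*X + noise.
X = rng.uniform(0, 10, 500)
log_y = 1.0 + 0.5 * X + rng.normal(0, 0.3, 500)
fit = sm.OLS(log_y, sm.add_constant(X)).fit()

B = 2000  # number of draws from the coefficient distribution
draws = rng.multivariate_normal(fit.params, fit.cov_params(), size=B)

for x_new in (2.0, 5.0, 8.0):
    # Predicted mean on the log scale per draw, then back-transform.
    pred = np.exp(draws[:, 0] + draws[:, 1] * x_new)
    lo, hi = np.percentile(pred, [2.5, 97.5])
    print(f"X={x_new}: median Y={np.median(pred):.1f}, 95% interval [{lo:.1f}, {hi:.1f}]")
```

Note that, under normal errors on the log scale, exponentiating a log-scale prediction yields the conditional median of Y rather than its mean; that is exactly the kind of back-transformation subtlety raised in the answers above.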
2022-06-28 19:32:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7463855147361755, "perplexity": 566.2117201801418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103573995.30/warc/CC-MAIN-20220628173131-20220628203131-00108.warc.gz"}
https://socratic.org/questions/57c7d9f011ef6b2c568d4fd3
# If I have $9.71\times10^{22}$ platinum atoms, what molar quantity, and what mass of metal are present?

Sep 1, 2016

You have approx. $0.16\;\text{mol}$ of platinum atoms, with a mass of approx. $31\;\text{g}$.

#### Explanation:

$\text{Moles of platinum} = \dfrac{9.71\times 10^{22}\;\text{platinum atoms}}{6.022\times 10^{23}\;\text{platinum atoms}\cdot \text{mol}^{-1}} = 0.161\;\text{mol}$

Now $1\;\text{mol}$ of $Pt$ atoms has a mass of $195.08\;\text{g}$. So you should have a mass of $0.161\times 195.08\;\text{g} \approx 31\;\text{g}$.

I appreciate that it is hard to work with such unfeasibly large numbers. But remember that the mole is just another number, like a score, or a dozen, or a gross - it's just that a mole is *&&??! LARGE. $\text{Avogadro's number}$ of platinum atoms specifies $1\;\text{mol}$ and has a mass of $195.08\;\text{g}$. Likewise, $\text{Avogadro's number}$ of $^{1}H$ atoms has a mass of $1\;\text{g}$ precisely. This idea of molar and mass equivalence is something that is fundamental to chemistry, and you should take some effort to appreciate it.
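The same arithmetic as a brief Python check (my own addition):

```python
N_A = 6.022e23                # Avogadro's number, atoms per mole
moles = 9.71e22 / N_A         # ~0.1613 mol of platinum
print(moles, moles * 195.08)  # ~0.161 mol, ~31.5 g
```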
2020-01-23 18:10:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 19, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8720768690109253, "perplexity": 1088.4541187594289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250611127.53/warc/CC-MAIN-20200123160903-20200123185903-00324.warc.gz"}
https://math.stackexchange.com/questions/1126368/metric-spaces-limit-points-and-isolated-points
# Metric spaces - Limit points and Isolated points

I apologise for this, but I have numerous potential misunderstandings. $\Bbb R^2$ is a metric space, since it is a subspace of the Euclidean space $\Bbb R^n$. I can look at a set $E$ with elements $x_n$, where $x_n = \frac1n$, $n=1,2,\dots$, and look at the plot of $n$ vs $x_n$. A limit point is a point that has another point in any of its neighbourhoods. This means that $(n,x_n)=(1,1)$ is an isolated point, taking an $r$-neighbourhood sufficiently small, let's say $r=0.1$. Isolated points include $(2,\frac12),(3,\frac13)$ and so on. Is this all correct reasoning so far? Lastly, this means my only limit points are at $(\to \infty, 0)$? [and hence not in $E$]

• It makes no sense to say "... my only limit points are at $(\rightarrow\infty,0)$", even if you correctly point out that this is not in $E$ – Dan Rust Jan 30 '15 at 12:46

It doesn't really make sense to say that $\mathbb R^2$ is a "subspace" of $\mathbb R^n$ (where $n>2$), because $\mathbb R^2$ is not a subset of $\mathbb R^n$. It is true that you can embed $\mathbb R^2$ in $\mathbb R^n$ by the map $(x_1, x_2)\mapsto (x_1, x_2, 0, \ldots, 0)$, but this is not really the same thing. In general, if $(X,d)$ is a metric space and $S\subset X$ is nonempty, then $(S,d')$ is a subspace, where $d'$ is the restriction of $d$ to $S$. Assuming by your set $E$ you mean the subset of $\mathbb R^2$ consisting of the points $(n, x_n)$ where $n=1,2,\ldots$, then indeed this set has no limit points. If $(x,y)\in\mathbb R^2$, then you can take $N$ to be an integer greater than $x+1$, so that for all $n\geqslant N$, $$\sqrt{(n-x)^2+\left(\frac1n - y\right)^2}\geqslant n-x\geqslant N-x > 1.$$ Your reasoning that every point of $E$ is an isolated point is correct.
2019-08-21 01:16:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9540978074073792, "perplexity": 62.04987135253521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315695.36/warc/CC-MAIN-20190821001802-20190821023802-00467.warc.gz"}
https://math.stackexchange.com/questions/2661647/on-the-strong-markov-property-in-discrete-time-markov-chains
# On the 'Strong Markov Property' in 'Discrete Time Markov Chains'.

I am studying the Strong Markov Property in Discrete Time Markov Chains. More precisely, I am trying to understand the proof of the theorem below, found in chapter 3, section 3, page 51 of D. Kannan's book An Introduction to Stochastic Processes. I have doubts about 3 passages of the proof. The preliminary notation is that $\{X_n,n\geq 0\}$ is a Markov Chain defined on $(\Omega,\mathscr{A},\mathbb{P})$ and $\mathscr{A}_n=\sigma(X_0,X_1,\ldots, X_n)$.

Theorem 3.3.3 Let $\{X_n,n\geq 0\}$ be a Markov Chain with countable state space $\mathcal{S}=\{s_0,s_1,s_2,\ldots\}$ and $m$-step transition probabilities $p^{m}(\,\cdot \,,\,\cdot\,)$, and let $\tau$ be a stopping time relative to $\{X_n,n\geq 0\}$ such that $\tau(\omega)<\infty$ for all $\omega\in\Omega$. Define $Y_n(\omega)=X_{\tau(\omega)+n}(\omega)$, $n\geq 0$. Then $\{Y_n,n\geq 0\}$ is a Markov Chain and, for $0=n_0<n_1<\cdots<n_m$ and $x_0,x_1,\ldots,x_m\in\mathcal{S}$, we have $$\mathbb{P}\big( Y_{n_k}=x_k, 0\leq k\leq m \big)=q_0(x_0)\prod_{k=0}^{m-1}p^{n_{k+1}-n_k}(x_k,x_{k+1}),\quad \tag{*}$$ where $q_0$ is the initial distribution of $Y_0$.

PROOF. Define $\mathscr{A}_\tau=\{A\in \mathscr{A}: A\cap \{\omega\in\Omega: \tau(\omega)\leq n\}\in\mathscr{A}_n \text{ for all } n\geq 0\}$. Then $\mathscr{A}_\tau$ is a $\sigma$-algebra. For $A\in\mathscr{A}_\tau$ there is an $A_n\in \mathscr{A}_n$ such that $$A\cap \{\tau=n\}=A_n\cap \{\tau=n\}.$$ Let $\mathscr{B}=\sigma\{Y_n:n\geq 0\}$. If $B=\bigcap_{k=1}^{m}\{Y_{n_k}=x_k\}$ and $B_n=\bigcap^{m}_{k=1}\{X_{n+n_k}=x_k\}$, then $B\in\mathscr{B}$, $B_n\in \sigma(X_{n},X_{n+1},X_{n+2},\ldots)$, and $$B\cap \{\tau=n\}=B_n\cap \{ \tau =n\}.$$ Now \begin{align} \mathbb{P}(A\cap B) =& \sum_{n\geq 0}\sum_{x_0\in \mathcal{S}} \mathbb{P}(A_n\cap \{\tau=n\}\cap \{X_n=x_0\}\cap B_n) \tag{Eq 1}\\ =& \sum_{n\geq 0}\sum_{x_0\in \mathcal{S}} \mathbb{P}(A_n\cap \{\tau=n\}\cap \{X_n=x_0\}) \mathbb{P}(B_n|\{X_n=x_0\}) \tag{Eq 2}\\ =& \sum_{n\geq 0}\sum_{x_0\in \mathcal{S}} \prod_{k=0}^{m-1}p^{n_{k+1}-n_k}(x_k,x_{k+1}) \mathbb{P}(A\cap \{\tau=n\}\cap \{X_n=x_0\}) \tag{Eq 3}\\ =& \sum_{x_0\in \mathcal{S}} \prod_{k=0}^{m-1}p^{n_{k+1}-n_k}(x_k,x_{k+1}) \mathbb{P}(A\cap \{Y_0=x_0\}) \tag{Eq 4}\\ \end{align} Since the event $\{Y_0=x_0\}\in\mathscr{A}_\tau$, now choose $A=\{Y_0=x_0\}$. This yields ($\ast$) and proves the Markov property of $\{Y_n,n\geq 0\}$.

Question 1. How does equation (Eq 1) imply equation (Eq 2)? I have no idea how to get (Eq 2) from (Eq 1).

Question 2. How does equation (Eq 2) imply equation (Eq 3)? I think it would follow from the definition of $\tau$ and the Markov property, but I have no idea how to put those thoughts into practice.

Question 3. How does equation (Eq 4) imply equation (*)?

• First question (the others are similar): $$P(B_n\mid A_n,\tau=n,X_n=x_0)=P(B_n\mid X_n=x_0)$$ Does this ring any bell? – Did Feb 22 '18 at 21:23
• @Did This is exactly my doubt. I cannot understand why this equality is true. Could you explain it in more detail? It seems to me that my question reduces to the following: if $C\in\mathscr{A}_n=\sigma( X_0,\ldots, X_{n-1},X_{n})$, then is it true that $P(B_n\mid C, X_n=x_0)=P(B_n\mid X_n=x_0)$? – MathOverview Feb 23 '18 at 12:32
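Although it does not answer the three questions, the theorem itself can be probed empirically. Below is a small Python sketch (my own, not from Kannan's book or the post) that simulates a two-state chain, stops it at the first visit to state 0, and checks that the post-$\tau$ process $Y_n = X_{\tau+n}$ exhibits the same one-step transition frequencies as the original chain.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])         # one-step transition matrix

def run_chain(n_steps, start=1):
    x = [start]
    for _ in range(n_steps):
        x.append(rng.choice(2, p=P[x[-1]]))
    return x

counts = np.zeros((2, 2))
for _ in range(2000):
    x = run_chain(60)
    if 0 not in x:
        continue                   # tau did not occur in this window
    tau = x.index(0)               # stopping time: first visit to state 0
    y = x[tau:]                    # the post-tau process Y_n = X_{tau+n}
    for a, b in zip(y, y[1:]):
        counts[a, b] += 1

# Empirical transition frequencies of Y should approximate P.
print(counts / counts.sum(axis=1, keepdims=True))
```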
2019-05-27 05:13:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9716613292694092, "perplexity": 464.6684510663019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232261326.78/warc/CC-MAIN-20190527045622-20190527071622-00192.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/vector-cartesian-equation-plane-intersection-planes_2550
# Question - Vector and Cartesian Equation of a Plane

#### Question

Find the vector equation of the plane which contains the line of intersection of the planes and which is perpendicular to the plane $\vec{r}\cdot(5\hat{i}+3\hat{j}-6\hat{k})+8=0$.
2017-08-21 23:55:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36838629841804504, "perplexity": 529.0139758188257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109682.23/warc/CC-MAIN-20170821232346-20170822012346-00432.warc.gz"}
https://www.physicsforums.com/threads/change-of-axiom-of-probability.205883/#post-1551137
# Change of Axiom of Probability

The reference book I have used states: Axiom 1: 0 <= P(E) <= 1. Axiom 2: P(S) = 1. Axiom 3: the probability of a union of mutually exclusive events equals the sum of the probabilities of the individual events. And the author says that, hopefully, the reader will agree that the axioms are natural and in accordance with our intuitive concept of probability as related to chance and randomness. But what if Axiom 1 and Axiom 2 are changed to: Axiom 1: 0.5 <= P(E) <= 1.5. Axiom 2: P(S) = 1.5. (Axiom 3 unchanged.) Or: Axiom 1: 1.1 <= P(E) <= 2. Axiom 2: P(S) = 2. (Axiom 3 unchanged.) And we rebuild the probability model on the new axioms? Will there be any problem with this new probability model? If not, can I say that the original Axiom 1 and Axiom 2 just fix a reference value that everybody on earth can follow?

HallsofIvy, Homework Helper: No, there is nothing wrong with that, and there is no real difference. If, in the first case, you subtract 0.5 from "your" probability, you get "regular" probability. In case two, since 2 - 1.1 = 0.9, you would have to subtract 1.1 and then divide by 0.9 in order to get "regular" probability. The reason for the (mathematically arbitrary) choice of 0 and 1 is to relate it to the common idea of a probability as a percentage: a probability of 1.0 corresponds, in common parlance, to "100% certain".

arildno, Homework Helper: However, as an addition to Halls' comment, the addition and multiplication rules for your new probabilities would be rather more tricky than when using 0 and 1 as your limits.

arildno: For example, let $0\le p(E)\le 1$, whereas $a\le P(E)\le b$, so that $$P(E)=a+(b-a)\,p(E).$$ Now, for disjoint events $u$ and $v$, we have $p(u+v)=p(u)+p(v)$. But in $P$-notation, we would have $$P(u+v)=a+(b-a)\,p(u+v)=P(u)+P(v)-a.$$ This is not a nice addition rule.

NateTG, Homework Helper:
2021-10-27 02:57:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8349571824073792, "perplexity": 895.288159430685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588053.38/warc/CC-MAIN-20211027022823-20211027052823-00193.warc.gz"}
https://kamerynblog.wordpress.com/2017/09/12/least-models-of-second-order-set-theories/
### Least models of second-order set theories K. Williams “Least models of second-order set theories” (under review) arXiv PDF bibTeX Abstract The main theorems of this paper are (1) there is no least transitive model of Kelley–Morse set theory $\mathsf{KM}$ and (2) there is a least $\beta$-model—that is, a transitive model which is correct about which of its classes are well-founded—of Gödel–Bernays set theory $\mathsf{GBC}$ + Elementary Transfinite Recursion. Along the way I characterize when a countable model of $\mathsf{ZFC}$ has a least $\mathsf{GBC}$-realization and show that no countable model of $\mathsf{ZFC}$ has a least $\mathsf{KM}$-realization. I also show that fragments of Elementary Transfinite Recursion have least $\beta$-models and, for sufficiently weak fragments, least transitive models. These fragments can be separated from each other and from the full principle of Elementary Transfinite Recursion by consistency strength. The main question left unanswered by this article is whether there is a least transitive model of $\mathsf{GBC}$ + Elementary Transfinite Recursion. Every set theorist knows there is a least transitive model of $\mathsf{ZFC}$. What if we want to ask the same question about second-order set theories? Which second-order set theories, if any, have least transitive models? The answer for $\mathsf{GBC}$ follows immediately from the existence of a least transitive model of $\mathsf{ZFC}$: the least transitive model of $\mathsf{GBC}$ is $(L_\alpha, \mathrm{Def}(L_\alpha))$ where $L_\alpha$ is the least transitive model of $\mathsf{ZFC}$. (Indeed, Shepherdson formulated his original argument in terms of $\mathsf{GB}$ (i.e. without Global Choice), rather than in terms of $\mathsf{ZFC}$.) But this easy argument won’t work for second-order set theories that assert the existence of more classes. Indeed, no argument will work for sufficiently strong second-order set theories. Take $\mathsf{KM}$. Allowing for the existence of impredicatively-defined classes means that models of $\mathsf{KM}$ have enough “meta-ordinals” that tools from admissible set theory can be applied. Starting from a model of $\mathsf{KM}$ we can build another model of $\mathsf{KM}$ whose first-order part is the same but whose second-order part sits off to the side, so to speak. So no (countable) model of $\mathsf{ZFC}$ has a least $\mathsf{KM}$-realization and thus there cannot be a smallest transitive model of $\mathsf{KM}$. (Of course, see the actual paper for more than a brief sketch of the argument.) But $\mathsf{KM}$ is much stronger than $\mathsf{GBC}$ and there are natural theories in the middle. What about those? For $\Pi^1_k\text{-}\mathsf{CA}$ more or less the same argument as the $\mathsf{KM}$ case goes through, getting that they don’t have least transitive models. (But I don’t give a proof of such in this paper; see my forthcoming dissertation for full details.) So let’s go lower, to the theory $\mathsf{ETR}$, which is $\mathsf{GBC}$ augmented with the principle of Elementary Transfinite Recursion. Then $\mathsf{ETR}$ is stronger than $\mathsf{GBC}$ because, say, we can construct the Tarskian satisfaction class for first-order formulae via a class recursion of height $\omega$. But $\mathsf{ETR}$ is strictly weaker than $\Pi^1_1\text{-}\mathsf{CA}$. I don’t actually know whether there is a least transitive model of $\mathsf{ETR}$. (This is the main open question from my paper.) However, I do show that there is a least $\beta$-model of $\mathsf{ETR}$. 
As well, sufficiently weak fragments of $\mathsf{ETR}$—such as $\mathsf{ETR}_{\mathrm{Ord}}$ which only asserts that elementary transfinite recursions of height $\le \mathrm{Ord}$ have solutions—do have least transitive models. Combined with the results from my joint paper with Gitman, Hamkins, Holy, and Schlict this gives that there is a smallest transitive model of $\mathsf{GBC}$ which satisfies the forcing theorem for all class forcings. I think that’s neat. Bibtex @ARTICLE{Williams:least-models, author = {Kameryn Williams}, title = {Least models of second-order set theories}, journal = {}, year = {}, volume = {}, number = {}, pages = {}, month = {}, note = {manuscript under review}, abstract = {}, keywords = {}, source = {}, doi = {}, eprint = {1709.03955}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {https://kamerynblog.wordpress.com/2017/09/12/least-models-of-second-order-set-theories/}, }
2018-07-16 23:28:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 43, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7499936819076538, "perplexity": 460.990363072719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589536.40/warc/CC-MAIN-20180716232549-20180717012549-00365.warc.gz"}
http://arxiver.moonhats.com/2014/12/09/the-cepheid-distance-to-the-maser-host-galaxy-ngc-4258-studying-systematics-with-the-large-binocular-telescope-ga/
# The Cepheid distance to the maser-host galaxy NGC 4258: Studying systematics with the Large Binocular Telescope [GA] We identify and phase a sample of 81 Cepheids in the maser-host galaxy NGC 4258 using the Large Binocular Telescope (LBT), and obtain calibrated mean magnitudes in up to 4 filters for a subset of 43 Cepheids using archival HST data. We employ 3 models to study the systematic effects of extinction, the assumed extinction law, and metallicity on the Cepheid distance to NGC 4258. We find a correction to the Cepheid colors consistent with a grayer extinction law in NGC 4258 compared to the Milky Way ($R_V =4.9$), although we believe this is indicative of other systematic effects. If we combine our Cepheid sample with previously known Cepheids, we find a significant metallicity adjustment to the distance modulus of $\gamma_1 = -0.60 \pm 0.21$ mag/dex, for the Zaritsky et al. (1994) metallicity scale, as well as a weak trend of Cepheid colors with metallicity. Conclusions about the absolute effect of metallicity on Cepheid mean magnitudes appear to be limited by the available data on the metallicity gradient in NGC 4258, but our Cepheid data require at least some metallicity adjustment to make the Cepheid distance consistent with independent distances to the LMC and NGC 4258. From our ensemble of models and the geometric maser distance of NGC 4258 ($\mu_{N4258} = 29.40 \pm 0.06$ mag), we estimate $\mu_{LMC} = 18.57 \pm 0.14$ mag ($51.82 \pm 3.23$ kpc). M. Fausnaugh, C. Kochanek, J. Gerke, et. al. Tue, 9 Dec 14 56/64 Comments: A brief video summarizing the key results of this paper can be found at this http URL
2017-11-23 03:39:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5533453226089478, "perplexity": 3306.2195809594937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806720.32/warc/CC-MAIN-20171123031247-20171123051247-00263.warc.gz"}
https://euclid.math.temple.edu/events/seminars/algebra/
# Algebra Seminar

Current contacts: Vasily Dolgushev, Ed Letzter, Martin Lorenz or Chelsea Walton. The Seminar usually takes place on Mondays at 1:30 PM in Room 617 on the sixth floor of Wachman Hall.

• Monday February 6, 2017 at 13:30, Wachman Hall, Rm 617 Iterated Thom Spectra and Intermediate Hopf-Galois Extensions of Ring Spectra Jonathan Beardsley, University of Washington Given a fiber sequence of n-fold loop spaces X-->Y-->Z, and a morphism of n-fold loop spaces Y-->BGL_1(R) for R an E_{n+1}-ring spectrum, we describe a method of producing a new morphism of (n-1)-fold loop spaces Z-->BGL_1(MX), where MX is the Thom spectrum associated to the composition X-->Y-->BGL_1(R). This new morphism has associated Thom spectrum MY, but constructed directly as an MX-module. In particular this induces a relative Thom isomorphism (i.e. a torsor structure) for MY over MX: MY \otimes_{MX} MY = MY \otimes Z. We will see a rough description of this construction as well as many examples. In many cases this torsor condition additionally satisfies a descent condition showing that the unit map MX-->MY is a Hopf-Galois extension of structured ring spectra. Moreover, the composition R-->MX-->MY describes an intermediate Hopf-Galois extension associated to thinking of X as a sub-bialgebra of Y. It seems likely that the methods described in this talk can be modified to apply to homotopy quotients of DGAs.

• Monday February 13, 2017 at 13:30, Wachman Hall, Rm 617 Dirac cohomology, Hopf-Hecke algebras and infinitesimal Cherednik algebras Johannes Flake, Rutgers University Dirac cohomology has been employed successfully to analyze the representation theory of connected semisimple Lie groups and of degenerate affine Hecke algebras. We study a common generalization of these situations as suggested by Dan Barbasch and Siddhartha Sahi, certain PBW deformations satisfying an orthogonality condition, which we call Hopf-Hecke algebras. Besides the mentioned special cases, they also include infinitesimal Cherednik algebras as new examples. We will discuss a general result relating the Dirac cohomology with central characters, partial results on the classification of Hopf-Hecke algebras, and a concrete computation of the Dirac cohomology for infinitesimal Cherednik algebras of the general linear group. This is joint work with Siddhartha Sahi.

• Monday February 20, 2017 at 13:30, Wachman Hall Rm. 617 Maximum nullity, zero forcing, and power domination Chassidy Bozeman, Iowa State University Zero forcing on a simple graph is an iterative coloring procedure that starts by initially coloring vertices white and blue and then repeatedly applies the following color change rule: if any vertex colored blue has exactly one white neighbor, then that neighbor is changed from white to blue. Any initial set of blue vertices that can color the entire graph blue is called a zero forcing set. The zero forcing number is the cardinality of a minimum zero forcing set. A well known result is that the zero forcing number of a simple graph is an upper bound for the maximum nullity of the graph (the largest possible nullity over all symmetric real matrices whose (ij)-th entry (for distinct i and j) is nonzero whenever {i,j} is an edge in G and is zero otherwise).
A variant of zero forcing, known as power domination (motivated by the monitoring of the electric power grid system), uses the power color change rule, which starts by initially coloring vertices white and blue and then applies the following rules: 1) In step 1, for any white vertex w that has a blue neighbor, change the color of w from white to blue. 2) For the remaining steps, apply the color change rule. Any initial set of blue vertices that can color the entire graph blue using the power color change rule is called a power dominating set. We present results on the power domination problem of a graph by considering the power dominating sets of minimum cardinality and the number of steps necessary to color the entire graph blue. (A short computational sketch of the zero forcing color change rule appears after the final abstract below.)

• Monday February 27, 2017 at 13:30, Wachman Hall Rm. 617 Introduction to deformation quantization Vasily Dolgushev, Temple University I will introduce the concept of a star product and outline Fedosov's construction for star products on an arbitrary symplectic manifold. I will also state the classification theorem for star products on a symplectic manifold.

• Monday March 6, 2017 at 13:30, Wachman Hall, Rm 617 Zero divisors in the Grothendieck ring Lev Borisov, Rutgers University The Grothendieck ring of complex algebraic varieties is defined as the space of formal sums $\sum_i a_i [X_i]$ of algebraic varieties with integer coefficients, subject to the relations $[X]=[X-Z]+[Z]$ for closed subvarieties $Z$ of $X$. I will talk about recent developments that show that the class of the affine line is a zero divisor in the Grothendieck ring.

• Monday March 20, 2017 at 13:30, Wachman Hall Rm. 617 Deformation quantization of symplectic manifolds: Fedosov's construction Vasily Dolgushev, Temple University Equivalence classes of star products on a symplectic manifold M can be described in terms of the second de Rham cohomology of M. I will review Fedosov's construction, whose input is a series of closed two-forms and whose output is a star product on a symplectic manifold.

• Monday March 27, 2017 at 13:30, Wachman Hall Rm. 617 Differential graded (dg) Lie algebras and their Maurer-Cartan elements Vasily Dolgushev, Temple University To describe the equivalence classes of star products on an arbitrary Poisson manifold, we need some constructions related to differential graded Lie algebras. I am going to review these constructions in my talk.

• Monday April 10, 2017 at 13:30, Wachman Hall Rm. 617 Dimer models on cylinders over Dynkin diagrams Maitreeyee Kulkarni, Louisiana State University Let G be a Lie group of type ADE and P be a parabolic subgroup. It is known that there exists a cluster structure on the coordinate ring of the partial flag variety G/P (see the work of Geiss, Leclerc, and Schroer). Since then there has been a great deal of activity towards categorifying these cluster algebras. Jensen, King, and Su gave a direct categorification of the cluster structure on the homogeneous coordinate ring for Grassmannians (that is, when G is of type A and P is a maximal parabolic subgroup). In this setting, Baur, King, and Marsh gave an interpretation of this categorification in terms of dimer models. In this talk, I will give an analog of dimer models for groups in other types by introducing a technique called "constructing cylinders over Dynkin diagrams", which can (conjecturally) be used to generalize the result of Baur, King, and Marsh.

• Monday April 17, 2017 at 13:30, Wachman Hall Rm.
617 Algebraization of Operator Theory Lia Vas, University of the Sciences I have been working in algebra and ring theory, in particular with rings of operators, involutive rings, Baer star-rings and Leavitt path algebras. These rings were introduced in order to simplify the study of sometimes rather cumbersome operator theory concepts. For example, a Baer star-ring is an algebraic analogue of an AW star-algebra and a Leavitt path algebra is an algebraic analogue of a graph C-star algebra. Such rings of operators can be studied without involving methods of operator theory. Thus algebraization of operator theory is a common thread between most of the topics of my interest. After some overview of the main ideas of such algebraization, I will focus on one common aspect of some of the rings of operators – the existence of a trace as a way to measure the size of subspaces/subalgebras. In particular, we adapt some desirable properties of a complex-valued trace on a C-star algebra to a larger class of algebras.

• Monday April 24, 2017 at 13:30, Wachman Hall Rm 617 Survey on algebras of low Gelfand-Kirillov dimension Edward Letzter, Temple University History and background of results on finitely generated algebras of low (i.e., greater than zero but less than three) Gelfand-Kirillov dimension. Beginning with early results of Bergman, Small-Stafford-Warfield (and others), continuing through later results of Artin-Stafford, Bell, Small (and others), and concluding with recent work of Smoktunowicz and collaborators Bell, Lenagan, Small (and others).

• Monday May 1, 2017 at 13:30, Wachman Hall Rm.
2017-05-26 22:35:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7645500302314758, "perplexity": 1001.7855466535079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608686.22/warc/CC-MAIN-20170526222659-20170527002659-00412.warc.gz"}
https://www.gamedev.net/forums/topic/561788-which-is-better-style/
# Which is better style? ## Recommended Posts Concentrate    181 Which is better style, 1) string str = convert<string>(123,BASE_16); where convert has the following prototype : template<typename ReturnType, typename InputType> ReturnType convert(const InputType, const size_t base); or this way : 2) string str; convert(123,str,BASE_16); where the convert function has the following prototype : template<typename ReturnType, typename InputType> void convert(const InputType src,ReturnType& dest, const size_t base); I feel like the second one looks better. ##### Share on other sites Telastyn    3777 I personally favor #1 strongly. Functions exist to take params and return results. Mixing them up in the parameter list is (imo) distasteful. Promit    13246 Definitely #1. ##### Share on other sites Simian Man    1022 Yeah number one is far superior. jyk    2094 #1. ##### Share on other sites Zipster    2365 It depends. Can the operation fail? It often makes sense to use an output parameter for the result and reserve the return value for indicating success/failure, most often in cases where a) the output itself can't be used to reliably detect success/failure, and b) you can't use exceptions. I prefer #1 when I have a choice, though. ##### Share on other sites theOcelot    498 What on earth do you prefer #2 for? The only time I consider anything like that is when I don't want to copy some big object for a return value, and even then creating and initializing the variable on two lines is annoying. ##### Share on other sites Antheus    2409 Quote: Original post by theOcelotWhat on earth do you prefer #2 for? The only time I consider anything like that is when I don't want to copy some big object for a return value, and even then creating and initializing the variable on two lines is annoying. One would use #2 when trying to avoid redundant heap allocations when parsing a lot of text, perhaps hundreds of thousands of lines, thereby reducing running time by a factor of 10. YMMV, depends on implementation of std::string and whether RVO/NRVO can handle objects which invoke new in constructor. ##### Share on other sites Zahlman    1682 If you really expect to need #2 in specific circumstances for performance reasons, you can always implement #1 in terms of #2, and provide both: // Ugly version for people who have identified a needtemplate<typename ReturnType, typename InputType>void convert(const InputType& src, ReturnType& dest, const size_t base) { // evil conversion logic}// Pretty version for people who like pretty codetemplate<typename ReturnType, typename InputType>ReturnType convert(const InputType& src, const size_t base) { ReturnType result; convert(src, result, base); return result;} ##### Share on other sites theOcelot    498 Quote: Original post by Antheus Quote: Original post by theOcelotWhat on earth do you prefer #2 for? The only time I consider anything like that is when I don't want to copy some big object for a return value, and even then creating and initializing the variable on two lines is annoying. One would use #2 when trying to avoid redundant heap allocations when parsing a lot of text, perhaps hundreds of thousands of lines, thereby reducing running time by a factor of 10. Sure, other efficiency issues. But Concentrate seems to prefer it aesthetically. That's what I don't get. ##### Share on other sites Fenrisulvur    186 In the given example, I don't really like either very much. What do you gain from using generic semantics in this operation? Seems pointless to me. 
Definitely #1, though. ##### Share on other sites Ftn    462 If function can fail without throwing exception, I'd prefer to describe it in function name like: bool try_convert_in_place(...) I usually go what Zahlman suggested. If performance is issue at given point, call the faster one. ##### Share on other sites iMalc    2466 Quote: Original post by Antheus Quote: Original post by theOcelotWhat on earth do you prefer #2 for? The only time I consider anything like that is when I don't want to copy some big object for a return value, and even then creating and initializing the variable on two lines is annoying. One would use #2 when trying to avoid redundant heap allocations when parsing a lot of text, perhaps hundreds of thousands of lines, thereby reducing running time by a factor of 10. YMMV, depends on implementation of std::string and whether RVO/NRVO can handle objects which invoke new in constructor. I would have thought it made little difference. Doesn't RVO just work out if it's possible to construct the result directly in-place, bypassing the need for a copy-constructor call, for where the function is called/inlined? Oh I see, perhaps it has to consider the possible side-effects of the 'new' call. With RVO, number 1 can in theory be marginally faster than number 2, assuming that a constructor other than the default one happens to be what is used inside 'convert'. That of course isn't possible in number 2 since the string variable has already been default constructed prior to the call. In other words, number 1 wins for aesthetic reasons and also don't think that all optimisation reasons are in favour of number 2. One would hope that compilers will get even better over time, not worse! So, overall number 1 certainly wins in my books. I'd only change it if and when I see it being a performance hot-spot during later optimisation, and even then, the actual change might not be to anything like number 2. ##### Share on other sites phresnel    953 These days I am absolutely used to have all my functions in the form "fun(input) -> output", i.e. usually no parameter gets mutated, and output really is output. That way, you no longer struggle with thinking about which parameters are [in], and which are [out]. When I need to return multiple values, I do either by returning a class- or aggregrate-type, or in trivial cases by using boost::tuple<>. Note that thanks to RVO, you usually have no performance bottleneck, and even if, C++0x will mostly fix this by move-semantics. Finally, one of the reasons to use that style was const-correctness, i.e. in your #2, you can't say const string str; convert(123,str,BASE_16);, so you force clients of convert to introduce mutable state (a.k.a. "hm, will that value be changed in this function *skim...waste time...*") where absolutely not necessary. ##### Share on other sites Wouldn't it be better to use a third option? string str(convert<string>(123,BASE_16)); I'm fairly new to C++ Programming (<2 years), but all the books I've been reading say it's best to initialise something using the constructor if you can, rather than declaring it and using the copy constructor. If what I've just written is garbage, then someone please tell me why.. :o) ##### Share on other sites nobodynews    3126 Quote: Original post by BattleMetalChrisI'm fairly new to C++ Programming (<2 years), but all the books I've been reading say it's best to initialise something using the constructor if you can, rather than declaring it and using the copy constructor. 
I'm pretty sure both ways use the copy constructor (string a = another_string; and string a(another_string); both use the copy constructor), so I'm not sure what you or your book(s) meant.

**phresnel**

> Original post by BattleMetalChris: Wouldn't it be better to use a third option? string str(convert<string>(123, BASE_16)); I'm fairly new to C++ programming (<2 years), but all the books I've been reading say it's best to initialise something using the constructor if you can, rather than declaring it and using the copy constructor. If what I've just written is garbage, then someone please tell me why.. :o)

It is the same. When you "assign" to a variable X in the declaration of variable X, it is just syntactic sugar for using the "()" syntax. Note, though, that there is a subtlety when there are no parameters to the constructor: the compiler will then assume it is a function declaration (google "C++ most vexing parse").

Ah right, fair enough.
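For readers following along, here is a minimal, self-contained sketch of what a stringstream-based `convert` might look like in both styles. The `BASE_16` constant and the stringstream approach are assumptions for illustration, not the OP's actual implementation, and `std::setbase` only honours bases 8, 10 and 16:

```cpp
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

const size_t BASE_16 = 16; // hypothetical constant matching the thread's usage

// Style #2: output parameter.
template <typename ReturnType, typename InputType>
void convert(const InputType& src, ReturnType& dest, const size_t base) {
    std::stringstream ss;
    ss << std::setbase(static_cast<int>(base)) << src; // format src in the given base
    ss >> dest;                                        // read it back as ReturnType
}

// Style #1: return value, built on top of #2 as Zahlman suggests.
template <typename ReturnType, typename InputType>
ReturnType convert(const InputType& src, const size_t base) {
    ReturnType result;
    convert(src, result, base);
    return result;
}

int main() {
    std::string str = convert<std::string>(123, BASE_16);
    std::cout << str << '\n'; // prints "7b"
}
```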
2017-09-25 20:23:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21382908523082733, "perplexity": 3393.587634612224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693363.77/warc/CC-MAIN-20170925201601-20170925221601-00117.warc.gz"}
https://formulasearchengine.com/index.php?title=Cass_criterion&oldid=20866
# Cass criterion

If $p_t$ represents the vector of Arrow–Debreu commodity prices prevailing in period $t$, then a competitive equilibrium allocation is inefficient if and only if
2020-09-29 23:07:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7219083905220032, "perplexity": 8421.941651439261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402093104.90/warc/CC-MAIN-20200929221433-20200930011433-00797.warc.gz"}
http://math.stackexchange.com/questions/136766/smooth-structures-on-compact-manifolds
# Smooth structures on compact manifolds

I am currently reading some notes where an operator, originally defined on functions over Euclidean space, is now transferred to the setting of a compact, smooth Riemannian manifold. There is a statement in the notes which says the following:

"Let $M$ be a smooth compact Riemannian manifold without boundary. Cover $M$ by a finite number of coordinate charts $U_i$ with diffeomorphisms $h_i : O_i \to U_i$ where the $O_i$ are open subsets of $\mathbb{R}^m$ with compact closure. We assume the coordinate charts $U_i$ are chosen so that the union $U_i \cup U_j$ is also contained in a larger coordinate chart for all $(i,j)$."

This last statement confuses me a little, because I cannot see this as being trivial. How can I justify this assumption? That is, given I have a smooth atlas $\{ U_i \}$ for $M$ (which we can take to consist of finitely many charts since $M$ is compact), how do I adjust this atlas so that the last sentence above holds?

- Are you sure you are citing this correctly? In this generality (for each pair $(i,j)$) I doubt that this can be achieved. Is there something missing? E.g. for each pair $(i,j)$ such that $U_i\cap U_j$ is not empty? –  user20266 Apr 25 '12 at 12:47
- @Thomas thanks for your comment! I have copied the whole definition now into the post above in order to reduce the possibility that I have missed something. –  harlekin Apr 25 '12 at 12:55
- This is interesting. You would need to heavily restrict. For example, you could not take the two charts on $S^n$ given by stereographic projection. –  Neal Apr 25 '12 at 13:59
- I'd start checking what happens with the cover and what is really needed. –  user20266 Apr 25 '12 at 14:38
- Hm ... ok, it sounds like I have to search for a result that says something like "for each compact manifold there exists such a cover", instead of a statement of the form "any given cover can be adjusted so that the statement holds." –  harlekin Apr 25 '12 at 15:04

I'm interpreting the statement as follows: There is a finite collection of distinguished coordinate charts $U_i$ covering $M$ where each $U_i\cup U_j$ is contained in another (perhaps nondistinguished) coordinate chart $W_{ij}$. I am not necessarily claiming that $W_{ij}\cup W_{kl}$ is contained in a chart $Z_{ijkl}$ and each $Z_{ijkl}\cup Z_{mnop}$ is contained in a chart ..., just that the unions of the $U$s are contained in charts.

Here's a proof that such things always exist, though it may use tools you don't have access to. Equip $M$ with any Riemannian metric $g$. This gives rise to several functions. First, for each point, there is a map called the exponential map, $\operatorname{exp}_p :T_p M\rightarrow M$ with the property that $\operatorname{exp}_p$ is a diffeomorphism onto its image when restricted to a small enough ball around the origin in $T_p M$. It is a fact that when $\operatorname{exp}_p$ is restricted to a ball of radius $r$ in $T_p M$ (with $r$ no larger than the injectivity radius below), the image consists of all points in $M$ within distance $r$ of $p$.

Another important function coming from a choice of metric is the function $\operatorname{inj}:M\rightarrow\mathbb{R}$. It is called the injectivity radius and defined as the largest radius of a ball around the origin in $T_p M$ such that $\operatorname{exp}_p:B(r)\rightarrow M$ is a diffeomorphism onto its image. It is known that $\operatorname{inj}$ is a continuous function and, on a compact manifold, bounded away from $0$. Let $\rho$ denote the minimum value of $\operatorname{inj}$ on $M$.
Then, by definition, at every point $p$, if we restrict $\operatorname{exp}_p$ to the ball of radius $\rho$, it's a diffeomorphism onto its image. In particular, we can use $\operatorname{exp}_p(B(\rho))$ as a chart on $M$. Let $U_p$ denote $\operatorname{exp}_p(B(\frac{\rho}{2}))$.

I claim that the collection of $\{U_p\}$ has the property that the union of any 2 are contained in a coordinate chart and that they cover $M$. Once we establish all this, use compactness of $M$ to extract a finite subcollection of the $U_p$, giving the desired collection of charts.

First, $p\in U_p$ since $\operatorname{exp}_p(0) = p$ always. Thus, these do cover. Now, why do they have the property that the union is contained in a coordinate chart? We'll break into 2 cases depending on whether or not $U_p\cap U_q$ is empty.

If $U_p\cap U_q =\emptyset$, then $U_p\cup U_q$ is a (disconnected) chart. The diffeomorphism between it and an open subset of $\mathbb{R}^n$ is given by $$\begin{cases} \operatorname{exp}_p^{-1}(r) & r\in U_p \\ \operatorname{exp}_q^{-1}(r) + v & r\in U_q\end{cases},$$ where $v$ is some vector of very large (compared to $\rho$) length. The point of $v$ is to shift the ball $\operatorname{exp}_q^{-1}(U_q)$ in $T_q M\cong \mathbb{R}^n$ far enough away from the origin so that it won't intersect the image of $\operatorname{exp}_p^{-1}(U_p)$ in $T_p M \cong\mathbb{R}^n$.

On the other hand, if $U_p\cap U_q \neq \emptyset$, choose $r\in U_p\cap U_q$. I claim that $U_p\cup U_q \subseteq \operatorname{exp}_r(B(\rho))$. To see this, we'll use the triangle inequality. Using $d$ to denote distance, we have for any $s\in U_p$, that $d(s,r)\leq d(s,p) + d(p,r) < \frac{\rho}{2} + \frac{\rho}{2} = \rho$. This same argument works for $U_q$, so we have $U_p\cup U_q\subseteq \operatorname{exp}_r(B(\rho))$.

- Incidentally, if this is applied to $S^2$, the unit sphere in $\mathbb{R}^3$ with the usual metric, then one finds that $\operatorname{exp}_p$ is nothing but stereographic projection from the antipodal point. Hence, $U_p$ is the open hemisphere containing $p$ in the center and it's easy to see these really do work. –  Jason DeVito May 16 '12 at 14:59
2014-12-18 13:32:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9114639163017273, "perplexity": 88.82809160702352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802766295.3/warc/CC-MAIN-20141217075246-00141-ip-10-231-17-201.ec2.internal.warc.gz"}
https://eng.libretexts.org/Bookshelves/Aerospace_Engineering/Aerodynamics_and_Aircraft_Performance_(Marchman)/04%3A_Performance_in_Straight_and_Level_Flight
# 4: Performance in Straight and Level Flight

## Introduction

Now that we have examined the origins of the forces which act on an aircraft in the atmosphere, we need to begin to examine the way these forces interact to determine the performance of the vehicle. We know that the forces are dependent on things like atmospheric pressure, density, temperature and viscosity in combinations that become "similarity parameters" such as Reynolds number and Mach number. We also know that these parameters will vary as functions of altitude within the atmosphere and we have a model of a standard atmosphere to describe those variations. It is also obvious that the forces on an aircraft will be functions of speed and that this is part of both Reynolds number and Mach number.

Many of the questions we will have about aircraft performance are related to speed. How fast can the plane fly or how slow can it go? How quickly can the aircraft climb? What speed is necessary for lift-off from the runway?

In the previous section on dimensional analysis and flow similarity we found that the forces on an aircraft are not functions of speed alone but of a combination of velocity and density which acts as a pressure that we called dynamic pressure. This combination appears as one of the three terms in Bernoulli's equation

$P+\frac{1}{2} \rho V^{2}=P_{0}$

which can be rearranged to solve for velocity

$V=\sqrt{2\left(P_{0}-P\right) / \rho}$

In chapter two we learned how a Pitot-static tube can be used to measure the difference between the static and total pressure to find the airspeed if the density is either known or assumed. We discussed both the sea level equivalent airspeed, which assumes sea level standard density in finding velocity, and the true airspeed, which uses the actual atmospheric density. In dealing with aircraft it is customary to refer to the sea level equivalent airspeed as the indicated airspeed if any instrument calibration or placement error can be neglected. In this text we will assume that such errors can indeed be neglected and the term indicated airspeed will be used interchangeably with sea level equivalent airspeed.

$V_{IND}=V_{e}=V_{SL}=\sqrt{\frac{2\left(P_{0}-P\right)}{\rho_{SL}}}$

It should be noted that the equations above assume incompressible flow and are not accurate at speeds where compressibility effects are significant. In theory, compressibility effects must be considered at Mach numbers above 0.3; however, in reality, the above equations can be used without significant error to Mach numbers of 0.6 to 0.7. The airspeed indication system of high speed aircraft must be calibrated on a more complicated basis which includes the speed of sound:

$V_{\mathrm{IND}}=\sqrt{\frac{2 a_{SL}^{2}}{\gamma-1}\left[\left(\frac{P_{0}-P}{P_{SL}}+1\right)^{\frac{\gamma-1}{\gamma}}-1\right]}$

where $a_{SL}$ = speed of sound at sea level and $P_{SL}$ = static pressure at sea level. Gamma is the ratio of specific heats (Cp/Cv) for air.

Very high speed aircraft will also be equipped with a Mach indicator since Mach number is a more relevant measure of aircraft speed at and above the speed of sound. In the rest of this text it will be assumed that compressibility effects are negligible and the incompressible form of the equations can be used for all speed related calculations. Indicated airspeed (the speed which would be read by the aircraft pilot from the airspeed indicator) will be assumed equal to the sea level equivalent airspeed.
Thus the true airspeed can be found by correcting for the difference in sea level and actual density. The correction is based on the knowledge that the relevant dynamic pressure at altitude will be equal to the dynamic pressure at sea level as found from the sea level equivalent airspeed:

$\frac{1}{2} \rho V_{true}^{2}=\frac{1}{2} \rho_{SL} V_{e}^{2} \quad \text{or} \quad V_{true}=V_{e} \sqrt{\frac{\rho_{SL}}{\rho}}$

An important result of this equivalency is that, since the forces on the aircraft depend on dynamic pressure rather than airspeed, if we know the sea level equivalent conditions of flight and calculate the forces from those conditions, those forces (and hence the performance of the airplane) will be correctly predicted based on indicated airspeed and sea level conditions. This also means that the airplane pilot need not continually convert the indicated airspeed readings to true airspeeds in order to gauge the performance of the aircraft. The aircraft will always behave in the same manner at the same indicated airspeed regardless of altitude (within the assumption of incompressible flow). This is especially nice to know in take-off and landing situations!

## 4.1 Static Balance of Forces

Many of the important performance parameters of an aircraft can be determined using only statics; ie., assuming flight in an equilibrium condition such that there are no accelerations. This means that the flight is at constant altitude with no acceleration or deceleration. This gives the general arrangement of forces shown below.

In this text we will consider the very simplest case where the thrust is aligned with the aircraft's velocity vector. We will also normally assume that the velocity vector is aligned with the direction of flight or flight path. For this most basic case the equations of motion become:

$T-D=0$

$L-W=0$

Note that this is consistent with the definition of lift and drag as being perpendicular and parallel to the velocity vector or relative wind. Now we make a simple but very basic assumption that in straight and level flight lift is equal to weight,

$L=W$

We will use this so often that it will be easy to forget that it does assume that flight is indeed straight and level. Later we will cheat a little and use this in shallow climbs and glides, covering ourselves by assuming "quasi-straight and level" flight. In the final part of this text we will finally go beyond this assumption when we consider turning flight.

Using the definition of the lift coefficient

$C_{L}=\frac{L}{\frac{1}{2} \rho V_{\infty}^{2} S}$

and the assumption that lift equals weight, the speed in straight and level flight becomes:

$V=\sqrt{\frac{2 W}{\rho S C_{L}}}$

The thrust needed to maintain this speed in straight and level flight is also a function of the aircraft weight. Since T = D and L = W we can write

$\frac{D}{L}=\frac{T}{W} \quad \text{or} \quad T=\frac{D}{L} W$

Therefore, for straight and level flight we find this relation between thrust and weight:

$T=\frac{C_{D}}{C_{L}} W$

The above equations for thrust and velocity become our first very basic relations which can be used to ascertain the performance of an aircraft.
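To make these two relations concrete, here is a minimal numeric sketch (not part of the original text). It borrows the airplane of Example 4.1 below (W = 3000 lb, S = 175 ft²) and assumes a lift coefficient of 0.5, with the drag coefficient taken from the parabolic drag polar developed in the next section:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Assumed inputs for illustration (airplane of Example 4.1, CL chosen arbitrarily).
    const double W   = 3000.0;    // weight, lb
    const double S   = 175.0;     // wing area, ft^2
    const double rho = 0.002377;  // sea level density, sl/ft^3
    const double CL  = 0.5;       // chosen lift coefficient
    const double CD  = 0.040;     // CD0 + K*CL^2 = 0.028 + 0.048*0.25

    // Straight and level flight: L = W fixes the speed for a given CL.
    const double V = std::sqrt(2.0 * W / (rho * S * CL));
    // T = D = (CD/CL) * W in straight and level flight.
    const double T = (CD / CL) * W;

    std::printf("V = %.1f ft/sec, thrust required = %.1f lb\n", V, T);
    // Prints roughly: V = 169.9 ft/sec, thrust required = 240.0 lb
}
```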
## 4.2 Aerodynamic Stall

Earlier we discussed aerodynamic stall. For an airfoil (2-D) or wing (3-D), as the angle of attack is increased a point is reached where the increase in lift coefficient, which accompanies the increase in angle of attack, diminishes. When this occurs the lift coefficient versus angle of attack curve becomes non-linear as the flow over the upper surface of the wing begins to break away from the surface. This separation of flow may be gradual, usually progressing from the aft edge of the airfoil or wing and moving forward; sudden, as flow breaks away from large portions of the wing at the same time; or some combination of the two. The actual nature of stall will depend on the shape of the airfoil section, the wing planform and the Reynolds number of the flow.

We define the stall angle of attack as the angle where the lift coefficient reaches a maximum, CLmax, and use this value of lift coefficient to calculate a stall speed for straight and level flight. Note that the stall speed will depend on a number of factors including altitude. If we look at a sea level equivalent stall speed we have

$V_{stall}=\sqrt{\frac{2 W}{\rho_{SL} S C_{Lmax}}}$

It should be emphasized that stall speed as defined above is based on lift equal to weight or straight and level flight. This is the stall speed quoted in all aircraft operating manuals and used as a reference by pilots. It must be remembered that stall is only a function of angle of attack and can occur at any speed. The definition of stall speed used above results from limiting the flight to straight and level conditions where lift equals weight. This stall speed is not applicable for other flight conditions. For example, in a turn lift will normally exceed weight and stall will occur at a higher flight speed. The same is true in accelerated flight conditions such as climb. For this reason pilots are taught to handle stall in climbing and turning flight as well as in straight and level flight.

For most of this text we will deal with flight which is assumed straight and level and therefore will assume that the straight and level stall speed shown above is relevant. This speed usually represents the lowest practical straight and level flight speed for an aircraft and is thus an important aircraft performance parameter.

We will normally define the stall speed for an aircraft in terms of the maximum gross takeoff weight but it should be noted that the weight of any aircraft will change in flight as fuel is used. For a given altitude, as weight changes the stall speed variation with weight can be found as follows:

$V_{stall_2}=V_{stall_1} \sqrt{\frac{W_{2}}{W_{1}}}$

It is obvious that as a flight progresses and the aircraft weight decreases, the stall speed also decreases. Since stall speed represents a lower limit of straight and level flight speed it is an indication that an aircraft can usually land at a lower speed than the minimum takeoff speed. For many large transport aircraft the stall speed of the fully loaded aircraft is too high to allow a safe landing within the same distance as needed for takeoff. In cases where an aircraft must return to its takeoff field for landing due to some emergency situation (such as failure of the landing gear to retract), it must dump or burn off fuel before landing in order to reduce its weight, stall speed and landing speed. Takeoff and landing will be discussed in a later chapter in much more detail.

## 4.3 Perspectives on Stall

While discussing stall it is worthwhile to consider some of the physical aspects of stall and the many misconceptions that both pilots and the public have concerning stall. To the aerospace engineer, stall is CLmax, the highest possible lifting capability of the aircraft; but, to most pilots and the public, stall is where the airplane loses all lift! How can it be both? And, if one of these views is wrong, why?

The key to understanding both perspectives of stall is understanding the difference between lift and lift coefficient.
Lift is the product of the lift coefficient, the dynamic pressure and the wing planform area. For a given altitude and airplane (wing area) lift then depends on lift coefficient and velocity. It is possible to have a very high lift coefficient CL and a very low lift if velocity is low.

When an airplane is at an angle of attack such that CLmax is reached, the high angle of attack also results in high drag coefficient. The resulting high drag normally leads to a reduction in airspeed which then results in a loss of lift. In a conventionally designed airplane this will be followed by a drop of the nose of the aircraft into a nose down attitude and a loss of altitude as speed is recovered and lift regained. If the pilot tries to hold the nose of the plane up, the airplane will merely drop in a nose up attitude. Pilots are taught to let the nose drop as soon as they sense stall so lift and altitude recovery can begin as rapidly as possible. A good flight instructor will teach a pilot to sense stall at its onset such that recovery can begin before altitude and lift is lost.

It should be noted that if an aircraft has sufficient power or thrust and the high drag present at CLmax can be matched by thrust, flight can be continued into the stall and post-stall region. This is possible on many fighter aircraft and the post-stall flight realm offers many interesting possibilities for maneuver in a "dog-fight".

The general public tends to think of stall as when the airplane drops out of the sky. This can be seen in almost any newspaper report of an airplane accident where the story line will read "the airplane stalled and fell from the sky, nosediving into the ground after the engine failed". This kind of report has several errors. Stall has nothing to do with engines and an engine loss does not cause stall. Sailplanes can stall without having an engine and every pilot is taught how to fly an airplane to a safe landing when an engine is lost. Stall also doesn't cause a plane to go into a dive. It is, however, possible for a pilot to panic at the loss of an engine, inadvertently enter a stall, fail to take proper stall recovery actions and perhaps "nosedive" into the ground.

## 4.4 Drag and Thrust Required

As seen above, for straight and level flight, thrust must be equal to drag. Drag is a function of the drag coefficient CD which is, in turn, a function of a base drag and an induced drag.

$C_{D}=C_{D0}+C_{Di}$

We assume that this relationship has a parabolic form and that the induced drag coefficient has the form

$C_{Di}=K C_{L}^{2}$

We therefore write

$C_{D}=C_{D0}+K C_{L}^{2}$

K is found from inviscid aerodynamic theory to be a function of the aspect ratio and planform shape of the wing

$K=\frac{1}{\pi A R e}$

where e is unity for an ideal elliptical form of the lift distribution along the wing's span and less than one for non-ideal spanwise lift distributions.

The drag coefficient relationship shown above is termed a parabolic drag "polar" because of its mathematical form. It is actually only valid for inviscid wing theory, not the whole airplane. In this text we will use this equation as a first approximation to the drag behavior of an entire airplane. While this is only an approximation, it is a fairly good one for an introductory level performance course. It can, however, result in some unrealistic performance estimates when used with some real aircraft data.
The drag of the aircraft is found from the drag coefficient, the dynamic pressure and the wing planform area:

$D=C_{D} \frac{1}{2} \rho V^{2} S$

Therefore,

$D=\frac{1}{2} \rho V^{2} S\left(C_{D0}+K C_{L}^{2}\right)$

Realizing that for straight and level flight, lift is equal to weight and lift is a function of the wing's lift coefficient, we can write:

$C_{L}=\frac{2 W}{\rho V^{2} S}$

giving:

$D=\frac{1}{2} \rho V^{2} S C_{D0}+\frac{2 K W^{2}}{\rho V^{2} S}$

The above equation is only valid for straight and level flight for an aircraft in incompressible flow with a parabolic drag polar.

Let's look at the form of this equation and examine its physical meaning. For a given aircraft at a given altitude most of the terms in the equation are constants and we can write

$D=A V^{2}+\frac{B}{V^{2}}$

where

$A=\frac{1}{2} \rho S C_{D0} \quad \text{and} \quad B=\frac{2 K W^{2}}{\rho S}$

The first term in the equation shows that part of the drag increases with the square of the velocity. This is the base drag term and it is logical that for the basic airplane shape the drag will increase as the dynamic pressure increases. To most observers this is somewhat intuitive.

The second term represents a drag which decreases as the square of the velocity increases. It gives an infinite drag at zero speed; however, this is an unreachable limit for normally defined, fixed wing (as opposed to vertical lift) aircraft. It should be noted that this term includes the influence of lift or lift coefficient on drag. The faster an aircraft flies, the lower the value of lift coefficient needed to give a lift equal to weight. Lift coefficient, it is recalled, is a linear function of angle of attack (until stall). If an aircraft is flying straight and level and the pilot maintains level flight while decreasing the speed of the plane, the wing angle of attack must increase in order to provide the lift coefficient and lift needed to equal the weight. As angle of attack increases it is somewhat intuitive that the drag of the wing will increase. As speed is decreased in straight and level flight, this part of the drag will continue to increase rapidly until the stall speed is reached.

Adding the two drag terms together gives the following figure which shows the complete drag variation with velocity for an aircraft with a parabolic drag polar in straight and level flight.

## 4.5 Minimum Drag

One obvious point of interest on the previous drag plot is the velocity for minimum drag. This can, of course, be found graphically from the plot. We can also take a simple look at the equations to find some other information about conditions for minimum drag. The requirements for minimum drag are intuitively of interest because it seems that they ought to relate to economy of flight in some way. Later we will find that there are certain performance optima which do depend directly on flight at minimum drag conditions.

At this point we are talking about finding the velocity at which the airplane is flying at minimum drag conditions in straight and level flight. It is important to keep this assumption in mind. We will later find that certain climb and glide optima occur at these same conditions and we will stretch our straight and level assumption to one of "quasi"-level flight.

We can begin with a very simple look at what our lift, drag, thrust and weight balances for straight and level flight tell us about minimum drag conditions and then we will move on to a more sophisticated look at how the wing shape dependent terms in the drag polar equation (CD0 and K) are related at the minimum drag condition. Ultimately, the most important thing to determine is the speed for flight at minimum drag because the pilot can then use this to fly at minimum drag conditions.
Let's look at our simple static force relationships:

$L=W, \quad T=D$

to write

$D=W \frac{D}{L}$

which says that minimum drag occurs when the drag divided by lift is a minimum or, inversely, when lift divided by drag is a maximum.

This combination of parameters, L/D, occurs often in looking at aircraft performance. In general, it is usually intuitive that the higher the lift and the lower the drag, the better an airplane. It is not as intuitive that the maximum lift-to-drag ratio occurs at the same flight conditions as minimum drag. This simple analysis, however, shows that MINIMUM DRAG OCCURS WHEN L/D IS MAXIMUM.

Note that since CL / CD = L/D we can also say that minimum drag occurs when CL/CD is maximum. It is very important to note that minimum drag does not connote minimum drag coefficient. Minimum drag occurs at a single value of angle of attack where the lift coefficient divided by the drag coefficient is a maximum:

$D_{min} \text{ occurs at } \left(\frac{C_{L}}{C_{D}}\right)_{max}$

As noted above, this is not at the same angle of attack at which $C_D$ is at a minimum. It is also not the same angle of attack where lift coefficient is maximum. This should be rather obvious since CLmax occurs at stall and drag is very high at stall.

Since minimum drag is a function only of the ratio of the lift and drag coefficients and not of altitude (density), the actual value of the minimum drag for a given aircraft at a given weight will be invariant with altitude. The actual velocity at which minimum drag occurs is a function of altitude and will generally increase as altitude increases.

If we assume a parabolic drag polar and plot the drag equation for drag versus velocity at different altitudes the resulting curves will look somewhat like the following:

Note that the minimum drag will be the same at every altitude as mentioned earlier and the velocity for minimum drag will increase with altitude.

We discussed in an earlier section the fact that, because of the relationship between dynamic pressure at sea level and that at altitude, the aircraft would always perform the same at the same indicated or sea level equivalent airspeed. Indeed, if one writes the drag equation as a function of sea level density and sea level equivalent velocity a single curve will result. To find the drag versus velocity behavior of an aircraft it is then only necessary to do calculations or plots at sea level conditions and then convert to the true airspeeds for flight at any altitude by using the velocity relationship below.

$V=\frac{V_{e}}{\sqrt{\sigma}}, \quad \sigma=\frac{\rho}{\rho_{SL}}$

## 4.6 Minimum Drag Summary

We know that minimum drag occurs when the lift to drag ratio is at a maximum, but when does that occur; at what value of CL or CD or at what speed? One way to find CL and CD at minimum drag is to plot one versus the other as shown below. The maximum value of the ratio of lift coefficient to drag coefficient will be where a line from the origin just tangent to the curve touches the curve. At this point are the values of CL and CD for minimum drag. This graphical method of finding the minimum drag parameters works for any aircraft even if it does not have a parabolic drag polar.

Once CLmd and CDmd are found, the velocity for minimum drag is found from the equation below, provided the aircraft is in straight and level flight

$V_{md}=\sqrt{\frac{2 W}{\rho S C_{L_{md}}}}$

As we already know, the velocity for minimum drag can be found for sea level conditions (the sea level equivalent velocity) and from that it is easy to find the minimum drag speed at altitude.
It should also be noted that when the lift and drag coefficients for minimum drag are known and the weight of the aircraft is known the minimum drag itself can be found from

$D_{min}=\frac{C_{D_{md}}}{C_{L_{md}}} W$

It is common to assume that the relationship between drag and lift is the one we found earlier, the so-called parabolic drag polar. For the parabolic drag polar it is easy to take the derivative with respect to the lift coefficient and set it equal to zero to determine the conditions for the minimum ratio of drag coefficient to lift coefficient, which was a condition for minimum drag. Hence,

$\frac{d}{d C_{L}}\left(\frac{C_{D}}{C_{L}}\right)=\frac{d}{d C_{L}}\left(\frac{C_{D0}+K C_{L}^{2}}{C_{L}}\right)=0$

This gives

$-\frac{C_{D0}}{C_{L}^{2}}+K=0$

or

$C_{D0}=K C_{L}^{2}$

and

$C_{L_{md}}=\sqrt{\frac{C_{D0}}{K}}$

The above is the condition required for minimum drag with a parabolic drag polar. Now, we return to the drag polar and for minimum drag we can write

$C_{D_{md}}=C_{D0}+K C_{L_{md}}^{2}$

which, with the above, gives

$C_{D_{md}}=C_{D0}+K \frac{C_{D0}}{K}$

or

$C_{D_{md}}=2 C_{D0}$

From this we can find the value of the maximum lift-to-drag ratio in terms of basic drag parameters

$\left(\frac{L}{D}\right)_{max}=\frac{C_{L_{md}}}{C_{D_{md}}}=\frac{\sqrt{C_{D0} / K}}{2 C_{D0}}=\frac{1}{2 \sqrt{K C_{D0}}}$

And the speed at which this occurs in straight and level flight is

$V_{md}=\sqrt{\frac{2 W}{\rho S C_{L_{md}}}}$

So we can write the minimum drag velocity as

$V_{md}=\sqrt{\frac{2 W}{\rho S}\left(\frac{K}{C_{D0}}\right)^{1 / 2}}$

or the sea level equivalent minimum drag speed as

$V_{e_{md}}=\sqrt{\frac{2 W}{\rho_{SL} S}\left(\frac{K}{C_{D0}}\right)^{1 / 2}}$

## 4.7 Review: Minimum Drag Conditions for a Parabolic Drag Polar

At this point we know a lot about minimum drag conditions for an aircraft with a parabolic drag polar in straight and level flight. The following equations may be useful in the solution of many different performance problems to be considered later in this text. There will be several flight conditions which will be found to be optimized when flown at minimum drag conditions. It is therefore suggested that the student write the following equations on a separate page in her or his class notes for easy reference.

$C_{L_{md}}=\sqrt{\frac{C_{D0}}{K}} ; \quad C_{D_{md}}=2 C_{D0} ; \quad \left(\frac{L}{D}\right)_{max}=\frac{1}{2 \sqrt{K C_{D0}}} ; \quad V_{md}=\sqrt{\frac{2 W}{\rho S}\left(\frac{K}{C_{D0}}\right)^{1 / 2}}$

EXAMPLE 4.1

An aircraft which weighs 3000 pounds has a wing area of 175 square feet and an aspect ratio of seven with a wing aerodynamic efficiency factor (e) of 0.95. If the base drag coefficient, CD0, is 0.028, find the minimum drag at sea level and at 10,000 feet altitude, the maximum lift-to-drag ratio and the values of lift and drag coefficient for minimum drag. Also find the velocities for minimum drag in straight and level flight at both sea level and 10,000 feet.

We need to first find the term K in the drag equation.

$K=\frac{1}{\pi A R e}=0.048$

Now we can find

$C_{L_{md}}=\sqrt{\frac{C_{D0}}{K}}=\sqrt{\frac{0.028}{0.048}}=0.764, \quad C_{D_{md}}=2 C_{D0}=0.056$

$\left(\frac{L}{D}\right)_{max}=\frac{1}{2 \sqrt{K C_{D0}}}=13.6, \quad D_{min}=\frac{W}{(L / D)_{max}}=220 \mathrm{~lb}$

We can check this with

$D_{min}=\frac{C_{D_{md}}}{C_{L_{md}}} W=\frac{0.056}{0.764}(3000)=220 \mathrm{~lb}$

Since minimum drag is independent of altitude, the minimum drag at 10,000 feet is the same 220 pounds. The velocity for minimum drag is the first of these that depends on altitude. At sea level

$V_{md}=\sqrt{\frac{2 W}{\rho_{SL} S}\left(\frac{K}{C_{D0}}\right)^{1 / 2}}=137 \mathrm{~ft} / \mathrm{sec}$

To find the velocity for minimum drag at 10,000 feet we can recalculate using the density at that altitude or we can use

$V_{md_{10k}}=\frac{V_{md_{SL}}}{\sqrt{\sigma}}=\frac{137}{0.859}=160 \mathrm{~ft} / \mathrm{sec}$

It is suggested that at this point the student use the drag equation and make graphs of drag versus velocity for both sea level and 10,000 foot altitude conditions, plotting drag values at 20 fps increments. The plots would confirm the above values of minimum drag velocity and minimum drag.
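For the plotting exercise just suggested, the following sketch (an illustration added here, not part of the original text) tabulates the drag equation for the Example 4.1 airplane at both altitudes; the tabulated minimum should appear near 137 and 160 ft/sec at about 220 pounds:

```cpp
#include <cmath>
#include <cstdio>

// Total drag for the parabolic polar: D = q*S*CD0 + K*W^2/(q*S), with q = 0.5*rho*V^2.
double drag(double rho, double V, double S, double CD0, double K, double W) {
    double q = 0.5 * rho * V * V;                 // dynamic pressure
    return q * S * CD0 + K * W * W / (q * S);     // parasite + induced drag
}

int main() {
    const double W = 3000.0, S = 175.0;               // Example 4.1 airplane
    const double CD0 = 0.028, K = 0.048;
    const double rhoSL = 0.002377, rho10k = 0.001756; // densities, sl/ft^3

    std::printf("%8s %12s %12s\n", "V (fps)", "D_SL (lb)", "D_10k (lb)");
    for (double V = 80.0; V <= 300.0; V += 20.0) {
        std::printf("%8.0f %12.1f %12.1f\n",
                    V, drag(rhoSL, V, S, CD0, K, W),
                    drag(rho10k, V, S, CD0, K, W));
    }
}
```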
## 4.8 Flying at Minimum Drag

One question which should be asked at this point but is usually not answered in a text on aircraft performance is "Just how the heck does the pilot make that airplane fly at minimum drag conditions anyway?" The answer, quite simply, is to fly at the sea level equivalent speed for minimum drag conditions. The pilot sets up or "trims" the aircraft to fly at constant altitude (straight and level) at the indicated airspeed (sea level equivalent speed) for minimum drag as given in the aircraft operations manual. All the pilot need do is hold the speed and altitude constant.

## 4.9 Drag in Compressible Flow

For the purposes of an introductory course in aircraft performance we have limited ourselves to the discussion of lower speed aircraft; ie, airplanes operating in incompressible flow. As discussed earlier, analytically this would restrict us to consideration of flight speeds of Mach 0.3 or less (less than 300 fps at sea level); however, physical realities of the onset of drag rise due to compressibility effects allow us to extend our use of the incompressible theory to Mach numbers of around 0.6 to 0.7. This is the range of Mach number where supersonic flow over places such as the upper surface of the wing has reached the magnitude that shock waves may occur during flow deceleration resulting in energy losses through the shock and in drag rises due to shock-induced flow separation over the wing surface. This drag rise was discussed in Chapter 3.

As speeds rise to the region where compressibility effects must be considered we must take into account the speed of sound a and the ratio of specific heats of air, gamma. Gamma for air at normal lower atmospheric temperatures has a value of 1.4.

Starting again with the relation for a parabolic drag polar, we can multiply and divide by the speed of sound to rewrite the relation in terms of Mach number:

$D=\frac{1}{2} \rho a^{2} M^{2} S C_{D0}+\frac{2 K W^{2}}{\rho a^{2} M^{2} S}$

where

$M=\frac{V}{a}$

The resulting equation above is very similar in form to the original drag polar relation and can be used in a similar fashion. For example, to find the Mach number for minimum drag in straight and level flight we would take the derivative with respect to Mach number and set the result equal to zero. The complication is that some terms which we considered constant under incompressible conditions, such as K and CD0, may now be functions of Mach number and must be so evaluated. Often the equation above must be solved iteratively.

## 4.10 Review

To this point we have examined the drag of an aircraft based primarily on a simple model using a parabolic drag representation in incompressible flow. We have further restricted our analysis to straight and level flight where lift is equal to weight and thrust equals drag.

The aircraft can fly straight and level at a wide range of speeds, provided there is sufficient power or thrust to equal or overcome the drag at those speeds. The student needs to understand the physical aspects of this flight.

We looked at the speed for straight and level flight at minimum drag conditions. One could, of course, always cruise at that speed and it might, in fact, be a very economical way to fly (we will examine this later in a discussion of range and endurance). However, since "time is money" there may be reason to cruise at higher speeds. It also might just be more fun to fly faster. Flight at higher than minimum-drag speeds will require less angle of attack to produce the needed lift (to equal weight) and the upper speed limit will be determined by the maximum thrust or power available from the engine.

Cruise at lower than minimum drag speeds may be desired when flying approaches to landing or when flying in holding patterns or when flying other special purpose missions. This will require a higher than minimum-drag angle of attack and the use of more thrust or power to overcome the resulting increase in drag. The lower limit in speed could then be the result of the drag reaching the magnitude of the power or the thrust available from the engine; however, it will normally result from the angle of attack reaching the stall angle. Hence, stall speed normally represents the lower limit on straight and level cruise speed.

It must be remembered that all of the preceding is based on an assumption of straight and level flight.
If an aircraft is flying straight and level at a given speed and power or thrust is added, the plane will initially both accelerate and climb until a new straight and level equilibrium is reached at a higher altitude. The pilot can control this addition of energy by changing the plane’s attitude (angle of attack) to direct the added energy into the desired combination of speed increase and/or altitude increase. If the engine output is decreased, one would normally expect a decrease in altitude and/or speed, depending on pilot control input. We must now add the factor of engine output, either thrust or power, to our consideration of performance. It is normal to refer to the output of a jet engine as thrust and of a propeller engine as power. We will first consider the simpler of the two cases, thrust. ## 4.11 Thrust We have said that for an aircraft in straight and level flight, thrust must equal drag. If the thrust of the aircraft’s engine exceeds the drag for straight and level flight at a given speed, the airplane will either climb or accelerate or do both. It could also be used to make turns or other maneuvers. The drag encountered in straight and level flight could therefore be called the thrust required (for straight and level flight). The thrust actually produced by the engine will be referred to as the thrust available. Although we can speak of the output of any aircraft engine in terms of thrust, it is conventional to refer to the thrust of jet engines and the power of prop engines. A propeller, of course, produces thrust just as does the flow from a jet engine; however, for an engine powering a propeller (either piston or turbine), the output of the engine itself is power to a shaft. Thus when speaking of such a propulsion system most references are to its power. When speaking of the propeller itself, thrust terminology may be used. The units employed for discussions of thrust are Newtons in the SI system and pounds in the English system. Since the English units of pounds are still almost universally used when speaking of thrust, they will normally be used here. Thrust is a function of many variables including efficiencies in various parts of the engine, throttle setting, altitude, Mach number and velocity. A complete study of engine thrust will be left to a later propulsion course. For our purposes very simple models of thrust will suffice with assumptions that thrust varies with density (altitude) and throttle setting and possibly, velocity. We already found one such relationship in Chapter two with the momentum equation. Often we will simplify things even further and assume that thrust is invariant with velocity for a simple jet engine. If we know the thrust variation with velocity and altitude for a given aircraft we can add the engine thrust curves to the drag curves for straight and level flight for that aircraft as shown below. We will normally assume that since we are interested in the limits of performance for the aircraft we are only interested in the case of 100% throttle setting. It is obvious that other throttle settings will give thrusts at any point below the 100% curves for thrust. In the figure above it should be noted that, although the terminology used is thrust and drag, it may be more meaningful to call these curves thrust available and thrust required when referring to the engine output and the aircraft drag, respectively. 
## 4.12 Minimum and Maximum Speeds The intersections of the thrust and drag curves in the figure above obviously represent the minimum and maximum flight speeds in straight and level flight. Above the maximum speed there is insufficient thrust available from the engine to overcome the drag (thrust required) of the aircraft at those speeds. The same is true below the lower speed intersection of the two curves. The true lower speed limitation for the aircraft is usually imposed by stall rather than the intersection of the thrust and drag curves. Stall speed may be added to the graph as shown below: The area between the thrust available and the drag or thrust required curves can be called the flight envelope. The aircraft can fly straight and level at any speed between these upper and lower speed intersection points. Between these speed limits there is excess thrust available which can be used for flight other than straight and level flight. This excess thrust can be used to climb or turn or maneuver in other ways. We will look at some of these maneuvers in a later chapter. For now we will limit our investigation to the realm of straight and level flight. Note that at the higher altitude, the decrease in thrust available has reduced the “flight envelope”, bringing the upper and lower speed limits closer together and reducing the excess thrust between the curves. As thrust is continually reduced with increasing altitude, the flight envelope will continue to shrink until the upper and lower speeds become equal and the two curves just touch. This can be seen more clearly in the figure below where all data is plotted in terms of sea level equivalent velocity. In the example shown, the thrust available at h6 falls entirely below the drag or thrust required curve. This means that the aircraft can not fly straight and level at that altitude. That altitude is said to be above the “ceiling” for the aircraft. At some altitude between h5 and h6 feet there will be a thrust available curve which will just touch the drag curve. That altitude will be the ceiling altitude of the airplane, the altitude at which the plane can only fly at a single speed. We will have more to say about ceiling definitions in a later section. Another way to look at these same speed and altitude limits is to plot the intersections of the thrust and drag curves on the above figure against altitude as shown below. This shows another version of a flight envelope in terms of altitude and velocity. This type of plot is more meaningful to the pilot and to the flight test engineer since speed and altitude are two parameters shown on the standard aircraft instruments and thrust is not. It may also be meaningful to add to the figure above a plot of the same data using actual airspeed rather than the indicated or sea level equivalent airspeeds. This can be done rather simply by using the square root of the density ratio (sea level to altitude) as discussed earlier to convert the equivalent speeds to actual speeds. This is shown on the graph below. Note that at sea level V = Ve and also there will be some altitude where there is a maximum true airspeed. ## 4.13 Special Case of Constant Thrust A very simple model is often employed for thrust from a jet engine. The assumption is made that thrust is constant at a given altitude. We will use this assumption as our standard model for all jet aircraft unless otherwise noted in examples or problems. Later we will discuss models for variation of thrust with altitude. 
The above model (constant thrust at altitude) obviously makes it possible to find a rather simple analytical solution for the intersections of the thrust available and drag (thrust required) curves. We will let thrust equal a constant

$T=T_{0}$

therefore, in straight and level flight where thrust equals drag, we can write

$T_{0}=D=C_{D0}\, q S+\frac{K W^{2}}{q S}$

where q is a commonly used abbreviation for the dynamic pressure, $q=\frac{1}{2} \rho V^{2}$, or

$T_{0}=\frac{1}{2} \rho V^{2} S C_{D0}+\frac{2 K W^{2}}{\rho V^{2} S}$

and rearranging as a quadratic equation in $V^{2}$

$\frac{1}{2} \rho S C_{D0}\left(V^{2}\right)^{2}-T_{0}\left(V^{2}\right)+\frac{2 K W^{2}}{\rho S}=0$

Solving the above equation gives

$V^{2}=\frac{T_{0} \pm \sqrt{T_{0}^{2}-4 C_{D0} K W^{2}}}{\rho S C_{D0}}$

or

$V^{2}=\frac{\left(T_{0} / S\right) \pm \sqrt{\left(T_{0} / S\right)^{2}-4 C_{D0} K(W / S)^{2}}}{\rho C_{D0}}$

In terms of the sea level equivalent speed

$V_{e}^{2}=\frac{\left(T_{0} / S\right) \pm \sqrt{\left(T_{0} / S\right)^{2}-4 C_{D0} K(W / S)^{2}}}{\rho_{SL} C_{D0}}$

These solutions are, of course, double valued. The higher velocity is the maximum straight and level flight speed at the altitude under consideration and the lower solution is the nominal minimum straight and level flight speed (the stall speed will probably be a higher speed, representing the true minimum flight speed).

There are, of course, other ways to solve for the intersection of the thrust and drag curves. Sometimes it is convenient to solve the equations for the lift coefficients at the minimum and maximum speeds. To set up such a solution we first return to the basic straight and level flight equations T = T0 = D and L = W.

$\frac{T_{0}}{W}=\frac{D}{L}=\frac{C_{D0}+K C_{L}^{2}}{C_{L}}$

or

$K C_{L}^{2}-\frac{T_{0}}{W} C_{L}+C_{D0}=0$

solving for CL

$C_{L}=\frac{\left(T_{0} / W\right) \pm \sqrt{\left(T_{0} / W\right)^{2}-4 K C_{D0}}}{2 K}$

This solution will give two values of the lift coefficient. The larger of the two values represents the minimum flight speed for straight and level flight while the smaller CL is for the maximum flight speed. The matching speed is found from the relation

$V=\sqrt{\frac{2 W}{\rho S C_{L}}}$

## 4.14 Review for Constant Thrust

The figure below shows graphically the case discussed above. From the solution of the thrust equals drag relation we obtain two values of either lift coefficient or speed, one for the maximum straight and level flight speed at the chosen altitude and the other for the minimum flight speed. The stall speed will probably exceed the minimum straight and level flight speed found from the thrust equals drag solution, making it the true minimum flight speed.

As altitude increases T0 will normally decrease and VMIN and VMAX will move together until at a ceiling altitude they merge to become a single point.

It is normally assumed that the thrust of a jet engine will vary with altitude in direct proportion to the variation in density. This assumption is supported by the thrust equations for a jet engine as they are derived from the momentum equations introduced in chapter two of this text. We can therefore write:

$\frac{T_{alt}}{T_{SL}}=\frac{\rho_{alt}}{\rho_{SL}}=\sigma$

EXAMPLE 4.2

Earlier in this chapter we looked at a 3000 pound aircraft with a 175 square foot wing area, aspect ratio of seven and CD0 of 0.028 with e = 0.95. Let us say that the aircraft is fitted with a small jet engine which has a constant thrust at sea level of 400 pounds. Find the maximum and minimum straight and level flight speeds for this aircraft at sea level and at 10,000 feet assuming that thrust available varies proportionally to density.

If, as earlier suggested, the student plotted the drag curves for this aircraft, a graphical solution is simple. One need only add a straight line representing 400 pounds to the sea level plot and the intersections of this line with the sea level drag curve give the answer. The same can be done with the 10,000 foot altitude data, using a constant thrust reduced in proportion to the density.

Given a standard atmosphere density of 0.001756 sl/ft3, the thrust at 10,000 feet will be 0.739 times the sea level thrust or 296 pounds. Using the two values of thrust available we can solve for the velocity limits at sea level and at 10,000 ft.
$V_{SL}^{2}=\frac{(400 / 175) \pm \sqrt{(400 / 175)^{2}-4(0.028)(0.048)(3000 / 175)^{2}}}{(0.002377)(0.028)}=63053 \text{ or } 5661 \mathrm{~ft}^{2} / \mathrm{sec}^{2}$

$V_{SL}=251 \mathrm{~ft} / \mathrm{sec} \text{ (max)} \quad \text{or} \quad V_{SL}=75 \mathrm{~ft} / \mathrm{sec} \text{ (min)}$

Thus the equation gives maximum and minimum straight and level flight speeds as 251 and 75 feet per second respectively. It is suggested that the student do similar calculations for the 10,000 foot altitude case. Note that one cannot simply take the sea level velocity solutions above and convert them to velocities at altitude by using the square root of the density ratio. The equations must be solved again using the new thrust at altitude. The student should also compare the analytical solution results with the graphical results.

As mentioned earlier, the stall speed is usually the actual minimum flight speed. If the maximum lift coefficient has a value of 1.2, find the stall speeds at sea level and add them to your graphs.
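The suggested recalculation at altitude is easy to script. The sketch below (illustrative, not from the original text) solves the constant thrust quadratic at both altitudes and also reports the straight and level stall speed for a CLmax of 1.2:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double W = 3000.0, S = 175.0, CD0 = 0.028, K = 0.048;
    const double rhoSL = 0.002377, rho10k = 0.001756;  // sl/ft^3
    const double T_SL = 400.0;                         // sea level thrust, lb
    const double CLmax = 1.2;

    const double rho[2]  = {rhoSL, rho10k};
    const char*  name[2] = {"sea level", "10,000 ft"};

    for (int i = 0; i < 2; ++i) {
        double T    = T_SL * rho[i] / rhoSL;           // thrust scales with density
        double disc = std::sqrt((T / S) * (T / S) - 4.0 * CD0 * K * (W / S) * (W / S));
        double Vmax   = std::sqrt((T / S + disc) / (rho[i] * CD0));
        double Vmin   = std::sqrt((T / S - disc) / (rho[i] * CD0));
        double Vstall = std::sqrt(2.0 * W / (rho[i] * S * CLmax));
        std::printf("%s: Vmax = %.0f, Vmin = %.0f, Vstall = %.0f ft/sec\n",
                    name[i], Vmax, Vmin, Vstall);
    }
    // At sea level this prints roughly Vmax = 251, Vmin = 75, Vstall = 110;
    // since Vstall > Vmin, stall sets the true minimum flight speed.
}
```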
## 4.15 Performance in Terms of Power

The engine output of all propeller powered aircraft is expressed in terms of power. Power is really energy per unit time. While the propeller output itself may be expressed as thrust if desired, it is common to also express it in terms of power. While at first glance it may seem that power and thrust are very different parameters, they are related in a very simple manner through velocity. Power is thrust multiplied by velocity.

The units for power are Newton-meters per second or watts in the SI system and horsepower in the English system. As before, we will use primarily the English system. The reason is rather obvious. The author challenges anyone to find any pilot, mechanic or even any automobile driver anywhere in the world who can state the power rating for their engine in watts! Watts are for light bulbs: horsepower is for engines!

Actually, our equations will result in English system power units of foot-pounds per second. The conversion is one HP = 550 foot-pounds/second.

We will speak of two types of power; power available and power required. Power required is the power needed to overcome the drag of the aircraft

$P_{req}=D \times V$

Power available is equal to the thrust multiplied by the velocity.

$P_{av}=T \times V$

It should be noted that we can start with power and find thrust by dividing by velocity, or we can multiply thrust by velocity to find power. There is no reason for not talking about the thrust of a propeller propulsion system or about the power of a jet engine. The use of power for propeller systems and thrust for jets merely follows convention and also recognizes that for a jet, thrust is relatively constant with speed and for a prop, power is relatively invariant with speed.

Power available is the power which can be obtained from the propeller. Recognizing that there are losses between the engine and propeller we will distinguish between power available and shaft horsepower. Shaft horsepower is the power transmitted through the crank or drive shaft to the propeller from the engine. The engine may be piston or turbine or even electric or steam. The propeller turns this shaft power (Ps) into propulsive power with a certain propulsive efficiency, ηp. The propulsive efficiency is a function of propeller speed, flight speed, propeller design and other factors.

It is obvious that both power available and power required are functions of speed, both because of the velocity term in the relation and from the variation of both drag and thrust with speed. For the ideal jet engine which we assume to have a constant thrust, the variation in power available is simply a linear increase with speed.

It is interesting that if we are working with a jet where thrust is constant with respect to speed, the equations above give zero power at zero speed. This is not intuitive but is nonetheless true and will have interesting consequences when we later examine rates of climb. Another consequence of this relationship between thrust and power is that if power is assumed constant with respect to speed (as we will do for prop aircraft) thrust becomes infinite as speed approaches zero. This means that a Cessna 152 when standing still with the engine running has infinitely more thrust than a Boeing 747 with engines running full blast. It also has more power! What an ego boost for the private pilot!

In using the concept of power to examine aircraft performance we will do much the same thing as we did using thrust. We will speak of the intersection of the power required and power available curves determining the maximum and minimum speeds. We will find the speed for minimum power required. We will look at the variation of these with altitude. The graphs we plot will look like that below.

While the maximum and minimum straight and level flight speeds we determine from the power curves will be identical to those found from the thrust data, there will be some differences. One difference can be noted from the figure above. Unlike minimum drag, which was the same magnitude at every altitude, minimum power will be different at every altitude. This means it will be more complicated to collapse the data at all altitudes into a single curve.

## 4.16 Power Required

The power required plot will look very similar to that seen earlier for thrust required (drag). It is simply the drag multiplied by the velocity. If we continue to assume a parabolic drag polar with constant values of CD0 and K we have the following relationship for power required:

$P_{req}=D V=\frac{1}{2} \rho V^{3} S C_{D0}+\frac{2 K W^{2}}{\rho V S}$

We can plot this for given values of CD0, K, W and S (for a given aircraft) for various altitudes as shown in the following example.

We will note that the minimum values of power will not be the same at each altitude. Recalling that the minimum values of drag were the same at all altitudes and that power required is drag times velocity, it is logical that the minimum value of power increases linearly with velocity. We should be able to draw a straight line from the origin through the minimum power required points at each altitude.

The minimum power required in straight and level flight can, of course, be taken from plots like the one above. We would also like to determine the values of lift and drag coefficient which result in minimum power required just as we did for minimum drag.

One might assume at first that minimum power for a given aircraft occurs at the same conditions as those for minimum drag. This is, of course, not true because of the added dependency of power on velocity. We can begin to understand the parameters which influence minimum required power by again returning to our simple force balance equations for straight and level flight:

$P_{req}=D V=\frac{D}{L} W V=W V \frac{C_{D}}{C_{L}}$

and, substituting the straight and level flight speed $V=\sqrt{\frac{2 W}{\rho S C_{L}}}$,

$P_{req}=W \sqrt{\frac{2 W}{\rho S}} \frac{C_{D}}{C_{L}^{3 / 2}}$

Thus, for a given aircraft (weight and wing area) and altitude (density) the minimum required power for straight and level flight occurs when the drag coefficient divided by the lift coefficient to the three-halves power is at a minimum.
Assuming a parabolic drag polar, we can write an equation for the above ratio of coefficients and take its derivative with respect to the lift coefficient (since CL is linear with angle of attack this is the same as looking for an optimum over the range of angle of attack) and set it equal to zero.

$\frac{d}{d C_{L}}\left(\frac{C_{D}}{C_{L}^{3 / 2}}\right)=\frac{d}{d C_{L}}\left(\frac{C_{D0}+K C_{L}^{2}}{C_{L}^{3 / 2}}\right)=0$

This gives

$3 C_{D0}=K C_{L}^{2} \quad \text{or} \quad C_{L_{mp}}=\sqrt{\frac{3 C_{D0}}{K}}$

Note that

$C_{L_{mp}}=\sqrt{3}\, C_{L_{md}}=1.732\, C_{L_{md}}$

The lift coefficient for minimum required power is higher (1.732 times) than that for minimum drag conditions.

Knowing the lift coefficient for minimum required power it is easy to find the speed at which this will occur.

$V_{mp}=\sqrt{\frac{2 W}{\rho S C_{L_{mp}}}}=\sqrt{\frac{2 W}{\rho S}\left(\frac{K}{3 C_{D0}}\right)^{1 / 2}}$

Note that the velocity for minimum required power is lower than that for minimum drag.

The minimum power required and minimum drag velocities can both be found graphically from the power required plot. Minimum power is obviously at the bottom of the curve. Realizing that drag is power divided by velocity and that a line drawn from the origin to any point on the power curve is at an angle to the velocity axis whose tangent is power divided by velocity, then the line which touches the curve with the smallest angle must touch it at the minimum drag condition. From this we can graphically determine the power and velocity at minimum drag and then divide the former by the latter to get the minimum drag. Note that this graphical method works even for non-parabolic drag cases. Since we know that all altitudes give the same minimum drag, all power required curves for the various altitudes will be tangent to this same line with the point of tangency being the minimum drag point.

One further item to consider in looking at the graphical representation of power required is the condition needed to collapse the data for all altitudes to a single curve. In the case of the thrust required or drag this was accomplished by merely plotting the drag in terms of sea level equivalent velocity. That will not work in this case since the power required curve for each altitude has a different minimum. Plotting all data in terms of Ve would compress the curves with respect to velocity but not with respect to power. The result would be a plot like the following:

Knowing that power required is drag times velocity we can relate the power required at sea level to that at any altitude.

$P_{req, alt}=D V_{alt}=D \frac{V_{e}}{\sqrt{\sigma}}$

or

$\sqrt{\sigma}\, P_{req, alt}=D V_{e}=P_{req, SL}$

The result is that in order to collapse all power required data to a single curve we must plot power multiplied by the square root of sigma versus sea level equivalent velocity. This, therefore, will be our convention in plotting power data.

## 4.17 Review

In the preceding we found the following equations for the determination of minimum power required conditions:

$C_{L_{mp}}=\sqrt{\frac{3 C_{D0}}{K}}=\sqrt{3}\, C_{L_{md}}$

We can also write

$C_{D_{mp}}=C_{D0}+K C_{L_{mp}}^{2}=4 C_{D0}=2\, C_{D_{md}}$

Thus, the drag coefficient for minimum power required conditions is twice that for minimum drag. We also can write

$V_{mp}=\sqrt{\frac{2 W}{\rho S}\left(\frac{K}{3 C_{D0}}\right)^{1 / 2}}=0.76\, V_{md}$

Since minimum power required conditions are important and will be used later to find other performance parameters it is suggested that the student write the above relationships on a special page in his or her notes for easy reference.

Later we will take a complete look at dealing with the power available. If we know the power available we can, of course, write an equation with power required equated to power available and solve for the maximum and minimum straight and level flight speeds much as we did with the thrust equations. The power equations are, however, not as simple as the thrust equations because of their dependence on the cube of the velocity. Often the best solution is an iterative one.
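To make the iterative idea concrete, here is a sketch (an illustration, not from the original text) of a bisection solution for the maximum speed when power available is assumed constant; the 150 horsepower figure is an assumed value, and the airplane is again that of Example 4.1:

```cpp
#include <cmath>
#include <cstdio>

// Power required for the parabolic drag polar, in ft-lb/sec.
double Preq(double rho, double V, double S, double CD0, double K, double W) {
    return 0.5 * rho * V * V * V * S * CD0 + 2.0 * K * W * W / (rho * V * S);
}

int main() {
    const double W = 3000.0, S = 175.0, CD0 = 0.028, K = 0.048;
    const double rho = 0.002377;          // sea level density, sl/ft^3
    const double Pav = 150.0 * 550.0;     // assumed constant 150 hp, in ft-lb/sec

    // Above the minimum power speed, Preq grows monotonically with V,
    // so Preq(V) = Pav has a single root there: bracket it and bisect.
    double lo = 105.0;                    // roughly the minimum power speed here
    double hi = 1000.0;                   // comfortably past any attainable speed
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        if (Preq(rho, mid, S, CD0, K, W) > Pav)
            hi = mid;
        else
            lo = mid;
    }
    std::printf("Maximum straight and level speed ~ %.0f ft/sec\n", lo);
}
```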
If the power available from an engine is constant (as is usually assumed for a prop engine), the relation equating power available and power required is

$$P_{avail} = \frac{1}{2}\rho V^{3} S C_{D0} + \frac{2 K W^{2}}{\rho V S}$$

For a jet engine where the thrust is modeled as a constant, the equation reduces to that used in the earlier section on thrust-based performance calculations.

EXAMPLE 4.3

For the same 3000 lb airplane used in earlier examples, calculate the velocity for minimum power.

• It is suggested that the student make plots of the power required for straight and level flight at sea level and at 10,000 feet altitude and graphically verify the above calculated values.
• It is also suggested that from these plots the student find the speeds for minimum drag and compare them with those found earlier.

## 4.18 Summary

This chapter has looked at several elements of performance in straight and level flight. A simple model for drag variation with velocity was proposed (the parabolic drag polar) and this was used to develop equations for the calculation of minimum drag flight conditions and to find maximum and minimum flight speeds at various altitudes. Graphical methods were also stressed, and it should be noted again that these graphical methods will work regardless of the drag model used. It is strongly suggested that the student get into the habit of sketching a graph of the thrust and/or power versus velocity curves as a visualization aid for every problem, even if the solution used is entirely analytical. Such sketches can be a valuable tool in developing a physical feel for the problem and its solution.

## Homework 4

1. Use the momentum theorem to find the thrust for a jet engine where the following conditions are known:

inlet velocity: 300 fps
inlet flow density: 0.0023 sl/ft³
inlet area: 4 ft²
exit flow velocity: 1800 fps
exit flow density: unknown
exit area: 2 ft²
fuel flow rate: 5 lbm/sec

Assume steady flow and that the inlet and exit pressures are atmospheric.

2. We found that the thrust from a propeller could be described by the equation T = T₀ − aV². Based on this equation, describe how you would set up a simple wind tunnel experiment to determine values for T₀ and a for a model airplane engine. Assume you have access to a wind tunnel, a pitot-static tube, a u-tube manometer, and a load cell which will measure thrust. Draw a sketch of your experiment.

### References

Figure 4.1: Kindred Grey (2021). “Static Force Balance in Straight and Level Flight.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.1_20210804

Figure 4.2: Kindred Grey (2021). “Different Types of Stall.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.2_20210804

Figure 4.3: Kindred Grey (2021). “Part of Drag Increases With Velocity Squared.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.3_20210804

Figure 4.4: Kindred Grey (2021). “Part of Drag Decreases With Velocity Squared.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.4_20210804

Figure 4.5: Kindred Grey (2021). “Total Drag Variation With Velocity.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.5_20210804

Figure 4.6: Kindred Grey (2021). “Altitude Effect on Drag Variation.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.6_20210804

Figure 4.7: Kindred Grey (2021).
“Drag Versus Sea Level Equivalent (Indicated) Velocity.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.7_20210804

Figure 4.8: Kindred Grey (2021). “Graphical Method for Determining Minimum Drag Conditions.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.8_20210805

Figure 4.9: Kindred Grey (2021). “Thrust and Drag Variation With Velocity.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.9_20210805

Figure 4.10: Kindred Grey (2021). “Minimum and Maximum Speeds for Straight & Level Flight.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.10_20210805

Figure 4.11: Kindred Grey (2021). “Thrust Variation With Altitude vs Sea Level Equivalent Speed.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.11_20210805

Figure 4.12: Kindred Grey (2021). “Straight & Level Flight Speed Envelope With Altitude.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.12_20210805

Figure 4.13: Kindred Grey (2021). “True Maximum Airspeed Versus Altitude.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.13_20210805

Figure 4.14: Kindred Grey (2021). “Graphical Solution for Constant Thrust at Each Altitude.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.14_20210805

Figure 4.15: Kindred Grey (2021). “Power Available Varies Linearly With Velocity.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.15_20210805

Figure 4.16: Kindred Grey (2021). “Power Required and Available Variation With Altitude.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.16_20210805

Figure 4.17: Kindred Grey (2021). “Power Required Variation With Altitude.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.17_20210805

Figure 4.18: Kindred Grey (2021). “Graphical Determination of Minimum Drag and Minimum Power Speeds.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.18_20210805

Figure 4.19: Kindred Grey (2021). “Plot of Power Required vs Sea Level Equivalent Speed.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.19_20210805

Figure 4.20: Kindred Grey (2021). “Compression of Power Data to a Single Curve.” CC BY 4.0. Adapted from James F. Marchman (2004). CC BY 4.0. Available from https://archive.org/details/4.20_20210805

This page titled 4: Performance in Straight and Level Flight is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by James F. Marchman (Virginia Tech Libraries' Open Education Initiative) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2022-09-27 05:29:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7632102966308594, "perplexity": 752.8971216389639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00427.warc.gz"}
http://curiouscheetah.com/BlogMath/category/geogebra/
# GeoGebra

In this entry, I’m going to start with a concrete problem and develop an abstract generalization. The starting problem: Given isosceles trapezoid \(ABCD\) with an altitude of 6. Point \(E\) is on \(\overline{DC}\) such that \(DE = 3\), \(EC =…

## Constructing a Tangent

I was recently asked for an elegant proof of the following problem. It’s based on a construction challenge from Euclidea. Given: Circles A, B, and C, such that point C is on circle A, point B is on circles A…

## Inscribing an Equilateral Triangle

I was reminded of the cylindrical wedge that casts shadows of a triangle, a square, and a circle, and it got me wondering: What if I wanted to create such a shape with an equilateral triangle as one of its…

## The Free Throws Problem

At a recent workshop on collaboration, the other participants and I were presented with a version of this problem: Adam hits 60% of his free throws. He gets fouled just before the buzzer, and his team is down by one…

## Isosceles Triangles in a Quadrilateral

In this post, I’ll discuss two issues. First, I’ll look at a problem taken from a major textbook, and explain why the solution is wrong. Then, I’ll discuss why this particular problem bothers me in the greater context of mathematics…

## Polygon Sets: Doing the Math

In my previous post, I created sets of regular polygons in GeoGebra by setting a parameter of the polygons equal to a constant. In this post, I will show the mathematics for determining the side length given a particular parameter…

## Polygon Sets

I recently found myself creating a set of regular polygons for a worksheet. I used GeoGebra to create them, and then free-handed the zoom in order to get them consistently sized. This led me to wonder what “consistently sized” would…

## SSA Congruence: Constraints

In my last post, I pointed out that SSA is in fact sufficient for determining all three sides and angles under certain conditions. In this post, I will specify those conditions, with illustrations. Given two noncollinear segments \(\overline{S_1}\) and \(\overline{S_2}\)…

I’m exploring if it’s possible to create a function in GeoGebra that would take an integer as input and create a simplified radical as output. For instance, it would take \(20\) as input and return \(2\sqrt{5}\) as output. I don’t…

Introduction: In my previous post, I included this image, which I’d made in GeoGebra. The image satisfies the conditions of the problem: \(AD\) is tangent to \(\odot P\) and \(\overline{BC} \cong \overline{AD}\). In order to create this image, I…
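The simplified-radical teaser above describes a small algorithm: pull the largest perfect-square factor out from under the root. The post itself is about doing this inside GeoGebra; the Python version below is my own language-neutral sketch of the underlying math, not the post's GeoGebra script.

```python
def simplified_radical(n: int) -> str:
    """Write sqrt(n) as a*sqrt(b) with b squarefree, e.g. 20 -> '2*sqrt(5)'."""
    a, b = 1, n
    f = 2
    while f * f <= b:
        # Divide out each squared factor, moving one copy outside the root.
        while b % (f * f) == 0:
            a *= f
            b //= f * f
        f += 1
    if b == 1:
        return str(a)          # n was a perfect square
    return f"sqrt({b})" if a == 1 else f"{a}*sqrt({b})"

print(simplified_radical(20))  # 2*sqrt(5)
print(simplified_radical(36))  # 6
```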
2019-02-22 14:33:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.673545241355896, "perplexity": 212.3415257163983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247518425.87/warc/CC-MAIN-20190222135147-20190222161147-00520.warc.gz"}
https://www.physicsforums.com/threads/reversal-of-limits-of-integration-in-the-derivation-of-probability-current-density.662669/
# Reversal of limits of integration in the derivation of probability current density

1. Jan 6, 2013

### hnicholls

In working out the derivation of the probability current density, I see (based on the definition of j(x,t)) that the limits of integration are changed from

d/dt ∫(b to a) P(x,t) dx = iħ/2m [ψ*(x,t) ∂/∂x ψ(x,t) − ψ(x,t) ∂/∂x ψ*(x,t)] (b to a)

to

d/dt ∫(b to a) P(x,t) dx = −iħ/2m [ψ*(x,t) ∂/∂x ψ(x,t) − ψ(x,t) ∂/∂x ψ*(x,t)] (a to b)

Thus, the prefactor becomes −iħ/2m as a result of reversing the limits of integration. Is there a reason that the prefactor must be in terms of −i?

Thanks

2. Jan 6, 2013

### Staff: Mentor

The changed limits just changed the sign from + to −; the $\frac{i\hbar}{2m}$-part is quantum mechanics.

3. Jan 6, 2013

### hnicholls

I understand that the sign change is a result of reversing the limits and that the iħ/2m part is what quantizes the result; my question is why the result needs to be negative, rather than positive.

Thanks again.

4. Jan 6, 2013

### cosmic dust

Let me give you an alternative way to derive the current's expression, which maybe makes more sense. What we are looking for is a function that satisfies the “continuity equation”:

$${{\partial }_{t}}\rho +{{\partial }_{x}}j=0$$

which comes from the requirement of local conservation of probability. In the above equation $\rho ={{\Psi }^{*}}\Psi$ is the probability density, so when you calculate ${\partial \rho }/{\partial t}\;$ using S.E. and its complex conjugate, you find:

$${{\partial }_{t}}\rho =-\frac{i\hbar }{2m}{{\partial }_{x}}\left( \Psi {{\partial }_{x}}{{\Psi }^{*}}-{{\Psi }^{*}}{{\partial }_{x}}\Psi \right)$$

So, when you compare this with the continuity equation, you have to set:

$$j=\frac{i\hbar }{2m}\left( \Psi {{\partial }_{x}}{{\Psi }^{*}}-{{\Psi }^{*}}{{\partial }_{x}}\Psi \right)$$

5. Jan 6, 2013

### Staff: Mentor

A minus sign does not mean that the result is negative; the integral itself can be negative as well. I don't know the context of that equation, but I think it is just a complex phase anyway. In other words, the sign (together with i) has no physical significance on its own.

6. Jan 6, 2013

### hnicholls

But isn't j(x,t) defined as

$$j=\frac{-i\hbar }{2m}\left( \Psi {{\partial }_{x}}{{\Psi }^{*}}-{{\Psi }^{*}}{{\partial }_{x}}\Psi \right)$$

7.
Jan 6, 2013

### vanhees71

The sign is uniquely defined by the Schrödinger equation, which reads (setting $\hbar=1$)

$$\mathrm{i} \partial_t \psi(t,x)=-\frac{\Delta}{2m} \psi(t,x)+ V(x) \psi(t,x).$$

Multiplying with $\psi^*$ leads to

$$\psi^*(t,x) \mathrm{i} \partial_t \psi(t,x)=\psi^*(t,x) \left [-\frac{\Delta}{2m} \psi(t,x)+ V(x) \psi(t,x) \right].$$

Then subtracting the complex conjugate of this equation and multiplying with (−i) leads to

$$\partial_t |\psi(t,x)|^2=\vec{\nabla} \cdot \frac{\mathrm{i}}{2m} [\psi^*(t,x) \vec{\nabla} \psi(t,x)-\psi(t,x) \vec{\nabla} \psi^*(t,x)].$$

Comparing this with the continuity equation leads to

$$\rho(t,x)=|\psi(t,x)|^2, \quad \vec{j}=-\frac{\mathrm{i}}{2m} [\psi^*(t,x) \vec{\nabla} \psi(t,x)-\psi(t,x) \vec{\nabla} \psi^*(t,x)],$$

as already given by cosmic dust.

8. Jan 10, 2013

### hnicholls

So, proceeding with the assumption that ψ(t,x) satisfies the TDSE,

(−ℏ²/2m) ∂x²ψ(t,x) = iℏ ∂tψ(t,x), with V(x)ψ(t,x) = 0.

Dividing by iℏ:

∂tψ(t,x) = (iℏ/2m) ∂x²ψ(t,x)

Multiplying by ψ(t,x)* and adding ψ(t,x) times the complex conjugate equation, with P(t,x) = ψ(t,x)ψ(t,x)*, gives

∂tP(t,x) = (iℏ/2m) [ψ(t,x)* ∂x²ψ(t,x) − ψ(t,x) ∂x²ψ(t,x)*]

and the right side of this equation can be rewritten as

∂tP(t,x) = (iℏ/2m) ∂x[ψ(t,x)* ∂xψ(t,x) − ψ(t,x) ∂xψ(t,x)*]

But this is −∂xj(t,x), where j(t,x) is defined as

(−iℏ/2m) [ψ(t,x)* ∂xψ(t,x) − ψ(t,x) ∂xψ(t,x)*]

and so ∂tP(t,x) + ∂xj(t,x) = 0, which is exactly what the "continuity equation" requires. So the reversal of the limits of integration, which produces the −iℏ/2m prefactor, is necessary so that the TDSE and the "continuity equation" are both satisfied. That seems right.
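A quick way to see that the −iℏ/2m convention gives a physically sensible (real, direction-of-motion) current is to evaluate j for a free plane wave; this check is an illustration added here, not part of the original thread:

$$\psi = A\,e^{i(kx-\omega t)},\qquad \partial_x\psi = ik\,\psi,\qquad j=-\frac{i\hbar}{2m}\left(\psi^*\partial_x\psi-\psi\,\partial_x\psi^*\right) =-\frac{i\hbar}{2m}\left(ik|A|^2+ik|A|^2\right)=\frac{\hbar k}{m}|A|^2 .$$

A wave with momentum +ℏk thus carries a positive current equal to density times velocity, $|A|^2\,(\hbar k/m)$, which is what the sign convention is chosen to guarantee.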
2017-08-19 23:16:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9334180355072021, "perplexity": 1323.9834267760807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105927.27/warc/CC-MAIN-20170819220657-20170820000657-00243.warc.gz"}
https://hssliveguru.com/kerala-syllabus-9th-standard-maths-solutions-chapter-3/
# Kerala Syllabus 9th Standard Maths Solutions Chapter 3 Pairs of Equations

## Kerala State Syllabus 9th Standard Maths Solutions Chapter 3 Pairs of Equations

### Kerala Syllabus 9th Standard Maths Pairs of Equations Text Book Questions and Answers

Textbook Page No. 36

Do each problem below either in your head, or using an equation with one letter, or two equations with two letters:

Question 1.
In a rectangle of perimeter one metre, one side is five centimetres longer than the other. What are the lengths of the sides?

Shortest side = x; longest side = x + 5. Perimeter = 1 m = 100 cm.
2(x + x + 5) = 100; 2x + 5 = 50; 2x = 45; x = 22.5
∴ Shortest side = 22.5 cm; longest side = 22.5 + 5 = 27.5 cm

Question 2.
A class has 4 more girls than boys. On a day when only 8 boys were absent, the number of girls was twice that of the boys. How many girls and boys are there in the class?

Number of boys = x; number of girls = x + 4.
2(x − 8) = x + 4; 2x − 16 = x + 4; 2x − x = 4 + 16; x = 20
∴ Number of boys = 20; number of girls = 24

Question 3.
A man invested 10000 rupees, split into two schemes, at annual rates of interest 8% and 9%. After one year he got 875 rupees as interest from both. How much did he invest in each?

If one part is x, then the remaining part is 10000 − x.
$$x\times \frac { 8 }{ 100 } +\left( 10000-x \right) \times \frac { 9 }{ 100 } =875$$
8x + 90000 − 9x = 87500; 90000 − 87500 = x; x = 2500
∴ One part = 2500 and the remaining part = 7500

Question 4.
A three and a half metre long rod is to be cut into two pieces, one piece is to be bent into a square and the other into an equilateral triangle. The length of their sides must be the same. How should it be cut?

Total length = 3½ m. Since the sides of the square and the equilateral triangle are equal, all these 7 sides are equal.
∴ Length of one side = $$3\frac { 1 }{ 2 } \div 7=\frac { 7 }{ 2 } \div 7=\frac { 1 }{ 2 }$$ m
Length of the rod for the square = $$4\times \frac { 1 }{ 2 }$$ = 2 m
Length of the rod for the equilateral triangle = $$3\times \frac { 1 }{ 2 }$$ = $$1\frac { 1 }{ 2 }$$ m

Question 5.
The distance travelled in t seconds by an object starting with a speed of u metres/second and moving along a straight line with speed increasing at the rate of a metres/second every second is given by ut + $$\frac { 1 }{ 2 }$$ at² metres. An object moving in this manner travels 10 metres in 2 seconds and 28 metres in 4 seconds. With what speed did it start? At what rate does its speed change?

If t = 2: 2u + 2a = 10, so u + a = 5 …………(1)
If t = 4: 4u + 8a = 28, so u + 2a = 7 …………(2)
From (1) and (2): a = 2, ∴ u = 3

Textbook Page No. 40

Question 1.
Raju bought seven notebooks of two hundred pages and five of hundred pages, for 107 rupees. Joseph bought five notebooks of two hundred pages and seven of hundred pages, for 97 rupees. What is the price of each kind of notebook?

Cost of a 200 page notebook = x; cost of a 100 page notebook = y.
7x + 5y = 107 …………(1)
5x + 7y = 97 …………(2)
(1) × 5 ⇒ 35x + 25y = 535 …………(3)
(2) × 7 ⇒ 35x + 49y = 679 …………(4)
(4) − (3) ⇒ 24y = 144; y = 144/24 = 6
Substitute y = 6 in equation (1): 7x + 30 = 107; 7x = 77; x = 77/7 = 11
Price of the 200 pages notebook = Rs.
11
Price of the 100 pages notebook = Rs. 6

Question 2.
Four times a number and three times another number added together make 43. Two times the second number, subtracted from three times the first, gives 11. What are the numbers?

Let the first number = x and the second number = y.
4x + 3y = 43 …………(1)
3x − 2y = 11 …………(2)
(1) × 3 ⇒ 12x + 9y = 129 …………(3)
(2) × 4 ⇒ 12x − 8y = 44 …………(4)
(3) − (4) ⇒ 17y = 85; y = 85/17 = 5
Substitute y = 5 in equation (1): 4x + 15 = 43; 4x = 43 − 15 = 28; x = 28/4 = 7
First number = 7; second number = 5

Question 3.
The sum of the digits of a two-digit number is 11. The number got by interchanging the digits is 27 more than the original number. What is the number?

If the digits are x and y:
x + y = 11 …………(1)
10x + y + 27 = 10y + x; 9x − 9y = −27; x − y = −3 …………(2)
(1) + (2): 2x = 8; x = 4
x + y = 11; 4 + y = 11; y = 7
∴ The required number is 47.

Question 4.
Four years ago, Rahim’s age was three times Ramu’s age. After two years, it would just be double. What are their ages now?

Ramu’s present age = x; Rahim’s present age = y.
Four years back, Ramu’s age = x − 4 and Rahim’s age = y − 4:
3(x − 4) = y − 4; 3x − 12 = y − 4; 3x − y = 8 ……….(1)
After 2 years, Ramu’s age = x + 2 and Rahim’s age = y + 2:
2(x + 2) = y + 2; 2x + 4 = y + 2; 2x − y = −2 ……….(2)
(1) − (2) ⇒ x = 10
3x − y = 8; 30 − y = 8; y = 22
Ramu’s present age = 10; Rahim’s present age = 22

Question 5.
If the length of a rectangle is increased by 5 metres and breadth decreased by 3 metres, the area would decrease by 5 square metres. If the length is increased by 3 metres and breadth increased by 2 metres, the area would increase by 50 square metres. What are the length and breadth?

Length = x; breadth = y.
(x + 5)(y − 3) = xy − 5; xy − 3x + 5y − 15 = xy − 5; 3x − 5y = −10 ………..(1)
(x + 3)(y + 2) = xy + 50; xy + 2x + 3y + 6 = xy + 50; 2x + 3y = 44 ………..(2)
(1) × 2 ⇒ 6x − 10y = −20 ……….(3)
(2) × 3 ⇒ 6x + 9y = 132 …………(4)
(3) − (4) ⇒ −19y = −152; y = −152/−19 = 8
2x + 3y = 44; 2x + 24 = 44; 2x = 20; x = 10
Length of the rectangle = 10 m; breadth of the rectangle = 8 m

Textbook Page No. 42

Question 1.
A 10 metre long rope is to be cut into two pieces and a square is to be made using each. The difference in the areas enclosed must be 1¼ square metres. How should it be cut?

Length of one piece = x m; length of the other piece = (10 − x) m.
(x/4)² − ((10 − x)/4)² = 5/4, so x² − (10 − x)² = 20
[x + (10 − x)][x − (10 − x)] = 20; 10(2x − 10) = 20; 2x − 10 = 2; x = 6
∴ The rope is divided into 6 m and 4 m.

Question 2.
The length of a rectangle is 1 metre more than its breadth. Its area is 3¾ square metres. What are its length and breadth?

Length = x; breadth = y.
x = y + 1, so x − y = 1, and $$x y = 3 \frac{3}{4} =\frac{15}{4}$$
(x + y)² = (x − y)² + 4xy = 1² + 4 × 15/4 = 1 + 15 = 16
x − y = 1; x + y = 4
2x = 5; x = 5/2 = 2.5; y = 1.5
∴ Length = 2.5 m; breadth = 1.5 m

Question 3.
The hypotenuse of a right triangle is 6½ centimetres and its area is 7½ square centimetres. Calculate the lengths of its perpendicular sides.

The perpendicular sides are x and y.
Given that xy = 15 and x² + y² = (13/2)² = 169/4:
(x + y)² = x² + y² + 2xy = 169/4 + 120/4 = 289/4, so x + y = 17/2 ……….(3)
(x − y)² = x² + y² − 2xy = 169/4 − 120/4 = 49/4, so x − y = 7/2 ……….(4)
From (3) and (4): 2x = 24/2 = 12, ∴ x = 6
6 − y = 7/2, ∴ y = 5/2 = 2.5
∴ The perpendicular sides are 6 cm and 2.5 cm.

### Kerala Syllabus 9th Standard Maths Pairs of Equations Exam Oriented Text Book Questions and Answers

Question 1.
There are some oranges in a bag. When 10 more oranges are added to the bag, the number becomes 3 times the number of oranges initially in it. How many oranges were there in the bag initially?

Fill in the blanks:
Let the number of oranges initially taken = x
x + …… = 3x; 3x − x = ……; 2x = ……; x = ……/2 = ……

Answer:
x + 10 = 3x; 3x − x = 10; 2x = 10; x = 10/2 = 5

Question 2.
A box contains some white balls and some black balls. The number of black balls is 8 more than the number of white balls. The total number of balls is 4 times the number of white balls. Find the number of white balls and the number of black balls.

Fill in the blanks:
Number of white balls = x; number of black balls = …… + 8; total number of balls = …… × x
(x) + (x + 8) = ……x; 2x + 8 = ……x; 8 = ……x − 2x = ……x; x = 8/……
White balls = ……; black balls = …… + 8 = ……

Answer:
Number of white balls = x; number of black balls = x + 8; total number of balls = 4 × x
(x) + (x + 8) = 4x; 2x + 8 = 4x; 8 = 4x − 2x = 2x; x = 8/2 = 4
White balls = 4; black balls = 4 + 8 = 12

Question 3.
The sum of two numbers is 36 and the difference is 8. Find the numbers.

Fill in the blanks:
Let x, y be the numbers: x + y = 36; x − y = 8
(x + y) + (x − y) = …… + ……; 2x = ……; x = ……/2 = ……
x − y = 8; …… − y = 8; …… − 8 = y

Answer:
x + y = 36; x − y = 8
(x + y) + (x − y) = 36 + 8; 2x = 44; x = 44/2 = 22
x − y = 8; 22 − y = 8; 22 − 8 = y; y = 14
∴ The numbers are 22 and 14.

Question 4.
The cost of 2 pencils and 5 pens is Rs 17, and 2 pencils and 3 pens at the same rates cost Rs 11. Find the prices of a pencil and a pen.

Fill in the blanks:
Let the price of a pencil = x and the price of a pen = y.
∴ 2x + 5y = ……; 2x + ……y = 11
(2x + 5y) − (2x + ……y) = …… − 11; ……y = ……; y = ……/……
2x + 5 × …… = 17; 2x = 17 − ……; x = ……/2 = ……

Answer:
2x + 5y = 17; 2x + 3y = 11
(2x + 5y) − (2x + 3y) = 17 − 11; 2y = 6; y = 6/2 = 3
2x + 5 × 3 = 17; 2x = 17 − 15; x = 2/2 = 1
Price of a pencil = Rs 1; price of a pen = Rs 3

Question 5.
Twice a number added to thrice another number gives 23. Four times the first number added to 5 times the second number gives 41. Find the numbers.

Fill in the blanks:
First number = x; second number = y
∴ 2x + 3y = ……; 4x + 5y = ……
2(2x + 3y) = 2 × ……; 4x + 6y = ……
(4x + 6y) − (4x + 5y) = (……) − (……); y = ……
2x + 3 × …… = ……; 2x = (……) − (……); x = ……/2

Answer:
2x + 3y = 23; 4x + 5y = 41
2(2x + 3y) = 2 × 23; 4x + 6y = 46
(4x + 6y) − (4x + 5y) = 46 − 41; y = 5
2x + 3 × 5 = 23; 2x = 23 − 15; x = 8/2 = 4
First number = 4; second number = 5

Question 6.
Rama spends Rs 97 to buy 4 two-hundred-page notebooks and 5 hundred-page notebooks. Geetha spends Rs 101 to buy 5 two-hundred-page notebooks and 4 hundred-page notebooks. What are the prices of the two types of notebooks?

Fill in the blanks:
Let the cost of a two-hundred-page notebook = x and the cost of a hundred-page notebook = y.
(1) 4x + 5y = 97; (1) × 5 ⇒ 20x + ……y = ……
(2) 5x + 4y = 101; (2) × 4 ⇒ 20x + ……y = ……
……y = (……) − (……); y = ……/……
4x + 5 × (……) = 97; 4x = 97 − (……
); x = ……/4

Answer:
(1) 4x + 5y = 97
(2) 5x + 4y = 101
(1) × 5 → 20x + 25y = 485
(2) × 4 → 20x + 16y = 404
9y = 485 − 404; y = 81/9 = 9
4x + 5 × 9 = 97; 4x = 97 − 45 = 52; x = 52/4 = 13
Cost of a two-hundred-page notebook = Rs 13; cost of a hundred-page notebook = Rs 9

Question 7.
Six years back, the age of Muneer was 3 times the age of Mujeeb. After 4 years, the age of Muneer becomes twice the age of Mujeeb. Find their ages now.

Age of Mujeeb 6 years back = x; age of Muneer 6 years back = 3x.
Their present ages are x + 6 and 3x + 6, so after 4 more years:
3x + 6 + 4 = 2(x + 6 + 4); 3x + 10 = 2x + 20; x = 10
Age of Mujeeb now = 10 + 6 = 16 years; age of Muneer now = 30 + 6 = 36 years.
(Check: six years back, 30 = 3 × 10; after four years, 40 = 2 × 20.)

Question 8.
The cost of 4 chairs and 5 tables is Rs 6600, and the cost of 5 chairs and 3 tables at the same prices is Rs 5000. What are the prices of a table and a chair?

Cost of a chair = Rs a; cost of a table = Rs b.
4a + 5b = 6600 …………(1)
5a + 3b = 5000 …………(2)
(1) × 5 → 20a + 25b = 33000
(2) × 4 → 20a + 12b = 20000
(20a + 25b) − (20a + 12b) = 33000 − 20000
13b = 13000; b = 13000/13 = Rs 1000
4a + 5b = 6600; 4a + 5 × 1000 = 6600; 4a = 6600 − 5000 = 1600; a = 1600/4 = Rs 400
Cost of a table = Rs 1000; cost of a chair = Rs 400
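All of these problems reduce to a pair of linear equations, which the solutions above eliminate by hand; the Python sketch below (my illustration, not part of the textbook) solves the same systems by Cramer's rule so answers can be checked quickly.

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Question 8: 4a + 5b = 6600, 5a + 3b = 5000
print(solve_2x2(4, 5, 6600, 5, 3, 5000))   # (400.0, 1000.0)
# Page 40, Question 1: 7x + 5y = 107, 5x + 7y = 97
print(solve_2x2(7, 5, 107, 5, 7, 97))      # (11.0, 6.0)
```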
2022-12-05 13:48:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4516248404979706, "perplexity": 1437.731327763624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711017.45/warc/CC-MAIN-20221205132617-20221205162617-00705.warc.gz"}
https://www.techwhiff.com/learn/textbook-authors-typically-receive-a-simple/1898
Textbook authors typically receive a simple percentage of total revenue generated from book sales. The publisher bears all the production costs and chooses the output level. Suppose the retail price of a book is fixed at $50. The author receives $10 per copy, and the firm receives $40 per copy. The firm is interested in maximizing its own profits. Will the author be happy with the book company's output choice? Does the selected output maximize the joint profits (for both the author and company) from the book?

## Answers
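The answer section of the page is empty; what follows is a sketch of the standard marginal-analysis argument, with an assumed increasing, convex cost function C(Q) (the problem statement does not specify one):

$$\max_{Q}\;\pi_{\text{firm}} = 40Q - C(Q)\;\Rightarrow\; C'(Q_{f}) = 40, \qquad \max_{Q}\;\pi_{\text{joint}} = 50Q - C(Q)\;\Rightarrow\; C'(Q_{J}) = 50 .$$

Since C′ is increasing, Q_J > Q_f. The author's royalty, 10Q, rises with every additional copy sold, so the author would prefer a larger print run than the publisher chooses. The publisher's profit-maximizing output is therefore smaller than the output that would maximize the joint profits of author and company, and the author will not be happy with it.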
2022-07-07 14:21:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2645079791545868, "perplexity": 2214.4514007600264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00749.warc.gz"}
https://jappieklooster.nl/thesis-writing-tips.html
Thesis writing tips

So I recently started writing my master thesis, and looking at some of my fellow master students' work I realized that perhaps many people have trouble with managing references, quality control and keeping motivated. In this post I'll explain how I managed to do it. Most of these ideas are either directly stolen from my classmates or ripped from reddit, and a few are my own "invention", though others may already have thought of them. So perhaps the best way to see this post is as an aggregation of techniques and tools (the tools part will be moved to another post) I used to waste less time. My advice to you is to skim this article, look for any ideas that seem interesting, and try them out.

1 Reference management

Reference management seems to be a thing which can be quite tricky. It is extremely tedious and dull work, and therefore I obviously don't do it by hand. I use bibtex as the core platform; I haven't compared the available options, but since it integrates with org-mode I was happy with it.

To find papers I mainly use Google scholar through the university proxy. However, sometimes papers aren't available for my university, so I'm forced to use http://sci-hub.io/ (if that link gets blocked in the future, try the wikipedia entry; wikipedia is impartial and will therefore provide a proper link (unlike Google)). Once I get access to a paper I will skim through it to see if there is relevant information. If so, I will make a copy to a papers folder and use Google scholar to generate the bibtex entry for me: there is a citation option at the bottom of the search result; click it and then bibtex. That little feature of Google scholar has already saved me probably several hours of making bibtex entries. The copied paper will have the same name as the bibtex entry key, which is important if you want to recheck your reference later. Doing this will also lower the psychological barrier of re-checking something, since you won't have to go through the entire obtain-paper process again. Note that you shouldn't track the papers in your version control, because first of all you don't expect them to change, and second of all if you push them to a public repository you're breaking the law on a whole other level (distributing rather than just consuming; consuming is more easily ignored by law enforcement).

1.1 Books

I haven't completely solved books. It seems that getting books is a lot tougher than papers, probably because most scientists feel that papers should be easily obtainable but don't necessarily feel the same way about books (many will have written one or contributed to one). I as a programmer feel very deeply about the necessity for freedom of information and therefore will share some possible sources. It will be up to the reader to decide if they should be using such sources. However, do note that these sources aren't always successful, and you should prefer obtaining papers, since sci-hub works practically always.

library genesis should be your first source of books; it has a crappy interface and some mirrors may not work, but if you get a result you usually also get a book. If that fails you can try and use IRC, which is described very thoroughly here. In short, for archiving purposes: go to undernet, join bookz (/join #bookz) and then search for your book title (@search). If you get a result, save that file (which isn't a book, just the query result), open it, and if it contains your book use the code to download it.
Finally, there is a desperate method I used one time. It takes time but it can be worth the effort. There is this concept called digital borrowing, where HTML DRM is used to ensure only n copies get borrowed at a time. An example of such a website is the open library. The system is kind of dumb, since making copies of digital stuff is practically free. However, I do like the fact that some publishers fell for it. Anyway, if it has your book you can just "borrow" it and make screenshots of each page. You just bypassed a bunch of HTML-based DRM and encryption with a button on your keyboard. Yes, it's pretty dumb. (Not that I dislike the open library initiative or the internet archive; in fact I love their effort of tricking the publishers. Their work may save some books from the copyright abyss, where literature gets lost forever because laws prevent copying before the last copy gets lost.)

2 Time management

I have a pretty strict schedule, working 6 hours per day, 5 days a week. I won't do any work on the thesis in the weekend. I only do 6 hours because personally I feel like those last two hours are usually wasted; only when I don't notice the time pass by will I go overtime. I start every working day at 9 in the morning and go to about 3-ish. The days where I meet my teacher are of course the least productive. Currently this is on Wednesday, breaking up the week nicely. I do have to say that when I start there is no distraction. I will block websites such as Facebook, reddit and YouTube. 6 hours is a short time, and the personal contract I have with myself is that this time will be used productively.

Also note that annexing the weekend to try and finish the thesis a few weeks earlier is probably a bad idea. Focusing for extended amounts of time on just one subject can only be done for so long. The best I ever did was a commit streak on a hobby project on github for 3 weeks, and I had a lot of fun making that. This probably won't be true for all the work in your thesis; you may get burned out quickly if you try to do this. (Although I personally think it is quite a lot of fun to work on it.)

3 TODO management

Tracking what stuff you have to do is quite important. What I've found most effective is a LaTeX package called todonotes. It allows you to insert todo items in the document itself, which will be shown as long as your document class option is draft, but as soon as you set it to final they will be removed from the final document. This integrates well with org, although it misses any sort of highlighting. Org also doesn't understand line breaks in todo items, which makes it rather difficult to stay below 80 chars per line for todo items (this breaks my heart). I use the general todo items for feedback from the teacher and also for my own thoughts. Things I think I should do end up in a todo item; this allows the teacher to see what goes on in my mind and also whether I understood his feedback correctly. Then with these notes I create some specialized commands:

\usepackage[obeyFinal, colorinlistoftodos]{todonotes}
% Working draft: paragraph is incomplete and should be finished
\newcommand{\drafting}{\todo[noline, color=gray]{Working draft}}
% Finished, but re-check later for spelling/grammar
\newcommand{\toReview}{\todo[noline, color=yellow]{To review}}
% Cleared, but the teacher hasn't seen it yet
\newcommand{\newlycleared}{
\todo[noline, backgroundcolor=white, bordercolor=red]{Newly cleared}
}
% Done
\newcommand{\cleared}{\todo[noline, color=white]{Cleared}}

Drafting is for paragraphs that are incomplete and should be finished. toReview is for paragraphs that have been finished but should be checked again at a later time for spelling/grammar reasons.
newlycleared is for items that have been cleared but haven't been seen by the teacher (he's the guy giving me the grade, so gotta keep him on my side). Finally, cleared is for items that are done. Usually I build up a bunch of toReview items throughout the week and then go on a clearing spree the day before meeting the teacher; a short usage sketch follows at the end of this post. This also helps remind me of what I wrote about. Note that the idea of using todonotes I basically stole directly from one of my classmates, but it significantly improved the quality of my documents, since I can manage my attention in a much more focused way. Besides, looking at the todo list and looking at the document have now become the same thing.

4 In short

This got way longer than I thought it would. I still haven't even talked about editing tools, version control, org mode itself, and that awesome UML library called plantuml which integrates excellently with org. I will discuss these things in a later post. I hope these tips can be useful to you.
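As a usage sketch of the commands defined above (the section names are made up for illustration, and the preamble is the one from the TODO management section):

\documentclass[draft]{article} % switch to [final] to strip every note
\usepackage[obeyFinal, colorinlistoftodos]{todonotes}
% ... the \drafting / \toReview / \newlycleared / \cleared commands from above ...

\begin{document}
\section{Results}\drafting
Our experiments show \todo{teacher: cite the survey here} that the effect holds.

\section{Methods}\toReview
The pipeline consists of three stages.

\listoftodos % one overview page of everything still open
\end{document}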
2019-07-18 21:42:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38726723194122314, "perplexity": 873.7378173660956}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525829.33/warc/CC-MAIN-20190718211312-20190718233312-00004.warc.gz"}
http://www.whxb.pku.edu.cn/CN/Y2018/V34/I10/1171
Acta Physico-Chimica Sinica (物理化学学报) 2018, Vol. 34, Issue (10): 1171-1178    DOI: 10.3866/PKU.WHXB201803024

Special issue: Molecular Simulation in Materials Science

Article

Influence of Photoisomerization on Binding Energy and Conformation of Azobenzene-Containing Host-Guest Complex

Pingying LIU1,2, Chunyan LIU2, Qian LIU2, Jing MA2,*()

1 School of Materials Science and Engineering, Jingdezhen Ceramic Institute, Jingdezhen 333403, Jiangxi Province, P. R. China
2 School of Chemistry and Chemical Engineering, Key Laboratory of Mesoscopic Chemistry of MOE, Nanjing University, Nanjing 210023, P. R. China

Abstract: The construction of a photo-controllable artificial molecular machine capable of realizing light-driven motion on a molecular scale and of performing a specific function is a fascinating topic in supramolecular chemistry. The bistable switchable molecule, azobenzene (AZO), has been introduced into the supramolecular architecture as a key building block, owing to its efficient and reversible trans (E)-cis (Z) photoisomerization. The binding strength of the dibenzo[24]crown-8 (DB24C8) host and dialkylammonium-based rod-like guest consisting of an AZO moiety and the Z$\to$E photoisomerization process in an interlocked host-guest complex have been investigated by density functional theory (DFT) calculations and reactive molecular dynamics (RMD) simulations, considering both torsion and inversion paths. The strong host-guest binding strength provides a necessary premise to stabilize the complex during the E-Z photoisomerization of the AZO unit, which is a terminal stopper to control the directional motion of the guest. A stronger binding strength for the Z isomer can be induced by the stronger hydrogen-bonding interaction. The steric effect is introduced into the Z isomer to force the ring to slip exclusively over the cyclopentyl terminal (pseudostopper). The host-guest complexation has a slight effect on the conformation of the AZO functional subunit for the two isomers. The faster Z$\to$E photoisomerization process within the picosecond timescale is kinetically more favored than the dethreading of the ring through the pseudostopper subunit of the rod. After isomerization, a structure relaxation is observed for the crown ether ring within 500 ps. The flexible backbone of the crown ether ring is helpful in realizing steady and stable host-guest recognition during photoisomerization. Moreover, the orthogonality of the site-specific binding interaction is revealed by the similar binding energies obtained at similar hydrogen bonding recognition sites for various interlocked host-guest supramolecular systems, although the constituents of the guests are different from each other. These results are useful for the rational design of more sophisticated stimuli-controlled artificial molecular machines.

Key words: Photoisomerization    Reactive molecular dynamics model    Azobenzene    Nanomotors    Pseudorotaxane    Supramolecular chemistry

CLC number: O641
2018-04-26 13:09:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3097492754459381, "perplexity": 9845.447532485574}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948214.37/warc/CC-MAIN-20180426125104-20180426145104-00513.warc.gz"}
https://msp.org/moscow/2019/8-2/p03.xhtml
#### Vol. 8, No. 2, 2019

Embeddings of weighted graphs in Erdős-type settings

### David M. Soukup

Vol. 8 (2019), No. 2, 117–123

##### Abstract

Many recent results in combinatorics concern the relationship between the size of a set and the number of distances determined by pairs of points in the set. One extension of this question considers configurations within the set with a specified pattern of distances. In this paper, we use graph-theoretic methods to prove that a sufficiently large set $E$ must contain at least ${C}_{G}|E|$ distinct copies of any given weighted tree $G$, where ${C}_{G}$ is a constant depending only on the graph $G$.

##### Keywords

finite point configurations, distance sets, graphs

Primary: 52C10
2019-11-15 21:52:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20134061574935913, "perplexity": 748.5588464496302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668712.57/warc/CC-MAIN-20191115195132-20191115223132-00243.warc.gz"}
https://ncatlab.org/nlab/show/cardinal+arithmetic
# Contents

## Idea

Arithmetic with cardinals: a kind of transfinite arithmetic.

### Definition

For $S$ a set, write ${|S|}$ for its cardinality. Then the standard operations in the category Set induce arithmetic operations on cardinal numbers.

For $S_1$ and $S_2$ two sets, the sum of their cardinalities is the cardinality of their disjoint union, the coproduct in $Set$:

${|S_1|} + {|S_2|} \coloneqq {|S_1 \amalg S_2|} \,.$

More generally, given any family $(S_i)_{i: I}$ of sets indexed by a set $I$, the sum of their cardinalities is the cardinality of their disjoint union:

$\sum_{i: I} {|S_i|} \coloneqq {|\coprod_{i: I} S_i|} \,.$

Likewise, the product of their cardinalities is the cardinality of their cartesian product, the product in $Set$:

${|S_1|} \, {|S_2|} \coloneqq {|S_1 \times S_2|} \,.$

More generally again, given any family $(S_i)_{i: I}$ of sets indexed by a set $I$, the product of their cardinalities is the cardinality of their cartesian product:

$\prod_{i: I} {|S_i|} \coloneqq {|\prod_{i: I} S_i|} \,.$

Also, the exponential of one cardinality raised to the power of the other is the cardinality of their function set, the exponential object in $Set$:

${|S_1|}^{|S_2|} \coloneqq {|Set(S_2,S_1)|} \,.$

In particular, we have $2^{|S|}$, which (assuming the law of excluded middle) is the cardinality of the power set $P(S)$. In constructive (but not predicative) mathematics, the cardinality of the power set is $\Omega^{|S|}$, where $\Omega$ is the cardinality of the set of truth values.

The usual way to define an ordering on cardinal numbers is that ${|S_1|} \leq {|S_2|}$ if there exists an injection from $S_1$ to $S_2$:

$({|S_1|} \leq {|S_2|}) \;:\Leftrightarrow\; (\exists (S_1 \hookrightarrow S_2)) \,.$

Classically, this is almost equivalent to the existence of a surjection $S_2 \to S_1$, except when $S_1$ is empty. Even restricting to inhabited sets, these are not equivalent conditions in constructive mathematics, so one may instead define that ${|S_1|} \leq {|S_2|}$ if there exists a subset $X$ of $S_2$ and a surjection $X \to S_1$. Another alternative is to require that $S_1$ (or $X$) be a decidable subset of $S_2$. All of these definitions are equivalent using excluded middle.

This order relation is antisymmetric (and therefore a partial order) by the Cantor–Schroeder–Bernstein theorem (proved by Cantor using the well-ordering theorem, then proved by Schroeder and Bernstein without it). That is, if $S_1 \hookrightarrow S_2$ and $S_2 \hookrightarrow S_1$ exist, then a bijection $S_1 \cong S_2$ exists. This theorem is not constructively valid, however.

The well-ordered cardinals are well-ordered by the ordering $\lt$ on ordinal numbers. Assuming the axiom of choice, this agrees with the previous order in the sense that $\kappa \leq \lambda$ iff $\kappa \lt \lambda$ or $\kappa = \lambda$. Another definition is to define that $\kappa \lt \lambda$ if $\kappa^+ \leq \lambda$, using the successor operation below.

The successor of a well-ordered cardinal $\kappa$ is the smallest well-ordered cardinal larger than $\kappa$. Note that (except for finite cardinals) this is different from $\kappa$'s successor as an ordinal number. We can also take successors of arbitrary cardinals using the operation of Hartogs number, although this won't quite have the properties that we want of a successor without the axiom of choice.

## Properties

• It is traditional to write $\aleph_0$ for the first infinite cardinal
(the cardinality of the natural numbers), $\aleph_1$ for the next (the first uncountable cardinality), and so on. In this way every cardinal (assuming choice) is labeled $\aleph_\mu$ for a unique ordinal number $\mu$, with $(\aleph_\mu)^+ = \aleph_{\mu^+}$.

• For every cardinal $\pi$, we have $2^\pi \gt \pi$ (this is sometimes called Cantor's theorem). The question of whether $2^{\aleph_0} = \aleph_{1}$ (or more generally whether $2^{\aleph_\mu} = \aleph_{\mu^+}$) is called Cantor’s continuum problem; the assertion that this is the case is called the (generalized) continuum hypothesis. It is known that the continuum hypothesis is undecidable in ZFC.

• For every transfinite cardinal $\pi$ we have (using the axiom of choice) $\pi + \pi = \pi$ and $\pi \cdot \pi = \pi$, so addition and multiplication are idempotent.
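A standard consequence of the idempotency above (together with the axiom of choice) is the absorption law, which makes most concrete cardinal sums and products immediate:

$\kappa + \lambda = \kappa \cdot \lambda = \max(\kappa, \lambda)$

whenever at least one of $\kappa, \lambda$ is infinite (and both are nonzero, in the case of the product). For example, $\aleph_3 + \aleph_1 = \aleph_3$ and $\aleph_0 \cdot 2^{\aleph_0} = 2^{\aleph_0}$. By contrast, cardinal exponentiation does not collapse this way, since $2^{\aleph_0} \gt \aleph_0$ by Cantor's theorem.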
2018-07-17 22:53:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 57, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9830288290977478, "perplexity": 186.30762461490923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589932.22/warc/CC-MAIN-20180717222930-20180718002930-00399.warc.gz"}
http://hardcocoa.com/index.php/2016/05/11/capacitance/
# Capacitance

### What is a capacitor?

A capacitor is an electrical component that stores charge. It is usually made by having two parallel plates thinly separated by an insulating material. This material is also known as a dielectric.

When a capacitor is connected in a circuit, current can flow in the circuit even though the plates are separated by the insulating dielectric. Because no charge actually crosses the dielectric, charge accumulates on the plates as the current flows, resulting in a higher potential difference across the capacitor as time passes. When the potential difference across the capacitor equals the e.m.f. of the circuit, current stops flowing.

### Physics of capacitance

Capacitance is defined as follows:

Capacitance is the ratio of the charge stored to the potential difference across a capacitor.

Mathematically,

\begin{aligned}C=\frac{Q}{V} \end{aligned}

The SI unit of capacitance is the farad (F). A capacitor of 1 farad stores a charge of 1 C when the potential difference across it is 1 V.

### Combined Capacitance

Capacitors can be connected either in series or in parallel. In a series connection, the combined capacitance can be found from the relationship

\begin{aligned} V_\text{total} &= V_1 + V_2 + ...\\\frac{Q}{C_\text{total}} &= \frac{Q}{C_1}+\frac{Q}{C_2}+...\end{aligned}

Since the charge $Q$ stored in each capacitor is the same,

\begin{aligned}\frac{1}{C_\text{total}} &= \frac{1}{C_1}+\frac{1}{C_2}+...\end{aligned}

In a parallel connection, the combined capacitance is derived as follows:

\begin{aligned} Q_\text{total}&=Q_1+Q_2+...\\C_\text{total}V&=C_1V+C_2V+...\\\end{aligned}

Since the potential difference $V$ is the same in a parallel circuit,

\begin{aligned} C_\text{total}=C_1+C_2+... \end{aligned}

## Summary

1. Capacitance is the ratio of the charge stored to the potential difference across the component.
2. The SI unit of capacitance is the farad (F).
3. The total capacitance of series-connected capacitors is \begin{aligned} \frac{1}{C_\text{total}}=\frac{1}{C_1}+\frac{1}{C_2}+... \end{aligned}
4. The total capacitance of parallel-connected capacitors is \begin{aligned} C_\text{total} = C_1+C_2+... \end{aligned}

## Review

### Question 1

The figure shows three capacitors (100 mF, 200 mF and 400 mF, as used in the working below) connected in series with a cell of e.m.f. 3.0 V. Calculate the p.d. across each capacitor.

### Solution

Since the charge stored in each capacitor is the same as the charge stored across the whole combination, we first find the charge stored in the combined capacitance and then use it to calculate the potential difference across each capacitor.

\begin{aligned} \frac{1}{C_\text{total}} &= \frac{1}{100 \times 10^{-3}}+\frac{1}{200 \times10^{-3}} + \frac{1}{400 \times 10^{-3}}\\C_\text{total}&= 0.0571\text{ F}\\ Q_\text{total}&=3.0 \times 0.0571\\&=0.1713\text{ C}\end{aligned}

Hence,

\begin{aligned} V_1&=\frac{0.1713}{100 \times 10^{-3}}\\&=1.713\text{ V}\\V_2&=\frac{0.1713}{200 \times 10^{-3}}\\&=0.857\text{ V} \\V_3&=\frac{0.1713}{400 \times 10^{-3}}\\&=0.428\text{ V}\end{aligned}

You can observe that the sum of the three p.d.s equals the e.m.f. of the cell (to rounding).
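The series and parallel rules above translate directly into a few lines of code; this sketch (my illustration, not part of the original page) reproduces the review question numerically:

```python
def series(*caps):
    """Combined capacitance of series-connected capacitors, in farads."""
    return 1 / sum(1 / c for c in caps)

def parallel(*caps):
    """Combined capacitance of parallel-connected capacitors, in farads."""
    return sum(caps)

emf = 3.0                        # e.m.f. of the cell, volts
caps = [100e-3, 200e-3, 400e-3]  # the three capacitors, farads
c_total = series(*caps)          # about 0.0571 F
q = emf * c_total                # same charge on every series capacitor
for c in caps:
    print(f"{c * 1e3:.0f} mF -> {q / c:.3f} V")
# 100 mF -> 1.714 V, 200 mF -> 0.857 V, 400 mF -> 0.429 V
```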
https://homework.cpm.org/category/ACC/textbook/acc7/chapter/cc33/lesson/cc33.2.3/problem/3-95
### Problem 3-95

3-95. Evaluate the expressions below for the given values.

1. $30−2x$ for $x=−6$

   Substitute $−6$ for $x$: $30−2(−6)$. Evaluate: $42$

2. $x^2+2x$ for $x=−3$

   Follow the steps in part (a).

3. $-\frac{1}{2}x+9$ for $x=−6$

   Follow the steps in part (a). $12$

4. $\sqrt { k }$ for $k=9$

   Follow the steps in part (a). $3$
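As a quick check (not part of the original lesson), here is a tiny Python sketch that substitutes the given values into each expression; the square root uses the standard math module.

```python
import math

# Evaluate each expression at the given value.
print(30 - 2 * (-6))         # part (a): 42
print((-3) ** 2 + 2 * (-3))  # part (b): 9 - 6 = 3
print(-0.5 * (-6) + 9)       # part (c): 12.0
print(math.sqrt(9))          # part (d): 3.0
```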
https://isabelle.in.tum.de/repos/isabelle/rev/c41954ee87cf
author wenzelm
Tue, 01 Jan 2019 21:47:27 +0100
changeset 69566 c41954ee87cf
parent 69565 1daf07b65385
child 69570 2f78e0d73a34
more antiquotations -- less LaTeX macros;

src/HOL/Analysis/Ball_Volume.thy
src/HOL/Analysis/Binary_Product_Measure.thy
src/HOL/Analysis/Bochner_Integration.thy
src/HOL/Analysis/Borel_Space.thy
src/HOL/Analysis/Brouwer_Fixpoint.thy
src/HOL/Analysis/Complete_Measure.thy
src/HOL/Analysis/Complex_Transcendental.thy
src/HOL/Analysis/Embed_Measure.thy
src/HOL/Analysis/Extended_Real_Limits.thy
src/HOL/Analysis/Function_Topology.thy
src/HOL/Analysis/Further_Topology.thy
src/HOL/Analysis/Gamma_Function.thy
src/HOL/Analysis/Lipschitz.thy
src/HOL/Analysis/Measure_Space.thy
src/HOL/Analysis/Path_Connected.thy
src/HOL/Analysis/Set_Integral.thy
src/HOL/Analysis/Sigma_Algebra.thy

--- a/src/HOL/Analysis/Ball_Volume.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Ball_Volume.thy Tue Jan 01 21:47:27 2019 +0100 @@ -3,7 +3,7 @@ Author: Manuel Eberl, TU München *) -section \<open>The Volume of an $n$-Dimensional Ball\<close> +section \<open>The Volume of an \<open>n\<close>-Dimensional Ball\<close> theory Ball_Volume imports Gamma_Function Lebesgue_Integral_Substitution @@ -25,8 +25,8 @@ text \<open> We first need the value of the following integral, which is at the core of - computing the measure of an $n+1$-dimensional ball in terms of the measure of an - $n$-dimensional one. + computing the measure of an \<open>n + 1\<close>-dimensional ball in terms of the measure of an - \<open>n\<close>-dimensional one. \<close> lemma emeasure_cball_aux_integral: "(\<integral>\<^sup>+x. 
indicator {-1..1} x * sqrt (1 - x\<^sup>2) ^ n \<partial>lborel) = --- a/src/HOL/Analysis/Binary_Product_Measure.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Binary_Product_Measure.thy Tue Jan 01 21:47:27 2019 +0100 @@ -336,7 +336,7 @@ qed -subsection%important \<open>Binary products of $\sigma$-finite emeasure spaces\<close> +subsection%important \<open>Binary products of \<open>\<sigma>\<close>-finite emeasure spaces\<close> locale%important pair_sigma_finite = M1?: sigma_finite_measure M1 + M2?: sigma_finite_measure M2 for M1 :: "'a measure" and M2 :: "'b measure" --- a/src/HOL/Analysis/Bochner_Integration.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Bochner_Integration.thy Tue Jan 01 21:47:27 2019 +0100 @@ -2100,7 +2100,7 @@ then show ?thesis using Lim_null by auto qed -text \<open>The next lemma asserts that, if a sequence of functions converges in $L^1$, then +text \<open>The next lemma asserts that, if a sequence of functions converges in \<open>L\<^sup>1\<close>, then it admits a subsequence that converges almost everywhere.\<close> lemma%important tendsto_L1_AE_subseq: --- a/src/HOL/Analysis/Borel_Space.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Borel_Space.thy Tue Jan 01 21:47:27 2019 +0100 @@ -2140,7 +2140,7 @@ shows "(\<lambda>x. \<bar>f x\<bar> powr p) \<in> borel_measurable M" unfolding powr_def by auto -text \<open>The next one is a variation around \verb+measurable_restrict_space+.\<close> +text \<open>The next one is a variation around \<open>measurable_restrict_space\<close>.\<close> lemma%unimportant measurable_restrict_space3: assumes "f \<in> measurable M N" and @@ -2152,7 +2152,7 @@ measurable_restrict_space2[of f, of "restrict_space M A", of B, of N] assms(2) space_restrict_space) qed -text \<open>The next one is a variation around \verb+measurable_piecewise_restrict+.\<close> +text \<open>The next one is a variation around \<open>measurable_piecewise_restrict\<close>.\<close> lemma%important measurable_piecewise_restrict2: assumes [measurable]: "\<And>n. A n \<in> sets M" --- a/src/HOL/Analysis/Brouwer_Fixpoint.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Brouwer_Fixpoint.thy Tue Jan 01 21:47:27 2019 +0100 @@ -407,8 +407,8 @@ retracts (ENR). We define AR and ANR by specializing the standard definitions for a set to embedding in spaces of higher dimension. -John Harrison writes: "This turns out to be sufficient (since any set in $\mathbb{R}^n$ can be -embedded as a closed subset of a convex subset of $\mathbb{R}^{n+1}$) to derive the usual +John Harrison writes: "This turns out to be sufficient (since any set in \<open>\<real>\<^sup>n\<close> can be +embedded as a closed subset of a convex subset of \<open>\<real>\<^sup>n\<^sup>+\<^sup>1\<close>) to derive the usual definitions, but we need to split them into two implications because of the lack of type quantifiers. Then ENR turns out to be equivalent to ANR plus local compactness."\<close> --- a/src/HOL/Analysis/Complete_Measure.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Complete_Measure.thy Tue Jan 01 21:47:27 2019 +0100 @@ -1085,7 +1085,7 @@ qed text \<open>The following theorem is a specialization of D.H. Fremlin, Measure Theory vol 4I (413G). 
We - only show one direction and do not use a inner regular family $K$.\<close> + only show one direction and do not use a inner regular family \<open>K\<close>.\<close> lemma (in cld_measure) borel_measurable_cld: fixes f :: "'a \<Rightarrow> real" --- a/src/HOL/Analysis/Complex_Transcendental.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Complex_Transcendental.thy Tue Jan 01 21:47:27 2019 +0100 @@ -883,9 +883,9 @@ qed text\<open>This function returns the angle of a complex number from its representation in polar coordinates. -Due to periodicity, its range is arbitrary. @{term Arg2pi} follows HOL Light in adopting the interval $[0,2\pi)$. +Due to periodicity, its range is arbitrary. @{term Arg2pi} follows HOL Light in adopting the interval \<open>[0,2\<pi>)\<close>. But we have the same periodicity issue with logarithms, and it is usual to adopt the same interval -for the complex logarithm and argument functions. Further on down, we shall define both functions for the interval $(-\pi,\pi]$. +for the complex logarithm and argument functions. Further on down, we shall define both functions for the interval \<open>(-\<pi>,\<pi>]\<close>. The present version is provided for compatibility.\<close> lemma Arg2pi_0 [simp]: "Arg2pi(0) = 0" @@ -1751,7 +1751,7 @@ subsection\<open>The Argument of a Complex Number\<close> -text\<open>Finally: it's is defined for the same interval as the complex logarithm: $(-\pi,\pi]$.\<close> +text\<open>Finally: it's is defined for the same interval as the complex logarithm: \<open>(-\<pi>,\<pi>]\<close>.\<close> definition%important Arg :: "complex \<Rightarrow> real" where "Arg z \<equiv> (if z = 0 then 0 else Im (Ln z))" --- a/src/HOL/Analysis/Embed_Measure.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Embed_Measure.thy Tue Jan 01 21:47:27 2019 +0100 @@ -14,7 +14,7 @@ text \<open> Given a measure space on some carrier set \<open>\<Omega>\<close> and a function \<open>f\<close>, we can define a push-forward - measure on the carrier set $f(\Omega)$ whose \<open>\<sigma>\<close>-algebra is the one generated by mapping \<open>f\<close> over + measure on the carrier set \<open>f(\<Omega>)\<close> whose \<open>\<sigma>\<close>-algebra is the one generated by mapping \<open>f\<close> over the original sigma algebra. This is useful e.\,g.\ when \<open>f\<close> is injective, i.\,e.\ it is some kind of tagging'' function. --- a/src/HOL/Analysis/Extended_Real_Limits.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Extended_Real_Limits.thy Tue Jan 01 21:47:27 2019 +0100 @@ -396,11 +396,11 @@ -text \<open>The next few lemmas remove an unnecessary assumption in \verb+tendsto_add_ereal+, culminating -is continuous on ereal times ereal, except at $(-\infty, \infty)$ and $(\infty, -\infty)$. +text \<open>The next few lemmas remove an unnecessary assumption in \<open>tendsto_add_ereal\<close>, culminating +is continuous on ereal times ereal, except at \<open>(-\<infinity>, \<infinity>)\<close> and \<open>(\<infinity>, -\<infinity>)\<close>. 
It is much more convenient in many situations, see for instance the proof of -\verb+tendsto_sum_ereal+ below.\<close> +\<open>tendsto_sum_ereal\<close> below.\<close> fixes y :: ereal @@ -437,7 +437,7 @@ qed text\<open>One would like to deduce the next lemma from the previous one, but the fact -that $-(x+y)$ is in general different from $(-x) + (-y)$ in ereal creates difficulties, +that \<open>- (x + y)\<close> is in general different from \<open>(- x) + (- y)\<close> in ereal creates difficulties, so it is more efficient to copy the previous proof.\<close> @@ -503,8 +503,8 @@ ultimately show ?thesis by simp qed -text \<open>The next lemma says that the addition is continuous on ereal, except at -the pairs $(-\infty, \infty)$ and $(\infty, -\infty)$.\<close> +text \<open>The next lemma says that the addition is continuous on \<open>ereal\<close>, except at +the pairs \<open>(-\<infinity>, \<infinity>)\<close> and \<open>(\<infinity>, -\<infinity>)\<close>.\<close> fixes x y :: ereal @@ -528,7 +528,7 @@ subsubsection%important \<open>Continuity of multiplication\<close> text \<open>In the same way as for addition, we prove that the multiplication is continuous on -ereal times ereal, except at $(\infty, 0)$ and $(-\infty, 0)$ and $(0, \infty)$ and $(0, -\infty)$, +ereal times ereal, except at \<open>(\<infinity>, 0)\<close> and \<open>(-\<infinity>, 0)\<close> and \<open>(0, \<infinity>)\<close> and \<open>(0, -\<infinity>)\<close>, starting with specific situations.\<close> lemma%important tendsto_mult_real_ereal: @@ -922,7 +922,7 @@ ultimately show ?thesis using MInf by auto -text \<open>the next one is copied from \verb+tendsto_sum+.\<close> +text \<open>the next one is copied from \<open>tendsto_sum\<close>.\<close> lemma tendsto_sum_ereal [tendsto_intros]: fixes f :: "'a \<Rightarrow> 'b \<Rightarrow> ereal" assumes "\<And>i. i \<in> S \<Longrightarrow> (f i \<longlongrightarrow> a i) F" @@ -1476,8 +1476,8 @@ qed text \<open>The following statement about limsups is reduced to a statement about limits using -subsequences thanks to \verb+limsup_subseq_lim+. The statement for limits follows for instance from +subsequences thanks to \<open>limsup_subseq_lim\<close>. The statement for limits follows for instance from fixes u v::"nat \<Rightarrow> ereal" @@ -1521,7 +1521,7 @@ then show ?thesis unfolding w_def by simp qed -text \<open>There is an asymmetry between liminfs and limsups in ereal, as $\infty + (-\infty) = \infty$. +text \<open>There is an asymmetry between liminfs and limsups in \<open>ereal\<close>, as \<open>\<infinity> + (-\<infinity>) = \<infinity>\<close>. This explains why there are more assumptions in the next lemma dealing with liminfs that in the --- a/src/HOL/Analysis/Function_Topology.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Function_Topology.thy Tue Jan 01 21:47:27 2019 +0100 @@ -17,14 +17,14 @@ to each factor is continuous. To form a product of objects in Isabelle/HOL, all these objects should be subsets of a common type -'a. The product is then @{term "Pi\<^sub>E I X"}, the set of elements from 'i to 'a such that the $i$-th -coordinate belongs to $X\;i$ for all $i \in I$. +'a. The product is then @{term "Pi\<^sub>E I X"}, the set of elements from \<open>'i\<close> to \<open>'a\<close> such that the \<open>i\<close>-th +coordinate belongs to \<open>X i\<close> for all \<open>i \<in> I\<close>. Hence, to form a product of topological spaces, all these spaces should be subsets of a common type. 
This means that type classes can not be used to define such a product if one wants to take the product of different topological spaces (as the type 'a can only be given one structure of topological space using type classes). On the other hand, one can define different topologies (as -introduced in \verb+thy+) on one type, and these topologies do not need to +introduced in \<open>thy\<close>) on one type, and these topologies do not need to share the same maximal open set. Hence, one can form a product of topologies in this sense, and this works well. The big caveat is that it does not interact well with the main body of topology in Isabelle/HOL defined in terms of type classes... For instance, continuity of maps @@ -41,25 +41,24 @@ probably too naive here). Here is an example of a reformulation using topologies. Let -\begin{verbatim} -continuous_on_topo T1 T2 f = ((\<forall> U. openin T2 U \<longrightarrow> openin T1 (f-U \<inter> topspace(T1))) - \<and> (f(topspace T1) \<subseteq> (topspace T2))) -\end{verbatim} -be the natural continuity definition of a map from the topology $T1$ to the topology $T2$. Then -the current \verb+continuous_on+ (with type classes) can be redefined as -\begin{verbatim} -continuous_on s f = continuous_on_topo (subtopology euclidean s) (topology euclidean) f -\end{verbatim} +@{text [display] +\<open>continuous_on_topo T1 T2 f = + ((\<forall> U. openin T2 U \<longrightarrow> openin T1 (f-U \<inter> topspace(T1))) + \<and> (f(topspace T1) \<subseteq> (topspace T2)))\<close>} +be the natural continuity definition of a map from the topology \<open>T1\<close> to the topology \<open>T2\<close>. Then +the current \<open>continuous_on\<close> (with type classes) can be redefined as +@{text [display] \<open>continuous_on s f = + continuous_on_topo (subtopology euclidean s) (topology euclidean) f\<close>} -In fact, I need \verb+continuous_on_topo+ to express the continuity of the projection on subfactors -for the product topology, in Lemma~\verb+continuous_on_restrict_product_topology+, and I show -the above equivalence in Lemma~\verb+continuous_on_continuous_on_topo+. +In fact, I need \<open>continuous_on_topo\<close> to express the continuity of the projection on subfactors +for the product topology, in Lemma~\<open>continuous_on_restrict_product_topology\<close>, and I show +the above equivalence in Lemma~\<open>continuous_on_continuous_on_topo\<close>. I only develop the basics of the product topology in this theory. The most important missing piece is Tychonov theorem, stating that a product of compact spaces is always compact for the product topology, even when the product is not finite (or even countable). -I realized afterwards that this theory has a lot in common with \verb+Fin_Map.thy+. +I realized afterwards that this theory has a lot in common with \<^file>\<open>~~/src/HOL/Library/Finite_Map.thy\<close>. \<close> subsection%important \<open>Topology without type classes\<close> @@ -67,15 +66,15 @@ subsubsection%important \<open>The topology generated by some (open) subsets\<close> text \<open>In the definition below of a generated topology, the \<open>Empty\<close> case is not necessary, -as it follows from \<open>UN\<close> taking for $K$ the empty set. However, it is convenient to have, +as it follows from \<open>UN\<close> taking for \<open>K\<close> the empty set. However, it is convenient to have, and is never a problem in proofs, so I prefer to write it down explicitly. 
-We do not require UNIV to be an open set, as this will not be the case in applications. (We are -thinking of a topology on a subset of UNIV, the remaining part of UNIV being irrelevant.)\<close> +We do not require \<open>UNIV\<close> to be an open set, as this will not be the case in applications. (We are +thinking of a topology on a subset of \<open>UNIV\<close>, the remaining part of \<open>UNIV\<close> being irrelevant.)\<close> inductive generate_topology_on for S where -Empty: "generate_topology_on S {}" -|Int: "generate_topology_on S a \<Longrightarrow> generate_topology_on S b \<Longrightarrow> generate_topology_on S (a \<inter> b)" + Empty: "generate_topology_on S {}" +| Int: "generate_topology_on S a \<Longrightarrow> generate_topology_on S b \<Longrightarrow> generate_topology_on S (a \<inter> b)" | UN: "(\<And>k. k \<in> K \<Longrightarrow> generate_topology_on S k) \<Longrightarrow> generate_topology_on S (\<Union>K)" | Basis: "s \<in> S \<Longrightarrow> generate_topology_on S s" @@ -83,8 +82,8 @@ "istopology (generate_topology_on S)" unfolding istopology_def by (auto intro: generate_topology_on.intros) -text \<open>The basic property of the topology generated by a set $S$ is that it is the -smallest topology containing all the elements of $S$:\<close> +text \<open>The basic property of the topology generated by a set \<open>S\<close> is that it is the +smallest topology containing all the elements of \<open>S\<close>:\<close> lemma%unimportant generate_topology_on_coarsest: assumes "istopology T" @@ -344,8 +343,8 @@ we will need it to define the strong operator topology on the space of continuous linear operators, by pulling back the product topology on the space of all functions.\<close> -text \<open>\verb+pullback_topology A f T+ is the pullback of the topology $T$ by the map $f$ on -the set $A$.\<close> +text \<open>\<open>pullback_topology A f T\<close> is the pullback of the topology \<open>T\<close> by the map \<open>f\<close> on +the set \<open>A\<close>.\<close> definition%important pullback_topology::"('a set) \<Rightarrow> ('a \<Rightarrow> 'b) \<Rightarrow> ('b topology) \<Rightarrow> ('a topology)" where "pullback_topology A f T = topology (\<lambda>S. \<exists>U. openin T U \<and> S = f-U \<inter> A)" @@ -434,7 +433,7 @@ set along one single coordinate, and the whole space along other coordinates. In fact, this is only equivalent for nonempty products, but for the empty product the first formulation is better (the second one gives an empty product space, while an empty product should have exactly one -point, equal to \verb+undefined+ along all coordinates. +point, equal to \<open>undefined\<close> along all coordinates. So, we use the first formulation, which moreover seems to give rise to more straightforward proofs. \<close> @@ -970,7 +969,7 @@ "\<And>x i. open (A x i)" "\<And>x S. open S \<Longrightarrow> x \<in> S \<Longrightarrow> (\<exists>i. A x i \<subseteq> S)" by metis - text \<open>$B i$ is a countable basis of neighborhoods of $x_i$.\<close> + text \<open>\<open>B i\<close> is a countable basis of neighborhoods of \<open>x\<^sub>i\<close>.\<close> define B where "B = (\<lambda>i. (A (x i))UNIV \<union> {UNIV})" have "countable (B i)" for i unfolding B_def by auto @@ -1119,13 +1118,13 @@ subsection%important \<open>The strong operator topology on continuous linear operators\<close> (* FIX ME mv*) -text \<open>Let 'a and 'b be two normed real vector spaces. 
Then the space of linear continuous -operators from 'a to 'b has a canonical norm, and therefore a canonical corresponding topology -(the type classes instantiation are given in \verb+Bounded_Linear_Function.thy+). +text \<open>Let \<open>'a\<close> and \<open>'b\<close> be two normed real vector spaces. Then the space of linear continuous +operators from \<open>'a\<close> to \<open>'b\<close> has a canonical norm, and therefore a canonical corresponding topology +(the type classes instantiation are given in \<^file>\<open>Bounded_Linear_Function.thy\<close>). -However, there is another topology on this space, the strong operator topology, where $T_n$ tends to -$T$ iff, for all $x$ in 'a, then $T_n x$ tends to $T x$. This is precisely the product topology -where the target space is endowed with the norm topology. It is especially useful when 'b is the set +However, there is another topology on this space, the strong operator topology, where \<open>T\<^sub>n\<close> tends to +\<open>T\<close> iff, for all \<open>x\<close> in \<open>'a\<close>, then \<open>T\<^sub>n x\<close> tends to \<open>T x\<close>. This is precisely the product topology +where the target space is endowed with the norm topology. It is especially useful when \<open>'b\<close> is the set of real numbers, since then this topology is compact. We can not implement it using type classes as there is already a topology, but at least we @@ -1202,12 +1201,12 @@ text \<open>In general, the product topology is not metrizable, unless the index set is countable. When the index set is countable, essentially any (convergent) combination of the metrics on the -factors will do. We use below the simplest one, based on $L^1$, but $L^2$ would also work, +factors will do. We use below the simplest one, based on \<open>L\<^sup>1\<close>, but \<open>L\<^sup>2\<close> would also work, for instance. What is not completely trivial is that the distance thus defined induces the same topology as the product topology. This is what we have to prove to show that we have an instance -of \verb+metric_space+. +of \<^class>\<open>metric_space\<close>. The proofs below would work verbatim for general countable products of metric spaces. However, since distances are only implemented in terms of type classes, we only develop the theory @@ -1476,10 +1475,10 @@ by simp next text\<open>Finally, we show that the topology generated by the distance and the product - topology coincide. This is essentially contained in Lemma \verb+fun_open_ball_aux+, + topology coincide. This is essentially contained in Lemma \<open>fun_open_ball_aux\<close>, except that the condition to prove is expressed with filters. To deal with this, - we copy some mumbo jumbo from Lemma \verb+eventually_uniformity_metric+ in - \verb+Real_Vector_Spaces.thy+\<close> + we copy some mumbo jumbo from Lemma \<open>eventually_uniformity_metric\<close> in + \<^file>\<open>~~/src/HOL/Real_Vector_Spaces.thy\<close>\<close> fix U::"('a \<Rightarrow> 'b) set" have "eventually P uniformity \<longleftrightarrow> (\<exists>e>0. \<forall>x (y::('a \<Rightarrow> 'b)). dist x y < e \<longrightarrow> P (x, y))" for P unfolding uniformity_fun_def apply (subst eventually_INF_base) @@ -1581,7 +1580,7 @@ text \<open>There are two natural sigma-algebras on a product space: the borel sigma algebra, generated by open sets in the product, and the product sigma algebra, countably generated by products of measurable sets along finitely many coordinates. 
The second one is defined and studied -in \verb+Finite_Product_Measure.thy+. +in \<^file>\<open>Finite_Product_Measure.thy\<close>. These sigma-algebra share a lot of natural properties (measurability of coordinates, for instance), but there is a fundamental difference: open sets are generated by arbitrary unions, not only --- a/src/HOL/Analysis/Further_Topology.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Further_Topology.thy Tue Jan 01 21:47:27 2019 +0100 @@ -4188,7 +4188,7 @@ subsection%important\<open>The "Borsukian" property of sets\<close> -text\<open>This doesn't have a standard name. Kuratowski uses contractible with respect to $[S^1]$'' +text\<open>This doesn't have a standard name. Kuratowski uses contractible with respect to \<open>[S\<^sup>1]\<close>'' while Whyburn uses property b''. It's closely related to unicoherence.\<close> definition%important Borsukian where --- a/src/HOL/Analysis/Gamma_Function.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Gamma_Function.thy Tue Jan 01 21:47:27 2019 +0100 @@ -1799,10 +1799,10 @@ text \<open> The following is a proof of the Bohr--Mollerup theorem, which states that - any log-convex function $G$ on the positive reals that fulfils $G(1) = 1$ and - satisfies the functional equation $G(x+1) = x G(x)$ must be equal to the + any log-convex function \<open>G\<close> on the positive reals that fulfils \<open>G(1) = 1\<close> and + satisfies the functional equation \<open>G(x + 1) = x G(x)\<close> must be equal to the Gamma function. - In principle, if $G$ is a holomorphic complex function, one could then extend + In principle, if \<open>G\<close> is a holomorphic complex function, one could then extend this from the positive reals to the entire complex plane (minus the non-positive integers, where the Gamma function is not defined). \<close> --- a/src/HOL/Analysis/Lipschitz.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Lipschitz.thy Tue Jan 01 21:47:27 2019 +0100 @@ -423,9 +423,9 @@ text \<open>We deduce that if a function is Lipschitz on finitely many closed sets on the real line, then it is Lipschitz on any interval contained in their union. The difficulty in the proof is to show -that any point $z$ in this interval (except the maximum) has a point arbitrarily close to it on its +that any point \<open>z\<close> in this interval (except the maximum) has a point arbitrarily close to it on its right which is contained in a common initial closed set. Otherwise, we show that there is a small -interval $(z, T)$ which does not intersect any of the initial closed sets, a contradiction.\<close> +interval \<open>(z, T)\<close> which does not intersect any of the initial closed sets, a contradiction.\<close> proposition lipschitz_on_closed_Union: assumes "\<And>i. i \<in> I \<Longrightarrow> lipschitz_on M (U i) f" --- a/src/HOL/Analysis/Measure_Space.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Measure_Space.thy Tue Jan 01 21:47:27 2019 +0100 @@ -1093,7 +1093,7 @@ qed text \<open>The next lemma is convenient to combine with a lemma whose conclusion is of the -form \<open>AE x in M. P x = Q x\<close>: for such a lemma, there is no \verb+[symmetric]+ variant, +form \<open>AE x in M. 
P x = Q x\<close>: for such a lemma, there is no \<open>[symmetric]\<close> variant, but using \<open>AE_symmetric[OF...]\<close> will replace it.\<close> (* depricated replace by laws about eventually *) @@ -3522,7 +3522,7 @@ qed qed -subsubsection%unimportant \<open>Supremum of a set of $\sigma$-algebras\<close> +subsubsection%unimportant \<open>Supremum of a set of \<open>\<sigma>\<close>-algebras\<close> lemma space_Sup_eq_UN: "space (Sup M) = (\<Union>x\<in>M. space x)" unfolding Sup_measure_def --- a/src/HOL/Analysis/Path_Connected.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Path_Connected.thy Tue Jan 01 21:47:27 2019 +0100 @@ -3401,9 +3401,9 @@ (\<forall>x. h(1, x) = q x) \<and> (\<forall>t \<in> {0..1}. P(\<lambda>x. h(t, x))))" -text\<open>$p, q$ are functions $X \to Y$, and the property $P$ restricts all intermediate maps. -We often just want to require that $P$ fixes some subset, but to include the case of a loop homotopy, -it is convenient to have a general property $P$.\<close> +text\<open>\<open>p\<close>, \<open>q\<close> are functions \<open>X \<rightarrow> Y\<close>, and the property \<open>P\<close> restricts all intermediate maps. +We often just want to require that \<open>P\<close> fixes some subset, but to include the case of a loop homotopy, +it is convenient to have a general property \<open>P\<close>.\<close> text \<open>We often want to just localize the ending function equality or whatever.\<close> text%important \<open>%whitespace\<close> --- a/src/HOL/Analysis/Set_Integral.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Set_Integral.thy Tue Jan 01 21:47:27 2019 +0100 @@ -817,7 +817,7 @@ then show ?thesis using * by auto qed -text \<open>The next lemma shows that $L^1$ convergence of a sequence of functions follows from almost +text \<open>The next lemma shows that \<open>L\<^sup>1\<close> convergence of a sequence of functions follows from almost everywhere convergence and the weaker condition of the convergence of the integrated norms (or even just the nontrivial inequality about them). Useful in a lot of contexts! This statement (or its variations) are known as Scheffe lemma. --- a/src/HOL/Analysis/Sigma_Algebra.thy Tue Jan 01 20:57:54 2019 +0100 +++ b/src/HOL/Analysis/Sigma_Algebra.thy Tue Jan 01 21:47:27 2019 +0100 @@ -1431,7 +1431,7 @@ subsubsection \<open>Induction rule for intersection-stable generators\<close> -text%important \<open>The reason to introduce Dynkin-systems is the following induction rules for $\sigma$-algebras +text%important \<open>The reason to introduce Dynkin-systems is the following induction rules for \<open>\<sigma>\<close>-algebras generated by a generator closed under intersection.\<close> proposition sigma_sets_induct_disjoint[consumes 3, case_names basic empty compl union]: @@ -2022,7 +2022,7 @@ using emeasure_extend_measure[OF M _ _ ms(2,3), of "(i,j)"] eq ms(1) \<open>I i j\<close> by (auto simp: subset_eq) -subsection \<open>The smallest $\sigma$-algebra regarding a function\<close> +subsection \<open>The smallest \<open>\<sigma>\<close>-algebra regarding a function\<close> definition%important vimage_algebra :: "'a set \<Rightarrow> ('a \<Rightarrow> 'b) \<Rightarrow> 'b measure \<Rightarrow> 'a measure" where "vimage_algebra X f M = sigma X {f - A \<inter> X | A. A \<in> sets M}"`
http://mathhelpforum.com/advanced-algebra/212836-prime-maximal-ideals.html
# Thread: Prime and Maximal Ideals

1. ## Prime and Maximal Ideals

Find all prime ideals and all maximal ideals of $\mathbb{Z}_{12}$.

2. ## Re: Prime and Maximal Ideals

well, first, ask yourself this: what ARE the ideals of Z12? remember an ideal has to be a subgroup of the additive group. start looking there.

3. ## Re: Prime and Maximal Ideals

I definitely tried to take up your challenge, but apart from an exhaustive trial of all possible sets of elements of $\mathbb{Z}_{12}$ I did not discover a process (or a general theorem) that would yield all the ideals of $\mathbb{Z}_{12}$.

I did notice that Fraleigh (Abstract Algebra) has the following exercise on page 243 [Exercise 3 of Exercises 26]: "Find all ideals N of $\mathbb{Z}_{12}$". In the Answers to Odd Numbered Exercises (page 502) Fraleigh gives the answer to this exercise as follows:

<0> = {0}
<1> = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11} = $\mathbb{Z}_{12}$
<2> = {0, 2, 4, 6, 8, 10}
<3> = {0, 3, 6, 9}
<4> = {0, 4, 8}
<6> = {0, 6}

These certainly seem to be ideals of $\mathbb{Z}_{12}$, but how did Fraleigh determine these ideals, and further, how do we know they are the only sets which are, in fact, ideals? Surely this set of ideals is not determined by exhaustive trial of all the subsets of $\mathbb{Z}_{12}$! Why are they the only ideals? Do we have a theorem governing the nature of the ideals of $\mathbb{Z}_{n}$?

I am assuming I apply the requisite tests to the above ideals to determine whether they are (1) prime, (2) maximal. Is that correct?

Peter

4. ## Re: Prime and Maximal Ideals

Z12 is a CYCLIC group. as such, all its subgroups are cyclic, so it suffices to examine <a> for all elements a. furthermore, for a cyclic group G, there exists exactly ONE subgroup of order d for each divisor d of the order of G.

Z12 is of order 12, and 12 has divisors 1, 2, 3, 4, 6 and 12. for a cyclic group <a>, the order of <a^k> is |a|/gcd(k,|a|). for additive groups, it is common to write a^k additively as ka. Z12 has generator 1. looking at every element we see that:

<0> has order 12/gcd(0,12) = 12/12 = 1
<1> has order 12/gcd(1,12) = 12/1 = 12
<2> has order 12/gcd(2,12) = 12/2 = 6
<3> has order 12/gcd(3,12) = 12/3 = 4
<4> has order 12/gcd(4,12) = 12/4 = 3
<6> has order 12/gcd(6,12) = 12/6 = 2

note we have not listed all the elements of Z12, although we have covered every possible divisor. so the elements we have not listed must lead to duplicates of the subgroups listed above. indeed: <5> = <7> = <11> = <1>, since for all of these numbers k, gcd(k,12) = 1. also: <10> = <2>, since gcd(10,12) = 2 (we have only ONE subgroup of order 6). <8> = <4>, because gcd(8,12) = 4. <9> = <3>, because gcd(9,12) = 3.

**************

this shows that for Zn, all the possible subgroups of (Zn,+) are of the form <d>, where d is a divisor of n (it is customary to write <n> as {0}). are these ideals of the RING Zn?

suppose we have an integer d, where d|n (so n = kd), and we consider the multiples of d mod n. clearly the set {0,d,2d,...,(k-1)d} is a cyclic subgroup of the additive group. now suppose a is ANY element of Zn. using a somewhat standard abuse of notation, we will identify a with the integer in the range 0 ≤ a ≤ n-1 it is congruent to. what we now have to do is show that a(td) (mod n) is in <d>, for 0 ≤ t ≤ k-1. however:

a(td) (mod n) = (at)d (mod n) = (at (mod n))(d (mod n)) = (at (mod n))d.

when we reduce at (mod n), we obtain an integer between 0 and n-1, say b.
next we can reduce b (mod k) to obtain c, where c is between 0 and k-1; then bd = cd (mod n), showing that a(td) is indeed in <d>.

let me illustrate this with n = 12, and d = 3. so <d> = <3> = {0,3,6,9}. our "k" here is 4. suppose we take a = 5, and t = 2. so we need to show that 5(2*3) is in <3>. first we see that 5(2*3) (mod 12) = (5*2)3 (mod 12) = 10*3 (mod 12). now 10*3 = (2*4 + 2)*3 = (2*4)3 + 2*3 = 2(4*3) + 2*3 = 2*12 + 6 = 0 + 6 = 6 (mod 12) (this is why we can reduce b (mod k) instead of (mod n): because kd = n = 0 (mod n)). or, the "long way": 5(2*3) = 5*6 = 30 = 6 + 24 = 6 (mod 12). that's part of the beauty of "modular arithmetic": you can reduce (mod 12) before evaluating sums and products, or after; you get the same answer either way.

this shows that for the ring Zn, the subgroup <d> for any divisor d is also an ideal, and that these are the only POSSIBLE ideals. in other words, the ideals of Zn are all principal. (unfortunately, Zn isn't a PID, since it has zero-divisors if n is not prime, so it isn't, in general, an integral domain.)

this simple sub-structure of the ring Zn lets us see something very clearly: an ideal (d) of Zn is maximal if and only if d is prime. for example, in Z12: 4 is not prime, it has the divisor 2. this means that (4) is contained in (2), which is easy to see. thus the maximal ideals of Z12 are (2) and (3). it's pretty easy to see that (2) is maximal: if we add any odd integer k and consider the ideal (k,2), we clearly get ALL the odd elements of Z12 as well, so (k,2) is all of Z12. perhaps it's not so obvious that (3) is maximal. but gcd(2,3) = 1, right? this means that, for example, we have 3*3 - 4*2 = 1, so that 1 is in (2,3), so the ideal (2,3) = Z12. in fact, if we choose any k NOT in (3), we will have gcd(k,3) = 1, since all the multiples of 3 (and so the elements that have 3 as a factor) are already in (3). the same reasoning holds for any prime p where p|n. this is the reason why ideals such that I+J = R are called co-prime ideals: we are generalizing this basic example.

so that takes care of the maximal ideals. what about the prime ideals? for Zn i claim: the maximal ideals ARE the prime ideals.

consider (p), where p is a prime dividing n. this is a maximal ideal, as we have seen (for if (p) is contained in (a), for some a|n in Zn, then a|p, so either a = p, so that (a) = (p), or a = 1, so that (a) = Zn). now if ab is in (p), we have ab = kp (mod n), that is, ab = kp + tn. suppose that a is not in (p). this means that a = sp + u, where u is between 1 and p-1, and s is between 0 and n/p. let's write b = xp + y, where x is between 0 and n/p, and y is between 0 and p-1. so:

ab = (sp + u)(xp + y) = (sxp + sy + ux)p + uy = kp + tn

hence: tn - uy = (sxp + sy + ux - k)p <-- this is an equation of integers, not congruence classes.

if we take both sides mod p (remember, p divides n), we get: uy = 0 (mod p), that is, p divides uy. by our choice of u, p does NOT divide u, so p MUST divide y. since y < p, the only choice left is y = 0, that is, b = xp, so that b is in (p). hence (p) is a prime ideal.

i admit this is somewhat long-winded, but i want to make it clear how the structure of the ordinary integers is "partially preserved" in Zn. so the maximal ideals in Zn are prime. let's go the other way: suppose (d) is a prime ideal of Zn. if d is composite, we have d = ab, for 1 < a,b < d. so then ab = d is in (d), but neither a nor b can be in (d), since d is the smallest (non-zero) element of (d) = {0,d,2d,...,(n/d - 1)d}.
so if (d) is prime, d must be prime, and as we have seen, (p) is maximal for a prime p dividing n. this shows the prime ideals are also the ideals (p) for a prime p|n, which are maximal.

if you feel up to it, find the maximal ideals of Z60 now.
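Not part of the thread: here is a small Python sketch that enumerates the ideals <d> of Z_n (one per divisor d of n, as explained above) and flags the maximal/prime ideals (p) for primes p dividing n; running it with n = 60 answers the closing exercise.

```python
from math import gcd  # imported for clarity; divisibility below uses % directly

def ideals(n):
    """All ideals of Z_n: the cyclic subgroups <d> for each divisor d of n."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return {d: sorted((d * t) % n for t in range(n // d)) for d in divisors}

def is_prime(p):
    return p > 1 and all(p % q for q in range(2, int(p ** 0.5) + 1))

n = 60
for d, ideal in ideals(n).items():
    tag = "  <-- maximal and prime" if is_prime(d) else ""
    print(f"<{d}> = {ideal}{tag}")
```

For n = 60 the flagged ideals are (2), (3) and (5), matching the claim that the maximal (equivalently, prime) ideals of Z_n are exactly the (p) for primes p | n.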
http://www.cs.cornell.edu/Info/Misc/LaTex-Tutorial/Solutions/down-done.html
# Justification

Here are two ways to format this to have a flushleft (raggedright) margin.

\documentstyle[12pt]{article}
\begin{document}
\begin{flushleft}
In another moment down went Alice after it, never once considering how in the world she was to get out again. The rabbit-hole went straight on like a tunnel for some way, and then dipped suddenly down, so suddenly that Alice had not a moment to think about stopping herself before she found herself falling down a very deep well.
\end{flushleft}
\end{document}

Another way to do this is to use the \raggedright declaration. Notice that the \raggedright declaration remains in effect for as long as the environment it appears in is in effect; ragged-right justification is turned off as soon as an \end{...} command closes that environment.

\documentstyle[12pt]{article}
\begin{document}
\raggedright
In another moment down went Alice after it, never once considering how in the world she was to get out again. The rabbit-hole went straight on like a tunnel for some way, and then dipped suddenly down, so suddenly that Alice had not a moment to think about stopping herself before she found herself falling down a very deep well.
\end{document}
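As a small extension (not from the original tutorial): if you want ragged-right text for only part of a document, you can confine the \raggedright declaration to a group with braces; the \par before the closing brace ensures the paragraph is broken into lines while the declaration is still active.

\documentstyle[12pt]{article}
\begin{document}
{\raggedright
This paragraph has a ragged-right margin, because it lies inside the braces together with the \raggedright declaration.\par}

This paragraph lies outside the group, so it is set with full justification again.
\end{document}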
https://www.techwhiff.com/learn/synthrold-is-used-as-a-replacement-or/134288
# Synthroid is used as a replacement or supplemental therapy for diminished thyroid function

###### Question:

Synthroid is used as a replacement or supplemental therapy for diminished thyroid function. A doctor's order prescribes a dosage of 0.163 milligrams of Synthroid; how many tablets are required to provide the prescribed medication?

Select one:
a. 12225 tablets
b. 163 tablets
c. 0.0022 tablets
d. 0 tablets
e. 2 tablets
http://www.theinfolist.com/php/SummaryGet.php?FindGo=Nuclear_reactor_physics
Nuclear reactor physics

Nuclear reactor physics is the field of physics that studies and deals with the applied study and engineering applications of chain reactions to induce a controlled rate of fission in a nuclear reactor for the production of energy. [van Dam, H., van der Hagen, T. H. J. J., & Hoogenboom, J. E. (2005). Nuclear reactor physics. Retrieved from http://www.janleenkloosterman.nl/reports/ap3341.pdf]

Most nuclear reactors use a chain reaction to induce a controlled rate of nuclear fission in fissile material, releasing both energy and free neutrons. A reactor consists of an assembly of nuclear fuel (a reactor core), usually surrounded by a neutron moderator such as regular water, heavy water, graphite, or zirconium hydride, and fitted with mechanisms such as control rods which control the rate of the reaction. The physics of nuclear fission has several quirks that affect the design and behavior of nuclear reactors. This article presents a general overview of the physics of nuclear reactors and their behavior.

Criticality
In a nuclear reactor, the neutron population at any instant is a function of the rate of neutron production (due to fission processes) and the rate of neutron losses (due to non-fission absorption mechanisms and leakage from the system). When a reactor's neutron population remains steady from one generation to the next (creating as many new neutrons as are lost), the fission chain reaction is self-sustaining and the reactor's condition is referred to as "critical". When the reactor's neutron production exceeds losses, characterized by increasing power level, it is considered "supercritical", and when losses dominate, it is considered "subcritical" and exhibits decreasing power.

The "six-factor formula" is the neutron life-cycle balance equation, which includes six separate factors, the product of which is equal to the ratio of the number of neutrons in any generation to that of the previous one; this parameter is called the effective multiplication factor k, also denoted by Keff, where k = Є Lf ρ Lth f η, with Є = "fast-fission factor", Lf = "fast non-leakage factor", ρ = "resonance escape probability", Lth = "thermal non-leakage factor", f = "thermal fuel utilization factor", and η = "reproduction factor". This equation's factors are roughly in order of potential occurrence for a fission-born neutron during critical operation.

As already mentioned, k = (neutrons produced in one generation)/(neutrons produced in the previous generation). In other words, when the reactor is critical, k = 1; when the reactor is subcritical, k < 1; and when the reactor is supercritical, k > 1.

Reactivity is an expression of the departure from criticality: δk = (k − 1)/k. When the reactor is critical, δk = 0. When the reactor is subcritical, δk < 0. When the reactor is supercritical, δk > 0. Reactivity is also represented by the lowercase Greek letter rho (ρ). Reactivity is commonly expressed in decimals or percentages or pcm (per cent mille) of Δk/k. When reactivity ρ is expressed in units of the delayed neutron fraction β, the unit is called the dollar.
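As a concrete illustration (not from the article), here is a minimal Python sketch that multiplies the six factors to get k and converts it to reactivity, including reactivity in dollars. All six factor values and the delayed neutron fraction β below are illustrative assumptions, not reference data.

```python
# Six-factor formula: k = Є * Lf * ρ * Lth * f * η
# All values are illustrative placeholders.
epsilon = 1.03   # fast-fission factor
L_f     = 0.97   # fast non-leakage factor
p       = 0.75   # resonance escape probability
L_th    = 0.96   # thermal non-leakage factor
f       = 0.80   # thermal fuel utilization factor
eta     = 1.90   # reproduction factor

k = epsilon * L_f * p * L_th * f * eta
reactivity = (k - 1.0) / k   # δk = (k − 1)/k
beta = 0.0065                # assumed delayed neutron fraction (typical of U-235 fuel)

print(f"k = {k:.4f}")
print(f"reactivity = {reactivity:+.5f} ({reactivity * 1e5:+.0f} pcm)")
print(f"reactivity = {reactivity / beta:+.2f} dollars")
```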
If we write N for the number of free neutrons in a reactor core and $\tau$ for the average lifetime of each neutron (before it either escapes from the core or is absorbed by a nucleus), then the reactor will follow the differential equation (evolution equation)

$\frac{dN}{dt} = \frac{\alpha N}{\tau}$

where $\alpha$ is a constant of proportionality, and $dN/dt$ is the rate of change of the neutron count in the core. This type of differential equation describes exponential growth or exponential decay, depending on the sign of the constant $\alpha$, which is just the expected number of neutrons after one average neutron lifetime has elapsed:

$\alpha = P_{\text{impact}} \, P_{\text{fission}} \, n_{\text{avg}} - P_{\text{absorb}} - P_{\text{escape}}$

Here, $P_{\text{impact}}$ is the probability that a particular neutron will strike a fuel nucleus, $P_{\text{fission}}$ is the probability that the neutron, having struck the fuel, will cause that nucleus to undergo fission, $P_{\text{absorb}}$ is the probability that it will be absorbed by something other than fuel, and $P_{\text{escape}}$ is the probability that it will "escape" by leaving the core altogether. $n_{\text{avg}}$ is the number of neutrons produced, on average, by a fission event; it is between 2 and 3 for both 235U and 239Pu.

If $\alpha$ is positive, then the core is supercritical and the rate of neutron production will grow exponentially until some other effect stops the growth. If $\alpha$ is negative, then the core is subcritical and the number of free neutrons in the core will shrink exponentially until it reaches an equilibrium at zero (or the background level from spontaneous fission). If $\alpha$ is exactly zero, then the reactor is critical and its output does not vary in time ($dN/dt = 0$, from above).

Nuclear reactors are engineered to reduce $P_{\text{escape}}$ and $P_{\text{absorb}}$. Small, compact structures reduce the probability of direct escape by minimizing the surface area of the core, and some materials (such as graphite) can reflect some neutrons back into the core, further reducing $P_{\text{escape}}$. The probability of fission, $P_{\text{fission}}$, depends on the nuclear physics of the fuel, and is often expressed as a cross section. Reactors are usually controlled by adjusting $P_{\text{absorb}}$. Control rods made of a strongly neutron-absorbent material such as cadmium or boron can be inserted into the core: any neutron that happens to impact the control rod is lost from the chain reaction, reducing $\alpha$. $P_{\text{absorb}}$ is also controlled by the recent history of the reactor core itself (see below).
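The evolution equation above integrates to the closed form N(t) = N(0) e^(αt/τ). The short Python sketch below simply evaluates it to show the three regimes; the values of N(0), τ and α are arbitrary illustrative numbers, not data from the article.

```python
import math

def neutron_population(n0, alpha, tau, t):
    """Closed-form solution of dN/dt = alpha*N/tau: exponential growth or
    decay of the core's free-neutron count, depending on the sign of alpha."""
    return n0 * math.exp(alpha * t / tau)

n0 = 1.0e8    # illustrative initial neutron count
tau = 1.0e-4  # illustrative average neutron lifetime in seconds

for alpha, label in [(+0.001, "supercritical"), (0.0, "critical"), (-0.001, "subcritical")]:
    counts = [f"{neutron_population(n0, alpha, tau, t):.3e}" for t in (0.0, 0.5, 1.0)]
    print(label, counts)
```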
Starter sources

The mere fact that an assembly is supercritical does not guarantee that it contains any free neutrons at all. At least one neutron is required to "strike" a chain reaction, and if the spontaneous fission rate is sufficiently low it may take a long time (in 235U reactors, as long as many minutes) before a chance neutron encounter starts a chain reaction even if the reactor is supercritical. Most nuclear reactors include a "starter" neutron source that ensures there are always a few free neutrons in the reactor core, so that a chain reaction will begin immediately when the core is made critical. A common type of startup neutron source is a mixture of an alpha particle emitter such as 241Am (americium-241) with a lightweight isotope such as 9Be (beryllium-9).

The primary sources described above have to be used with fresh reactor cores. For operational reactors, secondary sources are used, most often a combination of antimony with beryllium: antimony becomes activated in the reactor and produces high-energy gamma photons, which produce photoneutrons from the beryllium.

Uranium-235 undergoes a small rate of natural spontaneous fission, so there are always some neutrons being produced even in a fully shut-down reactor. When the control rods are withdrawn and criticality is approached, the number increases because the absorption of neutrons is being progressively reduced, until at criticality the chain reaction becomes self-sustaining. Note that while a neutron source is provided in the reactor, it is not essential to start the chain reaction; its main purpose is to give a shutdown neutron population that is detectable by instruments, and so make the approach to criticality more observable. The reactor will go critical at the same control rod position whether a source is loaded or not.
Once the chain reaction is begun, the primary starter source may be removed from the core to prevent damage from the high neutron flux in the operating reactor core; the secondary sources usually remain in situ to provide a background reference level for control of criticality.

Subcritical multiplication

Even in a subcritical assembly such as a shut-down reactor core, any stray neutron that happens to be present in the core (for example from spontaneous fission of the fuel, from radioactive decay of fission products, or from a neutron source) will trigger an exponentially decaying chain reaction. Although the chain reaction is not self-sustaining, it acts as a multiplier that increases the equilibrium number of neutrons in the core. This "subcritical multiplication" effect can be used in two ways: as a probe of how close a core is to criticality, and as a way to generate fission power without the risks associated with a critical mass.

If $k$ is the neutron multiplication factor of a subcritical core and $S_0$ is the number of neutrons arriving per generation from an external source, then at the instant the source is switched on, the number of neutrons in the core is $S_0$. After one generation, these neutrons produce $k S_0$ neutrons, and the core holds a total of $k S_0 + S_0$ neutrons once the newly arriving source neutrons are counted. Similarly, after two generations the number of neutrons produced in the core is $k (k S_0 + S_0) + S_0$, and so on. This process continues, and after a long enough time the number of neutrons in the core is

$S_0 + k S_0 + k^2 S_0 + \ldots$

This series converges because, for a subcritical core, $0 < k < 1$. So the number of neutrons in the core is simply

$N = \frac{S_0}{1-k}$

The fraction $\frac{1}{1-k}$ is called the subcritical multiplication factor. Since power in a reactor is proportional to the number of neutrons present in the nuclear fuel material (material in which fission can occur), the power produced by such a subcritical core is also proportional to the subcritical multiplication factor and the external source strength.

As a measurement technique, subcritical multiplication was used during the Manhattan Project in early experiments to determine the minimum critical masses of 235U and of 239Pu. It is still used today to calibrate the controls for nuclear reactors during startup, as many effects (discussed in the following sections) can change the required control settings to achieve criticality in a reactor. As a power-generating technique, subcritical multiplication allows generation of nuclear power for fission where a critical assembly is undesirable for safety or other reasons: a subcritical assembly together with a neutron source can serve as a steady source of heat to generate power from fission.
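A quick sketch (with arbitrary illustrative values of k and S0) confirms that the generation-by-generation bookkeeping above converges to the closed form S0/(1 − k):

```python
# Sketch: subcritical multiplication as a geometric series.
# k and s0 are arbitrary illustrative values.

k, s0 = 0.95, 100.0        # multiplication factor, source neutrons per generation
n = 0.0
for _ in range(500):       # iterate N <- k*N + S0 over many generations
    n = k * n + s0
print(n, s0 / (1 - k))     # both approach 2000.0 = S0/(1-k)
```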
Including the effect of an external neutron source ("external" to the fission process, not physically external to the core), one can write a modified evolution equation:

$\frac{dN}{dt} = \frac{\alpha N}{\tau} + R_{\mathrm{ext}}$

where $R_{\mathrm{ext}}$ is the rate at which the external source injects neutrons into the core. In equilibrium, the core is not changing and $dN/dt$ is zero, so the equilibrium number of neutrons is given by:

$N = -\frac{R_{\mathrm{ext}} \, \tau}{\alpha}$

If the core is subcritical, then $\alpha$ is negative, so there is an equilibrium with a positive number of neutrons. If the core is close to criticality, then $\alpha$ is very small and thus the final number of neutrons can be made arbitrarily large.
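Continuing the earlier Euler sketch with a source term (again with made-up parameter values), the integration settles at the predicted equilibrium N = −R_ext τ/α:

```python
# Sketch: a subcritical core plus an external source settles at N = -R_ext*tau/alpha.
alpha, tau, r_ext = -0.01, 1e-3, 5000.0   # illustrative values only
n, dt = 0.0, 1e-5
for _ in range(200_000):                   # integrate dN/dt = alpha*N/tau + R_ext for 2 s
    n += dt * (alpha * n / tau + r_ext)
print(n, -r_ext * tau / alpha)             # both ~ 500.0
```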
Neutron moderators

To improve $P_{\mathrm{fission}}$ and enable a chain reaction, natural- or low-enrichment uranium-fueled reactors must include a neutron moderator that interacts with newly produced fast neutrons from fission events to reduce their kinetic energy from several MeV to thermal energies of less than one eV, making them more likely to induce fission. This is because 235U has a larger cross section for slow neutrons, and also because 238U is much less likely to absorb a thermal neutron than a freshly produced neutron from fission.

Neutron moderators are thus materials that slow down neutrons. Neutrons are most effectively slowed by colliding with the nucleus of a light atom, hydrogen being the lightest of all. To be effective, moderator materials must therefore contain light elements with atomic nuclei that tend to scatter neutrons on impact rather than absorb them. In addition to hydrogen, beryllium and carbon atoms are also suited to the job of moderating, or slowing down, neutrons.

Hydrogen moderators include water (H2O), heavy water (D2O), and zirconium hydride (ZrH2), all of which work because a hydrogen nucleus has nearly the same mass as a free neutron: neutron-H2O or neutron-ZrH2 impacts excite rotational modes of the molecules (spinning them around). Deuterium nuclei (in heavy water) absorb kinetic energy less well than do light hydrogen nuclei, but they are much less likely to absorb the impacting neutron. Water and heavy water have the additional advantage of being transparent liquids, so that, in addition to shielding and moderating a reactor core, they permit direct viewing of the core in operation and can also serve as a working fluid for heat transfer. Carbon in the form of graphite has been widely used as a moderator: it was used in Chicago Pile-1, the world's first man-made critical assembly, and was commonplace in early reactor designs, including the Soviet RBMK nuclear power plants such as the Chernobyl plant.

Moderators and reactor design

The amount and nature of neutron moderation affects reactor controllability and hence safety. Because moderators both slow and absorb neutrons, there is an optimum amount of moderator to include in a given geometry of reactor core: less moderation reduces the effectiveness by reducing the $P_{\mathrm{fission}}$ term in the evolution equation, and more moderation reduces the effectiveness by increasing the $P_{\mathrm{absorb}}$ term.

Most moderators become less effective with increasing temperature, so "under-moderated" reactors are stable against changes in temperature in the reactor core: if the core overheats, the quality of the moderator is reduced and the reaction tends to slow down (there is a "negative temperature coefficient" in the reactivity of the core). Water is an extreme case: in extreme heat, it can boil, producing effective voids in the reactor core without destroying the physical structure of the core; this tends to shut down the reaction and reduce the possibility of a fuel meltdown. "Over-moderated" reactors are unstable against changes in temperature (there is a "positive temperature coefficient" in the reactivity of the core), and so are less inherently safe than under-moderated cores.

Some reactors use a combination of moderator materials. For example, TRIGA-type research reactors use ZrH2 moderator mixed with the 235U fuel, an H2O-filled core, and C (graphite) moderator and reflector blocks around the periphery of the core.

Delayed neutrons and controllability

Fission reactions and subsequent neutron escape happen very quickly; this is important for nuclear weapons, where the objective is to make a nuclear pit release as much energy as possible before it physically explodes. Most neutrons emitted by fission events are prompt: they are emitted effectively instantaneously.
Once emitted, the average neutron lifetime ($\tau$) in a typical core is on the order of a millisecond, so if the exponential factor $\alpha$ is as small as 0.01, then in one second the reactor power will vary by a factor of (1 + 0.01)^1000, or more than ten thousand. Nuclear weapons are engineered to maximize the power growth rate, with lifetimes well under a millisecond and exponential factors close to 2; but such rapid variation would render it practically impossible to control the reaction rates in a nuclear reactor.

Fortunately, the effective neutron lifetime is much longer than the average lifetime of a single neutron in the core. About 0.65% of the neutrons produced by 235U fission, and about 0.20% of the neutrons produced by 239Pu fission, are not produced immediately, but rather are emitted from an excited nucleus after a further decay step. In this step, further radioactive decay of some of the fission products (almost always negative beta decay) is followed by immediate neutron emission from the excited daughter product, with an average lifetime of the beta decay (and thus of the neutron emission) of about 15 seconds. These so-called delayed neutrons increase the effective average lifetime of neutrons in the core to nearly 0.1 seconds, so that a core with $\alpha$ of 0.01 would increase in one second by only a factor of (1 + 0.01)^10, or about 1.1: a 10% increase. This is a controllable rate of change.

Most nuclear reactors are hence operated in a "prompt subcritical", "delayed critical" condition: the prompt neutrons alone are not sufficient to sustain a chain reaction, but the delayed neutrons make up the small difference required to keep the reaction going. This has effects on how reactors are controlled: when a small amount of control rod is slid into or out of the reactor core, the power level changes at first very rapidly due to "prompt subcritical multiplication" and then more gradually, following the exponential growth or decay curve of the delayed critical reaction. Furthermore, increases in reactor power can be performed at any desired rate simply by pulling out a sufficient length of control rod.
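The two growth factors quoted above can be reproduced in one short sketch:

```python
# Sketch: per-second power growth factor for alpha = 0.01, as quoted above.
alpha = 0.01
prompt = (1 + alpha) ** 1000   # tau ~ 1 ms  -> ~1000 generations per second
delayed = (1 + alpha) ** 10    # tau ~ 0.1 s -> ~10 generations per second
print(prompt, delayed)         # ~20959 ("more than ten thousand") vs ~1.105
```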
However, without addition of a neutron poison or active neutron absorber, decreases in fission rate are limited in speed, because even if the reactor is taken deeply subcritical to stop prompt fission neutron production, delayed neutrons are produced after ordinary beta decay of fission products already in place, and this decay production of neutrons cannot be changed. The rate of change of reactor power is determined by the reactor period $T$, which is related to the reactivity $\rho$ through the Inhour equation.

Kinetics

The kinetics of the reactor is described by the balance equations of neutrons and nuclei (fissile material and fission products).

Reactor poisons

Any nuclide that strongly absorbs neutrons is called a reactor poison, because it tends to shut down (poison) an ongoing fission chain reaction. Some reactor poisons are deliberately inserted into fission reactor cores to control the reaction; boron or cadmium control rods are the best example. Many reactor poisons are produced by the fission process itself, and buildup of neutron-absorbing fission products affects both the fuel economics and the controllability of nuclear reactors.

Long-lived poisons and fuel reprocessing

In practice, buildup of reactor poisons in nuclear fuel is what determines the lifetime of nuclear fuel in a reactor: long before all possible fissions have taken place, buildup of long-lived neutron-absorbing fission products damps out the chain reaction. This is the reason that nuclear reprocessing is a useful activity: spent nuclear fuel contains about 96% of the original fissionable material present in newly manufactured nuclear fuel, and chemical separation of the fission products restores the fuel so that it can be used again.

Nuclear reprocessing is useful economically because chemical separation is much simpler to accomplish than the difficult isotope separation required to prepare nuclear fuel from natural uranium ore, so that in principle chemical separation yields more generated energy for less effort than mining, purifying, and isotopically separating new uranium ore. In practice, both the difficulty of handling the highly radioactive fission products and other political concerns make fuel reprocessing a contentious subject. One such concern is the fact that spent uranium nuclear fuel contains significant quantities of 239Pu, a prime ingredient in nuclear weapons (see breeder reactor).

Short-lived poisons and controllability

Short-lived reactor poisons in fission products strongly affect how nuclear reactors can operate.
Unstable fission product nuclei transmute into many different elements ("secondary fission products") as they undergo a decay chain to a stable isotope. The most important such element is xenon, because the isotope 135Xe, a secondary fission product with a half-life of about 9 hours, is an extremely strong neutron absorber. In an operating reactor, each nucleus of 135Xe becomes 136Xe (which may later sustain beta decay) by neutron capture almost as soon as it is created, so that there is no buildup in the core. However, when a reactor shuts down, the level of 135Xe builds up in the core for about 9 hours before beginning to decay. The result is that, about 6–8 hours after a reactor is shut down, it can become physically impossible to restart the chain reaction until the 135Xe has had a chance to decay over the next several hours. This temporary state, which may last several days and prevent restart, is called the iodine pit or xenon poisoning. It is one reason why nuclear power reactors are usually operated at an even power level around the clock.

135Xe buildup in a reactor core makes it extremely dangerous to operate the reactor a few hours after it has been shut down. Because 135Xe absorbs neutrons strongly, starting a reactor in a high-Xe condition requires pulling the control rods out of the core much farther than normal. However, if the reactor does achieve criticality, then the neutron flux in the core becomes high and the 135Xe is destroyed rapidly; this has the same effect as very rapidly removing a great length of control rod from the core, and can cause the reaction to grow too rapidly or even become prompt critical.

135Xe played a large part in the Chernobyl accident: about eight hours after a scheduled maintenance shutdown, workers tried to bring the reactor to a zero-power critical condition to test a control circuit. Since the core was loaded with 135Xe from the previous day's power generation, it was necessary to withdraw more control rods to achieve this. As a result, the overdriven reaction grew rapidly and uncontrollably, leading to a steam explosion in the core and the violent destruction of the facility.
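The post-shutdown xenon transient described above follows a standard two-nuclide decay balance (iodine-135 feeding xenon-135). The sketch below uses rough textbook half-lives and invented inventories, so it is an illustration of the shape of the transient rather than data from this article:

```python
# Sketch: 135I -> 135Xe buildup after shutdown (zero flux), analytic solution of
# dXe/dt = lam_i * I - lam_xe * Xe with I(t) = I0 * exp(-lam_i * t).
# Half-lives are rough values; inventories I0, Xe0 are invented.
import math

LAM_I = math.log(2) / (6.6 * 3600)    # 135I decay constant (~6.6 h half-life)
LAM_XE = math.log(2) / (9.1 * 3600)   # 135Xe decay constant (~9.1 h half-life)

def xenon_after_shutdown(i0, xe0, t):
    xe_from_i = i0 * LAM_I / (LAM_XE - LAM_I) * (math.exp(-LAM_I * t) - math.exp(-LAM_XE * t))
    return xe0 * math.exp(-LAM_XE * t) + xe_from_i

# With a large iodine inventory at shutdown, xenon rises and peaks hours later:
for hours in (0, 3, 6, 9, 24, 48):
    print(hours, round(xenon_after_shutdown(i0=10.0, xe0=1.0, t=hours * 3600), 3))
```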
Uranium enrichment

While many fissionable isotopes exist in nature, the only usefully fissile isotope found in any quantity is 235U. About 0.7% of the uranium in most ores is the 235 isotope, and about 99.3% is the non-fissile 238 isotope. For most uses as a nuclear fuel, uranium must be "enriched": purified so that it contains a higher percentage of 235U. Because 238U absorbs fast neutrons, the critical mass needed to sustain a chain reaction increases as the 238U content increases, reaching infinity at 94% 238U (6% 235U). Concentrations lower than 6% 235U cannot go fast-critical, though they are usable in a nuclear reactor with a neutron moderator. A nuclear weapon's primary stage using uranium uses HEU enriched to ~90% 235U, though the secondary stage often uses lower enrichments.

Nuclear reactors with water moderators require at least some enrichment of 235U. Nuclear reactors with heavy-water or graphite moderation can operate with natural uranium, eliminating altogether the need for enrichment and preventing the fuel from being useful for nuclear weapons; the CANDU power reactors used in Canadian power plants are an example of this type.

Uranium enrichment is difficult because the chemical properties of 235U and 238U are identical, so physical processes such as gaseous diffusion, gas centrifuges, lasers, or mass spectrometry must be used for isotopic separation based on the small difference in mass. Because enrichment is the main technical hurdle to production of nuclear fuel and simple nuclear weapons, enrichment technology is politically sensitive.
Oklo: a natural nuclear reactor

Modern deposits of uranium contain only up to ~0.7% 235U (and ~99.3% 238U), which is not enough to sustain a chain reaction moderated by ordinary water. But 235U has a much shorter half-life (700 million years) than 238U (4.5 billion years), so in the distant past the percentage of 235U was much higher. About two billion years ago, a water-saturated uranium deposit (in what is now the Oklo mine in Gabon, West Africa) underwent a naturally occurring chain reaction that was moderated by groundwater and, presumably, controlled by the negative void coefficient as the water boiled from the heat of the reaction. Uranium from the Oklo mine is about 50% depleted compared with other locations: it is only about 0.3% to 0.7% 235U, and the ore contains traces of stable daughters of long-decayed fission products.

See also

* List of nuclear reactors
* Nuclear physics
* Nuclear fission
* Nuclear fusion
* Void coefficient
2023-01-29 01:26:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 45, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7209160327911377, "perplexity": 1724.3858199731103}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499697.75/warc/CC-MAIN-20230129012420-20230129042420-00572.warc.gz"}
https://glossary.informs.org/ver2/mpgwiki/index.php?title=DFP_method
# DFP method

This is a method to solve an unconstrained nonlinear program that proceeds as follows.

1. Start with any symmetric, negative definite matrix, say $H$ (e.g., $-I$), and any point, say $x$. Compute $g = \nabla f(x)$, and set each of the following:
2. direction: $d = -Hg$.
3. step size: $s \in \arg\max \{ f(x + td) : t \ge 0 \}$.
4. change in position: $p = sd$.
5. new point and gradient: $x' = x + p$ and $g' = \nabla f(x')$.
6. change in gradient: $q = g' - g$.
7. Replace $x$ with $x'$ and update $H$ by the DFP update to complete the iteration.
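The entry does not spell out the update in step 7; the classical DFP rank-two update is $H \leftarrow H + \frac{pp^T}{p^T q} - \frac{(Hq)(Hq)^T}{q^T H q}$. Below is a minimal sketch of the loop exactly as the steps list it (maximization form, with $H$ kept negative definite); the test function, grid-based line search, and tolerances are illustrative choices, not part of the glossary entry:

```python
# Minimal DFP sketch following the numbered steps above (maximization form).
import numpy as np

def dfp_maximize(f, grad, x, max_iter=50, tol=1e-6):
    H = -np.eye(len(x))                    # step 1: symmetric negative definite start
    g = grad(x)
    for _ in range(max_iter):
        d = -H @ g                         # step 2: direction
        ts = np.linspace(0.0, 2.0, 2001)   # step 3: crude grid argmax over t >= 0
        s = ts[np.argmax([f(x + t * d) for t in ts])]
        p = s * d                          # step 4: change in position
        x_new = x + p                      # step 5: new point and gradient
        g_new = grad(x_new)
        q = g_new - g                      # step 6: change in gradient
        if np.linalg.norm(g_new) < tol or abs(p @ q) < 1e-12:
            return x_new
        Hq = H @ q                         # step 7: classical DFP rank-two update
        H = H + np.outer(p, p) / (p @ q) - np.outer(Hq, Hq) / (q @ Hq)
        x, g = x_new, g_new
    return x

# Illustrative test: maximize a concave quadratic with optimum at (1, -0.5).
f = lambda x: -(x[0] - 1.0) ** 2 - 2.0 * (x[1] + 0.5) ** 2
grad = lambda x: np.array([-2.0 * (x[0] - 1.0), -4.0 * (x[1] + 0.5)])
print(dfp_maximize(f, grad, np.array([5.0, 5.0])))   # ~[1.0, -0.5]
```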
2019-11-14 11:33:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9821211695671082, "perplexity": 953.2806650224703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668416.11/warc/CC-MAIN-20191114104329-20191114132329-00086.warc.gz"}
https://proxies-free.com/tag/amount/
## Drawing – How do I construct a particular binary tree from a certain amount of information?

I am running an algorithm to solve a problem that returns a lot of information about a binary tree associated with this problem. To illustrate the results I am trying to draw the binary tree. I currently produce the plot shown on the right in the picture below, but I would like to draw the tree shown on the left. The question is: how do I draw that binary tree? The input data are two lists. The first contains the "path" of the solution (for the tree on the bottom right: paths = {{0, 0, 0, 0, 0}, {0, 1, 1, 1}}); the second contains the number of nodes in each layer (for the tree on the bottom right: node_leves = {1, 2, 4, 4}). I have tried looking for functions related to charts, trees, and so on, but have not found a solution for rendering that particular tree.

## Derivative of an infinitesimally small quantity

Consider the approximation $\alpha(x) \ll 1$. Does that mean that $d_\mu(\alpha(x))$ is approximately $0$? I am trying to take the variation $\Delta(d_\mu \phi(x))$ in field theory and I come across something ugly.

## woocommerce – Display a set number of product categories in columns, with a link for the surplus

I am trying to list product categories up to a certain number (x) in a drop-down menu, but if the total number of product categories in the shop is greater than x, then only x − 1 should be listed, with a "Show all categories" link in the final position. As an example, with 35 product categories: if the maximum number of product categories is set to 32 and the maximum per column is set to 8, then 31 product categories are displayed in 4 columns and the "Show all" link appears in 32nd position. The total number of product categories can be anything. Another example: at the moment I have only 18 categories. If the maximum number of product categories is set to 16 and the maximum per column is set to 8, then 15 product categories appear in 2 columns and the "Show all" link appears in 16th place. I am a beginner and have exhausted my knowledge of if/else logic; everything I tried messed up the results. Here is the basic code that works to list the categories in columns of 8, up to 32, with the link going into the next column (parts of this snippet were lost when the page was scraped; reconstructed pieces are flagged in comments):

```php
<?php
$args = array(               // reconstructed: the array() opening was lost in extraction
    'taxonomy'     => 'product_cat',
    'orderby'      => 'name',
    'number'       => 32,    // maximum to list
    'title_li'     => '',
    'show_count'   => 0,     // 1 for yes, 0 for no
    'pad_counts'   => 0,     // 1 for yes, 0 for no
    'hierarchical' => 1,     // 1 for yes, 0 for no
    'hide_empty'   => 0,     // 1 for yes, 0 for no
    'echo'         => 0,     // 1 for yes, 0 for no
    'exclude'      => '73, 74, 16', // best sellers, new, and uncategorized
    'depth'        => '1',   // top level categories, not sub
    'style'        => '',    // default is list with bullets, '' is without
);
// Grab top level categories
$get_cats = wp_list_categories($args);
// Split into array items (the delimiter was stripped in extraction; '<br />' is a guess)
$cat_array = explode("<br />", $get_cats);
// Amount of categories (count of items in array)
$results_total = count($cat_array);
// How many tags to show per list (-8)
$remainder = ($results_total - 8);
$cats_per_list = ($results_total - $remainder);
// Counter number for tagging onto each list
$list_number = 1;
// Set the category result counter to zero
$result_number = 0;

foreach ($cat_array as $category) {   // reconstructed: the loop header was lost in extraction
    $result_number++;
    if ($result_number >= $cats_per_list) {
        // start a new column
        $result_number = 0;
        $list_number++;
        echo $category;   // the HTML wrapping each item was stripped in extraction
    } else {
        echo $category;
    }
}
echo 'View Categories';
?>
```

## docker – How do I tune a Linux machine for performance using a large amount of unused RAM?
I have a test box (CentOS 7) with 192 GB of RAM, of which usually only ~7 GB is in use, with occasional spikes up to ~32 GB, but nothing more. It is mostly used for Docker. It has a large 2 TB SSD (xfs) of which only ~10% is used, but I still want to make it faster. Most disk-heavy operations involve installing system packages in containers, or pip packages and the like. I already have a local HTTP proxy, which is only partially used due to the MITM/SSL challenges, but the internet connection is also good, so for the rest of the question assume we just have to focus on maximizing disk speed using the free RAM. Since this is a test box, I do not care about potential data loss when the system reboots unexpectedly; I prefer to treat everything on it as disposable. I am not keen on making /tmp a tmpfs, because I am worried that a temporary download might be too large for it. Also, many temporary files are created inside containers or in other folders like `.tox` (extensive testing of Python code). It would be really cool if the filesystem knew how to keep folders with a given name in RAM, but I do not know of a filesystem that can do that. On the Docker side, storage is configured as follows:

```
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
```

## Formulas – How do I sum amounts by day in Google Sheets?

I use Google Sheets to track revenue and expenses. I have a separate file for each month and a master file to summarize each month. The monthly file contains a few sheets; each sheet records revenue from performing a service, revenue from selling a product, expenses, and the like. The service sheet looks like this:

Now I want to be able to summarize the revenue from each such sheet by date. The easiest (and only) way I have found is to use the =SUMIF formula to sum amounts from column "C" by giving a full date from column "A" as the criterion, like this:

```
=SUMIF(inservice!A3:A200;"2019-09-03";inservice!C3:C200)
```

It does the job, but the thing is that each month I would have to manually change the date (which is month- and year-specific) in the formula, and that is exactly what I want to avoid. I tried to work around this problem by changing the formula to:

```
=SUMIF(inservice!A3:A200;"*03";inservice!C3:C200)
```

The idea was to sum column "C" only when the date in column "A" ends with a specific day, in this case "03". However, the sum is always "0", so this solution does not work for me. Is there a way to solve this with a modified =SUMIF formula? Another solution I thought about is to put all the dates into separate cells and use each cell as the criterion in the =SUMIF formula, but for this to work I would need a formula that automatically adjusts the month and year in those reference dates, unless such a formula can be placed inside =SUMIF itself. I feel like this is a very cumbersome way to achieve what I am trying to do, but an example of what I mean should help: take the fourth column in the picture below, "D", and use it as the range for the criterion to be met in the formula.
To calculate the amount earned on 2019-09-03, I would use a formula like this (where D192 contains 2019-09-03):

```
=SUMIF(inservice!A3:A200;inservice!D192;inservice!C3:C200)
```

But again, for this to work I would have to find a way to automatically update the dates in column "D" in each new file, as it is very tedious to redo them every month. I hope I explained everything comprehensibly. I have a feeling that an array formula might be the solution I am looking for, but every time I try to use one in Google Sheets I do not get the results I expect, and I am afraid I do not quite understand how it works. I started learning formulas a few months ago, when it turned out that I needed tidier accounts, so please treat me as a beginner and keep your explanations as simple as possible. 😅

## dnd 5e – How binding are the commands a Cambion gives through its fiendish charm?

Yes, the barbarian must obey. "As well as he can", however, was added by the DM, and even then it means that the barbarian will do his best to carry out the spoken command as literally as possible. For example, if the barbarian has to climb a wall to reach the bard, he will do so. He has to do his best to attack the bard, for that was the order. What "as well as he can" does not mean is that the barbarian will try to carry out the command in the most effective possible way, beyond what the stated command said. The barbarian will do his best to do exactly what the command said: attack. Whether he makes the attack as effective as he can is another matter. Probably "attack the bard" could be interpreted as "attack normally", meaning the barbarian attacks the bard the way he usually attacks. If the barbarian is in the habit of making all the attacks he can, but not raging unless the situation seems serious, then, since the barbarian knows he is much stronger than the bard, he would make both of his attacks, but without raging. If the monster had ordered, "Attack the bard with all the strength you have!", then the barbarian would have to do his best to go nuclear on the bard. The other side of the coin is that the barbarian could say: "Okay, so I just throw a stone at the bard, not even using all my strength: hardly a huge amount of damage! There, the condition of the order is fulfilled!" Obviously this would be a somewhat cheating "interpretation" by the player. By RAW, a DM call must always be made for such commands. For example, if the DM rules that the charm is essentially word-based, a literal interpretation of the command by the barbarian works fine: throw that tiny stone! But if the DM decides the spell is more like a mind-control effect, then he assumes the barbarian would attack with all his ferocity: the barbarian would then obey not only the words but the very intention of the Cambion. Since the DM stated that the barbarian must make the attack to the best of his ability, the player really had no choice.

## Reducing free memory to a certain level for a certain amount of time

I want to load the memory for a certain period of time. I check memory usage with `vmstat -s`, and I fill it with the `tail /dev/zero` command, though `tail /dev/zero` gets killed after about 60 s and it fills the memory unpredictably. I want to have 5% free memory for 180 seconds.
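One possible sketch of what is being asked here (an illustrative approach that assumes the third-party psutil package; it is not a vetted tool, and on a real machine the OOM killer may intervene first):

```python
# Sketch: allocate RAM until ~5% is free, hold it for 180 s, then release.
# Assumes the third-party 'psutil' package; chunk size is an arbitrary choice.
# Use only on a disposable test box.
import time
import psutil

CHUNK = 64 * 1024 * 1024                       # 64 MiB per allocation
total = psutil.virtual_memory().total
hold = []
while psutil.virtual_memory().available > 0.05 * total:
    hold.append(b"\x01" * CHUNK)               # writing the pattern makes pages resident
time.sleep(180)                                 # keep the memory occupied for 180 seconds
hold.clear()                                    # release everything on exit
```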
## Always deducting the transaction fee from one recipient when sending BTC to two recipients in a single transaction

Need help!! Anyone there? I want the transaction fee to always be deducted from one specific recipient when I send a BTC amount to two different recipients in a single transaction.

## Is an AWS Lambda function well suited to a use case where a large number of records is retrieved from the database at the user's request?

We have a use case where the Lambda function is called by API Gateway on a user request from the browser, retrieves data from the database, and returns it to the client. The time it takes to retrieve the data, and the amount of data, depend on which filters the user has selected. Do you think a Lambda function is suitable for this type of application? Additional questions (assuming the answer is yes and Lambda can fit well):

• How do we estimate the memory requirements of the Lambda function for such an application?
• The Lambda function (in the above use case) waits while data is retrieved from the database. Is there a cheaper way?
• Would a dedicated EC2 instance hosting the web app be a better approach?

## Billing – Pro rata vs. straight amount

I have worked through this forum and would like to thank everyone for their great contributions. A question I have been thinking about relates to your billing configuration, if you are considering pro-rata or straight billing: which would you recommend? I like the idea of pro-rata because it provides more control and oversight, as all customers receive an invoice for their associated hosting services on a particular day of the month. However, I do not want customers at checkout to see $5 per month advertised and then have $6.50 (pro rata) debited on their first bill. I would appreciate any insights, thank you.
2019-09-15 10:17:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43201082944869995, "perplexity": 1042.8714043889308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571027.62/warc/CC-MAIN-20190915093509-20190915115509-00451.warc.gz"}
http://www.prettyinspiration.com/blog/page.php?tag=f529d4-the-demand-measure-of-gdp-accounting-adds-together
Value added is the increase in the value of goods or services as a result of the production process.

The demand measure of GDP accounting adds together: a) wages and salaries, rent, interest, and profit. ○ B. has risen gradually. Gross National Income per capita = Gross National Income / Total Population. Problems with using GDP as a measure of the standard of living. ○ C. taxes are imposed on investment in capital.

In 2015, the UK manufacturing sector accounted for 10% of total UK GDP and for 8% of jobs. ○ C. imports exceed exports by $150 billion. And also the increasingly lucrative computer games industry. These are low-cost, high-volume, low-priced products. They can be calculated using the same formula, and they rise and fall together. ○ C. the value of all final goods and services produced anywhere in the world by a … $20. ○ D. durable goods and nondurable goods. Physical capital is worn out, or reduced in value because of aging, over the course of a year. ○ C. Education ○ A. Factors that can cause a change in aggregate demand.

GDP includes the output of foreign-owned businesses that are located in a nation following foreign direct investment. ○ A. wages and salaries, rent, interest, and profit.

The demand measure of GDP accounting adds together (question options; Question 16 from ECON 201 at University of Maryland, University College): ○ A. wages and salaries, rent, interest, and profit.

The demand curve measures the quantity demanded at each price. This is known as the shadow economy. Other goods and services are such that lots of value can be added as we move from sourcing the raw materials through to the final product. In other words, it disguises the structure and relative efficiency of production underneath total expenditures. [Year 12 Enrichment Task]

Only those incomes that come from the production of goods and services are included in the calculation of GDP by the income approach. For example, the output produced at the … Gross Domestic Product (by sum of factor incomes). The creative force behind 10bn unique products: it accounts for 15–20 per cent of the world economy and employs about 300m people (roughly 5 per cent of the world population). Question 13: 1 / 1 point.
exports exceed imports by $50 billion. Manufacturing in the UK was 11% of GDP in 2015. ○ National income measures the monetary value of the flow of output of goods and services produced in an economy over a period of time. ○ D. exports exceed imports by $150 billion.

If consumption equals … The pizza has many ingredients at different stages of the supply chain (tomato growers, dough, mushroom farmers) and also the value created by Domino's as they put the pizza together and deliver it to the consumer. … equals $200 billion, and government spending equals $260 billion, then: ○ B. imports exceed exports by $50 billion.

The aggregate demand formula is AD = C + I + G + (X − M). B. consumption, investment, government purchases, and trade balance. C. consumption, government purchases, wages and salaries, and trade balance. Some differences can arise based on data sources, timing, and mathematical techniques used. Each multiple-choice question has just ONE answer. ○ A. final inventories.

We exclude: transfer payments, e.g. the state pension; income support for families on low incomes; the Jobseekers' Allowance for the unemployed and other welfare assistance such as housing benefit and incapacity benefits; private transfers of money from one individual to another. 1. ○ B. NNP; GNP ○ B. It's used to show how a country's demand … GDP is the sum of the incomes earned through the production of goods and services.

Income not registered with the tax authorities: every year, billions of pounds worth of activity is not declared to the tax authorities. D. consumption, interest, government purchases, and trade balance. Say you buy a pizza from Domino's for £9.99. Aggregate demand takes GDP and shows how it … ○ A. the sum of all currency and coins in circulation. … ultimately determines the prevailing standard of living in a country. 1. ○ B.

In general macroeconomic terms, both GDP and aggregate demand share the same equation:

GDP or AD = C + I + G + (X − M)

where C = consumer spending on goods and services, I = investment spending on business capital goods, G = government spending on public goods and services, X = exports, and M = imports.

○ D. GDP per capita. Manufacturing is one of the production industries, which also include mining, electricity, water & waste management, and oil & gas extraction. The IS-LM model represents the interaction of the real economy with financial markets to produce equilibrium interest rates and macroeconomic output. ○ C. GDP; NNP. Investment spending on business capital goods; government spending on public goods and services. Measuring the total value of all goods and services sold to final users; adding together income payments and other
measured in U.S. dollars.
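For completeness, the expenditure identity that recurs in the options above can be evaluated directly; the numbers below are invented for illustration only:

```python
# Sketch: the expenditure identity AD = C + I + G + (X - M), with invented numbers.
C, I, G = 1000.0, 200.0, 260.0   # consumption, investment, government spending ($bn)
X, M = 150.0, 100.0              # exports and imports ($bn)
AD = C + I + G + (X - M)         # the (X - M) term is the trade balance
print(AD)                        # 1510.0
```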
2021-04-22 20:38:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20087099075317383, "perplexity": 3867.878446675448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039604430.92/warc/CC-MAIN-20210422191215-20210422221215-00019.warc.gz"}
https://crazyproject.wordpress.com/2011/11/02/construct-representatives-of-the-similarity-classes-of-matrices-having-a-given-dimension-and-characteristic-polynomial/
## Construct representatives of the similarity classes of matrices having a given dimension and characteristic polynomial

Construct (representatives of) the similarity classes of $6 \times 6$ matrices over $\mathbb{C}$ having characteristic polynomial $c(x) = (x^4-1)(x^2-1)$.

First, note that $c(x)$ factors over $\mathbb{C}$ as $c(x) = (x+1)^2(x-1)^2(x+i)(x-i)$. Recall that the characteristic polynomial of a matrix is the product of its invariant factors, that the minimal polynomial is the divisibility-largest invariant factor, and that the characteristic polynomial divides a power of the minimal polynomial (that is, since $\mathbb{C}[x]$ is a UFD, each irreducible factor of the characteristic polynomial appears in the factorization of the minimal polynomial). If $A$ is a matrix having characteristic polynomial $c(x)$, then the minimal polynomial of $A$ is one of the following.

1. $(x+i)(x-i)(x+1)(x-1)$
2. $(x+i)(x-i)(x+1)^2(x-1)$
3. $(x+i)(x-i)(x+1)(x-1)^2$
4. $(x+i)(x-i)(x+1)^2(x-1)^2$

In each case, the remaining invariant factors are determined. The possible lists of invariant factors are thus as follows.

1. $(x+1)(x-1)$, $(x+i)(x-i)(x+1)(x-1)$
2. $x-1$, $(x+i)(x-i)(x+1)^2(x-1)$
3. $x+1$, $(x+i)(x-i)(x+1)(x-1)^2$
4. $(x+i)(x-i)(x+1)^2(x-1)^2$

The corresponding matrices (direct sums of the companion matrices of the invariant factors) are thus as follows.

1. $\begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}$
2. $\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & -1 \end{bmatrix}$
3. $\begin{bmatrix} -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{bmatrix}$
4. $\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}$

Every $6 \times 6$ matrix over $\mathbb{C}$ with characteristic polynomial $c(x)$ is similar to exactly one matrix in this list.
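As a quick sanity check (an addition to the post, not part of the original argument), one can confirm numerically that all four representatives have the required characteristic polynomial $x^6 - x^4 - x^2 + 1 = (x^4-1)(x^2-1)$:

```python
# Sketch: numerically verify the characteristic polynomial of each representative.
import numpy as np

reps = [
    [[0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1],
     [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0]],
    [[1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1], [0, 1, 0, 0, 0, 1],
     [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, -1]],
    [[-1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, -1], [0, 1, 0, 0, 0, 1],
     [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 1]],
    [[0, 0, 0, 0, 0, -1], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 1],
     [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 1], [0, 0, 0, 0, 1, 0]],
]
target = [1, 0, -1, 0, -1, 0, 1]   # coefficients of x^6 - x^4 - x^2 + 1
for A in reps:
    # np.poly returns the characteristic polynomial coefficients of a square matrix
    print(np.allclose(np.poly(np.array(A, dtype=float)), target))   # True, four times
```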
2016-10-26 15:07:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9798192977905273, "perplexity": 140.61520500119667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720962.53/warc/CC-MAIN-20161020183840-00366-ip-10-171-6-4.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/8555-2-1-1-yet-another-simple-one-print.html
# (a^2 -1) / (-a -1) yet another simple one

• Dec 7th 2006, 12:38 PM, shenton

Simplify: (a^2 - 1) / (-a - 1)

Answer key: 1 - a

This looks like another simple problem. But I tried AfterShock's method of splitting the numerator and found nothing to cancel; I also tried Dan's suggestion of factoring, but I don't see anything to factor. This one seems harder; how do I arrive at the 1 - a solution? Thanks.

• Dec 7th 2006, 12:42 PM, topsquark

Quote: Originally Posted by shenton (question above)

$\frac{a^2 - 1}{-a - 1} = \frac{a^2 - 1}{-1(a + 1)} = -\frac{a^2 - 1}{a + 1}$

Now note that $a^2 - 1$ is the difference of two squares, so $a^2 - 1 = (a + 1)(a - 1)$, and thus

$\frac{a^2 - 1}{-a - 1} = - \frac{(a + 1)(a - 1)}{(a + 1)} = -(a - 1) = 1 - a$

-Dan

• Dec 7th 2006, 12:46 PM, AfterShock

Quote: Originally Posted by shenton (question above)

Another way, although the way shown is the easiest:

(a^2 - 1)/(-a - 1) = (a^2)/(-a - 1) + (-1)/(-a - 1) = (-a^2)/(a + 1) + 1/(a + 1)

In the step above, I just made the denominator positive and the numerator negative in the first term, and both positive in the second. Personal preference. We can expand the first term by long division:

(-a + 1) + (-1)/(a + 1) + 1/(a + 1) = -a + 1

• Dec 7th 2006, 01:34 PM, shenton

Thanks, guys, for teaching. These questions are actually pretty hard. I learnt something now.
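For anyone who wants to check such simplifications mechanically, SymPy's `cancel` performs exactly the factor-and-cancel step topsquark shows (a sketch, assuming SymPy is installed; note the two sides agree only where the original expression is defined, i.e., for a ≠ -1):

```python
from sympy import symbols, cancel

a = symbols('a')
print(cancel((a**2 - 1) / (-a - 1)))   # prints 1 - a, matching the answer key
```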
2017-07-28 07:12:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9059638381004333, "perplexity": 2044.0904433948147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549448095.6/warc/CC-MAIN-20170728062501-20170728082501-00584.warc.gz"}
https://iacr.org/cryptodb/data/author.php?authorkey=654
## CryptoDB

### Amit Sahai

#### Publications

2019 CRYPTO: Is it possible to measure a physical object in a way that makes the measurement signals unintelligible to an external observer? Alternatively, can one learn a natural concept by using a contrived training set that makes the labeled examples useless without the line of thought that has led to their choice? We initiate a study of “cryptographic sensing” problems of this type, presenting definitions, positive and negative results, and directions for further research.

2019 EUROCRYPT: We develop attacks on the security of variants of pseudo-random generators computed by quadratic polynomials. In particular we give a general condition for breaking the one-way property of mappings where every output is a quadratic polynomial (over the reals) of the input. As a corollary, we break the degree-2 candidates for security assumptions recently proposed for constructing indistinguishability obfuscation by Ananth, Jain and Sahai (ePrint 2018) and Agrawal (ePrint 2018). We present conjectures that would imply our attacks extend to a wider variety of instances, and in particular offer experimental evidence that they break the assumption of Lin-Matt (ePrint 2018). Our algorithms use semidefinite programming, and in particular, results on low-rank recovery (Recht, Fazel, Parrilo 2007) and matrix completion (Gross 2009).

2019 EUROCRYPT: In this work, we introduce and construct D-restricted Functional Encryption (FE) for any constant $D \ge 3$, based only on the SXDH assumption over bilinear groups. This generalizes the notion of 3-restricted FE recently introduced and constructed by Ananth et al. (ePrint 2018) in the generic bilinear group model. A $D=(d+2)$-restricted FE scheme is a secret key FE scheme that allows an encryptor to efficiently encrypt a message of the form $M=(\mathbf{x},\mathbf{y},\mathbf{z})$. Here, $\mathbf{x}\in \mathbb{F}_p^{d\times n}$ and $\mathbf{y},\mathbf{z}\in \mathbb{F}_p^n$. Function keys can be issued for a function $f=\sum_{\mathbf{I}=(i_1,\dots,i_d,j,k)} c_{\mathbf{I}}\cdot \mathbf{x}[1,i_1]\cdots \mathbf{x}[d,i_d]\cdot \mathbf{y}[j]\cdot \mathbf{z}[k]$ where the coefficients $c_{\mathbf{I}}\in \mathbb{F}_p$. Knowing the function key and the ciphertext, one can learn $f(\mathbf{x},\mathbf{y},\mathbf{z})$, if this value is bounded in absolute value by some polynomial in the security parameter and $n$. The security requirement is that the ciphertext hides $\mathbf{y}$ and $\mathbf{z}$, although it is not required to hide $\mathbf{x}$. Thus $\mathbf{x}$ can be seen as a public attribute. D-restricted FE allows for useful evaluation of constant-degree polynomials, while only requiring the SXDH assumption over bilinear groups. As such, it is a powerful tool for leveraging hardness that exists in constant-degree expanding families of polynomials over $\mathbb{R}$. In particular, we build upon the work of Ananth et al. to show how to build indistinguishability obfuscation ($i\mathcal{O}$) assuming only SXDH over bilinear groups, LWE, and assumptions relating to weak pseudorandom properties of constant-degree expanding polynomials over $\mathbb{R}$.

2019 CRYPTO: In this work, we explore the question of simultaneous privacy and soundness amplification for non-interactive zero-knowledge argument systems (NIZK). We show that any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate satisfying $\delta_s+\delta_z=1-\epsilon$, for any constant $\epsilon>0$, can be turned into a computationally sound and zero-knowledge candidate with the only extra assumption of a subexponentially secure public-key encryption. We develop novel techniques to leverage the use of the leakage simulation lemma (Jetchev-Pietrzak TCC 2014) to argue amplification. A crucial component of our result is a new notion for secret sharing $\mathsf{NP}$ instances. We believe that this may be of independent interest. To achieve this result we analyze the following two transformations:

- Parallel Repetition: We show that using parallel repetition any $\delta_s$-sound and $\delta_z$-zero-knowledge $\mathsf{NIZK}$ candidate can be turned into a (roughly) $\delta_s^n$-sound and $(1-(1-\delta_z)^n)$-zero-knowledge candidate. Here $n$ is the repetition parameter.
- MPC-based Repetition: We propose a new transformation that amplifies zero-knowledge in the same way that parallel repetition amplifies soundness. We show that using this any $\delta_s$-sound and $\delta_z$-zero-knowledge $\mathsf{NIZK}$ candidate can be turned into a (roughly) $(1-(1-\delta_s)^n)$-sound and $2\cdot\delta_z^n$-zero-knowledge candidate.

Then we show that using these transformations in a zig-zag fashion we can obtain our result. Finally, we also present a simple transformation which directly turns any $\mathsf{NIZK}$ candidate satisfying $\delta_s,\delta_z<1/3-1/\mathsf{poly}(\lambda)$ into a secure one.
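The two transformations above are easy to get a feel for numerically. A toy evaluation of just the quoted formulas, nothing more; the zig-zag in the paper interleaves the two with carefully chosen repetition parameters, whereas this sketch applies each once:

```python
def parallel_repetition(ds, dz, n):
    # soundness error improves to ds**n; zero-knowledge error degrades
    return ds ** n, 1 - (1 - dz) ** n

def mpc_repetition(ds, dz, n):
    # zero-knowledge error improves to 2*dz**n; soundness error degrades
    return 1 - (1 - ds) ** n, 2 * dz ** n

ds, dz = 0.5, 0.4                         # delta_s + delta_z = 0.9 = 1 - epsilon
print(parallel_repetition(ds, dz, 10))    # ~(0.001, 0.994)
print(mpc_repetition(ds, dz, 10))         # ~(0.999, 0.0002)
```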
2019 CRYPTO: The existence of secure indistinguishability obfuscators ($i\mathcal{O}$) has far-reaching implications, significantly expanding the scope of problems amenable to cryptographic study. All known approaches to constructing $i\mathcal{O}$ rely on $d$-linear maps. While secure bilinear maps are well established in cryptographic literature, the security of candidates for $d>2$ is poorly understood. We propose a new approach to constructing $i\mathcal{O}$ for general circuits. Unlike all previously known realizations of $i\mathcal{O}$, we avoid the use of $d$-linear maps of degree $d \ge 3$. At the heart of our approach is the assumption that a new weak pseudorandom object exists. We consider two related variants of these objects, which we call perturbation resilient generator ($\Delta$RG) and pseudo flawed-smudging generator (PFG), respectively. At a high level, both objects are polynomially expanding functions whose outputs partially hide (or smudge) small noise vectors when added to them. We further require that they are computable by a family of degree-3 polynomials over $\mathbb{Z}$. We show how they can be used to construct functional encryption schemes with weak security guarantees. Finally, we use novel amplification techniques to obtain full security. As a result, we obtain $i\mathcal{O}$ for general circuits assuming:

- subexponentially secure LWE
- bilinear maps
- $\mathrm{poly}(\lambda)$-secure 3-block-local PRGs
- $\Delta$RGs or PFGs

2018 EUROCRYPT

2018 EUROCRYPT

2018 CRYPTO: We consider the problem of protecting general computations against constant-rate random leakage.
That is, the computation is performed by a randomized boolean circuit that maps a randomly encoded input to a randomly encoded output, such that even if the value of every wire is independently leaked with some constant probability $p > 0$, the leakage reveals essentially nothing about the input. In this work we provide a conceptually simple, modular approach for solving the above problem, providing a simpler and self-contained alternative to previous constructions of Ajtai (STOC 2011) and Andrychowicz et al. (Eurocrypt 2016). We also obtain several extensions and generalizations of this result. In particular, we show that for every leakage probability $p<1$, there is a finite basis $\mathbb{B}$ such that leakage-resilient computation with leakage probability $p$ can be realized using circuits over the basis $\mathbb{B}$. We obtain similar positive results for the stronger notion of leakage tolerance, where the input is not encoded, but the leakage from the entire computation can be simulated given random $p'$-leakage of input values alone, for any $p<p'<1$. Finally, we complement this by a negative result, showing that for every basis $\mathbb{B}$ there is some leakage probability $p<1$ such that for any $p'<1$, leakage tolerance as above cannot be achieved in general. We show that our modular approach is also useful for protecting computations against worst-case leakage. In this model, we require that leakage of any $t$ (adversarially chosen) wires reveal nothing about the input. By combining our construction with a previous derandomization technique of Ishai et al. (ICALP 2013), we show that security in this setting can be achieved with $O(t^{1+\varepsilon})$ random bits, for every constant $\varepsilon > 0$. This (near-optimal) bound significantly improves upon previous constructions that required more than $t^{3}$ random bits.

2018 CRYPTO: We devise a new partitioned simulation technique for MPC where the simulator uses different strategies for simulating the view of aborting adversaries and non-aborting adversaries. The protagonist of this technique is a new notion of promise zero knowledge (ZK) where the ZK property only holds against non-aborting verifiers. We show how to realize promise ZK in three rounds in the simultaneous-message model assuming polynomially hard DDH (or QR or $N^{th}$-Residuosity). We demonstrate the following applications of our new technique:

- We construct the first round-optimal (i.e., four round) MPC protocol for general functions based on polynomially hard DDH (or QR or $N^{th}$-Residuosity).
- We further show how to overcome the four-round barrier for MPC by constructing a three-round protocol for “list coin-tossing” (a slight relaxation of coin-tossing that suffices for most conceivable applications) based on polynomially hard DDH (or QR or $N^{th}$-Residuosity). This result generalizes to randomized input-less functionalities.

Previously, four-round MPC protocols required sub-exponential-time hardness assumptions and no multi-party three-round protocols were known for any relaxed security notions with polynomial-time simulation against malicious adversaries. In order to base security on polynomial-time standard assumptions, we also rely upon a leveled rewinding security technique that can be viewed as a polynomial-time alternative to leveled complexity leveraging for achieving “non-malleability” across different primitives.

2018 CRYPTO: We develop a general approach to adding a threshold functionality to a large class of (non-threshold) cryptographic schemes. A threshold functionality enables a secret key to be split into a number of shares, so that only a threshold of parties can use the key, without reconstructing the key. We begin by constructing a threshold fully-homomorphic encryption scheme (ThFHE) from the learning with errors (LWE) problem. We next introduce a new concept, called a universal thresholdizer, from which many threshold systems are possible. We show how to construct a universal thresholdizer from our ThFHE. A universal thresholdizer can be used to add threshold functionality to many systems, such as CCA-secure public-key encryption (PKE), signature schemes, pseudorandom functions, and other primitives. In particular, by applying this paradigm to a (non-threshold) lattice signature system, we obtain the first single-round threshold signature scheme from LWE.
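The threshold functionality described here, splitting a secret key into shares so that only a threshold of parties can use it, generalizes classic secret sharing. For reference, a minimal Shamir sharing sketch; this is the textbook scheme, not the LWE-based universal thresholdizer of the paper, and the modulus `P` is an arbitrary illustrative choice:

```python
import random

P = 2**61 - 1   # a prime modulus for the field; illustrative only

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the degree-(t-1) polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```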
2018 TCC: The notion of Functional Encryption (FE) has recently emerged as a strong primitive with several exciting applications. In this work, we initiate the study of the following question: Can existing public key encryption schemes be “upgraded” to Functional Encryption schemes without changing their public keys or the encryption algorithm? We call a public-key encryption scheme with this property FE-compatible. Indeed, assuming ideal obfuscation, it is easy to see that every CCA-secure public-key encryption scheme is FE-compatible. Despite the recent success in using indistinguishability obfuscation to replace ideal obfuscation for many applications, we show that this phenomenon most likely will not apply here. We show that assuming fully homomorphic encryption and the learning with errors (LWE) assumption, there exists a CCA-secure encryption scheme that is provably not FE-compatible. We also show that a large class of natural CCA-secure encryption schemes proven secure in the random oracle model are not FE-compatible in the random oracle model. Nevertheless, we identify a key structure that, if present, is sufficient to provide FE-compatibility. Specifically, we show that assuming sub-exponentially secure iO and sub-exponentially secure one-way functions, there exists a class of public key encryption schemes, which we call Special-CCA secure encryption schemes, that are in fact FE-compatible. In particular, each of the following popular CCA-secure encryption schemes (some of which existed even before the notion of FE was introduced) falls into the class of Special-CCA secure encryption schemes and is thus FE-compatible:

1. [CHK04] when instantiated with the IBE scheme of [BB04].
2. [CHK04] when instantiated with any Hierarchical IBE scheme.
3. [PW08] when instantiated with any Lossy Trapdoor Function.

2018 TCC: Pseudorandom functions (PRFs) are one of the fundamental building blocks in cryptography. Traditionally, there have been two main approaches for PRF design: the “practitioner's approach” of building concretely-efficient constructions based on known heuristics and prior experience, and the “theoretician's approach” of proposing constructions and reducing their security to a previously-studied hardness assumption. While both approaches have their merits, the resulting PRF candidates vary greatly in terms of concrete efficiency and design complexity. In this work, we depart from these traditional approaches by exploring a new space of plausible PRF candidates. Our guiding principle is to maximize simplicity while optimizing complexity measures that are relevant to cryptographic applications. Our primary focus is on weak PRFs computable by very simple circuits; specifically, depth-2 $\mathsf{ACC}^0$ circuits. Concretely, our main weak PRF candidate is a “piecewise-linear” function that first applies a secret mod-2 linear mapping to the input, and then a public mod-3 linear mapping to the result. We also put forward a similar depth-3 strong PRF candidate. The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw many natural connections between their hardness and questions in complexity theory or learning theory (e.g., learnability of $\mathsf{ACC}^0$ and width-3 branching programs, interpolation and property testing for sparse polynomials, and new natural proof barriers for showing super-linear circuit lower bounds). On the applied side, the piecewise-linear structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). Using our PRF candidates, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity (often both) compared to protocols obtained by combining standard MPC protocols with PRFs like AES, LowMC, or Rasta (the latter two are specialized MPC-friendly PRFs). Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We conclude by showing that our main weak PRF candidate can plausibly be boosted to an encoded-input PRF by leveraging standard error-correcting codes.
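The mod-2/mod-3 structure of the main weak PRF candidate is concrete enough to sketch. A toy version, assuming NumPy; the dimensions and the choice of summing the bits as the public mod-3 linear map are illustrative stand-ins rather than the paper's concrete parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = m = 256                                            # toy input/output dimensions

A = rng.integers(0, 2, size=(m, n), dtype=np.int64)    # secret key: a mod-2 linear map

def weak_prf(x):
    # "Piecewise-linear" structure: a secret GF(2) layer followed by a
    # public mod-3 linear layer (here, summing the output bits mod 3).
    y = (A @ x) % 2
    return int(y.sum() % 3)

x = rng.integers(0, 2, size=n, dtype=np.int64)
print(weak_prf(x))                                     # a value in {0, 1, 2}
```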
2017 EUROCRYPT 2017 EUROCRYPT 2017 EUROCRYPT 2017 EUROCRYPT 2017 CRYPTO 2017 ASIACRYPT 2017 ASIACRYPT 2017 ASIACRYPT 2017 TCC 2016 EUROCRYPT 2016 EUROCRYPT 2016 EUROCRYPT 2016 CRYPTO 2016 CRYPTO 2016 CRYPTO 2016 TCC 2016 ASIACRYPT 2016 ASIACRYPT 2016 TCC 2015 JOFC 2015 EPRINT 2015 EPRINT 2015 EPRINT 2015 EPRINT 2015 EPRINT 2015 EPRINT 2015 EPRINT 2015 EPRINT 2015 EPRINT 2015 TCC 2015 TCC 2015 TCC 2015 TCC 2015 PKC 2015 EUROCRYPT 2015 EUROCRYPT 2015 CRYPTO 2015 CRYPTO 2015 CRYPTO 2015 CRYPTO 2015 ASIACRYPT 2015 ASIACRYPT 2014 EUROCRYPT 2014 EUROCRYPT 2014 EUROCRYPT 2014 EUROCRYPT 2014 TCC 2014 TCC 2014 EPRINT 2014 EPRINT 2014 EPRINT 2014 EPRINT 2014 EPRINT 2014 ASIACRYPT 2013 CRYPTO 2013 CRYPTO 2013 CRYPTO 2013 CRYPTO 2013 CRYPTO 2013 ASIACRYPT 2012 TCC 2012 EUROCRYPT 2012 CRYPTO 2012 CRYPTO 2012 CRYPTO 2011 PKC 2011 TCC 2011 TCC 2011 CRYPTO 2011 CRYPTO 2011 CRYPTO 2011 CRYPTO 2011 EUROCRYPT 2011 ASIACRYPT 2010 TCC 2010 TCC 2010 ASIACRYPT 2010 CRYPTO 2010 EUROCRYPT

2010 EPRINT: Motivated by the question of basing cryptographic protocols on stateless tamper-proof hardware tokens, we revisit the question of unconditional two-prover zero-knowledge proofs for $NP$. We show that such protocols exist in the {\em interactive PCP} model of Kalai and Raz (ICALP '08), where one of the provers is replaced by a PCP oracle. This strengthens the feasibility result of Ben-Or, Goldwasser, Kilian, and Wigderson (STOC '88) which requires two stateful provers. In contrast to previous zero-knowledge PCPs of Kilian, Petrank, and Tardos (STOC '97), in our protocol both the prover and the PCP oracle are efficient given an $NP$ witness.
Our main technical tool is a new primitive that we call {\em interactive locking}, an efficient realization of an unconditionally secure commitment scheme in the interactive PCP model. We implement interactive locking by adapting previous constructions of {\em interactive hashing} protocols to our setting, and also provide a direct construction which uses a minimal amount of interaction and improves over our interactive hashing based constructions. Finally, we apply the above results towards showing the feasibility of basing unconditional cryptography on {\em stateless} tamper-proof hardware tokens, and obtain the following results:

- We show that if tokens can be used to encapsulate other tokens, then there exist unconditional and statistically secure (in fact, UC secure) protocols for general secure computation.
- Even if token encapsulation is not possible, there are unconditional and statistically secure commitment protocols and zero-knowledge proofs for $NP$.
- Finally, if token encapsulation is not possible, then no protocol can realize statistically secure oblivious transfer.

2010 EPRINT: In this paper, we present two fully secure functional encryption schemes. Our first result is a fully secure attribute-based encryption (ABE) scheme. Previous constructions of ABE were only proven to be selectively secure. We achieve full security by adapting the dual system encryption methodology recently introduced by Waters and previously leveraged to obtain fully secure IBE and HIBE systems. The primary challenge in applying dual system encryption to ABE is the richer structure of keys and ciphertexts. In an IBE or HIBE system, keys and ciphertexts are both associated with the same type of simple object: identities. In an ABE system, keys and ciphertexts are associated with more complex objects: attributes and access formulas. We use a novel information-theoretic argument to adapt the dual system encryption methodology to the more complicated structure of ABE systems. We construct our system in composite order bilinear groups, where the order is a product of three primes. We prove the security of our system from three static assumptions. Our ABE scheme supports arbitrary monotone access formulas. Our second result is a fully secure (attribute-hiding) predicate encryption (PE) scheme for inner-product predicates. As for ABE, previous constructions of such schemes were only proven to be selectively secure. Security is proven under a non-interactive assumption whose size does not depend on the number of queries. The scheme is comparably efficient to existing selectively secure schemes. We also present a fully secure hierarchical PE scheme under the same assumption. The key technique used to obtain these results is an elaborate combination of the dual system encryption methodology (adapted to the structure of inner product PE systems) and a new approach on bilinear pairings using the notion of dual pairing vector spaces (DPVS) proposed by Okamoto and Takashima.

2010 EPRINT: A number of works have investigated using tamper-proof hardware tokens as tools to achieve a variety of cryptographic tasks. In particular, Goldreich and Ostrovsky considered the goal of software protection via oblivious RAM. Goldwasser, Kalai, and Rothblum introduced the concept of \emph{one-time programs}: in a one-time program, an honest sender sends a set of {\em simple} hardware tokens to a (potentially malicious) receiver.
The hardware tokens allow the receiver to execute a secret program specified by the sender's tokens exactly once (or, more generally, up to a fixed $t$ times). A recent line of work initiated by Katz examined the problem of achieving UC-secure computation using hardware tokens. Motivated by the goal of unifying and strengthening these previous notions, we consider the general question of basing secure computation on hardware tokens. We show that the following tasks, which cannot be realized in the "plain" model, become feasible if the parties are allowed to generate and exchange tamper-proof hardware tokens.

- Unconditional non-interactive secure computation: We show that by exchanging simple stateful hardware tokens, any functionality can be realized with unconditional security against malicious parties. In the case of two-party functionalities $f(x,y)$ which take their inputs from a sender and a receiver and deliver their output to the receiver, our protocol is non-interactive and only requires a unidirectional communication of simple stateful tokens from the sender to the receiver. This strengthens previous feasibility results for one-time programs both by providing unconditional security and by offering general protection against malicious senders. As is typically the case for unconditionally secure protocols, our protocol is in fact UC-secure. This improves over previous works on UC-secure computation based on hardware tokens, which provided computational security under cryptographic assumptions.
- Interactive secure computation from stateless tokens based on one-way functions: We show that stateless hardware tokens are sufficient to base general secure (in fact, UC-secure) computation on the existence of one-way functions. One cannot hope for security against unbounded adversaries with stateless tokens, since an unbounded adversary could query the token multiple times to "learn" the functionality it contains.
- Non-interactive secure computation from stateless tokens: We consider the problem of designing non-interactive secure computation from stateless tokens for stateless oblivious reactive functionalities, i.e., reactive functionalities which allow unlimited queries from the receiver (these are the only functionalities one can hope to realize non-interactively with stateless tokens). By building on recent techniques from resettably secure computation, we give a general positive result for stateless oblivious reactive functionalities under standard cryptographic assumptions. This result generalizes the notion of (unlimited-use) obfuscation by providing security against a malicious sender, and also provides the first general feasibility result for program obfuscation using stateless tokens.

2009 TCC 2009 EUROCRYPT 2009 CRYPTO 2009 PKC 2008 EUROCRYPT 2008 EUROCRYPT 2008 EUROCRYPT 2008 EUROCRYPT 2008 CRYPTO

2008 EPRINT: In this work, we design a new public key broadcast encryption system, and we focus on a critical parameter of device key size: the amount of the cryptographic key material that must be stored securely on the receiving devices. Our new scheme has ciphertext size overhead $O(r)$, where $r$ is the number of revoked users, and the size of public and private keys is only a constant number of group elements from an elliptic-curve group of prime order. All previous work, even in the restricted case of systems based on symmetric keys, required at least $\lg(n)$ keys stored on each device.
In addition, we show that our techniques can be used to realize Attribute-Based Encryption (ABE) systems with non-monotonic access formulas, where our key storage is significantly more efficient than previous solutions. Our results are in the standard model under a new, but non-interactive, assumption.

2008 EPRINT: We study the complexity of securely evaluating arithmetic circuits over finite rings. This question is motivated by natural secure computation tasks. Focusing mainly on the case of {\em two-party} protocols with security against {\em malicious} parties, our main goals are to: (1) only make black-box calls to the ring operations and standard cryptographic primitives, and (2) minimize the number of such black-box calls as well as the communication overhead. We present several solutions which differ in their efficiency, generality, and underlying intractability assumptions. These include:

- An {\em unconditionally secure} protocol in the OT-hybrid model which makes a black-box use of an arbitrary ring $R$, but where the number of ring operations grows linearly with (an upper bound on) $\log|R|$.
- Computationally secure protocols in the OT-hybrid model which make a black-box use of an underlying ring, and in which the number of ring operations does not grow with the ring size. The protocols rely on variants of previous intractability assumptions related to linear codes. In the most efficient instance of these protocols, applied to a suitable class of fields, the (amortized) communication cost is a constant number of field elements per multiplication gate and the computational cost is dominated by $O(\log k)$ field operations per gate, where $k$ is a security parameter. These results extend a previous approach of Naor and Pinkas for secure polynomial evaluation ({\em SIAM J. Comput.}, 35(5), 2006).
- A protocol for the rings $\mathbb{Z}_m=\mathbb{Z}/m\mathbb{Z}$ which only makes a black-box use of a homomorphic encryption scheme. When $m$ is prime, the (amortized) number of calls to the encryption scheme for each gate of the circuit is constant.

All of our protocols are in fact {\em UC-secure} in the OT-hybrid model and can be generalized to {\em multiparty} computation with an arbitrary number of malicious parties.

2007 ASIACRYPT

2007 EPRINT: We consider the problem of constructing efficient locally decodable codes in the presence of a computationally bounded adversary. Assuming the existence of one-way functions, we construct {\em efficient} locally decodable codes with positive information rate and \emph{low} (almost optimal) query complexity which can correctly decode any given bit of the message from constant channel error rate $\rho$. This compares favorably to our state of knowledge of locally-decodable codes without cryptographic assumptions. For all our constructions, the probability, for any polynomial-time adversary, that the decoding algorithm incorrectly decodes any bit of the message is negligible in the security parameter.

2007 EPRINT: We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes.
2007 EPRINT: Non-interactive zero-knowledge proofs and non-interactive witness-indistinguishable proofs have played a significant role in the theory of cryptography. However, lack of efficiency has prevented them from being used in practice. One of the roots of this inefficiency is that non-interactive zero-knowledge proofs have been constructed for general NP-complete languages such as Circuit Satisfiability, causing an expensive blowup in the size of the statement when reducing it to a circuit. The contribution of this paper is a general methodology for constructing very simple and efficient non-interactive zero-knowledge proofs and non-interactive witness-indistinguishable proofs that work directly for groups with a bilinear map, without needing a reduction to Circuit Satisfiability. Groups with bilinear maps have enjoyed tremendous success in the field of cryptography in recent years and have been used to construct a plethora of protocols. This paper provides non-interactive witness-indistinguishable proofs and non-interactive zero-knowledge proofs that can be used in connection with these protocols. Our goal is to spread the use of non-interactive cryptographic proofs from mainly theoretical purposes to the large class of practical cryptographic protocols based on bilinear groups.

2007 EPRINT: The Universal Composability framework was introduced by Canetti to study the security of protocols which are concurrently executed with other protocols in a network environment. Unfortunately it was shown that in the so-called plain model, a large class of functionalities cannot be securely realized. These severe impossibility results motivated the study of other models involving some sort of setup assumptions, where general positive results can be obtained. Until recently, all the setup assumptions which were proposed required some trusted third party (or parties). Katz recently proposed using a \emph{physical setup} to avoid such trusted setup assumptions. In his model, the physical setup phase includes the parties exchanging tamper proof hardware tokens implementing some functionality. The tamper proof hardware is modeled so as to assume that the receiver of the token can do nothing more than observe its input/output characteristics. It is further assumed that the sender \emph{knows} the program code of the hardware token which it distributed. Based on the DDH assumption, Katz gave general positive results for universally composable multi-party computation tolerating any number of dishonest parties, making this model quite attractive. In this paper, we present new constructions for UC secure computation using tamper proof hardware (in a stronger model). Our results represent an improvement over the results of Katz in several directions using substantially different techniques. Interestingly, our security proofs do not rely on being able to rewind the hardware tokens created by malicious parties. This means that we are able to relax the assumption that the parties \emph{know} the code of the hardware token which they distributed. This allows us to model real-life attacks where, for example, a party may simply pass on the token obtained from one party to the other without actually knowing its functionality. Furthermore, our construction models the interaction with the tamper-resistant hardware as a simple request-reply protocol. Thus, we show that the hardware tokens used in our construction can be \emph{resettable}.
In fact, it suffices to use tokens which are completely stateless (and thus cannot execute a multi-round protocol). Our protocol is also based on general assumptions (namely enhanced trapdoor permutations).

2007 EPRINT: Predicate encryption is a new paradigm generalizing, among other things, identity-based encryption. In a predicate encryption scheme, secret keys correspond to predicates and ciphertexts are associated with attributes; the secret key $SK_f$ corresponding to the predicate $f$ can be used to decrypt a ciphertext associated with attribute $I$ if and only if $f(I)=1$. Constructions of such schemes are currently known for relatively few classes of predicates. We construct such a scheme for predicates corresponding to the evaluation of inner products over $\mathbb{Z}_N$ (for some large integer $N$). This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulae, or threshold predicates (among others). Besides serving as what we feel is a significant step forward in the theory of predicate encryption, our results lead to a number of applications that are interesting in their own right.
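The inner-product predicate itself, as opposed to the encryption scheme built around it, is easy to illustrate, including how equality and polynomial predicates reduce to it. A plain-Python sketch; the encodings follow the standard reductions the abstract alludes to, while the modulus and names are placeholders:

```python
N = 2**31 - 1   # stand-in modulus; the scheme fixes its own large N

def inner_product_predicate(v, x):
    # f_v(x) = 1 iff <v, x> = 0 (mod N)
    return sum(vi * xi for vi, xi in zip(v, x)) % N == 0

# Equality test "attr == 42" as an inner product: <(-42, 1), (1, attr)> = attr - 42.
v = (-42 % N, 1)
assert inner_product_predicate(v, (1, 42))
assert not inner_product_predicate(v, (1, 43))

# Polynomial test p(attr) == 0 via coefficients against powers of attr:
coeffs = (6, -5, 1)                      # p(t) = t^2 - 5t + 6 = (t-2)(t-3)
powers = lambda t: (1, t, t * t % N)
assert inner_product_predicate(coeffs, powers(2))
assert not inner_product_predicate(coeffs, powers(4))
```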
2007 EPRINT: \emph{Precise zero knowledge} introduced by Micali and Pass (STOC'06) guarantees that the view of any verifier $V$ can be simulated in time closely related to the \emph{actual} (as opposed to worst-case) time spent by $V$ in the generated view. We provide the first constructions of precise concurrent zero-knowledge protocols. Our constructions have essentially optimal precision; consequently this improves also upon the previously tightest non-precise concurrent zero-knowledge protocols by Kilian and Petrank (STOC'01) and Prabhakaran, Rosen and Sahai (FOCS'02) whose simulators have a quadratic worst-case overhead. Additionally, we achieve a statistically-precise concurrent zero-knowledge property, which requires simulation of unbounded verifiers participating in an unbounded number of concurrent executions; as such we obtain the first (even non-precise) concurrent zero-knowledge protocols which handle verifiers participating in a super-polynomial number of concurrent executions.

2006 CRYPTO 2006 EUROCRYPT 2006 EUROCRYPT 2006 EUROCRYPT 2006 EUROCRYPT 2006 TCC

2006 EPRINT: We construct the first fully collusion resistant tracing traitors system with sublinear size ciphertexts and constant size private keys. More precisely, let $N$ be the total number of users. Our system generates ciphertexts of size $O(\sqrt{N})$ and private keys of size $O(1)$. We build our system by first building a simpler primitive called private linear broadcast encryption (PLBE). We then show that any PLBE gives a tracing traitors system with the same parameters. Our system uses bilinear maps in groups of composite order.

2006 EPRINT: We present the first aggregate signature, the first multisignature, and the first verifiably encrypted signature provably secure without random oracles. Our constructions derive from a novel application of a recent signature scheme due to Waters. Signatures in our aggregate signature scheme are sequentially constructed, but knowledge of the order in which messages were signed is not necessary for verification. The aggregate signatures obtained are shorter than Lysyanskaya et al. sequential aggregates and can be verified more efficiently than Boneh et al. aggregates. We also consider applications to secure routing and proxy signatures.

2006 EPRINT: There is a vast body of work on {\em implementing} anonymous communication. In this paper, we study the possibility of using anonymous communication as a {\em building block}, and show that one can leverage on anonymity in a variety of cryptographic contexts. Our results go in two directions.

- Feasibility. We show that anonymous communication over {\em insecure} channels can be used to implement unconditionally secure point-to-point channels, and hence general multi-party protocols with unconditional security in the presence of an honest majority. In contrast, anonymity cannot be generally used to obtain unconditional security when there is no honest majority.
- Efficiency. We show that anonymous channels can yield substantial efficiency improvements for several natural secure computation tasks. In particular, we present the first solution to the problem of private information retrieval (PIR) which can handle multiple users while being close to optimal with respect to {\em both} communication and computation. A key observation that underlies these results is that {\em local randomization} of inputs, via secret-sharing, when combined with the {\em global mixing} of the shares, provided by anonymity, allows one to carry out useful computations on the inputs while keeping the inputs private.

2006 EPRINT: We prove the equivalence of two definitions of non-malleable encryption, one based on the simulation approach of Dolev, Dwork and Naor and the other based on the comparison approach of Bellare, Desai, Pointcheval and Rogaway. Our definitions are slightly stronger than the original ones. The equivalence relies on a new characterization of non-malleable encryption in terms of the standard notion of indistinguishability of Goldwasser and Micali. We show that non-malleability is equivalent to indistinguishability under a "parallel chosen ciphertext attack," this being a new kind of chosen ciphertext attack we introduce, in which the adversary's decryption queries are not allowed to depend on answers to previous queries, but must be made all at once. This characterization simplifies both the notion of non-malleable encryption and its usage, and enables one to see more easily how it compares with other notions of encryption. The results here apply to non-malleable encryption under any form of attack, whether chosen-plaintext, chosen-ciphertext, or adaptive chosen-ciphertext.

2006 EPRINT: As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys which subsumes Hierarchical Identity-Based Encryption (HIBE).

2006 EPRINT: We provide the first construction of a concurrent and non-malleable zero knowledge argument for every language in NP. We stress that our construction is in the plain model with no common random string, trusted parties, or super-polynomial simulation.
That is, we construct a zero knowledge protocol $\Pi$ such that for every polynomial-time adversary that can adaptively and concurrently schedule polynomially many executions of $\Pi$, and corrupt some of the verifiers and some of the provers in these sessions, there is a polynomial-time simulator that can simulate a transcript of the entire execution, along with the witnesses for all statements proven by a corrupt prover to an honest verifier. Our security model is the traditional model for concurrent zero knowledge, where the statements to be proven by the honest provers are fixed in advance and do not depend on the previous history (but can be correlated with each other); corrupted provers, of course, can choose the statements adaptively. We also prove that there exists some functionality F (a combination of zero knowledge and oblivious transfer) such that it is impossible to obtain a concurrent non-malleable protocol for F in this model. Previous impossibility results for composable protocols ruled out existence of protocols for a wider class of functionalities (including zero knowledge!) but only if these protocols were required to remain secure when executed concurrently with arbitrarily chosen different protocols (Lindell, FOCS 2003) or if these protocols were required to remain secure when the honest parties' inputs in each execution are chosen adaptively based on the results of previous executions (Lindell, TCC 2004). We obtain an $\tilde{O}(n)$-round protocol under the assumption that one-to-one one-way functions exist. This can be improved to $\tilde{O}(k\log n)$ rounds under the assumption that there exist $k$-round statistically hiding commitment schemes. Our protocol is a black-box zero knowledge protocol.

2006 EPRINT: In this paper we show a general transformation from any honest verifier statistical zero-knowledge argument to a concurrent statistical zero-knowledge argument. Our transformation relies only on the existence of one-way functions. It is known that the existence of zero-knowledge systems for any non-trivial language implies one-way functions. Hence our transformation \emph{unconditionally} shows that concurrent statistical zero-knowledge arguments for a non-trivial language exist if and only if standalone secure statistical zero-knowledge arguments for that language exist. Further, applying our transformation to the recent statistical zero-knowledge argument system of Nguyen et al. (STOC'06) yields the first concurrent statistical zero-knowledge argument system for all languages in \textbf{NP} from any one-way function.

2005 EUROCRYPT 2005 TCC

2005 EPRINT: We construct a secure protocol for any multi-party functionality that remains secure (under a relaxed definition of security) when executed concurrently with multiple copies of itself and other protocols. We stress that we do *not* use any assumptions on existence of trusted parties, common reference string, honest majority or synchronicity of the network. The relaxation of security, introduced by Prabhakaran and Sahai (STOC '04), is obtained by allowing the ideal-model simulator to run in *quasi-polynomial* (as opposed to polynomial) time. Quasi-polynomial simulation suffices to ensure security for most applications of multi-party computation. Furthermore, Lindell (FOCS '03, TCC '04) recently showed that such a protocol is *impossible* to obtain under the more standard definition of *polynomial-time* simulation by an ideal adversary.
Our construction is the first such protocol under reasonably standard cryptographic assumptions. That is, existence of a hash function collection that is collision resistant with respect to circuits of subexponential size, and existence of trapdoor permutations that are secure with respect to circuits of quasi-polynomial size. We introduce a new technique: "protocol condensing". That is, taking a protocol that has strong security properties but requires *super-polynomial* communication and computation, and then transforming it into a protocol with *polynomial* communication and computation, that still inherits the strong security properties of the original protocol. Our result is obtained by combining this technique with previous techniques of Canetti, Lindell, Ostrovsky, and Sahai (STOC '02) and Pass (STOC '04).

2005 EPRINT: We provide unconditional constructions of concurrent statistical zero-knowledge proofs for a variety of non-trivial problems (not known to have probabilistic polynomial-time algorithms). The problems include Graph Isomorphism, Graph Nonisomorphism, Quadratic Residuosity, Quadratic Nonresiduosity, a restricted version of Statistical Difference, and approximate versions of the (coNP forms of the) Shortest Vector Problem and Closest Vector Problem in lattices. For some of the problems, such as Graph Isomorphism and Quadratic Residuosity, the proof systems have provers that can be implemented in polynomial time (given an NP witness) and have $\tilde{O}(\log n)$ rounds, which is known to be essentially optimal for black-box simulation. To the best of our knowledge, these are the first constructions of concurrent zero-knowledge protocols in the asynchronous model (without timing assumptions) that do not require complexity assumptions (such as the existence of one-way functions).

2005 EPRINT: Non-interactive zero-knowledge (NIZK) systems are fundamental cryptographic primitives used in many constructions, including CCA2-secure cryptosystems, digital signatures, and various cryptographic protocols. What makes them especially attractive is that they work equally well in a concurrent setting, which is notoriously hard for interactive zero-knowledge protocols. However, while for interactive zero-knowledge we know how to construct statistical zero-knowledge argument systems for all NP languages, for non-interactive zero-knowledge this problem remained open since the inception of NIZK in the late 1980's. Here we resolve two problems regarding NIZK:

- we construct the first perfect NIZK argument system for any NP language.
- we construct the first UC-secure NIZK protocols for any NP language in the presence of a dynamic/adaptive adversary.

While it was already known how to construct efficient prover computational NIZK proofs for any NP language, the known techniques yield large common reference strings and large NIZK proofs. As an additional implication of our techniques, we considerably reduce both the size of the common reference string and the size of the proofs.

2004 EUROCRYPT

2004 EPRINT: Informally, an obfuscator $\Obf$ is an efficient, probabilistic "compiler" that transforms a program $P$ into a new program $\Obf(P)$ with the same functionality as $P$, but such that $\Obf(P)$ protects any secrets that may be built into and used by $P$.
Program obfuscation, if possible, would have numerous important cryptographic applications, including: (1) "intellectual property" protection of secret algorithms and keys in software, (2) solving the long-standing open problem of homomorphic public-key encryption, (3) controlled delegation of authority and access, (4) transforming private-key encryption into public-key encryption, and (5) access control systems. Unfortunately however, program obfuscators that work on arbitrary programs cannot exist [Barak et al.]. No positive results for program obfuscation were known prior to this work. In this paper, we provide the first positive results in program obfuscation. We focus on the goal of access control, and give several provable obfuscations for complex access control functionalities, in the random oracle model. Our results are obtained through non-trivial compositions of obfuscations; we note that general composition of obfuscations is impossible, and so developing techniques for composing obfuscations is an important goal. Our work can also be seen as making initial progress toward the goal of obfuscating finite automata or regular expressions, an important general class of machines which are not ruled out by the impossibility results of Barak et al. We also note that our work provides the first formal proof techniques for obfuscation, which we expect to be useful in future work in this area.

2004 EPRINT: We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as a set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, $\omega$, to decrypt a ciphertext encrypted with an identity, $\omega'$, if and only if the identities $\omega$ and $\omega'$ are close to each other as measured by the "set overlap" distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy IBE can be used for a type of application that we term "attribute-based encryption". In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model.
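Stripped of all cryptography, the error-tolerance condition in Fuzzy IBE is just a set-overlap threshold: a key for identity $\omega$ decrypts a ciphertext for identity $\omega'$ iff $|\omega \cap \omega'| \ge d$. A toy illustration, with invented attribute names:

```python
def fuzzy_match(key_id, ct_id, d):
    # Decryption succeeds iff the two attribute sets overlap in >= d attributes.
    return len(set(key_id) & set(ct_id)) >= d

enrolled = {"ridge3", "ridge7", "whorl1", "arch2", "loop5"}
scanned  = {"ridge3", "ridge7", "whorl1", "arch9", "loop5"}   # one noisy reading
print(fuzzy_match(enrolled, scanned, d=4))                    # True: 4 of 5 agree
```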
2004 EPRINT: We propose a modification to the framework of Universally Composable (UC) security [Canetti'01]. Our new notion involves comparing the protocol executions with an ideal execution involving ideal functionalities (just as in UC security), but allowing the environment and adversary access to some super-polynomial computational power. We argue the meaningfulness of the new notion, which in particular subsumes many of the traditional notions of security. We generalize the Universal Composition theorem of [Canetti'01] to the new setting. Then under new computational assumptions, we realize secure multi-party computation (for static adversaries) without a common reference string or any other set-up assumptions, in the new framework. This is known to be impossible under the UC framework.

2003 CRYPTO

2002 EPRINT: We introduce a new methodology for achieving security against adaptive chosen-ciphertext attack (CCA) for public-key encryption schemes, which we call the {\em oblivious decryptors model}. The oblivious decryptors model generalizes both the two-key model of Naor and Yung, as well as the Cramer-Shoup encryption schemes. The key ingredient in our new paradigm is Sahai's notion of Simulation-Sound NIZK proofs. Our methodology is easy to use: First, construct an encryption scheme which satisfies the "bare" oblivious-decryptors model: this can be done quite easily, with simple proofs of security. Then, by adding a Simulation-Sound NIZK proof, the scheme becomes provably CCA-secure. Note that this paradigm allows for the use of {\em efficient} special-purpose Simulation-Sound NIZK proofs, such as those recently put forward by Cramer and Shoup. We also show how to present all known efficient (provably secure) CCA-secure public-key encryption schemes as special cases of our model.

2002 EPRINT: We consider the problem of constructing Concurrent Zero Knowledge Proofs, in which the fascinating and useful "zero knowledge" property is guaranteed even in situations where multiple concurrent proof sessions are executed with many colluding dishonest verifiers. Canetti et al. show that black-box concurrent zero knowledge proofs for non-trivial languages require $\tilde\Omega(\log k)$ rounds where $k$ is the security parameter. Till now the best known upper bound on the number of rounds for NP languages was $\omega(\log^2 k)$, due to Kilian, Petrank and Richardson. We establish an upper bound of $\omega(\log k)$ on the number of rounds for NP languages, thereby closing the gap between the upper and lower bounds, up to a $\omega(\log\log k)$ factor.

2002 EPRINT: We show how to securely realize any two-party and multi-party functionality in a {\em universally composable} way, regardless of the number of corrupted participants. That is, we consider an asynchronous multi-party network with open communication and an adversary that can adaptively corrupt as many parties as it wishes. In this setting, our protocols allow any subset of the parties (with pairs of parties being a special case) to securely realize any desired functionality of their local inputs, and be guaranteed that security is preserved regardless of the activity in the rest of the network. This implies that security is preserved under concurrent composition of an unbounded number of protocol executions, it implies non-malleability with respect to arbitrary protocols, and more. Our constructions are in the common reference string model and rely on standard intractability assumptions.

2001 CRYPTO 2001 CRYPTO 2001 EUROCRYPT

2001 EPRINT: Informally, an {\em obfuscator} $O$ is an (efficient, probabilistic) "compiler" that takes as input a program (or circuit) $P$ and produces a new program $O(P)$ that has the same functionality as $P$ yet is "unintelligible" in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice's theorem. Most of these applications are based on an interpretation of the "unintelligibility" condition in obfuscation as meaning that $O(P)$ is a "virtual black box," in the sense that anything one can efficiently compute given $O(P)$, one could also efficiently compute given oracle access to $P$.
In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of functions $F$ that are {\em inherently unobfuscatable} in the following sense: there is a property $\pi : F \rightarrow \{0,1\}$ such that (a) given {\em any program} that computes a function $f\in F$, the value $\pi(f)$ can be efficiently computed, yet (b) given {\em oracle access} to a (randomly selected) function $f\in F$, no efficient algorithm can compute $\pi(f)$ much better than random guessing. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only {\em approximately} preserve the functionality, and (c) only need to work for very restricted models of computation ($TC^0$). We also rule out several potential applications of obfuscators, by constructing "unobfuscatable" signature schemes, encryption schemes, and pseudorandom function families.

2000 EUROCRYPT

2000 EPRINT: We present the first complete problem for SZK, the class of (promise) problems possessing statistical zero-knowledge proofs (against an honest verifier). The problem, called STATISTICAL DIFFERENCE, is to decide whether two efficiently samplable distributions are either statistically close or far apart. This gives a new characterization of SZK that makes no reference to interaction or zero knowledge. We propose the use of complete problems to unify and extend the study of statistical zero knowledge. To this end, we examine several consequences of our Completeness Theorem and its proof, such as:

1. A way to make every (honest-verifier) statistical zero-knowledge proof very communication efficient, with the prover sending only one bit to the verifier (to achieve soundness error 1/2).
2. Simpler proofs of many of the previously known results about statistical zero knowledge, such as the Fortnow and Aiello-Håstad upper bounds on the complexity of SZK and Okamoto's result that SZK is closed under complement.
3. Strong closure properties of SZK which amount to constructing statistical zero-knowledge proofs for complex assertions built out of simpler assertions already shown to be in SZK.
4. New results about the various measures of "knowledge complexity," including a collapse in the hierarchy corresponding to knowledge complexity in the "hint" sense.
5. Algorithms for manipulating the statistical difference between efficiently samplable distributions, including transformations which "polarize" and "reverse" the statistical relationship between a pair of distributions.
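STATISTICAL DIFFERENCE concerns the statistical distance $\frac{1}{2}\sum_x |p(x)-q(x)|$ between two samplable distributions. A quick empirical estimator in plain Python, purely to illustrate the quantity the promise problem is about:

```python
from collections import Counter
import random

def empirical_sd(sample_p, sample_q, n=100_000):
    # Estimate (1/2) * sum |p(x) - q(x)| from n samples of each distribution.
    cnt_p = Counter(sample_p() for _ in range(n))
    cnt_q = Counter(sample_q() for _ in range(n))
    support = set(cnt_p) | set(cnt_q)
    return 0.5 * sum(abs(cnt_p[x] - cnt_q[x]) / n for x in support)

fair   = lambda: random.randrange(2)
biased = lambda: int(random.random() < 0.9)
print(empirical_sd(fair, biased))   # close to 0.4 = |0.5 - 0.9|
```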
This characterization simplifies both the notion of non-malleable encryption and its usage, and enables one to see more easily how it compares with other notions of encryption. The results here apply to non-malleable encryption under any form of attack, whether chosen-plaintext, chosen-ciphertext, or adaptive chosen-ciphertext.

1999 EPRINT One of the toughest challenges in designing cryptographic protocols is to design them so that they will remain secure even when composed. For example, concurrent executions of a zero-knowledge protocol by a single prover (with one or more verifiers) may leak information and may not be zero-knowledge in toto. In this work we: (1) Suggest time as a mechanism to design concurrent cryptographic protocols, and in particular to maintain zero-knowledge under concurrent execution. (2) Introduce the notion of Deniable Authentication and connect it to the problem of concurrent zero-knowledge. We do not assume global synchronization; however, we assume an (alpha, beta) timing constraint: for any two processors $P_1$ and $P_2$, if $P_1$ measures alpha elapsed time on its local clock and $P_2$ measures beta elapsed time on its local clock, and $P_2$ starts after $P_1$ does, then $P_2$ will finish after $P_1$ does. We show that for an adversary controlling all the processors' clocks (as well as their communication channels) but which is constrained by an (alpha, beta) constraint, there exist four-round almost concurrent zero-knowledge interactive proofs and perfect concurrent zero-knowledge arguments for every language in NP. We also address the more specific problem of Deniable Authentication, for which we propose several particularly efficient solutions. Deniable Authentication is of independent interest, even in the sequential case; our concurrent solutions yield sequential solutions, without recourse to timing, i.e., in the standard model.

1998 CRYPTO

1998 CRYPTO

1998 EPRINT The heart of the task of building public key cryptosystems is viewed as that of making "trapdoors"; in fact, public key cryptosystems and trapdoor functions are often discussed as synonymous. How accurate is this view? In this paper we endeavor to get a better understanding of the nature of "trapdoorness" and its relation to public key cryptosystems, by broadening the scope of the investigation: we look at general trapdoor functions; that is, functions that are not necessarily injective (i.e., one-to-one). Our first result is somewhat surprising: we show that non-injective trapdoor functions (with super-polynomial pre-image size) can be constructed from any one-way function (and hence it is unlikely that they suffice for public key encryption). On the other hand, we show that trapdoor functions with polynomial pre-image size are sufficient for public key encryption. Together, these two results indicate that the pre-image size is a fundamental parameter of trapdoor functions. We then turn our attention to the converse, asking what kinds of trapdoor functions can be constructed from public key cryptosystems. We take a first step by showing that in the random-oracle model one can construct injective trapdoor functions from any public key cryptosystem.
Eurocrypt 2019 TCC 2017 Crypto 2014 TCC 2013 TCC 2012 Eurocrypt 2010 Crypto 2008 Crypto 2007 TCC 2005 Asiacrypt 2005 Eurocrypt 2001 #### Coauthors Shweta Agrawal (3) Shashank Agrawal (1) Prabhanjan Vijendra Ananth (1) Prabhanjan Ananth (11) Benny Applebaum (1) Boaz Barak (7) Mihir Bellare (5) Allison Bishop (3) Nir Bitansky (1) Dan Boneh (9) Elette Boyle (1) Zvika Brakerski (1) Ran Canetti (3) David Cash (1) Nishanth Chandran (3) Jean-Sébastien Coron (2) Giovanni Di Crescenzo (1) Yi Deng (1) Yevgeniy Dodis (2) Cynthia Dwork (2) Edith Elkind (1) Dengguo Feng (1) Rex Fernando (1) Sanjam Garg (11) Daniel Genkin (1) Rosario Gennaro (1) Craig Gentry (8) Steven Goldfeder (1) Oded Goldreich (3) Shafi Goldwasser (1) S. Dov Gordon (2) Vipul Goyal (24) Jens Groth (6) Divya Gupta (5) Shai Halevi (9) Dennis Hofheinz (1) Susan Hohenberger (2) Samuel B. Hopkins (1) Russell Impagliazzo (2) Yuval Ishai (29) Tibor Jager (1) Abhishek Jain (15) Aayush Jain (9) Yael Tauman Kalai (6) Bhavana Kanukurthi (1) Jonathan Katz (3) Dakshita Khurana (10) Sam Kim (1) Ilan Komargodski (1) Venkata Koppula (1) Pravesh Kothari (1) Daniel Kraschewski (3) Ravi Kumar (1) Abishek Kumarasubramanian (2) Eyal Kushilevitz (7) Tancrède Lepoint (3) Kevin Lewi (1) Dongdai Lin (1) Huijia Lin (3) Yehuda Lindell (1) Feng-Hao Liu (1) Steve Lu (2) Benjamin Lynn (1) Ben Lynn (1) Hemanta K. Maji (8) Christian Matt (2) Daniele Micciancio (2) Eric Miles (6) Ilya Mironov (2) Tal Moran (1) Ryan Moriarty (2) Pratyay Mukherjee (1) Moni Naor (2) Tatsuaki Okamoto (2) Shien Jin Ong (2) Claudio Orlandi (3) Rafail Ostrovsky (19) Omkant Pandey (8) Omer Paneth (3) Rafael Pass (3) Alain Passelègue (1) Chris Peikert (2) Giuseppe Persiano (1) Manoj Prabhakaran (20) Sridhar Rajagopalan (1) Vanishree Rao (4) Peter M. R. Rasmussen (2) Mariana Raykova (3) Steven Rudich (2) Alfredo De Santis (1) Dominique Schröder (1) Hakan Seyalioglu (2) Hovav Shacham (2) Elaine Shi (1) Akshayaram Srinivasan (1) Katsuyuki Takashima (2) Mehdi Tibouchi (3) Eran Tromer (1) Wei-Lung Dustin Tseng (2) Dominique Unruh (1) Ramarathnam Venkatesan (2) Muthuramakrishnan Venkitasubramaniam (2) Ivan Visconti (2)
https://motls.blogspot.com/2018/04/can-kids-learn-to-think-mathematically.html
## Tuesday, April 17, 2018 ... //

### Can kids learn to think mathematically from granddaddy's animals?

On Saturday night, we had a reunion – the end of elementary school after 30 years. Lots of beer, memories, personal stuff. I always discuss some serious topics. So one classmate (DS) holds an impressive 3 bitcoins and is a full-blown hodler ;-) while your humble correspondent and another classmate (JK) were arguing why the Bitcoin pricing was a bubble and what it meant. I asked lots of people about Hejný's method to teach mathematics. (Teachers must be silent in the method, kids must invent everything by themselves, they solve some 10+ types of problems in recreational mathematics for 8 years, without any conceptual progress, and at the end, they tell you how much they love and understand mathematics because of this method.) By the end of the exchanges, 10 people were familiar with the topic; 8 of them had been familiar with it to start with. Only 2 were sort of positive about that "constructivist" method in education – and one of them (VK) arguably changed his mind to a large extent. The rest were highly critical, just like I am. In March, I discussed particular problems, as seen on the matika.in website. All of them are recreational mathematics of some kind and they are supposed to be solved by guesswork – by trial and error. That brute force strategy is a typical non-mathematical approach to the problems – mathematics is all about searching for patterns and clever things to solve otherwise hard or unsolvable problems. The champions and opponents of the method disagree about all those problems as well, although some of them could be used in a wise classroom, too. But nothing polarizes the two camps as clearly as Daddy Forrest. Search for that phrase on the matika.in website and try to solve some of the problems. Daddy should really be "granddaddy" (děda), some old guy from the family who lives in the countryside, who owns animals, and whose name is derived from the "forest" (les-Lesoň). This whole "environment" of Daddy Forrest's animals uses animal codes for small integers, up to 20. You may search for the pictures on Google Images. The numbers 1, 2, 3, 4, 5 are replaced with a mouse, cat, goose, dog, goat. 10 is a cow and 20 is a horse. There are some other animals, too. The textbooks contain tons of colorful pictures of these animals. In the classroom, they use some stickers with the pictures of the animals that may be attached to a board. On top of that, children have to memorize how to write and read some icons or quasi-letters that represent each animal. The problems are of the type: place two cats and a goose on one side and five mice and a dog on the other. Which side is stronger? Or: remove one animal from such an "animal equation" so that the equation holds (they don't use that language). Now, opponents of the method such as myself usually say that it's nonsense and there's nothing about mathematics that the children learn from this activity. It's arguably the single most obscene example of the nonsense that is being pumped into the children and that is being marketed as mathematics. On the other hand, the people who defend the method – or people who have the natural tendency to defend it – often praise it as a great idea that teaches kids to think mathematically. Who is right? Of course the opponents are right. But what do the others say?
A classmate VK turned out to be a fan of the method – we sat next to each other for some 8 years when we were kids. He was an excellent student who also had straight As throughout high school – something your revolting humble correspondent was extremely far from. OK, on Saturday night, he said: It's wonderful because the animals teach the kids that the digits, like the animals, are just another code and there's nothing else behind them. Great. I agree that they learn it, that's the key lesson here. But is that lesson correct? I don't think so. What VK said was that the conventions to represent integers are just social conventions and they may be changed. And when we translate from one convention to another, we get what we inserted. So there's no added value in the numerals, which is a great lesson to learn, VK seems to say. (I had some deja vu. I think that he said exactly the same thing when we were 15, and I reacted in the same negative way to his comment almost three decades ago.) But the miracle of mathematics is, I respond, that it does have an added value. Mathematics only starts after you define your language and conventions, once you have some symbols, relations, operations, and stuff like that, and you actually start to do something with these damn things! You discover laws, patterns, regularities, tricks, algorithms, methods, methodologies, and other things. Those things are the beef of mathematics. Mathematics is the added value. It's some abstract body of wisdom that exists in any scheme of conventions to represent integers and other objects – wisdom that therefore doesn't depend on any particular choice of conventions. By mathematics, we mean the beef that even Chinese or extraterrestrials who use very different symbols (or dancing) would still find underneath their sequences of symbols. Mathematics is the set of possible claims that may be written in any "language" modulo all the possible translations from one language to another! So the statement that "they're just a code and there's really nothing in it" either means that VK, despite all the straight As, thinks that there is nothing in mathematics; or that kids should be taught nothing about mathematics. Well, I beg to differ. I was explaining these things to him and at the end, he sort of agreed, although I can't be sure whether the agreement was coming from his heart. After all, he was probably saying similar things even 30 years ago, so it's some part of his thinking that seems unlikely to genuinely change after a 5-minute-long conversation. (Similarly for my thinking.) When I was 17, I read "Surely You're Joking Mr Feynman" and it was the first time I was exposed to the story about his father, who taught small Richard that the names of birds don't constitute knowledge. But I am pretty sure that my opinions about these basic matters were the same long before I was exposed to the Feynman phenomenon. You know, Daddy Forrest's animals are just another "language". The translation from one system of writing numerals to another is analogous to the translation from one language to another. Just like in the case of the birds, you don't learn a damn thing about the bird by that translation! Those are just words. And as the well-known proverb says: The more languages you know, the more time you have wasted with some humanities junk. ;-) So similar things should be taught at language classes – or classes that focus on these conventions should be considered analogous to classes of languages!
And those are not mathematics. They are really inferior in comparison with mathematics, every mathematically thinking person agrees, but even if you work hard to be diplomatic, you should appreciate the difference between mathematics classes and language classes. Now, the animals are a particularly stupid system to write integers. There's some similarity to the Roman numerals – except that the Roman numerals are much more clever than Forrest's animals. You may write things like MMXVIII in Roman numerals – it's not such a bad way to write 2018. But some numbers are much worse. I guess that the numbers with "8" in them are the most complicated ones: 888 is written as DCCCLXXXVIII, which is pretty bad. Nearby numbers are represented by Roman numerals that may have very different lengths – which is a big disadvantage relative to the decimal system, I think. Lots of things are more awkward and less systematic if expressed by Roman numerals. But the Roman numerals only use a few letters. Three is III. You just write I thrice. You don't need to memorize that three mice are equal to a goose. You're just adding lines. And when there are too many of them, e.g. five or ten, you replace them with V or X. Such emergent symbols are used for powers of ten or "five times powers of ten", which makes it rather easy to convert between decimal and Roman numerals. On top of that, the Roman numerals allow you to subtract, so that IX is nine, to save some space. But even Roman numerals, while more intelligent than the animals, are pretty low-brow. I think that small children – even first-graders – can learn Roman numerals. I surely did learn them when I was in the kindergarten. Kids may add somewhat bigger numbers when they become third-graders. But there's really nothing in it. It is a very special skill that doesn't lead to many interesting ramifications. It's a coincidence that we use the decimal system and we could use other systems. I guess that this is the point that VK is very excited about. I agree that this point is valid. We could use a base-8 or base-16 (hexadecimal) system to write integers. Everything would still work. But this is just one correct conceptual proposition about our "mathematical culture". It isn't useful elsewhere. In the same way, the conversion from base-8 to base-10 or even to base-7 isn't useful for anything, so it may be fun if you can do it, but there's no point in teaching it to every kid. (BTW Feynman specifically expressed the same opinion in the chapters about his work in the textbook committee. On the other hand, Hejný's method also tries to teach the kids to use the binary code, in the Biland environment.) If you learn the music notes – or if you learn some bizarre new way to write the notes – you haven't composed or played any music yet. You're surely far from being a Beethoven. In the same way, by playing with some strange codes for small integers, you haven't done any real mathematics yet. Music and mathematics are in the patterns. OK, do the problems of the type "which animal do you remove for the goose, horse, crocodile, and five mice to be as strong as a cow, skunk, hamster, and three cats" teach the kids to think mathematically? Well, they teach something. It's some rudimentary arithmetic problem expressed in an unusual language with lots of unusual symbols that the kids won't use anywhere when they leave the school – and they use it nowhere in other classes at the same school, either.
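As a minimal sketch (using the animal values quoted above; the particular equation is made up for illustration, since the textbook's crocodile, skunk, and hamster values weren't quoted), the entire "remove one animal so the sides are equally strong" genre reduces to a few integer sums once the code is translated:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DaddyForrest
{
    // The animal values quoted above: mouse=1, cat=2, goose=3,
    // dog=4, goat=5, cow=10, horse=20.
    static readonly Dictionary<string, int> Value = new Dictionary<string, int>
    {
        ["mouse"] = 1, ["cat"] = 2, ["goose"] = 3,
        ["dog"] = 4, ["goat"] = 5, ["cow"] = 10, ["horse"] = 20
    };

    static void Main()
    {
        // A made-up instance: which animal do you remove from the left
        // side so that both sides are equally strong?
        var left  = new[] { "horse", "dog" };                // 20 + 4 = 24
        var right = new[] { "cow", "goat", "goose", "cat" }; // 10 + 5 + 3 + 2 = 20

        int leftSum  = left.Sum(a => Value[a]);
        int rightSum = right.Sum(a => Value[a]);

        // Try removing each left-side animal in turn and compare the sums
        foreach (string animal in left)
            if (leftSum - Value[animal] == rightSum)
                Console.WriteLine("Remove the {0}.", animal); // prints "Remove the dog."
    }
}
```

That's the whole game: translate, add, compare – nothing is being exercised that couldn't be written as ordinary sums of small integers.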
But one question is how the kids actually solve these problems and how they're expected to solve them. Well, you can always convert all the animals to mice (a mouse is one). So you just draw lots of lines (well, the correct icon for a mouse is that ice cream) and in that way, you may compare which of the sums on the two sides of the equation or inequality is larger and by how much. I actually think that this reduction to "lots of ones" and the conversion of any problem to "addition of one and subtraction of one" is what they actually want the kids to do in their heads. This interpretation is also supported by the "staircase" environment – kids march and all addition and subtraction is reduced to individual steps, i.e. to the repeated addition or subtraction of one. This does teach something, but as soon as you need to work with many numbers or larger numbers, it is a catastrophically inefficient way to do the sums, right? So in practice, the kids must memorize the sums. Just like you memorize that 2+3=5 – there are not too many things of this importance – the kids probably do the same thing and they effectively memorize almost the same set of identities, but in an unusual language. So in this case, they memorize cat+goose=goat. Well, they don't really use "plus". To make their environment even more offensive, they write "cat goose = goat" with pictures. At the end, the only thing they learn is addition and subtraction of small integers using a very awkward artificial "language" where most of the kids' energy is probably consumed by the memorization of the distracting animals and their icons – which is clearly the non-mathematical (language-like) portion of the process. And this language that they spend so much time with is completely arbitrary, stupid, and useless in their future. It's just bizarre that the defenders of the method criticize the memorization of definitions of mathematical concepts, formulae, identities, rules, theorems, and algorithms. As far as I can see, these "templates" – which may be applied and generalized in so many ways – are clearly the most useful things that the kids should memorize. What is stupid is to force the kids to memorize lots of isolated facts and factoids, especially artificially invented ones, that aren't good for anything except for themselves. You know, it's surely easier to memorize isolated facts – because there's nothing conceptually hard about them – but that's exactly what makes them not very useful. If one memorizes some things that may be applied or generalized in many ways, that's a gem – even if the kid doesn't immediately get what's going on. But it's a point in the knowledge space that the kid may rely upon. When you memorize a list of Egyptian pharaohs (their names only), it's useless because the only question where this knowledge may be useful is the question "what is the list of Egyptian pharaohs" (or some "subsets" of this question). On the other hand, if you learn an algorithm to divide numbers or solve a set of two linear equations, that may be applied in infinitely many situations – not only with infinitely many numbers that define the exact problem but also in infinitely many occupations and activities that these occupations may face. So at the end, I think it's fair to say that those who promote the retarded games with Forrest's animals are those who haven't really understood the power of mathematics at all.
Also, they probably dislike the very suggestion that mathematics is powerful, and they prefer the kids to memorize useless isolated facts and factoids – because they're better at this mindless activity, too! It's very important for those who appreciate the power and importance of mathematics – for human wisdom, science, and very important engineering and other occupations – to fight for the continued presence of "our understanding" in the education process. If all kids in a nation are trained to play with these animals throughout most of their "mathematics" classes, and if they're led to think that this is a good way to use their brains, the nation is going to become a nation of idiots who can't do most of the things that we associate with the advanced civilization.

P.S.: This guide, on page 3/7, claims that the animals are a propaedeutic (preparation) for variables, conversion of units, and equations. I think that they suck in all three cases. They're not really variables because the animals are said to have constant values, and if they were not constant, nothing would be left at all. The kids don't learn anything such as $$(a+b)^2 = a^2+2ab+b^2$$, which would survive if the animal values were not constant. Second, they're bad preparations for the conversion of units because the ratios are unnaturally rational numbers and they never seem to use any "rule of three", i.e. direct proportionality. Third, they are surely some primitive cases of equations, except that there are no variables in them and the kids learn no nontrivial methods to deal with equations. So one may say that the exercises with the animals just "vaguely resemble" these mathematical concepts in certain ways, but the similarity is so vague and has so many "buts" that the experience gained from the games with the animals may make it harder, not easier, for the kid to understand the actual mathematics: the details aren't really right, and things therefore become confusing if the kid is trying to learn the pieces of mathematics properly.
http://www.talkstats.com/threads/difference-between-two-lmer-model.61790/
# Difference between two lmer models

#### Cynderella
##### New Member

Can you please explain where the difference is between the following two models:

Code:
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
fm2 <- lmer(Reaction ~ Days + (1|Subject) + (0+Days|Subject), sleepstudy)

I noticed there is some discrepancy in the estimates for the random effects between models fm1 and fm2, but I don't know why.

Many thanks! Regards.

#### Jake

The first model allows and estimates a covariance between the random intercepts and random slopes across Subjects. The second model does not--it forces the covariance to be zero. To the extent that the covariance in question is in fact non-zero, this can affect the other parameter estimates. A more compact (and newer, and way cool) syntax that is an equivalent way to write the second model is:

Code:
lmer(Reaction ~ Days + (Days||Subject), sleepstudy)

(note the double pipe character rather than single pipe character in the random part of the model)

#### Cynderella
##### New Member

If I write down the model Reaction ~ Days + (Days | Subject):

$$\text{Reaction}_{ij}=\beta_{0j}+\beta_{1j}\text{Days}_{ij}+e_{ij}$$
$$\beta_{0j}=\gamma_{00}+u_{0j}$$
$$\beta_{1j}=\gamma_{10}+u_{1j}$$

Combining the last two equations into the first one, that is, by substituting the level-2 equations into the level-1 equation, we have:

$$\text{Reaction}_{ij}=\gamma_{00}+\gamma_{10}\text{Days}_{ij}+u_{0j}+u_{1j}\text{Days}_{ij}+e_{ij}$$

Does "The second model does not--it forces the covariance to be zero" mean that for the second model $$cov(u_{0j},u_{1j})=0$$?

Many thanks! Regards.
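For reference, Jake's answer can be restated in matrix form (a sketch, using the notation from the equations above): the two specifications assume the following distributions for the subject-level random effects,

$$\text{fm1}: \begin{pmatrix} u_{0j} \\ u_{1j} \end{pmatrix} \sim N\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma_0^2 & \sigma_{01} \\ \sigma_{01} & \sigma_1^2 \end{pmatrix}\right), \qquad \text{fm2}: \begin{pmatrix} u_{0j} \\ u_{1j} \end{pmatrix} \sim N\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma_0^2 & 0 \\ 0 & \sigma_1^2 \end{pmatrix}\right)$$

so yes: fm2 constrains $$cov(u_{0j},u_{1j})=\sigma_{01}=0$$, while fm1 estimates $$\sigma_{01}$$ as a free parameter.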
https://starbeamrainbowlabs.com/blog/?tags=debugging
## Avoiding accidental array mutation when iterating arrays in PHP

Pepperminty Wiki is written in PHP, and I've posted before about the search engine I've implemented for it that's powered by an inverted index. In this post, I want to talk about an anti-feature of PHP that doesn't behave the way you'd expect, and how to avoid running into the same problem I did. To do this, let's introduce a simple example of the problem at work:

```php
<?php
$arr = [];
for($i = 0; $i < 3; $i++) {
    $key = random_int(0, 2000);
    $arr[$key] = $i;
    echo("[init] key: $key, i: $i\n");
}

foreach($arr as $key => &$value) {
    // noop
}

echo("structure before: "); var_dump($arr);

foreach($arr as $key => $value) {
    echo("key: $key, i was $value\n");
}

echo("structure after: "); var_dump($arr);
?>
```

The above code initialises an associative array with 3 elements. The contents might look like this:

| Key  | Value |
|------|-------|
| 469  | 0     |
| 1777 | 1     |
| 1685 | 2     |

Pretty simple so far. It then iterates over it twice: once referring to the values by reference (that's what the & there is for), and the second time referring to the items by value. You'd expect the array to be identical before and after the second foreach loop, but you'd be wrong:

| Key  | Value |
|------|-------|
| 469  | 0     |
| 1777 | 1     |
| 1685 | 1     |

Wait, what? That's very odd. What's going on here? How can a foreach loop that's iterating an array by value mutate an array? To understand why, let's take a step back for a moment. Here's another snippet:

```php
<?php
$arr = [ 1, 2, 3 ];

foreach($arr as $key => $value) {
    echo("$key: $value\n");
}

echo("The last value was $key: $value\n");
?>
```

What do you expect to happen here? While in Javascript with a for..of loop with a let declaration both $key and $value would have fallen out of scope by now, in PHP foreach statements don't create a new scope for variables. Instead, they inherit the scope from their parent - e.g. the global scope in the above, or their containing function if defined inside a function.
To this end, we can still access the values of both $key and $value in the above example even after the foreach loop has exited! Unexpected. It gets better. Try prefixing $value with an ampersand & in the above example and re-running it - note that both $key and $value are still defined. This leads us to why the unexpected behaviour occurs. Because of the way that PHP's foreach loop is implemented, if we re-use the same variable name for $value in a subsequent loop, it replaces the value of the last item in the array. Shockingly enough, this is actually documented behaviour (see also this bug report), though I'm somewhat confused as to how it happens on the last element in the array instead of the first. With this in mind, to avoid this problem in future, if you iterate an array by reference with a foreach loop, always remember to unset() the $value, like so:

```php
<?php
$arr = [];
for($i = 0; $i < 3; $i++) {
    $key = random_int(0, 2000);
    $arr[$key] = $i;
    echo("[init] key: $key, i: $i\n");
}

foreach($arr as $key => &$value) {
    // noop
}
// Break the lingering reference held by $value so that a
// subsequent loop can't overwrite the last element
unset($key); unset($value);

echo("structure before: "); var_dump($arr);

foreach($arr as $key => $value) {
    echo("key: $key, i was $value\n");
}

echo("structure after: "); var_dump($arr);
?>
```

By doing this, you can ensure that you don't accidentally mutate your arrays and spend weeks searching for the bug like I did. It's language features like these that catch developers out - and being aware of the hows and whys of their occurrence can help you to avoid them next time (if anyone can explain why it's the last element in the array that's affected instead of the first, I'd love to know!). Regardless, although I'm aware of how challenging implementing a programming language is, programming language designers should take care to avoid unexpected behaviour like this that developers don't expect.

Found this interesting? Comment below!

## Disassembling .NET Assemblies with Mono

As part of the Component-Based Architectures module on my University course, I've been looking at what makes the .NET ecosystem tick, and how .NET assemblies (i.e. .NET .exe / .dll files) are put together. In the process, we looked at disassembling .NET assemblies into the text form of the Common Intermediate Language (CIL) that they contain. The instructions on how to do this were Windows-specific though - so I thought I'd post about the process on Linux and other platforms here. Our tool of choice will be Mono - but before we get to that we'll need something to disassemble. Here's a good candidate for the role:

```csharp
using System;

namespace SBRL.Demo.Disassembly
{
    static class Program
    {
        public static void Main(string[] args)
        {
            int a = int.Parse(Console.ReadLine()), b = 10;
            Console.WriteLine(
                "{0} + {1} = {2}",
                a, b,
                a + b
            );
        }
    }
}
```

Excellent. Let's compile it:

```bash
csc Program.cs
```

This should create a new Program.exe file in the current directory. Before we get to disassembling it, it's worth mentioning how the compilation and execution process works in .NET. It's best explained with the aid of a diagram: As is depicted in the diagram above, source code in multiple languages gets compiled (maybe not with the same compiler, of course) into Common Intermediate Language, or CIL. This CIL is then executed in an Execution Environment - which is usually a virtual machine (Nope! Not as in VirtualBox and KVM.
It's not a separate operating system as such, rather a layer of abstraction), which may (or may not) decide to compile the CIL down into native code through a process called JIT (Just-In-Time compilation). It's also worth mentioning here that the CIL code generated by the compiler is in binary form, as this takes up less space and is (much) faster for the computer to operate on. After all, CIL is designed to be efficient for a computer to understand - not people! We can make it more readable by disassembling it into its textual equivalent. Doing so with Mono is actually quite simple:

```bash
monodis Program.exe >Program.il
```

Here I redirect the output to a file called Program.il for convenience, as my editor has a plugin for syntax-highlighting CIL. For those reading without access to Mono, here's what I got when disassembling the above program:

```
.assembly extern mscorlib
{
  .ver 4:0:0:0
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 ) // .z\V.4..
}
.assembly 'Program'
{
  .custom instance void class [mscorlib]System.Runtime.CompilerServices.CompilationRelaxationsAttribute::'.ctor'(int32) = (01 00 08 00 00 00 00 00 ) // ........
  .custom instance void class [mscorlib]System.Runtime.CompilerServices.RuntimeCompatibilityAttribute::'.ctor'() = (
    01 00 01 00 54 02 16 57 72 61 70 4E 6F 6E 45 78 // ....T..WrapNonEx
    63 65 70 74 69 6F 6E 54 68 72 6F 77 73 01 )      // ceptionThrows.
  .custom instance void class [mscorlib]System.Diagnostics.DebuggableAttribute::'.ctor'(valuetype [mscorlib]System.Diagnostics.DebuggableAttribute/DebuggingModes) = (01 00 07 01 00 00 00 00 ) // ........
  .hash algorithm 0x00008004
  .ver 0:0:0:0
}
.namespace SBRL.Demo.Disassembly
{
  .class private auto ansi beforefieldinit Program extends [mscorlib]System.Object
  {
    // method line 1
    .method public static hidebysig default void Main (string[] args) cil managed
    {
      // Method begins at RVA 0x2050
      .entrypoint
      // Code size 47 (0x2f)
      .maxstack 5
      .locals init (
        int32 V_0,
        int32 V_1)
      IL_0000: nop
      IL_0001: call string class [mscorlib]System.Console::ReadLine()
      IL_0006: call int32 int32::Parse(string)
      IL_000b: stloc.0
      IL_000c: ldc.i4.s 0x0a
      IL_000e: stloc.1
      IL_000f: ldstr "{0} + {1} = {2}"
      IL_0014: ldloc.0
      IL_0015: box [mscorlib]System.Int32
      IL_001a: ldloc.1
      IL_001b: box [mscorlib]System.Int32
      IL_0020: ldloc.0
      IL_0021: ldloc.1
      IL_0022: add
      IL_0023: box [mscorlib]System.Int32
      IL_0028: call void class [mscorlib]System.Console::WriteLine(string, object, object, object)
      IL_002d: nop
      IL_002e: ret
    } // end of method Program::Main
    // method line 2
    .method public hidebysig specialname rtspecialname instance default void '.ctor' () cil managed
    {
      // Method begins at RVA 0x208b
      // Code size 8 (0x8)
      .maxstack 8
      IL_0000: ldarg.0
      IL_0001: call instance void object::'.ctor'()
      IL_0006: nop
      IL_0007: ret
    } // end of method Program::.ctor
  } // end of class SBRL.Demo.Disassembly.Program
}
```

Very interesting. There are a few things of note here:

• The metadata at the top of the CIL tells the execution environment a bunch of useful things about the assembly, such as the version number, the classes contained within (and their signatures), and a bunch of other random attributes.
• An extra .ctor method has been generated for us automatically. It's the class' constructor, and it automagically calls the base constructor of the object class, since all classes are descended from object.
• The ints a and b are boxed before being passed to Console.WriteLine. Exactly what this does and why is quite complicated, and best explained by this Stackoverflow answer.
• We can deduce that CIL is a stack-based language from the add instruction, as it has no arguments.
I'd recommend that you explore this on your own with your own test programs. Try changing things and see what happens!

• Try making the Program class static
• Try refactoring the int.Parse(Console.ReadLine()) into its own method. How is the variable returned?

This isn't all, though. We can also recompile the CIL back into an assembly with ilasm:

```bash
ilasm Program.il
```

This makes for some additional fun experiments:

• See if you can find where b's value is defined, and change it
• What happens if you alter the Console.WriteLine() format string so that it becomes invalid?
• Can you get ilasm to reassemble an executable into a .dll library file?

Found this interesting? Discovered something cool? Comment below!

## Help! My SQLite database is malformed!

Recently I came across a rather worrying SQLite database error:

```
Error: database disk image is malformed
```

Hrm, that's odd. Upon double-checking, it looked like the database was functioning (mostly) fine. The above error popping up randomly was annoying though, so I resolved to do something about it. Firstly, I double-checked that said database was actually 'corrupt':

```bash
sudo sqlite3 path/to/database.sqlite 'PRAGMA integrity_check';
```

This outputted something like this:

```
*** in database main ***
Main freelist: 1 of 8 pages missing from overflow list starting at 36376
Page 23119: btreeInitPage() returns error code 11
On tree page 27327 cell 30: 2nd reference to page 27252
```

Uh oh. Time to do something about it then! Looking it up online, it turns out that the 'best' solution out there is to export to an .sql file and then reimport again into a fresh database. That's actually quite easy to do. Firstly, let's export the existing database to an .sql file. This is done via the following SQL commands (use sqlite3 path/to/database.db to bring up a shell):

```
.mode insert
.output /tmp/database_dump.sql
.dump
.exit
```

With the database exported, we can now re-import it into a fresh database. Bring up a new SQLite3 shell with sqlite3, and do the following:

```
.read /tmp/database_dump.sql
.save /tmp/new_database.sqlite
.exit
```

...that might take a while. Once done, swap your old corrupt database out for your shiny new one and you're done! I would recommend keeping the old one as a backup for a while just in case though (perhaps bzip2 path/to/old_database.sqlite?). Also, if the database is on an embedded system, you may find that downloading it to your local computer for the repair process will make it considerably faster.

Found this useful? Still having issues? Comment below!

## Debug your systemd services with journalctl

The chances are that if you're using Linux, you will probably have run into systemd. If you find yourself in the situation where you've got a systemd service that keeps dying and you don't know why (I've been there before several times!), and there's nothing helpful in /var/log, before you give up, you might want to give journalctl a try. It's systemd's way of capturing the output of a service and storing it in its logging system (or something). When I first found out about it, I read that apparently journalctl -xe servicename would show me the logs for any given service. It turned out that it wasn't the case (it just threw a nasty error), so I went trawling through the man pages and found the correct command-line switch.
If you've got a service called rocketbooster.service, and you want to see if systemd has any logs stored for it, then you can execute this command:

```bash
journalctl --unit rocketbooster.service
```

...or for short:

```bash
journalctl -u rocketbooster.service
```

It should open the logs (if there are any) in less - with the oldest logs at the top, so you might need to scroll all the way down to the bottom to see anything that's relevant to your problem (shift + G will take you to the bottom of the file). I've found that systemd has a habit of rotating the logs too - and journalctl doesn't appear to know how to access the rotated logs, so it's best if you use this command as soon as possible after failure (suggestions on how to access these rotated logs are welcome! Post down in the comments :D). I thought I'd document it here in case it was useful to anyone - and so I don't forget myself! :P

## Profiling PHP with XDebug

(This post is a fork of a draft version of a tutorial / guide originally written as an internal document whilst at my internship.)

Since I've been looking into xdebug's profiling function recently, I've just been tasked with writing up a guide on how to set it up and use it, from start to finish - and I thought I'd share it here. While I've written about xdebug before in my An easier way to debug PHP post, I didn't end up covering the profiling function - I had difficulty getting it to work properly. I've managed to get it working now - this post documents how I did it. While this is written for a standard Debian server, the instructions can easily be applied to other servers. For the uninitiated, xdebug is an extension to PHP that aids in the debugging of PHP code. It consists of 2 parts: the PHP extension on the server, and a client built into your editor. With these 2 parts, you can create breakpoints, step through code and more - though these functions are not the focus of this post. To start off, you need to install xdebug. SSH into your web server with a sudo-capable account (or just use root, though that's bad practice!), and run the following command:

```bash
sudo apt install php-xdebug
```

Windows users will need to download it from here and put it in their PHP extension directory. Users of other Linux distributions and Windows may need to enable xdebug in their php.ini file manually (Windows users will need extension=xdebug.dll; Linux systems use extension=xdebug.so instead). Once done, xdebug should be loaded and working correctly. You can verify this by looking at the PHP information page. To see this page, put the following in a PHP file and request it in your browser:

```php
<?php
phpinfo();
?>
```

If it's been enabled correctly, you should see something like this somewhere on the resulting page. With xdebug set up, we can now begin configuring it. Xdebug gets configured in php.ini, PHP's main configuration file. Under Virtualmin each user has their own php.ini because PHP is loaded via CGI, and it's usually located at ~/etc/php.ini.
To find it on your system, check the PHP information page as described above - there should be a row with the name "Loaded Configuration File". Once you've located your php.ini file, open it in your favourite editor (or type sensible-editor php.ini if you want to edit over SSH), and put something like this at the bottom:

```ini
[xdebug]
xdebug.remote_enable=1
xdebug.remote_connect_back=1
xdebug.remote_port=9000
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_autostart=true
xdebug.profiler_enable=false
xdebug.profiler_enable_trigger=true
xdebug.profiler_enable_trigger_value=ZaoEtlWj50cWbBOCcbtlba04Fj
xdebug.profiler_output_dir=/tmp
xdebug.profiler_output_name=php.profile.%p-%u
```

Obviously, you'll want to customise the above. The xdebug.profiler_enable_trigger_value directive defines a secret key we'll use later to turn profiling on. If nothing else, make sure you change this! Profiling slows everything down a lot, and could easily bring your whole server down if this secret key falls into the wrong hands (that said, simply having xdebug loaded in the first place slows things down too, even if you're not using it - so you may want to set up a separate server for development work that has xdebug installed, if you haven't already). If you're not sure what to set it to, here's a bit of bash I used to generate my random password:

```bash
dd if=/dev/urandom bs=8 count=4 status=none | base64 | tr -d '=' | tr '+/' '-_'
```

The xdebug.profiler_output_dir directive lets you change the folder that xdebug saves the profiling output files to - make sure that the folder you specify here is writable by the user that PHP is executing as. If you've got a lot of profiling to do, you may want to consider changing the output filename, since xdebug uses a rather unhelpful filename by default. The property you want to change here is xdebug.profiler_output_name - and it supports a number of special % substitutions, which are documented here. I can recommend something like phpprofile.%t-%u.%p-%H.%R.%cachegrind - it includes a timestamp and the request uri for identification purposes, while still sorting chronologically. Remember that xdebug will overwrite the output file if you don't include something that differentiates it from request to request! With the configuration done, we can now move on to actually profiling something :D This is actually quite simple. Simply add the XDEBUG_PROFILE GET (or POST!) parameter to the url that you want to test in your browser. Here are some examples:

```
https://localhost/carrots/moon-iter.php?XDEBUG_PROFILE=ZaoEtlWj50cWbBOCcbtlba04Fj
https://development.galacticaubergine.de/register?vegetable=yes&mode=plus&XDEBUG_PROFILE=ZaoEtlWj50cWbBOCcbtlba04Fj
```

Adding this parameter to a request will cause xdebug to profile that request, and spit out a cachegrind file according to the settings we configured above. This file can then be analysed in your favourite editor, or, if it doesn't have support, an external program like qcachegrind (Windows) or kcachegrind (everyone else). If you need to profile just a single AJAX request or similar, most browsers' developer tools let you copy a request as a curl or wget command (Chromium-based browsers; Firefox has an 'edit and resend' option), allowing you to resend the request with the XDEBUG_PROFILE GET parameter.
If you need to profile everything - including all subrequests (only those that pass through PHP, of course) - then you can set the XDEBUG_PROFILE parameter as a cookie instead, and it will cause profiling to be enabled for everything on the domain you set it on. Here's a bookmarklet that sets the cookie:

```javascript
javascript:(function(){document.cookie='XDEBUG_PROFILE='+'insert_secret_key_here'+';expires=Mon, 05 Jul 2100 00:00:00 GMT;path=/;';})();
```

(Source)

Replace insert_secret_key_here with the secret key you created for the xdebug.profiler_enable_trigger_value property in your php.ini file above, create a new bookmark in your browser, paste it in (making sure that your browser doesn't auto-remove the javascript: at the beginning), and then click on it when you want to enable profiling.

## An easier way to debug PHP

Recently at my internship I've been writing quite a bit of PHP. The language itself is OK (I mean it does the job), but it's beginning to feel like a relic of a bygone era - especially when it comes to debugging. Up until recently I've been stuck with using echo() and var_dump() calls all over the place in order to figure out what's going on in my code - that's the equivalent of debugging your C♯ ACW with Console.WriteLine() O.o Thankfully, whilst looking for an alternative, I found xdebug. Xdebug is like Visual Studio's debugging tools for C♯ (albeit in a more primitive form). They allow you to add breakpoints and step through your PHP code one line at a time - inspecting the contents of variables in both the local and global scope as you go. It improves the standard error messages generated by PHP, too - adding stack traces and colour to the output in order to make it much more readable. Best of all, I found a plugin for my primary web development editor atom. It's got some simple (ish) instructions on how to set up xdebug too - it didn't take me long to figure out how to put it to use. I'll assume you've got PHP and Nginx already installed and configured, but this tutorial looks good (just skip over the MySQL section) if you haven't yet got it installed. This should work for other web servers and configurations too, but make sure you know where your php.ini lives. XDebug consists of 2 components: the PHP extension for the server, and the client that's built into your editor. Firstly, you need to install the server extension. I've recorded an asciicast (terminal recording) to demonstrate the process:

(Above: An asciinema recording demonstrating how to install xdebug. Can't see it? Try viewing it on asciinema.org.)

If you're having trouble, make sure that your server can talk directly to your local development machine. If you're sitting behind any routers or firewalls, make sure they're configured to allow traffic through on port 9000 and configured to forward it on to your machine.

## TeleConsole: A simple remote debugging solution for awkward situations

Several times in the last few months I've found myself in some kind of awkward situation where I need to debug a C♯ program, but the program in question either doesn't have a console, or is on a remote machine. In an ideal world, I'd like to have all my debugging messages sent to my development machine for inspection. That way I don't have to check the console of multiple different machines just to get an idea as to what has gone wrong. C♯ already has System.Diagnostics.Debug, which functions similarly to the Console class, except that it sends data to the Application output window.
This is brilliant for things running on your local machine through Visual Studio or MonoDevelop, but not so great when you've got a program that utilises the network and has to run on separate computers. Visual Studio, for one, starts to freeze up if you open the exact same repository on a network drive that's already open somewhere else. It is for these reasons that I finally decided to sit down and write TeleConsole. It's a simple remote console that you can use in any project you like. It's split into 2 parts: the client library and the server binary. The server runs on your development machine, and listens for connections from the client library, which you reference in your latest and greatest project and use to send messages to the server. Take a look here: sbrl/TeleConsole (GitLab) (Direct link to the releases page) The client API is fully documented with intellisense comments, so I assume it should be very easy to work out how to use it (if there's something I need to do to let you use the intellisense comments when writing your own programs, let me know!). If you need some code to look at, I've created an example whilst testing it. Although it's certainly not done yet, I'll definitely be building upon it in the future to improve it and add additional features.

## Test C♯ code online with repl.it

I've known about repl.it for a while now. It is a site that provides you with a REPL (Read-Eval-Print-Loop) for many different languages, without you having to install the language in question, thanks to the native client. A REPL (in case you didn't know) is like a command prompt, but for a specific programming language or environment. For example, if you type node into your command prompt (if you have Node.js installed), it will start a REPL for you to play around with. Recently I have discovered that repl.it also supports C♯ (via the mono compiler, version 4.0.4.0 at the time of typing), and it lets you write, compile and run C♯ code without ever leaving your browser. I was so surprised by this I thought that I'd make a blog post about it. Apparently you can even embed things you've created into other pages too - here's a small test program I wrote whilst playing around with it:

Update: Corrected the expansion of REPL.

## Prolog Visualisation Tool

Recently, I've been finding that Prolog is getting rather more complicated, and that the traces that I keep doing are getting longer and longer. This is making it rather difficult to understand what's going on, and so in response to this I am building the Prolog Visualisation Tool(kit). Basically, the Prolog Visualisation Tool(kit) is a tool that, given a Prolog trace, produces a diagram of the trace in question. The image at the top of this post is a diagram produced by the tool for a depth first search. You can find it live now on GitHub Pages. It is built with mermaid, a really cool diagramming library by knsv, which converts some custom graph syntax to an svg. The next step will be to animate it, but I haven't got that far yet. Expect an update soon!

## Reading HTTP 1.1 requests from a real web server in C#

I've received rather a lot of questions recently asking the same question, so I thought that I'd write a blog post on it. Here's the question:

Why does my network client fail to connect when it is using HTTP/1.1?

I encountered this same problem, and after half an hour of debugging I found the answer: it wasn't failing to connect at all, rather it was failing to read the response from the server.
Consider the following program:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

class Program
{
    static void Main(string[] args)
    {
        TcpClient client = new TcpClient("host.name", 80);
        client.SendTimeout = 3000;

        StreamWriter writer = new StreamWriter(client.GetStream());
        writer.WriteLine("GET /path HTTP/1.1");
        writer.WriteLine("Host: server.name");
        writer.WriteLine();
        writer.Flush();

        // Read the response back from the server - this is the call that blocks
        StreamReader reader = new StreamReader(client.GetStream());
        string response = reader.ReadToEnd();

        Console.WriteLine("Got Response: '{0}'", response);
    }
}
```

If you change the hostname and request path, and then compile and run it, you (might) get the following error:

```
An unhandled exception of type 'System.IO.IOException' occurred in System.dll

Additional information: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond.
```

Strange. I'm sure that we sent the request. Let's try reading the response line by line:

```csharp
string response = string.Empty;
string nextLine;
do {
    nextLine = reader.ReadLine();
    response += nextLine;
    Console.WriteLine("> {0}", nextLine);
} while (reader.Peek() != -1);
```

Here's some example output from my server:

```
> HTTP/1.1 200 OK
> Server: nginx/1.9.10
> Date: Tue, 09 Feb 2016 15:48:31 GMT
> Content-Type: text/html
> Transfer-Encoding: chunked
> Connection: keep-alive
> Vary: Accept-Encoding
> strict-transport-security: max-age=31536000;
>
> 2ef
> <html>
> <body bgcolor="white">
> <h1>Index of /libraries/</h1><hr><pre><a href="../">../</a>
> <a href="prism-grammars/">prism-grammars/</a>    09-Feb-2016 13:56    -
> <a href="blazy.js">blazy.js</a>                  09-Feb-2016 13:38    9750
> <a href="prism.css">prism.css</a>                09-Feb-2016 13:58    11937
> <a href="prism.js">prism.js</a>                  09-Feb-2016 13:58    35218
> <a href="smoothscroll.js">smoothscroll.js</a>    20-Apr-2015 17:01    3240
> </pre><hr></body>
> </html>
>
> 0
>
```

...but we still get the same error. Why? The reason is that the web server is keeping the connection open, just in case we want to send another request. While this would usually be helpful (say in the case of a web browser - it will probably want to download some images or something after receiving the initial response), it's rather a nuisance for us, since we don't want to send another request and it's rather awkward to detect the end of the response without detecting the end of the stream (that's what the while (reader.Peek() != -1); is for in the example above). Thankfully, there are a few solutions to this. Firstly, the web server will sometimes (but not always - take the example response above for starters) send a content-length header. This header will tell you how many bytes follow after the double newline (\r\n\r\n) that separates the response headers from the response body. We could use this to detect the end of the message. This is the recommended way, according to RFC 2616. Another way to cheat here is to send the connection: close header. This instructs the web server to close the connection after sending the message (note that this will break some of the tests in the ACW, so don't use this method!). Then we can use reader.ReadToEnd() as normal. A further cheat would be to detect the expected end of the message that we are looking for. For HTML this will practically always be </html>. We can close the connection after we receive this line (although this doesn't work when you're not receiving HTML). This is seriously not a good idea. The HTML could be malformed, and not contain </html>.
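For illustration, here's a rough sketch of what the Content-Length approach might look like - this is not from the original post; it continues from the TcpClient client above, and it assumes the server actually sends a Content-Length header (which, as noted, it won't always do) and that the response is ASCII (so that the character count equals the byte count):

```csharp
// Sketch only: read the headers line-by-line, then read exactly
// Content-Length characters of body instead of waiting for the stream to end.
StreamReader reader = new StreamReader(client.GetStream());
int contentLength = -1;
string line;
// Headers end at the first empty line (the \r\n\r\n separator)
while ((line = reader.ReadLine()) != null && line.Length > 0)
{
    if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
        contentLength = int.Parse(line.Substring("Content-Length:".Length).Trim());
}

if (contentLength >= 0)
{
    char[] buffer = new char[contentLength];
    int totalRead = 0;
    // Read() isn't guaranteed to fill the buffer in one call, so loop
    while (totalRead < contentLength)
    {
        int read = reader.Read(buffer, totalRead, contentLength - totalRead);
        if (read == 0) break; // the stream ended early
        totalRead += read;
    }
    Console.WriteLine("Got Response: '{0}'", new string(buffer, 0, totalRead));
}
```

Note that this sketch deliberately keeps the connection open afterwards - which is the whole point: we know when the body ends without needing the server to close the stream.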
https://cemc1.uwaterloo.ca/~math600/wp/g-2/
# G-2: Path Dependent

In this lesson, we will continue to learn about the tools and features of GeoGebra. Where possible, we will do this by working on geometrically interesting constructions. Hopefully this will give you examples and insight on how you might use GeoGebra within your teaching practice.

## Points on Objects

So far, most of the points we have created have been Free Objects. Now we create points with restrictions. Create a circle using the tool of your choice, then select the New Point tool. When you move the mouse over the circle, you will notice that the circle becomes highlighted. Click on the circle, then select the Move tool. Try dragging the newly created point around. It's stuck to the circle! We will call this kind of point a Path Dependent point. Path dependent points can also be created on lines, line segments, arcs, conics, and any other path-like object. Points can also be created within a region using the Point on Object tool. Note that in order to restrict a point to, for example, the interior of a circle, you need to first increase the Opacity of the object from 0 in the Object Properties.

Terminology: In GeoGebra, you will not see the term Path Dependent. In general, GeoGebra uses Point on Object to describe either a point on a path or a point in a region. We introduced this terminology here to make it explicit that we are referring to a point on a path (as opposed to a point in a region, or an intersection point, etc.), as some of the tools below require a Path Dependent point.

### Construction: Angles subtended by an arc in a circle

This exercise will demonstrate properties of angles within a circle. Create a circle with center 'A' through a point 'B'. Once you have adjusted the circle to your liking, hide 'A' and 'B'. Use the Midpoint or Center tool to recreate the center of your circle, labelled 'C'. Then create a chord in the circle using two path dependent points 'D' and 'E'.

Well-Designed Constructions: Why do we need to create two new points to make a chord? Could we not use 'B' as one of our points of the chord? We could, but the radius of the circle is dependent on 'B'. By creating two points strictly dependent on the circle itself, it will be easier to separate properties of the circle from the defining parameters of the circle. This is also the reasoning behind creating a new center, as we will use the center later. Do not feel that you must do this for every construction. In fact, it may make sense to use the original points used to construct the object depending on the situation. Use your judgement in future exercises as to whether these kinds of precautions are necessary.

Our next step is to create a point on the circle that is restricted to one of the two arcs created by 'D' and 'E'. To do this, first hide the circle. Then locate the Circular Arc with Center between Two Points tool. Select 'C', 'D', then 'E'. You should then have an arc traveling counter-clockwise from 'D' to 'E'. Create a point 'F' on this arc. Hide the arc and show the original circle. Construct segments CD, CE, FD and FE. Locate the Angle tool. To calculate the angles at 'C' and 'F' using the three point method (as opposed to the two line method), select 'E', the vertex, then 'D'. By default, the angle will be measured counter-clockwise (though you can change this in the angle's object properties), so had we chosen 'D' first, we would have measured the reflex angle at the vertex.
Now feel free to move the path dependent points one at a time and see what properties hold in this construction. What do we know about $$\angle ECD$$ when DE is a diameter of the circle?

### Multiple Intersections

In lesson G-0, you were asked to find an intersection of two lines. There are two methods for finding the intersection of intersecting paths. It is possible that you used the Intersect Two Objects tool. Another method is to use the New Point tool. With this tool selected, move the mouse over two intersecting paths. You should see both paths highlighted. Click, and you will now have a point residing at the intersection of the two paths. Note that you will not be able to drag this point around like other Free Objects or Path Dependent points, nor will you be able to move objects created using this point (try using an intersection point as the center of a circle).

Try using both methods to find the intersection of a line and a circle. What is the difference in what the two methods do? As a guideline, you may want to use the Intersect Two Objects tool if:

• There are 3 or more paths intersecting at one point. GeoGebra is not able to over-specify how many paths are used in an intersection (it must always be 2). In these cases, choose two of the paths to create the intersection.
• There are other objects near the intersection. You may not be able to select the intersection with the New Point tool if the other object is too close.

Also, be cautious when using the New Point tool. If you move at all when you click, you may only create a Path Dependent point on one path instead of an intersection. You will know this by the color of the resulting point.

Problem: There is a 4 metre ladder resting against the wall of a house. Dave is standing at the midpoint of the ladder when the base begins to slip outward. Create a construction to model this, and describe the path that Dave follows.

Solution: Create points 'A', 'B' and 'C' at points (0,0), (4,0) and (0,4) respectively, and then hide the axes. Draw segments AB and AC. These two segments will act as our ground and wall. Create a point 'D' on segment AB. Then draw a circle centered at 'D' with radius 4. Construct the intersection 'E' of the circle and segment AC, then hide the circle. Then the segment DE represents the path of the ladder as 'D' moves from 'A' to 'B', and the midpoint 'F' of DE represents Dave's position.

In order to visualize the path of the midpoint, we have two options. We can trace the path of 'F' by right-clicking (on Mac, hold ctrl and click) and turning trace on. Then as you move 'D', 'F' will leave a trail. Once you have experimented with this, turn the trace off by right-clicking and choosing the same option. Alternatively, we can use the Locus tool. With this tool selected, first click 'F', then click 'D'. This will draw the path that 'F' will follow as 'D' is moved along the segment.

By inspection, we determine that the path Dave follows when the ladder falls is a circular path. If the Locus tool can be used to find the path of an object, it is generally a better solution than tracing the object. If we decided to move the entire construction all at once, trace would leave a mark as we moved it across the screen. However, the Locus tool only shows the path that occurs when 'D' is moved. In other words, it shows at any given time the path of 'F' given that everything but 'D' remains the same.
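For readers who want more than inspection, a short coordinate check (using the labels from the ladder construction above) confirms that the path is circular. With $$D = (d, 0)$$ on the ground and $$E = (0, e)$$ on the wall, the ladder length gives $$d^2 + e^2 = 4^2$$, so the midpoint

$$F = \left( \tfrac{d}{2}, \tfrac{e}{2} \right) \quad \text{satisfies} \quad \left( \tfrac{d}{2} \right)^2 + \left( \tfrac{e}{2} \right)^2 = \tfrac{d^2 + e^2}{4} = \tfrac{16}{4} = 4,$$

i.e. $$F$$ always lies on the circle of radius 2 centered at the corner 'A' (restricted to a quarter of it, since $$d$$ and $$e$$ stay between 0 and 4).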
If you think you have a situation where the Locus tool could be used, but are not getting any objects appearing when using the tool, make sure you adhere to the following conditions:

• The second point given to Locus must be either a Path Dependent point, or a slider (sliders are covered next lesson)
• The first point given to Locus must somehow depend on the second point.

How GeoGebra sees Locus: It's important to note that GeoGebra does not actually store loci as a path. In more complicated diagrams, the locus may not be a recognized GeoGebra object, so GeoGebra really sees a locus as a collection of points very closely spaced. For this reason, you cannot intersect a path with a locus and you cannot compare a path to a locus. You can check to see if a point is on a locus, but this process is not reliable, as the point you are checking may not have been included in the list of points stored. The one thing you can do is create a Path Dependent point on the locus.

Exercise (nickname: seconddiff)

Complete the following construction and answer the question below. You will have to create some additional objects to carry out some of the steps (the details are up to you).

1. Create a point 'A' at (0,1).
2. Create a horizontal line 'l' (L lowercase) at y=-1.
3. Create a Path Dependent point 'V' on the line 'l'.
4. Create the point 'W' for which:
• 'W' is on the line through 'V' perpendicular to 'l'
• The distance VW is exactly equal to the distance AW. (Hint: all points equally distant from two points lie on a particular line.)
• (You need to construct W with geometric tools, so that both of the above two facts remain exactly true even if you move V later.)
5. Draw the locus of 'W' with respect to 'V'.

By analyzing the diagram, determine the equation of the locus. You do not need a formal proof, but you may want to test your answer by checking a few of the values that 'W' takes as you move 'V' along the line 'l'.

To submit: Hide any additional objects used to help create the diagram; only the five objects listed above should be showing. To display the equation of the graph, use a text box. Use GeoGebra's built-in LaTeX styling for displaying the text box contents; if the font styles look different from how LaTeX normally looks, feel free to change to the serif option in the Object Properties, but we will accept either format. Note: Do not use conics to complete this exercise.

These are course notes for the University of Waterloo's course Math 600: Mathematical Software. © 2012—. Written and developed by David Pritchard and Stephen Tosh.
2022-10-04 01:08:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5096022486686707, "perplexity": 699.7247920459657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00098.warc.gz"}
https://brilliant.org/discussions/thread/number-theory-marathon/
# number theory marathon

Let's start a number theory marathon.....

QUESTION 1: N is a 50-digit number (in decimal representation). All digits except the 26th digit (from the left) are 1. If N is divisible by 13, find its 26th digit.

Note by Superman Son, 4 years, 11 months ago

Hint: Note that $$111111 \equiv 0 \pmod{13}$$. As such, the $$26$$th digit will be $$3$$ - 4 years, 11 months ago

now u give a question - 4 years, 11 months ago

Sure. Try this. - 4 years, 11 months ago

$$111111 \equiv 0 \pmod{13}$$ so the 26th digit is 3 - 4 years, 11 months ago

I have seen this question before..... - 4 years, 11 months ago

yes it is a rmo question - 4 years, 11 months ago
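One way to flesh out the hint: since $$111111 \equiv 0 \pmod{13}$$ (and hence $$10^6 \equiv 1 \pmod{13}$$), the 24 ones to the right of the 26th digit contribute $$0 \pmod{13}$$, the 25 leading ones contribute $$R_{25} \cdot 10^{25} \equiv 1 \cdot 10 = 10 \pmod{13}$$, and the 26th digit $$d$$ contributes $$d \cdot 10^{24} \equiv d$$, so we need $$13 \mid 10 + d$$, giving $$d = 3$$. A quick illustrative Python check (mine, not from the thread):

d = 3
N = int("1" * 25 + str(d) + "1" * 24)   # 50 digits, the 26th from the left is d
assert N % 13 == 0
# and 3 is the only digit that works:
print([d for d in range(10) if int("1" * 25 + str(d) + "1" * 24) % 13 == 0])   # [3]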
2018-05-25 12:55:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.99969482421875, "perplexity": 12334.481657528162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867092.48/warc/CC-MAIN-20180525121739-20180525141739-00422.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-12-problems-page-462/10
## College Physics (4th Edition) $v = 1435~m/s$ We can find the speed of sound in mercury: $v = \sqrt{\frac{B}{\rho}}$ $v = \sqrt{\frac{2.8\times 10^{10}~Pa}{1.36\times 10^4~kg/m^3}}$ $v = 1435~m/s$
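A quick numerical check of the arithmetic above (an illustrative Python snippet, not part of the textbook solution):

import math

B = 2.8e10     # bulk modulus of mercury, in Pa
rho = 1.36e4   # density of mercury, in kg/m^3
print(math.sqrt(B / rho))   # ≈ 1434.86, which rounds to the quoted 1435 m/s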
2021-04-18 16:06:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.695533812046051, "perplexity": 1495.8223836663758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038492417.61/warc/CC-MAIN-20210418133614-20210418163614-00001.warc.gz"}
http://www.scientificlib.com/en/Mathematics/LX/ArtinZornTheorem.html
# Artin–Zorn theorem

In mathematics, the Artin–Zorn theorem, named after Emil Artin and Max Zorn, states that any finite alternative division ring is necessarily a finite field. It was first published by Zorn, but in his publication Zorn credited it to Artin.[1][2]

The Artin–Zorn theorem is a generalization of the Wedderburn theorem, which states that finite associative division rings are fields. As a geometric consequence, every finite Moufang plane is the classical projective plane over a finite field.[3][4]

References

1. Zorn, M. (1930), "Theorie der alternativen Ringe", Abh. Math. Sem. Hamburg 8: 123–147.
2. Lüneburg, Heinz (2001), "On the early history of Galois fields", in Jungnickel, Dieter; Niederreiter, Harald, Finite fields and applications: proceedings of the Fifth International Conference on Finite Fields and Applications Fq5, held at the University of Augsburg, Germany, August 2–6, 1999, Springer-Verlag, pp. 341–355, ISBN 978-3-540-41109-3, MR 1849100.
3. Shult, Ernest (2011), Points and Lines: Characterizing the Classical Geometries, Universitext, Springer-Verlag, p. 123, ISBN 978-3-642-15626-7.
4. McCrimmon, Kevin (2004), A taste of Jordan algebras, Universitext, Springer-Verlag, p. 34, ISBN 978-0-387-95447-9.
2021-03-08 06:21:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8214991092681885, "perplexity": 3000.5787807649776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381989.92/warc/CC-MAIN-20210308052217-20210308082217-00170.warc.gz"}
https://www.physicsforums.com/threads/base-e-to-an-imaginary-exponent-seeming-contradiction.963026/
# Base e to an imaginary exponent seeming contradiction

rude man (Homework Helper, Gold Member):

<Moderator's note: Moved from a homework forum.>

1. Homework Statement

Given ##0 < a < 1## and ##i = \sqrt{-1}##,

$$e^{i 2\pi a} = \cos 2\pi a + i \sin 2\pi a$$

but also,

$$e^{i 2\pi a} = (e^{i 2\pi})^a = 1^a = 1$$

How to resolve the apparent contradiction?

## Homework Equations

$$e^{ab} = (e^a)^b$$

$$e^{ix} = \cos x + i \sin x$$

## The Attempt at a Solution

No clue! This is embarrassing!

FactChecker (Gold Member):

There are multiple values for roots. None are wrong or contradictory. You are saying that ##1^a## is always and only 1. That is wrong. ##1^{0.5} = \pm 1##.

rude man (Homework Helper, Gold Member):

According to Wolfram Alpha there are an infinite number of roots of 1, lying on the unit circle in the re-im plane. Thanks for triggering my curiosity. I think I opened a can of worms I'd rather not have.

rude man (Homework Helper, Gold Member):

Ignore all my posts except for the last one please.

WWGD (Gold Member):

There is also the fact that the complex exponential is infinitely many-valued (periodic with period ##2\pi## in the argument), so we need to work with branches, and standard properties of real exponentials and roots do not always extend. Exponentiation is defined in terms of complex powers: ##z^{a} := e^{a \log z}##, with ##\log## being a branch (local inverse) of the log. But, I think by the Fundamental Theorem of Algebra, there are only ##n## roots of ##z^n = 1##.

rude man (Homework Helper, Gold Member):

OK: ##1^a = (e^{i 2\pi})^a = \cos 2\pi a + i \sin 2\pi a##. But the ##2\pi## can be ##2\pi n##, ##n## any integer. So there are an infinite number of roots of 1. The only real root would be for ##n = 0##.

WWGD (Gold Member):

Precisely: ##e^{i 2\pi} = e^{i 2k\pi} = \cos(2\pi k) + i \sin(2\pi k)##.

rude man (Homework Helper, Gold Member):

Log of a complex number? Overload for this EE!

WWGD (Gold Member):

It is somewhat a way of describing a number in polar coordinates; it is really not that counterintuitive. The log assigns to a complex number z (the log of) its length plus (one of) its argument(s). The argument part is what makes it multivalued. For example, ##\log(i) := \ln|i| + i(\pi/2 + 2k\pi)##; sort of giving all the possible ways of locating a point in the complex plane: the number i is located at length 1, with argument ##\pi/2 + 2k\pi##. Basically it assigns to a complex number its polar forms, with an ln scaling of the norm/length.

FactChecker (Gold Member):

> According to Wolfram Alpha there are an infinite number of roots of 1, lying on the unit circle in the re-im plane. Thanks for triggering my curiosity.
> I think I opened a can of worms I'd rather not have.

As an EE, you may be interested in how this allows study of feedback systems and which feedback frequencies would accumulate to unstable behavior.

rude man (Homework Helper, Gold Member):

That I've dealt with! Nyquist stability criterion etc etc.

FactChecker (Gold Member):

This is at the heart of it.

WWGD (Gold Member):

> As an EE, you may be interested in how this allows study of feedback systems and which feedback frequencies would accumulate to unstable behavior.

By unstable you mean chaotic, i.e., the attractor is fractal?

FactChecker (Gold Member):

That is not what I had in mind. I meant simple feedback systems and Laplace transforms.

WWGD (Gold Member):

I am just using big words here, I don't have that good of an understanding of feedback loops, dynamical systems.

FactChecker (Gold Member):

It's delightful mathematics.

WWGD (Gold Member):

It does seem interesting. I have just an undergrad class in Chaos theory and a bit of reading here and there.

Ray Vickson (Homework Helper, Dearly Missed):

> Given ##0 < a < 1## and ##i = \sqrt{-1}##, $$e^{i 2\pi a} = \cos 2\pi a + i \sin 2\pi a$$, but also $$e^{i 2\pi a} = (e^{i 2\pi})^a = 1^a = 1$$. How to resolve the apparent contradiction?

You are being fooled by notation. If you use the alternative notation "##\exp(z)##" instead of "##e^z##", you would not automatically assume that

$$\exp(ab) = (\exp(a))^b$$

In fact, that equation would not be apparent at all, although it is true and provable if ##a## and ##b## are real, or if ##a## is complex and ##b## is an integer. You have demonstrated that it is sometimes false for complex ##a## and non-integer ##b##.
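The multivaluedness the thread settles on is easy to see numerically. Here is a small illustrative Python sketch (mine, not from the thread) evaluating ##1^a = e^{a(\log 1 + 2\pi i k)}## over several branches ##k## of the complex logarithm:

import cmath

a = 0.5
for k in range(4):
    log_branch = cmath.log(1) + 2j * cmath.pi * k   # one branch of log(1)
    print(k, cmath.exp(a * log_branch))
# k = 0 and k = 2 give (1+0j); k = 1 and k = 3 give approximately (-1+0j),
# so both square roots of 1 appear, matching FactChecker's 1^0.5 = ±1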
2021-06-13 05:41:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8318042755126953, "perplexity": 2270.965684987682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487600396.21/warc/CC-MAIN-20210613041713-20210613071713-00295.warc.gz"}
https://long19thcentury.wordpress.com/2009/11/11/more-math-than-any-sane-victorianist-would-ever-want/
This is in response to Anne's comment, which is one of the longest comments you'll ever see on the interwebs. Since this will be longer, and will involve mathematical symbols, I'll do it in a post. I'm not sure about the implications for the dynamical sublime vs. the mathematical sublime, or for narrative theory, but maybe if I explain the math more clearly, you'll get some ideas, Anne?

About infinity. The infinities involved in this discussion have to do with the size of a set (its cardinality), rather than an intuitive idea of infinity as a really big number. Consider the graph of the function f(x)=1/x that's pictured at the right. Intuitively, we can say that f(x) approaches plus infinity as x approaches 0 from the right, and it approaches negative infinity as x approaches 0 from the left. There's a way to rigorously define that notion of infinity, but that's not the kind of infinity we're talking about.

A set is a collection of anything. You could think of the set of natural numbers ℕ={1,2,3,…}, or some set called V that's the set of every Victorianist that ever lived, or a set of the seven days of the week. If it's a finite set, then its cardinality is straightforward. It would be 7 for the set of the seven days. V is a much "larger" set, but its size is still a finite number. You probably wouldn't be able to give an exact number for its size, but you could put an upper bound on it. (It must be less than a billion, since there haven't been a billion academic scholars of any particular variety.) You couldn't put an upper bound on the size of ℕ, though. (Suppose you could: if somebody tells you the size of ℕ is n, you could say oh, but what about the set {1,2,3,…,n+1}? The size of that set is bigger than n, and ℕ is bigger than that.)

So what do you do for talking about the size of infinite sets? This is where Levinson's discussion of "matching" comes into play. Mathematicians say that two sets have the same cardinality if there exists a bijection between the two sets. Here's the definition from wiki:

In mathematics, a bijection, or a bijective function, is a function f from a set X to a set Y with the property that, for every y in Y, there is exactly one x in X such that f(x) = y, and no unmapped element exists in either X or Y.

As an example, say X is the set of natural numbers {1,2,3,…}, and Y is the set of even numbers {2,4,6,…}. If f is a function from X to Y defined as f(x)=2x, f is a bijection, since for every even number (42, for example), there's only one natural number such that f(x) is that number (21, and nothing else). In addition, every natural number can be doubled, and every even number can be halved, so there are no unmapped elements in either X or Y. In this sense, the set of natural numbers and the set of even numbers are the same "size" (i.e. cardinality).

But what about the set of real numbers ℝ? Emily asked me what they are, and they're incredibly hard to define. They weren't defined rigorously until our favourite century! (The whole field of calculus wasn't rigorously defined until C19.) Intuitively, you can think of them as the set of rational numbers (any number that can be put into the form p/q, where p and q are integers), joined up with the set of irrational numbers, which are numbers whose decimal forms neither terminate nor repeat (the square root of 2, π [the proof that π is irrational is incredibly complicated; it took three lectures after a whole year of intense math just to outline the proof]).

There doesn't exist a bijection between ℕ and ℝ.
There doesn't exist a bijection between ℕ and the set of reals between 0 and 1 either. In that sense, the infinity that's "between 1 and 2" is "bigger" than the set of natural numbers. However! The set of rational numbers does have the same cardinality as ℕ. There is a bijection between the two sets. Don't ask me what that bijection is, but you can imagine a bijection between ℕ and the rational numbers that are between 0 and 1. For example, define g, a function that maps ℕ onto the rationals in the interval between 0 and 1, as follows:

g(1)=0
g(2)=1
g(3)=1/2
g(4)=1/3
g(5)=2/3
g(6)=1/4
g(7)=3/4
g(8)=1/5

Okay, so that's not a definition of g that would make mathematicians happy, and I'm not going to prove that it's a bijection, but you get the idea.

This is where the whole "density" thing gets weird, and why I wanted to get some real math into the picture. Because, mathematically speaking, the rational numbers between 0 and 1 (call this set R) are just as "dense" as the irrational numbers between 0 and 1 (call this set S). Both R and S are considered dense sets in X, where X is the set of real numbers between 0 and 1. Basically that means that, for whatever element of X you choose, you can get as close to it as you would ever want. (For example, take the square root of 1/2. If you picked some number that's super-tiny, but not 0, no matter what, you could find a rational number [infinitely many!] between the square root of 1/2 and the square root of 1/2 plus that super-tiny number.) So if you think of S as X with infinitely many "holes" in it, that's still uncountably infinite, so "bigger" than ℕ (i.e. there is no bijection between S and ℕ).

Now, I think this is still in the realm of the mathematical sublime, but it's still cool. For any set, the power set of it is the set that's made up of all of its subsets. (For example, the power set of the set of Victorianists would include the set of myself, the set of myself plus Anne, the set of all the Victorianists at the Grad Center, the set of all Victorianists whose last name begins with Y, the set of all Victorianists who aren't white…) If you take the power set of ℕ, denoted P(ℕ) (which would include {1,2,3}, {123}, the even numbers, the odd numbers, the prime numbers, the prime numbers which have at least 200 digits, etc.), there's no bijection between that and ℕ. I don't think after all those semesters of math I took I ever got there, but you can prove that P(ℕ) is the same size as ℝ (i.e. there exists a bijection connecting the two). But then what about P(ℝ)? That turns out to be "bigger" than ℝ. And you can take the power set of that again. And again. And again….
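For the curious, the enumeration g sketched above can be made concrete. A small illustrative Python generator (the scheme and function name are my reconstruction of the list given earlier):

from math import gcd
from itertools import islice

def g():
    # 0 and 1 first, then fractions grouped by denominator,
    # skipping unreduced duplicates such as 2/4 = 1/2
    yield 0.0
    yield 1.0
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield p / q
        q += 1

print(list(islice(g(), 8)))
# [0.0, 1.0, 0.5, 0.333..., 0.666..., 0.25, 0.75, 0.2] -- matching g(1) through g(8)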
2017-11-19 21:42:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8686659932136536, "perplexity": 244.65875640182577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805809.59/warc/CC-MAIN-20171119210640-20171119230640-00335.warc.gz"}
http://quadratic-equation-calculator.com?a=97&b=44&c=-58
# Solving 97x^2 + 44x - 58 = 0 using the Quadratic Formula

You entered: 97x^2 + 44x - 58 = 0. There are two real solutions: x = 0.57903596890156, and x = -1.0326442163242.

## Here's how we found that solution:

You entered the following equation:

(1)   97x^2 + 44x - 58 = 0.

For any quadratic equation ax^2 + bx + c = 0, one can solve for x using the following equation, which is known as the quadratic formula:

(2)   $$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

In the form above, you specified values for the variables a, b, and c. Plugging those values into Eqn. 2, we get:

(3)   $$x = \frac{-44 \pm \sqrt{44^2 - 4 \cdot 97 \cdot (-58)}}{2 \cdot 97}$$

which simplifies to:

(4)   $$x = \frac{-44 \pm \sqrt{1936 + 22504}}{194}$$

Now, solving for x, we find two real solutions:

$$x = \frac{-44 + 156.3329779669}{194}$$ = 0.57903596890156, and

$$x = \frac{-44 - 156.3329779669}{194}$$ = -1.0326442163242.

Both of these solutions are real numbers. These are the two solutions that will satisfy the quadratic equation 97x^2 + 44x - 58 = 0.

### Notes

A quadratic equation is any equation that has the form ax^2 + bx + c = 0. In this equation, a, b, and c are constants and x is the unknown; a, b, and c are referred to as coefficients. Further, a cannot equal 0 in the equation ax^2 + bx + c = 0; otherwise, the equation ceases to be a quadratic equation and becomes a linear equation. Solving a linear equation is pretty straightforward. Solving a quadratic equation requires more work. Fortunately, there are a number of methods for solving quadratic equations. One of the most widely used is the quadratic formula shown in Eqn. 2.

When you compute a solution to a quadratic equation, you will always find 2 values for x, called "roots". These roots may both be real numbers, or they may both be complex numbers. If the discriminant b^2 - 4ac is zero, the two roots have the same value, meaning there will be only one distinct solution for x.

You may be asking yourself, "Why is this stuff so important?" Quadratic equations are needed to calculate answers to many real-world problems. For example, computing the path of an accelerating object requires the use of a quadratic equation.

The term "quadratic" comes from the Latin word quadratum, which means "square." Why? Because what defines a quadratic equation is the inclusion of some variable squared. In our equation above, the term x^2 (x squared) is what makes this equation quadratic.

We hope this quadratic equation solver is useful to you, and that the explanations showing how you can solve the equation yourself are educational and helpful. But if you just want to use it to calculate the answers to your quadratic equations, that's cool too. Thank you for using Quadratic-Equation-Calculator.com.
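The site's computation is easy to reproduce. A minimal illustrative Python version of the formula in Eqn. 2 (cmath is used so that complex roots would be handled too):

import cmath

def solve_quadratic(a, b, c):
    # x = (-b ± sqrt(b^2 - 4ac)) / (2a)
    root = cmath.sqrt(b * b - 4 * a * c)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(97, 44, -58))
# approximately ((0.579036+0j), (-1.032644+0j)), matching the two solutions above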
2017-10-22 11:34:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8082067966461182, "perplexity": 388.9264848992679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825227.80/warc/CC-MAIN-20171022113105-20171022133105-00684.warc.gz"}
https://www.integreat.ca/NOTES/CALC/14.01.html
Course: Calculus III
Topic: Vector-Valued Functions
Subtopic: Introduction to VV Functions

Overview

A vector-valued function (a.k.a. vector function) is a vector whose components are functions. For example, the vector $$\vec{r}(t) = \langle t, 1 \rangle$$, whose graph contains the points (1,1), (3,1), (π,1), (-5.2,1), etc.; in other words, it is a way of expressing the line y=1 in vector form. Using parametric equations for each of the components, we can control the vector over time, e.g. $$\vec{v}(t) = \langle \cos(t), \sin(t), \ln(t) \rangle$$.

Objectives

By the end of this topic you should know and be prepared to be tested on:

• 14.1.1 Understand what a vector-valued function is and where it might be used in the real world
• 14.1.2 Understand and use vector-valued function definition, terminology, and formulae
• 14.1.3 Determine the curve formed by a vector-valued function
• 14.1.4 Graph a vector-valued function electronically
• 14.1.5 Find the domain of a vector-valued function

Terminology

Terms you should be able to define: vector-valued function, parametric equation

Supplemental Resources (optional)

Dale Hoffman's Contemporary Calculus III: Introduction to Vector-Valued Functions and Vector-Valued Functions and Curves in Space

Paul's OL Notes - Calc III: Vector Functions

More tutorial videos if you need them are listed at James Sousa's MathIsPower4U - Calc II. Scroll down the right column to "Vector Valued Functions". There are several related titles in the first half of that list.
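As a quick illustration of objectives 14.1.4 and 14.1.5, here is a small Python/NumPy sketch (mine, not part of the course notes) that evaluates the example $$\vec{v}(t) = \langle \cos t, \sin t, \ln t \rangle$$ from the overview; the $$\ln t$$ component is what restricts the domain to $$t > 0$$:

import numpy as np

def v(t):
    # the domain of v is the intersection of the component domains,
    # which is (0, infinity) because of ln(t)
    return np.array([np.cos(t), np.sin(t), np.log(t)])

for t in (0.5, 1.0, float(np.pi)):
    print(t, v(t))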
2021-05-09 10:24:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48005592823028564, "perplexity": 2957.0606630407783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988966.82/warc/CC-MAIN-20210509092814-20210509122814-00358.warc.gz"}
https://able.bio/rhett/what-does-__name__-__main__-do-in-python--33r8cjj
# What does __name__ == '__main__' do in Python?

Sometimes you may come across a strange bit of Python code that looks like this:

# example.py
if __name__ == '__main__':
    # do something

Essentially, programmers use this to check if the Python module, in this case example.py, is being run as the main program. So for example, if you executed this file from the command line then that code block would run. Let's add to the above code slightly to show what is happening.

# example.py
if __name__ == '__main__':
    print('Executed from the command line.')
print('Done.')

Now if we run example.py from the command line, let's see what happens.

$ python example.py
Executed from the command line.
Done.

However, if you import example.py into the Python shell (or another module), the code inside the if statement will not run, because the module has been imported and so the imported module's __name__ variable is not set to '__main__'.

$ python
>>> import example
Done.
>>>

__name__ is a special attribute of the module that holds the name the module is running under. If you execute example.py directly, its __name__ will be set to the string '__main__'; if it is imported instead, its __name__ will be the module name, 'example'.
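To see the attribute directly, here is a small companion script (the file name example2.py is mine, not the original post's); it assumes example.py from above is in the same directory:

# example2.py
import example           # prints 'Done.' as a side effect of the import

print(__name__)          # '__main__' when example2.py is run directly
print(example.__name__)  # 'example' - which is why the guarded block was skipped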
2019-12-05 22:24:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2108241766691208, "perplexity": 2450.5794636664764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482284.9/warc/CC-MAIN-20191205213531-20191206001531-00147.warc.gz"}
http://chronicle.com/blognetwork/castingoutnines/2013/03/18/inside-the-inverted-proofs-class-what-we-did-in-class/
# Inside the inverted proofs class: What we did in class March 18, 2013, 8:00 am I’ve written about the instructional design behind the inverted transition-to-proofs course and the importance of Guided Practice in helping students get the most out of their preparation. Now it comes time to discuss what we actually did in class, having freed up all that time by having reading and viewing done outside of class. I wrote a blog post in the middle of the course describing this to some degree, but looking back on the semester gives a slightly different picture. As I wrote before, each 50-minute class meeting was split up into a 5-minute clicker quiz over the reading and the viewing followed by a Q&A session over whatever we needed to talk about. The material for the Q&A was a combination of student questions from the Guided Practice, trends of misconceptions that I noticed in the Guided Practice responses (whether or not students brought them up), quiz questions with a low success rate, and on-the-spot student questions if they had any. Usually we’d be done with this by 15 minutes into the class, leaving us 35+ minutes to work on Classwork. Classwork from the inverted proofs class was largely the same thing as Homework in the non-inverted version of the class. It’s just that one was done in class and the other wasn’t. Instead of having weekly homework sets with 5–7 problems in each, we did daily classwork sets with 1–3 problems each, three times a week. And many of the problems I gave for Classwork were raided from last year’s Homework archives for the non-inverted version of the class. So there was really not much of a substantive difference in the kinds of work I asked students to do. Only the context was changed. For the most part, this work consisted of proof writing. In a different class, this would not be the case, but here, I needed students working on their writing and reasoning skills constantly. Here is a typical Classwork assignment from about halfway through the course: • Let $$a$$ be an integer and let $$n \in \mathbb{N}$$. Prove that $$a \equiv 0 \, (\text{mod}\, n)$$ if and only if $$n | a$$. • Prove that for every integer $$a$$, if $$a \equiv 3 \, (\text{mod} \, 8)$$, then $$a^2 \equiv 1 (\text{mod} \, 8)$$. Is the converse of this statement also true? These are pretty basic exercises that involve taking the basic terminology and mechanics and doing something not-exactly-mechanical with them. Remember the students also were working outside of class on a Proof Portfolio worth 30% of their grade, so the Classwork was crucial in building up the skills they needed to work on the portfolio problems, which were legitimately hard. I saw the students’ work as a progression: Introductory material —> Procedural understanding —> Done individually through Guided Practice Intermediate skills —> Writing simple proofs —> Done collaboratively through Classwork Advanced skills —> Coming up with complex arguments and proofs —> Done individually with instructor guidance through drafts and revisions in the Proof Portfolio Getting this to work smoothly in the class was a messy and imperfect undertaking. One of the basic problems is that you have students with wildly different levels of facility with this material. There were some groups that could finish the assignment I put above in 15 minutes, leaving them with nothing to do for the entire second half of the meeting. 
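(An aside, for readers who want to see where the second bullet of the Classwork set above leads: a quick numerical probe in Python, which is illustrative only, since the classwork of course asks for a written proof.)

# every a ≡ 3 (mod 8) satisfies a^2 ≡ 1 (mod 8)
assert all(pow(a, 2, 8) == 1 for a in range(3, 1000, 8))
# the converse fails: a^2 ≡ 1 (mod 8) also holds for a ≡ 1, 5, 7 (mod 8)
print(sorted(a % 8 for a in range(8) if pow(a, 2, 8) == 1))   # [1, 3, 5, 7]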
Other groups struggled to know how to proceed at all with any of these proofs, sometimes because they didn’t prepare and sometimes because they did the readings and viewing but couldn’t transfer the knowledge to a new situation — and consequently there was no way they were getting done with this in class. At first my policy was that each group was expected to hand in their solutions by the end of class — and if there were widespread issues with getting the work done, I could grant an extension on the spot if I saw things getting out of hand. This didn’t work because appeals to authority — “You MUST get your work done in 35 minutes or else!” — didn’t help students work better or faster. It just made the lower-performing groups give up faster. Also, for students struggling to learn proof, 35 minutes is not a lot of time, even assuming we had that much time (what if the Q&A session went longer than 15 minutes?). If one of the design challenges to the course was the stress level among students, this was not solving the problem. So then I moved to this policy: You are encouraged to turn in a clean copy of your work by the end of class. If your group finishes all the problems, then you may hand in a single group writeup. If your group does not finish all the problems, each member of the group is responsible for completing the work individually prior to the beginning of next class. In other words, if your group doesn’t finish, the Classwork reverts to traditional Homework and it’s due next time. I liked this approach because it incentivized groups having their acts together and getting the work done, but it didn’t penalize groups for not completing — it only passed the responsibility on to the individual. In practice, this didn’t go so well, and it was my fault. I’d have groups get most of the way through their work in class but not finish, then get confused as to whether they should hand in only the work that didn’t get completed in class, or the entire set — and some students who contributed to the group didn’t write up the stuff that got completed in groups. It was confusing and frustrating for all involved. Even when it worked sort of correctly, it added hugely to my grading load which was already sagging under the weight of twice-weekly Proof Portfolio submissions from 60 students. The third approach to Classwork was the one that finally stuck. I was smart enough at the beginning of the semester to build in some “TBA” days in the course schedule, in case we fell behind or needed some extra time on a topic. By using those and by editing the schedule a little bit, I was able to free up an entire day about once out of every 5–6 class meetings during the last half of the semester. The policy became: Your group should try to finish up the Classwork during class. If you don’t, then we will use these “free” days as makeup days, where your group will hand in any outstanding work by the end of that class meeting. People using standards-based grading do stuff like this, and I think that’s where I got the idea. Groups were free to work outside of class in between meetings if they wanted, and some groups did this and came into the free days with no outstanding work to do — but this was sort of uncommon. This policy also, I’ll admit, gave me permission to give harder problems and more of them for Classwork (within reason). In theory I like the second approach better than the third, but in practice the third approach worked best. What about the groups that had nothing to do? 
Well, there was always something to do. I would sometimes give extra problems for bonus credit. For some groups who were just naturally curious, I could have them think about an extension to a problem and work on it for fun — and they would. For others, I'd let them use the time to work on their Proof Portfolios. Most of the time, though, if a group got done conspicuously early, I'd first ask each group member to explain in their own words the solutions that their group gave. And then if I was satisfied with each person's answer, I'd make up something on the spot for them to work on. I got surprisingly little grief from students about this — but then again it didn't happen very often either.

You're probably wondering about grading at this point. I'll get to that in the next post.

Image: http://www.flickr.com/photos/marcwathieu/
2014-10-25 03:27:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.506935715675354, "perplexity": 856.8236965229482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647626.5/warc/CC-MAIN-20141024030047-00016-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.edaboard.com/threads/where-can-i-get-the-daqlab-software.4275/
Where can I get the DaqLab software?

Status: Not open for further replies.

Jean-Pierre (Newbie level 1): Can anyone help me to get DaqLab?
2022-10-01 14:33:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.844109296798706, "perplexity": 4066.140282476149}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00143.warc.gz"}
https://mathproblems123.wordpress.com/2010/03/26/constant-function/
## Constant function

Let $g: \mathbb{R} \to \mathbb{R}$ be a continuous function such that $\lim_{x\to \infty} g(x)-x=\infty$ and such that the set $\{x : g(x)=x\}$ is finite and nonempty. Prove that if $f: \mathbb{R} \to \mathbb{R}$ is continuous and $f\circ g=f$, then $f$ is constant.

AMM 10818
2018-05-26 11:50:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9544557332992554, "perplexity": 186.26748908883204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.71/warc/CC-MAIN-20180526112331-20180526132331-00372.warc.gz"}
http://web.vu.lt/mif/a.buteikis/wp-content/uploads/PE_Book/3-8-UnivarGoF.html
3.8 Goodness-Of-Fit

Having the following model: $Y_i = \beta_0 + \beta_1 X_i + \epsilon_i,\quad i =1,...,N$ allows us to:

1. Explain how the dependent variable $$Y_i$$ changes if the independent variable $$X_i$$ changes.
2. Predict the value of $$\widetilde{Y}$$, given a value of $$\widetilde{X}$$.

In order to have an accurate prediction of $$Y$$, we hope that the independent variable $$X$$ helps us explain as much variation in $$Y$$ as possible (hence why $$X$$ is usually referred to as an explanatory variable). Ideally, the variance of $$X$$ will help explain the variance in $$Y$$. Having said that, we would like to have a way to measure just how good our model is - how much of the variation in $$Y$$ can be explained by the variation in $$X$$ using our model - we need a goodness-of-fit measure. Another way to look at it is - a goodness-of-fit measure aims to quantify how well the estimated model fits the data. Fortunately, there are many ways to measure the goodness-of-fit of the estimated model.

3.8.1 Model Residuals: RSS, ESS and TSS

We can separate our univariate regression into two components: $Y_i = \mathbb{E}(Y_i|X_i) + \epsilon_i$ where $$\mathbb{E}(Y_i|X_i) = \beta_0 + \beta_1 X_i$$ is the explainable, systematic, component of our model and $$\epsilon_i$$ is the random, unsystematic, unexplainable component of our model.

In practical applications, we do not observe the true systematic and the true random components, but we can use OLS to estimate the unknown parameters. Then our regression can be written as: $Y_i = Y_i \pm \widehat{Y}_i = \widehat{Y}_i + \widehat{\epsilon}_i$ where $$\widehat{Y}_i = \widehat{\beta}_0 + \widehat{\beta}_1 X_i$$ and $$\widehat{\epsilon}_i = Y_i - \widehat{Y}_i$$.

Because the least squares fitted regression passes through the sample mean $$(\overline{Y}, \overline{X})$$, if we subtract the sample mean of $$Y$$, $$\overline{Y} = \dfrac{1}{N} \sum_{i = 1}^N Y_i$$, from both sides of the equation, we rewrite our model in terms of differences (i.e. variation) from the process mean: $Y_i - \overline{Y} = (\widehat{Y}_i -\overline{Y}) + \widehat{\epsilon}_i$

This expression states that the difference between $$Y_i$$ and its sample mean, $$\overline{Y}$$, consists of an explained part, $$(\widehat{Y}_i -\overline{Y})$$, and an unexplained part, $$\widehat{\epsilon}_i$$. Taking the squares of both sides and summing across $$i = 1,...,N$$ yields:

\begin{aligned} \sum_{i = 1}^N \left(Y_i - \overline{Y} \right)^2 &= \sum_{i = 1}^N \left( (\widehat{Y}_i -\overline{Y}) + \widehat{\epsilon}_i \right)^2 \\ &= \sum_{i = 1}^N \left( \widehat{Y}_i -\overline{Y}\right)^2 + 2 \sum_{i = 1}^N \left( \widehat{Y}_i -\overline{Y}\right)\widehat{\epsilon}_i + \sum_{i = 1}^N \widehat{\epsilon}^2_i \end{aligned}

Using the fact that:

\begin{aligned} \sum_{i = 1}^N \left( \widehat{Y}_i - \overline{Y}\right)\widehat{\epsilon}_i &= \sum_{i = 1}^N \left( \widehat{\beta}_0 + \widehat{\beta}_1 X_i\right)\widehat{\epsilon}_i - \overline{Y} \sum_{i = 1}^N \widehat{\epsilon}_i \\ &= \widehat{\beta}_0 \sum_{i = 1}^N \widehat{\epsilon}_i + \widehat{\beta}_1 \sum_{i = 1}^N X_i \widehat{\epsilon}_i - \overline{Y} \sum_{i = 1}^N \widehat{\epsilon}_i \\ &= 0 \end{aligned}

we can rewrite the equality as:

$$\sum_{i = 1}^N \left(Y_i - \overline{Y} \right)^2 = \sum_{i = 1}^N \left( \widehat{Y}_i -\overline{Y}\right)^2 + \sum_{i = 1}^N \widehat{\epsilon}^2_i \tag{3.11}$$

This equation gives us a decomposition of the total sample variation into explained and unexplained components.
Define the following:

• Total Sum of Squares (SST or TSS):

$\text{TSS} = \sum_{i = 1}^N (Y_i - \overline{Y})^2$

It is a measure of the total variation in $$Y$$ around the sample mean.

• Explained Sum of Squares (ESS):

$\text{ESS} = \sum_{i = 1}^N (\widehat{Y}_i - \overline{Y})^2$

It is the part of the total variation in $$Y$$ around the sample mean that is explained by our regression. This is sometimes called the model sum of squares, or the sum of squares due to regression (which is confusingly also abbreviated as "SSR").

• Residual Sum of Squares (SSR or RSS):

$\text{RSS} = \sum_{i = 1}^N \widehat{\epsilon}_i^2$

It is the part of the total variation in $$Y$$ around the sample mean that is not explained by our regression. This is sometimes called the unexplained sum of squares, or the sum of squared estimate of errors (SSE).

Then, (3.11) can be written simply as:

$\text{TSS} = \text{ESS} + \text{RSS}$

3.8.2 R-squared, $$R^2$$

It is often useful to compute a number that summarizes how well the OLS regression fits the data. This measure is called the coefficient of determination, $$R^2$$ - the ratio of the explained variation to the total variation, i.e. the proportion of variation in $$Y$$ that is explained by $$X$$ in our regression model:

$R^2 = \dfrac{\text{ESS}}{\text{TSS}} = 1 - \dfrac{\text{RSS}}{\text{TSS}}$

• The closer $$R^2$$ is to $$1$$, the closer the sample values of $$Y_i$$ are to the fitted values $$\widehat{Y}_i$$ of our regression. If $$R^2 = 1$$, then all the sample data fall exactly on the fitted regression line. In such a case our model would be a perfect fit for our data.
• If the sample data of $$Y$$ and $$X$$ do not have a linear relationship, then $$R^2 = 0$$ in a univariate regression.
• For values $$0 < R^2 < 1$$, $$R^2$$ is interpreted as the proportion of the variation in $$Y$$ around its mean that is explained by the regression model. For example, $$R^2 = 0.17$$ means that $$17\%$$ of the variation in $$Y$$ is explained by $$X$$.

When comparing the $$\text{RSS}$$ of different models, we want to choose the model which better fits our data. If we want to choose a model based on its $$R^2$$ value, we should note a couple of things:

• $$R^2$$ comparison is not valid for models that do not have the same transformation of the dependent variable; for example, two models - one with $$Y$$ and the other with $$\log(Y)$$ as the dependent variable - cannot be compared via $$R^2$$.
• $$R^2$$ does not measure the predictive power of the model. For example, a linear model may be a good fit for the data, but its forecasts may not make economic sense (e.g. forecasting a negative wage for low values of years in education via a simple linear model).
• $$R^2$$ is based on the sample data, so it says nothing about whether our model is close to the true population DGP.
• $$R^2$$ may be low if the error variance, $$\sigma^2$$, is large, or if the variance of $$X$$ is small.
• $$R^2$$ may be large even if the model is wrong. For example, even if the true relationship is non-linear, a linear model may have a larger $$R^2$$ than the quadratic, or even the log-linear, model.
• On the other hand, the goodness-of-fit of the model does not depend on the unit of measurement of our variables (e.g. dollars vs thousands of dollars). Furthermore, comparisons of $$R^2$$ are valid if we compare a simple linear model to a linear-log model, as they both have the same dependent variable, $$Y$$.
In any case, a model should not be chosen only on the basis of model fit with $$R^2$$ as the criterion.

Example 3.30 We will generate a univariate linear regression with $$\beta_0 = 2$$, $$\beta_1 = 0.4$$, $$N = 100$$ and $$X$$ - an equally spaced sequence from the interval $$\left[0, 20 \right]$$.

set.seed(123)
#
N <- 100
beta_0 = 2
beta_1 = 0.4
#
x <- seq(from = 0, to = 20, length.out = N)
e <- rnorm(mean = 0, sd = 2, n = N)
y <- beta_0 + beta_1 * x + e

import numpy as np
#
np.random.seed(123)
#
N = 100
beta_0 = 2
beta_1 = 0.4
#
x = np.linspace(start = 0, stop = 20, num = N)
e = np.random.normal(loc = 0, scale = 2, size = N)
y = beta_0 + beta_1 * x + e

Next, we will estimate the coefficients. We will use the built-in functions, as we have already plentifully shown how the coefficients, standard errors, fitted values and residuals can be calculated manually:

lm_fit <- lm(y ~ x)
print(unname(coef(lm_fit)))
## [1] 1.9322145 0.4248597

import statsmodels.api as sm
# estimate the model with an intercept
lm_fit = sm.OLS(y, sm.add_constant(x)).fit()
print(lm_fit.params)
## [2.02615049 0.40280677]

Next, we will use the residuals to calculate the $$\text{TSS}$$, $$\text{RSS}$$ and $$R^2$$:

RSS <- sum(lm_fit$residuals^2)
TSS <- sum((y - mean(y))^2)
R_sq <- 1 - RSS/TSS
print(R_sq)
## [1] 0.6518439

RSS = np.sum(lm_fit.resid**2)
TSS = np.sum((y - np.mean(y))**2)
R_sq = 1 - RSS / TSS
print(R_sq)
## 0.5200895667266314

Which we can also conveniently extract from the estimated model objects:

print(summary(lm_fit)$r.squared)
## [1] 0.6518439

print(lm_fit.rsquared)
## 0.5200895667266314

Finally, we may look at the full summary output of our models:

print(summary(lm_fit))
##
## Call:
## lm(formula = y ~ x)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -4.9071 -1.1047 -0.0692  1.2970  4.1897
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  1.93221    0.36309   5.322 6.52e-07 ***
## x            0.42486    0.03137  13.546  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.829 on 98 degrees of freedom
## Multiple R-squared:  0.6518, Adjusted R-squared:  0.6483
## F-statistic: 183.5 on 1 and 98 DF,  p-value: < 2.2e-16

print(lm_fit.summary())
##                             OLS Regression Results
## ==============================================================================
## Dep. Variable:                      y   R-squared:                       0.520
## Model:                            OLS   Adj. R-squared:                  0.515
## Method:                 Least Squares   F-statistic:                     106.2
## Date:                Tue, 13 Oct 2020   Prob (F-statistic):           2.63e-17
## Time:                        21:39:35   Log-Likelihood:                -223.27
## No. Observations:                 100   AIC:                             450.5
## Df Residuals:                      98   BIC:                             455.8
## Df Model:                           1
## Covariance Type:            nonrobust
## ==============================================================================
##                  coef    std err          t      P>|t|      [0.025      0.975]
## ------------------------------------------------------------------------------
## const          2.0262      0.452      4.478      0.000       1.128       2.924
## x1             0.4028      0.039     10.306      0.000       0.325       0.480
## ==============================================================================
## Omnibus:                        2.753   Durbin-Watson:                   1.975
## Prob(Omnibus):                  0.252   Jarque-Bera (JB):                1.746
## Skew:                           0.035   Prob(JB):                        0.418
## Kurtosis:                       2.356   Cond. No.                         23.1
## ==============================================================================
##
## Warnings:
## [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

and see a variety of familiar statistics.
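Though the example above computes only the RSS and TSS, the explained sum of squares is just as easy to verify. A minimal follow-up sketch (assuming the lm_fit, y, RSS and TSS objects from the R code above are still in memory):

ESS <- sum((lm_fit$fitted.values - mean(y))^2)
# ESS / TSS reproduces the R-squared reported by summary()
print(ESS / TSS)
# and the decomposition TSS = ESS + RSS holds up to floating-point error
print(all.equal(TSS, ESS + RSS))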
Furthermore, if we decide to scale the variables by, say, dividing by $$10$$, then the $$R^2$$ would be unchanged:

lm_fit_scale_y <- lm(I(y/10) ~ x)
print(unname(coef(lm_fit_scale_y)))
## [1] 0.19322145 0.04248597
print(summary(lm_fit_scale_y)$r.squared)
## [1] 0.6518439

lm_fit_scale_x <- lm(y ~ I(x/10))
print(unname(coef(lm_fit_scale_x)))
## [1] 1.932214 4.248597
print(summary(lm_fit_scale_x)$r.squared)
## [1] 0.6518439

lm_fit_scale_yx <- lm(I(y/10) ~ I(x/10))
print(unname(coef(lm_fit_scale_yx)))
## [1] 0.1932214 0.4248597
print(summary(lm_fit_scale_yx)$r.squared)
## [1] 0.6518439

lm_fit_scale_y = sm.OLS(y/10, sm.add_constant(x)).fit()
print(lm_fit_scale_y.params)
## [0.20261505 0.04028068]
print(lm_fit_scale_y.rsquared)
## 0.5200895667266313

lm_fit_scale_x = sm.OLS(y, sm.add_constant(x/10)).fit()
print(lm_fit_scale_x.params)
## [2.02615049 4.02806766]
print(lm_fit_scale_x.rsquared)
## 0.5200895667266314

lm_fit_scale_yx = sm.OLS(y/10, sm.add_constant(x/10)).fit()
print(lm_fit_scale_yx.params)
## [0.20261505 0.40280677]
print(lm_fit_scale_yx.rsquared)
## 0.5200895667266314

Finally, we will plot $$Y_i - \overline{Y}$$, $$\widehat{Y}_i - \overline{Y}$$ and $$\widehat{\epsilon}_i$$ for a better visual understanding of what $$\text{TSS}$$, $$\text{ESS}$$ and $$\text{RSS}$$ measure and their impact when calculating $$R^2$$:

plot(x, y - mean(y), pch = 19)
points(x, lm_fit$fitted.values - mean(y), col = "blue", pch = 19)
points(x, lm_fit$residuals, col = "red", pch = 19)
#
legend(x = 0, y = 7,
       legend = c(expression(Y[i] - bar(Y)[i]),
                  expression(widehat(Y)[i] - bar(Y)[i]),
                  expression(widehat(epsilon)[i])),
       pch = c(19, 19, 19), col = c("black", "blue", "red"))

import matplotlib.pyplot as plt
#
_ = plt.figure(num = 0, figsize = (10, 8))
_ = plt.plot(x, y - np.mean(y), linestyle = "None", marker = "o", color = "black", label = "$Y_i - \\overline{Y}_i$")
_ = plt.plot(x, lm_fit.fittedvalues - np.mean(y), linestyle = "None", marker = "o", color = "blue", label = "$\\widehat{Y}_i - \\overline{Y}_i$")
_ = plt.plot(x, lm_fit.resid, linestyle = "None", marker = "o", color = "red", label = "$\\widehat{\\epsilon}_i$")
_ = plt.legend()
plt.show()

Example 3.31 We will now present an example of when a linear model may prove to be a better fit in terms of its $$R^2$$, even when the data has a nonlinear relationship.

3.8.2.1 Cases When $$R^2$$ is Negative

A case of negative $$R^2$$ can arise when:

1. The predictions being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data - e.g. if we try to guess the coefficient values, or assume that the coefficients of models estimated on similar data, or in similar countries, would be the same for our data;
2. We do not include an intercept, $$\beta_0$$, in our linear regression model;
3. A non-linear function is used to fit the data.

In cases where negative $$R^2$$ values arise, the mean of the data provides a better fit to the outcomes than the fitted model values, according to this criterion, $$R^2$$. We will later see that there is a variety of alternative criteria for evaluating the accuracy of a model. We will look at each case separately.
3.8.2.1.1 Fitted values are not derived from the data being analysed

Let's say that we use a model which was fitted on the following dataset:

set.seed(123)
#
N <- 1000
#
x0 <- sample(seq(from = 0, to = 2, length.out = N), replace = TRUE)
e0 <- rnorm(mean = 0, sd = 1, n = N)
y0 <- -2 + 2 * x0 + e0

np.random.seed(123)
#
N = 1000
#
x0 = np.random.choice(np.linspace(start = 0, stop = 2, num = N), size = N, replace = True)
e0 = np.random.normal(loc = 0, scale = 1, size = N)
y0 = -2 + 2 * x0 + e0

The estimated model of such a dataset is:

lm_fit0 <- lm(y0 ~ 1 + x0)
print(coef(lm_fit0))
## (Intercept)          x0
##   -1.954413    1.971793

lm_fit0 = sm.OLS(y0, sm.add_constant(x0)).fit()
print(lm_fit0.params)
## [-2.00889979  2.01142461]

Now, assume that we are analyzing a different data sample. Let's say that our data sample comes from the following underlying DGP:

set.seed(456)
#
N <- 1000
beta_0 <- 2
beta_1 <- -2
#
x_other <- sample(seq(from = 0, to = 2, length.out = N), replace = TRUE)
e_other <- rnorm(mean = 0, sd = 1, n = length(x_other))
y_other <- beta_0 + beta_1 * x_other + e_other

np.random.seed(456)
#
N = 1000
beta_0 = 2
beta_1 = -2
#
x_other = np.random.choice(np.linspace(start = 0, stop = 2, num = N), size = N, replace = True)
e_other = np.random.normal(loc = 0, scale = 1, size = N)
y_other = beta_0 + beta_1 * x_other + e_other

However, we make the incorrect assumption that our data sample comes from the same population as the previous data. This leads us to calculating the fitted values, residuals and $$R^2$$ using the pre-estimated coefficients:

y_fit_other <- coef(lm_fit0)[1] + coef(lm_fit0)[2] * x_other
resid_other <- y_other - y_fit_other
#
RSS <- sum(resid_other^2)
TSS <- sum((y_other - mean(y_other))^2)
R_sq <- 1 - RSS/TSS
print(R_sq)
## [1] -19.48824

y_fit_other = lm_fit0.params[0] + lm_fit0.params[1] * x_other
resid_other = y_other - y_fit_other
#
RSS = np.sum(np.array(resid_other)**2)
TSS = np.sum((y_other - np.mean(y_other))**2)
R_sq = 1 - RSS / TSS
print(R_sq)
## -1.7715352852594477

Visual inspection reveals that our assumption - that an existing model of one dataset is good enough for our dataset - was incorrect: it is clear that our dataset is from a different DGP. For comparison, we also plot the process mean.

plot(x_other, y_other)
lines(x_other[order(x_other)], y_fit_other[order(x_other)], col = "red")
abline(h = mean(y_other), col = "blue", lty = 2)
legend(x = 1.5, y = 4,
       legend = c("Data Sample", expression(widehat(Y)[i]), expression(bar(Y))),
       pch = c(1, NA, NA), lty = c(NA, 1, 2), col = c("black", "red", "blue"))

_ = plt.figure(num = 1, figsize = (10, 8))
_ = plt.plot(x_other, y_other, linestyle = "None", marker = "o", markerfacecolor = "None", color = "black", label = "Data Sample")
_ = plt.plot(x_other[np.argsort(x_other)], y_fit_other[np.argsort(x_other)], linestyle = "-", color = "red", label = "$\\widehat{Y}_i$")
_ = plt.axhline(y = np.mean(y_other), linestyle = "--", color = "blue", label = "$\\overline{Y}_i$")
_ = plt.legend()
plt.show()

If we compare models from datasets of different countries or different firms, we would run into such problems. For example, if one firm is very large, while another is relatively new and small, then assuming that a model estimated on the data of one firm can be applied to the data of the new firm would be incorrect - some variables may have similar effects, but they would most likely not be the same in magnitude.
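For contrast, a minimal sketch (reusing the x_other and y_other objects simulated above) of what happens when we instead estimate the coefficients on the sample actually being analysed - the fitted values are then derived from the data and $$R^2$$ returns to its usual $$[0, 1]$$ range:

lm_fit_other <- lm(y_other ~ 1 + x_other)
# fitted values now come from a model estimated on this very sample
R_sq_other <- 1 - sum(resid(lm_fit_other)^2) / sum((y_other - mean(y_other))^2)
print(R_sq_other)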
3.8.2.1.2 Regression without an intercept

We will generate data from a model with an intercept:

set.seed(123)
#
N <- 100
beta_0 <- 30
beta_1 <- 2
#
x <- seq(from = 0, to = 20, length.out = N)
e <- rnorm(mean = 0, sd = 1, n = N)
y <- beta_0 + beta_1 * x + e

np.random.seed(123)
#
N = 100
beta_0 = 30
beta_1 = 2
#
x = np.linspace(start = 0, stop = 20, num = N)
e = np.random.normal(loc = 0, scale = 1, size = N)
y = beta_0 + beta_1 * x + e

But we will estimate the parameters of a regression model without an intercept. The estimated coefficient, fitted values and residuals are calculated as follows (take note that we do not include a constant in the independent variable matrix):

lm_fit <- lm(y ~ -1 + x)
print(coef(lm_fit))
##        x
## 4.248594

lm_fit = sm.OLS(y, x).fit()
print(lm_fit.params)
## [4.24107257]

Which results in the following negative $$R^2$$:

RSS <- sum(lm_fit$residuals^2)
TSS <- sum((y - mean(y))^2)
R_sq <- 1 - RSS/TSS
print(R_sq)
## [1] -0.6507254

RSS = np.sum(np.array(lm_fit.resid)**2)
TSS = np.sum((y - np.mean(y))**2)
R_sq = 1 - RSS / TSS
print(R_sq)
## -0.6718501471478293

For cases when a model does not have an intercept, $$R^2$$ is usually computed as:

$R^2 = 1 - \dfrac{RSS}{\sum_{i = 1}^N Y_i^2}$

where the denominator acts as if we assume that $$\mathbb{E}(Y) = 0$$ (and hence that $$\overline{Y} \approx 0$$). Applying this expression for the $$R^2$$ yields:

R_sq <- 1 - RSS/sum(y^2)
print(R_sq)
## [1] 0.9136212

R_sq = 1 - RSS/np.sum(np.array(y)**2)
print(R_sq)
## 0.9129369975970213

Furthermore, this value of $$R^2$$ is (silently) applied in the built-in OLS estimation functions:

print(summary(lm_fit)$r.squared)
## [1] 0.9136212

print(lm_fit.rsquared)
## 0.9129369975970213

Unfortunately, if $$R^2$$ is calculated in this way, it ignores a very important fact about our model: a negative $$R^2$$ indicates that the regression is actually worse than a simple average of the process. In fact, the modified $$R^2$$ shows a very high value - the complete opposite of what we would expect to see in such a situation. Visually, we can see that our model provides quite a poor fit. For comparison, we also plot the process mean:

plot(x, y, ylim = c(0, max(lm_fit$fitted.values)))
lines(x, lm_fit$fitted.values, col = "red")
abline(h = mean(y), lty = 2, col = "blue")
legend(x = 0, y = 80,
       legend = c("Data Sample", expression(widehat(Y) == widehat(beta)[1]*X), expression(bar(Y))),
       lty = c(NA, 1, 2), pch = c(1, NA, NA), col = c("black", "red", "blue"))

_ = plt.figure(num = 2, figsize = (10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o", markerfacecolor = "None", color = "black", label = "Data Sample")
_ = plt.plot(x[np.argsort(x)], lm_fit.fittedvalues[np.argsort(x)], linestyle = "-", color = "red", label = "$\\widehat{Y} = \\widehat{\\beta}_1 X$")
_ = plt.axhline(y = np.mean(y), linestyle = "--", color = "blue", label = "$\\overline{Y}$")
_ = plt.legend()
plt.show()

So, while the modified $$R^2$$ seems high, in reality the model provides a poor fit for the data sample.

3.8.2.1.3 A nonlinear function is used to fit the data with large error variance

As an example, we will simulate data from the following log-linear model: $$\log(Y) = \beta_0 + \beta_1 X + \epsilon$$, where $$\beta_0 = 0.2$$, $$\beta_1 = 2$$, $$N = 100$$, $$\epsilon \sim \mathcal{N}(0, 1)$$, and $$X$$ is a random sample with replacement from the interval from $$0$$ to $$0.5$$, equally spaced into $$N$$ elements.
set.seed(123)
#
N <- 100
beta_0 <- 0.2
beta_1 <- 2
#
x <- sample(seq(from = 0, to = 0.5, length.out = N), replace = TRUE)
e <- rnorm(mean = 0, sd = 1, n = length(x))
y <- exp(beta_0 + beta_1 * x + e)

np.random.seed(123)
#
N = 100
beta_0 = 0.2
beta_1 = 2
#
x = np.random.choice(np.linspace(start = 0, stop = 0.5, num = N), size = N, replace = True)
e = np.random.normal(loc = 0, scale = 1, size = N)
y = np.exp(beta_0 + beta_1 * x + e)

This data has a small variation in $$X$$ and a (relative to the variance of $$\log(Y)$$ and of $$\epsilon$$) large error variance:

print(var(x))
## [1] 0.02231239
print(var(log(y)))
## [1] 0.8985592
print(var(e))
## [1] 0.8482849

print(np.var(x))
## 0.023800418324660753
print(np.var(np.log(y)))
## 1.1624704319571753
print(np.var(e))
## 1.0245932457731062

If we estimate the correct model and look at the coefficients and $$R^2$$:

lm_fit <- lm(log(y) ~ x)
print(lm_fit$coefficients)
## (Intercept)           x
##   0.3305187   1.5632999
print(summary(lm_fit)$r.squared)
## [1] 0.06068536

lm_fit = sm.OLS(np.log(y), sm.add_constant(x)).fit()
print(lm_fit.params)
## [0.0878827  2.44826432]
print(lm_fit.rsquared)
## 0.12272111156714938

We see that the $$R^2$$ is very small. Furthermore, if we were to back-transform our fitted values and calculate $$R^2$$:

y_fit <- exp(lm_fit$fitted.values)
resid <- y - y_fit
#
RSS <- sum(resid^2)
TSS <- sum((y - mean(y))^2)
R_sq <- 1 - RSS/TSS
print(R_sq)
## [1] -0.06587727

y_fit = np.exp(lm_fit.fittedvalues)
resid = y - y_fit
#
RSS = np.sum(np.array(resid)**2)
TSS = np.sum((y - np.mean(y))**2)
R_sq = 1 - RSS / TSS
print(R_sq)
## -0.04781349315972472

We see that it is even worse. The plot of the fitted values:

plot(x, y)
lines(x[order(x)], y_fit[order(x)], col = "red")
abline(h = mean(y), lty = 2, col = "blue")

_ = plt.figure(num = 3, figsize = (10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.plot(x[np.argsort(x)], y_fit[np.argsort(x)], linestyle = "-", color = "red")
_ = plt.axhline(y = np.mean(y), linestyle = "--", color = "blue")
plt.show()

So, a large variance of the error term, or a small variance of the independent variable, will result in a lower $$R^2$$ value overall. Furthermore, back-transforming would likely result in an even lower $$R^2$$ value.
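The role of the two variances can be made explicit. In a univariate regression with an intercept, the definition of $$R^2$$ reduces to the identity $$R^2 = \widehat{\beta}_1^2 \sum_i (X_i - \overline{X})^2 / \sum_i (Y_i - \overline{Y})^2$$, so a small spread of $$X$$, or a large error variance inflating the spread of $$Y$$, mechanically lowers $$R^2$$. A minimal sketch verifying this identity, reusing the lm_fit, x and y objects from the R code above:

b1 <- coef(lm_fit)[2]
S_xx <- sum((x - mean(x))^2)
S_yy <- sum((log(y) - mean(log(y)))^2)
# ESS = b1^2 * S_xx in a univariate regression, hence R^2 = b1^2 * S_xx / S_yy
print(unname(b1^2 * S_xx / S_yy))
print(summary(lm_fit)$r.squared)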
Finally, plotting the residuals against the fitted values indicates that the residuals of $$Y - \exp(\widehat{\log(Y)})$$ do not have the same properties as the log-linear model residuals:

par(mfrow = c(2, 1))
#
plot(lm_fit$fitted.values, lm_fit$residuals, main = "log-linear model residuals")
#
plot(y_fit, resid, main = "Dependent variable and the back-transformed fitted value residuals")

fig = plt.figure(num = 4, figsize = (10, 8))
_ = fig.add_subplot('211').plot(lm_fit.fittedvalues, lm_fit.resid, color = "black", linestyle = "None", marker = "o", markerfacecolor = 'None')
_ = plt.title("log-linear model residuals")
_ = fig.add_subplot('212').plot(y_fit, resid, color = "black", linestyle = "None", marker = "o", markerfacecolor = 'None')
_ = plt.title("Dependent variable and the back-transformed fitted value residuals")
plt.show()

3.8.2.2 Correlation Analysis

The correlation coefficient between $$X$$ and $$Y$$ is defined as:

$\rho_{X,Y} = \dfrac{\mathbb{C}{\rm ov} (X, Y)}{\sqrt{\mathbb{V}{\rm ar} (X)} \sqrt{\mathbb{V}{\rm ar} (Y)}} = \dfrac{\sigma_{X,Y}}{\sigma_X \sigma_Y}$

The sample correlation is calculated as:

$r_{X, Y} = \dfrac{\widehat{\sigma}_{X,Y}}{\widehat{\sigma}_X \widehat{\sigma}_Y} = \dfrac{\dfrac{1}{N-1} \sum_{i = 1}^N (X_i - \overline{X})(Y_i - \overline{Y})}{\sqrt{\dfrac{1}{N-1} \sum_{i = 1}^N (X_i - \overline{X})^2}\sqrt{\dfrac{1}{N-1} \sum_{i = 1}^N (Y_i - \overline{Y})^2}}$

The sample correlation, $$-1 \leq r_{X,Y} \leq 1$$, measures the strength of the linear association between the sample values of $$X$$ and $$Y$$.

3.8.2.3 Correlation Analysis and $$R^2$$

There is a relationship between $$R^2$$ and $$r_{X,Y}$$:

1. $$R^2 = r_{X,Y}^2$$. So, $$R^2$$ can be computed as the square of the sample correlation between $$Y$$ and $$X$$.
2. $$R^2 = r_{Y, \widehat{Y}}^2$$. So, $$R^2$$ can be computed as the square of the sample correlation between $$Y$$ and its fitted values $$\widehat{Y}$$.

As such, $$R^2$$ measures the linear association - the goodness-of-fit - between the sample data $$Y$$ and its predicted values $$\widehat{Y}$$. Because of this, $$R^2$$ is sometimes called a measure of goodness-of-fit.

print(summary(lm_fit)$r.squared)
## [1] 0.06068536

print(lm_fit.rsquared)
## 0.12272111156714938

print(cor(log(y), x)^2)
## [1] 0.06068536

print(np.corrcoef(np.log(y), x)[0][1]**2)
## 0.12272111156714918

print(cor(log(y), lm_fit$fitted.values)^2)
## [1] 0.06068536

print(np.corrcoef(np.log(y), lm_fit.fittedvalues)[0][1]**2)
## 0.12272111156714925

3.8.2.4 A General (pseudo) $$R$$-squared Measure, $$R^2_g$$

As we have seen, we may need to back-transform the fitted values of our dependent variable. Then, we can calculate a general (pseudo) measure of $$R^2$$:

$R^2_g = r_{Y, \widehat{Y}}^2 = \mathbb{C}{\rm orr}(Y, \widehat{Y})^2$

In our previous example we can calculate $$R^2_g$$ for both the log and the back-transformed values:

cor(log(y), lm_fit$fitted.values)^2
## [1] 0.06068536
cor(y, y_fit)^2
## [1] 0.03673844

print(np.corrcoef(np.log(y), lm_fit.fittedvalues)[0][1]**2)
## 0.12272111156714925
print(np.corrcoef(y, y_fit)[0][1]**2)
## 0.05975262825731168

A way to look at it is that $$R^2$$ measures the variation explained by our model, whereas $$R^2_g$$ measures the variance explained by our model. In a linear regression, the two definitions are the same, as long as the intercept coefficient is included in the model.

3.8.3 Regression Diagnostics

In many cases, while carrying out statistical/econometric analysis, we are not sure whether we have correctly specified our model.
As we have seen, the $$R^2$$ can be artificially small (or large), regardless of the specified model. As such, there are a number of regression diagnostics and specification tests. For the univariate regression, the most crucial assumptions come from (UR.3) and (UR.4), namely:

• $$\mathbb{V}{\rm ar} (\epsilon_i | \mathbf{X} ) = \sigma^2_\epsilon,\ \forall i = 1,..,N$$
• $$\mathbb{C}{\rm ov} (\epsilon_i, \epsilon_j) = 0,\ i \neq j$$
• $$\boldsymbol{\varepsilon} | \mathbf{X} \sim \mathcal{N} \left( \mathbf{0}, \sigma^2_\epsilon \mathbf{I} \right)$$

We note that the residuals are defined as:

$\begin{aligned} \widehat{\boldsymbol{\varepsilon}} &= \mathbf{Y} - \widehat{\mathbf{Y}} \\ &= \mathbf{Y} - \mathbf{X} \widehat{\boldsymbol{\beta}} \\ &= \mathbf{Y} - \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{Y} \\ &= \left[ \mathbf{I} - \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \right]\mathbf{Y} \end{aligned}$

Hence, for the OLS residuals (i.e. not the true unobserved errors), the expected value of the residuals is still zero:

$\begin{aligned} \mathbb{E} \left( \widehat{\boldsymbol{\varepsilon}}| \mathbf{X} \right) &= \mathbb{E} \left( \left[ \mathbf{I} - \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \right]\mathbf{Y} | \mathbf{X} \right)\\ &= \mathbb{E} \left( \left[ \mathbf{I} - \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \right] \left( \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon} \right) | \mathbf{X} \right) \\ &= \mathbf{X} \boldsymbol{\beta} + \mathbb{E} (\boldsymbol{\varepsilon}) - \mathbf{X} \boldsymbol{\beta} - \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbb{E} (\boldsymbol{\varepsilon}) \\ &= 0 \end{aligned}$

For simplicity, let $$\widehat{\boldsymbol{\varepsilon}} = \left[ \mathbf{I} - \mathbf{H}\right]\mathbf{Y}$$, where $$\mathbf{H} = \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top$$. Since $$\mathbf{H}$$ is symmetric ($$\mathbf{H}^\top = \mathbf{H}$$) and idempotent ($$\mathbf{H} \mathbf{H}^\top = \mathbf{H}$$), the variance-covariance matrix of the residuals is:

$\begin{aligned} \mathbb{V}{\rm ar} \left( \widehat{\boldsymbol{\varepsilon}}| \mathbf{X}\right) &= \mathbb{V}{\rm ar} \left( \left[ \mathbf{I} - \mathbf{H}\right]\mathbf{Y}|\mathbf{X}\right) \\ &= \left[ \mathbf{I} - \mathbf{H}\right]\mathbb{V}{\rm ar} \left( \mathbf{Y} | \mathbf{X}\right) \left[ \mathbf{I} - \mathbf{H}\right]^\top \\ &= \sigma^2 \left[ \mathbf{I} - \mathbf{H}\right] \left[ \mathbf{I} - \mathbf{H}\right]^\top \\ &= \sigma^2 \left[ \mathbf{I} - \mathbf{H}^\top - \mathbf{H} + \mathbf{H} \mathbf{H}^\top\right] \\ &= \sigma^2 \left[ \mathbf{I} - \mathbf{H}\right] \end{aligned} \tag{3.12}$

This result shows an important distinction between the residuals and the errors: the residuals may have different variances (the diagonal elements of $$\mathbb{V}{\rm ar} \left( \widehat{\boldsymbol{\varepsilon}}| \mathbf{X}\right)$$), even if the true errors (which affect the process $$\mathbf{Y}$$) all have the same variance $$\sigma^2$$. The variance of the fitted values is smallest for observations near the mean and largest for values which deviate the most from the process mean.

3.8.3.1 Residual Diagnostic Plots

One way to examine the adequacy of the model is to visualize the residuals.
There are a number of ways to do this:

• Plotting the residuals $$\widehat{\epsilon}_i$$ against the fitted values $$\widehat{Y}_i$$;
• Plotting the residuals $$\widehat{\epsilon}_i$$ against $$X_i$$;
• Plotting the residual Q-Q plot, histogram or boxplot.

In all cases, if there are no violations of our (UR.2) or (UR.3) assumptions, the plots should reveal no patterns. The residual histogram and Q-Q plot should be approximately normal, so that our assumption (UR.4) holds. As we are not guaranteed to specify a correct functional form, residual plots offer great insight into which possible functional form we may have missed. We should note that, when having multiple models, it is only meaningful to compare the residuals of models with the same dependent variable. For example, comparing the residuals of a linear-linear model (with $$Y$$) and of a log-linear model (with $$\log(Y)$$) is not meaningful, as they have different value scales. Transforming the dependent or the independent variables may help to alleviate some of the problems in the residuals:

• If nonlinearities are present in the residual plots, we must firstly account for them; only afterwards can we check whether the errors have a constant variance.
• Transforming $$\mathbf{Y}$$ primarily aims to help with problems with the error terms (and may help with non-linearity);
• Transforming $$\mathbf{X}$$ primarily aims to help with correcting for non-linearity;
• Sometimes transforming $$\mathbf{X}$$ is enough to account for non-linearity and have normally distributed errors, while transforming $$\mathbf{Y}$$ may account for non-linearity but make the errors non-normally distributed.
• Other times, transforming $$\mathbf{X}$$ does not help account for the nonlinear relationship at all.

Remember that the Q-Q plot plots quantiles of the data versus quantiles of a distribution. If the observations come from a normal distribution, we would expect the observed order statistics plotted against the expected (theoretical) order statistics to form an approximately straight line.

Example 3.32 We will generate four different models:

• a simple linear model: $$Y = \beta_0 + \beta_1 X + \epsilon$$;
• a log-linear model: $$\log(Y) = \beta_0 + \beta_1 X + \epsilon$$;
• a linear-log model: $$Y = \beta_0 + \beta_1 \log(X) + \epsilon$$;
• a log-log model: $$\log(Y) = \beta_0 + \beta_1 \log(X) + \epsilon$$.

For each case, we will estimate a simple linear model on the data and examine the residual plots. For simplicity, we will use the same $$X_i$$ and the same $$\epsilon_i \sim \mathcal{N}(0, 0.2^2)$$, $$i = 1,...,N$$, with $$N = 200$$, $$\beta_0 = 1$$ and $$\beta_1 = 2$$.
set.seed(123)
# Sample size and coefficients
N = 200
beta_0 <- 1
beta_1 <- 2
# Variables which will be the same for each model:
x <- seq(from = 0.1, to = 2, length.out = N)
e <- rnorm(mean = 0, sd = 0.2, n = N)
# Simple linear model
y <- beta_0 + beta_1 * x + e
data_lin <- data.frame(y, x, e)
# Linear-Log model:
y <- beta_0 + beta_1 * log(x) + e
data_linlog <- data.frame(y, x, e)
# Log-linear model:
y <- exp(beta_0 + beta_1 * x + e)
data_loglin <- data.frame(y, x, e)
# Log-Log model:
y <- exp(beta_0 + beta_1 * log(x) + e)
data_loglog <- data.frame(y, x, e)

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
import scipy.stats as stats
#
np.random.seed(123)
# Sample size and coefficients
N = 200
beta_0 = 1
beta_1 = 2
# Variables which will be the same for each model:
x = np.linspace(start = 0.1, stop = 2, num = N)
e = np.random.normal(loc = 0, scale = 0.2, size = N)
# Simple linear model
y = beta_0 + beta_1 * x + e
data_lin = pd.DataFrame([y, x, e], index = ["y", "x", "e"]).T
# Linear-Log model:
y = beta_0 + beta_1 * np.log(x) + e
data_linlog = pd.DataFrame([y, x, e], index = ["y", "x", "e"]).T
# Log-linear model:
y = np.exp(beta_0 + beta_1 * x + e)
data_loglin = pd.DataFrame([y, x, e], index = ["y", "x", "e"]).T
# Log-Log model:
y = np.exp(beta_0 + beta_1 * np.log(x) + e)
data_loglog = pd.DataFrame([y, x, e], index = ["y", "x", "e"]).T

# Plot the data - remember: X is the same for all models
par(mfrow = c(2, 2))
#
plot(x, data_lin$y, main = "Simple linear DGP")
#
plot(x, data_linlog$y, main = "linear-log DGP")
#
plot(x, data_loglin$y, main = "log-linear DGP")
#
plot(x, data_loglog$y, main = "log-log DGP")

# Plot the data - remember: X is the same for all models
fig = plt.figure(num = 5, figsize = (10, 8))
_ = fig.add_subplot("221").plot(x, data_lin["y"], linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.title("Simple linear DGP")
_ = fig.add_subplot("222").plot(x, data_linlog["y"], linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.title("linear-log DGP")
_ = fig.add_subplot("223").plot(x, data_loglin["y"], linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.title("log-linear DGP")
_ = fig.add_subplot("224").plot(x, data_loglog["y"], linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.title("log-log DGP")
plt.tight_layout()
plt.show()

Next, we will estimate the simple linear regression for each dataset:

mdl1 <- lm(y ~ 1 + x, data = data_lin)
mdl2 <- lm(y ~ 1 + x, data = data_linlog)
mdl3 <- lm(y ~ 1 + x, data = data_loglin)
mdl4 <- lm(y ~ 1 + x, data = data_loglog)

mdl1 = sm.OLS(data_lin["y"], sm.add_constant(data_lin["x"])).fit()
mdl2 = sm.OLS(data_linlog["y"], sm.add_constant(data_linlog["x"])).fit()
mdl3 = sm.OLS(data_loglin["y"], sm.add_constant(data_loglin["x"])).fit()
mdl4 = sm.OLS(data_loglog["y"], sm.add_constant(data_loglog["x"])).fit()

We can plot the fitted values alongside the actual data:

par(mfrow = c(2, 2))
#
plot(x, data_lin$y, main = "Simple linear DGP")
lines(x[order(x)], mdl1$fitted.values[order(x)], col = "red", lwd = 2)
#
plot(x, data_linlog$y, main = "linear-log DGP")
lines(x[order(x)], mdl2$fitted.values[order(x)], col = "red", lwd = 2)
#
plot(x, data_loglin$y, main = "log-linear DGP")
lines(x[order(x)], mdl3$fitted.values[order(x)], col = "red", lwd = 2)
#
plot(x, data_loglog$y, main = "log-log DGP")
lines(x[order(x)], mdl4$fitted.values[order(x)], col = "red", lwd = 2)
fig = plt.figure(num = 6, figsize = (10, 8))
_ = fig.add_subplot("221").plot(x, data_lin["y"], linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.plot(x[np.argsort(x)], mdl1.fittedvalues[np.argsort(x)], color = "red", linewidth = 2)
_ = plt.title("Simple linear DGP")
_ = fig.add_subplot("222").plot(x, data_linlog["y"], linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.plot(x[np.argsort(x)], mdl2.fittedvalues[np.argsort(x)], color = "red", linewidth = 2)
_ = plt.title("linear-log DGP")
_ = fig.add_subplot("223").plot(x, data_loglin["y"], linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.plot(x[np.argsort(x)], mdl3.fittedvalues[np.argsort(x)], color = "red", linewidth = 2)
_ = plt.title("log-linear DGP")
_ = fig.add_subplot("224").plot(x, data_loglog["y"], linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.plot(x[np.argsort(x)], mdl4.fittedvalues[np.argsort(x)], color = "red", linewidth = 2)
_ = plt.title("log-log DGP")
plt.tight_layout()
plt.show()

Then, the different residual plots are as follows:

plot_resid <- function(resid, y_fit, x, plt_title){
  plot(c(0, 1), c(0, 1), ann = F, bty = 'n', type = 'n', xaxt = 'n', yaxt = 'n')
  text(x = 0.5, y = 0.5, plt_title, cex = 1.6, col = "black")
  #
  qqnorm(resid, main = "Q-Q plot of residuals")
  qqline(resid, col = "red", lwd = 2)
  #
  hist(resid, main = "Histogram of residuals", col = "cornflowerblue", breaks = 25)
  #
  plot(y_fit, resid, main = "Residuals vs Fitted values")
  #
  plot(x, resid, main = "Residuals vs X")
}

def plot_resid(resid, y_fit, x, plt_title, plt_row, plt_col, plt_pos, fig):
    ax = fig.add_subplot(plt_row, plt_col, plt_pos)
    ax.set_yticklabels([])
    ax.set_xticklabels([])
    ax.tick_params(right = False, top = False, left = False, bottom = False)
    ax.text(0.5, 0.5, plt_title, horizontalalignment = 'center', verticalalignment = 'center', transform = ax.transAxes)
    #
    ax = fig.add_subplot(plt_row, plt_col, plt_pos + 1)
    stats.probplot(resid, dist = "norm", plot = ax)
    #
    ax = fig.add_subplot(plt_row, plt_col, plt_pos + 2)
    ax.hist(resid, color = "cornflowerblue", bins = 25, ec = 'black')
    plt.title("Histogram of residuals")
    #
    ax = fig.add_subplot(plt_row, plt_col, plt_pos + 3)
    ax.plot(y_fit, resid, linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
    plt.title("Residuals vs Fitted values")
    #
    ax = fig.add_subplot(plt_row, plt_col, plt_pos + 4)
    ax.plot(x, resid, linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
    plt.title("Residuals vs X")

par(mfrow = c(4, 5))
#
plot_resid(mdl1$residuals, mdl1$fitted.values, x, "simple linear DGP")
#
plot_resid(mdl2$residuals, mdl2$fitted.values, x, "linear-log DGP")
#
plot_resid(mdl3$residuals, mdl3$fitted.values, x, "log-linear DGP")
#
plot_resid(mdl4$residuals, mdl4$fitted.values, x, "log-log DGP")

fig = plt.figure(num = 7, figsize = (12, 10))
#
plot_resid(mdl1.resid, mdl1.fittedvalues, x, "simple linear DGP", 4, 5, 1, fig)
plot_resid(mdl2.resid, mdl2.fittedvalues, x, "linear-log DGP", 4, 5, 6, fig)
plot_resid(mdl3.resid, mdl3.fittedvalues, x, "log-linear DGP", 4, 5, 11, fig)
plot_resid(mdl4.resid, mdl4.fittedvalues, x, "log-log DGP", 4, 5, 16, fig)
#
plt.tight_layout()
## <string>:1: UserWarning: Tight layout not applied. tight_layout cannot make axes width small enough to accommodate all axes decorations
plt.show()

We see that the linear model for the dataset which is generated from a simple linear DGP has residuals that appear to be random - we do not see any non-random patterns in the scatterplots. Furthermore, the histogram and Q-Q plot indicate that the residuals may be from a normal distribution. On the other hand, a simple linear model does not fit the data well if the data is sampled from a non-linear DGP - we see clear patterns in the residual scatter plots, as well as non-normality. For comparison, if we were to fit the correct models, we would have the following plots:

mdl2_correct <- lm(y ~ 1 + log(x), data = data_linlog)
mdl3_correct <- lm(log(y) ~ 1 + x, data = data_loglin)
mdl4_correct <- lm(log(y) ~ 1 + log(x), data = data_loglog)
#
par(mfrow = c(4, 5))
#
plot_resid(mdl1$residuals, mdl1$fitted.values, x, "simple linear DGP")
#
plot_resid(mdl2_correct$residuals, mdl2_correct$fitted.values, x, "linear-log DGP")
#
plot_resid(mdl3_correct$residuals, mdl3_correct$fitted.values, x, "log-linear DGP")
#
plot_resid(mdl4_correct$residuals, mdl4_correct$fitted.values, x, "log-log DGP")

mdl2_correct = sm.OLS(data_linlog["y"], sm.add_constant(np.log(data_linlog["x"]))).fit()
mdl3_correct = sm.OLS(np.log(data_loglin["y"]), sm.add_constant(data_loglin["x"])).fit()
mdl4_correct = sm.OLS(np.log(data_loglog["y"]), sm.add_constant(np.log(data_loglog["x"]))).fit()
#
fig = plt.figure(num = 8, figsize = (12, 10))
#
plot_resid(mdl1.resid, mdl1.fittedvalues, x, "simple linear DGP", 4, 5, 1, fig)
plot_resid(mdl2_correct.resid, mdl2_correct.fittedvalues, x, "linear-log DGP", 4, 5, 6, fig)
plot_resid(mdl3_correct.resid, mdl3_correct.fittedvalues, x, "log-linear DGP", 4, 5, 11, fig)
plot_resid(mdl4_correct.resid, mdl4_correct.fittedvalues, x, "log-log DGP", 4, 5, 16, fig)
#
plt.tight_layout()
plt.show()

Then the residuals are normally distributed and do not exhibit any patterns or changes in variance.

3.8.3.2 Residual Heteroskedasticity

If $$\mathbb{V}{\rm ar} (\epsilon_i | \mathbf{X} ) = \sigma^2_\epsilon,\ \forall i = 1,..,N$$, we say that the residuals are homoskedastic. If this assumption is violated, we say that the residuals are heteroskedastic - that is, their variance is not constant across observations. The consequences of heteroskedasticity are as follows:

• OLS parameter estimates remain unbiased;
• OLS estimates are no longer efficient (i.e. they no longer have the smallest variance). The reason for this is that OLS gives equal weight to all observations in the data when, in fact, observations with a larger error variance contain less information than observations with a smaller error variance;
• The variance estimate of the residuals is biased, and hence the standard errors are biased. This in turn leads to a bias in test statistics and confidence intervals.
• Because of the standard error bias, we may fail to reject the null hypothesis that $$\beta_i = 0$$ in our estimated model when the null hypothesis is actually false (i.e. we may make a Type II error).

There are a few possible corrections to account for heteroskedasticity:

• Take logarithms of the data; this may help linearize the data and, in turn, the residuals;
• Apply a different estimation method.
We will examine this later on, but one possibility is to use a Weighted Least Squares estimation method, which gives different observations different weights and allows us to account for a non-constant variance;

• It is possible to correct the biased standard errors for heteroskedasticity. This would leave the OLS estimates unchanged. White's heteroskedasticity-consistent standard errors (or, robust standard errors) give a consistent variance estimator.

Example 3.33 We are going to simulate the following model (the shock variance grows with the observation index):

$\begin{aligned} Y_i &= \beta_0 + \beta_1 X_i + u_i\\ u_i &= i \cdot \epsilon_i,\text{ where } \epsilon_i \sim \mathcal{N}(0, \sigma^2) \end{aligned}$

set.seed(123)
#
N <- 100
beta_0 <- 8
beta_1 <- 10
#
x <- seq(from = 0, to = 5, length.out = N)
e <- rnorm(mean = 0, sd = 0.8, n = N)
u <- (1:N) * e
#
y <- beta_0 + beta_1 * x + u
#
mdl <- lm(y ~ 1 + x)
summary(mdl)$coefficients
##              Estimate Std. Error   t value     Pr(>|t|)
## (Intercept)  1.398462   8.475098 0.1650084 8.692773e-01
## x           14.771130   2.928474 5.0439679 2.095328e-06

np.random.seed(123)
#
N = 100
beta_0 = 8
beta_1 = 10
#
x = np.linspace(start = 0, stop = 5, num = N)
e = np.random.normal(loc = 0, scale = 0.8, size = N)
u = np.array(list(range(1, N + 1))) * e
#
y = beta_0 + beta_1 * x + u
# estimate the model with an intercept
mdl = sm.OLS(y, sm.add_constant(x)).fit()
print(mdl.summary().tables[1])
## ==============================================================================
##                  coef    std err          t      P>|t|      [0.025      0.975]
## ------------------------------------------------------------------------------
## const         10.8945      9.978      1.092      0.278      -8.907      30.696
## x1             9.3559      3.448      2.714      0.008       2.514      16.198
## ==============================================================================

The residuals appear to be non-normal and have nonlinearities remaining:

par(mfrow = c(2, 3))
#
plot(x, y)
#
lines(x[order(x)], mdl$fitted.values[order(x)], col = "red")
#
plot_resid(mdl$residuals, mdl$fitted.values, x, "heteroskedastic shock DGP")

fig = plt.figure(num = 9, figsize = (10, 8))
ax = fig.add_subplot("231")
_ = ax.plot(x, y, linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = ax.plot(x[np.argsort(x)], mdl.fittedvalues[np.argsort(x)], color = "red")
plot_resid(mdl.resid, mdl.fittedvalues, x, "heteroskedastic shock DGP", 2, 3, 2, fig)
plt.tight_layout()
plt.show()

There are a number of methods to test for the presence of heteroskedasticity. Some of the tests are:

• Goldfeld–Quandt Test. It divides the dataset into two subsets. The subsets are specified so that the observations for which the explanatory variable takes the lowest values are in one subset, and the highest values in the other. The subsets are not necessarily of equal size, nor do they contain all the observations between them. The test statistic used is the ratio of the mean squared residual errors for the regressions on the two subsets. This test statistic corresponds to an F-test of equality of variances. The Goldfeld–Quandt test requires that the data be ordered along a known explanatory variable, from lowest to highest. If the error structure depends on an unknown or unobserved variable, the Goldfeld–Quandt test provides little guidance. Also, the error variance must be a monotonic function of the specified explanatory variable: for example, when faced with a quadratic function mapping the explanatory variable to the error variance, the Goldfeld–Quandt test may improperly accept the null hypothesis of homoskedastic errors. Unfortunately, the Goldfeld–Quandt test is not very robust to specification errors.
The Goldfeld–Quandt test detects non-homoskedastic errors, but cannot distinguish between a heteroskedastic error structure and an underlying specification problem, such as an incorrect functional form or an omitted variable.

• Breusch–Pagan Test. After estimating the linear regression $$Y = \beta_0 + \beta_1 X + \epsilon$$, calculate the model residuals $$\widehat{\epsilon}_i$$. The OLS assumptions state that the residual variance does not depend on the independent variables: $$\mathbb{V}{\rm ar} (\epsilon_i | \mathbf{X} ) = \sigma^2_\epsilon$$. If this assumption is not true, then there may be a linear relationship between $$\widehat{\epsilon}_i^2$$ and $$X_i$$. So, the Breusch-Pagan test is based on the following regression:

$\widehat{\epsilon}_i^2 = \gamma_0 + \gamma_1 X_i + v_i$

The hypothesis test is:

$\begin{aligned} H_0&: \gamma_1 = 0 \text{ (residuals are homoskedastic)}\\ H_1&: \gamma_1 \neq 0 \text{ (residuals are heteroskedastic)} \end{aligned}$

It is a chi-squared test, where the test statistic:

$LM = N \cdot R^2_{\widehat{\epsilon}}$

is distributed as $$\chi^2_1$$ under the null. Here $$R^2_{\widehat{\epsilon}}$$ is the R-squared of the squared residual regression. One weakness of the BP test is that it assumes that the heteroskedasticity is a linear function of the independent variables. If we fail to reject the null hypothesis, we still do not rule out the possibility of a non-linear relationship between the independent variables and the error variance.

• White Test is more generic than the BP test, as it allows the independent variables to have a nonlinear effect on the error variance - for example, a combination of linear, quadratic and cross-products of the independent variables. It is a more commonly used test for homoskedasticity. The test statistic is calculated in the same way as in the BP test:

$LM = N \cdot R^2_{\widehat{\epsilon}}$

The difference from BP is that the squared residual model, from which we calculate $$R^2_{\widehat{\epsilon}}$$, may be nonlinear. A shortcoming of the White test is that it can lose its power if the model has many exogenous variables.

We can carry out these tests via R and Python:

# Goldfeld–Quandt Test
print(lmtest::gqtest(mdl, alternative = "two.sided"))
##
##  Goldfeld-Quandt test
##
## data:  mdl
## GQ = 6.9269, df1 = 48, df2 = 48, p-value = 4.407e-10
## alternative hypothesis: variance changes from segment 1 to 2

import statsmodels.stats.diagnostic as sm_diagnostic
# Goldfeld–Quandt Test
print(sm_diagnostic.het_goldfeldquandt(y = y, x = sm.add_constant(x), alternative = "two-sided"))
## (5.330019801721165, 4.3458162211165886e-08, 'two-sided')

# Breusch–Pagan Test
print(lmtest::bptest(mdl))
##
##  studentized Breusch-Pagan test
##
## data:  mdl
## BP = 18.864, df = 1, p-value = 1.404e-05

# Breusch–Pagan Test
print(sm_diagnostic.het_breuschpagan(resid = mdl.resid, exog_het = sm.add_constant(x)))
## (27.15440005289671, 1.8783720306329005e-07, 36.53111796891305, 2.7232777548974932e-08)

# White Test
print(lmtest::bptest(mdl, ~ x + I(x^2)))
##
##  studentized Breusch-Pagan test
##
## data:  mdl
## BP = 21.665, df = 2, p-value = 1.975e-05

The White test is equivalent to the Breusch-Pagan test with an auxiliary model containing all regressors, their squares and their cross-products. As such, R has only the bptest() function, which can be leveraged to carry out the White test.
# White Test
print(sm_diagnostic.het_white(resid = mdl.resid, exog = sm.add_constant(x)))
## (27.63658737233078, 9.972207411097485e-07, 18.522820288405395, 1.5369984549894296e-07)

As noted in the general description of LM tests, the LM statistic exaggerates the significance of results in small or moderately large samples, in which case the F-statistic is preferable. As such, both the LM and F statistics, with their $$p$$-values, are provided for the BP test. For the BP and White tests, the first value is the $$LM$$ statistic, the second value is the $$p$$-value of the LM statistic, the third value is the $$F$$-test statistic, and the last value is the $$p$$-value of the $$F$$ test. For the GQ test, the first value is the $$F$$-statistic and the second value is the associated $$p$$-value. We see that in all cases the $$p$$-value is less than $$0.05$$, so we reject the null hypothesis and conclude that the residuals are heteroskedastic.

On the other hand, if we were to carry out these tests for a correctly specified model, like the one for the simple linear regression:

# Goldfeld–Quandt Test
print(lmtest::gqtest(mdl1, alternative = "two.sided"))
##
##  Goldfeld-Quandt test
##
## data:  mdl1
## GQ = 1.119, df1 = 98, df2 = 98, p-value = 0.579
## alternative hypothesis: variance changes from segment 1 to 2

# Goldfeld–Quandt Test
print(sm_diagnostic.het_goldfeldquandt(y = data_lin["y"], x = sm.add_constant(data_lin["x"]), alternative = "two-sided"))
## (0.7299402182948976, 0.12090366054870887, 'two-sided')

# Breusch–Pagan Test
print(lmtest::bptest(mdl1))
##
##  studentized Breusch-Pagan test
##
## data:  mdl1
## BP = 0.41974, df = 1, p-value = 0.5171

# Breusch–Pagan Test
print(sm_diagnostic.het_breuschpagan(resid = mdl1.resid, exog_het = sm.add_constant(data_lin["x"])))
## (3.987557828302002, 0.04583745596968394, 4.027991495112321, 0.04611120607782332)

# White Test
print(lmtest::bptest(mdl1, ~ x + I(x^2), data = data_lin))
##
##  studentized Breusch-Pagan test
##
## data:  mdl1
## BP = 0.42016, df = 2, p-value = 0.8105

# White Test
print(sm_diagnostic.het_white(resid = mdl1.resid, exog = sm.add_constant(data_lin["x"])))
## (4.0785282350399354, 0.13012443194407053, 2.050490063862835, 0.13140921798003924)

We see that we do not reject the null hypothesis of homoskedastic residuals (except for the BP test in Python, where the $$p$$-value is close to 0.05; on the other hand, the remaining two tests do not reject the null). There are also a number of additional heteroskedasticity tests. A discussion of their quality can be found here.

3.8.3.3 Residual Autocorrelation

If $$\mathbb{C}{\rm ov} (\epsilon_i, \epsilon_j) \neq 0$$ for some $$i \neq j$$, then the errors are correlated. Autocorrelation is frequently encountered in time-series models.

Example 3.34 Assume that our model is defined as follows:

$\begin{aligned} Y_t &= \beta_0 + \beta_1 X_t + \epsilon_t \\ \epsilon_t &= \rho \epsilon_{t-1} + u_t,\ |\rho| < 1,\ u_t \sim \mathcal{N}(0, \sigma^2) \end{aligned}$

Then we say that the model has autocorrelated, or serially correlated, errors. In this case, we have that:

$\mathbb{C}{\rm ov}(\epsilon_t, \epsilon_{t-1}) = \mathbb{C}{\rm ov}(\rho \epsilon_{t-1} + u_t, \epsilon_{t-1}) = \rho \mathbb{V}{\rm ar}(\epsilon_{t-1}) = \rho \dfrac{\sigma^2}{1 - \rho^2} \neq 0$

where the last equality uses the stationary variance of a first order autoregressive process, $$\mathbb{V}{\rm ar}(\epsilon_t) = \sigma^2 / (1 - \rho^2)$$. Estimating the coefficients via OLS and ignoring the violation will still result in unbiased and consistent OLS estimators. However, the estimators are inefficient and the variance of the regression coefficients will be biased.
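As a quick numeric illustration of Example 3.34 - a minimal sketch, where the choices of ρ, σ and the simulated sample size are arbitrary and not from the text - we can build AR(1) errors recursively and check that the empirical lag-1 covariance is close to its theoretical value:

set.seed(100)
n_sim <- 10000
rho <- 0.5
sigma_u <- 1
u <- rnorm(n = n_sim, mean = 0, sd = sigma_u)
# eps_t = rho * eps_{t-1} + u_t, built recursively
eps <- as.numeric(stats::filter(u, filter = rho, method = "recursive"))
# empirical lag-1 covariance vs the stationary value rho * sigma_u^2 / (1 - rho^2)
print(cov(eps[-1], eps[-n_sim]))
print(rho * sigma_u^2 / (1 - rho^2))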
On the other hand, autocorrelation in the errors may be a result of a misspecified model.

Example 3.35 If we were to fit a linear model on a quadratic DGP, we may get residuals which appear to be correlated:

set.seed(123)
#
N <- 100
beta_0 <- 2
beta_1 <- 1.5
#
x <- seq(from = 0, to = 10, length.out = N)
e <- rnorm(mean = 0, sd = 5, n = N)
y <- beta_0 + beta_1 * x^2 + e
#
lm_fit <- lm(y ~ 1 + x)

np.random.seed(123)
#
N = 100
beta_0 = 2
beta_1 = 1.5
#
x = np.linspace(start = 0, stop = 10, num = N)
e = np.random.normal(loc = 0, scale = 0.8, size = N)
y = beta_0 + beta_1 * (x**2) + e
#
lm_fit = sm.OLS(y, sm.add_constant(x)).fit()

par(mfrow = c(1, 2))
#
plot(x, y, main = "linear regression")
#
lines(x[order(x)], lm_fit$fitted.values[order(x)], col = "red")
#
plot(lm_fit$residuals, type = "o", main = "residual plot")
plot(x[order(x)], lm_fit$residuals[order(x)], type = "o", main = "residual plot against X")

fig = plt.figure(num = 10, figsize = (10, 8))
ax = fig.add_subplot("121")  # left panel: data and the fitted line (axis setup reconstructed to make the snippet runnable)
_ = ax.plot(x, y, linestyle = "None", marker = "o", markerfacecolor = "None", color = "black")
_ = ax.plot(x, lm_fit.fittedvalues, color = "red")
_ = plt.title("linear regression")
ax = fig.add_subplot("122")  # right panel: residuals against X
_ = ax.plot(x[np.argsort(x)], lm_fit.resid[np.argsort(x)], linestyle = "-", marker = "o", markerfacecolor = "None", color = "black")
_ = plt.title("residual plot against X")
plt.tight_layout()
plt.show()

There are a number of tests for the presence of autocorrelation:

• Durbin–Watson Test for the hypothesis:

$\begin{aligned} H_0&:\text{the errors are serially uncorrelated}\\ H_1&:\text{the errors follow a first order autoregressive process (i.e. autocorrelation at lag 1)} \end{aligned}$

The test statistic:

$d = \dfrac{\sum_{i = 2}^N (\widehat{\epsilon}_i - \widehat{\epsilon}_{i-1})^2}{\sum_{i = 1}^N \widehat{\epsilon}_i^2}$

The value of $$d$$ always lies between 0 and 4; $$d = 2$$ indicates no autocorrelation. If the Durbin–Watson statistic is not close to 2, there is evidence of serial correlation.

• Breusch-Godfrey Test is a more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable. Consider the following linear regression:

$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$

We then estimate the model via OLS and fit the following model on the residuals $$\widehat{\epsilon}_i$$:

$\widehat{\epsilon}_i = \alpha_0 + \alpha_1 X_i + \rho_1 \widehat{\epsilon}_{i - 1} + \rho_2 \widehat{\epsilon}_{i - 2} + ... + \rho_p \widehat{\epsilon}_{i - p} + u_t$

and calculate its $$R^2$$ (R-squared), then test the hypothesis:
$\begin{aligned} H_0&:\rho_1 = \rho_2 = ... = \rho_p = 0\\ H_1&:\rho_j \neq 0 \text{ for some } j \end{aligned}$

Under the null hypothesis, the test statistic:

$LM = (N-p)R^2 \sim \chi^2_p$

We can carry out these tests via R and Python:

# Durbin–Watson Test
print(lmtest::dwtest(lm_fit, alternative = "two.sided"))
##
##  Durbin-Watson test
##
## data:  lm_fit
## DW = 0.26274, p-value < 2.2e-16
## alternative hypothesis: true autocorrelation is not 0

import statsmodels.stats.stattools as sm_tools
# Durbin–Watson Test
print(sm_tools.durbin_watson(lm_fit.resid))
## 0.018027275453585786

# Breusch-Godfrey Test
print(lmtest::bgtest(lm_fit, order = 2))
##
##  Breusch-Godfrey test for serial correlation of order up to 2
##
## data:  lm_fit
## LM test = 75.941, df = 2, p-value < 2.2e-16

# Breusch-Godfrey Test
print(sm_diagnostic.acorr_breusch_godfrey(lm_fit, nlags = 2))
## (93.98337188826778, 3.9063405254180396e-21, 749.7890457680423, 2.5642011470769976e-59)

For the Durbin–Watson test, the DW statistic is returned. The test statistic equals 2 under no serial correlation; the closer it is to zero, the stronger the evidence of positive serial correlation, while values closer to 4 give more evidence of negative serial correlation. The Breusch-Godfrey test returns the LM statistic with its corresponding $$p$$-value, as well as the alternative test version with the $$F$$-statistic and its corresponding $$p$$-value. In all test cases (because the $$p$$-values are less than 0.05 and the Durbin-Watson test statistic is far from 2), we reject the null hypothesis of no serial correlation. On the other hand, if we were to carry out these tests for a correctly specified model, like the one for the simple linear regression:

# Durbin–Watson Test
print(lmtest::dwtest(mdl1, alternative = "two.sided"))
##
##  Durbin-Watson test
##
## data:  mdl1
## DW = 2.1222, p-value = 0.4255
## alternative hypothesis: true autocorrelation is not 0

# Durbin–Watson Test
print(sm_tools.durbin_watson(mdl1.resid))
## 1.9377965734765383

# Breusch-Godfrey Test
print(lmtest::bgtest(mdl1, order = 2))
##
##  Breusch-Godfrey test for serial correlation of order up to 2
##
## data:  mdl1
## LM test = 2.4867, df = 2, p-value = 0.2884

# Breusch-Godfrey Test
print(sm_diagnostic.acorr_breusch_godfrey(mdl1, nlags = 2))
## (0.20396667467699192, 0.9030445986699857, 0.10004570053602482, 0.9048422424493952)

In this case the DW statistic is close to 2 and the test $$p$$-values are greater than 0.05, so we fail to reject the null hypothesis of no serial correlation.

Note: there is also the Ljung-Box Test for testing the null hypothesis of no autocorrelation of residuals.

3.8.3.4 Residual Normality Testing

The normality requirement is necessary if we want to obtain the correct $$p$$-values and critical $$t$$-values when testing the hypothesis $$H_0: \beta_j = c$$, especially for significance testing, with $$c = 0$$. Assume that we want to test whether our residuals $$z_1,...,z_N$$ come from a normal distribution. The hypothesis can be stated as:

$\begin{aligned} H_0&:\text{residuals follow a normal distribution}\\ H_1&:\text{residuals do not follow a normal distribution} \end{aligned}$

There are a number of normality tests, like:

• Anderson-Darling Test.
The test statistic is calculated as:

$A^2 = -N - \sum_{i = 1}^N \dfrac{2i-1}{N}\left[ \log\left( F(z_{(i)}) \right) + \log\left(1 - F(z_{(N+1-i)}) \right)\right]$

where $$z_{(i)}$$ are the ordered data and $$F(\cdot)$$ is the cumulative distribution function (cdf) of the distribution being tested (for the univariate regression residuals, we are usually interested in testing for the normal distribution). The test statistic is compared against the critical values from the normal distribution. Empirical testing indicates that the Anderson–Darling test is not quite as good as Shapiro-Wilk, but is better than other tests.

• Shapiro-Wilk Test. The test statistic is:

$W = \dfrac{\left(\sum_{i = 1}^N a_i z_{(i)} \right)^2}{\sum_{i = 1}^N (z_i - \overline{z})^2}$

where $$z_{(i)}$$ is the $$i$$-th smallest value in the sample (i.e. the data are ordered). The $$a_i$$ values are calculated using the means, variances and covariances of $$z_{(i)}$$. $$W$$ is compared against tabulated values of this statistic's distribution. Small values of $$W$$ will lead to the rejection of the null hypothesis. Monte Carlo simulation has found that Shapiro–Wilk has the best power for a given significance level, followed closely by Anderson–Darling, when comparing the Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors and Anderson–Darling tests.

• Kolmogorov-Smirnov Test. The test statistic is given by:

$D = \max\{ D^+; D^-\}$

where:

$\begin{aligned} D^+ &= \max_i \left( \dfrac{i}{N} - F(z_{(i)})\right)\\ D^- &= \max_i \left( F(z_{(i)}) - \dfrac{i - 1}{N} \right) \end{aligned}$

where $$F(\cdot)$$ is the theoretical cdf of the distribution being tested (for the univariate regression residuals, we are usually interested in testing for the normal distribution). The Lilliefors Test is based on the Kolmogorov-Smirnov Test, as a special case of it for the normal distribution. For the normal distribution case, the test statistic is compared against the critical values from a normal distribution in order to determine the $$p$$-value.

• Cramer–von Mises Test is an alternative to the Kolmogorov–Smirnov test. The test statistic:

$W = N\omega^2 = \dfrac{1}{12N} + \sum_{i = 1}^N \left[ \dfrac{2i-1}{2N} - F(z_{(i)}) \right]^2$

If this value is larger than the tabulated value, then the hypothesis that the data came from the distribution $$F$$ can be rejected.

• Jarque–Bera Test (valid for large samples). The statistic is calculated as:

$JB = \dfrac{N-k+1}{6} \left(S^2 + \dfrac{(C - 3)^2}{4}\right)$

where

$\begin{aligned} S &= \dfrac{\dfrac{1}{N}\sum_{i = 1}^N (z_i - \overline{z})^3}{\left( \dfrac{1}{N}\sum_{i = 1}^N (z_i - \overline{z})^2 \right)^{3/2}}= \dfrac{\widehat{\mu}_3}{\widehat{\sigma}^3}\\ C &= \dfrac{\dfrac{1}{N}\sum_{i = 1}^N (z_i - \overline{z})^4}{\left( \dfrac{1}{N}\sum_{i = 1}^N (z_i - \overline{z})^2 \right)^{2}} = \dfrac{\widehat{\mu}_4}{\widehat{\sigma}^4}\\ \end{aligned}$

$$N$$ is the sample size, $$S$$ is the skewness, $$C$$ is the kurtosis and $$k$$ is the number of regressors (i.e. the number of different independent variables $$X$$, with $$k = 1$$ outside a regression context). If the data comes from a normal distribution, then the $$JB$$ statistic has a chi-squared distribution with two degrees of freedom, $$\chi^2_2$$.

• Chi-squared (Goodness-Of-Fit) Test. The chi-square test is an alternative to the Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests.
The chi-square goodness-of-fit test can be applied to discrete distributions such as the binomial and the Poisson, while the Kolmogorov–Smirnov and Anderson–Darling tests are restricted to continuous distributions. This is not a restriction per se, since for non-binned data you can simply calculate a histogram (i.e. bin the data) before carrying out the chi-square test. However, the value of the chi-square test statistic depends on how the data is binned. Another disadvantage of the chi-square test is that it requires a sufficient sample size in order for the chi-square approximation to be valid.

The test statistic is: $\chi^2 = \sum_{i = 1}^k \dfrac{(O_i - E_i)^2}{E_i}$ where $$k$$ is the number of (non-empty) bins, $$O_i$$ is the observed frequency for bin $$i$$ (i.e. the number of observations in bin $$i$$) and $$E_i$$ is the expected (theoretical) frequency for bin $$i$$, which is calculated as: $E_i = N(F(z_u) - F(z_l))$ where $$N$$ is the total sample size, $$F(\cdot)$$ is the cdf for the distribution being tested, $$z_u$$ is the upper limit for bin $$i$$ and $$z_l$$ is the lower limit for bin $$i$$. The chi-squared statistic can then be used to calculate a $$p$$-value by comparing the value of the statistic to a chi-squared distribution with $$k - c$$ degrees of freedom, where $$c$$ is the number of estimated parameters for the distribution plus one (for the normal distribution with mean and standard deviation parameters, $$c = 2 + 1 = 3$$).

We will carry out the normality tests on the log-linear DGP, with an incorrectly specified linear model:

# Anderson–Darling Test
#print(goftest::ad.test(mdl3$residuals, null = "pnorm"))
print(nortest::ad.test(mdl3$residuals))
##
## Anderson-Darling normality test
##
## data: mdl3$residuals
## A = 2.787, p-value = 4.883e-07

# May need to install through terminal: pip install scikit-gof
import skgof as skgof
# Anderson–Darling Test
print(sm_diagnostic.normal_ad(x = mdl3.resid))
## (2.742775345091559, 6.263145991939627e-07)

# Shapiro–Wilk Test
print(shapiro.test(mdl3$residuals))
##
## Shapiro-Wilk normality test
##
## data: mdl3$residuals
## W = 0.90394, p-value = 4.514e-10

# Shapiro–Wilk Test
print(stats.shapiro(x = mdl3.resid))
## (0.9430360198020935, 4.2506789554863644e-07)

# Kolmogorov–Smirnov Test
print(ks.test(mdl3$residuals, y = "pnorm", alternative = "two.sided"))
##
## One-sample Kolmogorov-Smirnov test
##
## data: mdl3$residuals
## D = 0.50971, p-value < 2.2e-16
## alternative hypothesis: two-sided

# Kolmogorov–Smirnov Test
print(sm_diagnostic.kstest_normal(x = mdl3.resid, dist = "norm")) # statistic and p-value
## (0.08586356720817684, 0.0010846493406071833)

# Cramer–von Mises Test
#print(goftest::cvm.test(mdl3$residuals, null = "pnorm"))
print(nortest::cvm.test(mdl3$residuals))
##
## Cramer-von Mises normality test
##
## data: mdl3$residuals
## W = 0.37085, p-value = 5.297e-05

# Cramer–von Mises test
print(skgof.cvm_test(data = mdl3.resid, dist = stats.norm(0, np.sqrt(np.var(mdl3.resid)))))
## GofResult(statistic=0.39458042141994304, pvalue=0.07458108247027562)

# Jarque–Bera Test
print(tseries::jarque.bera.test(mdl3$residuals))
##
## Jarque Bera Test
##
## data: mdl3$residuals
## X-squared = 268.92, df = 2, p-value < 2.2e-16

# Jarque–Bera Test
print(sm_tools.jarque_bera(mdl3.resid)) # JB statistic, p-value, skew and kurtosis
## (27.00245322122904, 1.3692784843495163e-06, 0.8659632801063863, 3.490637112922614)

Note that the Jarque–Bera tests in these R and Python packages do not allow one to account for the fact that we are carrying out the tests on regression residuals. In other words, they assume that $$k = 1$$.

# Chi-squared Test
# NA

# Chi-squared Test
# NA

In this case the $$p$$-value is less than 0.05 for most tests, so we reject the null hypothesis and conclude that the residuals are not normally distributed.

For a correctly specified log-linear model:

# Anderson–Darling Test
#print(goftest::ad.test(mdl3_correct$residuals, null = "pnorm"))
print(nortest::ad.test(mdl3_correct$residuals))
##
## Anderson-Darling normality test
##
## data: mdl3_correct$residuals
## A = 0.39645, p-value = 0.3665

# Anderson–Darling Test
# print(sm_diagnostic.normal_ad(x = mdl3_correct.resid))
## (0.1576307896975777, 0.9518802524386675)

# Shapiro–Wilk Test
print(shapiro.test(mdl3_correct$residuals))
##
## Shapiro-Wilk normality test
##
## data: mdl3_correct$residuals
## W = 0.99066, p-value = 0.2225

# Shapiro–Wilk Test
print(stats.shapiro(x = mdl3_correct.resid))
## (0.9960342049598694, 0.8864037990570068)

# Kolmogorov–Smirnov Test
print(ks.test(mdl3_correct$residuals, y = "pnorm", alternative = "two.sided"))
##
## One-sample Kolmogorov-Smirnov test
##
## data: mdl3_correct$residuals
## D = 0.35043, p-value < 2.2e-16
## alternative hypothesis: two-sided

# Kolmogorov–Smirnov Test
print(sm_diagnostic.kstest_normal(x = mdl3_correct.resid, dist = "norm")) # statistic and p-value
## (0.031206131899122358, 0.9168591544429977)

# Cramer–von Mises Test
#print(goftest::cvm.test(mdl3_correct$residuals, null = "pnorm"))
print(nortest::cvm.test(mdl3_correct$residuals))
##
## Cramer-von Mises normality test
##
## data: mdl3_correct$residuals
## W = 0.058543, p-value = 0.3937

# Cramer–von Mises test
print(skgof.cvm_test(data = mdl3_correct.resid, dist = stats.norm(0, np.sqrt(np.var(mdl3_correct.resid)))))
## GofResult(statistic=0.02066788690314537, pvalue=0.996414601640607)

# Jarque–Bera Test
print(tseries::jarque.bera.test(mdl3_correct$residuals))
##
## Jarque Bera Test
##
## data: mdl3_correct$residuals
## X-squared = 4.795, df = 2, p-value = 0.09095

# Jarque–Bera Test
print(sm_tools.jarque_bera(mdl3_correct.resid)) # JB statistic, p-value, skew and kurtosis
## (0.4627520589444065, 0.793441052712381, -0.11645049039934306, 2.9641199189474325)

# Chi-squared Test
# NA

# Chi-squared Test
# NA

In this case, the $$p$$-values for most of the tests are greater than 0.05, so we do not reject the null hypothesis of normality. The exception is the Kolmogorov–Smirnov test in R: ks.test(..., y = "pnorm") compares the data against a standard normal distribution with mean 0 and variance 1 by default, so the residuals should be standardized first for that particular $$p$$-value to be meaningful. The more tests we carry out, the more confident we can be about whether the residuals are (or are not) from a normal distribution. For now, focus on at least one test from each category: the Breusch–Pagan Test for homoskedasticity, the Durbin–Watson Test for autocorrelation, and the Shapiro–Wilk Test for normality.

3.8.3.5 Standardized Residuals

When we compare residuals for different observations, we want to take into account that their variances may be different (as we have shown in eq. (3.12)). One way to account for this is to divide each residual by an estimate of its standard deviation. This results in the standardized residuals: $s_i = \dfrac{\widehat{\epsilon}_i}{\widehat{\sigma}\sqrt{1 - h_{ii}}}$ where $$h_{ii}$$ is the $$i$$-th diagonal element of $$\mathbf{H}$$. Standardized residuals are useful in detecting outliers.
Generally, any observation with a standardized residual greater than 2 in absolute value should be examined more closely.
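As a sketch of how the $s_i$ formula can be computed directly (assuming a fitted statsmodels OLS model such as mdl1 from the snippets above; statsmodels also exposes the same quantity as OLSInfluence(...).resid_studentized_internal, which we use as a cross-check):

import numpy as np
from statsmodels.stats.outliers_influence import OLSInfluence

def standardized_residuals(fit):
    # Hat matrix H = X (X'X)^{-1} X'; the leverages h_ii sit on its diagonal
    X = fit.model.exog
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
    sigma_hat = np.sqrt(fit.scale)  # fit.scale is the OLS estimate of sigma^2
    return fit.resid / (sigma_hat * np.sqrt(1 - h))

# Hypothetical usage:
# s = standardized_residuals(mdl1)
# np.allclose(s, OLSInfluence(mdl1).resid_studentized_internal)  # should agree
# np.where(np.abs(s) > 2)  # observations worth a closer look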
2020-10-21 21:39:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.7843294143676758, "perplexity": 4584.517745247734}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878633.8/warc/CC-MAIN-20201021205955-20201021235955-00410.warc.gz"}
http://math.stackexchange.com/questions/291950/if-fx-x-5-and-gx-x2-5-what-is-ux-if-u-circ-fx-gx
If $f(x) = x-5$ and $g(x) = x^2 -5$, what is $u(x)$ if $(u \circ f)(x) = g(x)$?

Let $f(x) = x-5$, $g(x) = x^2 -5$. Find $u(x)$ if $(u \circ f)(x) = g(x)$. I know how to do it when we have $(f \circ u)(x)$, but only because $f(x)$ was defined. But here $u(x)$ is not defined. Is there any way I can reverse it to get $u(x)$ alone?

-

Hint: We are told that $u(x-5)=x^2-5$. Let $t=x-5$. Now express $x^2-5$ in terms of $t$.

- thanks that solved the problem. – rbtLong Feb 1 '13 at 7:26
- can you please check my solution (at the bottom)? the teacher solved it, but i'm not sure if i copied it right. i just dont want to post another question. – rbtLong Feb 1 '13 at 8:04

Hint: $$x^2-5=(x-5)(x+5)+20$$ $$(x+5)=(x-5)+10$$

- are u using f inverse? my professor used an f inverse on the left and f on the right (this confused me) is that what you're doing? – rbtLong Feb 1 '13 at 7:27
- No; my hint is essentially the same as André's. I was suggesting how you could write $x^2-5$ as a function of $x-5$. – Zev Chonoles Feb 1 '13 at 7:30
- ah ok thanks for the answer! – rbtLong Feb 1 '13 at 7:30

I think I figured out what my professor did now . . .

$(u \circ f)(x) = g(x)$
$(u \circ f)(f^{-1} (x)) = g( f^{-1}(x))$
$\big((u \circ f) \circ f^{-1}\big)(x) = (g \circ f^{-1})(x)$
$\big(u \circ (f \circ f^{-1})\big)(x) = (g \circ f^{-1})(x)$
$u(x) = g(f^{-1}(x))$
$u(x) = g(x+5)$

I think this is right. Please correct me if I'm wrong.

- It is essentially correct. There should be a line after the first that says $(u\circ f)(f^{-1}(x))=g(f^{-1}(x))$. Then the rest follows, and is fine except for some missing parentheses. The only downside to this approach is that it is very manipulational, and perhaps gives a little less concrete knowledge of what is going on. – André Nicolas Feb 1 '13 at 8:06
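For completeness (a short addition, since the thread never states the final formula explicitly), the substitution hint gives: $u(x-5) = x^2 - 5$, so with $t = x-5$ (i.e. $x = t+5$), $$u(t) = (t+5)^2 - 5 = t^2 + 10t + 20,$$ and hence $u(x) = x^2 + 10x + 20$. The same formula follows from the professor's route, since $u(x) = g(f^{-1}(x)) = g(x+5) = (x+5)^2 - 5$.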
2015-08-31 03:02:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9305782914161682, "perplexity": 303.29688656400015}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065488.33/warc/CC-MAIN-20150827025425-00328-ip-10-171-96-226.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/56521/calculation-of-dimension-of-holomorphic-quadratic-differentials-as-in-gardiners-b
# Calculation of dimension of holomorphic quadratic differentials as in Gardiners book

It is stated in Frederick Gardiner's book Teichmuller Theory and Quadratic Differentials (p. 27-28, Chapter 1) that $dim_R QD(X) = 6g-6+3m+2n$ (by using the Riemann-Roch theorem). Now for an open annulus $A$, with $g=0, m=2, n=0$, we get $dim_R QD(X)=0$! I am a bit puzzled why it is zero! (Should I define the genus of an open annulus to be zero?)

For quadratic differentials $q$ on the annulus $A$, should we look at $q=\phi(z)dz^2$ where $\phi$ is a function on the annulus embedded in the complex plane, or should we lift it to the upper half plane and consider the $\phi(z)$ with $\phi(z) = \phi(\gamma(z)) ({\gamma'(z)})^2$ for all $\gamma \in Deck(H/A)$? I guess the second approach makes more sense because it respects the hyperbolic geometric structure on $A$ as well?

-
2015-07-04 00:08:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9493525624275208, "perplexity": 139.26299570379433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096290.39/warc/CC-MAIN-20150627031816-00118-ip-10-179-60-89.ec2.internal.warc.gz"}
https://infinitylearn.com/surge/study-materials/class-12/physics/electric-charges-and-fields-class-12-notes-chapter-1/
Electric Charges and Fields Class 12 Notes Chapter 1

1. Electric Charge: Charge is the property associated with matter due to which it produces and experiences electric and magnetic effects.

2. Conductors and Insulators: Those substances which readily allow the passage of electricity through them are called conductors, e.g. metals and the earth, and those substances which offer high resistance to the passage of electricity are called insulators, e.g. a plastic rod and nylon.

3. Transference of electrons is the cause of frictional electricity.

4. Additivity of Charges: Charges are scalars and they add up like real numbers. It means if a system consists of n charges q1, q2, q3, ..., qn, then the total charge of the system will be q1 + q2 + ... + qn.

5. Conservation of Charge: The total charge of an isolated system is always conserved, i.e. the initial and final charge of the system will be the same.

6. Quantisation of Charge: Charge exists in discrete amounts rather than continuous values and is hence quantised. Mathematically, the charge on an object is q = ±ne, where n is an integer and e is the electronic charge. When any physical quantity exists in discrete packets rather than in a continuous amount, the quantity is said to be quantised. Hence, charge is quantised.

7. Units of Charge: (i) SI unit: coulomb (C). (ii) CGS system: (a) electrostatic unit, esu of charge or stat-coulomb (stat-C); (b) electromagnetic unit, emu of charge or ab-C (ab-coulomb). 1 ab-C = 10 C, 1 C = 3 x 10^9 stat-C.

8. Coulomb's Law: It states that the electrostatic force of attraction or repulsion acting between two stationary point charges is given by F = q1q2/(4πε0 r²), where r is the separation between the charges.

9. Electrostatic forces (Coulombian forces) are conservative forces.

10. Principle of Superposition of Electrostatic Forces: This principle states that the net electric force experienced by a given charged particle q0 due to a system of charged particles is equal to the vector sum of the forces exerted on it due to all the other charged particles of the system.

11. Electrostatic Force due to Continuous Charge Distribution: The region in which charges are closely spaced is said to have a continuous distribution of charge. It is of three types: linear (charge per unit length, λ), surface (charge per unit area, σ) and volume (charge per unit volume, ρ).

12. Electric Field Intensity: The electric field intensity at any point due to a source charge is defined as the force experienced per unit positive test charge placed at that point without disturbing the source charge. It is expressed as E = F/q0.

13. Electric Field Intensity (EFI) due to a Point Charge: E = q/(4πε0 r²), directed radially away from a positive charge and towards a negative charge.

14. Electric Field due to a System of Charges: As in the case of electrostatic force, here we apply the principle of superposition, i.e. E = E1 + E2 + ... + En (a vector sum).

15. Electric Field Lines: Electric field lines are a way of pictorially mapping the electric field around a configuration of charge(s). These lines start on positive charge and end on negative charge. The tangent on these lines at any point gives the direction of the field at that point.

16. Electric field lines due to positive and negative charges and their combinations (field line diagrams omitted in this text).

17. Electric Dipole: Two point charges of the same magnitude and opposite nature separated by a small distance altogether form an electric dipole.

18. Electric Dipole Moment: The strength of an electric dipole is measured by a vector quantity known as the electric dipole moment (p), which is the product of the charge (q) and the separation between the charges (2l): p = q × 2l, directed from the negative to the positive charge.
19. Electric Field due to a Dipole: The electric field of an electric dipole is the space around the dipole in which the electric effect of the dipole can be experienced.

21. Torque on an electric dipole placed in a uniform electric field (E) is given by τ = p × E, with magnitude τ = pE sin θ.

24. A dipole is in stable equilibrium in a uniform electric field when the angle between p and E is 0°, and in unstable equilibrium when the angle θ = 180°.

25. The net force on an electric dipole placed in a uniform electric field is zero.

26. There exists a net force and torque on an electric dipole when placed in a non-uniform electric field.

27. Work done in rotating the electric dipole from θ1 to θ2 is W = pE (cos θ1 – cos θ2).

28. Potential energy of the electric dipole when it rotates from θ1 = 90° to θ2 = θ: U = pE (cos 90° – cos θ) = –pE cos θ = –p·E.

29. Work done in rotating the dipole from the position of stable equilibrium to unstable equilibrium, i.e. when θ1 = 0° and θ2 = π: W = 2pE.

30. Work done in rotating the dipole from the position of stable equilibrium to the position in which the dipole experiences maximum torque, i.e. when θ1 = 0° and θ2 = 90°: W = pE.
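As a quick numerical illustration of Coulomb's law (item 8) and the dipole torque formula (item 21), here is a small Python sketch; the charge, distance and field values are made-up examples, not from the notes:

import math

EPS0 = 8.854e-12                      # vacuum permittivity (F/m)
K = 1 / (4 * math.pi * EPS0)          # Coulomb constant, about 8.99e9 N m^2/C^2

def coulomb_force(q1, q2, r):
    # Magnitude of the electrostatic force between two point charges, in newtons
    return K * abs(q1 * q2) / r**2

def dipole_torque(p, E, theta_deg):
    # Torque magnitude on a dipole of moment p in a uniform field E: tau = p E sin(theta)
    return p * E * math.sin(math.radians(theta_deg))

print(coulomb_force(1e-6, 2e-6, 0.05))   # about 7.19 N for these example charges
print(dipole_torque(1e-9, 1e4, 90))      # maximum torque, at theta = 90 degrees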
2023-03-26 02:00:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.844903290271759, "perplexity": 798.7149421541003}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00440.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-6-inverse-functions-6-1-inverse-functions-6-1-exercises-page-407/29
Calculus 8th Edition

$f^{-1}(x)=\frac{x^{2}-3}{4}$

Calculate the inverse of the function $f(x)=\sqrt {4x+3}$. Write $y=f(x)$: $y=\sqrt {4x+3}$. Solve this equation for x in terms of y to get the inverse function: $x=\frac{y^{2}-3}{4}$. To express $f^{-1}(x)$ as a function of x, interchange x and y. The resulting equation is $y=\frac{x^{2}-3}{4}$. Therefore, the inverse of the function is $f^{-1}(x)=y=\frac{x^{2}-3}{4}$. The graphs of $f(x)=\sqrt {4x+3}$ and $f^{-1}(x)=\frac{x^{2}-3}{4}$, together with the line $y=x$, were plotted on the same screen (graph omitted in this text).
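As a quick check (a short addition), composing the functions confirms the inverse on the appropriate domain: $$f\left(f^{-1}(x)\right) = \sqrt{4 \cdot \frac{x^{2}-3}{4} + 3} = \sqrt{x^{2}} = |x| = x \quad \text{for } x \geq 0,$$ where the restriction $x \geq 0$ matches the range of $f(x)=\sqrt{4x+3}$; accordingly, $f^{-1}$ is defined on $[0, \infty)$.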
2020-03-28 12:32:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5884792804718018, "perplexity": 80.6559358818021}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370491857.4/warc/CC-MAIN-20200328104722-20200328134722-00142.warc.gz"}
https://www.davidfong.org/post/kenya2021lunch/
# Kenya Spur Afrika 2021 - Laksa Garden Thanks for lunch at Laksa Garden!! Trying to eat all the yummy things in Melbourne before we leave. Praying that everything will go smoothly - most pressing is the covid test before the flights and packing! Thanks for joining. We had a special prayer section at church on Sunday CrossGen (MCBC). Paid for all the insurance, flights & meds. Trying to figure out all the documents that are needed to be uploaded. Family & friends are all finding out but it’s all very mixed feelings. Pray for peace and guidance. Planning to get tested for the international covid test (costs ) on Dec 23rd. Dec 22nd is packing donations day! Pray for good health and it’s my last week of work. - same for David! Spur Afrika trip 2021-2022 posts ##### Rosalie Lui ###### Managing Director, Spur Afrika Australia Rosalie Lui is an occupational therapist working with Healthscope
2022-09-28 00:44:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2698158025741577, "perplexity": 12524.98047475086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00761.warc.gz"}
https://blender.stackexchange.com/questions/7395/2d-vs-3d-depth-of-field
# 2d vs 3d Depth of Field

What is the difference between Depth of Field rendered at render time (3d) compared to Depth of Field created in post (2d)? Both are depth of field, and they look very similar if not the same. 2d depth of field has the advantage of being faster to compute, and you can easily change what part is in focus, whereas if you use 3d depth of field and you want to change it after the render, you have to start over. I am primarily focusing on Cycles.

I ran some tests with 100 samples each with this scene:

# Defocus Node: Time: 2:01 + roughly 7 seconds for compositing = 2:08

# Cycles DoF: Time: 2:02

# Conclusions: Based on the above results, rendered DoF seems more reliable and realistic, as well as a little bit faster in this case. The main advantage of composited DoF is the ability to tweak the settings after the render.

• did you use an Anti-Aliased z-pass? If you did, you are not supposed to because the pixels after being anti-aliased reflect an incorrect z-depth. – Vader Feb 28 '14 at 2:36
• @Vader The Z is plugged straight into the defocus node which is plugged straight into the composite node. Added .blend to question – gandalf3 Feb 28 '14 at 3:51
• Your example file, though helpful in highlighting the difference in quality, is pretty extreme (my best Canon lens only goes down to f/1.4. I'd love to get my hands on your f/0.1 lens, haha). What is the visual difference with a more realistic example? – Justin Aug 25 '16 at 20:54
• @Justin Don't have time to test in-depth right now, but a quick re-rendering of the scene in the answer suggests that the defocus node does much better, however cycles and the defocus node didn't seem to agree on just how blurry f/1.4 should be.. – gandalf3 Aug 26 '16 at 3:29
• The cycles DOF is higher quality (although maybe the compositing workflow could be improved), but the cycles image is noisier overall, therefore cycles DOF requires more samples, therefore it is not faster. Also see blender.stackexchange.com/questions/67437/noisy-depth-of-field – lbalazscs Nov 17 '16 at 19:04
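For completeness, here is a small, untested sketch of how one might switch between the two approaches from Blender's Python console; the property names follow the Blender 2.8x bpy API, and the object name "Camera" is an assumption about the scene:

import bpy

# Rendered ("3d") DoF: enable depth of field on the camera data
cam = bpy.data.objects["Camera"].data    # assumes a camera object named "Camera"
cam.dof.use_dof = True
cam.dof.focus_distance = 10.0            # distance to the focus plane, in scene units
cam.dof.aperture_fstop = 1.4             # lower f-stop = shallower depth of field

# Composited ("2d") DoF: add a Defocus node driven by the (non-anti-aliased) Z pass
scene = bpy.context.scene
scene.use_nodes = True
defocus = scene.node_tree.nodes.new("CompositorNodeDefocus")
defocus.use_zbuffer = True               # blur amount taken from the rendered Z-depth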
2020-07-06 03:18:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3266346752643585, "perplexity": 2161.46819192066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890092.28/warc/CC-MAIN-20200706011013-20200706041013-00313.warc.gz"}
https://wonghoi.humgar.com/blog/2022/07/28/maildroid-does-not-save-sent-email-to-imap-sent-folder-by-default/
# Maildroid does not save sent email to IMAP 'Sent' folder by default

This behavior is so frustrating. We don't live in a time where IMAP storage space is at a premium anymore! Fairmail saves a copy of sent mail in the IMAP 'Sent' folder by default. I did some research and it seems like MailDroid insists on saving the sent mail in a "Sent from device" folder (I tried disabling it, thinking MailDroid would be smart enough to save it in the IMAP Sent folder instead, but no, it doesn't).

Fairmail's Settings -> Send -> Message -> [Last Item] "On replying to a message in user folder, save the reply in the same folder" was disabled (as it should be, or else it'd be messy), but there's a subtext that says "The email server could still add the message to the sent message folder", which leads me to think that saving a copy to the Sent folder is a behavior managed by the IMAP server, so it doesn't rely on email clients specifically telling it where (which folder) to save the sent email.

I was about to give up on MailDroid and stick to Fairmail, and did a little research on "imap put sent mail in sent folder" and voilà, MailDroid is the first one that came up. It seems I wasn't the only one perplexed by MailDroid's weird design choices, and it seems like most other email apps do not have this awkward behavior.

I did as the MailDroid dev said, and it turns out all of the email accounts in use say "Not specified". When I clicked on one of the accounts, the root folder /Inbox shows up. I had to expand it with the '>' so the (IMAP) sub-folders show up. This GUI was designed horribly and it's clearly an afterthought. The reason I'm saying that is that instead of just assigning the IMAP Sent folder right away after I tapped on it, the UI changes the textbox at the bottom of the screen (WTF) to what I've selected (despite there being no multiple choices), and I had to click the DONE next to it. It's unintuitive on so many levels.
2023-03-26 03:08:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2798788845539093, "perplexity": 4033.7700467253467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00690.warc.gz"}
https://cs.stackexchange.com/tags/equality/new
# Tag Info

You got the definition of $L^+$ wrong. It is not $L^* \setminus \{\epsilon\}$. Rather, it is $$L^+ = \bigcup_{n=1}^{\infty} L^n.$$ You can check that $\epsilon \in L^+$ iff $\epsilon \in L$. Therefore: If $\epsilon \notin L$ then $L^+ = L^*\setminus\{\epsilon\}$. If $\epsilon \in L$ then $L^+ = L^*$. For example, if $L = \{a\}$ then $\epsilon \notin L$ and $L^+ = \{a, aa, aaa, \dots\} = L^* \setminus \{\epsilon\}$.

Linear-bounded automata accept the class of context-sensitive languages. In contrast, Turing machines (deterministic as well as nondeterministic) accept the class of recursive languages. Every context-sensitive language is recursive, but the converse doesn't hold. For example, the halting problem for nondeterministic Turing machines running in space $n^2$ is ...
2019-09-17 12:29:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6604636311531067, "perplexity": 301.297754772704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573071.65/warc/CC-MAIN-20190917121048-20190917143048-00514.warc.gz"}
https://socratic.org/questions/how-do-you-solve-sqrt-26x-39-x-5
# How do you solve sqrt(26x-39)=x+5?

Apr 16, 2015

We can square both sides to get ${\left(\sqrt{26 x - 39}\right)}^{2} = {\left(x + 5\right)}^{2}$

$26 x - 39 = {x}^{2} + 10 x + 25$

(Used the identity $(a + b)^2 = a^2 + 2ab + b^2$)

Transposing the terms from the left hand side to the right, we get:

$0 = {x}^{2} + 10 x - 26 x + 25 + 39$

$0 = {x}^{2} - 16 x + 64$

${x}^{2} - 16 x + 64 = 0$

${\left(x - 8\right)}^{2} = 0$

$x = 8$

Since squaring can introduce extraneous roots, we check both sides:

Left Hand Side = $\sqrt{26 \cdot 8 - 39} = \sqrt{169} = 13$

Right Hand Side = $8 + 5 = 13$
2019-12-07 01:31:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8787075281143188, "perplexity": 794.3077687893364}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540491871.35/warc/CC-MAIN-20191207005439-20191207033439-00545.warc.gz"}
https://economics.stackexchange.com/tags/labor-economics/hot
# Tag Info 62 This is an interesting question a lot of good labour economists have been thinking about for a while. There are a few conflicting theories as to what will happen. You could base a whole career on this question. This IGM survey will give you some idea as to what leading economists think. The prevailing opinion seems to be that increased automation is not ... 40 Automation has been happening for a couple of hundred years now and right now we're all still working pretty hard. Although a 40-hour working week is standard, many people exceed this, and many families have two working parents. One reason for this is that we've used productivity gains for increased consumption, rather than decreased work. The industrial ... 23 Your question relates to an important research topic on the link between automation and employment. David Autor works on this issue and the topic "Inequality, Technological Change and Globalization". He published a very recent and interesting JPE paper on “Why Are There Still So Many Jobs?” There have been periodic warnings in the last two centuries that ... 22 This is more of an elaboration of The Almighty Bob's answer: It is true that if we start from a competitive market (i.e. large numbers of buyers and sellers), then granting market power to sellers (e.g. workers) by allowing the formation of a monopolistic cartel is bad for efficiency. Those sellers will use their market power to increase the price (and ... 21 Horses were replaced by cars. Clerks were replaced by word-processors and spreadsheets. We have adapted to the technology and changed how we work. Therein lies the answer. Consider if you will a society where every person owns a robot and has that robot work on their behalf, freeing their time to pursue creative arts and learning like the nobles of old. Yes, ... 19 I find your question very interesting. The metric of median household income is also used by others to argue the presence of income inequality: https://en.wikipedia.org/wiki/Income_inequality_in_the_United_States#Causes However, it seems that it is not only the median but also the mean that stagnates: (I used family instead of household income because I ... 17 There are already excellent answers, but I would like to add in a different perspective: There will be fewer people. Not just jobs, but actual human beings - if there is less demand for human workers (i.e. laborers), due to machines taking over, the amount of "land" or other resource that a single human can manage will increase with technology, leading ... 16 How will non-rich citizens make a living if jobs keep getting replaced by robots and are outsourced? EDIT / UPDATE 5th November 2016: http://mashable.com/2016/11/05/elon-musk-universal-basic-income/ "There’s a pretty good chance we end up with a universal basic income, or something like that, due to automation" "I'm not sure what else one would ... 15 The Labor Theory of Value has been replaced by the theory of Marginal Utility, which was already accepted by Marx time. In fact he acknowledged: "nothing can have value, without being an object of utility" -Wikipedia: Marginal Utility - The Marginal Revolution and Marxism Marginal Utility addresses the diamond - water paradox by explaining that the more ... 15 I'm going to give a less economically rigorous answer, and address your concern about your own situation. Jobs change. Your skillsets will always need to change. If you are young, it's a certainty that you will not be in the same job, or even the same career, your entire life. 
It's likely that many of the jobs you will do in life don't exist right now. ... 13 I think your question has two parts: Is a labor union a cartel? Is a labor union therefore illegal? Let me give you the quick answer to both: 1) yes, 2) no. The longer version is the following: You are right, there is, from an economical point of view not that much of a difference between selling a good and labor, so a union could (and most times is) ... 12 Isaac Sorkin, a grad student at Michigan, has addressed this. Here is Miles Kimball blogging it, link. Main argument there is that previous work measures short run elasticities, which are less responsive than in the long run. You can surely find more by checking Sorkin's citations. 12 Contracts are a subset of all mechanisms where agreements are enforcable. An example of a mechanism that is not a contract: A second price auction (or Vickrey auction) is a truth-telling mechanism where the enforcability of contracts is not required. In the truth-telling equilibrium no one has any incentive to change their bid, no matter the outcome. This ... 11 Employment excludes non-salaried directors, volunteers, persons paid by commission only, and self employed persons such as consultants and contractors. The actively trading businesses with zero employees are therefore those businesses where the staff members are drawn exclusively from that group. That may cover most one-person outfits, perhaps some family ... 10 To the extent that there is an economic explanation for their findings, it's something along the lines of costs of changing prices and employment are large enough relative to the increase in the minimum wage observed that producers choose instead to take a large amount of the cost of minimum wage increases on themselves. The alternatives would be 1) they ... 9 To me, it seems that it has increased, not decreased, due to the factors you mention. Yes, transportation and information networks enable workforce movement. But they also enable movement of goods and information - and because goods and information are more mobile than humans, they profit more, and the results of their portability outpaces the results of the ... 9 In Labor Economics, "Extensive margin" refers to "how many people work". "Intensive margin" refers to "how much a given number of people work, on average". To copy from a freely available recent study by Blundell, Bozio and Laroque 2011, "...we split the overall level of work activity into the number of individuals in work and the intensity of work ... 9 This phenomenon is sometimes called "wage compression" because the range of wages is compressed by the minimum wage laws. One paper on this subject is The Impact of the Minimum Wage on Other Wages 9 I am surprised none of the posts above discuss the following paper: Autor, D., and M. Handel. "Putting Tasks to the Test: Human Capital." Job Tasks and Wages" Journal of Labor Economics (2009). This paper discusses your concerns and addresses why your concerns are quite well grounded in both theory and empirics. Tasks that are more routine do offer lower ... 8 There's also Labor Economics by Pierre Cahuc, Stéphane Carcillo, and André Zylberberg. It's a broader labor econ book, but The "Unemployment and Inequality" fourth of the book covers these topics. I have not seen the second edition, but I expect that they did not alter that part for the worse. 8 The way I see it, there are two possible futures given the increasing state of automation in the world. 
Future One: A Basic Income We decide as a nation, federal state, or world, that human beings are important in and of themselves. Every human receives an income from the state which enables them to support themselves, without any necessity for work in ... 8 Let's work such a very simple model. We have a Robinson Crusoe island economy, an isolated individual that lives totally alone. In order to consume something Crusoe must work. Assume for even more simplicity that capital is not needed (say, fruit-gathering by hand). Crusoe does not like to work but he would rather sit relaxed and enjoy the good weather in ... 8 You seem to be looking for the phrase 'wage share' or 'share of labour compensation'. Wage share: The wage share (or labor share) is the ratio between compensation of employees (according to the system of National accounts) and one of the following variables: gross domestic product at market prices gross domestic product at factor cost net ... 8 A very famous study in this direction is Card and Krueger (1994). They look at an increase in the minimum wage in New Jersey in 1992. While New Jersey raised the minimum wage from USD 4.25/h to USD 5.05/h, the minimum wage remained at $4.25 in adjacent Pennsylvania. You should have a look at the subsequent research. 8 It is worth noting that OP's original question before I edited it asked, "why would economists lie to us?" This already leaves a poor taste in my mouth; such a question is loaded enough as it is, only good for picking fights. The author stated at the bottom of the Progressive Dairy article is a lawyer, not an economist, and seems to be the basis for some of ... 8 The 40-hour week started with the movement for the 8h working day. The wikipedia article makes a quick summary of the historical progress of thinking from 12h work days in Britain during the industrial revolution to the nowadays 8h/day. The quick summary is that after the first world war, the International Labour Organization (ILO) was formed and its first ... 7 Kroft and Pope (working paper, published in JoLE 2014) ask exactly this question, and their tentative answer is "no". They view Craigslist as a unique opportunity to study the benefits of online job sites, since it grew rapidly and somewhat idiosyncratically while other popular sites grew steadily (leaving few opportunities for identification). That said, ... 7 I don't believe that Search Unemployment (MP) actually "won" over efficiency wages. The whole discussion of search literature would be too long for this post, so I'll just skim the most important parts. (i) As a first, Shimer (2005) discusses that the MP model actually fails to get employment (market tightness) volatility right (ii) In the same AER issue, ... 7 From the Chicago Federal Reserve: Following a minimum wage hike, household income rises on average by about $250 per quarter and spending by roughly $700 per quarter for households with minimum wage workers. Most of the spending response is caused by a small number of households who purchase vehicles http://www.chicagofed.org/digital_assets/...
2019-10-15 11:39:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45389649271965027, "perplexity": 1761.008500547146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986658566.9/warc/CC-MAIN-20191015104838-20191015132338-00538.warc.gz"}
https://online.stat.psu.edu/stat501/book/export/html/941
Perhaps somewhere along the way in our most recent discussion, you thought "why not just fit two separate regression functions — one for the smokers and one for the non-smokers?" (If you didn't think of it, I thought of it for you!) Are there advantages to including both the binary and quantitative predictor variables within one multiple regression model? The answer is yes! In this section, we explore the two primary advantages.

An easy way of discovering the first advantage is to analyze the data three times — once using the data on all 32 subjects, once using the data on only the 16 non-smokers, and once using the data on only the 16 smokers. Then, we can investigate the effects of the different analyses on important things such as sizes of standard errors of the coefficients and the widths of confidence intervals. Let's try it!

Here's the Minitab output for the analysis using a (0,1) indicator variable and the data on all 32 subjects. Let's just run through the output and collect information on various values obtained:

Coefficients
Term      Coef    SE Coef  T-Value  P-Value  VIF
Constant  -2390   349      -6.84    0.000
Gest      143.10  9.13     15.68    0.000    1.06
Smoke     -244.5  42.0     -5.83    0.000    1.06

Regression Equation
Wgt = -2390 + 143.10 Gest - 244.5 Smoke

The standard error of the Gest coefficient is 9.13. Recall that this value quantifies how much the estimated Gest coefficient would vary from sample to sample. And, the following output:

Variable  Setting
Gest      38
Smoke     1

Fit      SE Fit   95% CI              95% PI
2803.69  30.8496  (2740.60, 2866.79)  (2559.13, 3048.26)

Variable  Setting
Gest      38
Smoke     0

Fit      SE Fit   95% CI              95% PI
3048.24  28.9051  (2989.12, 3107.36)  (2804.67, 3291.81)

tells us that for mothers with a 38-week gestation, the width of the confidence interval for the mean birth weight is 126.2 for smoking mothers and 118.2 for non-smoking mothers.

Let's do that again, but this time for the Minitab output on just the 16 non-smoking mothers:

Coefficients
Term      Coef   SE Coef  T-Value  P-Value  VIF
Constant  -2546  457      -5.57    0.000
Gest_0    147.2  12.0     12.29    0.000    1.00

Regression Equation
Wgt_0 = -2546 + 147.2 Gest_0

The standard error of the Gest coefficient is 12.0. And:

Variable  Setting
Gest_0    38

Fit      SE Fit   95% CI              95% PI
3047.72  26.7748  (2990.30, 3105.15)  (2811.30, 3284.15)

for non-smoking mothers with a 38-week gestation, the width of the confidence interval for the mean birth weight is 114.9.

And, let's do the same thing one more time for the Minitab output on just the 16 smoking mothers:

Coefficients
Term      Coef   SE Coef  T-Value  P-Value  VIF
Constant  -2475  554      -4.47    0.001
Gest_1    139.0  14.1     9.85     0.000    1.00

Regression Equation
Wgt_1 = -2475 + 139.0 Gest_1

The standard error of the Gest coefficient is 14.1. And:

Variable  Setting
Gest_1    38

Fit      SE Fit   95% CI              95% PI
2808.53  35.8088  (2731.73, 2885.33)  (2526.39, 3090.67)

for smoking mothers with a 38-week gestation, the width of the confidence interval is 153.6.

Here's a summary of what we've gleaned from the three pieces of output:

Model estimated using…  SE(Gest)  Width of CI for $$\mu_Y$$
all 32 data points      9.13      118.2 (NS), 126.2 (S)
16 nonsmokers           12.0      114.9
16 smokers              14.1      153.6

Let's see what we learn from this investigation:

• The standard error of the Gest coefficient — SE(Gest) — is smallest for the estimated model based on all 32 data points. Therefore, confidence intervals for the Gest coefficient will be narrower if calculated using the analysis based on all 32 data points. (This is a good thing!)
• The width of the confidence interval for the mean weight of babies born to smoking mothers is narrower for the estimated model based on all 32 data points (126.2 compared to 153.6), and not substantially different for non-smoking mothers (118.2 compared to 114.9). (Another good thing!)

In short, there appears to be an advantage in "pooling" and analyzing the data all at once rather than breaking it apart and conducting different analyses for each group. Our regression model assumes that the slopes for the two groups are equal. It also assumes that the variances of the error terms are equal. Therefore, it makes sense to use as much data as possible to estimate these quantities.

An easy way of discovering the second advantage of fitting one "combined" regression function using all of the data is to consider how you'd answer the research question if you broke apart the data and conducted two separate analyses obtaining:

Nonsmokers

Coefficients
Term      Coef   SE Coef  T-Value  P-Value  VIF
Constant  -2546  457      -5.57    0.000
Gest_0    147.2  12.0     12.29    0.000    1.00

Regression Equation
Wgt_0 = -2546 + 147.2 Gest_0

Smokers

Coefficients
Term      Coef   SE Coef  T-Value  P-Value  VIF
Constant  -2475  554      -4.47    0.001
Gest_1    139.0  14.1     9.85     0.000    1.00

Regression Equation
Wgt_1 = -2475 + 139.0 Gest_1

How could you use these results to determine if the mean birth weight of babies differs between smoking and non-smoking mothers, after taking into account length of gestation? Not completely obvious, is it?! It actually could be done, with much more (complicated) work than would be necessary if you analyze the data as a whole and fit one combined regression function:

Coefficients
Term      Coef    SE Coef  T-Value  P-Value  VIF
Constant  -2390   349      -6.84    0.000
Gest      143.10  9.13     15.68    0.000    1.06
Smoke     -244.5  42.0     -5.83    0.000    1.06

Regression Equation
Wgt = -2390 + 143.10 Gest - 244.5 Smoke

As we previously discussed, answering the research question merely involves testing the null hypothesis $$H_0 \colon \beta_2 = 0$$ against the alternative $$H_A \colon \beta_2 \ne 0$$. The P-value is < 0.001. There is sufficient evidence to conclude that there is a statistically significant difference between the mean birth weight of all babies of smoking mothers and the mean birth weight of all babies of non-smoking mothers, after taking into account length of gestation.

In summary, "pooling" your data and fitting one combined regression function allows you to easily and efficiently answer research questions concerning the binary predictor variable.
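To mirror this pooled analysis outside Minitab, here is a minimal Python sketch using statsmodels; the variable names Wgt, Gest and Smoke follow the output above, while the data frame name birthsmokers is an assumption about how the data would be loaded:

import statsmodels.formula.api as smf

# One combined fit: since Smoke is a (0,1) indicator, its coefficient is the
# difference in mean birth weight between smokers and non-smokers at any
# fixed gestation length.
fit = smf.ols("Wgt ~ Gest + Smoke", data = birthsmokers).fit()
print(fit.summary())  # the Smoke row gives the test of H0: beta_2 = 0 directly

The same question would require extra, more complicated work with two separate per-group fits, which is exactly the second advantage described above.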
2022-08-15 10:01:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3827972710132599, "perplexity": 2429.861655795106}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00600.warc.gz"}
https://www.zbmath.org/authors/?q=rv%3A6456
zbMATH — the first resource for mathematics Tu, Dongsheng Compute Distance To: Author ID: tu.dongsheng Published as: Tu, Dongsheng; Tu, D. S.; Tu, D.; Tu, DongSheng; Tu, Donsheng Documents Indexed: 42 Publications since 1986, including 1 Book Reviewing Activity: 84 Reviews all top 5 Co-Authors 12 single-authored 5 Wu, Yaohua 4 Li, Xiao 3 Gross, Alan J. 3 Peng, Yingwei 3 Song, Hui 3 Zheng, Zhongguo 2 Chen, Bingshu Eric 2 Chen, Jiahua 2 Cheng, Ming-Yen 2 Jiang, Wenyu 2 Tan, Xianming 1 Boruvka, Audrey 1 Chen, Youyi 1 Cheng, Ping 1 Hall, Peter Gavin 1 Jiang, Shan 1 Jin, Huan 1 Lu, Wenqi 1 Moon, Nathalie C. 1 Qin, Guoyou 1 Qiu, Peihua 1 Rao, Calyampudi Radhakrishna 1 Shao, Jun 1 Shi, Peide 1 Takahara, Glen K. 1 Yu, Zhaoping 1 Zhao, Naiqing 1 Zhu, Liting 1 Zhu, Zhongyi all top 5 Serials 3 Biometrical Journal 3 Statistics & Probability Letters 2 Journal of Systems Science and Mathematical Sciences 2 Chinese Journal of Applied Probability and Statistics 2 Communications in Statistics. Simulation and Computation 2 Communications in Statistics. Theory and Methods 2 Lifetime Data Analysis 2 Science China. Mathematics 1 The Canadian Journal of Statistics 1 Metrika 1 Acta Mathematica Sinica 1 Biometrika 1 Calcutta Statistical Association. Bulletin 1 Journal of Combinatorics, Information & System Sciences 1 Journal of Multivariate Analysis 1 Acta Mathematicae Applicatae Sinica 1 Journal of Mathematical Research & Exposition 1 Chinese Annals of Mathematics. Series A 1 Computational Statistics 1 Journal of Statistical Computation and Simulation 1 Scientia Sinica. Series A 1 Kexue Tongbao 1 Statistische Hefte 1 Computational Statistics and Data Analysis 1 Journal of Biopharmaceutical Statistics 1 Applied Mathematics. Series A (Chinese Edition) 1 Statistica Sinica 1 Journal of University of Science and Technology of China 1 Statistics and Its Interface 1 Sankhyā 1 Springer Series in Statistics Fields 42 Statistics (62-XX) 9 Numerical analysis (65-XX) 3 Biology and other natural sciences (92-XX) Citations contained in zbMATH Open 21 Publications have been cited 276 times in 258 Documents Cited by Year The jackknife and bootstrap. Zbl 0947.62501 Shao, Jun; Tu, Dongsheng 1995 Confidence bands for hazard rates under random censorship. Zbl 1153.62360 Cheng, Ming-Yen; Hall, Peter; Tu, Dongsheng 2006 Random weighting method in regression models. Zbl 0698.62069 Zheng, Zhongguo; Tu, Dongsheng 1988 On the use of the ratio or the odds ratio of cure rates in therapeutic equivalence clinical trials with binary endpoints. Zbl 0914.62103 Tu, Dongsheng 1998 A Bartlett type correction for Rao’s score test in Cox regression model. Zbl 1192.62055 Tu, Dongsheng; Chen, Jiahua; Shi, Peide; Wu, Yaohua 2005 Confidence intervals for the first crossing point of two hazard functions. Zbl 1322.62265 Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng 2009 The Edgeworth expansion for the random weighting method. Zbl 0661.62013 Tu, Dongsheng; Zheng, Zhongguo 1987 Constructing nonparametric likelihood confidence regions with high order precisions. Zbl 1225.62045 Li, Xiao; Chen, Jiahua; Wu, Yaohua; Tu, Dongsheng 2011 The Berry-Esséen theorem for the subject-years method in mortality analysis with censored data. Zbl 0738.62102 Tu, D. S. 1991 On the estimation of skewness of a statistic using the jackknife and the bootstrap. Zbl 0743.62031 Tu, D.; Zhang, L. 1992 Approximating the distribution of a general standardized functional statistic with that of jackknife pseudo values. Zbl 0838.62016 Tu, D. S. 
1992 A new approach for joint modelling of longitudinal measurements and survival times with a cure fraction. Zbl 1348.62258 Song, Hui; Peng, Yingwei; Tu, Dongsheng 2012 Random weighting: Another approach to approximate the unknown distributions of pivotal quantities. Zbl 0781.62024 Tu, Dongsheng; Zheng, Zhongguo 1991 Two one-sided tests procedures in establishing therapeutic equivalence with binary clinical endpoints: Fixed sample performances and sample size determination. Zbl 0912.62119 Tu, Dongsheng 1997 Bias reduction for jackknife skewness estimators. Zbl 0825.62202 Tu, D.; Gross, A. J. 1994 A Bartlett type correction for Wald test in Cox regression model. Zbl 1147.62386 Li, Xiao; Wu, Yaohua; Tu, Dongsheng 2008 A Bartlett-type correction for the subject-years method in comparing survival data to a standard population. Zbl 0865.62081 Tu, Dongsheng; Gross, Alan J. 1996 Inference on the occurrence/exposure rate with mixed censoring models. Zbl 0751.62050 Rao, C. Radhakrishna; Tu, D. S. 1991 On the asymptotic accuracy of bootstrapping and random weighing approximation for $$m$$-dependent sample mean. Zbl 0791.62046 Yu, Zhaoping; Tu, Dongsheng 1993 Accurate confidence intervals for the ratio of specific occurrence/exposure rates in risk and survival analysis. Zbl 0837.62096 Tu, Dongsheng; Gross, Alan J. 1995 Random weighting the functional statistics. Zbl 0677.62017 Tu, Dongsheng 1988 A new approach for joint modelling of longitudinal measurements and survival times with a cure fraction. Zbl 1348.62258 Song, Hui; Peng, Yingwei; Tu, Dongsheng 2012 Constructing nonparametric likelihood confidence regions with high order precisions. Zbl 1225.62045 Li, Xiao; Chen, Jiahua; Wu, Yaohua; Tu, Dongsheng 2011 Confidence intervals for the first crossing point of two hazard functions. Zbl 1322.62265 Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng 2009 A Bartlett type correction for Wald test in Cox regression model. Zbl 1147.62386 Li, Xiao; Wu, Yaohua; Tu, Dongsheng 2008 Confidence bands for hazard rates under random censorship. Zbl 1153.62360 Cheng, Ming-Yen; Hall, Peter; Tu, Dongsheng 2006 A Bartlett type correction for Rao’s score test in Cox regression model. Zbl 1192.62055 Tu, Dongsheng; Chen, Jiahua; Shi, Peide; Wu, Yaohua 2005 On the use of the ratio or the odds ratio of cure rates in therapeutic equivalence clinical trials with binary endpoints. Zbl 0914.62103 Tu, Dongsheng 1998 Two one-sided tests procedures in establishing therapeutic equivalence with binary clinical endpoints: Fixed sample performances and sample size determination. Zbl 0912.62119 Tu, Dongsheng 1997 A Bartlett-type correction for the subject-years method in comparing survival data to a standard population. Zbl 0865.62081 Tu, Dongsheng; Gross, Alan J. 1996 The jackknife and bootstrap. Zbl 0947.62501 Shao, Jun; Tu, Dongsheng 1995 Accurate confidence intervals for the ratio of specific occurrence/exposure rates in risk and survival analysis. Zbl 0837.62096 Tu, Dongsheng; Gross, Alan J. 1995 Bias reduction for jackknife skewness estimators. Zbl 0825.62202 Tu, D.; Gross, A. J. 1994 On the asymptotic accuracy of bootstrapping and random weighing approximation for $$m$$-dependent sample mean. Zbl 0791.62046 Yu, Zhaoping; Tu, Dongsheng 1993 On the estimation of skewness of a statistic using the jackknife and the bootstrap. Zbl 0743.62031 Tu, D.; Zhang, L. 1992 Approximating the distribution of a general standardized functional statistic with that of jackknife pseudo values. Zbl 0838.62016 Tu, D. S. 
1992 The Berry-Esséen theorem for the subject-years method in mortality analysis with censored data. Zbl 0738.62102 Tu, D. S. 1991 Random weighting: Another approach to approximate the unknown distributions of pivotal quantities. Zbl 0781.62024 Tu, Dongsheng; Zheng, Zhongguo 1991 Inference on the occurrence/exposure rate with mixed censoring models. Zbl 0751.62050 Rao, C. Radhakrishna; Tu, D. S. 1991 Random weighting method in regression models. Zbl 0698.62069 Zheng, Zhongguo; Tu, Dongsheng 1988 Random weighting the functional statistics. Zbl 0677.62017 Tu, Dongsheng 1988 The Edgeworth expansion for the random weighting method. Zbl 0661.62013 Tu, Dongsheng; Zheng, Zhongguo 1987 all top 5 Cited by 448 Authors 11 Tu, Dongsheng 7 Bouzebda, Salim 6 Politis, Dimitris Nicolas 6 Qiu, Peihua 6 Sen, Bodhisattva 5 Xiong, Shifeng 4 Ferrari, Silvia Lopes de Paula 4 Lemonte, Artur José 4 Lui, Kung-Jong 4 Romano, Joseph P. 4 Wolf, Michael 4 Wu, Yaohua 3 Chatterjee, Snigdhansu 3 Chen, Jiahua 3 Dufour, Jean-Marie 3 Marcheselli, Marzia 3 Paparoditis, Efstathios 3 Peng, Liang 3 Qi, Yongcheng 3 Zhao, Lincheng 2 Aerts, Marc 2 Alvarez-Andrade, Sergio 2 Arcos Cebrián, Antonio 2 Banks, Harvey Thomas 2 Belzunce, Félix 2 Chan, Kung-Sik 2 Chang, Kuang-Chao 2 Chen, Song Xi 2 Claeskens, Gerda 2 Colubi, Ana 2 Davison, Anthony C. 2 De Martini, Daniele 2 Fang, Yixin 2 Franceschi, Sara 2 Ghosh, Sujit Kumar 2 González-Manteiga, Wenceslao 2 González-Rodríguez, Gil 2 Gross, Alan J. 2 Hidalgo, Javier 2 Jiang, Jiancheng 2 Khalaf, Lynda 2 Koul, Hira Lal 2 Li, Guoying 2 Li, Xiao 2 Limnios, Nikolaos 2 Liu, Rong 2 Lombardía, María José 2 Lopuhaä, Hendrik P. 2 Maesono, Yoshihiko 2 Martin, Michael A. 2 Martínez-Riquelme, Carolina 2 Modarres, Reza 2 Mu, Weiyan 2 Munk, Axel 2 Musta, Eni 2 Panero, Marco 2 Park, Kayoung 2 Peng, Yingwei 2 Scaillet, Olivier 2 Seijo, Emilio 2 Seo, Myunghwan 2 Shao, Jun 2 Song, Hui 2 Song, Weixing 2 Vargas, Tiago M. 2 Wang, Lei 2 Wang, Liang 2 Woodroofe, Michael Barrett 2 Zähle, Henryk 2 Zhang, Guoyi 1 Adimari, Gianfranco 1 Akritas, Michael G. 1 Al-Sharadqah, Ali 1 Alin, Aylin 1 Allison, James S. 1 Amann, Anton 1 Amiri, Saeid 1 Andrews, Donald Wilfrid Kao 1 Angelov, Angel G. 1 Arcones, Miguel A. 1 Arnab, Raghunath 1 Arranz, Miguel A. 1 Arteche, Josu 1 Asmild, Mette 1 Babu, Gutti Jogesh 1 Bacro, Jean-Noël 1 Baíllo, Amparo 1 Banerjee, Moulinath 1 Barabesi, Lucio 1 Barbe, Philippe 1 Bastien, Philippe 1 Beutner, Eric 1 Beyaztas, Ufuk 1 Biewen, Martin 1 Bilton, Penny 1 Bitmead, Robert R. 1 Bittanti, Sergio 1 Blanco-Fernández, Angela 1 Blumentritt, Thomas 1 Bose, Arup ...and 348 more Authors all top 5 Cited in 79 Serials 24 Computational Statistics and Data Analysis 21 Journal of Statistical Planning and Inference 18 The Annals of Statistics 16 Journal of Multivariate Analysis 13 Journal of Econometrics 10 Statistics & Probability Letters 7 Communications in Statistics. Theory and Methods 7 Journal of Statistical Computation and Simulation 7 Statistical Papers 6 Test 5 Scandinavian Journal of Statistics 5 Science in China. Series A 5 Econometric Theory 5 Electronic Journal of Statistics 4 Statistical Science 4 Communications in Statistics. 
Simulation and Computation 4 Journal of Biopharmaceutical Statistics 4 Mathematical Methods of Statistics 4 The Annals of Applied Statistics 3 The Canadian Journal of Statistics 3 Annals of the Institute of Statistical Mathematics 3 Insurance Mathematics & Economics 3 Computational Statistics 3 Lifetime Data Analysis 3 Journal of Statistical Theory and Practice 2 Automatica 2 Biometrical Journal 2 Statistics 2 Economics Letters 2 Applied Mathematical Modelling 2 European Journal of Operational Research 2 Bernoulli 2 Journal of Nonparametric Statistics 2 Journal of Applied Statistics 2 Statistical Methodology 2 AStA. Advances in Statistical Analysis 2 Science China. Mathematics 2 Journal of Agricultural, Biological, and Environmental Statistics 1 Computer Physics Communications 1 Mathematical Biosciences 1 Metrika 1 Physics Reports 1 Psychometrika 1 Bulletin of Mathematical Biology 1 Fuzzy Sets and Systems 1 Information Sciences 1 International Journal of Mathematics and Mathematical Sciences 1 Journal of Computational and Applied Mathematics 1 Theoretical Population Biology 1 Journal of Time Series Analysis 1 Applied Mathematics Letters 1 Mathematical and Computer Modelling 1 Signal Processing 1 Stochastic Processes and their Applications 1 Statistische Hefte 1 Acta Mathematica Sinica. New Series 1 Chinese Science Bulletin 1 Computational Economics 1 Applied Mathematics. Series B (English Edition) 1 ACM Transactions on Modeling and Computer Simulation 1 Journal of the Royal Statistical Society. Series B. Statistical Methodology 1 Extremes 1 Stochastic Environmental Research and Risk Assessment 1 Journal of High Energy Physics 1 Applied Stochastic Models in Business and Industry 1 Foundations of Computational Mathematics 1 Statistical Modelling 1 Journal of Machine Learning Research (JMLR) 1 North American Actuarial Journal 1 Review of Derivatives Research 1 Statistical Methods and Applications 1 Mathematical Biosciences and Engineering 1 Journal of the Korean Statistical Society 1 Mathematical Geosciences 1 Statistics Surveys 1 Sankhyā. Series A 1 Journal of Theoretical Biology 1 Dependence Modeling 1 Journal de la Société Française de Statistique all top 5 Cited in 18 Fields 240 Statistics (62-XX) 30 Numerical analysis (65-XX) 25 Probability theory and stochastic processes (60-XX) 15 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 11 Biology and other natural sciences (92-XX) 5 Computer science (68-XX) 5 Systems theory; control (93-XX) 4 Operations research, mathematical programming (90-XX) 2 Quantum theory (81-XX) 1 History and biography (01-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Ordinary differential equations (34-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Geophysics (86-XX) 1 Information and communication theory, circuits (94-XX)
2021-09-21 03:22:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4550280272960663, "perplexity": 13934.09500391416}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057131.88/warc/CC-MAIN-20210921011047-20210921041047-00428.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=128&t=58076&p=218297
isochoric/isometric: $\Delta V = 0$ isothermal: $\Delta T = 0$ isobaric: $\Delta P = 0$
Giselle Littleton 1F Posts: 78 Joined: Wed Sep 18, 2019 12:20 am
Does an adiabatic process affect how much energy a reaction gives off?
Jessica Booth 2F Posts: 101 Joined: Fri Aug 30, 2019 12:18 am
Adiabatic means that $q = 0$, so a reaction can still absorb or release energy if work is done on the system or the system does work.
Joowon Seo 3A Posts: 100 Joined: Sat Aug 24, 2019 12:17 am Been upvoted: 1 time
Adiabatic means that energy cannot be absorbed or ejected as heat, so the system must do work or work must be done on the system in order to absorb or release energy.
Posts: 102 Joined: Wed Sep 18, 2019 12:18 am
If we apply this fact to the $\Delta U = q + w$ equation, we find that $\Delta U = w$ for an adiabatic process.
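To make the first-law bookkeeping concrete, here is a minimal numerical sketch; the numbers are hypothetical and not from the thread:

```python
# First law of thermodynamics: Delta U = q + w
# Adiabatic process: q = 0, so Delta U = w.
q = 0.0      # heat exchanged (J); zero by the adiabatic assumption
w = 150.0    # hypothetical work done ON the system (J), e.g. a compression
delta_U = q + w
print(delta_U)  # 150.0 -> the internal energy changes only through work
```

With $q = 0$, any change in internal energy must come entirely from work, which is exactly the point made in the replies above.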
2021-03-02 05:38:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6324869990348816, "perplexity": 3225.9893044189703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363217.42/warc/CC-MAIN-20210302034236-20210302064236-00294.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/rev-2-11-60-hz-circuit-house-wiring-voltage-varies-maximum-value-minimum-60-times-second-a-q2787324
## Electric Circuits (Rev 2/11) In a 60 Hz circuit (such as house wiring), the voltage varies from a maximum value to a minimum, and back again 60 times per second. How is the average voltage defined? Why this definition, and not simply the arithmetic average of the voltage?
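The excerpt cuts off before any answer is given. The usual convention (an assumption here, since the source does not state it) is the root-mean-square (RMS) value, because the plain time average of a sinusoid is zero and so carries no information about its size. A quick numerical check:

```python
import numpy as np

t = np.linspace(0.0, 1.0 / 60.0, 10_000)   # one full 60 Hz cycle
v = 170.0 * np.sin(2 * np.pi * 60.0 * t)   # ~170 V peak, typical US mains

print(np.mean(v))              # ~0: the simple average of a sinusoid vanishes
print(np.sqrt(np.mean(v**2)))  # ~120.2: the RMS value, 170 / sqrt(2)
```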
2013-05-25 12:05:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916487097740173, "perplexity": 1261.7106758900256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705948348/warc/CC-MAIN-20130516120548-00078-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/mbe.2013.10.1067
# Parametrization of the attainable set for a nonlinear control model of a biochemical process
• In this paper, we study a three-dimensional nonlinear model of a controllable reaction $[X] + [Y] + [Z] \rightarrow [Z]$, where the reaction rate is given by an unspecified nonlinear function. A model of this type describes a variety of real-life processes in chemical kinetics and biology; in this paper our particular interest is in its application to wastewater biotreatment. For this control model, we analytically study the corresponding attainable set and parameterize it by the moments of switching of piecewise constant control functions. This allows us to visualize the attainable sets using a numerical procedure (a toy sketch of such a procedure appears after the reference list below). These analytical results generalize the earlier findings, which were obtained for a trilinear reaction rate (which corresponds to the law of mass action) and reported in [18,19], to the case of a general rate of reaction. These results allow us to reduce the problem of constructing the optimal control to a straightforward constrained finite-dimensional optimization problem.
Mathematics Subject Classification: Primary: 49J15, 49N90; Secondary: 93C10, 93C95.
References:
[1] J.-P. Aubin and A. Cellina, "Differential Inclusions: Set-Valued Maps and Viability Theory," Springer, Berlin-New York, 1984. doi: 10.1007/978-3-642-69512-4.
[2] A. D. Bojarski, J. Rojas and T. Zhelev, Modelling and sensitivity analysis of ATAD, Computers & Chemical Engineering, 34 (2010), 802-811.
[3] B. Bonnard and M. Chyba, "Singular Trajectories and Their Role in Control Theory," Springer-Verlag, Berlin-Heidelberg-New York, 2003.
[4] G. Bromström and H. Drange, On the mathematical formulation and parameter estimation of the Norwegian sea plankton system, Sarsia, 85 (2000), 211-225.
[5] D. Brune, Optimal control of the complete-mix activated sludge process, Environmental Technology Letters, 6 (1985), 467-476. doi: 10.1080/09593338509384365.
[6] M. Burke, M. Chapwanya, K. Doherty, I. Hewitt, A. Korobeinikov, M. Meere, S. McCarthy, M. O'Brien, V. T. N. Tuoi, H. Winstanley and T. Zhelev, Modeling of autothermal thermophylic aerobic digestion, Mathematics-in-Industry Case Studies Journal, 2 (2010), 34-63.
[7] S. Busenberg, S. Kumar, P. Austin and G. Wake, The dynamics of a model of a plankton-nutrient interaction, Bulletin of Mathematical Biology, 52 (1990), 677-696.
[8] F. L. Chernous'ko and V. B. Kolmanovskii, Computational and approximate methods of optimal control, Journal of Mathematical Sciences, 12 (1979), 310-353. doi: 10.1007/BF01098370.
[9] F. L. Chernous'ko, "Ellipsoidal State Estimation for Dynamical Systems," CRC Press, Boca Raton, Florida, 1994. doi: 10.1016/j.na.2005.01.009.
[10] B. P. Demidovich, "Lectures on Stability Theory," Nauka, Moscow, 1967.
[11] A. V. Dmitruk, A generalized estimate on the number of zeroes for solutions of a class of linear differential equations, SIAM Journal on Control and Optimization, 30 (1992), 1087-1091. doi: 10.1137/0330057.
[12] P. Georgescu and Y.-H. Hsieh, Global stability for a virus dynamics model with nonlinear incidence of infection and removal, SIAM Journal on Applied Mathematics, 67 (2007), 337-353. doi: 10.1137/060654876.
[13] M. Graells, J. Rojas and T. Zhelev, Energy efficiency optimization of wastewater treatment. Study of ATAD, Computer Aided Chemical Engineering, 28 (2010), 967-972.
[14] E. N. Khailov and E. V. Grigorieva, On the attainability set for a nonlinear system in the plane, Moscow University. Computational Mathematics and Cybernetics, (2001), 27-32.
[15] E. V. Grigorieva and E. N. Khailov, A nonlinear controlled system of differential equations describing the process of production and sales of a consumer good, Dynamical Systems and Differential Equations, (2003), 359-364.
[16] E. N. Khailov and E. V. Grigorieva, Description of the attainability set of a nonlinear controlled system in the plane, Moscow University. Computational Mathematics and Cybernetics, (2005), 23-28.
[17] E. V. Grigorieva and E. N. Khailov, Attainable set of a nonlinear controlled microeconomic model, Journal of Dynamical and Control Systems, 11 (2005), 157-176. doi: 10.1007/s10883-005-4168-8.
[18] E. V. Grigorieva, N. V. Bondarenko, E. N. Khailov and A. Korobeinikov, Three-dimensional nonlinear control model of wastewater biotreatment, Neural, Parallel, and Scientific Computations, 20 (2012), 23-35.
[19] E. V. Grigorieva, N. V. Bondarenko, E. N. Khailov and A. Korobeinikov, Finite-dimensional methods for optimal control of autothermal thermophilic aerobic digestion, in "Industrial Waste" (eds. K.-Y. Show and X. Guo), InTech, Croatia, (2012), 91-120. doi: 10.5772/36237.
[20] T. Gross, W. Ebenhöh and U. Feudel, Enrichment and foodchain stability: The impact of different forms of predator-prey interaction, Journal of Theoretical Biology, 227 (2004), 349-358. doi: 10.1016/j.jtbi.2003.09.020.
[21] V. I. Gurman and E. A. Trushkova, Estimates for attainability sets of control systems, Differential Equations, 45 (2009), 1636-1644. doi: 10.1134/S0012266109110093.
[22] Kh. G. Guseinov, A. N. Moiseev and V. N. Ushakov, On the approximation of reachable domains of control systems, Journal of Applied Mathematics and Mechanics, 62 (1998), 169-175. doi: 10.1016/S0021-8928(98)00022-7.
[23] P. Hartman, "Ordinary Differential Equations," John Wiley & Sons, New York-London-Sydney, 1964.
[24] A. N. Kolmogorov, Sulla teoria di Volterra della lotta per l'esistenza, Giorn. Ist. Ital. Attuari, 7 (1936), 74-80.
[25] V. A. Komarov, Estimates of the attainable set for differential inclusions, Mathematical Notes, 37 (1985), 916-925.
[26] A. Korobeinikov, Stability of ecosystem: Global properties of a general prey-predator model, Mathematical Medicine and Biology, 26 (2009), 309-321. doi: 10.1093/imammb/dqp009.
[27] A. Korobeinikov, Global asymptotic properties of virus dynamics models with dose dependent parasite reproduction and virulence, and nonlinear incidence rate, Mathematical Medicine and Biology, 26 (2009), 225-239.
[28] A. Korobeinikov, Global properties of a general predator-prey model with non-symmetric attack and consumption rate, Discrete and Continuous Dynamical Systems. Ser. B, 14 (2010), 1095-1103. doi: 10.3934/dcdsb.2010.14.1095.
[29] M. A. Krasnosel'skii, "The Operator of Translation along the Trajectories of Differential Equations," American Mathematical Society, Providence, RI, 1968.
[30] A. B. Kurzhanski and I. Valyi, "Ellipsoidal Calculus for Estimation and Control," Birkhäuser, Boston, 1997.
[31] U. Ledzewicz, J. Marriott, H. Maurer and H. Schättler, Realizable protocols for optimal administration of drugs in mathematical models for anti-angiogenic treatment, Mathematical Medicine and Biology, 27 (2010), 157-179. doi: 10.1093/imammb/dqp012.
[32] U. Ledzewicz, E. Kashdan and H. Schättler, Optimal and suboptimal protocols for a mathematical model for tumor anti-angiogenesis in combination with chemotherapy, Mathematical Biosciences and Engineering, 8 (2011), 307-323. doi: 10.3934/mbe.2011.8.307.
[33] U. Ledzewicz, M. Naghnaeian and H. Schättler, Optimal response to chemotherapy for a mathematical model of tumor-immune dynamics, Journal of Mathematical Biology, 64 (2012), 557-577. doi: 10.1007/s00285-011-0424-6.
[34] E. B. Lee and L. Markus, "Foundations of Optimal Control Theory," John Wiley & Sons, New York, 1967.
[35] M. S. Nikol'skii, Approximation of the attainability set for a controlled process, Mat. Zametki, 41 (1987), 71-76, 121.
[36] N. P. Osmolovskii and H. Maurer, Equivalence of second order optimality conditions for bang-bang control problems. Part 2: proofs, variational derivatives and representations, Control and Cybernetics, 36 (2007), 5-45.
[37] A. I. Ovseevich and F. L. Chernous'ko, Two-sided estimates on the attainability domains of controlled systems, Journal of Applied Mathematics and Mechanics, 46 (1982), 590-595. doi: 10.1016/0021-8928(82)90005-3.
[38] A. I. Panasyuk and V. I. Panasyuk, An equation generated by a differential inclusion, Mathematical Notes, 27 (1980), 429-437, 494.
[39] A. I. Panasyuk, Equations of attainable set dynamics. I. Integral funnel equation, Journal of Optimization Theory and Applications, 64 (1990), 349-366. doi: 10.1007/BF00939453.
[40] A. I. Panasyuk, Equations of attainable set dynamics. II. Partial differential equations, Journal of Optimization Theory and Applications, 64 (1990), 367-377. doi: 10.1007/BF00939454.
[41] T. Parthasarathy, "On Global Univalence Theorems," Springer-Verlag, Berlin-Heidelberg-New York, 1983.
[42] A. Sard, The measure of the critical values of differentiable maps, Bulletin of the American Mathematical Society, 48 (1942), 883-890. doi: 10.1090/S0002-9904-1942-07811-6.
[43] H. Schättler and U. Ledzewicz, "Geometric Optimal Control. Theory, Methods and Examples," Springer, New York-Heidelberg-Dordrecht-London, 2012. doi: 10.1007/978-1-4614-3834-2.
[44] L. Schwartz, "Analyse Mathematique 1," Hermann, Paris, 1967.
[45] G. V. Shevchenko, Numerical method for solving a nonlinear time-optimal control problem with additive control, Computational Mathematics and Mathematical Physics, 47 (2007), 1768-1778. doi: 10.1134/S0965542507110048.
[46] G. V. Shevchenko, Numerical solution of a nonlinear time optimal control problem, Computational Mathematics and Mathematical Physics, 51 (2011), 537-549. doi: 10.1134/S0965542511040154.
[47] S. Ichiraku, A note on global implicit function theorems, IEEE Transactions on Circuits and Systems, 32 (1985), 503-505. doi: 10.1109/TCS.1985.1085729.
[48] D. Szolnoki, Set-oriented methods for computing reachable sets and control sets, Discrete and Continuous Dynamical Systems. Ser. B, 3 (2003), 361-382. doi: 10.3934/dcdsb.2003.3.361.
[49] Z. Feng and H. R. Thieme, Endemic models with arbitrarily distributed periods of infection I: Fundamental properties of the model, SIAM Journal on Applied Mathematics, 61 (2000), 803-833. doi: 10.1137/S0036139998347834.
[50] H. R. Thieme and Z. Feng, Endemic models with arbitrarily distributed periods of infection II: Fast disease dynamics and permanent recovery, SIAM Journal on Applied Mathematics, 61 (2000), 983-1012. doi: 10.1137/S0036139998347846.
[51] A. N. Tikhonov, A. B. Vasil'eva and A. G. Sveshnikov, "Differential Equations," Springer-Verlag, Berlin-Heidelberg-New York, 1985. doi: 10.1007/978-3-642-82175-2.
[52] A. I. Tyatyushkin and O. V. Morzhin, Constructive methods of control optimization in nonlinear systems, Automation and Remote Control, 70 (2009), 772-786. doi: 10.1134/S0005117909050063.
[53] A. I. Tyatyushkin and O. V. Morzhin, Numerical investigation of attainability sets of nonlinear controlled differential systems, Automation and Remote Control, 72 (2011), 1291-1300. doi: 10.1134/S0005117911060178.
[54] P. Varaiya and A. B. Kurzhanski, Ellipsoidal methods for dynamics and control. Part I, Journal of Mathematical Sciences, 139 (2006), 6863-6901. doi: 10.1007/s10958-006-0397-y.
[55] O. Ya. Viro, O. A. Ivanov, N. Yu. Netsvetaev and V. M. Kharlamov, "Elementary Topology: Problem Textbook," AMS, 2008.
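As promised above, here is a toy numerical procedure in the spirit of the abstract: the attainable set of a control system is parameterized by the switching moment of a piecewise constant (bang-bang) control, and visualized by sweeping that moment. The dynamics below are a deliberately simple double integrator chosen for illustration; they are an assumption of this sketch, not the biochemical model of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy system (NOT the model of the paper): a double integrator
#   x1' = x2,  x2' = u,  with u in {-1, +1},
# steered from the origin, with a single switching moment tau in [0, T].
T = 1.0

def endpoint(u0, tau):
    """Endpoint of the trajectory with u = u0 on [0, tau) and u = -u0 on [tau, T]."""
    def f(t, x):
        u = u0 if t < tau else -u0
        return [x[1], u]
    # Small max_step so the integrator resolves the control discontinuity.
    sol = solve_ivp(f, (0.0, T), [0.0, 0.0], max_step=1e-3)
    return sol.y[:, -1]

# Parameterize endpoints by the switching moment and the initial control sign.
boundary = np.array([endpoint(u0, tau)
                     for u0 in (-1.0, 1.0)
                     for tau in np.linspace(0.0, T, 50)])
print(boundary[:3])  # a few points (x1(T), x2(T))
```

For this toy system, single-switch bang-bang controls are enough: the endpoints swept out above trace the boundary of the attainable set at time $T$, which is the same "parameterize by switching moments, then visualize" idea the abstract describes for the biochemical model.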
2022-12-02 08:41:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6980805397033691, "perplexity": 3946.593753033456}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710900.9/warc/CC-MAIN-20221202082526-20221202112526-00702.warc.gz"}
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoms/1177693535
### On the Nonexistence of a Three Series Condition for Series of Nonindependent Random Variables
David Gilat
Source: Ann. Math. Statist. Volume 42, Number 1 (1971), 409.
Full-text: Open access
2013-05-19 09:48:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870276212692261, "perplexity": 2737.234403760714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00071-ip-10-60-113-184.ec2.internal.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=sm&paperid=7594&option_lang=eng
Mat. Sb., 2010, Volume 201, Number 4, Pages 3–24 (Mi msb7594)
An algorithm for linearizing convex extremal problems
E. S. Gorskaya
M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics
Abstract: This paper suggests a method of approximating the solution of minimization problems for convex functions of several variables under convex constraints. The main idea of this approach is the approximation of a convex function by a piecewise linear function, which results in replacing the problem of convex programming by a linear programming problem. To carry out such an approximation, the epigraph of a convex function is approximated by the projection of a polytope of greater dimension. In the first part of the paper, the problem is considered for functions of one variable. In this case, an algorithm for approximating the epigraph of a convex function by a polygon is presented, it is shown that this algorithm is optimal with respect to the number of vertices of the polygon, and exact bounds for this number are obtained. After this, using an induction procedure, the algorithm is generalized to certain classes of functions of several variables. Applying the suggested method, polynomial algorithms for an approximate calculation of the $L_p$-norm of a matrix and of the minimum of the entropy function on a polytope are obtained. Bibliography: 19 titles.
Keywords: convex problems, piecewise linear functions, approximation of functions, evaluation of operator norms.
DOI: https://doi.org/10.4213/sm7594
English version: Sbornik: Mathematics, 2010, 201:4, 471–492
UDC: 519.853.3+517.518.8+514.172.45
MSC: 90C05, 90C25, 52A27
Citation: E. S. Gorskaya, “An algorithm for linearizing convex extremal problems”, Mat. Sb., 201:4 (2010), 3–24; Sb. Math., 201:4 (2010), 471–492
This publication is cited in the following articles:
1. Gorskaya E. S., “Priblizhenie vypuklykh funktsii proektsiyami mnogogrannikov” [Approximation of convex functions by projections of polyhedra], Vestn. Mosk. un-ta. Ser. 1. Matem. Mekh., 2010, no. 5, 20–27.
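As a minimal illustration of the reduction described in the abstract, the sketch below minimizes a one-variable convex function over an interval by replacing its epigraph with finitely many supporting lines and solving the resulting linear program. The function, grid, and solver are illustrative assumptions of this sketch, not the paper's algorithm (which approximates the epigraph by polytope projections with proven vertex-count bounds):

```python
import numpy as np
from scipy.optimize import linprog

# Convex f and its derivative (illustrative choice, not from the paper).
f = lambda x: np.exp(x) - 2.0 * x
fp = lambda x: np.exp(x) - 2.0

a, b = -1.0, 2.0
grid = np.linspace(a, b, 15)

# Supporting-line (outer) approximation of the epigraph:
#   t >= f(xi) + f'(xi) * (x - xi)   for each grid point xi,
# rewritten for linprog as  f'(xi) * x - t <= f'(xi) * xi - f(xi).
A_ub = np.column_stack([fp(grid), -np.ones_like(grid)])
b_ub = fp(grid) * grid - f(grid)

# Variables are (x, t); minimizing t over the approximate epigraph is an LP.
res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(a, b), (None, None)])
x_star, t_star = res.x
print(x_star, t_star)   # close to (ln 2, 2 - 2 ln 2) ~ (0.693, 0.614)
```

Because the tangents lie below $f$, this LP gives a lower bound on the true minimum that tightens as the grid is refined, which is the "convex programming replaced by linear programming" idea in miniature.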
2021-01-18 23:11:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3490689992904663, "perplexity": 6324.593074622038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517159.7/warc/CC-MAIN-20210118220236-20210119010236-00701.warc.gz"}
https://gr-info.univ.gakushuin.ac.jp/index.php?action=pages_view_main&active_action=cvclient_view_main_init&display_type=cv&cvid=Shu_Nakamura&type=misc&page=2&num=5&block_id=271
Researcher Information
Nakamura, Shu (中村 周; reading: Nakamura Shū)
http://pc1.math.gakushuin.ac.jp/~shu/
Professor, Department of Mathematics, Faculty of Science, Gakushuin University; Doctor of Science (University of Tokyo)
Researcher number: 50183520
Research fields
• Natural sciences in general / Basic analysis / Functional analysis, functional equations
Papers
Behrndt Jussi, Gesztesy Fritz, Nakamura Shu. MATHEMATISCHE ANNALEN 371(3-4) 1255-1300, August 2018 [refereed]
Takuro Matsuta, Tohru Koma, Shu Nakamura. ANNALES HENRI POINCARE 18(2) 519-528, February 2017 [refereed]
We improve the Lieb-Robinson bound for a wide class of quantum many-body systems with long-range interactions decaying by power law. As an application, we show that the group velocity of information propagation grows by power law in time for such ...
J. Math. Sci. Univ. Tokyo 24, 239-257, 2017 [refereed]
Shu Nakamura. COMMUNICATIONS IN PARTIAL DIFFERENTIAL EQUATIONS 41(6) 894-912, 2016 [refereed]
We consider the scattering theory for a pair of operators $H_0$ and $H = H_0 + V$ on $L^2(M, m)$, where $M$ is a Riemannian manifold, $H_0$ is a multiplication operator on $M$, and $V$ is a pseudodifferential operator of order $-\mu$, $\mu > 1$. We show that a time-depende...
Shu Nakamura. JOURNAL OF MATHEMATICAL PHYSICS 55(11), November 2014 [refereed]
We consider the scattering theory for discrete Schrödinger operators on $\mathbb{Z}^d$ with long-range potentials. We prove the existence of modified wave operators constructed in terms of solutions of a Hamilton-Jacobi equation on the torus $\mathbb{T}^d$. (C) 2014 A...
MISC
Shu Nakamura, July 2014
We consider scattering theory for a pair of operators $H_0$ and $H = H_0 + V$ on $L^2(M, m)$, where $M$ is a Riemannian manifold, $H_0$ is a multiplication operator on $M$, and $V$ is a pseudodifferential operator of order $-\mu$, $\mu > 1$...
Shu Nakamura, March 2014
We consider the scattering theory for discrete Schrödinger operators on $\mathbb{Z}^d$ with long-range potentials. We prove the existence of modified wave operators constructed in terms of solutions of a Hamilton-Jacobi equation on the torus...
Kazuki Horie, Shu Nakamura, 50(3) 477-496, 2014
In a previous paper by the second author, we discussed a characterization of the microlocal singularities for solutions to Schrödinger equations with long range type perturbations, using solutions to a Hamilton-Jacobi equation. ...
Shu Nakamura, May 2013
In this short note, we apply the Mourre theory of limiting absorption with difference type conditions on the potential, instead of conditions on the derivatives. In order to do this, we modify the definition of the conjugate operator...
Shu Nakamura, Alexander Pushnitski, February 2012
The object of study in this paper is the on-shell scattering matrix of the Schrödinger operator with the potential satisfying assumptions typical in the theory of shape resonances. We study the spectrum of ... in the ...
2020-06-05 15:11:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8745446801185608, "perplexity": 2178.3412005565788}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348502097.77/warc/CC-MAIN-20200605143036-20200605173036-00301.warc.gz"}
http://www.ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Entov_Michael&arg9=Michael_Entov
Geometry, Spectral Theory, Groups, and Dynamics
Edited by: Michael Entov, Yehuda Pinchover, and Michah Sageev, Technion - Israel Institute of Technology, Haifa, Israel
A co-publication of the AMS and Bar-Ilan University.
Contemporary Mathematics 2005; 275 pp; softcover Volume: 387 ISBN-10: 0-8218-3710-9 ISBN-13: 978-0-8218-3710-8 List Price: US$92 Member Price: US$73.60 Order Code: CONM/387
This volume contains articles based on talks given at the Robert Brooks Memorial Conference on Geometry and Spectral Theory and the Workshop on Groups, Geometry and Dynamics held at Technion - the Israel Institute of Technology (Haifa). Robert Brooks' (1952 - 2002) broad range of mathematical interests is represented in the volume, which is devoted to various aspects of global analysis, spectral theory, the theory of Riemann surfaces, Riemannian and discrete geometry, and number theory. A survey of Brooks' work has been written by his close colleague, Peter Buser. Also included in the volume are articles on analytic topics, such as Szegő's theorem, and on geometric topics, such as isoperimetric inequalities and symmetries of manifolds. This book is copublished with Bar-Ilan University. The book is suitable for graduate students and research mathematicians interested in various aspects of geometry and global analysis.
• P. Buser -- On the mathematical work of Robert Brooks
• D. Blanc -- Moduli spaces of homotopy theory
• R. Brooks and M. Monastyrsky -- K-regular graphs and Hecke surfaces
• P. Buser and K.-D. Semmler -- Isospectrality and spectral rigidity of surfaces with small topology
• I. Chavel -- Topics in isoperimetric inequalities
• B. Farb and S. Weinberger -- Hidden symmetries and arithmetic manifolds
• H. M. Farkas -- Variants of the $3N+1$ conjecture and multiplicative semigroups
• U. Frauenfelder, V. Ginzburg, and F. Schlenk -- Energy capacity inequalities via an action selector
• K. Fujiwara -- On non-bounded generation of discrete subgroups in rank-1 Lie group
• C. Gordon, P. Perry, and D. Schueth -- Isospectral and isoscattering manifolds: A survey of techniques and examples
• M. G. Katz and C. Lescop -- Filling area conjecture, optimal systolic inequalities, and the fiber class in abelian covers
• E. Leichtnam -- An invitation to Deninger's work on arithmetic zeta functions
• A. Lubotzky -- Some more non-arithmetic rigid groups
• R. G. Pinsky -- On domain monotonicity for the principal eigenvalue of the Laplacian with a mixed Dirichlet-Neumann boundary condition
• B. Simon -- The sharp form of the strong Szegő theorem
2015-03-05 17:04:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23603826761245728, "perplexity": 4637.9924790037785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936464303.77/warc/CC-MAIN-20150226074104-00263-ip-10-28-5-156.ec2.internal.warc.gz"}
https://hal.archives-ouvertes.fr/hal-00460508
# Modification of Radiation Pressure due to Cooperative Scattering of Light
Abstract: Cooperative spontaneous emission of a single photon from a cloud of $N$ atoms modifies substantially the radiation pressure exerted by a far-detuned laser beam exciting the atoms. On one hand, the force induced by photon absorption depends on the collective decay rate of the excited atomic state. On the other hand, directional spontaneous emission counteracts the recoil induced by the absorption. We derive an analytical expression for the radiation pressure in steady-state. For a smooth extended atomic distribution we show that the radiation pressure depends on the atom number via cooperative scattering and that, for certain atom numbers, it can be suppressed or enhanced.
Document type: Preprints, Working Papers, ...
https://hal.archives-ouvertes.fr/hal-00460508
Contributor: Tom Bienaime
Submitted on: Monday, March 1, 2010 - 2:18:12 PM
Last modification on: Wednesday, October 14, 2020 - 4:23:16 AM
### Identifiers
• HAL Id: hal-00460508, version 1
• ARXIV: 0912.1992
### Citation
Philippe Courteille, Simone Bux, Eleonora Lucioni, Katharina Lauber, Tom Bienaime, et al.. Modification of Radiation Pressure due to Cooperative Scattering of Light. 2009. ⟨hal-00460508⟩
2021-01-15 18:51:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6143254637718201, "perplexity": 4004.24914057107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703495936.3/warc/CC-MAIN-20210115164417-20210115194417-00308.warc.gz"}
https://mathoverflow.net/questions/211189/question-on-paper-of-stewart-and-top-about-ranks-of-elliptic-curves-over-qt
# Question on paper of Stewart and Top about ranks of elliptic curves over Q(t)
I'm reading "On Ranks of Twists of Elliptic Curves and Power-Free Values of Binary Forms" by Stewart and Top, and struggling to understand the argument on p. 962 which shows that the rank of a particular elliptic curve $E_{D(t)}/\mathbb{Q}(t)$ is exactly 2. Here are the relevant details:
Start with the elliptic curve $$E/\mathbb{Q}: y^2 = x^3 + 1$$ and the polynomial $$D(t) = 2t(t - 1)(t + 1)(2t + 1)(t + 2) \in \mathbb{Z}[t].$$ Let $C/\mathbb{Q}$ be the curve given by $s^3 = D(t)$ and let $$E_D/\mathbb{Q}(t): y^2 = x^3 + D(t)^2.$$ For each point $P = (x(t), y(t))$ in $E_D(\mathbb{Q}(t))$, we define an element $\phi_P$ of $\text{Mor}_\mathbb{Q}(C, E)$ by $$\phi_P(t, s) = (x(t)/s^2, y(t)/s^3).$$ Then we have a map $$\lambda: E_D(\mathbb{Q}(t)) \to H^0(C, \Omega^1_{C/\mathbb{Q}})$$ given by $$\lambda(P) = \phi_P^\ast \omega_E$$ which is shown to be a homomorphism with finite kernel. We want to use this homomorphism to show that the rank of $E_D/\mathbb{Q}(t)$ is exactly two.
First, we can find two points, and show they are independent by looking at their images under $\lambda$. This is fine, and shows that the rank is at least 2. Next, we want to show that the rank is at most 2. We know that the image lands in $H^0(C, \Omega^1_{C/\mathbb{Q}}(\zeta_3))$, the eigenspace on which the automorphism of $C$ given by $\zeta(t, s) = (t, \zeta_3 s)$ acts on differentials as multiplication by $\zeta_3$. This constrains the image to the 3-dimensional space, say spanned by $\omega_1, \omega_2, \omega_3$ (this numbering is different from the paper). All of this makes sense to me.
What doesn't make sense is how the authors constrain the image to a 2-dimensional subspace. They define three involutions on $C$, called $\sigma_1, \sigma_2, \sigma_3$, and show that the space of $\sigma_i^\ast$-invariant holomorphic differentials is generated by $\omega_i$. Hence the quotient of $C$ by $\sigma_i$ is an elliptic curve. In two cases, the curve is isogenous to $E$ over $\mathbb{Q}$, and in the third case it is not. From this, they somehow infer that the rank is 2. I'm very confused about this, and would appreciate some more details or a reference. Thanks!
• I think you answered your own question... the image must land in the space corresponding to $\omega_1, \omega_2$ (or $\omega_3, \omega_4$ as labelled in the original paper), since the third case yields a curve which is not isogenous, which is the natural notion of 'isomorphic' for elliptic curves. Jul 10, 2015 at 2:08
• That's the part I don't understand... why the image must land in a space which yields an isogenous curve in this way. – stl Jul 10, 2015 at 12:45
Suppose that $\omega$ is the invariant differential of an elliptic quotient $C/\sigma$ of $C$. In particular, $\omega$ is non-zero. If $\omega$ lies in the image of $\lambda$, then there exists a point $P = (x(t), y(t))$ on $E_D(\mathbb{Q}(t))$ such that $\lambda(P) = \omega$. Let $\rho(P)$ denote the morphism in $\operatorname{Mor}_{\mathbb{Q}}(C/\sigma, E)$ corresponding to $\lambda(P)$. Then, as a morphism of curves, it must be either finite or constant. Since $\omega$ is non-zero, and a constant morphism pulls every differential back to zero, it must be the former; so $\rho(P)$ is an isogeny between $C/\sigma$ and $E$. Hence, this shows that any elliptic quotient corresponding to an element in the image of $\lambda$ must be isogenous to $E$. Since, as you noted in the question, the quotients $C/\sigma_1, C/\sigma_2$ are isogenous but $C/\sigma_3$ is not isogenous to the first two, $E$ can be isogenous to at most two of them. Since we have the lower bound of two, the rank is exactly two.
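In summary (my paraphrase of the comment and answer above, not text from the thread): the line spanned by $\omega_3$ is excluded from the image because $C/\sigma_3$ is not isogenous to $E$, so the image of $\lambda$ is confined to the span of $\omega_1$ and $\omega_2$; combined with the finiteness of $\ker\lambda$ and the two independent points, this gives
$$2 \;\le\; \operatorname{rank} E_D(\mathbb{Q}(t)) \;\le\; \dim_{\mathbb{Q}} \langle \omega_1, \omega_2 \rangle \;=\; 2.$$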
2023-01-31 13:37:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9573808908462524, "perplexity": 91.83555453154212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499871.68/warc/CC-MAIN-20230131122916-20230131152916-00016.warc.gz"}
https://learn.careers360.com/ncert/question-add-and-subtract-m-minus-n-m-plus-n/
# Add and subtract            (i)  $m-n,\; m+n$
S safeer
Adding $m-n$ and $m+n$ gives the following result: $(m-n)+(m+n)=2m$
Subtracting $m+n$ from $m-n$ gives the following result: $(m-n)-(m+n)=m-n-m-n=-2n$
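For completeness, a quick symbolic check of both results (this check is an addition, not part of the original answer):

```python
from sympy import symbols, expand

m, n = symbols("m n")
print(expand((m - n) + (m + n)))   # 2*m
print(expand((m - n) - (m + n)))   # -2*n
```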
2020-04-02 12:32:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3661433458328247, "perplexity": 4143.17574980389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506959.34/warc/CC-MAIN-20200402111815-20200402141815-00241.warc.gz"}
https://www.physicsforums.com/threads/surface-plasmons.399293/
# Surface Plasmons
1. Apr 28, 2010
### geo_alchemist
I'm still trying (yet unsuccessfully) to deal with surface plasmons, and I still hope for your help. Let me start like this: I find in the review that:
We consider an interface in the xy-plane between two half-infinite spaces, 1 and 2, of materials the optical properties of which are described by their complex frequency-dependent dielectric functions $\epsilon_1(\omega)$ and $\epsilon_2(\omega)$, respectively. We ignore magnetic materials. Surface polaritons can only be excited at such an interface if the dielectric displacement $\vec{D}$ of the electromagnetic mode has a component normal to the surface which can induce a surface charge density $\sigma$: $(D_2 - D_1)_z = 4\pi\sigma$
and here I found that I don't quite understand why there must be a component normal to the surface, and what the connection is between surface charge density and surface plasmons. Any help will be gratefully appreciated.
2. Apr 28, 2010
### Gokul43201 Staff Emeritus
About the normal components: think Gauss' Law. For the connection between surface charge density and surface plasmons, go back to square one, and start at the definition of a surface plasmon.
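For reference, here is the standard Gauss's-law pillbox argument behind the hint above, written out in the Gaussian units used by the quoted review (this derivation is an addition, not part of the original thread). Gauss's law for the displacement field reads

$$\oint_S \vec{D} \cdot d\vec{A} = 4\pi Q_{\text{free}}.$$

Apply it to a thin pillbox straddling the interface, with faces of area $A$ parallel to the $xy$-plane; as the pillbox height goes to zero, only the two faces contribute to the flux, and the enclosed free charge is the surface charge $\sigma A$:

$$(\vec{D}_2 - \vec{D}_1)\cdot\hat{z}\, A = 4\pi\sigma A \quad\Longrightarrow\quad (D_2 - D_1)_z = 4\pi\sigma.$$

So only the normal component of $\vec{D}$ couples to a surface charge density, and a surface plasmon is at bottom a collective oscillation of exactly that surface charge; a mode whose $\vec{D}$ is purely tangential to the interface cannot sustain it.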
2018-12-14 00:21:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6399050354957581, "perplexity": 1104.538267384753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825123.5/warc/CC-MAIN-20181214001053-20181214022553-00141.warc.gz"}
http://www.oxfordmathcenter.com/drupal7/node/522
# Properties of Legendre's Symbol Supposing that $p$ and $q$ are odd primes, and $a$ and $b$ are integers not divisible by $p$, the following properties for the Legendre Symbol hold. The first five are easy to prove, the sixth is a result from Gauss, and the last is one of the most famous and intriguing results of number theory 1. If $a \equiv b \pmod{p}$, then $\displaystyle{\left( \frac{a}{p} \right) = \left( \frac{b}{p} \right)}$ 2. $\displaystyle{\left( \frac{a}{p} \right) \left( \frac{b}{p} \right) = \left( \frac{ab}{p} \right)}$ 3. $\displaystyle{\left( \frac{a^2}{p} \right) = 1}$ 4. $\displaystyle{\left( \frac{1}{p} \right) = 1}$ 5. $\displaystyle{\left( \frac{-1}{p} \right) = \left\{ \begin{array}{cl} 1 & \textrm{ if } p \equiv 1\pmod{4}\\ -1 & \textrm{ if } p \equiv -1\pmod{4} \end{array} \right. }$ 6. $\displaystyle{\left( \frac{2}{p} \right) = (-1)^{\frac{p^2 -1}{8}}}$ 7. $\displaystyle{\left( \frac{p}{q} \right) \left( \frac{q}{p} \right) = (-1)^{\frac{p-1}{2} \cdot \frac{q-1}{2}}}$     (The Law of Quadratic Reciprocity) Taken together, these properties allow for quick computation of $\displaystyle{\left( \frac{a}{p} \right)}$, even for large values of $a$ and $p$. For example, suppose we wished to calculate $\displaystyle{\left( \frac{713}{1009} \right)}$. Upon noticing $1009$ is prime and $713 = 23 \times 31$, we immediately have from property (2) above: $$\left( \frac{713}{1009} \right) = \left( \frac{23}{1009} \right) \left( \frac{31}{1009} \right)$$ Let us deal with these two factors separately. As explanation of each calculation below, the number of the property applied appears over the corresponding equals sign. As can be seen, applying these properties properly reduces quite quickly the magnitudes of the numbers involved, until the evaluation of the Legendre symbol becomes trivial. $\displaystyle{\begin{array}{rcl} \left( \frac{23}{1009} \right) &\overset{7}{=}& \left( \frac{1009}{23} \right)\\ &\overset{1}{=}& \left( \frac{20}{23} \right)\\ &\overset{2}{=}& \left( \frac{2^2}{23} \right) \left( \frac{5}{23} \right)\\ &\overset{3}{=}& \left( \frac{5}{23} \right)\\ &\overset{7}{=}& \left( \frac{23}{5} \right)\\ &\overset{1}{=}& \left( \frac{3}{5} \right)\\ &\overset{7}{=}& \left( \frac{5}{3} \right)\\ &\overset{1}{=}& \left( \frac{2}{3} \right)\\ &\overset{6}{=}& -1 \end{array}}$ $\displaystyle{\begin{array}{rcl} \left( \frac{31}{1009} \right) &\overset{7}{=}& \left( \frac{1009}{31} \right)\\ &\overset{1}{=}& \left( \frac{17}{31} \right)\\ &\overset{7}{=}& \left( \frac{31}{17} \right)\\ &\overset{1}{=}& \left( \frac{14}{17} \right)\\ &\overset{2}{=}& \left( \frac{2}{17} \right) \left( \frac{7}{17} \right)\\ &\overset{6}{=}& \left( \frac{7}{17} \right)\\ &\overset{7}{=}& \left( \frac{17}{7} \right)\\ &\overset{1}{=}& \left( \frac{3}{7} \right)\\ &\overset{7}{=}& -\left( \frac{7}{3} \right)\\ &\overset{1}{=}& -\left( \frac{1}{3} \right)\\ &\overset{4}{=}& -1\\ \end{array}}$ But then, $\displaystyle{\left( \frac{23}{1009} \right) \left( \frac{31}{1009} \right) = (-1)(-1) = 1}$. As such, we may conclude any of the following three equivalent statements: • $\displaystyle{\left( \frac{713}{1009} \right) = 1}$ • $\displaystyle{713 \textrm{ is a quadratic residue of } 1009}$ • $\displaystyle{x^2 \equiv 713\pmod{1009} \textrm{ does indeed have a solution}}$ ◆ ◆ ◆
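The property-driven reduction above is mechanical enough to code directly. Below is a short Python sketch (an addition, not from the page) that mirrors it; note that once reciprocity is applied, the upper argument can become composite, so the recursion is really computing the Jacobi symbol, which agrees with the Legendre symbol whenever the lower argument is an odd prime:

```python
def legendre(a: int, p: int) -> int:
    """(a/p) via properties 1, 2, 4, 6, 7 above; p an odd prime."""
    a %= p                                    # property 1
    if a == 0:
        return 0                              # p divides a
    if a == 1:
        return 1                              # property 4
    if a % 2 == 0:                            # properties 2 and 6: pull out a factor of 2
        return (-1) ** (((p * p - 1) // 8) % 2) * legendre(a // 2, p)
    # a odd, a > 1: quadratic reciprocity (property 7), valid for Jacobi symbols too
    sign = -1 if a % 4 == 3 and p % 4 == 3 else 1
    return sign * legendre(p % a, a)

assert legendre(23, 1009) == -1
assert legendre(31, 1009) == -1
assert legendre(713, 1009) == 1               # matches the worked example
```

Euler's criterion, $\left( \frac{a}{p} \right) \equiv a^{(p-1)/2} \pmod{p}$, gives an independent one-line check: `pow(713, 504, 1009)` returns `1`, consistent with $713$ being a quadratic residue of $1009$.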
2018-04-25 06:34:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8808889985084534, "perplexity": 268.74419433263824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947705.94/warc/CC-MAIN-20180425061347-20180425081347-00455.warc.gz"}
https://socratic.org/questions/if-a-current-of-6-a-passing-through-a-circuit-generates-12-w-of-power-what-is-th
# If a current of 6 A passing through a circuit generates 12 W of power, what is the resistance of the circuit?
$\frac{1}{3}\ \Omega$
#### Explanation:
Power ($P$) of a circuit carrying a current $I$ and having a resistance $R$ is given as $P = I^2 R$, but given that $I = 6\ \text{A}$ and power $P = 12\ \text{W}$, hence $12 = 6^2 R$, $R = \frac{12}{36}$, $R = \frac{1}{3}\ \Omega$
Jun 20, 2018
Approximately $0.33$ ohms.
#### Explanation:
Power is related to resistance and current by the equation: $P = I^2 R$ where:
• $P$ is the power in watts
• $I$ is the current in amperes
• $R$ is the resistance in ohms
Rearranging for resistance, we get: $R = \frac{P}{I^2}$
Now, plugging in our given values, we find that:
$R = \frac{12\ \text{W}}{(6\ \text{A})^2} = \frac{12\ \text{W}}{36\ \text{A}^2} = \frac{1}{3}\ \Omega \approx 0.33\ \Omega$
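The same rearrangement as a quick numerical check (added here for illustration):

```python
P, I = 12.0, 6.0    # power in watts, current in amperes (from the problem)
R = P / I**2        # rearranged from P = I^2 * R
print(R)            # 0.3333... ohms, i.e. 1/3 ohm
```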
2022-01-26 08:14:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 20, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7708299160003662, "perplexity": 1329.1881660054512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00425.warc.gz"}
https://specs.anoma.net/main/print.html
# Anoma
Anoma is an intent-centric, privacy-preserving protocol for decentralized counterparty discovery, solving, and multi-chain atomic settlement. To learn more about Anoma's vision, take a look at the Anoma Vision Paper. These documents describe the motivation, design rationale, architecture, and protocol details of Anoma. They are intended to be both minimal, in that no more is said than necessary, and complete, in that enough is said to define precisely what Anoma is. Anoma is a protocol; it is not an implementation. Documentation for the implementation of the Anoma protocol by Heliax can be found here. Anoma is free, in both the senses of "free speech" and "free beer". The source for these documents is on Github, permissively licensed, and they can be forked or edited as you like. At present, this particular repository is stewarded by the Anoma Foundation. Contributions are welcome. This specification is designed to be readable in both breadth-first and depth-first manners. If you want to understand the broad motivation and architecture of Anoma, read the motivation and architecture overviews. If you want to dive into the protocol details of a particular aspect of Anoma (such as the state architecture) or a particular release line (such as V1), you can start directly with the document in question; overview documents are self-contained and/or crosslinked where necessary, and contain pointers to subprotocols and details thereof. A sitewide search function is available at the upper left.
NOTE: These documents are not yet complete. If you're reading this page right now, there's a high chance you might be interested in the V1 release line, slated for use in the Namada instance.
# Motivation
For now, see the Anoma vision paper.
# Releases
In order to facilitate progressive deployment and iteration, the Anoma protocols are organised into a series of releases. Releases combine a selection of subcomponents of the Anoma protocols (which are themselves independently versioned) into a unified, compatible whole designed both to be architecturally self-contained and to provide a coherent product proposition. Major release lines are defined by their product propositions - for example, once V1 is released, Heliax will continue to support, improve performance, and add features relevant to multi-asset shielded transfers, but features providing for a substantially different product proposition will be slated for other release lines. This method of organisation is not a position on how particular instances of the Anoma protocols should evolve or upgrade; it is just a choice to cleanly separate different protocol version lines. Note, however, that the architectural capabilities of subsequent major version releases subsume previous ones - everything V1 can do, V2 can do as well - the version lines are a temporally scoped mode of organisation; Anoma is designed to converge to a singular suite of modular protocols. At present, there are three major releases planned:
• V1: V1 provides for multi-asset shielded transfers, with assets from any connected chain sharing the same anonymity set, on top of a basic proof-of-stake Tendermint BFT stack.
• V2: V2 provides for programmable private bartering, counterparty discovery, and settlement, all on top of a bespoke heterogeneous consensus system.
• V3: V3 provides for explicit information flow control and multiparty private state transitions.
# V1
## What is Anoma v1?
Namada is the first release protocol version and the first fractal instance of the Anoma protocol.
Namada is a sovereign proof-of-stake blockchain, using Tendermint BFT consensus, that enables multi-asset private transfers for any native or non-native asset using a multi-asset shielded pool derived from the Sapling circuit. Namada features full IBC protocol support, a natively integrated Ethereum bridge, a modern proof-of-stake system with automatic reward compounding and cubic slashing, and a stake-weighted governance signalling mechanism. Users of shielded transfers are rewarded for their contributions to the privacy set in the form of native protocol tokens. A multi-asset shielded transfer wallet is provided in order to facilitate safe and private user interaction with the protocol.

## How does Namada relate to Anoma?

• The first major release version of the Anoma protocol.
• The first fractal instance launched as part of the Anoma network.

The Anoma protocol is designed to facilitate the operation of networked fractal instances, which intercommunicate but can utilise varied state machines and security models. Different fractal instances may specialise in different tasks and serve different communities. The Namada instance will be the first such fractal instance, and it will be focused exclusively on the use-case of private asset transfers.

## Raison d'être

Safe and user-friendly multi-asset privacy doesn't yet exist in the blockchain ecosystem. Up until now you had the choice to build a sovereign chain that reissues assets (e.g. Zcash) or to build a privacy-preserving solution on existing chains (e.g. Tornado Cash on Ethereum). Both have large trade-offs: in the former case users don't have assets that they actually want to use, and in the latter case the restrictions of existing platforms mean that users leak a ton of metadata and the protocols are expensive and clunky to use.

Namada can support any asset on an IBC-compatible blockchain and assets (such as ERC20 tokens) sent over a custom Ethereum bridge that reduces transfer costs and streamlines UX as much as possible. Once assets are on Namada, shielded transfers are cheap and all assets contribute to the same anonymity set.

Namada is also a helpful stepping stone to finalise, test, and launch a protocol version that is simpler than the full Anoma protocol but still encapsulates a unified and useful set of features. There are reasons to expect that it may make sense for a fractal instance focused exclusively on shielded transfers to exist in the long-term, as it can provide throughput and user-friendliness guarantees which are more difficult to provide with a more general platform. Namada is designed so that it could evolve into such an instance.
# Later components

• Mobile clients (Android/iOS)
• More IBC adapters to different chains & ecosystems
• Light clients, indexers with proofs
• Private bridges

# V2

V2 includes:

• Counterparty discovery, matchmaking, and settlement
• Transparent validity predicates
• Ferveo for front-running prevention
• Taiga
• J-group
• Typhon

Product: self-sovereign community economic infrastructure

• Private transfers
• Private bartering
• Local identity
• Local currency issuance
• Local UBI
• Public goods funding

# V3

V3 includes:

• FHE/MPC multi-party private state transitions

Product:

• Limited economic statistics
• Private auctions
• Private multi-party stateful contracts

# Architecture

Architectural subcomponents:

# Web Wallet UI and Features

The application is divided into 4 main sections:

• LockScreen
• AccountOverview
• Staking, Governance, Public Goods Funding
• Settings

These are further divided into individual screens or flows (comprising several screens) grouping activities that belong together. The user stories below are associated with visual representations of the views. They are either wireframes or placeholder designs. When the view is represented with a wireframe, there is likely a ready design for the view. When there are no ready designs, the developers can build the features using the placeholder designs.

Example: The wireframe for the staking view would look like this:

And the placeholder design for the same feature looks like this:

Here are the example user stories for that view:

User can:

• see an overview of own balances (staked, available, ...)
• see own active staking positions
• see listing and be able to search all validators
• easily be able to filter validators by state (active, inactive, ...)

# Views

## LockScreen

When the user accesses the wallet for the first time there is a need to create a new account. This screen gives the user the possibility to do so, or to unlock the wallet by using an existing account.

### LockScreen Wireframe

User can:

• unlock the wallet by entering the master password
• start a flow to create a new account

## AccountOverview

This is the most important part of the application and the part where the user spends the most time. Here the user performs the most common tasks such as creating transactions. Only one account is selected at a time and the selected account is indicated here.
### AccountOverview Wireframe

User can:

• see the aggregated balance in fiat currency
• see the currently selected account address
• navigate to Settings/Accounts for changing the account
• see a listing of all held tokens and their logos, balances, names

### AccountOverview/TokenDetails Wireframe

User can:

• see the balance of the token in native and fiat currency
• navigate to AccountOverview/TokenDetails/Receive for receiving tokens
• navigate to AccountOverview/TokenDetails/Send for sending tokens
• see a listing of past transactions of the current account and selected token

### AccountOverview/TokenDetails/Receive Wireframe

User can:

• see QR code of the address
• see the address as a string and copy it by clicking a button

### AccountOverview/TokenDetails/Send

Wireframe 1 Wireframe 2 Wireframe 3

User can:

view 1:

• see the balance of the token in the current account
• enter details: transfer amount, recipient address, memo
• select to perform the transaction as shielded

view 2:

• see a summary of the transaction details
• see a clear indication whether the transaction is transparent or shielded
• select a gas fee
• see an option in gas fees that is specific to shielded transactions
• see a transaction summary including the gas fee

view 3:

• see a confirmation once the transaction is confirmed
• be able to navigate to see the new transaction in the block explorer
• be able to navigate back to AccountOverview/TokenDetails

## StakingGovernancePgf

Aside from AccountOverview, this is the part that the user is likely to visit most frequently. When the user clicks the main menu Staking & Governance, a sub menu with 3 options (Staking, Governance, Public Goods Funding) opens. Staking is selected by default.

### Staking/Overview designs

User can:

• see an overview of own balances (staked, available, ...)
• see own active staking positions
• see the state of all the staking positions (pending, staked, unbonding with remaining time)
• see listing and be able to search all validators
• easily be able to filter validators by state (active, inactive, ...)
### Staking/ValidatorDetails designs

User can:

• see all information on chain about the validator
• see a logo of the validator
• see and click a link to the validator's website
• see all staking positions with the current validator
• see the state of all the staking positions (pending, staked, unbonding with remaining time)
• see all unclaimed rewards with the current validator
• open a modal to manage new staking, unstake, and claim rewards

### Governance/Proposals designs

User can:

• see a listing of the latest proposals and their statuses
• filter by proposal status
• search by proposal title
• navigate to the details of any proposal
• navigate to a view to create a new proposal

### Governance/ProposalDetails designs

User can:

• see all the details of the proposal
• vote on the proposal if voting is open and the user has not voted yet
• see all voting details of the proposal
• see the full description

### Governance/NewProposal designs

User can:

• enter the details (TBD) of the proposal
• see a summary of the proposal
• submit the proposal
• be prompted for a payment by the wallet

### PublicGoodsFunding/Overview designs

User can:

• see a list of current council members
• see a list of the latest continuous funding
• see a list of the latest retrospective funding
• navigate to see current and past council members
• navigate to see all continuous funding
• navigate to see all retrospective funding

### PublicGoodsFunding/Council designs

User can:

• see the details of the councils, including their funding, budget, members, ...
• by default, see the current council displayed
• select a tab "Past" and see all the past councils
• select any of the past councils in the table and see its details
• navigate to the governance vote for the council
• navigate to see the details of the continuous and retrospective funding of the council
• navigate to the council member view to see details about the council members

### PublicGoodsFunding/ContinuousFunding designs

User can:

• see all the funding
• filter by: all, active, past
• navigate to the council details that approved this funding
• navigate to the block explorer to see the transactions for the payments

### PublicGoodsFunding/RetrospectiveFunding designs

User can:

• see all the funding
• filter by: all, upcoming
• navigate to the council details that approved this funding
• navigate to the block explorer to see the transactions for the payments

## Settings

This is a part of the application that is visited less often. This is where the user can change settings or select the active account.

### Settings Wireframe

User can:

• navigate to Settings/Accounts
• navigate to Settings/WalletSettings

### Settings/WalletSettings Wireframe

User can:

• see and change the fiat currency to display in various locations in the app where amounts are being displayed in fiat currency
• default fiat currency is USD

### Settings/Accounts Wireframe

User can:

• select an account by clicking it, whereupon it becomes visibly selected
• navigate to Settings/AccountSettings for changing the settings of a certain account
• navigate to Settings/Accounts/NewAccount/Start for adding a new account to the wallet

### Settings/Accounts/NewAccount

view 1:

• see a welcome screen that explains the flow

view 2:

• enter an alias for the account
• enter and confirm a password
• select the length of the seed phrase (12 or 24 words)

view 3:

• see a seed phrase that was generated
• copy the seed phrase to clipboard

view 4:

• enter a randomly requested word from the set of words.
("please enter word #5") view 5: • see a confirmation that the account was created • navigate to AccountOverview and so that the newly created account becomes the selected account ### Settings/AccountSettings Wireframe User can: • Rename the selected account • display the seed phrase, user is being prompted for a password • delete account, user is prompted to input a security text to prevent an accidental deletion • select the network ## IBC Protocol The web wallet must be able to transfer token amounts to other chains via the Inter-Blockchain Communication Protocol (IBC). We need to be able to support the following: • Fungible token transfer (ICS020) from Namada to other Anoma chains • Fungible token transfer (ICS020) from Namada to Cosmos What the UI will need to display to the user: • Select a chain (chain ID) as destination • Enter a channel ID for destination (e.g., channel-0) • Specify a token • Specify an amount to transfer The web wallet will need to construct a MsgTransfer struct, which will get wrapped in a normal, signed transaction and broadcasted to the source ledger (this struct is passed into the Tx data): #![allow(unused)] fn main() { MsgTransfer { source_port: String, source_channel: String, token: Option<Coin>, sender: Signer, timeout_height: Height, timeout_timestamp: Timestamp } } A populated MsgTransfer with a disabled block-height timeout (instead using a timestamp timeout), may look like the following: #![allow(unused)] fn main() { MsgTransfer { source_port: PortId("transfer"), source_channel: ChannelId("channel-0"), token: Some(Coin { denom: "atest1v4ehgw36x3prswzxggunzv6pxqmnvdj9xvcyzvpsggeyvs3cg9qnywf589qnwvfsg5erg3fkl09rg5", amount: "1.23456" }), sender: Signer( "atest1v4ehgw36xvmrgdfsg9rrwdzxgfprq32yxvensdjxgcurxwpeg5mrxdpjxfp5gdp3xqu5gs2xd8k4aj" ), ), timeout_height: Height { revision: 0, height: 0 }, timeout_timestamp: Timestamp { time: Some(Time(PrimitiveDateTime { date: Date { year: 2022, ordinal: 124 }, time: Time { hour: 14, minute: 15, second: 33, nanosecond: 0 } })) } } } NOTE Unlike with tx_transfer, the amount we pass with the Token is not submitted in micro-units, but as a regular f32 value. No conversion is needed in the web wallet. Once this transaction is unwrapped and validated, apply_tx will invoke IBC.dispatch() (see: https://github.com/anoma/anoma/blob/master/wasm/wasm_source/src/tx_ibc.rs). When this is executed on the source chain, the balance will be deducted on the source account, so we need to reflect this in the interface. If the transaction succeeds, query the balance for that token and display to the user. ## Testing Instructions for setting up local Namada chains, along with the Hermes relatyer (ibc-rs) can be found here: https://hackmd.io/@heliax/BJ5Gmyxrq The wallet UI will need to be configured to connect to the source chain from which you want to transfer tokens. The user will have to enter a valid channel ID in the interface, in addition to an established address on the destination chain (the receiver). ## Configuration The wallet web app should accept a configuration per-environment that will contain not only the default network, but the possible destination networks that the user can transfer tokens to. We need the following information for each, at a minimum: • A user-friendly alias naming the network • Destination URL • Destination Port • A non-default portId, if necessary, though in most cases, the default of transfer would likely be used. 
Cosmos relayers:

# Client Application

### React Web Application

• Built with TypeScript
• State-management with Redux Toolkit (@reduxjs/toolkit)
• CRA (create-react-app) scripts v5 with Craco to enable yarn workspaces (monorepo package management)
• wasm-react-scripts - enabling WebAssembly files in the Webpack pipeline
• Styled-Components for all application/component styling

## WebAssembly Library

Much of the core functionality of the web app requires either direct interfacing with types from the Anoma codebase, or other Rust libraries that provide encryption, key-management, mnemonic-generation, etc., that are more easily and robustly handled in the Rust ecosystem than that of TypeScript.

The primary functionality that we currently pull from anoma involves constructing transactions. The web wallet interface should be able to serialize the data broadcast to the ledger for different transactions, and this requires items to be serialized within the WebAssembly code. We created anoma-lib, which houses wrapped Anoma types (wrapped when some work is needed to get them to work well with wasm), and the logic needed for us to be able to interface with it from TypeScript.

The Rust source code of anoma-lib is structured as follows:

```text
.
├── types
│   ├── keypair.rs
│   ├── mod.rs
│   ├── transaction.rs
│   ├── tx.rs
│   └── wrapper.rs
├── account.rs
├── lib.rs
├── transfer.rs
├── utils.rs
```

Here, we have several types that are essentially built on top of anoma types, allowing us to interface easily from the client app, such as address, keypair, tx, and wrapper, then a generic transaction type that handles the logic common to all transactions. Essentially, we want these types to handle any serialization that the anoma types require entirely within the wasm, then later translate the results into something the client can understand.

Outside of types, we have an account.rs file that allows us to call account functions, such as initialize (to construct an "init-account" transaction), from the client app. transfer.rs is similar, in that it provides the bridge for the client to issue a transfer transaction. Additional transactions can be easily created in this way, with any specific differences handled in a top-level Rust source file, the common logic of transactions handled by types/transaction, and any types that need extra work in order to be useful to the client added to types as well.

## Interfacing between the Client and WebAssembly

When compiling the wasm utilizing wasm-pack, we get the associated JavaScript source to interact with the WebAssembly output, as well as a TypeScript type definition file. When we set the wasm-pack target to web, we get an additional exported init function, which is a promise that resolves when the wasm is fully loaded, exposing the memory variable. In most cases we shouldn't need to interact directly with the memory of the wasm, but by awaiting the init() call, we can immediately execute any of the wasm methods.

In the case of anoma-lib, there is a corresponding class that initializes and exposes the features of the wasm in anoma-wallet, called AnomaClient. (NOTE: This is one use case for wasm, but we may have any number of wasm projects that the wallet can utilize). Exposing the features through a TypeScript class is a good opportunity to move from Rust-style "snake-casing" to camel-casing (most common in TypeScript), and we can add any additional type definitions at this level as well.
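As a rough illustration of this pattern, a wrapper class might look like the sketch below. The import path and the wasm export name (transfer) are assumptions for illustration; the real AnomaClient exposes whatever anoma-lib actually exports.

```typescript
// Sketch: wrapping a wasm-pack (--target web) module in a TypeScript class.
// The module path and the `transfer` export are illustrative assumptions.
import init, { transfer as wasmTransfer } from "./lib/anoma/anoma";

export class AnomaClient {
  private initialized = false;

  // Await the wasm binary being fetched and instantiated before use
  public async init(): Promise<AnomaClient> {
    if (!this.initialized) {
      await init();
      this.initialized = true;
    }
    return this;
  }

  // Expose a snake_case wasm export as an idiomatic camelCase method
  public makeTransfer(serializedTransfer: Uint8Array): Uint8Array {
    return wasmTransfer(serializedTransfer);
  }
}

// Usage: await new AnomaClient().init(), then call any wrapped method
```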
The goal of bridging wasm and the client TypeScript application should be to make its usage as straightforward as any TypeScript class. It should also be fairly easy for the developer to add new features to the Rust source and quickly bring them into the client app.

### Dealing with Rust types in TypeScript

One of the challenges of working with WebAssembly is how we might go about handling types from Rust code. We are limited to what JavaScript can handle, and often when serializing output from the wasm, we'll choose a simple type like string or number, or send the data as a byte array (very common, especially when dealing with numbers larger than JavaScript can handle by default). Sending raw data to the client is often a decent solution; any encoding we prefer can then be applied on the client-side (hexadecimal, base58, base64, etc), and choosing a Rust type like Vec<u8> makes this straightforward. (More to come on this topic in the future)

There is much more nuance to handling types from Rust wasm in TypeScript when working with wasm-bindgen, and more information can be found at the following URL: https://rustwasm.github.io/wasm-bindgen/reference/types.html

## Testing with WebAssembly

The wallet-interface should be able to run within the Jest testing framework. This is made possible by switching our wasm-pack target and rebuilding before the test is run, as tests run within NodeJS. So, instead of the following:

```sh
wasm-pack build ../anoma-lib/ --out-dir ../anoma-wallet/src/lib/anoma --out-name anoma --target web
```

we rebuild with the nodejs target (--target nodejs) before running the tests.

Given a master seed (a 12 or 24 word bip39 mnemonic), the user should be able to derive additional accounts deterministically. The wallet currently implements functionality to derive bip32 addresses following bip44 paths for slip-0044 registered coin types, using hardened addresses.

The bulk of this functionality resides in anoma-apps/anoma-lib/lib/src/wallet.rs (https://github.com/heliaxdev/anoma-apps/blob/main/packages/anoma-lib/lib/src/wallet.rs). Creating a new Wallet struct with a provided mnemonic generates a seed byte vector and establishes a root extended key. Calling the derive method on that Wallet, providing a derivation path, will give us the following struct:

```rust
pub struct DerivedAccount {
    wif: String,          // Address in Wallet Import Format (WIF)
    private_key: Vec<u8>, // Extended Private key
    public_key: Vec<u8>,  // Extended Public key
    secret: Vec<u8>,      // ed25519 secret key
    public: Vec<u8>,      // ed25519 public key
}
```

The ed25519 keys can then be used to initialize an account on the ledger to receive an Established Address.

TBD

## Resources

## Persistence of User Wallet

The state of the user's wallet, consisting of their master seed, along with any accounts derived from that seed, should be stored locally in a safe manner. As this requires the use of localStorage, all data should be encrypted. Presently, this challenge is being addressed by using the user's password (specified when creating their master seed) to encrypt/decrypt the mnemonic seed, as well as unlocking the state of their wallet.

The accounts in the state are being persisted via redux-persist, with an encryption transform that handles the encrypting and decrypting of all data stored in localStorage. The mnemonic is stored separately from the accounts data.
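The following sketch shows how such an encrypted persistence layer can be wired up, assuming the redux-persist-transform-encrypt package provides the encryptTransform mentioned below; the storage key and reducer names are illustrative.

```typescript
// Sketch of an encrypted redux-persist configuration; names are illustrative
import storage from "redux-persist/lib/storage"; // localStorage adapter
import { encryptTransform } from "redux-persist-transform-encrypt";

export const makePersistConfig = (secretKey: string) => ({
  key: "anoma-wallet",
  storage,
  whitelist: ["accounts"], // only derived account data is persisted
  transforms: [
    encryptTransform({
      secretKey, // derived from the user's password
      onError: (error: Error) => console.error(error),
    }),
  ],
});
```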
In the anoma-apps/packages/anoma-lib/lib/types/mnemonic.rs implementation of Mnemonic, we provide the ability to specify a password, allowing us to retrieve a storage value of the mnemonic which is encrypted before saving to localStorage. When the wallet is locked, the user must provide a password, which is validated by attempting to decrypt the stored mnemonic. If successful, the password is used to either generate an encrypted Redux persistence layer, or decrypt the existing one, restoring the user's wallet state.

redux-persist gives us the ability to specify which sub-sections of the state should be persisted. Presently, this is only enabled for any derived account data. From the persisted store, we can establish a persistor, which can be passed into a PersistGate component that will only display its children once the state is retrieved and decrypted from storage.

If we wanted to export the state of the user's accounts, this would be trivial, and simply a matter of exporting a JSON file containing the JSON.stringified version of their accounts state. Some work would need to be done in order to restore the data into Redux, however.

The localStorage state is stored in one of three places, depending on your environment:

• persist:anoma-wallet - Production
• persist:anoma-wallet-dev - Devnet
• persist:anoma-wallet-local - Local ledger

This allows us to keep our wallet state in sync with multiple ledgers while testing.

## Restoring the accounts state from file

The user should have the ability to save the state of their accounts in their wallet to a JSON file. It is relatively trivial to take a snapshot of the accounts state once the user is authenticated. Technically, this will likely involve a process by which, following the upload of the file and successful parsing, the existing persist:anoma-wallet storage is cleared, and when the store is initialized, we pass the parsed accounts state in to configureStore by way of the preloadedState parameter. This will only happen once, and on subsequent calls to the makeStore function, it should hydrate from the encrypted value in local storage.

Refer to the following to see how our present makeStore Redux store factory functions: https://github.com/heliaxdev/anoma-apps/blob/9551d9d0f20b291214357bc7f4a5ddc46bdc8ee0/packages/anoma-wallet/src/store/store.ts#L18-L50

This method currently accepts a secretKey as required by the encryptTransform, and checks the environment variables REACT_APP_LOCAL and NODE_ENV to determine where the store gets saved in localStorage. This is mostly useful for local testing where you may want to switch between connecting to a local ledger or a testnet, and want to keep your local stores in sync with both.

## Challenges

As a secret is required to unlock the persisted store, this store must be instantiated dynamically once a password is entered and validated. In the current implementation of the wallet, any routes that will make use of the Redux store are loaded asynchronously. When they are loaded, the store is initialized with the user's password (which is passed in through the Context API in React, separate from the Redux state).

## Using JSON RPC to Communicate with Ledger

To query values from the ledger, the web-wallet must issue JSON RPC calls to the Tendermint abci_query endpoint over HTTP, which, if running the ledger locally, would look like:

http://localhost:26657/abci_query/

To handle this in the wallet, we can make use of existing functionality from cosmjs, namely the RpcClient and WebsocketClient.
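As a minimal sketch of what such a query can look like with cosmjs (using the Tendermint34Client convenience wrapper rather than the raw RpcClient, and the balance path format shown in the next section), consider:

```typescript
// Sketch: querying a storage value via abci_query with cosmjs.
// Decoding of the returned bytes is left to the caller.
import { Tendermint34Client } from "@cosmjs/tendermint-rpc";

export const queryBalance = async (
  rpcUrl: string, // e.g. http://localhost:26657
  tokenAddress: string,
  ownerAddress: string
): Promise<Uint8Array> => {
  const client = await Tendermint34Client.connect(rpcUrl);
  // Path format as described in the RPC HTTP Client examples below
  const path = `value/#${tokenAddress}/balance/#${ownerAddress}`;
  const { value } = await client.abciQuery({ path, data: new Uint8Array() });
  client.disconnect();
  return value;
};
```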
### RPC HTTP Client

Over HTTP, using the abci_query endpoint, we can query the ledger by providing a path to the storage value we wish to query. Here are some examples:

• Query balance: value/#{token_address}/balance/#{owner_address}
• Query epoch: epoch
• Is known address?: has_key/#{address}/?

There are many other types of queries in addition to abci_query that can be issued to Tendermint. See https://docs.tendermint.com/master/rpc/ for more information.

### WebSocket Client

The most interesting type of interaction with the ledger thus far is via WebSockets. The goal of the implementation in anoma-wallet is to allow us to provide listeners so that we can update the React app according to activity on the ledger. The core functionality of the implementation on the client is as follows:

```typescript
public async broadcastTx(
  hash: string,
  tx: Uint8Array,
  { onBroadcast, onNext, onError, onComplete }: SubscriptionParams
): Promise<SocketClient> {
  if (!this._client) {
    this.connect();
  }

  try {
    const queries = [`tm.event='NewBlock'`, `applied.hash='${hash}'`];

    // Broadcast the wrapped and signed transaction (base64-encoded)
    this._client
      ?.execute(
        createJsonRpcRequest("broadcast_tx_sync", { tx: toBase64(tx) })
      )
      .then(onBroadcast)
      .catch(onError);

    // Subscribe to NewBlock events matching our transaction hash
    this._client
      ?.listen(
        createJsonRpcRequest("subscribe", {
          query: queries.join(" AND "),
        })
      )
      .subscribe({
        next: onNext,
        error: onError,
        complete: onComplete,
      });

    return Promise.resolve(this);
  } catch (e) {
    return Promise.reject(e);
  }
}
```

There are a few key things happening here. Once we have constructed a transaction, we receive a transaction hash and a Uint8Array containing the bytes of the wrapped and signed transaction. We first execute the request to broadcast_tx_sync, which can take an onBroadcast callback from the client to listen to the initial response from the ledger. We provide the tx data in base64 format as an argument. Following that, we subscribe to events on the ledger using a query containing tm.event='NewBlock' AND applied.hash='transaction_hash_value', then register the following listeners so that we may trigger activity in the front-end app:

• onNext - called when we receive a NewBlock event that matches our hash
• onError - called in the event of an error
• onComplete - called when the websocket closes

The way this library in anoma-wallet/src/lib/ is implemented, we can also determine when we want to disconnect the WebSocket. For instance, if for some reason we want to issue a series of transactions in succession, we could feasibly leave the connection open, then close it after the final transaction is complete. Alternatively, and in most cases, we would simply close the connection when we are finished with a single transaction, which would then trigger the onComplete callback.

See Transparent Transactions for more information on how the transactions are initially constructed.

# Transparent Transactions

## Constructing Transparent Transactions

The web-wallet will need to support many transactions. As the data that gets submitted to the ledger is most easily constructed from anoma types, we perform the assembly of the transaction within WebAssembly using Rust so that we may natively interact with anoma. The role of wasm in this scenario is to provide two pieces of data to the client (which will handle the broadcasting of the transaction), which are:

1. hash - the hash of the transaction
2. data - a byte array of the final wrapped and signed transaction

The following outlines how we can construct these transactions before returning them to the client.
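Tying these two pieces together, the client-side flow can be sketched roughly as follows; the structural type below stands in for the SocketClient whose broadcastTx method is shown above, and the handler bodies are placeholders.

```typescript
// Structural stand-in for the SocketClient shown above
interface SocketClientLike {
  broadcastTx(
    hash: string,
    tx: Uint8Array,
    handlers: {
      onBroadcast?: (response: unknown) => void;
      onNext?: () => void;
      onError?: (error: unknown) => void;
      onComplete?: () => void;
    }
  ): Promise<unknown>;
}

// Sketch: broadcasting the { hash, data } pair returned from the wasm
const submitTransaction = async (
  client: SocketClientLike,
  hash: string,    // 1. hash of the transaction, from the wasm
  data: Uint8Array // 2. wrapped and signed transaction bytes, from the wasm
): Promise<void> => {
  await client.broadcastTx(hash, data, {
    onBroadcast: (response) => console.log("broadcast response", response),
    onNext: () => console.log("transaction applied in a new block"),
    onError: (error) => console.error(error),
    onComplete: () => console.log("websocket closed"),
  });
};
```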
## Part 1 - Token Transfer Transactions

There are a few steps involved in creating and signing a transaction:

1. Create an anoma::proto::Tx struct and sign it with a keypair
2. Wrap Tx with an anoma::types::transaction::WrapperTx struct, which encrypts the transaction
3. Create a new anoma::proto::Tx with the new WrapperTx as data, and sign it with a keypair (this will be broadcast to the ledger)

### 1.1 - Creating the anoma::proto::Tx struct

The requirements for creating this struct are as follows:

• A pre-built wasm in the form of a byte array (this is loaded in the client as a Uint8Array type to pass to the wasm)
• A serialized anoma::types::token::Transfer object which contains the following:
  • source - source address derived from keypair
  • target - target address
  • token - token address
  • amount - amount to transfer
• A UTC timestamp. NOTE: this is created when calling proto::Tx::new(); however, this is incompatible with the wasm runtime (time is undefined). Therefore, we need to get a valid timestamp from js_sys:

```rust
// anoma-lib/src/util.rs
pub fn get_timestamp() -> DateTimeUtc {
    let now = js_sys::Date::new_0();

    let year = now.get_utc_full_year() as i32;
    let month: u32 = now.get_utc_month() + 1;
    let day: u32 = now.get_utc_date();
    let hour: u32 = now.get_utc_hours();
    let min: u32 = now.get_utc_minutes();
    let sec: u32 = now.get_utc_seconds();

    let utc = Utc.ymd(year, month, day).and_hms(hour, min, sec);
    DateTimeUtc(utc)
}
```

#### Creating the types::token::Transfer struct to pass in as data:

In wasm:

```rust
// anoma-lib/src/transfer.rs
let transfer = token::Transfer {
    source: source.0,
    target: target.0,
    token: token.0.clone(),
    amount,
};

// The data we pass to proto::Tx::new
let data = transfer
    .try_to_vec()
    .expect("Encoding unsigned transfer shouldn't fail");
```

In Anoma CLI: https://github.com/anoma/anoma/blob/f6e78278608aaef253617885bb7ef95a50057268/apps/src/lib/client/tx.rs#L406-L411

#### Creating and signing the proto::Tx struct

In wasm:

```rust
// anoma-lib/src/types/tx.rs
impl Tx {
    pub fn new(tx_code: Vec<u8>, data: Vec<u8>) -> proto::Tx {
        proto::Tx {
            code: tx_code,
            data: Some(data),
            timestamp: utils::get_timestamp(),
        }
    }
}
```

NOTE: Here we provide a workaround to an issue with proto::Tx::new() in wasm - instead of calling the method directly on Tx, we create a new implementation that returns a proto::Tx, with the timestamp being set using js_sys in order to make this wasm-compatible.

In Anoma CLI: https://github.com/anoma/anoma/blob/f6e78278608aaef253617885bb7ef95a50057268/apps/src/lib/client/tx.rs#L417-L419

### 1.2 - Creating the anoma::types::transaction::WrapperTx struct

The requirements for creating this struct are as follows:

• A transaction::Fee type, which contains:
  • amount - the Fee amount
  • token - the address of the token
• epoch - the ID of the epoch from query
• gas_limit - this contains a u64 value representing the gas limit
• tx - the proto::Tx type we created earlier.

In wasm:

```rust
// anoma-lib/src/types/wrapper.rs
transaction::WrapperTx::new(
    transaction::Fee {
        amount,
        token: token.0,
    },
    &keypair,
    storage::Epoch(u64::from(epoch)),
    transaction::GasLimit::from(gas_limit),
    tx,
)
```

NOTE: Here we can directly invoke WrapperTx::new, so we only need to concern ourselves with converting the JavaScript-provided values into the appropriate types.
In Anoma CLI: https://github.com/anoma/anoma/blob/f6e78278608aaef253617885bb7ef95a50057268/apps/src/lib/client/tx.rs#L687-L696

#### 1.3 - Create a new Tx with WrapperTx and sign it

Here we create a WrapperTx type, and with that we create a new Tx type (our wrapped Tx type) with the WrapperTx as the data, an empty vec![] for code, and a new timestamp, and then we sign it.

In wasm:

```rust
// anoma-lib/src/types/wrapper.rs -> sign()
(Tx::new(
    vec![],
    transaction::TxType::Wrapper(wrapper_tx)
        .clone()
        .try_to_vec()
        .expect("Could not serialize WrapperTx"),
))
.sign(&keypair)
```

We can summarize a high-level overview of the entire process from the anoma-lib/src/types/transaction.rs implementation:

```rust
let source_keypair = Keypair::deserialize(serialized_keypair)?;
let keypair = key::ed25519::Keypair::from_bytes(&source_keypair.to_bytes())
    .expect("Could not create keypair from bytes");

let tx = Tx::new(
    tx_code,
    data,
).sign(&keypair);

let wrapper_tx = WrapperTx::new(
    token,
    fee_amount,
    &keypair,
    epoch,
    gas_limit,
    tx,
);

let hash = wrapper_tx.tx_hash.to_string();
let wrapper_tx = WrapperTx::sign(wrapper_tx, &keypair);
let bytes = wrapper_tx.to_bytes();

// Return serialized wrapped & signed transaction as bytes with hash
// in a tuple:
Ok(Transaction {
    hash,
    bytes,
})
```

In Anoma CLI: https://github.com/anoma/anoma/blob/f6e78278608aaef253617885bb7ef95a50057268/apps/src/lib/client/tx.rs#L810-L814

## Part 2 - Initialize Account Transaction

Constructing an Initialize Account transaction follows a similar process to a transfer; however, in addition to providing a tx_init_account wasm, we need to provide the vp_user wasm as well, as this is required when constructing the transaction:

```rust
// anoma-lib/src/account.rs
let vp_code: Vec<u8> = vp_code.to_vec();
let keypair = &Keypair::deserialize(serialized_keypair.clone())
    .expect("Keypair could not be deserialized");
let public_key = PublicKey::from(keypair.0.public.clone());

let data = InitAccount {
    public_key,
    vp_code: vp_code.clone(),
};
```

Following this, we pass data into our new transaction as before, along with tx_code and the required values for WrapperTx, returning the final result in a JsValue containing the transaction hash and the returned byte array.

## Submitting Transparent Transactions

See RPC for more information on HTTP and WebSocket RPC interaction with the ledger.

# Shielded Transfers In Web Client

Shielded transfers are based on MASP and allow users of Anoma to perform transactions where only the recipient, the sender, and a holder of a viewing key can see the transaction details. They are based on the specifications defined at Shielded execution.

## Codebase

The code for interacting with shielded transfers is split across 2 places:

• anoma-wallet (TypeScript)
  • capturing the user interactions
  • providing user feedback
  • fetching the existing MASP transactions from the ledger
• masp-web (Rust)
  • generating shielded transfers
  • encrypting/decrypting data
  • utilising MASP crate packages

```text
│ ├── masp-web     # MASP specific Rust code
│ ├── anoma-wallet # anoma web wallet
```

## High level data flow in the client

In the current implementation, whenever a user starts to perform a new action relating to shielded transfers, such as creating a new transfer or retrieving the shielded balance, the client fetches all existing shielded transfers from the ledger.
In its current form this is done in a non-optimal way, where the already fetched past shielded transactions are not persisted in the client. They are fetched every time and only live shortly in memory, as raw byte arrays in the form they come in from the ledger. Their lifetime in the client spans from the fetching in the TypeScript code to being passed to and scanned/decrypted by the MASP protocol in the Rust code. This process can be further optimized:

• The Anoma CLI already does caching of fetched transfers, so that logic can be re-used by providing a virtual filesystem (for example memfs) implementation to Rust.
• The scanning can likely already start in parallel while the fetching is running, and if a sufficient amount of notes is found in scanning, the fetching could be terminated.

## Relation to MASP/Anoma CLI

The feature set and logic between the CLI and the web client should be the same. There are, however, a few differences in how they work; they are listed here:

• When optimizing the shielded interaction, we need to fetch and persist the existing shielded transfers in the client. For this the CLI is using the file system of the operating system, while the web client will either have to store that data directly to the persistence mechanism of the browser (localStorage or IndexedDB) or go through a virtual filesystem that is compliant with the WASI interface.
• In the current state the network calls will have to happen from the TypeScript code, outside of the Rust and WASM. So any function calls to the shielded transfer related code in Rust must accept arrays of byte arrays that contain the newly fetched shielded transfers.
• There are limitations to the system calls when querying the CPU core count in the web client, so the sub-dependencies of MASP using Rayon will be limited.

## The API

The consumer should use the npm package @anoma/masp-web that lives next to the other packages in the anoma-apps monorepo. It exposes the following:

### getMaspWeb

• A util to return an instance of MaspWeb and ensure it is initiated. If it was retrieved and initiated earlier, the existing instance is returned.

```typescript
async (): Promise<MaspWeb>
```

### MaspWeb

• This contains the methods to perform the shielded transaction related activities.
• There is a utility method getMaspWeb() exported that returns an instance of MaspWeb and ensures it is instantiated.

The class exposes the following methods:

#### generateShieldedTransaction

```typescript
generateShieldedTransaction = async (
  nodesWithNextId: NodeWithNextId[],
  amount: bigint,
  transactionConfiguration: TransactionConfiguration
): Promise<Uint8Array>
```

#### getShieldedBalance

```typescript
getShieldedBalance = async (
  nodesWithNextId: NodeWithNextId[],
  transactionConfiguration: TransactionConfiguration
): Promise<string>
```

#### createShieldedMasterAccount

• The return type still needs to be added, to reflect the type derived from the Rust packages/masp-web/lib/src/shielded_account/mod.rs:ShieldedAccount

```typescript
createShieldedMasterAccount = (
  alias: string,
  seedPhrase: string,
): any // the return type is from Rust code
// packages/masp-web/lib/src/shielded_account/mod.rs:ShieldedAccount
//
// pub struct ShieldedAccount {
//     viewing_key: String,
//     spending_key: String,
// }
```

#### decodeTransactionWithNextTxId

• Utility that decodes the fetched shielded transactions from the ledger and returns a format that contains the shielded transaction and the id for fetching the next one.

```typescript
decodeTransactionWithNextTxId = (byteArray: Uint8Array): NodeWithNextId

type NodeWithNextId = {
  node: Uint8Array;
  nextTransactionId: string;
};
```
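To make the shape of this TypeScript API concrete, a hypothetical consumer might look like the sketch below. The fetchShieldedTransfers helper and the TransactionConfiguration contents are assumptions here; only getMaspWeb and getShieldedBalance come from the API above.

```typescript
// Hypothetical usage of @anoma/masp-web; helper and config are assumed
import { getMaspWeb } from "@anoma/masp-web";

// Mirrors the NodeWithNextId type above
type NodeWithNextId = { node: Uint8Array; nextTransactionId: string };
type TransactionConfiguration = unknown; // fields not specified in this sketch

// Assumed helper that pages through the ledger, following each
// nextTransactionId until all shielded transfers have been fetched
declare function fetchShieldedTransfers(): Promise<NodeWithNextId[]>;

export const queryShieldedBalance = async (
  configuration: TransactionConfiguration
): Promise<string> => {
  const maspWeb = await getMaspWeb(); // initialized singleton
  const nodes = await fetchShieldedTransfers();
  return maspWeb.getShieldedBalance(nodes, configuration);
};
```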
The above is wrapping the below described Rust API, which is not intended to be used independently at the moment.

### Underlying Rust code

Currently masp-web exposes the following API:

#### create_master_shielded_account

• creates a shielded master account

```rust
pub fn create_master_shielded_account(
    alias: String,
    seed_phrase: String,
) -> JsValue
```

#### get_shielded_balance

• returns a shielded balance for a spending_key_as_string token_address pair
• requires the past transfers as an input

```rust
pub fn get_shielded_balance(
    shielded_transactions: JsValue,
    spending_key_as_string: String,
) -> Option<u64>
```

#### create_shielded_transfer

• returns a shielded transfer, based on the passed in data
• requires the past transfers as an input

```rust
pub fn create_shielded_transfer(
    shielded_transactions: JsValue,
    spending_key_as_string: Option<String>,
    amount: u64,
    spend_param_bytes: &[u8],
    output_param_bytes: &[u8],
) -> Option<Vec<u8>>
```

#### NodeWithNextId

• This is a utility type that is used when the TypeScript code is fetching the existing shielded transfers and extracting the id of the next shielded transfer to be fetched. The returned data from the ledger is turned into this type, so that the TypeScript code can read the id of the next transfer and fetch it.

```rust
pub struct NodeWithNextId {
    pub(crate) node: Option<Vec<u8>>,
    pub(crate) next_transaction_id: Option<String>,
}
```

#### NodeWithNextId::decode_transaction_with_next_tx_id

• accepts the raw byte array returned from the ledger when fetching shielded transfers, and returns NodeWithNextId as JsValue

```rust
pub fn decode_transaction_with_next_tx_id(transfer_as_byte_array: &[u8]) -> JsValue
```

# Web explorer interface

• Block explorer
• Display PoS state
• Display governance state
• Display transparent transfers
• Display transfers in and out of the MASP
• Display total values for the MASP
• Allows tx hashes of shielded transfers to be looked up for confirmation

# Typhon

## Summary

Typhon stores, orders, and executes transactions on Anoma blockchains. It is intended as a replacement for Tendermint. We have a brief overview presentation of some of the features of Typhon here.

Typhon can be broken down into (roughly) 3 layers:

• a mempool, which stores received transactions,
• a consensus, which orders transactions stored by the mempool, and
• an execution engine, which executes the transactions on the state machine.

We expect each Anoma participant (validator) will run processes in all three layers. Above, we use "client" to refer to matchmakers, ferveo, or anyone else who generates transactions to be ordered. The "critical path" is shown in thicker arrows, with other crucial messages shown in narrower arrows.

## Mempool

Validators receive transactions from clients, store them, and make them available for the execution engine to read. The mempool protocol, which is based on Narwhal, also produces a DAG of headers, which reference batches of transactions (via hash), and prove that those transactions are available for the execution engine. These headers are ultimately what the consensus decides on, in order to establish a total order of transactions. Read more here.

## Consensus

Our consensus is based on Heterogeneous Paxos.
Validators choose a totally ordered sequence of headers from the mempool DAG. This establishes a total order of transactions for the execution engine to execute. Read more here.

## Execution Engine

Given a total order of transactions, the execution engine updates and stores the "current" state of the virtual machine, using as much concurrency as possible. Proofs from the execution engine allow light clients to read the current state. When the execution engine has finished with a transaction, it communicates to the mempool that the transaction can be garbage-collected from storage. Read more here.

# Mempool

## Summary

Validators run the mempool protocol. They receive transactions from clients, store them, and make them available for the execution engine to read. The mempool protocol, which is based on Narwhal, also produces a DAG of headers, which reference batches of transactions (via hash), and prove that those transactions are available for the execution engine. These headers are ultimately what the consensus decides on, in order to establish a total order of transactions.

## Heterogeneous Narwhal

The core idea here is that we run an instance of Narwhal for each learner. For chimera chains, an "atomic batch" of transactions can be stored in any involved learner's Narwhal. We also make 2 key changes:

• The availability proofs must show that any transaction is sufficiently available for all learners. This should not be a problem, since in Heterogeneous Paxos, for any connected learner graph, any learner's quorum is a weak quorum for all learners.
• Whenever a validator's Narwhal primary produces a batch, it must link in that batch not only to a quorum of that learner's block headers from the prior round, but also to the most recent batch this validator has produced for any learner. This ensures that, within a finite number of rounds (3, I think), any transaction batch referenced by a weak quorum of batches in its own Narwhal will be (transitively) referenced by all batches in all Narwhals for entangled learners.

### Overview

Like Narwhal, Heterogeneous Narwhal validators have multiple concurrent processes (which can even run on separate machines). Each validator has one primary process and many worker processes. When a client submits a transaction, they first send it to a worker process.

#### Workers

Worker processes ensure transactions are available. Transactions are batched, and erasure-coded (possibly simply replicated) across the workers of a weak quorum for every learner, and only signed hashes of those batches are sent to primaries. This separates the high-bandwidth work of replicating transactions from the ordering work of the primaries.

#### Primaries

Primary processes establish a partial order of transaction batches (and by extension transactions), in the form of a structured DAG. The DAG proceeds in rounds for each learner: each primary produces at most one block for each (correct) learner in each round. That block references blocks from prior rounds. Primaries assemble headers (both their own and for other primaries) from collections of worker hashes, and references to prior blocks. They then sign votes, stating that they will not vote for conflicting headers, and (optionally) that their workers have indeed stored the referenced transactions. Primaries collect votes concerning their own headers, producing blocks: aggregated signatures showing a header is unique.
More formally, we present the Heterogeneous Narwhal protocol as the composition of two crucial pieces: the Heterogeneous Narwhal Availability protocol, and the Heterogeneous Narwhal Integrity protocol.

### Vocabulary

• Learners dictate trust decisions: just like in Heterogeneous Paxos, we use a Learner Graph. In diagrams, we usually represent learners with colors (red and blue).
• Quorum: a set of validators sufficient for a Learner to make blocks. Each Learner has a set of quorums.
• Intact Learner: any 2 quorums for an Intact Learner have a correct validator in their intersection. Most of our guarantees apply only to Intact Learners.
• Entangled Learners: a pair of learners A and B are entangled if, for any quorum Qa of A, and any quorum Qb of B, the intersection of Qa and Qb contains a correct validator. Some guarantees apply pairwise to Entangled Learners: they are, in a sense, guaranteed to agree on stuff.
• Weak Quorum: a set of validators that intersects every quorum. Weak Quorums are Learner-specific, so when we say weak quorum for every learner we mean a set of validators that intersects every quorum of every Learner.
• Transaction: data from clients to be ordered. We do not specify how it's formatted.
• Batch: a set of transactions collected by a Worker.
• Erasure Share: data transmitted to a weak quorum of listening workers, such that any Quorum of validators can re-construct the original data (Transaction or Batch of Transactions).
• Worker Hash: a signed digest of a batch of transactions collected by (and signed by) a worker.
• Header: a message produced by a primary, containing:
  • an associated Primary (who "created" this header)
  • a set of Worker Hashes (from workers on the same validator as this primary)
  • an Availability Certificate for the previous Header issued by this primary
  • at most one Signed Quorum for each Learner
• Availability Certificate: an aggregation of signatures from a Weak Quorum attesting that everything referenced by a particular Header is available. Must include a signature from the Header's primary.
• Block: an aggregation of Header signatures from a quorum of a specific learner attesting that they will not attest to any conflicting header. Also includes an Availability Certificate. Should include all signatures a primary has gathered for that header at the time (signatures in the Availability Certificate count).
• Signed Quorum: a quorum of blocks with the same learner and round, signed by a primary. These are referenced in headers.
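As a rough illustration (not part of the protocol specification), the Header structure from this vocabulary could be written down as follows; all type names are placeholders.

```typescript
// Illustrative encoding of the Header described above; names are placeholders
type PrimaryId = string;
type LearnerId = string;
type WorkerHash = string; // signed digest of a batch
type AvailabilityCertificate = string; // aggregated availability signatures
type SignedQuorumRef = string; // hash referencing a Signed Quorum

interface Header {
  primary: PrimaryId; // who "created" this header
  workerHashes: WorkerHash[]; // from workers on the same validator
  previousCertificate: AvailabilityCertificate; // for this primary's previous header
  signedQuorums: Map<LearnerId, SignedQuorumRef>; // at most one per learner
}
```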
## Heterogeneous Narwhal Availability Protocol

(note: the giant curly-brace represents a Weak Quorum of validators)

### Batches and Worker Hashes

When a worker has collected a batch of transactions, it transmits erasure shares (possibly full copies) of those transactions to other workers on a weak quorum for every learner of validators. What's important about this erasure coding is that any Quorum of any Learner can reconstruct every transaction. Furthermore, workers must be able to verify that they are in fact storing the correct Erasure Share of the data referenced in the Worker Hash. One way to accomplish this is to transmit a complete copy of all the data to an entire Weak Quorum for every Learner. In fact, rather than wait until a batch is complete to start transmitting, workers can stream erasure shares as they receive transactions.

When it has completed a batch, a worker also transmits a signed Worker Hash to those other workers, and its own primary. We do not specify when workers should complete batches, but perhaps it should be after some timeout, or perhaps primaries should signal workers to complete batches. Batches should not be empty.

Primaries ultimately produce blocks for each round, for each Learner, and send those blocks to other Primaries. When a primary for validator V has received blocks for learner L and round R from an entire quorum of validators for learner L, it signs that collection, producing a Signed Quorum object, which identifies the validator V, the learner L, and the round R. The Signed Quorum is then broadcast (or erasure-coded) to primaries on a weak quorum for every learner of validators. Much like batches, it is important that any Quorum for any Learner can re-construct the entire Signed Quorum.

Periodically, each primary P produces Headers. Each Header contains:

• a set of signed Worker Hashes, all signed by P's validator
• a hash referencing at most one Signed Quorum per Learner, all signed by P
• an Availability Certificate (we'll get to how those are made shortly) for the previous Header P issued.

Headers should be relatively small. Each primary then sends the header to all the other primaries.

When a Primary receives a Header, it can produce an Availability Vote (which is a digital signature) iff:

• the primary has stored its share of all Signed Quorums referenced, and
• the primary has received messages from its workers indicating that they have stored their shares of all the Batches referenced.

The Availability Votes are then transmitted to the Header's Primary. When a primary receives Availability Votes for a Header from a weak quorum for every learner, it can aggregate those signatures to produce an Availability Certificate, which proves that the Header (and its contents) are available to any Quorum. Availability Certificates should be small.

Note that, if primaries broadcast Availability Certificates as soon as they produce them, other primaries may have all the components necessary to "receive" a Header even before the Header's Primary actually sends it. Specifically, they may have:

• Signed Batch Headers from their listening Workers
• Signed Quorum shares received earlier from the Primary
• an Availability Certificate received earlier from the Primary

## Heterogeneous Narwhal Integrity Protocol

So far, only Signed Quorums have been Learner-specific: everything else requires a weak quorum for every learner. However, in the Integrity Protocol, almost everything is Learner-specific. Furthermore, Workers are not involved in the Integrity Protocol: only Primaries.

Each Header H features a predecessor H': the availability certificate in H references the header H'. When a Primary receives a Header H, it can produce an Integrity Vote iff it has not produced an Integrity Vote for any other Header with the same predecessor as H. In essence, this means that each correct Primary signs, for each other (even incorrect) Primary, a unique chain of Headers. This will ensure that no primary can produce conflicting blocks for entangled Learners. Integrity Votes are transmitted back to the Primary associated with the Header. In practice, Integrity and Availability Votes may be combined for Primaries who can cast both.

For each Header it produces, a Primary can calculate its Learner Vector: this represents, for each Learner, the highest round number of any quorum referenced in this Header or its ancestors (its predecessor, its predecessor's predecessor, etc.).
If, for some Learner L, a header H has a greater round number R in its Learner Vector for L than did H's predecessor, then the Primary can produce a Block for learner L and round R. Intuitively, a Primary produces a block whenever it gets a quorum for a Learner in a latest round. A block for learner L includes an Availability Certificate, as well as an aggregated signature formed from the Integrity Votes of (at least) a quorum (for learner L) for the same Header. Blocks are transmitted to all other Primaries, who use them to form Signed Quorums.

If a Primary uses the same Header to make blocks for multiple Learners, each block it produces must use a superset of the signatures of the previous. This ensures that if the Primary produces a block for Learner A and then a block for learner B, the Block for learner B effectively includes the block for learner A. We can use this when we later establish a total ordering: any reference to the learner B block also effectively references the learner A block.

Here is an example timeline of a Primary producing headers, availability certificates, and blocks. Blocks are color-coded by learner and include a round number. Headers display Learner Vectors.

## DAG Properties

Independently, the blocks for each Learner form a DAG with the same properties as in the original Narwhal: (In these diagrams, blocks reference prior blocks from the same Primary; I just didn't draw those arrows)

Note that blocks reference a quorum of blocks from the previous round. This does not require that the same primary produced a block for the previous round. In round 5, Primary 3 can produce a block if it has received a quorum of round 4 blocks from other Primaries.

Of course, primaries do not necessarily produce blocks for the same round at the same literal time. Here we see primaries producing blocks for round 3 for the red learner at different times, depending on when they finish batches, or receive a round 2 quorum, or enough votes:

In Heterogeneous Narwhal, these two DAGs are being created simultaneously (using the same sequence of Headers from each Primary, and many of the same Votes):

Note that round numbers for different learners do not have to be close to each other. Red round 3 blocks are produced after blue round 5 blocks, and that's ok. Furthermore, rounds of different learners are not totally ordered. Red round 3 cannot really be said to happen before, or after, blue round 4.

In Homogeneous Narwhal, any block which is referenced by a weak quorum in the following round will be (transitively) referenced by all blocks thereafter. Heterogeneous Narwhal has analogous guarantees:

#### Any block for learner A referenced by a weak quorum for learner A will, after 3 rounds, be (transitively) referenced by all future blocks of learners entangled with A.

Specifically, such a block B in round R will be (transitively) referenced by all A-blocks in round R+2. Consider the first round for learner B using at least a quorum of headers either used in A round R+2 or after their primaries' headers for A round R+2. Given that Learner B is entangled with A, any B-quorum for this round will be a descendant of an A-block from round R+2, and therefore, of B.

## Consensus

In order to establish a total order of transactions, we use Heterogeneous Paxos to decide on an ever-growing path through the DAG (for each Learner). Heterogeneous Paxos guarantees that, if two Learners are entangled, they will decide on the same path.
In order to guarantee liveness (and fairness) for each Learner's transactions, we require that: for any accurate learner L, if one of L's quorums remains live, and an entire quorum of L issues blocks for round R, consensus will eventually append one of L's round-R blocks, or one of its descendants, to L's path.

Crucially, if two learners are not entangled, and their blocks never reference each other, consensus should not forever choose blocks exclusively from one learner. This does require a minimal amount of fairness from consensus itself: as long as blocks for learner L keep getting proposed (indefinitely), consensus should eventually append one of them to the path.

### Choosing a total order

Given a consensus-defined path, we can impose a total order on all transactions which are ancestors of any block in the path. We require only that, given some block B in the path, all transactions which are ancestors of B are ordered before all transactions which are not ancestors of B. Among the transactions which are ancestors of B but not of its predecessor in the path, a total order can be imposed by some arbitrary deterministic function.
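For illustration only (this is not part of the specification), such an ordering could be computed as in the sketch below, using a lexicographic sort of transaction hashes as the arbitrary deterministic tiebreak.

```typescript
// Sketch: ordering transactions along a consensus-decided path of blocks
interface Block { hash: string }
type Tx = string; // transaction hash, for simplicity

const orderTransactions = (
  path: Block[],
  ancestorsOf: (b: Block) => Set<Tx> // all transactions reachable from a block
): Tx[] => {
  const ordered: Tx[] = [];
  const seen = new Set<Tx>();
  for (const block of path) {
    // Transactions that are ancestors of this block but not of its
    // predecessor in the path come next...
    const slice = [...ancestorsOf(block)].filter((tx) => !seen.has(tx));
    // ...ordered among themselves by an arbitrary deterministic function
    slice.sort();
    for (const tx of slice) {
      seen.add(tx);
      ordered.push(tx);
    }
  }
  return ordered;
};
```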
There is one exception: once a safe acceptor sends a 2a message for a learner L, it never sends a 2a message with a different value for a learner L', unless one of the following is true:

• It knows that a quorum of acceptors has seen 2a messages with learner L and a ballot number higher than that of its own 2a.
• It has seen Byzantine behavior that proves L and L' do not have to agree.

A learner decides on a block when it receives 2a messages with the same proposed block and ballot number from one of its quorums of acceptors.

## Preliminaries

• Base chains are two or more independent chains between which we would like to carry out atomic transactions. The chains must protocol-wise adhere to some specific set of requirements to be compatible, for example IBC support.
• A chimera chain is a chain that allows atomic transactions to be carried out on objects from the base chains. It carries an additional consensus mechanism that is dependent on the consensus of the base chains.
• A learner is a client that is interested in the value decided by the voting process. Learners might be full nodes, light-client nodes, or nodes from other chains.
• An acceptor is an agent participating in the voting process of the consensus protocol. Non-Byzantine acceptors are called real acceptors.
• A quorum is a subset of acceptors sufficient to make a learner decide on a value. For a learner to maintain consistency (avoid deciding on two contradictory values), any two of its quorums must have a common real acceptor. Most chains achieve this practically by making the intersection of the quorums big enough, i.e. the acceptors in the intersection being backed by more than 1/3 of the stake of each chain under the <1/3 Byzantine assumption. For example, suppose:
  • Base chain A has a total stake of S_A.
  • Base chain B has a total stake of S_B.
  • Any set of acceptors backed by more than 2/3 of S_A is a quorum of chain A. This means more than 1/3 of S_A would have to back unsafe acceptors for chain A to fork.
  • Any set of acceptors backed by more than 2/3 of S_B is a quorum of chain B. This means more than 1/3 of S_B would have to back unsafe acceptors for chain B to fork.
  • Suppose that, for every quorum Q_A of chain A and every quorum Q_B of chain B, the acceptors in the intersection of Q_A and Q_B are backed by more than 1/3 of S_A and more than 1/3 of S_B. This would mean that, in order for atomic batches on the chimera chain to lose atomicity, more than 1/3 of S_A and more than 1/3 of S_B would have to back unsafe acceptors.
  • When a batch loses atomicity, the transactions on one state machine (say, A) are executed, but not the transactions on the other state machine (say, B). However, each state machine itself remains consistent: neither A nor B forks.
  • This means some chimera chains offer atomicity with lower (yet well-defined) levels of integrity than their base chains' no-fork guarantees.
• A proposer is an acceptor that may propose a new block according to the rules of the blockchain. For Typhon, the "blocks" of the consensus protocol are the headers produced by the mempool. A potential proposer would need (a) data availability, (b) the ability to sign messages, (c) something at stake (to prevent spam), and (d) the ability to communicate with the acceptors. Acceptors that are in the overlaps of quorums may be especially well suited to be proposers, but other acceptors (or even other machines) might be proposers as well. The Heterogeneous Paxos technical report effectively uses weighted voting to select proposers, but perhaps there are interesting tricks with VRFs that would improve efficiency.

### Assumptions

1. Acceptors are staked: An acceptor has a certain amount of stake backing them.
This stake is either fully their own stake or is partly delegated to them by other token holders.
2. Quorums are known: Quorums are defined implicitly by the internal logic of each chain. For most proof-of-stake chains, a quorum is a subset of acceptors that have more than 2/3 of the stake backing them.
3. Large overlap of quorums: A practical way to ensure there are safe acceptors in the overlap between quorums is to make the overlap large. To guarantee atomicity with the same integrity as the base chains, each quorum overlap must be backed by more than 1/3 of the stake of each base chain.
4. Connectivity: All acceptors in a chimera chain communicate with each other (even if that means talking to acceptors who are not on the same base chain). This communication can be indirect (i.e. gossiping through other acceptors), so long as all pairs of honest acceptors can always communicate.
5. Only one learner per main chain: We model the learner graph as one learner per main chain, and then full nodes can instantiate their base chain's learner to make decisions. The reason is that in Heterogeneous Paxos the learner graph is a closed graph with known nodes, while in the blockchain context we have an open network where we don't know who all the learners are.

## Chimera Chain

In this section we describe how chimera chains operate. To include an atomic batch of transactions from the transaction pool in a block, a block proposer either creates a genesis block (if there is no existing chimera chain) or builds on top of an existing chimera chain.

### Genesis block

In order to be safe, the guarantee we want here is that future quorum updates on the main chains are guaranteed to be received before voting happens on chimera chains.

1. Create a genesis block that claims to be a "chimera chain" and allocate some unique name or index.
2. Register (in a transaction) that genesis block on all base chains and allocate it some unique name, so they know to involve it in any future quorum updates.
3. Appending the first block on the chimera chain requires IBC messages from all base chains confirming that they have registered this genesis block.

### Producing Blocks

A block consists of transactions from both chains or just one of the chains. Transactions can be bundled together to make sure they are carried out atomically. It is possible to finalize more than one block at a time, as in GRANDPA (from the Polkadot project), decoupling block production from finalization. In this case, Heterogeneous Paxos would serve the role of a "finality gadget" in GRANDPA's terms, but blocks could be produced at any rate.

#### Moving Objects

Suppose state in each state machine is expressed in terms of objects (think object-oriented programming) with state and methods, and a location (each is located on a chain). Our ideas about "movable objects" should be applicable to any program entities which feature:

• a unique identifier
• a permanent set of public methods (an API) callable by users, or by other methods on other objects in the same state machine, which can compute and return values
• a set of private methods callable only by other methods on this object
• mutable state (which only methods can mutate)

These are much like "smart contracts," which also carry internal mutable state and public APIs much like methods.
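As a rough sketch only (the trait and method names here are invented for illustration, not a committed API), such an object interface might look like:

```rust
type ObjectId = u64;

/// Illustrative sketch of the "movable object" requirements listed above.
trait MovableObject {
    /// The permanent unique identifier.
    fn id(&self) -> ObjectId;
    /// Public methods (the permanent API) callable by users or by other objects
    /// in the same state machine; they may compute and return values. Private
    /// methods would simply be ordinary functions not exposed through this API.
    fn call(&mut self, method: &str, args: &[u8]) -> Result<Vec<u8>, String>;
    /// The serialized mutable state, which only methods may mutate.
    fn state(&self) -> Vec<u8>;
}
```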
An example of an "object" is any sort of account, whether a multisignature account or a single-user account:

• accounts usually have a unique identifier
• public methods include "send money," which takes as input some signed authorization and deducts money from the account
• mutable state includes some value representing the amount of money in the account

One might want to move an object to another state machine. For simplicity, let's suppose it's a state machine with the same quorums (on a different, possibly chimera, chain). This is not so very different from various chains trying to get a lock on an account (or objects) and then performing atomic operations on them. This "move" operation should require an IBC send, followed by deleting the original object. Alternatively, instead of deleting the original object, it can be turned into a pointer to the object, which now lives primarily on the destination chain. On the destination state machine, an IBC receive should be followed by creating a new copy of the object. Any "object identifiers" found in pointers from other objects will have to be preserved, which could be tricky.

##### Permissions for moving Objects

We have to consider who is allowed to move which objects where. One way to do this is to have "move" simply be a "private method" of any object: the programmer has to specifically program in any methods that transactions or other objects can call to move the object. The most straightforward private method definition would allow anyone who controls the object (e.g., its owners) to give permission for moving it. We may want to allow chains to specify limits on what objects can move onto that chain. For example, they may require that objects have a method that can move them back to a base chain.

## Epoch Change

If new quorums for each base chain do not overlap on a real acceptor, atomicity cannot be guaranteed. We can no longer meaningfully commit atomic batches to the chimera chain. However, extensions to the chimera chain are consistent from the point of view of each base chain (but different base chains may "perceive" different extensions). This allows us to, for example, move objects to base chains even after the chimera chain loses atomic guarantees.

## Kill Chains

Likewise, it's probably useful to kill chimera chains no one is using anymore. If we don't kill them, then whenever quorums change all chimera chains need to get an update, and that can become infeasible. One possibility is to allow chains with no objects on them (everything has moved out) to execute a special "kill" transaction that prohibits any future transactions, and sends an IBC message to all parent chains. On receipt, parent chains could remove that chimera chain from their registry.

## Protocol Description

This section describes how a chimera block is appended to the existing chimera chain, assuming all the setup has taken place. For simplicity, this protocol is written as if we're deciding on one thing: the specific block that belongs on a specific blockchain at a specific height. All state and references are specific to this blockchain / height pair. We do not care about messages before this height. This height may be defined using the last finalized block in the blockchain.

### Learner Graph

For each learner, we refer to its set of quorums. For each pair of learners, there is a specific set of conditions under which they require agreement. We enumerate these as a set of "safe sets" of acceptors for each pair: whenever at least one safe set in the pair's edge is composed entirely of safe (non-Byzantine, but possibly crashed) acceptors, the two learners must not decide different things. We also designate the set of acceptors who are actually safe. This is of course determined at runtime, and unknown to anyone.
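A condensed sketch of this learner-graph data, with hypothetical Rust types (the spec does not fix a representation):

```rust
use std::collections::{BTreeSet, HashMap};

type AcceptorId = u64;
type LearnerId = u64;
/// A quorum: a set of acceptors sufficient to make a learner decide.
type Quorum = BTreeSet<AcceptorId>;
/// A safe set: if all its members are safe, the two learners on the edge
/// must agree.
type SafeSet = BTreeSet<AcceptorId>;

/// Illustrative learner graph: quorums per learner, and for each (unordered)
/// pair of learners the safe sets under which they require agreement.
struct LearnerGraph {
    quorums: HashMap<LearnerId, Vec<Quorum>>,
    /// Edge between learners (a, b), keyed with a <= b.
    edges: HashMap<(LearnerId, LearnerId), Vec<SafeSet>>,
}
```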
### Messages

The protocol is carried out by sending and receiving messages. The table below describes the structure of a typical Heterogeneous Paxos message.

| Field | Type | Description |
| --- | --- | --- |
| chainId | Id | Identifies this chimera chain. |
| height | uint64 | Current height of the chimera chain. |
| pktType | enum PktType {1A, 1B, 2A} | The kind of message. |
| ballot | Ballot = (Timestamp, Hash) | The hash of the proposed block and the timestamp of the initiated ballot. There is a total lexicographic order on Ballot. |
| learnerId | Id | Identifies the learner the message pertains to. |
| sig | Sig | The signature field sig unforgeably identifies its sender, e.g., the message is signed, and the sender has a known PK in some PKI. |
| refs | Vec<Hash> | The list of hashes of messages received previously. |

In general, acceptors relay all sent or received messages to all learners and other acceptors. This ensures that any message received by a real acceptor is received by all real acceptors and learners.

#### Definition: Message Set Signers

For any set of messages, we can define its signers: the set of acceptors that signed the messages in that set.

Messages also contain a field refs, which includes chained hashes of every message the sender has sent or received since (and including) the last message it sent. As a result, we can define the transitive references of a message, which should include every message the sender has ever sent or received.

#### Definition: Transitive References

To ensure that acceptors and learners fully understand each message they receive, they delay doing any computation on it (sometimes called delivery) until they have received all the messages in refs. As a result, acceptors and learners will always process messages from any given sender in the order they were sent, and will also have processed any messages that sender had received, recursively.

### Consensus Round: Ballot

Next, we briefly describe how the communication for a consensus round works. Consensus is reached in four steps: proposing the chimera block, acknowledging receipt of proposals, establishing consensus, and termination. Suppose we have two learners from two different blockchains.

#### 1a message: proposing blocks

A proposer proposes a new block by sending a 1a message to all acceptors, which includes

• a value (the atomic transaction, for example)
• a unique ballot number (round identifier)
• the hash of the proposed block along with a timestamp of the ballot initiation (together forming the Ballot)

If there is an existing chimera chain, the proposer can build upon that chain; if the proposer is starting a new chimera chain, it needs to lock some funds for that. Also, the acceptor needs to check validity as soon as possible: don't even "receive" an invalid proposal (or at least don't send a "1b" message in response).

#### 1b message: acknowledging receipt of proposals

On receipt of a 1a message, an acceptor sends an acknowledgement of its receipt to all other acceptors and learners in the form of a 1b message.

#### 2a message: establishing consensus

When an acceptor receives 1b messages for the highest ballot number it has seen from a learner L's quorum of acceptors, it sends a 2a message labeled with L and that ballot number.
There is one exception: once a safe acceptor sends a 2a message for a learner L, it never sends a 2a message with a different value for a learner L', unless one of the following is true:

• It knows that a quorum of acceptors has seen 2a messages with learner L and a ballot number higher than that of its own 2a.
• It has seen Byzantine behavior that proves L and L' do not have to agree.

##### Specifics of establishing Consensus

In order to make a learner decide, we need to show that another, entangled learner could not already have decided.

##### Definition: Entangled

In an execution, two learners are entangled if their failure assumptions matched the failures that actually happen. If some learner does not agree with some other learner, then no third learner can possibly agree with both of them.

##### Definition: Heterogeneous Agreement

• Within an execution, two learners have agreement if all decisions for either learner have the same value.
• A heterogeneous consensus protocol has agreement if, for all possible executions of that protocol, all entangled pairs of learners have agreement.

##### Definition: Accurate Learner

An accurate learner is entangled with itself. A learner whose quorums contain too many failures is inaccurate; this is the equivalent of a chain that can fork.

In order to prevent entangled disagreement, we must define the conditions that will ultimately make learners decide:

##### Definition: Get1a

It is useful to refer to the 1a that started the ballot of a message: the 1a with the highest ballot number among its transitive references.

##### Definition: Ballot Numbers

The ballot number of a 1a is part of the message, and the ballot number of anything else is the highest ballot number among the 1as it (transitively) references.

##### Definition: Value of a Message

The value of a 1a is part of the message, and the value of anything else is the value of the highest-ballot 1a among the messages it (transitively) references.

#### Terminate: Finalizing blocks

A learner decides when it receives 2a messages with the same ballot number from one of its quorums of acceptors. If no decision can be reached within a certain time, proposers must begin a new round (with a higher timestamp, and thus a higher Ballot). Proposers can start a new round by proposing a new block or by trying to finalize the same block again (in case there was no consensus).

##### Definition: Decision

A learner decides when it has observed a set of 2a messages with the same ballot, sent by a quorum of acceptors. This allows the learner to decide on the value of the messages in the set. We call such a set a decision.

Now we are ready to discuss what makes a well-formed message. This requires considering whether two learners might be entangled, and (unless we can prove they are not entangled) whether one of them might have already decided.

##### Definition: Caught

Some behavior can create proof that an acceptor is Byzantine. Unlike Byzantine Paxos, our acceptors and learners must adapt to Byzantine behavior. We say that an acceptor is caught in a message if the transitive references of the message include evidence such as two messages, x and y, both signed by the same acceptor, in which neither is featured in the other's transitive references (safe acceptors transitively reference all their prior messages). Slashing: caught proofs can be used for slashing.

##### Definition: Connected

When some acceptors are proved Byzantine, clearly some learners need not agree: two learners are no longer connected when, in the edge between them in the CLG, at least one acceptor in each safe set is proven Byzantine. Homogeneous learners are always connected unless there are so many failures that no consensus is required.
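A minimal sketch of the Caught evidence check from the definition above, using hypothetical types (the `store` lookup is an assumed helper for fetching messages by hash):

```rust
use std::collections::HashSet;

type Hash = [u8; 32];
type AcceptorId = u32;

struct Message {
    signer: AcceptorId,
    hash: Hash,
    refs: Vec<Hash>,
}

/// The set of all message hashes `m` transitively references, walking `refs`
/// through a message store.
fn transitive_refs(m: &Message, store: &dyn Fn(&Hash) -> Option<Message>) -> HashSet<Hash> {
    let mut seen = HashSet::new();
    let mut stack = m.refs.clone();
    while let Some(h) = stack.pop() {
        if seen.insert(h) {
            if let Some(prev) = store(&h) {
                stack.extend(prev.refs.iter().copied());
            }
        }
    }
    seen
}

/// Two messages signed by the same acceptor, neither transitively referencing
/// the other, prove that acceptor Byzantine ("caught").
fn caught(x: &Message, y: &Message, store: &dyn Fn(&Hash) -> Option<Message>) -> bool {
    x.signer == y.signer
        && !transitive_refs(x, store).contains(&y.hash)
        && !transitive_refs(y, store).contains(&x.hash)
}
```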
##### Definition: Buried

A 2a message can become irrelevant if, after a time, an entire quorum of acceptors has seen 2as with different values, the same learner, and higher ballot numbers. We call such a 2a buried (in the context of some later message).

##### Definition: Connected 2a Messages

Entangled learners must agree, but learners that are not connected are not entangled, so they need not agree. Intuitively, a 1b message references a 2a message to demonstrate that some learner may have decided some value. For a learner L, it can be useful to find the set of 2a messages from the same sender as a 1b message (and sent earlier) which are still unburied and are for learners connected to L. The 1b cannot be used to make any new 2a messages for learner L with values different from those 2as.

##### Definition: Fresh

Acceptors send a 1b message whenever they receive a 1a message with a ballot number higher than any they have yet seen. However, this does not mean that the 1b's value (which is the same as the 1a's) agrees with that of 2a messages the acceptor has already sent. We call a 1b message fresh (with respect to a learner) when its value agrees with that of the unburied 2a messages the acceptor has sent.

##### Definition: Quorums in Messages

2a messages reference quorums of messages with the same value and ballot. A 2a's quorums are formed from fresh 1b messages with the same ballot and value.

##### Definition: Well-Formed

We define what it means for a message to be well-formed. An acceptor who has received a 1a sends a 2a for every learner for which it can produce a well-formed 2a. Before processing a received message, acceptors and learners check whether the message is well-formed; messages that are not well-formed are rejected.

### Incentive Model

Goal:

• Incentivizing token holders to put down their stake for security
• Disincentivizing Byzantine behavior

Rewards:

• Participating in consensus based on backing stake: this includes validating and voting
• Producing blocks

Slashing: there are a number of offenses:

• Invalid blocks
• Equivocation (caught)

### Fees

The first question is: once there is no demand for atomic batches of transactions, do we keep the chimera chain alive? We need to figure out whether killing the chimera chain (and potentially requiring a new genesis later) is not too expensive. The second question is whether we apply quorum changes whenever they happen, or only when we have a transaction. The first option is expensive and can lead to attacks if not handled well; we need to figure out whether the latter option is safe.

• If the answer to the first question is yes, and we pick the first option for the second question: since all chimera chains need to be updated when the quorums on the base chains change, we need to figure out who pays for these updates. For example, a chimera chain might have had a block produced last week, but the quorums may have been updated 200 times since then; that is 200 blocks that do not have any transactions with transaction fees. If this is not paid for by anyone, it becomes a burden for acceptors and an attack vector for the adversary.

We may need to add a "locked" fee for making new chimera chains. In particular, we don't want an attacker to make a lot of chains, forcing base chains to update all of them with each quorum change. Alternatively, each chimera chain could keep an account on each base chain that is funded by a portion of transaction fees, from which a small fee is extracted with each validator update. When the account runs dry, the parent (base) chains are allowed to kill the chimera chain (even if there are still objects on it). We could allow anyone to contribute to these accounts. We could even prohibit anyone who did not contribute from sending IBC messages between chimera and parent chains.
## Security Discussion

Note that the chimera chain cannot guarantee atomicity under the same adversary assumption as the base chains. For example, if we assume the adversary controls less than 1/3 of the stake to assure safety on the base chains, atomicity is not guaranteed against such an adversary, but only against a weaker one. This is important for users, so they can decide whether chimera chains would be secure enough for their transactions.

By setting the quorums of each learner to be the same as the quorums of the corresponding base chain, we ensure that each learner's view is as consistent as the base chain. Specifically, two instantiations of the learner for some base chain should decide on the same blocks in any chimera chain, unless the adversary is powerful enough to fork that chain. Inaccurate learners (the opposite of the "accurate", or self-entangled, learners defined above) correspond to base chains where the adversary is in fact powerful enough to fork the chain. Proving that a learner is accurate is equivalent to proving that its base chain cannot fork.

Two learners corresponding to different base chains will decide on the same blocks (which is what makes atomic batches useful) so long as one of the safe sets in the edge between them is composed entirely of safe acceptors. The stake backing these safe sets represents the "assurance" that atomic batches remain atomic, as well as the maximum slashing punishment should they lose atomicity.

Loss of atomicity is a bit like a "trusted bridge" that turns out not to be trustworthy: each state machine within the chimera chain has as much integrity as its base chain, but the atomicity properties of multi-state-machine atomic batches have a lesser, though still well-defined, guarantee. Loss of atomicity allows double spending on the chimera chain. And while misbehavior has happened in such an attack, it is not trivial to slash the misbehaving acceptors, since according to each individual chain everything has been carried out correctly.

## Open Challenges

### Programming Model

#### Atomic Batches

We'll need a way to specify that a batch of transactions should be committed together or not at all. Ideally, this should communicate to the programmer how reliable this atomicity is (see "practical considerations" below). On a chimera chain, batches can include transactions from any of their "main chains". If we want to have match-making, transactions will need to be able to say something like "if I'm in an atomic batch that matches these criteria, then do this...". Each atomic batch should be scheduled within one block. (We encode transactions with Protobuf and Borsh.) We need to define structures such that transactions can be bundled and cannot be carried out separately.
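One hypothetical shape for such a batch (illustrative only; the actual encoding is an open question, per the note above):

```rust
type ChainId = u32;

/// A transaction blob destined for a particular base chain's state machine.
struct Transaction {
    chain: ChainId,
    payload: Vec<u8>,
}

/// Illustrative atomic batch: all listed transactions commit together or not at
/// all. The optional `criteria` field sketches the "if I'm in an atomic batch
/// that matches these criteria..." hook mentioned above.
struct AtomicBatch {
    transactions: Vec<Transaction>,
    criteria: Option<Vec<u8>>,
}
```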
#### Atomic Communication

Transactions within an atomic batch need to be able to send information to each other (and back). For example, in a market with a fluctuating exchange rate, a market transaction could send a message to an account, which transfers money and sends a message to another account, which transfers goods. We need a language in which this communication takes place with minimal constraints on the state machines involved, so we should probably adapt the IBC communication language. We need to figure out how inter-chain communication works for transactions communicating with each other within an atomic batch.

#### Can we have synchronous (in terms of blocks) IBC?

Yes: when the quorums involved in two chains are the same, we can guarantee that (unless there are sufficient failures to fork the chain) an IBC message sent in one chain will arrive at the other chain (in a transaction) within a specific number (perhaps as few as 1?) of blocks. This is because some of the machines in the quorum that approved the "send" transaction must be in the quorum that decides the next block, so they can refuse to decide on blocks that lack the "receive" transaction. Note that we may need to rate-limit sends to ensure that all sends can be received in time (availability). Note also that blinding transactions makes this hard.

### Match Making

We can in principle do cross-chain match-making. If we want an on-chain market, a chimera chain might be a good place to do it. However, full nodes in the gossip layer might be able to gather sets of transactions that match the transactions' "if I'm in an atomic batch ..." criteria, bundle them into atomic batches, and then send those off to be committed. We may want to incorporate some incentive scheme for good match-making. Matchmakers include nodes who are full nodes on both chains, and in principle could include any node who sees the request transactions.

### Changing Quorums?

#### 2 Phase Commit

We could require that any quorum-changing transaction has to be 2-phase committed. Essentially, the "main chain" announces that it won't progress beyond a certain block until everyone has appended a new block that sets the (same) new quorums, and sends a response by IBC. It can then progress only with IBC responses from all the other chains that use these quorums.

#### Synchronous IBC

We may be able to leverage our "synchronous" IBC idea above for faster quorum changes. The difficulty is that it allows a finite number of blocks to be appended to the chimera chains before they receive the quorum-change message. These chains can be arbitrarily slow, so that could take arbitrarily long.

We also need to figure out inter-machine communication for acceptors, since they might run many machines.

## Discussion Questions / Practical Considerations

• Optimizing messaging: pipelining (from HotStuff), Kauri, BFTree
• Replicating state machines
• Problems Tendermint has:
  • Doesn't allow many validators
  • Light client design
• Optimizing recovery from slow finalization: separating block production from finalization, finalizing more than one block
• ABCI++? Another version of it
• Look into other papers of Dahlia Malkhi / Fast HotStuff?

# Execution Engine

## Summary

Given a total order (from the consensus) of transactions (from the mempool), the execution engine updates and stores the "current" state of the virtual machine, using as much concurrency as possible. Proofs from the execution engine allow light clients to read the current state. When the execution engine has finished with a transaction, it communicates to the mempool that the transaction can be garbage-collected from storage.

## Vocabulary

• Shards are processes that store and update state.
• Different shards may be on different machines. Redistributing state between shards is called Re-Sharding.
• Each Shard is specific to 1 learner.
However, as an optimization, an implementation could conceivably use one process to do the work of two shards with different learners so long as those shards are identical, and fork that process if / when the learners diverge.
• Executors are processes that actually run the VM and compute updates. Executors should probably be co-located with shards.

Either:

• we assume the Mempool is using the Heterogeneous Narwhal setup, in which case Consensus picks leaders in the DAG, or
• the Mempool is treated as some kind of black-box set of processes that can each transmit transactions to Shards, in which case Consensus produces more detailed ordering information.

Perhaps we should have some general notion of Timestamp on transactions? The VM is largely a black box: we assume we can give it a set of input key-value pairs and a transaction, and get output key-value pairs.

## State

State is stored as mutable Values (unlimited-size blobs of binary), each of which is identified with an immutable Key. If you want to mutate a Key associated with a specific Value, that's equivalent to deleting the Value associated with the old Key and writing it to the new Key. Keys that have never had a Value written to them are mapped to an empty value.

For each Learner, all processes can map Transactions to a set of Shards whose state they read, and a set of Shards whose state they write. This makes Re-Sharding challenging. One way to implement this is to partition the space of Keys across Shards, and label each Transaction with a sub-space of keys it touches. One possible key-space would be to arrange Keys in some kind of tree configuration.

## Mempool Interface

We assume, for each Learner, that each transaction has a unique Executor. It would be more efficient if the Executor is co-located with one of the Shards whose state the transaction reads or writes. As an optimization, we can have one process do the work of multiple learners' executors, so long as those learners are identical.

We assume that each transaction carries a timestamp, and that these timestamps have an unknown total order; Consensus and the Mempool can update Shards' knowledge of this total order. In particular, we assume that Consensus informs Shards of an ever-growing prefix of this total order (see the Consensus Interface below).

The Mempool transmits each transaction to its Executor as soon as possible, using network primitives. For each transaction that reads or writes state on a shard, the Mempool also transmits a summary of the transaction to that shard.

We assume that each Shard maintains timestamp bounds below which it will no longer receive new transactions: specifically, a bound below which it will no longer receive new transactions that read from its state, and a bound below which it will no longer receive new transactions that write to its state. Generally, we expect one of these bounds to trail the other, but I don't know that we require this to be true. Each Shard should update these bounds based on information from the Mempool. For example, it could maintain partial bounds from each mempool worker (updated whenever that mempool worker sends the Shard a message), and implement each bound as the greatest lower bound of all the partial bounds.
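A minimal sketch of that bookkeeping, with hypothetical types (`lower_bound` plays the role of the greatest lower bound just described):

```rust
use std::collections::HashMap;

type WorkerId = u32;
type Timestamp = u64;

/// Per-worker partial bounds, combined into the shard's overall bound.
#[derive(Default)]
struct TimestampBounds {
    per_worker: HashMap<WorkerId, Timestamp>,
}

impl TimestampBounds {
    /// Update a worker's partial bound; bounds only move forward.
    fn update(&mut self, worker: WorkerId, bound: Timestamp) {
        let entry = self.per_worker.entry(worker).or_insert(bound);
        if bound > *entry {
            *entry = bound;
        }
    }

    /// The shard will no longer receive transactions below this timestamp.
    fn lower_bound(&self) -> Option<Timestamp> {
        self.per_worker.values().min().copied()
    }
}
```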
## Consensus Interface

Consensus needs to update each Shard's knowledge of the total order of timestamps. In particular, we assume that Consensus informs Shards of an ever-growing prefix of this total order.

• One way to accomplish this is simply to have each timestamp be the hash of the transaction, and have Consensus stream a totally ordered list of all hashes included in the chain to all Shards. This may not be very efficient.
• We could instead consider one timestamp to be definitely after another if it is a descendant in the Narwhal DAG. Narwhal workers could transmit DAG information to Shards, and Shards would learn some partial ordering information before hearing from Consensus. Consensus could then transmit only the Narwhal blocks it decides on, and Shards could determine a total ordering from there.

## Execution

For each learner and each transaction, the Executor waits to receive values for all keys in the transaction's read set, then computes the transaction, and transmits to each shard the values written to any keys of that shard in the transaction's write set. Generally, transactions do not have side effects outside of state writes. However, we could in principle encode client reads as read-only transactions whose side effect is sending a message, or allow for VMs with other side effects.

Executors can save themselves some communication if they're co-located with Shards. As an optimization, we can save on communication by combining messages for multiple learners if their content is identical and their shards are co-located. Likewise, we can save on computation by using one process to execute for multiple learners so long as those learners are identical.

For each key in its state, each shard needs to establish a total order of all writes between its timestamp bounds. Reads of each key need to be ordered with respect to writes to that key. To accomplish this, each Shard maintains a dependency multi-graph of all transaction Summaries it has received, where Summary u depends on Summary t if the Shard doesn't yet know their order, and u can read from a key to which t can write. Specifically, if the Shard doesn't know whether t is ordered before u, then for each key that t can write and u can read or write, create an edge labeled with that key. There can be cycles in the dependency multi-graph, but these will resolve as the Shard learns more about the total order from Consensus.

Concurrently, for any Summary t that no longer depends on any other Summary:

• transmit to t's Executor the values written most recently before t's timestamp for any keys of this shard in t's read set
• upon receiving from t's Executor the values for any keys of this shard in t's write set:
  • record that each value is written to its key at t's timestamp.
  • delete the edges those writes resolve from the dependency graph.
  • As an optimization, we may want a compact "don't change this value" message.
  • When every value in t's write set has been updated, delete t from the dependency graph.

Note that read-only transactions can arrive with timestamps before the Shard's write bound. These need to be added to the dependency graph and processed just like all other transactions.
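A sketch of the shard-side dependency bookkeeping described above (all types are illustrative; note that a read-only transaction is simply a summary with an empty write set):

```rust
use std::collections::{HashMap, HashSet};

type SummaryId = u64;
type Key = String;

/// Sketch of a shard's dependency multi-graph: `deps[u]` holds (t, key) pairs
/// meaning "u reads or writes `key`, which t can write, and we don't yet know
/// that t is ordered before u".
#[derive(Default)]
struct DependencyGraph {
    deps: HashMap<SummaryId, HashSet<(SummaryId, Key)>>,
}

impl DependencyGraph {
    /// Called when the order between t and u is unknown and t writes `key`,
    /// which u reads or writes.
    fn add_edge(&mut self, u: SummaryId, t: SummaryId, key: Key) {
        self.deps.entry(u).or_default().insert((t, key));
    }

    /// Called once t's write to `key` has been recorded at its timestamp:
    /// dependents of (t, key) may become ready.
    fn resolve_write(&mut self, t: SummaryId, key: &Key) {
        for edges in self.deps.values_mut() {
            edges.remove(&(t, key.clone()));
        }
        self.deps.retain(|_, edges| !edges.is_empty());
    }

    /// A summary with no remaining dependencies can be supplied with its input
    /// values and executed.
    fn ready(&self, u: SummaryId) -> bool {
        !self.deps.contains_key(&u)
    }
}
```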
## Garbage Collection

Each Shard can delete all but the most recent value written to each key before its timestamp bounds. Once all of a transaction's Executors (for all learners) have executed the transaction, we can garbage-collect it: we no longer need to store that transaction anywhere.

Read-only transactions can, in principle, bypass Mempool and Consensus altogether: they only need to arrive at each of the relevant shards and carry a timestamp greater than the state already garbage-collected. They could also be executed with a side effect, like sending a message to a client.

We can use these read-only transactions to construct checkpoints: Merkle roots of portions of state, building up to a Merkle root of the entire state. Light-client reads only need some kind of signed message produced by an Executor from each of a weak quorum of validators. They do not, technically, need a Merkle root of the entire state at all. However, it may be more efficient to get a single signed message with a Merkle root of state, and then only one validator needs to do the read-only transaction. To support this kind of thing, we may want garbage collection to lag well behind current execution, so we can do reads on recent checkpoints.

# Consensus

Namada uses Tendermint Go through the tendermint-rs bindings. Namada uses the P2P layer built into Tendermint Go.

# Proof of Stake (PoS)

This section of the specification describes the proof-of-stake mechanism of Namada, which is largely modeled after Cosmos bonded proof-of-stake, but makes significant changes to bond storage representation, validator set change handling, reward distribution, and slashing, with the general aims of increased precision in reasoning about security, validator decentralisation, and avoiding unnecessary proof-of-stake-related transactions. This section is split into three subcomponents: the bonding mechanism, reward distribution, and cubic slashing.

## Introduction

Blockchain systems rely on economic security to prevent abuse and to make actors behave according to the protocol. The aim is that economic incentives promote correct and long-term operation of the system, while economic punishments discourage diverting from correct protocol execution, whether by mistake or with the intent of carrying out attacks. Many PoS blockchains rely on the 1/3 Byzantine rule, where they assume the adversary cannot control more than 1/3 of the total stake or 1/3 of the actors.

## Goals of Rewards and Slashing: Liveness and Security

• Security: Delegation and Slashing. We want to make sure validators are backed by enough funds to make misbehaviour very expensive; security is achieved by punishing (slashing) them if they misbehave. Slashing locked funds (stake) is intended to disincentivize diverting from correct execution of the protocol, which in this case is voting to finalize valid blocks.
• Liveness: Paying Rewards. For continued operation of Namada we want to incentivize participating in consensus and delegation, which helps security.

### Security

In a blockchain system we do not rely on altruistic behavior but rather on economic security. We expect validators to execute the protocol correctly: they get rewarded for doing so and punished otherwise. Each validator has some self-stake and some stake that is delegated to it by other token holders. The validator and its delegators share the rewards and the risk of slashing with each other.

The total stake behind consensus should be taken into account when value is transferred via a transaction: the total value transferred cannot exceed 2/3 of the total stake. For example, if we have 1 billion tokens and aim for 300 million of these tokens to back validators, then users should not transfer more than 200 million tokens within a block.

# Bonding mechanism

## Epoch

An epoch is a range of blocks or time that is defined by the base ledger and made available to the PoS system. This document assumes that epochs are identified by consecutive natural numbers. All the data relevant to PoS are associated with epochs.
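For illustration only, if epochs were defined as fixed-length ranges of blocks (the base ledger actually defines them, and may use time instead), the mapping could be as simple as:

```rust
/// Illustrative only: derive the epoch of a block from its height, assuming
/// fixed-length epochs. The base ledger defines the real rule.
fn epoch_of_height(height: u64, blocks_per_epoch: u64) -> u64 {
    height / blocks_per_epoch
}
```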
### Epoched data

Epoched data are data associated with a specific epoch that are set in advance. The data relevant to the PoS system in the ledger's state are epoched, and each piece of epoched data can be uniquely identified.

Changes to the epoched data do not take effect immediately. Instead, changes in epoch n are queued to take effect in epoch n + pipeline_length in most cases, and in epoch n + unbonding_length for unbonding actions. Should the same validator's data or the same bonds (i.e. with the same identity) be updated more than once in the same epoch, the later update overrides the previously queued-up update; for bonds, the token amounts are added up. Once epoch n has ended, the queued-up updates for epoch n + pipeline_length are final and the values become immutable.

## Entities

• Validator: An account with a public consensus key, which may participate in producing blocks and governance activities. A validator may not also be a delegator.
• Delegator: An account that delegates some tokens to a validator. A delegator may not also be a validator.

Additionally, any account may submit evidence for a slashable misbehaviour.

### Validator

A validator must have a public consensus key. Additionally, it may also specify optional metadata fields (TBA).

A validator may be in one of the following states:

• inactive: the validator is not being considered for block creation and cannot receive any new delegations.
• candidate: the validator is considered for block creation and can receive delegations.

For each validator (in any state), the system also tracks total bonded tokens as the sum of the tokens in their self-bonds and delegated bonds. The total bonded tokens determine the validator's voting power, by multiplication with the votes_per_token parameter. The voting power is used for validator selection for block creation and in governance-related activities.

#### Validator actions

• become validator: Any account that is not already a validator and that doesn't have any delegations may request to become a validator. It is required to provide a public consensus key and a staking reward address. For the action applied in epoch n, the validator's state will be set to candidate for epoch n + pipeline_length and the consensus key is set for epoch n + pipeline_length.
• deactivate: Only a validator whose state is candidate at or before the pipeline_length offset may deactivate. For this action applied in epoch n, the validator's state is set to become inactive in epoch n + pipeline_length.
• reactivate: Only an inactive validator may reactivate. Similarly to the become validator action, for this action applied in epoch n, the validator's state will be set to candidate for epoch n + pipeline_length.
• self-bond: A validator may lock up tokens into a bond only for its own validator address.
• unbond: Any self-bonded tokens may be partially or fully unbonded.
• withdraw unbonds: Unbonded tokens may be withdrawn in or after the unbond's epoch.
• change consensus key: Set the new consensus key. When applied in epoch n, the key is set for epoch n + pipeline_length.

#### Active validator set

From all the candidate validators, in each epoch the ones with the most voting power, up to the max_validator_slots parameter, are selected for the active validator set. The active validator set selected in epoch n is set for epoch n + pipeline_length.

### Delegator

A delegator may have any number of delegations to any number of validators. Delegations are stored in bonds.
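To illustrate the voting-power rule described above, a sketch, assuming votes_per_token is expressed in ‱ (parts per ten thousand; the exact scaling is a parameter choice, not fixed here):

```rust
/// Illustrative: voting power as total bonded tokens scaled by votes_per_token,
/// taken here to be expressed in ‱ (parts per ten thousand).
fn voting_power(total_bonded: u64, votes_per_token: u64) -> u64 {
    total_bonded * votes_per_token / 10_000
}
```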
#### Delegator actions

• delegate: An account which is not a validator may delegate tokens to any number of validators. This will lock up tokens into a bond.
• undelegate: Any delegated tokens may be partially or fully unbonded.
• withdraw unbonds: Unbonded tokens may be withdrawn in or after the unbond's epoch.

## Bonds

A bond locks up tokens from a validator's self-bonding or a delegator's delegations. For self-bonding, the source address is equal to the validator's address; only validators can self-bond. For a bond created from a delegation, the bond's source is the delegator's account.

For each epoch, bonds are uniquely identified by the pair of source and validator addresses. A bond created in epoch n is written into epoch n + pipeline_length. If there already is a bond in epoch n + pipeline_length for this pair of source and validator addresses, its tokens are incremented by the newly bonded amount. Any bonds created in epoch n increment the bond's validator's total bonded tokens by the bond's token amount and update the voting power for epoch n + pipeline_length. The tokens put into a bond are immediately deducted from the source account.

### Unbond

An unbonding action (validator unbond or delegator undelegate) requested by the bond's source account in epoch n creates an "unbond" with its epoch set to n + unbonding_length. We also store the epoch of the bond(s) from which the unbond is created, in order to determine whether the unbond should be slashed if a fault occurred within the range of the bond epoch (inclusive) and the unbond epoch (exclusive). Any unbonds created in epoch n decrement the bond's validator's total bonded tokens by the unbond's token amount and update the voting power for epoch n + unbonding_length.

An "unbond" with its epoch set to n may be withdrawn by the bond's source address in or any time after epoch n. Once withdrawn, the unbond is deleted and the tokens are credited to the source account. Note that unlike bonding and unbonding, where token changes are delayed to some future epochs (pipeline or unbonding offset), token withdrawal applies immediately. This is because, when the tokens are withdrawable, they are already "unlocked" from the PoS system and do not contribute to voting power.

### Staking rewards

Until we have programmable validity predicates, rewards can use the mechanism outlined in the F1 paper, but it should use the exponential model, so that withdrawing rewards more frequently provides no additional benefit (this is a design constraint we should follow in general; we don't want to accidentally encourage transaction spam). This should be written in a way that allows for a natural upgrade to a validator-customisable rewards model (defaulting to this one) if possible.

The system rewards tokens to the validator who proposed a block based on the block_proposer_reward system parameter, and each validator that voted on the block receives block_vote_reward.

### Slashing

An important part of the security model of Namada is based on making attacks on the system very expensive. To this end, a validator with bonded stake will be slashed once an offence has been detected.
These are the types of offences:

• Equivocation in consensus:
  • voting: a validator has submitted two conflicting votes
  • block production: a block producer has created two different blocks for the same height
• Invalidity:
  • block production: a block producer has produced an invalid block
  • voting: validators have voted on an invalid block

Unavailability is not considered an offence, but a validator who hasn't voted will not receive rewards.

Once an offence has been reported:

1. Kicking out
2. Slashing
  • Individual: once someone has reported an offence, it is reviewed by validators and, if confirmed, the offender is slashed.
  • Cubic slashing: escalated slashing

Instead of absolute values, validators' total bonded token amounts and bonds' and unbonds' token amounts are stored as their deltas (i.e. the change of quantity from a previous epoch) to allow distinguishing changes for different epochs, which is essential for determining whether tokens should be slashed. However, because slashes for a fault that occurred in epoch n may only be applied before the beginning of epoch n + unbonding_length, in epoch m we can sum all the deltas of total bonded token amounts, and of bonds and unbonds with the same source and validator, for epochs equal to or less than m - unbonding_length into a single total bonded token amount, a single bond, and a single unbond record. This keeps the number of total bonded token amounts for a unique validator, and of bonds and unbonds for a unique pair of source and validator, bounded by a maximum number (equal to unbonding_length).

To disincentivize validators' misbehaviour in the PoS system, a validator may be slashed for any fault it has committed. Evidence of misbehaviour may be submitted by any account for a fault that occurred in epoch n any time before the beginning of epoch n + unbonding_length. Valid evidence reduces the validator's total bonded token amount by the slash rate in and before the epoch in which the fault occurred. The validator's voting power must also be adjusted to the slashed total bonded token amount. Additionally, a slash is stored with the misbehaving validator's address and the relevant epoch in which the fault occurred. When an unbond is being withdrawn, we first look up whether any slash occurred within the range of epochs in which it was active and, if so, reduce its token amount by the slash rate. Note that bond and unbond amounts are not slashed until their tokens are withdrawn.

The invariant is that the sum of amounts that may be withdrawn from a misbehaving validator must always add up to the total bonded token amount.

## System parameters

The default values that are relative to epoch duration assume that an epoch lasts about 24 hours.
• max_validator_slots: Maximum active validators, default 128
• pipeline_len: Pipeline length in number of epochs, default 2 (see https://github.com/cosmos/cosmos-sdk/blob/019444ae4328beaca32f2f8416ee5edbac2ef30b/docs/architecture/adr-039-epoched-staking.md#pipelining-the-epochs)
• unbonding_len: Unbonding duration in number of epochs, default 6
• votes_per_token: Used in validators' voting power calculation, default 100‱ (1 voting power unit per 1000 tokens)
• block_proposer_reward: Amount of tokens rewarded to a validator for proposing a block
• block_vote_reward: Amount of tokens rewarded to each validator that voted on a block proposal
• duplicate_vote_slash_rate: Portion of a validator's stake that should be slashed on a duplicate vote
• light_client_attack_slash_rate: Portion of a validator's stake that should be slashed on a light-client attack

## Storage

The system parameters are written into storage to allow for changes to them. Additionally, each validator may record, under their own sub-key, new parameter values that they wish to change to; these would override the system parameters when more than 2/3 of the voting power is in agreement on all the parameter values.

The validators' data are keyed by their addresses, conceptually:

```rust
type Validators = HashMap<Address, Validator>;
```

Epoched data are stored in the following structure:

```rust
struct Epoched<Data> {
    /// The epoch in which this data was last updated
    last_update: Epoch,
    /// Dynamically sized vector in which the head is the data for the epoch in
    /// which the last_update was performed and every consecutive element is for
    /// the successor epoch of the predecessor element. For system parameters and
    /// a validator's consensus key and state, LENGTH = pipeline_length + 1.
    /// For all others, LENGTH = unbonding_length + 1.
    data: Vec<Option<Data>>,
}
```

Note that not all epochs will have data set; only the ones in which some changes occurred.

To look up a value for Epoched data with independent values in each epoch (such as the active validator set) in the current epoch n:

1. let index = min(n - last_update, pipeline_length)
2. read the data field at index:
  1. if there's a value at index, return it
  2. else if index == 0, return None
  3. else decrement index and repeat this sub-step from 1.

To look up a value for Epoched data with delta values in the current epoch n:

1. let end = min(n - last_update, pipeline_length) + 1
2. sum all the values that are not None in the 0 .. end range, bounded inclusively below and exclusively above

To update a value in Epoched data with independent values in epoch n with value new for epoch m:

1. let shift = min(n - last_update, pipeline_length)
2. if shift == 0:
  1. set data[m - n] = new
3. else:
  1. for i in the 0 .. shift range, bounded inclusively below and exclusively above, set data[i] = None
  2. rotate data left by shift
  3. set data[m - n] = new
  4. set last_update to the current epoch

To update a value in Epoched data with delta values in epoch n with value delta for epoch m:

1. let shift = min(n - last_update, pipeline_length)
2. if shift == 0:
  1. set data[m - n] = data[m - n].map_or_else(delta, |last_delta| last_delta + delta) (add the delta to the previous value, if any, otherwise use the delta as the value)
3. else:
  1. let sum be the sum of all the delta values in the i in 0 .. shift range, bounded inclusively below and exclusively above, and set data[i] = None
  2. rotate data left by shift
  3. set data[0] = data[0].map_or_else(sum, |last_delta| last_delta + sum)
  4. set data[m - n] = delta
  5. set last_update to the current epoch
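A sketch of the independent-value look-up above, written against the Epoched struct (assuming type Epoch = u64 for brevity; bounds checks elided):

```rust
impl<Data: Clone> Epoched<Data> {
    /// Look up the value for the current epoch `n`, following the procedure
    /// above: start at min(n - last_update, offset) and walk backwards to the
    /// most recent epoch that has a value set. `offset` is pipeline_length or
    /// unbonding_length, depending on the data.
    fn lookup(&self, n: u64, offset: u64) -> Option<Data> {
        let mut index = std::cmp::min(n - self.last_update, offset) as usize;
        loop {
            if let Some(value) = &self.data[index] {
                return Some(value.clone());
            }
            if index == 0 {
                return None;
            }
            index -= 1;
        }
    }
}
```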
The invariants for updates in both cases are that m - n >= 0 and m - n <= pipeline_length.

For the active validator set, we store all the active and inactive validators separately, with their respective voting power:

```rust
type VotingPower = u64;

/// A validator's address with its voting power.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct WeightedValidator {
    /// The voting_power field must be on top, because lexicographic ordering is
    /// based on the top-to-bottom declaration order, and in the ValidatorSet the
    /// WeightedValidators need to be sorted by voting_power.
    voting_power: VotingPower,
    address: Address,
}

struct ValidatorSet {
    /// Active validator set with maximum size equal to max_validator_slots
    active: BTreeSet<WeightedValidator>,
    /// All the other validators that are not active
    inactive: BTreeSet<WeightedValidator>,
}

type ValidatorSets = Epoched<ValidatorSet>;

/// The sum of all active and inactive validators' voting power
type TotalVotingPower = Epoched<VotingPower>;
```

When any validator's voting power changes, we attempt to perform the following update on the ValidatorSet:

1. let validator be the validator's address, and power_before and power_after be the voting power before and after the change, respectively
2. let power_delta = power_after - power_before
3. let min_active = active.first() (the active validator with the lowest voting power)
4. let max_inactive = inactive.last() (the inactive validator with the greatest voting power)
5. find whether the validator is active: let is_active = power_before >= max_inactive.voting_power
  1. if is_active:
    1. if power_delta > 0 && power_after > max_inactive.voting_power, update the validator in the active set with voting_power = power_after
    2. else, remove the validator from active, insert it into inactive, and remove max_inactive.address from inactive and insert it into active
  2. else (!is_active):
    1. if power_delta < 0 && power_after < min_active.voting_power, update the validator in the inactive set with voting_power = power_after
    2. else, remove the validator from inactive, insert it into active, and remove min_active.address from active and insert it into inactive

Within each validator's address space, we store the public consensus key, state, total bonded token amount, and voting power calculated from the total bonded token amount (even though the voting power is stored in the ValidatorSet, we also need voting_power here because we cannot look it up in the ValidatorSet without iterating the whole set):

```rust
struct Validator {
    consensus_key: Epoched<PublicKey>,
    state: Epoched<ValidatorState>,
    total_deltas: Epoched<token::Amount>,
    voting_power: Epoched<VotingPower>,
}

enum ValidatorState {
    Inactive,
    Candidate,
}
```

The bonds and unbonds are keyed by their identifier:

```rust
type Bonds = HashMap<BondId, Epoched<Bond>>;
type Unbonds = HashMap<BondId, Epoched<Unbond>>;

struct BondId {
    /// The delegator's address for delegations, or the validator's own address
    /// for self-bonds.
    source: Address,
    validator: Address,
}

struct Bond {
    /// A key is the epoch set for the bond. This is used in unbonding, where
    /// it's needed for the slash epoch range check.
    deltas: HashMap<Epoch, token::Amount>,
}

struct Unbond {
    /// A key is a pair of the epoch of the bond from which the unbond was
    /// created and the epoch of unbonding. This is needed for the slash epoch
    /// range check.
    deltas: HashMap<(Epoch, Epoch), token::Amount>,
}
```
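A sketch of the slash epoch-range check these (bond epoch, unbond epoch) keys support, with hypothetical field names and simplified rate arithmetic:

```rust
struct SlashRecord {
    /// Epoch in which the fault occurred.
    epoch: u64,
    /// Slash rate in ‱ (parts per ten thousand).
    rate: u64,
}

/// Illustrative: when withdrawing an unbond created from a bond that was active
/// in [bond_epoch, unbond_epoch), reduce the amount by every slash whose fault
/// epoch falls inside that range.
fn withdrawable_amount(
    mut amount: u64,
    bond_epoch: u64,
    unbond_epoch: u64,
    slashes: &[SlashRecord],
) -> u64 {
    for slash in slashes {
        if slash.epoch >= bond_epoch && slash.epoch < unbond_epoch {
            amount -= amount * slash.rate / 10_000;
        }
    }
    amount
}
```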
For slashes, we store the epoch and block height at which the fault occurred, the slash rate, and the slash type:

```rust
struct Slash {
    epoch: Epoch,
    block_height: u64,
    /// slash rate in ‱ (per ten thousand)
    rate: u8,
    r#type: SlashType,
}
```

## Initialization

An initial validator set with self-bonded token amounts must be given on system initialization. This set is used to pre-compute epochs in the genesis block from epoch 0 to epoch pipeline_length - 1.

# Cubic slashing

Namada implements cubic slashing, meaning that the amount of a slash is proportional to the cube of the voting power committing infractions within a particular interval. This is designed to make it riskier to operate larger or similarly configured validators, and thus to encourage network resilience.

When a slash is detected:

1. Using the height of the infraction, calculate the epoch just after which stake bonded at the time of the infraction could have been fully unbonded. Enqueue the slash for processing at the end of that epoch (so that it will be processed before unbonding could have completed, and hopefully long enough for any other misbehaviour from around the same height as this misbehaviour to also be detected).
2. Jail the validator in question (this will apply at the end of the current epoch). While the validator is jailed, it should be removed from the validator set (also effective from the end of the current epoch). Note that this is the only instance in our proof-of-stake model when the validator set is updated without waiting for the pipeline offset.
3. Prevent the delegators to this validator from altering their delegations in any way until the enqueued slash is processed.

At the end of each epoch, in order to process any slashes scheduled for processing at the end of that epoch:

1. Iterate over all slashes for infractions committed within a range of (-1, +1) epochs worth of block heights (this may need to be a protocol parameter) of the infraction in question.
2. Calculate the slash rate according to the following function:

```haskell
calculateSlashRate :: [Slash] -> Float
calculateSlashRate slashes =
  let totalFraction = sum [votingPowerFraction (validator slash) | slash <- slashes]
   in max 0.01 (min 1 (totalFraction ** 2 * 9))
  -- minimum slash rate is 1%
  -- then exponential between 0 & 1/3 voting power
  -- we can make this a more complex function later
```

Note: the voting power of a slash is the voting power of the validator when it violated the protocol, not the voting power now or at the time of any of the other infractions. This does mean that these voting powers may not sum to 1, but this method should still be close to the incentives we want, and can't really be changed without making the system easier to game.

3. Set the slash rate on the now-"finalised" slash in storage.
4. Update the validators' stored voting power appropriately.
5. Delegations to the validator can now be redelegated / start unbonding / etc.

Validators can later submit a transaction to unjail themselves after a configurable period. When the transaction is applied and accepted, the validator updates its state to "candidate" and is added back to the validator set (active or inactive, depending on its voting power) starting at the epoch at the pipeline offset.

At present, slashed funds are sent to the governance treasury. In the future we could potentially reward the slash discoverer with part of the slash, for which some sort of commit-reveal mechanism will be required to prevent front-running.
# Reward distribution

Namada uses the automatically-compounding variant of F1 fee distribution.

Rewards are given to validators for voting on finalizing blocks: the funds for these rewards can come from minting (creating new tokens). The amount that is minted depends on how much is staked and our desired yearly inflation. When the total of the tokens staked is very low, the return rate per validator needs to increase, but as the total amount of stake rises, validators will receive fewer rewards. Once we have acquired the desired stake percentage, the amount minted will just be the desired yearly inflation.

Delegators pay out rewards to validators based on a commission rate that the validator and the delegator must have agreed upon beforehand. The minted rewards are auto-bonded and only transferred when the funds are unbonded.

Once we have calculated the total that needs to be minted at the end of the epoch, we split the minted tokens according to the stake the relevant validators and delegators contributed and distribute them to validators and their delegators. This is similar to what Cosmos does.

## Basic algorithm

Consider a system with

- a canonical singular staking unit of account,
- a set of validators,
- a set of delegations, each to a particular validator and in a particular (initial) amount,
- epoched proof-of-stake, where changes are applied as follows:
  - bonding after the pipeline length
  - unbonding after the unbonding length
- rewards paid out at the end of each epoch, to wit, in each epoch $e$ a reward $r_V(e)$ is paid out to validator $V$,
- slashing applied as described in slashing.

We wish to approximate as exactly as possible the following ideal delegator reward distribution system:

- At each epoch, for a validator $V$, iterate over all of the delegations to that validator. Update each delegation $D$ as follows:

$$D \rightarrow D \left(1 + \frac{r_V(e)}{s_V(e)}\right)$$

where $r_V(e)$ and $s_V(e)$ respectively denote the reward and stake of validator $V$ at epoch $e$.

- Similarly, multiply the validator's voting power by the same factor $\left(1 + \frac{r_V(e)}{s_V(e)}\right)$, which should now equal the sum of their revised-amount delegations.

In this system, rewards are automatically rebonded to delegations, increasing the delegation amounts and validator voting powers accordingly.

However, we wish to implement this without actually needing to iterate over all delegations each block, since this is too computationally expensive. We can exploit the fact that this multiplicative factor does not vary per delegation to perform the calculation lazily, storing only a constant amount of data per validator per epoch, and calculating revised amounts for each individual delegation only when that delegation changes.

We will demonstrate this for a delegation $D$ to a validator $V$. Let $s_V(e)$ denote the stake of $V$ at epoch $e$.

For two epochs $m$ and $n$ with $m \le n$, define the function $P$ as

$$P(m, n) = \prod_{e = m + 1}^{n} \left(1 + \frac{r_V(e)}{s_V(e)}\right).$$

Denote $P(0, n)$ as $p_n$. The function $P$ has a useful property:

$$P(m, n) = \frac{P(0, n)}{P(0, m)} \qquad (1)$$

One may calculate the accumulated changes up to epoch $n$ as $p_n = P(0, n)$. If we know the delegation up to epoch $m$, the delegation at epoch $n$ is obtained by the following formula:

$$D_n = D_m \, P(m, n)$$

Using property $(1)$,

$$D_n = D_m \, \frac{p_n}{p_m}$$

Clearly, the quantity $p_n / p_m$ does not depend on the delegation $D$. Thus, for a given validator, we need only store this product $p_n$ at each epoch $n$, with which updated amounts for all delegations can be calculated.
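A quick worked example with made-up numbers, followed by a minimal Rust sketch of the read path; the storage layout and names in the sketch are illustrative assumptions, not the spec's.

Suppose the reward-to-stake ratio $r_V(e)/s_V(e)$ is $0.10$ in epoch 1 and $0.20$ in epoch 2. Then $p_0 = 1$, $p_1 = 1.1$ and $p_2 = 1.1 \times 1.2 = 1.32$. A delegation of 100 tokens in place since epoch 0 is worth $100 \cdot p_2 / p_0 = 132$ tokens at epoch 2, while one bonded at the end of epoch 1 is worth $100 \cdot p_2 / p_1 = 120$.

```rust
use std::collections::HashMap;

type Epoch = u64;

/// Per-validator products p_n, one entry per epoch (illustrative storage).
struct Products(HashMap<Epoch, f64>);

impl Products {
    /// Updated amount of a delegation of `amount` tokens, last updated at
    /// epoch `m`, evaluated at epoch `n`: amount * p_n / p_m.
    fn delegation_at(&self, amount: f64, m: Epoch, n: Epoch) -> f64 {
        amount * self.0[&n] / self.0[&m]
    }
}

fn main() {
    // The products from the worked example above.
    let products = Products(HashMap::from([(0, 1.0), (1, 1.1), (2, 1.32)]));
    println!("{:.0}", products.delegation_at(100.0, 0, 2)); // ~132
    println!("{:.0}", products.delegation_at(100.0, 1, 2)); // ~120
}
```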
The product at the end of each epoch is updated as follows:

```haskell
updateProducts
  :: HashMap Validator (HashMap Epoch Float) -- products p_n per validator
  -> Validator
  -> Epoch
  -> HashMap Epoch Float
updateProducts validatorProducts validator currentEpoch =
  let stake = PoS.readValidatorTotalDeltas validator currentEpoch
      reward = PoS.reward stake currentEpoch
      entries = lookup validatorProducts validator
      lastProduct = lookup entries (Epoch (currentEpoch - 1))
  -- p_n = p_{n-1} * (1 + r_V(n) / s_V(n))
  in insert currentEpoch (lastProduct * (1 + reward / stake)) entries
```

## Commission

Commission is charged by a validator on the rewards coming from delegations. These are set as percentages by the validator, who may charge any commission they wish between 0-100%.

Let $c_V(e)$ denote the commission rate charged by validator $V$ at epoch $e$ on the rewards accruing to a delegation $D$; the stored products $p_n$ must then account for the portion of each epoch's reward paid out as commission.
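As a simple illustration of the commission charge described above, here is a minimal Rust sketch; only the 0-100% rate constraint comes from the text above, and the direct split shown here is an assumed, common formulation rather than the spec's product-based accounting.

```rust
/// Split one epoch's reward for a delegation between the validator's
/// commission and the delegator. Illustrative only; not the spec's
/// accounting, which folds commission into the per-epoch products.
fn split_reward(reward: f64, commission_rate: f64) -> (f64, f64) {
    assert!((0.0..=1.0).contains(&commission_rate), "rate must be 0-100%");
    let to_validator = reward * commission_rate;
    let to_delegator = reward - to_validator;
    (to_validator, to_delegator)
}

fn main() {
    // A 5% commission on a 40-token reward: 2 tokens to the validator,
    // 38 tokens auto-bonded for the delegator.
    let (v, d) = split_reward(40.0, 0.05);
    println!("validator: {v}, delegator: {d}");
}
```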
http://cnx.org/content/m14395/latest/?collection=col10151/latest
# Advanced CNXML using Edit-in-Place

Module by: Elizabeth Gregory, Connexions.

Summary: This document explains and elaborates on CNXML tags that you can insert into a Connexions document using Edit-in-Place.

## Para

When working in Edit-in-Place, notice that the first item of the "Add Here" drop-down menu is "Paragraph". When you select this item and click Add Here, a text box will appear. You can now insert text in the white box, including inline tags.
Note the id="element-143" in the upper left-hand part of the blue box in Figure 1. element-143 is the paragraph's unique ID, which you can use to refer to the paragraph directly using a link tag. Also, you can find some helpful tips in the upper right-hand corner of the blue box: "Help editing <para>".

### Example 1: Submitted by J. Cameron Cooper

```
<para id='intro'>
  Working on trees or bushes can generate a lot of limbs and branches to haul
  away. If you just carry them, it'll take all day. Instead, make a sledge.
</para>
<para id="intro2">
  Find a large, complex branch to make the base of your sledge. It should be
  relatively flat, and broad and long enough to make a decent pile; that is,
  as big or bigger than anything else you need to haul away. Green branches
  from hardwoods are best. Place it with the cut end pointing the way you
  want to go. If no single branch is good enough, two can be used. Just place
  their cut ends a couple feet apart.
</para>
<para id="intro3">
  Then pile on the remaining branches. Most will naturally weave together;
  if not, give 'em a little help. Once the pile is a few layers deep, smaller
  waste, like weeds or maybe even leaves, can be added to the pile. If it
  gets unstable, another big branch will help.
</para>
<para id="intro4">
  When you're done, grab the cut end of the bottom branch, and maybe the base
  of one of the other big branches in the pile, and drag the thing where you
  want to go. You'll be surprised how much one person can drag!
</para>
<para id="intro5">
  If you have a lot of leaves or similar small stuff to move, you can use a
  similar technique. Get a tarp, toss the leaves and weeds and whatnot in the
  middle, and then drag the whole thing away.
</para>
```

which displays as the following:

Working on trees or bushes can generate a lot of limbs and branches to haul away. If you just carry them, it'll take all day. Instead, make a sledge.

Find a large, complex branch to make the base of your sledge. It should be relatively flat, and broad and long enough to make a decent pile; that is, as big or bigger than anything else you need to haul away. Green branches from hardwoods are best. Place it with the cut end pointing the way you want to go. If no single branch is good enough, two can be used. Just place their cut ends a couple feet apart.

Then pile on the remaining branches. Most will naturally weave together; if not, give 'em a little help. Once the pile is a few layers deep, smaller waste, like weeds or maybe even leaves, can be added to the pile. If it gets unstable, another big branch will help.

When you're done, grab the cut end of the bottom branch, and maybe the base of one of the other big branches in the pile, and drag the thing where you want to go. You'll be surprised how much one person can drag!

If you have a lot of leaves or similar small stuff to move, you can use a similar technique. Get a tarp, toss the leaves and weeds and whatnot in the middle, and then drag the whole thing away.

## List

To insert a new list, select "list" from the "insert" drop-down menu. As with adding a paragraph, adding a list will insert a blue box, with the list's unique ID in the upper left-hand corner and a helpful link in the upper right-hand corner.

### Example 2: Enumerated List
```
<list id='sledge' list-type='enumerated'>
  <title>Making a Sledge</title>
  <item>
    Find a large, complex branch to make the base of your sledge. It should
    be relatively flat, and broad and long enough to make a decent pile; that
    is, as big or bigger than anything else you need to haul away. Green
    branches from hardwoods are best. Place it with the cut end pointing the
    way you want to go. If no single branch is good enough, two can be used.
    Just place their cut ends a couple feet apart.
  </item>
  <item>
    Then pile on the remaining branches. Most will naturally weave together;
    if not, give 'em a little help. Once the pile is a few layers deep,
    smaller waste, like weeds or maybe even leaves, can be added to the pile.
    If it gets unstable, another big branch will help.
  </item>
  <item>
    When you're done, grab the cut end of the bottom branch, and maybe the
    base of one of the other big branches in the pile, and drag the thing
    where you want to go. You'll be surprised how much one person can drag!
  </item>
</list>
```

The resulting list will look like:

#### Making a Sledge

1. Find a large, complex branch to make the base of your sledge. It should be relatively flat, and broad and long enough to make a decent pile; that is, as big or bigger than anything else you need to haul away. Green branches from hardwoods are best. Place it with the cut end pointing the way you want to go. If no single branch is good enough, two can be used. Just place their cut ends a couple feet apart.
2. Then pile on the remaining branches. Most will naturally weave together; if not, give 'em a little help. Once the pile is a few layers deep, smaller waste, like weeds or maybe even leaves, can be added to the pile. If it gets unstable, another big branch will help.
3. When you're done, grab the cut end of the bottom branch, and maybe the base of one of the other big branches in the pile, and drag the thing where you want to go. You'll be surprised how much one person can drag!

### Example 3: Bulleted List

```
<list id="ex-bulleted-list" list-type="bulleted">
  <item>branches</item>
  <item>leaves</item>
  <item>sweat</item>
</list>
```

- branches
- leaves
- sweat

## Equation

The equation tag is used to set off and number equations in CNXML documents. If you have MathML enabled for your document, you will only be able to place MathML equations within the equation tags. Otherwise, to write the actual equations, you can use ASCII or images.

### Note:

Connexions strongly encourages the use of the equation tag with MathML when displaying math.

If you look at Figure 3, you will find the equation's unique ID in the upper left-hand corner and a helpful link in the upper right-hand corner. As with lists, you can add an optional title at the beginning of each equation.

### Example 4: Using Images as Equations

```
<equation id="eqn14">
  <media id="img12" display="block" alt="1+2=3">
    <image mime-type='image/gif' src='euler.gif' />
  </media>
</equation>
```

displays as:

1+2=3 (1)

### Example 5: ASCII equations

```
<equation id='eqn15'>
  <title>Simple Arithmetic</title>
  11+27=38
</equation>
```

This equation will display as:

11+27=38 (2)

## Exercise

The exercise tag allows authors to add practice problems into their documents. When you initially add an exercise, you will see the familiar blue box, with the unique ID and the helpful link in the top corners. However, also notice that new tags have been premade in your text box: problem and solution. To continue utilizing Edit-in-Place to edit your exercise, press the Save button (see Figure 5). You can now add various block tags to your problem and solution, including paragraphs and lists!

To create more complex exercises, such as multiple-choice, multiple-response, ordered-response, and free-response questions, QML (Questions Markup Language) may be used in place of the problem and solution tags.
For more information, please see the documentation about QML.

### Example 6

```
<exercise id='hyd_test'>
  <problem id="id9">
    <para id='hyd_testp1'>
      The color of a hydrangea changes with the pH of the soil. What color
      would the hydrangea be if the soil were highly acidic? Highly basic?
      Neutral?
    </para>
  </problem>
  <solution id="id10">
    <para id='hyd_sol1p1'>
      Highly acidic soil produces blue flowers. Highly basic soil produces
      pink flowers. Neutral soil produces very pale cream flowers.
    </para>
  </solution>
</exercise>
```

This code will display as:

#### Problem 1

The color of a hydrangea changes with the pH of the soil. What color would the hydrangea be if the soil were highly acidic? Highly basic? Neutral?

##### Solution

Highly acidic soil produces blue flowers. Highly basic soil produces pink flowers. Neutral soil produces very pale cream flowers.

## Figure

The figure tag provides the structure for creating a figure within a document. A figure can contain either two or more subfigure tags, or a single media, table, or code tag. The optional first tag within figure is title, which is used to title the figure. The title tag is followed by any of the tags listed above; the most commonly used is media, which is used to include any sort of media such as images, video, music, or Java applets. For more information on what media you can add to your content, and how to add it, see Adding Multimedia to Your Connexions Content. The final tag is the optional caption, which is used to add a small caption to the figure.

### Example 7: Example of a Figure

```
<figure id='blossom'>
  <title>Mimosa Blossom</title>
  <media id="image-example" display="block" alt="A Mimosa Blossom.">
    <image id="flower" mime-type="image/jpeg" src="alb_jul_flo_1.jpg" />
  </media>
  <caption>
    Picture taken by Jenn Drummond (CC Attribution).
  </caption>
</figure>
```

This code displays the image with its title and caption.

## Code

As seen in Using Basic CNXML in Edit-in-Place, you can add inline code to your document; Edit-in-Place also allows you to insert a block of code, separate from text. If you need to use the > and < symbols in your block of code, you must either use the character entities for these symbols (&gt; and &lt;, if you have MathML enabled), or use the CDATA method. To utilize the CDATA method, insert <![CDATA[ before your code and ]]> after it, as seen in Example 8.

### Example 8: A Block of Code, Using CDATA

When saved, Figure 9 will display as:

```
<para id='copy'>
  In a unix terminal the command to copy a file is
  <code display='inline'>cp original copy</code>.
</para>
```

## Note

As mentioned in Using Basic CNXML in Edit-in-Place, the note tag creates an "out of line" note to the reader. You can also insert a note using the drop-down box in Edit-in-Place; however, unless you edit the full source, the type of note will be set to the default.

### Example 9

```
<note>
  Gardening requires a lot of intense physical exertion. Please drink plenty
  of water to avoid dehydration!
</note>
```

The above markup will display as:

#### Note:

Gardening requires a lot of intense physical exertion. Please drink plenty of water to avoid dehydration!

## Example

As is often the case in textbooks, authors will include examples in the middle of a chapter or section. For this reason CNXML provides the example tag that allows an author to include examples in a document.

### Example 10

Here is the code for Example 9:
```
<example id="notexamp">
  <code id="codeseg1" display="block">
    <note>
      Gardening requires a lot of intense physical exertion. Please drink
      plenty of water to avoid dehydration!
    </note>
  </code>
  <para id="notep2">
    The above markup will display as:
  </para>
  <note>
    Gardening requires a lot of intense physical exertion. Please drink
    plenty of water to avoid dehydration!
  </note>
</example>
```

## CALS Table

The final element you can add using Edit-in-Place is table. To learn more about adding and editing tables using Edit-in-Place, see CALS Table. For a more complete description of the CALS Table, consult the CALS Table Spec.
https://eccc.weizmann.ac.il/keyword/18557/
Reports tagged with communication compression:

TR14-049 | 11th April 2014
Anat Ganor, Gillat Kol, Ran Raz

#### Exponential Separation of Information and Communication

Revisions: 1

We show an exponential gap between communication complexity and information complexity, by giving an explicit example for a communication task (relation), with information complexity $\leq O(k)$, and distributional communication complexity $\geq 2^k$. This shows that a communication protocol cannot always be compressed to its internal information. By a result of ...
https://www.physicsforums.com/threads/cocentration-and-molarity.220930/
# Concentration and molarity

1. Mar 9, 2008

### mmg0789

**1. The problem statement, all variables and given/known data**

What volume of water must be added to 45.0 mL of a 1.0 M solution of H2SO4 in order to create a 0.33 M H2SO4 solution?

A vat contains 2.24 M hydrochloric acid solution. How many kg of Ca(OH)2 will be required to react completely with (neutralize) 796 L of the solution?

**2. Relevant equations**

M1V1 = M2V2

**3. The attempt at a solution**

For the first one, I tried using M1V1 = M2V2, but noticed that it wouldn't make sense just plugging in the numbers directly because of the problem's wording.

For the second one, I'm not sure how to start it.

2. Mar 9, 2008

### BlindSpot

In regards to your first question: what did you determine to be the final volume of the 0.33 M H2SO4 solution?

A good starting point for the second question is this: figure out how many moles of HCl you have in the vat, then determine how many moles of Ca(OH)2 will be required to neutralize them.

3. Mar 9, 2008

### mmg0789

For the first one: 45 * 1 = 0.33 * x, so x = 136.4 mL. Ahh! I see now, subtract 45 from it.

For the second one:

M = moles/volume
moles = 2.24 * 796 = 1783.04 mol
mols * (g/mol) = g = 1783.04/74 = 24 g

I don't get the right answer... not sure what I did wrong there.

4. Mar 9, 2008

### BlindSpot

I agree with your math used to determine the total mass of 1783.04 moles of Ca(OH)2: moles Ca(OH)2 * g/mol = g of Ca(OH)2, but your calculations shown are off. And this mass of base will not lead you to the right answer. Go back to the neutralization reaction (this should have been the first thing you did) and see if this gives you some ideas.

I believe that the problem assumes something that may not be obvious. As a first hint, look at the acid dissociation constants (pKa values) of each proton in H2SO4. This may point you to a simplifying assumption (accurate or not) that will lead you to the given answer.

5. Mar 9, 2008

### mmg0789

Hmm, I'm not too sure what the acid dissociation constants are (as in: we haven't studied that yet?), but something I noticed between what I came up with and the answer is that 66/24 = 2.75. Hopefully that's significant... not sure where that comes from though.

6. Mar 9, 2008

### BlindSpot

The first mistake you're making is only a computational one. Try going back to your equation mol * g/mol = g and running your numbers again. I agree that there are 1783.04 moles of HCl in the vat and that the molecular weight of Ca(OH)2 is 74 g/mol.

The second mistake will be easier to find if you write out a balanced chemical reaction for the neutralization. This will take the form of aA + bB → cS + dH2O, where a, b, c, and d are integer values, A is the acid, B is the base, and S is the salt (byproduct).

7. Mar 9, 2008

### mmg0789

Ah, OK, the first mistake was a dumb one: 1783.04 * 74 = 131944.96 g Ca(OH)2.

Then for the second part, 131944.96 g is 131.944 kg. From the equation, 1 Ca(OH)2 : 2 HCl (... I guess I also could have done this part earlier when I had mols of HCl).

132/2 = 66 kg
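For anyone who wants to double-check the thread's arithmetic, here is a small throwaway Rust sketch; the 74 g/mol molar mass and the 2:1 HCl to Ca(OH)2 ratio are taken from the posts above.

```rust
fn main() {
    // Problem 1: dilution, M1*V1 = M2*V2.
    let v2 = 45.0 * 1.0 / 0.33; // total final volume in mL
    println!("water to add: {:.1} mL", v2 - 45.0); // ~91.4 mL

    // Problem 2: Ca(OH)2 + 2 HCl -> CaCl2 + 2 H2O.
    let mol_hcl = 2.24 * 796.0; // mol = M * L = 1783.04
    let mol_caoh2 = mol_hcl / 2.0; // 2:1 stoichiometry
    let kg = mol_caoh2 * 74.0 / 1000.0; // 74 g/mol
    println!("Ca(OH)2 needed: {:.0} kg", kg); // ~66 kg
}
```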